When I first got involved in embedded development, I kept hoping to find a book that could clear up my doubts. Unfortunately, no such book has been written, and I suspect none ever will be, because my doubts were too many and too miscellaneous, and they are hard to find in textbooks. A C tutorial concentrates on C syntax; a compilers text concentrates on grammar and semantic analysis. Every textbook has its own focus, so the questions that cut across topics fall through the cracks. The books on the market calling themselves "XX Collection" or "XX Bible" are full of statements that perhaps not even their authors could explain. So I figured that what I wanted to know is probably what everyone wants to know, and decided to write down some of what I have learned. Perhaps it will save you some time and leave your valuable brain resources for more meaningful things.

Language choice: C or something else?

Newcomers to embedded development always read a few guide-type articles and then set about choosing a development language. Should it be C, or C++, or perhaps the even more fashionable Java? Don't hesitate: at least for now, C is still your choice. The essence of embedded development is custom development, and the processing power of the hardware platform is limited. If you want to protect your investment of effort, C is the best "performance stock". C++'s strength is code reuse, but its efficiency is much lower than C's, and most importantly, not every chip's compiler supports C++. As for Java, the advantage of developing on a virtual platform is that you need not care about specific hardware details, but that is not the embedded developer's style; in fact, such development can hardly be called embedded development at all. C is called the low-level language among high-level languages and the high-level language among low-level languages, because on the one hand it offers a way of expressing ideas close to human thought, and on the other it offers address and bit operations, which make it convenient for dealing with hardware. Embedded development means driving I/O ports and hardware addresses; how could you manage that without bit operations and pointers?

The general flow of embedded development

The flow of embedded development resembles ordinary high-level development: coding, compiling and linking, running, of course with iterative steps such as online debugging and re-coding in between. But there are some differences.

First, the development platform is different. Limited by the processing power of the embedded platform, embedded development generally uses a cross-compilation environment. Cross-compilation means compiling on platform A a target program that will run on platform B; the compiler that runs on A and produces programs for B is called a cross compiler. For a beginner, merely setting up such an environment may take a few days.

Second, the debugging method is different. A program developed on Windows or Linux can be run immediately to see the result, or stepped through in an IDE, but an embedded developer must do at least a series of chores before reaching that point. The most popular approach at present is to connect to the target system over JTAG and download the successfully compiled code to it; an advanced debugger can then debug the program almost as comfortably as a VC environment does.

Third, the level at which developers must understand the system is different. High-level software developers focus their effort on understanding and implementing application requirements; embedded developers must understand the whole process far more deeply than that. The biggest difference is that in a program supported by an operating system, you need not care about the program's run address, nor about where each section finally lands after linking.
Operating systems such as Windows and Linux that require MMU support place every program at a fixed address in the virtual address space. No matter where the program actually sits in physical RAM, the MMU ultimately maps it to that fixed virtual address.

Why is a program's execution related to the address at which it is stored? Anyone who has learned assembly, or has read the machine code a program finally compiles into, knows that the program's variables and functions are ultimately reflected in the machine code as addresses: jumps, subroutine calls and variable accesses are all carried out by the CPU fetching those addresses directly. The TEXT_BASE specified at compile time is the reference value for all these addresses. If the address you specify does not match the address at which the program is finally placed, it obviously cannot run properly. There are exceptions, but unusual usage naturally demands unusual effort.

There are two ways to solve this problem. One is to write address-independent code at the very beginning of the program, which moves the rest of the program to the TEXT_BASE you actually specified and then jumps to the code you intend to run. The other is to specify TEXT_BASE as the program's storage address, then move the program to the address where it actually runs and record that address in a variable as a reference value; from then on, each symbol-table address is combined with this reference value to form its real address. That sounds convoluted and is hard to implement. There is a better solution later in this article: rely on a BootLoader.
In addition, a complete program has at least three segments: TEXT (the body, i.e. the machine instructions the program compiles into), DATA (initialized variables) and BSS (uninitialized variables). The TEXT_BASE mentioned above is only the base address of the TEXT segment. If the whole program finally sits in RAM, the three segments can simply be laid out consecutively; but if the program sits in read-only memory such as ROM or FLASH, you must also specify the addresses of the other segments, because the code does not change while running but the other two segments do. This work is done at link time, and the compiler will certainly provide you with some means to do it. Again, operating-system-supported programming shields these details so that you never have to worry about such headaches. Embedded developers are not so lucky: they always start from scratch on a cold chip.

On power-on reset the CPU always fetches a program from a fixed address and begins its busy work. For our PCs that program is the BIOS; embedded systems generally have no BIOS support. RAM cannot keep your program through a power failure, so you must store the program in ROM or FLASH, but generally speaking the width and speed of these memories cannot compare with RAM, and a program executing from them runs slowly. The usual solution is to store a BootLoader there. A BootLoader can do a lot or a little: a basic one only performs some system initialization, moves the user program to a certain address, then jumps to it and hands over control of the CPU; a powerful one may also support network and serial download, even debugging. But don't expect a BootLoader as universal as the PC BIOS to be waiting for you; at the very least you will need to do some porting work to match it to your system.
Porting a BootLoader is itself part of your development, and as a beginner in embedded development you can benefit greatly from porting or writing one. Can you get by with no BootLoader at all? Of course: either you sacrifice efficiency and run directly from ROM, or you write your own code to move the program into RAM. Most importantly, you need good debugging tools that support online debugging during development; otherwise you will have to re-burn the chip whenever you change even one variable.

To continue the topic of the program entry point: whatever the process, a program ends up as machine instructions at execution time, and a raw executable image is just a collection of those instructions. The programs on our operating systems are not raw images but formatted files: besides the segments mentioned above, they record the program's length, a checksum and the program entry point, i.e. where execution of the user program begins. Why does the program need an entry address at all? Because the code you actually want to start from is not necessarily placed at the very beginning of the file. Even within a single file, unless you control the link, the compiler does not necessarily put your startup code at the very front of the final image, and with multiple files it almost certainly will not. For an ordinary operating-system-supported program it is enough to have a main in your code as the entry point; note that main is merely the default entry for most compilers, and unless you use someone else's initialization library, the entry point can be set by yourself. Clearly such a formatted executable is more flexible to use, but it requires the support of a BootLoader. For the details of executable formats, have a look at the ELF file format.

Compile preprocessing

First, let's look at file inclusion.
We have used header files ever since our first C program, Hello World!, yet surprisingly many people still lack a correct understanding of file inclusion after years of development, and even more of them confuse header files with the libraries associated with them. For the benefit of those beginners: the basic purpose of file inclusion is to cut a large file into several small files for easier management and reading. If you include a file, the entire contents of that file are copied, intact, into the including file; the effect is exactly the same as writing them there yourself. On the other hand, if you compile some code into an intermediate form, such as a library file, you can supply a header file to tell callers what functions your library contains and how to call them, while the real code has already become object code inside the library file. As for the suffix of an include file: .h merely tells the reader that this is a header file; the compiler does not care if you use any other name. Those who confuse headers with libraries should now see the light: a header file can only guarantee that your program compiles without syntax errors; the library is not actually used until the final link. Anyone who copies only a header file and expects to have the library should never make that mistake again. And if the number of source files in your project makes management difficult, including them all from one file is not a bad idea.

Another problem beginners often run into is confusion caused by repeated inclusion. Including the same file two or more times is likely to cause duplicate definitions. Of course nobody is stupid enough to include the same file twice in a row; the repetition is implicit: file A includes file B and file C, while file B also includes file C, so file A actually includes file C twice.
However, a good header file cleverly uses the preprocessor to avoid this situation. In a header you may find directives like these:

#ifndef __TEST_H__
#define __TEST_H__
...
#endif /* __TEST_H__ */

Of these three lines of preprocessing, the first two usually sit at the very top of the file and the last at the very end. They mean: if __TEST_H__ has not been defined, define __TEST_H__ and compile the code that follows, up to #endif; otherwise, skip it all. What a clever design: with three simple lines, the file's contents are compiled only once no matter how many thousands of times it is included.

Now let's look at the use of macros. Reading other people's code, beginners always wonder why so many macros are used, and it can be bewildering; indeed, macros sometimes reduce a program's readability. But sometimes they improve it. Compare the two pieces of code below:

1)
#define SCC_GSMRH_RSYN 0x00000001 /* receive sync timing */
#define SCC_GSMRH_RTSM 0x00000002 /* RTS* mode */
#define SCC_GSMRH_SYNL 0x0000000c /* sync length */
#define SCC_GSMRH_TXSY 0x00000010 /* transmitter/receiver sync */
#define SCC_GSMRH_RFW  0x00000020 /* Rx FIFO width */
#define SCC_GSMRH_TFL  0x00000040 /* transmit FIFO length */
#define SCC_GSMRH_CTSS 0x00000080 /* CTS* sampling */
#define SCC_GSMRH_CDS  0x00000100 /* CD* sampling */
#define SCC_GSMRH_CTSP 0x00000200 /* CTS* pulse */
#define SCC_GSMRH_CDP  0x00000400 /* CD* pulse */
#define SCC_GSMRH_TTX  0x00000800 /* transparent transmitter */
#define SCC_GSMRH_TRX  0x00001000 /* transparent receiver */
#define SCC_GSMRH_REVD 0x00002000 /* reverse data */
#define SCC_GSMRH_TCRC 0x0000c000 /* transparent CRC */
#define SCC_GSMRH_GDE  0x00010000 /* glitch detect enable */

*(int *)0xff000a04 = SCC_GSMRH_REVD | SCC_GSMRH_TRX | SCC_GSMRH_TTX |
                     SCC_GSMRH_CDP | SCC_GSMRH_CTSP | SCC_GSMRH_CDS |
                     SCC_GSMRH_CTSS;

2)
*(int *)0xff000a04 =
0x00003f80;

Both assign to the same register, and both do exactly the same thing. The first piece of code is a little verbose and the second very succinct, but if you need to change this register's settings you will clearly prefer the first: the meaning of the existing value is immediately clear, and assigning to particular bits is just a matter of OR-ing the corresponding macros. You don't have to pick up a pen and recalculate the value every time you change it. This matters a great deal to embedded developers: when debugging a device, a key register's value may be modified many times, and recalculating every bit each time is a headache.

In addition, macros can improve the program's efficiency. A subroutine call must push to and pop from the stack, and if that happens too often it consumes a lot of CPU time. So a small but frequently executed piece of code will run faster if implemented as a parameterized macro. Take the common operation of writing a byte to external I/O. You could write a function like this:

void outb(unsigned char val, volatile unsigned char *addr)
{
    *addr = val;
}

It is a function of a single statement, yet every use costs a function call, while repeating the statement by hand everywhere is error-prone. It is better to use the following macro:

#define outb(b, addr) (*(volatile unsigned char *)(addr) = (b))

Since no subroutine call is needed, the macro improves execution efficiency, but it wastes program space, because wherever the macro is used it is replaced by the full statement. Developers must trade time against space according to the system's requirements.