Macros are widely used in C programming to declare constants such as strings, addresses, or other values like the maximum number of elements in an array. For those who are not familiar with them, macros are declared with the #define preprocessor directive. Macros are also used to define basic functionality given one or more untyped parameters, similarly to an inline function.
There are some benefits to using macros:
- Code maintainability: Using a macro to define a constant ensures that if the value ever needs to change, every instance of it is updated automatically. Without a macro, a missed instance may cause the program to behave incorrectly, or even crash.
- Code readability: A well-named macro is much more readable and understandable to someone reading the code than a plain number or address.
There are also some disadvantages:
- Hard to extend: If a macro becomes complex and a change is required, any errors you introduce may yield vague compilation errors, and the reported error line always points to the macro invocation rather than the problematic line inside the macro body.
- Hard to debug: Debuggers often cannot step into or cleanly inspect code inside macros, since the preprocessor expands them before compilation.
Memory corruption is a scenario where a given buffer (or memory area) is unintentionally modified by an unrelated piece of code. The corrupted data causes the rightful users of the buffer to receive bad information, which may modify their behavior or even crash the program. It is usually very hard to find the root cause of memory corruption, because the corrupting write may take place asynchronously to the actual use of the buffer; the crash may occur seconds or minutes after the corruption itself.
In this post I will present one way to catch such corruptions using memory protection.
U-Boot is one of the most popular boot loaders used in the industry. It supports plenty of platforms and it is open source. There are many other advantages to using U-Boot; however, in this post I will present the concept of U-Boot scripts (or macros), which can improve your productivity (and your life…) if you are a programmer or user of U-Boot and perform many repetitive operations.
We all know that when we work on a project, we perform various repetitive operations, such as erasing a partition in the flash, loading an image, or loading a parameters block. These operations are usually composed of several commands, which also require us to type memory addresses. This is inconvenient and error prone. Moreover, when the memory map changes, the commands we were using become invalid and need to be updated.
NFS (Network Filesystem) is a very common Linux feature. It enables any Linux machine to mount directories that are not physically present on the machine's hardware, but located somewhere else and reachable only via a network interface. Mounting an NFS share allows us to extend the storage capabilities of the mounting machine beyond its physical limitations, and the mount can be used to store additional software or data. On embedded devices, NFS is usually used for debugging purposes, mostly because these products must operate without network availability and must contain all required software in their own storage device. With limited storage capabilities, NFS is a great way to extend the machine's storage without changing any hardware. When configuring the NFS server, make sure you grant your target's IP address access to the exported directory.
Today’s consumer electronics, networking, and communication devices are small and highly sophisticated computers (or Systems-on-Chips; SoCs). Each chip contains numerous peripherals and special features that require the right software and drivers to drive them correctly and efficiently. In this article, I am going to describe, at a high level, the system’s hardware and software components and what they do. This article is meant for people who want a high-level introduction to real-time embedded systems.
Bitwise operations are widely used in embedded systems, both in assembly and in C code. They are used for reading and writing hardware registers, enabling or disabling hardware features, setting or masking interrupts, writing values to GPIO pins, and many other purposes. Note that bitwise operations are different from the corresponding logical operations. In this article I will cover the basic bitwise operations, provide macros for the most common ones, and describe some common practices when using them.
Core dumps are the standard Linux way to perform post-mortem analysis of crashed applications. Given some preconditions, a core dump can provide a detailed backtrace and shed some light on the last whereabouts of the crashed application.
The core dump itself is a file in an executable file format (ELF), and it is generated by the kernel. It contains a list of the memory sections that were accessible to the process, along with their memory image. This allows you to analyze the core dump offline, on a host machine, using a gdb built for the appropriate target.
Core dumps are not enabled by default in embedded systems, mainly due to memory limitations. In this article I will explain what is required to enable core dump support on Embedded Linux/uClibc/BusyBox platforms for a specific debug task.
The Open Source concept is not new, and has been studied and researched by professionals and academia for quite a while. Open Source software products are the outcome of collaboration and cooperation of individual programmers from all over the world, who gather in communities. The communities are non-profit organizations that distribute and support their software free of charge. The term Open Source, also referred to as Free Software, embodies the main principle of the Open Source movement: the user must have the rights to use, modify, or redistribute the software as they wish, while granting the same rights to the users of their derivative works or redistributions. Here, “Free” means “free speech”, not “free beer”.
Nowadays, Open Source software is widely used by for-profit organizations. In this article, I will provide an overview of the most widely used Open Source licenses, the rights they grant, and the obligations they impose.
Ever wanted to see, at run time, which function a program is currently executing, or what its current call stack is? I’m sure you have. In many cases, it is very helpful to see a backtrace of all the calling functions in a program at any given time. Normally, this is a basic feature of a debugger (either a JTAG-based hardware debugger or a software debugger like gdb). However, in this post I would like to show you that, given some preconditions, this is also possible without any attached debugger. It can be useful for remotely debugging units in the field or in the lab, where extra debug equipment may be unavailable.
The traditional definition of efficiency has two aspects: speed and size. In most cases, optimizing one degrades the other, and the right balance depends on the specific needs of each embedded system, or even each software module. Nowadays, there is a new dimension to this definition: power. In this article, I am going to discuss the traditional aspects. Over my years as a software engineer, I have gained a lot of experience with C code efficiency, and I have seen how changing a few lines of code can make the difference, whether in performance, final size, or memory consumption. The examples I’ll show here were written in C and tested on an ARM platform; you should expect similar behavior on other processors. I may update this article from time to time with more tips, so it is recommended to bookmark it and check back.
Proc files implement the simplest method of communication and data exchange between the Linux kernel (or its drivers) and user space applications or human users. The kernel provides many proc files for changing various settings and for getting plenty of information (see the article about process procs). Each driver or loadable kernel module can add more proc files under the /proc directory in order to set its specific parameters or expose its specific information. Proc files are not real files; they are referred to as “pseudo-files”, and the whole /proc directory is a “pseudo-filesystem”. The reason for this naming convention is that, unlike files in other filesystems, these files do not really exist. Instead, the kernel (or each driver that needs a proc file) registers a file, defines its permissions, and implements read and/or write functions. The read function is invoked whenever a user space application reads the file, so the information is actually generated by the kernel upon request. The same applies to the write function.
Now that you’ve optimized your applications (programs) and archives (static libraries), we’ll discuss how to optimize your shared libraries. Unlike archives, which are used only at link time on the host machine, shared libraries reside on the target’s filesystem and cannot be reduced using the same techniques. Furthermore, when you create a shared library, you cannot know which of its functions the applications will actually use. It is also not trivial to figure out the dependencies between the library’s functions (which function requires which other function inside the library). Therefore, shared libraries always contain the full set of functions. The question we ask is: how much storage space is wasted on unused code in a shared library?
The tips and information in part 1 are general and common, and it is highly likely that you’ve already implemented them. In this article, I will show how to reduce the size of static libraries (archives) and applications (programs) by specifying advanced compilation flags that exploit the special properties of archives and applications.
In embedded systems, size does matter. Embedded products are usually limited in RAM and storage (usually flash), and cost pressure forces you to think about creative ways to reduce the overall size of the binary applications and libraries, without reducing features and functionality. In my years in the embedded systems business, I have often dealt with requirements to reduce the overall size of an application due to system limitations. I have a lot of experience in this field, which I am going to share with you in this “Size optimization” series. The series will cover size optimization and reduction, general tips, and optimizing applications, static libraries, shared libraries, file systems, and the Linux kernel.
One measure of the quality of your project’s code is the number of outstanding compilation warnings. In my opinion, compilation warnings can be responsible for system crashes and unexplained behaviour, and are among the top five sources of destructive bugs. If you are a project manager, it is in your interest to eliminate them, especially if your deliverable is the actual source code: to a customer, a software deliverable with plenty of compilation warnings looks unprofessional just from watching the compilation process. If you are a developer, your aim should be to deliver warning-free code.
Last week, one of the company’s customers had a major issue with a mysterious process that was periodically spawned; each time it ran, it allocated a resource and terminated without freeing it. Unfortunately, the allocated resource was a shared memory segment, which, unlike dynamically allocated memory, is not cleaned up by the kernel when a process terminates. The resource kept leaking until it was completely exhausted. Once all the shared memory was consumed, the system could no longer operate correctly, even though the kernel was still alive and there was plenty of free RAM. This is just one scenario where you might need a clear picture of the running processes in the system. How can we log and monitor the creation, execution, and termination of processes? Just continue reading.