A DO-178B Level A certification is perhaps the highest certificate of reliability that can be awarded to a piece of software. Nothing says reliable like meeting the requirements of a standard designed for software in which a single fault can cause an airplane to crash and human lives to be lost. Like many reliability standards, it places strict requirements on how a software product is designed, developed, and verified.
Software projects pursuing this certification often follow a traditional software lifecycle model. Modern implementations of these processes have two key components in common: first, they all involve an iterative process, whether within a product cycle or between product cycles; second, they all follow a Design → Write Code → Verify pathway.
A critical component of this pathway is the final step: Verification.
The reality of software development is that no matter how good the design is entering the implementation and coding stages, developers are bound to introduce mistakes and errors. This is especially true in large and complex systems, where problems may not be limited to simple coding errors but can extend to flaws in the system architecture itself.
Generally, it is too expensive to defer the majority of the verification effort to the final integration steps of the development schedule. Studies have shown that errors found in the final stages of the development process can be 50 to 200 times more expensive to fix than those found early on.
This expense is largely the result of the rework required to modify the software or system architecture to correct design and implementation flaws; in fact, it is common for a project to expend 50% of its effort on avoidable rework. Late-found defects also hurt an organization's ability to get to market quickly: IBM has previously found that products with lower defect counts have shorter schedules.
One can therefore surmise that there are strong advantages to verifying software not just at the formal verification step but throughout the entire development process. Once that conclusion is reached, a common question is how to implement verification protocols and techniques throughout development using tools available on the market today, a topic this article attempts to address.
Often this will require an investment in tooling to achieve some of these capabilities. It should be understood that the development of reliable and certifiable software could be significantly more expensive than traditional software practices, but that significant cost savings can still be achieved by intelligent investment in proper tools and in the selection of appropriately refined processes.
And this investment is not without purpose: not only are these best practices for the development of safety-critical software, but some of the techniques and processes discussed are actually required by various safety and reliability standards, notably DO-178B, which requires that software verification be an integral and iterative component of each phase of project development.
Purpose of a coding standard
Some of the best development practices can begin before the first line of code is written or even before the software is designed, by selecting and enforcing a coding standard for the project.
Coding standards serve various purposes, two of which are to reduce faults in the program and to increase the maintainability of the software. The latter point is important: a significant amount of the time and cost associated with a software project is spent in the maintenance phase, often as much as 60%.
Section 11.8 of the DO-178B standard requires that a project develop a Software Code Standard as part of the life cycle data to be collected in conjunction with the certification effort. FAA approval requires that these documents be submitted to the U.S. DOT for evaluation.
It is important that the Software Code Standard be enforced throughout the development lifecycle. It has been empirically demonstrated that retrofitting a coding standard into existing software may introduce more faults than it helps prevent, if it is not done with extreme care.
Automatically enforced standards
Making use of tools that can automatically enforce coding standards is a simple way to ensure that a coding standard is consistently applied across a project. The popular MISRA C:2004 coding standard (Figure 1, below) has been widely adopted in the industry; it was designed with automatic enforcement in mind, and such enforcement is supported by many compilers and static analysis tools on the market.
Figure 1. Automated MISRA C enforcement example
Several tools also allow the automatic enforcement of other coding standards; Green Hills Software's MULTI IDE even provides an option to enforce the same coding standards that Green Hills uses to develop its own software.
Automatic enforcement is important; research has suggested that the introduction of automatic enforcement and detection tools has led to a significant decrease in memory-related bugs in software.
Consequently, tools that can effectively enforce coding standards should be an essential consideration when selecting a development environment for projects seeking a safety or reliability certification, both to ensure a reduction in software faults and to demonstrate to the certifying body the project's commitment to the certification process.