Verifying object code can mean the difference between success and failure, quality and crap. Yet skipping the step because a standard doesn’t require it, or because it theoretically eats into profits, is surprisingly common. I argue that this practice is not only shortsighted but increasingly indefensible.
No longer is it enough to write robust software. The recent trend in standards development is to prove that a project’s requirements are fulfilled, even if those requirements have changed during the course of a project’s lifecycle. Requirements traceability yields a more predictable outcome at deployment and responds to an increased demand for sound monitoring and management techniques during development, particularly between project phases.
Most requirements traceability stops short of object code, suggesting an implied reliance on the faithful adherence of compiled object code to the intentions expressed by the author of the source code. This implicit trust can have critical consequences, such as putting people’s lives at risk or having a significant impact on business.
Where an industry standard is enforced, a development team will usually adhere only to the parts of the standard that are relevant to its application. Object-code verification, on the other hand, ensures that critical parts of an application are not compromised by the object code, which in principle is a desirable outcome for any software, whatever its purpose. But can object-code verification be justified as part of a test regime outside the confines of a mandated standard, particularly in those industries where software failure brings dire consequences and yet standards are less mature?
In this article, I explain why it’s important to verify object code and how it’s possible to manage requirements so you can trace them right through to object-code verification (OCV).
Standards and certifications
Irrespective of the industry and the maturity of its safety standards, the case for software that has been proven and certified to be reliable through standards compliance and a requirements-traceable process is becoming ever more compelling.
According to research directed by the U.S. National Institute of Standards and Technology (NIST), 64% of software vulnerabilities stem from programming errors. For example, a U.S. Food and Drug Administration (FDA) analysis of 3,140 medical-device recalls between 1992 and 1998 revealed that 242 of the recalls (7.7%) were attributable to software failures. In April 2010, the FDA warned users about faulty components in defibrillators manufactured by Cardiac Science Corp. Unable to remedy the problems with software patches, Cardiac Science was forced to replace 24,000 defibrillators. As a result, Cardiac Science’s shares were hit; the company reported a net loss of $18.5 million.
The medical-equipment standard IEC 62304 is designed specifically to provide suitable processes to minimize the likelihood of such problems in medical devices. Other industries have similar standards as shown in Table 1.
Although each is tuned to a specific industry sector, these standards have much in common. In particular, the industrial standard IEC 61508 serves as a basis for several of the others, including all of the standards shown in Table 1 except DO-178B/C.
One example of how this commonality of purpose shows itself is in the use of Safety Integrity Levels (SILs). In each case, a risk assessment is completed for every software project to assign the required safety level to each part of the system. The more demanding the safety level, the more rigorous and thorough the process and testing need to be.