by Raimund Kirner and Susanne Kandl
The testing process for safety-critical systems is usually evaluated with code coverage criteria such as MC/DC (Modified Condition/Decision Coverage) defined in the standard DO-178B, Software Considerations in Airborne Systems and Equipment Certification (a de facto standard for certifying software in the civil avionics domain). For requirements-based testing techniques we work on coverage metrics that are defined on a higher level of program representation (eg on the requirements), and that are independent of a specific implementation. For that purpose we analyse the relationship between existing definitions of structural requirement-coverage metrics and structural code-coverage metrics. In addition, we work on techniques that preserve structural code coverage between different program-representation levels.
We work on requirements-based testing following the two-step approach of the DO-178B for testing a safety-critical system (see Figure 1). Test cases are generated from the requirements until all requirements are covered (inner dotted loop in the figure). Then the structural code coverage is determined. If the coverage is insufficient, additional test cases are generated (outer dotted loop in the figure). The aim is to achieve full requirement coverage, ie to verify that all requirements are correct and fully cover the intended system behaviour.
Formal Requirements as a Program Implementation
The informal requirements given by the software specification are the primary source of information for the software developer implementing the system. The requirements are also used to derive the test suite for testing purposes. To enable the automatic systematic test-case generation, the informal requirements are specified as formal requirements using an appropriate specification language, eg based on temporal logic.
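As an illustration of what such a formalized requirement might look like as an executable artefact (the function name, trace encoding, and the concrete bounded-response property below are our own assumptions, not taken from the article), a requirement in the spirit of temporal logic — "whenever a request occurs, a grant must follow within three steps" — can be checked against a finite execution trace:

```python
# Hypothetical formalized requirement, checked over a finite execution trace
# given as a list of sets of propositions that hold in each step:
# "whenever `trigger` holds, `response` must hold within the next `bound` steps"
# (a bounded-response property in the spirit of temporal logic).
def bounded_response(trace, trigger, response, bound):
    for i, state in enumerate(trace):
        if trigger in state:
            window = trace[i + 1 : i + 1 + bound]
            if not any(response in s for s in window):
                return False
    return True

ok_trace = [{"request"}, set(), {"grant"}, set()]
bad_trace = [{"request"}, set(), set(), set()]
assert bounded_response(ok_trace, "request", "grant", 3) is True
assert bounded_response(bad_trace, "request", "grant", 3) is False
```

A test-case generator can then search for inputs whose traces exercise such formalized requirements, which is the sense in which the requirements act as another implementation of the specification.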
We consider formal requirements to be another implementation of the specification. The logical conjunction of all the formal requirements should capture the complete software behaviour. In fact, formalizing the requirements alongside the original software implementation is an example of n-version programming. Treating the formal requirements as a program implementation, we work on defining requirement-coverage metrics as structural coverage criteria. These structural coverage metrics will be used to guide the test-data generation to derive a test suite. As an inherent property of n-version programming, the logical structure of the formal requirements and the program code may differ, thus the structural code coverage achieved on the program code must be analysed empirically.
Structural Code Coverage
Structural code coverage is a class of coverage metrics often used to reason about the sufficiency of a given test suite. A variety of metrics exists, including statement coverage, decision coverage etc. In the safety-critical domain, one established structural coverage metric is the modified condition/decision coverage (MC/DC). The basic idea of MC/DC is to test whether each condition of a decision can independently control the outcome of the decision (ie the test suite contains, for each condition, a pair of tests that vary only that condition while holding the other conditions of the decision fixed, and that produce different decision outcomes).
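To make the independence requirement concrete, here is a small sketch (our own illustration, with an assumed example decision `(a and b) or c`) that searches for an MC/DC "independence pair" for each condition — two test vectors differing only in that condition and yielding different decision outcomes:

```python
from itertools import product

# Hypothetical example decision with three conditions: (a and b) or c
def decision(a, b, c):
    return (a and b) or c

# For each condition index, find an independence pair: two test vectors
# that differ only in that condition and produce different outcomes.
def independence_pairs(dec, n_conditions):
    pairs = {}
    vectors = list(product([False, True], repeat=n_conditions))
    for i in range(n_conditions):
        for v in vectors:
            w = list(v)
            w[i] = not w[i]
            if dec(*v) != dec(*w):
                pairs[i] = (v, tuple(w))
                break
    return pairs

pairs = independence_pairs(decision, 3)
# MC/DC demands such a pair for every condition; all three exist here.
assert len(pairs) == 3
```

An MC/DC-adequate test suite for a decision with n conditions can be built from n such pairs, typically requiring n+1 test cases rather than the 2^n of exhaustive testing.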
The terms condition and decision refer to the structure of the source code. At the machine-code level there is no grouping of conditions into decisions, which means that condition coverage at the source-code level corresponds to branch coverage at the machine-code level. At higher abstraction levels, notions like model coverage are used. At model level, structural coverage is applied in a rather ad hoc fashion, without the established definitions that are common at the source-code level.
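The correspondence between source-level conditions and machine-level branches can be sketched as follows (an assumed illustration of short-circuit compilation, not from the article): each condition of `a and b` becomes its own conditional branch, so covering both outcomes of every condition amounts to covering every branch edge:

```python
# Sketch: short-circuit evaluation of "a and b" at machine level yields one
# conditional branch per condition. We instrument each branch outcome with a
# counter to show that exercising each condition both ways covers all edges.
branch_outcomes = {("a", True): 0, ("a", False): 0,
                   ("b", True): 0, ("b", False): 0}

def and_decision(a, b):
    branch_outcomes[("a", a)] += 1
    if not a:            # branch generated for condition a
        return False     # short circuit: b is never evaluated
    branch_outcomes[("b", b)] += 1
    if not b:            # branch generated for condition b
        return False
    return True

# A suite taking each condition both ways covers all four branch edges:
for a, b in [(True, True), (True, False), (False, True)]:
    and_decision(a, b)
assert all(count > 0 for count in branch_outcomes.values())
```

Note that the vector `(False, True)` never evaluates `b` at all, which is exactly the loss of the source-level decision structure that makes the machine-code view purely branch-based.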
Within the project SECCO (Sustaining Entire Code-Coverage on Code Optimization), we work on the mapping and preservation of structural code coverage between the different program representation levels. As shown in Figure 2, the implementation is done in a domain-specific modelling environment (SCADE, Simulink etc), from which a code generator produces source code that is then transformed into machine code. The SECCO approach is to define the properties of code transformations such that the chosen structural code-coverage metric is preserved by the code transformation. With this approach one can use the source code or the model to generate test cases automatically and independently of the hardware platform.
There exist basic definitions for structural requirement coverage. Experimental results show only a weak correlation between structural requirement coverage and structural code coverage, ie full requirement coverage typically yields less than full code coverage. On the basis of these results we want to identify what is necessary to guarantee the preservation of coverage criteria, starting from structural requirement-coverage criteria, in order to achieve structural code coverage. We address the following questions: Is there a better structural coverage metric for the requirements than the existing coverage metrics? Which type of formal requirements yields good values for MC/DC? Which requirements are 'hard' to test (ie the derived test suite executes only a subset of the necessary paths defined by MC/DC)? Which implementation variants are possible and how do they affect the MC/DC criterion?
It is rather challenging to match current coverage metrics between the different program representations shown in Figure 2. For example, MC/DC is based on the conditions and decisions of the source code. But given the (low-level) specification there is no hint about what logical structures should be grouped into one decision. The same is true for automatically generating code out of a modelling environment like Simulink. This is a serious issue for testing safety-critical systems, since with a given test suite, the different logical structuring of a program results in different structural code coverages. Within the project SECCO we are working on coverage metrics that are more robust against the logical restructuring of programs.
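The effect of logical restructuring can be seen in a small sketch (our own example, not from the article): the two variants below are semantically identical, yet they group the same conditions into decisions differently, so the same test suite produces different coverage figures:

```python
from itertools import product

def variant1(a, b):
    # single decision with two conditions: "a and b"
    return 1 if (a and b) else 0

def variant2(a, b):
    # same behaviour, restructured into two one-condition decisions
    if a:
        if b:
            return 1
    return 0

# Semantically identical on all inputs...
assert all(variant1(a, b) == variant2(a, b)
           for a, b in product([False, True], repeat=2))

# ...but MC/DC on variant1's decision "a and b" needs an independence pair
# per condition, eg {(T,T), (F,T), (T,F)}, whereas variant2 contains only
# one-condition decisions, for which MC/DC degenerates to branch coverage.
```

A coverage metric robust against such restructuring would have to assign the same coverage figure to both variants for any given test suite.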
Real-Time Systems Group, Vienna University of Technology, Austria
Tel: +43 1 58801 18223