The goal of software testing is to detect failures, i.e. to detect differences between the specification and the actual implementation of a module, subsystem, or system under test.
In their book on the topic, Utting and Legeard define model-based testing as “the automation of the design of black-box tests” and add that “…in addition white-box coverage metrics can be used to check which parts of the implementation have been tested so far and if more tests are required.”
Model-based testing (MBT) is a relatively new topic, and different definitions of it can be found. In general, though, its main aspect is to automate the generation of test cases from explicit behavior models such as state machines. The focus of this article is therefore on model-based testing of state-machine-based software.
For the rest of this article, a PLC-based sump controller is used as an example; it was inspired by the book on real-time systems by Burns and Wellings. The system is shown in Figure 1 below.
Figure 1: Sump controller with a level and methane sensor.
As shown, a controller monitors the methane and water levels in a sump. Whenever the water level is above and the methane level is below given limits, the pump starts. If the water level is low again or the methane level rises above a critical limit, the pump is stopped. A basic state diagram that implements the required behavior is shown in Figure 2. It is a relatively simple machine without hierarchy or other advanced features of UML state machines.
Figure 2: Simplified state diagram of the sump controller.
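The behavior described above can be sketched in C. The state and event names below are my own choice for illustration, not taken from the diagram, and the sketch deliberately stays as simple as the state machine itself:

```c
/* Hypothetical sketch of the sump controller's state machine.
   State and event names are illustrative assumptions. */

typedef enum { PUMP_OFF, PUMP_ON, METHANE_ALARM } state_t;
typedef enum { WATER_HIGH, WATER_LOW, METHANE_HIGH, METHANE_LOW } event_t;

state_t step(state_t s, event_t e)
{
    switch (s) {
    case PUMP_OFF:
        if (e == WATER_HIGH)   return PUMP_ON;       /* start pumping  */
        if (e == METHANE_HIGH) return METHANE_ALARM; /* safety lockout */
        break;
    case PUMP_ON:
        if (e == WATER_LOW)    return PUMP_OFF;      /* sump drained   */
        if (e == METHANE_HIGH) return METHANE_ALARM; /* safety stop    */
        break;
    case METHANE_ALARM:
        if (e == METHANE_LOW)  return PUMP_OFF;      /* alarm cleared  */
        break;
    }
    return s; /* events with no transition in this state are ignored */
}
```

Note that in the `METHANE_ALARM` state a `WATER_HIGH` event is deliberately ignored, so the pump can never start while methane is critical.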
When testing state-based software it is important to understand how a state machine can fail. Binder's book lists the following main problems:
1. Missing transition (the machine does not change state in response to a valid event)
2. Incorrect transition (the machine ends up in the wrong state)
3. Hidden transitions not shown in the state machine model (i.e. the implementation does not reflect the model)
4. Missing or incorrect events or conditions triggering a transition
5. Missing or incorrect actions in a transition or when entering or leaving a state
6. An extra state or a missing state (i.e. the implementation does not reflect the model)
7. A weak implementation of the machine (e.g. one that can't handle illegal events)
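Failure mode 7 in particular is easy to introduce with a hand-written dispatch function that silently drops events the model does not allow. A hypothetical sketch (reduced to two states for brevity) shows one way to make such weaknesses observable: count every illegal event so that a test can detect them.

```c
/* Hypothetical sketch addressing failure mode 7: instead of silently
   ignoring events the model does not allow, count them so a test
   harness can check that no illegal event was fed to the machine. */

typedef enum { PUMP_OFF, PUMP_ON } state_t;
typedef enum { WATER_HIGH, WATER_LOW } event_t;

static int illegal_events; /* incremented for every unmodeled event */

state_t dispatch(state_t s, event_t e)
{
    if (s == PUMP_OFF && e == WATER_HIGH) return PUMP_ON;
    if (s == PUMP_ON  && e == WATER_LOW)  return PUMP_OFF;
    illegal_events++; /* anything else is not in the model */
    return s;
}
```

A test can now assert not only the reached state but also that `illegal_events` stayed at zero along a legal route.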
Some of these problems can be avoided by using checklists or by generating the state machine code from the model. But manual checks are time-consuming, and the result depends heavily on the reviewer. In practice a tool is needed to ensure that the checks are really performed.
The classic testing process
Before looking at what a model-based testing process can look like, let's first take a look at the classic testing process. It consists of the following main steps:
Step #1. Develop a state machine model that is precise enough, i.e. one that covers the relevant aspects to be tested. If the state machine model is also used to generate code, you can assume that it is precise enough; in general, however, you can't. We come back to this topic later on.
Step #2. Design the test cases: define the test input data and expected test results based on the specification and the test objectives. This is usually manual work and can take quite some time even for a mid-sized state machine. A commonly used approach is to go through every state transition with a highlighter in hand, crossing off the arrows on the state transition diagram to indicate which transitions are already covered by a test case.
This approach was described in an earlier article on www.embedded.com. Figure 3 below shows the sump controller's state diagram with a first test route highlighted in yellow.
More test routes must be found until a defined coverage criterion is met (e.g. all transitions and states must be visited at least once). The results of this task are abstract test cases that cannot be executed yet.
Figure 3: The state diagram of the sump controller with a test route marked in yellow.
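An abstract test case of this kind can be written down as a sequence of event/expected-state pairs. The encoding below is a hypothetical sketch — the article prescribes no format, and the steps shown are illustrative rather than the exact route highlighted in the figure:

```c
#include <string.h>

/* Hypothetical encoding of one abstract test route as data:
   each step feeds an event and names the state expected afterwards.
   Event and state names are illustrative assumptions. */
typedef struct {
    const char *event;
    const char *expected_state;
} test_step_t;

static const test_step_t route1[] = {
    { "water_high",   "PumpOn"  },
    { "methane_high", "Alarm"   },
    { "methane_low",  "PumpOff" },
};

enum { ROUTE1_LEN = sizeof route1 / sizeof route1[0] };
```

Keeping routes as plain data makes the later transformation into executable tests (step 3) largely mechanical.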
Step #3. Implement the tests: the abstract test cases defined in step two must be transformed into executable test cases. This step depends a lot on the concrete system under test. Before the implementation can start, some decisions must be made, e.g. how the machine should be stimulated, how its reactions can be observed, and how the generated output can be traced.
Step #4. Execute the tests and compare the real with the expected outputs in a test harness. The tester has to run the test cases step by step and write down the results. These results are the basis for deciding when to stop testing and release the software, or whether to revise the model (i.e. fix bugs) or generate more tests.
Steps 3 and/or 4 can be automated to make testing faster and to reduce the effort needed for regression testing. For that purpose a test execution environment and a test adapter are needed (i.e. it is no longer necessary to watch whether the pump is running and make a tick on the test list).
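A minimal automated test run for the sump controller could look like the following self-contained sketch — the machine, the route format, and all names are illustrative assumptions, not the article's implementation:

```c
/* Hypothetical sketch of an automated test run: drive the machine
   with a recorded route and compare actual against expected states. */
#include <stdio.h>

typedef enum { PUMP_OFF, PUMP_ON, ALARM } state_t;
typedef enum { WATER_HIGH, WATER_LOW, CH4_HIGH, CH4_LOW } event_t;

static state_t fire(state_t s, event_t e)
{
    if (s == PUMP_OFF && e == WATER_HIGH) return PUMP_ON;
    if (s == PUMP_ON  && e == WATER_LOW)  return PUMP_OFF;
    if (s != ALARM    && e == CH4_HIGH)   return ALARM;
    if (s == ALARM    && e == CH4_LOW)    return PUMP_OFF;
    return s; /* no transition for this event in this state */
}

typedef struct { event_t ev; state_t expect; } step_t;

/* Returns the number of failed steps; 0 means the route passed. */
int run_route(const step_t *route, int n)
{
    state_t s = PUMP_OFF; /* initial state */
    int failures = 0;
    for (int i = 0; i < n; i++) {
        s = fire(s, route[i].ev);
        if (s != route[i].expect) {
            printf("step %d: expected state %d, got %d\n",
                   i, route[i].expect, s);
            failures++;
        }
    }
    return failures;
}
```

The harness replaces the manual "watch the pump and tick the list" step: each deviation from the expected state is logged automatically, and a zero return value means the route passed.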