by Jordi Cabot and Robert Clarisó

For non-critical software systems, the use of lightweight formal evaluation methods can guarantee sufficient levels of quality.

Model-Driven Engineering (MDE) is a software engineering paradigm aimed at improving developer productivity and software quality. To this end, the development process in MDE focuses not on the code but on models of the system being built. Models characterize the relevant features of a system for a specific purpose, for example, documentation, analysis, code generation or simulation. By abstracting away irrelevant features, system complexity is reduced and, thanks to (semi)automatic tool support for some development tasks, developers can minimize human-introduced errors and enhance productivity.

For this to hold true, model quality should be a primary concern: a defect in a model can propagate into the final implementation of the software system. As with software itself, model quality can be regarded from many different perspectives. It is necessary to make sure that models are realizable (i.e., structural models should be satisfiable, the states in a behavioral model should be reachable, etc.). In addition, models that offer complementary views of the same system should be consistent.

Formal methods provide valuable techniques to ensure the correctness of these software models. Fortunately, abstraction makes models more amenable to analysis than source code. Even so, most formal verification problems are undecidable or have such high computational complexity that scalability suffers. Thus, software verification is, and will remain, a grand challenge for software engineering research in the foreseeable future [1].

Beyond issues of scale, other model quality challenges include incomplete models and model evolution. These factors make fully-fledged (complete and possibly undecidable) formal methods unsuitable in this context. Instead, lightweight approaches that provide quick feedback and support large and complex models may be preferred, even if their outcomes are sometimes inconclusive. We believe this pragmatic approach offers the best trade-off for non-critical software systems. In particular, we have been applying a family of lightweight formal methods, based on bounded verification by means of constraint programming, to evaluate the correctness of Unified Modeling Language (UML)/Object Constraint Language (OCL) software models.

The UML is a de facto standard for describing software models, providing a collection of visual and textual notations to describe different facets of a software system. The most popular UML notation is the class diagram, which depicts the classes of an object-oriented system together with their relationships. However, UML class diagrams are unable to capture complex integrity constraints beyond basic multiplicity constraints. For more advanced constraints, class diagrams can be augmented with a companion textual notation, the OCL.
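As a small illustration of what OCL adds beyond multiplicities, consider an invented invariant (the classes and the constraint below are hypothetical, not taken from the article): every employee with a manager must earn less than that manager. A minimal Python sketch of evaluating such an OCL-style invariant over a candidate object model:

```python
# Hypothetical example: evaluating an OCL-style invariant over object
# instances. The classes and the invariant are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Employee:
    name: str
    salary: int
    manager: Optional["Employee"] = None

# OCL: context Employee inv SalaryCap:
#        self.manager <> null implies self.salary < self.manager.salary
def salary_cap(e: Employee) -> bool:
    return e.manager is None or e.salary < e.manager.salary

boss = Employee("boss", 100)
dev = Employee("dev", 80, manager=boss)
instance = [boss, dev]
print(all(salary_cap(e) for e in instance))  # True: invariant holds
```

No multiplicity constraint can express this relation between attribute values across an association; that is precisely the gap OCL fills.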

Using UML/OCL allows complex software systems to be designed without committing to a specific technology or platform. These designs can be verified to detect defects before investing effort in implementation. Typical design flaws include redundant constraints or inconsistencies arising from unexpected interactions among integrity constraints. These errors are far from trivial and may be very hard to detect and diagnose at the code level.

In this context, we have developed two open source tools for analyzing UML/OCL class diagrams: UMLtoCSP and its evolution, EMFtoCSP, which handles more general EMF-based models and is integrated into the Eclipse IDE. These tools frame correctness properties as Constraint Satisfaction Problems (CSPs), whose solution is an instance that constitutes an example proving (or a counterexample disproving) the property being checked. This mapping is transparent to users, who need no formal methods background to run the tools or understand their output.
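To give a flavor of this framing (a hand-rolled, brute-force sketch under invented model elements, not the actual UMLtoCSP/EMFtoCSP encoding, which delegates to a constraint-logic-programming solver): checking satisfiability of a class diagram amounts to searching, within bounds, for an instance that meets all multiplicity and OCL constraints.

```python
# Simplified sketch of bounded satisfiability as a search problem.
# The toy model (Department, Employee, worksIn) is invented for
# illustration; the real tools compile the model into a CSP instead.
from itertools import product

def find_instance(max_objects: int):
    """Toy model: every Employee works in exactly one Department
    (multiplicity), and an OCL-like invariant requires every
    Department to have at least 2 employees."""
    for n_dep in range(1, max_objects + 1):
        for n_emp in range(1, max_objects + 1):
            # Enumerate all ways to assign each employee a department
            for assign in product(range(n_dep), repeat=n_emp):
                counts = [assign.count(d) for d in range(n_dep)]
                if all(c >= 2 for c in counts):
                    return {"departments": n_dep, "employees": n_emp,
                            "worksIn": assign}
    return None  # no instance found within the bounds

example = find_instance(max_objects=4)
print(example)  # smallest instance: 1 department, 2 employees
```

The returned instance plays the role of the example that proves the property: the model is satisfiable, and the instance itself is evidence a user can inspect.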

Figure 1: The architecture of the EMFtoCSP tool.

The constraint-logic programming solver ECLiPSe is used as the reasoning engine to find instances. This choice has advantages and disadvantages with respect to SAT solvers, e.g., better support for complex numerical constraints. As with all bounded verification approaches, when no solution is found, no conclusion can be drawn about the property, since a solution could exist beyond the search bounds.

Figure 2: The EMFtoCSP interface, including the selection of the constraints (a), bounds (b) and properties (c) used in the verification. The results are visualized in (d) and (e).

We believe that this approach could enable the adoption of model verification practices in many software companies where none are currently employed due to the lack of a usable solution, an absence that ultimately risks the quality of the software they produce. Nevertheless, there is still a lot of work to be done. A number of extensions are currently planned for EMFtoCSP, but we would like to highlight two. The first is incremental verification: once a model has been checked, further evaluation should only consider the subset of the model that has changed since the last run. The second is automatic suggestion of search bounds: a quick pre-analysis of the model could propose promising bounds within which to look for a solution, maximizing performance.


[1] C. Jones, P. O’Hearn, J. Woodcock: “Verified Software: A Grand Challenge”, Computer, vol. 39, no. 4, pp. 93-95, April 2006.

Please contact:
Jordi Cabot Sagrera
Inria and École des Mines de Nantes, France

Robert Clarisó Viladrosa
Internet Interdisciplinary Institute – Universitat Oberta de Catalunya
