by Jurgen Vinju and Anthony Cleve, guest editors for the special theme section

The introduction of fast, cheap computing and networking hardware enables the spread of software. Software, in a nutshell, represents an unprecedented ability to channel creativity and innovation. The joyful act of simply writing computer programs for existing ICT infrastructure can change the world. We are currently witnessing how our lives can change rapidly as a result, at every level of organisation and society and in practically every aspect of the human condition: work, play, love and war.

The act of writing software does not imply an understanding of the resulting creation. We are surprised by failing software (due to bugs), by the inability of rigid computer systems to “just do what we want”, by the loss of privacy and information security, and, last but not least, by the million-euro software project failures that occur in the public sector. These surprises are generally not due to negligence or unethical behaviour but rather reflect our incomplete understanding of what we are creating. Our creations, at present, are all much too complex, and this lack of understanding leads to a lack of control.

Just as it is easy to write a new recipe for a dish the world has never seen before, it is easy to create a unique computer program that does something the world has never seen before. When reading a recipe, it is hard to predict how good the dish will taste; similarly, we cannot easily predict how a program will behave from reading its source code. The emergent properties of software occur at all levels of abstraction. Three examples illustrate this. First, a “while loop” can be written in a minute, but it can take a person a week, or even a lifetime, to understand whether it will eventually terminate on every input. Now imagine planning the budget for a software project in which all loops should terminate quickly. Second, take a scenario in which you simply need to scale a computer system from a single database with a single front-end application to a shared database with two front-end applications running in parallel. Such an “improvement” can introduce the wildest, most unpredictable behaviours: random people not getting their goods delivered or, worse, the wrong limb amputated. Third, we do not know how the network will react to the load generated during the break of the next international soccer match between France and Germany: when will it all crash?
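The loop-termination example can be made concrete with a classic case (our illustration, not taken from the article): the Collatz iteration. The loop body is three lines, yet whether it terminates for every positive integer is a famous open problem in mathematics.

```python
def collatz_steps(n: int) -> int:
    """Count the steps until the Collatz iteration reaches 1.

    Whether this loop terminates for *every* positive integer n
    is the Collatz conjecture -- nobody has proved it either way.
    """
    steps = 0
    while n != 1:
        # Halve even numbers; map odd numbers to 3n + 1.
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, despite the tiny loop body
```

A one-minute loop, a lifetime-hard termination question: exactly the gap between writing software and understanding it.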

Higher quality software is simpler software, with more predictable properties. Without limiting the endless possibilities of software, we need to know what we are creating. Teaching state-of-the-art software engineering theory and skills is one way of improving understanding, but this alone is not enough. We are developing better theories and better tools to improve our understanding of complex software and to better control its complex emergent behaviours. We will be able to adapt existing software to satisfy new requirements, and to understand both how costly these adaptations will be and the quality of the results. We will be able to design software such that consciously made design decisions lead to predictable, high-quality software artifacts. We will be able to plan and budget software projects within reasonable margins of error.

In this special theme of ERCIM News, we present some of the recent steps towards understanding and manipulating software quality. We are not yet at the stage where we fully understand, or can control, software, but we are certainly working towards that point. Some researchers study the current reality of software, discovering theories and tools that improve our ability to analyse, explain and manipulate it. Other researchers are re-thinking and re-shaping the future of software, discovering new, simpler languages and tools with which to construct the next generation of software. Together, these two perspectives should leapfrog us into a future where we understand it all.

As quality and simplicity are highly subjective concepts, our best bet is to increasingly contextualise software engineering theory and technology. General theories, languages and tools have resulted in overly complex systems, so more specialised tools and techniques are now being discovered for distinct groups of people and industries. For example, instead of modelling computation in general, we now model big data processing; instead of inventing new general-purpose programming languages, we focus on domain-specific formalisms; and instead of reverse engineering all knowledge from source code, we extract domain-specific viewpoints.

We hope you will find this selection of articles an inspiring overview of state-of-the-art software quality engineering research and beyond.

Please contact:
Jurgen Vinju
CWI and TU Eindhoven, The Netherlands

Anthony Cleve
University of Namur, Belgium

Next issue: July 2023
Special theme:
"Explainable AI"