
by Serge Demeyer, Ahmed Lamkanfi and Quinten Soetens

The production of reliable software is an important area of investigation within the software engineering community. For many years, reliable software was seen as software "without bugs". Today, however, reliable software has come to mean "easy to adapt", because of the constant pressure to change.


Figure 1: "in vitro" vs. "in vivo" research

As a consequence of this need to be adaptable, organizations producing software-intensive systems must strive for a delicate balance between two opposing forces: the need for reliability and the need for agility. Regarding the former, organizations optimize for perfection; regarding the latter, they optimize for development speed. The ANSYMO research group at the University of Antwerp (Belgium) is investigating ways to reduce the tension between reliability and agility. We seek to turn the changes made to a software system into concrete entities that can be manipulated and analysed. In this way, we expect to speed up the release process without sacrificing the safety net of quality control. A few examples follow.

  • Misclassified bug reports. Text-mining techniques (such as those employed in web search engines) classify reported bugs into two categories: severe and non-severe. Severe bugs cause system crashes, hence software is seldom released when such bugs are known to be lurking inside. Non-severe bugs, on the other hand, cause less harm; consequently, it is common practice to release systems with a list of known (non-critical) bugs. To avoid bug reports being assigned to the wrong category, we have identified text-mining techniques that can verify whether a reported bug is indeed severe or not (a minimal sketch of such a classifier follows this list).
  • What to (re)test? To minimize bug resolution time, it is good practice to have a fine-grained suite of unit tests that is run frequently. If a small change to a program causes a test to fail, the location of the defect is immediately clear and it can be resolved right away. Unfortunately, for large systems the complete suite of unit tests runs for several hours, so software engineers tend to postpone testing to batch runs. In that case, if one unit test fails, the cause of the defect is unclear and valuable time is lost trying to pinpoint its precise location. With our research, we are able to identify which unit tests are affected by a given change, which enables us to run the affected tests in the background while the software engineer is programming (see the second sketch after this list).
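
The following sketch illustrates the kind of text-mining classifier referred to above. It is a minimal example, not the actual ANSYMO tooling: the bug summaries and severity labels are hypothetical, and scikit-learn's bag-of-words features combined with a Naive Bayes classifier stand in for whichever text-mining technique is used in practice.

    # Minimal sketch: predicting the severity of bug reports from their summaries.
    # The training data below is hypothetical; real experiments use bug trackers.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    reports = [
        ("application crashes on startup", "severe"),
        ("null pointer exception when saving a file", "severe"),
        ("typo in the about dialog", "non-severe"),
        ("button label slightly misaligned", "non-severe"),
    ]
    texts, labels = zip(*reports)

    # Bag-of-words features feeding a Naive Bayes classifier,
    # a common baseline for this kind of text mining.
    classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
    classifier.fit(texts, labels)

    # A report whose predicted severity disagrees with the recorded one
    # is a candidate for re-inspection by the triage team.
    print(classifier.predict(["system freezes and data is lost"]))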

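The second sketch illustrates change-based test selection in the same spirit. It assumes a coverage map from source files to the unit tests that exercise them (for instance, recorded during an earlier full run of the suite); the file names, test names and the use of pytest are assumptions for illustration only.

    # Minimal sketch: run only the unit tests affected by a change.
    # coverage_map would normally be derived from coverage data of a full run;
    # the entries here are hypothetical.
    import subprocess

    coverage_map = {
        "src/parser.py":  ["tests/test_parser.py"],
        "src/printer.py": ["tests/test_printer.py", "tests/test_cli.py"],
    }

    def tests_affected_by(changed_files):
        """Collect the unit tests that touch any of the changed files."""
        affected = set()
        for path in changed_files:
            affected.update(coverage_map.get(path, []))
        return sorted(affected)

    # Files changed since the last run, e.g. taken from `git diff --name-only`.
    changed = ["src/printer.py"]

    # Run only the affected tests in the background instead of the full suite.
    subprocess.run(["pytest", *tests_affected_by(changed)])

Keeping the selection this coarse (file level rather than statement level) trades some precision for a mapping that is cheap to maintain while the engineer keeps programming.
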
"In Vitro" vs. "In Vivo" research
In this kind of research, industrial collaboration is a great asset. Just as a biologist must observe species under realistic circumstances, a computer scientist must observe software engineers in the process of building software systems. Industrial collaboration should therefore not be seen as a symptom of "applied" research, but rather as an integral part of fundamental research into software engineering practices.

To stress the fundamental research angle, we adopt the terms "in vitro" and "in vivo". Just as in the life sciences, we take methods and tools that have proven their virtue under artificial lab conditions ("in vitro" research) and apply them in uncontrolled, realistic circumstances ("in vivo" research). This can sometimes make a big difference. Returning to the misclassified bug reports mentioned earlier, we originally designed the experiment to verify whether text-mining algorithms could predict the severity of new bug reports. From discussions with software engineers in bug triage teams, however, we learnt that this would not be the best possible use case for such an algorithm, because they rarely face problems with incoming bug reports. Close to the release date, on the other hand, pressure builds up, and it is then that they want to verify whether the remaining bug reports are classified correctly. Such observations can only be made by talking to real software development teams; mailing lists and software repositories are a poor substitute.

Luckily, our research group has a strong reputation in this regard. Indeed, our PhD students actively collaborate with software engineers in both large and small organisations producing software-intensive systems. As such, we remain in close contact with the current state of the practice and match it against the upcoming state of the art.

Link:
http://ansymo.ua.ac.be/

Please contact:
Serge Demeyer, Ahmed Lamkanfi and Quinten Soetens
ANSYMO research group
University of Antwerp, Belgium
