
by Benoît Vanderose, Hajer Ayed and Naji Habra

For decades, software researchers have been chasing the goal of software quality through the development of rigorous and objective measurement frameworks and quality standards. However, in an increasingly agile world, this quest for a perfectly accurate and objective quantitative evaluation of software quality appears overrated and counter-productive in practice. In this context, we are investigating two complementary approaches to evaluating software quality, both of which are designed to take agility into account.

Most software engineering processes and tools claim to assess and improve the quality of software in some way. However, depending on its focus, each one characterizes quality and interprets evaluation metrics differently. These differences have led software researchers to question how quality is perceived across various domains and to conclude that it is an elusive and multifaceted concept. Two key perspectives nevertheless stand out: a software’s objective quality and its subjective quality.

The objective, “rationalized” perspective is embodied in influential quality models and promoted through standards such as ISO/IEC 25010. It envisions quality as conformance to a pre-defined set of characteristics: quality is an intrinsic property of the product that can be measured and compared against a standard in order to determine its relative quality level. From this perspective, quality assurance is therefore closely associated with quality control.

The subjective perspective, on the other hand, defines software quality as a constantly moving target based on customers’ actual experiences with the product. Quality therefore has to be defined dynamically, in collaboration with customers, rather than against pre-defined standards. This definition welcomes changes that enhance the quality of the customer’s experience, accepts deliberately lower quality at times (in order to do better next time) and allows quality goals to be redefined. As such, quality is constructed and checked iteratively and can evolve over time, which leads to constructive or emergent quality. The subjective perspective also promotes ongoing customer satisfaction and garners everyone’s commitment to achieving quality: quality assurance thus becomes an organization-wide effort, or what is called holistic quality.

The suitability of either perspective depends on the software development process. In a production context, quality is defined as conformance to set requirements, whereas quality in a service context should take into account the fact that each stakeholder has a different definition of what constitutes a quality experience, and that these perceptions evolve over time. In the field of software engineering, there has been a move from the compliance view towards a constructive, holistic quality assurance view. This is particularly notable in iterative and incremental software engineering methods and agile methods.

Improving the support for this way of envisioning software quality is one of the research topics addressed by the PReCISE research center at the University of Namur, and current efforts focus on two complementary research areas: model-driven quality assessment and iterative context-driven process evolution.

MoCQA and AM-QuICk frameworks
Our attempts to capture the essence of a “traditional” quality assessment (i.e., metrics, quality models and standards) in a unified meta-model resulted in a fully-fledged model-driven quality assessment framework named MoCQA [1]. This framework provides the methodology, tools and guidelines to integrate evaluation methods from different sources and to associate them with a quality goal, a set of stakeholders and an artefact (e.g., a piece of code or a UML diagram), allowing these elements to coexist coherently. Being model-driven, the framework provides a unified view of the quality concerns of specific stakeholders and iteratively guides the actual assessment. To leverage its benefits, however, the quality assessment must be performed iteratively and incrementally (the feedback from the assessment helps improve the products), and this feedback must be taken into account to pilot the next steps of the development process.
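To illustrate the kind of unified view such a meta-model can provide, the sketch below (in Python) ties quality goals, stakeholders, artefacts and metrics together and runs one assessment iteration. It is a minimal, hypothetical example: the class names, attributes and pass/fail aggregation are our own assumptions, not the actual MoCQA meta-model.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Illustrative sketch only: names and structure are assumptions,
# not the actual MoCQA meta-model.

@dataclass
class Artefact:
    name: str   # e.g. a piece of code or a UML diagram
    kind: str   # "code", "model", "document", ...

@dataclass
class Metric:
    name: str
    evaluate: Callable[[Artefact], float]  # evaluation method from any source
    threshold: float                       # target agreed with the stakeholders

@dataclass
class QualityGoal:
    description: str
    stakeholders: List[str]                # who cares about this goal
    artefact: Artefact
    metrics: List[Metric] = field(default_factory=list)

    def assess(self) -> Dict[str, bool]:
        """One assessment iteration: feedback used to pilot the next steps."""
        return {m.name: m.evaluate(self.artefact) >= m.threshold
                for m in self.metrics}

# Usage example (hypothetical artefact and metric)
code = Artefact(name="billing module", kind="code")
goal = QualityGoal(
    description="Keep the billing module easy to change",
    stakeholders=["developers", "product owner"],
    artefact=code,
    metrics=[Metric("test coverage", lambda a: 0.82, threshold=0.80)],
)
print(goal.assess())   # {'test coverage': True}
```

In such a setup, the dictionary returned by assess() is precisely the feedback that can pilot the next iteration of the development process.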

Guaranteeing the positive impact of an assessment on the development process calls for iterative process evolution. Our research in the field of agile methods customization [2] revealed that this customization typically does not account for the fact that the context itself may evolve over time. A second framework, AM-QuICk [3], is therefore designed to allow a truly context-driven evolution of the development process: a review at each iteration ensures that the process can be adapted to the current context. It relies on a repository of reusable agile artefacts, practices and metrics, and on a context-sensitive composition system.

Figure 1: An agile and quality-oriented development process based on the complementarity between the model-driven quality assessment (MoCQA) and the iterative context-driven process evolution (AM-QuICk).

In order to exploit the benefits of an iterative process evolution, decision-making elements are needed to guide the evolution and decide which practices to include at the right time. This can be achieved through model-driven quality assessment, making the two approaches complementary (Figure 1).
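As a minimal sketch of this complementarity (again in Python, and again hypothetical: the practice names, context attributes and selection rule are our own assumptions rather than the actual AM-QuICk repository), the feedback produced by an assessment like the one above could drive which practices are composed into the next iteration:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative sketch only: practices, context attributes and the
# selection rule are assumptions, not the actual AM-QuICk repository.

@dataclass
class Practice:
    name: str
    applies_when: Callable[[Dict], bool]   # context-sensitive condition

# Repository of reusable agile practices
repository: List[Practice] = [
    Practice("pair programming",
             lambda ctx: not ctx["assessment"].get("defect density", True)),
    Practice("definition-of-done refinement",
             lambda ctx: not ctx["assessment"].get("test coverage", True)),
    Practice("on-site customer review",
             lambda ctx: ctx["stakeholder_satisfaction"] < 0.7),
]

def compose_next_iteration(context: Dict) -> List[str]:
    """Pick the practices whose conditions match the current context,
    including the feedback from the latest quality assessment."""
    return [p.name for p in repository if p.applies_when(context)]

# Usage: feed a MoCQA-style assessment result back into process evolution
context = {
    "assessment": {"test coverage": False, "defect density": True},
    "stakeholder_satisfaction": 0.65,
}
print(compose_next_iteration(context))
# ['definition-of-done refinement', 'on-site customer review']
```

Each iteration would thus re-run the assessment, update the context, and recompose the process from the repository.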

Future Work
Looking to the future, our efforts will focus on tightening the integration between the two frameworks. Advances in these complementary research areas offer great opportunities to provide development teams with comprehensive sets of tools with which to manage an evolving software development process that focuses on the global satisfaction of each stakeholder at each stage. Such tools will help these processes operate more effectively in an increasingly agile world.

References:
[1] B. Vanderose, “Supporting a model-driven and iterative quality assessment methodology: The MoCQA framework,” PhD dissertation, University of Namur, Belgium, 2012.
[2] H. Ayed et al., “A metamodel-based approach for customizing and assessing agile methods,” in Proc. QUATIC 2012.
[3] H. Ayed et al., “AM-QuICk: a measurement-based framework for agile methods customization,” in Proc. IWSM/MENSURA 2013.

Please contact:
Naji Habra, Hajer Ayed or Benoît Vanderose
University of Namur, Belgium
