by Danilo Brajovic and Marco F. Huber (Fraunhofer Institute for Manufacturing Engineering and Automation IPA)

The debate about reliable, transparent and thus explainable AI applications is in full swing. Nevertheless, there is little practical experience in integrating AI-specific safety aspects into standard software development. In the veoPipe research project, Fraunhofer IPA, Huber Automotive, and ROI-EFESO Management Consulting are developing a joint approach to integrating these AI-specific aspects into an automotive development process. In this article, we share details on one component of this framework: reporting the AI development.

The emerging regulation of Artificial Intelligence (AI) through the EU AI Act requires developers and companies to adjust their processes in order to develop trustworthy AI systems. However, there is little experience in how this can be accomplished, and several technical challenges remain to be solved. Fundamentally, what makes the development of trustworthy AI systems harder than that of standard software is the fast pace of the field and the black-box character of many methods: even experts are often unable to adequately understand how an artificial neural network or any other machine learning model arrived at a particular result. This is unfavourable in several respects. It reduces user confidence in the AI application. Developers have little opportunity to improve an application in a targeted way if they do not fully understand it themselves. Finally, there are legal challenges around liability in the case of failures whose cause cannot be clarified. These challenges are addressed by the research field of explainable AI (XAI) [1].

On the industry side, there have been several efforts to establish guidelines for developing trustworthy AI systems. Especially among US companies, reporting datasets and machine learning models has become an established measure [2,3]. However, most of these works only cover a single stage of AI development, lack technical details (especially regarding XAI), or are not aligned with regulation. Our contribution is a structured reporting framework based on a set of cards that extends prior work and follows four major development steps. Our cards not only provide a structured overview of the information relevant to the corresponding development step, but also refer to additional material such as regulatory standards, toolboxes or scientific publications. With these interim results, developers should already be well positioned when legal requirements for the use of AI applications come into force. The four development steps covered in our approach are listed below; a simplified sketch of how such a card can be captured follows the list.

  1. Use Case Definition: This card describes the intended application in more detail and identifies potential risks, e.g. under the EU AI Act.
  2. Data: This card deals with the collection and documentation of data, as well as its labelling, provision and pre-processing.
  3. Model Development: This card covers topics such as interpretability and explainability, model selection, training, evaluation, and testing.
  4. Model Operation: Once the model is in practical use, issues such as the concept of operation, model autonomy, monitoring, adversarial attacks and data protection become relevant.
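
To make the idea of a card more concrete, the following is a simplified, hypothetical sketch of how the information collected on the Use Case Definition card could be captured as a data structure. The field names and example values are illustrative assumptions and do not reflect the actual schema used in veoPipe.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a "Use Case Definition" card as a data structure.
# Field names and example values are illustrative assumptions, not the veoPipe schema.
@dataclass
class UseCaseCard:
    application: str                     # intended application and operating context
    ai_act_risk_class: str               # preliminary risk classification under the EU AI Act
    identified_risks: List[str] = field(default_factory=list)
    referenced_material: List[str] = field(default_factory=list)  # standards, toolboxes, publications

card = UseCaseCard(
    application="Pedestrian tracking on a test vehicle",
    ai_act_risk_class="high-risk (assumed here for illustration)",
    identified_risks=["missed detections in poor lighting", "distribution shift between test sites"],
    referenced_material=["EU AI Act", "XAI survey [1]"],
)
print(card)
```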

We evaluated our proposal on three use-cases within the automotive domain. Further, in the course of our project, we held interviews with several certification bodies and companies in order to understand their approaches and requirements for safeguarding AI systems and, in particular, for using XAI for this purpose. Our industry partners revealed that they had experimented with XAI methods but found them too unreliable for practical use: at best, they help to find model errors, but they provide no help in safeguarding the application. Certification bodies, on the other hand, mentioned that it is hard to define specific requirements for such a rapidly evolving field; the best thing companies can do is to show that they have recognised the problem, for example by applying methods such as Local Interpretable Model-agnostic Explanations (LIME) or Shapley Additive Explanations (SHAP).
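
As an illustration of the kind of method the certification bodies referred to, the snippet below applies SHAP to a simple tabular classifier. It is a minimal, generic sketch of the public SHAP library API on toy data, not the tooling or models used in the project, and the resulting attributions should be read with the caveats our industry partners raised.

```python
# Minimal sketch: feature attributions with SHAP on a toy tabular classifier.
# Generic library usage on synthetic data, not the project's models or tooling.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 4))                   # toy features standing in for real sensor data
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # toy labels

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# SHAP attributes each prediction to the input features; inspecting the
# attributions can reveal model errors such as reliance on spurious features.
explainer = shap.Explainer(model, X)
explanation = explainer(X[:10])
print(explanation.shape)                   # per-sample, per-feature attributions
```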

Currently, a full paper describing the approach is being prepared and will be submitted soon. We are also integrating it into a classical automotive V-model and plan a follow-up study with certification bodies to better understand their approach to XAI.

The project team with the test vehicle. Within the project, the developed approach is evaluated on two use-cases. One of these use-cases involves pedestrian tracking on a vehicle.

References:
[1] N. Burkart and M. F. Huber, “A survey on the explainability of supervised machine learning,” Journal of Artificial Intelligence Research (JAIR), vol.70, pp. 245–317, 2021.
[2] M. Mitchell et al., “Model cards for model reporting,” in Proc. Conf. on Fairness, Accountability, and Transparency (FAT*), 2019.
[3] M. Pushkarna et al., “Data cards: purposeful and transparent dataset documentation for responsible AI,” in Proc. Conf. on Fairness, Accountability, and Transparency (FAccT), 2022.

Please contact:
Danilo Brajovic, Fraunhofer Institute for Manufacturing Engineering and Automation IPA, Germany
