by Peter Biegelbauer, Anahid Jalali, Sven Schlarb, and Michela Vignoli (all AIT Austrian Institute of Technology)

Given the numerous opportunities provided by rapidly evolving digital innovations, we need to address and assess the social and political risks that come with naively applying AI algorithms, especially in high-risk sectors. We argue that, though existing guidelines and regulations are a good starting point, we still need effective solutions that can be integrated into current AI development workflows. We introduce the idea of AI Ethics Labs as institutionalised “spaces for doubt”, providing platforms for frequent and intensive collaboration between developers and social scientists and thus reducing the potential risks of the algorithms developed.

In our digital society, we adopt and integrate information and communication technologies for use at home, work, and in education. The internet offers an unprecedented source for sharing data and collecting information from a broad audience worldwide. AI software provides new means to cope with an increasing amount of data and thereby enables new forms of knowledge production, process optimisation, and decision making. AI solutions have the potential to improve services in a broad range of areas, such as health, agriculture, energy, transportation, retail, manufacturing, and public administration.

Figure 1: Ethical guidelines for developing AI help ensure a fair system.


However, the latest digitisation wave, and especially AI, raises a number of challenges that need to be tackled. Widespread AI applications such as face recognition, the optimisation of mobility needs, the analysis of employee productivity, and speech recognition rely on personal data, raising questions of privacy, data protection, and ethics. Decision-supporting algorithmic systems are often trained on datasets containing biases, which the algorithms learn, and may even include explicitly discriminatory statements that affect the predictions of AI applications. The seriousness of the problem becomes evident in recent political scandals caused by ill-conceived, biased algorithmic systems processing personal data, which led to financial and legal harm for innocent citizens (e.g., Robodebt in Australia, the childcare benefit scandal in the Netherlands) [L1][L2]. In Austria, the planned introduction of a job-market opportunities assistance system (AMAS) was heavily criticised due to its inherent biases and other implications [1].

We can see that our society, economy, and industries are being heavily impacted by AI technologies. However, the inherent risks and implications are often underestimated or neglected. In order to sustain an inclusive, secure, and socially just digital society that benefits from AI technologies, we need to establish clear regulations and guidance on how to develop and implement ethical AI solutions.

Recently, several ethical principles, guidelines and regulations have been proposed by national and international governmental bodies, with the EU at the forefront of such activities. The EU digital package, comprising dozens of regulations under discussion in 2022, includes the AI Act [L3], which focuses on AI and its impact on society.
In addition, professional associations such as the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computing Machinery (ACM) and the North American Chapter of the Association for Computational Linguistics (NAACL) have repeatedly engaged in AI-related norm-setting.

Recent research, however, has shown that ethics guidelines have only a very limited impact on the practice of software engineering [2][3]. The guidelines lack practice-oriented guidance for addressing the complex impact of AI and perpetuate the disciplinary divide between the social scientists and ethicists who produce the guidelines and the software engineers who write the code. Common standards for creating more ethical AI solutions should integrate recommendations for technical solutions (e.g., bias identification), “ethics by design” principles, and self-assessment tools (e.g., ethics committees). Another issue is the availability of resources for thinking about ethics in the context of software development, a factor regularly missing in the actual work of software engineers.
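To illustrate what such a bias-identification step can look like in practice, the following minimal Python sketch computes the demographic parity difference, i.e., the gap in favourable-decision rates between demographic groups, for a classifier's predictions. The function name, the 0/1 encoding and the data are illustrative assumptions, not part of any specific guideline or of the tools discussed here.

import numpy as np

def demographic_parity_difference(y_pred, group):
    # Positive-prediction rate per group; the gap between the highest
    # and lowest rate is one simple indicator of potential bias.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical data: binary decisions (1 = favourable) for eight
# applicants belonging to two demographic groups.
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5

Even such a simple check covers only one fairness metric; deciding which metric is appropriate for a given application is precisely the kind of question that requires the interdisciplinary exchange described below.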

We see a need for institutionalised “spaces for doubt” [L4], which may be provided by inter- and transdisciplinary groups. Regular exchange between experts from different technical and non-technical disciplines, as well as stakeholders, is needed to build common standards and trust. At the AIT Austrian Institute of Technology, we created the AIT AI Ethics Lab [L5] in 2020, which has since served as a platform for regular meetings between software engineers and social scientists. Many of these interactions took place between experts from AIT itself, serving as a forum for learning together about the ethical challenges of AI development. More recently, activities have also involved actors from other organisations and sectors, leading to cooperation with the Austrian Federal Academy of Public Administration (VAB), the Austrian Federal Ministry for Civil Service (BMKOeS), and TAFTIE, the umbrella organisation of European innovation agencies.

The risks associated with the incorrect use of AI technologies are increasingly perceived as endangering the acceptance of AI. Given their growing importance, compliance with ethical guidelines will become a competitive advantage – even more so if compliance with binding regulations is demanded by authorities and companies in future. The model of AI ethics labs allows for ongoing self-reflection on software development, and it gives software engineers, social scientists, management and clients the opportunity to engage in productive exchange. An adequately financed ethics lab can help address the challenges of AI outlined above and propel the AI ethics debate from theory to practice.

Links:
[L1] https://apolitical.co/solution-articles/en/algorithms-and-ai-in-the-public-sector-the-rules
[L2] https://autoriteitpersoonsgegevens.nl/en/news/tax-administration-fined-discriminatory-and-unlawful-data-processing
[L3] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
[L4] https://www.tandfonline.com/doi/abs/10.1080/1369118X.2021.2014547
[L5] https://cochangeproject.eu/labs/AIT

References:
[1] D. Allhutter, et al., “Der AMS Algorithmus - Eine Soziotechnische Analyse des Arbeitsmarktchancen-Assistenz-Systems”, AMAS, p. 120, Wien, 2020. doi:10.1553/ITA-pb-2020-02.
[2] P. Biegelbauer, et al., “Engineering Trust in AI: the impact of debates on AI regulation on the work of software developers”, in 20th STS Conference Graz 2022, Critical Issues in Science, Technology and Society Studies, 2022.
[3] T. Hagendorff, “The Ethics of AI Ethics: An Evaluation of Guidelines”, in Minds and Machines 30(1): 99-120, 2020. doi:10.1007/s11023-020-09517-8.

Please contact:
Peter Biegelbauer, AIT Austrian Institute of Technology
Anahid Jalali, AIT Austrian Institute of Technology
