by Erwin Schoitsch (AIT Austrian Institute of Technology)

Highly automated and autonomous systems enabled by artificial intelligence (AI) bring with them new challenges and risks. Placing too much trust in, or misusing, machines that make decisions is risky, and questions of liability and responsibility are legally complex. Autonomous systems fall into three broad categories: technical systems that make decisions in “no win” hazardous situations (vehicles in traffic, collaborating robots); decision support systems in governance applications (administration, government, courts, staff recruitment, etc.), which may lead to unfair decisions for individuals and society; and systems open to deliberate misuse because they provide information that cannot be proven true or fake, potentially influencing elections, public opinion or legal processes to an extent unknown before. These risks cannot easily be countered by conventional methods. This article gives an overview of the potential and risks of this technology.

Of course, risks have long been associated with technology, including the dissemination of misinformation, failing algorithms and deliberate deception, but until recently the algorithms behind the technology were predictable and deterministic, and could at least be analysed and assessed. We now face a completely different challenge – the age of highly automated and autonomous systems, artificial intelligence (AI) and machine decision making, whereby decisions formerly made by humans are made by machines through methods such as deep (machine) learning, which are neither “explainable” nor guaranteed to be based on fair, unbiased training sets.

Public acceptance of highly automated and autonomous systems relies on trust in these systems. This is not just a technical issue but also an ethical one, with technology having “big brother” potential and other possible problems foreseen in science fiction, e.g., Isaac Asimov’s “Three Laws of Robotics”. Asimov’s laws seem reasonable and complete, and were even complemented by an overarching “Zeroth Law” (“A robot may not harm humanity, or, by inaction, allow humanity to come to harm”), but it has been demonstrated (even by Asimov himself) that realistic situations can place a robot in unresolvable conflicts precisely because it adheres to these laws.

AI technology is being implemented in automated driving, collaborative robots in the workspace, assistive robotic systems, highly automated production, and management and decision systems in medicine, public services, the military and many other fields. The EC, the European Parliament, the UN, many informatics and computing associations, standardisation groups, the German Ethics Commission on Automated and Connected Driving, NGOs and others have created guidelines or even certificates for the trustworthiness of highly automated systems, AI systems, cognitive decision systems, automated vehicles, robotic systems, ethically aligned design, and the like (see [1], [2], [3]). A new science of “robot psychology” has evolved that studies the interrelationship between human-robot collaboration and human wellbeing in a world of intelligent machines, and addresses how to keep human rights and individual decision making alive.

It seems that the question “Is it possible to create practical laws of robotics which can guarantee a safe, conflict-free and peaceful co-existence between robots and humans?” cannot be given a definitive answer that is valid in all foreseeable situations. Even in Asimov’s stories, robots had to decide which type of risk of harm is acceptable (e.g. an autonomous robotic surgeon). Several authors, including Asimov himself, have depicted a mental breakdown as the logical consequence of a robot detecting that an activity which seemed to follow the First Law had a disastrous outcome: in “The Robots of Dawn”, the plot revolves around a robot apparently destroyed by such a breakdown (like a “short circuit” in its computer brain). The toy sketch below makes this kind of unresolvable conflict concrete.
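A minimal sketch in Python, assuming an invented “no win” driving scenario (the option names and harm assignments are illustrative assumptions, taken neither from Asimov nor from the cited guidelines), shows how every available option can violate the First Law:

# Toy model of a First-Law dilemma: every available action either harms
# a human directly or allows harm through inaction. All names and harm
# assignments below are illustrative assumptions.

# Each option lists the humans harmed by taking it, and by not taking it
# (the classic "no win" traffic situation from the introduction).
OPTIONS = {
    "swerve_left":  {"harmed_by_action": {"pedestrian"}, "harmed_by_inaction": set()},
    "swerve_right": {"harmed_by_action": {"cyclist"},    "harmed_by_inaction": set()},
    "brake_only":   {"harmed_by_action": set(),          "harmed_by_inaction": {"passenger"}},
}

def first_law_permits(option):
    """First Law: a robot may not injure a human being or, through
    inaction, allow a human being to come to harm."""
    effects = OPTIONS[option]
    return not effects["harmed_by_action"] and not effects["harmed_by_inaction"]

permitted = [o for o in OPTIONS if first_law_permits(o)]
if permitted:
    print("Permitted options:", permitted)
else:
    # No option satisfies the First Law: the unresolvable conflict
    # (Asimov's "mental freeze-out") described above.
    print("First-Law conflict: every option harms a human.")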

These robotic laws were written in 1942, when robots were imagined as androids, relatively simple “slaves” of humans, not the highly complex robots conceivable today. And what about a robot developed for an army? And who is defined as, or how do we define, a “human being” (history shows that certain groups of people have at times not been considered equally human and have been killed, e.g. in genocides)? For this, we have to look at the humans behind the AI and robots. And this only partially covers the aspects of “machine decision making” and “machine ethics” referred to in the abstract.

One initiative attempting to cover the principles for system designers and developers is the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (AI/AS), launched in April 2016, which published its “Ethically Aligned Design” document in 2019 [3] (see Figure 1). It not only identifies and recommends ideas for standards projects focused on prioritising ethical considerations in AI/AS (i.e., machine/computer decision making), but also proposes a certificate for “ethically aligned design”. The basic concept states:

“Ultimately, our goal should be eudaimonia, a practice elucidated by Aristotle that defines human well-being, both at the individual and collective level, as the highest virtue for a society…. honouring holistic definitions of societal prosperity is essential versus pursuing one-dimensional goals of increased productivity or gross domestic product (GDP). Autonomous and intelligent systems should prioritize and have as their goal the explicit honouring of our inalienable fundamental rights and dignity as well as the increase of human flourishing and environmental sustainability. The goal of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (“The IEEE Global Initiative”) is that Ethically Aligned Design will provide pragmatic and directional insights and recommendations, serving as a key reference for the work of technologists, educators and policymakers in the coming years.”

Many standardisation groups, the EC High-Level Expert Group [1] and the German Ethics Commission on Automated and Connected Driving [2] provide recommendations for decision making that place human rights, independence and wellbeing at the centre, independent of economic or demographic attributes such as age and race. But within the trustworthiness groups on ethics and governance in ISO/IEC JTC1 SC41 (IoT) and SC42 (AI), international discussion has revealed that even the definition of (individual) human rights differs among cultures and countries’ legal systems. A simple check of the kind sketched below illustrates one operational facet of such recommendations.
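As a rough illustration of how “independent of demographic attributes” could be checked in practice, the following sketch (plain Python; the decision log and the tolerance are invented for illustration) computes a simple demographic-parity gap, one of the most basic fairness metrics:

# Minimal demographic-parity check: compares the rate of positive
# decisions across two groups. The decision log and the tolerance are
# invented for illustration; real audits use richer metrics and data.
decisions = [  # (group, decision) pairs from a hypothetical decision system
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def positive_rate(group):
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = abs(positive_rate("A") - positive_rate("B"))
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance, not a normative threshold
    print("Warning: positive-decision rates differ notably across groups.")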

Figure 1: Examples of ethics guidelines from various organisations.

Perhaps it makes sense, then, to focus on “easier” issues in our cultural environment. The article “Machine Learning Based Audio Synthesis: Blessing and Curse” reveals an important risk: the benefits of helping people with special needs to express themselves verbally are likely to be marginal compared with the risk of eroding trust in audio media, with the potential for “deep fakes” to become this technology’s major application. “Covering Ethics in CPS Design” reports on the “Civis 4.0 Patria” project on citizen participation in disaster management, which relies on trusted information and a trustworthy framework (with cybersecurity, privacy and safety among the main properties) and an approach leading to an ethically aligned software design and security architecture.

The article “Trustability in Algorithmic Systems Based on AI in the Public and Private Sector” addresses the governance challenge of machine-driven decision making in public administration, criminal justice, education and health. In addition to technical safety solutions, trustability and interpretability are key issues. Legislation, fairness and ethical principles (which, in certain contexts, contradict each other) are the main concerns. These systems should be evaluated by the public and checked for acceptance based on their decisions and the public’s perception of them. The “human comprehensible model” addressed in the paper, and sketched below, is a key precondition.
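To illustrate what a “human comprehensible model” can look like, here is a minimal sketch (using scikit-learn; the toy features and decisions are assumptions for illustration, not from the cited paper) that trains a small decision tree and prints its learned logic as readable if/then rules:

# Sketch of an inherently interpretable model: a shallow decision tree
# whose decision logic can be printed and reviewed by non-experts.
# Features and data are toy assumptions for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical application records: [income_score, prior_defaults]
X = [[0.9, 0], [0.8, 1], [0.3, 2], [0.2, 3], [0.7, 0], [0.4, 2]]
y = [1, 1, 0, 0, 1, 0]  # 1 = approve, 0 = reject

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned tree as plain-text if/then rules that
# a reviewer (or an affected citizen) can read and contest.
print(export_text(model, feature_names=["income_score", "prior_defaults"]))

Such shallow, rule-based models may trade some accuracy for decision logic that auditors and affected people can actually inspect.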

The article “Why your Robot Co-Worker Needs a Psychologist” addresses the new interdisciplinary research challenge of robot psychology. It outlines the research and development needs for this area, as addressed in the Austrian research lab “CoBot Studio”, where experts from different disciplines work together towards the common goal of human-centred trustworthy collaborative robots.
The article “You Can Make Computers See; Why not People?” discusses the achievements made in computer vision and image recognition, e.g. for highly automated vehicles and storage control. It asks why these technologies have not been more widely used to help vision-impaired people (“assisted vision”), and whether our efforts might be better spent trying to directly help humans.

The issues raised in these articles represent only a few of the big dilemmas that we need to address. Hopefully the articles in this section will motivate the reader to ponder the ethical questions raised by technologies, their design, implementation and application.

Part of the work described received funding from the EC (Horizon 2020/ECSEL Joint Undertaking) and the partners’ national funding authorities (in Austria, the Austrian Research Promotion Agency (FFG) and the Federal Ministry for Climate Action, Environment, Mobility, Innovation and Technology (BMK)) through the projects AutoDrive (737469), Productive4.0 (737459) and SECREDAS (783119).

Link:
[L1] “When computers decide” - Informatics Europe and ACM Europe: https://www.acm.org/binaries/content/assets/public-policy/ie-euacm-adm-report-2018.pdf

References:
[1] European Commission, Independent High-Level Expert Group, “Ethics Guidelines for Trustworthy AI” (Final report April 2019, HLEG AI), Brussels. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai 
[2] Federal Ministry of Transport and Digital Infrastructure, Ethics Commission on Automated and Connected Driving – Report June 2017, Germany; Summary available in English on https://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission-automated-and-connected-driving.pdf?__blob=publicationFile 
[3] The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, “Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems”, First Edition. IEEE, 2019. https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html 

Please contact:
Erwin Schoitsch
AIT Austrian Institute of Technology and Secure Business Austria (SBA)
