by Erwin Schoitsch (AIT Austrian Institute of Technology)

The emergence of highly automated and autonomous systems enabled by artificial intelligence (AI) brings new challenges and risks. Placing too much trust in machines that make decisions, or misusing them, is risky, and the legal questions of liability and responsibility are complex. Autonomous systems can be grouped into three broad categories: technical systems that make decisions in “no win” hazardous situations (vehicles in traffic, collaborating robots); decision support systems in governance applications (administration, government, courts, staff recruitment, etc.), which may lead to unfair decisions for individuals and society; and systems that are open to deliberate misuse because they provide information that cannot be verified as genuine or fake, potentially influencing elections, public opinion or legal processes to an unprecedented extent. These risks cannot easily be countered by conventional methods. We give an overview of the potential and the risks of this technology.

by Christoph Klikovits (Forschung Burgenland), Elke Szalai and Markus Tauber (FH Burgenland)

Digitisation is leading to the increased use of cyber-physical systems (CPS). A citizen participation and disaster management platform uses IoT components such as sensors to collect information about critical events in disaster scenarios. In such situations it is essential that all stakeholders can rely on the information being trustworthy. We are researching an approach that takes ethical considerations into account during the development process, resulting in a secure, trustworthy framework.
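The abstract does not say how trustworthiness is enforced technically. As a minimal sketch of one common building block (an assumption on our part, not the project's actual framework), each sensor could authenticate its readings with an HMAC so the platform can detect tampering in transit; the key, sensor name and message format below are invented for illustration.

```python
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-key"  # hypothetical pre-shared key provisioned per sensor

def sign_reading(sensor_id: str, value: float) -> dict:
    # Package a reading with a keyed hash so recipients can check integrity.
    payload = {"sensor": sensor_id, "value": value, "ts": time.time()}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["mac"] = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_reading(msg: dict) -> bool:
    # Recompute the HMAC over everything except the tag and compare safely.
    data = {k: v for k, v in msg.items() if k != "mac"}
    body = json.dumps(data, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(msg["mac"], expected)

reading = sign_reading("water-level-07", 3.42)
assert verify_reading(reading)  # the platform accepts only verifiable readings
```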

by Sónia Teixeira, João Gama, Pedro Amorim and Gonçalo Figueira (University of Porto and INESC TEC, Portugal)

Algorithmic systems based on artificial intelligence (AI) increasingly play a role in decision-making processes, both in government and industry. These systems are used in areas such as retail, finance, and manufacturing. In the latter domain, the main priority is that the solutions are interpretable, as this characteristic correlates with the rate at which users (e.g., schedulers) adopt them. More recently, however, these systems have been applied in areas of public interest, such as education, health, public administration, and criminal justice. The adoption of such systems in these domains, in particular of data-driven decision models, has raised questions about the risks associated with this technology, from which ethical problems may emerge. We analyse two important characteristics of AI-based systems, interpretability and trustability, in the industrial and public domains, respectively.
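As a minimal sketch of what “interpretable” can mean for a scheduler, consider a shallow decision tree whose learned rules can be printed and audited by the person who has to act on them. The feature names and toy data below are invented for illustration; the article does not specify which models are used.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy scheduling data: [job urgency (1-6), slack in minutes]; label 1 = run first.
X = [[2, 40], [5, 10], [1, 90], [4, 30], [3, 70], [6, 5]]
y = [0, 1, 0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# A scheduler can read the full decision logic as a handful of if/then rules.
print(export_text(tree, feature_names=["urgency", "slack_minutes"]))
```

A deep neural network might fit the same data, but its decision logic could not be inspected line by line in this way, which is the adoption barrier the abstract alludes to.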

by Martina Mara (Johannes Kepler University Linz)

As close collaboration between humans and robots increases, robots must be programmed to be reliable, predictable and thus trustworthy for the people working with them. Making this a reality requires interdisciplinary research and development. The Austrian project CoBot Studio is a research initiative in which experts from different fields work together towards the common goal of human-centred collaborative robots.

by Nicolas Müller (Fraunhofer AISEC)

Machine-learning-based audio synthesis has made significant progress in recent years. Current techniques make it possible to clone any voice with deceptive authenticity from just a few seconds of reference recording. But is this new technology more of a curse than a blessing? Its uses in medicine (restoring the voices of people who can no longer speak), in grief counselling, and in the film and video game industries contrast with its enormous potential for misuse (deepfakes). How can our society deal with a technology that has the potential to erode trust in (audio) media?
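To illustrate how little reference audio such cloning requires, the sketch below uses the open-source Coqui TTS library and its YourTTS model, which conditions synthesis on a speaker embedding computed from a short clip. This is one publicly available tool of this kind, not necessarily the technique studied in the article, and the file names are placeholders.

```python
from TTS.api import TTS

# Load a multilingual model that supports zero-shot voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/your_tts")

# A few seconds of the target voice are enough to condition the output.
tts.tts_to_file(
    text="This sentence was never spoken by the reference speaker.",
    speaker_wav="reference_5s.wav",  # placeholder: ~5 s clip of the target voice
    language="en",
    file_path="cloned.wav",
)
```

The same low barrier to entry that enables the legitimate applications above is what makes audio deepfakes so difficult to contain.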

Next issue: October 2020
Special theme:
"Blue Growth"