by Erwin Schoitsch (AIT Austrian Institute of Technology)
Highly automated and autonomous systems enabled by artificial intelligence (AI) are imminent, and they bring new challenges and risks. Placing too much trust in machines that make decisions, or misusing them, is risky, and the legal questions of liability and responsibility are complex. Autonomous systems can be grouped into three broad categories: technical systems that must make decisions in “no-win” hazardous situations (vehicles in traffic, collaborating robots); decision-support systems in governance applications (administration, government, courts, staff recruitment, etc.), which may lead to unfair decisions for individuals and society; and systems open to deliberate misuse, providing information that cannot be verified as true or fake and thereby potentially influencing elections, public opinion or legal processes to an unprecedented extent. These risks cannot easily be countered by conventional methods. We give an overview of the potential and risks of this technology.