by Anaëlle Martin (French National Advisory Ethics Council for Health and Life Sciences)
After Paris in 2022 [L1] and Porto in 2023 [L2], the third edition of the ERCIM Forum ‘Beyond Compliance’ [L3] was held in Budapest on 14-15 October 2024, at the HUN-REN Institute for Computer Science and Control. This year’s event, which took place both in person and online, continued the discussion on the tough ethical issues faced by researchers in the digital sciences. The scientific richness of these two days lay not only in the distinguished status of the speakers, but also in the wide range of cutting-edge topics covered. The diversity of contributions and the high calibre of Forum participants made it possible to explore digital issues from cultural, legal, (geo)political, historical, philosophical, and ethical perspectives.
The programme of the first day was marked by two particularly brilliant keynotes, masterfully delivered by Julian Nida-Rümelin (“Beyond Compliance: Digital Humanism”) and Milad Doueihi (“Beyond Intelligence: Imaginative Computing”). While the first speaker traced the philosophical origins of Digital Humanism and described its challenges through animism and mechanistic reductionism, the second offered a historical and literary analysis of what we now refer to as thinking machines. These presentations revisited classic AI debates, drawing on the ideas of Turing, Gödel, Wittgenstein, and earlier thinkers such as Leibniz and Butler. Both speakers explored the intersection of humanity and digital technology, advocating for a human-centered approach to AI. The German philosopher emphasized the centrality of human authorship, while the American historian discussed the transformative effects of digital memory on culture and knowledge. Ethically, both thinkers stressed the importance of responsibility in the use of technology, emphasizing that education should guide digital transformation. They both called for critical reflection to safeguard cultural values and advocated for the preservation of human relationships, while reflecting on how digital culture reshapes knowledge transmission.
The first session, dedicated to the making of regulations, featured three researchers. First, Melodena Stephens discussed the complexities of AI regulation, emphasising the difficulty of implementing effective, intergenerational policies in a rapidly evolving technological landscape, and the need for a global, flexible, and ethically sound approach to address issues like human autonomy, security, and the future of jobs. Next, Anna Ujlaki critically reviewed the political theory discourse on AI, focusing on its conceptual limitations, normative questions, and potential for addressing AI’s integration into society, while highlighting the political risks and ethical dilemmas involved in AI regulation. Finally, Nikolaus Forgo discussed how, since the introduction of computers into public administration, lawmakers have repeatedly overestimated the short-term effects of new technologies while underestimating their long-term impacts, as exemplified by the development of data protection laws and the recent AI Act.
The rest of the day featured two additional sessions dedicated to emerging topics and cultural influences.
Anatole Lécuyer opened the emerging topics session by discussing the paradoxical effects of virtual reality and metaverse technologies, highlighting their history, their growing impact on the population, particularly children and young adults, and the emerging ethical questions surrounding them. He explored psychological effects such as the sense of embodiment, agency, and the Proteus effect, which leads users to behave according to the stereotypes of their avatars, while also examining the potential harms and benefits of VR, from therapeutic uses to the risk of altering identity. This fascinating discussion was extended by the following speakers, who were present in person: Michele Barbier and Ferran Argelaguet. They presented a project exploring the ethical challenges of social interactions in the metaverse, focusing on issues such as harassment, privacy, and the legal status of avatars, with the goal of fostering empathy, improving safety tools, and addressing social and cultural concerns around digital identities and regulation. Finally, and in a slightly unconventional style, Jean-Bernard Stefani drew on Ivan Illich’s concept of “conviviality” to highlight the moral dilemmas of the digital world, including its ecological impact, surveillance capitalism, algorithmic discrimination, and digital divides, while arguing that these issues require a critical approach and a shift towards more human-centered and de-automated technologies.
Finally, two remote speakers addressed the issue of cultural influences. Rockwell Clancy discussed the relationship between cultural responsiveness, psychological realism, and global AI ethics, highlighting the importance of understanding both the normative and empirical components of AI ethics, the challenges posed by cross-cultural contexts, and the need for culturally informed policy frameworks in AI development. Marianna Capasso presented a project on algorithmic discrimination, approaching it from a cross-cultural perspective. She argued that algorithmic discrimination should be understood in a nuanced way, using examples such as Amazon’s CV screening system, which discriminated against women due to biased historical training data. She examined various forms of algorithmic discrimination, including indirect and statistical discrimination, and explored how culturally specific norms influence discriminatory behaviours.
The second day began with a session on cooperative agents. Elias Fernández Domingos discussed the importance of studying delegation to AI, explaining its issues and presenting a behavioural experiment in which AI delegation improved coordination in a collective-risk scenario, emphasizing the need for well-designed systems that maintain human agency while delegating tasks. Rebecca Stower explored the ethical and psychological implications of human-robot interactions, focusing on errors in robot behaviour, their impact on trust and risk-taking, and the challenges of balancing data privacy and user preferences in robot design. Finally, Michael Fisher discussed the importance of ensuring trustworthiness in autonomous systems, emphasizing the need for reliability, transparency, and ethical decision-making, while also addressing sustainability concerns related to both the environmental impact of AI and robotics and the unnecessary deployment of technology.
At midday, the Forum participants had the opportunity to attend a tutorial training expertly delivered by Alexei Grinbaum. He emphasized the importance of operationalizing AI ethics and explained that ethics in AI should be viewed as a valuable framework rather than a constraint. He addressed a range of ethical challenges, including security risks in robotics, and introduced tools to facilitate discussions between ethicists and engineers. He presented training courses featuring exercises on dilemmas and the evaluation of AI projects in sectors like healthcare. He also explored the issue of responsibility in personalised education, focusing on topics such as bias, fairness, and the role of teachers.
For the first time, the Forum left some space for an unconference session, which allowed participants to discuss, in a more informal way, Open Science and the Nobel Prize in Computer Science.
Finally, the Forum concluded with a session dedicated to democracy, which gave the floor to four speakers. Natali Helberger argued that AI is a powerful political tool that can either strengthen or undermine democracy, highlighting concerns about misinformation and the influence of big tech, while also recognizing AI’s potential to enhance communication. Siddharth Peter de Souza discussed the creation of data governance norms, emphasizing the role of civil society and advocating for a pluralistic approach to regulation that includes marginalized voices. Attila Gyulai explored the impact of AI on democracy, questioning the assumption that democracy is solely about autonomy, and suggesting that a more realistic understanding of democracy, one which accounts for representation, manipulation, and the constructed nature of preferences, is necessary to address the challenges AI poses. Finally, Bjorn Kleizen examined the level of trust citizens place in AI systems used by governments, exploring how transparency and public perceptions influence trust, and emphasizing the need for long-term strategies to maintain trust in AI applications.
Links:
[L1] https://www.ercim.eu/beyond-compliance/beyond-compliance-2022
[L2] https://www.ercim.eu/beyond-compliance/beyond-compliance-2023
[L3] https://www.ercim.eu/beyond-compliance
Please contact:
Anaëlle Martin
Comité consultatif national d’éthique, France