by Anaëlle Martin (CCNE)

On 18-20 October 2023, the ERCIM Ethics Working Group co-organised, with the CNPEN (the French National Committee for Digital Ethics) and the University of Porto, the second edition of the forum dedicated to digital ethics beyond compliance [L1]. The hybrid event, held at the Faculty of Engineering of Porto, consisted of one tutorial, two keynotes and six topic and discussion sessions. The organisers endeavoured to combine theoretical and practical aspects of digital ethics in research, bearing in mind that, in the context of the European AI Act and the war in Ukraine, researchers need to be pragmatic and open-minded. The programme also left plenty of room for discussion, debate and dispute among participants, particularly in the session devoted to environmental research ethics.

The first keynote was devoted to the fascinating and somewhat speculative topic of artificial consciousness, addressed from an epistemological perspective: the question was whether a ‘digital mind’ could be possible or even foreseeable in the future. It was also an opportunity to address the moral and philosophical question — often perceived as science fiction — of technological singularity, and to recall that the emergence of truly intelligent systems calls for an appropriate legal regime governing the rights and responsibilities of digital persons. The closeness between digital ethics and artificial intelligence (AI) regulation was illustrated by the session that immediately followed, entitled ‘AI: From ethics to regulation’, which masterfully highlighted the relationship between AI ethics narratives, standardisation, certification, self-regulation and governance. One of the speakers pointed out that, in the absence of artificial general intelligence, there is a risk that algorithmic fairness becomes a tool of distraction used to avoid legal regulation [L2]. In Europe this does not seem to be the case, since the European Union (EU) is at the cutting edge of AI regulation after years of ethics guidelines and self-regulation. Yet despite its ‘risk-based approach’, the AI Act appears to leave gaps in the protection of individuals, as was stressed in that session. The second session of the first day focused on the importance of science in a digital world. The talks on this theme were particularly original, as illustrated by citizen science, which is increasingly recognised as advancing scientific research and social innovation by involving the general public in gathering and analysing empirical data. The issue of trustworthy digital data research was tackled from the perspective of ethnography, which emphasises, in its analytical tools and research practices, relations of awareness and power [L3].

The first session of the forum’s second day began with presentations on data protection in the age of AI. Big data poses new challenges for privacy when machine-learning techniques are used to make predictions about individuals. The need to deploy data privacy technologies was raised several times, as was the urgency of moving from theoretical work to practice in order to have a positive impact on society [L4]. More specifically, in the big data context, the problematic status of inferred data in EU data protection law needs to be clarified. To remedy the accountability gap posed by big data analytics and artificial intelligence, some researchers call for a new data protection right, the “right to reasonable inference”, which would be based on a rigorous justification. Another data protection concern involves machine-learning models, in the context of the secondary use of trained models. These models are a blind spot in data protection regimes, which is why they should be a target of regulation before being applied to concrete cases. After a tutorial on research ethics in digital science, which recalled the importance of ethical and deontological reflection for (young) researchers [L5], discussions focused on the responsible use of generative AI in academia [L6], a highly topical issue: 2023 was undoubtedly the year of generative AI, and of ChatGPT in particular. The chatbot’s performance drew public attention to Large Language Models (LLMs). Yet, while advances in AI have the potential to reshape scientific tasks, many disciplines are facing a “reproducibility crisis” due to questionable research practices and a lack of transparency [L7]. For this reason, it is crucial to engage in exploratory work on LLM use cases in science and to ensure that generative AI does not undermine trust in scholarly knowledge. Above all, in the hype-laden climate surrounding OpenAI, researchers need to adopt a sober approach and an ‘epistemic humility’ in understanding the limitations of this technology. The final session of the second day focused on environmental research ethics. It provided the opportunity for a radical doctrinal proposal, formulated succinctly: to ensure the resilience of ecosystems and the sustainability of life on Earth, we need to dismantle the deadly illusions of AI and restore our animal and social intelligence to its rightful place. The other speakers on the panel were more moderate, focusing mainly on the carbon footprint of AI models at different stages of their life cycle and on researchers’ moral obligation to consider the environmental impacts of their research [L8].

The third and final day of the forum focused on ethical guidance and review for research beyond human subjects. From a methodological point of view, specific and even sectoral ethics guidelines are needed in the European ethics evaluation scheme. The principle of transparency, for example, must be applied and adapted to the fields in which AI is widely used, such as healthcare and virtual reality. The arrival of new digital entities such as metaverse avatars and healthcare chatbots reinforces the importance of ethical principles like the ‘principle of maintaining distinctions’ (between human beings and digital entities), which could possibly be enforced through watermarks [L9]. It is also imperative to take regulatory decisions imposing that principle in order to address the legal issues of liability and sanctions. The session further shed light on the importance of a distinct conceptual approach to ethics for projects funded in the area of AI technologies, in order to enable a human-centric and robust digital research ecosystem. More generally, the need for education, training and expertise in research ethics in the digital sciences was strongly emphasised. Finally, the operationalisation of safety principles in AI research in the context of open-source practices was discussed: safety, accountability, fairness, explainability and data stewardship are required to ensure trustworthy AI development. The final keynote was dedicated to the “Vienna Manifesto on Digital Humanism”, which is based on the premise that digitalisation presents many opportunities but also raises serious concerns: the monopolisation of the Web, the rise of extremist opinions, the loss of privacy, the spread of digital surveillance, and so on. The manifesto encourages academic communities, policy makers and industrial leaders to participate in policy formation. It is a call to act on current technological developments in order to shape technologies in accordance with human values, instead of allowing technologies to shape humans. This sensible roadmap brought the 2023 edition of the forum on digital ethics in research to a close.
The programme of the forum, including links to the presentations, is available at https://www.ercim.eu/beyond-compliance.

Links: 
[L1] https://www.ercim.eu/beyond-compliance
[L2] https://www.ercim.eu/publication/ethics2023/slidesDanielaTafani.pdf
[L3] https://www.youtube.com/watch?v=7PTTfQlEKvg
[L4] https://www.youtube.com/watch?v=tRjdf-WQyds
[L5] https://www.youtube.com/watch?v=zk7yk_ruzOI
[L6] https://www.youtube.com/watch?v=rR0biPkX3-w
[L7] https://www.ercim.eu/publication/ethics2023/slidesTonyRoss-Hellauer.pdf
[L8] https://www.youtube.com/watch?v=aL_2qV9b0Gk; https://www.ercim.eu/publication/ethics2023/slidesGabrielleSamuel.pdf
[L9] https://www.youtube.com/watch?v=kGG1bOHxPMA; https://www.ercim.eu/publication/ethics2023/slidesAlexeiGrinbaum.pdf

Please contact: 
Anaëlle Martin, Comité Consultatif National d’Ethique, France

Gabriel David, INESC-TEC, Portugal
ERCIM Ethics Working Group Chair
