by Diana E. Jimenez-Bejarano (Trialog), Stephan Krenn (AIT Austrian Institute of Technology), Antonio Kung (Trialog), and Angel Palomares Perez (Advanced Computing SL)
Privacy-Enhancing Technologies (PETs) have moved from academic prototypes to core building blocks of Europe’s digital policy agenda. Developments such as GDPR, the NIS2 Directive, the EU AI Act and the European Digital Identity framework show a clear direction: using or analysing sensitive data must not weaken confidentiality, privacy, data sovereignty, or compliance. At the same time, modern AI systems increasingly rely on cross-organisational collaboration. This dynamic creates a structural tension between data utility and privacy protection.
The Horizon Europe project LICORICE [L1] addresses the tension between data utility and privacy protection by bringing advanced cryptographic technologies into concrete, policy-relevant settings. The project develops interoperable tools for privacy-preserving identity management and for secure computation on sensitive data.
From Fragmented Research to Deployable Tools
In recent years, cryptographic research has produced powerful methods such as secure multi-party computation, homomorphic encryption, differential privacy and zero-knowledge proofs. Still, many of these approaches remain fragmented and focused primarily on theoretical questions. Integration into existing infrastructures is often complex, and usability and regulatory alignment are often underdeveloped.
LICORICE follows a more pragmatic approach. Instead of refining isolated primitives, the project combines mature and emerging PETs into coherent and deployable tools. This supports compliance with evolving European frameworks, including EUDI wallets, sector-specific data spaces and cybersecurity obligations under NIS2.
Two complementary technology stacks are developed:
- a privacy-oriented identity management toolset aligned with Self-Sovereign Identity concepts and European Digital Identity initiatives
- a secure computation toolset including secure federated learning, multi-party computation and secure neural network inference.
Together, these components form a practical foundation for trusted data ecosystems in regulated domains. The technologies are validated through two pilot implementations: one in healthcare and one in cybersecurity threat intelligence sharing.
Pilot 1: Privacy-Preserving AI for Healthcare
Healthcare data is among the most sensitive categories of personal data. Electronic Health Records, imaging data and information from wearable devices are essential for AI-supported medicine. At the same time, regulatory frameworks such as GDPR, HIPAA and the EU AI Act may restrict data sharing between hospitals and institutions. As a consequence, valuable datasets often remain siloed, limiting robust machine learning development.
LICORICE tackles this challenge in the context of chronic respiratory diseases, notably Chronic Obstructive Pulmonary Disease (COPD) and asthma, conditions that affect hundreds of millions of people worldwide and place a significant burden on healthcare systems. Earlier detection of exacerbations through AI-based prediction can reduce hospitalisations and costs, especially in ageing societies.
The pilot combines:
- self-sovereign identity mechanisms for strong and privacy-preserving authentication of patients and healthcare professionals, including cryptographically protected biometrics
- federated learning with enhanced privacy guarantees, allowing hospitals to jointly train models without transferring raw patient data
- secure aggregation based on multi-party computation, so that only aggregated model updates are visible
- secure neural network inference [1], protecting both patient input data and proprietary models during prediction.
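The secure-aggregation idea in the list above can be illustrated with a toy sketch. It assumes a simplified pairwise-masking scheme (the function names, the field modulus and the seed-exchange shortcut are illustrative, not the project's actual protocol): each pair of hospitals shares a random seed, one party adds and the other subtracts the mask derived from it, so all masks cancel when the server sums the contributions and only the aggregate model update is revealed.

```python
import random

PRIME = 2**31 - 1  # illustrative field modulus for the masking arithmetic


def mask_update(update, seeds, self_id):
    """Mask one party's model update with pairwise random masks.

    `seeds` maps each other party's id to the seed shared with it.
    Each pairwise mask is added by the party with the smaller id and
    subtracted by the other, so masks cancel in the server-side sum.
    """
    masked = list(update)
    for other_id, seed in seeds.items():
        rng = random.Random(seed)  # both parties derive the same sequence
        for i in range(len(masked)):
            r = rng.randrange(PRIME)
            if self_id < other_id:
                masked[i] = (masked[i] + r) % PRIME
            else:
                masked[i] = (masked[i] - r) % PRIME
    return masked


def aggregate(masked_updates):
    """Server-side sum: pairwise masks cancel, revealing only the total."""
    total = [0] * len(masked_updates[0])
    for upd in masked_updates:
        for i, v in enumerate(upd):
            total[i] = (total[i] + v) % PRIME
    return total
```

The server sees only masked vectors; real protocols such as those built on multi-party computation additionally handle dropouts and malicious parties, which this sketch omits.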
The scenario integrates representative hospital records and data streams from wearable devices measuring parameters such as ECG or oxygen saturation. Distributed machine learning across institutions becomes feasible while maintaining confidentiality and data sovereignty. The integration is illustrated in Figure 1.
![Figure 1: Secure framework for access and storage of data in the LICORICE health pilot [3].](/images/stories/EN144/jimenez.png)
Figure 1: Secure framework for access and storage of data in the LICORICE health pilot [3].
From a policy angle, this pilot gives a concrete example of how a European Health Data Space could enable cross-border analytics without centralising sensitive information. It shows, in practical terms, that AI innovation and fundamental rights protection need not contradict each other.
Pilot 2: Privacy-Preserving Cyber Threat Intelligence Sharing
The second pilot focuses on cybersecurity, where rapid information exchange is crucial to counter increasingly sophisticated attacks. AI-based cybersecurity assistants analyse large volumes of logs and threat intelligence to detect anomalies and predict vulnerabilities. However, these systems often depend on access to sensitive operational and, in some cases, personal data.
In practice, automated processing is often restricted once confidential information is present. This leads to manual review and slower incident response. At the same time, effective cyber defence depends on structured information sharing across organisational and national borders, which again raises privacy and compliance concerns.
LICORICE develops a privacy-preserving CTI assistant that addresses both issues. The system includes:
- automated anonymisation of queries sent to large language models provided as Machine Learning as a Service [2], enabling the safe use of external AI services
- differential privacy and multi-party computation techniques for secure data sharing across organisations
- integration with the MISP threat intelligence platform, allowing collaborative analysis without exposing sensitive underlying data.
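The differential-privacy component listed above can be illustrated with a minimal sketch, assuming the classic Laplace mechanism for releasing a noisy count (the function names and parameters are illustrative, not the project's implementation): before sharing a sensitive statistic, such as the number of hosts matching an indicator of compromise, an organisation perturbs it so that no single record's presence can be inferred.

```python
import math
import random


def laplace_noise(scale, rng=random):
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)


def dp_count(true_count, epsilon, sensitivity=1.0, rng=random):
    """Release a count with epsilon-differential privacy.

    Adding or removing one record changes the count by at most
    `sensitivity`, so Laplace noise of scale sensitivity/epsilon
    hides any individual contribution.
    """
    return true_count + laplace_noise(sensitivity / epsilon, rng)
```

A smaller epsilon means more noise and stronger privacy; the value actually shared across organisations is the noisy release, never the raw count.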
By embedding PETs directly into threat intelligence workflows, LICORICE demonstrates that AI-driven security tools can remain compliant with data protection requirements while preserving operational efficiency.
This is particularly relevant in view of NIS2, which strengthens cybersecurity obligations across critical sectors and encourages structured information sharing. The pilot provides technical mechanisms to implement such sharing in a privacy-preserving way.
Towards Trustworthy Digital Infrastructures
Europe’s regulatory landscape is often perceived as a constraint on innovation. LICORICE instead treats regulation as a design requirement. By embedding compliance, accountability and privacy into system architectures from the outset, the project aligns technological capability with European values, proving that privacy and utility are not mutually exclusive.
In this sense, the project contributes to digital sovereignty in a concrete way: enabling advanced analytics and cross-border collaboration without losing control over sensitive data. The developed toolsets aim to be interoperable and practically usable within emerging European data spaces and identity frameworks.
As PETs continue to mature, large-scale deployment will depend less on new theoretical results and more on integration, usability and regulatory compatibility. LICORICE positions itself at this interface, translating cryptographic research into operational trust.
Links:
[L1] https://www.licorice-horizon.eu/
References:
[1] K. Batool, S. Anwar, and Z. Á. Mann, “SecFePAS: Secure facial-expression-based pain assessment with deep learning at the edge,” in Proc. SEC 2024, 2024, pp. 417–424.
[2] V. Prodomo, R. Gonzalez, and M. Gramaglia, “SCIPER: Secure collaborative inference via privacy-enhancing regularization,” IEEE Trans. Privacy, vol. 1, pp. 57–68, 2024.
[3] LICORICE Consortium, “D3.2 Pilot planning, requirements and LICORICE toolset definition,” Project Deliverable, 2025.
Please contact:
Diana E. Jimenez-Bejarano, Trialog, Paris, France
