by Eva M. Molin, Alexander Schindler and Peter Biegelbauer (AIT Austrian Institute of Technology)
Becoming “AI-ready” is not about chasing trends but about laying solid foundations. At the AIT Austrian Institute of Technology, a dedicated AI Task Force drives this shift by building the right infrastructure, fostering targeted training, and setting clear ethical and legal guidance. Together, these pillars turn artificial intelligence from promise into practice, making innovation both rapid and responsible.
Artificial intelligence (AI) has become a decisive factor in the competitiveness of research and innovation institutions. For Research and Technology Organizations (RTOs), success increasingly depends on the ability to rapidly integrate emerging AI technologies into both research and administrative processes, while ensuring compliance, efficiency, and long-term scalability.
At AIT Austrian Institute of Technology, it became evident that the various technology domains within the institute were adopting AI at different paces. While some centers were spearheading developments, others were progressing more slowly. At the same time, opportunities for increasing the efficiency and effectiveness of business processes remained untapped. As a result, top management established an AI Advisory Board in 2023, which created the AIT AI Task Force in 2024 as a dedicated staff unit to provide an institute-wide framework for AI adoption [L1].
Comprising members from different centers with diverse disciplinary, age, gender, and cultural backgrounds, the Task Force was given a clear mandate. Its 11 members are to consolidate AI-related competences and infrastructure across the institute, to establish a common foundation for AI activities, and to integrate AI into administrative functions to enhance efficiency and effectiveness (Figure 1). Task Force members dedicate up to 50% of their working time to this role, while continuing their responsibilities within their respective centers. In this way, the Task Force supports both the advancement of AI research within AIT’s seven centers and the transformation of AIT’s corporate operations into an AI-enabled organization.
The main activities of the Task Force entail the coordination of high-performance computing infrastructure, the development of shared ethical and legal compliance frameworks, the rollout of accessible training programs, and the systematic evaluation of AI tools for research and administrative use. Together, these measures support both immediate needs and long-term strategic positioning. Its approach is deliberately engineering-oriented: start with an assessment of existing resources, identify overlaps and bottlenecks, and implement standardized solutions that can be scaled. The introduction of the ADA high-performance computing cluster was a concrete step in this direction. Instead of relying on scattered local servers, ADA now provides centralized access to advanced GPUs, petabyte-scale storage, and high-speed networking. By consolidating previously isolated resources, utilization efficiency improved measurably, while researchers gained the ability to train and deploy large AI models that were previously impractical. This centralization did not replace unit-level initiatives but complemented them, enabling both economies of scale and interdisciplinary collaboration.
Yet technology alone is not enough. Ensuring meaningful AI adoption across a diverse workforce requires inclusive and continuous skills development. To address this, the AI Task Force launched a broad-based internal training program. One notable example is the “ChatGPT compact” lecture series, which, in its first three online meetings, already introduced over 25% of AIT’s more than 1,700 employees to large language models, chatbots, and the principles of prompt engineering. These sessions, together with regular “AI Insights” info sessions and practical guidance on further AI-based tools, are designed to be approachable and relevant, lowering the barriers to entry and making AI more accessible to both technical and non-technical staff. This inclusive model has proven vital to cultivating a shared AI literacy across departments, units, and support areas.
Paralleling these efforts, the AI Task Force has systematically embedded responsible AI practices into institutional workflows. With the European AI Act [L2] entering into force step by step, early integration of ethical and regulatory compliance has become a strategic advantage. The AI Task Force partnered with AIT’s AI Ethics Lab [L3] to develop concrete supporting tools, ranging from ethics checklists to internal governance guidance, ensuring that researchers and administrators alike can align their work with evolving standards [1]. This proactive approach not only mitigates future compliance risks but also reinforces AIT’s commitment to responsible and trustworthy innovation.
Importantly, the AI Task Force fosters a collaborative culture that bridges traditional organizational silos to overcome the separation of expertise across centers. Through the creation of Special Interest Groups (SIGs), employees are invited to engage in peer-to-peer exchange on key AI topics such as Natural Language Processing, Retrieval-Augmented Generation, and Neurosymbolic AI. The newest SIG covers AI-supported proposal writing, a topic of utmost importance for an RTO. These groups provide an open, interdisciplinary forum for shared learning, experimentation, and the co-creation of expertise. This model of distributed knowledge building and exchange strengthens institutional capacity and accelerates innovation.
Of course, transforming AI adoption at scale comes with its own set of challenges. Developing centralized infrastructure required alignment with competing needs across diverse units. Finding the balance between standardization and flexibility demanded both technical and organizational agility. Engaging a broad spectrum of staff, from AI beginners to advanced users, required ongoing dialogue, practical training formats, and visible management support. Yet these challenges have also offered valuable insights: that a clearly defined mandate, early and inclusive engagement, and embedded ethics [2] are critical success factors for any institution seeking to make AI a strategic capability.
Looking ahead, AIT’s AI Task Force will continue to evolve. Ongoing refinement of infrastructure, expansion of training initiatives, and adaptation to new regulatory environments remain key priorities. But the foundation is both robust and adjustable. With a centralized, inclusive, and ethically grounded model, the AI Task Force not only accelerates AIT’s AI journey but also offers a replicable blueprint for other RTOs navigating similar transformations. In sharing this experience, AIT contributes to a broader dialogue on how organizations can responsibly and effectively embrace AI. Becoming AI-ready is not a technical upgrade; it is a cultural and strategic shift that demands vision and commitment across the entire organization.

Figure 1: The AIT AI Task Force acts as a central enabler of AI adoption across the institute. By integrating infrastructure, training, ethics, collaboration, and strategy, it facilitates the responsible, inclusive, and efficient uptake of AI technologies across research and administrative units.
Links:
[L1] https://www.ait.ac.at/en/ai-taskforce/
[L2] https://artificialintelligenceact.eu/
[L3] https://cochangeproject.eu/labs/AIT
References:
[1] P. Biegelbauer, et al., “What drives change? Dynamic institutionalizations of responsible research and innovation in organizations through institutional entrepreneurship,” Journal of Responsible Innovation, vol. 12, no. 1, 2025. [Online]. Available: https://doi.org/10.1080/23299460.2025.2479323
[2] P. Biegelbauer, et al., “Ethical AI: Why and how?,” ERCIM News, vol. 131, pp. 8–9, Oct. 2022. [Online]. Available: https://ercim-news.ercim.eu/en131/special/ethical-ai-why-and-how
Please contact:
Alexander Schindler
AIT Austrian Institute of Technology, Austria

