AI Act Ethics

AI Regulation (Artificial Intelligence Act): AI expertise becomes mandatory

A key provision of the regulation will come into force on 2 February 2025:
Providers and operators of AI systems must ensure that all persons involved have the necessary AI expertise. This poses particular challenges for universities, which increasingly rely on AI technologies in research, teaching and administration. These challenges include assessing the risk potential of the AI systems in use and establishing targeted training measures for employees. The following article explains key aspects of the AI Regulation and offers practical recommendations for implementing the new requirements in a legally compliant and efficient manner.

Berlin/Germany, 2 February 2025. With the introduction of the AI Regulation (AI Act), which came into force on 2 August 2024, the European Union has created a legal framework to regulate the use of artificial intelligence (AI) and make it safer. The regulation aims to protect the fundamental rights of EU citizens, create legal certainty and establish binding standards for the operation of AI systems.

An artificial intelligence system (AI system) is defined in Article 3 No. 1 of the Artificial Intelligence Act (AI Act) as ‘a machine-based system that is designed to operate with varying degrees of autonomy and, once deployed, demonstrates adaptability and, for explicit or implicit goals, derives from the inputs it receives how it can produce outcomes such as predictions, content, recommendations or decisions that can influence physical or virtual environments’.


The AI Regulation applies to providers, importers and distributors of AI systems that are placed on the market in the EU, as well as to operators of AI systems based in the EU. Providers, operators and product manufacturers based in third countries are also covered if the output of their AI systems is used in the EU.

– A provider is anyone who is actively involved in the development of an AI system or who integrates an existing AI model into their own product and sells it under their own brand.
– Anyone who uses an AI system for internal purposes without developing it further or offering it as their own product is categorised as an operator.



AI models or systems that are the subject of research and are not also used productively are excluded from the scope of the AI Regulation. This research privilege does not apply to AI models and systems that are merely used as tools in research (such as software for transcribing interviews) but were not developed and put into operation specifically and solely for the purpose of scientific research and development.

The AI Regulation takes a risk-based approach: the intensity of regulation is adapted to the risk posed by the specific application. Six months after the AI Regulation came into force, AI systems posing an unacceptable risk are prohibited, because they interfere particularly deeply with fundamental rights and can potentially cause considerable harm. The AI Regulation defines four risk levels for AI systems (see original publication: https://www.zki.de/).
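The four risk levels can be sketched as a simple classification (a minimal illustration; the tier names follow the regulation's commonly cited categories, while the example use cases are illustrative assumptions, not an official mapping):

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "high risk, strict obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "minimal or no risk"

# Illustrative mapping of hypothetical university use cases to tiers
EXAMPLES = {
    "emotion recognition in exams": RiskTier.UNACCEPTABLE,
    "automated grading of admissions": RiskTier.HIGH,
    "campus chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

assert EXAMPLES["campus chatbot"] is RiskTier.LIMITED
```

The higher the tier, the stricter the obligations; only the unacceptable tier leads to an outright ban.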

On 2 February 2025, one of the central provisions of the AI Regulation (Art. 4 AI Act) comes into force. Providers and operators of AI systems must ensure that their employees, and depending on the individual case probably also students who use the university's AI systems, as well as all other persons involved in their operation or use, have a sufficient level of AI competence. The regulation defines AI competence as a combination of skills, knowledge and understanding needed to use AI systems competently, including awareness of opportunities, risks and potential harm. The aim is to minimise the risks associated with AI and to ensure the responsible use of AI technologies in line with the timetable of the AI Act (see further information).

An AI inventory can be used to determine which AI systems are used for which purpose and how they function. Existing modern software asset management tools (SAM tools) can help with this.
On the one hand, an AI inventory can be used to identify which AI systems pose an unacceptable risk and are therefore prohibited. For the AI inventory, the intended use of each individual system must be briefly outlined. On this basis, it must be checked whether an application falls under one of the prohibited practices listed in Article 5 of the AI Regulation. Such systems must be decommissioned by 2 February 2025 at the latest.
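The inventory step outlined above can be sketched as a minimal data structure (the field names and example entries are illustrative assumptions, not prescribed by the regulation):

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a university's AI inventory (illustrative fields)."""
    name: str
    intended_use: str          # short outline of the purpose, per the process above
    role: str                  # "provider" or "operator"
    prohibited: bool = False   # result of the Article 5 check

def flag_for_decommissioning(inventory):
    """Return the names of systems flagged as prohibited under Art. 5."""
    return [r.name for r in inventory if r.prohibited]

inventory = [
    AISystemRecord("transcription tool", "interview transcription", "operator"),
    AISystemRecord("emotion scanner", "emotion recognition in exams",
                   "operator", prohibited=True),
]
print(flag_for_decommissioning(inventory))  # ['emotion scanner']
```

In practice the `prohibited` flag would be the outcome of a documented legal review of each system against Article 5, not a field set by hand.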


On the other hand, an AI inventory can also be used to analyse training needs and to identify low-threshold, free or low-cost options for teaching employees the general basics of AI (e.g. via collections of materials for different needs, such as information materials, podcasts, courses and videos).

This could fulfil initial minimum requirements for AI-competent employees. On this basis – especially when AI systems in higher risk categories are used – a target-group-specific offering should then be developed step by step, possibly with partners. This process outline is based on the recommendations of the 'KI-Campus', a joint initiative aimed at creating structures for educational innovation at regional, national and European level (see original source: https://www.zki.de/).
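The step-by-step, target-group-specific approach could, for instance, start from a simple mapping of risk category to training measures (purely illustrative; the categories and measures are assumptions, not requirements taken from the AI Act or the KI-Campus recommendations):

```python
# Illustrative principle: the higher the risk of the systems a group of
# employees uses, the deeper the recommended training measures.
TRAINING_BY_RISK = {
    "minimal": ["self-paced basics course", "information materials"],
    "limited": ["basics course", "tool-specific briefing"],
    "high": ["in-depth training with partners", "documented competence check"],
}

def training_plan(risk_level):
    """Return training measures for a risk level; default to the basics."""
    return TRAINING_BY_RISK.get(risk_level, ["self-paced basics course"])

print(training_plan("high"))
# ['in-depth training with partners', 'documented competence check']
```

Such a mapping would be refined per target group (administration, teaching, research) as the inventory and training analysis mature.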

If personal data is processed in an AI system, the GDPR must be observed in addition to the AI Regulation. The processing of personal data in AI systems will regularly mean that a data protection impact assessment must be carried out for the processing activities in which these systems are used. This applies to the use of AI systems in research, teaching and university administration and must be taken into account when planning project timelines.

Translated with DeepL.com


Original publication:

ZKI news on the AI Act entering into force on 2 February 2025:
https://www.zki.de/aktuelles/ki-verordnung-artificial-intelligence-act-ki-kompetenz-ab-2-februar-2025-pflicht/

ZKI publication list:
https://www.zki.de/publikationen/

ZKI recommendations for action on the AI Act:
https://www.zki.de/fileadmin/user_upload/Downloads/ZKI_KI-Verordnung_AI-Act_Handlungsempfehlung.pdf

ZKI information sheet on the AI Regulation:
https://www.zki.de/fileadmin/user_upload/Downloads/ZKI_KI-Verordnung_Anlage-Merkblatt.pdf



Further information:

Source: https://www.digitalaustria.gv.at/Themen/KI/AI-Act.html
Source: Dr. Till et al., 'EU AI Act: Wie wird Deutschland KI-kompetent?'
https://ki-campus.org/blog/ai-act-ki-kompetenzen

Image source:

Kohji Asakawa / Pixabay, 'AI Regulation (Artificial Intelligence Act): AI expertise becomes mandatory'

