
Historic Milestone: Agreement on the world's first set of rules for Artificial Intelligence

12/16/2023

Author

Michael Froner

Attorney at Law

In December 2023, the Council of the EU and the European Parliament, with the Commission mediating, reached a historic milestone in the trilogue: negotiators agreed on a provisional compromise on the world's first set of rules for Artificial Intelligence ("AI").

Background

The Artificial Intelligence Act ("AI Act") was initiated in 2021 to make a significant contribution to promoting the development and uptake of safe and trustworthy AI systems throughout the EU single market. The regulatory framework is intended not only to enable technological progress, but also to ensure that it remains in line with fundamental rights and EU values.

Central to the negotiations were the conflicts and compromises surrounding civil rights, prohibited applications, and the development and use of AI in Europe more generally.
Compared with the Commission's original 2021 draft, the negotiated compromise is considerably more open to innovation and imposes significantly lighter bureaucratic burdens on European companies, both AI developers and their users.

The next step is to finalise the text of the regulation; as ever, the devil is in the detail. The regulation is expected to enter into force in the first half of 2024.

Risk-based Approach

The AI Act pursues a risk-based approach: low-risk AI systems are subject only to light transparency obligations, while high-risk AI systems must meet stricter requirements and obligations before they can be placed on the market.

High-risk AI systems may only be authorised under specific conditions and subject to safety precautions. In addition, certain uses of AI are deemed unacceptable and are therefore banned outright. These include cognitive behavioural manipulation, emotion recognition in the workplace and in educational institutions, social scoring, biometric categorisation to infer sensitive data, and specific cases of predictive policing.

Use by Law Enforcement Authorities

The use of AI systems by law enforcement authorities is to be made possible under certain conditions.

One controversial topic in the run-up to the conference was facial recognition in public spaces with the help of AI. Concerns about data protection and civil rights centre in particular on the possibility of comprehensive surveillance and the potential misuse of the data obtained. Privacy advocates argue that facial recognition and biometric surveillance technologies could pose a significant threat to privacy, especially if they are deployed without sufficient legal controls. The protection of civil rights therefore requires a careful balance to be struck between the need for security and the right to privacy and individual freedoms.

Facial recognition in public spaces using AI should now be permitted in principle. However, the concerns have been taken into account insofar as protective measures have been implemented to prevent possible misuse. Its use should only be permitted with judicial authorisation and be limited to certain criminal offences.

Foundation Models

The AI Act also addresses the challenges posed by general-purpose AI systems. Special provisions apply to so-called foundation models: large AI systems that can perform a wide range of tasks and serve as the basis for numerous applications.

Certain transparency obligations must be fulfilled before foundation models are placed on the market, with stricter rules applying to particularly powerful foundation models that pose systemic risks. To be categorised as systemically risky, a foundation model must exceed a comparatively high computing-power threshold. Open-source models are exempt from the stricter rules.
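By way of illustration only, the threshold logic described above can be sketched as a simple classification. The figure of 10^25 floating-point operations used in training reflects widely reported accounts of the provisional agreement, not the final legal text, and the names below are our own:

```python
# Illustrative sketch only. The 10**25 FLOP training-compute threshold
# reflects widely reported figures for the provisional agreement; the
# binding criteria will be those set out in the final regulation text.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Presume systemic risk once training compute exceeds the threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))  # True  (above the threshold)
print(presumed_systemic_risk(1e24))  # False (below the threshold)
```

The sketch deliberately ignores the other criteria and exemptions (such as the open-source carve-out mentioned above), which depend on the final wording.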

Providers of such models must examine their training data for possible biases before selling the models to third-party providers. In particular, this is intended to prevent discrimination against people who are underrepresented in the training data, which could otherwise lead to a higher error rate in the AI system.

In future, all providers of foundation models must disclose the data used to train their AI models, although trade secrets are exempt from this disclosure obligation. This is increasingly relevant in light of growing concerns among creators that their works may be used by AI providers without the necessary licences.

New Administrative Structure

To apply the new rules for AI models in a harmonised way across the EU, a dedicated AI Office will be created within the Commission. The AI Office is tasked with overseeing the most advanced AI models, contributing to the development of standards and testing procedures, and ensuring that the harmonised rules are enforced consistently in all member states.

To strengthen its decision-making, the AI Office will draw on the insights of a scientific panel of independent experts. The panel's role extends to advising the AI Office, including contributing to methodologies and guidelines for assessing and identifying high-impact foundation models and monitoring potential safety risks. This collaborative approach aims to ensure the responsible development of general-purpose AI technologies across the EU.

Sanctions

Penalties are provided for violations of the AI Act, with fines tiered according to the type of infringement. The following tiers are currently defined:

  • The use of prohibited AI applications carries fines of up to EUR 35 million or 7% of annual turnover.
  • Violations of other obligations under the AI Act can lead to fines of up to EUR 15 million or 3% of annual turnover.
  • The provision of false information carries fines of up to EUR 7.5 million or 1.5% of annual turnover.

Particular attention has been paid to setting appropriate maximum limits for small and medium-sized enterprises ("SMEs") and start-ups in order to reduce the financial burden for these companies.
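For illustration, the tiered maximum fines above can be expressed as a simple lookup. The assumption that the higher of the fixed amount and the turnover percentage applies reflects widely reported accounts of the agreement rather than the article above, and the tier labels and function name are our own:

```python
# Illustrative sketch of the tiered fines described above.
# Assumption: the applicable maximum is the HIGHER of the fixed amount
# and the percentage of annual turnover, as widely reported for the
# provisional agreement (different caps apply to SMEs and start-ups).
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # banned AI applications
    "other_obligation":    (15_000_000, 0.03),   # other AI Act obligations
    "false_information":   (7_500_000,  0.015),  # supplying false information
}

def max_fine(offence: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given offence tier."""
    fixed, pct = FINE_TIERS[offence]
    return max(fixed, pct * annual_turnover_eur)

# Example: EUR 1 billion turnover, prohibited practice -> 7% exceeds EUR 35m
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For smaller turnovers the fixed amount dominates; the special SME and start-up caps mentioned above are not modelled here.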

Outlook

In the coming weeks, the technical details will be finalised before the final text of the AI Act is published. The AI Act, most of whose provisions are expected to apply after a transitional period of two years, not only marks a historic milestone but also offers a glimpse into the future of artificial intelligence in Europe.

With this groundbreaking set of rules, significant changes and challenges lie ahead. The industry will have to adapt to meet the new requirements, particularly with regard to the transparency obligations for foundation models and the stricter rules for systemically risky AI systems.

The disclosure obligations for providers of foundation models could, for example, lead to improved cooperation and trust-building between companies and consumers. At the same time, challenges relating to data protection and civil rights could be discussed and addressed more intensively.

The establishment of the AI Office and a scientific panel illustrates the ambition not only to regulate, but also to actively promote standards and innovation in the AI industry. Close cooperation between member states, industry and civil society promises a broad perspective on the implementation of the AI Act and a continuous dialogue on its impact.

Overall, the introduction of the AI Act is likely to herald a significant phase for AI development in Europe. While the exact impact will only become apparent in the coming years, this regulation undoubtedly marks a turning point in the way AI technologies are developed, implemented and monitored.
