
EU Starts Getting Tough on Naughty AI

The EU has announced agreement on a proposal for harmonised rules on Artificial Intelligence, known as the AI Act. The legislation is due for implementation in late 2025 or early 2026.

The aim behind the act is to make sure that any AI systems used in Europe are ‘safe and respect fundamental rights and EU values’, according to press officer Dimitris Mamonas. This landmark proposal also aims to stimulate investment and innovation in AI in Europe.

This is the first legislation of its kind in the world to date, but it may pave the way for similar legislation elsewhere, as we saw with GDPR for data protection. Because the act applies to any AI used within Europe, global companies and those wanting to work within the EU will need to comply with its strictures. As a result it is likely to have a potent effect on the development and deployment of AI systems globally.

The principle behind the regulation is to limit the amount of harm any AI system might cause: the greater the risk of harm, the stricter the rules applying to it. This means that any AI system deemed high-risk would, for example, be subject to a “fundamental rights impact assessment” before being made commercially available. The companies building or using such systems will have to show clear compliance with the law, from datasets through training and programming, as well as documenting the methods used for oversight. High-risk AI will need human oversight in its development and deployment. Public bodies which use high-risk systems may also have to record this in a publicly available register.

Overall, the basic approach seems to be transparency and clarity over what kinds of AI are being employed and when (for example, when AI-generated images are being used), with additional safeguards overlaid on top in higher-risk cases.

There are some forms of AI which are considered too high-risk, however, and which will be banned from the EU under this legislation. These include:

  • Cognitive behavioural manipulation.
  • Untargeted scraping of facial images from the internet or CCTV footage.
  • Emotion recognition in the workplace and educational institutions.
  • Social scoring.
  • Inferring sensitive data, such as sexual orientation, from biometric data.
  • Some cases of predictive policing.

The act takes particular note of AI systems designed to read emotions, requiring that even in ‘acceptable’ uses of the AI, end-users must be made aware of it.

The act does not apply to systems being used solely for military or research purposes, or to people using AI for ‘non-professional reasons’. Law enforcement bodies have had some of the transparency requirements removed to respect the need for confidentiality in some of their operations. They are also allowed certain cases of real-time biometric identification; the agreement is quite prescriptive about where and when such uses are permissible, but it opens up questions about the degree to which this can be monitored or managed in practice.

General purpose AI systems and foundation models have been given specific treatment in the act, both in themselves and where they are built into other high-risk AI systems. The provisional agreement says that foundation models have to meet transparency obligations, with a stricter regime for ‘high impact’ foundation models: those “trained with large amount of data and with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain.” Ironically, this may be a reason for some of the leaders in general purpose AI to scale back the bravado and hype around their systems.

General purpose AI is not a straightforward system or service, however, and the agreement also sets up an “AI Office” to oversee and enforce rules in member states. It will be supported by an independent expert panel which can help develop evaluation methods for foundation models, advise on how to recognise high-impact ones, and identify risks to models (for example, data poisoning).

In addition to this group, we will see an ‘AI Board’ with representatives from member states, alongside an advisory forum of stakeholders adding their own expertise. Those stakeholders could include “industry representatives, SMEs, start-ups, civil society, and academia.”

Regulators will be able to put in place regulatory sandboxes to allow for testing and validating AI systems in real-world conditions. This is designed to support a more objective and fact-based way to make regulatory decisions in this fast-paced environment.

Resistance Is Useless

As with GDPR, the fines for violating the act are going to be quite significant for major enterprises. There are minimum penalties but no maximums, and they are structured as percentages not of annual profit but of turnover.

  • €35 million or 7% of turnover for using banned AI applications, whichever is higher.
  • €7.5 million or 1.5% of turnover for the supply of incorrect information, whichever is higher.
  • €15 million or 3% of turnover for violations of other obligations, whichever is higher.

These fines will not apply at the same level to SMEs, which will instead face lower, proportionate caps, presumably on the basis that the EU would rather not destroy an SME for submitting faulty information by mistake. Moreover, “to alleviate the administrative burden for smaller companies, the provisional agreement includes a list of actions to be undertaken to support such operators.”
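To make the “whichever is higher” structure concrete, here is a minimal sketch of how such a fine would be calculated. The fixed floors and percentages come from the list above; the function name and the example turnover figure are purely illustrative and not part of the act.

```python
# Illustrative sketch of the "whichever is higher" fine structure described above.
# Thresholds are taken from the provisional agreement; the example company is hypothetical.

def ai_act_fine(annual_turnover_eur: float, floor_eur: float, pct_of_turnover: float) -> float:
    """Return the greater of the fixed floor and the percentage of turnover."""
    return max(floor_eur, annual_turnover_eur * pct_of_turnover)

# Hypothetical company with EUR 2 billion annual turnover using a banned AI application:
# 7% of EUR 2bn = EUR 140m, which exceeds the EUR 35m floor, so the percentage applies.
fine = ai_act_fine(2_000_000_000, 35_000_000, 0.07)
print(f"{fine:,.0f}")  # 140,000,000
```

For a smaller firm, the turnover percentage could fall below the floor, in which case the fixed amount would apply instead; that is why the floors effectively act as minimum penalties.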

Overall, for high-impact AI and the list of forbidden activities, this is going to be a watershed moment and one which causes a good deal of disruption. It also means that companies can’t avoid taking responsibility for the decisions or activity of their AI.

Picture generated by Craiyon AI
