Legal framework towards safe AI development

Artificial intelligence (AI) technology is developing rapidly, posing difficult problems in seizing its opportunities while minimising its risks. With the AI Act recently passed by the European Parliament (EP), European Union (EU) member states are one step closer to completing the first legal framework for this advanced technology.
Illustrative image. (Source: linkedin.com)

At the EP plenary session held in the French city of Strasbourg, the Artificial Intelligence Act proposed by the European Commission (EC), also known as the AI Act, was approved with strong support from 523 parliamentarians. As one of the lawmakers pushing for this act, Brando Benifei emphasised that it is the first regulation in the world to outline a clear roadmap towards safe and human-centred AI development.

The act passed by the EP sets regulations for AI systems based on four levels of risk: low, limited, high and unacceptable. AI models with higher levels of risk must comply with stricter regulations. The act bans AI models that pose a threat to the rights of citizens of EU member states, such as social scoring systems, applications capable of manipulating user perceptions and behaviour, or those collecting images to create facial recognition databases. The use of biometric identification systems will also be prohibited, except in certain specific cases.

To be licensed, high-risk AI systems are obliged to comply with regulations, such as undergoing assessment before being released onto the EU market and taking steps to reduce negative impacts. Ensuring transparency is also a requirement emphasised in the act. Non-compliance can result in fines ranging from 7.5 million EUR or 1.5% of turnover up to 35 million EUR or 7% of global turnover, depending on the violation and the size of the company.

The EP's passage of the AI Act immediately drew responses from technology companies. An Amazon spokesperson affirmed that the company is committed to working with the EU to support safe, secure and responsible AI development. Meanwhile, Meta's head of EU affairs stressed that the act must not lose sight of AI's potential to promote European innovation.

According to the trade organisation Digital Europe, only 3% of the world's "AI unicorns" are currently based in the EU, while the global AI market is forecast to reach about 1.5 trillion USD by 2030. Digital Europe therefore believes greater efforts are needed to ensure European companies can effectively tap this potential market.

The next step in the process of ratifying the AI Act falls to the Council of the EU. This stage is not expected to face many difficulties, because the EP and the Council reached a political agreement on the provisions of the act in December 2023, and EU member countries cleared the remaining obstacles last February.

Even though it was approved by the EP, the AI Act still drew 46 votes against and 49 abstentions, showing that many European lawmakers were not ready to ratify it. The Confederation of European Business also expressed concern about how the regulations would be interpreted and implemented in practice, and some legal experts worry that the law will soon become outdated given the rapid development of AI technology.

The goal of the AI Act is to protect the rights of EU citizens against the risks posed by this technology while promoting innovation and putting Europe at the forefront of AI, a field of top concern today. The EU expects that once in effect, the AI Act will help set global standards, serving as a foundation for policymaking and for building a common approach to AI governance worldwide.