Finding balance in the development of artificial intelligence

With its great development potential, artificial intelligence (AI) has become a magnet for huge investment, not only from enterprises but also from many countries. However, alongside research and application efforts, countries have also moved quickly to set up “safety barriers” to control the potential dangers of this advanced technology.
Illustrative image. (Source: Barco/Vietnam+)

Along with the rapid rise of AI in recent times, countries are increasingly focusing on balancing risk management with promoting innovation in this field. Recently, the European Parliament’s Internal Market Committee and Civil Liberties Committee voted to approve the EU’s AI Act, marking a new stride in efforts to build a legal basis for controlling AI development.

The act requires AI developers to assess safety and provide data-management and risk-mitigation measures before bringing products to market. For content created with generative AI foundation models such as ChatGPT, providers must clearly disclose that it is machine-generated. In addition, the act bans the use of real-time facial recognition systems in publicly accessible spaces, as well as the use of AI systems to predict criminal behaviour. The act classifies AI tools into four groups based on the level of risk, from low to unacceptable, with high-risk applications facing stricter regulations.

The EP affirmed that the act aims to ensure AI systems are supervised, safe, transparent, non-discriminatory, and environmentally friendly. Analysts call it an important step forward, paving the way for the world’s first comprehensive legal document on AI. The EP is expected to vote on the act in mid-June, before entering negotiations on the final text with the European Commission (EC) and member governments.

Beyond the EU, the rapid rise of AI has also forced many other governments to act to limit potential risks while trying to keep pace with this advanced technology. The White House recently announced it will invest 140 million USD to set up seven research centres and publish new guidelines on the use of AI, one of the efforts by President Joe Biden’s administration to prevent AI-related security risks.

Meanwhile, the UK Competition and Markets Authority announced it has begun assessing the impact of AI on consumers, businesses, and the economy, and is examining the possibility of new measures to control tools such as ChatGPT, developed by OpenAI. In Japan, a Strategic Conference on AI was held with the goal of developing AI while addressing the problems that arise from its use.

Many analysts say the benefits of AI are undeniable, but the frenzy around the technology also raises concerns that its rapid development may lead to unpredictable consequences.

According to Meta, the company that owns the social network Facebook, hackers are exploiting the craze for AI applications such as ChatGPT to trick users into downloading malicious code. Meta’s chief information security officer described this as a wave of malicious attacks, and the company has detected and blocked more than 1,000 sites advertised as offering AI tools similar to ChatGPT.

Experts also warn about deepfakes – AI-generated fake images and voices that criminals use to appropriate property or spread false information. Beyond fabricating political news or taking personal revenge, criminals use deepfakes for fraud and blackmail, a sophisticated trick that worries many internet users. Last year, total losses from deepfake scams worldwide were estimated at tens of millions of US dollars.

Experts estimate that the AI market was worth 207.9 billion USD in early 2023 and may exceed 1 trillion USD by 2028, making it increasingly urgent to build a legal corridor to control and prevent the potential risks of AI. The United Nations has affirmed that AI technologies need to be closely monitored and that, no matter how developed the technology becomes, human rights must always remain a central factor.

NDO/ TIEN DUNG