Responsibility for AI technology development

Faced with growing risks from the explosion of artificial intelligence (AI) technology, lawmakers are urging technology companies to act responsibly and cautiously in AI research and development. Under that pressure, companies have made initial commitments to reduce the threats posed by AI, although turning those words into action will still require significant effort and time.

Creating an AI management mechanism was the main topic of a recent meeting between US officials and tech giants held on Capitol Hill. US President Joe Biden has said businesses have a responsibility to ensure their AI products are safe before making them public.

The meeting brought together leaders of many large technology companies, such as Microsoft, Meta, Alphabet and IBM.

Controlling AI has become all the more important as the risks posed by this emerging technology increase. Lawmakers face the difficult problem of containing the risks of the new technology while still encouraging the spirit of innovation and creativity.

Billionaire Elon Musk, who owns the social networking platform X, called artificial intelligence a double-edged sword and said a regulator was needed to ensure the safe use of AI.

Several of the biggest tech companies, including Google, OpenAI, Microsoft, Adobe, IBM and Nvidia, have signed voluntary commitments aimed at ensuring that the power of AI is not used for destructive purposes.

It is undeniable that AI technology is becoming increasingly prevalent in everyday life, bringing outstanding innovations in fields such as healthcare and education. For technology companies, AI products are also helping revenue and profits skyrocket.

Unrelenting demand for AI accelerators fueled an 843 per cent year-over-year surge in profit for Nvidia in the three months to July 30.

In that quarter, the second of its fiscal year 2024, the GPU giant recorded 6.2 billion USD in net income on revenue of 13.5 billion USD, double that of the year-ago quarter.

Alphabet, Google's parent company, announced first-quarter 2023 revenue and net profit that exceeded analysts' forecasts, showing that the "giant" is gradually regaining its strong position in the market thanks to AI.

However, besides benefiting from AI products, technology companies must also carefully evaluate, screen and test those products before bringing them to the public.

A series of AI products has been launched in quick succession recently, reflecting the fierce race among the "technology giants". The public is increasingly aware that the intelligence and outstanding features of these products come with many potential risks in real life.

Microsoft President Brad Smith recently stated that AI has the potential to become a useful tool, but also risks becoming a weapon against humanity if it moves beyond human control.

The United Nations Educational, Scientific and Cultural Organisation (UNESCO) has warned about the dangers that AI products pose to children.

Emphasizing that managing the AI field is an urgent requirement of the new era, European Commission (EC) President Ursula von der Leyen recently called for a global panel of experts and technology company representatives to assess the risks and benefits of AI.

Earlier, the EU announced 4 million euros in funding to organise events and advise governments on AI issues. A global summit on AI safety will be hosted by the UK in early November in Milton Keynes.

Experts say that establishing general regulations for AI is a complex process that needs to be carefully considered to avoid hindering the development of science and technology.

The task of developing and implementing laws to manage AI, ensuring this technology is used safely, humanely and effectively to serve human life, requires the contributions of leaders, scientific researchers and, especially, technology businesses.