AI Management - urgent problem for humanity

United Nations Secretary-General António Guterres has established an Advisory Board tasked with making recommendations on artificial intelligence (AI), a technology with the potential to create breakthrough changes but one that also poses many risks.
Even as humanity enjoys the outstanding advances that AI brings, managing and exploiting this technology responsibly is a major challenge for every country.

Outstanding development and potential risks

Last year, the world witnessed a giant leap in the capabilities of AI, deployed through chatbots, voice cloning and video applications. This groundbreaking technology is concentrated in a handful of companies and countries and is being used effectively in public health, education, and climate change response.

However, the potential harms of AI raise serious concerns about misinformation, discriminatory ideology, invasion of privacy, fraud and many human rights violations.

Scientists from Stanford University (USA) concluded that AI models often lack transparency, with little information disclosed about how these models are built and operated. Their research showed that the open-source language model Llama 2, from Meta Group (owner of Facebook and Instagram), has the highest transparency index, yet meets only 54% of the transparency criteria.

This model is followed by OpenAI's GPT-4 and Google's PaLM 2. According to the researchers, no AI corporation discloses how many users rely on the models it creates. Moreover, most AI companies have not disclosed how much copyrighted material was used to build and operate their models.

The issue of transparency is a top priority in policy-making on AI management in European Union (EU) countries, as well as in the US, UK, China, Canada and many other countries. According to researcher Rishi Bommasani, AI companies need to achieve a much higher transparency index, meeting 80-100% of the criteria.

This index gives policymakers a yardstick for crafting effective policies to manage AI technology: the lower the transparency index, the harder it is to make sound policy, and vice versa. The same holds in other areas of application.

Since the launch of OpenAI's generative AI models, leading scholars and heads of large corporations, such as American billionaire Elon Musk, have warned about the risks of AI, even calling for a six-month halt to the development of powerful AI systems.

Scientist Geoffrey Hinton from the University of Toronto (Canada), one of the pioneers in AI development, warned that this technology poses a more urgent threat to humanity than climate change. Hinton also urged governments around the world to take action to prevent the prospect of machines controlling human society.

Accelerating the development of management tools

Researchers are trying to guide policymakers worldwide on how to establish regulations for managing rapidly advancing AI technology. The EU is leading these efforts and hopes to introduce the world's first law governing AI by the end of 2023.

Meanwhile, seeking to become a world leader in AI technology, the UK will host the first AI Safety Summit on November 1 and 2. The Summit is expected to attract about 100 guests, including US Vice President Kamala Harris and Google DeepMind CEO Demis Hassabis, along with scholars and pioneers in the field of AI.

The Summit will focus on the risk that criminals and terrorists could exploit this advanced technology to create weapons of mass destruction. Its agenda also covers unpredictable advances in the technology, as well as the possibility of AI escaping human control.

Right before the Summit, British Prime Minister Rishi Sunak announced that the country would establish the world's first AI Safety Institute. The Institute will examine, evaluate, and test new types of AI to understand the capabilities of each new model and identify risks across the spectrum, from societal harms such as bias and misinformation to the most extreme dangers.

According to him, the British Government is not rushing to regulate AI; instead, it will build a world-leading capacity to assess and evaluate the safety of AI models within the country.

Yoshua Bengio, one of the people who laid the foundation for the development of AI, said that today's most advanced AI models are too powerful and too influential, and require supervision throughout the development process.

Therefore, governments and businesses need to invest quickly in AI safety, because the field is evolving much faster than current precautions, and only governments can properly assess the risks AI poses to national security. According to leading AI researchers, AI companies and governments should devote at least one-third of their AI research and development budgets to ensuring the safety and proper use of these systems.

In an open letter issued ahead of the AI Safety Summit in the UK, experts proposed measures that governments and businesses can apply to address AI-related risks. Governments should also hold companies legally responsible for preventable and foreseeable harms caused by their AI systems, the experts added.

The AI Advisory Board established by the UN Secretary-General includes about 40 experts in technology, law and personal data protection, who are working in academia, government agencies and the private sector.

Among the experts are Mr. Amandeep Singh Gill, the UN Secretary-General's special envoy on technology; James Manyika, Senior Vice President at Google and Alphabet; Mira Murati, Chief Technology Officer of OpenAI, the developer of the famous ChatGPT chatbot; and the UAE's Minister of AI, Omar al-Olama.

According to the UN Secretary-General, AI can undermine trust in institutions, weaken social cohesion and threaten democracy itself. Faced with these challenges, he called on the AI Advisory Board to race against time to deliver recommendations on managing the use of AI by the end of this year, and to identify both the risks and the opportunities this technology presents.