Safe and responsible use of AI promoted

Since the "blockbuster" launch of ChatGPT a year ago, both the prospects for the development of artificial intelligence (AI) and concerns about the risks of this technology have become hot topics in the technology world.
The international community is promoting strong cooperation to strengthen the safe and responsible management and use of AI.

Speaking at the recent Developers Conference in San Francisco, California (the US), OpenAI chief executive Sam Altman affirmed: “We will be able to do more, to create more, and to have more. As intelligence is integrated everywhere, we will all have superpowers on demand”.

However, to ensure that AI effectively serves human life while still limiting the risks from this technology, the international community is promoting strong cooperation to strengthen the safe and responsible management and use of AI.

OpenAI, the creator of the ChatGPT tool, is trying to attract developers with lower costs and the ability to easily customize the features of AI "assistants" that help handle all tasks.

OpenAI is developing on-demand “assistants” called “GPTs”, which are capable of handling specific tasks, like laundry advice, business negotiations, homework help and support on technical issues.

In a post on its page, OpenAI stated: “Anyone can easily build their own GPT—no coding is required. You can make them for yourself, just for your company's internal use, or for everyone”.

As planned, OpenAI will launch a "GPT store" later this month, similar to the App Store model, allowing developers to earn revenue based on the number of people using their GPTs.

OpenAI's latest moves are expected to make it easier to create conversational AI interfaces in applications or on websites, opening up options for more companies. This is one of the advances in applying AI to life and shows the great potential of AI.
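As a rough illustration only (not OpenAI's exact interface), the request a developer might send to create one of these task-specific "assistants" could look like the sketch below. The field names and model identifier are assumptions based on OpenAI's publicly documented beta Assistants API from around the time of the conference; the laundry-advice example follows the tasks mentioned above.

```python
import json

# Hypothetical payload for creating a task-specific "assistant", such as
# a laundry-advice helper. Field names ("model", "name", "instructions",
# "tools") are assumptions modelled on OpenAI's beta Assistants API.
payload = {
    "model": "gpt-4-1106-preview",
    "name": "Laundry Advisor",
    "instructions": (
        "You give practical advice on washing, drying, and stain removal. "
        "Ask for the fabric type before recommending settings."
    ),
    # "retrieval" lets the assistant search files the developer uploads.
    "tools": [{"type": "retrieval"}],
}

# A developer would POST this JSON (with an API key) to the assistants
# endpoint; here we only serialise it to show the shape of the request.
body = json.dumps(payload, indent=2)
print(body)
```

The point of the "GPT store" model is that this customisation step, rather than model training, becomes the unit developers build and monetise.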

Tremendous progress in the field of AI is bringing benefits; however, there are increasing concerns surrounding this emerging technology, such as job losses, cyber-attacks, and questions about humans' ability to maintain control over AI-powered systems in the future.

The rapid development of AI could have long-term consequences in all areas (jobs, culture, etc), while its concentration in a few countries and companies could increase geopolitical tensions.

United Nations Secretary-General Antonio Guterres warned that this situation could worsen existing inequality in the world. According to a report published by the Organisation for Economic Cooperation and Development (OECD) on November 6, more and more people are concerned about the ethical risks posed by booming AI applications, yet employers have been paying very little attention to this issue during recruitment.

In most countries, less than 1% of job vacancies mention keywords related to AI ethics. Billionaire Elon Musk, CEO of SpaceX and Tesla, admitted that the rapid development of AI is one of the existential risks facing the world.

Musk warned that AI could grant people's wishes, but there will come a time when "human-like robots", doing everything for humans anytime and anywhere, will make it difficult for people to find meaning in life.

Stressing that the world needs to stay ahead of the AI wave, UN Secretary-General Antonio Guterres called on the world to respond in a unified, sustainable, and global manner to the risks of AI. The recent AI Summit held in the UK issued a joint statement on AI safety. Accordingly, the European Union (EU) and 28 countries participating in the conference agreed to promote new global efforts to ensure the safe use of AI.

The summit agreed that the misuse of highly capable AI, whether intentional or unintentional, can lead to serious risks, even tragedy, in many aspects of life, especially cybersecurity, biotechnology, and fake news.

The statement made clear that any organisation or individual creating AI capabilities with special powers and potential risks must bear a special responsibility for ensuring the safety of these systems. Recognising that AI carries many potentials and risks that even its developers do not fully understand, the countries agreed to continue working together to promote scientific research on AI safety.