With the adoption of the resolution on AI, the UN General Assembly highlighted the importance of respecting, protecting and promoting human rights in the design, development and use of AI. The General Assembly calls on all UN member states and relevant parties to refrain from, or end, the use of AI systems that are inconsistent with international human rights law or that pose undue risks to the enjoyment of human rights.
The UN also urged countries, the private sector, research organisations and the media to develop and support methods to govern and manage the use of AI in a safe, secure and reliable manner. The resolution asks the parties to cooperate and support developing countries so that they can have comprehensive and equitable access, narrowing the gap and improving digital literacy.
Recognising the potential of AI in promoting and realising the 17 Sustainable Development Goals (SDGs), the resolution also points out the downsides of AI without appropriate governance mechanisms: the risk of its being abused for malicious purposes, eroding the integrity of information, hindering sustainable socio-economic and environmental development, and widening inequality.
Urgent issue
This is the first official document of the UN General Assembly to regulate activities related to AI, at a time when the world faces both opportunities and responsibilities in managing the technology rather than letting it dominate humans. This increasingly urgent issue requires countries to adopt appropriate thinking and strategies for developing and applying AI proactively, responsibly, sustainably and fairly, bringing benefits to everyone.
Through sustained research and development, AI has made great strides and is being applied ever more widely in socio-economic life. Notably, with the appearance of the ChatGPT tool, AI technology has continued to develop strongly. According to statistics from the Organisation for Economic Co-operation and Development (OECD), AI could boost global GDP by 14% by 2030 and increase labour productivity by 40%. Generative AI alone could contribute 4.4 trillion USD and help cut working time by 60-70%.
AI changes the relationships among actors in society and the way people live and work, posing new challenges in governance and in building legal frameworks to regulate new issues and relations. One of the main challenges is determining responsibility and legal status in AI applications, especially in the event of errors or disputes involving human life or intellectual property rights. AI strongly supports social management, but it also raises concerns about personal freedom and the abuse of control.
The ability to use AI to create fake, intentionally biased and harmful content raises the risk of public opinion being manipulated for political purposes, eroding trust in the news over the long run. “AI out of control” is a problem that most governments worry about. There are also concerns that AI could, in the distant future, develop intelligence similar to that of humans.
Global action
Building AI governance frameworks at the national and global levels faces many challenges, because lawmaking cannot keep pace with the development of technology in general and of AI in particular. Many countries, including the US and European Union (EU) members, are trying to limit the risks of this technology. China has finalised the first rules governing generative AI, and the EU has officially passed its law on AI control.
Stating that it will serve as a model for global action on AI, the US Government recently announced specific safeguards for government agencies applying AI technology. US Vice President Kamala Harris stressed that all US federal agencies must transparently publish the list of AI systems they use, along with risk management measures for this technology, such as monitoring, evaluating and examining AI's impact on the public and reducing the risk of algorithmic discrimination.
In addition, all US federal agencies must appoint a “chief AI officer” with the expertise to ensure that the technology is used responsibly. December 1 is the deadline for federal agencies to apply the above policy on AI use. The White House also plans to hire 100 AI experts to promote the safe use of the technology at the federal level.
The UN General Assembly's adoption of its first resolution on AI is a landmark and timely milestone of great significance for the world. AI brings major benefits to socio-economic development and human life, but some AI applications create risks that can harm individuals, businesses and society at national, regional and global scales. This reality requires the international community, especially developed countries, to show high responsibility in developing AI technology in a safe, fair and sustainable manner.
Faced with the formidable task of concretising the new resolution passed by the UN General Assembly, countries need to continue working together to build legal frameworks and develop technology to govern AI. Countries with established platforms, infrastructure and investment strategies will outpace those only beginning to adopt AI. It is therefore crucial to have mechanisms supporting the application and transfer of technology from developed to developing countries, so that the expectation of safe, reliable and sustainable AI, serving the goal of sustainable development and leaving no one behind, can be achieved.