The Global Conference on AI, Security and Ethics, organised by the United Nations Institute for Disarmament Research (UNIDIR), served as a forum for leaders and experts to discuss in depth measures for governing AI and ensuring the technology is not misused in ways that harm global peace and stability.
Recently, many organisations and technology corporations have reaped substantial rewards from heavy investment in AI. Google’s parent company Alphabet posted revenue of 96.5 billion USD in the fourth quarter of 2024, up 12% over the same period in 2023. Revenue from Google Cloud services rose sharply, driven by momentum in core Google Cloud platform products, AI, and generative AI solutions.
AI is permeating every aspect of social life, including business, education, healthcare, and agriculture. The technology can also serve as an effective tool for promoting peace. According to experts, by analysing data on past conflicts, AI can help detect early warning signs of conflict, helping to avert the risk of war before it escalates.
Thanks to AI, models for predicting natural disasters and floods have also become more accurate. In the military field, deploying AI-powered robots on dangerous missions such as mine clearance and ordnance disposal can reduce risks to soldiers and communities while increasing operational efficiency.
However, the AI boom has prompted governments in many countries to scramble for ways to regulate the technology. UN Secretary-General Antonio Guterres has expressed concern that recent conflicts are becoming testing grounds for military AI applications. The recent wave of violence in Syria’s coastal provinces of Latakia and Tartus, for example, saw bad actors exploiting AI to inflame the situation.
According to researchers at the fact-checking organisation Verify-Sy, Syria has been engulfed by a storm of misinformation. Verify-Sy noted increasing use of AI to manipulate footage and alter voices, creating false, violence-inciting content that spreads across cyberspace.
Recent wars in Ukraine and the Middle East show that AI has made weapon systems intelligent enough to autonomously search for, track, and destroy targets without human involvement.
Anticipating a future full of dangers if the development and application of AI are not strictly controlled, the Summit on Responsible Artificial Intelligence in the Military Domain was held in Seoul, the Republic of Korea, in September 2024 to discuss the risks. The summit adopted an action plan setting out key principles for the international community to move towards responsible use of AI for global benefit and security.
At the UN Security Council’s special session on AI in December 2024, participants committed to establishing a global governance mechanism for AI, including forming an international scientific council and expanding dialogue on AI governance.
Promoting the central role of humans in building and applying AI is an urgent requirement to ensure the technology serves the goals of peace and prosperity rather than being turned into lethal weapons, as UN Secretary-General Antonio Guterres stated: “Humanity’s hand created AI. Humanity’s hand must guide it forward.”