Efforts to manage AI safely and limit potential risks

In 2023, many countries and organisations made efforts to develop regulations on the safe and responsible development and use of artificial intelligence (AI) technology, as well as to limit its potential risks. These are important steps, laying a solid foundation for mastering the technology so that it can truly become an effective tool to support people.

The latest survey results from Ernst & Young show that 70% of executives in the Asia-Pacific region consider AI a driving force for efficiency and innovation. Aware of AI's downsides, these businesses acknowledge that more effort is needed to address risks ranging from cyber-attacks to misinformation and fraud.

The German Government has recently announced that it will invest 1.6 billion euros in AI research, development and application by 2025, while Singapore plans to triple its AI workforce to 15,000 people.

The year 2023 witnessed the rapid spread of AI technology worldwide. AI governance also became the topic of a series of international and regional conferences throughout the year. Dictionary publisher Collins (UK) chose "AI" as its word of the year for 2023.

Undeniably, AI is increasingly facilitating daily life and changing the world with its outstanding capabilities. These advanced products have enormous potential to spur economic development and scientific progress and to benefit society, but they also pose risks to global safety if they are not developed responsibly.

AI is sparking a new, difficult and complicated battle between cybersecurity defenders and cybercriminals. Observers warn that elections taking place in many countries in 2024 will face challenges from fake content and false information created by AI. In one survey, 58% of Americans polled said that AI will increase the spread of false information in the country's 2024 election.

Faced with this situation, a series of countries, organisations and businesses are taking urgent steps to form a framework for AI governance. European Union (EU) member states and parliamentarians have recently reached an agreement on the first comprehensive regulation to govern AI.

EU Internal Market Commissioner Thierry Breton affirmed that the law is the foundation for EU startups and researchers to lead the global race for AI development.

In October 2023, US President Joe Biden issued an executive order on safety standards for AI. The Group of Seven (G7) published the first comprehensive set of international rules on advanced AI systems, providing recommendations for developers and users to minimise the risks posed by this technology.

Despite benefiting from AI products, a series of technology companies have also joined hands to promote the development of AI governance rules. Whereas in the past many companies opposed regulations they believed could stifle technological development, they now continually warn about the dangers of AI if it is not placed under strict management.

OpenAI has just published its latest guidelines for assessing the "catastrophic risks" of AI models under development. Two large corporations, Meta and IBM, along with about 50 companies and research organisations, established the AI Alliance to ensure an open approach to cooperation in developing this technology. One of the alliance's important mechanisms is the establishment of a Technical Supervision Committee, consisting of research experts from large technology corporations.

United Nations Secretary-General Antonio Guterres emphasised that the world needs to stay ahead of the AI wave and called on countries to respond to the risks of AI in a unified and sustainable manner. Although much work remains, recent AI governance efforts have laid a foundation for a safe future in the application of this technology, helping to balance innovation and creativity with adherence to ethical standards in AI development.