Safe management and use of AI, creating trust in AI

The world's largest technology exhibition, CES 2024, is about to take place in Las Vegas (the US), promising to witness breakthroughs in artificial intelligence (AI). In the context of AI technology developing rapidly and being expected to contribute positively to social life, the safe and effective management and use of AI is an issue of particular concern for countries.
Illustrative image/Photo: OpenAI.

About 3,500 companies from 150 countries are expected to participate in CES 2024 from January 9 to 12. With 550 businesses showcasing leading technology products, from home appliances, wearables, and mobile technology to self-driving cars, the Republic of Korea (RoK) is the third-largest exhibitor at CES 2024, after the US and China.

For the RoK, the event is an opportunity for Samsung Electronics, the world's largest manufacturer of mobile phones and memory chips, to announce its AI vision on a global scale for the first time, and for LG Electronics to introduce new AI technologies. Korean experts said the global technology event reflects the inevitable trend of developing and applying advanced technologies.

Accordingly, "AI assistants" have begun to be widely deployed, helping to improve workers' skills. In the coming years, this technology could take over entirely from humans in tasks such as restructuring the tax system and redistributing income sources.

Facing the rapid development of AI, organisations and businesses around the world are prioritising investment in AI and applying it to their operations, even though AI also carries technical, social, ethical and security risks. According to the latest survey by Ernst & Young (EY), 70% of CEOs in the Asia-Pacific region see AI as a driving force for efficiency and innovation. By 2027, Asia-Pacific businesses are expected to invest about 78.4 billion USD in AI per year.

However, AI also poses many challenges, and experts say the effective management of AI will promote growth and competitive advantage. Governments are establishing clear AI regulatory frameworks. Some governments have signed up to high-level voluntary principles, such as the Organisation for Economic Co-operation and Development (OECD) AI Principles. The AI Safety Summit in November 2023, attended by representatives of governments and businesses from 28 countries, issued a landmark statement pledging cooperation to ensure AI is used in a "human-centred, trustworthy and responsible" manner.

The European Union (EU) has also agreed on the AI Act, which is expected to take effect from 2026 and includes far-reaching measures to protect people. In the US, President Joe Biden's recent executive order set the stage for new federal standards on AI safety, security and reliability. China is drafting a comprehensive AI law, while ASEAN hopes to soon complete the ASEAN Guidelines on AI Governance and Ethics.

Businesses said more efforts are needed to address risks ranging from cyber attacks to misinformation and deepfakes.

According to EY's survey results, businesses investing in AI recognise that managing issues related to accuracy, ethics and privacy will require significant changes in their governance activities.

Experts also said businesses can benefit from the AI-related policy and regulatory frameworks that countries are working to create. Such frameworks not only protect consumers but are also vital to building trust in AI, expanding its applications and unlocking the potential of this advanced technology.