In a recent meeting held in Japan, representatives of G7 countries agreed on a draft of 11 guiding principles for AI development. The G7 also aims to compile an international code of conduct for AI tool developers and guidance for AI service providers and users.
This is considered progress by the G7 in its efforts to build safer, more secure and more reliable AI systems, amid growing warnings about the potential risks of the technology.
The rapid development and widespread adoption of AI pose a difficult problem for lawmakers and the technology sector: how to control the risks of AI effectively without hindering technological development or stifling innovation.
This November, the UK will host the world’s first AI safety summit, with about 100 guests expected to attend. The event will focus on the risk of terrorists exploiting this advanced technology to create weapons of mass destruction.
Analysts say the summit will help cement London’s position as one of the world’s leading technology centres.
Technology companies themselves, despite benefiting greatly from AI products, are also sounding the alarm about serious consequences if the technology develops in the wrong direction or is used for unethical purposes.
The Chairman of Microsoft affirmed that AI has the potential to become a useful tool, but risks becoming a weapon against humanity if it slips beyond human control. He also stressed that countries need to encourage technology businesses to do the right thing, as well as to create regulations and policies that ensure safety in all situations.
Amid the fierce race among large technology companies (Big Tech) into the promising field of AI, Andreas Mundt, President of Germany’s competition authority, warned that AI technology may increase the market power of Big Tech, and called on regulators to be wary of attempts by any company to abuse its market power.
The benefits AI brings are undeniable; however, the line between beneficial use and abuse of the technology for improper purposes is very thin.
Results of a survey by the London School of Economics and Political Science show that AI brings both benefits and risks to journalism. Up to 85% of respondents have used generative AI platforms, typically ChatGPT or Google Bard, to write news summaries and headlines.
However, 60% of respondents expressed concern about the ethical risks AI poses to journalistic values, including accuracy, fairness, and transparency.
Earlier research by Sensity AI showed that about 96% of deepfake videos circulating on the internet contained harmful content, and that most of the victims were women.
Strengthening AI governance must go hand in hand with encouraging investment to maximise the technology’s potential, such as investment in building AI research and development centres.
Only if it is properly controlled will this advanced technology promote socio-economic development and serve as a powerful tool in support of human life.