Ensuring safe and sustainable technological development

Amid rapid digital technological development, online fraud is becoming increasingly sophisticated, particularly with the emergence of deepfake technology, which is being exploited to impersonate relatives, organisations, and businesses in order to misappropriate assets. This reality requires strengthening the management of digital platforms, improving legal frameworks, and enhancing public awareness and vigilance to proactively prevent risks in cyberspace.

The opening ceremony for the signing of the United Nations Convention against Cybercrime. (Photo: THUY NGUYEN)

According to the National Cybersecurity Association's 2025 cybersecurity survey of individual users, the number of online fraud victims in 2025 decreased significantly compared with 2024: about one in every 555 people became a victim (0.18%), down from 0.45% in 2024. This is a positive signal, reflecting the joint efforts of authorities, technology companies, professional organisations, and the press in raising community awareness.

However, fraudulent activity has not disappeared; it is evolving into more sophisticated forms, notably the misuse of deepfake technology to fake images, voices, and identities, making it much harder for users to distinguish between real and fake.

According to Vu Duy Hien, Deputy Secretary-General and Chief of Office of the National Cybersecurity Association, the overall picture of online fraud shows encouraging improvements, but new challenges continue to emerge. At present, images, voices, and videos can no longer be considered reliable means of identity verification. Deepfake scams are often paired with time-pressure scenarios designed to force users to make quick decisions and bypass necessary verification steps.

Protecting personal data is therefore critically important, as deepfake technology becomes truly effective only when fuelled by real data. Careless sharing of images, voices, and personal information on social networks or unverified platforms may inadvertently facilitate sophisticated impersonation.

Deepfake is becoming one of the most serious challenges of the artificial intelligence (AI) era, as the boundary between real and fake grows increasingly blurred. “What you see and hear may not be real. From images and voices to videos, today’s AI tools can generate highly realistic fake content that is easily accessible and difficult to distinguish with the naked eye,” noted Vu Duy Hung, an expert at Hung AI Creative.

Beyond the risks of online fraud, managing AI-generated content is emerging as a new, complex, and sensitive challenge. In practice, some AI chatbots are capable of creating and disseminating deepfake content of a sensitive nature, seriously violating human rights, dignity, and privacy in cyberspace. In response, there is a pressing need to establish effective management mechanisms to control risks while ensuring a transparent, responsible, and sustainable environment for AI development.

Identifying the causes of this situation, Assoc. Prof. Dr. Tran Thanh Nam, Vice Rector of the University of Education under Viet Nam National University, Ha Noi, and an expert at the Viet Nam–France Psychology Institute, pointed out that living in an information-overloaded world makes young people particularly vulnerable to online scams. The speed of information flow and the fear of missing out (FOMO) often lead to poor emotional control and insufficient risk assessment. Herd mentality, blind trust in fabricated evidence on social networks, a lack of critical thinking and digital financial literacy, prioritisation of speed over verification, and the desire for recognition all contribute to young people falling into fraudulent traps.

As AI continues to develop at a rapid pace, equipping the public with appropriate knowledge, skills, and attitudes is considered the most important “shield”. Experts advise users to exercise extreme caution with requests for money transfers, transaction confirmations, or the provision of personal information made via calls, messages, or videos, even when such requests appear to come from acquaintances, leaders, organisations, or familiar platforms.

Taking time to verify information through official channels or contacting the relevant entities directly is key to reducing risks. Raising awareness and equipping individuals with the skills to identify deepfakes and to develop preventive and control measures against them are emphasised as urgent requirements for protecting information security and digital safety today.

In response to these realities, the authorities have continuously issued recommendations to protect people when transacting, shopping, and interacting online, while also improving the legal framework for AI. According to Tran Van Son, Deputy Director of the National Institute of Digital Technology and Digital Transformation under the Ministry of Science and Technology, the Law on Artificial Intelligence, passed by the National Assembly on December 10, 2025, and taking effect on March 1, 2026, establishes a relatively comprehensive legal foundation for classifying risks, defining the responsibilities of relevant stakeholders, and granting management agencies the authority to supervise, intervene in, and handle violations by AI systems.

The law strictly prohibits the use of deepfake technology for fraud or other illegal activities. It also requires generative AI systems to label content created or modified by AI and to apply identification measures for management and traceability. The Ministry of Science and Technology serves as the focal agency, responsible to the Government for the state management of AI and leading risk classification and conformity assessments. Where risks of damage or serious incidents are identified, competent authorities are required to suspend, withdraw, or reassess the systems concerned.

For serious violations, especially those involving content that harms children or disrupts social order and safety, the organisations and individuals involved will not only face restrictions on or suspension of their services under the Law on Artificial Intelligence, but may also be subject to administrative penalties, criminal prosecution, and liability for damages as prescribed by law.

This approach reflects a clear stance: encouraging innovation while firmly rejecting any misuse of AI that infringes upon human rights and societal interests, thereby ensuring that technology develops in a safe, responsible, and sustainable manner.
