Law on Artificial Intelligence establishes a mechanism to control high-tech fraud

The Law on Artificial Intelligence (AI) has just been promulgated, setting out for the first time a comprehensive legal framework to prevent, deter, and handle acts that exploit AI to misappropriate assets and manipulate public perception.

Workshop to gather feedback on the Draft Law on AI. (Photo: NDO)

AI-enabled fraud increasingly sophisticated and hard to detect

Recently, authorities have repeatedly warned about the rise in AI-enabled scams. The most common involve voice, image, and video spoofing technologies (deepfakes). With just a few seconds of audio or footage collected from social media, AI can recreate a near-identical copy of the voice of a relative, friend, agency leader, or business partner, then place calls requesting urgent money transfers, transaction approvals, or sensitive information.

Beyond that, AI is also used to create fake emails, messages, and websites with polished content “personalised” for each victim. Notifications about account verification, security alerts, refunds, or prizes are presented professionally, leading users to counterfeit websites with interfaces nearly identical to the genuine ones in order to steal passwords, OTP codes, or bank card details.

In particular, investment and financial scams on social media are surging. Criminals use AI to generate fake images and videos of financial experts or celebrities promoting “super-profit” projects with “guaranteed zero risk”. Some AI chatbots are even programmed to give advice and chat around the clock, creating a professional and trustworthy impression that gradually lures victims into depositing money.

The common thread in these tactics is that they prey on trust, a manufactured sense of urgency, and users’ habit of not verifying information. Against this backdrop, what is needed is not only greater public awareness but also a legal framework strong enough to control the use of AI.

Legal framework to prevent high-tech fraud

To control the risks arising from the abuse of AI, the Law on AI 2025, which takes effect on March 1, 2026, establishes a strict system of regulations aimed, both directly and indirectly, at preventing AI-enabled fraud.

Firstly, the law clearly defines prohibited acts. Under Article 7, the use of AI to impersonate, manipulate, or deceive humans is absolutely prohibited. Notably, the use of fake or simulated elements of real persons or real events to intentionally deceive or manipulate human perception and behaviour is strictly banned. This provision provides a direct legal basis to deal with deepfake voice scams and fake videos impersonating leaders, relatives, or partners to ask for money transfers and misappropriate assets.

At the same time, the law prohibits the creation or dissemination of fake content capable of causing serious harm to social order and safety, including AI-generated videos, images, and false information used to defraud the community.

Beyond banning specific behaviours, the law requires transparency about AI content and systems. Under Article 11 of the Law on AI 2025, users must be clearly informed when they are interacting with an AI system, especially in the case of chatbots, virtual assistants, or automated customer-service systems.

In parallel, the law stipulates that audio, images, and videos generated by AI must be marked and labelled in an easily recognisable manner, particularly when such content simulates the appearance or voice of real persons or recreates real events. This is considered an important regulation to reduce the risk of people being deceived by highly realistic fake calls, video calls, or content.
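
The law leaves the technical form of this marking open. As a purely illustrative sketch, and not a method prescribed by the law, the Python snippet below shows one simple way a provider could stamp a visible “AI-generated” disclosure strip onto a still image using the Pillow library; the function name and file paths are hypothetical.

```python
# Illustrative only: one possible way to mark an AI-generated image
# "in an easily recognisable manner" by overlaying a visible label.
from PIL import Image, ImageDraw

def label_ai_image(path_in: str, path_out: str, text: str = "AI-generated") -> None:
    """Overlay a visible disclosure strip along the bottom edge of the image."""
    image = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(image)
    # Draw a contrasting black strip so the white label stays legible
    # regardless of the underlying picture content.
    strip_height = max(20, image.height // 20)
    draw.rectangle(
        [0, image.height - strip_height, image.width, image.height],
        fill=(0, 0, 0),
    )
    draw.text((8, image.height - strip_height + 4), text, fill=(255, 255, 255))
    image.save(path_out)

# Hypothetical usage:
# label_ai_image("generated_frame.png", "generated_frame_labelled.png")
```

In practice, providers might combine such a visible label with machine-readable provenance metadata, but the appropriate technical standard will depend on implementing guidance from the competent authorities.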

The Law on AI also approaches fraud from a risk-management perspective. Under Article 9, AI systems that may confuse or manipulate users who cannot tell that content is AI-generated are classified as medium-risk. Such systems may not operate freely; they must comply with requirements on notification, supervision, and inspection by competent authorities.

Where an AI system causes, or risks causing, damage, the law empowers management authorities to require the system’s suspension or withdrawal, while obliging developers, providers, and deploying units to coordinate in remedying the consequences and to report to the competent authorities. This mechanism allows early intervention to limit widespread damage when AI tools are found to be exploited for large-scale fraud.

The Law on AI 2025 also establishes mechanisms for inspection, the handling of violations, and compensation for damage. Accordingly, management authorities have the right to request technical dossiers, system logs, and related data in order to determine responsibility. In addition, the use of AI for fraud may attract administrative penalties or criminal prosecution in accordance with the law, and offenders must compensate victims for the damage caused.

With the above system of regulations, the Law on AI not only creates favourable conditions for AI development and application in a safe and responsible manner, but also establishes an important legal framework to prevent the increasing wave of high-tech fraud in the digital era.

(This article was produced in coordination with, and uses data and legal information from, LuatVietnam.vn within the framework of the High-Tech Crime Prevention section.)