Q: In a context where AI is increasingly capable of intervening in artistic creation, how do you perceive the boundary between supporting creativity and misuse that erodes the original value of literature and the arts?
A: Although AI can replace humans in many tasks, including writing and artistic creation, the root issue lies not in the technology but in people. Are humans using AI to amplify their ideas, or relying on it to fully replace the meaningful intellectual labour that they themselves should experience?
Literature and art are not merely final products. They are also processes of emotional resonance, creative elevation, selection of ideas, trial and error, nurturing works through personal experience, and leaving the creator’s intellectual imprint. If AI helps artists conduct research, experiment with structure, find materials, and expand expressive capacity quickly, effectively, and vividly at an optimised cost, then it supports creativity. But if AI is used to bypass the entire effort of the artist, producing something that “looks like art” without lived experience or authorial responsibility, then it is no longer a creative work, but merely a simulation of creativity.
What is concerning is that the public can easily confuse the two, as today’s technology can produce outputs that are very “beautiful” and very “convincing”. However, something that resembles art is not necessarily art, just as something that resembles a human voice is not necessarily a voice with a soul. Creators themselves can also be overwhelmed by the extraordinary power of AI, leading to complete dependence on it and the loss of the depth that only they can provide.
Q: From a cybersecurity perspective, what risks might the use of AI in creative processes pose in relation to copyright, personal data, and the “theft of artistic style”?
A: From a cybersecurity standpoint, I usually break down AI-related risks in creative work into two parts: input risks and output risks. On the input side, there are three notable issues. First is the risk to personal data. Many creators today input unpublished drafts, images, and even highly personal notes into AI tools without fully understanding how that data is stored, used, or retrained. For artists, this is not only data but also intellectual property and their creative life.
Second is the risk related to training data sources. An AI model may generate content that appears new on the surface but is actually trained on millions of works whose owners neither knew of nor consented to that use. This is no longer a case of isolated "copying" but large-scale exploitation of creative labour, which may escalate into a broader issue.
That leads to the third risk: the appropriation of identity and large-scale theft of artistic style. AI can recreate the style, spirit, and expressive approach of an artist without directly copying. This can be seen as a form of identity impersonation at an aesthetic level, which is very difficult to detect and even harder to protect.
However, focusing only on input risks is not sufficient. The greater risks lie in the output — what AI produces and reintroduces to the market. First is the issue of ownership. Who truly owns an AI-generated work remains unclear, creating a legal grey area along with questions of responsibility and rights. If AI-generated content infringes copyright or causes disputes, who will bear the ultimate responsibility? When both users and technology providers can shift responsibility onto each other, the risks often fall on the weaker party — creators and the market.
Next is the issue of royalty flows. When AI-assisted works are commercialised and generate revenue, how is that value distributed? Are those who unintentionally become training data sources recognised or compensated, or does all the value concentrate in platforms and tool users? At present, there is no clear answer.
Finally, there is the issue of “AI junk”, as increasing volumes of low-quality content generated rapidly by AI flood the Internet. This not only negatively affects users, but such content may also be collected again to train new models, creating a cycle of declining quality. In simple terms, AI begins learning from its own outputs rather than from original human data.
Therefore, AI in creative work is not merely a matter of technology or convenience. It is about consent, ownership, responsibility, and value distribution. If these issues are not resolved, creators may gradually lose control not only over their works but also over their position within the creative ecosystem.
Q: In your view, is it possible to build a transparent ecosystem for AI-assisted works, such as labelling mechanisms and traceability, to protect both creators and the public?
A: I believe it is not only possible but necessary to build such a transparent ecosystem. Globally, efforts are already underway to avoid a future overwhelmed by “AI junk”. This ecosystem is not simply about attaching a label stating “AI-assisted”, but about clarifying multiple layers.
The first layer is transparency regarding the level of AI involvement. The public has the right to know whether a work is fully generated by AI or whether AI only assisted in certain stages such as colour correction, layout suggestions, or language editing. The second layer is traceability. In an ideal future, a work would come with a “digital passport”: who the author is, which tools and versions were used, which datasets were authorised, and what edits were made by humans. The third layer is independent verification. If transparency relies solely on self-declaration by content creators, it is insufficient. Technical standards and intermediary organisations are needed for validation.
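The "digital passport" idea above can be sketched as a provenance manifest bound to the work by a cryptographic hash. The field names and structure below are illustrative assumptions only, not an existing schema (real-world efforts such as C2PA define their own formats):

```python
import hashlib
import json

def make_digital_passport(work_bytes: bytes, author: str, tools: list[str],
                          datasets: list[str], human_edits: list[str]) -> dict:
    """Build an illustrative provenance manifest for a creative work.

    All field names here are hypothetical; standards bodies define real schemas.
    """
    return {
        # SHA-256 of the content binds the manifest to this exact work
        "content_sha256": hashlib.sha256(work_bytes).hexdigest(),
        "author": author,
        "tools": tools,                    # e.g. model name + version per stage
        "licensed_datasets": datasets,     # datasets whose use was authorised
        "human_edits": human_edits,        # edits made by humans afterwards
    }

passport = make_digital_passport(
    b"draft manuscript bytes",
    author="A. Author",
    tools=["image-model v2 (layout suggestions only)"],
    datasets=["licensed-stock-2024"],
    human_edits=["rewrote chapter 3", "colour correction"],
)
print(json.dumps(passport, indent=2))
```

Because the manifest records the level of AI involvement per stage, it addresses the first two layers directly; the third layer (independent verification) would require such manifests to be signed and checked by an intermediary rather than self-declared.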
Q: From your experience in technology, what are the most feasible technical solutions today to detect AI-generated content and prevent misuse in the arts?
A: The most basic and widely adopted technology used by many countries, organisations, and platforms is labelling — embedding identification, origin, and editing history into content from the moment it is created.
At a deeper level, there are technologies that analyse technical traces in images, audio, and text to detect generative models, anomalies, or inconsistencies in data structures. In addition to detecting AI-generated content, there are also technologies to protect original data and the identity of creators — proving who created something first, when it was created, on what data, and how it was edited.
At the deepest level, I believe it is about controlling access to and use of creative data. This protects data repositories, works, drafts, voice samples, and original images of artists from misuse or unauthorised use.
Q: From a business perspective, particularly for technology companies such as VinCSS, what role can they play in shaping ethical standards and legal frameworks for AI applications in creativity?
A: I believe technology companies should not remain merely “tool providers”. Their greater role is to help build a sustainable digital society where technology advances while privacy, creative rights, and transparency are protected.
Specifically, I think businesses can do three things. First, develop technologies for authentication and traceability. In a context where AI can generate content at scale, how can we determine what is real, where it comes from, and who is responsible? This is fundamentally a cybersecurity issue related to protecting digital identity, safeguarding data, and ensuring information integrity. Without this layer, we risk entering an environment where trust erodes rapidly.
Second, propose responsible operational standards, placing privacy at the centre. For example, what data can be used for training, what data requires explicit consent, how AI content should be labelled, when disclosure of AI involvement is required, and how disputes are handled. These are not just technical regulations but principles to protect both creators and content consumers.
Third, participate in multi-stakeholder dialogue among businesses, governments, the creative community, and the public to refine legal frameworks. Technology without understanding art can harm creativity, while businesses without social responsibility may prioritise convenience over long-term consequences.
At VinCSS, we do not view this as a purely technical issue. In addition to developing solutions related to digital identity, authentication, and access protection, we actively engage in raising public awareness of digital safety, privacy, and responsible AI use in the new technological era. Through communication activities, educational content, and knowledge sharing, we aim to help users better understand how their data is used, the risks of interacting with AI, and how to protect themselves in the digital environment. This is an important responsibility, because no matter how securely technology is designed, risks remain if users lack awareness.
Q: What do you suggest for young people — both “digital citizens” and creators — to use AI responsibly while maintaining their artistic identity without becoming dependent on tools?
A: I believe young people today have a great advantage in accessing more powerful tools than any previous generation of creators. But that also requires greater resilience. I have three suggestions.
First, use AI to expand your capabilities, not to avoid the process of growth. Tools can help you work faster, but they cannot live, observe, feel, or suffer on your behalf. At its deepest level, art still emerges from how a person has lived.
Second, preserve a part of your work that cannot be outsourced. That is your lens on life — how you perceive, cherish emotions, and decide what is worth expressing. In the future, the most valuable aspect may not be the ability to produce content, but the individuality behind it.
Third, learn digital ethics alongside digital skills. Do not become so absorbed in the magic of producing content at the click of a button that you forget to ask whether your data is safe, where the AI's data comes from, whether someone's intellectual effort was taken to create what you use, whether your output has real value or is merely "AI junk", and who might be negatively affected by it.
I believe young people should not fear AI, but neither should they idolise it. AI can be seen as a new, modern, and feature-rich instrument. But ultimately, what people remember is not the instrument, but the melody that only you can create.
Reporter: Thank you for the insightful conversation!