AI Content Requires 'Real-Name System' Certification as of September 1

September 1, 2025

Today, September 1, marks the inception of the "real-name system" for artificial intelligence content.

The "Measures for the Identification of Artificially Intelligent Generated Synthetic Content," jointly formulated by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration of Radio and Television, have officially taken effect.

Henceforth, all text, images, audio, video, and even virtual scenes generated by AI must be explicitly "labeled as AI-generated."

The Measures took effect six months after they were announced, and they aim to rein in the current flood of indistinguishable AI content and restore public trust in online information.

Previously, the EU's AI Act, the US's Deepfake Accountability Act, and the UK's AI Regulation Draft all required watermarks. As of today, China is transforming these "suggestions" into "mandates," aligning with the highest global standards in one decisive step.

Double assurance with "open label" and "hidden code"

"Open label" refers to an explicit identifier visible to all users: text must carry the label "Generated by Artificial Intelligence" at the beginning or end; audio must include voice prompts before and after playback; images and videos must bear watermarks in prominent positions.

The so-called "hidden code" is an implicit identifier embedded deep within the file: the generator's name, a content ID, and a digital watermark are written into the file's metadata to support accountability and traceability.

The combination of open label and hidden code provides a double assurance for identifying AI-generated content.
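To make the dual labeling concrete, here is a minimal sketch in Python of what a generation pipeline might do: append the explicit label to text output, and write an implicit identifier into a PNG's metadata using Pillow. The "AIGC" key and its JSON fields are illustrative assumptions, not the exact layout mandated by the regulators' technical standard.

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

EXPLICIT_LABEL = "Generated by Artificial Intelligence"  # the visible "open label"

def label_text(content: str) -> str:
    """Attach the explicit identifier to the end of AI-generated text."""
    return f"{content}\n[{EXPLICIT_LABEL}]"

def embed_hidden_code(src_png: str, dst_png: str,
                      generator: str, content_id: str) -> None:
    """Write an implicit identifier (the "hidden code") into PNG metadata.

    The "AIGC" key and field names below are illustrative assumptions,
    not the layout defined by the accompanying technical standard.
    """
    img = Image.open(src_png)
    meta = PngInfo()
    meta.add_text("AIGC", json.dumps({
        "Label": "AI-Generated",   # machine-readable flag
        "Generator": generator,    # name of the generating service
        "ContentID": content_id,   # traceable content number
    }))
    img.save(dst_png, pnginfo=meta)

if __name__ == "__main__":
    print(label_text("Sample model output."))
    # embed_hidden_code("out.png", "out_labeled.png", "ExampleModel", "20250901-0001")
```

A dissemination platform could later read the same key back (for PNGs, Pillow exposes text chunks via the image's `text` attribute) to recover the generator's name and content ID for traceability.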

Generation end, dissemination end, user end: clear delineation of responsibilities

With the identification method clarified, the next step is to delineate responsibilities.

Firstly, the 'generation end' includes large model companies and editing tool developers, who must automatically apply the aforementioned identifiers when outputting content.

Secondly, the 'dissemination end' encompasses social platforms and video websites, which must verify the identifiers before content goes live, and any content lacking identifiers will trigger a risk warning.

Lastly, the 'user end' refers to each one of us. Users must not remove the identifiers; maliciously deleting, tampering with, or forging them can easily cross the line into illegality.

The regulatory body has designed a three-tier technical filtering mechanism

So, how do platforms ensure the authenticity and reliability of these identifiers? The regulatory body has devised a three-tier technical filtering mechanism.

The first tier involves the platform automatically reading the file metadata and directly identifying the content as AI-generated upon finding an implicit identifier.

The second tier kicks in if the metadata is missing; the platform uses algorithms to detect content characteristics and automatically labels it as "suspected AI-generated."

The third tier applies when a user actively declares that the content is not AI-generated; the platform needs to conduct a secondary confirmation before deciding whether to apply the label.
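Taken together, the three tiers amount to a simple decision cascade. The Python sketch below is a hypothetical rendering of that logic: the metadata key reuses the illustrative "AIGC" field from the earlier example, and looks_ai_generated() stands in for whatever proprietary detection model a platform actually runs.

```python
from enum import Enum

class Verdict(Enum):
    AI_GENERATED = "AI-generated"                    # tier 1: confirmed via metadata
    SUSPECTED_AI = "suspected AI-generated"          # tier 2: flagged by a detector
    PENDING_REVIEW = "pending manual confirmation"   # tier 3: uploader disputes the label
    UNLABELED = "no label applied"

def looks_ai_generated(content: bytes) -> bool:
    """Placeholder for a platform's proprietary detection model."""
    return False

def triage(metadata: dict, content: bytes, user_claims_human: bool) -> Verdict:
    """Hypothetical sketch of the three-tier filtering cascade.

    `metadata` is assumed to be the file's already-parsed metadata dictionary.
    """
    # Tier 1: an implicit identifier in the metadata settles the question.
    if metadata.get("AIGC", {}).get("Label") == "AI-Generated":
        return Verdict.AI_GENERATED

    # Tier 2: metadata missing, so fall back to algorithmic feature detection.
    if looks_ai_generated(content):
        # Tier 3: the uploader declares the content is not AI-generated,
        # so a human reviewer confirms before any label is applied.
        if user_claims_human:
            return Verdict.PENDING_REVIEW
        return Verdict.SUSPECTED_AI

    return Verdict.UNLABELED
```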

This layered screening makes it far harder for AI-generated content to slip through undetected.

However, despite the precision of algorithms, misjudgments are inevitable. Legal experts have warned in advance that original human-created works may be mistakenly labeled. The Measures therefore clearly stipulate that platforms must provide appeal channels: users who discover they have been mislabeled can apply for manual review within 48 hours, and the party responsible for the misjudgment will be held accountable.

Relevant enterprises and developers: compliance preparation and access requirements

A reminder to relevant enterprises and developers:

Generation enterprises must complete the upgrade of their output modules by September 1 to enable the dual-labeling function of explicit and implicit identifiers.

Platform enterprises need to promptly launch three systems: metadata scanning, frontend prompts, and appeal backends.

Developers should not take chances: they are advised to connect to compliance-testing platforms such as Zhejiang University's GCmark in advance, so as not to be caught off guard during spot checks.

Penalties for violations

The "Identification Measures" emphasize that no organization or individual may maliciously delete, tamper with, forge, or conceal the identification of generated synthetic content stipulated in these Measures, nor provide tools or services to others for implementing such malicious acts.

Whether it is the generator or disseminator of AI content, violators will face a range of punishment measures, including but not limited to:

① Being ordered to rectify by the cyberspace administration department;
② Depending on the circumstances, warnings, fines (which can reach millions of yuan), suspension of related services, takedowns, muting, traffic restrictions, account closure, and other penalties;
③ If it causes social harm (such as the spread of fake news or rumors), more severe administrative or legal responsibilities may be pursued.

In summary, starting from September 1, artificial intelligence is no longer an 'anonymous netizen.' Any AI-generated content intended for public release must first display its 'identity card.'
