China’s AI regulation

As one of the first global superpowers to regulate generative AI (“Gen AI”), China has progressively introduced regulations and guidelines to create a flexible, ever-evolving regulatory framework. Instead of enacting a single piece of overarching AI law, China has adopted a multi-faceted approach consisting of national laws, administrative regulations and measures, local rules and standards, and industry-specific rules (collectively, the “Regulations”). Together, these instruments form a dynamic framework that enables China to manage risks without curtailing innovation, and seeks to ensure that Gen AI services are safely and appropriately made available to the public within China.

The Regulations

The Interim Measures for Management of Generative Artificial Intelligence Services (the “Interim Measures”) (effective August 2023) and the Basic Security Requirements for Generative Artificial Intelligence Service issued by the National Cybersecurity Standardization Technical Committee (TC 260) (the “Basic Requirements”) (published in February 2024) together outline the principal obligations for entities engaged in AI-related business (e.g. Gen AI Service Providers), as well as users of Gen AI services, in Mainland China.

The Measures for Labelling AI-Generated and Synthesized Content (the “Labelling Measures”), published in March 2025 and effective from 1 September 2025, set out the duties of Gen AI Service Providers and users to label AI-generated content. The Labelling Measures complement the Administrative Provisions on Deep Synthesis in Internet-based Information Services (“Deep Synthesis Provisions”) (effective January 2023) and the Administrative Provisions on Recommendation Algorithms in Internet-based Information Services (“Algorithm Provisions”) (effective March 2022) in combating the ethical and social risks of misusing AI-generated or modified content.

Further, there are other supplementary standards and guidelines that implement the above-mentioned Regulations, which set out more prescriptive technical specifications. Several examples are listed below:

  • Cybersecurity Technology – Labelling Method for Content Generated by Artificial Intelligence (published in February 2025 and effective on 1 September 2025) (the “Labelling Method”). This is a mandatory national standard.
  • Cybersecurity Technology – Basic Security Requirements for Generative Artificial Intelligence Service (published in April 2025 and effective on 1 November 2025). The requirements therein are recommended (i.e. not strictly mandatory), but it is market practice that stakeholders would refer to such requirements for compliance purposes.

To provide a brief overview, the Regulations impose obligations related to ensuring the legality of the training data and model, which may involve conducting security assessments in accordance with the Basic Requirements and filing algorithms with the regulator (applicable to Gen AI services with attributes related to public opinion or capabilities for social mobilization). Additionally, Gen AI Service Providers have a duty to adopt effective measures to enhance the quality of data, including its accuracy, truthfulness, objectivity, and diversity.

In terms of Gen AI outputs, Gen AI Service Providers must comply with transparency obligations designed to safeguard the interests of the public. These include implementing anti-addiction measures, guiding users to understand the Gen AI service provided, and making complaint procedures available to users and the general public. To ensure that users use the Gen AI service lawfully, and to prevent Gen AI outputs from being used to infringe individuals’ rights or disrupt social and economic order, Gen AI Service Providers must also ensure that AI-generated content is properly labelled and that unlawful content is promptly removed and reported (as applicable).

Sector-Specific Rules

The respective PRC regulators in the fields of healthcare (e.g. concerning AI medical software products), automotive (e.g. concerning self-driving vehicles), finance and education have also issued various measures, guiding principles and draft regulations for consultation. These promulgations address the specific risks of Gen AI and AI deployment in sectors with significant impact on the general public, and set out additional compliance obligations to ensure privacy, safety and transparency.


Key legal risks / issues

1. Categorized Supervision: In addition to the Regulations, relevant state and local authorities may issue further regulations and guidance for specific industries and categories of Gen AI services. It is therefore essential for Gen AI Service Providers to stay up to date with all applicable national, administrative, local, and sector-specific regulatory requirements, such as the ones described above.


2. Broad Definition: Under the Interim Measures, “Gen AI technology” is defined broadly as “models and related technologies that have the ability to generate texts, pictures, sounds, videos, and other content”. This covers essentially any content-generating technology, including recommendation algorithm technologies and deep synthesis technologies where they are integrated with Gen AI. Essentially, any provider offering services with a Gen AI element must comply with the regulatory obligations.


3. Development Oversight: The Regulations set out requirements for the development of Gen AI, covering the full lifecycle of AI model development and training. Gen AI Service Providers shall ensure training data is lawfully sourced, screened, and labelled in accordance with the Basic Requirements. They are also responsible for the models’ training process, output accuracy, as well as the security of their operation and updates. Where Gen AI Service Providers rely on third-party AI models, challenges may arise in meeting these requirements due to limited visibility into how such models were trained, evaluated, and maintained.


4. Output Labelling and Monitoring: Under the Regulations, Gen AI Service Providers are responsible for the outputs of their systems. For example, under the Labelling Measures and the Labelling Method, they must implement both explicit and implicit labelling mechanisms for AI-generated content in specified forms, disclose the labelling standards to users, and embed implicit labels in the metadata of outputs. For deep synthesis services that generate or significantly alter content (such as text, audio, images, or videos), providers must add explicit labels to the content per the Deep Synthesis Provisions. If outputs are used for illegal purposes, providers must promptly remove such content, preserve records, sanction users, suspend services, and/or report to the authorities (as applicable).
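To make the implicit-labelling concept concrete, the sketch below shows how a provider might assemble a machine-readable label for embedding in an output file’s metadata. The field names and values here are illustrative assumptions only; they are not the exact schema prescribed by the Labelling Method, which should be consulted for the mandatory attributes and encoding.

```python
import json

def build_implicit_label(provider_name: str, content_id: str) -> str:
    """Assemble an illustrative implicit label as a JSON string, suitable
    for embedding in the metadata of an AI-generated output file.

    NOTE: the keys below are hypothetical placeholders, not the field
    names mandated by the Labelling Method.
    """
    label = {
        "AIGC": "true",                    # flags the content as AI-generated
        "ServiceProvider": provider_name,  # entity responsible for the output
        "ContentID": content_id,           # unique identifier for traceability
    }
    return json.dumps(label, ensure_ascii=False)

# Usage: generate the label, then write it into the output's metadata
# (e.g. an image's metadata chunk or a media container's tag fields).
label_json = build_implicit_label("ExampleAI Co.", "20250901-0001")
print(label_json)
```

In practice the label would be written into a format-specific metadata location (for instance, a text chunk in an image file), so that the content remains traceable to its provider even after redistribution.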


5. Consequences of Breach: Violations of the Interim Measures are penalised in accordance with the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Law on Progress of Science and Technology. For scenarios not explicitly covered, regulators may take administrative measures (e.g. issuing warnings, reprimands, rectification orders, and suspension orders). Violations that constitute criminal acts may also attract criminal liability. Other competent authorities that may impose penalties include those governing cyberspace, telecommunications, public security, radio and television, and market regulation (e.g. the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation).


Actions for consideration

1. Risk Assessment and Comprehensive Compliance Checklist: Given the broad coverage and extensive compliance obligations under the Regulations, it is vital for businesses to conduct self-evaluations on their risk exposure and compliance posture with respect to their offerings that leverage Gen AI. Establishing a comprehensive compliance and governance checklist would be helpful for this purpose. Additionally, it would be prudent for businesses to undertake similar assessments against third-party Gen AI system providers prior to any engagement. Both developers and deployers of Gen AI systems and Gen AI-powered products or services should stay informed of all relevant regulatory developments at the national, local and industry level.


2. Contractual Arrangement with Gen AI System Provider: Businesses may be held responsible under the Regulations, even if the Gen AI system was developed by a third party. It is therefore vital for businesses to regularly monitor the performance, operation, and outputs of Gen AI systems. Procurement contracts should include sufficient and comprehensive audit rights and mechanisms for ongoing inspection and cooperation. This ensures that the compliance obligations outlined in the Regulations are “flowed down” to third-party suppliers in any procurement contracts.


3. Maintain Human-in-the-Loop Oversight: The Regulations impose obligations throughout the full lifecycle of Gen AI deployment, particularly for services provided to the public in China. To comply, Gen AI Service Providers should implement human-in-the-loop oversight to ensure AI-generated outputs are accurately labelled, and to promptly address any detected unlawful content or user complaints in line with regulatory requirements.


Related contacts

Rhys McWhirter


Partner | E: rhysmcwhirter@eversheds-sutherland.com | T: +852 2186 4969

Frankie Tam


Partner | E: frankietam@eversheds-sutherland.com | T: +852 2186 4919



© Eversheds Sutherland. All rights reserved.
