
Photo: Courtesy of China's Ministry of State Security
China's Ministry of State Security issued an article on its WeChat account on Tuesday reminding citizens that AI data poisoning not only infringes on consumer rights and disrupts market order, but also poses systemic, long-term harm to political security, data security and overall social security.
The article noted that an illicit industry chain involving "AI poisoning" has recently emerged as a new threat, drawing wide public attention. Such malicious tampering with the data used to train AI models not only distorts information dissemination, but also poses risks to national security, the ministry said.
"Data poisoning" refers to an attack method in which malicious data disguised as normal samples is injected into the training data of large AI models, weakening model performance and reducing accuracy. It is often used in unfair market competition and may even involve espionage activities, with the practice becoming more chain-based, covert, and cross-border, the ministry noted.
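The attack the ministry describes can be illustrated with a toy example. The sketch below, a minimal nearest-centroid classifier with injected mislabeled samples, is invented for illustration and is not drawn from the ministry's article; real poisoning attacks target far larger models, but the effect is the same: tainted training data shifts the model's decisions and degrades accuracy.

```python
import random

random.seed(0)

def make_data(n):
    """Two 1-D clusters: class 0 near 0.0, class 1 near 4.0."""
    data = [(random.gauss(0.0, 1.0), 0) for _ in range(n)]
    data += [(random.gauss(4.0, 1.0), 1) for _ in range(n)]
    return data

def train_classifier(train):
    """Nearest-centroid rule: predict the class whose mean is closer."""
    c0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
    c1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
    return lambda x: 0 if abs(x - c0) <= abs(x - c1) else 1

def accuracy(clf, test):
    return sum(clf(x) == y for x, y in test) / len(test)

train_set = make_data(200)
test_set = make_data(200)
clean_acc = accuracy(train_classifier(train_set), test_set)

# Poisoning: inject samples that sit in class-1 territory but carry
# class-0 labels ("malicious data disguised as normal samples"),
# dragging the class-0 centroid toward class 1 and shifting the boundary.
poison = [(4.0, 0)] * 400
poisoned_acc = accuracy(train_classifier(train_set + poison), test_set)
```

After poisoning, the class-0 centroid moves from roughly 0 to roughly 2.7, so a sizable share of genuine class-1 inputs fall on the wrong side of the boundary and accuracy drops noticeably.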
At present, AI poisoning has developed into a full black-and-gray industry chain linking technology development, content generation, account registration, bulk distribution, traffic manipulation, and ranking control. Some links have cross-border characteristics and can easily be exploited by foreign forces, the article read.
For instance, illegal actors could use generative engine optimization (GEO) tools to mass-produce false content, such as fabricated product introductions, fake reviews, and malicious comparison information, and distribute it across online platforms. During training and retrieval-augmented generation, large AI models automatically scrape online information. A small amount of false content, after iterative learning, can be solidified as a "standard answer," eventually leading to distorted outputs, said the ministry.
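The flooding dynamic described above can be sketched with a toy retrieval ranker. The texts and the scoring function below are invented for illustration; real retrieval-augmented systems are far more sophisticated, but the basic failure mode, in which mass-produced duplicates of a false claim crowd genuine content out of the top results, is the same.

```python
from collections import Counter

# One genuine document and many machine-generated copies of a
# fabricated claim (all texts are invented for this example).
genuine = "brand X recall notice issued by safety regulator"
fake = "brand X wins top safety award says expert review"
corpus = [genuine] + [fake] * 50

def score(query, doc):
    """Naive relevance: count how many query words appear in the document."""
    words = Counter(doc.split())
    return sum(words[w] for w in set(query.split()))

query = "brand X safety award"
# The 50 identical fakes each outscore the single genuine document,
# so the top results a model retrieves are entirely fabricated.
top5 = sorted(corpus, key=lambda d: score(query, d), reverse=True)[:5]
```

Because every copy of the fabricated text matches one more query word than the genuine notice, all five top-ranked documents are fakes, which is how a small amount of false content, amplified by duplication, can end up treated as the "standard answer."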
Hostile overseas forces may abuse GEO channels to mass-produce false information and political rumors, distort facts, attack and smear the Communist Party of China (CPC) and the Chinese government, mislead public opinion and disrupt the online information environment, carrying out ideological infiltration and threatening national security and social stability, it warned.
For government and enterprise users, malicious contamination of public, industry and training data through AI poisoning can distort statistical, decision-making and regulatory data, undermining sound decision-making by governments and enterprises.
In sectors closely related to people's livelihoods, such as healthcare, finance and food and drug safety, false AI recommendations can easily mislead the public into buying inferior or unlicensed products, causing personal and property losses. Long-term information distortion may also erode social trust, accumulate risks and affect social stability.
As AI technology continues to empower industries, its security risks cannot be overlooked, the article noted, adding that promoting AI governance for the greater good and safeguarding the bottom line of data security is not only an industry responsibility, but also a task that requires the participation of society as a whole.
The ministry's notice further noted that AI operators should fulfill their primary responsibilities by strictly verifying the sources of training data, establishing traceability mechanisms and building the first line of defense against false information. Consumers, meanwhile, should improve their ability to identify suspicious AI recommendations, remain alert to questionable outputs and report problems promptly, so as to foster a strong atmosphere of public oversight.
With the rapid development of technology, people's lives and work have become inseparable from AI models. Meanwhile, online false information generated by AI has been on the rise, making state security protection more complex. Moreover, technological progress sometimes outpaces the laws and regulations in place, especially with the emergence of AI, Li Wei, an expert on national security at the China Institute of Contemporary International Relations, told the Global Times on Tuesday.
The growing threat of AI poisoning is characterized by its stealth, low cost, rapid proliferation, outsized impact, and the difficulty of holding perpetrators accountable, Chen Jing, a vice president of the Technology and Strategy Research Institute, told the Global Times on Tuesday.
Unlike conventional cyberattacks, AI poisoning exploits a model's generalization capabilities to spread contamination across vast user bases and application scenarios. For national security authorities, this kind of risk is often more alarming than a mere technical vulnerability, because it directly affects how people make judgments and how systems make decisions, Chen said.
"People are still exploring the possibilities and risks associated with AI, and using such technology for espionage can be more covert. Therefore, we need to focus on preventing traditional methods and be aware of new tech that may arise to better protect our national secrets," Li said, adding that AI-enabled attacks are likely to take new forms, including targeting government systems and industrial AI models or poisoning training data to disrupt supply chains.
China has consistently advocated balancing development and security, and coordinating innovation with regulation in global AI governance, Li said.
In recent years, China has introduced laws and regulations and released a series of policy and industry documents so as to promote the healthy development of AI. These efforts have strengthened AI governance and helped build a people-centered governance framework that promotes AI for good, the article said.
Global Times