MSS warns of hostile foreign force using deepfakes to spread fake videos in China to create panic
Published: Dec 26, 2025 09:36 AM
Data Security File. The MSS issues a statement on its WeChat account on December 1, urging caution over sharing sensitive information online without declassification or risk evaluation. Such information can serve as a major source of open-source intelligence for foreign espionage agencies, potentially endangering national security. Photo: VCG

China's Ministry of State Security (MSS) warned on Friday that a foreign anti-China hostile force used deepfake technology to produce fabricated videos and attempted to disseminate them inside China to mislead public opinion and create panic, posing a threat to China's national security.

In an article posted on its WeChat account, the MSS said that the rapid expansion of AI large models is accelerating innovation across industries and daily life, increasingly integrating into people's routines. But as the technology spreads and becomes more deeply embedded, new risks such as data privacy breaches and algorithmic bias are emerging, underscoring the need to strengthen security safeguards so that AI can empower society in a safe and sustainable way.

Some institutions faced data leaks and security risks after directly deploying internet-connected large models based on open-source frameworks, which allowed attackers to access internal networks without authorization, the MSS said. In one publicly reported case, an employee used an open-source AI tool to process internal documents, but the tool's public network access was enabled by default and no password had been set, resulting in sensitive files being accessed and downloaded from overseas IP addresses.
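
The MSS account does not name the tool or its exact configuration, but the failure mode it describes is a familiar one: a locally deployed model server listening on all network interfaces with no authentication. As a minimal sketch (not the affected institution's setup), the Python snippet below checks whether such a service answers on the machine's LAN address rather than only on loopback; the port 11434 is an assumption, chosen because several popular local LLM runtimes use it by default.

```python
import socket

PORT = 11434  # assumption: a common default port for local LLM runtimes

def is_exposed_beyond_loopback(port: int) -> bool:
    """Return True if the port answers on the machine's LAN address."""
    lan_ip = socket.gethostbyname(socket.gethostname())
    if lan_ip.startswith("127."):
        # Hostname resolves to loopback only; there is no LAN address to probe.
        return False
    try:
        with socket.create_connection((lan_ip, port), timeout=1):
            return True  # the service answered on a routable address
    except OSError:
        return False  # refused or timed out: loopback-only, firewalled, or down

if __name__ == "__main__":
    if is_exposed_beyond_loopback(PORT):
        print(f"WARNING: port {PORT} is reachable on the LAN; bind the model "
              "server to 127.0.0.1 or put it behind authentication.")
    else:
        print(f"Port {PORT} does not appear reachable beyond loopback.")
```

Binding the server to 127.0.0.1, or fronting it with a reverse proxy that requires credentials, closes the gap described in the reported case.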

Deepfake technology — which uses AI and deep learning to simulate or fabricate images, audio and video — can pose serious risks when abused, affecting individual rights, social stability and even national security.

China's national security authorities have found that a foreign anti-China hostile force generated fake videos using deepfakes and attempted to spread them inside China to mislead the public and create panic, said the MSS.

The MSS also said that AI systems reflect the data they are trained on, so if training sources contain bias or lack representation, large models may amplify discrimination. Tests have shown that some AI models display a systemic tilt toward Western perspectives.

In one study, when researchers asked the same historical questions in Chinese and English, the English responses tended to downplay or avoid certain historical facts — even producing incorrect information — while the Chinese answers were comparatively more objective.

To guard against these risks, the MSS stressed prudent and disciplined use of AI tools to reduce privacy, security and misinformation risks. Users should limit AI's scope of access by applying the principle of minimal permissions: avoid processing sensitive data with internet-connected models, prevent voice tools from collecting unnecessary ambient audio, and disable functions such as data-sharing or cloud sync when they are not required.
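
The MSS article gives no concrete implementation of the minimal-permissions principle, but one practical reading is to scrub sensitive fields locally before any text reaches an internet-connected model. The Python sketch below illustrates the idea with two deliberately simple patterns; a real deployment would need classification-aware filtering defined by the data owner.

```python
import re

# Illustrative patterns only: an email address, and any run of 11 or more
# digits (optionally separated by spaces or hyphens), which covers most
# phone, ID and card numbers.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL REDACTED]"),
    (re.compile(r"\d(?:[-\s]?\d){10,}"), "[NUMBER REDACTED]"),
]

def redact(text: str) -> str:
    """Replace sensitive-looking substrings before text leaves the machine."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    sample = ("Contact Li Wei at li.wei@example.com or +86 138-0013-8000; "
              "card number 6222020200112233445.")
    print(redact(sample))
    # -> Contact Li Wei at [EMAIL REDACTED] or +[NUMBER REDACTED];
    #    card number [NUMBER REDACTED].
```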

They are also advised to manage their digital footprint by clearing chat histories, updating passwords, checking logged-in devices and installing security updates, while remaining cautious about unknown AI programs or requests for identity or financial information.

Finally, the guidelines call for more responsible human-AI interaction: users should set clear constraints in prompts, ask AI to show reasoning or sources when possible, verify important information across platforms, and maintain independent judgment — particularly on political, historical or ideological topics — to avoid being misled by AI hallucinations.
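
As one illustration of the "clear constraints in prompts" advice, a user might keep a reusable template that asks any chat model to separate claims from reasoning and to flag uncertainty, making cross-checking easier. The wording below is hypothetical, not drawn from the MSS guidelines.

```python
# Hypothetical template text; the MSS article recommends the practice but
# prescribes no specific wording.
PROMPT_TEMPLATE = """Answer the question below under these constraints:
1. State the source or reasoning behind every factual claim.
2. Mark any claim you are not sure of as UNCERTAIN.
3. If you do not know, say so rather than guessing.

Question: {question}
"""

def build_prompt(question: str) -> str:
    """Wrap a user question in the constraint template."""
    return PROMPT_TEMPLATE.format(question=question)

if __name__ == "__main__":
    print(build_prompt("When was this treaty signed, and by whom?"))
```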

Global Times