Stamp out AI-aided crime before it blooms
Published: Jan 25, 2024 09:35 PM
Photo: Chen Xia/GT

Artificial intelligence (AI) simulates human intelligence to solve complex problems. One of its most powerful capabilities is learning from and analyzing large amounts of data to uncover hidden patterns and rules, enabling accurate decision support and a better user experience. However, while this new technology is changing people's lives, it has also given rise to new types of crime: some people are using AI to fabricate rumors, overturning the notion that "a picture tells the truth" and making it harder to distinguish fiction from reality.

Recently, police in Luojiang district of Deyang, Sichuan Province, investigated and dealt with a case in which AI software was used to fabricate and spread online rumors. A woman was punished for using an AI mini-program to manufacture false information about "school bullying." In recent years, Chinese police have handled several similar cases, in which the primary purpose of those involved was to attract attention online and earn money from the platforms. This new type of illegal activity, which may constitute fraud, calls for public vigilance.

AI-based rumor-mongering can mislead the public, disrupt social order, and cause irreparable losses to individuals and organizations. Beyond outright criminal offenses, there are also potential risks in the spread of pseudoscientific information, such as exaggerated claims about the efficacy of health products targeting older adults. Worse still, people could be misled by fake images or analyses into believing that a particular stock will experience price fluctuations, which could hurt the market. Moreover, in payments made through facial recognition, if forgery detection is not done well, it may even be possible to commit fraud with AI-generated faces, which is a significant risk.

Risks and social impacts accompany the birth and application of every technology, and we cannot avoid using it for fear of adverse impacts. Mainstream media outlets are currently trying to apply AIGC (Artificial Intelligence Generated Content) to improve efficiency or optimize quality. We need to pay more attention to the proportion of AIGC: when it is too high, the homogeneity of content pushed by automated recommendation algorithms may foster an information cocoon.

AI-involved online rumors and false information are complex issues that require cooperation among the government, media, and the public.

First, journalists should take responsibility and strengthen self-discipline to provide more objective and comprehensive reports. They should refrain from dramatizing unverified information to avoid misleading the public. Technology itself is an innocent and productive tool that can deliver quality content more efficiently. When mainstream media or websites publish news, they should mark which parts were generated by AI and explain why.

Second, technology companies and research institutions should enhance the supervision and examination of AI applications to ensure compliance with ethics and social norms. Researchers and industry insiders must constantly update and iterate identification methods to keep pace with the changing risks that accompany technological progress. From a technical perspective, detection tools or identification marks will eventually be able to indicate whether a given text or image was AI-generated, which will also reduce the cost of debunking rumors.

Third, the government and lawmakers should formulate diversified governance measures. For one thing, platforms should be required to implement real-name systems to curb illegal actions; for another, an algorithm ethics review mechanism should be established that requires industry players to conduct ethical examinations before providing services. Higher education institutions should prepare to include AI in their curricula. They can start by offering related courses on topics such as algorithms and model training, because by understanding how AI works, students can avoid slavishly following the answers it gives. This can enhance students' knowledge and cultivate critical thinking, which is the actual purpose of higher education.

Last, as the audience for this content, we should improve our digital literacy, maintain independent thinking about internet information, and focus on exploring the truth rather than mindlessly seeking fleeting stimulation. We should remain vigilant and protect private information such as personal and family documents and photos.

AI is a double-edged sword: it can produce enormous value for humanity, but it may also become a threat. While AI technology is developing rapidly and being applied extensively, we must recognize its potential risks and work together to create a more trustworthy and sustainable environment in which we can enjoy the benefits AI brings.

The author is a faculty member with the School of Applied Economics, Renmin University of China.