EU’s AI law tackles potential risks, ‘could add costs for tech companies entering EU market’
Published: Mar 14, 2024 11:22 PM
AI Photo:VCG

The European Parliament voted Wednesday to adopt the long-awaited AI Act, which it has dubbed "the world's first comprehensive AI law." Chinese observers said the Act sets out a risk-based framework for AI that will help prevent the fast-growing technology from harming human interests, but will also add hurdles and costs for tech companies, mostly from the US and China, seeking to enter the EU market.

"Europe is NOW a global standard-setter in AI," Thierry Breton, the European commissioner for the internal market, wrote on X.

The AI Act aims to regulate AI based on its potential to cause harm to society, with stricter rules for higher-risk applications. AI technologies that present a significant risk to fundamental rights, such as those involving biometric data processing and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, will be prohibited, according to the European Parliament. 

High-risk AI systems in critical sectors such as infrastructure, education, healthcare, and law enforcement must meet stringent requirements. Low-risk services such as spam filters will face lighter regulation, and most AI services are expected to fall into this category. 

The legislation also addresses risks associated with generative AI tools and chatbots, requiring producers of general-purpose AI systems to be transparent about the material used to train their models and to comply with EU copyright laws, according to the BBC.

Additionally, artificial or manipulated images, audio or video content (deepfakes) must be clearly labeled, the European Parliament said.

The Act is at the forefront of addressing potential risks and threats that the technology could pose to humanity, Liu Wei, director of the human-machine interaction and cognitive engineering laboratory at Beijing University of Posts and Telecommunications, told the Global Times. He said the potential negative effects and ethical problems of AI technology have already emerged, such as the use of fake photos or videos to manipulate people's perceptions and public opinion.

The adoption of the Act means that if Chinese AI companies want to enter the European market, they must first ensure that their products and technologies meet its standards and requirements. This may require companies to invest more resources and time in research and development to achieve technical compliance, Liu said. 

The Chinese expert said that the EU has been scrambling to catch up with China and the US in AI tech developments and has great ambitions for playing a major role in global AI governance. 

In the future, both Chinese and US companies, the key players in developing AI technology, looking to enter the European market will encounter a higher market access threshold, potentially affecting their competitiveness in Europe, Liu noted. 

In recent years, China's technology industry has expanded rapidly in the European market. Alibaba, Tencent and others have set up research and development centers or branches in Europe to cooperate with local companies. 

On AI governance, China has rolled out a slew of AI regulations, giving priority to addressing AI risks while also seeking a balance between regulation and innovation. 

The Act is expected to enter into force in May, following final reviews and approval from the European Council. Implementation of the regulation will begin in 2025.