Professor claims AI can spot criminals from photos with 90 percent accuracy

By Liu Xin Source: Global Times Published: 2017/1/3 19:43:39

A woman looks at a poster advertising a method of payment through facial recognition. Photo: IC

Shanghai Jiao Tong University professor Wu Xiaolin and his team developed an AI that can label people's faces according to human perception

The project stirred controversy online over whether it is discriminatory

Wu said the findings should not be exaggerated and misinterpreted

It never occurred to Wu Xiaolin, a professor at Shanghai Jiao Tong University, that his research on artificial intelligence (AI) would incur so much criticism for "discrimination."

Wu and his students recently released two papers on training machines to predict human perceptions of other people's personality traits and attributes. They explored the potential of developing so-called "learning" algorithms that can distinguish those who "look like" criminals and non-criminals, and label women's faces as "sweet" or "pretentious."

 "We just found it fascinating to explore whether machine could acquire human-like sensations … I never agree with 'judging people by their appearance'…  the public and some media just spread a misinterpretation without understanding our research," Wu told the Global Times.

Automatic face recognition is one of the major successes of artificial intelligence. The next challenging question is whether supervised machine learning can analyze data on how humans judge each other, and how, if at all, behavior and appearance are linked.

Extraordinary claims

In the first paper, Wu and his student Zhang Xi explained they used computer vision and machine learning to see if a computer could distinguish criminals from non-criminals by analyzing the faces of 1,856 people.

Wu and his team collected ID card photos that satisfied the following criteria: Chinese, male, between the ages of 18 and 55, no facial hair, and no facial scars or other markings.

The 1,126 ID photos of non-criminal men came from people of a wide range of professions and social statuses, while the 730 ID photos of criminals came from those published by public security bureaus in various Chinese provinces.

The computer found that the angle from the nose tip to the corners of the mouth was on average smaller for criminals than for non-criminals. The upper lip curvature was also on average larger for criminals than for non-criminals.

Using these variables to judge photos, the AI was able to distinguish criminals and non-criminals in the data set with almost 90 percent accuracy.
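The article does not describe the actual model Wu's team trained, only the two geometric features it surfaced. Purely as an illustration of how a threshold rule on such features can separate two groups, here is a toy sketch in which every number (feature means, spreads, and cutoffs) is invented:

```python
# Toy sketch only, NOT the authors' method or data: a hand-set threshold
# rule on two hypothetical geometric features loosely inspired by the
# ones the paper reports (nose-to-mouth angle, upper lip curvature).
import random

random.seed(0)

def make_sample(n, angle_mean, curve_mean):
    # Each face is reduced to two made-up numbers:
    # (angle in degrees, curvature score).
    return [(random.gauss(angle_mean, 2.0), random.gauss(curve_mean, 0.5))
            for _ in range(n)]

group_a = make_sample(200, 95.0, 2.0)   # smaller angle, larger curvature
group_b = make_sample(200, 100.0, 1.5)  # larger angle, smaller curvature

def predict_a(angle, curve, angle_cut=97.5, curve_cut=1.75):
    # Flag a face as group A if it falls below the angle threshold
    # and above the curvature threshold.
    return angle < angle_cut and curve > curve_cut

hits = sum(predict_a(a, c) for a, c in group_a)
hits += sum(not predict_a(a, c) for a, c in group_b)
accuracy = hits / (len(group_a) + len(group_b))
print(f"toy accuracy: {accuracy:.2f}")
```

Because the synthetic groups are drawn from deliberately separated distributions, even this crude rule scores well above chance, which is exactly why such results say nothing about causation: the rule only reflects whatever regularities the data set happens to contain.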

"This coincides with the fact that all law-abiding citizens share many common social attributes, whereas criminals tend to have very different characteristics and circumstances, some of which are quite unique," read the paper.



Controversy arises

The paper has triggered great controversy.

"I am just shocked and appalled… It's so un-scientific, it should be taught in class as a counterexample," a Net user commented on the website of news portal Hacker News.

A student from Shanghai Jiao Tong University wrote Wu an e-mail, asking him to withdraw the paper and make a public apology for his "improper research methods."

"This paper is overwhelming in its strong discrimination and may mislead people. Research fellows in the artificial intelligence field should not abuse the usage of technologies to violate ethics," the student wrote.

Wu also received an e-mail from a counterpart at the US-based Cornell University, in which the academic "urged" him to withdraw the paper and said that criminality lies in behavior not appearance.

In a 2011 study, Cornell human development researchers showed people pictures of men and asked them to guess whether each was a criminal and, if so, whether the crime was violent and which specific crime was committed. It found that people could, "to a small but reliable" extent, correctly guess whether a man was a convict.

Shen Junnan, an expert on artificial intelligence at the Harbin Institute of Technology, told the Global Times that machines can only spot trends in the data set they are given. Criminals in the data set may share similarities in appearance, but this does not mean that people with those features have criminal tendencies.

Wu said his research is only at an early stage, and he can't say what this may be applied to in the future.

But the findings set many readers' imaginations running. Some netizens suggested Wu offer his research to China's Central Commission for Discipline Inspection to help identify corrupt officials.

Beauty and the bot

Before discussions of the first paper died down, Wu released another paper on the AI perceptions of female faces.

Wu explained that he used Baidu's image search engine to look up the words "beautiful," "pretty," "attractive girls" and "young women" to select his sample images.

The team collected 3,954 photos of "attractive" young Chinese women, among which 2,000 were labeled as looking "sweet, endearing, elegant, tender, caring, cute" and 1,954 were tagged as "pretentious, pompous, indifferent, coquettish."

Wu's team then trained the AI to identify which style of female attractiveness a picture contained, represented by the two sets of labels.
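The data-preparation step the article describes, partitioning labeled images into two perceived-style classes before training, can be sketched as follows. The label vocabularies come from the article; the `assign_class` helper and the sample annotations are hypothetical placeholders:

```python
# Sketch of the dataset split described in the article: images of young
# Chinese women partitioned into two perceived-style classes.
# The tag sets are from the article; everything else is illustrative.
SWEET = {"sweet", "endearing", "elegant", "tender", "caring", "cute"}
PRETENTIOUS = {"pretentious", "pompous", "indifferent", "coquettish"}

def assign_class(tags):
    """Map a set of annotator tags to a binary label (hypothetical helper)."""
    sweet_hits = len(tags & SWEET)
    pret_hits = len(tags & PRETENTIOUS)
    if sweet_hits == pret_hits:
        return None          # ambiguous image, dropped from the training set
    return "sweet" if sweet_hits > pret_hits else "pretentious"

# Toy annotations standing in for the real 3,954 images.
annotations = [
    {"cute", "tender"},
    {"pompous", "indifferent"},
    {"elegant"},
    {"coquettish", "caring"},   # conflicting tags: dropped
]
labels = [assign_class(t) for t in annotations]
print(labels)  # → ['sweet', 'pretentious', 'sweet', None]
```

Whatever classifier is then trained on such labels can only reproduce the annotators' subjective judgments, which is the crux of the criticism the paper drew.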

Perceptions of facial attractiveness are more complex than the already knotty matter of subjective tastes, for they are also a proxy for the personality and social values of both the observed and the observer, Wu explained.

Like the first paper, this one has also come under fire, with many netizens criticizing Wu for disrespecting women.


Cause and effect?

Wu thinks it is unfair to say that their research is discriminatory without reading their papers.

He added that they conducted the research initially to overturn the belief, common in China, that a person's face is shaped by his or her character but were surprised by the results.

He stressed that there are differences between "scientific relations" and "cause and effect."

"That criminals tend to have these features just shows that there is some relation ... It is wrong to say someone can be born with a criminal face … we are not experts able to study the inner causal relationship, which may exist or not," Wu said.

There is a worldwide discussion on whether scientists should be more disciplined or can be forgiven for pursuing knowledge and truth in the higher interest of mankind.

"Should there be some restricted zones for research fellow as the artificial intelligence has developed to such a level? Frankly speaking, I don't know," he said.

Wang Shijin, deputy dean of iFlytek Research, a research center that develops AI speech and language technology, told the Global Times that the artificial intelligence technology we have today is actually "data intelligence" - machines that operate based on statistical data - and there is a long way to go before machines truly think or behave like humans.

"Ethical problems in the current phase of artificial technology have centered on how to input proper data or set the models for the machine to operate," Wang said.

Wang pointed to Microsoft's chatbots as an example: Tay, the English-language counterpart of its young-girl-imitating Chinese chatbot Xiaobing, was withdrawn after it started making "racist remarks," since it learned from Twitter, where improper words emerge frequently.

However, two professors, Wang Shaoyuan of the Shanxi University of Finance and Economics and Ren Xiaoming of Nankai University, published an article in China Social Sciences Today in September 2015 arguing that artificial intelligence and machine learning are never only about science, and that the sector needs input from various fields, including physics, psychology, philosophy and law, to build a "good" system for the healthy development of the technology.

Newspaper headline: Computer cop

