Healthy AI competition makes us all better off

Source: Global Times | Published: 2019/5/28 22:03:42


Editor's Note:

Artificial intelligence (AI) is one of the most widely discussed topics in the world today. Will AI replace humans and deplete employment opportunities? What should we do to stay competitive? Where is the so-called AI war between major powers heading? Global Times (GT) reporter Li Qingqing discussed these questions and more with Martin Ford (Ford) at the recent 16th Eurasian Media Forum in Almaty, Kazakhstan. Ford is a futurist, the founder of a Silicon Valley-based software development company, and the author of several bestselling books on AI.

GT: In your TED Talk in 2017, you mentioned the 1964 Triple Revolution report, which warned that the US was on the edge of economic and social upheaval because industrial automation was going to put millions out of work. Will this come true in the foreseeable future?

Ford: At a minimum, AI is going to make things more unequal. Some jobs will definitely disappear entirely. Another thing that will happen is deskilling: jobs that once required a lot of training can now be done by almost anyone once the technology arrives. Take taxi drivers. Taxi drivers in London were required to undergo extensive education. Now GPS and Google Maps have come along, and anyone can do the job.

I think this will be very disruptive over the next 10 or 15 years. For some groups of workers, there definitely could be unemployment. Similar fears have been raised many times in the past, and mass unemployment has not happened yet. But even though it hasn't happened for a long time, it eventually could. Things do not stay the same forever.

Elon Musk said that in a matter of years, there will be millions of fully autonomous robotic Teslas on the road. That is a good example of hype; I don't think it is going to happen on that timescale. In the short run, we tend to overestimate the pace of change. But in the long run, we underestimate it.

GT: AI benefits human society, but it also brings challenges and risks to our security and governance. Given AI's problems, including opaque decision-making processes and algorithmic bias, is our future world becoming more uncertain and unreliable?

Ford: There are two sides to AI. On one side, it is going to be a huge benefit to all of us. It can be a tool that brings medical breakthroughs that make human beings healthier, or scientific achievements that may help resolve climate change.

On the other side, there are things we need to worry about, such as AI systems that can be hacked and compromise our privacy. Another thing I often talk about is increasing inequality. The potential for bias arises because the data you train an algorithm on comes from people. For example, a company in the US stopped using its AI system to screen job applicants' resumes because the system was biased against women. That happened because the data the algorithm was trained on was itself biased. People are now working to fix the problem, and it is actually easier to fix bias in an algorithm than in human beings. We may not rely on algorithms entirely, but hopefully in the future an algorithm can act as a second opinion that checks us, and if we are too biased, the algorithm will detect that.

Recently, an organization named OpenAI built a very powerful AI fake-text generator. It can generate stories: give it one sentence, and it will automatically produce a whole narrative, and a very coherent one. Imagine a future in which machines generate floods of meaningless junk information that overwhelm our entire information system. These are all real risks, and these are the areas where we really need regulation.

GT: In April, the European Commission's High-Level Expert Group on AI published its Ethics Guidelines for Trustworthy AI. According to the guidelines, trustworthy AI should be lawful, ethical, and robust. Will the guidelines help regulate the AI industry, or will they kill AI's development potential?

Ford: I'm not too concerned about the guidelines. Guidelines are important, but they can be very fuzzy. The real question is whether you need binding regulations rather than guidelines. Specific AI applications will definitely need regulation, such as self-driving cars or AI in the medical field. We already regulate cars and doctors, so of course self-driving cars and medical AI need regulation too.

What would be dangerous, and might hold things back, is regulation of general, basic AI research. I hope that AI regulations will not be too broad but will be focused on specific areas. As long as we do it carefully, I don't think it will hold back development.

GT: Will major power competition be centered on AI in the future?

Ford: There will definitely be a race, because AI is an incredibly powerful technology. It can be used in many sectors, including business, commerce, the military, and security. There will be competition between Chinese and US businesses, between Google and Tencent, and also competition among the Chinese, Russian, and US governments.

I think China-US AI competition is inevitable and healthy. It is a good thing when businesses like Google, Facebook, Tencent, and Baidu compete with one another. But there is a more dystopian danger: an AI arms race. AI is not just a commercial technology; it can also be applied to the military sphere and national security. We know there is competition between the US, China, and Russia, and that could have negative effects.

GT: Some say that a country should focus on its own AI development and prevent technological leaks to other countries, while others say technology has no borders and countries should work together against AI's potential risks. Which point of view do you prefer?  

Ford: In general, we should all work together, and that is largely the way it is now. The top AI researchers in China and the US publish academic papers, and they can all read about each other's breakthroughs. The field is fairly open, and it is very important to have global conversations.

Ideally, I hope countries can work together, and that there will be platforms like the UN where we can discuss the application of these technologies and work toward appropriate regulations and guidelines. But everyone knows that is hard in the real world, especially when AI technologies have national security applications or can be used for military purposes. Then the situation becomes totally different.

GT: Will China and the US start a zero-sum AI war, or will the two influence and interact with each other in the area?

Ford: Hopefully China and the US can work together, and it will be a healthy competition that makes us all better off. Competition is good; it pushes people to innovate. You don't want it to become too destructive, and you don't want a military competition in which people are building weapons.

It is true that most AI research and development is concentrated in the US and China right now, but AI development is also happening in many other countries. For example, Google just opened a research center in Africa. The key point is that regardless of where the technology is actually developed, AI will become a utility, similar to electricity. Electricity is everywhere, and likewise you will see AI used throughout the world. Everyone could ultimately benefit from the advances that come with the technology. This is truly a global phenomenon.

GT: A Dutch peace group named PAX said that many countries are currently applying AI to warfare, building what PAX calls "lethal autonomous weapons," or simply "killer robots." How would an AI arms race, if it is really around the corner, affect the whole human community?

Ford: This is a scary scenario. People worry about autonomous weaponry because it might not be used only by the military; it could become more generally available and accessible to terrorists. That would be frightening. The debate over autonomous weapons exists because people are uncomfortable with the idea that a weapon could decide on its own to shoot someone, without human authorization. What is even scarier is that we are not talking about just one drone or one robot, but hundreds or thousands of them swarming together.

The barrier to using these technologies is quite low. It is not like nuclear weapons, where you have to be a country with the resources to build them. Bad actors can buy drones on Amazon or Alibaba, then modify them, such as by installing software that turns them into weapons. Anyone can do this, and that is one of the reasons people are concerned and want to put restrictions in place. Even if these kinds of weapons were used only by the military, they could increase the probability of war: the cost of war would appear lower, because the lives of soldiers would no longer be at risk. So it is important that we regulate these technologies.

GT: What do you think is the best-case scenario for AI in the future?

Ford: AI is going to have a huge impact on our lives in the next five, 10, 20 years, and beyond. I think it is the biggest force that will shape the future. The best scenario is that human beings find a way to leverage AI on behalf of everyone, so that it does not create significant inequality and everyone can benefit from it. That is why I have covered basic income in my writing and in my TED Talk.

Maybe you will need to give everyone an income to maintain social stability. This is also good for the economy, because it helps drive consumer spending and avoid an economic downturn.

It is a way of moderating inequality, because otherwise the wealthy will only get richer while average people might lose their incomes entirely and be left behind. If we apply such a strategy and make sure everyone benefits from AI, then the future could be very bright. We can then imagine technology that makes everyone wealthy, rather than a world in which people worry about survival. This is my vision for the future, but there is still a lot we have to do to make it happen. We need more policies, and we must address the risks.

