The debate over how to incorporate artificial intelligence into everyday life will only intensify as the machines get smarter. The questions are moral and practical, and they are also matters of safety. The first of Isaac Asimov’s Three Laws of Robotics, written down 75 years ago, states that “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” Conceived as science fiction, that law has serious connections to today’s real world. Carlson Wagonlit Travel vice president and chief data scientist Eric Tyree uses self-driving cars as an example of how AI can go wrong yet ultimately benefit the masses.


Where is artificial intelligence taking us as a society? It’s a fundamental question.

What are the moral implications for humanity of autonomous, artificially intelligent machines with the ability to make life-or-death decisions? Perhaps I can shed some light on the issue by looking at a particular technology with precisely this kind of power: self-driving cars.

The same two concerns arise time and again:
1) What if a self-driving car kills somebody?
2) What about the millions of truckers, taxi drivers and delivery people who will be out of jobs because of them?

The first question raises a moral quandary and the second raises issues around the ethics of how society manages change.

Let’s look at the first: death by self-driving car. An AI-enabled Uber car recently “saw” a woman crossing the road and, rather than stopping, hit her. The woman was killed. The vehicle’s sensors almost certainly detected her, but the system did not register her as a human being in its path and so failed to stop.

This kind of tragedy happens all the time on our roads. Thousands of people are killed each year by drivers who are not paying attention. They may see what is out there but they don’t register it. Why is it that when a human does this it rarely makes the news, but when a self-driving car does, it becomes a huge, global story?

The answer to this question was revealed in a conversation I had recently with my dad about automated cars. I have had this argument with many people before but I’ll single out my dad as he maintains a very high moral bar. He’s also an engineer who understands technological progress and what it means to build something as complex as a self-driving car. Dad argues that a self-driving car is a machine — AI or no AI — and if it kills one person, it’s faulty and therefore a hazard on the road.


I argue that self-driving cars may well soon kill fewer people per mile driven than human drivers do. If so, the moral course might be to push for them to be mandated.

Dad argues vociferously against this. He sees death by self-driving car as a preventable engineering problem, and therefore the deaths are immoral. I argue that he is, in effect, condemning more people to a premature death than necessary. We can’t resolve the argument because, in some ways, we are both right.

My disagreement with my dad gets right to the crux of one of the reasons we fear AI — it directly challenges our sense of humanity and morality. It reminds me of the school ethics challenge that gets thrown at you by a teacher: Is killing one person moral if it saves 10 lives? What if one of those lives is yours, or somebody you love? What if it’s an unknown life in a faraway land you have never heard of?

We don’t really like this question because all of the answers are terrifying. AI rather rudely forces the question upon us. It’s no longer an abstract schoolroom question; the future has arrived and we have to make a choice.

Can we accept that self-driving cars will kill innocent people on occasion even if they may soon kill fewer people per mile driven than humans? Many feel that death by AI in any quantity is a bridge too far.

The dilemma applies not only to death by AI but also to unemployment by AI, the other big impact we fear. In many U.S. states and other countries, driving is one of the most common occupations. Self-driving cars will put millions out of work, and for many the impact will be devastating. But is this not ultimately the story of progress? Bronze took out the stone workers, iron took out the bronze workers and steel mills killed the blacksmiths. Now robots are taking out the steel mill workers. AI will take out truckers, much of middle management and probably a large chunk of the legal department. Is AI not just the latest installment of human progress?

Is it moral to sacrifice a job or two for the good of progress? What if it’s a million jobs? What if it’s your job being laid atop the altar? Or that of somebody in a far-off land you have never heard of? What if it’s educated, middle-class, professional jobs that are being replaced?

A week prior to writing this I was assessing software that automates what data scientists do. Data scientists are the people building the AI bots. The AI revolution is already eating itself.

What we really fear is that AI demands choices: who lives, who dies, who stays employed, who does not. We don’t fear progress; it’s the process of progress that terrifies us, because we know it will condemn some even as it benefits many.

What’s the answer? At the end of the day, you have to make things right for those who lose out. Progress is only progress if the vast majority share in the benefits. The challenge of AI we should be contemplating is not only the casualties incurred in exchange for progress but also the larger social and economic question of how we repair the damage in a just way.

So the answer to the problem of artificial intelligence all boils down to the golden rule: do unto others as you would have them do unto you. Progress is going to happen, but let’s do it with compassion. It’s only human.


