Moral Robots: How Close Are We?

While we have grown accustomed to living and working in a world aided by “smart” devices, there is still a sense of suspicion when we talk about artificial intelligence (AI). Hollywood certainly hasn’t helped, with movies like “The Terminator” and “The Matrix,” but how close are we really to co-existing with autonomous, superintelligent robots?

The robots of today, at least, are not going to take over the world, said cognitive psychologist Gary Marcus, one of the panelists at Saturday’s World Science Festival event, “The Moral Math of Robots: Can Life and Death Decisions Be Coded?” To assuage any fears right off the bat, he encouraged audience members to watch a bloopers video from a recent DARPA Robotics Challenge. The Terminator, they are not. In fact, their fumbles are kind of endearing.

But despite their current inadequacies, programming and robotics are making great strides in certain areas, and the panelists agreed that now, before things get too advanced, is the time to anticipate and address ethical worries.

The discussion largely revolved around the autonomous nature of future robots, such as driverless cars, eldercare robots, or autonomous lethal robots for military use. How can we ensure that these machines’ actions align with our value system?

At present, we can’t.

We don’t understand how moral processing works in humans, said roboticist, cognitive scientist, and philosopher Matthias Scheutz, putting things into perspective.

US Col. Linell Letendre, an Air Force officer and law professor, emphasized the need to move slower and develop specific legal guidelines for AI, keeping in mind treaties and international law. She recommended treating these learning systems as children, only slowly expanding their environment.

We’ve seen how autonomous machines can excel in an environment with fixed rules, such as Watson on “Jeopardy!” and AlphaGo in the game of Go, but when faced with unforeseen factors, they are not yet prepared to make sound judgments. A driverless car may have a faster reaction time, be less distractible, and better sense objects behind it, but it can’t understand the hand signals of a traffic cop, for example.

It’s easy to get bogged down in the obstacles and fears of AI, but one must remember that these systems could offer many benefits, too, said Marcus, reminding us that people are not good at doing a lot of the things we’re concerned about. “It’s not a no-brainer that people will be better than the robots,” he remarked.

In fact, one thing that humans struggle with at times is living within our own value system. The panel spoke about the recent sabotage of Microsoft’s AI chatbot “Tay.” Designed to engage with people on Twitter and learn through conversation, Tay was corrupted by users in a single day, to the point where it was spouting Nazi propaganda and other hate messages. The maliciousness of the outside world was underestimated, said Marcus.

People’s influence on AI and the potential danger of hackers, particularly with AI weapons, are very legitimate concerns. But let’s get back to what we CAN look forward to.

Robots could be of benefit in areas such as toxic mining conditions, saving the health of workers; in eldercare, caring for our increasingly aging population; and on the road, preventing accidents. According to bioethicist Wendell Wallach, some argue that driverless cars will ultimately lead to 93 percent fewer fatalities (though first we’ll need to navigate that awkward time when both human drivers and driverless cars are on the road together).

To achieve this human/robot blended future, Letendre emphasized the need to work “across disciplines,” ensuring that everyone uses the same language and definitions when building AIs. It seems we are on our way: Wallach, who at times seemed the most wary in the group, said communication about AI has dramatically improved in the last year and a half, compared with the previous 12 years.

So we’ll just have to wait and see what happens, but in the meantime, it’s good to know that these discussions are taking place not only in academic centers and corporations, but also in public forums such as the World Science Festival.

Stay tuned for our coverage of the Festival event, “My Neurons, My Self,” on consciousness and self-awareness, which featured Dana Alliance member and cognitive neuroscientist Martha Farah.

You can also watch many of the recorded events online on the Festival website. [Update: Full video of this event now available here.]

– Ann L. Whitman
