The International Neuroethics Society opened its annual meeting last night at AAAS in DC with a thought-provoking public program on robots in society. Though the title conjures up images from the Terminator movies (at least for me), the two speakers avoided wading too far into a futuristic, science fiction universe, and instead focused on the impact of robots in warfare and healthcare, and the ethical considerations involved.
The first speaker, Ronald C. Arkin, Ph.D., Regents’ Professor in the School of Interactive Computing and director of the Mobile Robot Laboratory at the Georgia Institute of Technology, argued that lethal autonomous robots should be used in warfare to reduce friendly casualties and civilian deaths.
A self-identified pessimist about human behavior in wartime, Arkin spoke about the potential humanitarian benefit of using robots in high-stress military environments. “We are human and we are fallible,” he said. War places people in unimaginable situations where mental and physical health can play a role in decision-making, and can sometimes lead to regrettable events and atrocities, he explained. Arkin posits that without emotions to cloud their judgment, autonomous systems could perform more ethically in these situations, ultimately leading to fewer non-combatant deaths. (See his paper on the topic.) He outlined limited circumstances where robots might be used: to spearhead missions (e.g., room clearing), in interstate warfare (not counterinsurgency), and alongside soldiers. But, he said, “I hope these systems never, ever, have to be used.”
Of course, lethal autonomous robots are not without their skeptics, and Arkin devoted time to both the pros and the cons of their use.

Pros:
- They can act conservatively
- They have better sensors for observation than humans
- They don’t have emotions to cloud their judgment
- They can integrate more information from more sources, and faster, than humans
- They can reduce the number of soldiers needed
- They can serve for extended periods of time

Cons:
- Who’s to blame if something goes wrong?
- They don’t have moral agency
- They provide risk-free warfare
- They could run amok (a la Terminator)
- They are not tamper-proof
- They could have a bad effect on squad cohesion
- Mission creep (using the robots beyond the scope of original intent)
Robots in healthcare
Presenting autonomous robots in a very different context, Goldie Nejat, Ph.D., Canada Research Chair in Robots for Society at the University of Toronto, described her research on socially assistive robots for the elderly.
We frequently hear, in this country and abroad, of the rising epidemic of dementia and Alzheimer’s disease among our aging populations, and of the social and financial costs associated with their care. Though Americans are living longer than ever, some cognitive abilities decline over time. How can we improve quality of life and provide basic day-to-day assistance for those who can no longer care for themselves? (These questions were also tackled in a 2012 World Science Festival event.)
Nejat and her colleagues have developed a few robot prototypes that could provide positive social engagement for these individuals and also help with daily tasks, such as preparing and eating meals, leading to an overall improvement in quality of life. The studies she described focused on older adults with a range of memory issues who may need prompting to remember to eat, or who may benefit from cognitive stimulation through playing games.
Brian, a non-contact, socially assistive robot, is equipped to show basic emotions through facial expressions, speech, and gestures, making “him” more relatable to humans. These human traits let Nejat’s team study whether a robot armed with expressions can be used in cognitively stimulating interactions.
Using Brian, Nejat studies interactions during game playing and meal eating in long-term care facilities. In a scenario where a user is asked to play the matching card game Memory, she looks at factors such as how long Brian can keep a player engaged in the game versus how subjects fare on their own, with added distractions on the table. As a social robot, Brian is equipped to gauge and match the cognitive abilities of the user, and to adapt to the user’s emotions, offering congratulations and encouragement when applicable.
For meal eating, Nejat said she places Brian with individuals for a one-on-one lunch. The interactions include both social engagement and meal prompting, when necessary. To measure success, Nejat looks at factors such as how often the subject engages with Brian, how much of the meal is eaten, and how users rate their feelings of trust toward the robot.
Though the number of subjects in the studies Nejat described has been relatively small, she reports positive outcomes thus far.
Despite the fact that one class of robot is intended for lethal use and the other for extending life or improving its quality, it turns out people most fear the idea of robots in healthcare or childcare settings (at least in the European Union). One of the presenters pointed to a public opinion survey on robots published by the E.U. in 2012, which gathered responses from 27 countries. When asked in which areas robots should be banned, an overwhelming 60 percent responded “Care of children, elderly, and the disabled.”
Functionality and cost aside, it appears scientists will need to change public perception of robots before they’ll be trusted and accepted into mainstream life.
[Update: Video of the full event is now available for viewing on YouTube: http://j.mp/1HDdpE3]
–Ann L. Whitman