Do Unto Robots As You…
by Kim Bellard, January 23, 2019
It was very clever of The New York Times to feature two diametrically opposed perspectives on robots on the same day: Do You Take This Robot… and Why Do We Hurt Robots? Together they illustrate that, as robots become more like humans in their abilities and even appearance, we're capable of treating them just as well, and just as badly, as we treat each other.
We’re going to have robots in our healthcare system (Global Market Insights forecasts that assistive healthcare robots could be a $1.2 billion market by 2024), in our workplaces, and in our homes. How to treat them is something we’re going to have to figure out.
Written by Alex Williams, Do You Take This Robot… focuses on people actually falling in love with (or at least preferring to be involved with) robots. The term for it is “digisexual.”
As Professor Neil McArthur, who studies such things, explained to Discover last year: “We use the term ‘digisexuals’ to describe people who, mostly as a result of these more intense and immersive new technologies, come to prefer sexual experiences that use them, who don’t necessarily feel the need to involve a human partner, and who define their sexual identity in terms of their use of these technologies.”
And it’s not just about sex. There are a number of companion robots available or in the pipeline, such as:
- Ubtech’s Walker, which the company describes as “your agile smart companion — an intelligent, bipedal humanoid robot that aims to one day be an indispensable part of your family.”
- Washington State University’s more prosaically named Robot Activity Support System (RAS), aimed at helping people age in place.
- Toyota’s T-HR3, part of Toyota’s drive to put a robot in every home, which echoes Bill Gates’ 1980s vision for PCs.
- Intuition Robotics’ “social robot” ElliQ.
- A number of cute robot pets, such as Zoetic’s Kiki or Sony’s Aibo.
All that sounds very helpful, so why, as Jonah Engel Bromwich describes in Why Do We Hurt Robots?, do we have situations like these: “A hitchhiking robot was beheaded in Philadelphia. A security robot was punched to the ground in Silicon Valley. Another security bot, in San Francisco, was covered in a tarp and smeared with barbecue sauce… In a mall in Osaka, Japan, three boys beat a humanoid robot with all their strength. In Moscow, a man attacked a teaching robot named Alantim with a baseball bat, kicking it to the ground, while the robot pleaded for help.”
Cognitive psychologist Agnieszka Wykowska told Mr. Bromwich that we hurt robots in much the same way we hurt each other. She noted: “So you probably very easily engage in this psychological mechanism of social ostracism because it’s an out-group member. That’s something to discuss: the dehumanization of robots even though they’re not humans.”
Robots have already gotten married, been granted citizenship, and may be granted civil rights sooner than we expect. If corporations can be “people,” we had better expect that robots will be, too.
We seem to assume that robots will necessarily obey Asimov’s Three Laws of Robotics, designed to ensure that robots could do no harm to humans. But we often forget that even in Asimov’s universe, where the laws applied, humans weren’t always “safe” from robots. More importantly, that was a fictional universe.
In our universe, though, self-driving cars can kill people, factory robots can spray people with bear repellent, and robots can learn to defend themselves. So if we think we can treat robots however we like, we may find ourselves on the other end of that same treatment.
Increasingly, our health is going to depend on how well robots (and other AI) treat us. It would be nice (not to mention in our best interests) if we treated them at least as considerately in return.
This post is an abridged version of the original on Kim Bellard’s blog.