News
Would 'Deviant' Sex Robots Violate Asimov's Law of Robotics?
Patrick Lin
Like it or not, sex robots are already here, and someday they might hurt you, if you ask nicely. As they cater to an ever-increasing range of tastes, some folks predict BDSM types (bondage, discipline, and sadomasochism) in the future bedroom.
But, wait, you might ask: wouldn’t these “deviant” or non-normative types violate the basic robot-ethics principle to not hurt people?
Sci-fi writer Isaac Asimov gave us the First Law of Robotics, which is: a robot may not injure a human being or, through inaction, allow a human being to come to harm. But sex-bots that spank, whip, and tie people up would seem to do exactly that.
Though it might seem silly, this discussion is actually relevant to AI and robotics in many other industries. What constitutes harm will be important for, say, medical and caretaking robots that may be instructed to “do no harm.”
Without that clarity, there’s no hope to translate the First Law into programming code that a robot or AI can properly follow.
What is harm? A conceptual analysis.
As the First Law commands, a robot is prohibited from acting in a way that harms a human. For instance, whipping a person, even if lightly, might hurt them or cause pain; and pain usually is an indication of a harm or injury. Tying a person up tends to make them feel vulnerable and very uneasy, so we usually consider that to be a negative psychological effect and therefore a harm.
But this is true only if we understand “harm” in a naive, overly simplistic way. Sometimes, the more important meaning is net-harm. For example, it might be painful when a child must have a cavity drilled out or take some awful medicine—the kid might cry inconsolably and even say that she hates you—but we understand that this is for the child’s own good: in the long term, the benefits far outweigh the initial cost. So, we wouldn’t say that taking the kid to a dentist or doctor is “harming” her. We’re actually trying to save her from a greater harm.
This is easy enough for us to understand, but some obvious concepts are notoriously hard to reduce to lines of code. For one thing, determining harm may require that we consider a huge range of future effects in order to tally up the net result. This is an infamous problem for consequentialism, the moral theory that treats ethics as a math problem.
Thus, any harm inflicted by a BDSM robot is presumably welcomed, because it’s outweighed by a greater pleasure experienced by the person. What’s also at play is the concept of “wrongful harm”—a harm that’s suffered unwillingly and inflicted without justification.
The difference between a wrong and a harm is subtle: if I snuck a peek at your diary without your permission or knowledge, and I’m not using that information against you, then it’s hard to say that you suffered a harm. You might even self-report that everything is fine and unchanged from the moment before. Nonetheless, you were still wronged—I violated your right to privacy, even if you didn’t know it. Had I asked, you wouldn’t have given me permission to look.
Now, a person can also be harmed without being wronged: if we were boxing, and I knocked your tooth out with an ordinary punch, that’s certainly a harm, but I wasn’t wrong to do it—it was within the bounds of boxing’s rules, and so you couldn’t plausibly sue me or have me arrested. You had agreed to box me, and you understood that boxing carries a risk of harm. Thus, you suffered the harm willingly, even if you preferred not to.
Back to robots: a BDSM robot would seem to inflict harm on you, but if you had requested this, then it wasn’t wrongfully done. If the robot were to take it too far, despite your protests and without good reason (as a parent of a child with a cavity might have), then it’s wrongfully harming you because it’s violating your autonomy or wishes. In fact, it’d be doubly wrong, since it violates Asimov’s Second Law of Robotics: a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
But assuming the robot is doing what you want, the pain inflicted is harm only in a technical and temporary sense, not harm in the commonsense way that the First Law should be understood. A computer, of course, can’t read our minds to figure out what we really mean—it can only follow its programming. Ethics is often too squishy to lay out as a precise decision-making procedure, especially given the countless variables and variations around a particular action or intent. And that’s exactly what gives rise to drama in Asimov’s stories.
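To make that programming gap concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the Action fields (causes_pain, consented, expected_net_benefit) and the two rule functions are hypothetical placeholders, not anything a real robot runs. The point is only to show how a naive reading of the First Law forbids the dentist case while a consent- and net-harm-aware reading permits it.

```python
# A hypothetical sketch (not a real robot API) contrasting a naive reading of
# the First Law with one that accounts for consent and net harm.

from dataclasses import dataclass

@dataclass
class Action:
    causes_pain: bool            # does the action hurt in the moment?
    consented: bool              # did the person knowingly request or accept it?
    expected_net_benefit: float  # estimated long-term benefit minus harm

def naive_first_law(action: Action) -> bool:
    """Naive reading: refuse anything that causes pain."""
    return not action.causes_pain

def revised_first_law(action: Action) -> bool:
    """Revised reading: allow momentary pain if it was consented to and
    the expected net effect is not harmful (e.g., the dentist case)."""
    if not action.causes_pain:
        return True
    return action.consented and action.expected_net_benefit >= 0

# The dentist drilling a cavity: painful now, beneficial overall.
dentist = Action(causes_pain=True, consented=True, expected_net_benefit=5.0)
print(naive_first_law(dentist))    # False: the naive rule forbids it
print(revised_first_law(dentist))  # True: the revised rule permits it
```

Even this toy version just relocates the hard questions into its inputs: someone still has to decide what counts as consent and how to estimate net benefit, which is precisely where the squishiness lies.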
What about the Zeroth Law?
OK, maybe a BDSM robot could, in principle, comply with Asimov’s First Law to not harm humans, if the directive is properly coded (and the machine is capable enough to pick up our social cues, an entirely separate issue). Machine learning could help an AI grasp nuanced, ineffable concepts, such as harm, without our explicitly unpacking them; but there’d still be the problem of verifying what the AI has learned, which requires a firm understanding of the concept on the human side, at least.
But what about Asimov’s subtly different “Zeroth Law,” which supersedes the First Law? This Law focuses on the population scale rather than the individual scale, stating: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
That becomes a much different conversation.
It could be that sex robots in general, and not just BDSM robots, would promote certain human desires that should not be indulged. For instance, if you think sex robots objectify women and even gamify relationships—some sex-bots require a certain sequence of moves or foreplay to get them to comply, a Konami code of sorts—then that might be bad for other people and humanity at large, even if not obviously harmful to the individual user. If sex robots become so compelling that a lot of people no longer have or need human partners, then that can also harm humanity.
On the other hand, many people are unable to form relationships, especially intimate ones. So, it might be better for them, and for humanity, if these folks had options, even if with robots (which may just be glorified sex toys). It’s also possible that sex-bots can help teach users about consent in their human relationships.
But it gets super-tricky when you consider that sex-bots have already been made to look like children. Is that a desire that should be indulged? Strong intuitions may point to “no,” but there’s not a lot of research on therapy for pedophiles. It’s possible that these kinds of robots could be enough to distract would-be predators from targeting actual humans. Or it could go the other way, fueling their dark desires and encouraging them to act those desires out in the real world.
The analysis of how various types of sex-bots may or may not comply with the Zeroth Law is much more complicated than we can offer here. Entire reports and books grapple with the ethics of sex robots right now, and this article is focused primarily on the First Law, which, again, has relevance far beyond just sex machines.
But the sex industry has long been a harbinger of things to come, and so it deserves our attention. After all, it pioneered technologies from photography to film to the Internet to virtual reality and more. Not just new media: it also spurred the development of e-commerce—from online credit card purchases to the rise of cryptocurrencies—because nothing sells like the “world’s oldest profession.”
Back to programming, though the Laws of Robotics are really only a literary device, they’re still an important touchstone in robot ethics, and a tribute must be paid. They’re essentially a meme, and memes are very hard to dislodge once they’ve taken root. So, whether or not Asimov’s Laws are the right principles for AI and robotics, they’re a reasonable, maybe irresistible, starting point for a much larger conversation.
Originally published in the Oct. 15, 2018 issue of Forbes.
An Interview with Prof. Patrick Lin
(California Polytechnic State University, San Luis Obispo, CA)
David Černý: Recent years have brought rapid development in the field of artificial intelligence and robotics. Thinking machines already outperform human intelligence in different domains, and many researchers believe that these machines will reach the level of general human intelligence relatively soon. Then it might be just one more step to creating artificial superintelligence. Today, AI-driven devices and robots, from social and medical robots to autonomous vehicles and drones, play an increasingly significant role in our private and social lives and have already become omnipresent. Patrick, what do you think an ethicist can offer to modern people surrounded by sophisticated technologies? Do the groundbreaking developments in AI and robotics raise specifically ethical questions, or should we put our trust in the hands of scientists and expect them to solve what may seem to belong to the province of ethics?
Patrick Lin: Right, our world is increasingly ruled by technology, but we still have a role in determining our own future. If we’re not deliberate and thoughtful, then we’re “leaving it up to the market” on how technology is developed and used. But the forces that drive the market—such as efficiency, pricing, branding, and so on—are not necessarily the same forces that promote social responsibility or “a good life.”
For example, if we’re not careful, we could see massive labor disruptions by a robotic workforce, and this would be dangerous if there’s no plan to retrain displaced human workers or otherwise take care of their daily needs. Some jobs might be too important to hand over to AI and machines, such as judges and soldiers or even teachers and caregivers. There might be benefits for having robotic workers, but we haven’t had enough conversation about what might be lost.
Related to those concerns is the fact that even AI decisions today can be hard to understand. Most of us don’t know how they work or why they arrived at a decision—it’s a black box. Without that transparency, it’s hard to trust these systems, which means we should be asking ourselves if we should deploy them at all. Programmers themselves also often don’t fully understand how and why machine learning works, which creates a responsibility gap when things go wrong—and they will go wrong, as the entire history of technology shows.
So, the ethical questions raised by emerging technologies are very broad, ranging from the design process to use-cases to unintended impacts and more. This is something like verification and validation in engineering, which ask respectively: did we build it correctly (was the technology built to the specifications) and, at a more basic level, did we build the right thing (was it actually what’s needed)?
Answering these questions is too big a responsibility to impose or hand over to any particular group of people, as those answers can reshape an entire society. Engineers may be helpful people in general who want to make the world a better place, but that doesn’t mean they understand the nuances of ethics, especially as it relates to programming AI and robots that interact with people or make critical decisions about their lives. Bias can all too easily slip in at many points in the design process, such as in the data used for machine learning.
I wouldn’t hand over this responsibility to only philosophers, either. But philosophers and ethicists must have a seat at the table, and I’m encouraged that this seems to be happening more and more, as government and industry meetings recognize the need for actual expertise in ethics and other social issues.
DČ: Well, it might seem at this point that allowing philosophers and ethicists to have a seat at the table would solve the problem of how to ethically design algorithms for AI and robots. But philosophers are in general better at raising questions than answering them. Moreover, there is widespread disagreement concerning the question “What ethical system is right?” or, more specifically, “What code of moral rules should AI and robots incorporate in their algorithms?” Imagine, for example, autonomous vehicles and possible crash situations. A utilitarian would suggest that AVs should maximize utility; a more deontologically inclined ethicist, on the other hand, would tend to impose some constraints on AVs’ behavior. Could this basic disagreement threaten our best efforts at constructing ethically sound and well-behaving AI and robots?
PL: That’s a fair question, but I don’t think it’s a real problem, only an academic one. I’m not saying that ethics is only academic—far from it, it affects real lives every day. But for some reason, in academic philosophy, there’s the mentality that ethics is something like the rings of power from The Lord of the Rings, that there’s only One Ring to rule them all. Philosophers might say that we need to choose only one theory as the right one, because that’s what’s needed to be principled and intellectually honest. After all, to accept both consequentialism and deontological ethics seems to be accepting a contradiction: that results are the only things that matter and, at the same time, that results don’t matter at all.
But I think all that is just pedantic. Instead, I think there’s something very useful and right in all of the major ethical theories. Sometimes they may be contradictory, but that’s okay—life is often a balancing test among competing core values, such as liberty versus security. So, why should ethics be a simple, straightforward formula? It looks to be more of an art than a science. So, I would favor what I’d call a hybrid approach that draws from the best of these theories.
The process is something like this: run an ethical question through one theory, and then run it again through another, and continue to do this for the other theories you want to check it against. Think of this as a malware scan: no malware app is perfect, so it’s useful to run several scans with different apps to cover their gaps. Or think of it as a courtroom of several different judges who are experts in different areas.
Ideally, our ethical theories would converge on the same answer to any given question. But when they don’t, that’s where the hard work—the art—of balancing the different considerations is required. The point of ethics, I believe, isn’t just getting an answer to whether action x is ethical or not; ethics is a process. It’s about explaining how you’ve balanced a given bundle of interests and considerations. These considerations don’t just include the action and its effects; they also include one’s intentions, how well the act or intention promotes a good character, and so on.
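As an illustration of that “malware scan” process, here is a minimal hypothetical sketch in Python. The three theory checkers and the case fields (net_benefit, violates_duty, expresses_good_character) are invented stand-ins for much richer judgments; the sketch only shows the structure of running one question past several theories and flagging divergence for human balancing.

```python
# A hypothetical sketch of the "multiple scanners" idea: run the same case
# through several ethical-theory checkers and see whether they converge.
# The checkers and case fields are invented placeholders, not a real procedure.

from typing import Callable, Dict

Verdict = str  # "permissible" or "impermissible"

def consequentialist(case: Dict) -> Verdict:
    # Looks only at outcomes.
    return "permissible" if case.get("net_benefit", 0) >= 0 else "impermissible"

def deontologist(case: Dict) -> Verdict:
    # Looks only at duties and constraints.
    return "impermissible" if case.get("violates_duty", False) else "permissible"

def virtue_ethicist(case: Dict) -> Verdict:
    # Looks at what the act says about character.
    return "permissible" if case.get("expresses_good_character", True) else "impermissible"

THEORIES: Dict[str, Callable[[Dict], Verdict]] = {
    "consequentialism": consequentialist,
    "deontology": deontologist,
    "virtue ethics": virtue_ethicist,
}

def hybrid_review(case: Dict) -> str:
    """Collect each theory's verdict; if they converge, report the answer,
    otherwise flag the case for human deliberation (the 'art' part)."""
    verdicts = {name: theory(case) for name, theory in THEORIES.items()}
    if len(set(verdicts.values())) == 1:
        return f"converged: {next(iter(verdicts.values()))}"
    return f"diverged, needs human balancing: {verdicts}"

print(hybrid_review({"net_benefit": 3, "violates_duty": True}))
```

When the verdicts diverge, the output isn’t an answer but a prompt for the balancing work described above, which is the part that resists being reduced to a formula.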
Given that approach, I don’t think the answer to any particular ethical dilemma—if they’re truly dilemmas—is what’s important. That’s not the real prize we’re after. Instead, what matters is ensuring that a technology developer’s decision to do x is informed by a thoughtful process, and this process for different organizations may lead to different answers. And that might be okay, because there may be room for autonomous vehicles that behave differently, just like traffic today still moves ahead despite different driving styles. Some robot cars might drive faster and take more chances than others, and that may be fine if they’re all within the same ethical tolerances.
Anyway, arriving at a definitive answer to a genuine dilemma is a fool’s errand: by definition, there’s no consensus on the “right answer.” That philosophers haven’t been able to “solve” or converge on a single answer, or a single ethical theory, after thousands of years isn’t a bug; it’s a feature of a dilemma. Therefore, there’s no way that Tesla, Waymo, Ford, Zoox, or other car manufacturers—who have no particular expertise in ethics—can solve the problem in the next few years, or even a hundred years.
This isn’t to say that AV manufacturers can do whatever they like. If their products are involved in an accident, those companies will need to defend themselves and explain their thinking about the design elements implicated, e.g., are they prioritizing certain lives or things over others? And this can’t just be a post hoc rationalization, but to be an effective defense, they’ll need to proactively think through these questions to show that they’ve done their due diligence before their products have rolled out on the streets.
Back to your question, philosophers’ disagreements on how best to program AI are not going to hold up technology, because we’re not the ones developing it. At best, philosophers are advising technology developers who will release a product one way or another, with or without our input. Manufacturers won’t wait for us to come to an agreement. Again, this is to say that our basic disagreement on ethics isn’t a real obstacle to developing AI and robotics.
However, we can help pave a smoother, more responsible path for new technologies, if we can give sensible guidance to both industry and policymakers. Often, they’re looking for exactly this kind of guidance when the law is unclear—they need a moral North Star to follow when they’re lost in uncharted seas. Philosophers can help anticipate problems, including those that might lead to future lawsuits, and suggest best-practices for addressing those problems. We can help guide technology’s use and evolution, as opposed to leaving it to market forces that care only about efficiency and profit. In that regard, philosophers and ethicists have an absolutely crucial role in the future of emerging technologies.
DČ: Many philosophers and scientists warn us against potential dangers connected with AI and rapidly progressing automation. If humanity succeeds in creating a general artificial (super)intelligence, its behavior may well be very unpredictable and even hostile towards any possible threats to its very existence, human beings included (see, e.g., Nick Bostrom's Superintelligence). Illah Reza Nourbakhsh, professor of robotics at Carnegie Mellon University, describes in his book Robot Futures a possible future scenario in which "democracy is effectively displaced by universal remote control through automatically customized new media." AI and AI-embedded robots may either turn out to be a blessing for our civilization or be our final invention. What is your opinion, Patrick, about these matters? Are you more optimistic or pessimistic about a future filled with AI and robots? Do you think that we will be able to implement ethical behavior in AI in a way that precludes (any) inimical attitudes and behavior towards human beings?
PL: This is a very hard question for me, because I don’t consider myself a futurist, and I try not to make faraway predictions. It’s not so much that I can’t, but that no one has a good crystal ball, really. If they say they do, then they’re trying to sell you something.
With AI and robotics, we’re venturing into unknown territory that has very little historical precedent, despite loose comparisons to the Industrial Revolution and so on. Whatever optimism or pessimism one might have, I’d say that all bets are off when it comes to this area, which is emerging and evolving too quickly for thoughtful policy to keep up, especially as governments around the world don’t seem to have the will to develop coordinated, cooperative policies. This is different from, say, climate change, about which we can make reasonable predictions, and all of them seem bad. Back to the technological race, Martin Luther King Jr. observed, “Our scientific power has outrun our spiritual power. We have guided missiles and misguided men.” Fifty years later, he’s still right: ethics, law, and policy need a turbo-boost to catch up to our technologies that relentlessly march forward.
So, without much optimism or pessimism, I’d tend to agree with the wisdom of Winston Churchill, who said: “It is a mistake to try to look too far ahead. The chain of destiny can only be grasped one link at a time.” This doesn’t mean we can’t try to forecast distant scenarios and plan for them, but we need to be very careful to avoid the anchoring effect and not lock ourselves into certain paths. Related to that is the bias of wish-fulfillment, so we need to be very self-aware about what we wish for.
The most important step is always the next one. Every step must be guided by real science and technology, as well as sober analysis—not science fiction or unjustified hopes and fears. We need a diversity of experts, who bring in new perspectives and can see things that we might miss, to give us sure footing throughout the journey. That’s why I’m excited to see the creation of the Karel Čapek Center, which will help energize the field, as well as reach new experts and stakeholders to expand the conversation globally, as it must be.
DČ: Patrick, thank you for the interview, and I’m happy that you have agreed to become part of our team at the Center.
Patrick Lin, The Ethical Dilemma of Self-Driving Cars
The mere instinctual reactions of human drivers could soon be replaced by the principled decisions of robot cars, but which principles should the latter follow?