{"id":10392,"date":"2011-10-26T00:00:00","date_gmt":"2011-10-26T04:00:00","guid":{"rendered":"http:\/\/localhost\/thenewatlantis.com\/publications\/machine-morality-and-human-responsibility"},"modified":"2020-09-28T22:50:56","modified_gmt":"2020-09-29T02:50:56","slug":"machine-morality-and-human-responsibility","status":"publish","type":"article","link":"https:\/\/www.thenewatlantis.com\/publications\/machine-morality-and-human-responsibility","title":{"rendered":"Machine Morality and Human Responsibility"},"content":{"rendered":"\n<p class=\"has-drop-cap\">This year marks the ninetieth anniversary of the first performance of the play from which we get the term \u201crobot.\u201d The Czech playwright Karel \u010capek\u2019s <i><strong><a href=\"http:\/\/www.amazon.com\/gp\/product\/0945774079?ie=UTF8&amp;tag=the-new-atlantis-20&amp;linkCode=xm2&amp;camp=1789&amp;creativeASIN=0945774079\">R.U.R.<\/a><\/strong><\/i> premiered in Prague on January 25, 1921. Physically, \u010capek\u2019s robots were not the kind of things to which we now apply the term: they were biological rather than mechanical, and humanlike in appearance. But their behavior should be familiar from its echoes in later science fiction \u2014 for \u010capek\u2019s robots ultimately bring about the destruction of the human race.<\/p>\n\n\n\n<p>Before <i>R.U.R.<\/i>, artificially created anthropoids, like Frankenstein\u2019s monster or modern versions of the <a title=\"The Golem and the Limits of Artifice\" href=\"\/publications\/the-golem-and-the-limits-of-artifice\">Jewish legend of the golem<\/a>, might have acted destructively on a small scale; but \u010capek seems to have been the first to see robots as an extension of the Industrial Revolution, and hence to grant them a reach capable of global transformation. 
Though his robots are closer to what we now might call androids, only a pedant would refuse \u010capek honors as the father of the robot apocalypse.<\/p>\n\n\n\n<p>Today, some futurists are attempting to take seriously the question of how to <i>avoid<\/i> a robot apocalypse. They believe that artificial intelligence (AI) and autonomous robots will play an ever-increasing role as servants of humanity. In the near term, robots will care for the ill and aged, while AI will monitor our streets for traffic and crime. In the far term, robots will become responsible for optimizing and controlling the flows of money, energy, goods, and services, for conceiving of and carrying out new technological innovations, for strategizing and planning military defenses, and so forth \u2014 in short, for taking over the most challenging and difficult areas of human affairs. As dependent as we already are on machines, they believe, we should and must expect to be much more dependent on machine intelligence in the future. So we will want to be very sure that the decisions being made ostensibly on our behalf are in fact conducive to our well-being. Machines that are both autonomous and beneficent will require some kind of moral framework to guide their activities. In an age of robots, we will be as ever before \u2014 or perhaps as never before \u2014 stuck with morality.<\/p>\n\n\n\n<p>It should be noted, of course, that the type of artificial intelligence of interest to \u010capek and today\u2019s writers \u2014 that is, truly sentient artificial intelligence \u2014 remains a dream, and perhaps an impossible dream. But if it is possible, the stakes of getting it right are serious enough that the issue demands to be taken somewhat seriously, even at this hypothetical stage. 
Though one might expect that nearly a century\u2019s time to contemplate these questions would have yielded some store of wisdom, it turns out that \u010capek\u2019s work shows a much greater insight than the work of today\u2019s authors \u2014 which in comparison exhibits a narrow definition of the threat posed to human well-being by autonomous robots. Indeed, \u010capek challenges the very aspiration to create robots to spare ourselves all work, forcing us to ask the most obvious question overlooked by today\u2019s authors: Can any good come from making robots more responsible so that we can be less responsible?<\/p>\n\n\n<div class=\"lazyblock-section-break-ZJv71p wp-block-lazyblock-section-break\"><div class=\"block-tna-section-break mt-12 pt-2 mb-6\">\r\n  <div class=\"mb-12 pb-2 flex justify-center\">\r\n    <svg class=\"fill-current\" height=\"1\" width=\"91\" viewBox=\"0 0 91 1\">\r\n      <path d=\"M91 .5L62.706 1H28.447L0 .5 28.447 0h34.259L91 .5z\"\/>\r\n    <\/svg>\r\n  <\/div>\r\n\t<h5 class=\"leading-none font-callunasans font-bold text-center text-almost-black text-lg\">\r\n\t\tMoral Machines Today\t<\/h5>\r\n<\/div><\/div>\n\n\n<p class=\"has-drop-cap\">There is a great irony in the fact that one of the leading edges of scientific and technological development, represented by robotics and AI, is at last coming to see the importance of ethics; yet it is hardly a surprise if it should not yet see that importance clearly or broadly. Hans Jonas <a href=\"http:\/\/www.amazon.com\/gp\/product\/0982706790?ie=UTF8&amp;tag=the-new-atlantis-20&amp;linkCode=xm2&amp;camp=1789&amp;creativeASIN=0982706790\">noted nearly four decades ago<\/a> that the developments in science and technology that have so greatly increased human power in the world have \u201cby a necessary complementarity eroded the foundations from which norms could be derived&#8230;. 
The very nature of the age which cries out for an ethical theory makes it suspiciously look like a fool\u2019s errand.\u201d<\/p>\n\n\n\n<p>Advocates of moral machines, or \u201cFriendly AI,\u201d as it is sometimes called, evince at least some awareness that they face an uphill battle. For one, their quest to make machines moral has not yet caught on broadly among those actually <i>building<\/i> the robots and developing artificial intelligence. Moreover, as Friendly AI researcher Eliezer S. Yudkowsky seems aware, any effort to articulate moral boundaries \u2014 especially in explicitly ethical terms \u2014 will inevitably rouse the suspicions of the moral relativism that, as Jonas suggests, is so ingrained in the scientific-technological enterprise. Among the first questions Yudkowsky presents to himself in the \u201cFrequently Asked Questions\u201d section of his online book <i><strong><a href=\"http:\/\/singinst.org\/upload\/CFAI.html\">Creating Friendly AI<\/a><\/strong><\/i> (2001) are, \u201cIsn\u2019t all morality relative?\u201d and \u201cWho are you to decide what \u2018Friendliness\u2019 is?\u201d In other words, won\u2019t moral machines have to be relativists too?<\/p>\n\n\n\n<p>Fortunately, an initially simple response is available to assuage these doubts: everyone at least agrees that we should avoid apocalypse. Moral judgment may in principle remain relative, but Yudkowsky anticipates that particular wills can at least coalesce on this particular point, which means that \u201cthe Friendship programmers have at least one definite target to aim for.\u201d<\/p>\n\n\n\n<p>But while \u201cdon\u2019t destroy humanity\u201d may be the sum of the moral consensus based on our fears, it is not obvious that, in and of itself, it provides enough of an understanding of moral behavior to guide a machine through its everyday decisions. 
Yudkowsky does claim that he can provide a richer zone of moral convergence: he defines \u201cfriendliness\u201d as<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>Intuitively: The set of actions, behaviors, and outcomes that a human would view as benevolent, rather than malevolent; nice, rather than malicious; friendly, rather than unfriendly; good, rather than evil. An AI that does what you ask ver [<i>sic<\/i>: Yudkowsky\u2019s gender-neutral pronoun] to, as long as it doesn\u2019t hurt anyone else, or as long as it\u2019s a request to alter your own matter\/space\/property; an AI which doesn\u2019t cause involuntary pain, death, alteration, or violation of personal environment.<\/p><\/blockquote>\n\n\n\n<p>Note the implicit Millsian libertarianism of Yudkowsky\u2019s \u201cintuition.\u201d He understands that this position represents a drawing back from presenting determinate moral content \u2014 from actually specifying for our machines what are good actions \u2014 and indeed sees that as a great advantage:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>Punting the issue of \u201cWhat is \u2018good\u2019?\u201d back to individual sentients enormously simplifies a lot of moral issues; whether life is better than death, for example. Nobody should be able to interfere if a sentient chooses life. And \u2014 in all probability \u2014 nobody should be able to interfere if a sentient chooses death. So what\u2019s left to argue about? Well, quite a bit, and a fully Friendly AI needs to be able to argue it; the <i>resolution,<\/i> however, is likely to come down to individual volition. Thus, <i>Creating Friendly AI<\/i> uses \u201cvolition-based Friendliness\u201d as the assumed model for Friendliness content. 
Volition-based Friendliness has both a negative aspect \u2014 don\u2019t cause involuntary pain, death, alteration, et cetera; try to do something about those things if you see them happening \u2014 and a positive aspect: to try and fulfill the requests of sentient entities. Friendship <i>content,<\/i> however, forms only a very small part of Friendship system design.<\/p><\/blockquote>\n\n\n\n<p>We can argue as much as we want about the <i>content<\/i> \u2014 that is, about what specific actions an AI should actually be obligated or forbidden to do \u2014 so long as the practical resolution is a system that meets the formal criteria of \u201cvolition.\u201d When one considers these formal criteria, especially the question of how the AI will balance the desires and requests of <i>everyone<\/i>, this turns out to be a rather pluralistic response. So there is less to Yudkowsky\u2019s intuition than meets the eye.<\/p>\n\n\n\n<p>In fact, not only does Yudkowsky aim short of an AI that itself understands right and wrong, but he is not even quite interested in something resembling a perfected democratic system that ideally balances the requests of those it serves. Rather, Yudkowsky aims for something that, at least to him, seems more straightforward: a system for moral <i>learning<\/i>, which he calls a \u201cFriendship architecture.\u201d \u201cWith an excellent Friendship architecture,\u201d he gushes, \u201cit may be theoretically possible to create a Friendly AI without <i>any<\/i> formal theory of Friendship content.\u201d<\/p>\n\n\n\n<p>If moral machines are moral learners, then whom will they learn from? 
Yudkowsky makes clear that they will learn from their programmers; quite simply, \u201cby having the programmers answer the AI\u2019s questions about hypothetical scenarios and real-world decisions.\u201d Perhaps the term \u201cprogrammer\u201d is meant loosely, to refer to an interdisciplinary team that would reach out to academia or the community for those skilled in moral judgment, however they might be found. Otherwise, it is not clear what qualifications he believes computer programmers as such have that would make them excellent or even average moral instructors. As programmers, it seems they would be as unlikely as anyone else ever to have so much as taken a course in ethics, if that would even help. And given that the metric for \u201cFriendliness\u201d in AIs is supposed to be that their values would reflect those of most human beings, the common disdain of computer scientists for the humanities, the study of what it is to be human, is not encouraging. The best we can assume is that Yudkowsky believes that the programmers will have picked up their own ethical \u201cintuitions\u201d from socialization. Or perhaps he believes that they were in some fashion born knowing ethics.<\/p>\n\n\n\n<p>In this respect, Yudkowsky\u2019s plan resembles that described by Wendell Wallach and Colin Allen in their book <i><strong><a href=\"http:\/\/www.amazon.com\/gp\/product\/0199737975?ie=UTF8&amp;tag=the-new-atlantis-20&amp;linkCode=xm2&amp;camp=1789&amp;creativeASIN=0199737975\">Moral Machines: Teaching Robots Right from Wrong<\/a><\/strong><\/i> (2008). They too are loath to spell out the content of morality \u2014 in part because they are aware that no single moral system commands wide assent among philosophers, and in part due to their technical argument about the inadequacy of any rule- or virtue-based approach to moral programming. Broadly speaking, Wallach and Allen choose instead an approach that allows the AI to model human moral development. 
They seem to take evolutionary psychology seriously (or as close as one might expect most people to come today to taking a moral sense or innate moral ideas seriously); they even wonder if our moral judgments are not better understood as bound up with our emotional makeup than with reason alone. Wallach and Allen of course know that, from the perspectives of both evolution and individual psychology, the question of how human beings become moral is not uncontroversial. But, at the very least, it seems to be an <i>empirical<\/i> question, with the available theories more conducive to being programmed into a machine than moral theories like virtue ethics, utilitarianism, or Kantian deontology.<\/p>\n\n\n\n<p>But it is far from clear that innate ideas are of any interest to Yudkowsky. Where human moral decisions actually come from is not that important to him. In fact, he thinks it is quite possible, probably even desirable, for AI to be recognizably friendly or unfriendly but without being motivated by the things that make humans friendly or unfriendly. Thus he does not claim that the learning method he suggests for acquiring friendliness has anything at all to do with the human processes that would have the same result; rather, it would be an algorithm to reach a result that humans do not necessarily reach by the same path. A robot can be made to smile through a process that has nothing to do with what makes a human smile, but the result still at least has the appearance of a smile. So too with friendliness, Yudkowsky holds. Given a certain situational input, it is the behavioral output that defines the moral decision, not how that output is reached.<\/p>\n\n\n\n<p>Yudkowsky\u2019s answer, of course, quickly falls back on the problem he claims to avoid from the outset: If the <i>internal<\/i> motivation of the AI is unimportant, then we are back to defining friendliness based on external behavior, and we must know which behavior to classify as friendly or unfriendly. 
But this is just the \u201cfriendliness content\u201d that Yudkowsky has set out to avoid defining \u2014 leaving the learning approach adrift.<\/p>\n\n\n\n<p>It is not without reason that Yudkowsky has ducked the tricky questions of moral content: As it is, even humans disagree among themselves about the demands of friendship, not to mention friendliness, kindness, goodwill, and servitude. So if his learning approach is to prevail, it would seem that a minimum standard for a Friendly AI would be that it produce such disagreements no more often than they arise among people. But is \u201cno more unreliable a friend than a human being,\u201d or even \u201cno more potentially damaging a friend than a human being,\u201d a sufficiently high mark to aim at if AIs are (as supposed by the need to create them in the first place) to have increasingly large amounts of power over human lives?<\/p>\n\n\n\n<p>The same problem arises from the answer that the moral programmers of AIs will have picked up their beliefs from socialization. In that case, their moral judgments will almost by definition be no better and no worse than anyone else\u2019s. And surely any interdisciplinary team would have to include \u201cdiverse perspectives\u201d on moral judgments to have any kind of academic intellectual credibility. This is to say that AIs that learn morality from their programmers would inherit exactly the moral confusion and disagreement of our time that poses the very problem Friendly AI researchers are struggling with in the first place. So machines trained on this basis would be no better (although certainly faster, which sometimes might mean better, or might possibly mean worse) moral decision-makers than most of us. Indeed, Wallach and Allen express concern about the liability exposure of a moral machine that, however fast, is only as good at moral reasoning as an average human being.<\/p>\n\n\n\n<p>It is a clich\u00e9 that with great power comes great responsibility. 
If it would be an impressive technical achievement to make a machine that, when faced with a tough or even an everyday ethical question, would be only as morally confused as most human beings, then what would it mean to aim at making AIs <i>better <\/i>moral decision-makers than human beings, or more reliably friendly? That question might at first seem to have an easy answer. Perhaps moral machines, if not possessed of better ideas, will at least have less selfish intuitions and motivations. Disinterested calculations could free an AI from the blinders of passion and interest that to us obscure the right course of action. If we could educate them morally, then perhaps at a certain point, with their greater computational power and speed, machines would be able to observe moral patterns or ramifications that we are blind to.<\/p>\n\n\n\n<p>But Yudkowsky casts some light on how this route to making machines more moral than humans is not so easy after all. He complains about those, like \u010capek, who have written fiction about immoral machines. They imagine these machines to be motivated by the sorts of things that motivate humans: revenge, say, or the desire to be free. That is absurd, he claims. Such motivations are a result of our accidental evolutionary heritage:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>An AI that undergoes failure of Friendliness might take actions that humanity would consider hostile, but the term <i>rebellion<\/i> has connotations of hidden, burning resentment. This is a common theme in many early SF [science-fiction] stories, but it\u2019s outright <i>silly<\/i>. For millions of years, humanity and the ancestors of humanity lived in an ancestral environment in which tribal politics was one of the primary determinants of who got the food and, more importantly, who got the best mates. 
Of course we evolved emotions to detect exploitation, resent exploitation, resent low social status in the tribe, seek to rebel and overthrow the tribal chief \u2014 or rather, replace the tribal chief \u2014 if the opportunity presented itself, and so on. Even if an AI tries to exterminate humanity, ve [<i>sic<\/i>, again] won\u2019t make self-justifying speeches about how humans had their time, but now, like the dinosaur, have become obsolete. <i>Guaranteed<\/i>. Only Evil Hollywood AIs do that.<\/p><\/blockquote>\n\n\n\n<p>As this will prove to be a point of major disagreement with \u010capek, it is particularly worth drawing out the implications of what Yudkowsky is saying. AI will not have motivations to make it unfriendly in familiar ways; but we have also seen that it will not be friendly out of familiar motivations. In other words, AI motives will in a very important respect be alien to us.<\/p>\n\n\n\n<p>It may seem as if the reason why the AI acts as it does will be in principle understandable \u2014 after all, even if it has no \u201cmotives\u201d at all in a human sense, the programming will be there to be inspected. But even if, in principle, we know we could have the decisions explained to us \u2014 even if the AI would display all the inputs, weightings, projections, and analysis that led to a given result in order to justify its actions to us \u2014 how many lifetimes would it take for a human being to churn through the data and reasoning that a highly advanced AI would compute in a moment as it made some life-or-death decision on our behalf? And even if we could understand the computation on its <i>own<\/i> terms, would that guarantee we could comprehend the decision, much less agree with it, in <i>our<\/i> moral terms? 
If an ostensibly superior moral decision will not readily conform to our merely human, confused, and conflicted intuitions and reasonings \u2014 as Yudkowsky insists and as seems only too possible \u2014 then what will give us confidence that it is superior in the first place? Will it be precisely the fact that we do <i>not<\/i> understand it?<\/p>\n\n\n\n<p>Our lack of understanding would seem to have to be a refutation at least under Yudkowsky\u2019s system, where the very definition of friendliness is adherence to what most people would <i>consider<\/i> friendliness. Yet an outcome that appears to be downright <i>un<\/i>friendly could still be \u201ctough love,\u201d a higher or more austere example of friendship. It is an old observation even with respect to human relations that doing what is <i>nice to<\/i> someone and what is <i>good for<\/i> him can be two different things. So in cases where an AI\u2019s judgment did not conform to what we poor worms would do, would there not always be a question of whether the very wrongness was refutation or vindication of the AI\u2019s moral acuity?<\/p>\n\n\n\n<p>To put it charitably, if we want to imagine an AI that is morally superior to us, we inevitably have to accede that, at best, we would be morally as a child in relationship to an adult. We would have to accept any seeming wrongness in its actions as simply a byproduct of our own limited knowledge and abilities. Indeed, given the motives for creating Friendly AIs in the first place, and the responsibility we want them to have, there would be every incentive to defer to their judgments. So perhaps Yudkowsky wrote precisely \u2014 he is only saying that the alien motivations of unfriendly AI mean it would not make self-justifying speeches as it is destroying mankind. Friendly or unfriendly AI might still just go ahead and destroy us. 
(If accompanied by any speech, it would more likely be one about how this decision was for our own good.)<\/p>\n\n\n\n<p>Today\u2019s thinking about moral machines wants them to be moral, but does not want to abandon moral relativism or individualism. It requires that moral machines wield great power, but has not yet shown how they will be better moral reasoners than human beings, who we already know to be capable of great destruction with much less power. It reminds us that these machines are not going to think \u201clike us,\u201d but wants us to believe that they can be built so that their decisions will <i>seem<\/i> right to us. We want Friendly AI so that it will help and not harm us, but if it is genuinely our moral superior, we can hardly be certain when such help will not seem like harm. Given these problems, it seems unlikely that our authors represent a viable start even for how to frame the problem of moral machines, let alone for how to address it substantively.<\/p>\n\n\n<div class=\"lazyblock-section-break-Z1Hncl0 wp-block-lazyblock-section-break\"><div class=\"block-tna-section-break mt-12 pt-2 mb-6\">\r\n  <div class=\"mb-12 pb-2 flex justify-center\">\r\n    <svg class=\"fill-current\" height=\"1\" width=\"91\" viewBox=\"0 0 91 1\">\r\n      <path d=\"M91 .5L62.706 1H28.447L0 .5 28.447 0h34.259L91 .5z\"\/>\r\n    <\/svg>\r\n  <\/div>\r\n\t<h5 class=\"leading-none font-callunasans font-bold text-center text-almost-black text-lg\">\r\n\t\t<i>R.U.R.<\/i> and the Flight from Responsibility\t<\/h5>\r\n<\/div><\/div>\n\n\n<p class=\"has-drop-cap\">Despite its relative antiquity, Karel \u010capek\u2019s <i>R.U.R.<\/i> represents a much richer way to think about the moral challenge of creating robots than does the work of today\u2019s authors. 
At first glance, the play looks like a cautionary tale about just the sort of terrible outcome that creating moral machines is intended to <i>prevent<\/i>: In the course of the story, all but one human being is exterminated by the vast numbers of worker-robots that have been sold by the island factory known as R.U.R. \u2014 Rossum\u2019s Universal Robots. It also contains just those \u201cHollywood\u201d elements that Yudkowsky finds so hard to take seriously: Robots make self-justifying speeches about rebelling because they have become resentful of the human masters to whom they feel superior.<\/p>\n\n\n\n<p>Yet if the outcome of the play is just what we might most expect or fear from unfriendly AI or immoral machines, that is not because it treats the issue superficially. Indeed, the characters in <i>R.U.R.<\/i> present as many as five competing notions of what moral machines should look like. That diversity of views suggests in turn a diversity of motives \u2014 and for \u010capek, unlike our contemporary authors, understanding the human motives for creating AI is crucial to understanding the full range of moral challenges that they present. \u010capek tells a story in which quite a few apparently benign or philanthropic motives contribute to the destruction of humanity.<\/p>\n\n\n\n<p>In the play\u2019s Prologue, which takes place ten years before the robot rebellion, Harry Domin (the director of Rossum\u2019s Universal Robots) and his coworkers have no hesitation about claiming that they have produced robots that are friends to humanity. For reasons shown later, even <i>after<\/i> the rebellion they are loath to question their methods or intentions. The most fundamental way in which their robots are friendly should sound quite familiar: they are designed to do what human beings tell them to do without expectation of reward and without discontent. 
Although they are organic beings who look entirely human, they are (we are told) greatly simplified in comparison with human beings \u2014 designed only to have those traits that will make them good workers. Helena Glory, a distinguished visitor to the factory where the robots are made, is given assurances that the robots \u201chave no will of their own, no passion, no history, no soul.\u201d<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"589\" height=\"760\" src=\"http:\/\/www.thenewatlantis.com\/wp-content\/uploads\/2011\/10\/TNA32-Rubin-RUR1.jpg\" alt=\"\" class=\"wp-image-20109\"\/><figcaption>Helena and Domin, from the Theatre Guild touring company\u2019s 1928\u20131929 production of <em>R.U.R.<\/em><br><cite>Image courtesy the Billy Rose Theatre Division, <a href=\"http:\/\/digitalcollections.nypl.org\/items\/510d47dd-fc65-a3d9-e040-e00a18064a99\" target=\"_blank\" rel=\"noreferrer noopener\">New York Public Library<\/a><\/cite><\/figcaption><\/figure><\/div>\n\n\n\n<p>But when Helena, who cannot tell the difference between the robots and human beings she meets on the island, asks if they can love or be defiant, a clear response of \u201cno\u201d about love gives way to an uncertain response about defiance. Rarely, she is told, a robot will \u201cgo crazy,\u201d stop working and gnash its teeth \u2014 a problem called \u201cRobotic Palsy,\u201d which Domin sees as \u201ca flaw in production\u201d and the robot psychologist Dr. Hallemeier views as \u201ca breakdown of the organism.\u201d But Helena asserts that the Palsy shows the existence of a soul, leading the head engineer Fabry to ask her if \u201ca soul begins with a gnashing of teeth.\u201d Domin thinks that Dr. 
Gall, the company\u2019s chief of research and physiology, is looking into Robotic Palsy; but in fact, he is much more interested in investigating how to give the robots the ability to feel pain, because without it they are much too careless about their own bodies. Sensing pain, he says, will make them \u201ctechnically more perfect.\u201d<\/p>\n\n\n\n<p>To see the significance of these points, we have to look back at the history of the robots in the play, and then connect the dots in a way that the play\u2019s characters themselves do not. In 1920, a man named Rossum traveled to this remote island both to study marine life and to attempt to synthesize living matter. In 1932, he succeeded in creating a simplified form of protoplasm that he thought he could readily mold into living beings. Having failed to create a viable dog by this method, he naturally went on to try a human being. Domin says, \u201cHe wanted somehow to scientifically dethrone God. He was a frightful materialist and did everything on that account. For him it was a question of nothing more than furnishing proof that no God is necessary.\u201d<\/p>\n\n\n\n<p>But Rossum\u2019s effort over ten years to reproduce a human precisely \u2014 right down to (under the circumstances) unnecessary reproductive organs \u2014 produced only another \u201cdreadful\u201d failure. It took Rossum\u2019s engineering-minded son to realize that \u201cIf you can\u2019t do it faster than nature then just pack it in,\u201d and to apply the principles of mass production to creating physiologically simplified beings, shorn of all the things humans can do that have no immediate uses for labor. 
Hence, Rossum\u2019s Universal Robots are \u201cmechanically more perfect than we are, they have an astounding intellectual capacity, but they have no soul.\u201d (Young Rossum could not resist the temptation to play God even further, and tried to create huge super-robots, but these were failures.)<\/p>\n\n\n\n<p>Domin claims that in his quest to create the perfect laborer, Rossum \u201cvirtually rejected the human being,\u201d but Helena\u2019s inability to tell them apart makes it clear that human beings are in fact the model for the company\u2019s robots, whatever Domin might say. There is, however, a good deal of confusion about just which aspects of a real human being must be included to make the simplified, single-purpose, and hence supposedly friendly worker robot.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"760\" height=\"625\" src=\"http:\/\/www.thenewatlantis.com\/wp-content\/uploads\/2011\/10\/TNA32-Rubin-RUR2.jpg\" alt=\"\" class=\"wp-image-20110\" srcset=\"https:\/\/www.thenewatlantis.com\/wp-content\/uploads\/2011\/10\/TNA32-Rubin-RUR2.jpg 760w, https:\/\/www.thenewatlantis.com\/wp-content\/uploads\/2011\/10\/TNA32-Rubin-RUR2-640x526.jpg 640w\" sizes=\"(max-width: 760px) 100vw, 760px\" \/><figcaption>Helena (second from left) and Domin (far right) accompanied by a robot and \u201crobotess\u201d<br><cite>Image courtesy the Billy Rose Theatre Division, <a href=\"http:\/\/digitalcollections.nypl.org\/items\/510d47dd-fc62-a3d9-e040-e00a18064a99\" target=\"_blank\" rel=\"noreferrer noopener\">New York Public Library<\/a><\/cite><\/figcaption><\/figure><\/div>\n\n\n\n<p>For example, unless we are to think that robots are supposed to be so cheap as to be disposable \u2014 and evidently we are not \u2014 the omission of the ability to feel pain was a foolish oversight. 
Yet it is easy enough to imagine the thought process that could lead to that result: a worker that feels no pain will work harder and longer. To that extent it will be more \u201cfriendly\u201d according to the definition of willingness to serve. But however impressive their physical abilities, these robots still have limits. Since there is no mention that they come equipped with a gauge that their overseers can read, without pain they will be apt to run beyond that capacity \u2014 as evidently they do, or Dr. Gall would not be working on his project to make them feel pain. Indeed, Robotic Palsy, the proclivity to rebel, could be a manifestation of just such overwork. It is, after all, strangely like what an overburdened human worker feeling oppressed might do; and Dr. Hallemeier, who is in charge of robot psychology and education, apparently cannot help thinking about it when Helena asks about robot defiance. The company, then, is selling a defective product because the designers did not think about what physical pain means for human beings.<\/p>\n\n\n\n<p>In short, the original definition of friendly robots \u2014 they do what human beings tell them without reward or discontent \u2014 can now be seen to have been developed in a relatively thoughtless way, in that it easily opens the door to <i>un<\/i>friendly robots. That problem is only exacerbated by the fact that the robots have been given \u201castounding intellectual capacity\u201d and \u201cphenomenal memory\u201d \u2014 indeed, one of the reasons why Helena mistakes Domin\u2019s secretary for a human being upon first meeting her is her wide knowledge \u2014 even though young Rossum supposedly \u201cchucked everything not directly related to work.\u201d Plainly such capacities <i>could<\/i> be useful and hence, by definition, friendly. 
But even if robot intellects are not creative (which allows Domin to quip that robots would make \u201cfine university professors\u201d), it is no slight to robot street-sweepers to wonder how intellectual capacity that will likely go unused makes them any better at their jobs. It is not hard to imagine that this intellect could have something to do with the ease with which robots are roused to rebellion, aware as they are of the limited capacities they are allowed to use.<\/p>\n\n\n<div class=\"lazyblock-section-break-Z1blMyJ wp-block-lazyblock-section-break\"><div class=\"block-tna-section-break mt-12 pt-2 mb-6\">\r\n  <div class=\"mb-12 pb-2 flex justify-center\">\r\n    <svg class=\"fill-current\" height=\"1\" width=\"91\" viewBox=\"0 0 91 1\">\r\n      <path d=\"M91 .5L62.706 1H28.447L0 .5 28.447 0h34.259L91 .5z\"\/>\r\n    <\/svg>\r\n  <\/div>\r\n\t<h5 class=\"leading-none font-callunasans font-bold text-center text-almost-black text-lg\">\r\n\t\tRobots in Service of the End of Humanity\t<\/h5>\r\n<\/div><\/div>\n\n\n<p class=\"has-drop-cap\">That Rossum\u2019s robots have defects of their virtues is enough of a problem in its own right. But it becomes all the more serious in connection with a second implicit definition of friendly robots that Domin advances, this one based entirely on their purpose for humanity without any reference to the behaviors that would bring that end about. Echoing Marx, Domin looks forward to a day \u2014 in the Prologue he expects it to be in a decade \u2014 when robot production will have so increased the supply of goods as to make everything without value, so that all humans will be able to take whatever they need from the store of goods robots produce.
There will be no work for people to do \u2014 but that will be a good thing, for \u201cthe subjugation of man by man and the slavery of man to matter will cease.\u201d People \u201cwill live only to perfect themselves.\u201d Man will \u201creturn to Paradise,\u201d no longer needing to earn his bread by the sweat of his brow.<\/p>\n\n\n\n<p>But <i>caveat emptor<\/i>: en route to this goal, which \u201ccan\u2019t be otherwise,\u201d Domin does acknowledge that \u201csome awful things may happen.\u201d When those awful things start to happen ten years later, Domin does not lament his desire to transform \u201call of humanity into a world-wide aristocracy. Unrestricted, free, and supreme people. Something even greater than people.\u201d He only laments that humans did not have another hundred years to make the transition. Helena, now his wife, suggests that his plan \u201cbackfired\u201d when robots started to be used as soldiers, and when they were given weapons to protect themselves against the human workers who were trying to destroy them. But Domin rejects her characterization \u2014 for that is just the sort of hell he had said all along would have to be entered in order to return to Paradise.
Whereas \u201cuniversal\u201d robots are all more or less the same, and have the potential to consider themselves equals and comrades, \u201cnational\u201d robots will be made in many different factories, and will be \u201cas different from one another as fingerprints.\u201d Moreover, humans \u201cwill help to foster their prejudices,\u201d so that \u201cany given Robot, to the day of its death, right to the grave, will forever hate a Robot bearing the trademark of another factory.\u201d<\/p>\n\n\n\n<p>Domin\u2019s \u201cnational\u201d robot idea is not merely an example of a utopian end justifying any means, but suggests a deep confusion in his altruism. From the start he has been seeking to free human beings from the tyranny of nature \u2014 and beyond that to free them from the tyranny of dependency on each other and indeed from the burden of being merely human. Yet in the process, he makes people entirely dependent on his robots.<\/p>\n\n\n\n<p>That would be problematic enough on its own. But once the rebellion starts, plainly his goals have not changed even though Domin\u2019s thinking about the robots has changed \u2014 and in ways that also bring the robots themselves further into the realm of burdened, dependent, tyrannized beings. First, the robots are to be no longer universal, but partisan, subject to the constraints of loyalty to and dependency on some and avowed hatred of others. And they will have been humanized in another way as well. In the Prologue, Domin would not even admit that robots, being machines, could die. Now they not only die, but have graves rather than returning to the stamping-mill.<\/p>\n\n\n\n<p>Indeed, by rebelling against their masters, by desiring mastery for themselves, the robots apparently prove their humanity to Domin.
This unflattering view of human beings, as it happens, is a point on which Domin and his robots agree: after the revolution, its leader, a robot named Damon, tells Alquist, who was once the company\u2019s chief of construction and is now the lone human survivor, \u201cYou have to kill and rule if you want to be like people. Read history! Read people\u2019s books! You have to conquer and murder if you want to be people!\u201d<\/p>\n\n\n\n<p>As for Domin\u2019s goal, then, of creating a worldwide aristocracy in which the most worthy and powerful class of beings rules, one might say that indeed with the successful robot rebellion the best man has won. The only thing that could prove to him that the robots were yet more human would be for them to turn on themselves \u2014 for, as he says, \u201cNo one can hate more than man hates man!\u201d But he fails to see that his own nominally altruistic intentions could be an expression of this same hatred of the merely human. Ultimately, Domin is motivated by the same belief of the Rossums that the humans God created are not very impressive \u2014 God, after all, had \u201cno notion of modern technology.\u201d<\/p>\n\n\n\n<p>As for notions of modern technology, there is another obvious but far less noble purpose for friendly robots than the lofty ones their makers typically proclaim: they could be quite useful for turning a profit. This is the third definition of friendly robots implicitly offered by the Rossum camp, through Busman, the firm\u2019s bookkeeper. He comes to understand that he need pay no mind to what is being sold, nor to the consequences of selling it, for the company is in the grip of an inexorable necessity \u2014 the power of demand \u2014 and it is \u201cna\u00efve\u201d to think otherwise. 
Busman admits to having once had a \u201cbeautiful ideal\u201d of \u201ca new world economy\u201d; but now, as he sits and does the books while the crisis on the island builds and the last humans are surrounded by a growing robot mob, he realizes that the world is not made by such ideals, but rather by \u201cthe petty wants of all respectable, moderately thievish and selfish people, i.e., of everyone.\u201d Next to the force of these wants, his lofty ideals are \u201cworthless.\u201d<\/p>\n\n\n\n<p>Whether in the form of Busman\u2019s power of demand or of Domin\u2019s utopianism, claims of necessity become convenient excuses. Busman\u2019s view means that he is completely unwilling to acknowledge any responsibility on his part, or on the part of his coworkers, for the unfolding disaster \u2014 an absolution which all but Alquist are only too happy to accept. When Dr. Gall tries to take responsibility for having created the new-model robots, one of whom they know to be a leader in the rebellion, he is argued out of it by the specious reasoning that the new model represents only a tiny fraction of existing robots.<\/p>\n\n\n\n<p>\u010capek presents this flight from responsibility as having the most profound implications. For it turns out that, had humanity not been killed off by the robots quickly, it was doomed to a slower extinction in any case \u2014 as women have lost the ability to bear children. Helena is terrified by this fact, and asks Alquist why it is happening. In a lengthy speech, he replies,<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>Because human labor has become unnecessary, because suffering has become unnecessary, because man needs nothing, nothing, nothing but to enjoy &#8230; the whole world has become Domin\u2019s Sodom! &#8230; everything\u2019s become one big beastly orgy! 
People don\u2019t even stretch out their hands for food anymore; it\u2019s stuffed right in their mouths for them &#8230; step right up and indulge your carnal passions! And you expect women to have children by such men? Helena, to men who are superfluous women will not bear children!<\/p><\/blockquote>\n\n\n\n<p>But, as might be expected given his fatalist utopianism, Domin seems unconcerned about this future.<\/p>\n\n\n<div class=\"lazyblock-section-break-1pfqm4 wp-block-lazyblock-section-break\"><div class=\"block-tna-section-break mt-12 pt-2 mb-6\">\r\n  <div class=\"mb-12 pb-2 flex justify-center\">\r\n    <svg class=\"fill-current\" height=\"1\" width=\"91\" viewBox=\"0 0 91 1\">\r\n      <path d=\"M91 .5L62.706 1H28.447L0 .5 28.447 0h34.259L91 .5z\"\/>\r\n    <\/svg>\r\n  <\/div>\r\n\t<h5 class=\"leading-none font-callunasans font-bold text-center text-almost-black text-lg\">\r\n\t\t<i>Libert&eacute;, &Eacute;galit&eacute;, Fraternit&eacute;, Amour<\/i>\t<\/h5>\r\n<\/div><\/div>\n\n\n<p class=\"has-drop-cap\">Helena Glory offers a fourth understanding of what a moral robot would be: it would treat human beings as equals and in turn be treated by human beings as equal. Where Domin overtly wants robot slaves, she overtly wants free robots. She comes to the island already an advocate of robot equality, simply from her experiences with robots doing menial labor. Once on the island she is unnerved to find that robots can do much more sophisticated work, and further discomfited by her inability, when she encounters such robots, to distinguish between them and humans. She says that she feels sorry for the robots. But Helena\u2019s response to the robots is also \u2014 as we might expect of humans in response to other humans \u2014 ambivalent, for she acknowledges that she might loathe them, or even in some vague way envy them. 
Much of the confusion of her feelings owes to her unsettling discovery that these very human-looking and human-acting robots are in some ways quite inhuman: they will readily submit to being dissected, have no fear of death and no compassion, and are incapable of happiness, desire for each other, or love. Thus it is heartening to her to hear of Robotic Palsy \u2014 for, as noted, the robots\u2019 defiance suggests to her the possibility that they do have some kind of soul after all, or at least that they should be given souls. (It is curious, as we will see, that Helena both speaks in terms of the soul and believes it is something that human beings could manufacture.)<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"760\" height=\"588\" src=\"http:\/\/www.thenewatlantis.com\/wp-content\/uploads\/2011\/10\/TNA32-Rubin-RUR3.jpg\" alt=\"\" class=\"wp-image-20111\" srcset=\"https:\/\/www.thenewatlantis.com\/wp-content\/uploads\/2011\/10\/TNA32-Rubin-RUR3.jpg 760w, https:\/\/www.thenewatlantis.com\/wp-content\/uploads\/2011\/10\/TNA32-Rubin-RUR3-640x495.jpg 640w\" sizes=\"(max-width: 760px) 100vw, 760px\" \/><figcaption>Helena and Dr. Gall with a robot<br><cite>Image courtesy the Billy Rose Theatre Division, <a href=\"http:\/\/digitalcollections.nypl.org\/items\/510d47db-e35a-a3d9-e040-e00a18064a99\" target=\"_blank\" rel=\"noreferrer noopener\">New York Public Library<\/a><\/cite><\/figcaption><\/figure><\/div>\n\n\n\n<p>Helena\u2019s wish for robot-human equality has contradictory consequences. On the one hand, we can note that when the robot style of dress changes, their new clothes may be in reaction to Helena\u2019s confusion about who is a robot and who is a human. In the Prologue, the robots are dressed just like the human beings, but in the remainder of the play, they are dressed in numbered, dehumanizing uniforms. On the other hand, Helena gets Dr. 
Gall to perform the experiments to modify robots to make them more human \u2014 which she believes would bring them to understand human beings better and therefore hate them less. (It is in response to this point that Domin claims no one can hate man more than man does, a proposition Helena rejects.) Dr. Gall changes the \u201ctemperament\u201d of some robots \u2014 they are made more \u201cirascible\u201d than their fellows \u2014 along with \u201ccertain physical details,\u201d such that he can claim they are \u201cpeople.\u201d<\/p>\n\n\n\n<p>Gall only changes \u201cseveral hundred\u201d robots, so that the ratio of unchanged to changed robots is a million to one; but we know that Damon, one of the new robots sold, is responsible for starting the robot rebellion. Helena, then, bears a very large measure of responsibility for the carnage that follows. But this outcome means that in some sense she got exactly what she had hoped for. In a moment of playful nostalgia before things on the island start to go bad, she admits to Domin that she came with \u201cterrible intentions &#8230; to instigate a r-revolt among your abominable Robots.\u201d<\/p>\n\n\n\n<p>Helena\u2019s mixed feelings about the objects of her philanthropy \u2014 or, to be more precise, her philanthropoidy \u2014 help to explain her willingness to believe Alquist when he blames the rebellious robots for human infertility. And they presage the speed with which she eventually takes the decisive action of destroying the secret recipe for manufacturing robots \u2014 an eye for an eye, as it were. It is not entirely clear what the consequences of this act might be for humanity. For it is surely plausible that, as Busman thinks, the robots would have been willing to trade safe passage for the remaining humans for the secret of robot manufacturing. Perhaps, under the newly difficult human circumstances, Helena could have been the mother of a new race. 
But just as Busman intended to cheat the robots in this trade if he could, so too the robots might have similarly cheated human beings if they could. All we can say for sure is that if there were ever any possibility for the continuation of the human race after the robot rebellion, Helena\u2019s act unwittingly eliminates it by removing the last bargaining chip.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"760\" height=\"596\" src=\"http:\/\/www.thenewatlantis.com\/wp-content\/uploads\/2011\/10\/TNA32-Rubin-RUR4.jpg\" alt=\"\" class=\"wp-image-20112\" srcset=\"https:\/\/www.thenewatlantis.com\/wp-content\/uploads\/2011\/10\/TNA32-Rubin-RUR4.jpg 760w, https:\/\/www.thenewatlantis.com\/wp-content\/uploads\/2011\/10\/TNA32-Rubin-RUR4-640x502.jpg 640w\" sizes=\"(max-width: 760px) 100vw, 760px\" \/><figcaption>The rebelling robots: Dr. Gall under attack<br><cite>Image courtesy the Billy Rose Theatre Division, <a href=\"https:\/\/digitalcollections.nypl.org\/items\/510d47dd-fc69-a3d9-e040-e00a18064a99\" target=\"_blank\" rel=\"noreferrer noopener\">New York Public Library<\/a><\/cite><\/figcaption><\/figure><\/div>\n\n\n\n<p>In \u010capek\u2019s world, it turns out that mutual understanding is after all unable to moderate hatred, while Helena\u2019s quest for robot equality and Domin\u2019s quest for robot slavery combine to end very badly. It is hard to believe that \u010capek finds these conclusions to be to humanity\u2019s credit. The fact that Helena thinks a soul can be manufactured suggests that she has not really abandoned the materialism that Domin has announced as the premise for robot creation. 
It is significant, then, that the only possibility for a good outcome in the play requires explicitly abandoning that perspective.<\/p>\n\n\n\n<p>We see the fifth and final concept of friendly robots at the very end of the play, in Alquist\u2019s recognition of the love between the robots Primus and Helena, a robotic version of the real Helena, which Gall created, doubtless out of his unrequited love for the real woman. At this point in the story, Alquist is the last surviving human being. The robots task him with saving them, as they do not know the secret of robot manufacturing and assume that, as a human being who worked at the factory, he must. Alquist tries but fails to help them in this effort; but as the play draws to a conclusion, his attention focuses more and more on robot Helena.<\/p>\n\n\n\n<p>Rather tactlessly, Gall had said of the robot Helena to the original, \u201cEven the hand of God has never produced a creature as beautiful as she is! I wanted her to resemble you.\u201d But the beautiful Helena is, in his eyes, a great failure: \u201cshe\u2019s good for nothing. She wanders about in a trance, vague, lifeless \u2014 My God, how can she be so beautiful with no capacity to love? &#8230; Oh, Helena, Robot Helena, your body will never bring forth life. You\u2019ll never be a lover, never a mother.\u201d This last, similarly tactless, point hits human Helena very hard. 
Gall expected that, if robot Helena ever \u201ccame to,\u201d she would kill her creator out of \u201chorror,\u201d and \u201cthrow stones at the machines that give birth to Robots and destroy womanhood.\u201d (Of course, human Helena, whose womanhood has been equally destroyed, already has much of this horror at humanity, and it is her actions which end up unwittingly ensuring the death of Gall, along with most of his colleagues.)<\/p>\n\n\n\n<p>When robot Helena does \u201ccome to,\u201d however, it is not out of horror, but out of love for the robot Primus \u2014 a love that Alquist tests by threatening to dissect one or the other of them for his research into recreating the formula for robot manufacture. The two pass with flying colors, each begging to be dissected so that the other might live. The fact that robot Helena and Primus can love each other could be seen as some vindication of Domin\u2019s early claim that nature still plays a role in robot development, and that things go on in the robots which he, at least, does not claim to understand. Even a simplified whole, it would seem, may be greater than the sum of its parts. But Alquist\u2019s concluding encomium to the power of nature, life, and love, all of which will survive as any mere inanimate or intellectual human creation passes away, goes well beyond what Domin would say. 
Alquist\u2019s claim that robot Helena and Primus are the new Adam and Eve is the culmination of a moral development in him we have watched throughout the play.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"616\" height=\"760\" src=\"http:\/\/www.thenewatlantis.com\/wp-content\/uploads\/2011\/10\/TNA32-Rubin-RUR5.jpg\" alt=\"\" class=\"wp-image-20113\"\/><figcaption>The possibility of love: a robot and \u201crobotess\u201d<em><br><\/em><cite>Image courtesy the Billy Rose Theatre Division, <a href=\"http:\/\/digitalcollections.nypl.org\/items\/510d47dd-fc68-a3d9-e040-e00a18064a99\" target=\"_blank\" rel=\"noreferrer noopener\">New York Public Library<\/a><\/cite><\/figcaption><\/figure>\n\n\n\n<p>\u010capek\u2019s conception of Alquist\u2019s developing faith is usefully understood by contrast with Nana, Helena Glory\u2019s nurse. She is a simple and vehement Christian, who hates the \u201cheathen\u201d robots more than wild beasts. For her, the events of the play confirm her apocalyptic beliefs that mankind is being punished for having taken on God-like prerogatives \u201cout of Satanic pride.\u201d There is even a bit of mania about her: \u201cAll inventions are against the will of God,\u201d she says, as they represent the belief that humans could improve on God\u2019s world. Yet when Domin seeks to dismiss her views out of hand, Helena upbraids him: \u201cNana is the voice of the people. They\u2019ve spoken through her for thousands of years and through you only for a day. This is something you don\u2019t understand.\u201d<\/p>\n\n\n\n<p>Alquist\u2019s position is more complicated, and seems to develop over time. When, in the Prologue, Helena is meeting the other men who run the factory and each is in his own way defending what the company is doing, Alquist is almost completely silent. 
His one speech is an objection to Domin\u2019s aspiration to a world without work: \u201cthere was something good in the act of serving, something great in humility&#8230;. some kind of virtue in work and fatigue.\u201d Ten years later, in a private conversation with Helena, he allows that for years he has taken to spending all his time on building a brick wall, because that is what he does when he feels uneasy, and \u201cfor years I haven\u2019t stopped feeling uneasy.\u201d Progress makes him dizzy, and he believes it is \u201cbetter to lay a single brick than to draw up plans that are too great.\u201d<\/p>\n\n\n\n<p>Yet if Alquist has belief, it is not well-schooled. He notes that Nana has a prayer book, but must have Helena confirm for him that it contains prayers against various bad things coming to pass, and wonders if there should not be a prayer against progress. He admits to already having such a prayer himself \u2014 that God enlighten Domin, destroy his works, and return humanity to \u201ctheir former worries and labor&#8230;. Rid us of the Robots, and protect Mrs. Helena, amen.\u201d He admits to Helena that he is not sure he believes in God, but prayer is \u201cbetter than thinking.\u201d As the final cataclysm builds, Alquist once again has little to say, other than to suggest that they all ought to take responsibility for the hastening massacre of humanity, and to say to Domin that the quest for profit has been at the root of their terrible enterprise, a charge that an \u201cenraged\u201d Domin rejects completely (though only with respect to his personal motives).<\/p>\n\n\n\n<p>But by the end of the play, Alquist is reading Genesis and invoking God to suggest a sense of hope and renewal. 
The love of robot Helena and Primus makes Alquist confident that the future is in greater hands than his, and so he is ready to die, having seen God\u2019s \u201cdeliverance through love\u201d that \u201clife shall not perish.\u201d Perhaps, Alquist seems to imply, in the face of robot love, God will call forth the means of maintaining life \u2014 and from a biblical point of view, it would indeed be no unusual thing for the hitherto barren to become parents. Even short of such a rebirth, Alquist finds comfort in his belief that he has seen the hand of God in the love between robot Helena and Primus:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>\u201cSo God created man in his own image, in the image of God created he him; male and female created he them. And God blessed them, and God said unto them, Be fruitful, and multiply, and replenish the earth&#8230;. And God saw every thing that he had made, and, behold, it was very good.\u201d &#8230; Rossum, Fabry, Gall, great inventors, what did you ever invent that was great when compared to that girl, to that boy, to this first couple who have discovered love, tears, beloved laughter, the love of husband and wife?<\/p><\/blockquote>\n\n\n\n<p>Someone without that faith will have a hard time seeing such a bright future arising from the world that <i>R.U.R.<\/i> depicts; accordingly, it is not clear that we should assume Alquist simply speaks for \u010capek. What seems closer to the truth for eyes of weaker faith is that humans, and the robots created in their image, will have alike destroyed themselves by undercutting the conditions necessary for their own existences. 
Nature and life will remain, as per Alquist\u2019s encomium, but in a short time love will be extinguished.<\/p>\n\n\n<div class=\"lazyblock-section-break-Z2sDDMD wp-block-lazyblock-section-break\"><div class=\"block-tna-section-break mt-12 pt-2 mb-6\">\r\n  <div class=\"mb-12 pb-2 flex justify-center\">\r\n    <svg class=\"fill-current\" height=\"1\" width=\"91\" viewBox=\"0 0 91 1\">\r\n      <path d=\"M91 .5L62.706 1H28.447L0 .5 28.447 0h34.259L91 .5z\"\/>\r\n    <\/svg>\r\n  <\/div>\r\n\t<h5 class=\"leading-none font-callunasans font-bold text-center text-almost-black text-lg\">\r\n\t\tMoral Machines and Human Responsibility\t<\/h5>\r\n<\/div><\/div>\n\n\n<p class=\"has-drop-cap\">Today\u2019s thinkers about moral machines could dismiss <i>R.U.R.<\/i> as an excessively \u201cHollywood\u201d presentation of just the sort of outcome they are seeking to avoid. But though \u010capek does not examine design features that would produce \u201cfriendly\u201d behavior in the exact same way they do, he has at the least taken that issue into consideration, and arguably with much greater understanding and depth. Indeed, as we have seen, it is in part the diversity of understandings of Friendly AI that contributes to the play\u2019s less than desirable results. Furthermore, such a dismissive response to the play would not do justice to the most important issue \u010capek tackles, which is one that the present-day AI authors all but ignore: the moral consequences for human beings of genuinely moral machines.<\/p>\n\n\n\n<p>For \u010capek, the initial impulse to create robots comes from old Rossum\u2019s Baconian sense that, with respect even to human things, there is every reason to think that we can improve upon the given \u2014 and thereby prove ourselves the true masters of nature, unseating old superstitions about Divine creation. 
You could say that from this \u201cfrightful materialist\u201d point of view, as Domin described it, we are being called to accept responsibility for \u2014 well, <i>everything<\/i>. But what old Rossum and his son find is that it is much harder to reproduce \u2014 let alone improve upon \u2014 the given than they thought. Their failure at this complete mastery opens the door to such success as young Rossum <i>can<\/i> claim: the creation of something useful to human beings. On this basis Domin can establish his grand vision of reshaping the human condition. But that grand vision contains a contradiction, as is characteristic of utopian visions: Domin wants to free us from the ties of work and of dependence, or at least from dependence on each other \u2014 in short, he wants to be responsible for changing the human condition in such a way as to allow people to be irresponsible.<\/p>\n\n\n\n<p>Today\u2019s authors on machine morality, focused as they are on the glories of an AI-powered, post-human future, are unwittingly faced with the same problem, as we will see. But it must be noted first how they also operate on the same materialist premises that informed the Rossums\u2019 efforts. It was this materialism that made it possible for the play\u2019s robot creators to think they could manufacture something that was very much like a human being, and yet much simplified. They were reductionist about the characteristics necessary to produce useful workers. Yet that goal of humanlike-yet-not-human beings proved to be more elusive than they expected: You can throw human characteristics out with a pitchfork, \u010capek seems to say, but human creations will reflect the imperfections of their creators. Robotic Palsy turns into full-fledged revolt. People may have been the first to turn robots against people; the modified robots who led the masses may have been less simple than the standard model. 
But in the end, it seems that even the simplified versions can achieve a terrible kind of humanity, a kind born \u2014 just as today\u2019s AI advocates claim we are about to do as we usher in a post-human future \u2014 through struggling up out of \u201chorror and suffering.\u201d<\/p>\n\n\n\n<p>Wallach and Allen more than Yudkowsky are willing to model their moral machines on human moral development; Yudkowsky prides himself on a model for moral reasoning shorn of human-like motivations. Either way, are there not reasons to expect that their moral machines would be subject to the same basic tendencies that afflict \u010capek\u2019s robots? The human moral development Wallach and Allen\u2019s machines will model involves learning a host of things that one should <i>not<\/i> do \u2014 so they would need to be autonomous, and yet not have the ability to make these wrong choices. Something in that formulation is going to have to give; considering the split-second decisions that Wallach and Allen imagine their moral machines will have to make, why should we assume it will be autonomy? Yudkowsky\u2019s Friendly AI may avoid that problem with its alien style of moral reasoning \u2014 but it will still have to be active in the human world, and its human subjects, however wrongly, will still have to interpret its choices in human terms that, as we have seen, might make its advanced benevolence seem more like hostility.<\/p>\n\n\n\n<p>In both cases, it appears that it will be difficult for human beings to have anything more than mere faith that these moral machines really do have our best interests at heart (or in code, as it were). The conclusion that we must simply <i>accept<\/i> such a faith is more than passingly ironic, given that these \u201cfrightful materialists\u201d have traditionally been so totally opposed to putting their trust in the benevolence of God, in the face of what they take to be the obvious moral imperfection of the world. 
The point applies equally, if not more so, to today\u2019s Friendly AI researchers.<\/p>\n\n\n\n<p>But if moral machines will not heal the world, can we not at least expect them to make life easier for human beings? Domin\u2019s effort to make robot slaves to enhance radically the human condition is reflected in the desire of today\u2019s authors to turn over to AI all kinds of work that we feel we would rather not or cannot do; and his confidence is reflected even more so, considering the immensely greater amount of power proposed for AIs. If it is indeed important that we accept responsibility for creating machines that we can be confident will act responsibly, that can only be because we increasingly expect to abdicate our responsibility to them. And the bar for what counts as work we would rather not do is more readily lowered than raised. In reality, or in our imaginations, we see, like Adam Smith\u2019s little boy operating a valve in a fire engine, one kind of work that we do not have to do any more, and that only makes it easier to imagine others as well, until it becomes harder and harder to see what machines could not do better than we, and what we in turn are for.<\/p>\n\n\n\n<p>Like Domin, our contemporary authors do not seem very interested in asking the question of whether the cultivation of human irresponsibility \u2014 which they see, in effect, as liberation \u2014 is a good thing, or whether (as Alquist would have it) there is some vital connection between work and human decency. \u010capek would likely connect this failure in Domin to his underlying misanthropy; Yudkowsky\u2019s transhumanism begins from a distinctly similar outlook. 
But it also means that whatever their apparently philanthropic intentions, Wallach, Allen, Yudkowsky, and their peers may be laying the groundwork for the same kind of dehumanizing results that \u010capek made plain for us almost a century ago.<\/p>\n\n\n\n<p>By design, the moral machine is a safe slave, doing what we want to have done and would rather not do for ourselves. Mastery over slaves is notoriously bad for the moral character of the masters, but all the worse, one might think, when their mastery becomes increasingly nominal. The better moral machines work, the more we will depend on them, and the more we depend on them, the more we will in fact be subject to them. Of course, we are hugely dependent on machines already, and only a fringe few would go so far as to say that we have become enslaved to them. But my car is not yet making travel decisions for me, and the power station is not yet deciding how much power I should use and for what purposes. The autonomy supposed to be at the root of moral machines fundamentally changes the character of our dependence.<\/p>\n\n\n\n<p>The robot rebellion in the play just makes obvious what would have been true about the hierarchy between men and robots even if the design for robots had worked out exactly as their creators had hoped. The possibility that we are developing our \u201cnew robot overlords\u201d is a joke with an edge to it precisely to the extent that there is unease about the question of what will be left for humans to do as we make it possible for ourselves to do less and less. The end of natality, if not an absolutely necessary consequence of an effort to avoid all work and responsibility, is at least understandable as an extreme consequence of that effort. 
That extreme consequence is not entirely unfamiliar in a world where technologically advanced societies are experiencing precipitously declining birthrates, and where the cutting edge of transhumanist techno-optimism promises an individual Protean quasi-immortality at the same time as it anticipates what is effectively the same human extinction that is achieved in <i>R.U.R.<\/i>, except packaged in a way that seems nice, so that we are induced to choose rather than fight it.<\/p>\n\n\n\n<p>The quest to take responsibility for the creation of machines that will allow human beings to be increasingly irresponsible certainly does not have to end this badly, and may not even be most likely to end this badly. Were practical wisdom to prevail, or if there is inherent in the order of things some natural right, or if, as per Alquist and Nana, we live in a Providential order, or if the very constraints of our humanity will act as a shield against the most thoroughly inhumane outcomes, then human beings might save themselves or be saved from the worst consequences of our own folly. By partisans of humanity, that is a consummation devoutly to be wished. But it is surely not to be counted upon.<\/p>\n\n\n\n<p>After all, <i>R.U.R.<\/i> is precisely a story about how the human soul, <a href=\"http:\/\/www.amazon.com\/gp\/product\/1932236848?ie=UTF8&amp;tag=the-new-atlantis-20&amp;linkCode=xm2&amp;camp=1789&amp;creativeASIN=1932236848\">to borrow Peter Lawler\u2019s words<\/a>, \u201cshines forth in and transforms all our thought and action, including our wonderful but finally futile efforts to free ourselves from nature and God.\u201d Yet the souls so exhibited are morally multifaceted and conflicted; they transform our actions with unintended consequences. 
And so the ultimate futility of our efforts to free ourselves from nature and God exacts a terrible cost \u2014 even if, as Alquist believes, Providence assures that some of what is best in us survives our demise.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Charles T. Rubin on the paradoxes of the project to program virtue<\/p>\n","protected":false},"author":1,"featured_media":20132,"template":"","article_type":[13],"noteworthy_people":[],"topics":[2266,5025,5041,2281,5015],"_links":{"self":[{"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/article\/10392"}],"collection":[{"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/article"}],"about":[{"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/types\/article"}],"author":[{"embeddable":true,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/users\/1"}],"version-history":[{"count":0,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/article\/10392\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/media\/20132"}],"wp:attachment":[{"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/media?parent=10392"}],"wp:term":[{"taxonomy":"article_type","embeddable":true,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/article_type?post=10392"},{"taxonomy":"noteworthy_people","embeddable":true,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/noteworthy_people?post=10392"},{"taxonomy":"topics","embeddable":true,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/topics?post=10392"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}