{"id":10152,"date":"2008-03-24T00:00:00","date_gmt":"2008-03-24T04:00:00","guid":{"rendered":"http:\/\/localhost\/thenewatlantis.com\/publications\/till-malfunction-do-us-part"},"modified":"2021-09-24T09:56:49","modified_gmt":"2021-09-24T13:56:49","slug":"till-malfunction-do-us-part","status":"publish","type":"article","link":"https:\/\/www.thenewatlantis.com\/publications\/till-malfunction-do-us-part","title":{"rendered":"Till Malfunction Do Us Part"},"content":{"rendered":"\n<p class=\"has-drop-cap\">In a recent issue of the journal <em>Psychological Science<\/em>, researchers from the University of Chicago and Harvard reported that people are more likely to anthropomorphize animals and gadgets when they are lonely. \u201cPeople engage in a variety of behaviors to alleviate the pain of [social] disconnection,\u201d the authors write, including \u201cinventing humanlike agents in their environment to serve as potential sources of connection.\u201d This finding is hardly surprising, and is not unrelated to one of the favorite objectives of the budding consumer robotics industry: manufacturing \u201ccompanions\u201d for the isolated elderly.<\/p>\n\n\n\n<p>Japan \u2014 the country with the world\u2019s highest percentage of elderly people and lowest percentage of children \u2014 has been at the forefront of this domestic-robot trend. In 2005, Mitsubishi released its \u201cWakamaru\u201d robot to considerable fanfare. The three-foot-tall machine, its appearance something like a yellow plastic snowman, was designed to provide limited home care to the aged. It can \u201crecognize\u201d up to ten human faces, respond to voice commands, deliver e-mail and weather forecasts from the Internet, wheel around after people in their homes, and contact family members or health care personnel when it detects a potential problem with its ward.<\/p>\n\n\n\n<p>Despite Mitsubishi\u2019s high expectations, the first batch of one hundred Wakamaru did not sell well. 
Priced at $14,500 apiece, the robot drew only a few dozen orders, and Mitsubishi then faced cancellations and returns as purchasers realized it couldn\u2019t clean or cook, or do much of anything. Customers were amused to find the machine unexpectedly \u201cwatching television\u201d or \u201cdancing,\u201d but were frustrated by its limited vocabulary and actual capabilities. Production was called off after three months, and the remaining Wakamaru now work as rentable receptionists \u2014 a common fate for first-generation humanoid robots, too expensive for the general market.<\/p>\n\n\n\n<p>In the past decade, other robots intended for the elderly made their debuts in nursing homes, including \u201cParo,\u201d a furry, white, squawking baby seal made and sold in Japan. In videos viewable online, it is plain that nursing-home residents, including those suffering from advanced Alzheimer\u2019s, take comfort in watching, touching, talking to, singing at, and cleaning Paro. Like the cats and dogs sometimes used in therapy \u2014 but with less unpredictability and mess \u2014 Paro\u2019s robotic twitching and yelping seem to evoke a calm, warm focus in depressed, lonely, and ailing patients. Other robots provoke similar reactions, like \u201cMy Real Baby,\u201d a robotic toy doll. \u201cThese are used to soothe individuals,\u201d according to a 2006 paper by three M.I.T. scholars:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>The doll helps to quell the resident\u2019s anxiety. After a period of time (usually less than an hour), [the nursing home director] will return to the resident, take back the doll, and return it to her office. Often, when she takes the doll back, its mouth is covered in oatmeal, the result of a resident attempting to feed it. 
The reason that she takes the doll back, she says, is that \u201ccaring\u201d for the doll becomes too much to handle for the resident.<\/p><\/blockquote>\n\n\n\n<p>It is difficult to fault nursing home directors who, out of compassion, offer sad patients the comfort of interacting with robotic toys. Other uses of today\u2019s interactive robots seem essentially benign, too \u2014 like the use of \u201cNico\u201d and \u201cKASPAR,\u201d child-size humanoid robots, as tools for the social training of autistic children, or the employment of the industrious robotic guard dragon \u201cBanryu,\u201d which prowls the house smelling for smoke and looking for intruders.<\/p>\n\n\n<div class=\"lazyblock-discussed-fmhAJ wp-block-lazyblock-discussed\"><div class=\"block-tna-discussed block-offset-float font-calluna\">\r\n  <div class=\"bg-almost-white py-8 px-6\">\r\n          <div class=\"font-bold text-lg text-center mb-2\">\r\n        Reviewed in this article      <\/div>\r\n    \r\n                <figure>\r\n        <a href=\"\">\r\n          <img decoding=\"async\" class=\"mx-auto block object-contain\" style=\"height: 16rem\" \r\n               src=\"https:\/\/www.thenewatlantis.com\/wp-content\/uploads\/2020\/09\/Love-and-Sex-With-Robots.jpg\" \/>\r\n        <\/a>\r\n      <\/figure>\r\n        \r\n          <div class=\"my-3 links-no-underline links-hover italic text-base text-center leading-tight\">\r\n        <a href=\"\">\r\n          Love and Sex with Robots: The Evolution of Human-Robot Relationships        <\/a>\r\n      <\/div>\r\n    \r\n          <div class=\"text-grey link-author text-base text-center\">\r\n        David Levy      <\/div>\r\n    \r\n    <div class=\"text-sm text-center mt-2\">\r\n      Harper ~ 2007 ~ 334 pp.<br>$24.95 (cloth) $14.95 (paper)    <\/div>\r\n  <\/div>\r\n<\/div><\/div>\n\n\n<p>But some analysts predict that we are nearing a day when human interactions with robots will grow far more intimate \u2014 an argument proffered in its 
most exaggerated form in <em><a title=\"Love and Sex with Robots\" href=\"http:\/\/www.amazon.com\/exec\/obidos\/ASIN\/0061359750\/the-new-atlantis-20\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>Love and Sex with Robots<\/strong><\/a><\/em>, a new book that contends that by the year 2050, people will be marrying robots. The author, David Levy, is a British artificial-intelligence entrepreneur and the president of the International Computer Games Association. In the book, his Ph.D. dissertation from the University of Maastricht, Levy first explains why people fall in love with one another \u2014 a great and timeless mystery which, with the aid of social scientific formulae and calibrated ten-point checklists, he helpfully distills into twenty-one illuminating pages. He then sets out to explain why the blind rascal Cupid might have as much success \u2014 or more \u2014 striking passion between humans and machines. With such astute observations as \u201c\u2018like\u2019 is a feeling for someone in whose presence we feel good,\u201d Levy lays out the potential for robots to exhibit \u201cbehavior patterns\u201d that will induce people to fall for them, heart and soul:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>A robot who wants to engender feelings of love from its human might try all sorts of different strategies in an attempt to achieve this goal, such as suggesting a visit to the ballet, cooking the human\u2019s favorite food, or making flattering comments about the human\u2019s new haircut, then measuring the effect of each strategy by conducting an fMRI scan of the human\u2019s brain. When the scan shows a higher measure of love from the human, the robot would know that it had hit upon a successful strategy. 
When the scan corresponds to a low level of love, the robot would change strategies.<\/p><\/blockquote>\n\n\n<p>These made-to-order lovers, Levy says, will look like movie stars, write symphonies better than Mozart, possess a \u201csuperhuman-like consciousness,\u201d converse with almost-infinite intelligence in any given language, demonstrate surpassing sensitivity to their owners\u2019 every thought and need, and at a moment\u2019s notice will be \u201cin the mood.\u201d Soon to be available for purchase at a location near you, their entire virtual existences will be devoted to making even the most luckless lover feel like a million bucks.<\/p>\n<p>For those who desire absolute submissiveness in a mate, robots, with their admittedly \u201cunsophisticated\u201d personalities, will offer the logical solution (assuming they are not subject to the same technical frustrations and perversities endemic to all other appliances). But for those who feel the need for za-za-zoom, the love-bots of the future will be programmed to be feisty:<\/p>\n\n\n<blockquote class=\"wp-block-quote\"><p>Surprises add a spark to a relationship, and it might therefore prove necessary to program robots with a varying level of imperfection in order to maximize their owner\u2019s relationship satisfaction&#8230;. This variable factor in the stability of a robot\u2019s personality and emotional makeup is yet another of the characteristics that can be specified when ordering a robot and that can be modified by its owner after purchase. 
So whether it is mild friction that you prefer or blazing arguments on a regular basis, your robot\u2019s \u201cfriction\u201d parameter can be adjusted according to your wishes.<\/p><\/blockquote>\n\n\n<p>Levy admits to finding it a little \u201cscary\u201d that robots \u201cwill be better husbands, wives, and lovers than our fellow human beings.\u201d But in the end, the superiority of machines at pitching woo needn\u2019t threaten humans: they can be our mentors, our coaches, our sex therapists \u2014 with programmable patience, sympathy, and \u201chumanlike sensitivity.\u201d<\/p>\n<p>While Levy\u2019s thesis is extreme (and terribly silly), many of its critical assumptions are all too common. It should go without saying that the attachment a person has to <em>any<\/em> object, from simple dolls to snazzy electronics, says infinitely more about <em>his<\/em> psychological makeup than the object\u2019s. Some roboticists are very clear on this distinction: Carnegie Mellon field robotics guru William \u201cRed\u201d Whittaker, who has \u201cfathered\u201d (as writer Lee Gutkind puts it in his 2007 book <strong><em><a title=\"Almost Human\" href=\"http:\/\/www.amazon.com\/gp\/product\/0393058670\/103-3773067-0063067?ie=UTF8&amp;tag=the-new-atlantis-20&amp;linkCode=xm2&amp;camp=1789&amp;creativeASIN=0393058670\" target=\"_blank\" rel=\"noopener noreferrer\">Almost Human<\/a><\/em><\/strong>) more than sixty robots, advises his students and colleagues not to form emotional connections with them. \u201cThey certainly don\u2019t have the same feelings for you,\u201d Whittaker says. \u201cThey are not like little old ladies or puppies. They are just machines.\u201d<\/p>\n<p>The very premise underlying the discipline of sociable robotics, however, is that a machine can indeed mean something more. 
The developers of these machines capitalize on the natural sociability of <em>humans<\/em>, our inborn inclinations to empathize with, nurture, or confide in something generating lifelike cues, to create the illusion that a lump of wires, bits, and code is sentient and friendly. Take, for example, the famous case of the cartoon-cute robot \u201cKismet\u201d developed by Cynthia Breazeal at M.I.T. in the 1990s. Breazeal designed Kismet to interact with human beings by wiggling its eyebrows, ears, and mouth, reasoning that if Kismet were treated as a baby, it would develop like one. As she put it in a 2003 interview with the <em>New York Times<\/em>, \u201cMy insight for Kismet was that human babies learn because adults treat them as social creatures who can learn; also babies are raised in a friendly environment with people. I hoped that if I built an expressive robot that responded to people, they might treat it in a similar way to babies and the robot would learn from that.\u201d The <em>Times<\/em> reporter naturally asked if Kismet ever learned from people. 
Breazeal responded that as the <em>engineers <\/em>learned more about the robot, they were able to update its design for more sophisticated interaction \u2014 a \u201cpartnership for learning\u201d supposedly indicative of the emotional education of Kismet, whose active participation in that partnership is glaringly absent from Breazeal\u2019s account.<\/p>\n<p>It is important, Breazeal emphasizes in her published dissertation <strong><em><a title=\"Designing Sociable Robots\" href=\"http:\/\/www.amazon.com\/gp\/product\/0262524317\/103-3773067-0063067?ie=UTF8&amp;tag=the-new-atlantis-20&amp;linkCode=xm2&amp;camp=1789&amp;creativeASIN=0262524317\" target=\"_blank\" rel=\"noopener noreferrer\">Designing Sociable Robots<\/a><\/em><\/strong>, \u201cfor the robot to <em>understand its own self<\/em>, so that it can socially reason about itself in relation to others.\u201d Toward this goal of making conscious robots, some researchers have selected markers of self-understanding in human psychological development, and programmed their machines to achieve those specific goals. For example, Nico, the therapeutic baby bot, can identify itself in a mirror. (Aside from human beings, only elephants, apes, and dolphins show similar signs of self-recognition.) Kismet\u2019s successor, \u201cLeo,\u201d can perform a complicated \u201ctheory of mind\u201d cooperation task that, on the surface, appears equivalent to the psychological development of a four- or five-year-old. 
But these accomplishments, rather than demonstrating an advanced awareness of mind and self, are choreographed with pattern recognition software, which, though no small feat of coding cleverness, has none of the significance of a baby or an elephant investigating himself in a mirror.<\/p>\n<p>Still, many artificial intelligence (AI) aficionados \u2014 including David Levy \u2014 hold that the interior state or lack thereof is not important; the outward markers of intelligence should be sufficient indicators of it. AI patriarch Alan Turing famously proposed in 1950 a test in which a machine would be deemed intelligent if a human conversing with the machine and another human cannot distinguish the two. (The implications and flaws of Turing\u2019s test were unpacked at length in these pages by Mark Halpern [\u201c<a title=\"The Trouble with the Turing Test\" href=\"\/publications\/the-trouble-with-the-turing-test\">The Trouble with the Turing Test<\/a>,\u201d Winter 2006].) Levy submits that this test be applied not just to machine intelligence but also to emotions and other aspects of personality: If a machine <em>behaves<\/em> as though it has feelings, who\u2019s to say it doesn\u2019t? Thus he predicts that by the year 2025, robots will not only be fully at home in the human emotional spectrum, but will even \u201cexhibit nonhuman emotions that are peculiar to robots\u201d \u2014 an absurdly unserious claim. (One robot frequently used in studies of emotion simulation is \u201cFeelix\u201d the Lego humanoid, designed to express five of biological psychologist Paul Ekman\u2019s six \u201cuniversal emotions.\u201d Curiously, disgust, the sixth emotion, was deliberately excluded from Feelix\u2019s repertoire.)<\/p>\n<p>When explicitly defended, all such claims rest on the premise that human feelings are themselves nothing but the product of sophisticated biochemical mechanics. 
From the perspective that physiological processes and responses to stimuli make up our emotions, \u201creal\u201d feeling is as available to robots as to living beings. \u201cEvery person I meet is &#8230; a machine \u2014 a big bag of skin full of biomolecules interacting according to describable and knowable rules,\u201d says Rodney Brooks, former director of the M.I.T. Artificial Intelligence Laboratory, in his 2002 book <strong><em><a title=\"Flesh and Machines\" href=\"http:\/\/www.amazon.com\/gp\/product\/037572527X\/103-3773067-0063067?ie=UTF8&amp;tag=the-new-atlantis-20&amp;linkCode=xm2&amp;camp=1789&amp;creativeASIN=037572527X\" target=\"_blank\" rel=\"noopener noreferrer\">Flesh and Machines: How Robots Will Change Us<\/a><\/em><\/strong>. \u201cWe, all of us, overanthropomorphize humans, who are after all mere machines.\u201d<\/p>\n<p>One might question how those who accuse <em>anthropos<\/em> of \u201coveranthropomorphizing\u201d himself propose to make convincingly human machines, with so little understanding of what constitutes humanity. Robots, after all, are created in the image of their programmers. Kathleen Richardson, a doctoral candidate in anthropology at Cambridge, spent eighteen months in Brooks\u2019s lab observing the interaction between the humans and the robots and \u201cfound herself just as fascinated by the roboticists at M.I.T. as she was by the robots,\u201d as Robin Marantz Henig reported in the <em>New York Times<\/em>:<\/p>\n\n\n<blockquote class=\"wp-block-quote\"><p>She observed a kinship between human and humanoid, an odd synchronization of abilities and disabilities. She tried not to make too much of it. \u201cI kept thinking it was merely anecdotal,\u201d she said, but the connection kept recurring. Just as a portrait might inadvertently give away the painter\u2019s own weaknesses or preoccupations, humanoid robots seemed to reflect something unintended about their designers. 
A shy designer might make a robot that\u2019s particularly bashful; a designer with physical ailments might focus on the function \u2014 touch, vision, speech, ambulation \u2014 that gives the robot builder the greatest trouble.<\/p><\/blockquote>\n\n\n<p>One can just imagine a society populated by robo-reflections of the habits, sensitivities, and quirks of engineers. (There are, of course, simple alternatives: Lee Gutkind shares the telling little fact that at Carnegie Mellon, one saucy \u201croboceptionist\u201d called \u201cValerie,\u201d which likes to dish about its bad dates with vacuum cleaners and sessions with a psychotherapist, was programmed by computer scientists \u2014 but with a storyline designed by the School of Drama kids.)<\/p>\n<p>The latter half of Levy\u2019s book, a frighteningly encyclopedic treatise on vibrators, prostitution, sex dolls, and the short leap from all of that to sex with robots, scarcely deserves mention. Levy begins it, however, with the familiar story of Pygmalion, in a ham-handed act of mythical misappropriation.<\/p>\n<p>The example of Pygmalion, though, is inadvertently revealing because its true significance is precisely the reverse of what Levy intends. In Ovid\u2019s rendition of the tale, King Pygmalion is a sculptor, surrounded in the court by \u201cstrumpets\u201d so bereft of shame that \u201ctheir cheeks grew hard, \/ They turned with little change to stones of flint.\u201d Disgusted by their behavior, he thoroughly rejects womankind and carves himself a statue \u201cmore beautiful than ever woman born.\u201d Desiring his own masterwork, he kisses it, caresses it, and speaks to it as to his darling. In answer to his fervent supplication for \u201cthe living likeness\u201d of his ivory girl, Venus brings the ivory girl herself to life, and she bears Pygmalion a daughter. 
Two generations later, their strange union comes to a sad fruition, as Pygmalion\u2019s descendants collapse into incest and destruction.<\/p>\n<p>Levy shallowly wants us to see in Pygmalion\u2019s example only that human nature is what it always has been \u2014 that today\u2019s attractions have ancient parallels; he glibly notes that \u201csex with human-like artifacts is by no means a twenty-first-century phenomenon.\u201d But if anything, Pygmalion\u2019s story is a warning against just the temptation Levy dangles before us. Even as Pygmalion is repulsed by the stony shamelessness of the women of Cyprus, his stony unforgivingness of the flaws of living human beings leaves him with a stone as the center of his desire. Pursuing this unnatural union leads his family into ruin, the final result of the terrible inversion of erotic love between creator and creation.<\/p>\n<p>Levy mentions procreation only in passing, merely noting that the one shortcoming of \u201chuman-robot sexual activity\u201d is that children are not a natural possibility. He goes on to suggest that the robot half of the relationship might contribute to reproduction by designing other robots inspired by its human lover. What it might mean, for example, for an adopted or artificially conceived child to grow up with a robot for a \u201cparent\u201d is never once considered.<\/p>\n<p>There are, however, scattered about Levy\u2019s book half-baked insights about love, most notably its connection to imperfection and mortality. \u201cSome humans might feel that a certain fragility is missing in their robot relationship,\u201d he muses \u2014 but hastily adds that fragility, like every other necessary or desirable feature, can just be simulated. More serious, however, is his concession that the \u201cone enormous difference\u201d between human and robotic love is that a human is irreplaceable. 
This means, he says, that a human need never sacrifice himself to protect his robot, because a replica will always be available; its \u201cconsciousness,\u201d backed up on a hard drive somewhere, can always be restored.<\/p>\n<p>Levy fails to see the trouble with his fantasy, because he begins by missing altogether the meaning of marriage, sex, and love. He errs not in overestimating the potential of machines, but in underrating the human experience. He sees only matter in motion, and easily imagines how other matter might move better. He sees a simple physical challenge, and so finds a simple material solution. But there is more to life than bodies in a rhythmic, programmed dance of \u201cliving likeness.\u201d That which the living likeness is like is far from simple, and more than material. Our wants and needs and joys and sorrows run too deep to be adequately imitated. Only those blind to that depth could imagine they might be capable of producing a machine like themselves. But even they are mistaken.<\/p>","protected":false},"excerpt":{"rendered":"<p>Caitrin Nicol on predictions of robotic 
intimacy<\/p>\n","protected":false},"author":1,"featured_media":7512,"template":"","article_type":[4647],"noteworthy_people":[],"topics":[2272,5013],"_links":{"self":[{"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/article\/10152"}],"collection":[{"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/article"}],"about":[{"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/types\/article"}],"author":[{"embeddable":true,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/users\/1"}],"version-history":[{"count":2,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/article\/10152\/revisions"}],"predecessor-version":[{"id":23153,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/article\/10152\/revisions\/23153"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/media\/7512"}],"wp:attachment":[{"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/media?parent=10152"}],"wp:term":[{"taxonomy":"article_type","embeddable":true,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/article_type?post=10152"},{"taxonomy":"noteworthy_people","embeddable":true,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/noteworthy_people?post=10152"},{"taxonomy":"topics","embeddable":true,"href":"https:\/\/www.thenewatlantis.com\/wp-json\/wp\/v2\/topics?post=10152"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}