Saturday, March 01, 2008

Is man-machine marriage on the horizon?

Maybe so.

I find the idea ridiculous, but David Levy's argument from inevitability is persuasive.

I certainly expect that such a silly notion will not be long-lived (although it could enjoy a renaissance once AIs become convincing), but I also suspect there's enough momentum behind the Culture of Death's ambiguization of marriage that someone will be getting a marriage certificate signed before all's said and done.

Said Levy:

"If the alternative is that you are lonely and sad and miserable, is it not better to find a robot that claims to love you and acts like it loves you? Does it really matter, if you’re a happier person?"


Of course it matters--an android (or gynoid, I suppose) isn't going to fake love any better than an otherwise well-adjusted real human can fake it--but if you can fool yourself into thinking you're happy, into distracting yourself from genuine pain by playing house by yourself, with all the trappings that go along with house-playing, then I would be surprised if no one tried it. Marriage with animals has been tried--or aped, rather, if you'll pardon the incongruent pun--at least once. Robots are already being made to perform certain household functions, and in some parts of the world, lifelike and life-sized dolls are already treated as surrogate girlfriends. Having one that can cook dinner and scrub toilets is still a ways off--though Levy predicts we'll see it happening by the middle of the century--but having a conversation with a false person you've programmed, instead of merely imagining the conversation, will seem pretty appealing to affluent and very lonely people.

Levy naturally assumes that people marry just for happiness, but when a robotic "marriage" goes south, it will probably seem as natural as any other human marriage that has failed. It will be interesting to see how many people with failed robotic spouses treat the loss with all the aplomb of a cell phone upgrade, and how many show the same potentially disturbing overattachment as a guy getting wistful over trading in his first car.

Levy continues:

"It’s not that people will fall in love with an algorithm but that people will fall in love with a convincing simulation of a human being, and convincing simulations can have a remarkable effect on people."


I wonder: does "remarkable" necessarily equal "good"? No, never mind; I know the answer already.


Rutgers University biological anthropologist Helen Fisher, renowned for her studies on romantic love, suggests that love seems dependent on three key components: sex, romance and deep attachments. These components, she remarks, “can be triggered by all kinds of things. One can trigger the sex drive just by reading a book or seeing a movie—it doesn’t have to be triggered by a human being. You can feel a deep attachment to your land, your house, an idea, a desk, alcohol or whatever, so it seems logical that you can feel deeply attached to a robot. And when it comes to romantic love, you can fall madly in love with someone who doesn’t know you exist. It shows how much we want to love.”


Dr. Fisher has a point, but the human drive to love doesn't mean that a three-element reductionism is proper to a well-integrated human. That people will attempt to have real relationships with fancy but stupid machines is--well, people anthropomorphize objects all the time, so it's precedented, first of all--not so indicative of the success of the robo-spouse industry as of the level of dysfunction that can be found in humanity.

It's cute when a child is attached to a doll or invents an imaginary friend. For a child, it's also not unhealthy: the young person becomes accustomed to taking care of a doll and hopefully, by imitating the real mother, will carry some foundation of parenting forward into adulthood; the child with an invisible playmate exercises the imagination, which is a faculty that requires exercise like any muscle and does not cease to exist at the onset of adulthood.

When an adult becomes attached to a doll, even a mobile, speaking one, it is somewhat disturbing. Adult skills are learned and practiced in youth so that they will be used in earnest in adulthood. As the youth matures, the exercise becomes more sophisticated and realistic, and playing at adult behavior for the sake of playing should diminish as the lessons to be learned by simulating real social activity dwindle, and real adult behavior can be engaged more and more.

An artist will tell you that you have to emphasize the basics in rehearsal, but rehearsal without performance is empty. It's contraceptive, masturbatory.

People who would strive for a relationship with a lump of metal and latex will be expressing a genuine need for love, but machines aren't a solution. Whether the reason is fear, habit, or lack of thought about what marriage is really about, these people wouldn't be ready for a real relationship, and the solution is to get them ready, not give them a pressure release valve that spares them the risks of living.

If living and loving are like muscles to be exercised, and ones that are atrophied from decades of neglect, should they not be trained as they would have been in childhood? I will not deny the value of some coaching here. Older siblings and friends and therapists can all provide some tools for learning to deal with other, real people. Can androids be used to this end? Maybe, but they'd need to do more than convincingly simulate humanity in order to really do people some good, and it would be a lot more cost-effective--if you want to be utilitarian about it--to simply employ actual humans. If someone isn't to the point where he can deal normally with real people even under limited conditions, then he's got more fundamental problems than being shy around girls or only comfortable around abstracted personalities on a computer screen, and giving him the means to nurse his pathology is not going to make it better. It just spares him the trouble of confronting it.

As long as he's desperately pretending to live the American Dream, though, who are we to judge, right?