Where would we be without humanoid robots?

Humans versus robots
"Evolution doesn't care if we are happy"

Is it permissible to commit violence against humanoid robots? If the first of them demands civil rights, it will be too late, says neuroethicist Thomas Metzinger. Why humanoid robots trigger "social hallucinations" in us, and what that does to our psyche.

Mr. Metzinger, humanoid robots are already being used as concierges in Japanese hotels, and US companies recently introduced a new generation of sex dolls that can speak and react to touch. Are robots becoming more and more like us?

In areas where humans interact with machines, we will increasingly encounter such humanoid-looking interfaces. This is only a small part of robotics, but as far as these anthropomorphic interfaces are concerned, the technology is now very advanced. In the laboratory, we can measure your head in 20 minutes and create a photorealistic avatar from the data; we can give it eye movements and emotional facial expressions. If we stole audio data from your mobile phone, it could even speak like you. Over a video call, your avatar would be indistinguishable from you.
 
For years you have been concerned with the image that humans have of themselves and the images they create of themselves. As a neuroethicist, do you see problems if our ticket machines soon look like ticket agents?

I fear that in the vast majority of cases these humanoid robots will be used for consumer manipulation. After all, the robot will analyze your facial expressions and begin to understand: Is the customer bored? What does he actually want to hear?

When the computer goes crazy, we yell at it.

And can the machine use that?

We have so-called mirror neurons in the brain. When someone smiles at you, you subliminally imitate it with your own facial muscles. That can be exploited for influence. Human-like robots disguise the fact that they are primarily obliged not to our interests but to those of their programmers, just as Google's algorithm optimizes its hits not for the users but for its advertisers. And if you can't help but find a machine likable because it smiles so contagiously, you may well end up buying the upgrade it offers.
 
But I know that I'm dealing with a robot.

Maybe not. Humanoid robots trigger what I call "social hallucinations" in us. We humans have the ability to imagine that we are dealing with a conscious counterpart, even when that is not the case. There are preliminary stages of this: children and indigenous peoples often believe that nature is animate. When the computer goes crazy, we yell at it. Other people love their cars or miss their cell phones.

Sounds foolish.

But it makes sense in evolutionary terms. Imagine you are in a dark forest at night and there is a rustling in the bushes. Of course, the person who hallucinated a predator ten times too often was more likely to survive than the one who overlooked it once. In a sense, evolution rewarded paranoia. And that is why we humans so often still have irrational fears today. Evolution doesn't care if we are happy.
 
Roboy | © Roboy 2.0 - roboy.org

Nevertheless, some manufacturers deliberately avoid making their robots look too much like real people. The German "Roboy", for example, with his baby-schema head, looks more like Casper the Friendly Ghost.

If a robot looks too real, many people actually find it creepy. Researchers call this gray zone the "uncanny valley". It's like a horror movie: when we notice that something is looking at us that is actually dead, it touches on primal fears.
 
In fact, the topic of robots has preoccupied humans for thousands of years. Even in the Iliad, Hephaestus created automatic servants. How do you explain this fascination?


One motive is certainly the desire to play God a little oneself. In addition, the fear of death is probably an important factor. With transhumanism, a new form of religion has emerged that does without church and God but still offers the denial of mortality as a core function. Robots and, above all, avatars nourish the hope that we could one day defeat death by living on in artificial bodies. For some researchers, however, the greatest prize of all would be to be the first to succeed in developing a robot with human intelligence or even consciousness.
 
According to a study, 90 out of 100 experts believe that we will see this by 2070.

I wouldn't put too much stock in such forecasts. In the 1960s, people thought it would be at most another 20 years before we had machines as smart as humans. Such statements are a good way to raise research funding, and by the time they turn out to be false, the expert is no longer alive.

The boundaries between man and machine are becoming more fluid.

What is a robot anyway, and what distinguishes it from a machine?

I would define it in terms of the degree of autonomy. A machine has to be started and stopped by us; many robots can do this on their own. And robots are generally more flexible than, say, a drill: they are programmable.
 
If you define life as organized matter with the ability to reproduce, wouldn't robots that you program to build new models also be life?

No, because they don't have a metabolism. But the distinction between natural and artificial systems may no longer be so easy to make in the future. For example, when we manufacture hardware that is no longer made of metal or silicon chips, but of genetically engineered cells. It is quite possible that there will soon be systems that are neither artificial nor natural.
 
TV series like "Westworld" predict that we will treat robots like slaves. Would you agree with that?

Possibly. But you have to clearly distinguish two things: it makes a big difference whether these systems are merely intelligent or whether they are actually capable of suffering. Can you do violence to Dolores, the android from "Westworld"? The real ethical problem is that such behavior would damage our self-model.
 
In what way?

It's similar with virtual realities. If criminal acts become consumable in this way, the threshold to self-traumatization can easily be crossed. You brutalize yourself. It makes a completely different impact whether violent pornography, for example, is viewed two-dimensionally or can be experienced immersively, in three dimensions and haptically. It is quite possible that disrespectful treatment of humanoid robots leads to generally antisocial behavior. In addition, humans have always used the latest technical systems as metaphors to describe themselves. When the first clocks and mechanical figures appeared around 1650, Descartes described the body as a machine. If we now encounter humanoid robots everywhere, the question arises whether we will come to see ourselves more as genetically determined automata, as biorobots. The boundaries between man and machine are becoming more fluid.

"How? Machine? This is my friend!"

What are the consequences for everyday life?

Imagine a generation growing up playing with robots or avatars so lifelike that the distinction between living and dead, reality and dream, no longer develops in the brain to the same extent as it did in ours. That does not have to mean these children will automatically have less respect for life. But I can well imagine that in 20 years someone in an ethical dispute will pound the table and say: "Now let's calm down, this is just a machine!" And the answer will be: "What? Machine? This is my friend! Always has been."
 
Philosophers like Julian Nida-Rümelin therefore demand that we not build humanoid robots at all.

A more nuanced approach would be to ask whether there are actually sensible uses. Humanoid care robots could, for example, give people with dementia the feeling that they are not alone. Of course, one could argue about whether that is not, in turn, degrading or patronizing toward the patient.
 
What limits would you draw?

The decisive point for me is not what the robots look like, but whether they are conscious, whether they are capable of suffering. Therefore, in terms of research ethics, I am resolutely against equipping machines with a conscious self-model, a sense of self. I know that is far in the future. But when the first robot calls for civil rights, it will be too late. If you then pull the plug, it could be murder. I want a moratorium on synthetic phenomenology.
 
So in the medium term we would need something like a global code of ethics for robotics.

Yes. It is absolutely clear that the big players would ignore it, just as the US ignores the International Criminal Court today. I find it particularly remarkable that most of the discussions already being held on the subject are not about ethics at all, but only about legal certainty and liability.

Artificial intelligence also confronts us with ourselves.

Because otherwise, in case of doubt, things could get expensive?

Correct. There really are a lot of questions. Imagine a Google car entering a roundabout, along with three other self-driving cars and two conventional cars. Then a dog runs into the street. The cars recognize that an accident is imminent and now have to calculate the option with the least damage. What is an animal's life worth? What is the life of a non-Google customer worth? What if one of those involved is a child or a pregnant woman? Does that count double? And what about in Muslim countries where women have only limited rights? Does the number of surviving men matter more there? What if there are Christians in the other car? That's the good thing about robotics: it finally forces us to take a clear stand on all these questions. Artificial intelligence also confronts us with ourselves.
 
The philosopher Éric Sadin wrote in “Die Zeit” that relinquishing decision-making power to algorithms was an attack on the human condition.

It is indeed a historically unique process that we are ceding autonomy to machines. The deeper problem is that every single step along the way can be rational, even ethical. But of course there is a risk that the development slips away from us.
 
What are the consequences?

Military robotics is a particular problem. Drones will soon be so fast and intelligent that it no longer makes sense to consult an officer in a combat situation. An arms race begins, and at some point the systems will have to react to one another autonomously. We already have this with trading systems on the stock exchange: in 2010 there was the flash crash, which produced losses in the billions within seconds. Who knows what losses a military flash crash would produce.
 
The nightmare scenario that machines could one day become dangerous to us is a popular theme in science fiction. Meanwhile, respected researchers such as Nick Bostrom of Oxford University are also warning of an uncontrollable superintelligence.

Of course, one can speculate about whether we are just the ugly biological stirrup-holders for a new stage of evolution. But I think the idea that robots will start building their own factories and lock us up in reservations is completely absurd. On the other hand, new kinds of problems could indeed arise if an ethical superintelligence that only wants what is best for us comes to the conclusion that it would be better for us not to exist, because our lives are predominantly painful. The bigger problem at the moment is certainly not that evil machines will overrun us, but that this technology opens new gateways for controlling us remotely. If algorithms begin to independently optimize activity in social networks or the channels through which news flows, political processes could be manipulated without our noticing.
 
In "Rise of the Robots", the Silicon Valley developer Martin Ford fears above all economic consequences.

Yes, there are studies suggesting that 47 percent of jobs will be lost by 2030. The super-rich and their analysts take this seriously because they see that the introduction of robots will make them richer and the poor poorer. And they also see that the countries making the transition to artificial intelligence now will, in historical terms, ultimately outperform those that fail to. In the past, people naively said: "Whenever new technologies emerged in the history of mankind, new jobs were created at the same time." This time, that could turn out to be wishful thinking.
 

Thomas Metzinger | © IGU press office Thomas Metzinger

Thomas Metzinger, 59, receives visitors in an office at the University of Mainz. Metzinger heads the department of Theoretical Philosophy and is director of the MIND Group and the Neuroethics Research Center. From 2005 to 2007 he was president of the German Society for Cognitive Science. His main focus is the study of human consciousness. Metzinger holds that we have neither a self nor a soul, only a "self-model" in our head that we do not experience as a model, a theory he explains in terms accessible to lay readers in his book "Der Ego-Tunnel" (Piper). Metzinger advocates linking the philosophy of mind with brain research and has been engaging with artificial intelligence for more than a quarter of a century. He himself owns a robot vacuum cleaner, about which he is very enthusiastic. But he has not yet given it a name.

Author

Moritz Honert is a member of the Sunday editorial team at Der Tagesspiegel. A sociologist by training, he mainly writes on topics from culture, science and society.

Copyright: Courtesy of the Tagesspiegel. Published on Sunday, September 3, 2017 / NO. 23 224
