Technology is taking over our lives

Jaana Müller-Brehm

Jaana Müller-Brehm is a social scientist who takes on analytical, editorial and coordination tasks at iRights.Lab in the areas of the public sphere, democracy, human rights, society and education, always in connection with digitization.

Philipp Otto

Philipp Otto is the founder and director of the think tank iRights.Lab and one of the leading digitization experts in Germany. He is a lawyer and was a visiting researcher at the Berkman Center for Internet & Society at Harvard University. He also heads the Digital Life Innovation Office of the BMFSFJ as well as various other high-profile federal projects at other institutions. He has published a large number of books, essays and strategic analyses at the interface of law, technology, society and politics in the context of digitization.

Michael Puntschuh

Michael Puntschuh is a social scientist and project coordinator at iRights.Lab. His areas of work include human rights online, cyberspace governance and digital ethics, as well as other social and legal issues relating to digital technologies.

Technological change is permanently altering human life and work, and in doing so it raises numerous ethical questions. How can advancing digitization be shaped for the benefit of society and its members? Which values should it be based on? Companies and IT professionals cannot answer these questions on their own. That is why politics and society at large are increasingly engaging with them.

Sophia is a robot developed by Hanson Robotics. She has human-like gestures and facial expressions, can answer questions and hold simple conversations on predefined topics. Here she welcomes visitors to the Synergy Global Forum in Moscow, 2018. (© picture-alliance/dpa, Mikhail Tereshchenko)

The digital transformation is not over: numerous technologies hold untapped potential. This is one more reason why future developments are difficult to predict. Views of the further development of technologies are often shaped by a technology-deterministic perspective, according to which digitization inevitably influences our society and technology determines us humans (see also the chapter Society, Culture and Education). But it is we humans who decide how we develop and use technologies. We influence how they affect the lives of individuals and social interaction. Accordingly, there is room for maneuver that we can take advantage of and that must be shaped by social, political and legal processes. The aim should be to shape technological change in such a way that it corresponds to our ideas of a good life and of living together.

Assessing technology ethically

The term "digital ethics" covers questions that deal with how we can shape the advancing digitization for the benefit of society and its members and what we should pay attention to. Among other things, it goes back to the business IT specialist and philosopher Rafael Capurro. Discourses on digital ethics deal with how individuals and organizations use digital media and technologies in different social contexts, what problems and conflicts arise and how these could be resolved. This includes, for example, the following questions: Which values ​​are we guided by when designing technologies? What opportunities does technically optimized assistance hold for our society and under what conditions can we use it? How do we ensure that automated decision-making systems also follow rules that we apply to human activity? Who takes responsibility for the decisions of such systems?

The "digital ethics" has its roots especially in technology, information and media ethics. As early as the middle of the last century, scientists working on information technologies were concerned with the ethical and social effects of their work. The first in-depth discussions on the topic also took place under Hackers and Hackers held in the 1980s.

Ethical - and thus essentially philosophical - questions are often closely linked to social, cultural, legal, political, economic and ecological aspects. After all, once ethical and social goals have been set, the question arises of how they can be achieved: through economic support, broad social discourse, voluntary commitments or new laws. If this link is not made, there is a risk that we discuss ethical issues only on a very general level and fail to take advantage of the social and political scope for action.
Perspectives and levels of ethical issues (© bpb)

Meanwhile, large digital companies dominate the design of technologies. They orient themselves mainly towards economic criteria and less towards ethical considerations, as the Italian political scientist Maria Luisa Stasi explains using the example of successful providers of social networks: their automated moderation practices are not primarily geared towards promoting diversity and pluralism of opinion, but towards generating their own economic revenue. For other standards to gain validity as well, the socio-political sphere could provide criteria to be incorporated into the design of algorithmic moderation systems. These criteria should be based on the common good and fundamental social values, including the preservation of diversity in public debate. This, in turn, would require a broad social discourse that takes many different opinions into account.
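How such a criterion might be built into a system's design can be pictured in a few lines of code. The following Python sketch is purely illustrative and assumes invented post attributes and weights, not any real platform's ranking: a score driven only by predicted engagement is supplemented with a diversity term that favors sources the user has not yet seen.

```python
# Illustrative only: invented post attributes and weights, not any real
# platform's ranking system.
from dataclasses import dataclass

@dataclass
class Post:
    source: str
    predicted_engagement: float  # e.g. expected clicks or watch time

def rank(posts: list[Post], shown_sources: set[str],
         diversity_weight: float) -> list[Post]:
    """Rank posts; diversity_weight boosts sources the user has not yet seen."""
    def score(p: Post) -> float:
        novelty = 1.0 if p.source not in shown_sources else 0.0
        return p.predicted_engagement + diversity_weight * novelty
    return sorted(posts, key=score, reverse=True)

posts = [Post("outlet_a", 0.9), Post("outlet_a", 0.8), Post("outlet_b", 0.5)]

# Pure engagement logic (diversity_weight = 0) vs. a diversity criterion:
print([p.source for p in rank(posts, {"outlet_a"}, diversity_weight=0.0)])
# ['outlet_a', 'outlet_a', 'outlet_b']
print([p.source for p in rank(posts, {"outlet_a"}, diversity_weight=0.6)])
# ['outlet_b', 'outlet_a', 'outlet_a']
```

The point of the sketch is that the ranking changes as soon as the diversity criterion carries weight; which criteria carry how much weight is precisely the design decision that social and political discourse could inform.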

Currently, the discourse on digital-ethical issues is also growing in politics and society. New discussion forums and committees are being created that develop codes of ethics, ethical guidelines or catalogs of values, principles and quality criteria. The first concrete definitions of digital-ethical norms at the global level were created at the "World Summit on the Information Society" in Geneva in 2003. The aim of this United Nations-affiliated conference is to create an information society that is both development-oriented and inclusive and that fully respects the Universal Declaration of Human Rights.

In this context, the German Federal Government, for example, set up the Data Ethics Commission in the summer of 2018. It prepared an expert report on ethical guidelines to be observed in particular for algorithmic systems and their regulation. At the end of 2019, it published an extensive paper recommending, among other things, that ethical criteria be negotiated and established at the European level.

In the various discourses and projects, fundamental values that digitization should be based on are mentioned again and again. These include openness, transparency, justice, tolerance, equality, security, freedom, responsibility and pluralism. These broadly shared social values play a role not only in relation to the effects of advancing digitization on various areas of life. Rather, the challenge is to apply them to the technologies themselves - that is, to how we conceive, design and use them, taking these central values into account as comprehensively as possible. The American technology ethicist Joanna Bryson, for example, addresses this in connection with artificial intelligence.

"Digital ethics" using the example of technological trends

"Digital ethics" can offer standards by which we judge technological developments. This can be illustrated in particular by the technologies that are currently causing discussions on digital ethics because they are increasingly being used and their fields of application are likely to expand.

Algorithmic systems are one such example. In the US judicial system, for instance, special software analyzes profile data of criminal offenders and determines a "crime risk score" for each individual. This value feeds into judges' decisions when they consider whether a prisoner can be released early on parole or how long a prison sentence should be in the event of a guilty verdict. Another field is human resources: algorithmic systems pre-filter applications for a specific position. The HR department then receives only the candidates the software deems suitable.
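What such a pre-filter does can be shown with a minimal sketch. The following Python example is hypothetical: the criteria, weights and threshold are invented for illustration and do not describe any real scoring product.

```python
# A purely hypothetical rule-based pre-filter of the kind described above.
# Criteria, weights and the threshold are invented for illustration; they do
# not describe any real scoring or screening product.

def score_application(years_experience: float,
                      degree_level: int,      # 0 = none, 1 = bachelor, 2 = master or higher
                      skills_matched: int) -> float:
    """Combine fixed, weighted criteria into a single suitability score."""
    return 2.0 * skills_matched + 1.0 * years_experience + 1.5 * degree_level

THRESHOLD = 12.0  # applications at or above this score reach the HR department

applications = [
    {"name": "A", "years_experience": 6, "degree_level": 2, "skills_matched": 3},
    {"name": "B", "years_experience": 2, "degree_level": 1, "skills_matched": 2},
]

shortlist = [a["name"] for a in applications
             if score_application(a["years_experience"], a["degree_level"],
                                  a["skills_matched"]) >= THRESHOLD]
print(shortlist)  # ['A'] -- only applicants the fixed rules deem suitable
```

A "crime risk score" works on the same principle: fixed weights turn a person's recorded attributes into a single number. Every case is scored by the same rules; whoever sets the weights and the threshold decides who reaches the HR department or how a risk is rated.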

In these areas, it is not just a matter of optimizing technological processes. Rather, the decisive point is that the analyses of algorithmic systems (co-)determine human fates. In this context, questions arise in relation to existing social norms such as equal treatment. On the one hand, the use of such systems can be an opportunity to dissolve existing forms of discrimination: they decide each case according to the same, previously defined criteria. Accordingly, they neither consciously nor unconsciously differentiate between people on the basis of, say, prejudices or situational factors such as mood.

On the other hand, algorithmic systems are not free of prejudice either, because how these systems are built ultimately depends on people, and people have prejudices. Even a simple algorithmic system can discriminate through the evaluation criteria that development teams set. Often such criteria are set unconsciously, on the basis of widespread social stereotypes. This makes it necessary to reflect on and discuss which assumptions flow into algorithmic systems and what individual and social consequences this has. Only then can an attempt be made to design and use algorithmic systems in such a way that they are based on widely shared values.

Ethical questions about learning algorithmic systems, i.e. artificial intelligence, tie directly into the challenge just described. After all, learning systems are themselves a particular type of algorithmic system. Self-learning application-screening software, for example, analyzes the profiles of all employees in a company in order to find out which characteristics particularly successful employees share. The system then applies the extracted patterns to future decisions and filters applications accordingly. If a company systematically disadvantaged women in the past and therefore hired too few of them, a system learns to recognize and adopt this pattern from the data: it, too, will prefer male applicants.
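This mechanism can be demonstrated with a small, self-contained sketch. The following Python example uses synthetic data and invented features; it is not any real screening product. Gender is never an input to the model, yet a correlated proxy feature is enough for the model to reproduce the historical bias.

```python
# Synthetic demonstration that a learning system can adopt historical bias.
# All data and features are invented; this is not any real screening product.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

qualification = rng.normal(0, 1, n)   # genuine suitability of each applicant
is_male = rng.integers(0, 2, n)       # gender, used only to generate the data

# Historical hiring decisions favored men regardless of qualification, so the
# recorded "hired" labels mix real skill with discrimination.
hired = (qualification + 1.5 * is_male + rng.normal(0, 1, n)) > 1.0

# The model never sees gender itself, only a proxy feature correlated with it
# (in practice: hobbies, club memberships, wording of the application, ...).
proxy = is_male + rng.normal(0, 0.3, n)
X = np.column_stack([qualification, proxy])

model = LogisticRegression().fit(X, hired)

# Two equally qualified applicants whose proxy values differ:
candidates = np.array([[0.5, 1.0],    # proxy pattern typical of past male hires
                       [0.5, 0.0]])   # proxy pattern typical of past female hires
print(model.predict_proba(candidates)[:, 1])
# The first applicant gets a clearly higher score: the model has learned
# and now reproduces the old hiring pattern.
```

Removing the explicit gender column is therefore not enough; the bias survives in whatever features correlate with it.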

The digital-ethical debate on artificial intelligence extends further still. Applications such as voice assistants are designed so that conversations with them resemble conversations between people. So-called anthropomorphic robots, in which various learning systems are at work, are modeled on the human appearance. The field of robotics also includes non-human-like robots that are used, for example, in industrial manufacturing processes. It is the anthropomorphic robots in particular that raise ethical questions, for example in which areas and for which tasks such robots should be used, and when their use seems unjustifiable because it has consequences that do not correspond to accepted moral concepts. Such a discussion is currently taking place in the area of nursing care, for example, where there are numerous areas of application in which robots could provide relief.

In March 2020, the German Ethics Council published a statement on what an ethical approach to robots in care could look like. It comes to the unequivocal conclusion that they must not replace human personnel, and that existing staff shortages cannot be eliminated with their help. The use of robots is ethically justifiable only in certain contexts, for example for physically demanding tasks such as lifting people with restricted mobility. A robot should never replace human contact, however.

The degree of autonomy of robots is also under discussion (see also the chapter on crime, security and freedom). An overarching question is: how independently should robots be allowed to make decisions? And who takes responsibility for the possible consequences of those decisions? As in animal ethics, the ethical debate deals with the question of whether, and from what point, something counts as a being that deserves rights of its own. Ultimately, it is about the basic relationship between humans and robots.

Source text

Electronic companions

Ms. Darling, as a robot ethicist you study the interaction between humans and machines. At your lectures you are repeatedly seen with a "Pleo", a robot baby dinosaur. How would you describe your relationship?

I read about this toy, the baby dinosaur robot from Japan, in 2007 and ordered one because I was initially interested in the technology. It is equipped with microphones, touch sensors and infrared cameras and is very good at mimicking lifelike behavior. But then something happened that surprised me: I developed an emotional bond with him.

How did that make itself felt?

Pleo can imitate expressions of pain very well. When I asked my friends to hold Pleo up by his tail, he would cry. I noticed that this bothered me, which is strange, because I know exactly how the technology works. I've always been interested in robots, but it was precisely this moment that sparked my interest in robot-human interactions.

Did you give your Pleo a name? [...] Naming something is an expression of empathy.

It's remarkable, and often the first thing people do when interacting with robots, even with household vacuum robots. About eighty percent of them have names [...]. We are, in a certain way, biologically programmed to perceive robots as living beings. Our brain constantly analyzes our environment and divides everything into two categories: objects and agents. We react to certain movements, and although we know that robots are only objects, we project intention into their behavior because they move through space in a way that appears autonomous. Added to this is our fundamental tendency to anthropomorphize, that is, to transfer human-like behavior, human emotions and qualities onto the non-human. We do that with animals as we do with our cars. And we do the same with robots. [...]

If we are able to show compassion towards robots, can we just as easily project other strong emotions onto them, such as hatred?

Oh yes! One example is security robots that look a bit like the "Daleks" from the "Doctor Who" series. These robots often patrol parking lots or buildings instead of security guards. They are equipped with cameras and look scary. Again and again, people push them, try to knock them over and simply can't stand them. Another example is drones. People don't like being spied on and photographed by drones. [...]

Has your empathy towards robots decreased now that you know so much about them?

[...] We can gain clinical distance from them if we have to. But honestly, it is simply fun and natural to treat them like companions again once we are off duty.

Kate Darling studies the legal, social, and ethical implications of robot-human relationships at the Massachusetts Institute of Technology's Media Lab.
Anna-Lena Niemann, "Robots are our companions", Interview with Kate Darling, in: Frankfurter Allgemeine Woche No. 5 of January 24, 2020 © All rights reserved. Frankfurter Allgemeine Zeitung GmbH, Frankfurt. Provided by the Frankfurter Allgemeine Archiv


Human characteristics such as creativity, emotionality and empathy, as well as the ability to weigh up and solve diverse problems in equally diverse ways, help us to sharpen our definition of humanity. In this respect, digitization does not only continuously bring new technologies into our lives. It also prompts us as humans to reflect on our humanity and, at the same time, to become aware of our influence on technological progress.

Source text

Why AI cannot replace humans

The belief that computers will soon develop consciousness and then tell us humans what to do has not yet gained general acceptance outside Silicon Valley. Intuitively, something seems to resist the idea. But if one breaks this belief down into its individual components, one encounters a number of assumptions that can certainly count on broad agreement: people essentially determine what goes on in their brain; brain activity is information processing; information is data.

From these elements, the reverse conclusion seems easy to draw: if it were possible to bring together as much data as the human brain handles (some advocates of artificial intelligence reckon with 10 to the power of 16 operations per second), consciousness could be generated artificially - a consciousness that would then be capable of taking on a life of its own.

It is a peculiarly abstract, dematerialized world in which such simulation games are set - a world, however, that is likely to gain plausibility the more completely the everyday world, too, is immersed in data via the smartphone.

[...] The historian Yuval Noah Harari, for example, had great success with his book "Homo Deus", in which, in view of the new technological promises, he declared not only humanism and individualism but also the human being itself to be a thing of the past: "Homo sapiens is an obsolete algorithm". [...] Thomas Fuchs [...], who holds the Karl Jaspers Professorship for the Philosophical Foundations of Psychiatry and Psychotherapy at the University of Heidelberg, [...] now examines [in his newly published book] [...] the individual elements from which the belief in technological evolution is woven. It is astonishing and illuminating at once how simple, amid the intimidating complexity of the matter, the category errors he tracks down ultimately are - simply by taking a step back.

That starts with the rebuttal of the inconspicuous claim that computers deal with "information". For data to become information, it needs a recipient who understands it. Information detached from a possible addressee is a contradiction in terms. [...] The computer on its own neither processes information nor calculates nor thinks: "Viewed in isolation, the device merely converts electronic patterns into other patterns according to programmed algorithms."

This clarification has far-reaching consequences. For the data that the brain handles, unlike that of the computer, is indeed information - but only insofar as the brain does not stand alone.

This brings Fuchs to the second assertion in which he identifies a reasoning error: that human consciousness is identical to the operations of the brain. A vivid illustration of this view is the thought experiment of the "brain in a vat", in which the philosopher Hilary Putnam imagined a brain removed from the body and immersed in a nutrient solution: the data continuously fed into it by a supercomputer ensure that it has the consciousness of a perfectly normal life in the world. "Where do you actually get the certainty," added the philosopher Thomas Metzinger, "that you are not in a vat of nutrient solution while you are reading this book?"

Fuchs replies that this experiment only proves what it assumes, namely that all experience is just an accumulation of data in the brain. But this assumption ignores the close connection that the brain as an organ enters into with a living body: "Consciousness does not first arise in the cortex, but results from the ongoing vital regulatory processes that involve the whole organism and that are integrated in the brain stem and the higher centers." Without the body, the openness to the external world that characterizes consciousness cannot be explained: "Consciousness is not a localizable object at all that one could point to like a stone or an apple. It is a perceiving-of ..., speaking-with ..., remembering-of ..., wishing-for ..., that is, a directed process that opens up a world." Therefore, one cannot ascribe consciousness to brains, but only to people.

[...] As "cerebroccentrism" he describes the idea of ​​looking at people as a headbirth, as a pure brain being, as it happens with some talks about artificial intelligence and especially with those who want to load the human mind onto a hard drive ( the so-called "mind uploading").

Such transhumanist fantasies, which understand concrete physical existence less as an opportunity than as an obstacle, as a restriction of personal freedom, he counters with an astonishingly poetic quote from Immanuel Kant's Critique of Pure Reason: "The light dove, cleaving the air in her free flight, and feeling its resistance, might imagine that her flight would be easier still in empty space." But the vacuum is a mirage. Consciousness, Fuchs adds, needs the materiality of the body in order to exist.

Fuchs then identifies the most radical category error in the confusion of the most fundamental thing of all, life, with its functions [...].

That life is an activity of its own accord, which no simulation can catch up with - this simple fact is simply left out [...].

[...] Perhaps it is not just premature to bid farewell to the human being with a casual gesture. It is also ignorant and negligent. It implies a general devaluation of experience, both personal experience and the experience contained in cultures. [...] In reality, the danger may not be that our devices become independent. More threatening could be that people merely claim this in order to shift their responsibility onto the apparatus - for example in the targeted unleashing of semi-autonomous weapon systems.

Perhaps the imaginary insult that people pretend to suffer from their machines is the real political danger.

Mark Siemons, "Aren't we algorithms after all?", In: Frankfurter Allgemeine Sonntagszeitung from August 9, 2020 (Book review: Thomas Fuchs: Defense of the human. Basic questions of an embodied anthropology, Berlin 2020) © All rights reserved. Frankfurter Allgemeine Zeitung GmbH, Frankfurt. Provided by the Frankfurter Allgemeine Archiv


Source text

AI is just a tool

[…] Mr. Dräger, […] what is your understanding of artificial intelligence?

I find the term artificial intelligence [...] problematic because it suggests that this machine intelligence replaces us - just as an artificial hip joint replaces our natural hip joint. I myself favor the term "extended intelligence", because that is precisely what it is: there are some things that we humans are not built for, such as the analysis of huge amounts of data. The machine can do that for us. But it cannot relieve us of setting ethical goals, checking plausibility and exercising oversight. These are human capabilities - and they will remain human capabilities in the age of AI.

Because the machines aren't intelligent enough for that after all?

At least as far as the breadth of application is concerned. A chess computer that is almost unbeatable on an eight-by-eight board would be lost on a nine-by-nine board. Even there, hardly any transfer of knowledge takes place. And if I asked a chess computer to recognize images, it would fail miserably. We humans can transfer what we have learned to something else - that, too, is part of intelligence in the human sense. In this respect, I want to contribute to the demystification of artificial intelligence: AI is just a tool!

The farmer who no longer wants to walk behind the plow sits on a tractor. And people who no longer want to cross-check millions of data points simply look for an AI that does it for them. We have to get used to the normality of aids for our cognition; we must not be immediately offended when a machine can do something better than us. [...] People use cranes to lift and drive cars to get around. So we shouldn't see a calculating machine as an insult to our brain. What sets us apart is that we can make ethical assessments, build bonds and set goals. And there the machine doesn't attack us at all; it isn't capable of that.

To take up the concerns of many people: where does it attack us?

In repetitive, sometimes dull activities [...] ideally, AI gives us time for the essentials, that is, for things that only we humans can do. Playing machine and human off against each other is nonsensical; the point is to let human and machine work together. Many people still have this horror vision of the Terminator, the evil AI that becomes independent and takes control. But basically these are just calculating machines that do certain things on our behalf.

You say machines can identify strengths that humans do not recognize. But they can also recognize weaknesses that people do not notice. And with that we are right in the middle of the ethical discussion, for example about social scoring in China ...

The question is whether the problem with this social scoring in China is really the machine - or rather the political system. [...] The problem behind the algorithms is [...] the respective political will. If the goal of an algorithm is efficiency, this can lead to less solidarity. If we strive for security, this can come at the expense of freedom.

For this very reason, should we not be guided by what is technically possible, but by what is socially desired?

Definitely yes. But for this we also need social self-confidence; we have to be clear: as a society, we decide politically about the use of AI - not the technology corporations, just because something has become possible. But this sovereignty of politics only works if we fight for it.

But it seems that what is technically possible is the driver.

In parts, unfortunately, yes, because we as a society make far too little use of our options for shaping things. It is not as if the analog world were free of problems: racism, discrimination, injustice - these do not arise only through digitization. But it is precisely such problems that we can at least alleviate with AI.

At the moment, however, we as a society are letting ourselves drift and confusing the consumption of digital services with the competent use of digitization. We shouldn't complain about the negative effects of digitization, but turn the tables and clarify which social problems we want to solve with the new technology. [...] It seems to me that the economy happily sees the opportunities offered by AI and overlooks some risks, while society sees only the risks and overlooks the opportunities. [...]

How could that be solved?

With transparency. I want such criteria to be transparent so that people can discuss them and consciously form an opinion. [...] But as long as the algorithm and its operational logic remain hidden, nobody notices anything - and nobody can get upset either. And if I can't get upset, I can't make a different decision […]. That is what I mean when I say: we need competence and transparency. We need to understand what is going on. [...]

To come back to the beginning: You say we have to convert the fear of the machines into respect and understanding - how could that work?

It is useless to glorify digitization, but it is just as useless to paint nothing but dystopias. The truth lies in the middle. The most important thing in this debate is to keep horse and rider apart: we humans hold the reins in our hands. But as long as the feeling prevails in society that the machine determines us, does our work and will at some point take our jobs away, the fear remains. We have to address this feeling and turn the debate in a constructive direction. We humans don't want machines to rule us. The point is for humans to create something with the machine. It's about the interaction between people and technology. [...] I ask the machine which way to go - and then I decide whether to follow its advice or take another route. The machine is a helper. And it is this image that we have to establish.

The physicist and education expert Jörg Dräger has been a board member of the Bertelsmann Foundation since 2008 and is responsible for education and integration. He also teaches public management at the Hertie School of Governance.
"There is no point just drawing dystopias", Interview by Boris Halva with Jörg Dräger, in: Frankfurter Rundschau from December 12, 2019 © All rights reserved. Frankfurter Rundschau GmbH, Frankfurt
