Artificial intelligence

The biggest risk is reducing people to figures


Artificial intelligence, or AI, is the subject of constant discussion among politicians, business people and academics. In their various disciplines, academics study AI, the future of human life in the AI age, the transformation of human beings, possible super-intelligence as a future evolutionary step, and much more. We spoke to Janina Loh, research assistant (post-doc) in the department of technology and media philosophy at the University of Vienna, about trans- and post-humanism, the role of humans in the AI environment and the use of algorithms in the financial services sector.

Ms Loh, the title of your book on "Trans and post-humanism" sounds extremely academic. Can you give our readers a brief summary of what it's all about?

Janina Loh: Trans-humanism (for example Nick Bostrom, Max More and Stefan Lorenz Sorgner) aims to technologically advance, optimise, modify and improve humans. Radical extensions to life and new methods of cognitive and emotional perception will help today's human beings to become a post-human (but still somehow human) being. Technological post-humanism (for example Ray Kurzweil, Hans Moravec and Marvin Minsky) is also interested in transformation of human beings, but with a focus on development of an artificial super-intelligence, which will represent the next evolutionary step and herald the singularity when human existence will have completely changed. Critical post-humanism (for example Donna Haraway, Rosi Braidotti and Karen Barad – and I also consider myself a critical post-humanist) examines the traditional, mostly humanistic dichotomies such as male/female, nature/culture or subject/object, which have made a significant contribution to the emergence of our current understanding of people and the world. It aims to break away from the conventional categories used to define humans and the thinking that goes hand in hand with them.

If we break it all down to artificial intelligence, AI, where does the issue fit in here and where are the links to your research?

Janina Loh: There is a lot of discussion of artificial intelligence in technological post-humanism in particular, as it is primarily concerned with developing a powerful, artificial super-intelligence that will supersede and overtake humans as creation's crowning glory. Of course, my work on robot ethics also involves the issue of artificial intelligence.

Let's stay with AI. There is a lot of discussion about how deeply artificial intelligence is becoming engrained in working and process environments. Where do you think we are, particularly in respect of self-learning systems?

Janina Loh: In a weak sense, we already have different types of artificial intelligence in many areas of our daily lives. From search engines to artificial personal assistants such as Siri and Alexa, to Facebook, Amazon and Netflix algorithms, which analyse our preferences and make suggestions based on them, for example for particular products that we might like. We also come across semi-autonomous and sometimes at least partially adaptive systems, for example in care, industry and road transport. But to date all of these technologies have been created for very specific purposes. They always have "specialised abilities" and, even if they are adaptive in a weak sense, they are a long way from being as flexible and supremely adaptable as human beings.
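The "specialised abilities" Loh describes can be made concrete with a deliberately simple sketch of a preference-based recommender: it ranks products purely by overlap with tags a user has shown interest in, and can do nothing else. All names, tags and data below are invented for illustration; real recommendation systems at the companies mentioned are far more complex.

```python
# Minimal sketch of a "specialised" recommender: it scores catalogue items
# by how many of their tags overlap with the user's recorded interests.

def recommend(user_tags, catalogue, top_n=2):
    """Return up to top_n product names, ranked by tag overlap with user_tags."""
    scored = [
        (sum(tag in user_tags for tag in tags), name)
        for name, tags in catalogue.items()
    ]
    scored.sort(reverse=True)  # highest overlap first
    return [name for score, name in scored[:top_n] if score > 0]

# Invented example data: tags a user has interacted with, and a small catalogue.
user_tags = {"sci-fi", "robots", "philosophy"}
catalogue = {
    "Robot Ethics (book)": {"robots", "philosophy", "ethics"},
    "Chess Set": {"board-games", "strategy"},
    "AI Documentary": {"sci-fi", "robots"},
}

print(recommend(user_tags, catalogue))
```

The point of the sketch is its narrowness: the system is "intelligent" only with respect to tag matching. Asked to do anything outside that purpose, it has no behaviour at all, which is exactly the gap between today's special-purpose systems and the flexibility of human beings.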

A chess computer can only play chess (even if it can do it better than most people) but it can't drive a car, iron the clothes or help our children with their homework. The development of a strong artificial intelligence that is just like human beings is not on the horizon, and we need to seriously consider whether we actually want such powerful AI.

What influence do humans have on these systems in respect of the weak and strong AI hypotheses?

Janina Loh: There is no such thing as "the human being". People are all different. Depending on their position in society (whatever form a particular society with its specific political and legal structures may take), their financial means, profession and so on, they have different ways of exerting influence.

Whatever the case, technological developments are not based on natural laws. They are made by humans and subject to human conditions. Not everything that is possible will necessarily become a reality. Discussions relating to development and introduction of certain technologies are frequently conducted as though it had already been agreed that these technologies will be a reality at some point. For example, Nick Bostrom asks what values we should implement in strong AI, instead of asking whether we want strong AI in the first place.

The question of what is technologically feasible and can be done must not take precedence over the question of what is (morally) desirable and should be done. Yet there is a tendency in the empirical sciences towards a kind of consensus that it is of course good and right to develop the technology under discussion and bring it to market.

However, we must explicitly pose the question of what is desirable and what should be done, and discuss this more widely in society. For us as academics, this means shaping the discourse so that it is as transparent and understandable as possible. We incapacitate ourselves if we subscribe to a kind of social and technological determinism in which the wheel of history simply unwinds in front of our eyes and we have no way of influencing it.

Is there not a risk that humans could become a "junior partner to machines" as "Deutschlandfunk Kultur" recently stated in an article?

Janina Loh: In debates about AI and modern technologies, you often encounter two extreme scenarios: the dystopian perspective, in which machines will attempt to gain world domination, and the utopian perspective, in which we will at some stage merge with nanobots, upload our minds to a computer and then become virtually immortal.

We don't have the luxury of limiting ourselves to this black and white way of looking at things. We need to venture into the large messy grey area between these two extremes and critically reflect on individual technologies.

When people talk about robots, they frequently switch very quickly from the level of the specific artificial system (for example, this chess computer is much better at playing chess than most people) to the general level of "the machine" (that machines will at some stage overtake humans). This is something we would never do with animals (for example, we would never move from an avalanche search dog, with its unique capabilities, to the abstract level of "the animal").

Technology is created for very specific purposes. For the moment, robots have "specialised abilities". We have to take a critical look at the relevant contexts and controversial technologies in particular.

In the financial services sector, there is a lot of talk about all the possible big data and analysis methods that could revolutionise banking. However, a look behind the scenes of many established banks provides a very different picture. Namely a much more analogue way of thinking and acting. So is it safe to say there have been lots of smoke screens but very little actual implementation in the digitalised banking world to date?

Janina Loh: I think that alongside the descriptive question of what changes will actually occur in this area, the really relevant question is the normative one: which algorithms and which big data applications do we want, and where?

Let's take the example of high-speed trading. Here we see significant price movements and financial transactions because the algorithms in question provoke or predetermine them. In my opinion, we have to decide which algorithms should be "biased", in what form and by whom, and consequently where we want to use them.

Let's talk about digitalisation. Big data, AI and new analysis methods open up a world that provides us with an increasing amount of information about (banking) customers. They highlight payment and purchasing habits, render cash superfluous and guide people through all the trials and tribulations of modern civilisation. Does this not sound enticing?

Janina Loh: It sounds like the dream of a modern, capitalist, Western (and thus predominantly white and male) mass-society, trans-humanist person – for me personally it is something of a nightmare, if I can put it in such simple terms. It is based on the idea that everything that is important to people can be translated into and expressed in figures.

This is not merely a reduction and standardisation of human beings. Figures allow people to be evaluated, measured and monitored. It also gives the impression of absolute transparency while simultaneously facilitating forecasting of human behaviour. Hannah Arendt would say that it is a victory for behaviourism and the political economy if we treat algorithms and statistics like natural laws.

What do you see as the biggest risks of using AI solutions in people's work and ultimately their lives?

Janina Loh: I think I've essentially outlined that in my answer to the previous question, but let me summarise it. From my perspective, the biggest risk is reducing people to figures, which creates the illusion of predictability and complete control.

Finally let's talk about the issue of "super-intelligence" that you mentioned. Is this in sight or do we need to have a bit of patience?

Janina Loh: Well, the term super-intelligence isn't one I came up with. But from my point of view – and I've already stated this – for the time being we will have to be satisfied with robots with specialist abilities. I see this as an opportunity to first consider whether we actually want to develop such strong AI or artificial super-intelligence.

Dr. Janina Loh (née Sombetzki) is a research assistant (post-doc) in the department of technology and media philosophy at the University of Vienna. She studied at Humboldt University in Berlin and, from 2009 to 2013, completed her doctorate in the DFG-funded graduate research group "Constitution beyond the state: From a European to a global legal community?", supervised by Prof. Volker Gerhardt and Prof. Rahel Jaeggi. Her dissertation "Verantwortung als Begriff, Fähigkeit, Aufgabe. Eine Drei-Ebenen-Analyse" [Responsibility as a concept, capability and duty. A three-level analysis] was published by Springer VS in 2014.

After a three-year post-doctoral position at Christian-Albrechts University in Kiel (2013 to 2016), Janina Loh has worked in Vienna since April 2016. She wrote the first German-language introduction to trans- and post-humanism (Junius 2018) and is currently writing an introduction to robot ethics (Suhrkamp 2019). Her post-doctoral thesis deals with the critical and post-humanist elements in Hannah Arendt's thinking and works (working title). Alongside trans- and post-humanism and robot ethics, her research interests include Hannah Arendt, feminist philosophy of technology and ethics in the sciences.

[Cover image source: Adobe Stock]