Photo: Thomas Metzinger (archive image, 2011) © Fredrik von Erichsen dpa/lrs

EU guidelines: Ethics washing made in Europe

On Monday, the EU published its ethics guidelines for artificial intelligence. A member of the expert group that drew up the paper says: this is a case of ethics washing.

Thomas Metzinger is Professor of Theoretical Philosophy at the University of Mainz and was a member of the Commission's expert group that worked on the guidelines published on Monday.

Read his op-ed in German here.

It's really good news: Europe has just taken the lead in the hotly contested global debate on the ethics of artificial intelligence (AI). On Monday in Brussels, the EU Commission presented its Ethics Guidelines for Trustworthy AI. The 52-member High-Level Expert Group on Artificial Intelligence (HLEG AI), of which I am a member, worked on the text for nine months. The result is a compromise of which I am not proud, but which is nevertheless the best in the world on the subject. The United States and China have nothing comparable. How do these two things fit together?

Artificial intelligence cannot be trustworthy

The Trustworthy AI story is a marketing narrative invented by industry, a bedtime story for tomorrow's customers. The underlying guiding idea of a “trustworthy AI” is, first and foremost, conceptual nonsense. Machines are not trustworthy; only humans can be trustworthy (or untrustworthy). If, in the future, an untrustworthy corporation or government behaves unethically and possesses good, robust AI technology, this will enable more effective unethical behaviour. Hence the Trustworthy AI narrative is, in reality, about developing future markets and using ethics debates as elegant public decorations for a large-scale investment strategy. At least that's the impression I am beginning to get after nine months of working on the guidelines.

Hardly any ethicists involved

The composition of the HLEG AI group is part of the problem: it consisted of only four ethicists alongside 48 non-ethicists – representatives from politics, universities, civil society, and above all industry. That's like trying to build a state-of-the-art, future-proof AI mainframe for political consulting with 48 philosophers, one hacker and three computer scientists (two of whom are always on vacation).

Whoever was ultimately responsible for the group's extremely industry-heavy composition was right on at least one point: it is true that if you want the European AI industry to adhere to ethical rules, you have to involve the leaders in the field and get them on board from the start. There are good and intelligent people there, and it is worth listening to them. However, while the expert group included many smart people, the rudder cannot be left to industry.

Red lines have been defused

As a member of the expert group, I am disappointed with the result that has now been presented. The guidelines are lukewarm, short-sighted and deliberately vague. They ignore long-term risks, gloss over difficult problems (“explainability”) with rhetoric, violate elementary principles of rationality and pretend to know things that nobody really knows.

Together with the excellent Berlin machine learning expert Urs Bergmann (Zalando), I was tasked with developing, over many months of discussion, the “Red Lines”: non-negotiable ethical principles determining what should not be done with AI in Europe. The use of lethal autonomous weapon systems was an obvious item on our list, as was the AI-supported assessment of citizens by the state (social scoring) and, in principle, the use of AIs that people can no longer understand and control.

I only realized that all this was not actually desired when our friendly Finnish HLEG president, Pekka Ala-Pietilä (formerly Nokia), asked me in a gentle voice whether we could remove the phrase “non-negotiable” from the document. In the next step, many industry representatives and group members interested in a “positive vision” vehemently insisted that the phrase “Red Lines” be removed from the text entirely – although it was precisely these red lines that were our mandate. The published document no longer speaks of “Red Lines” at all: three were deleted completely and the rest were watered down. In their place there is only talk of “critical concerns”.

From Fake News to Fake Ethics

This phenomenon is an example of “ethics washing”. Industry organizes and cultivates ethical debates to buy time – to distract the public and to prevent or at least delay effective regulation and policy-making. Politicians also like to set up ethics committees because doing so gives them a course of action when, given the complexity of the issues, they simply don't know what to do – and that's only human. At the same time, however, industry is building one “ethics washing machine” after another. Facebook has invested in the TU Munich, funding an institute to train AI ethicists. Similarly, Google had engaged the AI researcher Joanna Bryson and the philosopher Luciano Floridi for an “Ethics Panel” – which was abruptly discontinued at the end of last week. Had it not been for this, Google would have had direct access via Floridi, a member of the HLEG AI, to the process by which the group will develop the political and investment recommendations for the European Union starting this month. That would have been a strategic triumph for the American conglomerate. Because industry acts more quickly and efficiently than politics or the academic sector, there is a risk that, as with “Fake News”, we will now also have a problem with fake ethics, including lots of conceptual smoke screens and mirrors, highly paid industrial philosophers, self-invented quality seals, and non-validated certificates for “Ethical AI made in Europe”.

Given this situation, who could now develop ethically convincing “Red Lines” for AI? Realistically, it looks as if only the new EU Commission that starts its work after the summer can do it. Donald Trump's America is morally discredited to the bone; it has taken itself out of the game. And China? As in America, there are many clever and well-meaning people there, and with regard to AI safety it could, as a totalitarian state, enforce any directive in a binding manner. But China is already far ahead in deploying AI-based mass surveillance on its 1.4 billion citizens; we cannot expect genuine ethics from there. As “digital totalitarianism 2.0”, China is not an acceptable source for serious ethical discussions. Europe must now bear the burden of a real historical responsibility.

Take ethics back from industry!

If you have any ethical goals at all, you are obliged to find and use the best tools available. AI is one of the best instruments for practical ethics that humankind has. We cannot afford to slow this technology down politically, and we certainly should not block its further development. But because good AI is ethical AI, we also have a moral obligation to actively improve the High-Level Group's guidelines ourselves. Despite all potential criticism of the way they were created, the ethics guidelines we are currently developing in Europe are the best globally available platform for the next phase of discussion. China and the United States will study them closely.

Their legal anchoring in European fundamental values is excellent, and the first selection of abstract ethical principles is at least acceptable. Only the genuine normative substance at the level of long-term risks, concrete applications, and case studies has been destroyed. The first step is good. But it is high time that universities and civil society recapture the process and take the self-organized discussion out of the hands of industry.

Everyone can feel it: we are in the midst of a rapid historical transition that is taking place at many levels simultaneously. The window of opportunity within which we can at least partially control the future of AI and effectively defend the philosophical and ethical foundations of European culture will close in a few years' time. We must act now.

Thomas Metzinger
