December 15, 2021
Article by Bob Williamson

AI as Mediator

There is currently much debate about the ethics of Artificial Intelligence (AI), with one widespread view holding that AI should never be used to make consequential decisions affecting people. In this blog post, I suggest that, on the contrary, rather than worrying about AI “making decisions” about us, we should pay more attention to who commissioned the chain of technological action than to the technology itself.

“Biased Decisions”

A widespread view of the problem with AI is that it is constructed from “biased” algorithms which cause an AI to make “bad” decisions, perhaps ones reflective of historically bad behaviour of humans. There are many difficulties with taking “bias” as a departure point, not least of which is the impossibility of defining it – one person’s bias is another’s fine judgement.

But rather than worrying about the “bias” in “biased decisions”, I suggest more attention needs to be paid to “decision”. I argue that the current framing of the ethics of AI implicitly adopts a particular answer to the question “What do we mean when we make a decision?” Indeed, when do we make it? Suppose, after getting drenched enough times, I “decide” one day that I will always take an umbrella in the morning if the forecast on my phone predicts greater than a 30% chance of rain. Each morning I consult my phone, and act accordingly. Now who “made” the decision as to whether to take the umbrella? And when was that decision made? The normal framing of AI presumes that the decision has been made by the algorithm on the phone, and that I have subsequently surrendered my autonomy to that of the machine. But I argue that we need instead to conceive of what is happening here as the technology mediating matters for me, while I remain responsible.

This simple example illustrates the need to think in terms of delegation, and higher-order autonomy. I remain entirely autonomous in adopting a decision rule (call it D for reference): “if forecast > 30% then take umbrella”. Having adopted rule D, I have delegated the final decision to a machine. More strictly speaking, I have delegated it to a very complex socio-technical system, part of which, no doubt, is an “AI algorithm” which produces the numerical probabilities. My original autonomous decision to rely upon external aid is mediated by a complex technological system; it has not been taken over by the technology. And if I follow my procedure, and thus do not take an umbrella one day, and consequently get soaked, the responsibility remains mine: I chose to adopt D, and thus to rely on the forecasts, and I chose to run the risk, 3 times out of 10, of being rained upon.
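Rule D is simple enough to write down explicitly. A minimal sketch (the function name and the way the forecast probability arrives are my own illustrative assumptions; in reality that number is produced by a complex forecasting system):

```python
def decide_umbrella(rain_probability: float, threshold: float = 0.3) -> bool:
    """Decision rule D: take the umbrella if the forecast exceeds the threshold.

    The rule itself was adopted autonomously by a person; only its
    mechanical daily application is delegated to the machine.
    """
    return rain_probability > threshold

# Each morning, the phone's forecast is fed through the rule.
decide_umbrella(0.45)  # -> True: take the umbrella
decide_umbrella(0.10)  # -> False: risk it
```

Note that the threshold encodes the person's own risk preference; the machine merely applies it.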

Decisions are the result of a chain … to evaluate the moral worth of a decision, one needs the whole chain

This simplistic example illustrates something important about every AI-based decision, including those about matters more consequential than my staying dry – namely that, if one tracks back far enough, there is a real person or persons who have commissioned the chain of technological action, and who are ultimately responsible for it. It is as absurd to blame those technological chains (alone) for ethical harms as it would be for me to blame my phone for my getting wet.

But blaming the phone is what many would have us do: the technology has led us astray! This is unjust, or at least unfair: my explicit decision rule D has one very great advantage over my “just deciding” each morning through some intuitive process: it is explicit, and its performance can be judged. Conversely, if I “just decide”, then there is really nothing to evaluate, and thus I probably would never evaluate it. This shows the great advantage of delegating the details of consequential decisions to a machine: in order to do so, one needs to make one’s logic precise, which means one needs to mathematise it; and having done that, one can much more readily evaluate it and potentially improve its performance.
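The point about evaluability can be made concrete. A hypothetical sketch (the function, the log format, and the numbers are all invented for illustration): given a record of past mornings as pairs of (forecast probability, whether it actually rained), the explicit rule's two failure modes can simply be counted – something impossible for an unarticulated intuition.

```python
def evaluate_rule(history, threshold=0.3):
    """Count the two ways the explicit rule can fail on a log of
    (forecast_probability, it_rained) pairs:
    'soaked'  -- no umbrella taken, but it rained;
    'lugged'  -- umbrella taken, but it stayed dry."""
    soaked = sum(1 for p, rained in history if p <= threshold and rained)
    lugged = sum(1 for p, rained in history if p > threshold and not rained)
    return soaked, lugged

# An invented five-day log of forecasts and outcomes.
history = [(0.1, False), (0.4, False), (0.2, True), (0.7, True), (0.35, False)]
evaluate_rule(history)  # -> (1, 2): soaked once, carried the umbrella needlessly twice
```

With such counts in hand, one could adjust the threshold to trade one kind of error against the other – precisely the sort of improvement an unexamined intuition does not admit.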

Bob Williamson took up his position as Professor for Foundations of Machine Learning at the University of Tübingen in April 2021. He previously lived and worked in Canberra, Australia.

Recognising the bait-and-switch – manipulation as a service

Does this matter? Yes! Because framing the problem as one of flawed technology (alone) allows those who orchestrate some of the more significant harms mediated by AI to hide in plain sight. When large advertising companies sponsor research into bias and fairness, they are following the successful path they have already hewn whereby privacy is construed as keeping information secret (with no regard to what is done with it), rather than recognising that the significant harms do not arise from the information itself, but from what is done with it. A computational advertising company, such as Google or Facebook, can (and perhaps does) have exemplary data security practices – your personal data will not leak from their servers. But at the very same time, they can use this data (with your willing consent) to systematically manipulate you in a manner such that you do not even know you are being manipulated. And that is precisely how they make their money.

It is not the gain of autonomy of some self-directed technology we need concern ourselves with; rather it is the loss of our own autonomy, not to self-acting technologies, but to corporations run by real people who have deliberately designed these systems for the sole purpose of manipulation – advertising is of no value if it is not effective, and it is only effective if the subject (quaintly called “the user”) is successfully manipulated. I am far less concerned by the ethical problem of AI as a technology than I am by the business model of these large corporations: manipulation as a service.

What is to be done?

Fortunately we are not without choice, nor precedent. Earlier technologies were only properly understood when it was realised that the technology itself is not responsible for harms, but rather the use to which it is put. Certainly, tweaks can and should be made to make the technology more amenable to sound ethical action. But the party responsible is he or she who commissions the technology. As always, our concern is not the ethics of technology, but of human action, which has been and will always be mediated by technology. And in the same way that the invention of written language and mathematics (which can be viewed as technologies to the same extent that AI can) helped us behave better (there is no moral philosophy without language), these technologies, which we create, can help us behave better.

But to do this, we need to be clear about where responsibility lies: it ultimately rests in the hands of a person. All technologies are ultimately commissioned by people, and they should be held to account. But figuring out who that person might be is currently not so simple because we are obsessed with the notion that the technology itself causes the harm, and so we do not bother to trace the long chains of mediation back to the commissioning person. In the same way that good science relies upon being able to trace back through long and complex “chains of reference”, we need to improve our systems for tracing the complexities of decision making when mediated by technological systems … systems confusingly known as “artificial intelligence.”

This blog post is inspired by a recently published book chapter: Robert C. Williamson, The AI of Ethics, in Machines we Trust (Edited by Marcello Pelillo and Teresa Scantamburlo), MIT Press 2021.

To find out more about Bob Williamson’s research, please visit our website.


  • Joshua Bentov, August 19, 2023

    This article raises important considerations around responsibility and accountability in AI decision-making. I agree that we need to view AI as a mediating technology, not an autonomous agent making independent choices. The decisions made by AI systems originate from the people and organizations that commission and implement them.

    As the article states, it's unhelpful to blame flawed technology alone for biased or unethical decisions. Rather, we should trace back through the complex socio-technical chains to identify the human actors responsible for the system's design, training data, and use case. Making this attribution is challenging but essential.

    One way to improve accountability could be mandating transparent documentation of an AI system's provenance, as well as audits of its development process. I'm also intrigued by services that aim to facilitate human oversight of AI decisions through managed dispute resolution. While not a complete solution, tools like these that insert humans into the loop could help restore user agency and provide recourse.

    Overall, the article offers an insightful re-framing of the AI ethics debate. We need to focus less on restraining autonomous systems, and more on making human creators and operators responsible stewards of this transformative technology.
