Section Three - Technology & Ethics
A story about an ultrasound and an autonomous car (12 minutes)
New ethical questions
In the previous section we looked at an animated video explaining Peter-Paul Verbeek's mediation theory. At the end of that video he states that technology mediates the ethical questions we ask ourselves.
"Designing technology is doing ethics by other means" - P.P. Verbeek
So, inventing, using and designing technology leads to all sorts of (new) ethical questions. We will explain this by discussing two examples:
Talk to me baby
First, obstetric ultrasound technology, which shows the baby while it is still in the womb. You might think this technology is neutral. However, Peter-Paul Verbeek convincingly argues that an ultrasound scanner is, in fact, much more than a mute and passive object used only as an instrument to look into the womb. Ultrasound is not neutral: it enters into the moral decision-making process itself, largely influencing which moral questions are relevant, and even which questions can be posed at all, in practices surrounding pregnancy.
And this is true even though it was not originally designed to establish new moral practices. Maybe this is a little vague, so let's make it clearer. Suppose a young mother has a seriously handicapped child. This used to be simply bad luck. Since the availability of ultrasound, however, the mother has some explaining to do, because perhaps she could have known. Why, then, did she still decide to have the child? In the eyes of some, the mother shifts from victim to perpetrator, thanks to the ultrasound technique.
Please read this article (4 minutes) on the ethics and moral decision-making process of the ultrasound scanner.
In the Netherlands, many expectant parents decide to have an extra ultrasound just for fun, but somehow, after reading the article above, it does not seem like so much fun anymore.
In the previous example we saw that the introduction of a technology as 'simple' as the ultrasound has all kinds of (unexpected) ethical and moral implications. These are unintended results of a (new) technology. In other cases, and more and more often, we explicitly try to program ethical and moral decisions into technology. The focus then shifts from unintended results to intended design.
The self-driving car
In the next example we look explicitly at programming moral decisions into technology. The example we use for that is – of course – the autonomous car. Watch the video (5 minutes):
Fully (level 5) autonomous cars are still unlikely to become reality anytime soon, but experiments like this are really important and very, very complex. If you program a robot in a factory, you program it to be absolutely safe, and if anything still goes wrong, then there is something wrong with the program. In programming an autonomous car, however, you know things will go wrong, and you need to program a moral (life-or-death) decision. Is this possible?
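To give an idea of what 'programming a moral decision' could look like in practice, here is a deliberately tiny, hypothetical Python sketch. The scenario, the Outcome class and the 'minimise the total number of people at risk' rule are our own illustrative assumptions, not how any real self-driving system works:

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    """One possible result of an action the car could take (hypothetical)."""
    description: str
    passengers_at_risk: int
    pedestrians_at_risk: int


def choose_action(outcomes):
    """Pick the action whose outcome puts the fewest people at risk in total.

    Even this single rule already encodes a moral position (minimise total harm).
    """
    return min(
        outcomes,
        key=lambda action: outcomes[action].passengers_at_risk
        + outcomes[action].pedestrians_at_risk,
    )


# A stylised dilemma in which braking alone cannot prevent a collision.
dilemma = {
    "stay_in_lane": Outcome("hit the obstacle ahead",
                            passengers_at_risk=2, pedestrians_at_risk=0),
    "swerve_right": Outcome("mount the pavement",
                            passengers_at_risk=0, pedestrians_at_risk=1),
}

print(choose_action(dilemma))  # -> "swerve_right" under this (debatable!) rule
```

Notice that the one rule inside choose_action already takes a moral stance (a utilitarian one); replacing it with 'always protect the passengers' would make the same car behave very differently.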
Quick question: do you think that we can program cars to make life & death decisions?
Maybe less dramatic but equally important are the pre-programmed decisions that determine your Facebook newsfeed, your Netflix suggestions, or what a chatbot will answer. Those are also pre-programmed moral decisions. In a future with more and more digital technology, the question of how technology influences moral decision-making will become more and more important. After all, these decisions partly determine whether technology makes the world a better place (and what 'better' means, of course!).
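The same point can be made for a newsfeed. Below is a small, hypothetical sketch in which the posts, their features and the weights are all invented; the point is only that choosing the weights is itself a value judgement:

```python
def rank_feed(items, weights):
    """Order items by a weighted score over their features (toy example)."""
    def score(item):
        return sum(weights.get(feature, 0) * value
                   for feature, value in item["features"].items())
    return sorted(items, key=score, reverse=True)


posts = [
    {"title": "Calm, well-sourced news report",
     "features": {"expected_clicks": 0.2, "accuracy": 0.9}},
    {"title": "Outrage-bait rumour",
     "features": {"expected_clicks": 0.9, "accuracy": 0.1}},
]

# Rewarding only engagement puts the rumour on top; rewarding accuracy flips it.
print(rank_feed(posts, {"expected_clicks": 1.0, "accuracy": 0.0})[0]["title"])
print(rank_feed(posts, {"expected_clicks": 0.0, "accuracy": 1.0})[0]["title"])
```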
That is the topic of our next section.
Takeaways from section three:
- Technologies, even when designed with different objectives, often lead to new moral practices;
- Moral and ethical implications used to emerge only after a technology was introduced;
- Advanced technologies often are (or will be) programmed with moral or ethical decisions;