Self-Driving Cars: An ethical dilemma


What are self driving cars?

Self-driving cars have been a fantasy of mankind ever since we invented the car. We are constantly striving to make the task of driving easier for ourselves. Modern cars now have most functions powered solely by electricity, or assisted by it, to make driving easier for us humans: power-assisted steering, power-assisted brakes, fully automatic windows, automatic windscreen wipers and automatic headlights, to name just a few. Google, however, has made this dream a reality, building automated self-driving cars out of pre-built manual cars and advanced electronics.

Google’s self-driving car

These cars started being built and tested as far back as 2009, and in late 2014 Google delivered its first prototype of a car built from the ground up to be automatic. These prototypes and the older hybrids have been steadily accumulating miles of experience and valuable data that Google is using to further its goal of rolling self-driving cars out onto the open market.


Ethical problems

Google’s self-driving cars do not come without issues, however. While they have not yet had an accident that was the fault of the self-driving system, they still require hundreds of thousands of miles of further testing before they can be made available to the public.

The self-driving car has a set of rotating cameras on its roof that, along with an array of lasers and radar, scans the surrounding area to build up a 3D image the car can use to “see” its environment. The code the car then uses to decide what to do has been written by a programmer (more likely a whole team of programmers), who must decide in advance how the car will respond. Light is red, stop. Light is green, go. Simple, right?
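To make that idea concrete, here is a minimal sketch in Python of what such a pre-written rule might look like. Everything in it, from the TrafficLight type to the decide_at_light function, is invented for illustration and bears no relation to Google’s actual code.

    # Purely illustrative: a pre-programmed driving rule of the
    # "light is red, stop; light is green, go" kind. Every outcome
    # below was chosen by a programmer, in advance.
    from enum import Enum

    class TrafficLight(Enum):
        RED = "red"
        AMBER = "amber"
        GREEN = "green"

    def decide_at_light(light: TrafficLight) -> str:
        if light is TrafficLight.RED:
            return "stop"
        if light is TrafficLight.AMBER:
            return "prepare to stop"
        return "go"

    print(decide_at_light(TrafficLight.RED))    # stop
    print(decide_at_light(TrafficLight.GREEN))  # go

The code itself is trivial; the point is that the behaviour is fixed long before the car ever reaches the junction.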


So a simple traffic light change may be easy, but what about the things that cannot easily be planned for, the situations we humans rely on instinct to decide? Below are three scenarios that self-driving cars will have to deal with.


Scenario 1

Say, for example, that you are travelling along a country road at sixty miles an hour (96.5 kilometres per hour) when a cyclist suddenly pulls out of a side turning into your path. You are travelling too fast and are too close to stop in time. What do you do?

Most humans’ reaction would be to swerve to the side and brake as hard as possible to avoid a collision; whatever happens next will be classed as an accident. No one has time to do anything deliberate: if you hit the cyclist it is a terrible accident, and if you swerve fully off the road you or your passengers may be hurt, possibly badly if you hit a tree or a wall. The local authorities will investigate, and the media may lobby for lower speed limits on the road or a better view from the turning. But you are not to blame; no one is.

Now imagine the same scenario but with a self-driving car. The car’s computing power is sufficient that whatever decision it makes can fairly be seen as predetermined. These are a few of the most likely actions.

  1. The car decides to brake but not to swerve, to give the occupants of the car the highest chance of survival. Bicycle and person: squishy; tree: solid. This means the car has made the decision to hit the cyclist, a decision that may prove fatal for the cyclist. Apart from being life-changing for the cyclist, some person will have had to program the car to, in effect, kill the cyclist. This could have serious psychological repercussions for the programmer.
  2. The car brakes and swerves to avoid the cyclist, giving the cyclist a much higher chance of survival but possibly endangering the occupants of the car. While this may be the most human course of action, would you choose a car that maximises your chances of survival, possibly at the expense of others? Or would you choose the car that maximises the survival chances of everyone, even if that means placing you in harm’s way? (The sketch after this list casts the two options as two competing rules.)
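
To make the contrast concrete, here is a minimal sketch in Python of the two options as two different decision rules. The survival figures are invented purely for illustration; a real system would have to estimate something like them from sensor data.

    # Invented numbers, illustrative only:
    # (occupant survival chance, cyclist survival chance) per action.
    ACTIONS = {
        "brake_straight":   (0.95, 0.10),  # squishy cyclist, protected occupants
        "brake_and_swerve": (0.60, 0.95),  # solid tree, spared cyclist
    }

    def occupant_first(actions):
        # Option 1: maximise the occupants' chance of survival.
        return max(actions, key=lambda a: actions[a][0])

    def minimise_total_harm(actions):
        # Option 2: maximise everyone's combined chance of survival.
        return max(actions, key=lambda a: sum(actions[a]))

    print(occupant_first(ACTIONS))       # brake_straight: the cyclist is hit
    print(minimise_total_harm(ACTIONS))  # brake_and_swerve: the occupants are risked

Whichever rule ships in the car, a person chose it in advance; that is what separates it from a human reaction.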

Scenario 2

Now for a different scenario: you are driving behind a truck on the motorway (freeway), and the truck is loaded with scaffolding equipment. To your left is a motorcyclist wearing a helmet and protective clothing; to your right is a motorcyclist wearing a t-shirt and jeans. Some poles come loose on the truck and fall towards your vehicle. You do not have time to stop: you must swerve left, swerve right, or carry straight on.

Again, whatever you do is a reaction. It is not predetermined, and it will be classed as an accident.

With a self-driving car, the car will again have enough time to make a decision. Below are the possible outcomes.

  1. The car continues straight. This way the only people put at risk are the occupants of the car, and as they are the best protected they stand a good chance of survival. But would you want a car that maximises others’ survival over your own?
  2. The car swerves left, into the motorcyclist with protective clothing. This motorcyclist has a higher chance of survival than the one on the right. But has the car not just potentially condemned this person for choosing to wear the recommended equipment? By this logic it is actually safer to ride in jeans and a t-shirt than in the recommended protective gear (the sketch after this list makes the point concrete).
  3. The car swerves right, into the motorcyclist wearing jeans and a t-shirt. This person will most likely die from the collision, especially at the relatively high speeds of the motorway (freeway). Has the car now dealt out a very crude form of judgement, essentially condemning the motorcyclist for not wearing the correct equipment?
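
The perverse incentive in options 2 and 3 is easy to show in code. This is an illustrative sketch with invented survival estimates: a harm-minimising rule, once forced to swerve, targets whoever is most likely to survive, which is precisely the rider who wore the gear.

    # Invented survival estimates for the two swerve targets.
    RIDERS = {
        "swerve_left":  0.50,  # helmet and full protective clothing
        "swerve_right": 0.15,  # t-shirt and jeans
    }

    def pick_swerve(riders):
        # A harm-minimising rule hits whoever is most likely to survive.
        return max(riders, key=riders.get)

    print(pick_swerve(RIDERS))  # swerve_left: the protected rider is chosen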

Scenario 3

This last scenario is highly hypothetical; it is very similar to scenario 2, with a few changes. Again you are behind a truck carrying scaffolding, and again you are flanked by two motorcyclists. This time both motorcyclists are wearing fully protective clothing and helmets. The contents of the truck again spill towards you and force a decision.

For a human it makes no difference: again it is a reaction, again an accident. For a self-driving car, maybe not so much.

  1. The car, unable to make a decision based on survival chances, generates a random number to choose between the options (a sketch of this follows the list). Whatever happens, would you be content to let your car essentially gamble with your life?
  2. The car reads the motorcyclists’ number plates and finds that the rider on the left is a convicted drug user and alcoholic, while the rider on the right is a neurosurgeon who has saved over a hundred people with his or her skill. The car decides that the neurosurgeon’s life is more important, since without them many people who could be saved will die, and swerves into the motorcyclist on the left. The car has now indirectly “saved” countless lives by preserving the neurosurgeon. Is this premeditated murder? Or is it simply the future we can expect, in which our cars decide whose life is worth more than another’s? Who will program the car to decide whose life is worth the most? Very 1984.
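
For option 1, here is a hedged sketch of what gambling with your life could look like in code, assuming the car falls back on a random draw when its estimates tie. All names and numbers are hypothetical.

    import random

    # Hypothetical: both swerve targets are judged equally survivable,
    # so a deterministic rule has nothing to choose between.
    ESTIMATES = {"swerve_left": 0.50, "swerve_right": 0.50}

    def decide(estimates, rng=random.Random()):
        best = max(estimates.values())
        tied = [action for action, score in estimates.items() if score == best]
        # With a genuine tie, the car literally rolls the dice.
        return rng.choice(tied)

    print(decide(ESTIMATES))  # swerve_left or swerve_right, at random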



The self-driving cars of tomorrow may very well be able to decide who lives and who dies. How Google and the others building and programming these machines will deal with these massive ethical dilemmas remains to be seen.

