Habra: The Selective Ethics of Driverless Cars

Since humans began innovating in engineering and mechanics, we have steadily become better at automating tasks. Over the last century we have seen this automation in large-scale farming and assembly-line systems.

The most advanced forms of automation today process large amounts of data through algorithms so that apps and machines increasingly handle the mundane thinking for you, even anticipating what you will think, feel, and want. We see this everywhere now, from Amazon's predictive shopping suggestions to smart homes that adjust room environments on the fly to keep you comfortable. We're seeing an evolution of fully connected virtual entities processing data to automate your life and maximize your comfort.

There’s nothing more tangible, more far-reaching, and more evolved in this category than the promise of self-driving cars. As we examine the notion of trusting our vehicles to drive us from point A to point B, many deep philosophical issues come to mind. There is already plenty of data suggesting that self-driving cars will save lives, and lots of them. Today’s self-driving cars already appear to be safer than human drivers in many conditions. Currently, approximately 90 people are killed in car accidents in the United States every single day. Some of these deaths involve neglect, error, or intoxication; by and large, accidents happen on account of human error. The trends in self-driving car technology suggest that as we move toward a fully self-driving world, this figure will shift toward zero in the coming decades. According to Morgan Stanley, self-driving cars will be commonplace in society within a decade.

So, at first blush, the promise that we could all but eliminate these driving errors is a transformative advancement in our civilization and in the preservation of human life.

Imagine a driverless car scenario in which the vehicle is traveling 70 mph along a divided highway. Suddenly, four teenagers sprint across the road. The driverless car must make a split-second decision: strike the teenagers, instantly killing them, or swerve into oncoming traffic and potentially kill the vehicle’s passengers. If a human were at the wheel, he or she would make an instinctive decision in that urgent moment. The driverless car, however, follows its programming without emotion or moral consideration.

The question programmers face, then, is: “How do we program appropriate ethical decision-making in such scenarios?” If programmers choose to maximize the self-preservation of the car’s owner and passengers, the car would choose to strike the teenagers. Alternatively, the programmer may take into account the number of people in the car and make a purely quantitative decision: if there are only two passengers, it would be better to sacrifice the two in the car than to kill the four running across the highway.

Then we have to consider nuances like: “What are those four teenagers doing running across a highway?” Or maybe the car even knows the number of dependents the car owner has and factors this into the equation.  Perhaps there are only two passengers, a husband and wife, but they have six young children.  And so, the death of two would actually immediately affect eight versus the death of four teenagers with no dependents.

If your head is spinning just analyzing these hypotheticals, consider the job of the programmers tasked with looking at these types of situations in detail and programming for them.  They not only have to consider these factors, and thousands more, but then find a meaningful way to program this into the car.  If we apply the rules of Utilitarianism to the equation, the programmer would have to factor the “value” of various lives in each equation to determine the choice that leads to the greatest “utility.”  The reality is that there is no simple answer and even entire philosophical schools of thought like Utilitarianism will not resolve these ethical dilemmas.
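The purely quantitative calculus described above can be made concrete with a toy sketch. Everything in it is a hypothetical illustration invented for this article: the harm scores, the dependents weighting, and the function names are assumptions, not anything a real manufacturer's software actually uses.

```python
# Toy "utilitarian" decision rule for the scenarios discussed above.
# All scoring here is a hypothetical illustration, not real vehicle logic.

def expected_harm(group):
    """Crude harm score: one unit per life, plus one unit per dependent
    who would be immediately affected by that person's death."""
    return sum(1 + person.get("dependents", 0) for person in group)

def choose_action(passengers, pedestrians):
    """Pick the action whose affected group carries the lower harm score."""
    harm_if_straight = expected_harm(pedestrians)  # strike the pedestrians
    harm_if_swerve = expected_harm(passengers)     # sacrifice the passengers
    return "swerve" if harm_if_swerve < harm_if_straight else "straight"

# Two passengers with no dependents vs. four teenagers: 2 < 4, so the
# toy rule swerves and sacrifices the passengers.
print(choose_action([{}, {}], [{}, {}, {}, {}]))  # -> swerve

# The husband-and-wife example: two passengers with six dependents
# between them score 8, the four teenagers score 4, so the rule
# strikes the teenagers instead.
couple = [{"dependents": 3}, {"dependents": 3}]
print(choose_action(couple, [{}, {}, {}, {}]))  # -> straight
```

Even this tiny sketch exposes the problem: every weight in it is a contestable moral judgment dressed up as arithmetic, which is exactly the difficulty the programmers face.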

An easy way out would be to say, “Sure, there will be horrible accidents, but many more lives will be saved and so the general good of the driverless cars is greater than the specific bad moments.”

But there’s a much deeper question that concerns the ethical choices that will need to be made about driverless cars. First of all, it’s important to note that driverless cars are not error-free, and these errors can lead to death. Such an event took place last year, when a Tesla on Autopilot failed to apply the brakes as a tractor-trailer turned across its path. The 40-year-old Tesla driver lost his life in the accident. Far more complex scenarios arise when the driverless car follows its programming without error and still must take a split-second action that results in loss of life.

All of this analysis raises the question of how to program ethics into our machines. The first challenge is recognizing that everyone’s ethics differ to some degree. Even if you have a firm stance on this scenario, that you would “for sure sacrifice yourself,” realize that in the moment you might not follow even your own programming. So this raises the question: what kind of ethics will you, as the car’s owner, program into your own driverless car?

If these questions intrigue you, you’re not alone. MIT’s Moral Machine project presents exactly these kinds of dilemmas as a test to help frame how you might program the ethics of your driverless car.

The manufacturer will not set the final “ethical” programming. This will be left to the owner and the owner’s specific ethical code. I think this alone will make us better humans. Too many people go through life with no defined ethical code. That doesn’t make them bad people; it just means they react as life happens, and while they may hold some core values, their ethical code has yet to be defined.

In this world where we are making major decisions about morality and ethics, it seems the self-driving car will do much more than get us from point A to point B: this up-and-coming technology will create a new drive to program our own morality.

Jacques Habra is an award-winning entrepreneur who has launched various startups in Web, electronics, real estate, and retail. Habra’s consulting firm, Noospheric, provides consulting and coaching for entrepreneurs and professionals; as well as educational courses in wealth creation and entrepreneurship. To learn more about Jacques Habra or Noospheric visit or




