Self-driving cars, ethics and improving health
ARE you familiar with two common terms used in ethics dialogue — utilitarian and egalitarian?
In a simplified way, utilitarian means believing that actions are right if they are useful and are for the benefit of the majority. It is derived from the doctrine of utilitarianism, which states that an action is right insofar as it promotes happiness, and that the greatest happiness of the greatest number of people in a society should be the guiding principle for human conduct.
Egalitarian means making or ensuring that everything is equal. It comes from the doctrine of egalitarianism, which states that all people are equal and so deserve equal rights and equal opportunities.
Benefits of driverless cars
These two terms have both been used in reference to driverless cars recently. A very progressive invention, driverless cars appear to provide many benefits to humans.
Worldwide, 1.2 million people are killed in traffic accidents each year. With a reduction or elimination of human error, it is believed that driverless cars would significantly reduce traffic accidents and hence road deaths each year.
Self-driving cars would also bring a reduction in traffic, as well as reduced carbon dioxide emissions and air pollution, thereby improving health. Traffic jams cause a rise in blood pressure, anxiety, depression, road rage, as well as a loss of good quality sleep. Air pollution contributes to respiratory allergy, asthma, and other respiratory diseases.
Driverless cars are also likely to be more precise, more technical, and less emotional in their decision-making on the roads. They will relieve the driver of the need for full concentration at the steering wheel, creating free time to sit back and relax, read a book, or sleep while the car does the driving. You could also use the telephone at length, or do constructive work on a laptop or other device while being driven to work or to other obligations, assignments or meetings.
Safety issues
Lately, however, driverless cars have been in the news for tragic reasons, namely two road fatalities during March 2018. This brought safety issues to the fore for in-depth consideration and discussion. However, since 94 per cent of car crashes on the road are caused by driver error, once cars become driverless there are likely to be fewer fatalities on the road.
We know that safety can never be guaranteed 100 per cent in all matters on the road, and so, for our own benefit, we will have to decide whether driverless cars should be utilitarian or egalitarian. In other words, what kind of ethics should be programmed into driverless cars?
Just like us humans, driverless cars will face very challenging situations on the roads and will have to make life and death decisions regarding which lives to prioritise in the event of an impending crash. Such decisions require ethical deliberations and ethical justifications.
Ethical deliberation
In the journal Frontiers in Behavioral Neuroscience, a recent publication analysed the views of human beings on how driverless cars should manage these situations. The researchers from Osnabruck University in Germany assessed the responses of 189 participants to a series of difficult ethical scenarios, including one in which the participants must choose between careening into a crowd versus running over a single pedestrian, and another possibility where participants must choose between sacrificing their own lives or taking the lives of a group of pedestrians.
The researchers found that respondents overwhelmingly sought to minimise loss of life, and were generally willing to sacrifice their own lives. Further, respondents also tended to favour younger individuals over older people. These respondents’ intuitions were described by the researchers as utilitarian, and they noted a conflict between this approach and an egalitarian philosophy.
Should the focus be on reducing the number of deaths for people inside cars, or for pedestrians? Should the programming of the vehicle be made in terms of people versus dogs versus light posts, or instead be on high value versus low value? What is the relative value placed on a pedestrian versus a bicyclist versus another car?
Morally relevant factors
A self-driving car is not programmed to make decisions the way humans make them; it is programmed to choose the path that best maximises value. But should customers have a choice in how their self-driving vehicle is programmed, or should the moral code for all these cars be the same? Should the autonomous, self-driving vehicle be programmed to run red lights every time it appears safe to do so?
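To make the idea of value-maximising programming concrete, the logic can be sketched in a few lines. This is purely illustrative: the option names, harm scores, and scoring function are assumptions invented for the example, not any manufacturer's actual decision code.

```python
# Hypothetical sketch of utilitarian, value-maximising path selection.
# All options and severity values are illustrative assumptions only.

def expected_harm(option):
    """Score an option by the total harm it is expected to cause."""
    return sum(casualty["severity"] for casualty in option["expected_casualties"])

def choose_path(options):
    """A utilitarian controller picks the option with the least expected harm."""
    return min(options, key=expected_harm)

# Two invented emergency options facing the car in an impending crash:
options = [
    {"name": "swerve_left",
     "expected_casualties": [{"who": "pedestrian", "severity": 0.9}]},
    {"name": "brake_straight",
     "expected_casualties": [{"who": "occupant", "severity": 0.3}]},
]

print(choose_path(options)["name"])  # prints "brake_straight", the lower-harm option
```

The ethical questions in this article are precisely about what belongs in a scoring function like `expected_harm`: whose injuries count, how they are weighted, and whether occupants and pedestrians are valued equally.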
Should the government have any role in limiting the extreme possibilities? How long will it take for us to trust self-driving cars? Society should consider what norms it would want in self-driving cars, and policymakers should engage with the public on the matter. Our future is upon us.
Dr Derrick Aarons MD, PhD, is a Jamaican family physician and consultant bioethicist who is a specialist in ethical issues in health care, research, and the life sciences; and is the health registrar and head of the health secretariat for the Turks & Caicos Islands.