Guest post by Akshata Prabhu
Imagine that you’re the driver of a runaway trolley hurtling towards five innocent men working on the tracks. You can’t stop the trolley, but you can pull a switch that diverts it onto another track, where there is just one worker. Do you pull the switch and actively kill the one worker? If you think this is ridiculous, and that no such situation will ever actually arise, the days of such blissful ignorance are sadly over.
The trolley problem is one of the most famous philosophical thought experiments of all time, and it’s not hard to see why. Conflicting answers about the ‘right thing to do’ are the norm, a result of the tension between our moral duty not to cause harm and our duty not to do bad things. If harm were measured by the number of lives taken, in keeping with Mill’s utilitarianism, switching tracks is a no-brainer. However, the fear of ‘actively’ doing a bad thing, i.e. committing murder by choosing to pull the switch, keeps that from being the obvious choice. Either way, irrespective of the choice, dismissing the experiment as pointless is often the first reaction. Seen realistically, what a person actually in such a situation does might differ wildly from what he or she thinks is the right choice. If you are driving a car, headed towards an inevitable crash, what you do in the moments before impact could be chalked up to fear, survival instinct or sheer reflex. Being human rids you of the responsibility to take a strong ethical stance.
Over the last year or two, however, the trolley problem has resurfaced, and this time with enormous social consequence. With the advent of driverless cars, the ethical questions that haunt researchers are not unlike the trolley problem. Would a person get into a driverless car if its algorithms chose to let the occupant die rather than allow a collision that kills multiple people? Helen Frowe, a professor of practical philosophy, supports protecting innocent bystanders, since those in the car bear more responsibility for any danger. But this is where things get a little murky: in a situation where there are innocent people in the car, especially children, where does the responsibility lie? And if the collision is with a jaywalker, Frowe’s suggestion doesn’t seem right at all. According to Jean-Francois Bonnefon, a professor at the Toulouse School of Economics, utilitarianism instead seems like a popular choice. His in-depth attempt to gauge public opinion, however, produced an almost predictable result: people wanted others to use utilitarian vehicles more than they themselves did! The paradox is laughable, but it also quite clearly sheds light on the sort of self-interest that surfaces in such decision-making.
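To make the utilitarian framing concrete, here is a toy sketch of what a purely utilitarian decision rule amounts to: pick whichever action minimizes expected deaths, with no regard to who the victims are. This is a hypothetical illustration only; the function and scenario names are invented for this example, and real autonomous-vehicle software is nothing like this crude.

```python
def utilitarian_choice(options):
    """Pick the action with the fewest expected fatalities.

    options: dict mapping an action name to its expected death toll.
    A pure utilitarian rule counts only lives, not whose lives they are.
    """
    return min(options, key=options.get)

# The scenario from the text: swerving sacrifices the lone occupant,
# staying on course kills five pedestrians.
scenario = {
    "swerve (kill occupant)": 1,
    "stay on course (kill pedestrians)": 5,
}
print(utilitarian_choice(scenario))  # -> swerve (kill occupant)
```

The sketch also makes Bonnefon’s paradox easy to state: survey respondents endorse this rule for other people’s cars, while preferring that their own car not treat the occupant as just another entry in the tally.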
Irrespective of the choice made, any decision in this regard has a significant impact on society. Imposing one moral viewpoint on the whole world severely disadvantages innocent people on either side; alternatively, the existence of different moral algorithms, and the ability of buyers to choose among them, drastically changes who bears responsibility for harmful consequences. But more than revolutionizing the idea of justice, the trickling down of the trolley problem into the world of artificial intelligence seems to have opened the door to a world of new possibilities: one where decisions cease to be knee-jerk reactions, and where we can premeditate different options as we program how our machines act. From all this, one thing is clear: policymakers in the field of artificial intelligence will have to deal with some of the toughest ethical dilemmas our society has ever seen. The decision on what constitutes ‘ethical’ and what doesn’t will be crucial in determining the idea of right and wrong for mankind.
Akshata Prabhu is a participant in the 13th cohort of GCPP, the flagship public policy course of the Takshashila Institution.