Opinionista

I, for one, welcome our robot overlords

Rousseau is a voluntary exile from professional philosophy, where having to talk metaphysics eventually became unbearably irritating. He now spends his time trying to arrest the rapid decline in common sense exhibited by his species, both through teaching critical thinking and business ethics at the University of Cape Town, and through activities aimed at eliminating the influence of religious ideology in public policy. When not being absurdly serious, he’s one of those left-wing sorts who enjoys red wine, and he is alleged to be able to cook a mean Bistecca Fiorentina.

Technological advances can certainly threaten our customs as well as our comfort levels, by asking us to adapt to some new – and hopefully improved – way of doing something. And where we’re being offered something that seems a clear improvement, like cars driven by robots that can’t get drunk or tired, we should be wary of allowing emotion to impede morality.

On a flight back from somewhere, earlier this year, the pilot announced that we’d just been treated to a fully automated landing. While nobody expressed any concern, there were a few thoughtful or confused looks around the cabin from people not quite sure how to respond to the news.

My first thought concerned the timing of the announcement. Just in case anyone would be concerned at being landed by an algorithm, the SAA (I think it was) management (yes, I know) presumably decided to only let us know once the deed had successfully been done. But I also wondered how many others were, like me, thinking something along the lines of “it’s about time”.

It’s about time, I mean, that we acknowledge that humans are inferior to computers at making some decisions, and that we should therefore remove humans from the equation. And not just some – in areas where decisions are made by reference to a multitude of factors, and the intended outcome (such as landing a plane safely) is unambiguous, I’d be tempted to up that to “most”.

Pilots are of course well trained, and no doubt need to pass regular checks for things that might impair judgement, like drugs, alcohol or sleep-deprivation. That’s one of the reasons that far fewer people die in accidents involving planes than in accidents involving cars. But another reason is that we think far too highly of ourselves, and of our own competence at performing routine tasks in adverse circumstances – like driving home after one too many drinks.

We’re reluctant to understand ourselves as a simple statistical data-point, far more likely to conform to the mean than not. Anecdotes trump data every time for most of us, which is why we can think that we’re superb drivers while under the influence of something, until that day when we’re just a drunk driver, like all the other drunk drivers who have caused accidents since booze first got behind the wheel of a large metal object.

But despite our occasional incompetence in this regard – and note, an incompetence that we can to some extent control – is it time to hand everyday driving over to computers as well? I’d say it might well be, for those of us who can afford to. Because even if you’re as alert as you could possibly be, you’re still not able to simultaneously engage with as many variables as a computer can, nor are you able to react to the outputs of that engagement as quickly.

Computers – or robots – pose questions beyond whether they or a human would be superior at performing a given task, like getting you to the church on time or destroying an enemy installation during war. In war, proportionality of response is a key issue for determining whether a drone attack is legal or not, and as soon as a drone is fully autonomous, we’d need to be able to trust that its software got those judgements right.

Or would we? The standards that we set for human beings allow for mistakes, so it would be inconsistent to refuse the possibility of error for robots, even if they were unable to express contrition, or to make amends. As with many encroachments of technology into our existence, robotics is an area where we need to be careful of privileging the way that humans have always done things, just because we are human.

Cloned or genetically modified food, in vitro fertilisation, surrogate motherhood, stem-cell research (to list but a few examples) are all areas where either a sort of naturalistic fallacy (thinking something morally superior or inferior depending on whether it’s natural or not) or some sort of emotive revulsion (the “yuk factor”) gets in the way of a clear assessment of costs versus benefits. When speaking of robots driving our children home from school, a similarly emotive reaction can also cloud our thinking.

Just as with any data point in any set, you and I and everyone we know are more likely to feel superior in a given skill than to actually be better at it than the mean. The mean describes something: in this case, the level of performance of the average person. And if we were all better than average, the average would be higher. For driving, it’s not, or we wouldn’t have an average of over 700 road fatalities every month.

So the question to ask is: when can we be confident that – on average – fewer people will die on the roads if cars are robotic than if the drivers are human? If we’ve reached the point of being confident about that, then the moral calculus shifts against human drivers. If you have the means and opportunity, you’d be acting less morally by driving your kids to school yourself than by having a computer do so – regardless of how this feels.

In the New Yorker, Gary Marcus recently invited us to consider this scenario: “Your car is speeding along a bridge at 50mph when [an] errant school bus carrying 40 innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all 40 kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.”

As Sarah Wild and others have pointed out in response, this sort of scenario does raise questions about which rules we would like the car to follow, who makes those rules, and who is to blame when some unfortunate accident or death occurs. But where I think most comment on this issue gets it wrong is in labelling the moral dilemmas “tricky”, as Wild does.

If we can, on average, save many more lives by using robotic cars instead of human-controlled ones, the greater good would at some point certainly be maximised. Yes, there will be circumstances where the “wrong” person dies, because a maximise-life-saving algorithm will not be adaptable to very idiosyncratic circumstances, like the one described by Marcus.

In general, though, the robotic car will not speed, will never run a red light, and will never exceed the threshold for maintaining traction around a corner. It will never drive drunk, and will be far better at anticipating the movements of other vehicles (even if those vehicles aren’t themselves robots on the same information grid), thanks to a larger data-set than ours and an objective assessment of that data.

So it’s not that there is a tricky moral dilemma here. What’s tricky is that we aren’t able to view it – and ourselves – simply as an economic problem, where the outcome that would be best for all of us would be to set things up in a way that maximises life, on aggregate.

Any solution that prioritises human agency, or that builds in mechanisms for knowing who to blame when things go wrong, is understandable. But, once the driverless car is sophisticated enough, it would also be a solution that operates contrary to a clear moral good. DM
