Beware of the rise of the black box algorithm

One of the ways my partner and I are a good match is that we both like board games, and I’m not very good at them. It helps because my partner is a graceful winner but a terrible loser. Once, in her early teens, during a game of checkers with her sister, she responded to an unwinnable position by turning over the table.

If artificial intelligence destroys human life, it will certainly look more like my partner’s reaction to defeat than the destructive intelligence of the Terminator movies. Disaster will come not when a sophisticated intelligence decides to use its power for deliberate evil, but when the easiest way for it to fulfil its programming and “win” is to flip the table.

The threat of artificial intelligence causing some sort of societal catastrophe is, of course, one reason to care about research, ethics, and transparency. But the focus on potential disaster can distract from more mundane dangers. If your GPS directs you towards the edge of a cliff, as it did in 2009 for Robert Jones, who was convicted of driving without due care and attention after following it, that is not a societal tragedy. But it becomes a personal one if it costs you your life, your job, or even just your driver’s license.

An unfortunate consequence of constant dire predictions about the absolute worst consequences of artificial intelligence or machine learning programs is that they encourage a kind of “well, they haven’t killed us yet” complacency about their current prevalence in public policy and business decision-making.

A more common problem is that, for policymakers and business leaders alike, the word “algorithm” can become imbued with magical powers. A good recent example is the UK government’s doomed attempt to award grades to students during the pandemic. But an algorithm is just a set of rules or mathematical formulas through which data is fed to produce a result. Because no UK student taking their GCSEs or A-levels had produced much meaningful data on their own performance, the UK “algorithm” was essentially arbitrary at the individual level. The result was public outcry, an abandoned algorithm, and rampant grade inflation.
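To see why the exercise was arbitrary at the individual level, consider a deliberately simplified sketch in Python of a grade-quota approach: grades are allocated from a school’s historical grade shares and a teacher’s rank ordering of pupils, with no input from the individual student’s own work. This is an invented illustration, not the actual Ofqual model; the function, names, and numbers are all hypothetical.

    # A toy sketch, not the real Ofqual model: grades come from the school's
    # historical grade shares and the teacher's rank ordering of pupils.
    # Nothing about the individual student's own performance is used.

    def assign_grades(ranked_students, historical_shares):
        # historical_shares: e.g. [("A", 0.25), ("B", 0.50), ("C", 0.25)],
        # the share of each grade the school achieved in previous years.
        cutoffs, running_total = [], 0.0
        for grade, share in historical_shares:
            running_total += share
            cutoffs.append((grade, running_total))
        results = []
        for position, student in enumerate(ranked_students, start=1):
            percentile = position / len(ranked_students)
            for grade, cutoff in cutoffs:
                if percentile <= cutoff + 1e-9:
                    results.append((student, grade))
                    break
        return results

    # Even if this year's cohort is far stronger than past cohorts, the
    # distribution of grades cannot change: the outcome is arbitrary at the
    # individual level, exactly the problem described above.
    print(assign_grades(["Amira", "Ben", "Chloe", "Dev"],
                        [("A", 0.25), ("B", 0.50), ("C", 0.25)]))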

The most disturbing use of algorithms in public policy is the so-called “black box algorithm”: one whose inputs and workings are hidden from public view. Sometimes that is because they are treated as proprietary information: the factors underlying the Compas system, used in the United States to measure the likelihood of recidivism, are not publicly available because they are treated as company property.

This inevitably poses problems for democracy. Any system designed to measure the likelihood that a person will re-offend must strike a balance between releasing people who may in fact re-offend and continuing to imprison people who are ready to become productive members of society. There is no objectively “right” answer here: algorithms can inform that decision, but the judgment is ultimately one that must be made by politicians and, indirectly, their constituents.

As the statistician David Spiegelhalter has observed, there is no practical difference between judges using algorithms and judges following sentencing guidelines. The one significant difference is that sentencing guidelines are clearly understood, publicly available, and subject to democratic debate.

The UK’s doomed grading algorithm was a “black box” not because of intellectual property law or a company’s desire to protect its interests, but because of the British state’s default preference for opaque decision-making. Had the workings of the process been made public sooner, the political opposition to it would have become clear in time to find a more palatable solution.

The other form of black box algorithm is one whose workings are publicly available but too complex to be easily understood. This, again, can have disastrous implications. If the algorithm that decides who gets fired cannot reasonably be understood by employees, or indeed by employers, then it is a poor tool for managers and one that breeds dissatisfaction. In public policy, if an algorithm’s results are too complex to interpret, they can cloud debate instead of helping policymakers make better decisions.

Spiegelhalter proposes a four-phase evaluation process for algorithms and machine learning in public policy and the workplace, comparable to the approval process pharmaceuticals must go through in the UK. One merit of the plan is that it could avert a doomsday mistake: but it would also help prevent minor tragedies and public policy failures.

stephen.bush@ft.com
