
Caught in algorithmic management trap

Azeem Azhar

Originally published by Azeem Azhar on LinkedIn: Caught in algorithmic management trap

One element of the United Airlines passenger violence story intrigues me: how firms start to rely on process so heavily that it becomes an excuse for eliminating employee discretion, common sense or kindness.

We know that in this recent case United didn’t follow its own process exactly, but the incident demonstrates the fundamental issue: processes debilitating decency.

It’s a small example of the risks of algorithmic management. Processes and manuals with steps, branching and decision points are essentially algorithms. If the people following them have little room to exercise discretion (whether through supervisory fiat, incentives or corporate culture), you’re dealing with management by algorithm.
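To make the point concrete, a rigid service script really can be written down as a small decision tree with no “use your judgment” branch. This is a minimal sketch with a hypothetical scenario and function name, not any airline’s actual procedure:

```python
# A hypothetical overbooking script, written as a rigid decision tree.
# Every branch is prescribed in advance; no path lets the agent
# exercise discretion, common sense or kindness.

def handle_overbooked_flight(volunteers: int, seats_needed: int) -> str:
    """Follow the script step by step; the script always 'decides'."""
    if volunteers >= seats_needed:
        return "compensate volunteers"        # step 1: ask for volunteers
    if seats_needed - volunteers <= 2:
        return "raise compensation offer"     # step 2: escalate the offer once
    return "involuntarily deny boarding"      # step 3: the script's dead end
```

An agent executing this faithfully is, in effect, being managed by the algorithm: the outcomes are fixed by whoever wrote the branches, not by the person on the spot.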

John Robb sums it up in this excellent essay:

The entire process was inevitable. It’s also not a unique situation. We’re going to see much more of this in the future as algorithms and authoritarianism grow.

To which I’ll add that while the passenger’s treatment resonates viscerally, we are all, as consumers and citizens, at the mercy of black-box processes (aka algorithms).

When we engage with business and government and are fed through a decision tree. When an insurance claim is processed. When we are triaged by a telenurse. When we make a complaint. These are often rigid black-box systems. The humans operating these processes in many cases have limited discretion. This discretion has been whittled away over years in the name of optimisation or standardisation. And in many cases, that process has led to efficiency, fewer errors and higher quality for us.

But in some instances (especially with dominant market players like US airlines, credit reference agencies, utilities or social networks) a black-box culture becomes irredeemably inflexible and opaque.

In many industries, the human is already out-of-the-loop, simply following a script. We’ve all heard an agent say “the system doesn’t allow me to do that.” Facebook's banning of the world's most famous war photo is another example.

Will automated systems make this more or less common?

The silver lining is that automated systems could be less brittle than current processes. A machine learning system could learn rapidly from experience, ultimately optimising more efficiently and with greater efficacy than an ‘architect-once, implement-forever’ process manual. Human overseers could have the power to override such genuinely automated systems when the machines suggest perverse outcomes. The ‘could’ is crucial here. It depends on the systems being well designed in the first place.
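The override idea is, at bottom, a piece of control flow: route high-impact automated decisions to a human before they execute. The sketch below uses entirely hypothetical names (`model_decision`, `needs_review`) and a toy rule standing in for a learned model; it illustrates the design, not any real system:

```python
# Sketch: wrap an automated decision with a human override point.
# All names and thresholds are hypothetical; the point is the control flow.

def model_decision(claim: dict) -> str:
    # Stand-in for a learned model: deny large claims, approve the rest.
    return "deny" if claim["amount"] > 10_000 else "approve"

def needs_review(decision: str, claim: dict) -> bool:
    # Flag high-impact outcomes for a human instead of auto-executing them.
    return decision == "deny" and claim["amount"] > 1_000

def decide(claim: dict, human_override=None) -> str:
    decision = model_decision(claim)
    if needs_review(decision, claim) and human_override is not None:
        return human_override(decision, claim)  # the human gets the final say
    return decision
```

Whether firms wire in a `human_override` at all, and who is allowed to invoke it, is exactly the design choice the paragraph above turns on.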

The risk is that the opposite will happen, because firms may expediently implement poorly designed systems that are prone to absurdity or petty cruelty, byzantine in their explanations, or implacably opaque.

What do you think businesses will choose?

Want to follow week-to-week commentary on the most pressing questions in technology? Sign up to my newsletter here.