First, write out the numbers one to 100 in 10 rows. Cross out the one. Then circle the two, and cross out all of the multiples of two. Circle the three, and do likewise. Follow those instructions, and you’ve just completed the first three steps of an algorithm – an incredibly ancient one. Twenty-three centuries ago, Eratosthenes sat in the great library of Alexandria, using this process (now known as the Sieve of Eratosthenes) to find and separate out prime numbers. Algorithms are nothing new; indeed, even the word itself is old. Fifteen centuries after Eratosthenes, Algoritmi de numero Indorum appeared on the bookshelves of European monks, and with it came the word to describe something very simple in essence: follow a series of fixed steps, in order, to arrive at the answer to a given problem. That’s it; that’s an algorithm. Simple.
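For the curious, those pen-and-paper steps translate almost directly into a few lines of code. This is a minimal sketch in Python (my illustration, not part of the original account): repeatedly take the smallest number not yet crossed out, circle it as prime, and cross out its multiples.

```python
def sieve(limit):
    """Sieve of Eratosthenes: return all primes up to and including `limit`."""
    crossed_out = [False] * (limit + 1)
    primes = []
    for n in range(2, limit + 1):
        if not crossed_out[n]:
            primes.append(n)  # "circle" n: nothing smaller divides it
            # cross out every multiple of n; smaller multiples are
            # already crossed out by smaller primes, so start at n*n
            for multiple in range(n * n, limit + 1, n):
                crossed_out[multiple] = True
    return primes
```

Running `sieve(100)` reproduces Eratosthenes’ exercise: it returns the 25 primes up to 100, from 2 to 97.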
Except, of course, that the story of algorithms is neither so simple nor so humble. In the shocked wake of Donald Trump’s victory in the United States presidential election, a culprit was needed to explain what had happened. What had, against the odds and in the face of thousands of polls, caused this tectonic shift in US political opinion? Soon the finger was pointed. On social media, and especially on Facebook, it was alleged that pro-Trump stories based on inaccurate information had spread like wildfire, often eclipsing real news and honestly checked facts.
But no human editor was thrust into the spotlight. What took centre stage was an algorithm: Facebook’s news algorithm. It was this, critics said, that was responsible for allowing the “fake news” to circulate. This algorithm wasn’t humbly finding prime numbers; it was responsible for the news that you saw (and, of course, didn’t see) on the largest source of news in the world. It had somehow risen to become more powerful than any newspaper editor in the world – powerful enough, perhaps, to throw an election.
So why all the fuss? Something is now happening in society that is throwing algorithms into the spotlight. They have taken on a new significance, even an allure and mystique. Algorithms are simply tools, but a web of new technologies is vastly increasing the power that these tools have over our lives. Startling leaps forward in artificial intelligence mean that algorithms have learned how to learn, and have become capable of accomplishing tasks and tackling problems that they were never able to achieve before. Their learning is fuelled by more data than ever before, collected, stored and connected by the constellations of sensors, data farms and services that have ushered in the age of big data.
Algorithms are also doing more, whether welding, driving or cooking, thanks to robotics. Wherever some kind of exciting innovation is happening, algorithms are rarely far away. They are being used in more fields, for more things, than ever before, and are incomparably, incomprehensibly more capable than the algorithms Eratosthenes would have recognised.
Algorithms are the accountant, banker and lawyer of the future. Ben Hammersley, a contributing editor to Wired (and a sharp eye in the perilous game of tech futurology), has recently announced that “the rarefied position of the specialist professions is coming to an end”. Algorithms are learning to do what previously only white-collar workers have (expensively) done. Algorithms are being used as negotiators for legal contracts, choosing – split-second – which terms to offer and accept. They are also being used to create entirely new kinds of “smart” contracts where, in the words of Mustafa Al Bassam, an innovator in the technology underlying them, the “clauses [are] not written in English, secured by courts or executed by administrators. Clauses are written in computer code, secured by cryptography, and executed by an open network of computers that update a public ledger.” It’s not just lawyers. At the University of California, San Francisco’s new medical centre, an algorithmically operated robot runs a fully automated hospital pharmacy. The dispensing room is sterile and secure, and the algorithm, having prepared hundreds of thousands of prescriptions, is yet to make a mistake.
We already live in the age of algorithmic law enforcement. Predpol is one of several predictive policing companies that use algorithms to predict where crime is likely to happen in the future, on the basis of crimes committed in the past. If you are caught by the police, an algorithm might help to decide if and when you get parole, too. According to the Wall Street Journal, at least 15 states in the US use automated risk assessment tools to aid judges in making parole decisions. Algorithms are increasingly deciding whether you get that all-important job, too. Pegged is one company that offers this kind of technology (powered by artificial intelligence and fuelled by huge amounts of data) to find the most likely candidates for a company.
Algorithms are also increasingly being used in pursuits that we think of as creative and essentially human – things that come from the human soul, not from a cold, rules-based machine intelligence. Kenichi Yoneda is a Japanese artist who uses algorithms to mimic the imperfections and creative licence of human artists, creating works increasingly indistinguishable from human efforts. Algorithmic journalism is on the rise, too. Associated Press’s “robot journalists” published 3,000 articles, mainly financial, last year, some within minutes of the information being released.
The forces sweeping algorithms forward are strong. Algorithms are tireless: they don’t need to eat, and they don’t draw a salary. If you can automate something that was once done by a human being, you very often save money in the long term. In many of these areas, however, the claim is often that algorithms aren’t just as good as humans; they’re probably better. Algorithms can avoid the blind spots, prejudices and biases that are, well, unavoidably human. In the face of mountains of data and countless variables, humans, the argument goes, just can’t compete.
Which brings us back to Facebook’s news algorithm. It is intended to assess the huge amount of content available and to serve up the content that it thinks you will find interesting and will want to see.
This process used to be tweaked and curated by human editors. But after a rash of stories alleged that the team were soft liberals and, consciously or not, suppressed conservative opinions, Facebook got rid of them and started leaning more on algorithms. Under the old guidelines, news curators stuck to a list of trusted sources. The algorithms apparently didn’t, leading to claims that conspiracy theories and outright falsehoods were spreading out of control. Even US president Barack Obama has commented that online misinformation is a threat to democratic institutions.
Setting the specifics of that controversy, and of any particular algorithm, aside, the danger is not that these algorithms are stupid. If they are, they won’t be for long. From the news that we see, to where the police patrol, to the contracts that we sign, the medicines that we take, and countless other examples I could have picked, the danger is that they are too powerful.
Facebook is the world’s largest distributor of news: the news algorithm towers above any editor. Google is the window into the internet: its algorithm is simply the most important guide we have to online life. Algorithms breezily move billions around in the world’s markets, and make decisions that are sometimes, literally, a matter of life or death, freedom or imprisonment.
Alec Ross, who worked as an adviser to Hillary Clinton when she was US Secretary of State, has a dark prediction for algorithms. In 2017, he thinks, somebody will be assassinated by a driverless car: the rise of the algorithmic assassin.
Algorithms have undoubtedly improved services and made our lives easier. They have not just improved modern life; they are utterly essential to modern life. However, there’s a dark, worrying side to them too.
Aren’t there always humans somewhere behind the algorithms? Aren’t they just doing what we tell them to do? If an algorithm kills someone, or becomes racist, or crashes a market, isn’t there always somebody to blame? The answer is: in most cases, it’s really not that simple. Algorithms are now not only unbelievably sophisticated and built by large teams of coders; they are also constantly learning by themselves. They are in constant flux, revising and changing themselves on the basis of the feedback they receive.
When you’ve got networks of hundreds of algorithms working together, all of them constantly changing, it gets even more difficult. And (you’ve guessed it) we’re also seeing the rise of algorithms that make other algorithms. Amid all this complexity, unpacking the underlying logic and rationale is formidably difficult. The problem is that for most of us, most of the time, algorithms are invisible – and so is the thinking behind their instructions.
Even if you could work out what they did, for most of us algorithms are “black boxes”: hidden and proprietary. They are expensive pieces of intellectual property, jealously guarded by walls of both technical and legal protection. The source code of nearly all the algorithms that matter most to society is invisible to society. And because they’re invisible, they’re also unaccountable. The writer Cathy O’Neil has coined a term for the nastiest and most pernicious kinds of algorithms, the ones that may be doing all kinds of harm: “Weapons of Math Destruction”.
Parole algorithms may bias decisions on the basis of income or (indirectly) ethnicity. Recruitment algorithms might stop you getting a job on the basis of mistaken identity, and there’s nothing you can do. The academic Frank Pasquale has dubbed a society that uses them the “Black Box Society”, a society harmed by a whole new kind of secrecy that obscures the automated judgements that affect our lives.
The age of algorithmic power is no science fiction. It is not a looming possibility, or possible future; it is here and it shapes our lives today. Yet while we’ve seen algorithms grow in power, we have done little to control that power. We haven’t, societally, created ways of making them more transparent or subject to any kind of outside scrutiny.
People often have no opportunity to know, much less contest, the decisions that algorithms make about them, and little chance of redress or restitution when they believe those decisions have affected them adversely. This is a case of technology outpacing all the other societal things needed to control it: new processes, regulatory structures and professional codes, to say nothing of broader public understanding and acceptance.
It was not just mathematics that was important or useful to the ancient Greeks, nor is it the only thing we owe them. Eratosthenes’ travels took him to Athens, and there, perhaps more than anywhere else in the world, people also worried about concentrations of power, and began to develop concepts like responsibility, accountability and transparency to control it and to limit its abuses. Those examples, as much as the early algorithms, remain as important today as they have ever been.
Carl Miller is the research director at think tank Demos in the UK.