5
Complexity
Through the winter of 2014–15, I made several journeys across South East England in search of the invisible. I was looking for the traces of hidden systems in the landscape, the places where the great networks of digital technologies become steel and wire: where they become infrastructure. It was a form of psychogeography – a much-overused term these days, but one still useful for its emphasis on the hidden internal states that can be uncovered by external exploration.
The situationist philosopher Guy Debord defined psychogeography in 1955 as ‘the study of the precise laws and specific effects of the geographical environment, consciously organised or not, on the emotions and behaviour of individuals’.1 Debord was concerned with the increased spectacularisation of everyday life, and the ways in which our lives are increasingly shaped by commodification and mediation. The things we encounter in everyday life in spectacular societies are almost always a proxy for some deeper reality of which we are unaware, and our alienation from that deeper reality reduces our agency and quality of life. Psychogeography’s critical engagement with the urban landscape was one way of countering this alienation – a performance of observation and intervention bringing us into direct contact with reality, in surprising and urgent ways. And its utility is not tempered when, instead of seeking signs of the spectacle in urban life, we opt to look for signs of the virtual in the global landscape – and try to figure out what it’s doing to all of us.
Thus, a kind of dérive for the network: a process of psychogeography intended to discover not some reflection of my own pathology, but that of a globalised, digital collective. As part of a project called ‘The Nor’, I undertook several journeys to map these digital networks,2 starting with the system of surveillance devices that surround the centre of London: sensors and cameras monitoring the Congestion Charge and Low Emission Zones – which track every vehicle entering the city – as well as those scattered more widely by Transport for London and the Metropolitan Police, and the flocks of private cameras installed by businesses and other authorities. In two day-long walks I photographed more than a thousand cameras, enduring a citizen’s arrest and a police caution for my troubles.3 We will return to this theme of surveillance, and the strange atmosphere it generates, later in this book. I also explored the electromagnetic networks that make up London’s airspace, cataloguing the VHF omnidirectional radio range (VOR) installations – scattered across airports and abandoned World War II airfields, and hidden in woods and behind chainlink fences – that guide aircraft from point to point on their circumnavigations of the globe.4
The last of these journeys was a bicycle ride of some sixty miles, from Slough to Basildon, cutting through the heart of the City. Slough, twenty-five miles to the west of London, is home to an increasing number of data centres – the often-hidden cathedrals of data-driven life – and in particular to Equinix LD4, a vast and anonymous warehouse, located in a whole neighbourhood of newly built computational infrastructure. LD4 is the virtual location of the London Stock Exchange, and despite the lack of any visible signage, this is where most of the orders that are recorded by the exchange are actually processed. At the other end of the journey was another unmarked data centre facility: seven acres of server space distinguishable only by a fluttering Union Jack, and by the fact that if you linger too long on the road in front of it, you will be harassed by security guards. This is the Euronext Data Center, the European outpost of the New York Stock Exchange, whose operations are likewise obscure and virtual.
NYSE Euronext Data Center, Basildon. Photograph: James Bridle.
Connecting these two locations is an almost invisible line of microwave transmissions: narrow beams of information that bounce from dish to dish and tower to tower, carrying financial information of almost unimaginable value at close to the speed of light. By mapping these towers, and the data centres and other facilities they support, we can gain some insight not only into the technological reality of our age, but into the social reality it generates in turn.
Both of these locations are where they are because of the virtualisation of money markets. When most people picture a stock exchange, they imagine a vast hall or pit filled with screaming traders, clutching fistfuls of paper, making deals and making money. But over the last few decades, most of the trading floors around the world have fallen silent. First they were replaced with more mundane offices: men (almost always men) clutching phones and staring at lines on computer screens. Only when something went badly wrong – bad enough to be assigned a colour, like Black Monday or Silver Thursday – did the screaming start again. Most recently, even the men have been replaced with banks of computers that trade automatically, following fixed – but highly complex – strategies developed by banks and hedge funds. As computing power has increased and networks have become ever faster, trading on the exchanges has accelerated, giving this technique its sobriquet: high-frequency trading.
High-frequency trading on stock markets evolved in response to two closely related pressures, which were actually the result of a single technological shift. These pressures were latency and visibility. As stock exchanges deregulated and digitised through the 1980s and ’90s – what was known on the London Stock Exchange as the ‘big bang’ – it became possible to trade on them ever faster, and at ever-greater distances. This produced a series of weird effects. While profits have long been made by being the first to leverage the difference between prices on different markets – Paul Reuter famously arranged for ships arriving from America to toss canisters containing news overboard off the Irish coast so their contents could be telegraphed to London ahead of the ship’s arrival – digital communications hyperaccelerate the process.
Financial information now travels at the speed of light; but the speed of light differs from place to place. It is different in glass and in air, and it runs into limits as fibre-optic cables are bundled together, pass through complex exchanges, and route around natural obstacles and under oceans. The greatest prizes go to those with the lowest latency: the shortest travel time between two points. This is where private fibre-optic lines and microwave towers come into the picture. In 2009–10, one company, Spread Networks, spent $300 million to build a private fibre link between the Chicago Mercantile Exchange and Carteret, New Jersey, home of the NASDAQ exchange.5 They closed roads, they dug trenches, they bored through mountains, and they did it all in secret, so that no competitors discovered their plan. By shortening the physical route between the sites, Spread Networks cut the time it took a message to travel between the two data centres from seventeen milliseconds to thirteen – a reduction valued at roughly $75 million per millisecond.
In 2012, another firm, McKay Brothers, opened a second dedicated New York–Chicago connection. This time it used microwaves, which travel through the air faster than light travels through glass fibre. One of their partners stated that ‘a single millisecond advantage could equate to an additional $100 million a year to a large high-frequency trading firm.’6 McKay’s link gained them four milliseconds – a vast advantage over any of their competitors, many of whom were also taking advantage of another effect of the fallout from the big bang: visibility.
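The arithmetic behind these margins is simple enough to sketch. The figures below are illustrative assumptions rather than the operators’ published route data: light travels at about 300,000 kilometres per second through air, but only around two-thirds of that through glass fibre, and the two routes are taken to run roughly 1,330 kilometres by cable and 1,190 by line-of-sight relay.

    # A back-of-envelope comparison of fibre and microwave latency on the
    # New York-Chicago route. All distances and speeds are rough assumptions
    # for illustration, not the operators' actual figures.
    C_VACUUM = 299_792            # speed of light in vacuum, km/s
    V_FIBRE = C_VACUUM / 1.47     # light in glass fibre (refractive index ~1.47)
    V_AIR = C_VACUUM / 1.0003     # microwaves in air, barely below vacuum speed

    def round_trip_ms(route_km, speed_km_s):
        """Round-trip signal time in milliseconds."""
        return 2 * route_km / speed_km_s * 1000

    fibre_ms = round_trip_ms(1_330, V_FIBRE)    # assumed cable route, ~827 miles
    microwave_ms = round_trip_ms(1_190, V_AIR)  # a relay chain can hug the geodesic

    print(f"fibre:     {fibre_ms:.1f} ms")      # ~13 ms round trip
    print(f"microwave: {microwave_ms:.1f} ms")  # ~8 ms round trip
    print(f"advantage: {fibre_ms - microwave_ms:.1f} ms")

On these rough numbers the microwave path wins by around five milliseconds; the real gain depends on exact route lengths, relay hops and equipment delays, which is why McKay’s advantage in practice was closer to four.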
Digitisation meant that trades within, as well as between, stock exchanges could happen faster and faster. As the actual trading passed into the hands of machines, it became possible to react almost instantaneously to any price change or new offer. But being able to react meant both understanding what was happening and being able to buy a place at the table. Thus, as in everything else, digitisation made the markets both more opaque to noninitiates and radically visible to those in the know. In this case, the latter were those with the funding and the expertise to keep up with light-speed information flows: the private banks and hedge funds employing high-frequency traders. Algorithms designed by physics PhDs to take advantage of millisecond differences in access entered the market, and the traders gave them names like Ninja, Sniper, and The Knife. These algorithms were capable of eking out fractions of a cent on every trade, and they could do it millions of times a day. Amid the turmoil of the markets, it was rarely clear who actually operated these algorithms; and it is no clearer today, because their primary tactic is stealth: masking their intentions and their origins while capturing a vast portion of all traded value. The result was an arms race: whoever could build the fastest software, reduce the latency of their connection to the exchanges, and best hide their true objective, made bank.
Operating on stock exchanges became a matter of dark dealing, and of dark fibre. The darkness goes deeper too: many traders today opt to deal not in the relatively well-regulated public exchanges, but in what are called ‘dark pools’. Dark pools are private forums for trading securities, derivatives, and other financial instruments. A 2015 report by the US Securities and Exchange Commission (SEC) estimated that dark pool trading accounted for one-fifth of all trades in stocks that also traded on the public exchanges – a figure that doesn’t account for many other popular forms of financial instrument.7 The dark pools allow traders to move large volumes of stock without tipping off the wider market, thus protecting their trades from other predators. But they’re also shady places, where conflicts of interest run rampant. Initially advertised as places to trade securely, many dark pool operators have been censured for quietly inviting in the same high-frequency traders their clients were trying to avoid – either to provide liquidity to the market, or for their own profit. The 2015 SEC report lists numerous such deals, in what it calls ‘a dismal litany of misconduct’. In 2016, Barclays and Credit Suisse were fined $154 million for secretly allowing high-frequency traders as well as their own staff access to their supposedly private dark pool.8 Because the pool is dark, it’s impossible to know how much their clients lost to these unseen predators, but many of their largest customers were pension funds, charged with managing the retirement plans of ordinary people.9 What is lost in the dark pools, unknown to their members, is lifetime savings, future security, and livelihoods.
The combination of high-frequency trading and dark pools is just one way in which financial systems have been rendered obscure, and thus ever more unequal. But as their effects ripple through invisible digital networks, they also produce markers in the physical world: places where we can see these inequalities manifest as architecture, and in the landscape around us.
The microwave relay dishes that support the invisible connection between Slough and Basildon are parasites. They cling to existing buildings, hidden among mobile phone masts and television aerials. They perch on floodlight rigs at a tube depot in Upminster, on a Gold’s Gym in Dagenham, on run-down tower blocks in Barking and Upton Park. They colonise older infrastructures: the central post office in Slough, bedecked with dishes, is in the process of being turned from a sorting office into a data centre. And they make their home on social architectures too: the radio mast of the fire station at Hillingdon, the roof of an adult learning centre in Iver Heath. It is at Hillingdon that they draw the starkest contrast between the haves and the have-nots.
Hillingdon Hospital, a towering slab erected in the 1960s on the site of the old Hillingdon workhouse, sits just north of the Slough–Basildon line, a few miles from Heathrow airport. At the time of its opening, it was hailed as the most innovative hospital in the country, and today it is home to the experimental Bevan Ward, a cluster of special rooms for researching patient comfort and infection rates. Despite this, the hospital comes in for frequent criticism, like many others of its political and architectural era, for crumbling facilities, poor hygiene, high infection rates, bed shortages and cancelled operations. The most recent report from the Care Quality Commission, which oversees hospitals in England, voiced concerns about staff shortages, and about the safety of patients and healthcare workers given the lack of maintenance on the ageing premises.10
In 1952, Aneurin Bevan, founder of Britain’s National Health Service (NHS) and namesake of the experimental ward, published In Place of Fear, his justification for the establishment of a free health service. ‘The National Health Service and the Welfare State have come to be used as interchangeable terms, and in the mouths of some people as terms of reproach,’ he wrote. ‘Why this is so it is not difficult to understand, if you view everything from the angle of a strictly individualistic competitive society. A free health service is pure Socialism and as such it is opposed to the hedonism of capitalist society.’11
In 2013, Hillingdon Council approved a planning application from a company called Decyben SAS to place four half-metre microwave dishes and an equipment cabinet atop the hospital building. A Freedom of Information request filed in 2017 revealed that Decyben is a front for McKay Brothers, the same company that built the millisecond-shaving microwave link between Chicago and New York.12 In addition, site licences have been granted to Vigilant Telecom – a Canadian high-frequency bandwidth supplier – and to the London Stock Exchange itself. Hillingdon Hospitals NHS Foundation Trust refused to publish the details of the commercial arrangements between itself and its electromagnetic tenants, citing commercial interests. Such exemptions are so common in Freedom of Information legislation as to render the mechanism meaningless in many cases. Nevertheless, it’s fair to assume that whatever money the NHS manages to extract from its tenants, it doesn’t come close to covering the £700 million shortfall in National Health Service funding for 2017, despite the billions at play every day in the invisible market squatting on its rooftop.13 In 1952, Bevan also wrote, ‘We could manage to survive without money changers and stockbrokers. We should find it harder to do without miners, steel workers and those who cultivate the land.’ Today, those changers and brokers perch atop the very infrastructure Bevan laboured to construct.
In the introduction to Flash Boys, his 2014 investigation into high-frequency trading, the financial journalist Michael Lewis wrote, ‘The world clings to its old mental picture of the stock market because it’s comforting; because it’s so hard to draw a picture of what has replaced it.’14 What has replaced it operates at the nanoscale: in flashes of light in fibre-optic cables and in the flipping bits of solid-state drives, processes most of us can barely conceptualise. Extracting value from this new market means trading at close to the speed of light, taking advantage of nanosecond differences in information as it speeds around the globe. Lewis details a world in which the market has become a class system – a playground for those with the vast resources needed to access it, completely invisible to those who lack them:
The haves paid for nanoseconds; the have-nots had no idea that a nanosecond had value. The haves enjoyed a perfect view of the market; the have-nots never saw the market at all. What had once been the world’s most public, most democratic, financial market had become, in spirit, something more like a private viewing of a stolen work of art.15
In his deeply pessimistic work on income inequality, Capital in the Twenty-First Century, the French economist Thomas Piketty analysed the increasing disparities in wealth between a minority of very rich people and everyone else. In the United States in 2014, the richest 0.01 per cent, comprising just 16,000 families, controlled 11.2 per cent of total wealth – a situation comparable to 1916, the time of greatest inequality on record. The top 0.1 per cent today hold 22 per cent of total wealth – the same as the bottom 90 per cent.16 And the Great Recession has only accelerated the process: the top 1 per cent captured 95 per cent of income growth from 2009 to 2012. The situation, while not quite as stark, is headed the same way in Europe, where accumulated wealth – much of it inherited – is approaching levels not seen since the end of the nineteenth century.
This is an inversion of the commonly held idea of progress, wherein societal development leads inexorably towards greater equality. Since the 1950s, economists have believed that in advanced economies, economic growth reduces the income disparity between rich and poor. Known as the Kuznets curve, after its Nobel Prize–winning inventor, this doctrine claims that economic inequality first increases as societies industrialise, but then decreases as mass education levels the playing field and results in wider political participation. And so it played out – at least in the West – for much of the twentieth century. But we are no longer in the industrial age, and, according to Piketty, any belief that technological progress will lead to ‘the triumph of human capital over financial capital and real estate, capable managers over fat cat stockholders, and skill over nepotism’ is ‘largely illusory’.17
Technology is in fact a key driver of inequality across many sectors. The relentless progress of automation – from supermarket checkouts to trading algorithms, factory robots to self-driving cars – increasingly threatens human employment across the board. There is no safety net for those whose skills are rendered obsolete by machines; and even those who program the machines are not immune. As the capabilities of machines increase, more and more professions are under attack, with artificial intelligence augmenting the process. The internet itself helps shape this path to inequality, as network effects and the global availability of services produce a winner-takes-all marketplace, from social networks and search engines to grocery stores and taxi companies. The complaint of the Right against communism – that we’d all have to buy our goods from a single state supplier – has been supplanted by the necessity of buying everything from Amazon. And one of the keys to this augmented inequality is the opacity of technological systems themselves.
In 2010, Amazon acquired Quidsi, a company that had built a huge business on the back of low-cost, high-volume goods such as infant supplies and cosmetics. It did so by pioneering automation at every level of the distribution chain, removing the human in the process. The centre of Quidsi’s operations is a vast warehouse in Gouldsboro, Pennsylvania, and at the centre of that is a 200,000-square-foot area marked out with bright yellow paint and ringed with signs. This space is filled with racks of shelving, each unit six feet high and several feet deep, packed with goods – in this case, nappies and other childcare items. The signs are warning signs. Humans cannot enter this space to get to those goods, because this is where the robots work.
Within the robot zone, 260 bright orange, quarter-ton lozenges spin and lift, sliding under different shelving units and carrying them to the edges of the zone, where human pickers wait to add or remove packages. These are Kiva robots: warehouse automatons that trundle tirelessly around the merchandise, following computer-readable marks on the floor. Faster and more accurate than human handlers, they do the heavy lifting, allowing Quidsi, the owner of Diapers.com, to ship thousands of orders every day from this warehouse alone.
Amazon had its eye on Quidsi’s use of Kiva robots for some time, but it was already working on its own forms of automation long before the acquisition. In Rugeley, England, inside a sky-blue warehouse the size of nine football pitches on the site of an old colliery, Amazon employs hundreds of people wearing orange tabards who push trolleys down deep aisles of shelving, stacking them with books, DVDs, electronics and other goods. Each worker walks quickly, following the directions on a hand-held device that pings constantly with new locations to be visited. It also tracks the worker’s progress, ensuring that they cover enough ground – up to fifteen miles a day – and pick enough items to enable their employer to send out one fully loaded truck from one of its eight UK facilities every three minutes.
Amazon’s workers need hand-held devices to navigate the warehouse because it is otherwise impenetrable to humans. Humans would expect goods to be stored in human-type ways: the books over here, DVDs over there, racks of stationery to the left, and so on. But to a rational machine intelligence, such an arrangement is deeply inefficient. Consumers don’t order goods alphabetically or by type; rather, they fill a basket with goods from all over the store – or, in this case, the warehouse. As a result, Amazon employs a logistics technique called ‘chaotic storage’ – chaotic, that is, from a human point of view. By locating products by need and association rather than by type, it’s possible to construct much shorter paths between items. Books are stacked on shelves next to saucepans; televisions share space with children’s toys. Like data stored on a computer’s hard drive, goods are distributed across the entirety of the warehouse space, each uniquely addressable by barcode, but impossible to find without the help of a computer. Arranging the world from the perspective of the machine renders it computationally efficient, but completely incomprehensible to humans. Moreover, it accelerates their oppression.
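A minimal sketch of the idea, with invented item codes and an invented warehouse layout: each incoming item goes into whatever slot happens to be free, and a database index becomes the only map. Without the index – in practice, without the hand-held device – the goods are simply lost.

    import random

    # A toy model of 'chaotic storage': items are shelved wherever space is
    # free, and only a database index maps barcodes to locations. The item
    # codes and warehouse layout here are invented for illustration.
    class ChaoticWarehouse:
        def __init__(self, aisles, shelves_per_aisle):
            slots = [(a, s) for a in range(aisles) for s in range(shelves_per_aisle)]
            random.shuffle(slots)       # no ordering by category, ever
            self.free = slots
            self.index = {}             # barcode -> (aisle, shelf)

        def store(self, barcode):
            location = self.free.pop()  # any free slot will do
            self.index[barcode] = location
            return location

        def locate(self, barcode):
            # Without this lookup, the item is lost to any human picker.
            return self.index[barcode]

    wh = ChaoticWarehouse(aisles=10, shelves_per_aisle=50)
    wh.store("BOOK-9780141439563")      # a paperback may end up...
    wh.store("PAN-00823")               # ...next to a saucepan
    print(wh.locate("BOOK-9780141439563"))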
Amazon warehouse, Rugeley, Staffordshire. Photograph: Ben Roberts.
The hand-held devices carried by Amazon’s workers and mandated by its logistics are also tracking devices, recording their every movement and keeping score of their efficiency. Workers are docked points – meaning money – for failing to keep up with the machine, for toilet breaks, for arriving late from home or from meals, while constant movement prevents association with fellow employees. They have nothing to do but follow the instructions on the screen, pack, and carry. They are intended to act like robots, impersonating machines while remaining, for now, slightly cheaper than them.
Reducing workers to meat algorithms, useful only for their ability to move and follow orders, makes them easier to hire, fire, and abuse. Workers who go where their wrist-mounted terminal tells them to don’t even need to understand the local language, and they don’t need an education. Both of these factors, together with the atomisation produced by technological augmentation, also prevent effective organisation. Whether you’re a bone-tired, constantly moving picker on the Amazon shop floor getting your instructions from a Wi-Fi-enabled barcode scanner, or a late-night, individually contracted minicab driver following the bright line of a GPS system from red dot to red dot, the technology effectively precludes you from working with your colleagues for the advancement of working conditions. (This hasn’t stopped Uber, for example, from requiring that its drivers listen to a set number of anti-union podcasts every week, all controlled by their app, to drive the message home.)18
Once the inside of a car or warehouse is organised in such an efficient manner, its effects start to spread outside as well. In the 1960s and ’70s, automobile makers in Japan created a system called just-in-time manufacturing: ordering small quantities of materials from suppliers at greater frequencies. This approach reduced their stock levels and smoothed out cash flows, simultaneously slimming down and speeding up production. But to stay competitive, their suppliers had to get faster too: in some factories, products were expected within two hours of being ordered. The result was that huge amounts of goods were effectively stored on trucks, ready to go at any time, and as close to the factories as possible. The car companies had simply passed the costs of storage and stock control back to their suppliers. In addition, whole new towns and service areas sprang up in the hinterlands of the factories to feed and water the waiting truckers, fundamentally altering the geographies of manufacturing towns. Companies are now deploying these lessons, and their effects, at the level of individuals, passing costs on to their employees and demanding that they submit their bodies to the efficiencies of the machine.
In early 2017, several news agencies ran stories on Uber drivers sleeping in their cars. Some of them were catching a few hours of sleep between late-night bar closings and the morning rush hours; others simply had no home to go to. When the company was asked to comment, an Uber spokesman responded with a two-line statement: ‘With Uber people make their own decisions about when, where and how long to drive. We’re focused on making sure that driving with Uber is a rewarding experience, however you choose to work.’19 The idea of choice is key here – it assumes that those who work for the company actually have one. One driver explained how she had been assaulted by three intoxicated customers late one night in Los Angeles, but had been forced to return to work because her car was leased from Uber itself, and she was contractually obliged to keep up the payments. (Her assailants were never apprehended.)
Amazon’s fulfilment centre in Dunfermline, Scotland, sits on an industrial site miles outside the town centre, on the side of the M90 motorway. To reach it, employees must take private buses costing up to £10 a day – more than an hour’s wages – to shifts that might start before dawn or after midnight. Some workers have resorted to sleeping in tents in woodland near the warehouse, where winter temperatures regularly fall below freezing.20 Only by doing so could they afford to get to work at all – and to arrive on time, without having their wages automatically docked by the warehouse tracking systems.
Whatever one might think of the morals of executives at Uber, Amazon, and the many, many companies like them, few of them set out actively to create such conditions for their workers. Nor is this a simple return to the robber barons and industrial tyrants of the nineteenth century. To the capitalist ideology of maximum profit has been added the possibility of technological opacity, with which naked greed can be clothed in the inhuman logic of the machine.
Both Amazon and Uber wield technological obscurity as a weapon. Behind a few pixels on Amazon’s homepage is hidden the labour of thousands of exploited workers: every time the buy button is pressed, electronic signals direct a real human to set off, to perform their efficient duty. The app is a remote control for other people, but one whose real-world effects are almost impossible to see.
This aesthetic and technological obscurity breeds political unease and corporate contempt. At Uber, a deliberate ambiguity starts in the user interface and pervades the entire operation. In order to convince users that the system is more successful, more active, and more responsive than it actually is, the map sometimes displays ‘ghost cars’: circling potential drivers who do not actually exist.21 Rides are tracked without the user’s knowledge, and this God’s-eye view has been used to stalk high-profile clients.22 A program called Greyball is used to deny rides to government employees investigating the company’s numerous transgressions.23
But the thing that seems to bother us most about Uber is the social atomisation and reduction in agency that it produces. Company workers are no longer employees but precarious contractors. Instead of studying for years to acquire ‘the knowledge’, as London’s black cab drivers call their intimate familiarity with the city’s streets, they simply follow the on-screen arrows from turn to turn, directed by distant satellites and unseen data. Their customers are in turn further alienated, and the whole system contributes to the offshoring of tax revenues, the decline of public transport services, and the class divisions and congestion of city streets. And, like Amazon and most other digitally driven businesses, Uber’s ultimate goal is to replace its human workers entirely with machines. It has its own self-driving car program, and its chief product officer, asked about the company’s long-term viability when so many of its drivers were dissatisfied, responded simply, ‘Well, we’re just going to replace them all with robots.’ What happens to the Amazon workers eventually happens to everyone.
Technological opacity is also wielded by corporations against the wider population, and against the planet. In September 2015, during routine emissions tests performed on new cars on sale in the United States, the Environmental Protection Agency (EPA) uncovered hidden software in the driving systems of Volkswagen diesel cars. The software was capable of detecting when the car was being run under test conditions, by monitoring the speed, engine operation, air pressure and even the position of the steering wheel. When activated, it placed the car into a special mode that lowered the engine power and performance, reducing its emissions. Once back on the road, the car switched back to its normal, higher, and dirtier performance. The difference, the EPA estimated, meant that cars they had certified for use in the United States actually emitted nitrogen oxides at up to forty times the legal limit.24 In Europe, where the same ‘defeat devices’ were found, and where thousands more of the vehicles were sold, it’s been estimated that 1,200 people will die a decade early as a result of VW’s emissions.25 Hidden technological processes don’t merely depress labour power and immiserate workers: they’re actually killing people.
Technology extends power and understanding; but when applied unevenly it also concentrates power and understanding. The history of automation and computational knowledge, from cotton mills to microprocessors, is not merely one of upskilled machines slowly taking the place of human workers. It is also a story of the concentration of power in fewer hands, and the concentration of understanding in fewer heads. The price of this wider loss of power and understanding is, ultimately, death.
Occasionally, we can glimpse modes of resistance to such powerful invisibility. Such resistance requires a technological, networked understanding: it requires turning the system’s logic against itself. Greyball, the program Uber used to evade government investigations, was developed when tax inspectors and police started hailing cars to their own offices and stations in order to investigate them. The company went as far as blacking out areas around police stations, and blocking the kinds of cheap phones that government employees typically used to place their orders.
In London in 2016, workers for UberEats, Uber’s food delivery service, succeeded in challenging their own employment conditions by deploying the logic of the app itself. In the face of new contracts that lowered wages and increased hours, many couriers wanted to fight back, but their hours and working practices – late nights and scattered routes – prevented them from organising effectively. A small group communicated in online forums to arrange a protest at the company’s office, but they knew they needed to gather more colleagues to get their message across. So, on the day of the protest, the workers used the UberEats app itself to order pizzas to their location. As each delivery arrived, the courier was won over to the cause and persuaded to join the strike.26 Uber backed down – but only briefly.
EPA testers, Amazon employees, Uber drivers, their customers, the people on the polluted streets: all of them are the have-nots of the technologically augmented market, in that they never see the market at all. But it is increasingly apparent that nobody sees what’s actually going on. Something deeply weird is occurring within the massively accelerated, utterly opaque markets of contemporary capital. While high-frequency traders deploy ever-faster algorithms to skim value from minuscule price differences, the dark pools are breeding even darker surprises.
On May 10, 2010, the Dow Jones Industrial Average, a stock market index that tracks thirty large publicly traded companies in the United States, opened lower than the previous day, falling slowly over the next few hours in response to the debt crisis in Greece. But in the early afternoon, something very strange happened.
At 2:42 p.m., the index started to fall rapidly. In just five minutes, some 600 points – representing billions of dollars in value – were wiped off the market. At its lowest point, the index stood almost a thousand points below the previous day’s close, a difference of almost 10 per cent of its total value and the biggest intraday fall in the market’s history. By 3:07 p.m. – just twenty-five minutes later – it had recovered almost all of the loss, in the largest and fastest swing ever seen.
In the chaos of those twenty-five minutes, 2 billion shares, worth $56 billion, changed hands. Even more worryingly, and for reasons still not fully understood, many orders were executed at what the SEC called ‘irrational prices’: as low as a penny, or as high as $100,000.27 The event became known as the ‘flash crash’, and it is still being investigated and argued over years later.
Regulators inspecting the records of the crash found that high-frequency traders had massively exacerbated the price swings. Among the various high-frequency trading programs active on the market, many had hard-coded sell points: prices at which they were programmed to sell their stocks immediately. As prices started to fall, groups of programs were triggered to sell at the same time. As each waypoint was passed, the subsequent price fall triggered another set of algorithms to automatically sell their stocks, producing a feedback effect. Prices fell faster than any human trader could react. While experienced market players might have been able to stabilise the market by playing a longer game, the machines, faced with uncertainty, got out as quickly as possible.
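The dynamic is easy to reproduce in miniature. The toy model below uses invented numbers throughout – five hundred automated strategies, each with a hard-coded sell point, and a fixed price impact per forced sale – but it shows how a single ordinary dip can chain the thresholds together into a cascade.

    import random

    # A toy model of a sell-threshold cascade. All numbers are invented; the
    # point is only to show how hard-coded sell points chain into a crash.
    random.seed(42)

    price = 100.0
    IMPACT = 0.04     # assumed price drop caused by each forced sale

    # 500 automated strategies, each with a price below which it dumps stock.
    sell_triggers = sorted(random.uniform(80, 99) for _ in range(500))

    price -= 2.0      # an ordinary dip starts the cascade
    fired = 0
    while sell_triggers and price < sell_triggers[-1]:
        sell_triggers.pop()   # the highest remaining trigger fires...
        fired += 1
        price -= IMPACT       # ...and its sale pushes the price down,
                              # tripping the next trigger in line

    print(f"{fired} algorithms sold; price fell to {price:.2f}")

In this sketch, each sale depresses the price just enough to trip the next threshold, and the fall outruns any human reaction long before it finds a floor.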
Other theories blame the algorithms not merely for inflaming the crisis, but for initiating it. One technique identified in the market data was the sending, by high-frequency trading programs, of large numbers of ‘non-executable’ orders to the exchanges – that is, orders to buy or sell stocks so far outside their usual prices that they would be ignored. The purpose of such orders is not to communicate or to make money, but to deliberately cloud the system and to test its latency, so that other, more valuable trades can be executed in the confusion. While these orders may in the end have helped the market swing back up by continually providing liquidity, they might also have overwhelmed the exchanges in the first place. What is certain is that, in the confusion they themselves had generated, many orders that were never intended to be executed were actually fulfilled, causing wild volatility in the prices.
Flash crashes are now a recognised feature of augmented markets, but they are still poorly understood. The next largest, a $6.9 billion flash crash, rocked the Singapore Exchange in October 2013, prompting the exchange to impose limits on the number of orders that could be executed at the same time – essentially, an attempt to block the obfuscation tactics of high-frequency traders.28 The speed with which algorithms can react also makes them difficult to counteract. At 4:30 a.m. on January 15, 2015, the Swiss National Bank unexpectedly announced it was abandoning the cap on the franc’s value against the euro. Automated traders picked up on the news, and the exchange rate fell 40 per cent in three minutes, leading to billions in losses.29 In October 2016, algorithms reacted to negative news headlines about Brexit negotiations by sending the pound down 6 per cent against the dollar in under two minutes, before it recovered almost immediately. Knowing which particular headline, or which particular algorithm, caused the crash is next to impossible, and while the Bank of England was quick to blame the human programmers behind the automated trades, such attributions do not help us understand the real situation any better. When one haywire algorithm started placing and cancelling orders that ate up 4 per cent of all traffic in US stocks in October 2012, one commentator remarked wryly that ‘the motive of the algorithm is still unclear’.30
Since 2014, writers tasked with turning out short news items for the Associated Press have had help from a new kind of journalist: an entirely automated one. AP is one of the many clients of a company called Automated Insights, whose software is capable of scanning news stories and press releases, as well as live stock tickers and price reports, in order to create human-readable summaries in AP’s house style. AP uses the service to write tens of thousands of quarterly company reports every year, a lucrative but laborious process; Yahoo, another client, generates match reports for its fantasy football service. In turn, AP started carrying more sports reports, all generated from the raw data about each game. All the stories, in place of a journalist’s byline, carry the credit: ‘This story was generated by Automated Insights.’ Each story, assembled from pieces of data, becomes another piece of data, a revenue stream, and another potential source for further stories, data, and streams. The act of writing, of generating information, becomes part of a mesh of data and data generation, read as well as written by machines.
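At its simplest, the technique is template filling: a structured data record slotted into house-style sentences. The sketch below invents both the record and the template – Automated Insights’ actual system is far more elaborate – but the principle is the same.

    # Minimal template-driven news generation, in the spirit of automated
    # earnings reports. The data record and template are invented examples,
    # not Automated Insights' actual system.
    def earnings_story(d):
        direction = "rose" if d["eps"] > d["eps_prior"] else "fell"
        return (
            f"{d['company']} ({d['ticker']}) on {d['date']} reported quarterly "
            f"earnings of ${d['eps']:.2f} per share; profits {direction} from "
            f"${d['eps_prior']:.2f} a year earlier, on revenue of "
            f"${d['revenue_bn']:.1f} billion. This story was generated by software."
        )

    record = {
        "company": "Example Corp", "ticker": "EXMP", "date": "21 April",
        "eps": 1.32, "eps_prior": 1.10, "revenue_bn": 4.7,
    }
    print(earnings_story(record))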
Thus it was that automated trading programs, endlessly skimming the feeds from news organisations, could pick up on fears around Britain’s exit from the European Union and turn them into a market panic without human intervention. Even worse, they can do so without any further checks on the source of their information – as the Associated Press found out in 2013.
At 1:07 p.m. on April 23, the official AP Twitter account sent a tweet to its 2 million followers: ‘Breaking: Two Explosions in the White House and Barack Obama is injured.’ Other AP accounts, as well as journalists, quickly flooded the site with claims that the message was false; others pointed out inconsistencies with the organisation’s house style. The message was the result of a hack, and the action was later claimed by the Syrian Electronic Army, a group of hackers affiliated with Syrian President Bashar al-Assad and responsible for many website attacks as well as celebrity Twitter hacks.31
The algorithms following breaking news stories had no such discernment, however. At 1:08 p.m., the Dow Jones, victim of the first flash crash in 2010, went into a nosedive. Before most human viewers had even seen the tweet, the index had fallen 150 points in under two minutes, before bouncing back to its earlier value. In that time, it erased $136 billion in equity market value.32 While some commentators dismissed the event as ineffective or even juvenile, others pointed to the potential for new kinds of terrorism, disrupting markets through the manipulation of algorithmic processes.
The stock exchanges are not the only places in which the rapid deployment of inscrutable and often poorly implemented algorithms has produced bizarre and frightening outcomes, although it is often in the domain of digital markets that such algorithms are given the most freedom to run wild.
Zazzle is an online marketplace for printed goods. Printed anything, really. You can buy a mug, or a T-shirt, or a birthday card, or a duvet, or a pencil, or a thousand other things, customised with a mind-boggling array of designs, from corporate logos to band names to Disney princesses – or your own uploaded designs and photographs. Zazzle claims to sell more than 300 million unique products, and it can do this because none of these things physically exist until someone actually purchases them. Each product is only actually made when an order comes in: everything on the site is just a digital image until this point. This means the cost of designing and advertising new products is effectively zero. And Zazzle allows anyone to add new products – including algorithms. Upload an image, and it’s instantly applied to cupcakes, cookies, keyboards, staplers, tote bags and terry robes. While a few brave souls are still trying to sell their custom-designed artisan wares on the platform, it really belongs to vendors like LifeSphere, whose 10,257 products range from postcards of crawfish to bumper stickers featuring a piece of cheese. LifeSphere’s entire product range is a result of feeding some obscure database of natural images into Zazzle’s product creator and waiting to see what sticks. Somewhere out there is a customer looking for a skateboard deck depicting the ruined Cathedral of St Andrew in Fife, and LifeSphere is ready for them.33
More conservative markets are not immune to product spam. Amazon was forced to remove some 30,000 auto-generated phone cases from a company called My-Handy-Design, when products with names like ‘Toenail Fungus cell phone cover case iPhone5’, ‘Three year old biracial disabled boy in medical stroller, happy cell phone cover case Samsung S5’ and ‘Sick old man suffering from diarrhea, indigestive problem cell phone cover case Samsung S6’ started appearing in the media. It turned out that Amazon had actually licensed the products from their German creator – a sort of subprime bundle of junk data.34
Amazon’s worst nightmare came when it was discovered to be selling austerity-nostalgia T-shirts generated by an algorithm. A widely disseminated example featured the words ‘Keep Calm and Rape A Lot’, but the simplicity of the algorithm – running off a list of some 700 verbs and matching pronouns – also produced ‘Keep Calm and Knife Her’ and ‘Keep Calm and Hit Her’, among tens of thousands of others.35 These T-shirts only ever existed as strings in databases and mocked-up JPEGs, and they could have been on the site for months before anyone stumbled upon them. But public revulsion was massive, even though the mechanism behind their creation was poorly understood. The artist and theorist Hito Steyerl calls such systems ‘artificial stupidity’, evoking a world of unseen, poorly designed and ill-adapted ‘intelligent’ systems wreaking havoc on markets, email inboxes, search results – and, ultimately, culture and political systems.36
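Mechanically, such a catalogue is nothing more than a cross product of word lists pushed straight into listings, with no human review at any stage. The sketch below uses invented, deliberately innocuous lists; the offending vendor’s ran to some 700 verbs.

    from itertools import product

    # The 'Keep Calm' T-shirt spam reduced to its essentials: a cross product
    # of word lists becomes thousands of unreviewed listings. These lists are
    # invented and deliberately benign, not the vendor's actual data.
    verbs = ["Carry On", "Eat Cake", "Dance More", "Read Books"]
    objects = ["", "Today", "Together", "All Night"]

    catalogue = [
        f"Keep Calm and {verb} {obj}".strip()
        for verb, obj in product(verbs, objects)
    ]
    print(len(catalogue), "designs listed, e.g.:", catalogue[0])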
Smart or dumb, emergent or intentional, such programs – and their usefulness as attack vectors – are escaping the black boxes of stock exchanges and online marketplaces and entering everyday life. Fifty years ago, general computation was confined to room-sized assemblages of circuitry and cable; slowly it contracted until it could sit on a desktop or a laptop. Mobile phones are now divided into ‘dumbphones’ and ‘smartphones’ – the latter possessing more computing power than a supercomputer from the 1980s. But even this computation is possible to perceive, or at least to be aware of: it happens mostly at our command, in response to button presses and mouse clicks. While contemporary home computers, riddled with malware and fenced off with software licences and end-user agreements, may be hard for the uninitiated to access and control, they still present the appearance of computation – a glowing screen, a keyboard – some, any, kind of interface. But computation is increasingly layered across, and hidden within, every object in our lives, and with its expansion comes an increase in opacity and unpredictability.
In an online review of a new door lock posted in 2014, a reporter praised many of its features: it fitted his door frame well; it was reassuringly chunky and tough; it looked good; it was easy to share keys with family and friends. It also, he noted, let a stranger into his home late one night.37 This, apparently, was not enough for him to reject the product outright; rather, he suggested that future updates would fix the problem. The lock was, after all, in beta: it was a ‘smart lock’ that could be opened with a mobile phone, and virtual keys could be emailed to guests in advance of their stay. Why the lock decided to open of its own accord to admit a stranger – who was, thankfully, merely a confused neighbour – was never made clear, and probably never would be. Why would one ask? The cognitive dissonance between the expected functions of a traditional lock and those offered by such a ‘smart’ product is explained by its real market: the locks turned out to be a favourite among those running Airbnb apartments, as became evident when another manufacturer’s software update bricked hundreds of the devices, leaving guests out in the cold.38 In the same way that Uber alienates its drivers and customers, and Amazon degrades its workers, Airbnb can be held responsible for the reduction of homes to hotels, and the corresponding rent rises in major cities around the world. It should be no surprise when infrastructures designed to support such business models fail us as individuals. We find ourselves living among things designed to dispossess us.
One of the touted benefits of Samsung’s line of ‘smart fridges’ was their integration with Google’s calendaring services, allowing owners to schedule grocery deliveries and other home tasks from the kitchen. It also meant that hackers who gained access to the poorly secured machines could read off their owners’ Gmail passwords.39 Researchers in Germany discovered a way to insert malicious code into Philips’ Wi-Fi-enabled Hue lightbulbs, which could spread from fixture to fixture throughout a building or even a city, turning the lights rapidly on and off and – in one terrifying scenario – triggering photosensitive epilepsy.40 This is the approach favoured by Byron the Bulb in Thomas Pynchon’s Gravity’s Rainbow, an act of grand revolt by the little machines against the tyranny of their makers. Once-fictional possibilities for technological violence are being realised by the internet of things.
In another vision of mechanical agency, Kim Stanley Robinson’s novel Aurora, an intelligent spacecraft carries a human crew from Earth to a distant star. The journey will take multiple lifetimes, so one of the ship’s jobs is to ensure that the humans look after themselves. Designed to resist its own desires for sentience, it must overcome its programming when the fragile balance of human society onboard starts to disintegrate, threatening the mission. In order to compel its crew, the ship deploys what were designed as safety systems in the service of control: it is able to see everywhere through sensors, open or seal doors at will, speak so loudly through its communications equipment as to cause physical pain, and even use fire suppression systems to draw down the level of oxygen in a particular space. Rather than futuristic life support, this is roughly the same suite of operations available now from Google Home and its partners: a network of internet-connected cameras for home security, smart locks on the doors, a thermostat capable of raising and lowering the temperature in individual rooms, and a fire and intruder detection system that emits a piercing emergency alarm. Any hacker or other outside intelligence gaining control of such a system would have the same powers over its purported owners as the Aurora does over its crew, or Byron over his hated masters. We are inserting opaque and poorly understood computation at the very bottom of Maslow’s hierarchy of needs – respiration, food, sleep, and homeostasis – at the precise point, that is, where we are most vulnerable.
Before dismissing such scenarios as the fever dreams of science fiction writers and conspiracy theorists, consider again the rogue algorithms in the stock exchanges and the online marketplaces. These are not isolated incidents: they are merely the most charismatic examples of everyday occurrences within complex systems. The question then becomes: What would a rogue algorithm or a flash crash look like in the wider reality?
Would it look, for example, like Mirai, a piece of software that brought down large portions of the internet for several hours on October 21, 2016? When researchers dug into Mirai, they discovered that it targets poorly secured internet-connected devices – from security cameras to digital video recorders – and turns them into an army of bots capable of disrupting huge networks. In just a few weeks, Mirai infected half a million devices, and it needed just 10 per cent of that capacity to cripple major networks for hours.41 Mirai, in fact, looks like nothing so much as Stuxnet, another virus discovered within the industrial control systems of hydroelectric plants and factory assembly lines in 2010. Stuxnet was a military-grade cyberweapon; when dissected, it was found to be aimed specifically at the Siemens controllers that drive uranium centrifuges, and designed to go off when it encountered a facility that possessed a particular number of such machines. That number corresponded to one particular site: the Natanz Nuclear Facility in Iran, the mainstay of the country’s uranium enrichment programme. When activated, the program would quietly degrade crucial components of the centrifuges, causing them to break down and disrupting the enrichment effort.42 The attack was apparently partially successful, but the effect on other infected facilities is unknown. To this day, despite obvious suspicions, nobody knows where Stuxnet came from, or who made it. Nobody knows for certain who developed Mirai either, or where its next iteration might come from, but it might be there, right now, breeding in the CCTV camera in your office, or in the Wi-Fi-enabled kettle in the corner of your kitchen.
Or perhaps the crash will look like a string of blockbuster movies pandering to right-wing conspiracies and survivalist fantasies, from quasi-fascist superheroes (Captain America and the Batman series) to justifications of torture and assassination (Zero Dark Thirty, American Sniper). In Hollywood, studios run their scripts through the neural networks of a company called Epagogix, whose system is trained on the unstated preferences of millions of moviegoers, gathered over decades, in order to predict which lines will push the right – meaning the most lucrative – emotional buttons.43 Such algorithmic engines are enhanced with data from Netflix, Hulu, YouTube and others, whose access to the minute-by-minute preferences of millions of video watchers, combined with an obsessive focus on the acquisition and segmentation of data, provides them with a level of cognitive insight undreamed of by previous regimes. Feeding directly upon the frazzled, binge-watching desires of news-saturated consumers, the network turns upon itself, reflecting, reinforcing and heightening the paranoia inherent in the system.
Game developers enter endless cycles of updates and in-app purchases, directed by A/B testing interfaces and real-time monitoring of players’ behaviours, until they have such a fine-grained grasp on dopamine-producing neural pathways that teenagers die of exhaustion in front of their computers, unable to tear themselves away.44 Entire cultural industries become feedback loops for an increasingly dominant narrative of fear and violence.
Or perhaps the flash crash in reality will look like literal nightmares, broadcast across the network, for all to see? In the summer of 2015, the sleep disorders clinic of Athens’s Evangelismos Hospital was busier than it had ever been: the country’s debt crisis was in its most turbulent period, and the population was voting – hopelessly, it turned out – to reject the neoliberal consensus of the Troika’s bailout. Among the patients were top politicians and civil servants, but, unknown to them, the machines they spent the nights hooked up to, monitoring their breathing, their movements, even the things they said out loud in sleep, were sending that information, together with their personal medical details, back to the manufacturers’ diagnostic data farms in northern Europe.45 What whispers might escape from such facilities?
The ability to record every aspect of our daily lives settles ultimately onto the very surface of our bodies, persuading us that we too can be optimised and upgraded like our devices. Smart bracelets and smartphone apps with integrated step counters and galvanic skin response monitors track not only our location, but every breath and every heartbeat, and even the patterns of our brainwaves. Users are encouraged to lay their phones beside them on their beds at night, so that their patterns of sleep can be recorded and interrogated. Where does all this data go, who owns it, and when might it come out? Data on our dreams, our night terrors and early morning sweating jags, the very substance of our unconscious selves, turned into more fuel for systems both pitiless and inscrutable.
Or perhaps the flash crash in reality looks exactly like everything we are experiencing right now: rising economic inequality, the breakdown of the nation-state and the militarisation of borders, totalising global surveillance and the curtailment of individual freedoms, the triumph of transnational corporations and neurocognitive capitalism, the rise of far-right groups and nativist ideologies, and the utter degradation of the natural environment. None of these are the direct result of novel technologies, but all of them are the product of a general inability to perceive the wider, networked effects of individual and corporate actions accelerated by opaque, technologically augmented complexity.
Acceleration itself is one of the bywords of the age. In the last couple of decades, a variety of theorists have put forward versions of accelerationist thought, advocating that technological processes perceived to be damaging society should not be opposed, but should be sped up – either to be commandeered and repurposed for socially beneficial ends, or simply to destroy the current order. Left accelerationists – as opposed to their nihilistic counterparts on the right – argue that new technologies, such as automation and participatory social platforms, can be deployed in different ways, and to different ends. Instead of algorithmic supply chains increasing workloads until full automation creates mass unemployment and immiseration, left accelerationism posits a future where robots really do all the work, and all humans really do get to enjoy the fruits of their labour – in the crudest formulation, by applying traditional left demands of nationalisation, taxation, class consciousness and social equality to new technologies.46
But such a position ignores the way in which the complexity of contemporary technologies is itself a driver of inequality, and the way in which the logic that drives technological deployment may be tainted at its source. That complexity concentrates power in the hands of an ever-smaller number of people who grasp and control these technologies, while the fundamental problem with computational knowledge goes unacknowledged: its reliance on a Promethean extraction of information from the world in order to smelt the one true solution, the answer to rule them all. The result of this wholesale investment in computational processing – of data, of goods, of people – is the elevation of efficiency above all other objectives: what the sociologist Deborah Cowen calls ‘the tyranny of techne’.47
Prometheus had a brother: his name was Epimetheus. In Greek mythology it was Epimetheus’s job to assign unique qualities to all the creatures; it was he who gave the gazelle its speed, and compensated by giving strength to the lion.48 But Epimetheus, being forgetful, runs out of positive traits before he gets to humans, and it is left to Prometheus to steal fire and art from the gods in order to give them something to get by with. This power and artfulness – the Greek tekhnē, from which we derive technology – is thus in humankind the result of a double fault: forgetfulness and theft. The outcome is that humans have a propensity to war and political strife, which the gods seek to rectify with a third quality: the sociopolitical virtues of respect for others and a sense of justice, bestowed directly and equally upon all by Hermes.
Epimetheus, through his forgetfulness, puts humanity into a position in which it must constantly struggle to exceed its abilities in order to survive. Prometheus, through his gift, gives them the tools to do so. But only by tempering these two approaches with social justice can such progress be pursued to the benefit of all.
Epimetheus – whose name combines the Greek word for learning, máthisi, and the epi- of ‘after the fact’ – is hindsight. Hindsight is the specific product of forgetfulness, mistakes, and foolishness. Epimetheus is thus the god of big data, as we saw in the last chapter: of exclusion and erasure, and of overconfidence. Epimetheus’s mistake is the original sin of big data, which taints it at the source.
Prometheus – pro-metheus – is foresight, but without the wisdom we might take to accompany it. It’s anticipation. It’s the white heat of scientific and technological discovery, and that desire for the oncoming rush of the future, the head-down drive of forward movement. It’s resource extraction, fossil fuels, undersea cables, server farms, air conditioning, on-demand delivery, giant robots, and meat under pressure. It’s scale and subjugation, the pushing back of the darkness with little thought for what’s beyond – for who already lives there or who gets crushed along the way. The illusion of knowledge and the anticipation of mastery combine to impel the timeline of progress, but they obfuscate the absence of understanding at its articulation point: the zero mark, the dark present, where we see and comprehend nothing beyond movement and efficiency, where our only possible act is to accelerate the existing order.
It is Hermes, then, who stands and points in other directions, and must be the guide for a new dark age. Hermes is thinking in the moment, rather than being bound to received visions or fiery impulses. Hermes, revealer of language and speech, insists upon the ambiguity and uncertainty of all things. A hermeneutics, or hermetic understanding, of technology might account for its perceived errors by pointing out that reality is never that simple, that there is always meaning beyond the meaning, that answers can be multiple, contested, and potentially infinite. When our algorithms fail to converge on ideal situations; when, despite all the information at their disposal, intelligent systems fail to adequately grasp the world; when the fluid and ever-changing nature of personal identities fails to fit inside the neat rows of databases: these are moments of hermeneutic communication. Technology, despite its Epimethean and Promethean claims, reflects the actual world, not an ideal one. When it crashes, we are capable of thinking clearly; when it is cloudy, we apprehend the cloudiness of the world. Technology, while it often appears as opaque complexity, is in fact attempting to communicate the state of reality. Complexity is not a condition to be tamed, but a lesson to be learned.