10
Cloud
In May 2013, Google invited a select group of around 200 guests to the Grove Hotel in Hertfordshire, England, for its annual Zeitgeist conference. Held every year since 2006, and followed by a public ‘big tent’ event in the hotel’s grounds, the two-day gathering is intensely private, with only selected speakers’ videos being released online. Over the years, the conference has featured talks by former US presidents, royalty, and pop stars, and the 2013 guest list included several heads of state and government ministers, CEOs of many of the largest European corporations, and the former chief of the British armed forces, alongside Google directors and motivational speakers. Several of the attendees, including Google’s own CEO Eric Schmidt, would return to the same hotel a month later for the annual and even more secretive Bilderberg Group meeting of the world’s political elite.1 Topics in 2013 included ‘Action This Day’, ‘Our Legacy’, ‘Courage in a Connected World’, and ‘The Pleasure Principle’, with a succession of speeches urging some of the most powerful people in the world to support charity initiatives and seek their own happiness.
Schmidt himself opened the conference with a paean to the emancipatory power of technology. ‘I think we’re missing something,’ he said, ‘maybe because of the way our politics works, maybe because of the way the media works. We’re not optimistic enough … The nature of innovation, the things that are going on both at Google and globally are pretty positive for humankind and we should be much more optimistic about what’s going to happen going forward.’2
In the discussion session that followed, in response to a question that suggested George Orwell’s 1984 as a counterexample to such utopian thinking, Schmidt cited the spread of cell phones – and particularly of cell phone cameras – to illustrate how technology improved the world:
It’s very, very difficult to implement systemic evil now in an Internet age, and I’ll give you an example. We were in Rwanda. Rwanda in 1994 had this terrible … essentially genocide. 750,000 people were killed over a four-month period by machetes, which is a horrific, horrific way to do this. It required planning. People had to write it down. What I think about is in 1994, if everyone had a smartphone it would have been impossible to do that; that people would have actually noticed this was going on. The plans would have been leaked. Somebody would have figured it out and somebody would have reacted to prevent this terrible carnage.3
Schmidt’s – and Google’s – worldview is one that is entirely predicated on the belief that making something visible makes it better, and that technology is the tool to make things visible. This view, which has come to dominate the world, is not only fundamentally wrong; it is actively dangerous, both globally and in the specific instance that Schmidt states.
The wide spectrum of information that global policy makers possessed – particularly the United States, but also including the former colonial powers in the region, Belgium and France – both in the months and weeks preceding the genocide, and while it was occurring, has been exhaustively documented.4 Multiple countries had embassy and other staff on the ground, as did NGOs, while the UN, foreign and state departments, militaries and intelligence groups all monitored the situation and withdrew personnel in response to the escalating crisis. The National Security Agency listened in to, and recorded, the now-notorious nationwide radio broadcasts calling for a ‘final war’ to ‘exterminate the cockroaches’. (General Roméo Dallaire, the commander of the UN peacekeeping operation in Rwanda at the time of the genocide, later commented that ‘simply jamming [the] broadcasts and replacing them with messages of peace and reconciliation would have had a significant impact on the course of events’.)5 For years, the United States denied that it possessed any direct evidence of the atrocities as they were occurring, but in the trial of one Rwandan genocidaire in 2012, the prosecution unexpectedly produced high-resolution satellite photos shot over the country in May, June, and July of 1994, throughout the course of the ‘one hundred days of genocide’.6 The images – drawn from a much larger trove classified by the National Reconnaissance Office and the National Geospatial-Intelligence Agency – depicted roadblocks, destroyed buildings, mass graves, and even bodies lying in the streets of Butare, the former capital.7
The situation repeated itself in the Balkans in 1995, when CIA operatives watched the massacre of some 8,000 Muslim men and boys at Srebrenica from their situation room in Vienna via satellite.8 Days later, photographs from a U-2 spy plane showed the freshly dug mounds of mass graves: evidence that wasn’t shown to President Clinton until a month later.9 But institutional inertia cannot really be blamed, as the kind of distributed image making that Schmidt calls for has since come to pass. Today, satellite images of mass graves are no longer the preserve of military and state intelligence agencies. Instead, before-and-after images of trenches filled with murdered bodies, such as those in the grounds of the Daryya Mosque, south of Damascus, in 2013, are available on Google Maps.10
In all of these cases, surveillance reveals itself as a wholly retroactive enterprise, incapable of acting in the present and entirely subservient to the established and utterly compromised interests of power. What was missing in Rwanda and Srebrenica was not evidence of an atrocity, but the willingness to act upon it. As one investigative report on the Rwandan killings noted, ‘Any failure to fully appreciate the genocide stemmed from political, moral, and imaginative weaknesses, not informational ones.’11 This statement feels like it could be the punchline to this book: a damning indictment of our ability to either ignore or seek more raw information, when the problem is not with our knowing, but with our doing.
Such a denunciation of the degraded power of the image should not, however, be taken as support of Schmidt’s position that more images or more information, however democratically and distributedly generated, would have helped. The very technology that Schmidt insists upon as a counter to systemic evil, the smartphone, has been shown again and again to amplify violence and expose individuals to its ravages. Following Kenya’s disputed election result in 2007, the place of the radio stations in Rwanda was taken by the cell phone, and the swirling violence was fed by circulating text messages urging ethnic groups on both sides to slaughter one another. Over 1,000 people were killed. One widely shared example exhorted people to make and send lists of their enemies:
We say no more innocent Kikuyu blood will be shed. We will slaughter them right here in the capital city. For justice, compile a list of Luos and Kalus(ph) you know at work or in your estates, or elsewhere in Nairobi, plus where and how their children go to school. We will give you numbers to text this information.12
The problem of hate messages was so severe that the government attempted to circulate its own messages of peace and reconciliation, and humanitarian NGOs blamed the worsening cycle of violence directly on the escalating rhetoric within the closed, inaccessible communities created by cell phones. Subsequent studies have found that across the continent, even when income inequality, ethnic fractionalisation and geography are taken into account, increases in cell phone coverage are associated with higher levels of violence.13
None of this is to argue that the satellite or the smartphone themselves create violence. Rather, it is the uncritical, unthinking belief in their amoral utility that perpetuates our inability to rethink our dealings with the world. Every unchallenged assertion of the neutral goodness of technology supports and sustains the status quo. The Rwanda claim simply does not stand – in fact, the reverse is true, and Schmidt, one of the world’s most powerful facilitators of data-driven digital expansion, with a crowd of global business and government leaders as his audience, is not merely wrong, but dangerously so.
Information and violence are utterly and inextricably linked, and the weaponisation of information is accelerated by technologies that purport to assert control over the world. The historical association between military, government, and corporate interests on the one hand, and the development of new technologies on the other, makes this clear. The effects are seen everywhere. And yet we continue to place an inordinate value upon information that locks us into repeated cycles of violence, destruction, and death. Given our long history of doing exactly the same thing with other commodities, this realisation should not and cannot be dismissed.
The phrase ‘data is the new oil’ was apparently coined in 2006 by Clive Humby, the British mathematician and architect of the Tesco Clubcard, a supermarket reward programme.14 Since then, it has been repeated and amplified, first by marketers, then by entrepreneurs, and ultimately by business leaders and policy makers. In May 2017, the Economist devoted an entire issue to the proposition, declaring that ‘smartphones and the internet have made data abundant, ubiquitous and far more valuable … By collecting more data, a firm has more scope to improve its products, which attracts more users, generating even more data, and so on.’15 The president and CEO of Mastercard told an audience in Saudi Arabia, the world’s largest producer of actual oil, that data could be as effective as crude as a means of generating wealth (he also said it was a ‘public good’).16 In British parliamentary debates on leaving the European Union, data’s oily qualities were cited by Members of Parliament on both sides.17 Yet few such citations address the implications of long-term, systemic and global reliance on such a poisonous material, or the dubious circumstances of its extraction.
In Humby’s original formulation, data resembled oil because ‘it’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value.’18 The emphasis on the work required to make information useful has been lost over the years, aided by processing power and machine intelligence, to be replaced by pure speculation. In the process of simplification, the analogy’s historical ramifications, as well as its present dangers and its long-term repercussions, have been forgotten.
Our thirst for data, like our thirst for oil, is historically imperialist and colonialist, and tightly tied to capitalist networks of exploitation. The most successful empires have always promulgated themselves through a selective visibility: that of the subaltern to the centre. Data is used to map and classify the subject of imperialist intention, just as the subjects of empires were forced to register and name themselves according to the diktats of their masters.19 The same empires first occupied, then exploited, the natural reserves of their possessions, and the networks they created live on in the digital infrastructures of the present day: the information superhighway follows the networks of telegraph cables laid down to control old empires. While the fastest data routes from West Africa to the world still run through London, so the British-Dutch multinational Shell continues to exploit the oil of the Nigerian delta. The subsea cables girding South America are owned by corporations based in Madrid, even as countries there struggle to control their own oil profits. Fibre-optic connections funnel financial transactions by way of offshore territories quietly retained through periods of decolonisation. Empire has mostly rescinded territory, only to continue its operation at the level of infrastructure, maintaining its power in the form of the network. Data-driven regimes repeat the racist, sexist, and oppressive policies of their antecedents because these biases and attitudes have been encoded into them at the root.
In the present, the extraction, refinement, and use of data/oil pollutes the ground and air. It spills. It leaches into everything. It gets into the ground water of our social relationships and it poisons them. It enforces computational thinking upon us, driving the deep divisions in society caused by misbegotten classification, fundamentalism and populism, and accelerating inequality. It sustains and nourishes uneven power relationships: in most of our interactions with power, data is not something that is freely given but forcibly extracted – or impelled in moments of panic, like a stressed cuttlefish attempting to cloak itself from a predator.
The ability of politicians, policy makers and technocrats to talk approvingly of data/oil today should be shocking, given what we know about climate change, if we were not already so numb to their hypocrisy. This data/oil will remain hazardous well beyond our own lifetimes: the debt we have already accrued will take centuries to dissipate, and we have not come close as yet to experiencing its worst, inevitable effects.
In one key respect, however, even a realistic accounting of data/oil is insufficient in its analogous power, for it might give us false hope of a peaceful transfer to an information-free economy. Oil is, despite everything, defined by its exhaustibility. We are already approaching peak oil, and while every oil shock prompts us to engage and exploit some new territory or some destructive technology – further endangering the planet and ourselves – the wells will eventually run dry. The same is not true of information, despite the desperate fracking that appears to be occurring when intelligence agencies record every email, every mouse click, and the movements of every cell phone. While peak knowledge may be closer than we think, the exploitation of raw information can continue infinitely, along with the damage it does to us and our ability to reckon with the world.
In this, information more closely resembles atomic power than oil: an effectively unlimited resource that still contains immense destructive power, and that is even more explicitly connected than petroleum to histories of violence. Atomic information might, however, force us to confront existential questions of time and contamination in ways that petroculture, bubbling up through the centuries, has mostly managed to avoid.
We have traced the ways in which computational thinking, evolved with the help of the machines, developed to build the atomic bomb, and how the architecture of contemporary processing and networking was forged in the crucible of the Manhattan Project. We have also seen the ways in which data leaks and breaches: the critical excursions and chain reactions that lead to privacy meltdowns and the rhizomatic mushroom cloud. These analogies are not mere speculations: they are the inherent and totalising effects of our social and engineering choices.
Just as we spent forty-five years locked in a Cold War perpetuated by the spectre of mutually assured destruction, we find ourselves in an intellectual, ontological dead end today. The primary method we have for evaluating the world – more data – is faltering. It’s failing to account for complex, human-driven systems, and its failure is becoming obvious – not least because we’ve built a vast, planet-spanning information-sharing system for making it obvious to us. The mutually assured privacy meltdown of state surveillance and leak-driven countersurveillance activism is one example of this failure, as is the confusion caused by real-time information overload from surveillance itself. So is the discovery crisis in the pharmacological industry, where billions of dollars in computation are returning exponentially fewer drug breakthroughs. But perhaps the most obvious is that despite the sheer volume of information that exists online – the plurality of moderating views and alternative explanations – conspiracy theories and fundamentalism don’t merely survive, they proliferate. As in the nuclear age, we learn the wrong lesson over and over again. We stare at the mushroom cloud, and see all of this power, and we enter into an arms race all over again.
But what we should be seeing is the network itself, in all of its complexity. The network is only the latest, but certainly the most advanced, civilisation-scale tool for introspection our species has built thus far. To deal with the network is to deal with a Borgesian infinite library and all the inherent contradictions contained within it: a library that will not converge and continually refuses to cohere. Our categories, summaries and authorities are no longer merely insufficient; they are literally incoherent. As H. P. Lovecraft noted in his annunciation of a new dark age, our current ways of thinking about the world can no more survive exposure to this totality of raw information than we can survive exposure to an atomic core.
The ‘Black Chamber’, forerunner to the National Security Agency, was established as the first peacetime cryptanalytic organisation by the United States in 1919, dedicated to the cracking open of information, its refinement and combustion in the name of power. Its physical analogue was constructed by Enrico Fermi under the bleachers of Chicago’s Stagg Field in 1942 from 45,000 blocks of black graphite, and used to shield the world’s first artificial nuclear reaction. Just as the once-secret mesa town of Los Alamos finds its contemporary equivalent in the NSA data centres under construction in the Utah desert, so the black chamber is reified today both in the opaque glass and steel of NSA’s headquarters at Fort Meade, Maryland, and in the endless, inscrutable server racks of Google, Facebook, Amazon, Palantir, Lawrence Livermore, Sunway TaihuLight, and the National Defense Management Center.
The two chambers of Fermi and the NSA represent encounters with two annihilations – one of the body, and one of the mind, but both of the self. Both are analogues of the endlessly destructive pursuit of ever more finely grained knowledge, at the expense of the acknowledgement of unknowing. We’ve built modern civilisation on the dialectic that more information leads to better decisions, but our engineering has caught up with our philosophy. The novelist and activist Arundhati Roy, writing on the occasion of the detonation of India’s first nuclear bomb, called it ‘the end of imagination’ – and again, this revelation is literalised by our information technologies.20
In response to the end of the imagination, unmistakeably visible not only in the looming mushroom cloud but in the inhuman longevity of atomic half-lives that will continue to radiate long after humanity itself expires, we have resorted to myth and silence. Proposals put forward for marking long-term waste storage in the United States include sculpture so terrible in form that other species will recognise its location as evil. One verbal formulation compiled to accompany it states, ‘This place is not a place of honor. No highly esteemed deed is commemorated here. Nothing valued is here.’21 Another proposal by the Human Interference Task Force, convened by the Department of Energy in the 1980s, suggested the breeding of ‘radiation cats’ that would change colour when exposed to radioactive emissions and serve as living indicators of danger, to be accompanied by works of art and fable that would transmit the significance of this change through deep cultural time.22 The Onkalo spent nuclear fuel repository, dug deep into the bedrock beneath Finland, has suggested another plan: once completed, it will simply be erased from the map, its location hidden and eventually forgotten.23
An atomic understanding of information presents, at the last, such a cataclysmic conception of the future that it forces us to insist upon the present as the only domain for action. In contrast and in opposition to nihilistic accounts of original sins and dys/utopian imaginings of the future, one strand of environmental and atomic activism posits the notion of guardianship.24 Guardianship takes full responsibility for the toxic products of atomic culture, even and especially when they have been created for our ostensible benefit. It is based on the principles of doing the least harm in the present and of our responsibility to future generations – but does not presume that we can know or control them. As such, guardianship calls for change, while taking on the responsibility of what we have already created, insisting that deep burial of radioactive materials precludes such possibilities and risks widespread contamination. In this, it aligns itself with the new dark age: a place where the future is radically uncertain and the past irrevocably contested, but where we are still capable of speaking directly to what is in front of us, of thinking clearly and acting with justice. Guardianship insists that these principles require a moral commitment that is beyond the abilities of pure computational thinking, but well within, and utterly appropriate to, our darkening reality.
Ultimately, any strategy for living in the new dark age depends upon attention to the here and now, and not to the illusory promises of computational prediction, surveillance, ideology and representation. The present is always where we live and think, poised between an oppressive history and an unknowable future. The technologies that so inform and shape our present perceptions of reality are not going to go away, and in many cases we should not wish them to. Our current life support systems on a planet of 7.5 billion and rising utterly depend upon them. Our understanding of those systems and their ramifications, and of the conscious choices we make in their design, in the here and now, remain entirely within our capabilities. We are not powerless, not without agency, and not limited by darkness. We only have to think, and think again, and keep thinking. The network – us and our machines and the things we think and discover together – demands it.