9
Concurrency
On the screen, a man’s hands slowly rotate a box of twenty-four Cars-branded Kinder Eggs. He removes the polythene wrapper and rotates it, carefully lifting it to show the top and the bottom of the box. The video cuts to a dozen of the eggs neatly arranged on a table. The pair of hands picks one up and peels off the red and silver foil wrapper, revealing the chocolate egg inside. The egg is cracked to free a small plastic barrel, which, when opened, contains a small plastic toy. If the toy comes with stickers or other attachments, these are carefully applied, and the toy is slowly manipulated in front of the camera, all to the gentle sounds of tearing foil, cracking chocolate, and peeling plastic. After it has been fully appreciated, the egg and its contents are set aside, and the process is repeated for the next egg, and the next, until all have been opened. After a brief panning shot of all the toys, the video ends. It lasts seven minutes, and on YouTube it has been viewed 26 million times.
Kinder Eggs are an Italian sweet consisting of a milk and white chocolate shell containing a packaged plastic toy. Since their introduction in 1974, they have been sold in their millions worldwide – although they are banned in the United States, which prohibits sweets with objects inside. Cars, a 2006 Disney movie featuring the animated adventures of Lightning McQueen and his vehicular friends, grossed $450 million worldwide and has spawned two sequels so far, as well as near-infinite promotional tie-ins – including Kinder Eggs. So why, of all the sweets and all the product promos in the world, does this one deserve such a reverential review?
It doesn’t, of course. It’s not special. The video, titled ‘Cars 2 Silver Lightning McQueen Racer Surprise Eggs Disney Pixar Zaini Silver Racers by ToyCollector’, is just one of millions and millions of ‘surprise egg’ videos on YouTube. Every video follows the same theme: there’s an egg; it’s got a surprise in it; the surprise is revealed. But from such a simple premise, an infinity of combinations flows. There are more Kinder Egg videos of course, in every possible flavour: superhero eggs and Disney eggs and Christmas eggs and so on and so forth. And then there are knock-off, Kinder-alike eggs and Easter eggs and eggs made of Play-Doh and Lego eggs and balloon eggs and on and on. There are egg-like objects, such as toy garages or doll’s houses that can be opened to reveal their contents with the same hushed awe. There are surprise egg videos that last for more than an hour, and there are more surprise egg videos than any human being could watch in an entire lifetime.
Unboxing videos have been a staple of the internet since decent video streaming became a possibility. Originating in the tech community, they fetishise new products and the experience of unpacking them: lingering close-ups of iPhones and games consoles as they emerge from their packaging. Around 2013, the trend spread to children’s toys, and something weird started to happen. Children exposed to the videos would lock on to them with a laser-like focus and endlessly reloop them in the way that previous generations wore out tapes of their favourite Disney movies. The younger the children, the less the actual content seems to matter. The repetition of the process, together with the bright colours and the constant sense of revelation, seems to transfix them. On YouTube, they could surf through hours and hours of such videos, continuously buoyed up by reassuring repetition and endless surprises, their desires constantly fed by the system’s recommendation algorithms.1
Children’s television, particularly that aimed at preschool children, always seems odd to adults. Before it disappeared from mainstream broadcasts and took up a new life on dedicated digital channels and online, the last great controversy of the kids’ broadcast era was Teletubbies, which depicted four baby-bear creatures with aerials on their heads and television screens in their bellies bumbling around green fields and hills, playing games and taking naps. The show was a huge success, but it also bothered people who thought that kids’ TV should in some way be educational. The Teletubbies communicated in a simplified ‘goo-goo’ language, which parents and newspapers thought would inhibit children’s development. In fact, the Teletubbies’ language had been developed by speech scientists and had its own internal logic. It also included many of the themes that would be automated by the surprise egg videos: call-and-response setups and cries of ‘again, again’ when a sequence was about to be repeated.2 What struck adults as bizarre, nonsensical, and somewhere between boring and threatening created a safe and reassuring world for small children. Knowingly or unknowingly, it is these psychological traits that have made surprise egg videos and their kin so popular on YouTube today. But in their combination of childish appeal, promised reward, and algorithmic variation, they are also what makes the videos so terrifying.
YouTube recommendation algorithms work by identifying what viewers like. Entirely new and uncategorised content has to go it alone on the network, existing in a kind of limbo that can only be disturbed by incoming links and outside recommendations. But if it finds an audience, if it starts to collect views, the algorithms may deign to place it among their recommended videos – featuring it on the sidebar of other videos and boosting it to regular viewers, thus increasing its ‘discoverability’. Even better, if it comes with a description, if it’s properly titled and tagged to identify it in an algorithmically friendly way, the system can group it with other similar videos. It’s pretty simple: if you like that, you’ll like this, and down the rabbit hole you go. You can even set the website to autoplay, so that when one video ends, the next one in the recommendation queue will play, and so on to eternity. Kids generate recommendation profiles pretty quickly, and they intensify fast when children lock onto a particular kind of video and replay it over and over again. The algorithms love that: it identifies a clear need, and they attempt to feed it.
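The underlying logic is simple enough to caricature in a few lines of code. The sketch below is purely illustrative, not YouTube’s actual system: a toy catalogue of invented videos, a viewer profile that is nothing more than a tally of tags watched, and an ‘autoplay’ loop that keeps picking whichever unwatched video best matches the tally. It captures the feedback at work: every view deepens the profile, and the profile selects the next video.

```python
from collections import Counter

# A toy catalogue of videos and their tags; all entries are invented.
CATALOGUE = {
    "surprise-eggs-cars":   {"surprise eggs", "kinder", "cars", "toys"},
    "surprise-eggs-frozen": {"surprise eggs", "kinder", "frozen", "toys"},
    "finger-family-cars":   {"finger family", "nursery rhymes", "cars"},
    "peppa-dentist":        {"peppa pig", "dentist"},
}

def recommend(profile, watched):
    """Return the unwatched video whose tags best match the viewer's tag tally."""
    candidates = [v for v in CATALOGUE if v not in watched]
    return max(candidates, key=lambda v: sum(profile[t] for t in CATALOGUE[v]), default=None)

def autoplay(first, steps=3):
    """Watch one video, fold its tags into the profile, and let the queue feed itself."""
    profile, watched, history = Counter(), set(), []
    current = first
    while current is not None and steps > 0:
        history.append(current)
        watched.add(current)
        profile.update(CATALOGUE[current])   # every view (or replay) deepens the profile
        current = recommend(profile, watched)
        steps -= 1
    return history

print(autoplay("surprise-eggs-cars"))
# -> ['surprise-eggs-cars', 'surprise-eggs-frozen', 'finger-family-cars']
```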
On the other side of the screen, you have people making the videos. Making videos is a business, and it comes with one simple incentive: get more views, and you get more money. YouTube, a Google company, is partnered with AdSense, also a Google company. Alongside – and increasingly within, before, after, and even during – videos, AdSense serves advertisements. When the adverts that accompany the videos get views, the creators get paid – usually in ‘cost per mille’ (CPM, or per thousand views). A specific creator’s CPM varies a lot, because not all videos and not all views are accompanied by ads, and the CPM rate itself can change depending on a variety of factors. But videos can be worth a fortune: ‘Gangnam Style’, the Korean pop hit that was the first to break 1 billion views on YouTube, earned $8 million from AdSense from its first 1.23 billion views, or about 0.65 cents per view.3 You don’t have to have Gangnam-level success to make a living from YouTube, although it’s obviously easier to get higher returns by making more and more videos, trying to get them in front of more and more eyeballs – and targeting markets, like children, that watch videos over and over again.
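The per-view figure quoted above is only arithmetic, but it is worth making explicit, since advertising rates are usually quoted per thousand views rather than per view. A back-of-envelope check, illustrative only:

```python
# Back-of-envelope check of the per-view figure quoted above. Real AdSense
# payouts vary by ad format, viewer geography, and how many views carry ads,
# so this is arithmetic, not a payout model.
earnings_usd = 8_000_000
views = 1_230_000_000

usd_per_view = earnings_usd / views        # ~0.0065 dollars
cents_per_view = usd_per_view * 100        # ~0.65 cents per view
cpm = usd_per_view * 1_000                 # 'cost per mille': ~$6.50 per thousand views

print(f"{cents_per_view:.2f} cents per view, roughly ${cpm:.2f} CPM")
```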
YouTube’s official guidelines state that the site is for ages thirteen and up, with parental permission required for those below eighteen, but there’s nothing to stop anyone, of any age, from accessing it. Moreover, there’s no need to have an account at all; like most websites, YouTube tracks unique visitors by their IP address, browser and device profile, and behaviour, and it can build a detailed demographic and preference profile to feed the recommendation engines without the viewer ever consciously submitting any information about themselves. That applies even if the viewer is a three-year-old child plonked in front of their parent’s iPad and mashing the screen with a balled-up fist.
The frequency with which such a situation occurs is obvious in the site’s own viewer statistics. Ryan’s Toy Review, a channel specialising in unboxing videos and other kids’ tropes, is the sixth most popular channel on the platform, only just behind Justin Bieber and the WWE.4 At one point in 2016, it was the most popular. Ryan is six years old, has been a YouTube star since he was three, and has 9.6 million subscribers. His family is estimated to earn around $1 million a month from their videos.5 Next in the list is Little Baby Bum, which specialises in nursery rhymes for preschoolers. With just 515 videos, they have accrued 11.5 million subscribers and 13 billion views.
Children’s YouTube is a vast and lucrative industry because on-demand video is catnip to both parents and their kids – and thus to content creators and advertisers as well. Small children, mesmerised by familiar characters and songs, bright colours, and soothing sounds, can be kept quiet and entertained for hours. The common tactic of assembling many nursery rhyme or cartoon episodes into hours-long compilations, and making a virtue of their length in video descriptions and titles, points to the amount of time some kids are spending with them.
As a result, YouTube broadcasters have developed a huge number of tactics to draw parents’ and children’s attention to their videos, and the advertising revenues that accompany them. One of them, as demonstrated in the surprise egg mashups, is a kind of keyword excess, cramming as many relevant search terms into a video title as possible. The result is what is known as word salad, a random sample from just a single channel reading, ‘Surprise Play Doh Eggs Peppa Pig Stamper Cars Pocoyo Minecraft Smurfs Kinder Play Doh Sparkle Brilho’; ‘Cars Screamin’ Banshee Eats Lightning McQueen Disney Pixar’; ‘Disney Baby Pop Up Pals Easter Eggs SURPRISE’; ‘150 Giant Surprise Eggs Kinder CARS StarWars Marvel Avengers LEGO Disney Pixar Nickelodeon Peppa’; and ‘Choco Toys Surprise Mashems & Fashems DC Marvel Avengers Batman Hulk IRON MAN’.6
This unintelligible assemblage of brand names, characters and keywords points to the real audience for the descriptions: not the viewer, but the algorithms that decide who sees which videos. The more keywords you can cram into a title, the more likely it is that your video will find its way into the recommendations, or even better, simply autoplay when a similar video finishes. The result is millions of videos with cascading, nonsensical titles – but then, YouTube is a video platform, and neither the algorithms nor the intended audience care about meaning.
There are other ways to get views to your channel too, and the simplest and most time-honoured of these is simply to copy and pirate other content. A quick search for ‘Peppa Pig’ on YouTube yields more than 10 million results – and the front page is almost entirely from the verified ‘Peppa Pig Official Channel’, run by the show’s creators. But quickly the results start to fill up with other channels, although the way YouTube uniformly displays its search results makes it hard to notice. One such channel is the unverified Play Go Toys, which has 1,800 subscribers and consists of pirated Peppa Pig episodes and unboxing videos, as well as videos of official Peppa Pig episodes being acted out with branded toys, titled as if they were the actual episodes.7 Mixed in among them are videos of – presumably – the channel owner’s own children playing with the toys, and going to the park.
While this channel is merely indulging in a little harmless piracy, it shows how the structure of YouTube facilitates the delamination of content and author, and how this impacts our awareness and trust of its source. One of the traditional roles of branded content is to act as a trusted source. Whether it’s Peppa Pig on children’s TV or a Disney movie, whatever one’s feelings about the industrial model of entertainment production, these products are carefully produced and monitored so that kids are essentially safe watching them, and can be trusted as such. This no longer applies when brand and content are disassociated by the platform, and so known and trusted content provides a seamless gateway to unverified and potentially harmful content.
This is the exact same process as the delamination of trusted news media on Facebook feeds and in Google results that is currently wreaking such havoc on our cognitive and political systems. When a fact-checked New York Times article is shared on Facebook or pops up in the ‘related content’ box of a Google Search, the link appears almost identical to one shared from NewYorkTimesPolitics.com, a website built by a teenager in Eastern Europe and entirely filled with invented, inflammatory and highly partisan stories about the US election.8 We’ll return to those sites in a bit, but the result on YouTube is that it’s incredibly easy for strange and inappropriate content to appear intermingled with – and almost indistinguishable from – known sources.
Another striking example of the weirdness of children’s video is the Finger Family. In 2007, a YouTube user called Leehosok uploaded a video in which two sets of finger puppets dance to the tinny, background sound of a recorded nursery rhyme: ‘Daddy finger, daddy finger, where are you? Here I am, here I am, how do you do?’ and so on through mommy finger, brother finger, sister finger, and baby finger. While the song clearly predated the video, this is its debut on YouTube.9 As of late 2017, there are at least 17 million versions of the Finger Family Song on YouTube. Like the surprise egg videos, they cover every possible genre, with billions and billions of aggregated views. Little Baby Bum’s version alone has 31 million views. One on the popular channel ChuChu has half a billion. The simplicity of the premise makes it ripe for automation: a basic piece of software can top an animated hand with any object or character, so Superhero Finger Families, Disney Finger Families, fruit and gummy bear and lollipop Finger Families, and their infinite varieties, spill down the page, accumulating millions and millions more views. Stock animations, audio tracks, and lists of keywords are assembled in their thousands to produce an endless stream of videos. It becomes difficult to get a purchase on such processes without simply listing their endless variations, but it’s important to grasp how vast this system is, and how indeterminate its actions, process, and audience. It’s also international: there are variations of Finger Family and Learn Colours videos for Tamil epics and Malaysian cartoons that are unlikely to pop up in any anglophone search results. This very indeterminacy and reach is key to this system’s existence, and its implications. Its dimensionality makes it difficult to grasp, or even to really think about.
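The combinatorics here are trivial to automate. A minimal sketch of how such a pipeline might work, assuming nothing more than a few hand-made lists of tropes, characters and hooks (all of the lists, field names and file names below are invented for illustration): cross the lists, and every combination becomes a keyword-stuffed title plus a recipe of stock assets to render.

```python
import itertools

# Invented trope, character and hook lists; real channels appear to work from
# much longer lists of the same kind.
TROPES     = ["Finger Family", "Surprise Eggs", "Learn Colors", "Wrong Heads"]
CHARACTERS = ["Superheroes", "Disney Princess", "Gummy Bear", "Lollipop"]
HOOKS      = ["Nursery Rhymes", "for Kids", "Fun Song Collection"]

def build_video(trope, characters, hook):
    """One video 'recipe': a keyword-salad title plus the stock assets to render."""
    return {
        "title": f"{characters} {trope} Song | {trope} {hook} for Children Kids",
        "audio": f"{trope.lower().replace(' ', '-')}.mp3",   # hypothetical stock audio file
        "models": characters.lower().split(),                # hypothetical stock 3-D assets
        "tags": [trope, characters, hook, "kids", "toddlers"],
    }

queue = [build_video(t, c, h)
         for t, c, h in itertools.product(TROPES, CHARACTERS, HOOKS)]

print(len(queue))         # 4 x 4 x 3 = 48 video recipes from three short lists
print(queue[0]["title"])  # Superheroes Finger Family Song | Finger Family Nursery Rhymes for Children Kids
```

Three short lists already yield forty-eight recipes; lists of a few dozen entries each push the output into the tens of thousands, with no human required to watch, or even to look at, any of it.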
The view counts on these videos must be treated with a degree of scepticism. Just as a huge number of these videos are created by automated pieces of software – bots – they are also viewed by bots, and even commented on by bots. The arms race between bot makers and Google’s machine learning algorithms is one that Google lost a long time ago across most of its properties. It’s also one that it has no real reason to take seriously: while it may publicly denounce and downplay the activity of bots, they massively magnify the number of adverts shown, and thus the revenue Google generates. But that complicity shouldn’t obscure the fact that there are also many actual children, plugged into iPhones and tablets, watching these videos over and over again – in part accounting for the inflated view numbers – while learning to type basic search terms into the browser, or simply mashing the sidebar to bring up another video. Increasingly, voice-activated commands alone will do the job of calling up content.
The weirdness only increases when humans reappear in the loop. Pringles Tin and Incredible Hulk 3D Finger Families might be easy to understand, at least procedurally, but well-known channels with crews of human actors also begin to reproduce the same logic out of the necessity of gaining page views. At some point, it becomes impossible to determine the degree of automation that is at work, or how to parse out the gap between human and machine.
Bounce Patrol is a children’s entertainment group from Melbourne, who follow in the brightly coloured tradition of pre-digital kid sensations like their fellow Australians, the Wiggles. Their YouTube channel, Bounce Patrol Kids, has almost 2 million subscribers, and they post professionally produced videos featuring their crew of human actors at the rate of about one per week.10 Yet Bounce Patrol’s productions follow closely the inhuman logic of algorithmic recommendation. The result is the deep weirdness of a group of people endlessly acting out the implications of a combination of algorithmically generated keywords: ‘Halloween Finger Family & more Halloween Songs for Children Kids Halloween Songs Collection’; ‘Australian Animals Finger Family Song | Finger Family Nursery Rhymes’; ‘Farm Animals Finger Family and more Animals Songs | Finger Family Collection – Learn Animals Sounds’; ‘Safari Animals Finger Family Song | Elephant, Lion, Giraffe, Zebra & Hippo! Wild Animals for kids’; ‘Superheroes Finger Family and more Finger Family Songs! Superhero Finger Family Collection’; ‘Batman Finger Family Song – Superheroes and Villains! Batman, Joker, Riddler, Catwoman’; and on and on and on. It’s old-school improvisation, only the cues are being shouted out by a computer fed on the demands of a billion hyperactive toddlers. This is what content production looks like in the age of algorithmic discovery: even if you’re a human, you end up impersonating the machine.
We’ve encountered pretty clear examples of the disturbing outcomes of full automation before, like the Amazon phone cases and rape-themed T-shirts. Nobody set out to create phone cases with drugs and medical equipment on them; it was just a deeply weird probabilistic outcome. Likewise, the case of the ‘Keep Calm and Rape A Lot’ T-shirts is depressing – and distressing – but comprehensible. Nobody set out to create these shirts; they just paired an unchecked list of verbs and pronouns with an online image generator. It’s quite possible that none of these shirts ever physically existed, or were ever purchased or worn, and thus that no harm was done. It’s significant, however, that the people creating these items failed to notice, and neither did their distributor. They literally had no idea what they were doing.
What starts to become apparent is that the scale and logic of the system is complicit in these outputs, and compels us to think through their implications. These outcomes entrain the wider social effects of previous examples, such as racial and gender bias in big data and machine intelligence–driven systems, and in the same manner they have no easy, or even preferable, solutions.
How about a video entitled ‘Wrong Heads Disney Wrong Ears Wrong Legs Kids Learn Colors Finger Family 2017 Nursery Rhymes’? The title alone confirms its automated provenance. The origin of the ‘Wrong Heads’ trope will remain a mystery for now. But it’s easy to imagine, as with the Finger Family Song, that somewhere there is a totally original and harmless version that made enough kids laugh that it started to climb the algorithmic rankings, until it made it onto the word salad lists. There, it would have combined with Learn Colors, Finger Family, Nursery Rhymes, and all of these other tropes – not merely as words but as images, processes, and actions – to be mixed into this particular assemblage.
The video consists of the Finger Family song played over an animation of rotating character heads and bodies from Disney’s Aladdin. It is initially innocent, if mismatched, but a strangeness creeps in with the appearance of a non-Aladdin character – Agnes, the little girl from Universal’s Despicable Me. Agnes is the arbiter of the scene: when the heads match up, she cheers; when they don’t, she bursts into floods of simulated tears. While the mechanism is clear, the result is pure pablum: the minimum of effort to produce the minimum of meaning.
The video’s creator, BABYFUN TV, has produced many similar videos, all of which work in exactly the same way. The character Hope from Disney’s Inside Out bawls through a Smurfs and Trolls head swap. Wonder Woman weeps at the X-Men. It goes on and on. BABYFUN TV only has 170 subscribers and very low view rates, but then there are thousands and thousands of channels like this. Viewing numbers on YouTube and other mass content aggregators aren’t significant in the abstract, but in their accumulation. The underlying mechanism of Wrong Heads is clear, but the constant overlaying and intermixing of different tropes starts to become troubling to adult sensibilities: a growing sense of something inhuman, of the uncanny valley between us and the system producing such content. It feels like a mistake, somewhere, deeper than the surface content.
The same scratchy digital sample of a child crying features in every one of BABYFUN TV’s Wrong Heads videos. While we might find it disturbing, it’s possible – like the gurgling baby in the Teletubbies’ sun – that this sound might provide some of the rhythm or cadence or relation to their own experience that actual babies are attracted to in this content. But nobody made this decision: it has been warped and stretched through algorithmic repetition and recombination in ways that nobody intended, that nobody actually wanted to happen. And what happens when this endless recirculation and magnification loops back to humans again?
Toy Freaks is a hugely popular YouTube channel – sixty-eighth on the platform, with 8.4 million subscribers – that features a father and his two daughters playing out many of the tropes we’ve identified so far, along the same principles as Bounce Patrol: the girls open surprise eggs, and they sing seasonal variations of the Finger Family song. As well as nursery rhymes and learning colours, Toy Freaks specialises in gross-out situations, such as food fights and filling bathtubs with fake insects. Toy Freaks has caused a degree of controversy, with many viewers feeling the videos border on abuse and exploitation – if they don’t cross the line entirely – citing videos of the children vomiting, bleeding, and in pain.11 Toy Freaks is a YouTube verified channel, although verification simply means that a channel has more than 100,000 subscribers.12
Toy Freaks is almost tame compared to its imitators. A Vietnamese variant called Freak Family features a young girl drinking bathroom products and cutting herself with a razor.13 Elsewhere, children fish brightly coloured automatic weapons out of muddy rivers. A live-action Elsa from Frozen drowns in a swimming pool. Spiderman invades a Thai beach resort and teaches colours through the medium of gaffer tape, wrapped around bikini-clad teenagers. Policemen wearing outsize baby heads and rubber Joker masks terrorise patrons at a Russian water park. It just goes on and on. The amplification of tropes in popular, human-led channels such as Toy Freaks leads to them being endlessly repeated across the network in increasingly outlandish and distorted recombinations. But there’s an undercurrent of violence and degradation – one that doesn’t, we might still hope, come from the dark imaginations of gross-out-loving actual children.
Sections of YouTube, like the rest of the internet, have long played host to a culture of violent effrontery, in which nothing is sacred. YouTube Poop is one such subculture, featuring mostly harmless, if deliberately offensive, remixes of other videos, overdubbing sweary rants and drug references onto children’s TV shows. It’s often the first level of weirdness that parents encounter too. One official Peppa Pig video, in which Peppa goes to the dentist, seems to be popular – although, confusingly, what appears to be the real episode is only available on an unofficial channel. In the official timeline, Peppa is appropriately reassured by a kindly dentist. In one version appearing high in the results of a ‘peppa pig dentist’ search, she is basically tortured, with teeth bloodily removed to the sounds of screaming. Disturbing Peppa Pig videos, which tend towards extreme violence and fear, with Peppa eating her father or drinking bleach, are widespread. Many are obviously parodies, or even satires of themselves: indeed, previous controversies around them have resulted in some receiving protection from copyright claims as parody. They’re not setting out to terrorise children – not really – even when they do. But they are, and they’re also setting off a whole chain of emergent outcomes in response.
Simply attributing YouTube weirdness and terror to the actions of trolls and dark humourists doesn’t really cut it. In the video cited, Peppa endures her horrendous dental experience, and then she transforms into a series of Iron Man/pig/robot hybrids and performs the Learn Colours dance. Whatever agency is at play here is far from clear: the video starts with a trollish Peppa parody, but later slides into the kind of automated repetition of tropes we’ve seen before. It’s not just trolls, or just automation; it’s not just human actors playing out an algorithmic logic, or algorithms mindlessly responding to recommendation engines. It’s a vast and almost completely hidden matrix of interactions between desires and rewards, technologies and audiences, tropes and masks.
Other examples seem less accidental, and more intentional. One whole strand of video production involves automated recuts of video game footage, reprogrammed with superheroes or cartoon characters instead of soldiers and gangsters. Spiderman breaks the legs of the Grim Reaper and Elsa from Frozen and buries them up to their necks in a pit. The Teletubbies – yes, them again – reprise Grand Theft Auto in motorcycle chases and bank heist shoot-outs. Dinosaurs, pierced with ice creams and lollipops, destroy city blocks. Nurses eat faeces to the sound of the Finger Family Song. Nothing makes sense and everything is wrong. Familiar characters, nursery tropes, keyword salad, full automation, violence, and the very stuff of kids’ worst dreams combine in channel after channel after channel of undifferentiated content, churned out at the rate of hundreds of new videos every week. Cheap technologies and cheaper distribution methods are put in the service of industrialised nightmare production.
What does it take to make these videos, and who makes them? How can we even know? Just because there aren’t human actors doesn’t mean there aren’t people involved. Animation is easy these days, and online content for children is one of the simplest ways of making money from 3-D animation, because the aesthetic standards are lower and independent production can profit through scale. It uses existing and easily available content (such as character models and motion-capture libraries), and it can be repeated and revised endlessly and mostly meaninglessly because the algorithms don’t discriminate – and neither do the kids. Cheap animations might be the work of a small studio of half a dozen people low on other work; they might be huge warehouses of slave labour, sweatshops for video production; they might be the product of a rogue dumb AI, an experimental project left in a box somewhere that’s just kept on running, racking up millions of views in the process. If it were some state power or network of paedophiles deliberately attempting to poison a generation – as some online commentators believe – we wouldn’t know. It might just be what the machine wants to do. Raising the question online simply tips one down another rabbit hole of conspiracy and trauma. The network is certainly incapable of diagnosing itself, just as the system is incapable of tempering its demands.
Kids are being traumatised by these videos. They watch their favourite cartoon characters acting out scenes of murder and rape.14 Parents have reported behaviour changes in their children after watching disturbing videos. These network effects cause real and probably lasting damage. To expose young children – some very young – to violent and disturbing scenes is a form of abuse. But it would be a mistake to deal with this issue as a simple matter of ‘won’t somebody think of the children’ hand-wringing. Obviously this content is inappropriate; obviously there are bad actors out there; obviously some of these videos should be removed. Obviously, too, this raises questions of fair use, appropriation, free speech and so on. But a reading of this situation only through this lens fails to fully grasp the mechanisms being deployed, and is thus incapable of thinking its implications in totality and responding accordingly.
What characterises many of the strange videos out there is the level of horror and violence on display. Some of the time it’s kids being gross, and some of the time it’s trollish provocation; most of the time it seems deeper, and more unconscious than that. The internet has a way of amplifying and enabling many of our latent desires – in fact, it’s what it seems to do best. It’s possible to see this tendency as a positive one: the efflorescence of network technologies has allowed many to realise and express themselves in ways never before possible, increasing their individual agency and liberating forms of identity and sexuality that have never spoken so vibrantly and in so many diverse voices as today. But here, where millions of children and adults play for hours, days, weeks, months and years – where they reveal, through their actions, their most vulnerable desires to predatory algorithms – that tendency seems overwhelmingly violent and destructive.
Accompanying the violence are untold levels of exploitation: not of children because they are children, but of children because they are powerless. Automated reward systems like YouTube algorithms necessitate exploitation to sustain their revenue, encoding the worst aspects of rapacious, free market capitalism. No controls are possible without collapsing the entire system. Exploitation is encoded into the systems we are building, making it harder to see, harder to think and explain, harder to counter and defend against. What makes it disturbing is that this is not a science fictional exploitative future of AI overlords and fully robot workforces in the factories, but exploitation in the playroom, in the living room, in the home and the pocket, being driven by exactly the same computational mechanisms. And humans are degraded on both sides of the equation: both those who, numbed and terrified, watch the videos; and those who, low paid or unpaid, exploited or abused, make them. In between sit mostly automated corporations, taking the profit from both sides.
These videos, wherever they are made, however they come to be made, and whatever their own conscious intentions, are bred by a system that was consciously intended to show videos to children for profit. The unconsciously generated, emergent outcomes of this are all over the place.
To expose children to this content is abuse. This is not the same as the debatable but undoubtedly real effects of film or video game violence on teenagers, or the effects of pornography or extreme images on young minds. Those are important debates, but they’re not even what is being discussed here. At stake on YouTube is very young children, effectively from birth, being deliberately targeted with content that will traumatise and disturb them, via networks that are extremely vulnerable to exactly this form of abuse. It’s not about intention, but about a kind of violence inherent in the combination of digital systems and capitalist incentives.
The system is complicit in the abuse, and YouTube and Google are complicit in that system. The architecture they have built to extract the maximum revenue from online video is being hacked by unknown persons to abuse children – perhaps not even deliberately, but at a massive scale. The owners of these platforms have an absolute responsibility to deal with that, just as they have a responsibility to deal with the radicalisation of (mostly) young (mostly) men via extremist videos – of any political persuasion. They have so far shown absolutely no inclination to do this, which is despicable but sadly unsurprising. But the question of how they can respond without shutting down the services themselves, and many of the systems that resemble them, has no easy answer.
This is a deeply dark time, in which the structures we have built to expand the sphere of our communications and discourses are being used against us – all of us – in systematic and automated ways. It is hard to keep faith with the network when it produces horrors such as these. While it is tempting to dismiss YouTube’s wilder examples as trolling, of which a significant number certainly are, that fails to account for the sheer volume of content weighted in a particularly grotesque direction. It presents many and complexly entangled dangers, including that such events will be used as justification for increased control over the internet, sweeping censorship, surveillance, and crackdowns on freedom of speech. In this, YouTube’s children’s crisis reflects the wider cognitive crisis produced by automated systems, weak machine intelligence, social and scientific networks, and the wider culture – with its own matching set of easy scapegoats and cloudier, entangled substructures.
In the final weeks of the 2016 US election, the international media descended on the small city of Veles, in the Republic of Macedonia. A short hour’s drive from the capital Skopje, Veles is a former industrial centre of just 44,000 people, but it received attention at the highest levels. In the last days of the campaign, even President Obama became obsessed with the place. It had come to epitomise a new media ecosystem in which, he said, ‘everything is true and nothing is true’.15
In 2012, two brothers from Veles set up a website called HealthyFoodHouse.com. They stuffed it with weight-loss tips and recommendations for alternative remedies, culled from wherever they could find them on the internet, and over the years it drew more and more visitors. Their Facebook page has 2 million subscribers, and 10 million come to the site every month, drawn, via Google, to articles with titles like ‘How To Get Rid of The Folds On Your Back And Sides in 21 Days’ and ‘5 Soothing Essential Oils To Rub On Your Sciatic Nerve For Instant Pain Relief’. With the visitors, the AdSense earnings started rolling in: the brothers became local celebrities and spent their money on fast cars and bottles of champagne in Veles’s nightclubs.
Other kids in Veles followed suit, many dropping out of school in order to devote their time to filling their burgeoning portfolios of websites with plagiarised and specious content. In early 2016, the same kids discovered that the biggest and most voracious consumers of news – any news at all – were Trump supporters, who gathered in large and easily targeted Facebook groups. Like the unverified channels of YouTube, their websites were indistinguishable from – and no more or less authoritative than – the thousands of alternative news sites popping up across the internet in response to the Trumpian renunciation of mainstream media. More often than not, the distinction didn’t even matter: as we’ve seen, all sources look the same on social networks, and clickbait headlines combined with confirmation bias acted on conservative audiences in much the same way that YouTube algorithms responded to ‘Elsa Spiderman Finger Family Learn Colors Live Action’ strings. Repeated clicks just pushed such stories higher in Facebook’s own rankings. A few brave teens tried the same tricks on Bernie Sanders supporters, with less impressive results. ‘Bernie Sanders supporters are among the smartest people I’ve seen,’ said one. ‘They don’t believe anything. The post must have proof for them to believe it.’16
For a few brief months, headlines claiming that Hillary Clinton had been indicted or that the Pope had declared his support for Trump brought a trickle of wealth to Veles: a few more BMWs appeared in its streets, and more champagne was sold in its nightclubs. The American media, for its part, decried the ‘amoral’ attitudes and ‘cocksure demeanours’ of Macedonian youth.17 In doing so, it ignored, or failed to think, the histories and complex interrelationships that fuelled Macedonia’s fake news boom – and in turn, failed to understand the wider, systemic implications of similar events.
Veles used to be officially known as Tito’s Veles, when the city belonged not to the Republic of Macedonia, but to Yugoslavia. When that country and its networks fell apart, Macedonia managed to avoid the bloodiest of the conflicts that tore apart the central Balkan states. In 2001, a UN-backed agreement made peace between the majority government and ethnic Albanian separatists, and in 2005 the country applied to join the European Union. But it faced one major impediment: a naming dispute with its southern neighbour, Greece. According to the Greeks, the name Macedonia belongs to the Greek province of the same name, and they accused the new Macedonians of planning to take it over. The dispute has simmered for over a decade, preventing the Republic’s accession to the EU and subsequently to NATO, and causing it to slide away from further democratic reforms.
Frustrated at the lack of progress, divisions in society have deepened, and ethnic nationalisms have revived. One outcome has been the ruling party’s policy of ‘antiquisation’: the deliberate appropriation and even fabrication of a Macedonian history.18 Airports, train stations and stadiums were renamed after Alexander the Great and Philip of Macedon – both figures from Greek history who have little connection to Slavic Macedonia – as well as other places and figures from Greek Macedonia. Huge areas of Skopje were bulldozed and rebuilt in a more classical style, a programme costing hundreds of millions of Euros in a country with some of the lowest employment figures on the continent. The centre of the city now features massive statues officially referred to as simply the Warrior and the Warrior on Horseback – but known to everyone as Philip and Alexander. For a while, the country’s official flag depicted the Vergina Sun, a symbol found on Philip’s tomb in Vergina, in northern Greece. These and other appropriations have been supported by nationalist rhetoric, which has been used to suppress minority and centrist parties. Politicians and historians have received death threats for advocating a compromise with Greece.19 In short, Macedonia is a country that has attempted to construct its whole identity on the basis of fake news.
In 2015, a series of leaks revealed that the same government pushing the antiquisation programme also sponsored an extensive wiretapping operation by the country’s security services, which illegally recorded some 670,000 conversations from more than 20,000 telephone numbers over more than a decade.20 Unlike in the United States, the United Kingdom, and other democracies found to be eavesdropping on their own citizens, the leaks led to the collapse of the government, followed by the release of the intercepts to their subjects. Journalists, members of parliament, activists and employees of humanitarian NGOs received CDs containing hours of their own most intimate conversations.21 But just like everywhere else, these revelations didn’t change anything – they simply fuelled more paranoia. Those on the right accused foreign powers of orchestrating the scandal, doubling down on the nationalist rhetoric. Trust in government and democratic institutions fell to a new low.
In such a climate, is it any surprise that the young people of Veles should take wholeheartedly to a programme of disinformation, particularly when it is rewarded by the very systems of modernity they have been told are the future? Fake news is not a product of the internet. Rather, it is the manipulation of new technologies by the same interests that have always sought to manipulate information to their own ends. It is the democratisation of propaganda, in that ever more actors can now play the role of propagandist. And ultimately it is an amplifier of a division that exists already in society, just as gang stalking websites are amplifiers for schizophrenia. But the objectification of Veles, while ignoring the historical and social context that formed it, is symptomatic of a collective failure to comprehend the mechanisms we have built and with which we have surrounded ourselves – and of the fact that we are still seeking clear answers to cloudy problems.
In the months after the election, other actors were accused of its manipulation. The most popular scapegoat was Russia, the go-to bad guy for most contemporary shady tricks, particularly when these emerge from the internet. Following the Russian pro-democracy protests of 2011, which were largely organised through the internet, allies of Vladimir Putin became increasingly active online, setting up armies of pro-Kremlin sock puppets on social media. One such operation, known as the Internet Research Agency, employs hundreds of Russians in St Petersburg, from where they coordinate a campaign of blog posts, comments, viral videos and infographics pushing the Kremlin’s line both within Russia and internationally.22 These ‘troll farms’ are the electronic equivalent of Russia’s grey zone military campaigns: elusive, deniable, and deliberately confusing. There are also thousands of them, at every administrative level: a constant background chatter of misinformation and malevolence.
In trying to support Putin’s party in Russia, and to smear opponents in countries like Ukraine, the troll farms quickly learned that no matter how many posts and comments they produced, it was pretty hard to convince people to change their minds on any given subject. And so they started doing the next best thing: clouding the argument. In the US election, Russian trolls posted in support of Clinton, Sanders, Romney, and Trump, just as Russian security agencies seem to have had a hand in leaks against both sides. The result is that first the internet, and then the wider political discourse, becomes tainted and polarised. As one Russian activist described it, ‘The point is to spoil it, to create the atmosphere of hate, to make it so stinky that normal people won’t want to touch it.’23
Unidentified forces have influenced other elections too, each laced with conspiracy and paranoia. In the run-up to the EU referendum in the United Kingdom, a fifth of the electorate believed that the poll would be rigged in collusion with the security services.24 Leave campaigners advised voters to take pens with them to vote, in order to ensure pencil votes weren’t erased.25 In the aftermath, attention focused on the work of Cambridge Analytica – a company owned by Robert Mercer, former AI engineer, hedge fund billionaire and Donald Trump’s most powerful supporter. Cambridge Analytica’s employees have described what they do as ‘psychological warfare’ – leveraging vast amounts of data in order to target and persuade voters. And of course it turned out that the election really was rigged by the security services, in the way that rigging actually happens: the board and staff of Cambridge Analytica, which ‘donated’ its services to the Leave campaign, include former British military personnel – notably the former director of psychological operations for British forces in Afghanistan.26 In both the EU referendum and the US election, military contractors used military intelligence technologies to influence democratic elections in their own countries.
Carole Cadwalladr, a journalist who has repeatedly highlighted the links between the Leave campaign, the US Right, and shadowy data firms, wrote,
Try to follow this on a daily basis and it’s one long headspin: a spider’s web of relationships and networks of power and patronage and alliances that spans the Atlantic and embraces data firms, thinktanks and media outlets. It is about complicated corporate structures in obscure jurisdictions, involving offshore funds funnelled through the black-box algorithms of the platform tech monopolists. That it’s eye-wateringly complicated and geographically diffuse is not a coincidence. Confusion is the charlatan’s friend, noise its accessory. The babble on Twitter is a convenient cloak of darkness.27
Just as in the US election, attention turned to Russia as well. Researchers found that the Internet Research Agency had been on a Brexit tweeting spree, in characteristically divisive fashion. One account purporting to be a Texan Republican, since suspended by Twitter for links to the Agency, tweeted, ‘I hope UK after #BrexitVote will start to clean their land from muslim invasion!’ and ‘UK voted to leave future European Caliphate! #BrexitVote’. The same account had previously appeared on the front pages of the tabloid newspapers when it posted images purporting to show a Muslim woman ignoring victims of a terror attack in London.28
Beyond the 419 accounts identified as actively belonging to the Agency, untold numbers more were automated. Another report, published the year after the referendum, found a network of more than 13,000 automated accounts tweeting on both sides of the debate – but eight times more likely to promote pro-Leave than pro-Remain content.29 All 13,000 accounts were deleted by Twitter in the months after the referendum, and their origin remains unknown. According to other studies, one-fifth of all online debate around the 2016 US election campaign was automated, and the actions of the bots measurably shifted public opinion.30 Something is rotten in democracy when huge numbers of those participating in its debates are unaccountable and untraceable, when we cannot know who or even what they are. Their motives and their origin are entirely opaque, even as their effects on society grow exponentially. The bots are everywhere now.
In the summer of 2015, AshleyMadison.com, a dating website for married people seeking affairs, was hacked and the details of 37 million members leaked onto the internet. As researchers dug through vast databases of explicit messages between the site’s users, it rapidly became clear that for a site that promised to connect women and men directly – including guaranteeing affairs for its premium members – there was a huge discrepancy between the numbers of each gender. Of those 37 million users, just 5 million were women, and most of them had created an account and never logged on again. The exception was a hugely active cohort of some 70,000 female accounts that Ashley Madison called ‘Angels’. The Angels were the ones who initiated contact with men – who had to pay to respond to them – and kept up conversations over months to keep them coming back, and paying more. The Angels, of course, were entirely automated.31 Ashley Madison paid third parties to create millions of fake profiles in thirty-one different languages, building an elaborate system to administer and animate them. Some men spent thousands of dollars on the site – and some even had affairs in the end. But the vast majority simply spent years having explicit and fruitless conversations with pieces of software. Here is another take on the automation of dystopia: a social site where it’s impossible to be social, half the participants are shadows, and participation is only possible through payment. Those exposed to the system had no way of knowing what was occurring, apart from the suspicion that something might be wrong. And it was impossible to act on that suspicion without destroying the fantasy on which the entire enterprise was assembled. The collapse of the infrastructure – the hack – revealed its bankruptcy, but it had already been made explicit in the technological framing of an abusive system.
When I first published research into the strangeness and violence of children’s YouTube online, I received a rush of messages and emails from strangers who all believed they knew where the videos were coming from. Some had spent months tracking website owners and IP addresses across the web. Others had correlated live-action video locations with documented cases of abuse. The videos were coming from India, from Malaysia, from Pakistan (they were always coming from Elsewhere). They were the grooming tools of an international gang of paedophiles. They were the product of this one company. They were the output of a rogue AI. They were part of a concerted, international, and state-backed plan to corrupt Western youth. Some of the emails were from cranks, some from dedicated researchers; all believed that they had somehow cracked the code. Most of their evaluations were convincing regarding some subset or aspect of the videos; all failed utterly when tested against their entirety.
What is common to the Brexit campaign, the US election, and the disturbing depths of YouTube is that, despite multiple suspicions, it is ultimately impossible to tell who is doing what, or what their motives and intentions are. Watching endlessly streaming videos, scrolling through walls of status updates and tweets, it’s futile to attempt to discern what is algorithmically generated nonsense and what is carefully crafted fake news built to generate ad dollars; what is paranoid fiction, state action, propaganda, or spam; what is deliberate misinformation and what is well-meaning fact-checking. This confusion certainly serves the manipulations of Kremlin spooks and child abusers alike, but it’s also broader and deeper than the concerns of any one group: it is how the world actually is. Nobody decided that this is how the world should evolve – nobody wanted the new dark age – but we built it anyway, and now we are going to have to live in it.