From ‘Apple’ to ‘Anomaly’ by Trevor Paglen @ the Barbican

Listen up! Listen up! American artist, geographer, and author Trevor Paglen has big news for everyone! He is here to tell us that artificial intelligence may not be a totally wonderful, life-enhancing, fair and just invention after all! Here he is to explain.

AI networks

Trev takes as his starting point the way Artificial Intelligence networks are taught how to ‘see’, ‘hear’ and ‘perceive’ the world by engineers who feed them vast ‘training sets’.

Standard ‘training sets’ consist of images, video and sound libraries that depict objects, faces, facial expressions, gestures, actions, speech commands, eye movements and more. The point is that the way these objects are categorised, labelled and interpreted is not value-free; in other words, the human categorisers have to bring in all kinds of subjective and value judgements – and this subjective element can lead to all kinds of wonky outcomes.
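To make that concrete, here is a toy sketch (in Python, with entirely invented field names and values – this is not ImageNet’s actual format, just an illustration) of what a single labelled entry in a training set amounts to: raw data plus a category some human chose for it.

```python
# A minimal, invented sketch of one entry in an image 'training set'.
# Real datasets store an image file or URL plus a category drawn from
# a fixed vocabulary of 'classes' -- both chosen by people.
training_example = {
    "image": "photos/0001.jpg",   # the raw data the machine 'sees'
    "label": "strawberry",        # picked by a human annotator
    "annotator_id": 17,           # whose judgement was it?
}

# The class vocabulary itself is a human decision: some entries are
# factual, others are value judgements.
classes = ["apple", "strawberry", "orange", "debtor", "bad person"]
assert training_example["label"] in classes
```

The machine never sees the annotator’s reasoning, only the label – which is exactly where the subjectivity sneaks in.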

Thus Trev wants to point out that the ongoing development of artificial intelligence is rife with hidden prejudices, biases, stereotypes and just wrong assumptions. And that this process starts (in some iterations) with the scanning of vast reservoirs of images. Such as the one he’s created here.

Machine-seeing-for-machines is a ubiquitous phenomenon, encompassing everything from facial-recognition systems conducting automated biometric surveillance at airports to department stores intercepting customers’ mobile phone pings to create intricate maps of movements through the aisles. But all this seeing, all of these images, are essentially invisible to human eyes. These images aren’t meant for us; they’re meant to do things in the world; human eyes aren’t in the loop.

From apple to anomaly

So where’s the work of art?

Well, the Curve is the long tall curving exhibition space at the Barbican which is so uniquely shaped that the curators commission works of art specifically for its shape and structure.

For his Curve work Trev has had the bright idea of plastering the long curving wall with 35,000 (!) individually printed photographs pinned in a complex mosaic of images along the immense length of the curve. It has an awesome impact. That’s a lot of photos.

From ‘Apple’ to ‘Anomaly’ by Trevor Paglen © Tim P. Whitby / Getty Images

As the core of his research & preparation, Trev spent some time at ImageNet. This is one of the most widely shared, publicly available collections of images out there – and it is also used to train artificial intelligence networks. It’s available online, so you can have a go at searching its huge image bank.

Apparently, ImageNet contains more than fourteen million images organised into more than 21,000 categories or ‘classes’.

In most cases, the connotations of image categories and names are uncontroversial, e.g. ‘strawberry’ or ‘orange’, but many others are ambiguous and/or a question of judgement – such as ‘debtors’, ‘alcoholics’ and ‘bad people’.

As the old computer programming cliché has it: ‘garbage in, garbage out.’ If artificial intelligence programs are being taught to teach themselves based on highly questionable and subjective premises, we shouldn’t be surprised if they start developing all kinds of errors, extrapolating and exaggerating all kinds of initial biases into wild stereotypes and misjudgements.
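As a toy illustration of that ‘garbage in, garbage out’ point (a deliberately crude sketch, nothing like a real AI system): even the simplest possible ‘learning’ program – one that just labels a new item with the label of its nearest training example – faithfully reproduces whatever judgements its training data contains.

```python
# Garbage in, garbage out: a trivial nearest-neighbour 'classifier'
# can only repeat the labels humans put into its training set --
# including value-laden ones. All data here is invented.

def nearest_neighbour(train, query):
    """Return the label of the training example closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], query))
    return label

# Hypothetical training set: feature pairs with human-chosen labels.
# Note the subjective judgement baked into the third example.
training_set = [
    ((1.0, 1.0), "apple"),
    ((9.0, 9.0), "orange"),
    ((5.0, 1.0), "bad person"),   # a value judgement, not a fact
]

# The program has no way to question the label -- it just repeats it.
print(nearest_neighbour(training_set, (4.5, 1.5)))  # -> bad person
```

The point scales: a system trained on fourteen million subjectively labelled images extrapolates those judgements just as blindly, only at industrial scale.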

So the purpose of From Apple to Anomaly is to ‘question the content of the images which are chosen for machine learning’. These are just some of the kinds of images which researchers are currently using to teach machines about ‘the world’.

Conceptually, it seemed to me that the work doesn’t really go much further than that.

It has a structure of sorts which is that, when you enter, the first images are of the uncontroversial ‘factual’ type – specifically, the first images you come to are of the simple concept ‘apple’.

Nothing can go wrong with images of an apple, right? Then as you walk along it, the mosaic of images widens like a funnel with a steady increase of other categories of all sorts, until the entire wall is covered and you are being bombarded by images arranged according to (what looks like) a fairly random collection of themes. (The themes are identified by black cards with clear white text, as in ‘apple’ below, which are placed at the centre of each cluster of images.)

From ‘Apple’ to ‘Anomaly’ by Trevor Paglen © Tim P. Whitby / Getty Images

Having read the blurb about the way words, and AI interpretation of words, become increasingly problematic as the words become increasingly abstract, I expected that the concepts would start simple and become increasingly vague. But the work is not, in fact, like that – it’s much more random, so that quite specific categories – like ‘paleontologist’ – can be found at the end while quite vague ones crop up very early on.

There was a big cluster of images around the word ‘pizza’. These looked revolting, but it was getting close to lunchtime and I found myself mysteriously attracted to the forty or fifty images depicting ‘ham and eggs’. Mmmm. Ham and eggs, yummy.

Conclusions

Most people are aware that Facebook harvests their data, just like Google and all the other big computer giants, Twitter, Instagram, blah blah. The disappointing reality for deep thinkers like Trev is that most people, quite obviously, don’t care. As long as they can instant message their mates or post photos of their cats for the world to see, most people don’t appear to give a monkey’s what these huge American corporations do with the incalculably vast tracts of data they harvest and hold about us.

I think the same is true of artificial intelligence. Most people don’t care because they don’t think it affects them now or is likely to affect them in the future. Personally, I’m inclined to agree. When I read articles about artificial intelligence, particularly articles about the possible stereotyping of women and blacks, i.e. the usual victims, several objections spring to mind:

1. American bias. The books are written by Americans and feature examples from America. And when you dig deep you tend to find that AI, insofar as it is applied in the real world, tends to exacerbate inequalities and prejudices which already exist. In America. The examples about America’s treatment of its black citizens, or the poor, or the potentially dreadful implications of computerised programmes for healthcare, specifically for the poor – all these examples tend to be taken from America, which is a deeply and distinctively screwed-up country. My point is that a lot of the scaremongering about AI turns out, on investigation, really to reflect the scary nature of American society, its gross injustices and inequalities.

2. Britain is not America. Britain is a different country, with different values, run in different ways. I take the London Underground or sometimes the overground train service every day. Every day I see the chaos and confusion as large-scale systems fail at any number of pressure points. The idea that learning machines are going to make any difference to the basic mismanagement and bad running of most of our organisations seems to me laughable. From time to time I see headlines about self-driving or driverless cars, sometimes taken as an example of artificial intelligence. OK. At what date in the future would you say that the majority of London’s traffic will be driverless cars, lorries, taxis, buses and Deliveroo scooters? In ten years? Twenty years?

3. The triviality of much AI. There’s also a problem with the triviality of much AI research. After visiting the exhibition I read a few articles about AI and quickly got bored of reading how supercomputers can now beat chess grandmasters or world champions at the complex game of Go. I can hardly think of anything more irrelevant to the real world. Last year the Barbican itself hosted an exhibition about AI – AI: More Than Human – but the net result of the scores of exhibits and interactive doo-dahs was to show how trivial and pointless most of them were.

From ‘Apple’ to ‘Anomaly’ by Trevor Paglen © Tim P. Whitby / Getty Images

4. No machine will ever ‘think’. And this brings us to the core of the case against AI, which is that it’s impossible. Creating any kind of computer program which ‘thinks’ like a human is, quite obviously, impossible. This is because people don’t actually ‘think’ in any narrowly definable sense of the word. People reach decisions, or just do things, based on thousands of accumulated impulses and experiences, unique to each individual, and so complicated and, in general, so irrational, that no programs or models can ever capture them. The long detailed Wikipedia article about artificial intelligence includes this:

Moravec’s paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that ‘it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility’.

Intelligence tests, chess, Go – tasks with finite rules of the kind computer programmers understand – are relatively easy to program. The infinitely complex billions of interactions which characterise human behaviour – impossible.

5. People are irrational. I’ve been studying art and literature and history for 40 years or so and if there’s one thing that comes over it is how irrational, perverse, weird and unpredictable people can be, as individuals and in crowds (because the behaviour of people is the subject matter of novels, plays, poems and countless art works; the really profound, bottomless irrationality of human beings is – arguably – the subject matter of the arts).

People smoke and drink and get addicted to drugs (and computer games and smart phones), people follow charismatic leaders like Hitler or Slobodan Milosevic or Donald Trump. People, in other words, are semi-rational animals first and only a long long way afterwards, rational, thinking beings and even then, only rational in limited ways, around specific goals set by their life experiences or jobs or current situations.

Hardly any of this can be factored into any computer program. I am currently working in the IT department of a large American corporation, and what I see every day, repeatedly, throughout the day, is what I’ve seen in all my other jobs in IT and websites and data, which is that the ‘users’, damn their eyes, keep coming up with queer and unpredicted ways of using the system which none of the program managers and project managers and designers and programmers had anticipated.

People keep outwitting and outflanking the computer systems because that’s what people do, not because any individual person is particularly clever but because, taken as a whole, people here, there and across the range, stumble across flaws, errors, glitches, bugs, unexpected combinations, don’t do what ultra-rational computer scientists and data analysts expect them to, Dammit!

6. It doesn’t work. The most obvious thing about tech is that it’s always breaking. I currently work in the IT department of a large American corporation, which means being on the receiving end of a never-ending tide of complaints and queries about why this, that or the other functionality has broken. The same was true of all the other website jobs I’ve had. The biggest eye-opener for me working in this sector was to learn that things are always broken; there are always bugs and glitches and sometimes quite large structural problems, all of which have to be ranked and prioritised, and then we get round to fixing them when we have a) developer time and b) budget.

As a tiny confirmation, I have been trying to access ImageNet, the online image bank at the core of this work of art, and guess what? For two days in a row it hasn’t been working; I’ve got the message: ‘ImageNet is under maintenance. Only ILSVRC synsets are included in the search results.’ Exactly. QED.

7. Big government, dumb data. I worked for UK government departments and big government agencies for eight years and my takeaway from the experience is that it isn’t artificial intelligence we should be frightened of – it is human stupidity.

Working inside the civil service was a terrifying insight into how naturally people in groups fall into a kind of bureaucratic mindset, setting up meetings and committees with minutes and notes and spreadsheets and presentations, and how, slowly but steadily, the ability to change anything or get anything done is strangled to death. No amount of prejudice or stereotyping in image recognition – to take the anti-AI campaigners’ biggest worry – will ever compete with the straightforward bad, dumb, badly thought out, terribly implemented and often cack-handedly horrible decisions which governments and their bureaucracies take.

Take Theresa May’s campaign of sending vans round the UK telling unwanted migrants to go home. Or the vast IT catastrophe which is Universal Credit. For me, any remote and highly speculative threat about the possibility that some AI programs may or may not be compromised by partial judgements and bias is dwarfed by the bad judgements and stereotyping which characterise our society and, in particular our governments, in the present, in the here-and-now.

8. Destroying the world. Following this line of thought to its conclusion, it isn’t artificial intelligence which is opening a new coal-fired power station every two weeks, building 100 new airports, manufacturing 75 million new cars and burning down tracts of the rainforest the size of Belgium every year. The meaningful application of artificial intelligence is decades away, whereas good old-fashioned human stupidity is destroying the world here and now in front of our eyes, and nobody cares very much.

Summary

So. I liked this piece, but not because of the supposed warning it makes about artificial intelligence – the obvious criticism of From ‘Apple’ to ‘Anomaly’ is that, apart from a few paragraphs on one wall label, it doesn’t really give you very much background information to get your teeth into or ponder. No, I liked it because:

  1. it is huge and awesome and an impressive thing to walk along – so American! so big!
  2. and because its stomach-churning glut of imagery is testimony to the vast, unstoppable, planet-wasting machine which is humanity

From ‘Apple’ to ‘Anomaly’ by Trevor Paglen © Tim P. Whitby / Getty Images


Related links

Reviews of other exhibitions at the Barbican

And concerts

All I Know Is What Is On The Internet @ the Photographers’ Gallery

Some exhibitions I respond to personally and emotionally; some I respond to intellectually, picking up on ideas or theories; and some leave me stone cold.

This is the text from the press release for All I Know Is What Is On The Internet.

All I Know Is What Is On The Internet presents the work of 11 contemporary artists and groups seeking to map, visualise and question the cultural dynamics of 21st century image culture.

Importantly, it investigates the systems through which today’s photographic images multiply online and asks what new forms of value, knowledge, meaning and labour arise from this endless (re)circulation of content.

Traditionally, photography has played a central role in documenting the world and helping us understand our place within it. However, in a social media age, the problem of understanding an individual photograph is being overwhelmed by the industrial challenge of processing millions of images within a frantically accelerated timeframe. Visual knowledge and authenticity are now inextricably linked to a ‘like’ economy, subject to the (largely invisible) actions of bots, crowdsourced workers, Western tech companies and ‘intelligent’ machines.

This exhibition focuses on the human labour and technical infrastructure required to sustain the web’s 24/7 content feed. The collected works explore the so-called ‘democratisation’ of information, and ask in whose interest this narrative serves. Paying attention to the neglected corners of digital culture, the artists here reveal the role of content moderators, book scanners, Google Street View photographers and everyday users in keeping images in circulation.

The exhibition considers the changing status of photography, as well as the agency of the photographer and the role of the viewer within this new landscape. The artists involved draw attention to the neglected corners of image production, making visible the vast infrastructure of digital platforms and human labour required to support the endless churn of selfies, cat pics and memes.

Taking its title from a Donald Trump quote, All I Know Is What’s On The Internet considers the digital conditions under which photography is produced, and the bodies and machines which help automate the flow of visual content online. Set against Silicon Valley’s desire to automate the processing of human knowledge, the exhibition seeks to make visible ‘the human in the algorithm’.

All I Know Is What’s On The Internet presents a radical exploration of photography when the boundaries between truth and fiction, machine and human are being increasingly called into question.

#Brigading_Conceit

The enormousness of the subject they’re tackling meant that each exhibit, object or installation required a lot of explanation. Take #Brigading_Conceit (2018) by Constant Dullaart.

#Brigading_Conceit (2018) by Constant Dullaart. Aluminium, automotive coating, forex, SIM cards, vesa mounts. Courtesy of Upstream Gallery Amsterdam

It’s a very big installation hanging on a wall and looks, to me, like the cover of a laptop computer. In fact:

#Brigading_Conceit uses some of the thousands of SIM cards the artist purchased while building an army of fake followers on Facebook and Instagram. The most valuable fake accounts are PVAs (Phone Verified Accounts), which are registered on phone numbers bought in bulk in multiple countries. After the account is verified via SMS message, the SIM cards are often sold for the scrap value of the gold in the chip. Providing physical evidence of the industrial scale on which fake accounts are made, Dullaart embeds these SIMs in different materials, using arrangements reminiscent of army formations. The resulting compositions are representations of brigades made from artificial identities, a series of ‘standing armies’ to be deployed in ongoing and future information wars. Each image of the work tagged and uploaded to Instagram will attract the attention of Dullaart’s army, who will bestow likes and automated comments. The semi-reflective surface reveals the form of each photographer whilst concealing their vanity in the effort of harvesting social feedback.

Quite a lot to take in, isn’t it?

And then, having read it all, looking back up at this butterfly made of silver laptop covers… what exactly are you to think? (It crossed my mind that Dullaart might be a spoof name: Dull Art.)

IOCOSE A Crowded Apocalypse

IOCOSE, A Crowded Apocalypse (2012)

This is, as you can see, a set of 18 photos arranged in three rows and six columns. As the wall label explains:

Crowdsourcing platforms such as Amazon’s Mechanical Turk provide a means for outsourcing small, repetitive tasks (‘micro-tasks’) to a distributed online workforce. These platforms were used by IOCOSE to assemble a crowd which would create its own conspiracy and then protest against its protagonists and effects. Firstly, the artists hired hundreds of anonymous workers to generate a set of symbols, companies, religious groups and mythical creatures. These were combined into a series of slogans and conspiracy theories by another set of workers. In the final stage, further workers photographed themselves taking to the streets protesting against this global conspiracy.

By operating as ‘artificial artificial intelligence’ (as Amazon touts its platform), the workers transform a practice of activism into a mechanical process. The result is a collection of singular, anonymous protests, whose slogans and claims barely make sense. The workers, and the people around them, appear at the same time as victims and beneficiaries, actors and spectators of network technologies.

Nothing Personal

Or take the wall of the gallery which was completely covered in a ‘wallpaper’ collage of imagery and texts from the brave new digital world, and titled Nothing Personal (2014-15) by Mari Bastashevski.

Nothing Personal (2014-15) by Mari Bastashevski

Apparently,

In the past decade, the industry that satisfies governments’ demands for surveillance of mass communications has skyrocketed, and it is one of today’s most rapidly expanding markets. Most surveillance technologies are produced by American, European and Israeli companies and sold to anonymous clients and law enforcement agencies across Africa, Asia, Latin America and the Middle East.

While most of these products are undetectable by design, the industry has developed a collective corporate aesthetic using detached technical jargon, stock photography and sanitised clip-art. Nothing Personal presents material from over 300 surveillance companies, including fragments of correspondence between their employees and clients which the artist found online.

On closer inspection, the people working within these companies – from the spaces they occupy to the emails they send – seem to match the very image of the ‘enemy’ depicted by their own marketing.

World Brain

World Brain (2015), by Degoutin & Wagon, is an installation of wooden logs scattered with wood chips, surrounded by small piles of books, with video screens on the wall.

Installation view of World Brain (2015) by Degoutin & Wagon

Explanation:

World Brain is a sprawling journey into the architecture of data centres, the collective intelligence of kittens, high-frequency trading and the creation of transhuman rats. Mixing documentary film and fiction, the artists explore the utopian dreams and ideologies which underpin the idea of a worldwide network and the development of collective intelligence.

The film is presented here in two parts, with accompanying literature. Part one (21 mins 8 secs) is a journey into the physical spaces of the Internet, exploring the complex structure of global Internet traffic. The second part (51 mins 54 secs) follows the wanderings of a group of researchers who try to survive in the forest using Wikipedia, with the ultimate aim of securing the survival of humankind.

World Brain is also available to watch online at: tpg.org.uk/worldbrain

Ironically, when I tried to access this URL, I found the video is unavailable and got this message:

This video contains content from Arte, who has blocked it in your country on copyright grounds.

Which may, or may not, be part of the work itself. Or an ironic comment on the work. Or the internet. Or something.

So this is an intensely cerebral exhibition, in the sense that you really have to focus on each of the works, read the explanatory text carefully, and then bring quite a lot of intelligence and knowledge of the subject to bear on each piece to assess whether they ‘work’ for you.

A view

I have spent the past eight years working on the intranets and public websites and password-protected portals of four British government departments and agencies.

I have attended countless meetings, seminars and conferences about website design, data management and security, about government usage of social media, about how to convey messages or get users hooked on your website, and so on.

In fact I myself ran a 6-month programme of weekly seminars for the content team of a big government website on subjects like how to use Facebook and the rest of social media to transmit government messages, how to gather data about users, analyse and convey messages better, etc.

And for two years I was a data analyst on the password-protected portal of a major UK government department, doing elaborate number crunching, producing infographics for all sorts of data, and merrily ‘repositioning’ the numbers to support the ‘official narrative’ put out by our department.

So I have a reasonable grasp of digital issues and I have, from the start, been extremely sceptical about the internet, about social media, and especially about mobile phone technology.

I refuse to own a smartphone because a) I don’t want to become addicted, b) I want to relate to the world around me instead of staring at a tiny screen all the time, and c) I don’t want to be bugged, surveilled, followed and have all my personal data harvested.

All in all, I am confident that I understand the world these artists are portraying and that I understand a lot of the issues they’re addressing. I have grappled in person with some of them, as part of my job.

But I found it hard to get very worked up about any of the actual art on show and went away wondering why.

I think it’s something to do with accessibility. Web accessibility is a subject I’ve worked with personally, trying to present government information more clearly, both visually and textually. Even the dimmest of users must be able to read the text and use the transactions on a government website.

Whereas hardly any of the works on display here seemed very accessible. None of them leapt straight out and made me think, ‘Yes, that’s the issue, that’s what we need to be saying / exploring / addressing’.

Instead I found it ironic that in the supposed Age of the Image, all of these works and installations required such a lot of text to get their point across.

There were quite a few younger visitors in evidence (unlike most of the ‘traditional’ art exhibitions I visit, which are dominated by old age pensioners).

Maybe this is art for a younger generation than me, accustomed to swiping screens, skimming information, cherry picking text. Maybe a lot of these issues and ideas will be new to them, or they are so accustomed to smartphones and apps and processing information, that the works will leap out and say something meaningful to them.

My over-riding sense of the Digital Age we live in is that most people, by now, know that Amazon, Facebook, Twitter and their phone providers are morally compromised, tax-evading, High Street-destroying, personal-information-harvesting, creepy multinational companies, but…

It’s just so handy being able to order something from Amazon Prime, or send messages to your Facebook group, or share photos of your party on Instagram…

And none of the revelations about how smartphones track your movements and your conversations seem to have made the slightest dent in smartphone ownership or usage.

My sense is that most people just don’t care what iniquities these companies carry out, as long as their stuff turns up next day and they can share their photos for free.

It was a brave effort to put on an exhibition like this. I didn’t really like the works on show. Maybe others will.

Participating artists

  • Mari Bastashevski
  • Constant Dullaart
  • IOCOSE
  • Stephanie Kneissl & Max Lackner
  • Eva & Franco Mattes
  • Silvio Lorusso & Sebastian Schmieg
  • Winnie Soon
  • Emilio Vavarella
  • Stéphane Degoutin & Gwenola Wagon
  • Andrew Norman Wilson
  • Miao Ying

Related links

Reviews of other Photographers’ Gallery exhibitions
