Listen up! Listen up! American artist, geographer and author Trevor Paglen has big news for everyone! He is here to tell us that artificial intelligence may not be a totally wonderful, life-enhancing, fair and just invention after all! He is an American, so he has special insight into the interweb, and he is here to explain.
AI networks
Trev takes as his starting point the way Artificial Intelligence networks are taught how to ‘see’, ‘hear’ and ‘perceive’ the world by engineers who feed them vast ‘training sets’.
Standard ‘training sets’ consist of images, video and sound libraries that depict objects, faces, facial expressions, gestures, actions, speech commands, eye movements and more. The point is that the way these objects are categorised, labelled and interpreted is not value-free. In other words, the human categorisers have to bring in all kinds of subjective and value judgements – and this subjective element can lead to all kinds of wonky outcomes.
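For the technically minded, ‘feeding a network a training set’ boils down to something like the sketch below – a minimal PyTorch example, not Paglen's or ImageNet's actual pipeline, with a hypothetical folder layout standing in for the labelled categories:

```python
# Minimal sketch of supervised image training. The 'knowledge' the network
# acquires is dictated entirely by the labels a human assigned to each folder.
# Paths and class names are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# The directory layout IS the labelling: training_set/apple/...,
# training_set/bad_person/... Whoever named these folders decided
# what the network will learn to 'see'.
dataset = datasets.ImageFolder("training_set/", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(dataset.classes))
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:          # labels are just folder-name indices
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                    # the human judgement call propagates
    optimiser.step()                   # into millions of weights
```

Notice that nothing in the loop questions the labels: whatever value judgements went into the folder names are simply baked in.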
Thus Trev wants to point out that the ongoing development of artificial intelligence is rife with hidden prejudices, biases, stereotypes and just wrong assumptions. And that this process starts (in some iterations) with the scanning of vast reservoirs of images. Such as the one he’s created here.
Machine-seeing-for-machines is a ubiquitous phenomenon, encompassing everything from facial-recognition systems conducting automated biometric surveillance at airports to department stores intercepting customers’ mobile phone pings to create intricate maps of movements through the aisles. But all this seeing, all of these images, are essentially invisible to human eyes. These images aren’t meant for us; they’re meant to do things in the world; human eyes aren’t in the loop.
From apple to anomaly
So where’s the work of art?
Well, the Curve is the long tall curving exhibition space at the Barbican which is so uniquely shaped that the curators commission works of art specifically for its shape and structure.
For his Curve work Trevor has had the bright idea of plastering the long curving wall with 35,000 (!) individually printed photographs pinned in a complex mosaic of images along the immense length of the curve. It has an awesome impact. That’s a lot of photos.
As the core of his research and preparation, Trev spent some time at ImageNet. This is one of the most widely shared, publicly available collections of images out there – and it is also used to train artificial intelligence networks. It's available online, so you can have a go searching its huge image bank.
Apparently, ImageNet contains more than fourteen million images organised into more than 21,000 categories or ‘classes’.
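Those ‘classes’ aren't invented from scratch, by the way: ImageNet organises its categories using synsets from the WordNet lexical database, each identified by an ID such as ‘n07739125’. Here's a minimal sketch of turning one of those IDs back into words, using NLTK's copy of WordNet – the specific ID is the one commonly cited for ‘apple’, so treat it as illustrative:

```python
# Map an ImageNet-style ID ('n' + 8-digit WordNet offset) back to words.
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def synset_words(imagenet_id: str):
    """Return the lemmas and gloss for an ImageNet synset ID."""
    pos, offset = imagenet_id[0], int(imagenet_id[1:])
    synset = wn.synset_from_pos_and_offset(pos, offset)
    return synset.lemma_names(), synset.definition()

# 'n07739125' is commonly cited as the ImageNet ID for 'apple' (the fruit)
print(synset_words("n07739125"))
```

The lemmas and glosses come from WordNet, itself compiled by humans – so the judgement calls start even before anyone attaches a single photograph.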
In most cases, the connotations of image categories and names are uncontroversial, e.g. ‘strawberry’ or ‘orange’, but many others are ambiguous and/or a question of judgement – such as ‘debtors’, ‘alcoholics’ and ‘bad people’.
As the old computer programming cliché has it: ‘garbage in, garbage out.’ If artificial intelligence programs are being taught to teach themselves based on highly questionable and subjective premises, we shouldn’t be surprised if they start developing all kinds of errors, extrapolating and exaggerating all kinds of initial biases into wild stereotypes and misjudgements.
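You can demonstrate ‘garbage in, garbage out’ in a dozen lines. The sketch below is entirely synthetic – invented numbers, no real dataset – but it shows the mechanism: if the labeller's prejudice is encoded in the training labels, the model will faithfully learn to reproduce it:

```python
# Toy demonstration of 'garbage in, garbage out': a classifier trained on
# labels contaminated by a labeller's prejudice learns the prejudice too.
# All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Feature 0 is genuinely informative; feature 1 is an irrelevant attribute
# (a stand-in for, say, postcode or skin tone in a real dataset).
X = rng.normal(size=(1000, 2))
y_true = (X[:, 0] > 0).astype(int)

# The human labeller 'corrects' some labels based on the irrelevant attribute...
y_labelled = np.where(X[:, 1] > 1.0, 1, y_true)

model = LogisticRegression().fit(X, y_labelled)
print(model.coef_)  # ...and the model dutifully learns to weight it too
```

Run it and the coefficient on the irrelevant feature comes out clearly non-zero: the bias went in with the labels, so it comes out with the predictions.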
So the purpose of From Apple to Anomaly is to ‘question the content of the images which are chosen for machine learning’. These are just some of the kinds of images which researchers are currently using to teach machines about ‘the world’.
Conceptually, it seemed to me that the work doesn’t really go much further than that.
It has a structure of sorts: when you enter, the first images are of the uncontroversial ‘factual’ type – specifically, images of the simple concept ‘apple’.
Nothing can go wrong with images of an apple, right? Then as you walk along it, the mosaic of images widens like a funnel with a steady increase of other categories of all sorts, until the entire wall is covered and you are being bombarded by images arranged according to (what looks like) a fairly random collection of themes. (The themes are identified by black cards with clear white text, as in ‘apple’ below, which are placed at the centre of each cluster of images.)
Having read the blurb about the way words, and AI interpretations of words, become increasingly problematic as the words become increasingly abstract, I expected that the concepts would start simple and become increasingly vague. But the work is not, in fact, like that – it's much more random, so that quite specific categories – like ‘paleontologist’ – can be found at the end while quite vague ones crop up very early on.
There was a big cluster of images around the word ‘pizza’. These looked revolting, but it was getting close to lunchtime and I found myself mysteriously attracted to the forty or fifty images depicting ‘ham and eggs’. Mmmm. Ham and eggs, yummy.
Conclusions
Most people are aware that Facebook harvests their data, just like Google and all the other big computer giants, Twitter, Instagram, blah blah. The disappointing reality for deep thinkers like Trev is that most people, quite obviously, don't care. As long as they can instant message their mates or post photos of their cats for the world to see, most people don't appear to give a monkey's what these huge American corporations do with the incalculably vast tracts of data they harvest and hold about us.
I think the same is true of artificial intelligence. Most people don't care because they don't think it affects them now or is likely to affect them in the future. Personally, I'm inclined to agree. When I read articles about artificial intelligence, particularly articles about the possible stereotyping of women and black people – i.e. the usual victims – a number of objections occur to me:
1. American bias
These articles and books are written by Americans and feature examples from America. And when you dig deep you tend to find that AI, insofar as it is applied in the real world, tends to exacerbate inequalities and prejudices which already exist. In America. The examples about America's treatment of its black citizens, or the poor, or the potentially dreadful implications of computerised programmes on healthcare, specifically for the poor – all these examples tend to be taken from America, which is a deeply and distinctively screwed-up country. My point is that a lot of the scaremongering about AI turns out, on investigation, really to reflect the scary nature of American society, its gross injustices and inequalities.
2. Britain is not America
Britain is a different country, with different values, run in different ways. I take the London Underground or sometimes the overground train service every day. Every day I see the chaos and confusion as large-scale systems fail at any number of pressure points. The idea that learning machines are going to make any difference to the basic mismanagement and bad running of most of our organisations seems to me laughable. From time to time I see headlines about self-driving or driverless cars, sometimes taken as an example of artificial intelligence. OK. At what date in the future would you say that the majority of London’s traffic will be driverless cars, lorries, taxis, buses and Deliveroo scooters? In ten years? Twenty years?
3. The triviality of much AI
There's also a problem with the triviality of much AI research. After visiting the exhibition I read a few articles about AI and quickly got bored of reading how supercomputers can now beat chess grandmasters or world champions at the complex game of Go. I can hardly think of anything more irrelevant to the real world. Last year the Barbican itself hosted an exhibition about AI – AI: More Than Human – but the net result of the scores of exhibits and interactive doo-dahs was how trivial and pointless most of them were.
4. No machine will ever ‘think’
And this brings us to the core of the case against AI, which is that it's impossible. Creating any kind of computer programme which ‘thinks’ like a human is, quite obviously, impossible. This is because people don't actually ‘think’ in any narrowly definable sense of the word. People reach decisions, or just do things, based on thousands of accumulated impulses and experiences, unique to each individual, and so complicated and, in general, so irrational, that no program or model can ever capture them. The long detailed Wikipedia article about artificial intelligence includes this:
Moravec’s paradox generalizes that low-level sensorimotor skills that humans take for granted are, counter-intuitively, difficult to program into a robot. The paradox is named after Hans Moravec, who stated in 1988 that ‘it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility’.
Intelligence tests, playing chess or Go – tasks with finite rules of the kind computer programmers understand – are relatively easy to programme. The infinitely complex billions of interactions which characterise human behaviour – impossible.
5. People are irrational
I’ve been studying art and literature and history for 40 years or so and if there’s one thing that comes over it is how irrational, perverse, weird and unpredictable people can be, as individuals and in crowds (because the behaviour of people is the subject matter of novels, plays, poems and countless art works; the really profound, bottomless irrationality of human beings is – arguably – the subject matter of the arts).
People smoke and drink and get addicted to drugs (and computer games and smartphones), people follow charismatic leaders like Hitler or Slobodan Milosevic or Donald Trump. People, in other words, are semi-rational animals first and, only a long, long way afterwards, rational thinking beings – and even then only rational in limited ways, around specific goals set by their life experiences or jobs or current situations.
Hardly any of this can be factored into any computer program. I am currently working in the IT department of a large American corporation, and what I see every day, repeatedly, throughout the day, is what I’ve seen in all my other jobs in IT and websites and data, which is that the ‘users’, damn their eyes, keep coming up with queer and unpredicted ways of using the system which none of the program managers and project managers and designers and programmers had anticipated.
People keep outwitting and outflanking the computer systems because that's what people do – not because any individual person is particularly clever but because, taken as a whole, people here, there and across the range stumble across flaws, errors, glitches, bugs and unexpected combinations, and don't do what ultra-rational computer scientists and data analysts expect them to. Dammit!
6. It doesn’t work
The most obvious thing about tech is that it's always breaking. I am currently working in the IT department of a large American corporation. This means being on the receiving end of a never-ending tide of complaints and queries about why this, that or the other functionality has broken. The same was true of all the other website jobs I've had. The biggest eye-opener for me working in this sector was to learn that things are always broken; there are always bugs and glitches and sometimes quite large structural problems, all of which have to be ranked and prioritised, and then we get round to fixing them when we have a) developer time and b) budget.
As a tiny confirmation, I have been trying to access ImageNet, the online image bank at the core of this work of art, and guess what? For two days in a row it hasn't been working; I've got the message: ‘ImageNet is under maintenance. Only ILSVRC synsets are included in the search results.’ Exactly. QED.
7. Big government, dumb data
I have worked for UK government departments and big government agencies for 14 years and my takeaway from the experience is that it isn’t artificial intelligence we should be frightened of – it is human stupidity.
Working inside the civil service was a terrifying insight into how naturally people in groups fall into a kind of bureaucratic mindset, setting up meetings and committees with minutes and notes and spreadsheets and presentations, and how, slowly but steadily, the ability to change anything or get anything done is strangled to death. No amount of prejudice or stereotyping in image recognition – to take the anti-AI campaigners' biggest worry – will ever compete with the straightforwardly bad, dumb, badly thought out, terribly implemented and often cack-handedly horrible decisions which governments and their bureaucracies take.
Take Theresa May's campaign of sending vans round the UK telling unwanted migrants to go home. Or the vast IT catastrophe which is Universal Credit. For me, any remote and highly speculative threat about the possibility that some AI programs may or may not be compromised by partial judgements and bias is dwarfed by the bad judgements and stereotyping which characterise our society and, in particular, our governments, in the present, in the here-and-now.
8. Destroying the world
Following this line of thought to its conclusion, it isn't artificial intelligence which is opening a new coal-fired power station every two weeks, building a hundred new airports, manufacturing 75 million new cars and burning down tracts of rainforest the size of Belgium every year. The meaningful application of artificial intelligence is decades away, whereas good old-fashioned human stupidity is destroying the world here and now in front of our eyes, and nobody cares very much.
Summary
So. I liked this piece, but not because of the supposed warning it makes about artificial intelligence – the obvious criticism of From Apple to Anomaly is that, apart from a few paragraphs on one wall label, it doesn't really give you very much background information to get your teeth into or ponder. No, I liked it because:
- it is huge and awesome and an impressive thing to walk along – so American! so big!
- and because its stomach-churning glut of imagery is testimony to the vast, unstoppable, planet-wasting machine which is humanity
Related links
- From ‘Apple’ to ‘Anomaly’ by Trevor Paglen continues at the Barbican until 19 January 2020
- ‘Facebook, in fact, is the biggest surveillance-based enterprise in the history of mankind’: John Lanchester article in the London Review of Books
- Big Tech’s Big Defector article in the New Yorker