AI: More than Human @ Barbican

What a fabulously enjoyable funfair of an exhibition, even if it isn’t quite the searching investigation or revealing insight into its subject which the curators hoped it would be.

Do you remember the science fiction exhibition the Barbican put on two years ago, Into The Unknown? It filled the long, narrow, curving exhibition space they call The Curve with loads of sci fi books, magazines and screens showing clips from classic sci fi movies and TV shows (Star Wars, Star Trek and so on), along with models of the spaceships, and some of the actual outfits and spacesuits worn by famous sci fi characters. It was geek heaven!

Well, now that whole exhibition looks a bit like the introduction, the part one, to this exhibition’s part two. Where Into The Unknown romped through retro visions of the future, from Jules Verne and H.G. Wells to ‘2001: A Space Odyssey’ and ‘Blade Runner’, AI: More than Human packs out the same curving exhibition space with a jamboree of interactive gadgets which explore sci fi aspects of the present and the near future, in particular the notion of artificial intelligence or AI for short.

The exhibition space is absolutely crammed with robots large and small, classic movie clips looming down from overhead screens, videos showing the latest AI research in agriculture or undersea exploration, plus a dozen or more games and touch screen programs you can get involved in – the whole busy funfair of exhibits claiming to be an investigation of how artificial intelligence dominates our current existences and will do so more and more in the near future.

Installation view of AI: More than Human at the Barbican showing Alter 3: Offloaded Agency (Photo by the author)

For example, there’s a photo booth just like the ones you traditionally get your passport photos from, except that in this one you have to type a word of your own choosing into the instruction pad, then pose for the photo. The booth then generates – from your one word – a unique ‘poem’ which it prints out over the photo it’s taken of you. Prints the pic out for you to show your friends. Emails it to you, if you want to share your email address. The idea is that the program running it will slowly build up a database of people’s key words and that this will influence the evolution of its poetry-writing skills.

Each section of the long curved exhibition space is marked off with translucent white hangings. One little section is devoted to the fact that a computer program, DeepMind’s AlphaGo, beat the world champion at Go, the Chinese board game, in 2016. The space includes a big video screen showing the world champion pushing through throngs of admirers while, at waist height, is a table containing several monitors showing a Go board and counters. One of these monitors showed the fatal move which stunned the Go champion and the Go world with its unexpected brilliance. On others, I think you were meant to have a go at Go against the computer, if you wanted. Personally, I’ve no idea what the rules of Go are and not much interest in finding out.

Installation view of AI: More than Human at the Barbican showing the Go section: a tense Go fan on a screen hanging above the table into which are embedded several monitors showing games of Go. Note the translucent white curtains used through the exhibition (Photo by the author)

In another little alcove I was surprised to come across a couple of two- or three-foot-wide Lego boards. In front of them were a number of ‘wells’ containing Lego pieces of different sizes and colours, and behind the boards were screens showing a series of metrics. The idea is to ‘build a city’ using the Lego pieces; the computer then senses the design and layout you’ve created and assesses its social parameters, such as Quality of Life, Employment, Percentage of Highly Educated and so on. It was difficult to see how this information could be generated from a few toy bricks positioned at random, or how it would be applied in real-world situations where, presumably, there would already be existing measurements of quality of life, employment rate and so on. The whole thing was titled Kreyon City.

Installation view of AI: More than Human at the Barbican showing the Kreyon City installation (Photo by the author)

In a self-contained alcove was an artwork by Stephanie Dinkins which consisted of a black pot with ‘Do not touch’ written on it. Being human and not a robot, I immediately wanted to touch it. Behind it, on the wall, was a large video screen showing, when I strolled in, a big picture of a row of ladies’ hats in a hat shop. The visitor assistant manning this little stall apologised and said the installation was broken, so I wandered round the pot and out again, none the wiser.

Paradox 6554 by Stephanie Dinkins at AI: More than Human at the Barbican

Another stand featured a play area a few yards wide across which a cute little robot ‘puppy’ trotted till it bumped into one of the raised edges, turned round and trotted off in another direction. A French TV presenter was very excitedly explaining the point of this cute little toy to his viewers and rolled a red ball towards the puppy, which ignored it.

Just beyond the main exhibition space is a row of four black leather chairs set in front of immersive, split computer-game screens. You put on headphones, take the console in your hands and then navigate through a computer-generated environment based on the architecture of the Barbican itself. As you go downstairs you enter increasingly futuristic fictional environments. Personally, I have never seen the point of computer games, and watching my son fritter away a lot of his teenage years holding just such consoles while he eviscerated vast numbers of enemy warriors in Rome: Total War or League of Legends has put me off them for life. There didn’t appear to be any guns or swords in this game, so my son wouldn’t have been interested.

Installation view of AI: More than Human at the Barbican (Photo by the author)

Early on in the show there was a timeline on the wall showing key moments in mankind’s quest to create artificial intelligence, starting sometime around the writing of Frankenstein, carrying on through computing pioneer Ada Lovelace and the famous Alan Turing, through the women who worked at Bletchley Park during the war, into the modern age of computer research, increasingly carried out in America and Japan, and on to contemporary digital technology.

Installation view of AI: More than Human at the Barbican showing the timeline of computers and AI technology (Photo by the author)

Probably the most dramatic attraction came towards the end: a life-size robot with a prosthetic head which waved its arms around in front of a large screen showing atmospheric shots of Japanese technicians interacting with it, giving the whole installation a very filmic vibe.

Installation view of AI: More than Human at the Barbican (Photo by the author)

Throughout the exhibition there was a wealth of wall labels briefly addressing issues surrounding artificial intelligence. I give a flavour of these in the précis of the press release, below.

None of them really told me anything I didn’t already know. None of them really told me what artificial intelligence is. I didn’t read all of them, but nowhere did I come across a memorable definition. Instead we were eased into the idea by the opening section, which described the golem, a medieval legend of a human-shaped creature created from inanimate matter. Its story was told through some Marvel and DC superhero comics and I was immediately distracted by a set of big video screens showing clips from classic 1920s and 30s silent sci fi and horror films.

The whole exhibition felt a bit like that. Consecutive thought was everywhere sacrificed to pop culture and flashy effects. But as I marvelled at the big rack of cogs which was part of one of the decoding machines at Bletchley, or admired the role of the women who are often overlooked in official histories of computing, or watched a middle-aged man in what appeared to be a simulator of a racing car, or looked at a miniature greenhouse in which plants were growing, their temperature and humidity all controlled by computer – what began to impress itself on me really forcefully was the possibility that there is no such thing as artificial intelligence.

Sure enough, the digital world is now full of algorithms which can predict what you want to buy next or your personality type and so on (if you let them access enough of your personal data). Personally, I don’t have a smartphone and don’t use Facebook, Twitter or any other social media, for precisely this reason.

But none of us are likely to escape the increasing use of facial recognition programs, and one feature seemed to be able – if you stood in the right position – to do a full body scan of you and tell you what kind of fabric your clothes are made from. Right at the entrance to the Barbican was an enormous video screen and, if you stand on a circular manhole-cover-sized pad and jig around, abstract shapes on the screen perform exactly the same movements, as if a piece of modern sculpture had come to life.

But absolutely none of these clever gadgets has a mind, has purpose or intention or agency. None of these devices can choose what it is doing, or is in the slightest bit aware that it is a machine performing a function.

Programs which are designed to monitor the data they’re processing and change the program itself in light of that data – self-correcting or improving algorithms – can have dramatic effects, but… none of them amount to anything even remotely resembling intelligence.
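What such a self-correcting program actually amounts to can be made concrete in a few lines. This is purely my own illustration, not anything from the exhibition: a loop that nudges a number to shrink its own error score.

```python
# A minimal sketch of a "self-correcting" program: it fits y = w * x
# by repeatedly nudging w to shrink its own prediction error.
# There is no understanding here -- just arithmetic in a loop.

def fit_slope(data, steps=1000, lr=0.01):
    """Learn a slope w from (x, y) pairs by repeated error correction."""
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            error = w * x - y        # how wrong the current guess is
            w -= lr * error * x      # adjust w to be slightly less wrong
    return w

# Data generated by the rule y = 2x; the program "discovers" w close to 2
data = [(1, 2), (2, 4), (3, 6)]
print(round(fit_slope(data), 3))
```

That really is the whole trick: the ‘improvement’ is a number drifting towards a smaller error, nothing more.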

They are just very thorough face recognition, or clothes recognition, or Lego recognition, or word recognition programs. In the same way, the big robot at the end which can wave its arms about is a million miles away from being human, from being a self-conscious, aware being.

I wondered if my reaction was just me being jaded and cynical but then I happened to get into conversation with a BBC science journalist and a friend of his, who both know a lot more than me about this area.

They referenced the classic 1974 paper by the philosopher Thomas Nagel titled ‘What is it like to be a bat?’ which, apparently, says that even if bats have something we might call ‘intelligence’, it would be of such a completely different type, evolved to perfectly suit bats and their batty situation, that we wouldn’t recognise it anyway, hopelessly programmed as we are to think solely in terms of human values and goals.

The BBC guy’s friend then referenced the philosopher Peter Singer’s work on animal rights to argue that, even if we ever did manage to create a self-starting, self-directed form of intelligence, would we not then be guilty of slavery? If we created something that genuinely had heart and soul and emotions and yearnings – would we not be immediately duty bound to ‘set it free’?

But even thinking about it like this makes you realise how absurdly far we are from a situation like that. Programs and machines and devices which can mimic our movements and project them up onto video screens – these are fabulous as artworks, but in the end, all I saw at the exhibition was toys, glorified toys.

Mimic (concept), 2018, by Universal Everything. Image courtesy of Universal Everything

I was relieved by this little conversation which confirmed my opinion that the exhibition contains lots of fun fairground attractions, eye-catching news snippets (computer beats Go champion, Stephen Hawking signs a petition warning governments against weaponising artificial intelligence), and distracting movie clips (right at the start there’s a screen showing a montage of pretty much every movie in which an android or robot turns on its human makers, from Blade Runner to Ex Machina), and lots of featurettes about self-guiding robots which can explore the bottom of the oceans, or monitor growing conditions in greenhouses – but somehow all this gallimaufry of festival fun manages not, in the end, to be that penetrating or insightful.

I got talking to one of the curators of the exhibition and asked what one thing she’d learned from the year or more they’d been preparing it. She said, ‘Not to be afraid of AI’.

She said here in the West, there’s a long tradition of fear of robots and computers (fears not allayed, it must be said, by the numerous movie clips of robots strangling people which greet you as you walk in).

But by contrast, she said that one of the curators was Japanese and it had been a real eye-opener for her to see the completely different approach the Japanese have to new technology. Possibly it is because of their Shinto traditions, according to which the world is full of spirits, but the Japanese seem to be more open and receptive to the idea that we are on the verge of developing new types and forms of intelligence. For us in the West, this immediately prompts headlines about Frankenstein. For the Japanese, she said, these new developments are to be welcomed into a world already full of various types of technology.

That was an interesting insight into Japanese culture. But I couldn’t help noticing how she, like all the wall labels and exhibition promo material, said that we are on the verge of a brave new world where there will be trans-humans incorporating digital technology, or cities will run themselves, cars drive themselves and so on and so on.

I was a big fan of science fiction in the 1970s, I watched Tomorrow’s World every week, and they told us then that robots were about to take over all the boring chores of life, that soon cities would be run by computers and that this would usher in The Leisure Society – an age where everything was done for us by smart bots and so the biggest struggle people would have would be finding ways to fill all their leisure time. Everyone would become poets and playwrights and artists. It would be utopia. And what followed all this technological utopianism? The 1980s of Mrs Thatcher and Ronald Reagan. Robot technologies were introduced in some car manufacturing plants, but they were a drop in the ocean compared to the mass unemployment and social crises, to the Miners’ Strike and the Poll Tax riots. The failure of the technological utopianism of the 1970s inoculated me for life against believing a word of the prophets of Shiny New Societies until I actually see them.

Meanwhile what I see is the destruction of countless ecosystems, the extermination of species at an unprecedented rate, the irreversible heating of the atmosphere, the poisoning of the oceans, and the new digital technology being used by China to control its population and Russia to launch cyber-attacks on its enemies.

That is the actual existing world which we live in and no sweet little robot puppy or booth which prints rubbish poems over your passport photo or big monitor screens on which shapes dance around mimicking your movements, are going to change it.

What a Loving and Beautiful World

Just like the Into The Unknown exhibition, elements of the show are scattered beyond the Curve, in the entrance space and foyer – where a film is running of a dancer whose movements are copied by sensors, and where there’s a tall pulsing sculpture called Totem. But the best thing is downstairs in the space they call The Pit.

Here, in a big square room, a Japanese art collective called teamLab have installed a wonderful thing – projected onto the four walls is a continual slow flow of colour washes, down which move large images of Chinese characters i.e. letters from Chinese script. If you reach out your hand and the shadow of your hand touches one of these characters it gently explodes releasing a plume of images. Thus I reached out and the shadow of my hand touched a Chinese character as it slowly moved down the wall and – it disappeared in a puff of smoke and a covey of brightly coloured birds appeared and started flying round the walls!

If someone else happens to have touched the character for ‘tree’, the birds you’ve released will fly round the walls and go and roost in the tree. Touching another character released a flourish of butterflies which fluttered round the wall. All this is accompanied by a soundtrack of very chilled Oriental music consisting of just a flute and maybe a cymbal or two, very soft, very mellow, very calming.

I’ve been subjected to many interactive installations in my time, but I think this might be the most genuinely interactive, and certainly the most mellow and blissful, I’ve ever experienced. I couldn’t for the life of me, though, see what it had to do with ‘artificial intelligence’. Rather it is just (I say ‘just’ – it is immensely impressive) use of advanced but still non-conscious, non-self-correcting computer programming.

Installation view of What a Loving, and Beautiful World, part of AI: More than Human at the Barbican (Photo by the author)

Thoughts

I went round the exhibition twice and nothing I read on any of the wall labels and none of the interactive exhibits really explained artificial intelligence to me, or the current state of research into artificial intelligence. Instead I was distracted from distractions by more distractions. It was decades ago – 1997 – that IBM’s computer Deep Blue beat world chess champion Garry Kasparov at chess. Did it rock my world? Now DeepMind’s AlphaGo has beaten the world Go champion. Somehow I can’t get excited.

I couldn’t help thinking that if a metal robot waving its arms around and a cute little plastic puppy are the best that contemporary robotics can come up with, the rest of us have nothing to fear. And, if playing with Lego is the best that AI can offer contemporary architecture, isn’t that rather pitiful?

A major risk with creating an exhibition like this, most of which seems to consist of funky digital art works, is that the artworks hugely distract from the actual, intellectual questions we should be asking.

For example, I saw one little monitor tucked away in a corner with a short wall label describing in a superficial way China’s use of digital and social media to define and control its entire population. This is a massive issue, an absolutely enormous development, with huge ramifications for the way the same kind of system of total digital control might possibly be introduced into the West. But it wasn’t explored or followed through.

There was footage of some researchers who’ve developed some kind of deep sea fish robot which learns about its environment. That’s sweet, but news last week revealed that

A retired naval officer dove in a submarine nearly 36,000ft into the deepest place on Earth, only to find what appears to be plastic waste.

We are, in other words, destroying the planet, laying waste to entire ecosystems, burning up the atmosphere and poisoning the oceans far faster than we can develop any kind of technology to stop it.

Downstairs on the other side of the Barbican from the main show was a bar which has been set up with a robot barperson i.e. a robotic arm, which can mix any cocktail you want from a row of liquor bottles in front of it. Is… is that the best they can do? Are the pubs round where I live ever going to have robot bar staff? No.

One of the exhibits showcases the following project:

Massachusetts Institute of Technology (MIT), Woods Hole Oceanographic Institute (WHOI), Australian Center for Field Robotics, and NASA present pioneering research that took place in Costa Rican waters on Schmidt Ocean Institute’s Research Vessel Falkor, using the deep sea as a testbed for exploration of Europa – one of Jupiter’s moons.

Do you really think we are ever going to ‘explore’ Jupiter’s moons? And why would we? We are burning up this planet. Shouldn’t absolutely every scrap of scientific research imaginable be going towards devising non-carbon ways of generating energy, storing energy, non-carbon ways to travel and transport food and goods?

I react to projects like these as I react to Elon Musk’s announcements that he is going to fund a manned expedition to Mars, which is: Why? Is he mad? Why isn’t he spending billions trying to save this planet, the one we all live on?

Another exhibit:

With the consequences of climate change growing in scale every year, MIT’s Open Agriculture Initiative looks at ensuring our food security for the future with their AI-driven ‘personal computer farms’ that optimise the development of crops in tabletop-sized growing chambers. It hopes to bring controlled agriculture into the household, by gathering crop-growing data from a network of farms and sharing it with the wider public.

‘It hopes to bring controlled agriculture into the household’! In my household we can’t even grow cacti on the windowsill. This is never going to be affordable or practical. Those who are interested already grow vegetables in windowboxes or garden beds or their local allotment.

If this is the best contemporary technology has to offer us, we’re doomed.


A précis of the press release

There is so much to see – and the exhibition itself is just part of a wider Barbican season about life and modern technology – that, in the name of spreading information and enlightenment, and also to give the full, official explanation of some of the exhibits I’ve mentioned above, I here give a summary of the press release. I’ve highlighted in bold the exhibits I’ve referred to in my review.

AI: More than Human is part of Life Rewired, the Barbican’s 2019 season exploring what it means to be human when technology is changing everything.

It tells the rapidly developing story of AI, from its ancient roots in Japanese Shintoism through Ada Lovelace and Charles Babbage’s early experiments in computing, to AI’s major developmental leaps from the 1940s to the present day.

The exhibition features some of the most cutting-edge research projects in the field from DeepMind, Jigsaw, Massachusetts Institute of Technology Computer Science Artificial Intelligence Laboratory (MIT CSAIL), IBM, Sony Computer Science Laboratories, Google Arts and Culture, Google PAIR, Affectiva, Lichtman Lab at Harvard, Eyewire, Wake Forest Institute for Regenerative Medicine, Wyss Institute and Emulate Inc.

The exhibition also features commissions by artists, researchers and scientists Memo Akten, Joy Buolamwini, Certain Measures (Andrew Witt & Tobias Nolte), Es Devlin, Stephanie Dinkins, Justine Emard, Alexandra Daisy Ginsberg, Stefan Hurtig & Detlef Weitz, Hiroshi Ishiguro & Takashi Ikegami, Mario Klingemann, Kode 9, Lawrence Lek, Daito Manabe & Yukiyasu Kamitani, Massive Attack & Mick Grierson, Lauren McCarthy, Yoichi Ochiai, Neri Oxman, Qosmo, Anna Ridler, Chris Salter in collaboration with Sofian Audry, Takashi Ikegami, Alexandre Saunier and Thomas Spier, Sam Twidale and Marija Avramovic, Yuri Suzuki, teamLab and Universal Everything.

The exhibition includes digital media, immersive art installations and a chance for visitors to interact directly with exhibits to experience AI’s capabilities first-hand, to examine the subject from multiple, global perspectives and give visitors the tools to decide for themselves how to navigate our evolving world.

The exhibition asks the big questions: What does it mean to be human? What is consciousness? Will machines ever outsmart a human? And how can humans and machines work collaboratively?

Section 1. The Dream of AI

The exhibition charts the human desire to bring the inanimate to life right back to ancient times, from the religious traditions of Shintoism and Judaism to the mystical science of alchemy.

Artist and electronic musician Kode9 presents a newly commissioned sound installation on the golem. A mythical creature from Jewish folklore, the golem has influenced art, literature and film for centuries from Frankenstein to Blade Runner. Kode9’s audio essay adapts and samples from many of these stories of unruly artificial entities to create an eerie starting point to the exhibition. Stefan Hurtig & Detlef Weitz also look at the golem as well as other artificial life forms and how they are imagined in film and television.

This section explores Japanese animism philosophy, including Shinto food ceremonies and a selection of ancient anthropomorphic Japanese cooking tools, shown for the first time outside Japan. Sam Twidale and Marija Avramovic also look at AI through the lens of Japanese Shinto beliefs to explore notions of animism and techno-animism in Sunshowers.

Doraemon – one of the best known Japanese manga animations – will also be on display, exploring its influence on the philosophy of robotics and technology development.

Section 2. Mind Machines

This section explains how AI has developed through history from the early innovators who tried to convert rational thought into code, to the creation of the first neural network in the 1940s, which copied the brain’s own processes, going on to show how this has developed into machine learning – when an AI is able to learn, respond and improve by itself.
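That definition – a network loosely modelled on the brain, which improves from examples – boils down to something like the early perceptron work this section describes: weighted inputs, a threshold, and a rule that shifts the weights after each mistake. A toy sketch of my own (nothing from the exhibition), learning logical AND:

```python
# Toy perceptron in the spirit of the early neural networks described here:
# weighted inputs, a threshold, and an error-driven weight update rule.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights and a bias from (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # +1, 0 or -1
            w[0] += lr * err * x1         # nudge weights toward fewer errors
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Logical AND: only the input (1, 1) should output 1
and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_examples)
print([predict(w, b, x1, x2) for (x1, x2), _ in and_examples])  # [0, 0, 0, 1]
```

Everything since – including the deep networks behind AlphaGo – is, at heart, an elaboration of this learn-from-mistakes loop at vastly greater scale.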

It includes some of the most important moments and figures in AI’s history:

  • computing pioneers Ada Lovelace and Charles Babbage
  • Claude Shannon’s experimental games
  • Alan Turing’s groundbreaking efforts to decipher code in World War II
  • Deep Blue vs chess champion Garry Kasparov
  • IBM’s Watson, who beat a human on US gameshow, Jeopardy! in 2011
  • DeepMind’s AlphaGo, which became the first computer to defeat a professional in the complex Chinese strategy game Go in 2016, including an in-depth explanation of the surprising Move 37 – a turning point in the history of AI, that shocked the world

This section also looks at how AI sees images, understands language and moves, as artificial intelligence developed beyond the brain to the body. Projects on display include MIT CSAIL’s SoFi – a robotic fish that can independently swim alongside real fish in the sea and Sony’s 2018 robot puppy, aibo, who uses its database of memories and experiences to develop its own personality.

Google PAIR’s project Waterfall of Meaning is a poetic glimpse into the interior of an AI, showing how a machine absorbs human associations between words.
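For readers wondering what ‘human associations between words’ means to a machine: in these systems each word is just a list of numbers, and association is measured by the angle between the lists. A hand-made toy illustration (the vectors are invented by me, not taken from Google PAIR’s project):

```python
# Toy illustration of word "associations" in an embedding: each word is a
# list of numbers, and association is the angle between the lists.
# The vectors below are invented by hand purely for illustration.
import math

vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.9],
    "apple": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, lower when apart."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# In this toy space "king" sits closer to "queen" than to "apple"
print(cosine(vectors["king"], vectors["queen"]) >
      cosine(vectors["king"], vectors["apple"]))
```

The machine never knows what a king is; it only knows which lists of numbers point in similar directions.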

Artist Mario Klingemann’s piece Circuit Training invites visitors to take part in teaching a neural network to create a piece of art. Visitors will first help create the data set by allowing the AI to capture their image, then select from the visuals produced by the network, to teach it what they find interesting. The machine is constantly learning from this human interaction to create an evolving piece of live art.

In Myriad (Tulips), artist Anna Ridler looks at the politics and process of using large datasets to produce a piece of art. Inspired by ‘tulip-mania’ – the financial craze for tulip bulbs that swept across the Netherlands in the 1630s, she took 10,000 photographs of tulips and categorised them by hand, revealing the human aspect that sits behind machine learning. Her second piece Mosaic Virus uses this data set to create a video work generated by an AI, which shows a tulip blooming, an updated version of a Dutch still life for the 21st Century.

Myriad (Tulips) by Anna Ridler at AI: More Than Human. Image credit: Emily Grundon, 2019

Section 3. Data Worlds

At the heart of the main exhibition in The Curve is Data Worlds. This section examines AI’s capability to improve commerce, change society and enhance our personal lives. It looks at AI’s real-life application in fields such as healthcare, journalism and retail.

Affectiva, the leader in Human Perception AI, will demonstrate how AI can improve road safety and the transportation experience, through a driving arcade game during which Affectiva’s AI will track drivers’ emotions and reactions as they encounter different situations.

In Sony CSL’s Kreyon City, visitors plan and build their own city out of LEGO and learn how the combination of human creativity and AI could represent a promising tool in major architecture and infrastructure decisions.

Lauren McCarthy’s experiment to become a human version of a smart home intelligence system explores the tensions between intimacy vs privacy, convenience vs the agency they present, and the role of human labour in the future of automation.

Qosmo’s sound artwork creates a dialogue between human and machine by inviting visitors to make music together with AI.

Nexus Studios have produced a series of interactive works that demonstrate how AI works. Visitors can opt to be classified by an AI, revealing how the computer interprets their image. Nexus Studios have collaborated with artist Memo Akten to present Learning to See, which allows visitors to manipulate everyday objects to illustrate how a neural network trained on a specific data set can be fooled into seeing the world as a painting. It can see only what it already knows, just like us.

Data Worlds also addresses important ethical issues such as bias, control, truth and privacy.

Scientist, activist and founder of the Algorithmic Justice League, Joy Buolamwini, examines racial and gender bias in facial analysis software. As a graduate student, Joy found an AI system detected her better when she was wearing a white mask, prompting her research project Gender Shades. This project uncovered the bias built into commercial AI gender classification, showing that facial analysis technology has a heavy bias towards white males. In parallel to this, Joy wrote AI, Ain’t I A Woman – a spoken word piece that highlights the ways in which artificial intelligence can misinterpret the images of iconic black women.

Joy Buolamwini /The Algorithmic Justice League at MIT Media Lab, part of AI: More Than Human. Image credit: Jimmy Day/MIT Media Lab

Section 4. Endless Evolution

The final section of the exhibition looks at the future of our species and envisions the creation of new species, reflecting on the laws of ‘nature’ and how artificial forms of life fit into this. A newly commissioned set of interviews will discuss themes of the future through the eyes of visionary thinkers.

Massive Attack mark the 20th anniversary of their landmark album Mezzanine by encoding the album in strands of synthetic DNA in a spraypaint can – a nod towards founding member and visual artist Robert del Naja’s roots as the pioneer of the Bristol Graffiti scene. Each spray can contains around one million copies of Mezzanine-encoded ink. The project highlights the need to find alternative storage solutions in a data-driven world, with DNA as a real possibility to store large quantities of data in the future.

Mezzanine will also be at the centre of a new sound composition – a co-production between Massive Attack and machine. Robert Del Naja is working with Mick Grierson at the Creative Computing Institute at University of the Arts London (UAL), students from UAL and Goldsmiths College, and Andrew Melchior of the Third Space Agency to create a unique piece of art that highlights the remarkable possibilities when music and technology collide. The album will be fed into a neural network and visitors will be able to affect its sound by their actions and movements, with the output returned in high definition.

This section includes Alter 3, created by roboticist Hiroshi Ishiguro and Kohei Ogawa with artificial life researcher Takashi Ikegami and Itsuki Doi. With a body of a bare machine and a genderless, ageless face, Alter learns and matures through an interplay with the surrounding world.

Justine Emard’s piece Co(AI)xistence explores a communication between different forms of intelligences: human and machine. Through signals, body movements and spoken language, she created the interaction between Alter and Mirai Moriyama, a Japanese performer. Using a deep learning system, Alter learns from his experiences and the two try to define new perspectives of co-existence in the world. (So this explains the film running on the big screen behind the robot waving its arms around.)

Stephanie Dinkins’s new work Not The Only One is the multigenerational memoir of one black American family, with which visitors can have conversations and ask questions, continuing her ongoing dialogue around AI and race, gender and ageing. As society becomes more reliant on artificial intelligence, many voices are left out of the creation of these systems, and bias and discrimination can end up encoded in them. In Not The Only One, the AI is trained on the needs and ideals of communities under-represented in the tech sector.

Architect, designer and MIT Professor Neri Oxman presents ongoing projects from her research lab, The Mediated Matter Group at MIT.

The Synthetic Apiary explores the possibility of a controlled space in which seasonal honeybees can produce honey all year round. Given the steep worldwide decline in bee populations in recent years, this large-scale investigation into the cultivation of bees and their behaviour has significant implications for the future of the human race.

Mediated Matter Synthetic Apiary Honeybee hive in the Synthetic Apiary environment, part of AI: More Than Human at Barbican © The Mediated Matter Group

In an era when we can engineer genomes and design life, Vespers explores what it means to design (with) life. From the relic of the ancient death mask to the design and digital fabrication of an adaptive, responsive living mask, the project points towards an imminent future in which wearable interfaces and building skins are customised not only to fit a particular shape, but also a specific material, chemical and even genetic make-up, tailoring the wearable to both the body and the environment it inhabits.

For the first time in the UK, Japanese media artist Yoichi Ochiai presents projects from his research lab, Digital Nature, including an artificial butterfly.

Resurrecting The Sublime by Christina Agapakis of Ginkgo Bioworks, Alexandra Daisy Ginsberg, and Sissel Tolaas, brings back the smell of flowers made extinct through human activity. The creation of these smells asks questions about our relationship with nature and the decisions we make as a species.

Japanese art and technology specialist Daito Manabe of Rhizomatiks and neuroscientist Yukiyasu Kamitani present Dissonant Imaginary, a research art project investigating the relationship between sound and images. Using fMRI-based (functional magnetic resonance imaging) brain-decoding technology to generate imagery from brain-activity data that changes according to sound, the project seeks to recreate the vivid emotional imagery that can be conjured when listening to a film soundtrack or nostalgic music, and foresees a future in which music and visuals may interact directly with the brain as a new medium.

Massachusetts Institute of Technology (MIT), the Woods Hole Oceanographic Institution (WHOI), the Australian Centre for Field Robotics and NASA present pioneering research conducted in Costa Rican waters aboard Schmidt Ocean Institute’s research vessel Falkor, using the deep sea as a testbed for the exploration of Europa – one of Jupiter’s moons.

With the consequences of climate change growing in scale every year, MIT’s Open Agriculture Initiative looks at securing our food supply for the future with its AI-driven ‘personal food computers’, which optimise the development of crops in tabletop-sized growing chambers. It hopes to bring controlled agriculture into the household by gathering crop-growing data from a network of farms and sharing it with the wider public. Strategic design firm Method display their own take on the concept, using upcycled materials and a modular design to build a durable DIY Food Computer.

This section also looks at the research labs using AI to revolutionise healthcare. The Lichtman Lab at Harvard and Eyewire both work on mapping the brain and on the implications this could have for our health. The Wake Forest Institute for Regenerative Medicine is engineering tissues and organs made from human cells in the lab. The Wyss Institute and Emulate, Inc. present their human Organs-on-Chips technology, whose tiny hollow channels are lined with living human cells and tissues, opening up new understanding of how diseases, medicines, chemicals and foods affect human health and potentially changing forever the way drugs are developed.

The exhibition ends with a short film produced by Mark Gorton, Visionaries, in which thinkers and experts Danielle George, Amy Robinson Sterling, Kanta Dihal, Yoichi Ochiai, Francesca Rossi and Andrew Hessel speak about their visions of the singularity and the future.

Installation view of AI: More Than Human at the Barbican (Photo by the author)

Level G

A series of new commissions run across the Barbican’s Level G spaces throughout the exhibition.

Digital art and design collective Universal Everything take over the Barbican’s main Silk Street entrance hall with a new installation, Future You, in which visitors can interact with an AI version of themselves. Large digital avatars mimic visitors’ movements onscreen. The character begins the run in a primitive, childlike form and evolves over the course of the exhibition as it learns new physical abilities.

Chris Salter’s piece Totem, made in collaboration with Sofian Audry, Takashi Ikegami, Alexandre Saunier and Thomas Spier, is a large-scale, dynamic installation that uses sensing and machine learning to inform its patterns, rhythms and behaviour, giving it the feel of a living, breathing entity.

Lawrence Lek’s open-world video game 2065 is set in a speculative future in which advanced automation means that people no longer have to work and can spend all day playing video games, and art is indistinguishable from gaming. Integrating the architecture of the Barbican’s Curve into the virtual world, it invites players to take the role of an AI and imagine what life might be like in the years ahead.

Artist and designer Es Devlin’s PoemPortraits is a social sculpture that brings together art, design, poetry and machine learning; it has been created in collaboration with Google Arts and Culture and Ross Goodwin. Each visitor will be invited to donate a single word to the piece. This word will be instantly incorporated into a two-line poem generated by an algorithm trained on 20 million words of poetry. This poem will form the photographic flash that illuminates each unique PoemPortrait. The work is cumulative; each poem will also include a word donated by another visitor. At the end of the exhibition, a collective PoemPortrait will be generated from everyone’s contributions: a trace of this transient social sculpture.
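The idea of spinning a poem out of a single donated word is easy to illustrate with a toy stand-in. The sketch below is my own illustration, not Goodwin’s model: the real PoemPortraits algorithm was trained on 20 million words of poetry, whereas this uses a tiny hand-made corpus and a simple Markov chain (each word leads to one of the words that followed it in the corpus) to grow two lines from a visitor’s seed word.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the 20 million words of poetry
# the real PoemPortraits model was trained on.
CORPUS = (
    "the light of the moon falls on the quiet water "
    "and the night is a river of stars that sings "
    "of the light that sleeps in every quiet word"
).split()

def build_chain(words):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate_line(chain, seed, length=7):
    """Walk the chain from the donated seed word to form one line."""
    word, line = seed, [seed]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: restart from a random word
            word = random.choice(list(chain))
        else:
            word = random.choice(followers)
        line.append(word)
    return " ".join(line)

chain = build_chain(CORPUS)
donated_word = "light"  # the visitor's contribution
poem = "\n".join(generate_line(chain, donated_word) for _ in range(2))
print(poem)
```

With a corpus this small the output is doggerel, but it shows the shape of the interaction: the donated word becomes the anchor from which the text is generated, and every new contribution enlarges the material the system can draw on.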

Inspired by Raymond Scott’s Electronium machine, Yuri Suzuki’s Digital Electronium gives visitors the chance to input sounds to create a changing soundscape through AI and algorithms.

A Machine View of London, a video work by Certain Measures (Andrew Witt and Tobias Nolte), presents an AI categorising and mapping the shapes of the one million buildings in London. This project is one of their series of FormMaps, an ongoing architectural research project that aims to compare and create a complete catalogue of building patterns from cities around the world.

The exhibition chatbot

To support the exhibition and widen the conversations around artificial intelligence, the Barbican worked with marketing technology agency Byte to create a chatbot aimed at stimulating conversations around the role of AI within society. Appearing on the Barbican’s website and Facebook page, the chatbot gives people the chance to engage further with the role of AI tech within different cultural arenas. Opening with a definition of AI, the chatbot develops the conversation around four themes reflected in the exhibition: Why are you afraid of AI? Does data discriminate? Who’s driving the car? And what makes us human?


Related links

Other Barbican reviews
