The Book of Universes by John D. Barrow (2011)

This book is twice as long and half as good as Barrow’s earlier primer, The Origin of the Universe.

In that short book Barrow focused on the key ideas of modern cosmology – introducing them to us in ascending order of complexity, and as simply as possible. He managed to make mind-boggling ideas and demanding physics very accessible.

This book – although it presumably has the merit of being more up to date (published in 2011 as against 1994) – is an expansion of the earlier one: an attempt to be much more comprehensive which, in the process, tends to make the whole subject more confusing.

The basic premise of both books is that, since Einstein’s theory of relativity was developed in the 1910s, cosmologists and astronomers and astrophysicists have:

  1. shown that the mathematical formulae in which Einstein’s theories are described need not be restricted to the universe as it has traditionally been conceived; in fact they can apply just as effectively to a wide variety of theoretical universes – and the professionals have, for the past hundred years, developed a bewildering array of possible universes to test Einstein’s insights to the limit
  2. made a series of discoveries about our actual universe, the most important of which are that a) it is expanding, b) it probably originated in a big bang about 14 billion years ago, and c) in the first tiny fraction of a second after the bang it probably underwent a period of super-accelerated expansion known as ‘inflation’, which may, or may not, have introduced all kinds of irregularities into ‘our’ universe, and may even have created a multitude of other universes, of which ours is just one

If you combine a hundred years of theorising with a hundred years of observations, you come up with thousands of theories and models.

In The Origin of the Universe Barrow stuck to the core story, explaining just as much of each theory as is necessary to help the reader, if not understand it, then at least grasp its significance. I can write the paragraphs above because of the clarity with which The Origin of the Universe explained it.

In The Book of Universes, on the other hand, Barrow’s aim is much more comprehensive and digressive. He is setting out to list and describe every single model and theory of the universe which has been created in the past century.

He introduces the description of each model with a thumbnail sketch of its inventor. This ought to help, but it doesn’t because the inventors generally turn out to be polymaths who also made major contributions to all kinds of other areas of science. Being told a list of Paul Dirac’s other major contributions to 20th century science is not a good way of preparing your mind to then try and understand his one intervention on universe-modelling (which turned out, in any case, to be impractical and led nowhere).

Another drawback of the ‘comprehensive’ approach is that a lot of these models have been rejected or barely saw the light of day before being disproved or – more complicatedly – were initially disproved but contained aspects or insights which turned out to be useful forty years later, and were subsequently recycled into revised models. It gets a bit challenging to try and hold all this in your mind.

In The Origin of the Universe Barrow sticks to what you could call the canonical line of models, each of which represented the central line of speculation, even if some ended up being disproved (like Hoyle, Gold and Bondi’s model of the steady state universe). Given that all of this material is pretty mind-bending, and some of it can only be described in advanced mathematical formulae, less is definitely more. I found The Book of Universes simply had too many universes, explained too quickly, and lost amid a lot of biographical bumf summarising people’s careers or who knew who or contributed to whose theory. Too much information.

One last drawback of the comprehensive approach is that quite important points – which are given space to breathe and sink in in The Origin of the Universe – are lost in the flood of facts in The Book of Universes.

I’m particularly thinking of Einstein’s notion of the cosmological constant, which was not strictly necessary to his formulation of relativity, but which he invented and inserted into his equations solely in order to counteract the force of gravity and ensure they reflected the commonly held view that the universe was static and unchanging.

This was a mistake, and Einstein is often quoted as admitting it was the biggest mistake of his career. Hubble’s observations in the 1920s showed that the universe is in fact expanding, and in 1965 scientists discovered the cosmic background radiation, powerful evidence that the expansion began in an inconceivably intense explosion. An expanding universe driven outward by this bang needs no hypothetical cosmological constant to balance the contracting force of the gravity of all the matter it contains.

I understand this (if I do) because in The Origin of the Universe it is given prominence and carefully explained. By contrast, in The Book of Universes it was almost lost in the flood of information and it was only because I’d read the earlier book that I grasped its importance.

The Book of Universes

Barrow gives a brisk recap of cosmology from the Sumerians and Egyptians, through the ancient Greeks’ establishment of the system named after Ptolemy, in which the earth is the centre of the universe, on through the revisions of Copernicus and Galileo which placed the sun firmly at the centre of the solar system, and on to the three laws of Isaac Newton which showed how the forces that govern the solar system (and more distant bodies) operate.

There is then a passage on the models of the universe generated by the growing understanding of heat and energy acquired by Victorian physicists, which led to one of the most powerful models of the universe, the ‘heat death’ model popularised by Lord Kelvin in the 1850s, in which, in the far future, the universe evolves to a state of complete homogeneity, where no region is hotter than any other and therefore there is no thermodynamic activity, no life, just a low buzzing noise everywhere.

But all this happens in the first 50 pages and is just preliminary throat-clearing before Barrow gets to the weird and wonderful worlds envisioned by modern cosmology, i.e. from Einstein onwards.

In some of these models the universe expands indefinitely, in others it will reach a peak expansion before contracting back towards a Big Crunch. Some models envision a static universe, in others it rotates like a top, while other models are totally chaotic without any rules or order.

Some universes are smooth and regular, others characterised by clumps and lumps. Some are shaken by cosmic tides, some oscillate. Some allow time travel into the past, while others threaten to allow an infinite number of things to happen in a finite period. Some end with another big bang, some don’t end at all. And in only a few of them do the conditions arise for intelligent life to evolve.

The Book of Universes then goes on, in 12 chapters, to discuss – by my count – getting on for a hundred types or models of hypothetical universes, as conceived and worked out by mathematicians, physicists, astrophysicists and cosmologists from Einstein’s time right up to the date of publication, 2011.

A list of names

Barrow namechecks and briefly explains the models of the universe developed by the following (I am undertaking this exercise partly to remind myself of everyone mentioned, partly to indicate to you the overwhelming number of names and ideas the reader is bombarded with):

  • Aristotle
  • Ptolemy
  • Copernicus
  • Giovanni Riccioli
  • Tycho Brahe
  • Isaac Newton
  • Thomas Wright (1711-86)
  • Immanuel Kant (1724-1804)
  • Pierre Laplace (1749-1827) devised what became the standard Victorian model of the universe
  • Alfred Russel Wallace (1823-1913) discussed the physical conditions of a universe necessary for life to evolve in it
  • Lord Kelvin (1824-1907) material falls into the central region of the universe and coalesces with other stars to maintain power output over immense periods
  • Rudolf Clausius (1822-88) coined the word ‘entropy’ in 1865 to describe the inevitable progress from ordered to disordered states
  • William Jevons (1835-82) believed the second law of thermodynamics implies that the universe must have had a beginning
  • Pierre Duhem (1861-1916) Catholic physicist accepted the notion of entropy but denied that it implied the universe ever had a beginning
  • Samuel Tolver Preston (1844-1917) English engineer and physicist, suggested the universe is so vast that different ‘patches’ might experience different rates of entropy
  • Ludwig Boltzmann and Ernst Zermelo suggested the universe is infinite and is already in a state of thermal equilibrium, but just with random fluctuations away from uniformity, and our galaxy is one of those fluctuations
  • Albert Einstein (1879-1955) his discoveries were based on insights, not maths: thus he saw that the problem with Newtonian physics is that it privileges an objective outside observer of all the events in the universe; one of Einstein’s insights was to abolish the idea of a privileged point of view and emphasise that everyone is involved in the universe’s dynamic interactions; thus gravity does not pass through a clear, fixed thing called space; gravity bends space.

The American physicist John Wheeler once encapsulated Einstein’s theory in two sentences:

Matter tells space how to curve. Space tells matter how to move. (quoted on page 52)

  • Marcel Grossmann provided the mathematical underpinning for Einstein’s insights
  • Willem de Sitter (1872-1934) inventor of, among other things, the de Sitter effect which represents the effect of the curvature of spacetime, as predicted by general relativity, on a vector carried along with an orbiting body – de Sitter’s universe gets bigger and bigger for ever but never had a zero point; but then de Sitter’s model contains no matter
  • Vesto Slipher (1875-1969) astronomer who discovered the red shifting of distant galaxies in 1912, the first ever empirical evidence for the expansion of the universe
  • Alexander Friedmann (1888-1925) Russian mathematician who produced purely mathematical solutions to Einstein’s equation, devising models where the universe started out of nothing and expanded a) fast enough to escape the gravity exerted by its own contents and so will expand forever or b) will eventually succumb to the gravity of its own contents, stop expanding and contract back towards a big crunch. He also speculated that this process (expansion and contraction) could happen an infinite number of times, creating a cyclic series of bangs, expansions and contractions, then another bang etc
A graphic of the oscillating or cyclic universe (from Discovery magazine)

  • Arthur Eddington (1882-1944) most distinguished astrophysicist of the 1920s
  • Georges Lemaître (1894-1966) first to combine an expanding universe interpretation of Einstein’s equations with the latest data about redshifting, and to show that the universe of Einstein’s equations would be very sensitive to small changes – his model is close to Eddington’s, so it is often called the Eddington-Lemaître universe: it is expanding, curved and finite but doesn’t have a beginning
  • Edwin Hubble (1889-1953) provided solid evidence of the redshifting (moving away) of distant galaxies, a main plank in the whole theory of a big bang, inventor of Hubble’s Law:
    • Objects observed in deep space – extragalactic space, 10 megaparsecs (Mpc) or more – are found to have a redshift, interpreted as a relative velocity away from Earth
    • This Doppler shift-measured velocity of various galaxies receding from the Earth is approximately proportional to their distance from the Earth for galaxies up to a few hundred megaparsecs away
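
To make that proportionality concrete – a rough worked example using modern approximate values, not figures quoted by Barrow – Hubble’s Law is usually written:

\[ v = H_0 \, d, \qquad H_0 \approx 70 \ \mathrm{km\,s^{-1}\,Mpc^{-1}} \]

so a galaxy 100 Mpc away recedes at roughly 70 × 100 = 7,000 km per second, and a galaxy twice as far away recedes roughly twice as fast.
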
  • Richard Tolman (1881-1948) took Friedmann’s idea of an oscillating universe and showed that the increased entropy of each universe would accumulate, meaning that each successive ‘bounce’ would get bigger; he also investigated what ‘lumpy’ universes would look like where matter is not evenly spaced but clumped: some parts of the universe might reach a maximum and start contracting while others wouldn’t; some parts might have had a big bang origin, others might not have
  • Arthur Milne (1896-1950) showed that the tension between the outward exploding force posited by Einstein’s cosmological constant and the gravitational contraction could actually be described using just Newtonian mathematics: ‘Milne’s universe is the simplest possible universe with the assumption that the universe is uniform in space and isotropic’, a ‘rational’ and consistent geometry of space – Milne labelled the assumption of Einsteinian physics that the universe is the same in all places the Cosmological Principle
  • Edmund Fournier d’Albe (1868-1933) posited that the universe has a hierarchical structure from atoms to the solar system and beyond
  • Carl Charlier (1862-1934) introduced a mathematical description of a never-ending hierarchy of clusters
  • Karl Schwarzschild (1873-1916) suggested  that the geometry of the universe is not flat as Euclid had taught, but might be curved as in the non-Euclidean geometries developed by mathematicians Riemann, Gauss, Bolyai and Lobachevski in the early 19th century
  • Franz Selety (1893-1933) devised a model for an infinitely large hierarchical universe which contained an infinite mass of clustered stars filling the whole of space, yet with a zero average density and no special centre
  • Edward Kasner (1878-1955) a mathematician interested solely in finding mathematical solutions to Einstein’s equations, Kasner came up with a new idea, that the universe might expand at different rates in different directions, in some parts it might shrink, changing shape to look like a vast pancake
  • Paul Dirac (1902-84) developed a Large Number Hypothesis that the really large numbers which are taken as constants in Einstein’s and other astrophysics equations are linked at a deep undiscovered level, among other things abandoning the idea that gravity is a constant: soon disproved
  • Pascual Jordan (1902-80) suggested a slight variation of Einstein’s theory which accounted for a varying constant of gravitation as though it were a new source of energy and gravitation
  • Robert Dicke (1916-97) developed an alternative theory of gravitation
  • Nathan Rosen (1909-1995) young assistant to Einstein in America with whom he authored a paper in 1936 describing a universe which expands but has the symmetry of a cylinder, a theory which predicted the universe would be washed over by gravitational waves
  • Ernst Straus (1922-83) another young assistant to Einstein with whom he developed a new model, an expanding universe like those of Friedmann and Lemaître but which had spherical holes removed like the bubbles in an Aero, each hole with a mass at its centre equal to the matter which had been excavated to create the hole
  • Eugene Lifshitz (1915-85) in 1946 showed that very small differences in the uniformity of matter in the early universe would tend to increase, an explanation of how the clumpy universe we live in evolved from an almost but not quite uniform distribution of matter – as we have come to understand that something like this did happen, Lifshitz’s calculations have come to be seen as a landmark
  • Kurt Gödel (1906-1978) posited a rotating universe which didn’t expand and, in theory, permitted time travel!
  • Hermann Bondi, Thomas Gold and Fred Hoyle collaborated on the steady state theory of a universe which is growing but remains essentially the same, fed by the creation of new matter out of nothing
  • George Gamow (1904-68)
  • Ralph Alpher and Robert Herman in 1948 showed that the ratio of the matter density of the universe to the cube of the temperature of any heat radiation present from its hot beginning is constant if the expansion is uniform and isotropic – they calculated the current radiation temperature should be 5 degrees Kelvin – ‘one of the most momentous predictions ever made in science’
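
The reasoning behind that constant ratio is worth spelling out (this is the standard textbook argument rather than Barrow’s wording): as the universe expands by a scale factor a, matter is simply diluted, so its density falls as 1/a³, while the temperature of the leftover radiation falls as 1/a, so that

\[ \rho_{m} \propto a^{-3}, \qquad T \propto a^{-1} \quad\Rightarrow\quad \frac{\rho_{m}}{T^{3}} = \text{constant} \]

Knowing today’s matter density therefore lets you work backwards to a predicted temperature for the leftover radiation – which is what made the 1948 prediction testable.
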
  • Abraham Taub (1911-99) made a study of all the universes that are the same everywhere in space but can expand at different rates in different directions
  • Charles Misner (b.1932) suggested ‘chaotic cosmology’ i.e. that no matter how chaotic the starting conditions, Einstein’s equations prove that any universe will inevitably become homogeneous and isotropic – disproved by the smoothness of the background radiation. Misner then suggested the Mixmaster universe, the most complicated interpretation of the Einstein equations, in which the universe expands at different rates in different directions and the gravitational waves generated in one direction interfere with all the others, with infinite complexity
  • Hannes Alfvén devised a matter-antimatter cosmology
  • Alan Guth (b.1947) in 1981 proposed a theory of ‘inflation’, that a tiny fraction of a second after the big bang the universe underwent a swift process of hyper-expansion: inflation answers at a stroke a number of technical problems prompted by conventional big bang theory; but it had the unforeseen implication that, though our region is smooth, parts of the universe beyond our light horizon might have grown from other areas of inflated singularity and have completely different qualities
  • Andrei Linde (b.1948) extrapolated that the inflationary regions might create sub-regions in  which further inflation might take place, so that a potentially infinite series of new universes spawn new universes in an ‘endlessly bifurcating multiverse’. We happen to be living in one of these bubbles which has lasted long enough for the heavy elements and therefore life to develop; who knows what’s happening in the other bubbles?
  • Ted Harrison (1919-2007) British cosmologist who speculated that super-intelligent life forms might be able to develop and control baby universes, guiding the process of inflation so as to promote the constants required for just the right speed of growth to allow stars, planets and life forms to evolve. Maybe they’ve done it already. Maybe we are the result of their experiments.
  • Nick Bostrom (b.1973) Swedish philosopher: if universes can be created and developed like this then they will proliferate until the odds are that we are living in a ‘created’ universe and, maybe, are ourselves simulations in a kind of multiverse computer simulation

Although the arrival of Einstein and his theory of relativity marks a decisive break with the tradition of Newtonian physics, and comes at page 47 of this 300-page book, it seemed to me the really decisive break comes on page 198 with the publication of Alan Guth’s theory of inflation.

Up till the Guth breakthrough, astrophysicists and astronomers appear to have focused their energy on the universe we inhabit. There were theoretical digressions into fantasies about other worlds and alternative universes but they appear to have been personal foibles and everyone agreed they were diversions from the main story.

However, the idea of inflation, while it solved half a dozen problems caused by the idea of a big bang, seems to have spawned a literally fantastic series of theories and speculations.

Throughout the twentieth century, cosmologists grew used to studying the different types of universe that emerged from Einstein’s equations, but they expected that some special principle, or starting state, would pick out one that best described the actual universe. Now, unexpectedly, we find that there might be room for many, perhaps all, of these possible universes somewhere in the multiverse. (p.254)

This is a really massive shift and it is marked by a shift in the tone and approach of Barrow’s book. Up till this point it had jogged along at a brisk rate, namechecking a steady stream of mathematicians and physicists and explaining how their successive models of the universe followed on from or varied from each other.

Now this procedure comes to a grinding halt while Barrow enters a realm of speculation. He discusses the notion that the universe we live in might be a fake, evolved from a long sequence of fakes, created and moulded by super-intelligences for their own purposes.

Each of us might be mannequins acting out experiments, observed by these super-intelligences. In which case what value would human life have? What would be the definition of free will?

Maybe the discrepancies we observe in some of the laws of the universe have been planted there as clues by higher intelligences? Or maybe, over vast periods of time, and countless iterations of new universes, the laws they first created for this universe where living intelligences could evolve have slipped, revealing the fact that the whole thing is a facade.

These super-intelligences would, of course, have computers and technology far in advance of ours etc. I felt like I had wandered into a prose version of The Matrix and, indeed, Barrow apologises for straying into areas normally associated with science fiction (p.241).

Imagine living in a universe where nothing is original. Everything is a fake. No ideas are ever new. There is no novelty, no originality. Nothing is ever done for the first time and nothing will ever be done for the last time… (p.244)

And so on. During this 15-page-long fantasy the handy sequence of physicists comes to an end as he introduces us to contemporary philosophers and ethicists who are paid to think about the problem of being a simulated being inside a simulated reality.

Take Robin Hanson (b.1959), a research associate at the Future of Humanity Institute of Oxford University who, apparently, advises us all that we ought to behave so as to prolong our existence in the simulation or, hopefully, ensure we get recreated in future iterations of the simulation.

Are these people mad? I felt like I’d been transported into an episode of The Outer Limits or was back with my schoolfriend Paul, lying in a summer field getting stoned and wondering whether dandelions were a form of alien life that were just biding their time till they could take over the world. Why not, man?

I suppose Barrow has to include this material, and explain the nature of the anthropic principle (p.250), and go on to a digression about the search for extra-terrestrial life (p.248), and discuss the ‘replication paradox’ (in an infinite universe there will be infinite copies of you and me in which we perform an infinite number of variations on our lives: what would happen if you came face to face with one of your ‘copies’? p.246) – because these are, in their way, theories – if very fantastical theories – about the nature of the universe, and his stated aim is to be completely comprehensive.

The anthropic principle Observations of the universe must be compatible with the conscious and intelligent life that observes it. The universe is the way it is, because it has to be the way it is in order for life forms like us to evolve enough to understand it.

Still, it was a relief when he returned from vague and diffuse philosophical speculation to the more solid territory of specific physical theories for the last forty or so pages of the book. But it was very noticeable that, as he came up to date, the theories were less and less attached to individuals: modern research is carried out by large groups. And he increasingly is describing the swirl of ideas in which cosmologists work, which often don’t have or need specific names attached. And this change is denoted, in the texture of the prose, by an increase in the passive voice, the voice in which science papers are written: ‘it was observed that…’, ‘it was expected that…’, and so on.

  • Edward Tryon (b.1940) American particle physicist speculated that the entire universe might be a virtual fluctuation from the quantum vacuum, governed by the Heisenberg Uncertainty Principle that limits our simultaneous knowledge of the position and momentum, or the time of occurrence and energy, of anything in Nature.
  • George Ellis (b.1939) created a catalogue of ‘topologies’ or shapes which the universe might have
  • Dmitri Sokolov and Victor Shvartsman in 1974 worked out what the practical results would be for astronomers if we lived in a strange shaped universe, for example a vast doughnut shape
  • Yakov Zeldovich and Andrei Starobinsky in 1984 further explored the likelihood of various types of ‘wraparound’ universes, predicting the fluctuations in the cosmic background radiation which might confirm such a shape
  • 1967 the Wheeler-De Witt equation – a first attempt to combine Einstein’s equations of general relativity with the Schrödinger equation that describes how the quantum wave function changes with space and time
  • the ‘no boundary’ proposal – in 1982 Stephen Hawking and James Hartle used ‘an elegant formulation of quantum mechanics introduced by Richard Feynman’ to calculate the probability that the universe would be found to be in a particular state. What is interesting is that in this theory time is not important; time is a quality that emerges only when the universe is big enough for quantum effects to become negligible; the universe doesn’t technically have a beginning because the nearer you approach it, time disappears, becoming part of four-dimensional space. This ‘no boundary’ state is the centrepiece of Hawking’s bestselling book A Brief History of Time (1988). According to Barrow, the Hartle-Hawking model was eventually shown to lead to a universe that was infinitely large and empty, i.e. not our one.
The Hartle-Hawking ‘no boundary’ proposal

  • In 1986 Barrow proposed a universe with a past but no beginning because all the paths through time and space would be very large closed loops
  • In 1997 Richard Gott and Li-Xin Li took the eternal inflationary universe postulated above and speculated that some of the branches loop back on themselves, giving birth to themselves
The self-creating universe of J. Richard Gott III and Li-Xin Li

  • In 2001 Justin Khoury, Burt Ovrut, Paul Steinhardt and Neil Turok proposed a variation of the cyclic universe which incorporated string theory and which they called the ‘ekpyrotic’ universe, ekpyrotic denoting the fiery flame into which each universe plunges only to be born again in a big bang. The new idea they introduced is that two three-dimensional universes may approach each other by moving through the additional dimensions posited by string theory. When they collide they set off another big bang. These 3-D universes are called ‘braneworlds’, short for membrane, because they will be very thin
  • If a universe existing in a ‘bubble’ in another dimension ‘close’ to ours had ever impacted on our universe, some calculations indicate it would leave marks in the cosmic background radiation, a stripey effect.
  • In 1998 Andy Albrecht, João Magueijo and Barrow explored what might have happened if the speed of light, the most famous of cosmological constants, had in fact decreased in the earliest moments after the bang. There is now an entire suite of theories known as ‘Varying Speed of Light’ cosmologies.
  • Modern ‘String Theory’ only functions if it assumes quite a few more dimensions than the three we are used to. In fact some string theories require there to be more than one dimension of time. If there are really ten or 11 dimensions then, possibly, the ‘constants’ all physicists have taken for granted are only partial aspects of constants which exist in higher dimensions. Possibly, they might change, effectively undermining all of physics.
  • The Lambda-CDM model is a cosmological model in which the universe contains three major components: 1. a cosmological constant denoted by Lambda (Greek Λ) and associated with dark energy; 2. the postulated cold dark matter (abbreviated CDM); 3. ordinary matter. It is frequently referred to as the standard model of Big Bang cosmology because it is the simplest model that provides a reasonably good account of the following properties of the cosmos:
    • the existence and structure of the cosmic microwave background
    • the large-scale structure in the distribution of galaxies
    • the abundances of hydrogen (including deuterium), helium, and lithium
    • the accelerating expansion of the universe observed in the light from distant galaxies and supernovae
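
In the usual shorthand these components are quoted as fractions (Ω) of the ‘critical density’ needed for a spatially flat universe. Roughly – and these are modern observational values, not figures from Barrow’s book:

\[ \Omega_{\Lambda} \approx 0.69, \qquad \Omega_{\mathrm{CDM}} \approx 0.26, \qquad \Omega_{\mathrm{baryon}} \approx 0.05, \qquad \Omega_{\Lambda} + \Omega_{\mathrm{CDM}} + \Omega_{\mathrm{baryon}} \approx 1 \]

In other words, the ordinary matter that stars, planets and people are made of accounts for only about five per cent of the total.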

He ends with a summary of our existing knowledge, and indicates the deep puzzles which remain, not least the true nature of the ‘dark matter’ which is required to make sense of the expanding universe model. And he ends the whole book with a pithy soundbite. Speaking about the ongoing acceptance of models which posit a ‘multiverse’, in which all manner of other universes may be in existence, but beyond the horizon of what we can see, he says:

Copernicus taught us that our planet was not at the centre of the universe. Now we may have to accept that even our universe is not at the centre of the Universe.



A Closer Look: Colour by David Bomford and Ashok Roy (2009)

This is another superbly informative, crisply written and lavishly illustrated little book in The National Gallery’s A Closer Look series. To quote the blurb:

‘A Closer Look: Colour explores how painters apply colour, describes different types of pigments, and outlines optical theories and artists’ treatises. The authors explain the effect on colour of the artist’s chosen medium, such as oil, water or egg tempera, and the dramatic impact of new pigments.’

It ranges far and wide across the National Gallery’s vast collection of 2,300 art works, selecting 80 paintings which illustrate key aspects of colour, medium and design. The quality of the colour reproductions is really stunning – it’s worth having the book almost for these alone and for the brief but penetrating insights into a colour-related aspect of each one.

They include works by Seurat, Holbein the Younger, Corot, Duccio, David, Chardin, Ghirlandaio, Monet and Van Dyck in the first ten pages alone!

Aspects of colour

Colour quite obviously has been used by painters to depict the coloured world we see around us. But it has other functions, too. Maybe the two most obvious but easily overlooked are: to represent depth and create the optical illusion of three dimensions on a two dimensional surface; and to reinforce this by indicating sources of light.

Depth A common way of indicating depth is to recreate the everyday observation that objects in the distance fade into a blue-ish haze. This is often seen in Renaissance paintings, with increasingly hazy backdrops behind the various virgins and main figures. This is known as aerial perspective.

Light Sources of light need to be carefully calculated in a realistic painting. The book shows how the effect of light sources is achieved by showing glimmers of white paint on metallic objects or even on duller surfaces like wood. There is a particularly wondrous example in Lady Elizabeth Thimbelby and her Sister by Anthony Van Dyck. The authors give a close-up to show how the colour of the yellow dress worn by the main subject is reflected on the bare skin of the little angel, and even in the catchlight in his right eye, an indication of the depth of thought which goes into Van Dyck’s compositions.

Shadows turn out to be an entire subject in themselves. For centuries painters improved their depiction of shadows, at first using grey colours for the shadows of buildings, but quickly realising that the most shadowed things around us are fabrics. Dresses, cloaks, all the paraphernalia of costume from the Middle Ages to the turn of the 20th century, involved reams of material which folded in infinite ways, all of them a challenge to the painters’ skill. At the very least, painting a fabric requires not one but three colours: the core colour of the fabric itself, the fabric in shadow, and the fabric in highlight, reflecting the light source.

The human eye is not a mechanical reproducer of the world around us. It has physiological quirks and limitations. The book evidences the way that, dazzled by orange sunsets, the human eye might well see evening shadows as violet. Quirks and oddities like this were known to various painters of the past but it was the Impressionists who, as a group, set out to try and capture not what the rational mind knew to be the colour, but the colours as actually perceived by the imperfect eye and misleadable mind.

Emotion In the later 19th century artists across Europe made the discovery that intensity of colour can be used to reflect intensity of emotion. Probably the most popular painter to do this was van Gogh whose intense colours were intended to convey his own personal anguish. This approach went on to become the central technique of the German Expressionist painters (although they aren’t represented in the book, along with all 20th century art, because the National Gallery’s cut-off point is 1900).

Symbolism In earlier centuries, more than its realistic function, colour had an important role in a painting’s symbolism i.e. certain colours are understood to have certain meanings or to be associated with certain people or qualities. The most obvious period is the Renaissance, when the Virgin Mary’s cloak was blue, Mary Magdalene’s cloak was red, St Peter’s cloak was yellow and blue, and so on. From early on this allowed or encouraged Renaissance painters to create compositions designed not only to show a (religious) subject, but to create harmonious visual ‘rhythms’ and ‘assonances’ based on these traditionally understood colour associations.

Pigments and Media

This is dealt with quite thoroughly in another book in the series, Techniques of Painting. There we learn that paint has two components, the binding medium and the pigment. Over the centuries different pigments have been used, mixed into different binding mediums, including egg, egg yolks, oil, painting directly into wet plaster (fresco) and so on.

Painting is done onto supports – onto walls, plaster, or onto boards, metal, canvas or other fabrics. All of these need preparing by stretching (canvas) or smoothing (wood), then applying a ground – or background layer of paint – to soak into the support. Painters of the 14th and 15th centuries used a white ground. In the 16th, 17th and 18th centuries, artists experimented with varying the tone of the ground, which significantly alters the colour of the works painted onto them.

Hardening Binding mediums dry out in two ways: watercolours and synthetic resin paints by simple evaporation. Drying oils such as linseed, walnut or poppy oil harden by chemical reaction with the oxygen in the air. Egg tempera, used extensively in the 14th and 15th century, dries by a combination of both.

This may sound fairly academic but it profoundly affects the whole style and look of a painting. Because tempera dries so quickly (especially in hot, dry Italy) shapes and textures are best built up by short hatched strokes.

This is a detail from the Wilton Diptych (1397) where you can see the way the skin of the Virgin and child and angels has been created by multiple short paint strokes of egg tempera.

Whereas, because oils are slow drying, they allow the artist to merge them into smooth, flowing, continuous transitions of colour. Oil paints = more flowing.

In this detail from Belshazzar’s Feast by Rembrandt, you can see how the gold chain has been rendered with a really thick layer of gold paint. Laying on very thick layers of oil paint is called impasto.

In general, oil paint looks darker and richer than paint made using water-based media such as egg tempera, glue or fresco, which appear lighter and brighter.

Age and decay Painting was, then, a highly technical undertaking, requiring the painter to have an excellent knowledge of a wide range of materials and chemical substances. Different media dry and set in different ways. Different pigments hold their colour – or fade – over time. And this fading can reveal the ground painted underneath.

One of the most interesting aspects of the book is the specific examples it gives of how some pigments have faded or disappeared – sometimes quite drastically – in Old Master paintings.

In Duccio’s The Virgin and Child with Saints Dominic and Aurea, the face and hands of the figures show clearly how the lighter pigments painted in tempera have faded or flaked off allowing the green underpaint to come through. The Virgin was not meant to look green!

Bladders to tubes Throughout the medieval and Renaissance periods pigments had to be ground by hand and mixed with binders in the artist’s studio. There are numerous prints showing a Renaissance artist’s studio for what it was: the small-scale manufactory of a craftsman employing a number of assistants and making money by taking on a number of students.

In the 18th century ready-mixed pigments could be transported inside pigs’ bladders. The early 19th century developed the use of glass or metal syringes. But it was in 1841 that an American, John Rand, developed the collapsible metal tube. This marked a breakthrough in the portability of oil paints, allowing artists to paint out of doors for the first time. A generation later a new school arose – the Impressionists – who did just this. Jean Renoir quotes his father, the painter Pierre-Auguste, as saying:

Without paints in tubes there would have been no Cézanne, no Monet, no Sisley or Pissarro, nothing of what the journalists were later to call Impressionism.

Biographies of colours

Primo Levi wrote a classic collection of short stories based on The Periodic Table of chemical elements. It crossed my mind, reading this book, that something similar could be attempted with the numerous pigments which artists have used down the ages.

This book gives a potted history of the half a dozen key colours. It explains how they were originally produced, how different sources became available over the centuries, and how the 19th century saw an explosion in the chemical industry which led to the development of modern, industrially-manufactured colours.

Blue

  • Prime source of blue was the ultramarine colour extracted from the mineral lapis lazuli, which was mined in one location in Afghanistan and traded to the Mediterranean.
  • A cheaper alternative was azurite, which was mined in Europe but had to be ground coarsely to keep its colour, and is also prone to fade into green, e.g. the sky in Christ taking Leave of his Mother by Albrecht Altdorfer (1520). Many artists painted a basic wash of azurite and then used the much more expensive ultramarine to create more intense highlights.
  • Indigo is a dye extracted from plants. At high intensity it is an inky black-blue, but at a lesser intensity also risks fading.
  • A cheaper alternative was smalt, manufactured by adding cobalt oxide to molten glass, cooling and grinding it to powder. It holds its colour badly and fades to grey.
  • In the early 1700s German manufacturers stumbled across the intense synthetic pigment which became known as Prussian blue (the book gives examples by Gainsborough and Canaletto).
  • Around 1803 cobalt blue was invented.
  • In 1828 an artificial version of ultramarine was created in France

Thus the painters of the 19th century had a much wider range of ‘blues’ to choose from than all their predecessors.

The book does the same for the other major colours, naming and explaining the origin of their main types or sources:

Green

  • Terre verte was used as an underpaint for flesh tones in early Italian paintings
  • malachite
  • verdigris, a copper-based pigment, was prone to fade to brown, which explains why so many Italian landscapes have the same orangey-brown appearance
  • emerald green (a pigment developed in the 19th century containing copper and arsenic)
  • viridian (a chromium oxide)

Red

  • Vermilion, obtained by pulverising cinnabar, is liable to fade to brown, as has happened with the coat of Gainsborough’s Dr Ralph Schomberg (1770), which should be bright red.

Yellow

  • Lead-tin yellow in the Renaissance
  • from the 17th century lead-based yellow containing antimony known as Naples yellow
  • from the 1820s new tints of yellow became available based on compounds of chromium of which chrome yellow is the most famous
  • cadmium yellow

White

  • Lead white was used from the earliest times. It forms as a crust on metallic lead exposed to acetic acid from sour wine – highly poisonous
  • only in the twentieth century was it replaced by non-toxic whites based on zinc and later, titanium. Unlike all the pigments named so far, lead white keeps its colour extremely well, hence the bright white ruffs and dresses in paintings even when a lot of the brighter colour has gone.

Black 

  • A large range of black pigments was always available, most based on carbon as found in charcoal, soot and so on. Carbon is very stable and so blacks have tended to remain black.

Summary of colours

  1. Over the past 500 years there has been a large amount of evolution and change in the source of the pigments artists use.
  2. Colour in art is a surprisingly technical subject, which quite quickly requires a serious knowledge of inorganic chemistry and, from the 19th century, is linked to the development of industrial processes.
  3. Sic transit gloria mundi or, more precisely, Sic transit gloria artis. The net effect of seeing so many beautiful paintings in which the original colour has faded – sometimes completely – can’t help but make you sad. We live among the wrecks or decay of thousands of once-gloriously coloured artworks. Given the super-duper state of digital technology I wonder if anywhere there exists a project to restore all these faded glories to how they should look!

Disegno versus colore

Vasari, author of The Lives of the Great Artists (1550), posed the question, ‘Which was more important, design or colour?’ As a devotee of Michelangelo, the godfather of design, he was on the side of disegno, and relates a conversation with Michelangelo about some paintings by Titian (1488-1576) they had seen, in which Michelangelo praises Titian’s use of colour but laments his poor composition.

The art history stereotype has it that Renaissance Florence was the home of design, while Venice (where Titian lived and worked) put the emphasis on gorgeous colours. This was because Venice was a European centre for the production of dyes and pigments for a wide range of manufacturing purposes, not least glass and textiles.

In late-17th-century France the argument was fought out in the French Academy between Rubénistes (for colour) and Poussinistes (for drawing). Personally, I am more moved by drawing than colour, and a little more so after reading this book and realising just how catastrophically colour can fade and disappear – but, still, there’s no reason not to love both.

Optical theories

Isaac Newton published his Opticks in 1704, announcing the discovery that when white light is projected through a prism it breaks down into the colours of the spectrum, which can then be recombined into white light. Among its far-ranging investigations, the book contained the first schematic arrangement of colours and their ‘opposites’. It wasn’t until well into the 19th century, however, that colour charts began to proliferate (partly because they were required by expanding industrial manufacture, and the ever more competitive design and coloration of products).

And these colour charts bore out Newton’s insight that complementary colours – colours opposite each other on the circle – accentuate and bring each other out.

Colour Circle by Michel Eugène Chevreul (1839)

Colour circles like this systematised knowledge which had been scattered among various artists and critics over the ages. It can be shown that Eugène Delacroix (1798-1863) made systematic use of contrast effects, pairing colour opposites like orange-blue, red-green or yellow-violet, to create stronger visual effects.

On a simplistic level it was the availability of a) new, intense colours, in portable tin tubes, along with b) exciting new theories of colour, which explains the Impressionist movement.

The Impressionists were most interested in trying to capture the changing quality of light, but the corollary of this was a fascination with shadow. Apparently, impressionist painters so regularly (and controversially) paired bright yellow sunlight with the peculiar tinge of violet which is opposite it on the colour charts, that they were accused by contemporary critics of violettomani.

Some examples

The book lists the pigments used to create Titian’s Bacchus and Ariadne. The intense blue sky is made from ultramarine lapis lazuli, as is Ariadne’s drapery and the flowers at the lower right. The blue-green sea is painted with the cheaper azurite. Vermilion gives Ariadne’s sash its red colour. The Bacchante’s orange drapery was painted with a rare arsenic-containing mineral known as realgar.

Titian was aware of the power of colour contrasts long before the 19th century colour wheels, something he demonstrates by placing Ariadne’s red and blue drapery above the primrose yellow cloth by the knocked-over urn at her feet (painted using lead-tin yellow). The green of the tree leaves and the grassy background are created from malachite over-painted with green resinous glazes. An intense red ‘lake’ is used to give Bacchus’s red cloak its depth.

These coloured ‘lakes’ were an important element in Renaissance painting but I had to supplement the book’s information with other sources.

From this I take it that ‘lakes’ were translucent i.e. you could see the colour beneath, and so were used as glazes, meaning you would lay down a wash of one colour and then paint over potentially numerous ‘lakes’ to add highlights, depths or whatever. This build-up of ‘lake’ glazes allowed the layering of multiple variations of colour and so the intensely sensual depiction of the folds on fabrics, the light and shade of curtains and clothes which is so characteristic of Old Master painting.

The book then applies this detailed analysis of colour pigments to a sequence of other Old Masterpieces by Rubens, Velázquez, Rembrandt, Tiepolo, Canaletto, Monet and Seurat.

Conclusion

A Closer Look: Colour makes you appreciate the immense amount of knowledge, science, craft and technique which went into painting each and every one of the National Gallery’s 2,300 artworks (and the depth of scholarship which modern art historians require to analyse and unravel the technical background to each and every painting).

It’s a revelation to read, but also pure joy to be prompted to look, and look again, in closer and closer detail, at so many wonderful paintings.



Atomic by Jim Baggott (2009)

This is a brilliantly panoramic, thrilling and terrifying book.

The subtitle of this book is ‘The First War of Physics and the Secret History of the Atom Bomb 1939-49‘ and it delivers exactly what it says on the tin. At nearly 500 pages Atomic is a very thorough account of its subject – the race to develop a workable atomic bomb between the main warring nations of World War Two, America, Britain, France, Germany, Italy, Russia –  with the additional assets of a 22-page timeline, a 20-page list of key characters, 18 pages of notes and sources and a 6-page bibliography.

A cast of thousands

The need for a list of key characters is an indication of one of the main learnings from the book: it took a lot of people to convert theoretical physics into battlefield nuclear weapons. Every aspect of it came from theories and speculations published in numerous journals, and then from experiments devised by scores of teams of scientists working around the industrialised world, publishing results, meeting at conferences or informally, comparing and discussing and debating and trying again.

Having just read The Perfect Theory by Pedro Ferreira, a ‘biography’ of the theory of relativity, I had gotten used to the enormous number of teams and groups and institutes and university faculties involved in science – or this area of science – each containing numerous individual scientists, who collaborated and competed to devise, work through and test new theories relating to Einstein’s famous theory.

Baggott’s tale gives the same sense of a cast of hundreds of scientists – it feels like we are introduced to two or three new characters on every page, which can make it quite difficult to keep up. But whereas progress on the theory of relativity took place at a leisurely pace over the past 100 years, the opposite is true of the development of The Bomb.

This was kick-started when a research paper showing that nuclear fission of uranium might be possible was published in 1939, just as the world was on the brink of war (hence the start date for this book). From that point the story progresses at an increasing pace, dominated by a Great Fear – fear that the Nazis would develop The Bomb first and use it without any scruples to devastate Europe.

The first three parts of the book follow the way the two warring parties – the Allies and the Nazis – assembled their teams from civilian physicists, mathematicians and chemists at various institutions, bringing them together into teams which were assembled and worked with increasing franticness, as the Second World War became deeper and darker.

If you thought the blizzard of names of theoretical and experimental physicists, mathematicians, chemists and so on in the first part was a bit confusing, this is as nothing compared to the tsunami of names of Army administrators, security chiefs, civil servants, bureaucrats and politicians who are roped in to create and administer the facilities which were established to research and build, first a nuclear reactor, then a nuclear bomb.

Baggott unfolds the story with a kind of unflinching factual pace which is extremely gripping. Each chapter is divided into sections, often only a page long, which explain contemporaneous events at research bases in Chicago, out in the desert at Los Alamos, in Britain, in German research centres, and among Stalin’s harassed scientific community. Each one of these narratives is fascinating, but intercutting them like this creates an almost filmic effect of cutting from one exciting scene to another. Baggott’s prose is spare and effective, almost like good thriller writing.

The nuclear spies

And indeed the book strays into actual thriller territory because interwoven with the gripping accounts of the British, Russian, German and American scientists, and their respective military and political masters, is the story of the nuclear spies. I read Paul Simpson’s A Brief History of The Spy a few months ago and it gives good accounts of the activities of Soviet spies Klaus Fuchs, David Greenglass, Theodore Hall, as well as the Rosenbergs. But the story of their spying and the huge amounts of top secret information they handed over to the Russians is so much more intense and exciting when it is situated in the broader story of the nail-biting scientific, chemical, logistical and political races to build The Bomb.

German failure

As everyone knows, the Nazis were not able to construct a functioning bomb before they were militarily defeated in May 1945. But it wasn’t for want of trying, and the main impression from the book is a sense of vicarious horror at the thought of what they would have done if they had made a breakthrough in the final desperate months of spring 1945. London wouldn’t be here. I wouldn’t be here.

Baggott’s account of the German bomb is fascinating in numerous ways. Basically, once the leadership were told it wouldn’t be ready in the next few years, they didn’t make it a priority. Baggott follows the end of the war with a chapter on how most of the German nuclear scientists were flown to England and interned at a farmhouse outside Cambridge which was bugged. Their conversations were recorded, in which they were at first smugly confident that they were being detained because they were so far in advance of the Allies. Thus they were all shocked when they heard the Allies had dropped an atom bomb on Japan in August 1945. At which point they began to develop a new line, one much promoted by German historians since, which is that they could have developed a bomb if they’d wanted to, but had morals and principles and so did all they could to undermine, stall and sabotage the Nazi attempt to build an A bomb.

They were in fact ‘good Germans’ who always hated the Nazis. Baggott treats this claim with the contempt it deserves.

Summary of the science

The neutron was discovered in 1932, giving a clearer picture of what atoms are made of i.e. a nucleus with at least one proton (with a positive electric charge) balanced by at least one electron (with a negative charge) in orbit around it. Heavier elements have more than one proton and electron (always the same number of each), as well as an increasing number of neutrons, which add weight but have no electric charge. Hence the periodic table lists the elements in order of the number of protons, starting with hydrogen with one and going all the way to oganesson, with its 118 protons. Ernest Lawrence in California invented the cyclotron, a device for smashing sub-atomic particles into nuclei to see what happened. In 1934 Enrico Fermi’s team in Italy set out to bombard the nuclei of every known element with neutrons, starting with hydrogen (1) and going through the entire periodic table.

The assumption was that, by bombarding elements with neutrons, they would dislodge one or two protons in each nucleus and ‘shift’ the element down the periodic table by one or two places. When the German team replicating Fermi’s experiments came to bombard one of the heaviest elements, uranium, they were amazed to discover that the process seemed to produce barium, about half the weight of uranium. The bombardment process seemed to blast uranium nuclei in half. Physics theory, influenced by Einstein, suggested that a) this breakdown would result in the release of energy, and b) some of the neutrons within the uranium nucleus would not be required by the barium atoms and would themselves shoot out to hit other uranium nuclei, and so on.

  • The process would create a chain reaction.
  • Although the collapse of each individual atom would release a minuscule amount of energy, the number of atoms in such a dense element suggested a theoretically amazing release of energy. If every nucleus of uranium in a 1 kilogram lump was split in half, it would release the same energy as 22,000 tons of TNT explosive.
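
That last figure is easy to sanity-check with rough numbers of my own (not Baggott’s): each fission of a uranium-235 nucleus releases about 200 MeV, and a kilogram of U-235 contains about 2.6 × 10²⁴ atoms (1,000 g ÷ 235 g per mole, times Avogadro’s number), so

\[ E \approx 2.6 \times 10^{24} \times 200\ \mathrm{MeV} \approx 8 \times 10^{13}\ \mathrm{J} \]

and since a ton of TNT releases about 4.2 × 10⁹ joules, that comes out at roughly 20,000 tons of TNT – the same order of magnitude as the figure quoted above.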

Otto Frisch, an Austrian Jewish physicist who had fled to Niels Bohr’s lab in Copenhagen after the Nazis came to power, heard about all this from his long-time collaborator, and aunt, Lise Meitner, who was with the German team replicating Fermi’s results. He told Bohr about the discovery. Frisch named it nuclear fission.

In early 1939 papers were published in a German science journal and Nature, while Bohr himself travelled to a conference in America. In the spring of that year fission research groups sprang up around the scientific world. In America Bohr realised anomalies in the experimental results were caused by the fact that uranium comes in two isotopes, U-235 and U-238. The numbers derive from the total number of neutrons and protons in an atom: U-238 has 92 protons and 146 neutrons; U-235 has three fewer neutrons. Slowly evidence emerged that it is the U-235 which breaks down. But it is much rarer than the stable U-238 and difficult to extract and purify. In March 1939 a French team summarised the evidence for nuclear chain reactions in a paper in Nature, specifying the number of particles released by disintegrated nuclei.

All the physicists involved realised that the massive release of energy implied by the experiments could theoretically be used to create an explosive device vastly more powerful than anything then existing. And so did the press. Newspaper articles began appearing about a ‘superbomb’. In April the head of physics at the German Reich Research Council assembled a group devoted to fission research, named the Uranverein, calling for the ban of all uranium exports, and for it to be stockpiled. British MP Winston Churchill asked a friend, Oxford physicist Frederick Lindemann, to prepare a report on the feasibility of a fission bomb. Soviet scientists replicated the results of their western colleagues but didn’t bring the issue to the attention of the authorities – yet. Three Hungarian physicists who were exiles from the Nazis in America grasped the military importance of the discoveries. They approached Einstein and persuaded him to write a warning letter to President Roosevelt, which was written in August 1939 though not delivered to the president until October. Meanwhile the Germans invaded Poland on 1 September and war in Europe began. At this point the Nazis approached the leading theoretical physicist in Germany, Werner Heisenberg, and he agreed to head the Uranverein, leading German research into an atomic bomb until the end of the war.

And so the race to build the first atomic bomb began! The major challenges were to:

  • isolate enough of the unstable isotope U-235 to sustain a chain reaction
  • kick-start the chain reaction, not with the elaborate apparatus available in a lab, but with something which could be packed inside a container (a bomb) and then triggered
  • find a material which could ‘damp’ the process enough for it to be controlled under experimental conditions

From the start there was debate over the damping material, with the two strongest contenders being graphite – though it turned out to be difficult to obtain graphite which was pure enough – and ‘heavy water’, water made with a heavier isotope of hydrogen, deuterium. Only one chemical plant in all of Europe produced heavy water, a fertiliser factory in Norway. The Germans invaded Norway in April 1940 and one spin-off was the ability to commandeer regular supplies from this factory. That is why the factory, and its shipments of heavy water, were targeted for the commando raid and then the air raids dramatised in the war movie The Heroes of Telemark. (Baggott gives a thorough and gripping account of the true, more complex, more terrifying story of the raids.)

Learnings

I never realised that:

  • In the end the Americans built the bomb because they were the only ones with enough resources. Although Hitler and Stalin were briefed about the potential, their scientists told them it would be three or four years before a workable bomb could be made and they both had more pressing concerns. The British had the know-how but not the money or resources. There is a kind of historical inevitability to America being the first to build a bomb.
  • But I never realised there were quite so many communist sympathisers in American society and that so many of them slipped across the line into passing information and/or secrets to the Soviets. The Manhattan Project was riddled with Soviet spies.
  • And I never knew that J. Robert Oppenheimer, the man put in charge of the facilities at Los Alamos and therefore widely known as the ‘father’ of the atom bomb, was himself such a dubious character from the security point of view. Well known for his left-wing sympathies, attending meetings and donating money to crypto-communist causes, he was good friends with communist party members and was approached at least once by Soviet agents to pass on information about the bomb project. No wonder elements in the Army and the FBI wanted him banned from the very project which he was in fact running.

Hiroshima

The first three parts of the book follow in considerable detail the story from the crucial discoveries on the eve of the war, interweaving developments in Britain, America and the USSR up until the detonation of the two A-bombs over Hiroshima and Nagasaki on August 6 and 9, 1945.

  • I was shocked all over again to read that, on the eve of the first so-called Trinity test, the scientists could not entirely rule out the possibility that the chain reaction would spread to the nitrogen in the atmosphere and set the air on fire.
  • I was dazzled by the casual way military planners came up with a short list of cities to hit with the bombs. The historic and (by all accounts) picturesque city of Kyoto was on the list but it was decided it would be a cultural crime to incinerate it. Also US Secretary of War Henry Stimson had gone there on his honeymoon, so it was removed from the list. Thus, in this new age, were the fates, the lives and agonising deaths, of hundreds of thousands of civilians decided.
  • I never knew they only did one test – the Trinity test – before Hiroshima. So little preparation and knowledge.

The justification for the use of the bomb has caused argument from that day to this. Some have argued that the Japanese were on the verge of surrendering, though the evidence presented in Baggott’s account militates against this interpretation. My own view is based on two axioms: 1. the limits of human reason, and 2. a moral theory of complementarity.

Limits of reason When I was a young man I was very influenced by the existentialism of Jean-Paul Sartre and Albert Camus. Life is absurd and the absurdity is caused by the ludicrous mismatch between human claims and hopes of Reason and Justice and Freedom and all these other high-sounding words – and the chaotic shambles which people have made of the world, starting with the inability of most people to begin to live their own lives according to Reason and Logic.

People smoke too much, drink too much, eat too much, marry the wrong person, drive cars too fast, take the wrong jobs, make the wrong decisions, jump off bridges, declare war. We in the UK have just voted for Brexit and Donald Trump is about to become US President. Rational? The bigger picture is that we are destroying the earth through our pollution and wastefulness, and global warming may end up destroying our current civilisation.

Given all these obvious facts about human beings, I don’t see how anyone can accuse us of being rational and logical.

But in part this is because we evolved to live in small packs or groups or tribes, and to deal with fairly simple situations in small groups. Ever since the Neolithic revolution and the birth of agriculture led to stratified and much larger societies and set us on the path to ‘civilisation’, we have increasingly found ourselves in complex situations where there is no one obviously ‘correct’ choice or path; where the notion of a binary choice between Good and Evil breaks down. Most of the decisions I’ve taken personally and professionally aren’t covered by so-called ‘morality’ or ‘moral philosophy’; they present themselves – and I make the decisions – based purely on practical outcomes.

Complementarity Early in his account Baggott explains Niels Bohr’s insight into quantum physics, the way of ‘seeing’ fundamental particles which changed the way educated people think about ‘reality’ and won him a Nobel Prize.

In the 1920s it became clear that electrons, one of the handful of sub-atomic particles, behave like waves and like particles at the same time. In Newton’s world a thing is a thing, self-identical and consistent. In quantum physics this fixed attitude has to be abandoned because ‘reality’ just doesn’t seem to be like that. Eventually the researchers arrived at the notion of complementarity, i.e. that we just have to accept that electrons can be particles and waves at the same time, depending on how you choose to measure them. (I understand other elements of quantum theory also prove that particles can be in two places at the same time.) Conceivably there are other ways of measuring them which we don’t know about yet. Possibly the incompatible behaviour can be reconciled at some ‘deeper’ level of theory and understanding but, despite nearly a century of trying, nobody has come up with a grand unifying theory which does that.

Meanwhile we have to work with reality in contradictory bits and fragments, according to different theories which fit, or seem to fit, the particular phenomena under investigation: Newtonian mechanics for most ordinary-scale phenomena; Einstein’s relativity at the extremes of scale, black holes and gravity, where Newton’s theory breaks down; and quantum theory to explain the perplexing nature of sub-atomic ‘reality’.

In the same way I’d like to suggest that everyday human morality is itself limited in its application. In extreme situations it frays and breaks. Common or garden morality suggests there is one ‘reality’ in which readily identifiable ideas of Good and Bad always and everywhere apply. But delve only a little deeper – consider the decisions you actually have to make, in your real life – and you quickly realise that there are many decisions you have to make about situations which aren’t simple, where none of the alternatives are black and white, where you have to feel your way to a solution, often based on gut instinct.

A major part of the problem may be that you are trying to reconcile not two points of view within one system, but two or more incompatible ways of looking at the world – just like the three worldviews of theoretical physics.

The Hiroshima decision

Thus – with one part of my mind I am appalled off the scale by the thought of a hideous, searing, radioactive death appearing in the middle of your city for no reason without any warning, vaporising half the population and burning the other half to shreds, men, women and little children, the old and babies, all indiscriminately evaporated or burned alive. I am at one with John Hersey’s terrifying account, I am with CND, I am against this anti-human abomination.

But with the calculating, predatory part of my brain I can assess the arguments which President Truman had to weigh up. Using the A-bomb would:

  1. End a war which had dragged on too long.
  2. Save scores of thousands of American lives, an argument bolstered as evidence mounted that the Japanese were mobilising for a fanatical defence to the death of their home islands. I didn’t know that the invasion of the southern island of Japan was scheduled for December 1945 and that the invasion of the main island and the advance on Tokyo were provisionally set to start in March 1946. Given that it took the Allies a year to advance from Normandy to Berlin, this suggests a scenario where the war could have dragged on well into 1947, with the awesome destruction of the entire Japanese infrastructure through firebombing and house-to-house fighting, as well, of course, as vast casualties, Japanese and American.
  3. As General Curtis LeMay, the US commander of strategic air operations against Japan, pointed out, America had been waging a devastating campaign of firebombing against Japanese cities for months. According to one calculation some two-and-a-half million Japanese had been killed in these air attacks to date. He couldn’t see why people got so upset about the atom bombs.

Again, I was amazed at the intransigence of the Japanese military. Baggott reports the cabinet meetings attended by the Japanese Prime Minister, Foreign Minister and the heads of the Army and Navy, where the latter refused to surrender even after the second bomb was dropped on Nagasaki. In fact, when the Emperor finally overruled his generals and issued an order to surrender, the generals promptly launched a military coup and tried to confiscate the Emperor’s recorded message ordering the surrender before it could be broadcast. An indication of the fanaticism American troops would have faced if a traditional invasion had gone ahead.

The Cold War

And the other reason for using the bombs was to prepare for after the war, specifically to tell the Soviet Union who was boss. Roosevelt had asked Stalin to join the war on Japan and this he did in August, making a request to invade the northernmost home island, Hokkaido (the Russians being notoriously less concerned about their own troop losses than the Allies). The book is fascinating on how Stalin ordered an invasion then, three days later, backed off, leaving all Japan to America. But this kind of brinkmanship and the uneasiness which had appeared at Yalta became more and more the dominant issue of world politics once the war was won, and once the USSR began to put in place mini-me repressive communist regimes across Eastern Europe.

Baggott follows the story through the Berlin Airlift of 1948–49 and the outbreak of the Korean War (June 1950), and describes the ‘second physics war’, i.e. the Russian push to build an atomic reactor and then a bomb to rival America’s. In this the Russians were hugely helped by spies inside the Allied projects who, ironically, began to have second thoughts now that Soviet brutality was a bit more obvious to the world. In fact Klaus Fuchs, the most important conduit of atomic secrets to the Russians, eventually confessed his role.

Baggott’s account in fact goes up to the Cuban Missile Crisis of October 1962 and it is so grippingly, thrillingly written I wished it had gone right up to the fall of the Soviet Union. Maybe he’ll write a sequel which covers the Cold War. Then again, most of the scientific innovation had been achieved and the basic principles established; now it was a question of engineering, of improving designs and outcomes. Of building bigger and better bombs and more and more of them.

The last section contains a running thread about the attempts by some of the scientists and politicians to prevent nuclear proliferation, and explains in detail why they came to nothing. The reason was the unavoidable new superpower rivalry between America and Russia, the geopolitical dynamic of mutually assured destruction which dominated the world for the next 45 years (until the fall of the USSR).

A new era in human history was inaugurated in which ‘traditional’ morality was drained of meaning. Or, to put it another way (as I’ve suggested above), an era in which the traditional morality which just about makes sense in large, complex societies reached its limits, frayed and broke.

The nuclear era exposed the limitations of not only human morality but of human reason itself, showing that incompatible systems of values could apply to the same phenomena, in which nuclear truths could be good and evil, vital and obscene, at the same time. An era in which all attempts at rational thought about weapons of mass destruction seemed to lead only to inescapable paradox and absurdity.


Credit

Atomic: The First War of Physics and the Secret History of the Atom Bomb 1939-49 by Jim Baggott was published in 2009 by Icon Books. All quotes and references are to the 2015 Icon Books paperback edition.
