Tips For Trying To Think Less Irrationally

Professor Stuart Sutherland divides his book Irrationality: The Enemy Within into 23 chapters, each addressing a different aspect of why human beings are so prone to irrational, illogical, biased and erroneous thinking.

Having trotted through its allotted subject, each chapter ends with a few tentative suggestions of how to address the various biases and errors described in it.

This blog post is a summary of that advice. I have omitted tips which are so tied to specific examples that they’re incomprehensible out of context, and trimmed most of them down (and expanded a few).

The Wrong Impression

  1. Never base a judgement or decision on a single case, no matter how striking.
  2. In forming an impression of a person (or object) try to break your judgement down into his (or its) separate qualities without letting any strikingly good or bad qualities influence your opinion about the remainder: especially in interviews or medical diagnoses.
  3. When exposed to a train of evidence or information, suspend judgement until the end: try to give as much weight to the last piece of evidence as the first.
  4. Try to resist the temptation to seek out only information which reinforces the decision you have already taken. Try to seek out all the relevant information needed to make a decision.

Obedience

  1. Think before obeying.
  2. Question whether an order is justified.

Conformity

  1. Think carefully before announcing a decision or commitment in front of others. Once done, these are hard to change.
  2. Ask yourself whether you are doing or saying something merely because others are doing or saying it. If you have doubts, really reflect on them and gather evidence for them.
  3. Don’t be impressed by advice on a subject from someone just because you admire them, unless they are an expert on the matter in hand.
  4. Don’t be stampeded into acting by crowds. Stand aloof.

In-groups and out-groups

  1. Don’t get carried away by group decisions. Consciously formulate arguments against the group decision.
  2. If you’re forming a team or committee, invite people with different beliefs or skill sets.
  3. Reflect on your own prejudices and the ‘types’ of people you dislike or despise.

Organisational folly

(A list of errors in large organisations, which are difficult to cure, hence there are no tips at the end of this chapter.)

Misplaced consistency

  1. Beware of over-rating the results of a choice you’ve made (because the human tendency is to slowly come to believe all your decisions have been perfect).
  2. Try not to move by small steps to an action or attitude you would initially have disapproved of.
  3. No matter how much time, effort or money you have invested in a project, cut your losses if the future looks uncertain / risky.

Misuse of Rewards and Punishments

  1. If you want someone to value a task and perform well, do not offer material rewards. Appeal to their sense of responsibility and pride.
  2. If you are a manager, adopt as participatory and egalitarian a style as possible.
  3. If you want to stop children (and anyone else) from doing something, try to persuade rather than threatening them with punishment.

Drive and Emotion

  1. Don’t take important decisions when under stress or strong emotions.
  2. Every time you subdue an impulse, it becomes easier to do so.

Ignoring the Evidence (Pearl Harbour)

  1. Search for the evidence against your hypothesis, decision, beliefs.
  2. Try to entertain hypotheses which are antagonistic to each other.
  3. Respect beliefs and ideas which conflict with your own. They might be right.

Distorting the Evidence (Battle of Arnhem)

  1. If new evidence comes in don’t distort it to support your existing actions or views. The reverse: consider carefully whether it disproves your position.
  2. Don’t trust your memory. Countless experiments prove that people remember what they need to remember to justify their actions and bolster their self-esteem.
  3. Changing your mind in light of new evidence is a sign of strength, not weakness.

Making the Wrong Connections

  1. If you want to determine whether one event is associated with another, never attempt to keep the co-occurrence of events in your head. Maintain a written tally of the four possible outcomes in a 2 x 2 box.
  2. Remember that A is only associated with B if B occurs a higher percentage of the time in the presence of A than in its absence.
  3. Pay particular attention to negative cases.
  4. In particular, do not associate things together because you expect them to be, or because they are unusual.
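The 2 x 2 tally the first tip describes takes only a few lines to keep in code. The counts below are invented purely for illustration; the point is that tip 2 requires comparing the rate of B when A is present against the rate when A is absent.

```python
# Keep a written tally of the four possible outcomes in a 2 x 2 box.
# The counts are hypothetical, purely for illustration.
a_and_b = 30        # A present, B present
a_not_b = 10        # A present, B absent
not_a_b = 60        # A absent, B present
not_a_not_b = 20    # A absent, B absent

# A is associated with B only if B occurs more often when A is present.
p_b_given_a = a_and_b / (a_and_b + a_not_b)          # 30/40 = 0.75
p_b_given_not_a = not_a_b / (not_a_b + not_a_not_b)  # 60/80 = 0.75

print(p_b_given_a, p_b_given_not_a)
# Equal rates: despite 30 striking A-and-B cases, there is no association.
```

Note that the ‘negative’ cells (B absent) are exactly the ones intuition tends to ignore, which is why tip 3 singles them out.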

Mistaking Connections in Medicine

(Focuses on doctors’ failure to use 2 x 2 tables in order to establish correct probabilities in diagnosis, so maybe the tip should be: Don’t try to calculate conditional probabilities in your head – write them down.)

Mistaking the Cause

  1. Suspect any explanation of an event in which the cause and the effect are similar to one another.
  2. Suspect all epidemiological findings unless they are supported by more reliable evidence.
  3. Consider whether an event could have causes other than the one you first think of.
  4. In allocating cause and effect, consider that they might happen in the opposite direction to that you first choose.
  5. Be sceptical of any causal relationship unless there is an underlying theory that explains it.
  6. In apportioning responsibility for an action, do not be influenced by the magnitude of its effect.
  7. Don’t hold someone responsible for an action without first considering what others would have done in their place.
  8. Don’t assume that other people are like you.

Misinterpreting the Evidence

  1. Do not judge solely by appearances. If someone looks more like an X than a Y, they may still be a Y if there are many more Ys than Xs.
  2. A statement containing two or more pieces of information is always less likely to be true than one containing only one piece of information.
  3. Do not believe a statement is true just because part of it is true.
  4. If you learn the probability of X given Y, you cannot turn it into the probability of Y given X without also knowing the base rates of X and Y.
  5. Don’t trust small samples.
  6. Beware of biased samples.
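Tips 1 and 4 are both applications of Bayes’ rule. A sketch with invented numbers (a hypothetical diagnostic test) shows how badly things go wrong when the base rate is ignored:

```python
# Bayes' rule: turning P(positive | disease) into P(disease | positive)
# requires the base rate. All figures here are invented for illustration.
base_rate = 0.01            # 1% of the population has the condition
p_pos_given_disease = 0.90  # test sensitivity
p_pos_given_healthy = 0.10  # false-positive rate

p_pos = (p_pos_given_disease * base_rate
         + p_pos_given_healthy * (1 - base_rate))
p_disease_given_pos = p_pos_given_disease * base_rate / p_pos

print(round(p_disease_given_pos, 3))  # 0.083
# A '90% accurate' test plus a 1% base rate gives only about an 8% chance
# that a positive result really means disease.
```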

Inconsistent decisions and Bad Bets

  1. Always work out the expected value of a gamble before accepting it.
  2. Before accepting any form of gamble, be clear what you want from it – a high expected value, the remote possibility of winning a large sum with a small outlay, a probable but small gain, or just the excitement of gambling and damn the expense. If you seriously intend to make money, only the expected value matters.
  3. Don’t be anchored by the first figure you hear; ignore it and reason from scratch.
  4. A chain of connected or conditional probabilities makes an event less likely with every link added. Conversely, numerous independent routes to an outcome can combine to make it quite likely.
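Both tips reduce to short calculations. The gamble and the chain of probabilities below are made-up examples:

```python
# Tip 1: expected value = sum of probability * payoff, minus the stake.
stake = 2.0
outcomes = [(0.1, 10.0), (0.9, 0.0)]  # (probability, gross payoff) - invented
expected_value = sum(p * payoff for p, payoff in outcomes) - stake
print(expected_value)  # -1.0: on average you lose 1 per play, so decline

# Tip 4: five conditional steps, each 90% likely, chained together.
p_chain = 0.9 ** 5
print(round(p_chain, 2))  # 0.59: each extra condition erodes the odds
```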

Overconfidence

  1. Distrust anyone who says they can predict the future from the past.
  2. Be wary of anyone who claims to be able to predict the future.
  3. Try to control your own over-confidence e.g.
    • wherever possible, try to write out and calculate probabilities rather than using ‘intuition’
    • always think of arguments which contradict your position and work them through

Risks

  1. People are liable to ignore risks if instructed to, so it is managers’ responsibility to assess the risks for their staff.
  2. Insidious chronic dangers may kill more people than dramatic accidents, e.g. coal pollution kills more people than nuclear accidents.

False Inferences

  1. Regression to the mean: remember that whenever anything extreme happens, the chances are that the next event will be much less extreme. This explains why a second novel, album or sports season is often disappointing after an award-winning first.
  2. If two pieces of evidence always agree, you only need one of them to make a prediction.
  3. Avoid the gambler’s fallacy, i.e. the belief that a random event becomes more or less likely given a previous event or series of events – e.g. that if you toss a coin long enough, heads must come up. No. Each new toss is a new event, uninfluenced by previous tosses.
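The gambler’s fallacy in tip 3 can be checked directly by simulation. This sketch (a plain coin-flip model) records what follows every run of five tails:

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

# After five tails in a row, is heads 'due'? Record the very next flip
# following every run of five tails and see how often it comes up heads.
follow_ups = []
tails_run = 0
for _ in range(200_000):
    heads = random.random() < 0.5
    if tails_run >= 5:
        follow_ups.append(heads)
    tails_run = 0 if heads else tails_run + 1

print(round(sum(follow_ups) / len(follow_ups), 2))
# Close to 0.5: the coin has no memory of the previous tosses.
```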

The Failure of Intuition

  1. Suspect anyone who claims to have good intuition.
  2. If you are in a profession, consider using mathematical models of decision making instead of trusting your ‘intuition’.

Utility

  1. When the importance of a decision merits the expenditure of time, use Utility Theory.
  2. Before making an important decision decide what your overall aim is, whether it be to maximise the attainment of your goals, to save yourself from loss, to make at least some improvement to your situation etc.
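A minimal sketch of the calculation the tip refers to, with invented options, probabilities and utilities: list each option’s possible outcomes, weight each utility by its probability, and choose the option with the highest expected utility.

```python
# Expected utility: sum of probability * utility over each option's outcomes.
# Options, probabilities and utilities are all invented for illustration.
options = {
    "safe option":  [(1.0, 60)],              # a certain, middling outcome
    "risky option": [(0.2, 100), (0.8, 20)],  # small chance of a big win
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in options.items():
    print(name, expected_utility(outcomes))
# safe option 60.0, risky option 0.2*100 + 0.8*20 = 36.0

best = max(options, key=lambda name: expected_utility(options[name]))
print("choose:", best)  # choose: safe option
```

The interesting work, as the tip says, is deciding what the utilities should be before you calculate anything.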

Causes, cures and costs

  • keep an open mind
  • reach a conclusion only after reviewing all the possible evidence
  • it is a sign of strength to change one’s mind
  • seek out evidence which disproves your beliefs
  • do not ignore or distort evidence which disproves your beliefs
  • never make decisions in a hurry or under stress
  • where the evidence points to no obvious decision, don’t take one
  • learn basic statistics and probability
  • substitute mathematical methods (cost-benefit analysis, regression analysis, utility method) for intuition and subjective judgement

Thoughts

This is all very good advice, and I’d advise anyone to read Sutherland’s book. However, I can see scope for improvement or taking it further.

The structure above reflects Sutherland’s: he has arranged the field in terms of the errors people make, with each chapter devoted to a type of error and various experiments showing how common it is.

In a sense this is an easy approach. There exist nowadays numerous lists of cognitive errors and biases.

Arguably, it would be more helpful to make a book or helpsheet arranged by problems and solutions, in which – instead of beginning yet another paragraph ‘Imagine you toss a coin a thousand times…’ in order to demonstrate another common misunderstanding of probability theory – each chapter focused on a type of real-world situation and how to handle it.

It would be titled something like How to think more clearly about… and then devote a chapter each to meeting new people, interviews, formal meetings and so on. There would be a standalone chapter devoted just to probability theory, since this stands out to me as being utterly different from the psychological biases – and maybe another one devoted solely to gambling since this, also, amounts to a specialised area of probability.

There would be one on financial advisers and stock brokers, giving really detailed advice on what to look for before hiring one, and whether you need one at all.

There would be one solely about medical statistics i.e. explaining how to understand the risks and benefits of medical treatment, if you ever need some.

Currently, although Sutherland’s book and the list of tips listed above are useful, it is impossible to remember all of them. A more practical approach would be to have a book (or website) of problems or situations where you could look up the situation and be reminded of the handful of simple but effective principles you should bear in mind.



I, Robot by Isaac Asimov (1950)

I, Robot is a ‘fixup’ novel, i.e. it is not a novel at all, but a collection of science fiction short stories. The nine stories originally appeared in the American magazines Super Science Stories and Astounding Science Fiction between 1940 and 1950, and were then compiled into a book for stand-alone publication by Gnome Press in 1950, in the same way that the Foundation trilogy also appeared as magazine short stories before being packaged up by Gnome.

The stories are (sort of) woven together by a framing narrative in which the fictional Dr. Susan Calvin, a pioneer of positronic robots and now 75 years old, tells each story to a reporter who’s been sent to do a feature on her life.

These interventions don’t precede and end every story; if they did there’d be eighteen of them; there are in fact only seven and I think the stories are better without them. Paradoxically, they make a more effective continuous narrative without Asimov’s ham-fisted linking passages. Calvin appears as a central character in three of them, anyway, and the comedy pair of robot testers, Powell and Donovan appear in another three consecutive stories, so the stories already contained threads and continuities…

A lot is explained once you learn that these were pretty much the first SF stories Asimov wrote. Since he was born in 1920, Robbie was published when he was just 20! Runaround when he was 21, and so on. His youth explains a lot of the gawkiness of the language and the immaturity of his view of character and, indeed, of plot.

So the reader has a choice: you can either judge Asimov against mature, literary writers and be appalled at the stories’ silliness and clunky style; or take into account how young he was, and be impressed at the vividness of his ideas – the Three Laws, the positronic brain etc – ideas which are silly, but proved flexible and enduring enough to be turned into nearly 40 short stories, four novels, and countless spin-offs, not least the blockbusting Will Smith movie.

Introduction

The introduction is mostly interesting for the fictional timeline it introduces around the early development of robots. In 1982 Susan Calvin was born, the same year Lawrence Robertson set up U.S. Robot and Mechanical Men Inc. The ‘now’ of the frame story interview is 75 years later, i.e. 2057.

  • 1998 intelligent robots are available to the public
  • 2002 mobile, speaking robot invented
  • 2005 first attempt to colonise Mercury
  • 2006 an asteroid has a laser beam placed on it to relay the sun’s energy back to earth
  • 2008 Susan Calvin joins U.S. Robot and Mechanical Men Inc as its first robopsychologist
  • 2015 second, successful, attempt to colonise Mercury
  • 2037 the hyperatomic motor invented (as described in the story, Escape!)
  • 2044 the Regions of earth, having already absorbed and superseded ‘nations’, themselves come together to form a global Federation

What this timeline indicates is Asimov’s urge to systematise and imperialise his stories. What I mean is that, unlike most short story writers, his short stories are always part of a larger narrative (systematise), and the larger narrative tends to be epic – here it is the rise of robots from non-talking playmates to controllers of man’s destiny. Same as the Foundation series, where he doesn’t just tell stories about a future planet, or a future league of inhabited star systems – but the entire future of the galaxy.

1. Robbie (1940, revised 1950; first appeared in Super Science Stories)

It is 1996 and George Weston has bought his 8-year-old daughter, Gloria, a mute robot and playfellow. The story opens with them playing and laughing and Gloria telling Robbie stories, his favourite treat. However, Gloria’s mother does not like the thought of her daughter being friends with a robot, so gets her husband to take it back to the factory and buy a dog instead. Gloria is devastated, hates the dog and pines away. To distract her, her parents take her on a trip to futuristic New York. Gloria is excited but, to her mum’s dismay, chiefly because she thinks the family are going there to track down Robbie, who she’s been told has ‘run away’. When told that’s not the case she returns to sulking. Dad has a bright idea: to take her to a factory where they make robots in order to show Gloria that Robbie is not human, doesn’t have a personality, is just an assemblage of cogs and wires. Unbeknown to Gloria or his wife, George has in fact arranged for Robbie to be on the production line. Gloria spots him, goes mad with joy and runs across to him – straight into the path of a huge tractor. Before any of the humans can react, Robbie with robot speed hurtles across the shop floor and scoops Gloria out of danger.

This story, like the others, is supposed to give rise to some kind of debate about whether robots are human, have morals, are safe and so on. Well, since it is nearly 2019 and we still don’t have workable robots, that debate is fantasy, and this is a sweet, cheapjack story, written with flash and humour.

2. Runaround (March 1942 edition of Astounding Science Fiction)

It is 2005 and two robot testers, Powell and Donovan, have been sent to Mercury along with Robot SPD-13, known as ‘Speedy’. Ten years earlier an effort to colonise Mercury had been abandoned. Now the pair are trying again with better technology. They’ve inhabited the abandoned buildings the previous settlers left behind but discovered that the photo-cell banks that provide life support to the base are low on selenium and will soon fail.

The nearest selenium pool is seventeen miles away, so Donovan has sent Speedy to get some. Hours have gone by and he still hasn’t returned. So the story is a race against time.

But it is also a complicated use of Asimov’s famous Three Laws of Robotics. The pair discover down in the bowels of the abandoned building some primitive robots who carry them through the shade of low hills as close to the selenium pool and Speedy as they can get. (If they go out into direct sunlight they will begin to be irreparably damaged, even through spacesuits, by the fierce radioactive glare.)

They see and hear Speedy (by radio) and discover he appears to be drunk, reciting the words to Gilbert and Sullivan operettas. The Three Laws of Robotics are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Now because Speedy was so expensive to build, the Third Law had been strengthened to preserve him and he has discovered something neither spaceman anticipated, which is that near the selenium are pools of iron-eating gas – much of Speedy being made of iron. When Donovan sent him to get some selenium he didn’t word the command particularly strongly.

So what’s happened is that, in Speedy’s mind, the second and third laws have come into conflict and given Speedy a sort of nervous breakdown. Hence the drunk-like behaviour. He approaches the selenium in obedience to the second law; but then detects the gas and backs away.

The astronauts try several tactics, including getting their robots to fetch, and then lob towards the selenium pool, canisters of oxalic acid to neutralise the carbonic gas. Eventually they tumble to the only thing which will trump laws 2 and 3, which is law 1. Powell walks out of the shadow of the bluff where they’ve been sheltering, into full sunlight, and calls to Speedy (over the radio) that the radiation is hurting him and begging Speedy to help. Law one overrides the other two and Speedy, restored to full working order, hurtles over, scoops him up and carries him into the protective shade.

At which point they give Speedy new instructions to collect the selenium, emphasising that whether the photo-cell banks are replenished is life or death for them. With the full force of the First Law behind him, this time Speedy overrides the Third Law (self-protection), fetches loads of selenium, they fix the cells, and everyone is happy.

3. Reason (April 1941 issue of Astounding Science Fiction)

A year later the same ill-fated pair of spacemen are moved to a space station orbiting the sun, whose task is to focus the sun’s energy into a concentrated beam which is then shot back to a receiver on earth. They finish constructing one of the first of a new range of robots, QT-1, which they nickname ‘Cutie’, and are disconcerted when it starts to question them. Specifically, it refuses to believe that they made it. In a series of increasingly rancorous conversations, Cutie dismisses the men as flimsy assemblages of blood and flesh, obviously not built to last.

Cutie eventually decides that the main power source of the ship must be the ‘Master’. He dismisses all the evidence of space, visible from the ship’s portholes, and all the books aboard the ship, as fables and fantasies designed to occupy the ‘lower’ minds of the men.

No, Cutie has reasoned itself into the belief that ‘There is no Master but Master, and QT-1 is His prophet.’ Despite this it carries on going about its duties, namely supervising the less advanced robots in the various tasks of keeping the space station maintained. Until the guys realise, to their further consternation, that Cutie has passed his religion on to the lesser robots, and they now refuse to obey the humans. In fact they pick up the humans, take them to their living quarters and lock them in under house arrest.

Powell and Donovan become very anxious because a solar storm is expected which will make the immensely high-powered beam to the earth waver and wobble. Even a small deviation will devastate hundreds of square miles back on earth. But to their amazement, and relief, Cutie manages the beam perfectly, compensating for the impact of the solar storm far better than they could have. At some (buried) level Cutie is still functioning according to the First and Second Laws, i.e. protecting humans. The pair end up wondering whether a robot’s nominal ‘beliefs’ matter at all, so long as it obeys the three laws and functions perfectly.

Although marked by Asimov’s trademark facetious humour, and despite the schoolboy level on which the ‘debate’ with the robot is carried out, and noting the melodramatic threat of the approach of the solar storm – this is still a humorous, effective short story.

4. Catch That Rabbit (February 1944 issue of Astounding Science Fiction)

It’s those two robot testers, Powell and Donovan, again. Their jokey banter is laid on with a trowel and reeks of the fast-talking, wise-cracking comedians of the era.

Powell said, ‘Mike — you’re right.’
‘Thanks, pal. I knew I’d do it some day.’
‘All right, and skip the sarcasm. We’ll save it for Earth, and preserve it in jars for future long, cold winters.’

The plot is comparable to the previous story. Now they’re on an asteroid mining station where a ‘master’ robot – nicknamed Dave because his number is DV-5 – is in charge of six little worker robots – nicknamed the ‘fingers’ – digging up some metal ore. Problem is they’re not hitting their quotas and, when Powell and Donovan eavesdrop on the worker robots via a visi-screen, they are appalled to see that, as soon as their backs are turned, Dave leads the other six in vaudeville dance routines!

After much head-scratching and trying out various hypotheses – as in the previous story – they eventually tumble to the problem. The trouble seems to kick off whenever Dave encounters a slight problem. So it would seem that supervising six robots is simply too much of a strain, when an additional problem is added. Solution? Eliminate one of the robots. Dave can handle the remaining five, plus whatever issues arise in the blasting and mining operation, just fine.

There is, however, a typically cheesy Asimov punchline. So what’s with the chorus line dancing? Donovan asks. Powell replies that when Dave was stymied and his processors couldn’t decide what to do – he resorted to ‘twiddling his fingers‘ boom boom!

5. Liar! (May 1941 issue of Astounding Science Fiction)

Accidentally, a robot is manufactured which can read human minds. With typical Yank levity it is nicknamed Herbie, since its number is RB-34. U.S. Robot and Mechanical Men Inc mathematician Peter Bogert and robopsychologist Susan Calvin, at various points, interview it. Now Herbie, as well as answering their questions, reads what’s on their minds, namely that Bogert wants to replace Lanning as head of U.S. Robot and Mechanical Men Inc, and that Calvin is frustratedly in love with a young officer at the firm, Milton Ashe.

To their delight, Herbie tells Bogert that Lanning has handed in his resignation and nominated Bogert to replace him, and tells Calvin that Ashe is in love with her too.

Their happiness doesn’t last. When Bogert confronts Lanning with news of his resignation, the latter angrily denies it. Calvin is on the point of declaring her feelings for Ashe when the latter announces that he is soon to marry his fiancée.

In the climactic scene the four characters confront Herbie with his ‘lies’ and it is Calvin who stumbles on the truth. Herbie can read minds. He knows what his human interlocutors wish. He knows revealing that those wishes are unrequited or untrue will psychologically damage them. He is programmed to obey the First Law of Robotics, i.e. no robot must harm a human being. And so he lies to them. He tells them what they want to hear.

Beside herself with anger (and frustration) Calvin taunts the cowering robot into a corner of the room and eventually makes its brain short circuit.

6. Little Lost Robot (March 1947 issue of Astounding Science Fiction)

On Hyper Base, a military research station on an asteroid, scientists are working to develop the hyperspace drive. One of their robots goes missing. US Robots’ Chief Robopsychologist Dr. Susan Calvin, and Mathematical Director Peter Bogert, are called in to investigate.

They are told that the Nestor (a characteristic nickname for a model NS-2 robot) was one of a handful which had had their First Law of Robotics amended. They learn that, as part of their work, the scientists on Hyper Base have to expose themselves to risky levels of gamma rays, albeit for only short, measured periods. They and their managers found the Nestors kept interfering to prevent them exposing themselves, or rushing out to fetch them back in – in rigid obedience to the First Law, which is to prevent any humans coming to harm.

After the usual red herrings, arguments and distractions it turns out that a nervy physicist, Gerald Black, who had been working with the missing robot, had gotten angry and told it to ‘get lost’. Which is exactly what it proceeded to do. A shipment of 62 Nestors had docked on its way off to some further destination. Next thing anyone knew there were 63 Nestors in its cargo hold and nobody could detect which of the 63 was the one which had had its First Law tampered with.

As usual Asimov creates a ‘race against time’ effect by having Calvin become increasingly concerned that Nestor 10 has not only ‘got lost’ but become resentful at being insulted by ‘an inferior being’, and might carry on becoming more resentful until it plans something actively malevolent.

Calvin carries out a number of tests to try and distinguish Nestor 10, and becomes genuinely alarmed when entire cohorts of the nestors fail to react quickly enough to save a human (placed in a position of jeopardy for the sake of the experiment).

Finally, she catches it out by devising a test which distinguishes Nestor 10 as the only one which has received additional training in dealing with gamma radiation since arriving at Hyper Base, the other 62 remaining ignorant.

After Nestor 10 has been revealed, Calvin sharply orders it to approach her, which it does, whining and complaining about its superiority, how it shouldn’t be treated like that, how it was ordered to lose itself and she mustn’t reveal its whereabouts… and attacks her. At which point Black and Bogert flood the chamber with enough gamma rays to incapacitate it. It is destroyed, and the other 62 ‘innocent’ Nestors are trucked off to their destination.

Once again, this story is a scary indictment of the whole idea of robots: if corporations can merrily tamper with the laws of robotics in order to save money or get a job done, then obviously they will. In which case the laws aren’t worth the paper they’re written on.

7. Escape! (August 1945 issue of Astounding Science Fiction)

Published in the month that the War in the Pacific – and so the Second World War – ended, after the dropping of the two atom bombs on Japan.

In that month’s issue of Astounding Science Fiction, readers learned that U.S. Robot and Mechanical Men Inc. possess a Giant Brain, a positronic doodah floating in a helium globe, supported by wires etc. Reassuringly, it is a chattily American brain:

Dr. Calvin said softly, ‘How are you, Brain?’
The Brain’s voice was high-pitched and enthusiastic, ‘Swell, Miss Susan.’

A rival firm approaches U.S. Robot etc. It too is working on a hyperdrive and, when its scientists fed all the information into their supercomputer, it crashed. Tentatively, our guys agree to feed the same info into The Brain. Now the thing about The Brain is that it is emotionally a child. Dr Calvin thinks that this is why it manages to process the same information which blew up the rival machine: because it doesn’t take the information so seriously – particularly the crucial piece of information that, during the hyperdrive jump, human beings effectively die.

It swallows all the information and happily agrees to make the ship in question. Within a month or so the robots it instructs have built a smooth shiny hyperdrive spaceship. It is over to the two jokers we’ve met in the earlier stories, Powell and Donovan, Greg and Mike, to have a look. But no sooner are they in it than the doors lock and it disappears into space. Horrified at being trapped, the two men wisecrack their way around their new environment. Horrified at losing two test pilots in a new spaceship, Dr Calvin very carefully interviews The Brain. Oh, they’ll be fine, it says, breezily.

Meanwhile, Mike and Greg undergo the gut-wrenching experience of hyperspace travel and – weird scenes – imagine themselves dead and queueing up outside the Pearly Gates to say hello to old St Peter. When they come round from these hallucinations, they look at the parsec-ometer on the control board and realise it is set at 300,000.

They were conscious of sunlight through the port. It was weak, but it was bluewhite – and the gleaming pea that was the distant source of light was not Old Sol.
And Powell pointed a trembling finger at the single gauge. The needle stood stiff and proud at the hairline whose figure read 300,000 parsecs.
Powell said, ‘Mike, if it’s true, we must be out of the Galaxy altogether.’
Donovan said, ‘Blazes, Greg! We’d be the first men out of the Solar System.’
‘Yes! That’s just it. We’ve escaped the sun. We’ve escaped the Galaxy. Mike, this ship is the answer. It means freedom for all humanity — freedom to spread through to every star that exists — millions and billions and trillions of them.’

Eventually the spaceship returns to earth, joking Mike and Greg stumble out, unshaven and smelly, and are led off for a shower.

Dr Calvin explains to an executive board (i.e. all the characters we’ve met in the story, including Mike and Greg) that the equations they gave The Brain included the fact that humans would ‘die’ – their bodies would be completely disassembled, as would the molecules of the space ship – in order for it to travel through hyperspace. It was this knowledge of certain ‘death’ which had made the rivals’ computer – obeying the First Law of Robotics – short circuit.

But Dr Calvin had phrased the request in such a way to The Brain as to downplay the importance of death. (In fact this is a characteristic Asimovian play with words – Dr Calvin’s instructions to The Brain made no sense when I read them, and only make sense now, when she uses them as an excuse for why The Brain survived but the rival supercomputer crashed.

Like most of Asimov’s stories, there is a strong feeling of contrivance, that stories, phrases or logic are wrenched out of shape to deliver the outcome he wants. This makes them clever-clever, but profoundly unsatisfying, and sometimes almost incomprehensible.)

Anyway, The Brain still registered the fact that the testers would ‘die’ (albeit they would be reconstituted a millisecond later), and this is the rather thin fictional excuse for the fact that The Brain retreated into infantile humour: designing a spaceship which was all curves; providing the testers with food, but making it only baked beans and milk; providing toilet facilities, but making them difficult to find; and so on. Oh, and ensuring that at the moment of molecular disintegration the testers had the peculiar jokey experience of queueing for heaven, hearing their fellow queuers and some of the angels all yakking like extras in a 1950s musical. That was all The Brain coping with its proximity to breaking the First Law.

Follow all that? Happy with that explanation? Happy with that account of how the human race makes the greatest discovery in its history?

Or is it all a bit too much like a sketch from the Jerry Lewis show?

Lanning raised a quieting hand, ‘All right, it’s been a mess, but it’s all over. What now?’
‘Well,’ said Bogert, quietly, ‘obviously it’s up to us to improve the space-warp engine. There must be some way of getting around that interval of jump. If there is, we’re the only organization left with a grand-scale super-robot, so we’re bound to find it if anyone can. And then — U. S. Robots has interstellar travel, and humanity has the opportunity for galactic empire.’ !!!

Evidence (September 1946 issue of Astounding Science Fiction)

A story about a successful politician, Stephen Byerley. Having been a successful attorney, he is running for mayor of a major American city. His opponent, Francis Quinn, claims that Byerley is a robot, built by the ‘real’ Stephen Byerley, who was crippled in a car accident years earlier.

The potential embarrassment leads U.S. Robots and Mechanical Men, Inc. to send their top robopsychologist, Susan Calvin, to test whether Byerley is a robot or not.

  • She offers him an apple and Byerley takes a bite, but he may have been designed with a stomach.
  • Quinn sends a journalist with a hidden X-ray camera to photograph Byerley’s insides, but Byerley is protected by some kind of force shield.

Quinn and Calvin both make a big deal of the fact that Byerley, if a robot, must obey the three Laws of Robotics i.e. will be incapable of harming a human. This becomes a centrepiece of the growing opposition to Byerley, stoked by Quinn’s publicity machine.

During a globally broadcast speech to a hostile audience, a heckler climbs onto the stage and challenges Byerley to hit him in the face. Millions watch the candidate punch the heckler in the face. Calvin tells the press that Byerley is human. With the expert’s verdict disproving Quinn’s claim, Byerley wins the election.

Afterwards, Calvin visits Byerley and shrewdly points out that the heckler may have been a robot, manufactured by Byerley’s ‘teacher’, a shady figure who has gone ‘to the country’ to rest and whom both Calvin and Quinn suspect is the real Byerley, hopelessly crippled but with advanced robotics skills.

This is one of the few stories where Asimov adds linking material, in which the elderly Calvin tells the narrator-reporter that Byerley arranged to have his body ‘atomised’ after his death, so nobody ever found out.

All very mysterious and thrilling for the nerdy 14-year-old reader, but the adult reader can pick a million holes in it – such as why the authorities never compelled Byerley to reveal the whereabouts of the mysterious ‘teacher’, or compelled him to have an X-ray.

The Evitable Conflict (June 1950 issue of Astounding Science Fiction)

The Byerley story turns out to be important because this same Stephen Byerley goes on to become the head of the planetary government, or World Co-Ordinator, as it is modestly titled.

The story is a fitting end to the sequence because it marks the moment when robots – which we saw in Robbie as little more than playthings for children in 1998 – take over the running of the world by the 2050s.

Byerley is worried because various industrial projects – a canal in Mexico, mines in Spain – are falling behind. Either there’s something wrong with the machines which, by this stage, are running everything… or there is human sabotage.

He calls in Susan Calvin, by this stage 70 years old and the world’s leading expert on robot psychology.

She listens as Byerley gives her a detailed description of his recent tour of the Four Regions of Earth (and the 14-year-old kid reader marvels and gawps at how the planet will be divided up into four vast Regions, with details of which one-time ‘countries’ they include, their shiny new capital cities, and their Asian, African and European leaders whom Byerley interviews).

This is all an excuse for Asimov to give his teenage view of the future, which is that rational, complex, calculating Machines will take over the running of everything. The coming of atomic power, and space travel, will render the conflict between capitalism and communism irrelevant. The European empires will relinquish their colonies, which will become free and independent. And all of humanity will realise, at the same time, that there is no room any more for nationalism or political conflicts. It will become one world. Everyone will live in peace.

Ahhh isn’t that nice.

Except for this one nagging fact — that some of the projects overseen by the Machines seem to be failing. Byerley tells Calvin his theory. There is a political movement known as the Society for Humanity. It can be shown that the men in charge of the Mexican canal, the Spanish mines and the other projects which are failing are all members of the Society for Humanity. Obviously they are tampering with figures or data in order to sabotage project successes, to reintroduce shortages and conflicts and to discredit the Machines. Therefore, Byerley tells Calvin, he proposes having every member of the Society for Humanity arrested and imprisoned.

Calvin – and this is a typical Asimov coup, to lead the reader on to expect one thing and then, with a whirl of his magician’s cape, to reveal something completely different – Calvin says No. Byerley has got it exactly wrong. The Machines – vastly complex, proceeding on more data than any one human could ever manage, continually improving, and acting under an expanded version of the First Law of Robotics, namely that no robot or machine must harm humanity – have detected that the Society for Humanity presents a threat to the calm, peaceful, machine-controlled future of humanity. And so the Machines have falsified the figures and made the projects fail — precisely in order to throw suspicion on the Society for Humanity, precisely to make the World Co-Ordinator arrest them, precisely in order to eliminate them.

In other words, the Machines have now acquired enough data about the world, and enough insight into human psychology, to be guiding humanity’s destiny. It is too late to avert or change this, she tells Byerley. They are in control now.

Despite its silliness, it is nonetheless a breath-taking conclusion to the book, and, as with the Foundation stories, makes you feel like you really have experienced a huge and dazzling slab of mankind’s future.


Comments

The inadequacy of the Three Laws epitomises the failure of all attempts to replicate the human mind

I suppose this has occurred to everyone who’s ever read these stories, but the obvious thing about them is that every single story is about robots going wrong. This doesn’t exactly fill you with confidence about a robotic future.

A bit more subtly, what they all demonstrate is that ‘morality’ is a question of human interpretation: people interpret situations and decide how to act accordingly. This interpretative ability cannot be replicated by machines, computers, artificial intelligence, call them what you will. It probably never will be, for the simple reason that it is imperfect, partial and different in each individual human. You will never be able to programme ‘robots’ with universal laws of behaviour and morality when these don’t even exist among humans.

Asimov’s Three Laws of Robotics sound impressive to a 14-year-old sci-fi nerd, or as the (shaky) premise to a series of pulp sci-fi stories – but the second you begin trying to apply them to real life situations (for example, two humans giving a robot contradictory orders) you immediately encounter problems. Asimov’s Three Laws sound swell, but they are, in practice, useless. And the fact that the robots in the stories seem to do nothing but break down, demonstrates the problem.

Neither the human mind nor the human body can be replicated by science

Asimov predicted that humans would have developed robots by now (2018, when I write), indeed by the 1990s.

Of course, we haven’t. This is because nobody understands how the human brain works and no technologists have come anywhere near replicating its functionality. They never will. The human brain is the most complicated object in the known universe. It has taken about three billion years to evolve (if we start back with the origin of life on earth). The idea that guys in white coats in labs working with slide rules can come anywhere close to matching it in a few generations is really stupid.

And that’s just human intelligence. On the physical side, no scientists have created ‘robots’ with anything like the reaction times and physical adroitness of even the simplest animals.

We don’t need robots since we have an endless supply of the poor

Apart from the a) physical and b) mental impossibility of creating ‘robots’ with anything remotely like human capacities, the most crucial reason it hasn’t happened is because there is no financial incentive whatsoever to create them.

We have cheap robots already, they are called migrant workers or slaves, who can be put to work in complex and demanding environments – showing human abilities to handle complex situations, perform detailed and fiddly tasks – for as little as a dollar a day.

Charities estimate there are around 40 million slaves in the world today, 2018. So why waste money developing robots? Even if you did develop ‘robots’, could they be as cheap to buy and maintain as human slaves? Would they cost a dollar a day to run? No.

Only in certain environments which require absolutely rigid, inflexible, repetitive tasks, and which are suitable for long-term heavy investment because of the certainty of return, has anything like a robot been deployed – for example, on the production lines of car factories.

But these are a million miles away from the robots Asimov envisaged, which you can sit down and chat to, let alone pass for human, as R. Daneel Olivaw does in Asimov’s robot novels.

All technologies break

And the last but not the least objection to Asimov’s vision of a robot-infested future is that all technologies break. Computers fail. Look at the number of incidents we’ve had just in the past month or so of major breakdowns by computer networks, and these are networks run by the biggest, richest, safest, most supervised, cleverest companies in the world.

On 6 December 2018 around 30 million people using the O2 network suffered a complete outage of the system. The collapse affected 25 million O2 subscribers as well as customers of Tesco Mobile and Sky Mobile, businesses such as Deliveroo, the digital systems on board all 8,500 London buses, and systems at some hospitals.

In September 2018 Facebook admitted that at least 50 million accounts had been hacked, with a possible 40 million more vulnerable. Facebook-owned Instagram and WhatsApp were also affected, along with apps and services such as Tinder that authenticate users through Facebook.

In April 2018 TSB’s online banking service collapsed following a botched migration to a new platform. Some customers were unable to access their accounts for weeks afterwards. About 1,300 customers were defrauded, 12,500 closed their accounts and the outage cost the bank £180 million.

These are just the big ones I remember from the past few months, and the ones we got to hear about (i.e. weren’t hushed up). In the background of our lives and civilisation, all computer networks are being attacked, failing, crashing, requiring upgrades, or proper integration, or becoming obsolete, all of the time. If you do any research into it you’ll discover that the computer infrastructure of the international banks which underpin global capitalism is out of date, rickety, patched-up, vulnerable to hacking, and more vulnerable still to complex technical failures.

In Asimov’s world of advanced robots, there is none of this. The robots fix each other and all the spaceships, they are – according to the final story – ‘self-correcting’, everything works fine all the time, leaving humans free to swan around making vast conspiracies against each other.

This is the biggest fantasy or delusion in Asimov’s universe. Asimov’s fictions give no idea at all of the incomprehensible complexity of a computerised world and – by extension – of all human technologies and, by a further extension, of human societies.


Asimov breaks the English language

Asimov is a terrible writer, hurried, slapdash, trying to convey often pretty simple emotional or descriptive effects through horribly contorted phraseology.

As I read I could hear a little voice at the back of my mind, and after a while realised it was the voice of the English language, crying out as if from a long distance away, ‘Help me! Save me! Rescue me from this murderer!’

The main corridor was a narrow tunnel that led in a hard, clatter-footed stretch along a line of rooms of no interdistinguishing features.

Harroway had no doubt on the point of to whom he owed his job.

Dr. Lanning smiled in a relief tangible enough to make even his eyebrows appear benevolent.

The signal-burr brought all three to a halt, and the angry tumult of growingly unrestrained emotion froze.

The two giant robots were invisible but for the dull red of their photoelectric eyes that stared down at them, unblinking, unwavering and unconcerned. Unconcerned! As was all this poisonous Mercury, as large in jinx as it was small in size.


Related links

Other science fiction reviews

1888 Looking Backward 2000-1887 by Edward Bellamy – Julian West wakes up in the year 2000 to discover a peaceful revolution has ushered in a society of state planning, equality and contentment
1890 News from Nowhere by William Morris – waking from a long sleep, William Guest is shown round a London transformed into villages of contented craftsmen

1895 The Time Machine by H.G. Wells – the unnamed inventor and time traveller tells his dinner party guests the story of his adventure among the Eloi and the Morlocks in the year 802,701
1896 The Island of Doctor Moreau by H.G. Wells – Edward Prendick is stranded on a remote island where he discovers the ‘owner’, Dr Gustave Moreau, is experimentally creating human-animal hybrids
1897 The Invisible Man by H.G. Wells – an embittered young scientist, Griffin, makes himself invisible, starting with comic capers in a Sussex village, and ending with demented murders
1898 The War of the Worlds by H.G. Wells – the Martians invade earth
1899 When The Sleeper Wakes/The Sleeper Wakes by H.G. Wells – Graham awakes in the year 2100 to find himself at the centre of a revolution to overthrow the repressive society of the future
1899 A Story of the Days To Come by H.G. Wells – set in the same London of the future described in the Sleeper Wakes, Denton and Elizabeth fall in love, then descend into poverty, and experience life as serfs in the Underground city run by the sinister Labour Corps

1901 The First Men in the Moon by H.G. Wells – Mr Bedford and Mr Cavor use the invention of ‘Cavorite’ to fly to the moon and discover the underground civilisation of the Selenites
1904 The Food of the Gods and How It Came to Earth by H.G. Wells – two scientists invent a compound which makes plants, animals and humans grow to giant size, leading to a giants’ rebellion against the ‘little people’
1905 With the Night Mail by Rudyard Kipling – it is 2000 and the narrator accompanies a GPO airship across the Atlantic
1906 In the Days of the Comet by H.G. Wells – a passing comet trails gasses through earth’s atmosphere which bring about ‘the Great Change’, inaugurating an era of wisdom and fairness, as told by narrator Willie Leadford
1908 The War in the Air by H.G. Wells – Bert Smallways, a bicycle-repairman from Bun Hill in Kent, manages by accident to be an eye-witness to the outbreak of the war in the air which brings Western civilisation to an end
1909 The Machine Stops by E.M. Forster – people of the future live in underground cells regulated by ‘the Machine’ until one of them rebels

1912 The Lost World by Sir Arthur Conan Doyle – Professor Challenger leads an expedition to a plateau in the Amazon rainforest where prehistoric animals still exist
1912 As Easy as ABC by Rudyard Kipling – set in 2065 in a world characterised by isolation and privacy, forces from the ABC are sent to suppress an outbreak of ‘crowdism’
1913 The Horror of the Heights by Arthur Conan Doyle – airman Captain Joyce-Armstrong flies higher than anyone before him and discovers the upper atmosphere is inhabited by vast jellyfish-like monsters
1914 The World Set Free by H.G. Wells – A history of the future in which the devastation of an atomic war leads to the creation of a World Government, told via a number of characters who are central to the change
1918 The Land That Time Forgot by Edgar Rice Burroughs – a trilogy of pulp novellas in which all-American heroes battle ape-men and dinosaurs on a lost island in the Antarctic

1921 We by Evgeny Zamyatin – like everyone else in the dystopian future of OneState, D-503 lives life according to the Table of Hours, until I-330 wakens him to the truth
1925 Heart of a Dog by Mikhail Bulgakov – a Moscow scientist transplants the testicles and pituitary gland of a dead tramp into the body of a stray dog, with disastrous consequences
1927 The Maracot Deep by Arthur Conan Doyle – a scientist, engineer and a hero are trying out a new bathysphere when the wire snaps and they hurtle to the bottom of the sea, there to discover…

1930 Last and First Men by Olaf Stapledon – mind-boggling ‘history’ of the future of mankind over the next two billion years
1932 Brave New World by Aldous Huxley
1938 Out of the Silent Planet by C.S. Lewis – baddies Devine and Weston kidnap Ransom and take him in their spherical spaceship to Malacandra aka Mars

1943 Perelandra (Voyage to Venus) by C.S. Lewis – Ransom is sent to Perelandra aka Venus, to prevent a second temptation by the Devil and the fall of the planet’s new young inhabitants
1945 That Hideous Strength: A Modern Fairy-Tale for Grown-ups by C.S. Lewis – Ransom assembles a motley crew to combat the rise of an evil corporation which is seeking to overthrow mankind
1949 Nineteen Eighty-Four by George Orwell – after a nuclear war, inhabitants of ruined London are divided into the sheep-like ‘proles’ and members of the Party who are kept under unremitting surveillance

1950 I, Robot by Isaac Asimov – nine short stories about ‘positronic’ robots, which chart their rise from dumb playmates to controllers of humanity’s destiny
1951 Foundation by Isaac Asimov – the first five stories telling the rise of the Foundation created by psychohistorian Hari Seldon to preserve civilisation during the collapse of the Galactic Empire
1952 Foundation and Empire by Isaac Asimov – two long stories which continue the future history of the Foundation set up by psychohistorian Hari Seldon as it faces down attack by an Imperial general, and then the menace of the mysterious mutant known only as ‘the Mule’
1953 Second Foundation by Isaac Asimov – concluding part of the ‘trilogy’ describing the attempt to preserve civilisation after the collapse of the Galactic Empire
1954 The Caves of Steel by Isaac Asimov – set 3,000 years in the future when humans have separated into ‘Spacers’ who have colonised 50 other planets, and the overpopulated earth whose inhabitants live in enclosed cities or ‘caves of steel’, and introducing detective Elijah Baley to solve a murder mystery
1956 The Naked Sun by Isaac Asimov – 3,000 years in the future detective Elijah Baley returns, with his robot sidekick, R. Daneel Olivaw, to solve a murder mystery on the remote planet of Solaria

1971 Mutant 59: The Plastic Eater by Kit Pedler and Gerry Davis – a genetically engineered bacterium starts eating the world’s plastic

1980 Russian Hide and Seek by Kingsley Amis – in an England of the future which has been invaded and conquered by the Russians, a hopeless attempt to overthrow the occupiers is easily crushed
1981 The Golden Age of Science Fiction edited by Kingsley Amis – 17 classic sci-fi stories from what Amis considers the Golden Era of the genre, namely the 1950s