Category Archives: Estimation Bias

Truth Decay

The truth is not a constant. Knowledge changes and you must change with it if you are to stay in control.

When I was a child, enchanted by science, I learned about planets and dinosaurs. I learned about the outermost planet, Pluto, and the giant saurischian, Brontosaurus.
Now, Pluto is no longer classed as a planet (but as a dwarf planet, along with Eris, Ceres, Haumea & Makemake), and the name Brontosaurus has been relegated to a curious historical footnote to Apatosaurus (whose skeleton was once saddled with the wrong head and called a Brontosaurus).

In every field of human knowledge – from astronomy to zoology and from geology to sociology – we make progress when new knowledge challenges old ideas. Consequently, the wisest stance to adopt is scepticism: ‘a tendency to doubt’.

For project managers, doubt is a valuable weapon in your professional arsenal. Let’s look at some examples.

Project Planning and Doubt

Amos Tversky & Daniel Kahneman (whose wonderful book, ‘Thinking, Fast and Slow’, I have recommended before) coined the term ‘Planning Fallacy’ to describe the well-observed tendency to assume that the time things will take is pretty close to the best-case scenario. I would add to that the ‘Planning Delusion’: a tendency to believe our plans will be a true reflection of events. They rarely will. Doubt is the key to proper planning and preparation – doubt your best-case scenario and doubt your plan.

The only rule I think we can rely on here (and notice, I say ‘think’, connoting doubt) is ‘Hofstadter’s Law’:

‘It always takes longer than you expect;
even when you take into account Hofstadter’s Law.’

Hofstadter coined this law in his book ‘Gödel, Escher, Bach: An Eternal Golden Braid’.
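
You can watch the Planning Fallacy at work with a simple simulation. Below is a minimal sketch in Python – the task list, the triangular distributions and every number in it are invented purely for illustration – comparing a schedule built from best-case estimates against ten thousand simulated outcomes, where each task can overrun far more than it can underrun:

    # A toy model of the Planning Fallacy - every number here is invented.
    import random

    random.seed(42)

    best_case = [5, 8, 3, 10]    # hypothetical best-case task estimates (days)
    plan_total = sum(best_case)  # the schedule we delude ourselves with: 26 days

    def task_duration(best):
        # Right-skewed spread: the best case is the floor, the most likely
        # outcome sits a little above it, and overruns stretch out to two
        # and a half times the estimate.
        return random.triangular(best, 2.5 * best, 1.3 * best)

    outcomes = sorted(sum(map(task_duration, best_case)) for _ in range(10_000))
    print(f"Best-case plan:  {plan_total} days")
    print(f"Median outcome:  {outcomes[5_000]:.1f} days")
    print(f"1-in-10 outcome: {outcomes[9_000]:.1f} days")

In runs like this, the median outcome comfortably exceeds the best-case plan, and one project in ten fares far worse still – Hofstadter’s Law in miniature.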

Project Delivery and Doubt

When things go well, we fall into an optimistic bias that leads us to suspect that we are working on a special project that is an exception to Hofstadter’s Law. What rot. Healthy scepticism keeps your senses attuned to the problems, delays, and general foul-ups that are the very nature of life. The sooner you spot them, the simpler it tends to be to fix them, so the heightened awareness that doubt brings is the key to staying in control of your project.

Risk and Doubt

The nature of risk is uncertainty, so where could doubt be of more value? There are different types of risk.

  • ‘Aleatory risks’ represent inherent uncertainty in the system – we cannot know where a ball will land when the roulette wheel spins.
  • ‘Epistemic risks’ arise from uncertainty due to gaps in our knowledge.
  • ‘Ontological risks’ are those about which we are wholly unaware.

Because ontological risks are, by definition, invisible to us, we tend to believe that absence of evidence is evidence of absence – and we are frequently wrong. Once again, doubt would prevent this mistake. For a summary of these kinds of risk, take a look at my simplified ‘Four Types of Risk’ infographic.
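
To make the first two categories concrete, here is a minimal sketch in Python using the roulette example above (all the numbers are invented). Gathering more spins steadily shrinks our epistemic doubt about the wheel’s long-run average, but does nothing at all to shrink the aleatory scatter of the next individual spin:

    # Aleatory vs epistemic uncertainty on a simulated (fair) roulette wheel.
    import random
    import statistics

    random.seed(1)

    def spin():
        return random.randint(0, 36)  # one spin of a European wheel

    for n in (100, 10_000, 1_000_000):
        spins = [spin() for _ in range(n)]
        scatter = statistics.stdev(spins)  # aleatory: spread of individual spins
        doubt = scatter / n ** 0.5         # epistemic: doubt about the average
        print(f"{n:>9,} spins: scatter {scatter:5.2f}, "
              f"doubt about the average {doubt:.4f}")

Ontological risks, of course, cannot appear in any such model at all – which is precisely why absence of evidence is not evidence of absence.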

Stakeholders and Doubt

I am working on my book on stakeholder engagement (publication spring 2014, Palgrave Macmillan) and spoke with a former colleague about his experiences – thank you Paul Mitchell. I loved his tip that:

‘just because they are quiet, it doesn’t mean they agree.’

Spot on – absence of evidence again.

Resistance and Doubt

When people resist our ideas our first instinct is to tackle that resistance – to take it on and aim to overcome it. Wrong! Step 1 is doubt: ‘what if they are right and I am wrong?’ It is a crazy notion, I know, but if it turns out to be true, doubt can save you a lot of wasted time and a possible loss of reputational capital.

Performance and Doubt

I attended an excellent one-day seminar on Positive Psychology in Organisations, led by the inspirational Sarah Lewis. One takeaway is the doubt we should apply to excellent performance. We tend to consider it to be ‘what we expect’, so we focus on fixing poor performance. One of the vital practices of the best and most flourishing organisations is to focus on this ‘positive deviance’ and work hard to understand and then replicate it.

Decisions and Doubt

Doubt frustrates decision making so it cannot be a good thing, can it? Well, yes, it can. Often, doubt arises from ‘gut instinct’. We know what the facts are telling us, but our intuition disagrees. Daniel Kahneman (yes, him again) will tell us that our instincts are fuelled by bias and faulty thinking, but another excellent thinker, Gary Klein (author of ‘The Power of Intuition’) reminds us that in domains where we have true and deep expertise, that intuition may be working on data we have not consciously processed. Doubt should lead us to look more deeply before committing to an important decision.

Time Management and Doubt

One of the reasons clients most value my time management seminars is that I don’t have a system. That is good for them, because their people have numerous interruptions in their working day, meaning that any plan they draw up will be stymied by necessary reactions to events. I do advocate making a plan, but I also advocate reviewing it frequently. Sticking to an out-of-date plan, based on yesterday’s priorities, is worse than having no plan at all. This is, of course, a cornerstone of good monitoring and control for us as project managers.

Stress and Doubt

Doubt causes stress, because doubt robs us of control. Is the solution, therefore, to work hard to eliminate doubt? It could be in some circumstances, but the solution to removing stress is to regain control, and this need not require you to remove doubt, but to embrace it and make it part of your process. That way, you keep the value of doubt, but take control of how you apply it.

Wisdom and Doubt

Doubt and scepticism suffuse my whole concept of wisdom. It arises from the combination of perception – noticing new evidence – and evolution – altering your world view in accordance with your new knowledge. It features in conduct and judgement, and even in fairness. And what authority can you have, if you hold fast to old certainties in the face of new realities? For more about my concept of what wisdom is, and how to develop it as a professional, take a look at my book, Smart to Wise.


This article was first published in June 2013, in my monthly newsletter/tipsheet, ‘Thoughtscape’. Why not subscribe to my Thoughtscape newsletter?



Risky Shift: Why?

I have written before (Groupthink, Abilene and Risky Shift and Neuroscience of Risky Shift) about Risky Shift. This is a phenomenon every project or change manager needs to be aware of. In short, it is the tendency for groups to make decisions with a more extreme (or more cautious) risk profile than any member of the group would individually have subscribed to. But why does it happen?

Researchers have proposed a number of theories…

A social psychology of group processes for decision-making

Collins, Barry E.; Guetzkow, Harold Steere.
Wiley, 1964

The authors suggest that a power differential allows higher power group members who favour a more extreme position to persuade other group members to support that position.

Diffusion of responsibility and level of risk taking in groups

Wallach, Michael A.; Kogan, Nathan; Bem, Daryl J.
The Journal of Abnormal and Social Psychology, Vol 68(3), Mar 1964, 263-274.

The authors suggest that in a group, the sense of shared responsibility leaves individuals feeling that they themselves are committing to a lesser share of risk, reducing their level of concern about the implications of the risk.

Social Psychology

Brown, Roger.
New York: Free Press, 1965.

In this classic (and dense) textbook, the author puts forward his theory that in risk-valuing cultures like that of the US, where he worked, group members are drawn towards higher risk to maintain their sense of status in the group. This would suggest that, in systemically risk-averse cultures, caution would be socially valued, leading to a cautious shift.

Familiarization, group discussion, and risk taking

Bateson, Nicholas
Journal of Experimental Social Psychology, 1966

This author details experimental evidence leading to the hypothesis that it is the discussion process that matters – as group members become more familiar with a proposal, the perceived level of risk seems to diminish: ‘familiarity breeds contempt’, maybe?

The ‘so what?’

  1. Facilitate discussions to minimise power differentials and minimise the impacts of any that remain.
  2. Be clear with group members that they are jointly and severally (that is, individually) responsible for any decision the group makes.
  3. Disentangle status, value and risk. Set the culture around contribution, value and process. Your personal value is linked to your contribution to a robust process.
  4. Facilitate an objective risk assessment of each proposal.



Neuroscience of Risky Shift

A while ago, I wrote a post about Groupthink and Risky Shift.

One of the questions that has long been debated among psychologists is what happens when we change our opinions to fit in with a crowd (and there is plenty of evidence that we do – read up on the Asch Conformity Experiments, for example).

Do we do so to conform to expectations or because our point of view really is changed by the crowd?


Groupthink, Abilene and Risky Shift

Groupthink

Groupthink is a term coined by sociologist and author William H Whyte in an article for Fortune Magazine that has come to be well-used and well-known.  Perhaps the best explanation of it was given by psychologist Irving Janis, who studied groupthink in the context of political decision-making by the US administration from the 1940s to the 1960s:

“A mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.”

Groupthink occurs when we set aside our individual insights in a desire to conform with the ideas of the group. When we want to introduce change, or estimate risk, an inability to access all points of view is clearly dangerous.


The Science and Politics of Fear

I have just finished reading Risk: The Science and Politics of Fear by Dan Gardner.  It is a good read and I would recommend it highly to anyone with an interest in how perceptions of risk work.

The Psychology of Risk

The book starts with a fairly detailed discussion of the work of Amos Tversky and Daniel Kahneman. They worked together to understand the biases that we are prone to when we assess risk. These arise through the quick judgements we make, based on mental rules of thumb.


When we understand these, we understand something of our response to risk.  This work was so important that Kahneman became the first recipient of the Nobel Prize for Economics who was not an economist.  Tversky had sadly died in 1996, six years before the prize was awarded.


Gardner also considers a fourth bias that has been investigated by Paul Slovic, of the University of Oregon. For project managers and list collectors, however, the highlight may be Slovic’s list of eighteen characteristics that boost people’s perception of risk.

The Four Biases

Gardner comes up with some catchy names for these biases, but for the first, he keeps Tversky and Kahneman’s name, “The Anchoring Bias”.  This predisposes us to make inferences based on some earlier information we get – even if that information has little or no relevance.

Bias number two is the “Representativeness Bias”. Gardner calls this the rule of typical things, in which we think things are more likely if they seem to fit a “typical” pattern.

Bias number three is the “Availability Bias”, which Gardner calls the example rule. Recent events bias our perception of risk, because they are more available to our intuition than counter-examples.

Bias number four derives from Slovic’s work. This is the “Affect Bias”, which predisposes us to think that bad things are more likely than they really are.

More Biases

Gardner’s book is full of other perceptual biases that make it essential reading if you want an introduction to the psychology of risk.

… and the Politics bit?

The second half of the book (actually the greater part) deals with how the media and advertisers feed off these biases deliberately to sell newspapers and create fear that makes us buy security products, for example.

This part is fascinating reading if you want to understand how businesses and politicians can manipulate us, and how the media becomes complicit – not just cynically, but because the journalists who fill it with copy are as prone to these biases as we are.

If I had to be critical, there are perhaps a few too many examples in these latter chapters, but they do not detract from an excellent introduction to the science and politics of our perceptions of risk.

The “so what?”

Buy Risk: The Science and Politics of Fear by Dan Gardner; read it; internalise its messages; don’t ever trust your evaluation of the likelihood of risk again.

You may also like the following blogs:

The Problems of Probability
… discusses four biases and the problem of estimating risk likelihoods

So what are you “doing” about risk?
… discusses the importance of taking action after you have analysed your risk

I can also recommend Glen Alleman’s excellent blog Herding Cats, and, in particular, Sources of Estimating Errors, in which he discusses how these biases affect our estimates.