Category Archives: Analysing Risk

Truth Decay

The truth is not a constant. Knowledge changes and you must change with it if you are to stay in control.

When I was a child, enchanted by science, I learned about planets and dinosaurs. I learned about the outermost planet, Pluto, and the giant saurischian, Brontosaurus.
Now, Pluto is no longer classed as a planet (but as a dwarf planet, along with Eris, Ceres, Haumea and Makemake) and the name Brontosaurus has been relegated to a curious historical footnote against Apatosaurus (whose skeleton was once insulted by the wrong head and called a Brontosaurus).

In every field of human knowledge – from astronomy to zoology and from geology to sociology – we make progress when new knowledge challenges old ideas. Consequently, the wisest stance to adopt is scepticism: ‘a tendency to doubt’.

For project managers, doubt is a valuable weapon in your professional arsenal. Let’s look at some examples.

Project Planning and Doubt

Amos Tversky & Daniel Kahneman (whose wonderful book, ‘Thinking, Fast and Slow’, I have recommended before) coined the term ‘Planning Fallacy’ to describe the well-observed tendency to assume that the time things will take is pretty close to the best-case scenario. I would add to that the ‘Planning Delusion’: a tendency to believe our plans will be a true reflection of events. They rarely will. Doubt is the key to proper planning and preparation – doubt your best-case scenario and doubt your plan.
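A minimal Monte Carlo sketch shows why best-case plans disappoint. The project, task counts and delay distribution here are entirely my own illustrative assumptions, not taken from Kahneman and Tversky’s work: each task has a firm best case plus an open-ended exponential delay, because delays pile up but early finishes barely happen.

```python
import random

random.seed(2)

# Hypothetical five-task project: each task has a best case of 4 days,
# plus a delay drawn from an exponential tail (mean 2 days).
N_TASKS = 5
BEST_CASE = 4 * N_TASKS  # 20 days if nothing slips

totals = [sum(4 + random.expovariate(0.5) for _ in range(N_TASKS))
          for _ in range(10_000)]

mean_duration = sum(totals) / len(totals)
near_best = sum(t <= BEST_CASE * 1.1 for t in totals) / len(totals)
print(f"Mean duration: {mean_duration:.1f} days (best case: {BEST_CASE})")
print(f"Runs finishing within 10% of best case: {near_best:.1%}")
```

With these made-up numbers, the average outcome runs well over the best case, and only a tiny fraction of simulated projects land anywhere near it – the planning fallacy in miniature.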

The only rule I think we can rely on here (and notice, I say ‘think’, connoting doubt) is ‘Hofstadter’s Law’:

‘It always takes longer than you expect;
even when you take into account Hofstadter’s Law.’

This was coined in his book ‘Gödel, Escher, Bach: An Eternal Golden Braid’.

Project Delivery and Doubt

When things go well, we fall into an optimistic bias that leads us to suspect that we are working on a special project that is an exception to Hofstadter’s Law. What rot. Healthy scepticism keeps your senses attuned to the problems, delays, and general foul-ups that are the very nature of life. The sooner you spot them, the simpler it tends to be to fix them, so the heightened awareness that doubt brings is the key to staying in control of your project.

Risk and Doubt

The nature of risk is uncertainty, so where can doubt be of more value? And there are different types of risk.

  • ‘Aleatory risks’ represent inherent uncertainty in the system – we cannot know where a ball will land when the roulette wheel spins.
  • ‘Epistemic risks’ arise from uncertainty due to gaps in our knowledge.
  • ‘Ontological risks’ are those about which we are wholly unaware.

Because ontological risks are wholly unknown to us, we tend to believe that absence of evidence is evidence of absence – and we are frequently wrong. Once again, doubt would prevent this mistake. For a summary of these kinds of risk, take a look at my simplified ‘Four Types of Risk’ infographic.
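The roulette wheel in the first bullet is easy to sketch. In this illustrative simulation (spin counts and thresholds are my own choices), the long-run frequencies are perfectly knowable, yet no history of spins tells you anything about the next one – that irreducibility is what makes the risk aleatory.

```python
import random

random.seed(42)
# European roulette: pockets 0-36. Aleatory uncertainty is inherent:
# a complete history of spins tells you nothing about the next one.
spins = [random.randint(0, 36) for _ in range(100_000)]

# The long-run frequencies are knowable (about 1/37 per pocket)...
zero_freq = spins.count(0) / len(spins)
print(f"Frequency of zero: {zero_freq:.3%} (expected {1/37:.3%})")

# ...but conditioning on the previous spin changes nothing.
after_zero = [spins[i + 1] for i in range(len(spins) - 1) if spins[i] == 0]
print(f"Mean pocket overall:      {sum(spins) / len(spins):.1f}")
print(f"Mean pocket after a zero: {sum(after_zero) / len(after_zero):.1f}")
```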

Stakeholders and Doubt

I am working on my book on stakeholder engagement (publication spring 2014, Palgrave Macmillan) and spoke with a former colleague about his experiences – thank you Paul Mitchell. I loved his tip that:

‘just because they are quiet; it doesn’t mean they agree.’

Spot on – absence of evidence again.

Resistance and Doubt

When people resist our ideas our first instinct is to tackle that resistance – to take it on and aim to overcome it. Wrong! Step 1 is doubt: ‘what if they are right and I am wrong?’ It is a crazy notion, I know, but if it turns out to be true, doubt can save you a lot of wasted time and a possible loss of reputational capital.

Performance and Doubt

I attended an excellent one-day seminar on Positive Psychology in Organisations, led by the inspirational Sarah Lewis. One takeaway is the doubt we should apply to excellent performance. We tend to consider it to be ‘what we expect’, so we focus on fixing poor performance. One of the vital practices of the best and most flourishing organisations is to focus instead on this ‘positive deviance’ and work hard to understand and then replicate it.

Decisions and Doubt

Doubt frustrates decision making so it cannot be a good thing, can it? Well, yes, it can. Often, doubt arises from ‘gut instinct’. We know what the facts are telling us, but our intuition disagrees. Daniel Kahneman (yes, him again) will tell us that our instincts are fuelled by bias and faulty thinking, but another excellent thinker, Gary Klein (author of ‘The Power of Intuition’) reminds us that in domains where we have true and deep expertise, that intuition may be working on data we have not consciously processed. Doubt should lead us to look more deeply before committing to an important decision.

Time Management and Doubt

One of the reasons clients most value my time management seminars is that I don’t have a system. This is good for them, because their people face numerous interruptions in their working day, meaning that any plan they draw up will be stymied by necessary reactions to events. I do advocate making a plan; but I also advocate reviewing it frequently. Sticking to an out-of-date plan, based on yesterday’s priorities, is worse than having no plan at all. This is, of course, a cornerstone of good monitoring and control for us as project managers.

Stress and Doubt

Doubt causes stress, because doubt robs us of control. Is the solution, therefore, to work hard to eliminate doubt? It could be in some circumstances, but the solution to removing stress is to regain control, and this need not require you to remove doubt – you can embrace it and make it part of your process. That way, you keep the value of doubt, but take control of how you apply it.

Wisdom and Doubt

Doubt and scepticism suffuse my whole concept of wisdom. It arises from the combination of perception – noticing new evidence – and evolution – altering your world view in accordance with your new knowledge. It features in conduct and judgement, and even in fairness. And what authority can you have, if you hold fast to old certainties in the face of new realities? For more about my concept of what wisdom is, and how to develop it as a professional, take a look at my book, Smart to Wise.

This article was first published in June 2013, in my monthly newsletter/tipsheet, ‘Thoughtscape’. Why not subscribe to my Thoughtscape newsletter?


Risky Shift: Why?

I have written before (Groupthink, Abilene and Risky Shift and Neuroscience of Risky Shift) about Risky Shift. This is a phenomenon every project or change manager needs to be aware of. In short, it is the tendency for groups to make decisions that have a more extreme (or more cautious) risk profile than any of the members of the group would individually have subscribed to. But why does it happen?

Researchers have proposed a number of theories…

A social psychology of group processes for decision-making

Collins, Barry E.; Guetzkow, Harold Steere.
Wiley, 1964

The authors suggest that a power differential allows higher power group members who favour a more extreme position to persuade other group members to support that position.

Diffusion of responsibility and level of risk taking in groups

Wallach, Michael A.; Kogan, Nathan; Bem, Daryl J.
The Journal of Abnormal and Social Psychology, Vol 68(3), Mar 1964, 263-274.

The authors suggest that in a group, the sense of shared responsibility leaves individuals feeling that they themselves are committing to a lesser share of risk, reducing their level of concern about the implications of the risk.

Social Psychology

Brown, Roger.
New York: Free Press, 1965.

In this classic (and dense) textbook, the author puts forward his theory that in risk-valuing cultures like that of the US, where he worked, group members are drawn towards higher risk to maintain their sense of status in the group. This would suggest that, in systemically risk-averse cultures, caution would be socially valued, leading to cautious shift.

Familiarization, group discussion, and risk taking

Bateson, Nicholas
Journal of Experimental Social Psychology, 1966

This author details experimental evidence leading to the hypothesis that it is the discussion process that matters – as group members become more familiar with a proposal, the perceived level of risk seems to diminish: ‘familiarity breeds contempt’, maybe?

The ‘so what?’

  1. Facilitate discussions to minimise power differentials and minimise the impacts of any that remain.
  2. Be clear with group members that they are jointly and severally (that is, individually) responsible for any decision the group makes.
  3. Disentangle status, value and risk. Set the culture around contribution, value and process. Your personal value is linked to your contribution to a robust process.
  4. Facilitate an objective risk assessment of each proposal.


Four Types of Risk

I have been reading an old post, “Epistemic, Ontological and Aleatory Risk”, on Matthew Squair’s blog, Critical Uncertainties.  I doubt I am alone in having to think hard to get my head around the definitions.  So, to help myself, I made a diagram.

There is a lot to think about in Matthew’s article and it stimulated a lot of parallel thoughts. The summaries and interpretations in this diagram are all mine, but it owes a lot to Matthew’s article.

Four Types of Risk

Regression to the Mean

… or how things tend to “average out”.

My family and I have been on holiday.  It was lovely, thank you for asking.

My wife and I played a game of crazy golf: my three-year-old daughter gave up quickly.  Here are our scores on the first five holes – of nine:

Hole        Mike   Felicity
1           3      4
2           4      4
3           4      6
4           7      10
5           4      5
Sub-total   22     29

The writing was on the wall

I was feeling comfortable about an easy win by this stage.  I shouldn’t have been.  Crazy golf for people who play once a year, at most, is more a game of luck than skill.  Five data points is hardly a statistically significant data set.

Let’s look at the last four holes:

Hole        Mike   Felicity
6           3      2
7           8      4
8           4      4
9           7      6
Total       44     45

I won… Just.

Regression to the mean

Now let’s be clear, nine data points is barely better than five in terms of statistical significance.  But it is the best I have.  From these data, my average score is 4.9 and Felicity’s is 5.0.  With standard deviations of around 2, this difference is of no significance at all.

All we can conclude is that my wife and I are as good (or bad) as each other.

So what happened past the halfway point?

Nothing.  I had had a run of better than average scores, so it should have been no surprise that my scores drifted in the wrong direction.  The opposite happened for Felicity.  Both of our scores “regressed to the mean”.
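For anyone who wants to check, a few lines of Python reproduce the averages, the standard deviations of around 2, and the halfway-point swap, straight from the scores in the tables above:

```python
import statistics

# The crazy golf scores from the tables above.
mike     = [3, 4, 4, 7, 4, 3, 8, 4, 7]
felicity = [4, 4, 6, 10, 5, 2, 4, 4, 6]

for name, scores in [("Mike", mike), ("Felicity", felicity)]:
    print(f"{name:8s} mean {statistics.mean(scores):.1f}, "
          f"sd {statistics.stdev(scores):.1f}, "
          f"holes 1-5 avg {statistics.mean(scores[:5]):.1f}, "
          f"holes 6-9 avg {statistics.mean(scores[5:]):.1f}")
```

Mike’s first five holes average below his nine-hole mean and his last four above it; Felicity’s do the opposite – both regressing to the mean.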

A Note of Caution

This example is purely illustrative.  Nine data points is insufficient to be sure of the true means of our scores, if we were to play a lot more.  And if we were, I’d like to think that some skill may start to intrude on our play!

The “so what?”

1. When you estimate the probabilities of risks that are based on the natural variability of events, be sure you have a big enough sample to calculate any averages.

2. Beware of runs of luck – a short run of good results may mask an underlying mean result that means, at some stage, you will have a run of bad results.

3. Despite warnings that the past is a poor predictor of the future, it is often the only data we have and we are seduced into complacency or fear by runs of statistical chance.  Look in any large set of truly random numbers and you will find runs of numbers with averages well above or well below the mean of the whole set.
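Point 3 is easy to demonstrate: generate a large set of truly random numbers and the lucky (and unlucky) runs appear all by themselves. The sample size and window length here are my own illustrative choices.

```python
import random

random.seed(1)
# 10,000 draws from a distribution whose true mean is exactly 0.
data = [random.gauss(0, 1) for _ in range(10_000)]
overall_mean = sum(data) / len(data)

# Find the luckiest and unluckiest 20-draw runs - no cause, just chance.
window = 20
run_means = [sum(data[i:i + window]) / window
             for i in range(len(data) - window + 1)]
print(f"Overall mean:        {overall_mean:+.2f}")
print(f"Luckiest run mean:   {max(run_means):+.2f}")
print(f"Unluckiest run mean: {min(run_means):+.2f}")
```

The overall mean sits close to zero, yet the best and worst 20-draw runs sit well above and below it – exactly the runs of statistical chance that seduce us into complacency or fear.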

Correlation, Causation and Coincidence

Correlation, coincidence and causation: three words that are often confused.

Two things happen together – either at the same time, or sequentially.  Why?

This is a correlation, and the natural human tendency is to assume that there is a reason for the correlation; that something caused the two events to occur together.  Daniel Kahneman and Amos Tversky identified this as resulting from one of the more common cognitive biases that humans are prone to: the representativeness bias.

This bias in the way we think creates patterns in our minds that link events in a way that follow established (or merely perceived) rules.  So, when two things occur one after the other, we perceive causation, because it conforms better to our perceptions of how the world works. When two things coincide, we think there must be a reason, and when we spot statistical anomalies, we suspect that something must be going on.

In the UK, the ratio of boys to girls (under 16 years of age) is approximately 51% to 49%.  I don’t think anyone really knows why the gender ratio at birth (allowing for selective terminations) is not 1:1.  The 51:49 ratio does seem to be fairly universal in humans.

But what if we found a town with a ratio of 45:55, or an occupation with 59:41?  Would we think there is something in the water, or some occupational hazard affecting reproductive processes?

In fact, in the 1990s, someone measured the proportion of boys to girls among the children of Israeli fighter pilots as 16:84.  Since the fighter pilots were all men at the time, what was their occupation doing to their Y chromosomes?

However, any sample is likely to diverge from the average.  Take enough samples and some will diverge a lot.  Taken on its own, such a sample will look so odd that we conclude something must be going on.  Taken with thousands of other samples, all you have is an outlier.  But it is the outliers that we notice.
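A quick simulation makes the point: draw enough samples from the same 51:49 process and some will look as striking as the fighter pilot figure. The number of samples and children per sample are my own illustrative choices.

```python
import random

random.seed(0)
P_BOY = 0.51  # the roughly universal ratio mentioned above

# 10,000 hypothetical "occupations", each with 30 children,
# all generated by exactly the same 51:49 process.
ratios = [sum(random.random() < P_BOY for _ in range(30)) / 30
          for _ in range(10_000)]
print(f"Lowest boy-ratio found:  {min(ratios):.0%}")
print(f"Highest boy-ratio found: {max(ratios):.0%}")
```

Nothing is “going on” in any of these samples, yet the extremes look dramatic – pick one out of context and it seems to demand an explanation.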

Two things happen together: there are three possibilities

  1. Coincidence – things do happen together
  2. Direct Causation  – the two things are related, so that one triggers the other
  3. Indirect Causation – the two things are each related to a third thing that triggers both
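Possibility 3 can be simulated. In this sketch (the variables and all the numbers are entirely made up for illustration), two quantities are each driven by a hidden third and correlate strongly, even though neither causes the other:

```python
import random

random.seed(7)
# Hypothetical: ice-cream sales and drownings are both driven by
# temperature (the hidden "third thing"), not by each other.
temps = [random.uniform(0, 30) for _ in range(1_000)]
ice_cream = [t * 2.0 + random.gauss(0, 5) for t in temps]
drownings = [t * 0.1 + random.gauss(0, 0.5) for t in temps]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"r(ice cream, drownings) = {pearson(ice_cream, drownings):.2f}")
```

The correlation is strong, but banning ice cream would not save a single swimmer: remove the common driver and the relationship vanishes.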

The Impact of Sleep on Risk

Making decisions when you haven’t had enough sleep is a bad idea.  We all know this, but now, researchers at Duke University’s Center for Cognitive Neuroscience have shown us why.

In research led by Professor Scott Huettel, carried out principally by graduate student Vinod Venkatraman, adults were observed carrying out gambling tasks after being kept awake all night, and their responses compared to those of other volunteers who got a night’s sleep.


The Inevitability of Randomness

A common question I hear asked is “where can I find out more about project management?”

Often the person asking is not going to be an avid reader of blogs, much less keen to spend their journey to work dipping into a book.  So happily I can refer them to my favourite piece of project management learning, Channel 4’s “Grand Designs”.  Without a doubt, this is the best source of project management on British TV and I hope it is available in other countries.

Grand Designs Abroad

If you have not seen it, each show follows one person or family constructing a new home, often from nothing.  They combine a host of great subjects:

  • Project management
  • Architecture
  • Design
  • Fabrication methods and materials
  • Human interest (of course!)

If there isn’t a current series running when you read this blog, then there will almost certainly be a repeated series running on More4 if you have Freeview.  If you aren’t in the UK, you’ll have to do your own research.

28 Weeks

What makes it so watchable is presenter Kevin McCloud’s combination of enthusiasm and knowingness.  There was a lovely episode where, unlike many of the people featured, the woman managing the project had a clear plan – in the form of a rather neat Gantt Chart.  Confidently, she told McCloud that the build would take 28 weeks.

“Really?” he responded, letting his eyebrow tell us what he thought of this assertion.  Apparently so – it was all in the plan.  Needless to say, shift happened.

It turned out that the British weather was a primary confounding factor in her case – what we might call a known unknown.

Chaotic Uncertainty

Where some countries are blessed with a climate, here in Britain, we just have weather, which nobody can predict.  The Meteorological Office used to have a stab at long range forecasts but announced in 2010 they would revert to medium range only after predicting a “barbeque summer” for 2009 and a mild winter for 2009/10.

We are in a time of changing global conditions, meaning that, more than ever, there is little apparent pattern to the weather.  The only thing that is predictable is its uncertainty.  The weather is an example of one type of uncertainty: a chaotic system.  The weather is the result of millions of subtle factors interacting according to a small set of simple physical laws.  What makes it impossible to anticipate accurately is the complexity of the interactions.

True Randomness

Gather enough detail and restrict your predictions to a narrow enough time and geographical range, and you can predict weather.  Forecasters do it every day and, on a day-to-day basis, their predictions mostly get close enough to “correct” to be useful.

Some things, however, cannot be predicted at all.  You know that on your technology integration project there will be hold-ups, integration challenges, personality clashes, technology glitches, supply chain challenges and a hundred other tests of your project stoicism.

Risk management tries to anticipate and plan for these.  And over time, we try to learn from the problems that we have faced and apply that learning to minimise and control risk next time.  But here is the caution.

Humans are pattern-forming creatures.  We are so good at spotting patterns that we are wont to see them where they don’t exist.  Derren Brown did a TV programme a few years ago in which he repeated a famous experiment by BF Skinner, in his own, entertaining way.

A miscellaneous bunch of minor celebrities had to score points, but they had no idea what they had to do to score them.  All they were shown was a counter which told them when they had scored a point.  They needed to figure out what to do.  So as soon as a point went on, they each made a hypothesis and repeated the action that they thought had caused the increment.

Magical thinking

If, for example, one of them kicked a toy and the counter clicked, they would assume causality.  They would do it again.  And lo; the counter clicked.  Maybe it didn’t work every time, so they invented new, subsidiary rules, like the need to look up before kicking (I don’t recall the actual details).  Before long, most succumbed to “magical thinking”: a set of beliefs for which there was no evidence, sustained by an increasingly long set of rituals that they were prepared to adapt, usually leading to a hoped-for result.

In fact the clicking of the counter was random (actually, it was controlled by an operator who incremented the points when one of two goldfish crossed a line on its tank wall – not truly random, but certainly wholly unconnected to what the celebrities were doing).

The “so what?”

Set aside the obvious points about alternative medicine, superstition and religious beliefs.  What decisions are you making on your project that are led by nothing more than tradition, faith and the stories of an elderly spouse?

Some things truly are random to us – yes, there are causes, like the weather, so they are not truly “unknown unknowns”.  But at the time of observation, we don’t have sufficient data to uncover the cause; and in the pragmatic world of project management, we may not have the time to.  Yet that is not an excuse for superstition.