Category Archives: Psychology of Risk

T is for Thrill

What is your appetite for risk? If you have a high appetite for risk, then chances are you are what Frank Farley (Professor of Psychology at Temple University, Philadelphia) would describe as a ‘Type T’ personality.

‘Type Ts’ crave excitement, stimulation and arousal, often through thrill-seeking behaviour. They enjoy variety and change and have a high tolerance for uncertainty. In day-to-day life, they are psychologically resilient, believing that they control their own fate. They come across as confident and assertive.

Farley splits ‘Type Ts’ into two sub-types:

T-Positive

The T+ subtype shows predominantly healthy risk-taking. They are highly creative innovators, prepared to challenge conventional thinking and to take the lead. These are the people who find solutions, think differently and change the world.

T-Negative

The T– subtype shows a more destructive personality, taking dangerous risks in search of ever greater thrills. Delinquency and crimes of excitement are the result of this personality taken to extremes. To a lesser extent, a T– may put their own life at risk in a dangerous sport – which can also result in serious risk to the people around them or to would-be rescuers.

There is an excellent 5-minute video of Frank Farley talking about this on YouTube.

 

Towards the end of this video, you will hear Farley discussing how to provide suitable stimulation for ‘Type T’ children. I wonder if T– behaviours start to arise when children have insufficient creative and productive – socially appropriate – outlets of the kind that could lead them towards a T+ orientation.

The Genetic Source of Type T

In an earlier blog, ‘Risk Taking – it’s in your genes’, I described the genetic research of Luke Matthews (Harvard University) and Paul Butler (Boston University), who found mutations of a dopamine receptor gene that may be linked to risk-taking. I would love to see research correlating these mutations with Type T personalities – mindful as always that correlation would not form proof of causation.

Type T and Project or Change Management

Farley’s last statement in the video is his assertion that surviving in the 21st Century is about dealing with change. This has always been an essential skill for managers of change. Yet, as project managers, we tend to spend a lot of our time managing-out risk and creating an environment that we can control. So I wonder: how much of a Type T makes a good Project Manager or Change Leader?

We need:

  • tolerance for ambiguity
  • a feeling that we can control our fate
  • self-confidence
  • creative thinking and a preparedness to innovate

Yet we must disdain:

  • toleration of any unnecessary risks
  • innovation for its own sake
  • creating or seeking out thrill and stimulation
  • open, rule-free environments

What balance of Type T or its opposite (which I shall name ‘Type C’ for ‘caution’) do you see as appropriate?

Post Script: The Estimation Debate

Some of my readers may be aware of – or even involved in – the debate about whether estimation is sensible or practical in project management, especially in large systems projects. I wonder if the two sides of this debate represent, in some ways, a polarisation of:

  • Type T – ‘no estimates’: let’s figure it out as we go along
  • Type C – estimates are essential for accountability and control

I confess that I have not got involved in this debate because others have articulated my own views far more robustly and rigorously than I could have (see in particular Glen Alleman’s Herding Cats blog for many articles on this).

But I also wonder if my inability to get my head around why anyone would not start by making the best estimates that the data permit is not directly related to my low Type T tendencies. If you have been interested in the ‘no estimates’ debate, please do comment.

Post Post Script: It’s Bonfire Night

This blog is published on 5 November and tonight is Bonfire Night in the UK. If you are attending or making your own fireworks display and bonfire tonight: be safe!

 

Risky Shift: Why?

I have written before (Groupthink, Abilene and Risky Shift and Neuroscience of Risky Shift) about Risky Shift. This is a phenomenon every project or change manager needs to be aware of. In short, it is the tendency for groups to make decisions with a more extreme risk profile – riskier or more cautious – than any of the members of the group would individually have subscribed to. But why does it happen?

Researchers have proposed a number of theories…

A social psychology of group processes for decision-making

Collins, Barry E.; Guetzkow, Harold Steere.
Wiley, 1964

The authors suggest that a power differential allows higher power group members who favour a more extreme position to persuade other group members to support that position.

Diffusion of responsibility and level of risk taking in groups

Wallach, Michael A.; Kogan, Nathan; Bem, Daryl J.
The Journal of Abnormal and Social Psychology, Vol 68(3), Mar 1964, 263-274.

The authors suggest that in a group, the sense of shared responsibility leaves individuals feeling that they themselves are committing to a lesser share of risk, reducing their level of concern about the implications of the risk.

Social Psychology

Brown, Roger.
New York: Free Press, 1965.

In this classic (and dense) textbook, the author puts forward his theory that in risk-valuing cultures, like that of the US where he worked, group members are drawn towards higher risk to maintain their sense of status in the group. This would suggest that, in systemically risk-averse cultures, caution would be socially valued, leading to a cautious shift.

Familiarization, group discussion, and risk taking

Bateson, Nicholas
Journal of Experimental Social Psychology, 1966

This author details experimental evidence leading to the hypothesis that it is the discussion process that matters – as group members become more familiar with a proposal, the perceived level of risk seems to diminish: ‘familiarity breeds contempt’, maybe?

The ‘so what?’

  1. Facilitate discussions to minimise power differentials and minimise the impacts of any that remain.
  2. Be clear with group members that they are jointly and severally (that is, individually) responsible for any decision the group makes.
  3. Disentangle status, value and risk. Set the culture around contribution, value and process. Your personal value is linked to your contribution to a robust process.
  4. Facilitate an objective risk assessment of each proposal.

 


Regression to the Mean

… or how things tend to “average out”.

My family and I have been on holiday.  It was lovely, thank you for asking.

My wife and I played a game of crazy golf: my three-year-old daughter gave up quickly.  Here are our scores on the first five holes – of nine:

Hole       Mike   Felicity
1          3      4
2          4      4
3          4      6
4          7      10
5          4      5
Sub Total  22     29

The writing was on the wall

I was feeling comfortable about an easy win by this stage.  I shouldn’t have been.  Crazy golf for people who play once a year, at most, is more a game of luck than skill.  Five data points is hardly a statistically significant data set.

Let’s look at the last four holes:

Hole    Mike   Felicity
6       3      2
7       8      4
8       4      4
9       7      6
Total   44     45

I won… Just.

Regression to the mean

Now let’s be clear, nine data points is barely better than five in terms of statistical significance.  But it is the best I have.  From these data, my average score is 4.9 and Felicity’s 5.0.  With standard deviations of around 2, this difference is of No Significance at all.

All we can conclude is that my wife and I are as good (or bad) as each other.
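For anyone who wants to check the arithmetic, here is a minimal Python sketch (assuming nothing beyond the standard library) that reproduces those figures from the scores in the two tables above:

```python
from statistics import mean, stdev

# Scores transcribed from the two tables above (holes 1 to 9)
mike     = [3, 4, 4, 7, 4, 3, 8, 4, 7]
felicity = [4, 4, 6, 10, 5, 2, 4, 4, 6]

for name, scores in (("Mike", mike), ("Felicity", felicity)):
    print(f"{name}: mean = {mean(scores):.1f}, sample SD = {stdev(scores):.1f}")

# Prints:
# Mike: mean = 4.9, sample SD = 1.9
# Felicity: mean = 5.0, sample SD = 2.2
```

A difference of 0.1 strokes per hole, against standard deviations of around 2, is well inside the noise.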

So what happened past the halfway point?

Nothing.  I had had a run of better than average scores, so it should have been no surprise that my scores drifted in the wrong direction.  The opposite happened for Felicity.  Both of our scores “regressed to the mean”.

A Note of Caution

This example is purely illustrative.  Nine data points is insufficient to be sure of what the true means of our scores would be, if we were to play a lot more.  And if we did, I’d like to think that some skill might start to intrude on our play!

The “so what?”

1.  When you estimate the probabilities of risks that are based on the natural variability of events, be sure you have a big enough sample to calculate any averages.

2. Beware of runs of luck – a short run of good results may mask an underlying mean that implies, at some stage, you will have a run of bad results.

3. Despite warnings that the past is a poor predictor of the future, it is often the only data we have and we are seduced into complacency or fear by runs of statistical chance.  Look in any large set of truly random numbers and you will find runs of numbers with averages well above or well below the mean of the whole set.
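To see that last point in action, here is a minimal Python sketch (a simulation of my own devising, not from the blog above) that rolls a fair die 1,000 times and picks out the luckiest five-roll run:

```python
import random

random.seed(7)  # any seed will do; the effect appears in every long random sequence
rolls = [random.randint(1, 6) for _ in range(1000)]  # simulated fair-die rolls
overall_mean = sum(rolls) / len(rolls)                # close to 3.5 for a fair die

# Find the five-roll window with the highest average - the "lucky run"
window = 5
start = max(range(len(rolls) - window + 1),
            key=lambda i: sum(rolls[i:i + window]))
run = rolls[start:start + window]

print(f"Overall mean of 1,000 rolls: {overall_mean:.2f}")
print(f"Luckiest run (rolls {start + 1}-{start + window}): {run}, "
      f"average {sum(run) / window:.1f}")
```

On almost any seed, the best short run averages well above the overall mean of about 3.5, even though every roll comes from exactly the same fair process – a run of pure statistical chance.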

Ten Reasons Crises Happen – and what you can do about them.

Risks don’t always materialise one at a time. Sometimes a whole flood of them takes over your project or business and creates a crisis. We usually think that crises emerge from nowhere, but this is rarely true. More often, we can see the crisis emerging with the benefit of hindsight and, with that in mind, we can draw conclusions as to what caused it. These conclusions offer valuable lessons that can allow us to prevent future crises… if we are wise enough.

Crises most often happen due to overconfidence, which can lead to an inability to perceive the available evidence:

1. Overconfidence in your predictions

The more work we put into our predictions and the more detail we give them, the more we are seduced into believing they are true. Remember that a prediction is just one scenario and any serious manager or project leader will identify a range of scenarios and make their plans flexible enough to deal with all of them.

2. Overconfidence in your plans

Plans are a special type of prediction – but the problem is compounded by each plan being based on its own set of predictions and assumptions. Make sure that each major element of your plan has its own contingency and risk mitigation strategy.

3. Overconfidence in your systems and processes

Systems and processes create a controlled environment, but they can go wrong or, more likely, we discover that they were not quite the right process for the situation. When we misread the context and implement a poorly chosen process, it is bound to fail under pressure. Constantly monitor your systems for the start of failure modes.

4. Overconfidence in your people

People are a common source of error, so constantly review performance and give feedback. Prioritise training and support, and use people and systems to provide back-up to one another.

5. Overconfidence in random or unpredictable events, like the weather

The clue is in the words “random” and “unpredictable”. Yet we often let nothing stronger than faith dictate our planning assumptions. Use evidence and statistics to draw up your scenarios and if sudden bad weather can be catastrophic, either change your plan or have a robust crisis plan in reserve.

6. Overconfidence leading to blindness to small but significant changes

The “confirming evidence trap” leads us to notice things that happen as we expect them to and be blind to small deviations that should alert us to forthcoming problems. Evaluate the detailed data and use “fresh eyes” from an outsider if you are in any doubt.

7. Overconfidence leading to blindness to outside influences

We think we are in control, but we should constantly survey the horizon for external changes that can affect our project or business.

8. Overconfidence leading to blindness to the possibility of human error

We think we have the best people with perfect training, but many factors can lead to human error, like sensory overload, stress, multi-tasking, or distractions. Build in fail-safe modes if failure can lead to disaster.

9. Overconfidence leading to blindness to human weaknesses

People succumb to temptation – to take a day off unannounced, or to come to work tired, or ill, or under the influence of drugs or alcohol. We can be seduced by charismatic or attractive colleagues and are susceptible to greed, fraud and bribery. If you think your people are different, ask “what makes them so?” You’ll need a pretty special answer.

10. Overconfidence leading to deafness to warnings

Cassandra warned the Trojans but, believing their walls impregnable, they were deaf to her warnings. Which of your walls have weaknesses? The Greeks came in through the main gate. Listen to Cassandra and evaluate her warnings objectively.

 


Learn more about managing risk and avoiding failure in business projects in Dr Mike Clayton’s new book, Risk Happens!

See www.riskhappens.co.uk for details.

Buy it in paperback from Amazon, or in Kindle format.

Why the NHS should Beware the Sunk Cost Trap

The Public Accounts Committee has reported on the NHS National Programme for IT (NPfIT) and, as expected, their report is scathing.

This follows an equally damning report from the National Audit Office (NAO) earlier this year.  (By the way, when you click on the link to my earlier blog, you’ll get a quick guide to the acronym spaghetti that comes when you mix IT projects with Government departments!).

The Report’s Conclusions

Not to put too fine a point on it, the Public Accounts Committee said that the problems with the electronic Detailed Care Records system are making the £7bn project “unworkable”.  £2.7bn has already been spent on what one member of the committee, Richard Bacon, described as a “pipedream”.

The Chair of the Committee, Margaret Hodge, said: “Trying to create a one-size-fits-all system in the NHS was a massive risk and has proven to be unworkable. It should now urgently review whether it is worth continuing with the remaining elements of the care records system.”

Background article from the BBC

Interviews with Richard Bacon on BBC TV News and the Radio 4 Today Programme

The Sunk Cost Trap

It is far too easy to look at the committed £2.7bn and see that as a reason to keep going – to avoid the financial, psychological and political pain of having to write it off.  This is our psychological instinct.

It is called the “sunk cost trap” – paying too much attention to the committed investment.  Any decision making must be in reference to the future options: if we do this, the costs and benefits will be these.  No future action will recoup the past investment.  It is gone.

So the analysis must look at what actions will give the best results in spending the remaining budget, some £4.3bn.  The unspent £4.3bn could be re-allocated, to buy other, proven systems, and this may achieve good value for money… or it could be spent directly on patient care.

That is, provided the Government can extract itself from a complex web of massive contracts with global IT companies, which would, no doubt, fight tooth and nail to protect their investments.  This cost and risk must be factored into any calculation upon which a decision is based.
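To make the decision logic concrete, here is a minimal sketch of sunk-cost-free option appraisal. Only the £2.7bn already spent and the £4.3bn remaining come from the reports; the option costs and benefits are purely hypothetical illustrations, not real NPfIT figures.

```python
# Sunk-cost-free option appraisal: rank options on future value only.
# All option figures below are hypothetical illustrations.

SUNK_COST = 2.7  # GBP bn already spent - deliberately never used in the comparison

options = {
    # option: (future cost in GBP bn, expected future benefit in GBP bn)
    "continue as planned":         (4.3, 5.0),
    "buy proven systems instead":  (3.5, 5.5),  # includes a notional contract-exit cost
    "spend on patient care":       (4.3, 6.0),
}

for name, (cost, benefit) in options.items():
    print(f"{name}: net future value = {benefit - cost:+.1f} GBP bn")

# The ranking depends only on future costs and benefits; adding SUNK_COST to
# every option would shift all the numbers equally and change nothing.
```

Whatever the real numbers turn out to be, the point stands: the £2.7bn appears nowhere in the comparison.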

One Advantage

The current situation offers one big advantage over many failing Government projects.  The present Government did not commission the project and, indeed, opposed it.  This means that there is no sunk “political cost” and it is easy to see Ministers distancing themselves further from it.  Indeed, we are already seeing this from current Health Secretary, Andrew Lansley, seen here in a BBC interview.

The “so what?”

Currently, the Government is talking about getting the best from its contracts.  This may be the best option, but it must evaluate all of its options before making a decision, without thought of the £2.7bn already spent.

We will see: The Department of Health has confirmed that it will announce plans for the future of ICT in the NHS this autumn.