Tuesday 30 July 2013

Cognitive Dissonance


I recently read the book Mistakes Were Made (But Not by Me). The authors relate the problems of cognitive dissonance; in particular, how, in the face of clear evidence, we often deny that we made a mistake.

They discuss clinical psychology and repressed memory. They discuss police interrogation techniques and false prosecutions. They discuss how cognitive dissonance and self-justification can lead from small problems in marriages to divorce, or from small acts of dishonesty to major crimes and fraud.

I suggest that project managers, when faced with concrete evidence that their cost and time estimates are wrong, will find ways to protect their egos through self-justification. They will find reasons why this evidence does not apply to them. They will explain how their project will be different.

Furthermore, the authors provide evidence that people with the highest self-esteem will be the most likely to deny the evidence. These experts' estimates will not be any better, but they will have much more confidence in them. They will hold more strongly to their original estimates in the face of evidence proving them wrong.

The authors describe the training of police investigators, who are taught from a manual of interrogation techniques intended to help obtain a confession from the suspect. The manual provides suggestions on how to determine whether a suspect is lying. However, in controlled experiments, the police investigators who were trained with this manual did no better than untrained university students at determining whether a suspect was lying. The trained investigators were, however, much more confident that they had correctly distinguished the liars from those who were telling the truth.

This makes me wonder if the courses taught by the Project Management Institute using their Project Management Body of Knowledge do something similar: give project managers more confidence in their estimates, but not more accurate estimates.

Deception and Intelligence


I recently read a book called The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life. Robert Trivers writes about how and why animals and humans try to deceive each other.

Trivers opens the book with a discussion of animal behaviour. In particular, he mentions the cuckoo. Cuckoos lay their eggs in other birds' nests and thereby avoid the effort of incubating the eggs and feeding the newborns.

Some of these birds have learned to count their eggs. If they find that there are more eggs in their nest than they laid, they abandon the nest and go somewhere else.

So, to counter this, the cuckoos have learned that when they lay an egg in another bird's nest, they should push one of the existing eggs out of the nest. Then the count is the same.

To counter this, the other birds have learned to look for broken eggs on the ground below their nests.

In this way, both the cuckoo and the other birds are constantly learning different strategies to deceive and counter the deception.

Trivers suggests that this deception and counter-deception is how intelligence has been formed over time. He also notes that this learning happens much faster than genetic evolution alone would suggest.

We have seen in earlier posts that project managers tend to have an optimism bias. They believe their projects will come in on time and on budget.

They may be attempting to deceive the senior decision makers. According to Trivers, the decision makers should be learning from this deception and trying to counter it. 

I have not seen this type of learning taking place.  Senior decision makers do not appear to be attempting to counter project managers' optimism bias.

The only person I have seen who appears to be recommending that this deception be countered is Bent Flyvbjerg.

I recommend Flyvbjerg's article “Over Budget, Over Time, Over and Over Again” found here, and his books Megaprojects and Risk and Decision-Making On Mega-Projects, in which he suggests methods to counter project managers' optimism bias.

Thursday 6 June 2013

On Being Strategically Wrong

I just finished a book by Robert Kurzban called Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Kurzban is an evolutionary psychologist.

I have read many behavioural economics books recently, such as Thinking, Fast and Slow, Wait: The Art and Science of Delay, and How We Decide. Also in the last couple of years, I have read books like Predictably Irrational: The Hidden Forces That Shape Our Decisions, and its critic's book The Logic of Life: The Rational Economics of an Irrational World.

All of these books describe the same famous psychological experiments, which try to prove that humans are not the rational beings of economic theory, always weighing the costs and benefits of their actions and then acting in their own self-interest. However, the books often draw different conclusions. It can be quite confusing.

Kurzban's book seems to be the exception. It makes the other books look silly. His concept of the modular mind explains how our brains have developed over millions of years of evolution. Our minds contain many parts or “modules”, each with different functions. Sometimes these modules don't communicate very well with each other.

So saying “I think” or “someone is acting in their self-interest” raises the question of who "I" is and what a "self" is.

As Kurzban describes, most of us have the impression that somewhere inside our head is a central controller directing our thoughts. We also have the impression that the part of the brain that controls speech speaks for all of our modules.

However, experiments with split-brain patients show that parts of the right side of the brain can become disconnected from the speaking part of the brain on the left side.

Similar experiments with normal people show that there are many parts of the brain that are not connected to our speaking part. Kurzban suggests that this is why we may have strong opinions about subjects like legalizing abortion, recreational drugs, or prostitution without being able to explain the logic behind our opinions.

I highly recommend Kurzban's book.

However, my own field of expertise is optimism bias in project management. Late in my career in the Department of National Defence after I had completed my PhD dissertation entitled Cost Estimation and Performance Measurement in Canadian Defence, I remarked to the Director of Costing Services that Project Managers' estimates of costs should not be trusted. They are unrealistically optimistic about their projects and will systematically underestimate the costs and overestimate the benefits. The Director brushed my comments aside and quickly replied, “Project managers have to be optimistic”.

Kurzban has an interesting insight into this optimism bias from an evolutionary point of view. Being unrealistically optimistic should have put people at an evolutionary disadvantage over time. If some people were unrealistic about their chances of survival in risky situations and acted irrationally, evolution suggests that their genes would have been weeded out.

Kurzban hypothesizes that there might be an evolutionary advantage to optimism, or being “strategically wrong”, in social settings. Namely, it may be helpful in persuading others to carry out your wishes if you truly believe your plans will be successful.

Although part of your brain may know the facts about the likely success of your plans, the part of your brain that wants to persuade people is able to take control of your behaviour. In that way, you can be convincing in your overly optimistic statements about your project and not actually be lying in the sense of saying something that you don't actually believe.

Therefore, project managers are not really lying about the future costs of their projects. The part of their brain that controls speech may truly believe what they are saying. No amount of factual information about the costs of similar projects will be able to convince them that they are being unrealistic. In fact, it is likely that part of their brain already knows the facts. Unfortunately, that part of their brain is not able to take control of their behaviour.

Sunday 31 March 2013

Modelling and Simulation as Thought Experiments

In my last post, I talked about the potential use of linear regression in cost estimation. Linear regression is a simple type of model. I spent much of my career building and using simulation models and mathematical models to predict system behaviour. These models, though sometimes complicated, were simplifications of the real world that could be solved using a computer.

I recently read Jim Manzi's new book, Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society. In it, he discusses the potential value of randomization to divide subjects into test and control groups during experiments. 

Manzi begins the book with a summary of the history of experimentation. He notes that the success of the physical sciences is directly related to the fact that the problems in those fields have low causal density; that is, relatively few causes interact to produce the observed effects.

Manzi suggests that the reason that social science has made relatively little progress is that predicting human behaviour involves high causal density and is holistically integrated.

Manzi says that randomized field trials, which have proven useful in clinical trials, are being applied successfully by modern businesses. He suggests that they should be applied more widely in social science and public policy. He says that randomized field trials are the only scientific way to determine whether the findings from social science research are valid and whether the proposed public policies will have the desired effect.

In Manzi's opinion, theory and experimentation are a continuous cycle of knowledge development. However, they are quite separate activities.  Theories can be developed in any manner one may wish.  However, experimentation involves a rigorous method that includes test and control groups and the ability to conduct replications.

By this reasoning, modelling and simulation can be considered an extensive form of theory development.  

For the predictions from models and simulations to be verified, one would need to conduct randomized field trials in the real-world.

Friday 29 March 2013

Belief in Modelling and Simulation

Recently I read Nate Silver's book The Signal and the Noise: Why So Many Predictions Fail — but Some Don't. Nate Silver develops models and uses them to make predictions. He suggests that developers who think their model is good should be willing to bet on the predictions it makes.

I also read Daniel Kahneman's book Thinking, Fast and Slow. Kahneman discusses Philip Tetlock's book Expert Political Judgment: How Good Is It? How Can We Know?, which suggests that the best experts in making political estimates and forecasts are no more accurate than fairly simple mathematical models of their estimative processes. This is yet another confirmation of what Robyn Dawes termed "the robust beauty of improper linear models." The inability of human experts to outperform models based on their expertise has been demonstrated in over one hundred fields of expertise over fifty years of research; it is one of the most robust findings in social science.

In an earlier post, I mentioned the company PRICE Systems Inc., which uses linear regression to estimate the acquisition cost of military equipment. In a paper for the Armed Forces Comptroller called "The Mother of All Guesses", Francois Melese and David Rose suggest that linear regression can be used not only to estimate the cost of a new piece of military equipment but also to estimate a confidence interval around the cost estimate.

Here is an example of how this can be done based on a sample of 13 observations.

Cost ($M)   X1   X2   X3   X4
   52.7     55    9   20   13
   73.5     49   15   27    9
   61.4     44   11   15   15
   32.6     43    7    8    6
   28.9     38    7   11    1
   47.4     38    8   14    4
   40.5     37    5   10   14
   21.4     28    4    6    4
   15.4     26    2    4    4
   37.5     24    6    6    6
   57.1     21    5    6    4
   21.1     19    3    3    4
   20.0     10    1    2    4

Using Microsoft Excel's linear regression tool, they found the following statistics.
 
Regression Statistics
Multiple R          0.91694416
R Square            0.84078659
Adjusted R Square   0.76117989
Standard Error      8.89208319
Observations        13

ANOVA
             df          SS        MS         F  Significance F
Regression    4  3340.43608   835.109  10.56176      0.00280361
Residual      8  632.553147  79.06914
Total        12  3972.98923

              Coefficients  Standard Error    t Stat   P-value   Lower 95%  Upper 95%
Intercept       21.2250048      8.29795744  2.557859  0.033759  2.08988065  40.36013
X Variable 1   -0.72742589      0.40687546  -1.78783  0.111608  -1.6656824  0.210831
X Variable 2    4.22796199      1.91636066  2.206245  0.058422  -0.1911736  8.647098
X Variable 3    0.70018905      1.13353292  0.617705  0.553942  -1.9137425  3.314121
X Variable 4       1.18724      0.70677474    1.6798  0.131508  -0.4425855  2.817065

So the regression equation is

Cost = 21.225 - 0.727(X1) + 4.228(X2) + 0.700(X3) + 1.187(X4) 
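
This regression can be reproduced outside Excel. Below is a minimal Python sketch (my own illustration, not from the Melese and Rose paper) that fits the same model with numpy's least squares; assuming the data in the table above, it should recover the same coefficients and standard error that Excel reports.

```python
import numpy as np

# Cost ($M) and the predictors X1..X4 from the 13-observation sample above.
y = np.array([52.7, 73.5, 61.4, 32.6, 28.9, 47.4, 40.5,
              21.4, 15.4, 37.5, 57.1, 21.1, 20.0])
X = np.array([[55,  9, 20, 13], [49, 15, 27,  9], [44, 11, 15, 15],
              [43,  7,  8,  6], [38,  7, 11,  1], [38,  8, 14,  4],
              [37,  5, 10, 14], [28,  4,  6,  4], [26,  2,  4,  4],
              [24,  6,  6,  6], [21,  5,  6,  4], [19,  3,  3,  4],
              [10,  1,  2,  4]], dtype=float)

# Prepend a column of ones so the first fitted coefficient is the intercept.
A = np.column_stack([np.ones(len(y)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Residual standard error, matching Excel's "Standard Error":
# sqrt(SSE / (n - p)), with n = 13 observations and p = 5 parameters.
residuals = y - A @ beta
n, p = A.shape
std_error = np.sqrt(residuals @ residuals / (n - p))

print("coefficients:", np.round(beta, 3))      # intercept, then X1..X4
print("standard error:", round(std_error, 2))  # should print ~8.89
```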

Our new equipment has the following values for X1, X2, X3 and X4.

X1 = 33, X2 = 5, X3 = 8, X4 = 1

So the predicted cost is

Cost = 21.225 - 0.727(33) + 4.228(5) + 0.700(8) +1.187(1)= 25.15

That is, $25.15 million.

The standard error is $8.89 million.

So, based on the normal distribution, 90% of the time the true value will lie within 1.645 standard deviations of the estimate. Thus, the 90% confidence interval on the estimate is

[25.15 – 1.645(8.89), 25.15 + 1.645(8.89)] 

= [$10.52 million, $39.78 million].
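
Continuing the sketch above, the point estimate and the 90% interval take only a few more lines (z = 1.645, following the normal approximation used here):

```python
# New equipment: X1 = 33, X2 = 5, X3 = 8, X4 = 1 (the leading 1 is for the intercept).
x_new = np.array([1, 33, 5, 8, 1])
estimate = x_new @ beta          # about 25.15 ($ million)

z = 1.645                        # two-sided 90% normal interval
low, high = estimate - z * std_error, estimate + z * std_error
print(f"${estimate:.2f}M, 90% interval [${low:.2f}M, ${high:.2f}M]")
```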

I would suggest that this estimate based on a linear regression model of past cost data would be a better prediction of the final cost than the expert opinion of the project manager who is subject to optimism bias.  You can bet on it.

Thursday 21 February 2013

Mental Simulation, Intuition and Insight

In an earlier post, I mentioned Gary Klein, who studies naturalistic decision making, or intuition.

In his book Sources of Power: How People Make Decisions, he explains how he started by interviewing fire-fighters. He found that when fire-fighters make life-and-death decisions, they don't consider a set of alternatives and evaluate them against criteria to determine the best option, as the rational decision-making process recommended by many Operations Researchers would suggest.

Instead, fire-fighters run mental simulations to try to predict the outcome of one course of action after another until they find one that they believe will work.

Without this understanding, it would appear that they are using intuition to make decisions. However, Klein believes that this type of mental simulation only works after many years of experience.

Also in an earlier post, I mentioned that the ultimate goal of Operations Research is the creation of a paradigm shift. Another word for paradigm shift that applies to individuals is “insight” or the “Aha” effect.

In this article, Klein explains how a friend of his gained insight with the help of an associate. The associate ran the friend through a mental simulation, through which the friend could see the fallacy in his thinking and discover how to change his mindset.

A tool of Operations Research is computer simulation. Computer simulations can take many months to build. They can also be difficult to interpret and explain. Because of these issues, computer simulations often do not have an impact commensurate with the effort of building them.

If we follow Gary Klein's advice, we should use mental simulation to explain the findings from our computer simulations. This might help us have more impact in changing paradigms.

Wednesday 6 February 2013

Statistical Contingency Cost Estimation


Bent Flyvbjerg, who wrote Megaprojects and Risk: An Anatomy of Ambition, describes a technique for conducting statistical contingency cost estimation called reference class forecasting. This is another form of the outside view that could be useful for improving cost estimation processes.

I will provide an example of how it could help a defence capital program.

First, one needs to collect a representative set of data on capital cost overruns from the past. That is, for each program, collect the original cost estimate and the final actual cost. The overrun is then the difference between the actual cost and the original estimate, expressed as a percentage of the original estimate.

Below is simulated cost overrun data for 50 capital programs.

Case  Overrun   Case  Overrun   Case  Overrun   Case  Overrun   Case  Overrun
  1    100%      11     60%      21     90%      31    110%      41     80%
  2    110%      12    100%      22    100%      32     90%      42    110%
  3     70%      13     50%      23     80%      33     90%      43    110%
  4    140%      14    110%      24     80%      34    100%      44     60%
  5     60%      15    100%      25     60%      35     80%      45     90%
  6     90%      16    110%      26     70%      36    120%      46    140%
  7     90%      17    120%      27    100%      37     90%      47     40%
  8    110%      18    110%      28     80%      38     80%      48    110%
  9     80%      19    140%      29     90%      39    100%      49    120%
 10     90%      20    130%      30    110%      40     80%      50    110%

Then I can find the cumulative probability distribution function from this data. I need to sort the data from lowest to highest and calculate the appropriate percentile value for each observation.

Below is a table showing the cumulative probability results for this sample.

Overrun  Pctile   Overrun  Pctile   Overrun  Pctile   Overrun  Pctile   Overrun  Pctile
  40%      2%       80%     22%       90%     41%      100%     61%      110%     80%
  50%      4%       80%     24%       90%     43%      100%     63%      110%     82%
  60%      6%       80%     25%       90%     45%      110%     65%      110%     84%
  60%      8%       80%     27%       90%     47%      110%     67%      120%     86%
  60%     10%       80%     29%       90%     49%      110%     69%      120%     88%
  60%     12%       80%     31%      100%     51%      110%     71%      120%     90%
  70%     14%       90%     33%      100%     53%      110%     73%      130%     92%
  70%     16%       90%     35%      100%     55%      110%     75%      140%     94%
  80%     18%       90%     37%      100%     57%      110%     76%      140%     96%
  80%     20%       90%     39%      100%     59%      110%     78%      140%     98%
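
As an aside, the percentile column can be generated mechanically. Below is a small Python sketch (my own; it assumes the table uses the common i/(n+1) plotting position, which matches the percentiles shown above):

```python
import numpy as np

# The 50 simulated overruns (%) from the first table, in case order.
overruns = np.array([100, 110,  70, 140,  60,  90,  90, 110,  80,  90,
                      60, 100,  50, 110, 100, 110, 120, 110, 140, 130,
                      90, 100,  80,  80,  60,  70, 100,  80,  90, 110,
                     110,  90,  90, 100,  80, 120,  90,  80, 100,  80,
                      80, 110, 110,  60,  90, 140,  40, 110, 120, 110])

n = len(overruns)
sorted_overruns = np.sort(overruns)
# i/(n+1) plotting position: the i-th smallest value gets percentile i/51.
percentiles = 100 * np.arange(1, n + 1) / (n + 1)

for o, pct in zip(sorted_overruns, percentiles):
    print(f"overrun {o:3d}%  ->  percentile {pct:2.0f}%")
```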

Then I produce a smooth curve of the cost overrun versus the percentile.  See the table and graph below.

Overrun  Percentile
  40%        2%
  50%        4%
  60%        9%
  70%       15%
  80%       25%
  90%       41%
 100%       57%
 110%       75%
 120%       88%
 130%       92%

[Graph: cumulative percentile (vertical axis) versus cost overrun (horizontal axis)]
From this graph, I can use the cumulative percentage value on the vertical axis to look up a cost overrun value on the horizontal axis. In this way, I can estimate the probability of the actual cost overrun being less than or equal to a particular overrun value. For example, 25% of the time the actual cost overrun will be less than or equal to 80% of the initial estimate, 50% of the time it will be less than or equal to 100%, and 90% of the time it will be less than or equal to 125%.

An easier way to interpret these results is by using the inverse of this function, which I found by linear interpolation. In this case, I can provide a confidence level that a particular contingency cost will cover the expected cost overrun. See the table and graph below for the inverse function.

Confidence Level   Contingency Cost
      10%                62%
      20%                75%
      30%                83%
      40%                89%
      50%                96%
      60%               102%
      70%               107%
      80%               114%
      90%               125%

[Graph: contingency cost versus confidence level]
Thus, using this graph, if I wanted to be 90% confident of covering the expected cost overrun, I would need a contingency of 125% of the original estimate. A contingency of 60% would provide only about 10% confidence of covering the expected cost overrun.
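
For completeness, here is a minimal Python sketch of that inverse lookup (my own illustration; the original was presumably done in a spreadsheet). Linear interpolation over the smoothed table reproduces the contingency values above:

```python
import numpy as np

# Smoothed cumulative distribution from the earlier table (both columns in %).
overrun    = np.array([40, 50, 60, 70, 80, 90, 100, 110, 120, 130])
percentile = np.array([ 2,  4,  9, 15, 25, 41,  57,  75,  88,  92])

def contingency_for_confidence(conf):
    """Invert the overrun CDF by linear interpolation: return the contingency
    (as % of the base estimate) that covers the overrun with confidence conf (%)."""
    return np.interp(conf, percentile, overrun)

for conf in range(10, 100, 10):
    print(f"{conf}% confidence -> {contingency_for_confidence(conf):.0f}% contingency")
# e.g. 90% confidence -> 125% contingency, matching the table above.
```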