Thursday 21 February 2013

Mental Simulation, Intuition and Insight

In an earlier post, I mentioned Gary Klein, who studies naturalistic decision making, or intuition.

In his book Sources of Power: How People Make Decisions, he explains how he started by interviewing fire-fighters. He found that when fire-fighters make life-and-death decisions, they don't generate a set of alternatives and evaluate them against criteria to determine the best option, as the rational decision-making process recommended by many Operations Researchers would suggest.

Instead, fire-fighters run mental simulations to try to predict the outcome of one course of action after another until they find one that they believe will work.

Without this understanding, it would appear that they are using intuition to make decisions. However, Klein believes that this type of mental simulation only works after many years of experience.

Also in an earlier post, I mentioned that the ultimate goal of Operations Research is the creation of a paradigm shift. Another term for a paradigm shift at the level of an individual is “insight”, or the “Aha” effect.

In this article, Klein explains how a friend of his gained insight with the help of an associate, who ran the friend through a mental simulation.  Through the mental simulation, the friend could see the fallacy in his thinking and discover how to change his mindset.

One tool of Operations Research is computer simulation. Computer simulations can take many months to build, and they can also be difficult to interpret and explain. Because of these difficulties, computer simulations often do not have an impact commensurate with the effort required to build them.

If we follow Gary Klein's advice, we should use mental simulation to explain the findings from our computer simulations. This might help us have more impact in changing paradigms.

Wednesday 6 February 2013

Statistical Contingency Cost Estimation


Bent Flyvbjerg, who wrote Megaprojects and Risk: An Anatomy of Ambition, describes a technique for statistical contingency cost estimation called reference class forecasting.  This is another form of the outside view that could be useful for improving cost estimation processes.

I will provide an example of how it could help a defence capital program.

First, one needs to collect a representative set of data on capital cost overruns from past programs.  That is, for each program, collect the original cost estimate and the final actual cost.

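To make concrete what “overrun” means here, below is a small Python sketch with hypothetical figures of my own (not from the post): the overrun is the gap between the actual cost and the original estimate, expressed as a percentage of the estimate, which appears consistent with the tables that follow.

    # Hypothetical example: cost overrun as a percentage of the original estimate.
    # (Figures are made up for illustration; they are not from the post.)
    original_estimate = 500    # $M, original cost estimate for the program
    actual_cost = 1_100        # $M, final actual cost of the program

    overrun = 100 * (actual_cost - original_estimate) / original_estimate
    print(f"Cost overrun: {overrun:.0f}%")   # -> Cost overrun: 120%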
Below is simulated cost overrun data for 50 capital programs.

Case #    Overrun
1         100%
2         110%
3         70%
4         140%
5         60%
6         90%
7         90%
8         110%
9         80%
10        90%
11        60%
12        100%
13        50%
14        110%
15        100%
16        110%
17        120%
18        110%
19        140%
20        130%
21        90%
22        100%
23        80%
24        80%
25        60%
26        70%
27        100%
28        80%
29        90%
30        110%
31        110%
32        90%
33        90%
34        100%
35        80%
36        120%
37        90%
38        80%
39        100%
40        80%
41        80%
42        110%
43        110%
44        60%
45        90%
46        140%
47        40%
48        110%
49        120%
50        110%

Then I can find the cumulative probability distribution function from this data.  I need to sort the data from lowest to highest and calculate the appropriate percentile value for each data point.

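To make this step concrete, here is a minimal Python sketch (mine, not from the original post) that sorts the 50 overruns and assigns the i-th smallest value a percentile of i/(n+1); the rounded results match the table below.

    # Empirical cumulative percentiles for the 50 simulated overruns.
    # Assumption: the i-th smallest value (1-based) gets percentile i / (n + 1),
    # which reproduces the rounded percentiles in the table below.
    overruns = [
        100, 110, 70, 140, 60, 90, 90, 110, 80, 90,
        60, 100, 50, 110, 100, 110, 120, 110, 140, 130,
        90, 100, 80, 80, 60, 70, 100, 80, 90, 110,
        110, 90, 90, 100, 80, 120, 90, 80, 100, 80,
        80, 110, 110, 60, 90, 140, 40, 110, 120, 110,
    ]  # cost overrun as a percentage of the original estimate

    n = len(overruns)
    for rank, overrun in enumerate(sorted(overruns), start=1):
        percentile = 100 * rank / (n + 1)
        print(f"{overrun}%  {percentile:.0f}%")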
Below is a table showing the cumulative probability results for this sample.

Overrun   Percentile
40%       2%
50%       4%
60%       6%
60%       8%
60%       10%
60%       12%
70%       14%
70%       16%
80%       18%
80%       20%
80%       22%
80%       24%
80%       25%
80%       27%
80%       29%
80%       31%
90%       33%
90%       35%
90%       37%
90%       39%
90%       41%
90%       43%
90%       45%
90%       47%
90%       49%
100%      51%
100%      53%
100%      55%
100%      57%
100%      59%
100%      61%
100%      63%
110%      65%
110%      67%
110%      69%
110%      71%
110%      73%
110%      75%
110%      76%
110%      78%
110%      80%
110%      82%
110%      84%
120%      86%
120%      88%
120%      90%
130%      92%
140%      94%
140%      96%
140%      98%

Then I produce a smooth curve of the cost overrun versus the percentile.  See the table and graph below.

Overrun   Percentile
40%       2%
50%       4%
60%       9%
70%       15%
80%       25%
90%       41%
100%      57%
110%      75%
120%      88%
130%      92%

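The post does not say how the curve was smoothed, but the values above are consistent with averaging the percentile ranks of tied overrun values. Below is a sketch under that assumption (it would also give 140% -> 96%, which the table omits).

    # One possible smoothing rule: average the percentile ranks of tied
    # overrun values. This is an assumption; it happens to reproduce the
    # smoothed table above (and would give 140% -> 96%).
    from collections import defaultdict

    overruns = [
        100, 110, 70, 140, 60, 90, 90, 110, 80, 90, 60, 100, 50, 110, 100,
        110, 120, 110, 140, 130, 90, 100, 80, 80, 60, 70, 100, 80, 90, 110,
        110, 90, 90, 100, 80, 120, 90, 80, 100, 80, 80, 110, 110, 60, 90,
        140, 40, 110, 120, 110,
    ]

    n = len(overruns)
    percentiles = defaultdict(list)
    for rank, overrun in enumerate(sorted(overruns), start=1):
        percentiles[overrun].append(100 * rank / (n + 1))

    for overrun in sorted(percentiles):
        mean = sum(percentiles[overrun]) / len(percentiles[overrun])
        print(f"{overrun}%  {mean:.0f}%")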
[Graph: cumulative percentile versus cost overrun]
From this graph, I can pick a cost overrun value on the horizontal axis and read the cumulative percentage off the vertical axis.  In this way, I can estimate the probability that the actual cost overrun will be less than or equal to a particular overrun value.  For example, 25% of the time the actual cost overrun will be less than or equal to 80% of the initial estimate, about 50% of the time it will be less than or equal to 100%, and 90% of the time it will be less than or equal to 125%.

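As a rough substitute for reading the graph by eye, linear interpolation between the smoothed points gives much the same numbers (for 100% it gives about 57%, a little higher than the rounded 50% quoted above). A minimal sketch:

    # Probability that the actual overrun is at most x, by linear interpolation
    # between the smoothed points (a stand-in for reading the graph by eye).
    import numpy as np

    overrun_pts = np.array([40, 50, 60, 70, 80, 90, 100, 110, 120, 130])   # % of estimate
    percentile_pts = np.array([2, 4, 9, 15, 25, 41, 57, 75, 88, 92])       # cumulative %

    for x in (80, 100, 125):
        p = np.interp(x, overrun_pts, percentile_pts)
        print(f"P(overrun <= {x}%) is roughly {p:.0f}%")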
An easier way to interpret these results is to use the inverse of this function, which I found by linear interpolation.  In this case, I can state the confidence level that a particular contingency cost will cover the expected cost overrun.  See the table and graph below for the inverse function.

Confidence Level   Contingency Cost
10%                62%
20%                75%
30%                83%
40%                89%
50%                96%
60%                102%
70%                107%
80%                114%
90%                125%

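Since the inverse was found by linear interpolation, the table above can be reproduced directly from the smoothed points; a minimal sketch:

    # Invert the smoothed curve by linear interpolation: for each confidence
    # level, find the contingency (as a % of the original estimate) needed to
    # cover the overrun that often. Reproduces the table above.
    import numpy as np

    overrun_pts = np.array([40, 50, 60, 70, 80, 90, 100, 110, 120, 130])
    percentile_pts = np.array([2, 4, 9, 15, 25, 41, 57, 75, 88, 92])

    for confidence in range(10, 100, 10):
        contingency = np.interp(confidence, percentile_pts, overrun_pts)
        print(f"{confidence}% confidence -> contingency of about {contingency:.0f}%")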
[Graph: contingency cost versus confidence level]
Thus, using this graph, if I wanted to be 90% confident of covering the expected cost overrun, I would need a contingency cost of 125% of the original estimate.  A contingency cost of around 60% would provide only about 10% confidence of covering the expected cost overrun.