## 9.2 Compare Estimates to Actuals

Prying yourself loose from single-point/Best Case estimates is half the battle. The other half is comparing your actual results to your estimated results so that you can refine your personal estimating abilities.

Keep a list of your estimates, and fill in your actual results when you complete them. Then compute the Magnitude of Relative Error (MRE) of your estimates (Conte, Dunsmore, and Shen 1986). MRE is computed using this formula:

    MRE = |Actual Result − Estimated Result| / Actual Result
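As a minimal sketch, the formula translates directly into a one-line Python function (the function name is illustrative, not from the source):

```python
def mre(actual: float, estimated: float) -> float:
    """Magnitude of Relative Error: |actual - estimated| / actual."""
    return abs(actual - estimated) / actual

# Feature 1 from the figure: expected 1.54 days, actual 2 days
print(round(mre(2, 1.54), 2))  # 0.23, i.e., 23%
```

Note that the error is normalized by the *actual* result, not the estimate, so an underestimate of a large actual and an overestimate of a small actual are penalized on the same scale.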

The figure below shows how the MRE calculations work out for the Best Case and Worst Case estimates presented earlier.

Figure: Example of Spreadsheet for Tracking Accuracy of Individual Estimates

| Feature | Best Case (days) | Worst Case (days) | Expected Case (days) | Actual Outcome (days) | MRE | In Range from Best Case to Worst Case? |
|---|---|---|---|---|---|---|
| Feature 1 | 1.25 | 2 | 1.54 | 2 | 23% | Yes |
| Feature 2 | 1.5 | 2.5 | 1.83 | 2.5 | 27% | Yes |
| Feature 3 | 2 | 3 | 2.33 | 1.25 | 87% | No |
| Feature 4 | 0.75 | 2 | 1.13 | 1.5 | 25% | Yes |
| Feature 5 | 0.5 | 1.25 | 0.79 | 1 | 21% | Yes |
| Feature 6 | 0.25 | 0.5 | 0.46 | 0.5 | 8% | Yes |
| Feature 7 | 1.5 | 2.5 | 2.00 | 3 | 33% | No |
| Feature 8 | 1 | 1.5 | 1.25 | 1.5 | 17% | Yes |
| Feature 9 | 0.5 | 1 | 0.75 | 1 | 25% | Yes |
| Feature 10 | 1.25 | 2 | 1.54 | 2 | 23% | Yes |
| TOTAL | 10.50 | 18.25 | 13.625 | 16.25 | | 80% Yes |
| Average | | | | | 29% | |

In this spreadsheet, the MRE is calculated for each estimate. The average MRE, shown in the bottom row, is 29% for this set of estimates. You can use this average MRE to measure the accuracy of your estimates; as your estimates improve, you should see the MRE decline. The rightmost column shows whether each estimate fell within the best case/worst case range. You should also see the percentage of estimates that fall within that range increase over time.
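The spreadsheet's bottom-row summary can be reproduced in a few lines of code. Here is a sketch in Python using the data from the figure (the variable names are illustrative):

```python
# Each row: (best, worst, expected, actual) in days, per the figure's features
rows = [
    (1.25, 2.0, 1.54, 2.0),
    (1.5, 2.5, 1.83, 2.5),
    (2.0, 3.0, 2.33, 1.25),
    (0.75, 2.0, 1.13, 1.5),
    (0.5, 1.25, 0.79, 1.0),
    (0.25, 0.5, 0.46, 0.5),
    (1.5, 2.5, 2.00, 3.0),
    (1.0, 1.5, 1.25, 1.5),
    (0.5, 1.0, 0.75, 1.0),
    (1.25, 2.0, 1.54, 2.0),
]

# MRE of each Expected Case estimate: |actual - expected| / actual
mres = [abs(actual - expected) / actual
        for best, worst, expected, actual in rows]

# Did the actual outcome land between the best and worst cases?
in_range = [best <= actual <= worst
            for best, worst, expected, actual in rows]

avg_mre = sum(mres) / len(mres)
pct_in_range = 100 * sum(in_range) / len(in_range)
print(f"Average MRE: {avg_mre:.0%}")      # Average MRE: 29%
print(f"In range: {pct_in_range:.0f}% Yes")  # In range: 80% Yes
```

Running this recovers the figure's summary row: an average MRE of 29%, with 8 of the 10 actuals falling inside their best case/worst case ranges.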

Tip #46: Compare actual performance to estimated performance so that you can improve your individual estimates over time.

When you compare your actual performance to your estimates, you should try to understand what went right, what went wrong, what you overlooked, and how to avoid making those mistakes in the future.

Another practice that sets up a feedback loop and encourages accurate estimates is a public estimation review. I've worked with companies that have their developers report on their actual results versus their estimates at a Monday morning standup meeting. This reinforces the idea that accurate estimates are an organizational priority.

Regardless of how you do it, the key principle is to set up a feedback loop based on actual results so that your estimates improve over time. To be effective, the feedback should be as timely as possible; delay reduces the effectiveness of the feedback loop (Jørgensen 2002).