art with code

2010-05-05

Bayesian effort estimation game

Update: First round results!

How accurately can you estimate how much effort it takes you to write a new piece of code? Fear not, I have a solution! An untested solution! Very academic, you might say!

  1. Compile a list of features of different sizes that you're going to implement.

  2. Pick a small feature for starters.

  3. Estimate the effort required for it: lines of code, time to "it compiles!", time spent fixing bugs, time spent testing, time spent documenting.

  4. Document the effort: write down in your notebook when you start and stop working on the feature, and record the lines of code (from the commit data). If you don't work on the feature, write down why.

  5. When you're done with an estimated task, compare the estimate against the actual effort by computing the effort/estimate ratio for the task. This is your new a priori estimate error for that task.

  6. Pick a new feature, estimate it, and multiply the estimate by the a priori estimate error.

  7. When completing the new feature, calculate its effort/estimate ratio. Add the ratio to the list of ratios and use their average as the new a priori estimate error. Once you have a bunch of errors, plug them into R and start keeping track of their mode, standard deviation, and whatever other useful statistics you can think of. Plot a curve of estimate error per task size.

  8. The goal of the game is to keep the average estimate error of the previous 10 tasks as close to 1 as possible, for as long as possible.

  9. Take photographs of the people playing this game and make trading cards of them with seasonal averages and all the interesting statistics you collected earlier.

  10. Sell the trading cards to kids.

  11. Profit!
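The bookkeeping in steps 5–8 can be sketched in a few lines of Python. This is a hypothetical helper of my own devising, not something from the game itself; all names are made up:

```python
from statistics import mean, stdev

class EstimationGame:
    """Track effort/estimate ratios and maintain an a priori error factor.

    A minimal sketch of the game's bookkeeping; the class and method
    names are assumptions, not part of the original rules.
    """

    def __init__(self):
        self.ratios = []        # effort/estimate ratio per completed task
        self.prior_error = 1.0  # multiplier applied to raw estimates

    def adjusted_estimate(self, raw_estimate):
        # Step 6: multiply your gut estimate by the a priori error.
        return raw_estimate * self.prior_error

    def record_task(self, estimate, actual_effort):
        # Steps 5 and 7: compute the ratio and fold it into the
        # running average, which becomes the new a priori error.
        ratio = actual_effort / estimate
        self.ratios.append(ratio)
        self.prior_error = mean(self.ratios)
        return ratio

    def score(self, window=10):
        # Step 8: average estimate error over the last `window` tasks;
        # the goal is to keep this as close to 1 as possible.
        recent = self.ratios[-window:]
        return mean(recent) if recent else None

    def stats(self):
        # Step 7: summary statistics (stdev needs two or more samples).
        return {
            "mean": mean(self.ratios),
            "stdev": stdev(self.ratios) if len(self.ratios) > 1 else 0.0,
        }
```

For example, estimating a task at 4 hours and spending 6 gives a ratio of 1.5, so the next 4-hour estimate gets adjusted to 6 hours before you even start.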
