superjawes wrote:Hawkwing74 wrote:superjawes wrote:Are these defects during development or after release? Alternatively, is this a released project or an ongoing one?
These are QA defects. Code has not moved to Production environment yet. So pre-release.
So it's stupid. Gotcha.
Stupid is an understatement. The last project I was on that tracked metrics anywhere close to that level had a bug (defect) tracking system. Defects were classified severity 1-5. Severity 1 was a dead-in-the-water type bug -- stuff stopped working, data got corrupted, etc. Severity 5 was for stuff like a random misspelling in a help field or something like that. In order to be considered production code, it had to have no sev 1 or sev 2 bugs outstanding. This was code for ground station processing for a NASA project. There were probably more than a thousand sev 5 defects logged. As far as I know, there wasn't any effort being made to address any of them other than as a side effect of other work being done.
Depending on what is considered a defect, a simple typo every 2.5 years would bust the metric.
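To spell out where the 2.5 years comes from: a quick sketch, assuming a standard ~2000-hour work year (the 5000-hour defect budget is from the thread):

```python
# Rough arithmetic behind "a simple typo every 2.5 years would bust the metric".
# Assumes ~2000 work hours per year (40 h/week * 50 weeks) -- not from the thread.
HOURS_PER_YEAR = 2000
HOURS_PER_ALLOWED_DEFECT = 5000  # the metric: at most 1 defect per 5000 hours

years_per_allowed_defect = HOURS_PER_ALLOWED_DEFECT / HOURS_PER_YEAR
print(years_per_allowed_defect)  # 2.5
```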
Defects per work hour is a useless metric by itself. I can go an entire year without any defects, but I might only produce 2000 lines of code. Or, I can churn out 50,000 lines of code in that same year with 50 defects in it that each take 8 hours of Q/A to find and retest after the fix, and another 8 hours to fix. Which is more productive?
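Running the numbers on those two hypothetical developers makes the point concrete. A sketch, assuming a ~2000-hour work year and charging the second developer for all the QA and rework time:

```python
# Compare the two hypothetical developers from the post. The 2000-hour
# work year is an assumption; the LOC and defect counts are from the post.
HOURS_PER_YEAR = 2000

# Developer A: a defect-free year, but only 2,000 lines of code.
a_loc, a_defects = 2_000, 0

# Developer B: 50,000 lines with 50 defects, each costing 8 hours of
# Q/A to find and retest plus 8 hours to fix.
b_loc, b_defects = 50_000, 50
b_rework_hours = b_defects * (8 + 8)  # 800 hours of Q/A + fix time

a_loc_per_hour = a_loc / HOURS_PER_YEAR                      # 1.0
b_loc_per_hour = b_loc / (HOURS_PER_YEAR + b_rework_hours)   # ~17.9

print(a_loc_per_hour, b_loc_per_hour)
```

Even after billing developer B for 800 hours of rework, they ship code roughly 18x faster per hour, yet the defects-per-hour metric alone would flag them as the "worse" developer.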
If you are only finding 1 defect per 5000 programmer hours, then you don't need a Q/A team. Defects are fine. As developer productivity goes up, the defect rate will go up. So long as productivity is increasing faster than defects, it is a net win.
This all assumes that you can accurately measure developer productivity with something as simple as lines of code produced per hour and defects per LOC. Hint: you can't.
Good luck, and be on the lookout for other stupid metrics driving management directives that signal it's time to be considering a change of employers.
--SS