Geeks With Blogs


Dylan Smith ALM / Architecture / TFS

I wanted to give my take on developer metrics.  I’ve read some posts by a few people out there (here, here, here, and here) that suggest that developer-specific metrics do more harm than good.  The first time I read through their arguments they made a lot of sense to me.  But after thinking about it some more (due to reading Joel Semeniuk’s latest blog post) I’ve changed my opinion.  I still believe that team/project level metrics are very important.  But I also believe that developer-specific metrics have a place and can provide significant value to the team.  The typical argument against developer-specific metrics is that they can have a negative impact due to developers trying to “game” the system and adjusting their behavior to score high on the metrics, regardless of whether that’s what’s best for the project/team/company.  Some examples: 

Lines Of Code – This can just cause developers to write verbose, overly complex code.  Ideally developers should be striving to write the simplest code that gets the job done…but this metric can put pressure on them to write unnecessarily lengthy code, or just not spend any time/effort cleaning up their code.

# Unit Tests – This can cause developers to write redundant and/or unnecessary unit tests to artificially inflate their score against the metric. 

% Code Coverage – This can cause developers to spend too much time trying to squeeze out that last couple of % of code coverage rather than spending their time on more valuable activities.  Alternatively it can cause them to leave out code that is hard to achieve code coverage on (e.g. Exception handlers).  This is obviously undesirable as it can lead to bugs.

# Bugs – At first this seems like a decent measure of quality, but it can cause arguments about what should or shouldn’t be recorded as a bug and against whom.  If you’re measuring developer performance using this metric the developers are going to fight tooth and nail to avoid having a bug recorded against them; arguing that it’s a feature, or wasn’t in the spec, etc, etc.  To quote Joel Spolsky “…pretty soon, the measurements give you what you “wanted”: the number of bugs in the bug tracking system goes down to zero.  Of course, there are just as many bugs in the code, those are an inevitable part of writing software, they’re just not being tracked.” 

# Bugs Fixed – This just puts pressure to quickly mark a bug as fixed, without giving the proper effort to fully verify that it is fixed.

The list can go on and on. 

There is one metric that I think is especially important though, and that is Velocity.  Velocity is an important metric so long as two criteria are met: 1) it’s possible to measure developer-specific velocity under the process you employ, and 2) velocity is treated as a measure of value delivered, not just some arbitrary measure of progress such as tasks completed, amount of code written, or complexity of code written.

Of course measuring developer velocity has its own problems just like every other metric.  A developer could rush out poor quality code just to increase their perceived velocity. 
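To make the “value delivered” idea concrete, here is a minimal sketch of measuring developer-specific velocity.  It assumes your process assigns each accepted work item an agreed value (e.g. story points) and records which developer delivered it; the data shape, field names, and numbers are all invented for illustration, not taken from any particular tool.

```python
from collections import defaultdict

# Hypothetical iteration data: each completed, accepted work item records
# the developer who delivered it and the value agreed for it (e.g. story
# points). Names and values here are illustrative only.
completed_items = [
    {"developer": "alice", "value": 5},
    {"developer": "alice", "value": 3},
    {"developer": "bob",   "value": 8},
]

def velocity_per_developer(items):
    """Sum the value of accepted work per developer for one iteration."""
    totals = defaultdict(int)
    for item in items:
        totals[item["developer"]] += item["value"]
    return dict(totals)

print(velocity_per_developer(completed_items))
# {'alice': 8, 'bob': 8}
```

The key design point is that only *accepted* work counts toward the total, which is what keeps velocity a measure of value delivered rather than raw activity.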

Despite all the issues identified with these metrics I believe they do provide a good deal of value so long as they are used in combination and not in isolation.  For example, Velocity is an exceptionally valuable metric (IMHO); but as pointed out above if used in isolation it can lead to a focus on rushed code rather than quality code.  This can be mitigated by having other metrics that measure quality such as # of Bugs and % Code Coverage.

In fact, I think most of the issues with the metrics outlined above are resolved simply by using the metrics in combination. 

Lines Of Code – The issue of having unnecessarily verbose and/or complex code can be mitigated by also measuring % Code Coverage and # of Bugs.

# Unit Tests – This metric may not be such a good choice, since I believe that % Code Coverage is a more effective measure of the same thing.  The issue with this metric is redundant or unnecessary tests.  That can be mitigated by also measuring Velocity, which pressures the developer not to waste time creating unnecessary tests, since doing so will slow down their velocity. 

% Code Coverage – This metric becomes more balanced in the presence of a Velocity metric which will reduce the temptation to spend too much time squeezing out that last couple %.  Also the # of Bugs metric should help balance the desire to leave out code that is difficult to test.

# Bugs – The issue with this can be mitigated somewhat by having a # Bugs Fixed (or # Bugs Found) metric that may help balance the pressure to log a bug vs. not log a bug. 

# Bugs Fixed – The issue with this can be mitigated by having another metric that tracks the number of bugs re-opened that were initially “fixed” by each developer.
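The balancing act described in the list above can be sketched in code: rather than ranking anyone on a single number, view the metrics side by side and flag combinations that suggest gaming (high velocity with low coverage, or fixes that keep getting re-opened).  Everything here is a hypothetical illustration; the field names, thresholds, and figures are assumptions, not a prescribed formula.

```python
# Hypothetical per-developer snapshot for one iteration. All numbers,
# names, and thresholds are invented for illustration.
metrics = {
    "alice": {"velocity": 8,  "coverage_pct": 85, "bugs_logged": 2, "bugs_reopened": 0},
    "bob":   {"velocity": 12, "coverage_pct": 40, "bugs_logged": 9, "bugs_reopened": 3},
}

def flag_imbalances(snapshot, min_coverage=70, max_reopened=1):
    """Flag developers whose velocity isn't backed up by the quality metrics."""
    flags = {}
    for dev, m in snapshot.items():
        issues = []
        if m["coverage_pct"] < min_coverage:
            issues.append("low coverage")
        if m["bugs_reopened"] > max_reopened:
            issues.append("fixes being re-opened")
        flags[dev] = issues
    return flags

print(flag_imbalances(metrics))
# {'alice': [], 'bob': ['low coverage', 'fixes being re-opened']}
```

The point is not the particular thresholds but the shape: no metric is read in isolation, so inflating one number without the others tends to raise a flag rather than a score.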

I believe that with an appropriate (balanced) combination of metrics the tendency for developers to “game” the system will be much less likely and much more difficult.  Having said that, you still need to be vigilant and monitor your processes and product(s) for weaknesses.  If a weakness is identified that isn’t easily visible in your current metrics, try and devise a new metric that will help make the weakness visible and hopefully put some pressure on improving that area. 

Overall I believe that having a set of developer metrics can provide valuable information to the team.  It can aid in identifying areas of weakness and put pressure on improvement.  The tendency or ability to optimize your behavior against the metrics is significantly reduced by the simple practice of utilizing a combination of varied and balanced metrics.  If there is a negative result due to the metrics, then I would suggest simply identifying the weakness and introducing a new metric that puts focus on improving it.  Certainly having this information available, and the ability to identify and improve weaknesses with ease, is better than having no developer metrics in place for fear of developers “gaming” the system.  That sounds to me like resigning yourself to living with the existing weaknesses for fear of developing new ones in the course of evolving your practices. 

Posted on Wednesday, February 21, 2007 10:11 AM

Comments on this post: Developer Metrics - Useful or Harmful?

# re: Developer Metrics - Useful or Harmful?
Hello, yes, choosing the key metrics for developer/programmer productivity is indeed complex. But I would still like to know your opinion: which metrics do you feel can help, or are presently used, to improve programmer productivity in a software development process?

please do reply at my email address:

Expecting your feedback as soon as possible

Email -
Left by Hardik J. Radia on Jan 10, 2008 12:43 AM

# re: Developer Metrics - Useful or Harmful?
I found your post while searching on Developer Velocity. I am becoming a fan of Evidence Based Scheduling, from Joel S. It says Developer Velocity is estimated time / actual time. If you have the developer responsible for tracking both (maybe using a system like FogBugz, or my dream of having it integrated with TRAC Time Management and SVN--it is close) then it can give a very good metric for the project and the developer. You can also have developers do the estimating and tracking together, on specific features/use cases/requirements, to see how pairs of developers track in terms of their velocity. I find it very fascinating, and think it is the way things are moving.

Thanks for your post!

Best wishes!
Pete Gordon
Left by Pete Gordon on Apr 30, 2008 6:48 AM

# re: Developer Metrics - Useful or Harmful?
Hi, I like the explanation very much, but I would like to implement all the key values in a spreadsheet with the desired formulas. It would be really helpful if you could provide a template in which all the formulas are present; the prime focus is to calculate performance and productivity.

Left by mitul on Sep 12, 2011 9:16 PM


Copyright © Dylan Smith