Geeks With Blogs
Development In a Tesseract: Observations and Developmental Experiences

In the traditional waterfall development process, one of the key quality assurance practices is the peer review. In theory, peer reviews allow additional coders to examine the code and, hopefully, discover issues before they make it into the released software.

Unfortunately, in practice, the reviews do not really add meaningful value to the development process. Typically, they are done near the end of development and are almost always superficial in nature. The first few files in the review get a decent treatment, but the effort quickly degenerates into validating the coding standard and perhaps finding a few examples of the reviewer's pet issues. In an effort to reduce this tendency, limits are often imposed on the number of files or lines that can be placed in a single review, but that only adds the overhead of more reviews.

My team is currently involved in a very large coding effort, and our peer reviews are showing the same symptoms. In an effort to improve the situation, I have been moved to make the following suggestion for how to handle peer reviews in the future. Part of the remedy borrows techniques from agile development to sharpen the focus of the reviews. Something similar could probably be done without the agile methods, but I believe the method described here is a good mix of the traditional and the agile.

The Steps

First, it is important not to review things that are better reviewed by a machine. We are creating a gateway procedure that uses the tools available to us to eliminate the common, clerical findings that clutter up most peer reviews. The starting point is a tool such as ReSharper to help get the code into the standard format. Tools are also used to measure complexity and to check the code against basic coding standards; we are currently using the Static Analysis function in Visual Studio Team Developer Edition. Finally, we require executable unit tests (in MbUnit) that are passing and have at least 85% coverage. The peer review moderator ensures that the code has been run through these tools and is passing this basic level of quality before more than one other person is involved in the review.
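The gateway logic itself is simple enough to sketch. The following is a minimal illustration only: the function name, inputs, and thresholds are assumptions for the example (the 85% coverage bar is the one stated above), not the team's actual tooling.

```python
# Hypothetical pre-review gateway check. The inputs would come from the
# team's real tools (test runner, coverage report, static analysis), but
# here they are plain parameters for illustration.

COVERAGE_THRESHOLD = 85.0  # minimum line coverage required before review


def gateway_passes(tests_passed: bool, coverage_percent: float,
                   analysis_warnings: int) -> bool:
    """Return True only if the code is ready for a human review."""
    if not tests_passed:
        return False          # failing tests block the review outright
    if coverage_percent < COVERAGE_THRESHOLD:
        return False          # below the 85% coverage bar
    if analysis_warnings > 0:
        return False          # static analysis findings must be cleared
    return True


# Passing tests, 90% coverage, clean analysis: the review may proceed.
print(gateway_passes(True, 90.0, 0))   # True
# Coverage of 80% is below the bar, so the moderator would bounce it back.
print(gateway_passes(True, 80.0, 0))   # False
```

The point of automating this step is that rejecting code for clerical reasons costs one person a few seconds, rather than a room full of reviewers an hour.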

Once the review is allowed to go forward, the 'peer review' is actually divided into two parts. The first review takes the set of unit tests and matches them against the requirements. The purpose of this review is two-fold: to ensure that all the requirements have been satisfied, and to ensure that, as written, each test actually tests what it says it does. The code the unit tests exercise is not reviewed at this stage.
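The mechanical half of that first review is a traceability check: every requirement should map to at least one test. A minimal sketch, with entirely hypothetical requirement IDs and test names (the judgment call of whether each test really tests what it claims still belongs to the human reviewers):

```python
# Illustrative requirements-to-tests traceability check; the requirement
# IDs and test names below are made up for the example.

requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Each test declares which requirement(s) it exercises, e.g. via a
# naming convention or an annotation in the test source.
test_coverage = {
    "test_login_rejects_bad_password": {"REQ-1"},
    "test_login_accepts_valid_user":   {"REQ-1", "REQ-2"},
}

covered = set().union(*test_coverage.values())
uncovered = sorted(requirements - covered)
print(uncovered)   # requirements with no matching test: ['REQ-3']
```

Any requirement that falls out of this check is a finding in its own right, before a single line of production code has been looked at.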

Given that the unit tests are passing (we would hope), then if the requirements map to the tests and the tests test what they say they do, we have a level of confidence in the code under test. Additionally, if a problem is found with the requirements or the tests, we know it is fruitless to continue with the rest of the review cycle.

After the review of the unit tests is complete, and any findings in the tests have been worked through the codebase, we proceed with a review of the code itself. Because of the earlier steps, the reviewers understand the requirements and have reviewed the tests, which gives them a basic understanding of what they are actually reviewing. Additionally, because the clerical noise and basic 'correctness' have been dealt with by the tools and the earlier review, the reviewers are free to examine the more important (and harder to automate) issues of maintainability and hard-to-find errors such as memory leaks.

In the pilot projects where we have run this process all the way through, we are seeing additional benefits from our code reviews.

Posted on Monday, August 20, 2007 1:12 PM Software Engineering | Back to top

Comments on this post: Making the Most of Traditional Peer Reviews

Copyright © Steven Mitcham