Monitoring and enforcing code quality is something of a holy grail in the software industry: nearly every development shop pursues the goal, but few come close to achieving it. Here are a few of the common failed approaches I've seen:
- Developer's Handbook - Despite being a darling of auditors, I find this approach to be a largely worthless exercise for anyone except perhaps the author of the document. Like all waterfall-based functional specifications, these documents quickly become outdated; they are seldom read and, when they are, often misinterpreted by developers. Even in ideal circumstances where developers are highly motivated to follow guidelines that are well understood, this honor-system approach falls hopelessly short simply because of basic human fallibility. Logical mistakes in code are caught because users, testers, or unit tests complain about something not working, but how are code quality mistakes caught?
- Code Reviews - After becoming disillusioned with the honor-based rules approach, many shops attempt to supplement their developer handbooks with a manual code review process. Although a second set of eyes increases the odds of catching errors, this approach is extremely expensive to implement and subject to all sorts of emotional pitfalls that can easily turn the review into either a meaningless rubber-stamping exercise or an ugly battle of egos. Even with the most mature and disciplined developers, this approach ultimately fails to provide the level of code quality that shops are seeking due to a lack of knowledge or simple oversights by the reviewers.
- Architects - In order to maximize the effectiveness of reviewers, shops might take the step of designating a specialist, such as an architect, to be in charge of overall code quality. Unfortunately, whatever advantages this position brings to the review process in terms of technical knowledge and increased authority are often nullified by the specialist being too far removed from the code to appropriately identify issues or too far removed from the team to effectively enforce changes. Even when the specialist is viewed with respect rather than ridiculed as an out-of-touch Architecture Astronaut, suggestions tend to be seen as too subjective to be actionable when weighed against hard deadlines.
Now for some more promising options that I've seen...
- FxCop - This tool analyzes compiled .NET assemblies and produces a list of violations where code doesn't comply with a set of preconfigured rules, such as naming standards. By relying on software rather than people to enforce code quality rules, you dramatically decrease the chance for error, the perceived subjectiveness of the process, and the overall cost. Unfortunately, static analysis tools such as FxCop are only capable of monitoring a certain class of the more superficial rules and don't catch logical errors or help identify more complex code quality issues such as high cyclomatic complexity, low cohesion, excessive dependencies, or overly complex interfaces.
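To make the "superficial" class of rules concrete, here is a hypothetical C# snippet (the type and member names are invented for illustration). FxCop's naming rules, such as CA1709 (identifiers should be cased correctly), would flag the casing violations below, but the tool has no opinion on whether the arithmetic is actually right:

```csharp
// Hypothetical example: FxCop flags the naming violations,
// but says nothing about the correctness of the logic.
public class orderProcessor              // flagged: type name is not PascalCase
{
    public decimal calculate_total(decimal price, int qty)  // flagged: method name is not PascalCase
    {
        // A logic bug here (say, forgetting to apply tax) sails through untouched.
        return price * qty;
    }
}
```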
- NDepend - This static analysis tool picks up where FxCop leaves off by providing visualizations and SQL-like code querying capabilities to identify the areas of the code that suffer from low cohesion, excessive complexity, and inappropriate dependencies. More importantly, it provides a big-picture view of a code base that allows developers to identify and prioritize issues across the entire code base rather than just issues pertinent to the particular piece of functionality being implemented. This allows someone like an architect to effectively analyze a code base without being intimately familiar with all of its details, and to provide objective evidence to back up any recommendations for changing the code.
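For a taste of what that querying looks like, here is a sketch in NDepend's classic CQL syntax (the metric thresholds are arbitrary values chosen purely for illustration):

```
// Surface the ten most complex methods in the code base.
SELECT TOP 10 METHODS WHERE CyclomaticComplexity > 20
ORDER BY CyclomaticComplexity DESC

// Flag large types that also exhibit poor cohesion (high LCOM).
SELECT TYPES WHERE NbLinesOfCode > 500 AND LCOM > 0.8
```

Because the queries are declarative, a team can keep them in source control and run them as part of the build, turning an architect's guidelines into something mechanically enforceable.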
Although I firmly believe that static analysis tools provide the most cost-effective gains in code quality and should form the first line of defense in any shop's quest for code quality, I do think that manual reviews should still figure into the equation. Tools will never be able to verify that code actually solves the problem at hand, that it logically satisfies the user's intent, or that shared IT assets are being used effectively. I even think that a Developer's Handbook can provide value as long as it remains a living document that offers lightweight guidance rather than trying to exhaustively cover low-level details that are ultimately technology-specific and subject to change.
For anyone interested in learning more about NDepend, I recommend starting with this old Hanselminutes podcast, followed by this quick 3-minute online demo of the code querying language in NDepend. Expect more posts from me on this most excellent tool in the near future.