

Caffeinated Coder A Grande, Triple Shot, Non-Fat Core Dump by Russell Ball
Monitoring and enforcing code quality is something of a holy grail in the software industry: nearly every development shop pursues the goal, but few ever come close to actually achieving it. Here are a few of the common failed approaches I've seen:
  • Developer's Handbook - Although it is a darling of auditors, I find this approach to be a largely worthless exercise for anyone except perhaps the author of the document. Like all waterfall-based functional specifications, these documents quickly become outdated, are seldom read, and are often misinterpreted by developers. Even in ideal circumstances where developers are highly motivated to follow guidelines that are well understood, this honor-system approach falls hopelessly short simply because of basic human fallibility. Logical mistakes in code are caught because users, testers, or unit tests complain about something not working, but how are code quality mistakes caught?
  • Code Reviews - After becoming disillusioned with the honor-based rules approach, many shops attempt to supplement their developer handbooks with a manual code review process. Although a second set of eyes increases the odds of catching errors, this approach is extremely expensive to implement and subject to all sorts of emotional pitfalls that can easily turn the review into either a meaningless rubber-stamping exercise or an ugly battle of egos. Even with the most mature and disciplined developers, this approach ultimately fails to provide the level of code quality that shops are seeking due to a lack of knowledge or simple oversights by the reviewers.
  • Architects - To maximize the effectiveness of reviewers, shops might designate a specialist, such as an architect, to be in charge of overall code quality. Unfortunately, whatever advantages this position brings to the review process in terms of technical knowledge and increased authority are often nullified by the specialist being too far removed from the code to identify issues or too far removed from the team to effectively enforce changes. Even when the specialist is viewed with respect rather than ridiculed as an out-of-touch Architecture Astronaut, the suggestions tend to be seen as too subjective to be actionable when weighed against hard deadlines.

Now for some more promising options that I've seen...

  • FxCop - This tool analyzes compiled .NET assemblies and produces a list of violations where code doesn't comply with a set of preconfigured rules, such as naming standards. By relying on software rather than people to enforce code quality rules, you dramatically decrease the chance for error, the perceived subjectivity of the process, and the overall cost. Unfortunately, rule-based tools such as FxCop can only monitor a certain class of the more superficial rules; they don't catch logical errors or help identify deeper code quality issues such as high cyclomatic complexity, low cohesion, excessive dependencies, or overly complex interfaces.
  • NDepend - This static analysis tool picks up where FxCop leaves off by providing visualizations and SQL-like code querying capabilities to identify the areas of a code base that suffer from low cohesion, excessive complexity, and inappropriate dependencies. More importantly, it provides a big-picture view that allows developers to identify and prioritize issues across the entire code base rather than just in the particular piece of functionality being implemented. This lets someone like an architect effectively analyze a code base without being intimately familiar with all of its details, and provide objective evidence to back up any recommendations for changing the code.
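To make the two ideas above concrete, here is a toy sketch of what these tools automate: a static pass that computes a cyclomatic complexity score for each function, followed by a CQL-style "query" that selects the offenders over a threshold. This is an illustrative Python sketch only, not FxCop or NDepend (both of which target .NET code); the threshold and the sample functions are invented for the demo.

```python
import ast
import textwrap

def cyclomatic_complexity(func: ast.FunctionDef) -> int:
    """Approximate McCabe complexity: 1 + the number of branch points."""
    branch_nodes = (ast.If, ast.For, ast.While, ast.IfExp,
                    ast.ExceptHandler, ast.And, ast.Or)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(func))

def complexity_report(source: str) -> dict:
    """Map each function name in the source to its complexity score."""
    tree = ast.parse(source)
    return {node.name: cyclomatic_complexity(node)
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)}

# Sample "code base" with one trivial and one branch-heavy function.
sample = textwrap.dedent("""
    def simple(x):
        return x + 1

    def branchy(x):
        if x > 0:
            for i in range(x):
                if i % 2 == 0 and i > 2:
                    x += i
        return x
""")

report = complexity_report(sample)

# CQL-style query: "SELECT functions WHERE complexity > threshold"
THRESHOLD = 3
flagged = sorted(name for name, cc in report.items() if cc > THRESHOLD)
print(report)
print(flagged)
```

The point of the query step is the same one NDepend's querying language makes: once the metrics exist as data, prioritizing problem areas becomes a filter over that data rather than a matter of reviewer opinion.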

Although I firmly believe that static analysis tools provide the most cost-effective gains in code quality and should form the first line of defense in any shop's quest for it, I do think that manual reviews should still figure into the equation. Tools will never be able to verify that code is maintainable, that it logically satisfies the user's intent, or that shared IT assets are being used effectively. I even think that a Developer's Handbook can provide value as long as it remains a living document that offers lightweight guidance rather than trying to exhaustively cover low-level details that are ultimately technology-specific and subject to change.

For anyone interested in learning more about NDepend, I recommend starting with this old Hanselminutes podcast, followed by this quick 3-minute online demo of the code querying language in NDepend. Expect more posts from me on this most excellent tool in the near future.

Posted on Monday, September 17, 2007 1:16 AM in Software Development Practices, Tools

Comments on this post: Code Quality: The Holy Grail of Software

# re: Code Quality: The Holy Grail of Software
Of course, if you ever run FxCop against Microsoft-generated code, you are asking for a lot of rule breaking. Give it a try! :D
Left by Robz on Sep 17, 2007 9:24 AM

# re: Code Quality: The Holy Grail of Software
Right on target. In addition, consider projects that still require fixed functionality in a fixed time frame, and the code quality will be the first to suffer. Rarely does someone take time to improve the code base in the midst of a project time-crunch.
Left by Troy T. on Sep 17, 2007 10:05 AM


Copyright © Russell Ball