Blake Caraway
two basic questions

It seems silly and awfully fundamental, but ask any professional software developer this question:
    Does your code work correctly?
When faced with answering this question, most developers will - and should - undoubtedly reply with a resounding "hell yes, my code works!". Now follow that up by asking another question:
    Can you prove the claim that your code works correctly in the next 10 minutes?
The answer to this second question speaks volumes not only about a developer's skillset, but also about the team he or she works with - specifically, that team's ability to provide solid business value in a timely and efficient manner.


proving code works: the debugger is not the answer

Debugger. The name says it all. It exists so one can fire it up to determine what's going on inside an application line by line because the behavior is in question - i.e. there's a bug and you need to determine where it is and what conditions are causing it (duh). If the sole method of determining that your code does (or likely does not) work as expected is firing up a fully functional system and using the debugger to put the application through its paces by hand, then you are putting your business at risk. It's that simple. The more applications your company has in play, the bigger the risk. Developing - not to mention trying to sustain - business applications in this way is lunacy.

You must be able to prove that a particular system component is working before the final product is assembled. In my experience, this is not how the majority of software is developed throughout the industry. Unfortunately, I've seen (and continue to see) too many developers blindly bang out a new feature and then fire up the debugger to step through the code in order to see it in action for the first time, trying to make sure the new feature plays nicely within the application. Why do we accept this risky behavior that continually jeopardizes our business's ability to generate revenue by constantly turning half-baked, untested applications full of goo loose in the production environment? Why do we then complain about the amount of time spent supporting said goo-filled application, as if we had nothing to do with bringing about this sad state of affairs?

Using the debugger as the only code verification method carries with it some of the following bad habits:
  • FEAR & UNCERTAINTY -- A developer is afraid of the code he just wrote, thus the constant need to step into the code and watch it run, all the time, in a fully functional system.
  • WASTED TIME -- A complete, fully functional, end-to-end application environment on your dev box, required for the 'code and debug' method of feature development, can take a lot of time to set up just right. A fully-loaded database on your system, live web services running somewhere, a message queuing infrastructure configured and running, and/or maybe even a security/authorization infrastructure all have to be set up and kept in working order -- all so a developer can fire up the application and step through the code to see if it works or (more likely) not.
  • GREATER CHANCE OF DEVELOPER ERROR -- The manual act of using the debugger to step through code verifying application behavior requires the developer to think about, remember, and reproduce all valid (and invalid) usages of a system component, making sure it can handle all the different imaginable scenarios. Without some sort of test automation, it's almost a certainty the developer will forget to run through one or more scenarios when stepping through the code manually.
I've actually seen developers test entire order systems by hand over a number of days: thinking about all the various scenarios that *could* occur, generating actual XML files, waiting for services to pick up said orders and process them into the system, then querying data 10 different ways from Sunday to determine SUCCESS or FAILURE. Wow. So now the part of the business that enables revenue generation relies completely on a single developer's memory and uncommon ability to work in this insane manner. Thus, this developer is the ONLY person on the team who knows about this system because there's way too much tribal knowledge to share with others.
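To be clear about the alternative: every one of those hand-checked scenarios can be captured once as an automated test and re-run in seconds. A minimal sketch (the process_order function and the scenarios are invented here for illustration, not taken from the system described above):

```python
import unittest

# Hypothetical stand-in for the real order-processing entry point.
def process_order(order):
    if not order.get("items"):
        return {"status": "REJECTED", "reason": "empty order"}
    total = sum(item["qty"] * item["price"] for item in order["items"])
    return {"status": "ACCEPTED", "total": total}

class OrderScenarioTests(unittest.TestCase):
    """Each scenario a developer would otherwise verify by hand becomes a repeatable test."""

    def test_valid_order_is_accepted(self):
        order = {"items": [{"qty": 2, "price": 10.0}]}
        self.assertEqual(process_order(order)["status"], "ACCEPTED")

    def test_total_is_the_sum_of_line_items(self):
        order = {"items": [{"qty": 2, "price": 10.0}, {"qty": 1, "price": 5.0}]}
        self.assertEqual(process_order(order)["total"], 25.0)

    def test_empty_order_is_rejected(self):
        self.assertEqual(process_order({"items": []})["status"], "REJECTED")

if __name__ == "__main__":
    unittest.main()
```

Run the suite once and every scenario is verified; run it a hundred times a day and no single developer's memory is a point of failure.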


testing, testing...is this thing on?

A very obvious fix to this problem is, of course, automated testing. By supporting our code with unit (written before the code - i.e. Test Driven), integration, and user-acceptance tests (and automating them so they run regularly), we help ourselves out immensely by ensuring we have a working codebase every step of the way. This feedback is priceless! That sound you hear when some poor, unsuspecting user discovers/proves your application doesn't work is the sound of potential revenue running to your closest competitor and then telling everyone about the terrible experience they had with your company. The earlier in the development cycle you receive feedback from your code that it's satisfying all business requirements and behaving as expected, the better.

For some time now, we've been aware of the virtues of a software QA organization within a business. But for too long developers have viewed QA organizations as corporate 'bus boys', there to clean up the mess made during a project's design and development cycle. I recently had a colleague tell me he needed someone else to test the code he's written over the past month (this is days before the code is going to be put into production - doh!). No doubt having multiple parties exercise code is valuable, but if you don't know your code works as expected and you are waiting for others to prove it, then it's time to rethink your approach. This irresponsible behavior on the part of individuals and entire software development teams alike must be extinguished.

Despite the rising popularity of test-driven development and the increased discussion of topics like automated testing and continuous integration, it's clear that most developers have not taken enough interest in these concepts to add them to their skillsets.


on the road to improvement

Okay, so what are the concepts that go into developing good code supported by a battery of tests? In the blog posts to come, I'll discuss concepts and practices that will (hopefully) help transform developers from fearful, time-wasting, error-prone hackers into confident, focused, efficient developers.

Posted on Wednesday, November 1, 2006 11:58 PM | Rants


Comments on this post: Does Your Code Work?

# re: Does Your Code Work?
Nice post Blake. Looking forward to the rest.
Left by Jeremy Miller on Nov 02, 2006 4:56 PM

# re: Does Your Code Work?
In the second section of your post, you propose a technological solution to the question "does your code work?" However, automated testing really answers the question "does your code work the way you think it should?" A slightly different question. Deceptively subtle.

If developers are spending days thinking about various scenarios that could occur, that indicates to me that there's not a firm grasp on which scenarios must actually be supported. And this drives me straight to one of my points: that developers shouldn't drive testing scenarios...the testing scenarios should be driven by the business requirements that the software is supposed to be fulfilling. If we're spending days in analysis paralysis, the requirements probably aren't clear or precise enough.

Automated testing, regardless of which department is pushing the buttons, is no substitute for writing correct software based on a correct understanding of correct requirements and specifications.

I've found that the more significant benefit of automated testing is to raise awareness of unexpected changes in the way code is functioning from one build to the next -- almost a form of change control. In other words, once software is "code complete" and passes all of its automated tests, a failed unit test means that either a) the automated test wasn't updated for a corresponding code change, or b) a code change unexpectedly broke the automated test.

And it goes without saying that, during the build and packaging process, things should come to a screeching halt if automated tests don't pass 100%.

A big weakness I see in the use of automated testing is that developers are often expected to write their own automated unit tests. This is no different than asking a fox to guard a henhouse. Again, this is not a weakness in the technology, it's a weakness in the application of the technology. If a developer can make a mistake once, he can make the same mistake twice.

A more effective use of automated testing technology might be to do something like this: Pair up two developers, give them both the same specification, have one write the unit test, and have the other write the actual code. By separating out the implementation from the test building, you can gain an element of objectivity in your unit testing.

Call it Xtreme Unit Testing or XUT if you want. Get the fox away from the henhouse.

Even better feedback than 100% test passage on 100% automated unit test coverage is to get feedback from users. I've yet to see automated tests effectively test important aspects of software acceptance like:

  • Is this button in the right location?
  • Does this page look as good as I thought it would?
  • Does this page allow users to control font size?
  • Does this software help me get my job done faster or better than if I were not to use the software?

Did you really mean to suggest in your post that user acceptance tests can be automated?

QA's primary responsibility is usually to test the software at a considerably higher level than the tests run by development. Of course, QA often gets sucked into the minutiae of ridiculous nits that take up 80% of their time, leaving the real system testing to those deploying the software, or even worse, end users. But that's a different conversation.

I agree completely that proper unit testing practices are sorely lacking from many software organizations. But I've also worked on projects in which 100% of tests passed, but the software was simply unusable. Can this really be considered "mission accomplished" for the development team?
Left by Chris Smouse on Nov 03, 2006 11:27 PM

# re: Does Your Code Work?
Chris,

Thanks for the comments. My main intention for the post on this subject was to bring attention (harp? rant?) to the fact that there is a better way to develop "working code" (defined, obviously, by the requirements set by the stakeholders requesting/using the application features being developed).

The best way, as I see it, to help confirm that your code works as it should is by supporting the codebase with a battery of tests - unit, integration, and acceptance. A developer working from a list of user stories stating the application's requirements - and having these stories subsequently broken down into programming tasks - can begin coding test first by creating unit tests (using Test Driven Development) to receive immediate and continuous feedback on whether the code is working as expected.

The very act of writing a test before the actual code serves as a calibration, allowing the developer to begin working from a known, failed state. By making the test pass, we prove that our calibration device - the test - can function correctly and indicate the code is working as designed. Since the requirements are guiding the developer's efforts, each pass through the TDD Red-Green-Refactor cycle should provide a great deal of confidence that the code is evolving into application features that will eventually come together to form a successful application. As the developer begins to create code in a test first manner, he or she can (and should) constantly run the growing suite of unit tests to ensure the codebase is always in working order.
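To make that cycle concrete, here is a deliberately tiny sketch (the discount rule and every name in it are invented for this comment, not taken from any real codebase). The test is written first and fails, then just enough code is written to make it pass, and only then is the implementation cleaned up:

```python
import unittest

# Step 1 (Red): these tests are written before the production code exists,
# so the first run fails -- proving the 'calibration device' can detect a broken state.
class DiscountTests(unittest.TestCase):
    def test_orders_over_100_get_ten_percent_off(self):
        self.assertEqual(discounted_total(120.0), 108.0)

    def test_small_orders_pay_full_price(self):
        self.assertEqual(discounted_total(40.0), 40.0)

# Step 2 (Green): the simplest code that makes both tests pass.
def discounted_total(total):
    return total * 0.9 if total > 100 else total

# Step 3 (Refactor): with the tests green, the implementation can be reshaped freely,
# because any regression immediately turns a test red again.

if __name__ == "__main__":
    unittest.main()
```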

You are exactly right in saying automated testing provides a constant 'awareness' about the state of a codebase. This is what Continuous Integration is all about. It offers the development team a stream of feedback regarding the health of an application's code base. By automating the unit, integration, and user acceptance tests and reporting the results of each test's execution, a development team is constantly aware of the overall progress they are making.
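As a rough sketch of what that automation can look like (assuming nothing about any particular CI product -- this is just a hypothetical build step a server could run after every check-in):

```python
# run_tests.py -- a hypothetical build step a CI server invokes on each check-in.
# It discovers every test module under ./tests and fails the build (non-zero exit)
# if even one test does not pass.
import sys
import unittest

def main():
    suite = unittest.defaultTestLoader.discover("tests")
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    # A non-zero exit code is the signal the CI server uses to mark the build as broken.
    sys.exit(0 if result.wasSuccessful() else 1)

if __name__ == "__main__":
    main()
```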

Your proposed developer pairing scenario doesn't need a new name. It's a very common practice of combining test driven development with pair programming - both key practices of Extreme Programming.

I'm not quite sure I understand your 'fox/henhouse' analogy, though. I don't know why a developer can't be expected to create unit tests. If a development team views test driven development as an important process to be followed in order to create a sustainable, well-factored, domain-focused codebase, the self-organized team will continuously police itself and enforce the agreed-upon development process.

Clearly automated tests aren't going to answer rather subjective questions like UI button positioning, page look and feel, and font size control. These things should be evaluated by the stakeholder throughout the development lifecycle and possibly even through some formal UAT (User Acceptance Testing) period.

To answer your question -- yes, user acceptance tests should definitely be automated. If the acceptance test passes, the requirement is satisfied.
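As a rough sketch of what I mean (the ordering story, the OrderingApp facade, and the data are all invented for illustration, not pulled from a real system), an automated acceptance test reads at the level of the user story rather than the level of a single class:

```python
import unittest

# Hypothetical application facade the acceptance test drives; in a real system this
# would be the same entry point the UI or service layer calls into.
class OrderingApp:
    def __init__(self):
        self._orders = []

    def place_order(self, customer, items):
        order_id = len(self._orders) + 1
        self._orders.append({"id": order_id, "customer": customer, "items": items})
        return order_id

    def order_status(self, order_id):
        return "RECEIVED" if any(o["id"] == order_id for o in self._orders) else "UNKNOWN"

class CustomerPlacesOrderStory(unittest.TestCase):
    """Story: a customer places an order and can see that it was received."""

    def test_customer_can_place_an_order_and_see_it_received(self):
        app = OrderingApp()
        order_id = app.place_order("ACME Corp", [{"sku": "WIDGET-1", "qty": 3}])
        self.assertEqual(app.order_status(order_id), "RECEIVED")

if __name__ == "__main__":
    unittest.main()
```

When a test like this goes green and stays green in the automated suite, the story it encodes is demonstrably satisfied on every build.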
Left by Blake Caraway on Nov 08, 2006 12:14 AM
