Okay, so recently I was working on a new request from my client.  It was a fairly simple one: create a snapshot of data to be used in monthly calculations, with the option to regenerate the snapshot at a later time.  The snapshot was easy.  I thought regenerating the snapshot would be easy too, but somehow I found a way to make it hard.

The first mistake I made was not writing a test first.  Really, this had a lot to do with laziness.  The data I am capturing is only available via a view into a proprietary database, so somehow I needed to figure out how to modify the data behind the view.  It seemed really hard, so I skipped it.

That decision came back to bite me.  I submitted the changes to the customer, and soon they reported that the feature wasn’t working.  Perhaps there was a little more than laziness here (perhaps arrogance).  I had manually tested my changes, but I didn’t cover every aspect.  As it turns out, the snapshot was being updated correctly, but the monthly calculations weren’t getting updated.

So now that I had a feature returned as a failure, I decided I had better write that test.  In fact, following TDD principles, I knew I should write a test that would fail because of the reported defect.  I started on the test, hit a roadblock, and was about to give up on it again.  I mean, this testing stuff is hard!

I chatted with a coworker to ask his advice on how to test a data change behind the view.  He provided a simple and elegant solution.  We set up a test data script that creates a “test” table duplicating the structure of the view.  Then we simply redirect the synonym from the view to the test table.  Now we essentially have a fake view, and since it is a table, we can manipulate the data to our hearts’ content.
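To make the trick concrete, here is a rough T-SQL sketch of what such a setup script might look like.  Every object name below (vendor.vw_MonthlyData, dbo.MonthlyData, dbo.MonthlyData_Fake, and the columns) is invented for illustration; the real schema is proprietary and different.

    -- Create an empty table with the same columns as the proprietary view.
    SELECT TOP 0 *
    INTO dbo.MonthlyData_Fake
    FROM vendor.vw_MonthlyData;

    -- Repoint the synonym the application queries from the view to the fake table.
    DROP SYNONYM dbo.MonthlyData;
    CREATE SYNONYM dbo.MonthlyData FOR dbo.MonthlyData_Fake;

    -- The test script can now stage whatever rows a scenario needs.
    INSERT INTO dbo.MonthlyData (AccountId, PeriodId, Amount)
    VALUES (42, 200908, 1234.56);

The nice part of this approach is that the production code keeps querying the same synonym name; only the test script knows it now points at a plain table it can fill with whatever data the scenario calls for.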

From this point I was able to continue writing my test.  Everything looked good.  I was sure to test all of the aspects of the new requirements that I could think of.  When I was done, I ran the tests and verified the defect.

Finally, I could fix the problem.  I figured out the cause and was able to fix it fairly easily.  Well, not so fast.  My test was still failing!  I spent hours on the issue.  I knew the fix was correct.  I tried all kinds of debugging approaches.  I even changed the code to force it to be wrong for a different reason.  Everything seemed to be fine, except my test was still failing.  I was beginning to think there was a bug in the test itself.

As it turns out, there was more than one problem with the original code.  The second issue was an update statement that joined to the snapshot data but was missing a critical condition in the join.  To uniquely identify a snapshot row, I needed to join on two fields, and I had forgotten one.  So the update was actually executing multiple times, and the last execution wasn’t the one I was expecting.
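For illustration, the shape of that bug looks something like the following T-SQL, again with invented table and column names (dbo.MonthlyCalculation, dbo.SnapshotData, AccountId, PeriodId):

    -- Buggy version: joining on AccountId alone matches more than one
    -- snapshot row per calculation row, so the value that sticks is
    -- effectively arbitrary.
    UPDATE c
    SET c.Amount = s.Amount
    FROM dbo.MonthlyCalculation AS c
    JOIN dbo.SnapshotData AS s
        ON s.AccountId = c.AccountId;       -- second join condition missing

    -- Fixed version: both fields are needed to uniquely identify the snapshot row.
    UPDATE c
    SET c.Amount = s.Amount
    FROM dbo.MonthlyCalculation AS c
    JOIN dbo.SnapshotData AS s
        ON s.AccountId = c.AccountId
       AND s.PeriodId  = c.PeriodId;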

Had I persisted in my laziness and simply added the fix and sent it to the customer, I’d have been embarrassed yet again.  But this time, the test saved me.  How many times do I have to learn this lesson?

posted on Tuesday, September 15, 2009 11:05 AM

Comments

# re: Don’t Be Lazy: Test First
posted by Irwin Quintana
on 10/7/2009 2:16 PM
I'm an IT professional with some knowledge of programming and database administration. In my experience, you're totally right: a test must not be skipped, because unexpected problems can arise.
In my country there is a saying: "the lazy man always does double the work".

Congratulations on your blog.
Regards from Mexico.
# re: Don’t Be Lazy: Test First
on 11/2/2009 11:51 AM
I think the most important thing to do is to ensure that no work is considered done until it is tested. Adequate testing will depend on what it is. Some items are just prohibitively expensive to test 100%.

Test first is ideal when you have a good idea of what you are testing. It ensures that you never write your test against the code but rather against a solid business case. However, I tend to believe that one should iterate over testing just like code. New requirements are usually discovered when building a product, and it doesn't make sense to assume that you are going to get all the feedback you need without spending some work on some form of implementation. So, tests change as understanding improves.

Even if you decide not to test first or can't for some reason (e.g. you are in a split-discipline group and the people responsible for the final tests are running behind, or perhaps you are just creating a really small throw-away prototype), always use TDD and related disciplines to analyze the problem. In general, try to drive everything from the business requirement down. Make sure higher-level scenarios are definitely covered and drill down to a level of detail sufficient for project size and maintainability. A component that will live forever needs to have its behavior specified and tested to a level of detail that a one-use component does not. A one-use component only has to work properly in a limited number of scenarios.
