What are you trying to prove?
Kyle Burns

I'm a big believer in the benefits of having automated tests live with your code.  I'm not an aspirant to Red-Green-Refactor Nirvana, but I believe there is huge value in taking the code from Button1_Click in the only Windows Forms application you've ever written and placing it instead into a method that can be exercised by a unit test framework.  Some of the benefits I glean from this: my test is preserved past the initial implementation (when you write the next feature, of course you're going to replace the code in Button1_Click to test what you're working on right now), which provides some level of regression testing; a clear expression of what the developer intended the code to do (improved even more by good naming of test methods); and working examples of how to make use of the API you've developed.  That last benefit can be broken down even further, in that a side effect of writing the tests is often noticing ways to make your API easier to use.
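To make that first move concrete, here's a minimal sketch (the names are illustrative, not from any real project) of pulling logic out of the click handler so a test can reach it:

    // before: the logic lives in the event handler and can only be
    // exercised by clicking the button
    private void Button1_Click(object sender, EventArgs e)
    {
        // ...validation, calculation, and data access all inline...
    }

    // after: the handler delegates to a method that a unit test can call directly
    private void Button1_Click(object sender, EventArgs e)
    {
        ProcessRequest(textBox1.Text);
    }

    public void ProcessRequest(string input)
    {
        // ...the same validation, calculation, and data access, now testable...
    }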

Now that I've established that I really like the idea of automated testing, let me get to the point of this post - don't bother writing unit tests if you're not going to use them to prove your code works.  When I'm familiarizing myself with a project, one of the first things I do after going through whatever high-level information is available is attempt to do a "get latest" from source control, build, and run the unit tests.  This gives me a good idea of what remains to be implemented - unless the tests don't actually prove anything.  Let's start with a glaringly obvious example:

    [TestMethod]
    public void VacationManagerShouldValidateUserHasAccruedRequestedTime()
    {
    }

It's very likely that the (fictional) author of this test was trying to be proactive and stubbed out all the tests that would be needed for the functionality they were working on, but when I run all the tests in the solution I see the name of the test method with a green "Passed" next to it.  Deciding in advance what tests you'll need is a great idea, and actually an approach that I recommend as a way to keep focus on progress against requirements, but instead of letting the absence of a failure condition make the test report as successful, the test could start out like this:

    [TestMethod]
    public void VacationManagerShouldValidateUserHasAccruedRequestedTime()
    {
        Assert.Inconclusive("functionality hasn't been written yet");
    }

Another similar option would be to throw NotImplementedException, but that doesn't identify quite as clearly that you just haven't gotten around to writing the test yet.
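For comparison, the throw-based variant might look like this (a sketch - any unhandled exception fails the test, which is why it reads like a broken test rather than a to-do):

    [TestMethod]
    public void VacationManagerShouldValidateUserHasAccruedRequestedTime()
    {
        // fails the run, but shows up as a defect rather than "not written yet"
        throw new NotImplementedException();
    }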


Another common problem with automated tests is that many do not actively check for the post-conditions that prove the call was successful.  Building on our fictional example, let's say the test author this time actually put some code in the test:

    [TestMethod]
    public void VacationManagerShouldValidateUserHasAccruedRequestedTime()
    {
        var target = new VacationManager();
        var employeeId = 12345;
        var beginDate = new DateTime(2011, 12, 21);
        var endDate = new DateTime(2011, 12, 23);
        target.SubmitRequest(employeeId, beginDate, endDate);
    }

In this case, context clues and experience tell me that the test author "knows" that if the employee does not have enough time accrued, the method will throw an exception, causing the test to fail.  I contend that unit testing is much more about what you can prove than about what you know.  If it were about what you know, you wouldn't have written the test in the first place, because you already know your code works.  Let's peek inside VacationManager and see what happened:

    public void SubmitRequest(int employeeId, DateTime beginDate, DateTime endDate)
    {
        //*****************************************************
        //*****************************************************
        // HISTORY:
        // 8/21/2010 initial code complete by Joe Developer
        //*****************************************************
        // 4/19/2011 updated by Maintenance Guy to fix issue
        // where user received timeout error message
        //*****************************************************
        //*****************************************************
        try
        {
            // ... some information gathering
            if (numDaysAvailable < numDaysRequested) throw new Exception("not enough time");
            // ... update the data and kick off approval workflow
        }
        catch (Exception e)
        {
            // don't show user the error (MG 4/19/2011)
        }
    }

Guess what - when Maintenance Guy runs the unit test suite after making his changes, the test will still pass.  Think back to when you were still hammering out code in Button1_Click.  One level of testing was to click the button and step through the code, but you (hopefully) also had a tool handy to look into the database and see that the record made it there and that all the fields were populated appropriately.  Take your own fingers and eyeballs out of the equation and you'll find that your automated test needs to do the functional equivalent.  This can be done either by directly querying the data store or, if available, by using the retrieval mechanisms of your API, which of course have their own tests to prove they are trustworthy.
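To make that concrete, here's a minimal sketch of the test actually proving something.  GetRequestsForEmployee is a hypothetical retrieval method (not part of the original example), and the Any call assumes a using System.Linq directive:

    [TestMethod]
    public void VacationManagerShouldValidateUserHasAccruedRequestedTime()
    {
        var target = new VacationManager();
        var employeeId = 12345;
        var beginDate = new DateTime(2011, 12, 21);
        var endDate = new DateTime(2011, 12, 23);

        target.SubmitRequest(employeeId, beginDate, endDate);

        // prove the post-condition rather than trusting that
        // "no exception" means the request was recorded
        var requests = target.GetRequestsForEmployee(employeeId); // hypothetical API
        Assert.IsTrue(
            requests.Any(r => r.BeginDate == beginDate && r.EndDate == endDate),
            "submitted request was not found");
    }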

Posted on Thursday, December 29, 2011 5:03 PM

