Testing Strategy
Dylan Smith

I believe in having 3 layers of testing: Unit Tests, Acceptance Tests, and Exploratory Testing.  Each of these layers is somewhat independent of the others, and each layer on its own essentially attempts to validate that the entire system works.  None of these layers alone is perfect, but having all three in place simultaneously makes the situation much better.

Unit Tests
Unit Tests
Unit tests are the tests that the programmer writes as he develops the software.  We develop using TDD, and the unit tests are generated automatically as part of that process.  Unit Tests should focus on small pieces of functionality, isolating them from other related functionality.  Typically I accomplish this by using mock objects to isolate a class or method from its dependencies, mocking out those dependencies.  If TDD is performed effectively, your Unit Tests should automatically provide a very high level of code coverage (90%+ as a rule of thumb).

E.g. I have an UpdateCustomer method I need to implement.  I will have a Unit Test for the "happy case", a unit test passing in a blank customer name (invalid data), a unit test passing in a duplicate customer name (invalid data), and unit tests for any other validation I need to "test-drive" into the implementation.  I will mock out the DatabaseLayer, and any other dependencies the UpdateCustomer method may use to perform validation or other logic.  A sketch of what those tests might look like is below.
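
Here's a rough sketch of what those unit tests might look like.  The type names (Customer, ICustomerRepository, CustomerService), the constructor-injected repository, and the use of NUnit with a hand-rolled fake rather than a mocking framework are all assumptions for illustration, not necessarily the exact code I write.

Imports System
Imports System.Collections.Generic
Imports NUnit.Framework

' Minimal domain types assumed for this sketch.
Public Class Customer
    Public Property Id As Integer
    Public Property Name As String
End Class

Public Interface ICustomerRepository
    Function NameExists(name As String) As Boolean
    Sub Save(customer As Customer)
End Interface

' Hand-rolled fake standing in for the real DatabaseLayer dependency.
Public Class FakeCustomerRepository
    Implements ICustomerRepository

    Public ExistingNames As New List(Of String)

    Public Function NameExists(name As String) As Boolean Implements ICustomerRepository.NameExists
        Return ExistingNames.Contains(name)
    End Function

    Public Sub Save(customer As Customer) Implements ICustomerRepository.Save
        ' No-op: these tests only care about validation logic, not persistence.
    End Sub
End Class

<TestFixture>
Public Class UpdateCustomerTests

    <Test>
    Public Sub UpdateCustomer_WithBlankName_ThrowsValidationError()
        ' CustomerService is the class under test; TDD drives its implementation
        ' (see the validation sketch later in the post).
        Dim service As New CustomerService(New FakeCustomerRepository())
        Dim blankName As New Customer With {.Id = 1, .Name = ""}

        Assert.Throws(Of ArgumentException)(Sub() service.UpdateCustomer(blankName))
    End Sub

    <Test>
    Public Sub UpdateCustomer_WithDuplicateName_ThrowsValidationError()
        Dim repository As New FakeCustomerRepository()
        repository.ExistingNames.Add("Acme Corp")
        Dim service As New CustomerService(repository)
        Dim duplicate As New Customer With {.Id = 2, .Name = "Acme Corp"}

        Assert.Throws(Of ArgumentException)(Sub() service.UpdateCustomer(duplicate))
    End Sub
End Class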

Acceptance Tests
Acceptance tests are higher-level tests that are written from a customer's perspective.  They are not concerned with isolating small pieces of functionality like unit tests; instead they test the whole system end-to-end.  If you ever develop "user stories" as part of your requirements (e.g. XP), or scripted tests to be manually executed by a human, then an acceptance test is basically an automated version of those.

E.g. For a customer maintenance application I may have an acceptance test that creates a new customer, updates a field on it, retrieves the customer and validates that the properties match what I expect, then does a delete and another retrieval to validate that the delete was successful.  A sketch of such a test is below.
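
As a rough illustration, an acceptance test along those lines might look something like the following.  CustomerServiceClient stands in for a generated web service proxy, and all of the method names are assumptions; the important part is that the test exercises the real middle-tier and database end-to-end, with nothing mocked out.

Imports NUnit.Framework

<TestFixture>
Public Class CustomerMaintenanceAcceptanceTests

    <Test>
    Public Sub CreateUpdateAndDeleteCustomer_RoundTripsThroughTheWholeSystem()
        ' Hypothetical proxy for the middle-tier web service; nothing is mocked.
        Dim client As New CustomerServiceClient()

        ' Create a new customer through the real middle-tier and database.
        Dim id As Integer = client.CreateCustomer("Acme Corp")

        ' Update a field, then read the customer back to verify the change stuck.
        client.UpdateCustomerName(id, "Acme Corporation")
        Dim updated As Customer = client.GetCustomer(id)
        Assert.AreEqual("Acme Corporation", updated.Name)

        ' Delete, then verify the customer can no longer be retrieved.
        client.DeleteCustomer(id)
        Assert.IsNull(client.GetCustomer(id))
    End Sub
End Class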

Exploratory Testing
This is testing that is done manually by a human, and is probably what most people are used to.  It does not use any test script; it is simply a person using the system and trying to discover new bugs.  In my experience a human is usually very good at identifying weak portions of the system, and then reasoning about where other weaknesses may exist based on what they've seen.  I use this in addition to Unit Testing and Acceptance Testing to help us find any bugs the automated tests may have missed.


Now that I've explained at a basic level what each of these test types is, let's take a look at how I use all three in concert.

Typically when I'm developing an application I'll start by designing what I think the UI should look like and how it should function (we currently do not write any automated tests directly against the UI layer).  This ensures I'm focusing first and foremost on the user experience and meeting the user's needs, since the UI is the only portion of the application the user sees or really cares about.  I'll draw up the UI and write the code that interacts with my business layer (typically a web service).  In the process of doing this I discover (aka design) what I want/expect the interface on the middle-tier to look like in order to make it easy for the UI to consume.

Obviously the UI won't compile yet, as the Web Methods I have been coding against and designing as I go don't exist yet in the middle-tier.  So the next step is to create skeleton methods in the middle-tier (Throw New NotImplementedException); just the bare minimum of code needed to get everything to compile (see the sketch below).  At this point I'll usually check my code into source control, since it is in a compilable state with no failing tests (I haven't written any new tests yet - that's next).
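
A skeleton middle-tier at this stage might look something like this.  The service and method names are assumptions; the only point is that everything compiles and every method throws until TDD fills it in.

Imports System
Imports System.Web.Services

' Bare-minimum skeleton: just enough for the UI project to compile.
Public Class CustomerService
    Inherits WebService

    <WebMethod()>
    Public Function CreateCustomer(ByVal name As String) As Integer
        Throw New NotImplementedException()
    End Function

    <WebMethod()>
    Public Sub UpdateCustomerName(ByVal id As Integer, ByVal name As String)
        Throw New NotImplementedException()
    End Sub

    <WebMethod()>
    Public Function GetCustomer(ByVal id As Integer) As Customer
        ' Customer is the same hypothetical domain type used in the earlier sketch.
        Throw New NotImplementedException()
    End Function
End Class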

My next step is to write the first Acceptance Test.  This will exercise the web methods on the middle-tier that we consumed while coding the UI.  Once I get a reasonable Acceptance Test written, it should compile and fail (since all the web methods are currently throwing NotImplementedExceptions).  This Acceptance Test represents the first "user story" that I'm going to focus on implementing.  Now that I have a goal (getting the Acceptance Test to pass) I'll start implementing the middle-tier methods using TDD, writing unit tests and implementations as I go.

I'll keep using TDD to drive out the implementation necessary to get the Acceptance Test to pass.  Eventually I'll have enough implemented that the Acceptance Test passes.  At that point I should have one passing Acceptance Test and quite a few passing unit tests as a result of the TDD, so I'll check my code into source control (a good time to do it, since all tests are passing).

The next step is to write another Acceptance Test exercising some functionality that the first Acceptance Test didn't cover.  Then I'll TDD the implementation required to get this Acceptance Test to pass, then check in to source control.  This process repeats until I'm satisfied that I have enough Acceptance Tests to cover the functionality required.

When I use TDD to drive the implementation I stick to the rule that I only implement strictly what is required to make the test pass.  If I feel there's more logic needed in the implementation, I won't write it until I first have a unit test to validate it.  However, when I'm using TDD to drive the implementation of a specific Acceptance Test, I do not stick strictly to what I need to make the Acceptance Test pass.  For instance, let's say I have an Acceptance Test that calls the UpdateCustomer() web method.  While I'm TDD'ing the implementation of that UpdateCustomer web method I'll create unit tests to drive out all the functionality I believe is required in that method.  For example, I may have a handful of unit tests that just call UpdateCustomer() passing in various types of invalid data.  These will drive the implementation of the validation logic in UpdateCustomer (see the sketch below).  I write these unit tests and that implementation even if the logic isn't specifically exercised by the Acceptance Test I'm currently trying to make pass.
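
Here's a sketch of the kind of validation logic those unit tests drive out.  As with the earlier sketches, the names and the constructor-injected ICustomerRepository are assumptions, and for simplicity the logic is shown as a plain class rather than wired into the actual web service.

Imports System

Public Class CustomerService
    Private ReadOnly _repository As ICustomerRepository

    Public Sub New(ByVal repository As ICustomerRepository)
        _repository = repository
    End Sub

    Public Sub UpdateCustomer(ByVal customer As Customer)
        ' Each rule below exists because a unit test demanded it.
        If String.IsNullOrEmpty(customer.Name) Then
            Throw New ArgumentException("Customer name is required.")
        End If

        ' A fuller check would exclude the customer's own record when looking for duplicates.
        If _repository.NameExists(customer.Name) Then
            Throw New ArgumentException("Customer name must be unique.")
        End If

        _repository.Save(customer)
    End Sub
End Class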

Once all the Acceptance Tests I feel are necessary are written and passing, I should be done implementing everything for this portion of the UI as far as I'm concerned.  Now it's time for some exploratory testing.  I will typically perform at least a little bit of exploratory testing myself, just to ensure that the UI works properly and that everything I needed to implement is in fact completed.  Then somebody with fresh eyes should do some more exploratory testing (ideally a dedicated tester, in conjunction with the end users).

Any time a bug is found during exploratory testing, it indicates that one or more automated tests are missing from the test suite.  There should be at least one unit test that demonstrates the bug, and depending on the nature of the bug you may wish to write an acceptance test in addition to the unit test (see the example below).
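
For example (a purely hypothetical bug), suppose exploratory testing reveals that a customer name consisting only of spaces slips past validation.  The first step would be a unit test like the one below, added to the UpdateCustomerTests fixture sketched earlier, that fails until the validation logic is tightened up.

<Test>
Public Sub UpdateCustomer_WithWhitespaceOnlyName_ThrowsValidationError()
    Dim service As New CustomerService(New FakeCustomerRepository())
    Dim whitespaceName As New Customer With {.Id = 3, .Name = "   "}

    ' Fails against the sketch above until the blank-name check also trims whitespace.
    Assert.Throws(Of ArgumentException)(Sub() service.UpdateCustomer(whitespaceName))
End Sub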

Another thing I would note: if at any time during the implementation or maintenance of the application you wind up with a failing acceptance test (that was previously passing) but no failing unit tests, the first step should be to write a failing unit test, make that unit test pass, then check whether the acceptance test now passes.  If not, write another unit test and repeat.

I feel that having these three layers of testing provides the safety net that enables pain-free refactoring, along with a strong focus on quality code.  Unit Tests take care of testing the basic pieces of logic in isolation, Acceptance Tests ensure that all of that logic integrates properly and does in fact work together as a complete system, and Exploratory Testing helps catch the bugs and logic that the developer forgot about while writing the unit and acceptance tests.

Let me know what type of testing strategy you follow.  Do you think this sounds like too much effort?  If so, what would you do differently?

Posted on Thursday, February 1, 2007 10:28 AM


Comments on this post: Testing Strategy

# re: Testing Strategy
Please, can you help me with what is meant by unit testing strategies?
Thanks
JUDE
Left by jude on Dec 28, 2007 7:33 AM
