During the last months, I was (for reasons unrelated to programming) working for a dev shop where software development was done the 'traditional' way: You just write your application's production code, do it as well as you can, and hope that it will be good enough and that no problems will occur in production (this is HDD: Hope-driven development).
Anyway, after I had finished this contract, I felt the need to review some core aspects of what I think is Test-driven development. And while I was doing this, I noticed that some principles (or dogmas, if you prefer) of TDD - you may read them in books or they might be presented to you in some other way when you're learning about TDD - just don't make sense to me anymore. This post discusses some of the more prominent things in TDD that I don't buy...
#1: "A not compiling test is a failing test"
Huh? Sorry, but no.
A test that doesn't compile is nothing but defective code that is not accepted by the compiler. And a compiler error has exactly nothing to do with the TDD process – it doesn't tell me anything relevant other than that my code is faulty.
Suppose, for example, that you have the following test:
public void SomeTest()
{
    var someClass = new SomeClass();
    int result = someClass.ReturnSomeValue(8);
    Assert.AreEqual(42, result);
}
If you try to compile this without having a skeleton of SomeClass and ReturnSomeValue() in place, then the C# compiler will give you something like this: error CS0246: The type or namespace name 'SomeClass' could not be found (are you missing a using directive or an assembly reference?)
What does it say? Well, it tells you exactly one thing: You're trying to reference a symbol that the compiler cannot resolve. Nothing more. The error could come from anywhere in your code; there's nothing about it that makes it somehow specific to testing. So how could one conclude from this error message that it is (or at least somehow refers to) a failing test?
To really have a failing test, and not just a compiler error, you need to have the following in place:
public class SomeClass
{
    public int ReturnSomeValue(int val)
    {
        throw new NotImplementedException();
    }
}
If you then run the test again, it will compile, and the test runner will report a failure caused by the thrown exception: System.NotImplementedException: The method or operation is not implemented.
This is what I'd call a meaningful test outcome: The test actually does execute, but the method under test does not (yet) expose the expected behavior.
In short: It makes sense to have empty member skeletons declared before you write any tests. This doesn't contradict the intention of Test-driven development in any way, because it remains true that you always start with a failing test (it's just that the test now fails for the correct reason)...
Apart from this, there's a second argument, which relates not so much to TDD theory as to a C# developer's everyday working experience: Developers are more productive and make fewer errors when they have decent IntelliSense support at hand. And IntelliSense can only work if at least an empty method body exists and is accessible.
#2: "Only test the public members of a class"
This statement may be viable for a more BDD-based approach and/or for integration-style tests, but for TDD it doesn't make sense to me: If the important functionality of a class is encapsulated in a non-public method, why should it then be a problem to write tests for this method? After all, testing (and especially TDD) is about functionality, not about visibility.
Consider, for example, the following 'production code':
public int DoSomeComplicatedCalculation(int arg1, int arg2)
{
    int temp = DoSomeComplexOperation(arg1, arg2);
    temp = DoAnotherComplexOperation(temp, arg1, arg2);
    return DoOneMoreComplexOperation(temp, arg1, arg2);
}
Nothing unusual here: We have a public method, which exposes some functionality to the outside world, and internally delegates the task to some helper methods. Each of them does its own part of the job, and the calling method is responsible for orchestrating the calls, combining the partial results in a meaningful way, and finally returning the end result to the caller. So far, this is standard .NET programming, often seen and quite trivial.
But if you want to develop such code in a test-driven way and stick to the 'Test only public members' dogma (and of course you don't want to make everything public), then the only test that you could ever write would be of this form:
[Test]
public void DoSomeComplicatedCalculation_WhenGivenSomeValues_ReturnsExpectedResult()
{
    int result = new Calculator().DoSomeComplicatedCalculation(1, 2);
    Assert.AreEqual(3, result, "Calculation does not yield the expected result.");
}
Let's be honest: Would that be enough for you to develop correct and robust implementations of DoSomeComplexOperation(), DoAnotherComplexOperation(), and DoOneMoreComplexOperation(), when these helper methods themselves have to perform quite complicated operations? And would you be sure that you've covered all relevant corner cases? Well, then you're a better programmer than I am (and probably also better than the overwhelming majority of all the other developers out there)...
Because I'm not so enlightened, I need to write quite a bit more test code to make sure that the production code is of high quality and (at least to the best of my knowledge and skills) error-free. Ideally, I will do this:
- Make the non-public methods accessible to the test assembly by declaring them as internal and granting the test assembly access via the InternalsVisibleTo attribute.
- Write some data-driven tests against these internal methods, covering all possible corner cases.
- Write an interaction-based test against the public method, to make sure that it is orchestrating the internal methods (that perform the actual calculation) as intended.
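Under these assumptions, the production class might look like this (a sketch only – the helper bodies are placeholders, and 'Calculator.Test' is a hypothetical test assembly name; what matters here are the visibility declarations):

```csharp
using System.Runtime.CompilerServices;

// In the production assembly's AssemblyInfo.cs:
// grant the (hypothetically named) test assembly access to internal members.
[assembly: InternalsVisibleTo("Calculator.Test")]

public class Calculator
{
    public int DoSomeComplicatedCalculation(int arg1, int arg2)
    {
        int temp = DoSomeComplexOperation(arg1, arg2);
        temp = DoAnotherComplexOperation(temp, arg1, arg2);
        return DoOneMoreComplexOperation(temp, arg1, arg2);
    }

    // Declared internal (not private), so the test assembly can reach them.
    internal int DoSomeComplexOperation(int arg1, int arg2)
    {
        return 0; // placeholder body; the real calculation is irrelevant here
    }

    internal int DoAnotherComplexOperation(int temp, int arg1, int arg2)
    {
        return 0; // placeholder body
    }

    internal int DoOneMoreComplexOperation(int temp, int arg1, int arg2)
    {
        return 0; // placeholder body
    }
}
```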
A typical test fixture of this kind then could be similar to this (using the Gallio framework for the tests and Typemock Isolator to verify method interactions):
public class CalculatorFixture
{
    [Test]
    [Row(1, 2, -1)]
    [Row(11, 2, 9)]
    [Row(100, 100, 0)]
    [Row(int.MaxValue, int.MinValue, -1)]
    [Row(int.MinValue, int.MinValue, 0)]
    public void DoSomeComplexOperation_ReturnsExpectedResult(int arg1, int arg2, int expectedResult)
    {
        int result = new Calculator().DoSomeComplexOperation(arg1, arg2);
        Assert.AreEqual(expectedResult, result, "'DoSomeComplexOperation()' does not yield the expected result.");
    }

    [Test]
    [Row(1, 2, 3, 1)]
    [Row(0, 0, 123456, 0)]
    [Row(-1, 1, 3, 0)]
    [Row(8, -3, 4, 1)]
    [Row(1, 1, 0, 999999, ExpectedException = typeof(DivideByZeroException))]
    public void DoAnotherComplexOperation_ReturnsExpectedResult(int temp, int arg1, int arg2, int expectedResult)
    {
        int result = new Calculator().DoAnotherComplexOperation(temp, arg1, arg2);
        Assert.AreEqual(expectedResult, result, "'DoAnotherComplexOperation()' does not yield the expected result.");
    }

    [Test]
    [Row(0, 0, 0, 0)]
    [Row(0, 567, -567, 567 * 2)]
    [Row(1, 2, 2, 1)]
    public void DoOneMoreComplexOperation_ReturnsExpectedResult(int temp, int arg1, int arg2, int expectedResult)
    {
        int result = new Calculator().DoOneMoreComplexOperation(temp, arg1, arg2);
        Assert.AreEqual(expectedResult, result, "'DoOneMoreComplexOperation()' does not yield the expected result.");
    }

    [Test, Isolated, TestsOn("DoSomeComplicatedCalculation")]
    public void DoSomeComplicatedCalculation_VerifiesTheIntendedInteractions()
    {
        const int resultFromDoSomeComplexOperation = -999;
        const int resultFromDoAnotherComplexOperation = 1234;
        const int arg1 = -1;
        const int arg2 = 42;

        // Fake the Calculator, let the public method run for real, and
        // stub out the internal helpers with fixed return values.
        var calculator = Isolate.Fake.Instance<Calculator>();
        Isolate.WhenCalled(() => calculator.DoSomeComplicatedCalculation(0, 0)).CallOriginal();
        Isolate.WhenCalled(() => calculator.DoAnotherComplexOperation(0, 0, 0)).WillReturn(resultFromDoAnotherComplexOperation);
        Isolate.WhenCalled(() => calculator.DoSomeComplexOperation(0, 0)).WillReturn(resultFromDoSomeComplexOperation);

        int result = calculator.DoSomeComplicatedCalculation(arg1, arg2);

        // Verify that the public method orchestrates the helpers as intended.
        Isolate.Verify.WasCalledWithExactArguments(() => calculator.DoSomeComplexOperation(arg1, arg2));
        Isolate.Verify.WasCalledWithExactArguments(() => calculator.DoAnotherComplexOperation(resultFromDoSomeComplexOperation, arg1, arg2));
        Isolate.Verify.WasCalledWithExactArguments(() => calculator.DoOneMoreComplexOperation(resultFromDoAnotherComplexOperation, arg1, arg2));
    }
}
(Needless to say, the above represents an ideal case, which you cannot always fully achieve in the real (business) world, where all kinds of other factors come into play: timelines, lack of resources, etc.)
A practical note: Because I consider access to internal members an everyday standard in my development practice, I have a naming convention in place in my projects (test assemblies are named after the assembly they are targeting, followed by an additional .Test), and I have a corresponding R# live template which generates the required InternalsVisibleTo statement for me.
Don't get me wrong: I'm not saying that the above is necessarily preferable or better than anything else – it totally depends on the concrete project and the testing strategy that you apply to it. I'm only stating that testing non-public members is a perfectly valid testing strategy in its own right. And in some situations, it might be the only one – namely, if you're consistently doing TDD (never write a single line of code when there's no test for it!) and at the same time want to keep your class's publicly accessible interface as small and clean as possible.
#3: "Never change production code only to make it testable"
As I pointed out in another post, software maintenance makes up by far the biggest portion of a software system's total lifecycle costs, and effective tests help to significantly lower these costs. So if testing can have such a massive positive impact on a company's budget, why would I then categorically exclude this option only to stick to some theoretical principle?
Of course, as with any other methodology, you have to apply it wisely. The primary goal of production code is to mirror its business domain as closely as possible, and it's a very bad idea to shape it primarily after a developer's testing skills. But usually there is more than one way to skin a cat, and some of them make testing easier, while others make it harder.
Testability is an important non-functional property of a codebase, so it should not be handled differently than other non-functional requirements. If there's a strong enough reason to change the code accordingly, then just do it.
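A classic example of such a change (a hypothetical illustration, not code from this post): a method that reads DateTime.Now directly is hard to test deterministically, so you introduce a small seam into the production code purely for testability's sake:

```csharp
using System;

// Hypothetical testability seam: an injectable clock abstraction.
public interface IClock
{
    DateTime Now { get; }
}

// The implementation used in production simply delegates to the system clock.
public class SystemClock : IClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}

public class InvoiceService
{
    private readonly IClock _clock;

    // Production code passes a SystemClock; a test passes a fake
    // returning a fixed point in time.
    public InvoiceService(IClock clock)
    {
        _clock = clock;
    }

    public bool IsOverdue(DateTime dueDate)
    {
        return _clock.Now > dueDate;
    }
}
```

The only reason for the IClock indirection is testability, yet the production code is arguably no worse for it – which is exactly the point.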
#4: "TDD is a software design method"
Not really, or at least only to some extent.
It's true that developing your code the TDD way will automatically lead you to well-designed code at the micro level – it's simply not possible to write meaningful tests for spaghetti code or otherwise poorly structured code.
But this effect doesn't extend to the system level. You need to have a clear idea of things like class structure, assembly partitioning, and code layering before you can write the first line of code. And for this, you need good knowledge of software design (design patterns, S.O.L.I.D. principles, and the like...).
Only then – when your draft is well shaped and of generally high quality upfront – will TDD help you explore and hammer out the implementation details, and safely lead you down the second half of the road.
I think these aspects are to a large extent driven by the view that TDD and software testing are some kind of exotic (or at least somehow special) activity, which should not interfere with the 'real' code. This prejudice often goes unnoticed and is implicitly accepted, because it is buried deep at the very heart of Test-driven development. But as soon as you radically change your point of view and consider TDD/testing the normal and preferable way of writing software (whereas writing software without tests is the exceptional case), some of the 'truths' around 'Textbook-TDD' just don't make much sense anymore...