Some myths of 'Textbook-TDD', and why they are wrong
// Thomas Weller – C#/.NET software development, software integrity, life as a freelancer, and all the rest

During the last few months, I was (for reasons not related to programming) working for a dev shop where software development was done the 'traditional' way: You just write your application's production code, do it as well as you can, and hopefully it will be good enough and no problems will occur in production (this is HDD: Hope-driven development).

Anyway, after I had finished this contract, I felt the need to review some core aspects of what I think Test-driven development is. And while I was doing this, I noticed that some principles (or dogmas, if you prefer) of TDD - you may read them in books, or they might be presented to you in some other way when you're learning about TDD - just don't make sense to me anymore. This post discusses some of the more prominent things in TDD that I don't buy...

#1: "A not compiling test is a failing test"

Huh? Sorry, but no.

A test that doesn't compile is nothing but defective code that is not accepted by the compiler. And a compiler error has exactly nothing to do with the TDD process – it doesn't tell me anything relevant, other than that my code is faulty.

Suppose, for example, that you have the following test:

[Test]
public void SomeTest()
{
    var someClass = new SomeClass();
    int result = someClass.ReturnSomeValue(8);
    Assert.AreEqual(8, result);
}

If you try to compile this without having a skeleton of SomeClass and ReturnSomeValue() in place, then the C# compiler will give you this:

[Screenshot: compiler error CS0246 – "The type or namespace name 'SomeClass' could not be found (are you missing a using directive or an assembly reference?)"]

What does it say? Well, it tells you exactly one thing: You're trying to reference a symbol that the compiler cannot resolve. Nothing more. The error could come from anywhere in your code; there's nothing about it that makes it somehow specific to testing. So how could one conclude from this error message that it is (or at least somehow refers to) a failing test?

To really have a failing test, and not just a compiler error, you need to have the following in place:

public class SomeClass
{
    public int ReturnSomeValue(int val)
    {
        throw new NotImplementedException();
    }
}

If you then run the test again, it will compile and give you something like this:

[Screenshot: test runner output – SomeTest fails with a System.NotImplementedException]

This is what I'd call a meaningful test outcome: The test actually does execute, but the method under test does not (yet) expose the expected behavior.

In short: It makes sense to have empty member skeletons declared before you write any tests. This doesn't contradict the intention of Test-driven development in any way, because it remains true that you always start with a failing test (it's just that it now 'fails' for the correct reason)...

Apart from this, there's a second argument, which does not relate directly to TDD theory, but more to a C# developer's everyday working experience: A developer is more productive and makes fewer errors with decent IntelliSense support at hand. And there can only be IntelliSense if at least an empty method body exists and is accessible.

#2: "Only test the public members of a class"

Why?

This statement may be viable for a more BDD-based approach and/or for integration-style tests, but for TDD it doesn't make sense to me: If the important functionality of a class is encapsulated in a non-public method, then how could it be a problem to write tests for this method? After all, testing (and especially TDD) is about functionality, not about visibility.

Consider, for example, the following 'production code':

public int DoSomeComplicatedCalculation(int arg1, int arg2)
{
    int temp = DoSomeComplexOperation(arg1, arg2);
    temp = DoAnotherComplexOperation(temp, arg1, arg2);
    return DoOneMoreComplexOperation(temp, arg1, arg2);
}

Nothing unusual here: We have a public method, which exposes some functionality to the outside world, and internally delegates the task to some helper methods. Each of them does its own part of the job, and the calling method is responsible for orchestrating the calls, combining the partial results in a meaningful way, and finally returning the end result to the caller. So far, this is standard .NET programming, often seen and quite trivial.

But if you want to develop such code in a test-driven way and stick to the 'Test only public members' dogma (and of course you don't want to make everything public), then the only test that you could ever write would be of this form:

[Test]
public void DoSomeComplicatedCalculation_WhenGivenSomeValues_ReturnsExpectedResult()
{
    int result = new Calculator().DoSomeComplicatedCalculation(1, 2); 
    Assert.AreEqual(3, result, "Calculation does not yield the expected result.");
}

Let's be honest: Would that be enough for you to develop correct and robust implementations of DoSomeComplexOperation(), DoAnotherComplexOperation(), and DoOneMoreComplexOperation(), when these helper methods themselves have to perform quite complicated operations? And would you be sure that you've covered all relevant corner cases? Well, then you're a better programmer than me (and possibly also better than the overwhelming majority of all the other developers out there)...

Because I'm not so enlightened, I need to write quite a bit more test code to make sure that the production code is of high quality and (at least to my knowledge and skills) error-free. Ideally, I will do this:

  • Make the non-public methods accessible to the test assembly by declaring them as internal and giving the test assembly access rights via the InternalsVisibleTo attribute (see the sketch after this list).
  • Write some data-driven tests against these internal methods, covering all possible corner cases.
  • Write an interaction-based test against the public method, to make sure that it is orchestrating the internal methods (that perform the actual calculation) as intended.
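
For illustration, here's a minimal sketch of the first point. The assembly name 'Calculator.Test' follows the naming convention described further below; the helper method signatures are assumptions, since their implementations aren't shown in this post:

// In the production assembly's AssemblyInfo.cs - grants the test assembly
// access to all internal members:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("Calculator.Test")]

// In the Calculator class, the helper methods are then declared
// internal instead of private:
internal int DoSomeComplexOperation(int arg1, int arg2) { /* ... */ }
internal int DoAnotherComplexOperation(int temp, int arg1, int arg2) { /* ... */ }
internal int DoOneMoreComplexOperation(int temp, int arg1, int arg2) { /* ... */ }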

A typical test fixture of this kind could then look similar to this (using the Gallio framework for the tests and Typemock Isolator to verify method interactions):

[TestFixture, TestsOn(typeof(Calculator))]
public class CalculatorFixture
{
    [Test, TestsOn("DoSomeComplexOperation")]
    [Row(1, 2, -1)]
    [Row(11, 2, 9)]
    [Row(100, 100, 0)]
    [Row(int.MaxValue, int.MinValue, -1)]
    [Row(int.MinValue, int.MinValue, 0)]
    public void DoSomeComplexOperation_ReturnsExpectedResult(int arg1, int arg2, int expectedResult)
    {
        int result = new Calculator().DoSomeComplexOperation(arg1, arg2);
        Assert.AreEqual(expectedResult, result, "'DoSomeComplexOperation()' does not yield the expected result.");
    }

    [Test, TestsOn("DoAnotherComplexOperation")]
    [Row(1, 2, 3, 1)]
    [Row(0, 0, 123456, 0)]
    [Row(-1, 1, 3, 0)]
    [Row(8, -3, 4, 1)]
    [Row(1, 1, 0, 999999, ExpectedException = typeof(DivideByZeroException))]
    public void DoAnotherComplexOperation_ReturnsExpectedResult(int temp, int arg1, int arg2, int expectedResult)
    {
        int result = new Calculator().DoAnotherComplexOperation(temp, arg1, arg2);
        Assert.AreEqual(expectedResult, result, "'DoAnotherComplexOperation()' does not yield the expected result.");
    }

    [Test, TestsOn("DoOneMoreComplexOperation")]
    [Row(0, 0, 0, 0)]
    [Row(0, 567, -567, 567 * 2)]
    [Row(1, 2, 2, 1)]
    public void DoOneMoreComplexOperation_ReturnsExpectedResult(int temp, int arg1, int arg2, int expectedResult)
    {
        int result = new Calculator().DoOneMoreComplexOperation(temp, arg1, arg2);
        Assert.AreEqual(expectedResult, result, "'DoOneMoreComplexOperation()' does not yield the expected result.");
    }

    [Test, Isolated, TestsOn("DoSomeComplicatedCalculation")]
    public void DoSomeComplicatedCalculation_VerifiesTheIntendedInteractions()
    {
        // Arrange
        const int resultFromDoSomeComplexOperation = -999;
        const int resultFromDoAnotherComplexOperation = 1234;
        const int arg1 = -1;
        const int arg2 = 42;

        var calculator = Isolate.Fake.Instance<Calculator>();
        Isolate.WhenCalled(() => calculator.DoSomeComplicatedCalculation(0, 0))
               .CallOriginal();
        Isolate.WhenCalled(() => calculator.DoAnotherComplexOperation(0, 0, 0))
               .WillReturn(resultFromDoAnotherComplexOperation);
        Isolate.WhenCalled(() => calculator.DoSomeComplexOperation(0, 0))
               .WillReturn(resultFromDoSomeComplexOperation);

        // Act
        int result = calculator.DoSomeComplicatedCalculation(arg1, arg2);

        // Assert
        Isolate.Verify.WasCalledWithExactArguments(() => calculator.DoSomeComplexOperation(arg1, arg2));
        Isolate.Verify.WasCalledWithExactArguments(() => calculator.DoAnotherComplexOperation(resultFromDoSomeComplexOperation, arg1, arg2));
        Isolate.Verify.WasCalledWithExactArguments(() => calculator.DoOneMoreComplexOperation(resultFromDoAnotherComplexOperation, arg1, arg2));
    }
}

- Needless to say, the above represents an ideal case, which you cannot always fully achieve in the real (business) world, where there may be all kinds of other factors in operation (timelines, lack of resources etc.)... -

A practical note: Because I consider access to internal members an everyday standard in my development practice, I have a naming convention in place in my projects (test assemblies are named after the assembly they are targeting, followed by an additional .Test), and I have a corresponding R# live template which generates the required InternalsVisibleTo statement for me.

Don't get me wrong: I'm not saying that the above is necessarily preferable or somehow better than anything else – it totally depends on the concrete project and the testing strategy that you apply to it. I'm only stating that Testing Non-Public Members is a perfectly valid testing strategy in its own right. And in some situations, it might be the only one – namely, if you're consistently doing TDD (never write a single line of code when there's no test for it!) and at the same time you want to keep your class' publicly accessible interface as small and clean as possible.

#3: "Never change production code only to make it testable"

Why not?

As I pointed out in another post, software maintenance makes up by far the biggest portion of a software system's total lifecycle costs, and effective tests help to significantly lower these costs. So if testing can have such a massive positive impact on a company's budget, why would I then categorically exclude this option, only to stick to some theoretical principles?

Of course, as with any other methodology, you have to apply it wisely. The primary goal of production code is to mimic its underlying business domain as closely as possible, and it's a very bad idea to shape it primarily after a developer's testing skills. But usually there is more than one way to skin a cat, and some of them make testing easier, while others make it harder.

Testability is an important non-functional property of a codebase, so it should not be handled differently than other non-functional requirements. If there's a strong enough reason to change the code accordingly, then just do it.
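
To make this a bit more concrete, here is a hypothetical example of such a change (not from the post; all the names are made up): replacing a direct DateTime.Now call with an injectable clock abstraction. The production code changes only to make time-dependent behavior testable:

using System;

public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}

public class InvoiceService
{
    private readonly IClock _clock;

    public InvoiceService(IClock clock)
    {
        _clock = clock;
    }

    public bool IsOverdue(DateTime dueDate)
    {
        // A test can now inject a fake IClock with a fixed time,
        // instead of depending on the machine's real clock.
        return _clock.Now > dueDate;
    }
}

A test would simply instantiate InvoiceService with a stub clock returning a fixed DateTime, making the test fully deterministic.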

#4: "TDD is a software design method"

Not really, or at least only to some extent.

It's true that developing your code the TDD way will automatically lead you to well-designed code at the micro level – it's simply not possible to write meaningful tests for spaghetti code, or otherwise poorly shaped code.

But this effect doesn't apply to the system level. You need to have a clear idea of things like class structure, assembly partitioning, or code layering before you write the first line of code. And to do this, you need to have good knowledge of software design (design patterns, S.O.L.I.D. principles, and the like...).

Only then – when your draft is well shaped and generally of high quality upfront – will TDD help you to explore and hammer out the implementation details and will safely lead you down the second half of the road.

So what?

I think these aspects are to a large extent driven by the view that TDD and software testing are some kind of exotic (or at least somehow special) activity, which should not interfere with the 'real' code. This prejudice often is implicitly accepted and goes unnoticed, because it is buried deep at the very heart of Test-driven development. But as soon as you radically change your point of view and consider TDD/testing the normal and preferable way of writing software (whereas writing software without tests is the exceptional case), some of the 'truths' around 'Textbook-TDD' just don't make much sense anymore...

Posted on Monday, October 31, 2011 1:44 PM | Unit Testing/TDD, Architecture and Design, General programming/C#


Comments on this post: Some myths of 'Textbook-TDD', and why they are wrong

# re: Some myths of 'Textbook-TDD', and why they are wrong
Let's say you'd like to take @coreyhaines' advice: "2 parameters on a method implies a) missing abstraction or b) too much work done by method!" and refactor away from "arg1, arg2".

That would cause a lot of pain due to fragile tests IMHO.
Left by Martin R-L on Nov 01, 2011 2:24 PM

# re: Some myths of 'Textbook-TDD', and why they are wrong
Excellent post. I'm relatively new to testing, as "HDD" has been the norm at my current shop. Fortunately we're getting into unit/integration testing now. I've been introduced to TDD several times and it took a while to really "get it." It's nice to see a post where we stand back and question what we're doing rather than blindly drinking the magical Kool-aid -- even though it's really tasty.
Left by Geoff Mazeroff on Nov 01, 2011 9:36 PM

# re: Some myths of 'Textbook-TDD', and why they are wrong
#1. When your "SomeTest" code doesn't compile - it sure isn't passing. What you have done is a tiny design decision.
Throwing NotImplementedException is not valid for a failing test. The test should fail for a valid reason. Let's say you forgot to write the assert or screwed up the assert somehow - you won't catch that with the production code throwing a NotImplementedException.
#2. Only test public members. Yes, you should only test public methods! If the code is implemented with TDD, that is, writing the test first, then the private methods are covered by tests.
The private method should be a result of a refactoring of the public method - the refactor part in red-green-refactor, which is vital in TDD.
Arguments for testing private (or internal) code are usually brought up when code is written in test-after style.
#4. You still need design skills to write well-designed code with TDD. TDD will assist you with designing the code. A fool with a tool is still a fool :-)
Left by Morgan on Nov 01, 2011 11:06 PM

# re: Some myths of 'Textbook-TDD', and why they are wrong
Hi there,

thanks for the responses.

@Martin R-L:
2 parameters are already too much for a method? Really? What about things like mathematical operations (addition, multiplication etc.)? Would you also consider them to do 'too much work'? While it's of course true that many parameters on a method do indicate a problem, I think going as far as saying '2 is too much' is overkill and is fairly counterproductive...

@Geoff Mazeroff:
Thanks for the kind words. I think it's necessary to constantly question the tenets of our profession if we want it to stay alive and become better over time.

@Morgan:
#1: So you would consider it a 'valid reason' for a test to fail if it does not even run? Sorry, but this simply is a logically wrong statement. All you're doing here is drawing the conclusion (from the context, not from the code!) that the test might not even run if it compiled. This is nothing but guesswork, and it could introduce errors (although this surely is not very likely at this point).
#2: Of course everything is covered when writing only one top level test against the public method. But this is not what I'm talking about. TDD is meant to drive your coding. And how could you ever write the helper methods in baby steps when you have no tests that relate to them? You can't. You would write the helper methods and you will work on them until the top level test succeeds. Sure, you will have 100% code coverage (which doesn't mean too much, btw.), but you haven't done Test-driven development.
#4: That's exactly what I'm saying :-)...

- Thomas
Left by Thomas Weller on Nov 02, 2011 6:16 AM

# re: Some myths of 'Textbook-TDD', and why they are wrong
@Morgan
A "Not-Implemented Exception" sure is a valid reason for a test to fail - you've tested that the code under test is exercising the currently not-implemented code, and proved it.

@Thomas, re #2, I disagree with testing private / internal methods, simply because this is /never/ how your production code will interact with your class. You're testing, and passing tests, based on code running in isolation that will never happen in production.

The solution to your example where you fork out to the three private methods is of course to test these methods indirectly, and if you worry that there's too much logic in these private methods to get full test coverage, then you're either:

a) Not practising TDD thoroughly enough
b) Testing legacy code (fair enough, no way around this!)
c) Looking at a candidate for a new class with public methods corresponding to your currently internal methods.

Russ
Left by Russell Allen on Nov 02, 2011 11:09 AM

# re: Some myths of 'Textbook-TDD', and why they are wrong
Russ,

interesting point. It's true that the tests from the example don't execute the code in a way that could ever happen in production. But I'm usually not thinking about this when implementing the _details_ of a possibly large and complicated algorithm. This seems to me more like a higher level, BDD-like view - which of course is also necessary, but intentionally ignores everything but the result.

Regarding your option c):
Don't! Because this would mean that you're exposing formerly non-public functionality to the outside world. Sure, it may be a perfectly valid option in some cases to refactor the helper methods to a separate class, but then this class also has to be non-public. Otherwise you would break encapsulation (or, in other words, change your overall architecture). This is exactly what I mean when talking about 'shaping your design after a developer's testing skills' ;-)...

- Regards
Thomas
Left by Thomas Weller on Nov 02, 2011 11:37 AM

# re: Some myths of 'Textbook-TDD', and why they are wrong
I agree with @Morgan. NotImplementedException is generally considered the wrong reason for a test to fail. I've reconfigured Resharper to generate members that return the default values (null for reference types, for example). Most TDD practitioners that I know do the same.

As for testing private methods, I absolutely disagree: *never* test a private method. As @Morgan pointed out, that's typically a sign that You're Doing It Wrong. If you are truly doing TDD, that shouldn't be necessary. You wouldn't write a helper method until you had a failing test for a public member that necessitated it. You might refactor already-covered code into a private method, but that's fine (that's the whole point of the Refactor step in Red-Green-Refactor). If you end up with something where testing private members feels necessary, consider refactoring the code to break up responsibilities.
Left by Matt Honeycutt on Nov 02, 2011 1:02 PM

# re: Some myths of 'Textbook-TDD', and why they are wrong
Matt,

ok, supposing that 'NotImplementedException' isn't perfect - would a compiler error be the better solution?? C'mon...

And regarding the 'Testing non-public methods' issue: What you describe is more a BDD-like approach, as I already said. As such there is absolutely nothing wrong with it, but it wouldn't help you in test-driving the internal code in any way (which can easily make up something around the 80% mark in a good, heavily encapsulated design).

- Thomas
Left by Thomas Weller on Nov 02, 2011 1:24 PM

# re: Some myths of 'Textbook-TDD', and why they are wrong
I don't think Morgan is suggesting that a compiler error is the solution. For me, I like to see the compiler show little red icons in my IDE as a sign that we are still in the red as part of Red-Green-Refactor. So, the reason why the IDE shows red is because we haven't created the SUT yet. That is a good reason to fail, because the SUT does not exist. Now, I would create the SUT. The IDE will automatically add the NotImplementedException() to my method. If I run my test then it will fail on NotImplementedException. Although this is a good reason to fail, part of making the test fail is to make sure that the test is failing for the right reason. That is why I like to write my Asserts first, before anything (even before I code my SUT).

For example, if I want to know that an ADD feature works correctly, then I would write a test with an initial Assert statement like so: Assert.AreEqual(5, Math.Add(2, 3)). Compiling the code would produce a compile error since Math.Add() does not exist. So, I stub out Math.Add(), having the method throw a NotImplementedException. Running the test will produce a red because the SUT is not implemented yet. Good. I'm on the right track. But "not implemented" is not the final thing that I want my test to validate. The test is expecting to validate that adding 2+3=5. I never get to this Assert since the exception was already thrown beforehand. From here I would remove the NotImplementedException and just return zero from the SUT. The test fails because 2+3 does not equal zero. Now it is failing for the right reason.
Left by ptran on Nov 02, 2011 7:02 PM

# re: Some myths of 'Textbook-TDD', and why they are wrong
A nicely written post. I must however disagree with your second point:

Regarding testing only the public API: If you test internals (classes or methods, internal or public, it doesn't matter), then you are testing an implementation design, not the functionality of the code. If you test the three complex algorithms that comprise the solution to the public method, then your tests are brittle. Suppose you now find a better way to implement the public method? Now your tests would fail without any good reason. I suggest only testing the public methods of the public classes in the assembly under test. If you developed your code with TDD, you will probably have complete coverage. If the public API test fails, you will know it. A simple look at the call stack, and the use of a half-decent IDE's debugger will solve your bug for you. If you _still_ need to test the code, then perhaps it is big and important enough to be refactored out into a public component of its own.
Left by Assaf Stone on Nov 03, 2011 1:56 PM

# re: Some myths of 'Textbook-TDD', and why they are wrong
@Thomas, I'm not saying that it's NotImplementedException vs. compiler-error at all: neither one of those is failing for the right reason. You want your test to fail *on your asserts*. That's the right reason. Failing because the code doesn't compile is a fine (and correct) step on the road to failing for the right reason though. Failing for NotImplementedException, however, is not really helpful to the TDD process IMO.
Left by Matt Honeycutt on Nov 03, 2011 2:53 PM

# re: Some myths of 'Textbook-TDD', and why they are wrong
Thomas,

I think the presence of these comments suggests that those of us practicing TDD are defensive about the definitive nature of your post: stating your opinions as long-standing myths and how wrong they are. We are sensitive to you misrepresenting our voices but no one (so far) has posted disagreement with the test-first philosophy. In other words, we're all saying the same thing but with different words and different experiences.

I've been doing TDD since early 2003, and the one thing that I've learned is that the fundamental outcome of TDD is early feedback. Early feedback challenges your assumptions and forces you to think differently, which in turn leads to growth of new ideas and approaches. You will look back on this post several years from now with a different attitude. My feedback for you is that this post would be better titled, "What I've learned so far".

#1 - I agree with your position but I think you've misinterpreted or are misrepresenting the original guiding principle of a test-first methodology. Test-first says before you write any code, write a test that shows the code that you would want to see -- by definition this will not compile because it does not yet exist. Busted myth: no one has ever argued for the existence of "not compiling tests" -- I think it would be more accurate to say "a not compiling test is an unfinished test", the same can be said for NotImplementedException though the unfinished part is the implementation required to satisfy the test. Compiling the code means that your assumptions about the language are right, but it doesn't prove the functionality. Compiling frequently is early feedback and a good habit of TDD.

#2 - There seems to be a lot of consensus against you on this. I see your point - use InternalsVisibleTo to enable greater access for testing - but I would argue that writing the tests against the internals as you've outlined above is a brittle approach. My latest mantra for TDD is: test-first exacerbates bad design. Meaning: if testing it is hard, a change in the design may solve the problem. Moving the algorithm to a separate internal class might be easier to test, but that class should be a proper abstraction and not a collection of implementation methods.

#3 - Agreed, but with caveats. Follow SOLID principles and make abstractions open-ended, flexible and easily isolated. Early adopters of TDD will misread your statement and add public properties to classes like "MethodWasCalled" because they don't know how to test it otherwise. The latter approach is simply a bad design technique, for testing purposes only, which should be avoided.

#4 - Well said. If you blindly follow TDD and dive into writing the tests your code will organically evolve into something you hadn't originally planned for (a well-tested, but wrong abstraction). Kent Beck's TDD by Example says that it is still TDD if you go off and whiteboard a design for a few days as long as you write the tests before you write the code. I've even heard Uncle Bob refer to scribbling out a proof of concept without tests to understand the problem domain then throw it away and start over with tests.

Kudos. Keep it up.
Left by Bryan on Nov 03, 2011 3:44 PM

# re: Some myths of 'Textbook-TDD', and why they are wrong
To state things another way...

#2: If you were attempting to design a class following the tenets of TDD, and you found that the end result of your class was that it did not follow the tenets of TDD, then you weren't really doing TDD, were you? In other words, you did something wrong.

That doesn't mean that what you were doing was better, worse, or anything else; it just means that you did not do what you set out to do following its rules.

For most people, the details of #2 (the 'test only the public API' rule) are the MOST valuable part of TDD. That is the most obvious part of letting it help drive your design. It almost guarantees that you don't break SRP, which you are likely doing in your example. Alternatively, you AREN'T breaking SRP, and you have no reason to test anything but the public API, as your class has only one reason to change.

If you are comfortable with that class the way it is, then you likely came to the conclusion that it is not going to change and therefore does not need to be "over-engineered".

If you AREN'T comfortable with that class and feel the need to put unit tests around three different pieces of functionality within that class, then you are saying: I am OK with breaking SRP and not following the tenets of TDD. I would like to NOT refactor my code, but unit test it AFTER I designed/coded it, not letting the tests drive my design through TDD.
Left by Josh on Nov 03, 2011 5:51 PM

# re: Some myths of 'Textbook-TDD', and why they are wrong
Hi all,

thanks for the feedback, you brought up some interesting points and new food for thought.

Regarding the 'NotImplementedException vs. compiler error' issue:
It's true that the two options are only the first step in the TDD process anyway: they show that the code under test doesn't exist yet, and as such, both are (by design) some sort of crutch that you only use to have a meaningful starting point. But I've very often heard and read the statement that 'a not compiling test IS a failing test' - which is, in this decided form, nothing but a logically wrong statement, and also very counter-intuitive for TDD newbies. Wouldn't a NotImplementedException be a far better 'solution' for this? In my eyes, it makes much more sense, compared to only having a useless compiler error.

Regarding the 'testing non-public members' issue:
It turns out that this is by far the most controversial one here. TBH, this kind of astonishes me, because I consider this in some respect to be the least debatable and most obvious point. Let me tell you a story:
In the second half of 2010, I was writing a service component that was supposed to periodically monitor the availability of some network resources. My design was basically a strategy pattern: All the various components (there were 4 or 5, one for each resource that had to be monitored) implemented an interface with two methods: 'Init()' and 'PerformCheck()'. Each implementation of these public methods had only 10-20 LOC (argument checking, error handling, and orchestration of the internal operations, as in the above example). The magic stuff of the components (pinging, calling web service methods, and also some WCF stuff) was completely internal to the respective component and in some cases was quite complex (the non-public code of these components spanned something around 500-2000 LOC). There were a lot of non-public methods, non-public helper classes, and even non-public interfaces; in short: there was also a lot of design work in the parts of the code that weren't directly exposed to the caller.
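(For illustration, a rough sketch of that public surface - the interface name and return types are guesses, only the two method names are from the description above:)

public interface IResourceMonitor
{
    void Init();
    void PerformCheck(); // pinging, web service calls, WCF stuff - all internal to the component
}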
So how then would it be possible to implement such code in a TDD way without writing tests against some non-public members? It's not. You'd be back to square one - Hope-driven development: In this case, hoping that your tests will succeed.
The main argument in this situation for putting some non-public members under test: Some of the internal steps had quite a lot of different corner cases to cover, so it was extremely helpful to have dedicated data-driven test cases for these scenarios. If you'd had to cover all these test cases on the public level, you would have ended up with hundreds of different test cases, because of their combinatorial nature. Hundreds! It wouldn't have been doable that way, and nobody could ever fully understand the various possible interactions that might occur between the different non-public methods.
Also, I know well that 'testing too many details' is a smell in itself, because it can make your tests brittle and effectively prevent your code from easily being changed/refactored - I fell into that trap quite a few times before. But in the case described above, I of course didn't test all non-public methods, but only the ones on a higher abstraction level - the ones that were 'closer' to the public methods (@Josh: I don't see how that could break SRP?!). I didn't say that you should put just about everything under test. What I'm saying is: You should write tests for a method when it is central to the functioning of your class, and when it contains quite some business logic which can be described as a logical unit in a meaningful way (this does NOT necessarily mean it has to be a chunk of information that would be relevant to the caller!). I don't see how the public/non-public issue could even be relevant in such a situation. This is a decision which should be made exclusively on the architectural level.

- Thomas
Left by Thomas Weller on Nov 04, 2011 8:05 AM

# re: Some myths of 'Textbook-TDD', and why they are wrong
RE: #2: "Only test the public members of a class"

If you have a private method in your class that you feel deserves testing directly... then I believe your intuition is right: you should test that code.

Making that code public, or using reflection to access it however is not the answer.

Use this as a sign that the class is doing too much: factor out the complex method onto a strategy object that you inject into your current class...

This has the advantage that you can now fully test the public API of the existing class without having to jump through hoops to get the complex method to return edge-case data, by injecting a stub object.

You can also now fully test your strategy class, to give you the confidence you were looking for in the first place that your code is working.

In my experience, this decomposition, driven from a desire to test something complex hidden in another class, always makes my code easier to read and test, and more flexible.

Thanks for raising this point.

- Nigel
Left by Nigel Thorne on Nov 10, 2011 11:28 PM

# re: Some myths of 'Textbook-TDD', and why they are wrong
Well I found out this post because of problems related to this compilation fail thing.

I'm really new to TDD and trying to gain experience in a simple personal project, since the projects where I work are just spaghetti: fragile, sensitive, "do not touch this" code. Man, how I hate this. I wonder how people can write things like that - like a 40-parameter method passing all the data of a person separately instead of using an object.

So ok, I'm using C# and NUnit. I added a ClassLibrary to the solution just for the tests, and pointed the library to start an external program, NUnit.exe in this case.

So when I type Ctrl+F5 to execute the program, and then want to run the tests and see the red bar: since the code doesn't even compile, I can't even launch the NUnit interface. So if I have to manipulate my code to compile - like creating the skeleton of the method that I'm calling - then "write a failing test" doesn't seem to be the first step of the process. Seems like I have to adjust my code a little bit first before using TDD.

From everything that I've read before getting hands-on, I really believe that TDD is a great approach to develop good code, and I know that the learning curve is hard at the beginning. I'll give it a try, but all those mantras really are something boring in all sorts of methodologies.
Left by Daniel on Sep 06, 2012 4:03 PM

Copyright © Thomas Weller