The Architect´s Napkin

Software Architecture on the Back of a Napkin

The Self-Similar Software Development Process

I read, with interest, Robert C. Martin's comment on Justin Searl's critique of a common TDD teaching approach. It was strange to see “the good uncle” so upset about Justin's article. Why does he seem to take it so personally that someone is not content with the state of TDD affairs? Why salt Justin's earth?

Anyway… what I wanted to express is my disagreement with Robert C. Martin's idea of a “Domain Discontinuity”. He proposes that there are two levels in software system design: one where thinking before implementing is good, and another where it's bad. The first being architecture, the second being “the problem domain”.

Unfortunately, he does not define what he means by “architecture”. And his definition of “problem domain” is shallow at best: business rules.

But why this dichotomy?

It´s the requirements, stupid!

Strangely, the term “requirements” occurs only once in Robert C. Martin's article. He associates it with tests. I completely agree with that. But why stop there?

Each and every trait of a software system – the look of the UI, the performance of some calculation, the distribution of code across clients and servers, the choice of persistence technology, the number of classes, the interfaces of those classes etc. – must be traceable to some requirement. This is true for coarse grained traits as well as fine grained traits.

The basic categories of requirements to me seem to be:

  • functional requirements or functionality for short
  • non-functional requirements or (primary) qualities for short, e.g. performance, scalability, security, usability etc.
  • evolvability, i.e. a special quality regarding the flexibility to adapt the software system to new requirements
  • production efficiency, i.e. a special quality regarding the ease and speed to implement requirements; it´s a quality of the development process

Software development thus is the art (or craft, or discipline) of making the right choices from a set of options to best fulfill the concrete requirements falling into those categories.

Architecture for qualities

With requirements put first, the question again is: what, in particular, is software architecture about?

My simple answer to this perennial question is: Software architecture is about designing structures to fulfill non-functional requirements aka qualities.

I don´t care much about whether architects need to be top communicators in order to do this job. I don´t care much about whether they need to be top notch coders to do this job. I don´t care about their tools or the level of their architecture documentation. Just do whatever is necessary to transform requirements into “ideas” of structures for code (and other artifacts).

Architects don't build, they design. That means they think – before someone else implements. (Don't get me wrong: when I say “architect” I don't mean a person but a role. The same person can be an architect in one moment and something else in the next. I do not advocate special architect job positions. Quite the contrary: I think each and every developer should hone his architecture skills.)

Sure, architecting software is not easy. Not because so many paradigms and technologies need to be taken into consideration, but because of the need to balance. Qualities are contradictory at times. They can pull software structures into opposite directions. Designing for performance might impede evolvability. Designing for security might impede performance.

Architects thus need to deliberate much. They need to think.

Yes, not too much, I know. BDUF is still bad. But some thinking is in order. Anything else would negate thousands of years of engineering success. Where the requirements are clear, up-front thinking leads to a solution – however tentative that might be.

So when does thinking about qualities end? Does it end after choosing MVVM over MVC and encapsulating MongoDB access with some kind of adapter or moving business logic into stored procedures?

I won't say so. Because architecture happens wherever anybody decides anything with regard to qualities, especially when it pertains to the software's structure. That means architecture is at work while (!) doing TDD. Because what drives the refactoring of a solution? It's some quality, not functionality. Why extract a method, a class? Why segregate interfaces? Why follow IoC and do DI? Those are all architectural questions.
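
To make that concrete, here is a minimal sketch (in Java, with invented names; not taken from any particular project) of such a quality-driven decision. The functionality stays exactly the same; extracting an interface and injecting the dependency only serves testability and evolvability:

    // Hypothetical sketch: functionality is unchanged, only qualities
    // (testability, evolvability) drive the structural decisions below.

    interface ReportStore {                        // extracted abstraction (interface segregation)
        void save(String reportId, String content);
    }

    class MongoReportStore implements ReportStore {
        public void save(String reportId, String content) {
            // talk to MongoDB here; hidden behind the adapter
        }
    }

    class ReportService {
        private final ReportStore store;           // depends on the abstraction, not on MongoDB

        ReportService(ReportStore store) {         // constructor injection (DI)
            this.store = store;
        }

        void publish(String reportId, String content) {
            store.save(reportId, content);         // the functionality itself is untouched
        }
    }

No requirement about business behavior asked for this structure; a quality requirement did.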

There's nothing sacred or lofty about architecture. It's simply about a couple of issues that need to be resolved: quality issues. Anybody with the right knowledge and capabilities can be an architect. In fact, every developer is one every day.

Functionality for the problem domain

The second term Robert C. Martin uses is “problem domain”. With the above categories of requirements I guess I know what he means by that: functionality. Not quality, but functionality. That´s fair enough.

And he says, tests should be used to formalize functional requirements (i.e. desired behavior) before the implementation. I agree with that, too.

What I don´t get, though, is: Why not continue to apply the human brain afterwards? Or even before? Why devalue thinking in the form of “thinking up a design”?

If (functional) requirements are clear enough to hard code tests, why aren´t they clear enough to at least come up with a solution sketch for the part of the problem domain in focus?

Again and again I have seen experienced developers fail at simple problem domains like the “kata roman numerals”. They don't fail because they lack extraordinary TDD skills. No, they fail because they have little practice in actually “thinking about solutions before coding”. Their tool chest contains only a single tool: code.

So what to do about refactoring? Well, where should an idea of a better structure come from when you're not used to thinking before coding? Refactoring as the third TDD step just defers thinking to the last possible moment – and beyond, as shown by the many TDD examples in which not much refactoring was done. Refactoring is optional. Nothing forces refactoring in TDD. That's why Keith Braithwaite came up with “TDD as if you meant it”.

But why then say, “They just haven´t gotten it! They need to learn to do better TDD!”? Why not question for a moment the “traditional” TDD approach? Why not – as Justin does – introduce at least some thinking before coding?

The cleanliness of code, which TDD might aim at, is also just one of many qualities – one that needs to be balanced against others, like production efficiency.

At least I´m happy to sacrifice some elegance and cleanliness for efficiency and readability. That´s why I think before I write the first test, no, even before I prioritize test cases. And I think about the design of the code. What functional (!) structure should it have to make it easy to understand? How does that collide or play together with testability?

So I'm an architect while doing TDD. I'm a problem solver. And more often than not I'm not surprised by the coded solutions, but see them comply with my idea of what a solution should look like. I know the steps of converting Arabic numbers to Roman numerals before (!) I start coding. In fact I'm writing code to match my conceptual (algorithmic) approach.
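
As a rough sketch of what such a conceptual approach can look like (in Java; this table-driven greedy algorithm is one common solution idea, not a transcript of any kata demo):

    // Hypothetical sketch of a thought-up algorithm: greedy subtraction over a
    // value table that already contains the subtractive pairs (IV, IX, XL, ...).
    class RomanConverter {
        private static final int[]    VALUES  = {1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1};
        private static final String[] SYMBOLS = {"M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I"};

        static String toRoman(int number) {
            StringBuilder roman = new StringBuilder();
            for (int i = 0; i < VALUES.length; i++) {
                while (number >= VALUES[i]) {      // subtract the largest value as often as possible
                    roman.append(SYMBOLS[i]);
                    number -= VALUES[i];
                }
            }
            return roman.toString();               // e.g. 1994 -> "MCMXCIV"
        }
    }

With such a sketch in mind, the tests I write afterwards confirm the approach instead of having to discover it.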

Continuity in designing software on all levels

Architecture is thinking about structures, that is, structures to deliver qualities. Solving the challenges of the problem domain is about structures, too, that is, structures to deliver functionality (in accordance with qualities).

Software is a self-similar system with many levels. It consists of wholes containing parts, which are wholes again for even smaller parts, and so on. “It's holons all the way down”, to use a term coined by Arthur Koestler. And because of that the same principles apply on all levels – not just DRY, SRP etc., but also “thinking helps”.

There is no discontinuity as Robert C. Martin suggests. Quite the contrary! Just as there is strategic design, there is tactical design and situational design. And that means there is thinking before doing. There is engineering before hammering and welding.

My interpretation of what Justin has written is: Use your head before you sling the code to make a test green. With that I agree wholeheartedly, even though I might disagree with some details of his approach.

The dichotomy Robert C. Martin suggests between architecture and problem domain does not warrant a fundamentally different approach to design on those different levels. Because design is about structure (and thus about options and decisions). And structure should not be an afterthought or be stumbled upon.

But before someone accuses me of advocating waterfall development: Yes, I do! :-) Because that's how the world works. Before there is a man-made structure, someone had better have thought about it. Design up-front is inevitable. Even if you want to, you cannot avoid it. You can only hide it or deny it. But it's there.

The question thus is not whether to design before coding. It's not about waterfall or not. It's just about the scope. If the scope encompasses so many requirements that it would take you 3 months to design and 8 to implement, then of course that's not going to work. But if the scope is much, much smaller, say, 5 minutes of design and then 10 minutes of implementation… then that's perfectly ok. The solution thus lies in ever smaller “waterfall iterations”. And – as always in life – it helps to be humble. If you devise a solution design, don't deem it above new insights. Always be prepared to revise it.

Which brings me back to Justin: I haven´t seen any TDD demonstration where 5 or 10 minutes of “design thinking” have been applied before coding. That´s sad. Because it makes it harder than necessary to learn to appreciate the power TDD can have. It´s a great tool – if not taken as the only one to come up with designs.

posted on Friday, January 31, 2014 8:00 PM | Filed Under [ Thinking outside of the box, Software design, Informed TDD ]

Feedback


# re: The Self-Similar Software Development Process

Ralf, I think you got uncle Bob wrong. I am pretty sure he is not against thinking [first]; actually he directly says that architecture has to come first and "without it you cannot even begin to use TDD" (Which to me exactly means what you say - he just uses other words...).
Actually uncle Bob's text describes the feeling of wrongness I had when I read the article. Justin kind of wants us to get it right the first time; but if you don't, you have gained nothing by applying his technique.
No, I do not take it that Justin wants us to really design first; I simply see a top-down refinement (that's not the same - and it is even more dangerous - especially when actually coded out; and you start by writing a lot of glue tests and code that does not actually solve anything). Although it LOOKS a lot like flow design... ;-)

I definitely agree with your '5 or 10 minutes of “design thinking”' (however, I always believed that thinking is included in ALL activities. I do not really like that we have to add an explicit "think" step into the procedure now... that sounds like we did not think about stuff in the past).
I do NOT agree that everything (really everything) in software comes from requirements - there are degrees of freedom... even in tightly defined projects. In agile projects requirements usually even come in after you have written code - so you HAVE to change it.
TDD (to me) is not about "stumbling upon" the correct design - it just helps(!) you detect when it's wrong.
2/1/2014 1:03 PM | Frank B

# re: The Self-Similar Software Development Process

Frank, sure Robert C. Martin is not against thinking first. But he confines it to the architectural phase. That´s the whole point of his posting. The perceived discontinuity leads him to that conclusion. In his words:

"So our up front decisions can be limited to choosing a user experience, and choosing the architectural pattern that is most consistent with that user experience. Once those choices are made, we can TDD the problem domain into existence."

This thinking he calls "up front design". And he opposes it to TDD - which consequently must mean "not thinking up front".

As I said: I don´t agree with every detail of Justin´s approach. But to say "top down is wrong" I find a very sweeping judgement.

The problem with TDD demonstrations is: they don't show much thinking about solutions. They might show deliberation about test case prioritization. But where's the thinking about a solution approach? I haven't seen any for kata wordwrap or kata roman numerals. Robert C. Martin once showed some traces of thinking about kata bowling - but then hardly any of that was left in his implementation. (And the reason for that is clear to me: because he approached an algorithmic problem by trying to figure out data structures.)

When I say "think before coding" I mean "solve the problem before coding". That encompasses everything that´s necessary, e.g. finding test cases, prioritizing test cases, but also finding the right functional and structural approach.

As for requirements: Having freedom to choose between approaches/technologies/products etc. does not mean the result is not dependent upon requirements.

If a quality requirement says "display sorted list in under 1 sec" you might choose between quicksort and bubble sort. Great. Use your freedom and try to accommodate some other requirement as well, e.g. production efficiency. Nevertheless the functional requirement "display sorted list" as well as the non-functional requirement "display in under 1 sec" were driving your decisions.
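
A hypothetical sketch (in Java, names invented, and using the library sort in place of a hand-rolled quicksort) of what that means in code: the functional requirement fixes the interface, the quality requirement picks the implementation behind it.

    // Hypothetical sketch: the functional requirement ("display sorted list") fixes
    // the interface; the quality requirement ("in under 1 sec") picks the implementation.
    import java.util.Arrays;
    import java.util.List;

    interface ListSorter {
        List<Integer> sort(List<Integer> items);
    }

    class SimpleSorter implements ListSorter {     // bubble sort: cheap to write, fine for small lists
        public List<Integer> sort(List<Integer> items) {
            Integer[] a = items.toArray(new Integer[0]);
            for (int i = 0; i < a.length; i++)
                for (int j = 0; j + 1 < a.length - i; j++)
                    if (a[j] > a[j + 1]) { Integer t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
            return Arrays.asList(a);
        }
    }

    class FastSorter implements ListSorter {       // chosen once the 1-second budget starts to bite
        public List<Integer> sort(List<Integer> items) {
            Integer[] a = items.toArray(new Integer[0]);
            Arrays.sort(a);                        // O(n log n) library sort
            return Arrays.asList(a);
        }
    }

Either implementation satisfies "display sorted list"; only the non-functional requirement decides which one is good enough.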

And of course requirements change. Still, each and every decision should be based on actual requirements. If the scalability requirement today is "serve 100 requests per second" and you choose to deliver it using structure A, and tomorrow the requirement is "serve 10000 requests per second", then you might need to switch to structure B.

But don´t use structure B in the first place. Keep your solutions simple and to what´s known today. And keep them nimble, so you can change them to serve future requirements.

It's economic decisions all the time. Balance requirement X against Y. Balance today against some unknown future.

2/1/2014 2:08 PM | Ralf Westphal

# re: The Self-Similar Software Development Process

First off, let me say that we agree more than we differ. Thinking is required at all times (or at least while the tests are green).

I think all of this talk about architecture might miss an important concept - system evolution. I fear by putting a high premium on the ability to think of a solution before starting we might be short changing our ability to change existing code. And with TDD, I can RADICALLY change an existing system and have confidence that it all still works.

So I guess I'd like to say, it's ok to dive in. It's possible you might save time in the long run identifying architecture up front. But you might not. And if you pick the wrong one, you need to have confidence you can take a request/response templated web app and turn it into an asynchronous offline mobile app. I guess that's an advanced skill, but I don't think it should be.
2/1/2014 2:20 PM | Steve Jackson

# re: The Self-Similar Software Development Process

@Steve: There it is again: "all of this talk about architecture might miss [...] system evolution" (or evolvability).

First, please note that I clearly and repeatedly emphasized evolvability as an important trait of software systems. I'm all for it.

Second, it's not TDD which ensures we "can RADICALLY change" software structures. It's the presence of automated tests with sufficient coverage. TDD is just one approach promising to deliver high coverage.

And finally: Just because I can be wrong, I should be cautious about thinking up front too much? Well, let me say the same about coding. Because any code you write - test-driven or not - is much (!) harder to change than any thinking (and diagramming) I do.

The "art of problem solving by thinking" has a bad reputation in our trade. People ostensibly refuse to think; they would rather spend time hacking along. And/or they have lost the ability to think things through with some confidence.

Sure, in the end, it´s code which proves us right or wrong. And as I said: the cycles should be short until the next feedback.

However, problem solving is a task of its own. And it should always come before coding.
2/1/2014 3:04 PM | Ralf Westphal

# re: The Self-Similar Software Development Process

@Ralf, I guess the difference is that *I* always seem to get it wrong up front. It's only by building that first client of the system (the tests) that I start to identify what this thing really needs to do.

When I did my diagrams up front I found that I was attached to my plans. And I'd spend a lot of time trying to bend my code to fit a design. And I'd feel bad if I couldn't make reality match my plan. I think I'm better about this now, but I still worry about that anchoring effect. It's hard to throw away coherent narratives, even in the face of reality.

Now that's not to say that tests don't anchor a design as well. I recently did a code retreat and our constraint for the session was that you couldn't speak and one person wrote the tests and the other made them pass. My pair and I were speeding along and I suddenly killed all our momentum by picking the wrong abstraction for our next test.

Refactoring is supposed to be the secret sauce that pulls this "stop thinking and just code" back into something that makes sense. But in practice, there's always that one scenario that you're so relieved to get past that you don't want to touch it again. But of course you do, those complicated parts always attract more complexity and make that inevitable need to untangle it all the more pressing.

What I'm really shooting for is emergence. My least fragile designs/modules/abstractions have come about by applying thinking as soon as it got hard. And at that point, I'm all for stepping away from the keyboard and hitting the whiteboard.

Upon reflection though, there is a lot to be said for having an understandable strategy. A code base that is being refactored to coherence always seems to have parts that aren't quite done and don't fit the narrative. If it is possible to create an outline up front that makes it possible to navigate and discover functionality, that's a pretty useful thing. This is the attraction of frameworks, I suppose - too bad I still find myself fighting this structure. In short, the conundrum of my experience is this: if there is a defined structure, I will fight against it; if there is no structure, I will fight to find it.
2/1/2014 3:44 PM | Steve Jackson

# re: The Self-Similar Software Development Process

I'm sorry to hear, Steve, that you always get your thinking wrong. And I'm at a loss as to what the reason for this might be. Maybe it's perfectionism? Or maybe it's taking too much scope on your thinking plate? Hm... dunno.

And I'm sorry to hear of so much attachment to your diagrams. Maybe some exercise in doing Buddhist sand mandalas would help? ;-) Or maybe less drawing using some fancy tool, and more with pen & paper?

As for refactoring: I don´t get how anyone - including all TDD and refactoring gurus - can refactor without first (!) thinking about what the resulting structure should look like. I don´t know of any button in a refactoring tool labeled "Just refactor everything!" - and magically the cleanest structure possible is fabricated automatically.

Who decides if this one method should be split in 2 or 3 or left alone? Isn´t that you? Isn´t that thinking before (!) coding, ehm, I mean refactoring?

Frameworks: They are just tools or materials which need to fit into what has been thought up - not the other way around. They are neither good nor bad. They are what they are - and need to be used adequately. Unfortunately that's often not the case. They are set. Or quickly chosen as a replacement for more deliberation.

Conundrum: I think what you call a conundrum is, well, the nature of the beast :-) Software is not static. That's why, in the end, "structure" is too simple a term. "System" is better; it connotes more liveliness.

A software system is ever evolving. Some parts of it more than others - at times. Structures need to be softened and hardened again. That´s inevitable, I think.

The problem is, most projects go for too rigid structures. Too early they want to nail down how things are done - once and for all. Par pro toto there is the database scheme (or the domain object model). That´s what people think about first. Very hard, because it´s so fundamental. They need it to get right from the start.

But then... future requirements laugh in their face. But by then there is so much structure already that's serving so many purposes... No way that can be thrown away. So whatever new requirements come in need to somehow be fit into what's already there. Thus the monolith is built one requirement at a time.

Class models often are a premature optimization. Likewise relational database schemas.

It´s easier to harden a soft structure than to soften a hard structure. The latter we´re trying every day when wrestling with legacy code.

So why not get to the root of the problem? Why not start out with much, much less rigidity and forethought?

Yes, I´m advocating less forethought even though I´m saying more thinking is needed :-) Because it´s all about scope and basic principles and values.
2/1/2014 4:13 PM | Ralf Westphal

# re: The Self-Similar Software Development Process

Hi Ralf,

you said: "Why not question for a moment the 'traditional' TDD approach? Why not – as Justin does – introduce at least some thinking before coding?"

In Straw Man TDD http://codemanship.co.uk/parlezuml/blog/?postid=1170 Jason Gorman collects some of the most common misconceptions about TDD.

His first misconception is "TDD means not doing any up-front thinking about design".

Here is what he wrote:

Nobody has ever suggested this. It would be madness. Read books like Extreme Programming Explained again. You'll see sketches. You'll see CRC cards. You'll even see UML. (Gasp!)

The question really is about how much up-front design is sufficient. And the somewhat glib answer is "just enough". I tend to qualify that as "just enough to know what tests you need to pass". So, if your approach is focused on roles, responsibilities and interactions, then I'd want to have a high-level idea of what those are before diving in to code. If it's more an algorithmic focus, I'd want to have a test list that can act as a roadmap for key examples that - taken together - explain the algorithm. And so on.

I'd stop at the point where I'm asking questions that are best answered in code (e.g., is this an interface? Should this method be exposed? etc) Code is for details.

Philip
2/2/2014 10:10 AM | Philip Schwarz

# re: The Self-Similar Software Development Process

>"Why not question for a moment the 'traditional' TDD approach? Why not – as Justin does – introduce at least some thinking before coding?"

Even extreme programmers like Ron Jeffries say the following:

"Often the best thing to do is just to proceed to 'Test-first programming.' But if you aren't clear on just what to do, or if you would like a little reassurance, or if you would like a few additional ideas, have a quick design session.

Get a few people together and spend a few minutes sketching out the design. Ten minutes is ideal—half an hour should be the most time you spend to do this.

Do a CRC design with a few cards, or sketch some UML on the whiteboard or a sheet of paper. The idea is to understand the objects involved in what we're about to do, and to understand how to implement them, or change them, to accomplish the task."

That extract is from the Quick Design Session chapter of Extreme Programming Installed: http://books.google.co.uk/books?id=l4zO3OWkdIsC&pg=PA69&source=gbs_toc_r&cad=4#v=onepage&q&f=false

Philip
2/2/2014 10:50 AM | Philip Schwarz

# re: The Self-Similar Software Development Process

@Philip: I´m not denying that there are people out there who say "Don't stop thinking before coding." I´m just saying: That´s not how TDD is taught in practice.

Here are two examples for what I mean:

Kata Roman Numerals by Corey Haines: http://katas.softwarecraftsmanship.org/post/38470416869/roman-numerals-kata-with-corey-haines-this-is-the

Kata Word Wrap by Robert C. Martin: http://thecleancoder.blogspot.de/2010/10/craftsman-62-dark-path.html

Now tell me: Where is there a single line telling us anything about the solution approach?

There is none. Neither Robert C. Martin nor Corey Haines tells us what he has in mind - how he thinks the problem should be tackled.

Both start with a "degenerate test case" instead.

Despite their good intentions and even the explicit focus on the "TDD rhythm" this is quite far away from reality. And I find it even counterproductive. Because the prioritization of tests might depend on the solution approach.

I'm not saying degenerate cases should not be tested. Rather they should not have such a high priority. I want to get a problem solved - but there the very thinking about the solution does not begin - in the case of Roman Numerals - before testing 1 -> "I".

And then it depends on my design whether the test case 1 -> "I" falls into a different category than 4 -> "IV". That's (!) what I mean.
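
To illustrate (a hypothetical JUnit 4 sketch, reusing the RomanConverter sketch from my posting above): with a value-table design in mind, 1 -> "I" and 4 -> "IV" are simply two rows of the same example table - there is no new "category" to be discovered between them.

    // Hypothetical sketch (JUnit 4): with the value-table design in mind, 1 -> "I" and
    // 4 -> "IV" are just rows in the same table of examples - not separate categories.
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class RomanConverterTest {
        @Test
        public void convertsArabicToRoman() {
            String[][] examples = {
                {"1", "I"}, {"4", "IV"}, {"5", "V"}, {"9", "IX"}, {"1994", "MCMXCIV"}
            };
            for (String[] example : examples) {
                assertEquals(example[1], RomanConverter.toRoman(Integer.parseInt(example[0])));
            }
        }
    }

Without that design idea, the same test cases look like an ascending ladder of "complexity" that has to be climbed one rung at a time.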

Listen to Corey Haines around 4:30. He´s musing about what should be tested after 1 -> "I" and settles for "a more complex constant": 5 -> "V".

But why in all the world should 5 (or "V") be more complex than 1 ("I")? I don't get it. This wandering around, grasping for a hold on the TPP (Transformation Priority Premise), is just... well... completely unenlightened by any idea of a solution. (Which Corey Haines surely is just playing at, because he already knows his solution.)

If only he had shared his solution idea with the viewer instead. Listen to him around 13:20. There he pretends to stumble across the fundamental idea of handling "IV" the same way as "I" or "V". That's what I don't believe! He does not stumble across this, it does not just come to him easily - he knew it beforehand.

The point is: We never see those guys doing a kata for the first time! Never, ever. So my guess is: For any non-trivial problem we see solved using TDD the solution is a retrofit glossing over any bumps and deviations of the TDD way.

Or take this for example:

http://www.infoq.com/news/2007/05/tdd-sudoku

Ron Jeffries tries hard to find a solution to Sudoku using TDD - but fails. He gives up after five postings.

Well, my guess is: A little more thinking ahead would have helped Ron Jeffries.

Of course I agree with "just enough" thinking. But no (visible) thought process is too little in any case. Especially if you're one of the gurus. It's simply misleading.
2/2/2014 11:52 AM | Ralf Westphal

# re: The Self-Similar Software Development Process

@Philip: Why put a limit on "just enough" thinking? 10 to 30 minutes max. How do you know? Shouldn´t it depend on the problem at hand?

And - sorry to say - I have never ever seen CRC cards plus UML lead to a good design. One reason for that: effectively nobody is knowledgeable about those tools. (Let alone their lack of usefulness ;-)

90% of developers (I've checked this number over several years with conference audiences and training participants) do not (!) know UML well enough to put it to any other than trivial use.

Again: 90% of devs! After more than a decade of UML indoctrination in universities etc. This is across all major platforms (from C to C++, Java, C#, JS, Ruby...).

UML does not have any practical relevance as a design tool. It has, however, relevance as a documentation tool. But that´s a different story.
2/2/2014 12:02 PM | Ralf Westphal

# re: The Self-Similar Software Development Process

As I remember it, Kent, Ron and others adopted the "10-minute rule" (don't let any design argument go past 10 minutes without writing code) to combat their tendency (and the tendency of others) to argue in circles without gathering any evidence to support their arguments. We've all attended those meetings.

As I remember it, we recommended starting directly with test-first programming (as we called it at the time) in order to help programmers see that they didn't *need* to make all those decisions up front, that they would survive without figuring all that out in detail up front, and that indeed some mistakes resist discovery until we write code. Over time, programmers would notice patterns (and I think they differ from programmer to programmer) in which decisions they need to make up front and which decisions they can afford to defer. Unfortunately, some programmers forget the possibility that these patterns differ from programmer to programmer, so they say things like "you should make *these* decisions up front, but defer *those* decisions to writing code", while other programmers say things like "you don't need to make any decisions up front; you're better off deferring them all". I disagree with both.

When I practised test-first programming (now test-driven development) deeply, I learned a lot about how I think about design. I discovered which patterns I understand most deeply, and therefore can get away with using early, and which patterns I understand least deeply, and therefore force myself to refactor towards before using them. I find value in this awareness of myself and my own practice.

This whole question of "which is better" -- it really bores me. I give you the same answer that I give everyone I teach: it's better to feel comfortable doing both. Maybe it's even best. I feel comfortable adopting that as a default position for most questions. These things that we set up as dichotomies: so few of them are. Mocks or not? Both. London or Detroit? Both. Think ahead or just code? Both. If you don't know both, then you won't understand when to do one and when to do the other.

When I try something like the Roman numerals kata, I'm not trying to learn how to solve the problem. That's easy. I'm trying to learn something about how I think. I'm trying to learn something about my own blindspots. I'm trying to learn something about the design tools that the language gives me. I'm mostly trying to understand myself. Aren't you?
2/6/2014 3:32 PM | J. B. Rainsberger

# re: The Self-Similar Software Development Process

Am I trying to understand myself? Most surely I am. And exactly that led me to what I'm arguing for: to think more before coding.

I´ve observed myself for decades. I´ve observed others for decades. And my conclusion is: We don´t need more coding, we need less.

The mere ability (!) to approach problem solving other than through code has atrophied in many, if not most, developers.

In the end, yes, you need to be able to do both. Switch between thinking and coding. Fluently. At any time.

But as you said: "If you don´t know both, then you won´t understand when to do one and when to do the other."

With all the TDD literature around (the good and the bad) there´s no lack of "to do one". But in the meantime "the other" has been neglected. From a historical point of view that might be understandable. There was much frustration with the way design was used/taught in earlier days. And so a movement was born to counter that.

Now is the time, though, when we should wise up and swing back. To a position in the middle between extremes. After a pre-attitude and then a conventionalistic attitude we should transcend dogmas. (I'm alluding to the pre/trans fallacy here, which seems to be at work in our trade as well. For an explanation see for example: http://www.integralworld.net/fallacy.html)

Not arguing in circles is of course a good habit - in general; it does not only pertain to software problems. But as presented, the "10 minute rule" can easily be misinterpreted - as an excuse not to talk and think for more than 10 minutes in a row.

Just today I did a 3 hour design session with a team. We explored the problem domain. We explored the solution domain. This was all very easy because diagrams are so malleable.

Now we're ready for coding. Will we code everything we designed? No, of course not. We'll slice the whole design into increments which we then implement one after another. And if they invalidate some of the design... so what? We don't care. We welcome new insights. But that does not make the initial exploration less valuable. It served several purposes, e.g. collective design ownership, putting collective intelligence to the problem, or finding ways to organize the team so it can later implement the code most efficiently.

How will the functional units we left as black boxes be implemented? I don't know. Maybe using TDD. Maybe some other way. What I know, though, is: they will be worked on in parallel. And I know they will be backed by a test harness. That's what simple teamwork requires. Because otherwise we can't know what the quality of the individual contributions to the whole is.

And if a developer decides to invest even more thinking before coding the functional unit assigned to him... Well, that's certainly a possibility. Because of the self-similarity of software, the process works the same on all levels. At least for me.

To get back to my posting: That's why I don't like most TDD presentations. They don't show "both". Is that because they assume devs know how to design/think? I don't know. But I see the effect. People have a hard time coming up with good designs. Still. Not because of TDD, but despite all the efforts put into propagating TDD.

Ron Jeffries failed on Sudoku. Bob Martin's Word Wrap solution has faults. All despite TDD. So I say: Don't just bet on TDD. It's just a tool among others.
2/6/2014 7:13 PM | Ralf Westphal

# re: The Self-Similar Software Development Process

Hi Ralf,

I have two comments I am trying to submit, but something is not working and the submissions seem to fail without any error message. I have tried with Chrome/Firefox/IE. Maybe there is something in the comments that results in them silently getting rejected.

Can I e-mail you the comments, so that you can submit them on my behalf?

Philip
2/9/2014 7:50 PM | Philip Schwarz

# re: The Self-Similar Software Development Process

I don't teach TDD to encourage people to think less before coding. I make this clear. I teach it to encourage people to *commit* to less before coding. To help them feel more comfortable changing their mind as they become uncomfortable with some of the decisions they've made.

I don't show "the other" when I talk about TDD, but I do talk about it. I assume that the audience is already familiar with "try to figure out most of the details before writing a line of code", and introduce TDD as a discipline for figuring *less* out before writing a line of code, and figuring more out while writing lines of code. It seems to me an intellectually honest way to present the topic. It took me over five years to start thinking of it this way.

In a 45-minute introductory TDD presentation, I need the full 45 minutes to demonstrate and justify TDD. Taking 5-10 minutes to discuss up-front design decisions seems risky in that format. In such a session, I have one goal: teach the rules and encourage people to start trying to follow them. That's as much as I can achieve in 45 minutes to that audience.

In a 3-day TDD training course, the question of "how much thinking to do up front?" always comes up, and we spend a lot of time discussing it. I would feel dishonest not discussing it.

As for the 10-minute rule, *every* technique will be *easily* and *widely* misunderstood. That will never stop me from writing down and teaching a useful rule. I don't optimise for mindless practitioners; I encourage mindfulness/awareness instead. I also tell people very plainly: none of these rules replace thinking. Once you can follow the rule consistently, you'd better start questioning the rule, otherwise it becomes a hollow ritual, and you'll miss 90% of the value of it. Of course.
2/10/2014 6:15 PM | J. B. Rainsberger