The Architect´s Napkin

.NET Software Architecture on the Back of a Napkin





Friday, August 22, 2014 #

What is more important than data? Functionality. Yes, I strongly believe we should switch to a functionality over data mindset in programming. Or actually switch back to it.

Focus on functionality

Functionality once was at the core of software development. Back when algorithms were the first thing you heard about in CS classes. Sure, data structures, too, were important - but always from the point of view of algorithms. (Niklaus Wirth gave one of his books the title “Algorithms + Data Structures” instead of “Data Structures + Algorithms” for a reason.)

The reason for the focus on functionality? Firstly, because software was and is about doing stuff. Secondly, because sufficient performance was hard to achieve. Memory efficiency came only third.

But then hardware became more powerful. That gave rise to a new mindset: object orientation. And with it functionality was devalued. Data took over its place as the most important aspect. Now discussions revolved around structures motivated by data relationships. (John Beidler gave his book the title “Data Structures and Algorithms: An Object Oriented Approach” instead of the other way around for a reason.)

Sure, this data could be embellished with functionality. But nevertheless functionality was second.

When you look at (domain) object models what you mostly find is (domain) data object models. The common object oriented approach is: data aka structure over functionality. This is true even for the most modern modeling approaches like Domain Driven Design. Look at the literature and what you find is recommendations on how to get data structures right: aggregates, entities, value objects.

I´m not saying this is what object orientation was invented for. But I´m saying that´s what I happen to see across many teams now some 25 years after object orientation became mainstream through C++, Delphi, and Java.

But why should we switch back? Because software development cannot become truly agile with a data focus. The reason for that lies in what customers need first: functionality, behavior, operations.

To be clear, that´s not why software is built. The purpose of software is to be more efficient than the alternative. Money mainly is spent to get a certain level of quality (e.g. performance, scalability, security etc.). But without functionality being present, there is nothing to work on the quality of.

What customers want is functionality of a certain quality. ASAP. And tomorrow new functionality needs to be added, existing functionality needs to be changed, and quality needs to be increased.

No customer ever wanted data or structures.

Of course data should be processed. Data is there, data gets generated, transformed, stored. But how the data is structured for this to happen efficiently is of no concern to the customer.

Ask a customer (or user) whether she likes the data structured this way or that way. She´ll say, “I don´t care.” But ask a customer (or user) whether he likes the functionality and its quality this way or that way. He´ll say, “I like it” (or “I don´t like it”).

Build software incrementally

From this very natural focus of customers and users on functionality and its quality it follows that we should develop software incrementally. That´s what Agility is about.

Deliver small increments quickly and often to get frequent feedback. That way less waste is produced, and learning can take place much easier (on the side of the customer as well as on the side of developers).

An increment is some added functionality or quality of functionality.[1]

So as it turns out, Agility is about functionality over whatever. But software developers’ thinking is still stuck in the object oriented mindset of whatever over functionality. Bummer. I guess that (at least partly) explains why Agility always hits a glass ceiling in projects. It´s a clash of mindsets, of cultures.

Driving software development by demanding small increases in functionality runs against thinking about software as growing (data) structures sprinkled with functionality. (Excuse me, if this sounds a bit broad-brush. But you get my point.)

The need for abstraction

In the end there need to be data structures. Of course. Small and large ones. The phrase functionality over data does not deny that. It´s not functionality instead of data or something. It´s just over, i.e. functionality should be thought of first. It´s a tad more important. It´s what the customer wants.

That´s why we need a way to design functionality. Small and large. We need to be able to think about functionality before implementing it. We need to be able to reason about it among team members. We need to be able to communicate our mental models of functionality not just by speaking about them, but also on paper. Otherwise reasoning about it does not scale.

We learned thinking about functionality in the small using flow charts, Nassi-Shneiderman diagrams, pseudo code, or UML sequence diagrams.

That´s all well and good. But it does not scale. You can use these tools to describe manageable algorithms. But it does not work for the functionality triggered by pressing the “1-Click Order” button on an Amazon product page, for example.

There are several reasons for that, I´d say.

Firstly, the level of abstraction over code is negligible. It´s essentially non-existent. Drawing a flow chart or writing pseudo code or writing actual code is very, very much alike. All these tools are about control flow like code is.[2]

In addition, all these tools are computationally complete. They are about logic, i.e. expressions and especially control statements. Whatever you code in Java you can fully (!) describe using a flow chart.

And then there is no data. These tools are about control flow and leave out the data altogether. Thus data mostly is assumed to be global. That´s shooting yourself in the foot, as I hope you agree.

Even if it´s functionality over data that does not mean “don´t think about data”. Right to the contrary! Functionality only makes sense with regard to data. So data needs to be in the picture right from the start - but it must not dominate the thinking. The above tools fail on this.

Bottom line: So far we´re unable to reason in a scalable and abstract manner about functionality.

That´s why programmers are so driven to start coding once they are presented with a problem. Programming languages are the only tool they´ve learned to use to reason about functional solutions.

Or, well, there might be exceptions. Mathematical notation and SQL may have come to your mind already. Indeed they are tools on a higher level of abstraction than flow charts etc. That´s because they are declarative and not computationally complete. They leave out details - in order to deliver higher efficiency in devising overall solutions.

We can easily reason about functionality using mathematics and SQL. That´s great. Except that they are domain-specific languages. They are not general purpose. (And they don´t scale either, I´d say.) Bummer.

So to be more precise: we need a scalable general purpose tool on a higher level of abstraction than code, one that does not neglect data.

Enter: Flow Design.

Abstracting functionality using data flows

I believe the solution to the problem of abstracting functionality lies in switching from control flow to data flow.

Data flow very naturally is not about logic details anymore. There are no expressions and no control statements anymore. There are not even statements anymore. Data flow is declarative by nature.


With data flow we get rid of all the limiting traits of former approaches to modeling functionality.

In addition, nomen est omen, data flows include data in the functionality picture.

With data flows, data is visibly flowing from processing step to processing step. Control is not flowing. Control is wherever it´s needed to process data coming in.

That´s a crucial difference and needs some rewiring in your head to be fully appreciated.[2]

Since data flows are declarative they are not the right tool to describe algorithms, though, I´d say. With them you don´t design functionality on a low level. During design data flow processing steps are black boxes. They get fleshed out during coding.

Data flow design thus is more coarse grained than flow chart design. It starts on a higher level of abstraction - but then is not limited. By nesting data flows indefinitely you can design functionality of any size, without losing sight of your data.
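To make this concrete, here is a minimal sketch in plain Java (the step names and the string-based “data” are purely hypothetical, chosen for illustration): a flow is declared by wiring processing steps together, and each step stays a black box to the flow.

```java
import java.util.function.Function;

public class FlowSketch {
    // Hypothetical processing steps of a "1-click order" flow.
    // During design each step is a black box; only its input/output data matters.
    static final Function<String, String> validateOrder = raw -> "valid(" + raw + ")";
    static final Function<String, String> reserveStock  = order -> order + "+reserved";
    static final Function<String, String> chargeCard    = order -> order + "+charged";

    // The flow itself is declarative: it only wires steps together.
    // Control stays inside each step; only data flows between them.
    static final Function<String, String> executeOrder =
        validateOrder.andThen(reserveStock).andThen(chargeCard);

    public static void main(String[] args) {
        System.out.println(executeOrder.apply("book#42"));
    }
}
```

Nesting works the same way: executeOrder could itself become one step inside a larger flow, which is what makes the approach scale.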


Data flows scale very well during design. They can be used on any level of granularity. And they can easily be depicted. Communicating designs using data flows is easy and scales well, too.

The result of functional design using data flows is not algorithms (too low level), but processes. Think of data flows as descriptions of industrial production lines. Data as material runs through a number of processing steps to be analyzed, enhanced, transformed.

At the top level of a data flow design there might be just one processing step, e.g. “execute 1-click order”. But below that are arbitrary levels of flows with smaller and smaller steps.

That´s not layering as in “layered architecture”, though. Rather it´s a stratified design à la Abelson/Sussman.

Refining data flows is not your grandpa´s functional decomposition. That was rooted in control flows. Refining data flows does not suffer from the limits of functional decomposition against which object orientation was supposed to be an antidote.


I´ve been working exclusively with data flows for functional design for the past 4 years. It has changed my life as a programmer. What once was difficult is now easy. And, no, I´m not using Clojure or F#. And I´m not an async/parallel execution buff.

Designing the functionality of increments using data flows works great with teams. It produces design documentation which can easily be translated into code - in which then the smallest data flow processing steps have to be fleshed out - which is comparatively easy.

Using a systematic translation approach code can mirror the data flow design. That way later on the design can easily be reproduced from the code if need be.
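One possible translation might look like this (a sketch only; the flow “reverse words” and its step names are made up for illustration, this is not the author´s specific scheme): each processing step becomes a method, and the flow becomes an integration method that contains no logic of its own, only the wiring - so the design can be read back from the code.

```java
public class ReverseWordsFlow {
    // Integration method mirroring a hypothetical flow design
    // "reverse words": split -> reverse each -> join.
    // It contains no logic of its own, it only connects the processing steps.
    public static String reverseWords(String text) {
        String[] words = split(text);
        String[] reversed = reverseEach(words);
        return join(reversed);
    }

    // The processing steps hold the actual logic.
    static String[] split(String text) { return text.split(" "); }

    static String[] reverseEach(String[] words) {
        String[] result = new String[words.length];
        for (int i = 0; i < words.length; i++)
            result[i] = new StringBuilder(words[i]).reverse().toString();
        return result;
    }

    static String join(String[] words) { return String.join(" ", words); }

    public static void main(String[] args) {
        System.out.println(reverseWords("hello world"));
    }
}
```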

And finally, data flow designs play well with object orientation. They are a great starting point for class design. But that´s a story for another day.

To me data flow design simply is one of the missing links of systematic lightweight software design.

  1. There are also other artifacts software development can produce to get feedback, e.g. process descriptions, test cases. But customers can be delighted more easily with code based increments in functionality.

  2. No, I´m not talking about the endless possibilities this opens for parallel processing. Data flows are useful independently of multi-core processors and Actor-based designs. That´s my whole point here. Data flows are good for reasoning and evolvability. So forget about any special frameworks you might need to reap benefits from data flows. None are necessary. Translating data flow designs even into plain old Java is possible.

Thursday, June 12, 2014 #

The drivers of software development are increments: small increments, tiny increments. With an increment being a slice of the overall requirement scope thin enough to implement and get feedback on from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1]

To make such high frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what´s supposed to get implemented by tomorrow evening at the latest is one side of the coin. The other is where to put the logic in all of the code base.

To implement an increment, only logic statements are needed. Functionality as well as Quality is just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That´s all that´s needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It´s just about the right expressions and conditional execution paths plus some memory allocation. The automatic function inlining done by compilers makes it clear how unimportant functions are for delivering value to users at runtime.

But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand.

Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory.

That said, functions are important. They are even the pivotal element of software development. We can´t code without them - whether you write a function yourself or not. Because there´s always at least one function in play: the Entry Point of a program.

In Ruby the simplest program looks like this:

puts "Hello, world!"

In C# more is necessary:

class Program {
    public static void Main () {
        System.Console.Write("Hello, world!");
    }
}
C# makes the Entry Point function explicit, not so Ruby. But still it´s there. So you can think of logic always running in some function.

Which brings me back to increments: In order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note for a user story on the Scrum or Kanban board. But developers need an idea of what that sticky note means in terms of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on.

All´s well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation.

That´s why most code katas define exactly what the API of a solution should look like. It´s a function, maybe two or three, not more.

A requirement like “Write a function f which takes this as parameters and produces such and such output by doing x” makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider. Even a single function not only must deliver on Functionality, but also on Quality and Evolvability.

Nevertheless, once it´s clear which function to put logic in, you have a tangible starting point.

So, yes, what I´m suggesting is to find a single function to put all the logic in that´s necessary to deliver on the requirements of an increment. Or to put it the other way around: Slice requirements in a way that each increment´s logic can be located under the roof of a single function.

Entry points

Of course, the logic of a software will always be spread across many, many functions. But there´s always an Entry Point. That´s the most important function for each increment, because that´s the root to put integration or even acceptance tests on.

A batch program like the above hello-world application only has a single Entry Point. All logic is reached from there, regardless of how deep it´s nested in classes.

But a program with a user interface like this has at least two Entry Points:


One is the main function called upon startup. The other is the button click event handler for “Show my score”.

But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected; because then some logic could check whether the button should be enabled once all questions got answered. Or another Entry Point for the logic to be executed when the program is closed; because then the choices made should be persisted.

You see, an Entry Point to me is a function which gets triggered by the user of a software. With batch programs that´s the main function. With GUI programs on the desktop that´s event handlers. With web programs that´s handlers for URL routes.

And my basic suggestion to help you with slicing requirements for Spinning is: Slice them in a way so that each increment is related to only one Entry Point function.[2]

Entry Points are the “outer functions” of a program. That´s where the environment triggers behavior. That´s where hardware meets software. Entry points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3]


Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter.
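As a sketch (hypothetical names; a real GUI program would register the handler with a toolkit), both kinds of Entry Points can sit side by side in one program:

```java
public class EntryPoints {
    // Entry Point for the event "program started".
    public static void main(String[] args) {
        // Simulate the environment triggering the second Entry Point,
        // as a button click in a GUI framework would.
        System.out.println(onShowScore(3, 4));
    }

    // Hypothetical Entry Point for a "Show my score" button click:
    // a small batch processor that reads input data and produces output.
    static String onShowScore(int correct, int total) {
        return "Score: " + correct + "/" + total;
    }
}
```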


Collections of batch processors

I´d thus say, we haven´t moved forward since the early days of software development. We´re still writing batch programs. Forget about “event-driven programming” with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program, today it´s hundreds we bundle up into applications.

Each batch processor is represented by an Entry Point as its root that works on a number of resources from which it reads data to process and to which it writes results.


These resources can be the keyboard or main memory or a hard disk or a communication line or a display.

Together many batch processors - large and small - form applications the user perceives as a single whole:


Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-)


Each batch processor entered through an Entry Point delivers value to the user. It´s an increment. Sometimes its logic is trivial, sometimes it´s very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility.

At the same time it´s a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That´s where user stories meet code.


In this example the user story translates to the Entry Point triggered by clicking the login button on a dialog like this:


The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog.

This is all very simple, but you see, there is not just one thing happening, but several.

  1. Get input (email address, password)
  2. Load user for email address
    2.1 If user not found report error
  3. Check password
    3.1 Hash password
    3.2 Compare hash to hash stored in user
  4. Show next dialog

Viewed from 10,000 feet it´s all done by the Entry Point function. And of course that´s technically possible. It´s just a bunch of logic and calling a couple of API functions.

However, I suggest taking these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features.

Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner.

Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g.

  • First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4.
  • Then hard code a single user and check the email address. Features 2 and 2.1.
  • Then check the password without hashing it (or use a very simple hash like the length of the password). Features 3 and 3.2.
  • Replace the hard coded user with a persistent user directory, but a very simple one, e.g. a CSV file. Refinement of feature 2.
  • Calculate the real hash for the password. Feature 3.1.
  • Switch to the final user directory technology.

Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you´re in doubt whether you can implement the whole entry point function until tomorrow night, then just go for a couple of features or even just one.

That´s also why I think you should strive for wrapping feature logic into a function of its own. It´s a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it´s clear where their logic is located. And finally, of course, it lets you re-use features in different contexts (read: increments).
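A sketch of what that could look like for the login example (all names and the trivial “hash” are hypothetical early-increment stand-ins, not a recommended implementation): the Entry Point wires one function per Feature, so the requirements language survives in the code.

```java
public class LoginFlow {
    // Entry Point: wires the feature functions, one per Feature.
    public static String login(String email, String password) {
        String storedHash = loadUser(email);        // Feature 2: load user
        if (storedHash == null)
            return "error: unknown user";           // Feature 2.1: report error
        if (!checkPassword(password, storedHash))   // Feature 3: check password
            return "error: wrong password";
        return "next dialog";                       // Feature 4: show next dialog
    }

    // Feature 2, early increment: a single hard coded user.
    // A later increment would swap in a persistent user directory.
    static String loadUser(String email) {
        return "jane@example.com".equals(email) ? hash("secret") : null;
    }

    // Feature 3.2: compare hashes.
    static boolean checkPassword(String password, String storedHash) {
        return hash(password).equals(storedHash);
    }

    // Feature 3.1, early increment: password length as a stand-in "hash".
    // A later increment would replace this with a real hash function.
    static String hash(String password) {
        return Integer.toString(password.length());
    }

    public static void main(String[] args) {
        System.out.println(login("jane@example.com", "secret"));
        System.out.println(login("jane@example.com", "wrong!!"));
    }
}
```

Because each Feature sits behind its own function, increments like “replace the hard coded user” or “calculate the real hash” stay local to one function.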

Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, to let the product owner check them for acceptance individually.

Increments consist of features, entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which map to a hierarchy of functions - with entry points at the top.


I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility to move forward in increments is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are made tangible.

Software production is moving forward through requirements one increment at a time, and one function at a time.

In closing

Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible, they need to be connected.

The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that´s functions. Yes, functions, not classes nor components nor micro services.

We´re talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That´s the purpose of functions.

Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners functions are the appropriate glue. Functions which represent increments.

  1. Can such a small increment always be found to deliver by tomorrow evening? I boldly say yes. Yes, it´s always possible. But maybe you have to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a software. Product owners need to become comfortable using test beds for certain features. If it´s hard to slice requirements thin enough for Spinning, the reason is too little knowledge of something. Maybe you don´t yet understand the problem domain well enough? Maybe you don´t yet feel comfortable with some tool or technology? Then it´s time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman, officially become a researcher. Research and check back with the product owner every day - until your understanding has grown to a level where you are able to define the next Spinning increment.

  2. Sometimes even thin requirement slices will cover several Entry Points, like “Add validation of email addresses to all relevant dialogs.” Validation then will be put into a dozen functions. Still, though, it´s important to determine which Entry Points exactly get affected. That´s much easier if you strive for keeping the number of Entry Points per increment to 1.

  3. If you like, call Entry Point functions event handlers, because that´s what they are. They all handle events of some kind, whether that´s palpable in your code or not. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one, too - for the event “program started”.

Wednesday, June 4, 2014 #

The easier something is to measure, the more likely it will be produced. Deviations between what is and what should be can be readily detected. That´s what automated acceptance tests are for. That´s what sprint reviews in Scrum are for.

It´s small wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it´s lacking traits which cannot easily be measured.

Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low, that cannot be checked by taking a measure or two.

Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function or Cyclomatic Complexity or test coverage. But there is no threshold value signalling “evolvability too low”; also Evolvability is hardly tangible for the customer.

Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it´s needed like any other requirement. Or even more so. Because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built.

Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves.

Since we cannot measure Evolvability, though, we cannot simply start watching it more closely. Instead we need to establish practices to keep it high (enough) at all times.

Chefs have known that for a long time. That´s why everybody in a restaurant kitchen constantly sees to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high.

Still a kitchen´s level of cleanliness is easier to measure than software Evolvability. That´s why important practices like reviews, pair programming, or TDD are not enough, I guess.

What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often.

Scrum´s sprints of 4, 2, or even 1 week are too long. Kanban´s flow of user stories across the board is too unreliable; it takes as long as it takes.

Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner).

For me there are several reasons for such a fixed and short cycle time for each increment:

Clear expectations

Absolute estimates (“This will take X days to complete.”) are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are the pervasive interruptions of work by peers and management.

However, the smaller the scope the better our absolute estimates become. That´s because we understand better what really are the requirements and what the solution should look like. But maybe more importantly the shorter the timespan the more we can control how we use our time.

So much can happen over the course of a week and longer timespans. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two.

That´s why I believe we can give rough absolute estimates on 3 levels:

  • Noon
  • Tonight
  • Tomorrow

Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take to implement a user story or bug fix, you can say, “It´ll be fixed by noon.”, or you can say, “I can manage to implement it until tonight before I leave.”, or you can say, “You´ll get it by tomorrow night at latest.”

Yes, I believe all else would be naive. If you´re not confident to get something done by tomorrow night (some 34h from now) you just cannot reliably commit to any timeframe. That means you should not promise anything, you should not even start working on the issue.

So when estimating use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories.

If you like absolute estimates, here you go.

But don´t do deep estimates. Don´t estimate dozens of issues; don´t think ahead (“Issue A is a Tonight, then B will be a Tomorrow, after that it´s C as a Noon, finally D is a Tonight - that´s what I´ll do this week.”). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues.

To be blunt: Yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future.

But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow.

Trust through reliability

Our trade is lacking trust. Customers don´t trust software companies/departments much. Managers don´t trust developers much. I find that perfectly understandable in the light of what we´re trying to accomplish: delivering software in the face of uncertainty using the means of material goods production.

Customers as well as managers still expect software development to be close to production of houses or cars. But that´s a fundamental misunderstanding.

Software development is development. It´s basically research. As software developers we´re constantly executing experiments to find out what really provides value to users. We don´t know what they need, we just have mediated hypotheses.

That´s why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time.

If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - up to 32 hours at most can be satisfied.

I´d say: reliability over scope. It´s more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt promise less - but deliver without delay.

Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always.

Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow.

So don´t wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don´t need to learn that the hard way. Just start with small batches in the three sizes described above.

Fast feedback

What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end? Why let the mental model of the issue and its solution dissipate?

If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly you probably are not in the mood anymore to go back to something you deemed done a long time ago. It´s boring, it´s frustrating to open up that mental box again.

Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises.

Checking finished issues for acceptance is the most important task of a Product Owner. It´s even more important than planning new issues. Because as long as work started is not released (accepted) it´s potential waste. So before starting new work better make sure work already done has value.

By putting the emphasis on acceptance rather than planning true pull is established. As long as planning and starting work is more important, it´s a push process.

Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow.

After acceptance the developer(s) can start working on the next issue.


Flexibility

As if reliability/trust and fast feedback for less waste weren´t enough economic incentive, there is flexibility.

After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time.

Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it´s neither D nor E, but G. And after G it´s D.

With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost. Because what got accepted is of value. It provides an incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty.

I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It´s unnecessary at best, and inflexible and wasteful at worst.

If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty.

Where the path is not clear, cannot be clear, make small steps so you´re able to change your course at any time.

Premature completion

Customers/management are used to premeditated budgets. They want to know exactly how much they will pay for a certain amount of requirements.

That´s understandable. But it does not match with the nature of software development. We should know that by now.

Maybe there´s somewhere in the world some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven´t seen such a team yet. Which does not mean it´s impossible, but I think it´s nothing I can recommend to strive for. Rather I´d say: Don´t try this at home. It might hurt you one way or the other.

However, what we can do is allow customers/management to stop work on features at any moment. With Spinning, every 32 hours a feature can be declared finished - even though it might not be completed according to its initial definition.

I think, progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours?

Isn´t it more important to constantly move forward? Step by step. We´re not running sprints, we´re not running marathons, not even ultra-marathons. We´re in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases).

Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a feature mostly are uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed.

After each 4–32 hour increment the Product Owner can run an experiment (or invite users to one) to see if a particular trait of the software system is already good enough. And if so, she can switch the attention to a different aspect.

In the end, requirements A, B, C then could be finished just 70%, 80%, and 50%. What the heck? It´s good enough - for now. That´s 33% of the money saved, on average. Wouldn´t that be splendid? Isn´t that a stunning argument for any budget-sensitive customer? You can save money and still get what you need?
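The arithmetic behind that claim can be made explicit with a toy calculation, assuming the budget is spread evenly across the three requirements:

```python
# Hypothetical: A, B, C each get a third of the budget and are stopped
# at 70%, 80%, and 50% completion respectively.
completion = [0.70, 0.80, 0.50]
spent = sum(completion) / len(completion)   # fraction of the budget used
saved = 1 - spent
assert round(saved, 2) == 0.33              # about a third of the money saved
```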

Pull on practices

So far, in addition to more trust, more flexibility, and less money spent, Spinning led to “doing less” - which also means less code, which of course means higher Evolvability per se.

Last but not least, though, I think Spinning´s short acceptance cycles have one more effect. They exert pull-power on all sorts of practices known for increasing Evolvability.

If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn´t 90% of the developer community practicing automated tests consistently?

I think, the answer is simple: Because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term.

The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger.

With Spinning that´s different. If you´re practicing Spinning you cannot avoid all those practices. Without them you very quickly realize you cannot deliver reliably even on your 32 hour promises.

Spinning thus is pulling on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that. Because first the Product Owner then management will notice an increasing difficulty to deliver value within 32 hours.

There, finally, emerges a way to measure Evolvability: The more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is.

Don´t count the “WTF!”, count the “No way!” utterances.

In closing

For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability.

Since Evolvability cannot be measured easily, I think we need to put software development “under pressure”. Software needs to be changed more often, in smaller increments. Each increment being relevant to the customer/user in some way.

That does not mean each increment is worthy of shipment. It´s sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales.

Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what´s there, not what might be possible in the future. But I digress…

In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that´s the challenge we need to accept if we´re serious about increasing Evolvability.

Fortunately higher Evolvability is not the only outcome of Spinning. Customer/management will like the increased flexibility and “getting more bang for the buck”.

Monday, June 2, 2014 #

Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal.

However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirements aspects is warranted.

Aspects of Functionality

There are two sides to Functionality requirements.


It´s about what a software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state.

I´m not using the term “function” here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below).

Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value.

This should make clear, why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale when over time the number of operations increases. Without automated tests there is no guarantee formerly correct operations are still correct after more got added. To retest all previous operations manually is infeasible.

So whoever relies just on manual tests is not really balancing the two forces Operations and Correctness. With manual tests more weight is put on the side of the scale of Operations. That might be ok for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project on.
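To make the scaling argument concrete, here is a minimal sketch (Python, with two made-up Operations; names and logic are purely illustrative): every time a new Operation is added, the whole automated suite re-runs in milliseconds, which no manual retest can match.

```python
# Two hypothetical Operations with automated checks.
def net_price(gross, vat_rate):
    """Operation 1: strip VAT from a gross price."""
    return round(gross / (1 + vat_rate), 2)

def validate_quantity(quantity):
    """Operation 2, added later: orders need a positive quantity."""
    return quantity > 0

# The complete suite guards all previous Operations on every change:
assert net_price(119.0, 0.19) == 100.0
assert validate_quantity(3) is True
assert validate_quantity(0) is False
```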

Aspects of Quality

As important as Functionality is, it´s not the driver for software development. No software has ever been written just to implement some operation in code. We don´t need computers just to do something. Everything computers can do with software we can do without them. Well, at least given enough time and resources.

We could calculate the most complex formulas without computers. We could do auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is… We don´t want to wait for the results very long. Or we want fewer errors. Or we want easier accessibility to complicated solutions.

So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software.

But Qualities come in at least two flavors:


Most important are Primary Qualities. That´s the Qualities software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I´d say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible.

Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website. Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security.

That´s why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It´s a supporting quality, so to speak. It does not deliver value by itself.

With a password manager software this might be different. There security might be a Primary Quality.

Please don´t get me wrong: I don´t want to denigrate any Quality. There´s a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects.

When confronted with Quality requirements check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean.

Aspects of Security of Investment

Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly.

Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality it´s bound to suffer under pressure.

To look closer at what SoI means will help to become more conscious about it and make customers and management aware of the risks of neglecting it.

SoI to me has two facets:


Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement.

This must not lead to duct tape programming and banging out features by the dozen, though. Because customers don´t just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focussing too much on Production Efficiency it will fire back.

Customers want PE not just today, but over the whole course of a software´s lifecycle. That means it´s not just about coding speed, but equally about code quality. If poor code quality leads to rework, PE is on an unsatisfactory level.

Also if code production leads to waste it´s unsatisfactory. Because the effort which went into waste could have been used to produce value.

Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers.

Thanks to the Agile and Lean movements that´s increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not.

Making software development more efficient is important - but still sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability.

Delivering correct high quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed the sooner it´s going to get stuck. Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements.

An evolvable code base is the opposite of a brownfield. It´s code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the costs of adding feature X to an evolvable code base are independent of when it is requested - or at least the costs should only increase linearly, not exponentially.[2]

Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the design ability reality in teams I see much room for improvement.

As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus the customer rarely states an explicit expectation with regard to it. That´s why I think, special care must be taken to not neglect it. Postponing it to some large refactorings should not be an option. Rather Evolvability needs to be a core concern for every single developer day.

This should not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That´s why more effort needs to be invested into it, to bring it on par with the other aspects, which usually are much more in focus.

In closing

As you see, requirements are of quite different kinds. To not take that into account will make it harder to understand the customer, and to make economic decisions.

Those sub-aspects of requirements are forces pulling in different directions. To improve performance might have an impact on Evolvability. To increase Production Efficiency might have an impact on security etc.

No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement.

Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes?

These and a thousand more questions do not mean anything should be different in a code base. But it´s important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and suboptimal decisions.

And how do we ensure to balance all requirement aspects?

That needs practices and transparency.

Practices means doing things a certain way and not another, even though that might be possible. We´re dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way at the whim of the moment.

Over the centuries rules and practices have been established for how to use knives. You don´t put them in people´s legs just because you feel like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it.

The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so requirements are balanced almost automatically.

In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, intellisense, refactoring support… That´s all nice and well. I don´t want to miss any of that. But I think it´s not enough. We´re missing mental tools, tools for making thinking and talking about software (independently of code) easier.

You might think enough of such tools already exist, like all those UML diagram types or Flow Charts. But then, isn´t it strange that hardly any team is using them to design software?

Or is that just due to a lack of education? I don´t think so. It´s a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver.

So my conclusion is, we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That´s like with flying an airplane. Pilots don´t just jump in and take off for their destination. Yes, there are times when they are “flying by the seats of their pants”, when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See “The Checklist Manifesto” for very enlightening details on this.

Maybe then I should say it like this: We need more checklists for the complex business of software development.[3]

  1. But that´s what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it´s going to stop. Probably never - until “maintainability” hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon. Because to all this there is a foreseeable end. Software development is like continuously and forever running…

  2. And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance of there being already functionality helping its implementation increases. That should lead to less costs of feature X if it´s requested later than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.[1]

  3. Please don´t get me wrong: I don´t want to bog down the “art” of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and decreasing waste and rework.

Wednesday, May 28, 2014 #

In a comment on my article on what I call Informed TDD (ITDD) reader gustav asked how this approach would apply to the kata “To Roman Numerals”. And whether ITDD wasn´t a violation of TDD´s principle of leaving out “advanced topics like mocks”.

I´d like to respond to his questions with this article. There´s more to say than fits into a comment.

Mocks and TDD

I don´t see how TDD avoids or is opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access, you can´t avoid using mocks. Because you don´t want to constantly run all your tests against the real resource.

True, in ITDD mocks seem to be in almost inflationary use. That´s not what you usually see in TDD demonstrations. However, there´s a reason for that, as I tried to explain. I don´t use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion.

But if you think of mocks as “advanced” or if you don´t want to use a tool like JustMock, then you don´t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

ITDD for “To Roman Numerals”

gustav asked for the kata “To Roman Numerals”. I won´t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines.

Now here is, how I would do this kata differently.

1. Analyse

A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled.

“Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear.

If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that´s your fellow developer.

Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that´s a violation of the Single Responsibility Principle (SRP) which not only should hold for software functional units but also for tasks or activities.

In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions.
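As a skeleton - sketched in Python here for brevity, whereas the kata´s code is C# with the signature `string ToRoman(int arabic)` - the API might look like this, deliberately without any behavior yet:

```python
class ArabicRomanConversions:
    """API shell for the kata; the conversion logic comes later."""

    @staticmethod
    def to_roman(arabic: int) -> str:
        # The interface is designed before the implementation exists.
        raise NotImplementedError
```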

Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article. They are supposed to be just “typical examples” without special meaning.

Given the acceptance test cases I then try to develop an understanding of the problem domain. I´ll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What´s more difficult, though, might be to find an efficient solution to convert into them automatically.

2. Solve

The usual TDD demonstration skips a solution finding phase. Like the interface exploration it´s mixed in with the implementation. But I don´t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They´re simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

Before you code you better have a plan what to code. This does not mean you have to do “Big Design Up-Front”. It just means: Have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that´s limited your solution will be limited, too.

Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution.

Here´s my solution approach:

The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members.

The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list.

The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to be single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution.

All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here´s the list of roman “digits” with their values:

{1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

Since I take IV, IX etc. as “digits” translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors.

If I move from the highest factor (M=1000) to the lowest (I=1) then translation is a three step process:

  1. Find all the factors
  2. Translate the factors found
  3. Compile the roman representation

Translation is just a look-up. Finding, though, needs some calculation:

  1. Find the highest remaining factor fitting in the value
  2. Remember and subtract it from the value
  3. Repeat with remaining value and remaining factors

Please note: This is just an algorithm. It´s not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.
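Being that close to code, the finding loop can be rendered in a few lines - a Python sketch with names of my own choosing, not the article´s C#:

```python
# Greedy factor finding: take the highest remaining factor first,
# subtract it, and repeat until nothing is left.
FACTOR_VALUES = [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1]

def find_factors(value):
    factors = []
    for factor in FACTOR_VALUES:
        while factor <= value:      # a factor may fit more than once (e.g. MM)
            factors.append(factor)  # remember it...
            value -= factor         # ...and subtract it from the value
    return factors

# 1954 = M + CM + L + IV:
assert find_factors(1954) == [1000, 900, 50, 4]
```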

With this solution in hand I finally can do what TDD advocates: find and prioritize test cases.

As I can see from the small process description above, there are three aspects to test:

  • Test the translation
  • Test the compilation
  • Test finding the factors

Testing the translation primarily means to check if the map of factors and digits is comprehensive. That´s simple, even though it might be tedious.

Testing the compilation is trivial.

Testing factor finding, though, is a tad more complicated. I can think of several steps:

  1. First check, if an arabic number equal to a factor is processed correctly (e.g. 1000=M).
  2. Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly.
  3. Then check, if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]).
  4. Finally check, if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly.

I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process.

3. Implement

First I write a test for the acceptance test cases. It´s red because there´s no implementation even of the API. That´s in conformance with “TDD lore”, I´d say:


Next I implement the API:


The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution:
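In a Python rendering (names mine; the article´s code is C#), the faked top level might read:

```python
def to_roman(arabic):
    # The solution expressed as a pipeline of three assumed functions.
    return compile_roman(translate(find_factors(arabic)))

# Stand-ins for functionality not yet implemented:
def find_factors(arabic):
    raise NotImplementedError

def translate(factors):
    raise NotImplementedError

def compile_roman(digits):
    raise NotImplementedError
```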


My hypothesis is that those three functions in conjunction produce correct results on the API-level. I just have to implement them correctly. That´s what I´m trying now – one by one.

I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition:


As you can see I dare to test a private method. Yes. That´s a white box test. But as you´ll see it won´t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right.

Here´s the implementation to satisfy the test:


It´s as simple as possible. Right how TDD wants me to do it: KISS.

Now for the second equivalence partition: translating multiple factors. (It´s a pattern: if you need to do something repeatedly, separate the tests for doing it once and for doing it multiple times.)


In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science:
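A Python sketch of that step (with the digit map abbreviated to what´s needed here):

```python
# Translation as a pure look-up, for one factor or many.
DIGITS = {1000: "M", 900: "CM", 50: "L", 4: "IV", 1: "I"}  # abbreviated

def translate(factors):
    return [DIGITS[f] for f in factors]

assert translate([1000]) == ["M"]                      # doing it once
assert translate([1000, 900, 4]) == ["M", "CM", "IV"]  # doing it repeatedly
```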


Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it.

What I find more important is having the two tests.

Now for the next low hanging fruit: compilation. It´s even simpler than translation.


A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple. I don´t need to test .NET framework functionality. But again: if it serves the educational purpose…


Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions:


Again, I´m faking the implementation first:


I focus on just the first test case. No looping yet.

Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat.

That´s left for a drill down with a test of the fake function:


There are two main equivalence partitions, I guess: either the first factor fits, or some later one does.

The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there´s always a matching factor. Which is the case since the smallest factor is 1.)


And the first of the equivalence partitions on the higher level also is satisfied:


Great, I can move on. Now for more than a single factor:


Interestingly not just one test becomes green now, but all of them. Great!


You might say that then I must not have done the simplest thing possible. And I would reply: I don´t care. I did the most obvious thing. And I find this loop very simple, too – even simpler than the recursion I had briefly thought of during the problem solving phase.
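As a sketch of such a loop (Python instead of the original C#, names assumed): greedily take the largest factor that still fits and subtract it, until nothing is left.

```python
# Sketch of the Find_factors() loop: repeatedly subtract the largest
# matching factor. A match always exists, since the smallest factor is 1.

FACTORS = [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1]

def find_factors(number: int) -> list[int]:
    factors = []
    while number > 0:
        factor = next(f for f in FACTORS if f <= number)  # largest that fits
        factors.append(factor)
        number -= factor
    return factors
```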

And by the way: Also the acceptance tests went green:


Mission accomplished. At least functionality wise.

Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, though, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren´t finished yet. But this way I saved myself the tedium of refactoring.

At the end, though, some refactoring is required. But maybe in a different way than you would expect. That´s why I rather call it “cleanup”.

First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant.


Which leads to a small conversion in Find_factors():


And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain.

However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”.

This then is my final test class:


And this is the final production code:
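In outline – sketched in Python rather than the original C#, so a hypothetical reconstruction, not the actual listing – the final code amounts to three small functions sharing one factor map:

```python
# Hypothetical reconstruction (Python, not the original C# listing):
# three small functions sharing a single factor map.

ROMAN_DIGITS = {
    1000: "M", 900: "CM", 500: "D", 400: "CD",
    100: "C", 90: "XC", 50: "L", 40: "XL",
    10: "X", 9: "IX", 5: "V", 4: "IV", 1: "I",
}

def find_factors(number: int) -> list[int]:
    factors = []
    while number > 0:
        # dict keys iterate in insertion order, i.e. largest first
        factor = next(f for f in ROMAN_DIGITS if f <= number)
        factors.append(factor)
        number -= factor
    return factors

def translate(factors: list[int]) -> list[str]:
    return [ROMAN_DIGITS[f] for f in factors]

def compile_roman(digits: list[str]) -> str:
    return "".join(digits)

def to_roman(number: int) -> str:
    return compile_roman(translate(find_factors(number)))
```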


Test coverage as reported by NCrunch is 100%:



Is this the smallest possible code base for this kata? Sure not. You´ll find more concise solutions on the internet.

But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case.

That´s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design.

I could also have implemented it bottom-up, since I knew the bottom of the solution: the leaves of the functional decomposition tree.

Where things became fuzzy because the design did not cover any more details – as with Find_factors() – I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem.

Using scaffolding tests (to be thrown away at the end) brought two advantages:

  • Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them.
  • I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

The bottom line for me thus is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – researcher, engineer, craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution – manifest it in code.

Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes.

And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for different reasons.


PS: Yes, I have done this kata several times before. But that only affects the time needed for phases 1 and 2. I won´t skip them because of it. And it provides no shortcuts during implementation either.

Saturday, May 24, 2014 #

Software development is an economic endeavor. A customer is only willing to pay for value. Whatever makes software valuable thus needs to become a trait of the software. We as software developers therefore need to understand requirements and then find a way to implement them.

Whether, or to what extent, a customer really can know beforehand what´s going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she´s certain to want something - but then is not happy when it´s delivered.

Nevertheless requirements exist. And developers will only be paid if they deliver value. So we better focus on doing that.

Although it might sound trivial, I think it´s important to state the corollary: we need to be able to trace anything we do as developers back to some requirement.

You decide to use Go as the implementation language? Well, what´s the customer´s requirement this decision is linked to? You decide to use WPF as the GUI technology? What´s the customer´s requirement? You decide in favor of a layered architecture? What´s the customer´s requirement? You decide to put code in three classes instead of just one? What´s the customer´s requirement behind that? You decide to use MongoDB over MySQL? What´s the customer´s requirement behind that? etc.

I´m not saying any of these decisions are wrong. I´m just saying: whatever you decide, be clear about the requirement driving your decision. You have to be able to answer the question: why do you think X will deliver more value to the customer than the alternatives?

Customers are not interested in romantic ideals of hard working, good willing, quality focused craftsmen. They don´t care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That´s all.

Fundamental aspects of requirements

If you´re like me, you´re probably not used to such scrutiny. You want to be trusted as a professional developer - and decide quite a few things following your gut feeling. Or by relying on “established practices”.

That´s ok in general and most of the time - but still… I think we should be more conscious about our decisions. That would make us more responsible, even more professional.

But without further guidance it´s hard to reason about many of the myriad decisions we´ve to make over the course of a software project.

What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements there are then smaller blobs. With them it´s easier to check whether a decision falls in their scope.


Sure, every project has its very own requirements. But all of them belong to just three major categories, I think: any requirement pertains either to functionality, to non-functional aspects, or to sustainability.


For short I call those aspects:

  • Functionality, because such requirements describe which transformations a software should offer. For example: A calculator software should be able to add and multiply real numbers. An auction website should enable you to set up an auction anytime or to find auctions to bid for.
  • Quality, because such requirements describe how functionality is supposed to work, e.g. fast or secure. For example: A calculator should be able to calculate the sine of a value much faster than you could in your head. An auction website should accept bids from millions of users.
  • Security of Investment, because functionality and quality need not just be delivered in any way. It´s important to the customer to get them quickly - and not only today, but over the course of several years. This aspect introduces time into the “requirements equation”.

Security of Investments (SoI) sure is a non-functional requirement. But I think it´s important to not subsume it under the Quality (Q) aspect. That´s because SoI has quite special properties.

For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair dryer) you consider it a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly with hardly any rust spots after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like) “unchangeability” (in the face of usage) is desirable.

With software you want the contrary. Software that cannot be changed is a waste. SoI for software means “changeability”. You want to be sure that the software you buy/order today can be changed, adapted, and improved over an unforeseeable number of years so as to fit changes in its usage environment.

But that´s not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it, cannot be determined by looking at metrics like Lines of Code or Cyclomatic Complexity or Afferent Coupling. They may give a hint… but they are far, far from precise.

That´s because of the nature of changeability. It´s different from performance or scalability. Also it´s because a customer cannot tell upfront, “how much” evolvability he/she wants.

Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely. A calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users etc. That´s all very or at least comparatively easy to determine.

But changeability… that´s a whole different thing. Nevertheless, over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to track how the frequency of “WTF”s from developers develops ;-)

F and Q are “timeless” requirement categories. Customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.

In closing

A customer´s requirements are not monolithic. They are not all made the same. Rather they fall into different categories. We as developers need to recognize these categories when confronted with some requirement - and take them into account. Only then can we make true professional decisions, i.e. conscious and responsible ones.

  1. I call this fundamental trait of software “changeability” and not “flexibility” to distinguish whose concern it is. “Flexibility” to me means the software as is can easily be adapted to a change in its environment, e.g. by tweaking some config data or adding a library which gets picked up by a plug-in engine. “Flexibility” thus is a matter of some user. “Changeability”, on the other hand, to me means the software can easily be changed in its structure to adapt it to new requirements. That´s a matter of the software developer.

Saturday, March 22, 2014 #

In the beginning there was, well, chaos. Software had no particular anatomy, i.e. no agreed-upon fundamental structure. It consisted of several different “modules” which were dependent on each other in arbitrary ways:


(Please note the line end symbol I´m using to denote dependencies. You´ll see in a minute why I´m deviating from the traditional arrow.)

Then came along the multi-layer architecture. A very successful pattern to bring order into chaos. Its benefits were twofold:

  1. Multi-layer architecture separated fundamental concerns recurring in every software.
  2. Multi-layer architecture aligned dependencies clearly from top to bottom.


How many layers there are in a multi-layer architecture does not really matter. It´s about the Separation of Concerns (SoC) principle and disentangling dependencies.

This was better than before – but it led to a strange effect: business logic was now dependent on infrastructure. Technically this was overcome sooner or later by applying the Inversion of Control (IoC) principle. That way the design time dependencies between layers were separated from the runtime dependencies.


This seemed to work – except now the implementation did not really mirror the design anymore. Also the layers and the very straightforward dependencies did not match a growing number of aspects anymore.

So the next evolutionary step in software anatomy moved away from layers and top-bottom thinking to rings. Robert C. Martin summed up a couple of these architectural approaches in his Clean Architecture:


It keeps and even details the separation of concerns, but changes the direction of the dependencies. They are pointing from technical to non-technical, from infrastructure to domain. The maxim is: don´t let domain specific code depend on technologies. This is to further the decoupling between concerns.

This leads to implementations like this:

For example, consider that the use case needs to call the presenter. However, this call must not be direct because that would violate “The Dependency Rule”: No name in an outer circle can be mentioned by an inner circle. So we have the use case call an interface (Shown here as Use Case Output Port) in the inner circle, and have the presenter in the outer circle implement it.

The same technique is used to cross all the boundaries in the architectures. We take advantage of dynamic polymorphism to create source code dependencies that oppose the flow of control so that we can conform to “The Dependency Rule” no matter what direction the flow of control is going in.

For Robert C. Martin the rings represent implementations as well as interfaces and calling an outer ring “module” implementation from an inner ring “module” implementation at runtime is ok, as long as design time dependencies of interfaces are just inward pointing.
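The quoted technique can be sketched minimally like this (Python for brevity; only the name UseCaseOutputPort is taken from the quote, everything else is made up):

```python
from abc import ABC, abstractmethod

# Inner circle: the use case owns the interface it calls.
class UseCaseOutputPort(ABC):
    @abstractmethod
    def present(self, result: str) -> None: ...

class UseCase:
    def __init__(self, output: UseCaseOutputPort) -> None:
        self.output = output

    def execute(self) -> None:
        # control flows outward, but the source code dependency points inward
        self.output.present("some result")

# Outer circle: the presenter implements the inner circle´s interface.
class ConsolePresenter(UseCaseOutputPort):
    def __init__(self) -> None:
        self.shown = None

    def present(self, result: str) -> None:
        self.shown = result
```

The use case calls outward at runtime, yet its source code depends only on an interface defined in its own, inner circle.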

While the Clean Architecture diagram looks easy, the actual code to me seems somewhat complicated at times.

Suggestion for a next evolutionary step

So far the evolution of software anatomy has two constants: it´s about separating concerns and aligning dependencies. Both are good in terms of decoupling, testability etc. – but my feeling is we´re hitting a glass ceiling. What could be the next evolutionary step? Even more alignment of dependencies?

No. My suggestion is to remove dependencies from the primary picture of software anatomy altogether. Dependencies are important, we can´t get rid of them – but we should stop staring at them.

Here´s what I think is the basic anatomy of software (which I call “software cell”):


The arrows here do not (!) mean dependencies. They are depicting data flow. None of the “modules” (rectangles, triangles, core circle) are depending on each other to request a service. There are no client-service relationships. All “modules” are peers in that they do not (!) even know each other.

The elements of my view roughly match the Clean Architecture like this:


Portals and Providers form a membrane around the core. The membrane is responsible for isolating the core from an environment. Portals and providers encapsulate infrastructure technologies for communication between environment and core. The core on the other hand represents the domain of the software. It´s about use cases, if you want, and domain objects.

My focus when designing software is on functionality. So all “modules” you see are functional units. They do, process, transform, calculate, perform. They are about actions and behavior.

In my view, the primary purpose of software design is to wire up functional units in a way that a desired overall behavior (functional as well as non-functional) is achieved. In short, it´s about building “domain processes” (supported by infrastructure). That´s why I focus on data flow, not on control flow. It´s more along the lines of Functional Programming, and less like Object Oriented Programming.

Here´s how I would zoom in and depict some “domain process”:


Some user interacts with a portal. The portal issues a processing request. Some “chain” of functional units work on this request. They transform the request payload, maybe load some data from resources in the environment, maybe cause some side effect in some resources in the environment. And finally produce some kind of result which is presented to the user in a portal.

None of these “process steps” knows the other. They follow the Principle of Mutual Oblivion (PoMO). That makes them easy to test. That makes it easy to change the process, because any data flow can be deviated without the producer or consumer being aware of it.

In the picture of Clean Architecture Robert C. Martin seems to hint at something like this when he defines “Use Case Ports”. But it´s not explicit. That, however, I find important: make flow explicit and radically decouple responsibilities.

Two pieces are missing from this puzzle: What about the data? And what about wiring up the functional units?

Well, you got me ;-) Dependencies returning. As I said, we need them. But differently than before, I´d say.

Functional units of data flows like above surely share data which means they depend on it:


If data is kept simple, though, such dependencies are not very dangerous. (See how useful it is to have two symbols for relationships between functional units: one for dependencies and one for data flow.)

So far, wiring up the flows just happens. Like building dependency hierarchies at runtime just happens. Usually the code to inject instances of layer implementations at runtime is not shown. But it´s there, and a DI container knows all the interfaces and their implementations.

For the next evolutionary step of software anatomy, however, I find it important to officially introduce the authority responsible for such wiring up: some integrator.


If integration is kept simple, though, such dependencies are not very dangerous. “Simple” here (as above with data) means: does not contain logic, i.e. expressions or control statements. If this Integration Operation Segregation Principle (IOSP) is followed, integration code might be difficult to test due to its dependencies – but it´s very simple to write and check during a review.
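A minimal sketch of PoMO plus IOSP (Python, all names hypothetical): the operations neither know nor call each other; the integrator only composes the data flow and contains no logic of its own.

```python
# Operations ("work horses"): heavy with logic, but oblivious of each
# other and therefore trivially testable in isolation (PoMO).
def parse_request(raw: str) -> list[int]:
    return [int(x) for x in raw.split(",")]

def calculate(values: list[int]) -> int:
    return sum(values)

def format_result(total: int) -> str:
    return f"total: {total}"

# Integration: only wires up the data flow - no expressions beyond
# calls, no control statements (IOSP).
def handle_request(raw: str) -> str:
    return format_result(calculate(parse_request(raw)))
```

Changing the process means rewiring handle_request; none of the operations needs to be touched or even know about it.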

Stepping back you can see that my dependency story is different from the ones so far:

  • There are no dependencies between functional aspects. They don´t do request/response service calls on each other, but are connected by data flows.
  • There are only dependencies between fundamental organizational concerns completely orthogonal to any domain: integration, operation, and data.


This evolved anatomy of software does not get rid of dependencies. You will continue to use your IoC and DI containers ;-) But it will make testing of “work horse code” (operations) easier, much easier. And the need for using mock frameworks will decrease. At least that´s my experience of some five years designing software like this.

Also, as you´ll find if you try this out, the specifications of classes change. Even with IoC a class is defined by 1+n interfaces: the interface it implements plus all the interfaces of the “service classes” it uses.

But with software cells and flows the class specifications consist only of 1 interface: the interface the class implements. That´s it. Well, at least that´s true for the operation classes which follow the PoMO. That´s useful because those classes are heavy with logic, so you want to make it as simple as possible to specify and test them.


The evolution of a basic software anatomy has come far – but there is still room for improvement. As long as everything revolves around dependencies between technological and domain aspects, there is unholy coupling. So the next consequent move is to get rid of those dependencies – and relegate them to the realm of organizational concerns. In my opinion the overall structure becomes much easier. Decluttered. More decoupled.

Why not give it a chance?


PS: For more details on flows, PoMO, and IOSP see my blog series here – or make yourself comfortable in a chair next to your fireplace and read my Leanpub eBook on it :-)

Sunday, March 16, 2014 #

Doing explicit software architecture is nothing for the faint of heart, it seems. Software architecture carries the nimbus of heavy weight, of something better left to specialists – or to eschew altogether. Both views are highly counterproductive, though. Leaving architecture to specialists builds walls where there should be none; it hampers feedback and flexibility. And not explicitly architecting (or just designing) software leads to less than desirable evolvability as well as collective software ownership.

So the question is, how to do “just enough” software design up-front. What´s the right amount? What´s an easy way to do it?

Since 2005 I´ve been on a quest to find answers to these questions. And I´m glad I have found some – at least for me ;-) I´ve lost my “fear of the blank flipchart” when confronted with some requirements document. No longer do I hesitate to start designing software. I´ve shrugged off the UML shackles, I´ve gotten off the misleading path of object oriented dogma.

This is not to say, there is no value in some UML diagrams or features of object oriented technologies. Of course there is – as long as it helps ;-)

But as with many practices, one never reaches the finish line when it comes to software architecture. Although I feel comfortable attacking just about any requirements challenge, it´s one thing to feel confident – and an altogether different thing to actually live up to the challenge. So I´m on a constant lookout for exercises in software architecture to further hone my skills. That means applying my method – which is a sampling of many approaches with some added idiosyncrasies – plus reflecting on the process and the outcome.

At the Coding Dojo of the Clean Code Developer School I´ve compiled more than 50 such exercises of different sizes (from small code/function katas to architecture katas). If you like, try them out yourself or with your team (some German language skills required, though ;-).

And recently I stumbled across another fine architecture kata. It´s from Simon Brown whose book “Software Architecture for Developers” I read. First the exercise was only in the book, but then he made it public. I included it in the Coding Dojo and added line numbers and page numbers. Find the full text for the requirements of a Financial Risk System here.


Since Simon included pictures of architectural designs for this exercise from students of his trainings, I thought, maybe I should view that as a challenge and try it myself. If I´m confident my approach to software architecture is helpful, too, then why not expose myself with it. Maybe there are interesting similarities to be discovered – maybe there are differences that could be discussed.

In the following I´ll approach the architecture for the Financial Risk System (FRS) my way. This will show how I approach a design task, but it might lack some explanation as to the principles underlying this approach. There´s not enough room here, though, to lay out my whole thought framework. But I´m working on a book… ;-) It´s called “The Architect´s Napkin – The Cheat Sheet”. But beware: for the first version it´s in German.

Why it´s called the Architect´s Napkin you´ll see in a minute – or read here. Just let me say: if software architecture is to become a discipline for every developer, it needs to be lightweight. And a design drawing should fit on an ordinary napkin. All else will tend to be too complicated and too hard to communicate.

And now for some design practice. Here´s my toolbox for software architects:


Basic duality: system vs environment

Every software project should start by focusing on what its job is – and what not. Its job is to build a software system. That´s what has to be at the center of everything. By putting something at the center, though, other stuff is not at the center. That´s the environment of what´s at the center. In the beginning (of a software project) there is thus a duality: a system to build vs the environment (or context) of the system.

And furthermore, the system to build is not created equal in all its parts. Within the system there is a core to be distinguished from the rest. At the core of the system is the domain. That´s the most important part of any software system. That´s what we need to focus on most. It´s for this core that a customer wants the system in the first place.


That´s a simple diagram. But it´s an important one as you´ll see. Best of all: you can draw it right away when your boss wants you to start a new software project ;-) It even looks the same for all software systems. Sure, it´s very, very abstract. But that helps as long as you don´t have a clue what the requirements are about.

Spotting actors

With the system put into the focus of my attention I go through the requirements and try to spot who´s actually going to use it. Who are the actors, who is actively influencing the system? I´m not looking for individual persons, but for roles. And these roles might be played by non-human actors, i.e. other software systems.

  • The first actor I encounter is such a non-human actor: the risk calculation scheduler. It requests the software system to run and produce a risk report. Line 8 and 9 in the requirements document allude to this actor.
  • Then on page 2, line 54f the second actor is described: risk report consumer. It´s the business users who want to read the reports.
  • Line 56f on the same page reveals a third actor: the calculation configurator. This might be the same business user who reads a report, but it´s a different role he´s playing when he changes calculation parameters.
  • Finally lines 111ff allude to a fourth actor: the monitoring scheduler. It starts the monitoring which observes the risk calculation.

The risk calculation scheduler and the monitoring scheduler are special in so far as they are non-human actors. They represent some piece of software which initiates some behavior in the software system.

Here´s the result of my first pass through the requirements:


Now I know who´s putting demands on the software system. All functionality is there to serve some actor. Actors need the software system; they trigger it in order to produce results [1].

Compiling resources

During my second pass through the requirements I focus on what the software system needs. Almost all software depends on resources in the environment to do its job. That might be a database or just the file system. Or it might be some server, a printer, a webcam, or some other hardware. Here are the resources I found:

  • Page 1, line 10f: the existing Trade Data System (TDS)
  • Page 1, line 11: the existing Reference Data System (RDS)
  • Page 2, line 52f: the risk report (RR)
  • Page 2, line 54f: some means to distribute the RR to the risk report consumers (risk report distribution, RRD)
  • Page 2, line 54f: a business user directory (BUD)
  • Page 2, line 56f: external parameters (EP) for the risk calculation
  • Page 3, line 92ff: an audit log (AL)
  • Page 4, line 111ff: SNMP
  • Page 4, line 117f: an archive (Arc)


Nine resources to support the whole software system. And all of them require some kind of special technology (library, framework, API) to use them.

The diagram as a checklist

What I´ve done so far was just analysis, not design. I merely harvested two kinds of environmental aspects: actors/roles and resources. But they are important for three reasons. Firstly, they help structure the software system, as you´ll see later. Secondly, they guide further analysis of the requirements. And last but not least, they serve as a checklist for the architect.

Each environmental element poses questions, some specific to its kind, some general. And by observing how easy or difficult it is to answer them, I get a feeling for the risk associated with them.

My third pass through the requirements is not along the document, but around the circle of environmental aspects identified. As I focus on each, I try to understand it better. Here are some questions I´d feel prompted by the “visual checklist” to ask:

  • Actor “Risk calculation scheduler”: How should the risk calculation be started automatically each day? What´s the operating system it´s running on anyway? Windows offers a task scheduler, on Linux there is Crontab. But there are certainly more options. Some more research is needed – and talking to the customer.
  • Actor “Risk report consumer”: How should consumers view reports? They need to be able to use Excel, but is that all? Maybe Excel is just a power tool for report analysis, and a quick overview should be gained more easily? Maybe it´s sufficient to send them the reports via email. Or maybe just a notification of a new report should be sent via email, and the report itself can be opened with Excel from a file share? I need to talk to the customer.
  • Actor “Calculation configurator”: What kind of UI should the configurator be using? Is a text editor enough to access a text file containing the parameters – protected by operating system access permissions? Or is a dedicated GUI editor needed? I need to talk to the customer.
  • Actor “Monitoring scheduler”: The monitoring aspect to me is pretty vague. I don´t really have an idea yet how to do it. I feel this to be an area of quite some risk. Maybe monitoring should be done by some permanently running windows service/daemon which checks for new reports every day and can be pinged with a heartbeat by the risk calculation? Some research required here, I guess. Plus talking to the customer about how important monitoring is compared to other aspects.
  • Resource “TDS”:
    • What´s the data format? XML
    • What´s the data structure? Simple table, see page 1, section 1.1 for details
    • What´s the data volume? In the range of 25000 records per day within the next 5 years (see page 3, section 3.b)
    • What´s the quality of the data, what´s the reliability of data delivery? No information found in the requirements.
    • How to access the data? It´s delivered each day as a file which can be read by some XML API. Sounds easy.
    • Available at 17:00h (GMT-4 (daylight savings time) or GMT-5 (winter time)) each day.
  • Resource “RDS”:
    • What´s the data format? XML
    • What´s the data structure? Not specified; need to talk to the customer.
    • What´s the data volume? Some 20000 records per day (see page 3, section 3b)
    • What´s the quality of the data, what´s the reliability of data delivery? No information found in the requirements.
    • How to access the data? It´s delivered each day as a file which can be read by some XML API. Sounds easy – but record structure needs to be clarified.
  • Resource “Risk report”:
    • What´s the data format? It needs to be Excel compatible, which could mean CSV is ok. At least that would be easy to produce. Need to ask the customer. If more than CSV is needed, e.g. Excel XML or worse, then research time has to be allotted, because I´m not familiar with appropriate APIs.
    • What´s the data structure? No information has been given. Need to talk to the customer.
    • What´s the data volume? I presume it depends on the number of TDS records, so we´re talking about some 25000 records of unknown size. Need to talk to the customer.
    • How to deliver the data? As already said above, I´m not sure if the risk report should be stored just as a data file or be sent to the consumers via email. For now I´ll go with sending it via email. That should deliver on the availability requirement (page 3, section 3.c) as well as on the security requirements (page 3, section e, line 82f, 84f, 89f).
    • Needs to be ready at 09:00h (GMT+8) the day following TDS file production.
  • Resource “Risk report distribution”: This could be done with SMTP. The requirements don´t state how the risk reports should be accessed. But I guess I need to clarify this with the customer.
  • Resource “Business user directory”: This could be an LDAP server or some RDBMS or whatever. The only thing I´d like to assume is that the BUD contains all business users who should receive the risk reports as well as the ones who have permission to change the configuration parameters. I would like to run a query on the BUD to retrieve the former, and use it for authentication and authorization of the latter. Need to talk to the customer for more details.
  • Resource “External parameters”: No details on the parameters are given. But I assume it´s not much data. The simplest thing probably would be to store them in a text file (XML, Json…). That could be protected by encryption and/or file system permissions, so only the configurator role can access it. Need to talk to the customer if that´d be ok.
  • Resource “Audit log”:
    • Can the AL be used for what should be logged according to page 3, section 3.f and 3.g?
    • Is there logging infrastructure already in place which could be used for the AL?
    • What are the access constraints for the AL? Does it need special protection?
  • Resource “SNMP”: I don´t have any experience with SNMP. But it sounds like SNMP traps can be sent via an API even from C# (which I´m most familiar with). Need to do some research here. The most important question is how to detect the need to send a trap (see the monitoring scheduler above).
  • Resource “Archive”:
    • Is there any Arc infrastructure already in place?
    • What are the access constraints for the Arc? Does it need special protection?
    • My current idea would be to store the TDS file and the RDS file together with the resulting risk report file in a zip file and put that on some file server (or upload it into a database server). But I guess I need to talk to the customer about this.
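Since TDS and RDS are delivered as XML files, the import side really should be easy. Here´s a minimal sketch of what reading such a file could look like in C# – the record structure (an id plus a value) is pure guesswork at this point, since neither schema has been clarified with the customer:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Hypothetical record type for TDS rows; the actual schema is unknown.
public record TDSRecord(string Id, decimal Value);

public static class TDSImport
{
    // Reads a TDS file, assuming a flat <tds><record id="…" value="…"/>… structure.
    public static TDSRecord[] Read(string tdsFilename) =>
        XDocument.Load(tdsFilename)
            .Descendants("record")
            .Select(r => new TDSRecord(
                (string)r.Attribute("id"),
                (decimal)r.Attribute("value")))
            .ToArray();
}
```

At 25000 records per day this is nowhere near a data volume that would require streaming APIs; loading the whole document at once should be fine.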

In addition to the environmental aspects there is the domain to ask questions about, too:

  • How are risks calculated anyway? The requirements don´t say anything about that. Need to talk to the customer, because that´s an important driver for the functional design.
  • How long will it take to calculate risks? No information on that either in the requirements document; is it more like some msec for each risk, or like several seconds or even minutes? Need to talk to the customer, because that´s an important driver for the architectural design (which is concerned with qualities/non-functional requirements).
  • When the TDS file is produced at 17:00h NYC time (GMT-5) on some day n it´s 22:00h GMT of the same day, and 06:00h on day n+1 in Singapore (GMT+8). This gives the risk calculation some 3 hours to finish. Can this be done with a single process? That needs to be researched. The goal is, of course, to keep the overall solution as simple as possible.
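The time zone arithmetic above is easy to get wrong, so here´s a small sketch double-checking it with .NET´s DateTimeOffset. For simplicity I assume fixed offsets (GMT-5 for NYC winter time, GMT+8 for Singapore) and ignore daylight savings:

```csharp
using System;

public static class TimeWindow
{
    // TDS file produced at 17:00h New York time (GMT-5, winter time);
    // report due at 09:00h Singapore time (GMT+8) the following day.
    public static TimeSpan CalculationWindow(DateTime dayOfProduction)
    {
        var produced = new DateTimeOffset(dayOfProduction.Date.AddHours(17), TimeSpan.FromHours(-5));
        var due = new DateTimeOffset(dayOfProduction.Date.AddDays(1).AddHours(9), TimeSpan.FromHours(8));
        return due - produced; // 22:00h GMT to 01:00h GMT next day = 3 hours
    }
}
```

Three hours it is – not a lot of headroom if the calculation turns out to be expensive.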

So much for a first run through the visual checklist the system-environment diagram provides.

[Image: system-environment diagram]

The purpose of this was to further understand the requirements – and identify areas of uncertainty. This way I got a feeling for the risks lurking in the requirements, e.g.

  • The largest risk to me currently is with the domain: I don´t know how the risk calculation is done, which has a huge impact on the architecture of the core.
  • Then there is quite some risk in conjunction with infrastructure. What kind of infrastructure is available? What are the degrees of freedom in choosing new infrastructure? How rigid are security requirements?
  • Finally there is some risk in technologies I don´t have any experience with.

Here´s a color map of the risk areas identified:

[Image: color map of risk areas on the system-environment diagram]

With such a map in my hand, I´d like to talk to the customer. It would give us a guideline in our discussion. And it´s a list of topics for further research. Which means it´s kind of a backlog for things to do and get feedback on.

But alas, the customer is not available. Sounds familiar? ;-) So what can I do? Probably the wisest thing to do would be to stop further design and wait for the customer. But this would spoil the exercise :-) So I´ll continue to design, tentatively. And hopefully this does not turn out to be waste in the end.

Refining to applications – Design for agility

The FRS is too big to be implemented or even further discussed and designed as a whole. It needs to be chopped up :-) I call that slicing, in contrast to the usual layering. At this point I´m not interested in the more technical details that layers represent. I´d like to view the system through the eyes of the customer/users. So the next step for me is to find increments that make sense to the customer and can be focused on in turn.

For this slicing I let myself be guided by the actors of the system-environment-diagram. I´d like to slice the system in a way so that each actor gets its own entry point into it. I call that an application (or app for short).

[Image: the software system sliced into applications, one per actor]

Each app is a smaller software system by itself. That´s why I use the same symbol for them as for the whole software system. Together the apps make up the complete FRS. But each app can be delivered separately and provides some value to the customer. Or I work on some app for a while without finishing it, then switch to another to move it forward, then switch to yet another etc. Round and round it can go ;-) Always listening to what the customer finds most important at the moment – or where I think I need feedback most.

As you can see, each app serves a single actor. That means, each app can and should be implemented in a way to serve this particular actor best. No need to use the same platform or UI technologies for all apps.

Also the diagram shows how I think the apps share resources:

  • The Risk Report Calculation app needs to read config data from EP and produces a report RR to be sent to business users listed in BUD. Progress or hindrances are logged to AL.
  • The Config Editor also needs to access BUD to check who´s authorized to edit the data in EP. Certain events are logged to AL.
  • The Report Reader just needs to access the report file. Authorization is implicit: since the business user is allowed to access the RR folder on some file share, he can load the report file with Excel. But if need be, the Report Reader could be more sophisticated and require the business user to authenticate himself. Then the report reader would also need access to BUD.
  • The Monitor app checks the report file folder each day to see if a file has arrived. In addition the Monitor app could be a resource to the Report Calculation, which could send it a heartbeat to signal it´s doing well.

The other resources are used just by the Report Calculation.

Now that I have sliced up the whole software system into applications, I can focus on them in turn. What´s the most important one? Where should I zoom in?

Hosting applications – Design for quality

Zooming in on those applications can mean two things: I could try to slice them up further. That would mean I pick an application and identify its dialogs and then the interactions within each dialog. That way I´d reach the function level of a software system, where each function represents an increment. Such slicing would be further structuring the software system from the point of view of the domain. It would be agile design, since the resulting structure would match the view of the customer. Applications, dialogs, and interactions are of concern to him.

Or I could zoom in from a technical angle. I´d leave the agile domain dimension of design which focuses on functionality. But then which dimension should I choose? There are two technical dimensions, in my view. One is concerned with non-functional requirements or qualities (e.g. performance, scalability, security, robustness); I call it the host dimension. Its elements describe the runtime structure of software. The other is concerned with evolvability and production efficiency (jointly called the “security of investment” aspect of requirements); I call it the container dimension. Its elements describe the design time structure of software.

So, which dimension to choose? I opt for the host dimension. And I focus on the Risk Report Calculation application. That´s the most important app of the software system, I guess.

Whereas the domain dimension of my software architecture approach decomposes a software system into ever smaller slices called applications, dialogs, and interactions, the host dimension provides decomposition levels like device, operating system process, or thread.

Focusing on one app the questions thus are: How many devices are needed to run the app so it fulfills the non-functional requirements? How many processes, how many threads?

What are the non-functional requirements determining the number of devices for the calculation app of the FRS? It needs to run in the background (page 1, line 7f), it needs to generate the report within 3 hours (line 28 + line 63), it needs to be able to log certain events (page 3, line 94) and be monitored (page 4, lines 111ff).

How many devices need to be involved to run the calculation strongly depends on how long the calculations take. If they are not too complicated, then a single server will do.

And how many processes should make up the calculation on this server? Reading section 2, lines 43ff I think a single process will be sufficient. It can be started automatically by the operating system, it is fast enough to do the import, calculation, notification within 3 hours. It can have access to the AL resource and can be monitored (one way or the other).

At least that´s what I want to assume lacking further information as noted above.
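Under those assumptions the whole host design boils down to one scheduled console process. A rough sketch of its skeleton – all step names and parameters are placeholders, not a final design:

```csharp
using System;

// Minimal sketch of the single calculation process. Assumption: it is started
// automatically (e.g. by the OS scheduler) once the TDS file has arrived,
// runs all steps in sequence, and signals success/failure via its exit code.
public static class CalculationProcess
{
    public static int Run(Action import, Action calculate, Action notify, Action<string> auditLog)
    {
        try
        {
            import();     // read TDS + RDS files
            calculate();  // produce the risk report
            notify();     // inform business users, archive inputs + report
            return 0;     // success - the monitor will find the report file
        }
        catch (Exception ex)
        {
            auditLog(ex.Message); // failures go to the audit log (AL)
            return 1;
        }
    }
}
```

A non-zero exit code plus an AL entry would be one cheap way to make failures visible to the monitoring side.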

[Image: host diagram – one device running one process]

Of course this host diagram again is a checklist for me. For each new element – device, process – I should ask appropriate questions, e.g.

  • Application server device:
    • Which operating system?
    • How much memory?
    • Device name, IP address?
    • How can an app be deployed to it?
  • Application process:
    • How can it be started automatically?
    • Which runtime environment to use?
    • What permissions are needed to access the resources?
    • Should the application code own the process or should it run inside an application server?

The device host diagram and the process host diagram look pretty much the same. That´s because both contain only a single element. In other cases, though, a device is decomposed into several processes. Or there are several devices, each with more than one process.

Also, these are only the processes that seem necessary to fulfill quality requirements. More might be added to improve evolvability.

Nevertheless drawing those diagrams is important. Each host level (device, process – plus three more) should be checked. Each can help to fulfill certain non-functional requirements, for example: devices are about scalability and security, processes are about robustness and performance and security, threads are about hiding or lowering latency.

Separating containers – Design for security of investment

Domain slices are driven directly by the functional requirements. Hosts are driven by quality concerns. But what drives elements like classes or libraries? They belong to the container dimension of my design framework. And they can only partly be derived directly from requirements. I don´t believe in starting software design with classes. They don´t provide any value to the user. Rather I start with functions (see below) – and then find abstractions on higher container levels for them.

Nevertheless the system-environment-diagram already hints at some containers. On what level of the container hierarchy they should reside is an altogether different question. But at least separate classes (in OO languages) should be defined to represent them. Usually separate libraries are warranted, too, for even more decoupling.

Here´s the simple rule for the minimum number of containers in any design:

  • communication with each actor is encapsulated in a container
  • communication with each resource is encapsulated in a container
  • the domain of course needs its own container – at least one, probably more
  • a dedicated container to integrate all functionality; usually I don´t draw this one; it´s implicit and always present, but if you like, take the large circle of the system as its representation

[Image: container diagram of the calculation app – portal, providers, domain]

Each container has its own symbol: the actor facing container is called a portal and drawn as a rectangle, the resource facing containers are called providers and drawn as triangles, and the domain is represented as a circle.

That way I know the calculation app will consist of 9+1+1+1=12 containers. It´s a simple and mechanical separation of concerns. And it serves the evolvability as well as production efficiency.

By encapsulating each actor/resource communication in its own container, it can be more easily replaced, tested, and implemented in parallel. Also this decouples the domain containers from the environment.

Interestingly, though, there is no dependency between these containers! At least in my world ;-) None knows of the others. Not even the domain knows about them. This makes my approach different from the onion architecture or clean architecture, I guess. They prescribe certain dependency orientations. But I don´t. There are simply no dependencies :-) Unfortunately I can´t elaborate on this further right here. Maybe some other time… For you to remember this strange approach here is the dependency diagram for the containers:

[Image: dependency diagram of the containers]

No dependencies between the “workhorses”, but the integration knows them all. But such high efferent coupling is not dangerous. The integration does not contain any logic; it does not itself depend on the operations of the other containers. Integration is a special responsibility of its own.
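To make this less abstract, here is a minimal sketch of what “no dependencies between the workhorses” means in code. All types and members are hypothetical placeholders, not the real contracts:

```csharp
using System;

// None of the workhorse containers knows any other; only the integration
// sees them all and passes data along between them.

public class TDSProvider
{
    public string[] Read() => new[] { "tds1", "tds2" }; // stand-in for a real import
}

public class RiskCalculation
{
    public string CalculateRisks(string[] records) => $"report({records.Length})";
}

public class ReportProvider
{
    public void Store(string report) => Console.WriteLine("stored " + report);
}

// The integration has high efferent coupling - but contains no logic itself.
public class CalculationApp
{
    readonly TDSProvider tds = new TDSProvider();
    readonly RiskCalculation calc = new RiskCalculation();
    readonly ReportProvider report = new ReportProvider();

    public void Run() => report.Store(calc.CalculateRisks(tds.Read()));
}
```

Note how RiskCalculation neither reads files nor stores reports; it only transforms data handed to it. That´s what keeps the domain free of environment dependencies.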

Although I know a minimal set of containers, I don´t know much about their contracts. Each encapsulates some API through which it communicates with a resource/actor. But how each container offers its services to its environment is not yet clear. I could speculate about it, but most likely that would violate the YAGNI principle.

There is a way, however, to learn more about those contracts – and maybe find more containers…

Zooming in – Design for functionality

So far I´ve identified quite some structural elements. But how does this all work together? Functionality is the first requirement that needs to be fulfilled – although it´s not the most important one [2].

I switch dimensions once again in order to answer this question. Now it´s the flow dimension I´m focusing on. How does data flow through the software system and get processed?

On page 2, section 2 the requirements document gives some hints. This I would translate into a flow design like so. It´s a diagram of what´s going on in the calculation app process [3]:

[Image: flow design of the calculation app process]

Each shape is a functional unit which does something [4]. The rectangle at the top left is the starting point. It represents the portal. That´s where the actor “pours in” its request. The portal transforms it into something understandable within the system. The request flows to the Import which produces the data Calculate then transforms into a risk report.

That´s the overall data flow. But it´s too coarse grained to be implemented. So I refine it:

  • Zooming into Import reveals two separate import steps – which could be run in parallel – plus a Join producing the final output.
  • Zooming into Calculate reveals several processing steps. First the input from the Import is transformed into risk data. Then the risk data is transformed into the actual risk report, of which the business users are then informed. Finally the TDS/RDS input data (as well as the risk report) gets archived.

The small triangles hint at some resource access within a processing step. Whether the functional unit itself would do that or if it should be further refined, I won´t ponder here. I just wanted to quickly show this final dimension of my design approach.
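The refined Import might be sketched like this – the two import steps running in parallel, followed by a Join. The record types and file readers are hypothetical stand-ins for the real providers:

```csharp
using System.Threading.Tasks;

public static class ImportFlow
{
    public record Joined(string[] Tds, string[] Rds);

    public static async Task<Joined> Run(string tdsFile, string rdsFile)
    {
        // The two import steps could run in parallel...
        var tds = Task.Run(() => ReadTds(tdsFile));
        var rds = Task.Run(() => ReadRds(rdsFile));
        await Task.WhenAll(tds, rds);
        // ...and the Join produces the final output for Calculate.
        return new Joined(tds.Result, rds.Result);
    }

    // Placeholders for the real TDS/RDS providers.
    static string[] ReadTds(string file) => new[] { "tds:" + file };
    static string[] ReadRds(string file) => new[] { "rds:" + file };
}
```

Whether parallelizing the imports is worth it depends, again, on how tight the 3 hour window really is.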

For Import TDS and Import RDS I guess I could derive some details about the respective container contracts. Both seem to need just one function, e.g. TDSRecord[] Import(string tdsFilename).
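Generalizing that signature, the two import provider contracts might look like this – with the record types as placeholders for the still unknown data structures:

```csharp
// One possible shape for the import provider contracts. TDSRecord is taken
// from the flow design; RDSRecord is guesswork until the customer clarifies
// the RDS record structure.

public record TDSRecord(string Id, decimal Value);
public record RDSRecord(string Id, string Payload);

public interface ITDSImport
{
    TDSRecord[] Import(string tdsFilename);
}

public interface IRDSImport
{
    RDSRecord[] Import(string rdsFilename);
}
```

Such single-function contracts also make the providers trivially replaceable by test doubles.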

The other functional units hint at some more containers to consider. Report generation (as opposed to report storage) looks like a different responsibility than calculating risks, for example. Also Import and Calculate have a special responsibility: integration. They see to it that the functional units form an appropriate flow.

The domain thus is decomposed into at least three containers:

  • integration
  • calculation
  • report generation

Each responsibility warrants its own class, I´d say. That makes it 12-1+3=14 containers for the calculation application.

Do you see how those containers are a matter of abstraction? I did not start out with them; rather I discovered them by analyzing the functional structure, the processing flow.


So much for my architectural design monologue ;-) Because a monologue it had to be, since the customer was not at hand to answer my many questions. Nevertheless I hope you got an impression of my approach to software design. The steps would not have been different if a customer had been available. However the resulting structure might look different.

Result #1: Make sure the customer is close by for questions when you start your design.

The exercise topic itself I found not particularly challenging. The interesting part was missing ;-) No information on what “calculating risks” means. But what became clear to me once more was:

Result #2: Infrastructure is nasty

There are so many risks lurking in infrastructure technologies and security constraints and deployment and monitoring. Therefore it´s even more important to isolate those aspects in the design. That´s what portals and providers are for.

To write up all this took me a couple of hours. But the design itself maybe was only half an hour of effort. So I would not call it “big design up-front” :-)

Nonetheless I find it very informative. I would not start coding with less. Now I can talk about focus and priorities with the customer. Now I can split up work between developers. (Ok, some more discussion and design would be needed to make the contracts of the containers clearer.)

And what about the data model? What about the domain model? You might be missing the obligatory class diagram linking data heavy aggregates and entities together.

Well, I don´t see much value in that in this case – at least from the information given in the requirements. The domain consists of importing data, calculating risks and generating a report. That´s the core of the software system. And that´s represented in all diagrams: the host diagram shows a process which does exactly this, the container diagram shows a domain responsible for this, and the flow diagram shows how several domain containers play together to form this core.

Result #3: A domain model is just a tool, not a goal.

All in all this exercise went well, I´d say. Not only did I use flows to design part of the system, I also felt my work flowing nicely. No moment of uncertainty how to move on.


[1] Arguably the non-human actors in this scenario don´t really need the software system. But as you´ll see it helps to put them into the picture as agents to cause reactions of the software system.

[2] The most important requirements are the primary qualities. It´s for them that software gets commissioned. Most often that´s performance, scalability, and usability. Some operations should be executed faster, with more load, and be easier to use through software, than without. But of course, before some operation can become faster it needs to be present first.

[3] If this reminds you of UML activity diagrams, that´s ok ;-)

[4] Please note my use of symbols for relationships. I used lines with dots at the end to denote dependencies. In UML, arrows are used for that purpose. However, I reserve arrows for data flow. That´s what they are best at: showing from where to where data flows.

Thursday, March 13, 2014 #

Antifragility has attracted some attention lately. I, too, pressed the “I like” button. :-) A cool concept, a term that was missing. Just when Agility starts to become lame, Antifragility takes over the torch to carry on the light of change… ;-)

What I find sad, though, is that the discussion seems to be too technical too soon. There´s the twitter hash tag #antifragilesoftware, for example. It suggests there are tools, technologies, and methods to make software antifragile. But that´s impossible, I´d say. And so the hash tag is misleading and probably doing more harm than good to the generally valuable concept of Antifragility.

Yes, I´m guilty of using the hash tag, too. At first I hadn´t thought about Antifragility enough and was just happy someone had brought up the term. Later I used it so as not to alienate others with yet another hash tag; I succumbed to the established pattern.

But here I am, trying at least once to say why I think the hash tag is wrong – even though it´s well meaning.

Antifragility as a property

Antifragility is a property. Something can be fast, shiny, edible – or antifragile. If it´s antifragile it thrives on change, it becomes better due to stress. What is antifragile “needs to be shaken and tossed” to improve. “Handle with care” is poison to whatever is antifragile.

As I explained in a previous article, I think Antifragility comes from buffers – which get adapted dynamically. Increase of buffer capacity in response to stress is what distinguishes Antifragility from robustness.

In order to determine if something is antifragile, we need to check its buffers. Are there buffers for expectable stresses? How large is their capacity before stress? Is stress on a buffer-challenging level? How large are the buffer capacities after such stress? If the buffer capacities are larger after challenging stress – of course given a reasonable recuperation period – then I guess we can put the label “Antifragile” on the observed.

Antifragility is DIY

So far it seems Antifragility is a property like any other. Maybe it´s new, maybe it´s not easy to achieve, but in the end, well, just another property we can build into our products. So why not build antifragile blenders, cars, and software?

Here´s where I disagree with the suggestion of #antifragilesoftware. I don´t think we can build Antifragility into anything material. And I include software in this, even though you might say it´s not material, but immaterial, virtual. Because what software shares with material things is: it´s dead. It´s not alive. It does not do anything out of its own volition.

But that´s what is at work in Antifragility: volition. Or “the will to live” to get Schopenhauer into the picture after I called Nietzsche the godfather of Antifragility ;-)

Antifragility is only a property of living beings, maybe it´s even at the core of the definition of what life is.

Take a glass, put it in a box, shake the box-with-glass: the glass cracks. The stress on the buffers of the box-with-glass was challenging. Now put some wood wool around the glass in the box. Shake the box-with-glass: the glass is not further damaged. Great! The box-with-glass is antifragile, right?

No, of course not. The box-with-glass is just robust. Because the box-with-glass did not change itself. It was built with a certain buffer. That buffer was challenged by stress. And that´s it. The box-with-glass did not react to this challenge – except it suffered it. It even deteriorated (the glass cracked).

It was you as the builder and observer who increased some buffer of the box-with-glass after the stress. You just improved its robustness.

If the box-with-glass was antifragile it would have reacted itself to the stress. It would have increased its buffer itself. That way the box-with-glass would have shown it learned from the stress.

Antifragility thus essentially is a Do-it-yourself property. Something is only antifragile if it itself reacts to stresses by increasing some buffer. Antifragility thus is a property of autopoietic systems only. Or even: Autopoiesis is defined by Antifragility.

To put it short: If you´re alive and want to stay alive you better be Antifragile.

Antifragility is needed to survive in a changing environment.

So if you want to build antifragility into something, well, then you have to bring it to life. You have to be a Frankenstein of some sorts ;-) Because after you built it, you need to leave it alone – and it needs to stay alive on its own.

You see, that´s why I don´t believe in #antifragilesoftware. Software is not alive. It won´t change by itself. We as its builders and observers change it when we deem it necessary.

Software itself can only be robust. We build buffers of certain sizes into it. But those buffers will never change out of the software´s own volition. At least not before the awakening of Skynet :-)

So forget SOLID, forget µServices, forget messaging, RabbitMQ or whatever your pet principles and technologies might be. Regardless of how much you apply them to software, the software itself will not become antifragile. Not even if you purposely (or accidentally) build a god class :-)

Enter the human

Since we cannot (yet) build anything material that´s alive, we need to incorporate something that´s alive already, if we want to achieve Antifragility. Enter the human.

If we include one or more humans in the picture, we actually can build living systems, i.e. systems with the Antifragility property. Before we had just technical systems; now we have social systems.

Not every social system is alive, of course. Take the people sitting on a bus. They form a social system, but I would have a hard time calling it autopoietic. There´s nothing holding those people together except a short term common destination (a bus stop). And even this destination does not cause the group of people to try to keep itself together in case the bus breaks down. Then they scatter and each passenger tries to reach his/her destination by some other means.

But if you put together people in order to achieve some goal, then the social system has a purpose – and it will try “to stay alive” until the goal is reached. Of course for this they need to be bestowed with sufficient autonomy.

I think a proper term for such a purpose oriented social system would be organization. A company is an organization, a team is an organization.

An organization like any other system has buffers. It can withstand certain stresses. But above that it is capable of extending its buffers, if it feels that would help its survival towards fulfilling its purpose. That´s what autonomy is for – at least in part.

“Dead” systems, i.e. purely material systems (including software) usually deteriorate under stress – some faster, some slower. At best they might be able to repair themselves: If a buffer got damaged they can bring it back to its original capacity. But then they don´t decide to do this; that´s what “just happens”. That´s how they are built, that´s an automatic response to damage, it´s a reflex.

“Dead” systems however don´t deliberate over buffer extension. They don´t learn. They don´t anticipate, assess risks, and as a result direct effort this way or that to increase buffer capacity to counter future stresses.

Let´s be realistic: that´s the material systems (including software) we´re building. Maybe in some lab mad scientists are doing better than that ;-), but I´d say that´s of no relevance to the current Antifragility discussion.

Software as a tool

If we want to build antifragile systems we´re stuck with humans. Antifragility “built to order” requires a social system. So how does software fit into the picture?

What´s the purpose of a software development team? It´s to help the customer, or rather the users, satisfy some needs more easily than without it. The customer for example says: “I want to get rich quick by offering an auction service to millions of people.” The customer then would be happy to accept any (!) solution, be that software or a bunch of people writing letters or trained ants. As it happens, though, software (plus hardware, of course) seems to make it easier than training ants to reach this goal :-) That´s the only reason the “solution team” will deliver software to the customer. Software thus is just a tool. It´s not a purpose, it´s not a goal, it´s a tool fitting a job at a specific moment.

Of course, a software team would have a hard time delivering something other than software. That way software looks larger than a tool, it looks like a goal, even a purpose. But it is not. It is just a tool, which a software team builds to get its job “Make the life of the users easier” done.

I don´t want to belittle what developing software means. It´s a tough job. Software is complex. Nevertheless it´s just a tool.

On the other hand this view makes it easier to see how Antifragility and software go together. Since software is a tool, it can help or hinder Antifragility. A tool is a buffer. A tool has buffers. These buffers of course serve Antifragility like any other buffers of a system.

So instead of saying software should become antifragile – which it cannot – we should say the socio-technical system consisting of humans plus software should become antifragile. A software company or a software team, as organizations, can have the property of Antifragility – or they can lack it.

A software producing organization can come under pressure. All sorts of buffers then can be deployed to withstand the stress. And afterwards the organization can decide to increase all sorts of buffers to cope with future stress in an even better way.

Antifragile consequences

µServices or an OR mapper or SOLID or event sourcing are certain approaches to build software. They might help to make its buffers larger. Thus they would help the overall robustness of the organization building the software. But the software itself stays a tool. The software won´t become antifragile through them. Antifragility needs deliberation.

Robustness of any order in a software system is just the foundation of Antifragility. Antifragility can build on it, because robustness helps survival. It´s not Antifragility, though. That´s Nassim Taleb´s point. Antifragility emerges only if there is some form of consciousness at play. Stress and buffer depletion need to be observed and assessed. Then decisions as to how to deal with the resulting state of buffers have to be taken. Attention has to be directed to some buffers, and withdrawn from others. And energy has to be channeled to some to repair or even increase them. Which on the other hand means energy has to be withdrawn from others. That´s why buffers shrink.

Shrinking buffers are a passive effect of energy directed elsewhere. Buffers are rarely actively deconstructed. Rather they atrophy due to lack of attention and energy. That´s what´s happening when you focus your attention on the rearview mirror while driving. You might be increasing your buffer with regard to approaching cars – but at the same time the buffer between your car and the roadside might shrink, because you diverted your attention from looking ahead and steering.

If you want to be serious about Antifragility as a property of a socio-technical system, then you have to be serious about values, strategy, reflection, efficient decision making, autonomy, and accountability. There´s no Antifragility without them. Because Antifragility is about learning, and living. Software is just a tool, and technologies and technical principles can only buy you buffer capacity. But who´s to decide where to direct attention and energy? Who´s to decide which buffers to increase in response to stress? That requires human intelligence.

First and foremost Antifragility is about people.

That´s why I suggest dropping #antifragilesoftware in favor of something more realistic like #antifragilityinsoftwaredevelopment. Ok, that´s a tad long. So maybe #antifragileteams?

Wednesday, March 5, 2014 #

The notion of Antifragility has hit the software community, it seems. Russ Miles has posted a couple of small articles on it in his blog, asking what it could mean for software development – but also listing a couple of ingredients he thinks are needed.

I´ve read the canonical book on antifragility by Nassim Taleb, too. And I second Russ´ description: Antifragile software is software “that thrives on the unknown and change”.

But still… what does it mean? “To thrive on the unknown and change” is too fuzzy to help me in my day-to-day work. Thus I cannot even judge whether Russ is right when he says micro services help to implement Antifragility.

That´s why I want to share some thoughts on the topic here. Maybe writing about it helps to clarify the issue for myself – at least ;-)

Is Antifragility something to strive for?

To me that´s a definite Yes. Yes, I want my software to be antifragile.

Clearly software should not be brittle/fragile.

That´s why there exists a canonical non-functional requirement called robustness.

But antifragility goes beyond that. Software should not just withstand stress, it should become better and better by being stressed. Software should improve because of strain, not despite it.

To be more concrete: Changing requirements – functional as well as non-functional – should cause software to be able to adapt even easier to further changes.

The status quo seems to be the opposite: The more changes are heaped onto a software, the more difficult changing it becomes. What we end up with is legacy code, a brownfield, a big ball of mud, spaghetti code. All these terms denote the same thing: a fragile code base. It´s fragile because the next change request might cause it to break down, i.e. to enter a state where it´s economically infeasible to change it any further.

So, yes, I guess Antifragility is something we really need.

Fragility and robustness

But how can we implement Antifragility? How can we check, if approach A, tool T, concept C, principle P etc. benefit Antifragility or harm it?

I guess, we need to start with fragility and robustness to understand Antifragility.

What´s fragility? It´s when a small change in the environment causes harm. A glass vase is fragile, because a slight change in pressure can cause it to break. A football in contrast is not fragile. It´s robust with regard to pressure.

But maybe a football is not robust with regard to heat. Maybe it´s flammable – and the glass vase is not.

What distinguishes a glass vase from a football is their capability to compensate for changes in certain environmental parameters. For short I call this capability a buffer.

If something is more robust or less fragile than something else, its buffers are larger. A glass vase and a football differ in their pressure buffers and their temperature buffers.

That means, if you want to make something more robust, then you need to increase its buffer regarding certain changes. For that you have to know the potential forces/stressors/attack angles. Do you want an item to be more robust regarding temperature or pressure or vibration or radiation or acceleration or viruses or traumata?

Into software you can build buffers for performance, scalability, security, portability etc. Software “can come under attack” from all sorts of non-functional angles. Today it´s 10 users per second, tomorrow it´s 1000 users. That´s stress on a software system. Does it have buffer capacity to compensate for it? If so, it´s robust. If not, it´s fragile with regard to the number of users per second.
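The buffer idea can be sketched in a few lines of Python. The class, the stressor name, and the numbers are purely illustrative assumptions, not part of any real system:

```python
class Buffer:
    """A robustness buffer: the capacity to absorb one kind of stress."""

    def __init__(self, kind, capacity):
        self.kind = kind
        self.capacity = capacity

    def withstands(self, stress):
        # Robust as long as the stress stays within the buffer's capacity.
        return stress <= self.capacity


# A system provisioned for 100 users per second:
load = Buffer("users per second", 100)
print(load.withstands(10))    # True - today's load is compensated
print(load.withstands(1000))  # False - tomorrow's load exceeds the buffer
```

Making the system more robust simply means increasing `capacity` – for this one stressor. A large pressure buffer says nothing about the temperature buffer.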


With this definition of fragility/robustness in place, what does Antifragility mean? Nassim Taleb says, it goes beyond robustness. Robustness is not the opposite of fragility, but just fragility on another level or less fragility.

I´d like to call Nietzsche to the witness stand. To me he´s the godfather of Antifragility, so to speak, because he says:

“What doesn´t kill me, makes me stronger.” (Maxims and Arrows, 8)

Yes, that´s what to me in a nutshell describes Antifragility.

Let me dissect his statement:

First, Nietzsche acknowledges there are forces/stressors that can destroy an item. And I´d say Nassim Taleb does not object. Antifragility does not equal immortality.

Destruction occurs if the buffer capacity of some robustness is exceeded. A glass vase might withstand a larger temperature change than a football. But in the end even glass melts and thus the vase form is destroyed.

That´s the same for antifragile systems. They have buffers, too. And those buffers can be stressed too hard.

But here´s the catch: What if the buffer capacity is not exceeded? What if some stress could be compensated?

For a more or less fragile/robust item this means, well, nothing special. It did not get destroyed. That´s it. Or maybe the buffer capacity is now lower. That´s what we call “wear and tear”.

An antifragile system, though, gets stronger. It changes for the better. No wear and tear, but increased buffer capacity. That´s the difference between levels of fragility and Antifragility.

Fragile, even robust items have static buffers at best. Usually though, their buffers decrease under repeated stress. Antifragile systems on the other hand have dynamic buffers which grow because of stress. Of course, they don´t do that during stress, but after stress. That´s why antifragile systems need rest.

Antifragile systems need time to replenish buffer resources – and extend their buffers. Try that out for yourself with some HIIT (high intensity interval training) for push-ups. It´s a simple proof of the Antifragility of your body. It might hurt a bit ;-) but you can literally watch over the course of a couple of days how your buffers grow, how you get stronger.

HIIT combines high stress (near depletion of buffers) and rest – and thus promotes growth.

And that´s what Antifragility is about: growth. Antifragile systems grow as a reaction to stress. Increasing stressed buffers happens in anticipation of future stresses. Antifragile systems don´t want to be caught off guard again, so they increase their robustness. In essence that´s learning. A larger buffer next time means less risk of harm/destruction through a particular type of stress.

At the same time, though, unused buffers shrink. That´s called atrophy. Stay in bed for a month and you´ll become very weak. Not after a night in bed, not after a day or two. But prolonged underuse of muscles causes them to dwindle.

That´s just economical. Keeping muscle power buffers large costs energy. If those buffers are not used, well, then why spend the energy? It can be put to good use at other buffers or saved altogether.

Antifragility thus does not just mean growth, but also reduction. Stripping down buffers is nothing to fear for a truly antifragile system, because it can build up any buffer at any time if need be.
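These dynamics – growth after stress plus rest, atrophy when a buffer goes unused – might be sketched like this in Python. The class and its growth/atrophy factors are made-up illustrations, not a real API:

```python
class AntifragileBuffer:
    """A dynamic buffer: grows after surviving stress, atrophies when idle."""

    GROWTH = 1.5   # assumed gain after stress followed by rest
    ATROPHY = 0.8  # assumed decay when a rest period saw no stress

    def __init__(self, capacity):
        self.capacity = capacity
        self.stressed = False

    def stress(self, load):
        if load > self.capacity:
            raise RuntimeError("buffer exceeded - destruction")
        self.stressed = True  # survived; remember it for the rest phase

    def rest(self):
        # Growth happens after stress, never during it; unused buffers shrink.
        self.capacity *= self.GROWTH if self.stressed else self.ATROPHY
        self.stressed = False


muscle = AntifragileBuffer(100)
muscle.stress(90)  # near depletion, like a HIIT set
muscle.rest()      # capacity: 150.0 - stronger than before
muscle.rest()      # capacity: 120.0 - no stress this round, atrophy sets in
```

Note that `rest()` is where all the change happens: neither growth nor atrophy occurs while the load is applied.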

This (meta-)capability is of course also limited. It´s a buffer like any other. And it can be grown – or it can shrink. That´s what we usually call aging. A system ages if it loses its Antifragility. In the end it´s just robust – and finally worn down by some stress.

Becoming alive

If we want to talk about Antifragility in software development, we need to talk about life. Software needs to become alive. Or the system consisting of code + humans needs to become alive. Truly alive.

We need to identify stressors. We need to identify buffers to compensate those stressors. We need to measure the capacity of those buffers. We need to find ways to increase – and decrease their capacities timely and at will in reaction to stresses. We need to assess risks of stress recurrence in order to determine if increase or decrease is appropriate. We need to rest. And finally we need to accept death.
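Such a deliberate review cycle could look roughly like this – every name, threshold, and factor below is a hypothetical illustration of the idea, not a prescription:

```python
def review_buffers(buffers, recurrence_risk):
    """One conscious reflection cycle: grow the buffers whose stress is
    likely to recur, withdraw energy from the rest and let them shrink.

    buffers: stressor name -> (capacity, was_stressed_recently)
    recurrence_risk: stressor name -> assessed probability of recurrence
    """
    decisions = {}
    for name, (capacity, was_stressed) in buffers.items():
        if was_stressed and recurrence_risk.get(name, 0.0) > 0.5:
            decisions[name] = capacity * 1.5  # channel energy here
        else:
            decisions[name] = capacity * 0.9  # attention goes elsewhere
    return decisions


state = {"users per second": (1000, True), "disk space": (500, False)}
print(review_buffers(state, {"users per second": 0.8}))
# {'users per second': 1500.0, 'disk space': 450.0}
```

The point is not the arithmetic but the shape of the loop: observation, risk assessment, and an explicit decision about where energy goes – exactly the part no framework can automate away.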

To me that means Antifragility is much, much more than SOLID code plus some micro services plus message-based communication via buses plus reactive programming etc. First of all it´s not just about code, because code itself is not and cannot be alive. Thus code itself cannot change itself for the better after stress, just as no football can increase its temperature robustness.

Rather the “socio-mechanical” system made up of humans and their code needs to be formed in a way to be antifragile. Which of course also implies certain beneficial structures and relationships in code. But that´s not all there is to think about.

So we´re not there, yet, I´d say. Antifragility is more than Agility + Clean Code. First of all it´s a mindset, a worldview. And as usual, to instill that into people is not that easy. It will take another 10 to 20 years…

Therefore let today be the first day of another journey in software development. Let the Antifragility years begin…