The Architect´s Napkin

.NET Software Architecture on the Back of a Napkin

Sunday, August 24, 2014

The functionality of a program is accessed through its Entry Points. So what we're talking about when designing software is a bunch of functions handling the requests that flow in through those Entry Points.

Designing software thus consists of at least three phases:

  1. Analyzing the requirements to find the Entry Points and their signatures
  2. Designing the functionality to be executed when those Entry Points get triggered
  3. Implementing the functionality according to the design aka coding

I presume you're familiar with phase 1 in some way. And I guess you're proficient in implementing functionality in some programming language.

But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What´s that supposed to mean?” you might already have thought.

Here's my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what's supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it's "design", not "coding".

And here is what functional design is not: it's not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). I also count calling external APIs as logic. It's equally basic. It's what code needs to do in order to deliver some functionality or quality.

Logic is what does the work that needs to be done by software. Transformations are done either through expressions or API calls. And then there is alternative control flow depending on the result of some expression. Basically it's just jumps in assembler, sometimes forward (if, switch), sometimes backward (for, while, do).

But calling your own function is not logic. It's not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions: no performance gain, no higher scalability etc. through functions.

Functions are not relevant to functionality. Strange, isn't it?

What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent).

That's no small feat. The value of evolvable code can hardly be overestimated. That's why functional design is so important to me. It's at the core of software development.

To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it.

Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It's just a theory with no experiments to prove it.

But how to do functional design?

An example of functional design

Let´s assume a program to de-duplicate strings. The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e.

There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line

C:\>deduplicate "a, b, a, c, d, b, e, c, a"
a, b, c, d, e

…or by clicking on a GUI button.

[Figure: GUI version of the de-duplication program]

This leads to the Entry Point function to get called. It´s the program´s main function in case of the batch version or a button click event handler in the GUI version. That´s the physical Entry Point so to speak. It´s inevitable.

What then happens is a three step process:

  1. Transform the input data from the user into a request.
  2. Call the request handler.
  3. Transform the output of the request handler into a tangible result for the user.

Or to phrase it a bit more generally:

  1. Accept input.
  2. Transform input into output.
  3. Present output.

This does not mean any of these steps requires a lot of effort. Maybe it´s just one line of code to accomplish it. Nevertheless it´s a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP).

Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself.

But it´s still on a meta-level. The application to the domain at hand is easy, though:

  1. Accept string list from command line
  2. De-duplicate
  3. Present de-duplicated strings on standard output

And this concrete list of processing steps can easily be transformed into code:

static void Main(string[] args)
{
    var input = Accept_string_list(args);
    var output = Deduplicate(input);
    Present_deduplicated_string_list(output);
}

Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point.

But maybe, just maybe, you're not so sure how to go about the de-duplication, for example. Then just implement what's easy right now, e.g.

private static string Accept_string_list(string[] args)
{
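    // The user passes the whole string list as a single (quoted) command line argument (see the example above).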
    return args[0];
}

private static void 
        Present_deduplicated_string_list(
            string[] output)
{
    var line = string.Join(", ", output);
    Console.WriteLine(line);
}

Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call.

And then repeat the functional design for the remaining processing step. What´s left is the domain logic: de-duplicating a list of strings. How should that be done?

Without any logic at our disposal during functional design you´re left with just functions. So which functions could make up the de-duplication? Here´s a suggestion:

  • De-duplicate
      • Parse the input string into a true list of strings.
      • Register each string in a dictionary/map/set. That way duplicates get cast away.
      • Transform the data structure into a list of unique strings.

Processing step 2 obviously was the core of the solution. That´s where real creativity was needed. That´s the core of the domain. But now after this refinement the implementation of each step is easy again:

private static string[] Parse_string_list(string input)
{
    return input.Split(',')
                .Select(s => s.Trim())
                .ToArray();
}

private static Dictionary<string,object> 
        Compile_unique_strings(string[] strings)
{
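    // The dictionary serves as a set: registering every string as a key makes duplicates disappear.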
    return strings.Aggregate(
            new Dictionary<string, object>(),
            (agg, s) => { 
                agg[s] = null;
                return agg;
            });
}

private static string[] Serialize_unique_strings(
               Dictionary<string,object> dict)
{
    return dict.Keys.ToArray();
}

With these three additional functions Main() now looks like this:

static void Main(string[] args)
{
    var input = Accept_string_list(args);

    var strings = Parse_string_list(input);
    var dict = Compile_unique_strings(strings);
    var output = Serialize_unique_strings(dict);

    Present_deduplicated_string_list(output);
}

I think that´s very understandable code: just read it from top to bottom and you know how the solution to the problem works. It´s a mirror image of the initial design:

  1. Accept string list from command line
  2. Parse the input string into a true list of strings.
  3. Register each string in a dictionary/map/set. That way duplicates get cast away.
  4. Transform the data structure into a list of unique strings.
  5. Present de-duplicated strings on standard output

You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later.

And as a bonus: all the functions making up the process are small - which means easy to understand, too.

So much for an initial concrete example. Now it´s time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.

Introducing Flow Design

Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm.

An algorithm consists of logic, a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1]

In its simplest form a process can be written as a bullet point list of steps, e.g.

  • Get data from user
  • Output result to user
  • Transform data
  • Parse data
  • Map result for output

Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming.

Next comes ordering the steps. What should happen first, what next etc.?

  1. Get data from user
  2. Parse data
  3. Transform data
  4. Map result for output
  5. Output result to user
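Translated mechanically, such a list already suggests a function skeleton. Here is a minimal sketch (the function names are made up for illustration, analogous to the de-duplication example above):

static void Main(string[] args)
{
    // Each processing step becomes a function of its own.
    var input = Get_data_from_user(args);
    var data = Parse_data(input);
    var result = Transform_data(data);
    var output = Map_result_for_output(result);
    Output_result_to_user(output);
}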

That´s great for a start into functional design. It´s better than starting to code right away on a given function using TDD.

Don't get me wrong: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. And how do you know that beforehand without investing some thinking? How do you do this thinking in a systematic fashion?

My recommendation: for any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and leads to cleaner code right away. For more information on this approach, which I call "Informed TDD", read my book of the same title.

Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more in line with the KISS (Keep It Simple, Stupid) principle than returning constants or the other trivial stuff TDD development often starts with.

So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple.

However, such a list does not scale. Processing is not always simple enough to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their "dimensionality": text has just one dimension, it's sequential. Alternatives and parallelism are hard to encode in text.

In addition, a functional design using numbered lists lacks data. It's not visible what the input, output, and state of the processing steps are.

That´s why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

Visualizing processes

The building block of the functional design notation is a functional unit. I mostly draw it like this:

[Figure: a functional unit with input, output, and state/resources]

Something is done, it´s clear what goes in, it´s clear what comes out, and it´s clear what the processing step requires in terms of state or hardware.

Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That´s why I call this approach to functional design Flow Design.

It´s about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details.

That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give an overview.

But what about all the details? As Robert C. Martin rightly said: "Programming is about detail".

Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want.

Functional design does not eliminate all the nitty gritty. It just postpones tackling it. To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later, coding has the responsibility to implement the solution down to the last detail (i.e. statement, API call).

TDD unfortunately mixes both responsibilities. It´s just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that´s one reason why TDD has failed to deliver on its promise for many developers.

Using functional units as building blocks of functional design processes can be depicted very easily. Here´s the initial process for the example problem:

[Figure: initial flow design for the de-duplication process]

For each processing step draw a functional unit and label it. Choose a verb or an “action phrase” as a label, not a noun. Functional design is about activities, not state or structure.

Then make the output of an upstream step the input of a downstream step. Finally think about the data that should flow between the functional units.

Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified.

Empty brackets mean “no data is flowing”, but nevertheless a signal is sent.

A name like “list” or “strings” in brackets describes the data content. Use lower case labels for that purpose.

A name starting with an upper case letter like “String” or “Customer” on the other hand signifies a data type.

If you like, you also can combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]).

But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent.

Flows wired-up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output.

A functional unit without an output is possible. It´s like a black hole sucking up input without producing any output. Instead it produces side effects.

A functional unit without an input, though, does not make much sense. When should it start to work? What would be the trigger? That's why in the above process even the first processing step has an input.

If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

The Principle of Mutual Oblivion

A very characteristic trait of flows put together from functional units is: no functional unit knows any other. They are all completely independent of each other.

Functional units don´t know where their input is coming from (or even when it´s gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving.

Also they don´t know where their output is going. They just produce it in their own time independent of other functional units. That means at least conceptually all functional units work in parallel.

Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream and producing output for some downstream.

That makes functional units very easy to test. At least as long as they don´t depend on state or resources.

I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility.

How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.

By building software in such a manner, functional design interestingly follows nature. Nature´s building blocks for organisms also follow the PoMO. The cells forming your body do not know each other.

Take a nerve cell “controlling” a muscle cell for example:[2]

[Figure: a nerve cell connected to a muscle cell at a synapse]

The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way.

Both cells are mutually oblivious. Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it's coming from, neither cell cares about.

Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism.

How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment.

My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler “machines”?

So my rule is: wherever there is functionality to be delivered because there is a clear Entry Point into the software, design the functionality like nature would do it. Build it from mutually oblivious functional units.

That´s what Flow Design is about. In that way it´s even universal, I´d say. Its notation can also be applied to biology:

[Figure: the nerve cell/muscle cell example drawn in Flow Design notation]

Never mind labeling the functional units with nouns. That's OK in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware.

Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 “software cells” (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it´s straightforward. There are two simple rules:

  • Translate an input port to a function.
  • Translate an output port either to a return statement in that function or to a function pointer visible to that function.

[Figure: translating a functional unit's input and output ports into a function signature]

The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much: it makes them composable. And composability is the reason nature works according to the PoMO.

Let's be clear about one thing: there is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe its purpose is to filter input.

[Figure: a functional unit consuming (word) and producing an optional stream of words, (word)*]

Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed.

Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called “stop words”.

In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: any number of (word) data items can flow out of the functional unit for each input data item. It might be none, one, or even more. This I call a stream of data.

Such behavior cannot be translated into a function whose output is generated with return, because such a function always returns exactly one value.

So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

void filter_stop_words(
       string word,
       Action<string> onNoStopWord) {
  if (...check if not a stop word...)
    onNoStopWord(word);
}

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you´re right. Conceptually, though, it´s not an injection. Because the subroutine is not functionally dependent on the continuation.

Firstly continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow.

Secondly the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only.

Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously.

Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it´s called an event. In C# that´s even an explicit feature.

class Filter {
  public void filter_stop_words(
                string word) {
    if (...check if not a stop word...)
      onNoStopWord(word);
  }

  public event Action<string> onNoStopWord;
}
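To illustrate the wiring, here is a minimal sketch (the console-writing lambda just stands in for some downstream processing step):

// Continuation variant: the downstream step is passed in with each call.
filter_stop_words("example", word => Console.WriteLine(word));

// Event variant: the downstream step subscribes once, then data can flow.
var filter = new Filter();
filter.onNoStopWord += word => Console.WriteLine(word);
filter.filter_stop_words("example");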

When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

Another example of 1D functional design

Let´s see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design.

The function signature given is:

string WordWrap(string text, int maxLineLength) 
{...}

That´s not an Entry Point since we don´t see an application with an environment and users. Nevertheless it´s a function which is supposed to provide a certain functionality.

The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified.

If a word is longer than the maximum line length it can be split into multiple parts, each fitting in a line.

Flow Design

Let´s start by brainstorming the process to accomplish the feat of reformatting the text. What´s needed?

  • Words need to be assembled into lines
  • Words need to be extracted from the input text
  • The resulting lines need to be assembled into the output text
  • Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority: long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line).

Here´s the Flow Design for this increment:

[Figure: flow design for word wrap, increment 1]

The first three bullet points have been turned into functional units with explicit data added.

As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last one.

In between no text flows, but words and lines. That´s good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are.

But note the asterisk! It´s not outside the brackets but inside. That means it´s not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced.

The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It´s an implementation detail.

Does any processing step require further refinement? I don´t think so. They all look pretty “atomic” to me. And if not… I can always backtrack and refine a process step using functional design later once I´ve gained more insight into a sub-problem.

Implementation

The implementation is straightforward as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility.

[Figure: implementations of the word wrap processing steps as functions]

And the process flow becomes just a sequence of function calls:

[Figure: WordWrap() as a sequence of calls to the processing step functions]
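Since the code is only shown as an image here, a rough sketch of what such a sequence of calls could look like (the function names are my guesses derived from the design, not necessarily the original ones):

string WordWrap(string text, int maxLineLength)
{
    var words = Extract_words(text);              // (text) -> (word*)
    var lines = Reformat(words, maxLineLength);   // (word*) -> (line*)
    return Combine_lines(lines);                  // (line*) -> (text)
}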

Easy to understand. It clearly states how word wrapping works - on a high level of abstraction.

And it´s easy to evolve as you´ll see.

Flow Design - Increment 2

So far only texts consisting of “average words” are wrapped correctly. Words not fitting in a line will result in lines too long.

Wrapping long words is a feature of the requested functionality. Whether it´s there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it´s time to add it to deliver the full scope.

Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It´s easy to extend it - instead of changing well tested code. How´s that possible?

Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead about where extensions might need to be made in the future.

I just “phase in” the remaining processing step:

[Figure: flow design for word wrap, increment 2, with the long word splitting step inserted]

Since neither Extract words nor Reformat know of their environment neither needs to be touched due to the “detour”. The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation to check whether this works does not do anything to split long words yet. The input is just passed on:

[Figure: trivial implementation of the long word splitting step, passing the input through]
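Again the code is only available as an image; a sketch of such a pass-through might look like this (name and signature assumed):

static string[] Split_long_words(string[] words, int maxLineLength)
{
    // Trivial first implementation: long words are not split yet,
    // the incoming words are just passed on unchanged.
    return words;
}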

Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with.

Compare this to Robert C. Martin´s solution:[4]

[Figure: Robert C. Martin's word wrap solution]

How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach.

Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems to be - because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess.

But even then… Is a difference in LOC that important as long as it´s in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it´s also clarity in design.

But don´t take my word for it. Try Flow Design on larger problems and compare for yourself. What´s the easier, more straightforward way to clean code? And keep in mind: You ain´t seen all yet ;-) There´s more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly.

Some thought should be invested first. Wherever there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here.

For now let me point out to you - if you haven´t already noticed - that Flow Design is a general purpose declarative language. It´s “programming by intention” (Shalloway et al.).

Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem in smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members.

Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

 

PS: If you like what you read, consider getting my ebook "The Incremental Architect's Napkin". It's where I compile all the articles in this series for easier reading.


  1. I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also it seems to me that the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are OK for processing steps - but be sure to follow the SRP. Don't put too much of them into a single processing step.

  2. Image taken from www.physioweb.org

  3. My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type, matching functions with the signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you'll find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no brainer.

  4. Taken from his blog post “The Craftsman 62, The Dark Path”.


Friday, August 22, 2014

What is more important than data? Functionality. Yes, I strongly believe we should switch to a functionality over data mindset in programming. Or actually switch back to it.

Focus on functionality

Functionality once was at the core of software development. Back then, algorithms were the first thing you heard about in CS classes. Sure, data structures, too, were important - but always from the point of view of algorithms. (Niklaus Wirth gave one of his books the title "Algorithms + Data Structures = Programs" instead of "Data Structures + Algorithms" for a reason.)

The reason for the focus on functionality? Firstly, because software was and is about doing stuff. Secondly, because sufficient performance was hard to achieve; memory efficiency came only third.

But then hardware became more powerful. That gave rise to a new mindset: object orientation. And with it functionality was devalued. Data took over its place as the most important aspect. Now discussions revolved around structures motivated by data relationships. (John Beidler gave his book the title “Data Structures and Algorithms: An Object Oriented Approach” instead of the other way around for a reason.)

Sure, this data could be embellished with functionality. But nevertheless functionality was second.

When you look at (domain) object models what you mostly find is (domain) data object models. The common object oriented approach is: data aka structure over functionality. This is true even for the most modern modeling approaches like Domain Driven Design. Look at the literature and what you find is recommendations on how to get data structures right: aggregates, entities, value objects.

I´m not saying this is what object orientation was invented for. But I´m saying that´s what I happen to see across many teams now some 25 years after object orientation became mainstream through C++, Delphi, and Java.

But why should we switch back? Because software development cannot become truly agile with a data focus. The reason for that lies in what customers need first: functionality, behavior, operations.

To be clear, that´s not why software is built. The purpose of software is to be more efficient than the alternative. Money mainly is spent to get a certain level of quality (e.g. performance, scalability, security etc.). But without functionality being present, there is nothing to work on the quality of.

What customers want is functionality of a certain quality. ASAP. And tomorrow new functionality needs to be added, existing functionality needs to be changed, and quality needs to be increased.

No customer ever wanted data or structures.

Of course data should be processed. Data is there, data gets generated, transformed, stored. But how the data is structured for this to happen efficiently is of no concern to the customer.

Ask a customer (or user) whether she likes the data structured this way or that way. She´ll say, “I don´t care.” But ask a customer (or user) whether he likes the functionality and its quality this way or that way. He´ll say, “I like it” (or “I don´t like it”).

Build software incrementally

From this very natural focus of customers and users on functionality and its quality it follows that we should develop software incrementally. That's what Agility is about.

Deliver small increments quickly and often to get frequent feedback. That way less waste is produced, and learning can take place much easier (on the side of the customer as well as on the side of developers).

An increment is some added functionality or quality of functionality.[1]

So as it turns out, Agility is about functionality over whatever. But software developers’ thinking is still stuck in the object oriented mindset of whatever over functionality. Bummer. I guess that (at least partly) explains why Agility always hits a glass ceiling in projects. It´s a clash of mindsets, of cultures.

Driving software development by demanding small increases in functionality runs against thinking about software as growing (data) structures sprinkled with functionality. (Excuse me, if this sounds a bit broad-brush. But you get my point.)

The need for abstraction

In the end there need to be data structures. Of course. Small and large ones. The phrase functionality over data does not deny that. It´s not functionality instead of data or something. It´s just over, i.e. functionality should be thought of first. It´s a tad more important. It´s what the customer wants.

That´s why we need a way to design functionality. Small and large. We need to be able to think about functionality before implementing it. We need to be able to reason about it among team members. We need to be able to communicate our mental models of functionality not just by speaking about them, but also on paper. Otherwise reasoning about it does not scale.

We learned thinking about functionality in the small using flow charts, Nassi-Shneiderman diagrams, pseudo code, or UML sequence diagrams.

That's nice and well. But it does not scale. You can use these tools to describe manageable algorithms. But it does not work for the functionality triggered by pressing the "1-Click Order" button on an Amazon product page, for example.

There are several reasons for that, I´d say.

Firstly, the level of abstraction over code is negligible. It´s essentially non-existent. Drawing a flow chart or writing pseudo code or writing actual code is very, very much alike. All these tools are about control flow like code is.[2]

In addition all tools are computationally complete. They are about logic which is expressions and especially control statements. Whatever you code in Java you can fully (!) describe using a flow chart.

And then there is no data. They are about control flow and leave out the data altogether. Thus data mostly is assumed to be global. That´s shooting yourself in the foot, as I hope you agree.

Even if it´s functionality over data that does not mean “don´t think about data”. Right to the contrary! Functionality only makes sense with regard to data. So data needs to be in the picture right from the start - but it must not dominate the thinking. The above tools fail on this.

Bottom line: So far we´re unable to reason in a scalable and abstract manner about functionality.

That´s why programmers are so driven to start coding once they are presented with a problem. Programming languages are the only tool they´ve learned to use to reason about functional solutions.

Or, well, there might be exceptions. Mathematical notation and SQL may have come to your mind already. Indeed they are tools on a higher level of abstraction than flow charts etc. That's because they are declarative and not computationally complete. They leave out details - in order to deliver higher efficiency in devising overall solutions.

We can easily reason about functionality using mathematics and SQL. That's great - except that they are domain specific languages. They are not general purpose. (And they don't scale either, I'd say.) Bummer.

So to be more precise we need a scalable general purpose tool on a higher than code level of abstraction not neglecting data.

Enter: Flow Design.

Abstracting functionality using data flows

I believe the solution to the problem of abstracting functionality lies in switching from control flow to data flow.

Data flow very naturally is not about logic details anymore. There are no expressions and no control statements anymore. There are not even statements anymore. Data flow is declarative by nature.

[Figure: a data flow of processing steps connected by flowing data]

With data flow we get rid of all the limiting traits of former approaches to modeling functionality.

In addition, nomen est omen, data flows include data in the functionality picture.

With data flows, data is visibly flowing from processing step to processing step. Control is not flowing. Control is wherever it´s needed to process data coming in.

That´s a crucial difference and needs some rewiring in your head to be fully appreciated.[2]

Since data flows are declarative they are not the right tool to describe algorithms, though, I´d say. With them you don´t design functionality on a low level. During design data flow processing steps are black boxes. They get fleshed out during coding.

Data flow design thus is more coarse grained than flow chart design. It starts on a higher level of abstraction - but then is not limited. By nesting data flows indefinitely you can design functionality of any size, without losing sight of your data.

[Figure: nesting data flows across levels of abstraction]

Data flows scale very well during design. They can be used on any level of granularity. And they can easily be depicted. Communicating designs using data flows is easy and scales well, too.

The result of functional design using data flows is not algorithms (too low level), but processes. Think of data flows as descriptions of industrial production lines. Data as material runs through a number of processing steps to be analyzed, enhanced, transformed.

At the top level of a data flow design there might be just one processing step, e.g. "execute 1-click order". But below that are arbitrary levels of flows with smaller and smaller steps.
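In code such nesting simply shows up as functions calling functions, each level reading like a small flow of its own. A rough sketch with made-up step names (not a real order-processing implementation):

// Top level: a single processing step.
static void Execute_one_click_order(string productId, string customerId)
{
    var order = Compile_order(productId, customerId);
    var confirmation = Process_payment(order);
    Notify_customer(customerId, confirmation);
}

// One level below: "Compile order" turns out to be a small flow itself.
static string Compile_order(string productId, string customerId)
{
    var shippingAddress = Load_shipping_address(customerId);
    return Build_order(productId, shippingAddress);
}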

That´s not layering as in “layered architecture”, though. Rather it´s a stratified design à la Abelson/Sussman.

Refining data flows is not your grandpa´s functional decomposition. That was rooted in control flows. Refining data flows does not suffer from the limits of functional decomposition against which object orientation was supposed to be an antidote.

Summary

I've been working exclusively with data flows for functional design for the past 4 years. It has changed my life as a programmer. What once was difficult is now easy. And, no, I'm not using Clojure or F#. And I'm not an async/parallel execution buff.

Designing the functionality of increments using data flows works great with teams. It produces design documentation which can easily be translated into code - in which then the smallest data flow processing steps have to be fleshed out - which is comparatively easy.

Using a systematic translation approach code can mirror the data flow design. That way later on the design can easily be reproduced from the code if need be.

And finally, data flow designs play well with object orientation. They are a great starting point for class design. But that´s a story for another day.

To me data flow design simply is one of the missing links of systematic lightweight software design.


  1. There are also other artifacts software development can produce to get feedback, e.g. process descriptions, test cases. But customers can be delighted more easily with code based increments in functionality.

  2. No, I'm not talking about the endless possibilities this opens up for parallel processing. Data flows are useful independently of multi-core processors and Actor-based designs. That's my whole point here. Data flows are good for reasoning and evolvability. So forget about any special frameworks you might need to reap the benefits of data flows. None are necessary. Translating data flow designs even into plain old Java is possible.


Thursday, June 12, 2014

The drivers of software development are increments, small increments, tiny increments. An increment is a slice of the overall requirement scope thin enough to implement and get feedback from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1]

To make such high frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what's supposed to get implemented by tomorrow evening at the latest is one side of the medal. The other is where to put the logic in all of the code base.

To implement an increment, only logic statements are needed. Functionality, like Quality, is just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That's all that's needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It's just about the right expressions and conditional execution paths plus some memory allocation. The automatic function inlining of compilers makes it clear how unimportant functions are for delivering value to users at runtime.

But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand.

Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory.

That said, functions are important. They are even the pivotal element of software development. We can´t code without them - whether you write a function yourself or not. Because there´s always at least one function in play: the Entry Point of a program.

In Ruby the simplest program looks like this:

puts "Hello, world!"

In C# more is necessary:

class Program {
    public static void Main () {
        System.Console.Write("Hello, world!");
    }
}

C# makes the Entry Point function explicit, not so Ruby. But still it´s there. So you can think of logic always running in some function.

Which brings me back to increments: in order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note - a user story - on the Scrum or Kanban board. But developers need an idea of what that sticky note means in terms of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on.

All´s well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation.

That's why most code katas define exactly what the API of a solution should look like. It's a function, maybe two or three, not more.

A requirement like “Write a function f which takes this as parameters and produces such and such output by doing x” makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider. Even a single function not only must deliver on Functionality, but also on Quality and Evolvability.

Nevertheless, once it´s clear which function to put logic in, you have a tangible starting point.

So, yes, what I'm suggesting is to find a single function to put all the logic in that's necessary to deliver on the requirements of an increment. Or to put it the other way around: slice requirements in a way that each increment's logic can be located under the roof of a single function.

Entry points

Of course, the logic of a software will always be spread across many, many functions. But there´s always an Entry Point. That´s the most important function for each increment, because that´s the root to put integration or even acceptance tests on.

A batch program like the above hello-world application only has a single Entry Point. All logic is reached from there, regardless how deep it´s nested in classes.

But a program with a user interface like this has at least two Entry Points:

[Figure: a quiz-like GUI with multiple choice questions and a "Show my score" button]

One is the main function called upon startup. The other is the button click event handler for “Show my score”.

But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected; because then some logic could check whether the button should be enabled because all questions got answered. Or another Entry Point for the logic to be executed when the program is closed; because then the choices made should be persisted.

You see, an Entry Point to me is a function which gets triggered by the user of a software. With batch programs that´s the main function. With GUI programs on the desktop that´s event handlers. With web programs that´s handlers for URL routes.
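In C# such Entry Points look, for example, like this (a sketch; the handler name is made up):

// Batch program: the main function is the Entry Point.
static void Main(string[] args) {…}

// Desktop GUI program: an event handler is the Entry Point.
void btnShowScore_Click(object sender, EventArgs e) {…}

// Web program: a handler registered for a URL route plays the same role.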

And my basic suggestion to help you with slicing requirements for Spinning is: Slice them in a way so that each increment is related to only one Entry Point function.[2]

Entry Points are the “outer functions” of a program. That´s where the environment triggers behavior. That´s where hardware meets software. Entry points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3]

[Figure: Entry Points at the boundary where hardware events meet software]

Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter.

[Figure: software viewed from the outside as a collection of Entry Point functions]

Collections of batch processors

I´d thus say, we haven´t moved forward since the early days of software development. We´re still writing batch programs. Forget about “event-driven programming” with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program, today it´s hundreds we bundle up into applications.

Each batch processor is represented by an Entry Point as its root that works on a number of resources from which it reads data to process and to which it writes results.

[Figure: a batch processor reading from and writing to resources]

These resources can be the keyboard or main memory or a hard disk or a communication line or a display.

Together many batch processors - large and small - form applications the user perceives as a single whole:

[Figure: an application as a collection of batch processors]

Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-)

Features

Each batch processor entered through an Entry Point delivers value to the user. It´s an increment. Sometimes its logic is trivial, sometimes it´s very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility.

At the same time it´s a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That´s where user stories meet code.

[Figure: a user story mapped to an Entry Point]

In this example the user story translates to the Entry Point triggered by clicking the login button on a dialog like this:

[Figure: a login dialog with email address and password fields]

The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog.

This is all very simple, but you see, there is not just one thing happening, but several.

  1. Get input (email address, password)
  2. Load user for email address
     2.1 If user not found report error
  3. Check password
     3.1 Hash password
     3.2 Compare hash to hash stored in user
  4. Show next dialog

Viewed from 10,000 feet it´s all done by the Entry Point function. And of course that´s technically possible. It´s just a bunch of logic and calling a couple of API functions.

However, I suggest taking these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features.

Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner.

Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g.

  • First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4.
  • Then hard code a single user and check the email address. Features 2 and 2.1.
  • Then check the password without hashing it (or use a very simple hash like the length of the password). Features 3 and 3.2.
  • Replace the hard coded user with a persistent user directory, but a very simple one, e.g. a CSV file. Refinement of feature 2.
  • Calculate the real hash for the password. Feature 3.1.
  • Switch to the final user directory technology.

Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you´re in doubt whether you can implement the whole entry point function until tomorrow night, then just go for a couple of features or even just one.

That's also why I think you should strive for wrapping feature logic into a function of its own. It's a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it's clear where their logic is located. And finally, of course, it lets you re-use features in different contexts (read: increments).

Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, to let the product owner check them for acceptance individually.

Increments consist of features, entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which map to a hierarchy of functions - with entry points at the top.
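A minimal sketch of such a hierarchy for the login example (function and type names are hypothetical):

// Entry Point function: triggered by clicking the login button.
void btnLogin_Click(object sender, EventArgs e)
{
    var credentials = Get_input();                    // feature 1
    var user = Load_user(credentials.EmailAddress);   // feature 2
    if (user == null) { Report_error(); return; }     // feature 2.1
    if (Check_password(user, credentials.Password))   // feature 3 (3.1 and 3.2 inside)
        Show_next_dialog();                           // feature 4
}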

[Figure: hierarchy of requirements (increments, features) mapped to a hierarchy of functions with Entry Points at the top]

I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility to move forward in increments is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Requirements are no longer elusive and fuzzy, but tangible.

Software production is moving forward through requirements one increment at a time, and one function at a time.

In closing

Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible, they need to be connected.

The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that´s functions. Yes, functions, not classes nor components nor micro services.

We´re talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That´s the purpose of functions.

Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners functions are the appropriate glue. Functions which represent increments.


  1. Can such a small increment always be found to deliver by tomorrow evening? I boldly say yes. Yes, it's always possible. But maybe you have to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a software. Product owners need to become comfortable using test beds for certain features. If it's hard to slice requirements thin enough for Spinning, the reason is too little knowledge of something. Maybe you don't yet understand the problem domain well enough? Maybe you don't yet feel comfortable with some tool or technology? Then it's time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman, officially become a researcher. Research and check back with the product owner every day - until your understanding has grown to a level where you are able to define the next Spinning increment.

  2. Sometimes even thin requirement slices will cover several Entry Points, like "Add validation of email addresses to all relevant dialogs." Validation then will be put into a dozen functions. Still, though, it's important to determine exactly which Entry Points get affected. That's much easier if you strive for keeping the number of Entry Points per increment to 1.

  3. If you like, call Entry Point functions event handlers, because that's what they are. They all handle events of some kind, whether that's palpable in your code or not. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also - for the event "program started".


Wednesday, June 4, 2014

The easier something is to measure, the more likely it will be produced. Deviations between what is and what should be can be readily detected. That's what automated acceptance tests are for. That's what sprint reviews in Scrum are for.

It's no wonder our software looks the way it does. It has all the traits whose conformance with requirements can easily be measured. And it's lacking the traits which cannot easily be measured.

Evolvability (or Changeability) is such a trait. Whether an operation is correct, whether an operation is fast enough, can be checked very easily. But whether Evolvability is high or low cannot be checked by taking a measure or two.

Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function or Cyclomatic Complexity or test coverage. But there is no threshold value signalling “evolvability too low”; also Evolvability is hardly tangible for the customer.

Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it´s needed like any other requirement. Or even more. Because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built.

Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves.

Since we cannot measure Evolvability, though, we cannot simply start watching it more closely. Instead we need to establish practices to keep it high (enough) at all times.

Chefs have known that for a long time. That´s why everybody in a restaurant kitchen is constantly looking after cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high.

Still a kitchen´s level of cleanliness is easier to measure than software Evolvability. That´s why important practices like reviews, pair programming, or TDD are not enough, I guess.

What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often.

Scrum´s sprints of 4, 2, or even 1 week are too long. Kanban´s flow of user stories across the board is too unreliable; it takes as long as it takes.

Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner).

For me there are several reasons for such a fixed and short cycle time for each increment:

Clear expectations

Absolute estimates (“This will take X days to complete.”) are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are the pervasive interruptions of work by peers and management.

However, the smaller the scope the better our absolute estimates become. That´s because we understand better what really are the requirements and what the solution should look like. But maybe more importantly the shorter the timespan the more we can control how we use our time.

So much can happen over the course of a week and longer timespans. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two.

That´s why I believe we can give rough absolute estimates on 3 levels:

  • Noon
  • Tonight
  • Tomorrow

Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you, how long it will take you to implement a user story or bug fix, you can say, “It´ll be fixed by noon.”, or you can say, “I can manage to implement it until tonight before I leave.”, or you can say, “You´ll get it by tomorrow night at latest.”

Yes, I believe all else would be naive. If you´re not confident to get something done by tomorrow night (some 34h from now) you just cannot reliably commit to any timeframe. That means you should not promise anything, you should not even start working on the issue.

So when estimating use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories.

If you like absolute estimates, here you go.

But don´t do deep estimates. Don´t estimate dozens of issues; don´t think ahead (“Issue A is a Tonight, then B will be a Tomorrow, after that it´s C as a Noon, finally D is a Tonight - that´s what I´ll do this week.”). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues.

To be blunt: Yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future.

But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow.

Trust through reliability

Our trade is lacking trust. Customers don´t trust software companies/departments much. Managers don´t trust developers much. I find that perfectly understandable in the light of what we´re trying to accomplish: delivering software in the face of uncertainty with the means of material goods production.

Customers as well as managers still expect software development to be close to production of houses or cars. But that´s a fundamental misunderstanding.

Software development is development. It´s basically research. As software developers we´re constantly executing experiments to find out what really provides value to users. We don´t know what they need, we just have mediated hypotheses.

That´s why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time.

If we switch to delivering in short cycles, though, we can regain trust. Because estimates of up to 32 hours at most - explicit or implicit - can be satisfied.

I´d say: reliability over scope. It´s more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt, promise less - but deliver without delay.

Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always.

Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow.

So don´t wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don´t need to learn that the hard way. Just start with small batches in the three sizes described above.

Fast feedback

What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end? Why let the mental model of the issue and its solution dissipate?

If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly you probably are not in the mood anymore to go back to something you deemed done a long time ago. It´s boring, it´s frustrating to open up that mental box again.

Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises.

Checking finished issues for acceptance is the most important task of a Product Owner. It´s even more important than planning new issues. Because as long as work started is not released (accepted) it´s potential waste. So before starting new work better make sure work already done has value.

By putting the emphasis on acceptance rather than planning true pull is established. As long as planning and starting work is more important, it´s a push process.

Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow.

After acceptance the developer(s) can start working on the next issue.

Flexibility

As if reliability/trust and fast feedback for less waste weren´t enough economic incentive, there is flexibility.

After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time.

Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it´s neither D nor E, but G. And after G it´s D.

With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost. Because what got accepted is of value. It provides incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty.

I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It´s unnecessary at best, and inflexible and wasteful at worst.

If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty.

Where the path is not clear, cannot be clear, make small steps so you´re able to change your course at any time.

Premature completion

Customers/management are used to fixing budgets up front. They want to know exactly how much to pay for a certain amount of requirements.

That´s understandable. But it does not match with the nature of software development. We should know that by now.

Maybe there´s somewhere in the world some team who can consistently deliver on scope, quality, and time, and budget. Great! Congratulations! I, however, haven´t seen such a team yet. Which does not mean it´s impossible, but I think it´s nothing I can recommend to strive for. Rather I´d say: Don´t try this at home. It might hurt you one way or the other.

However, what we can do is allow customers/management to stop work on features at any moment. With Spinning, every 32 hours a feature can be declared as finished - even though it might not be completed according to its initial definition.

I think, progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours?

Isn´t it more important to constantly move forward? Step by step. We´re not running sprints, we´re not running marathons, not even ultra-marathons. We´re in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases).

Whoever can only think in terms of completed requirements shuts out the chance of saving money. The requirements for a feature mostly are uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed.

After each 4–32 hour increment the Product Owner can do an experiment (or invite users to an experiment) to see whether a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect.

In the end, requirements A, B, C then could be finished just 70%, 80%, and 50%. What the heck? It´s good enough - for now. 33% money saved. Wouldn´t that be splendid? Isn´t that a stunning argument for any budget-sensitive customer? You can save money and still get what you need?

Pull on practices

So far, in addition to more trust, more flexibility, and less money spent, Spinning leads to “doing less” - which also means less code, which of course means higher Evolvability per se.

Last but not least, though, I think Spinning´s short acceptance cycles have one more effect. They exert pull-power on all sorts of practices known for increasing Evolvability.

If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn´t 90% of the developer community practicing automated tests consistently?

I think, the answer is simple: Because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term.

The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger.

With Spinning that´s different. If you´re practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize that without them you cannot reliably deliver even on your 32 hour promises.

Spinning thus pulls on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do so. Because first the Product Owner and then management will notice an increasing difficulty in delivering value within 32 hours.

There, finally, emerges a way to measure Evolvability: The more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback by tomorrow night, the poorer Evolvability is.

Don´t count the “WTF!”, count the “No way!” utterances.

In closing

For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability.

Since Evolvability cannot be measured easily, I think we need to put software development “under pressure”. Software needs to be changed more often, in smaller increments. Each increment being relevant to the customer/user in some way.

That does not mean each increment is worthy of shipment. It´s sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales.

Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what´s there, not what might be possible in the future. But I digress…

In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that´s the challenge we need to accept if we´re serious about increasing Evolvability.

Fortunately higher Evolvability is not the only outcome of Spinning. Customer/management will like the increased flexibility and “getting more bang for the buck”.


Monday, June 2, 2014 #

Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal.

However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirements aspects is warranted.

Aspects of Functionality

There are two sides to Functionality requirements.

image

It´s about what a software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state.

I´m not using the term “function” here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below).
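
To make the distinction concrete, here is a small, purely hypothetical sketch of an Operation in C#: nothing but logic - an expression, a control statement, and an API call. (The surrounding method and class are just the containers C# requires, not part of the Operation itself; all names are invented for the example.)

    using System.Text.RegularExpressions;

    public class EmailValidation
    {
        // The Operation is the logic inside: an expression, a control statement, and an API call.
        public static bool IsValidEmail(string candidate)
        {
            if (string.IsNullOrWhiteSpace(candidate))   // control statement + expression
                return false;

            return Regex.IsMatch(candidate, @"^[^@\s]+@[^@\s]+\.[^@\s]+$");   // API call doing the transformation work
        }
    }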

Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value.

This should make clear, why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale when over time the number of operations increases. Without automated tests there is no guarantee formerly correct operations are still correct after more got added. To retest all previous operations manually is infeasible.

So whoever relies just on manual tests is not really balancing the two forces Operations and Correctness. With manual tests more weight is put on the Operations side of the scale. That might be ok for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run, from the first day of your project on.
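
As a minimal sketch of such an automated check - assuming NUnit and the hypothetical EmailValidation operation from the sketch above - a test like this keeps being re-checked automatically as more Operations get added:

    using NUnit.Framework;

    [TestFixture]
    public class EmailValidation_tests
    {
        // Each case guards the correctness of the operation on every test run.
        [TestCase("jane@example.com", true)]
        [TestCase("not-an-email", false)]
        [TestCase("", false)]
        public void Validates_typical_candidates(string candidate, bool expected)
        {
            Assert.AreEqual(expected, EmailValidation.IsValidEmail(candidate));
        }
    }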

Aspects of Quality

As important as Functionality is, it´s not the driver of software development. No software has ever been written just to implement some operation in code. We don´t need computers just to do something. Everything computers can do with software we could also do without them. Well, at least given enough time and resources.

We could calculate the most complex formulas without computers. We could do auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is… We don´t want to wait for the results very long. Or we want fewer errors. Or we want easier accessibility to complicated solutions.

So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software.

But Qualities come in at least two flavors:

image

Most important are Primary Qualities. Those are the Qualities the software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I´d say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible.

Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website. Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security.

That´s why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It´s a supporting quality, so to speak. It does not deliver value by itself.

With a password manager software this might be different. There security might be a Primary Quality.

Please don´t get me wrong: I don´t want to denigrate any Quality. There´s a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects.

When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean.

Aspects of Security of Investment

Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly.

Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality it´s bound to suffer under pressure.

Looking closer at what SoI means will help us become more conscious of it and make customers and management aware of the risks of neglecting it.

SoI to me has two facets:

image

Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement.

This must not lead to duct tape programming and banging out features by the dozen, though. Because customers don´t just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focussing too much on Production Efficiency, it will backfire.

Customers want PE not just today, but over the whole course of a software´s lifecycle. That means it´s not just about coding speed, but equally about code quality. If poor code quality leads to rework, then PE is at an unsatisfactory level.

Also if code production leads to waste it´s unsatisfactory. Because the effort which went into waste could have been used to produce value.

Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers.

Thanks to the Agile and Lean movements that´s increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not.

Making software development more efficient is important - but still sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability.

Delivering correct high quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed the sooner it´s going to get stuck. Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements.

An evolvable code base is the opposite of a brownfield. It´s code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the costs of adding feature X to an evolvable code base are independent of when it is requested - or at least the costs should only increase linearly, not exponentially.[2]

Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the actual design abilities of teams I see much room for improvement.

As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus the customer rarely states an explicit expectation with regard to it. That´s why I think, special care must be taken to not neglect it. Postponing it to some large refactorings should not be an option. Rather Evolvability needs to be a core concern for every single developer day.

This should not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That´s why more effort needs to be invested into it, to bring it on par with the other aspects, which usually are much more in focus.

In closing

As you see, requirements are of quite different kinds. To not take that into account will make it harder to understand the customer, and to make economic decisions.

Those sub-aspects of requirements are forces pulling in different directions. To improve performance might have an impact on Evolvability. To increase Production Efficiency might have an impact on security etc.

No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement.

Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes?

These and a thousand more questions do not mean anything should be different in a code base. But it´s important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and a suboptimal decision.

And how do we ensure to balance all requirement aspects?

That needs practices and transparency.

Practices means doing things a certain way and not another, even though that might be possible. We´re dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way at the whim of the moment.

Over the centuries rules and practices have been established for how to use knives. You don´t put them in people´s legs just because you feel like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it.

The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so requirements are balanced almost automatically.

In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, intellisense, refactoring support… That´s all nice and well. I don´t want to miss any of that. But I think it´s not enough. We´re missing mental tools, tools for making thinking and talking about software (independently of code) easier.

You might think, enough of such tools already exist like all those UML diagram types or Flow Charts. But then, isn´t it strange, hardly any team is using them to design software?

Or is that just due to a lack of education? I don´t think so. It´s a matter of the value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver.

So my conclusion is, we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That´s like with flying an airplane. Pilots don´t just jump in and take off for their destination. Yes, there are times when they are “flying by the seats of their pants”, when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See “The Checklist Manifesto” for very enlightening details on this.

Maybe then I should say it like this: We need more checklists for the complex business of software development.[3]


  1. But that´s what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it´s going to stop. Probably never - until “maintainability” hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, not a marathon, not even an ultra marathon. Because to all of these there is a foreseeable end. Software development is like continuously and forever running…

  2. And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance of there being already functionality helping its implementation increases. That should lead to less costs of feature X if it´s requested later than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.[1]

  3. Please don´t get me wrong: I don´t want to bog down the “art” of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and by decreasing waste and rework.


Wednesday, May 28, 2014 #

In a comment on my article on what I call Informed TDD (ITDD) reader gustav asked how this approach would apply to the kata “To Roman Numerals”. And whether ITDD wasn´t a violation of TDD´s principle of leaving out “advanced topics like mocks”.

I like to respond with this article to his questions. There´s more to say than fits into a commentary.

Mocks and TDD

I don´t see in what way TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access you can´t avoid using mocks. Because you don´t want to constantly run all your tests against the real resource.

True, in ITDD mocks seem to be in almost inflationary use. That´s not what you usually see in TDD demonstrations. However, there´s a reason for that as I tried to explain. I don´t use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion.

But if you think of mocks as “advanced” or if you don´t want to use a tool like JustMock, then you don´t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

ITDD for “To Roman Numerals”

gustav asked for the kata “To Roman Numerals”. I won´t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines.

Now here is, how I would do this kata differently.

1. Analyse

A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled.

“Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals” like written in this “requirement document” is not enough to start software development. Coding should only begin, if the interface between the “system under development” and its context is clear.

If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that´s your fellow developer.

Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that´s a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities.

In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions.
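
The original post shows its code as screenshots; as a bare sketch of what that API might look like (a static method is assumed here, the text does not say):

    using System;

    public class ArabicRomanConversions
    {
        public static string ToRoman(int arabic)
        {
            throw new NotImplementedException();   // only the interface exists so far
        }
    }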

Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article. They are supposed to be just “typical examples” without special meaning.

Given the acceptance test cases I then try to develop an understanding of the problem domain. I´ll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What´s more difficult, though, might be to find an efficient solution to convert into them automatically.

2. Solve

The usual TDD demonstration skips a solution finding phase. Like the interface exploration it´s mixed in with the implementation. But I don´t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They´re simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

Before you code you better have a plan what to code. This does not mean you have to do “Big Design Up-Front”. It just means: Have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that´s limited your solution will be limited, too.

Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution.

Here´s my solution approach:

The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members.

The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list.

The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to be single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution.

All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here´s the list of roman “digits” with their values:

{1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

Since I take IV, IX etc. as “digits” translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors.

If I move from the highest factor (M=1000) to the lowest (I=1), then conversion is a three step process:

  1. Find all the factors
  2. Translate the factors found
  3. Compile the roman representation

Translation is just a look-up. Finding, though, needs some calculation:

  1. Find the highest remaining factor fitting in the value
  2. Remember and subtract it from the value
  3. Repeat with remaining value and remaining factors

Please note: This is just an algorithm. It´s not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

With this solution in hand I finally can do what TDD advocates: find and prioritize test cases.

As I can see from the small process description above, there are three aspects to test:

  • Test the translation
  • Test the compilation
  • Test finding the factors

Testing the translation primarily means to check if the map of factors and digits is comprehensive. That´s simple, even though it might be tedious.

Testing the compilation is trivial.

Testing factor finding, though, is a tad more complicated. I can think of several steps:

  1. First check, if an arabic number equal to a factor is processed correctly (e.g. 1000=M).
  2. Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly.
  3. Then check, if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]).
  4. Finally check, if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly.

I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process.

3. Implement

First I write a test for the acceptance test cases. It´s red because there´s no implementation even of the API. That´s in conformance with “TDD lore”, I´d say:

image

Next I implement the API:

image

The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution:

image

My hypothesis is that those three functions in conjunction produce correct results on the API-level. I just have to implement them correctly. That´s what I´m trying now – one by one.
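
The faked implementation itself is shown as a screenshot in the original post. As an illustration only, it might look roughly like this - Translate() and Find_factors() are named later in the text, while Compile() and the exact signatures are my assumptions:

    // "Fake it": ToRoman() is already written in terms of the solution´s three process steps,
    // even though none of them is implemented yet.
    public static string ToRoman(int arabic)
    {
        var factors = Find_factors(arabic);   // 1. find all the factors
        var digits = Translate(factors);      // 2. translate the factors found
        return Compile(digits);               // 3. compile the roman representation
    }

    private static int[] Find_factors(int arabic) { throw new System.NotImplementedException(); }
    private static string[] Translate(int[] factors) { throw new System.NotImplementedException(); }
    private static string Compile(string[] digits) { throw new System.NotImplementedException(); }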

I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition:

image

As you can see I dare to test a private method. Yes. That´s a white box test. But as you´ll see it won´t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right.

Here´s the implementation to satisfy the test:

image

It´s as simple as possible. Right how TDD wants me to do it: KISS.

Now for the second equivalence partition: translating multiple factors. (It´s a pattern: if you need to do something repeatedly, separate the tests for doing it once and for doing it multiple times.)

image

In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science:

image
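
Again the actual code is a screenshot. A sketch of what a Translate() handling multiple factors might look like, with the factor/“digit” map taken from the list given earlier (signature and types are assumptions):

    // Requires: using System.Collections.Generic; using System.Linq;
    private static string[] Translate(int[] factors)
    {
        var map = new Dictionary<int, string> {
            {1,"I"}, {4,"IV"}, {5,"V"}, {9,"IX"}, {10,"X"}, {40,"XL"}, {50,"L"},
            {90,"XC"}, {100,"C"}, {400,"CD"}, {500,"D"}, {900,"CM"}, {1000,"M"}
        };
        return factors.Select(f => map[f]).ToArray();   // translation is just a look-up per factor
    }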

Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it.

What I find more important is having two tests.

Now for the next low hanging fruit: compilation. It´s even simpler than translation.

image

A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple. I don´t need to test .NET framework functionality. But again: if it serves the educational purpose…

image
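
For illustration, the compilation might be nothing more than a call to .NET string concatenation - which is why no framework functionality needs testing (a sketch, not the original listing):

    private static string Compile(string[] digits)
    {
        return string.Concat(digits);   // e.g. ["M", "CM", "L", "IV"] -> "MCMLIV"
    }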

Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions:

image

Again, I´m faking the implementation first:

image

I focus on just the first test case. No looping yet.

Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat.

That´s left for a drill down with a test of the fake function:

image

There are two main equivalence partitions, I guess: either the first factor is the appropriate one or some later factor is.

The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there´s always a matching factor. Which is the case since the smallest factor is 1.)

image

And the first of the equivalence partitions on the higher level also is satisfied:

image

Great, I can move on. Now for more than a single factor:

image

Interestingly not just one test becomes green now, but all of them. Great!

image

You might say that then I must not have done the simplest thing possible. And I would reply: I don´t care. I did the most obvious thing. But I also find this loop very simple. Even simpler than the recursion I had briefly thought of during the problem solving phase.
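
As an illustration of that loop, a sketch of what a Find_factors() following the algorithm above might look like (the descending factor list is assumed from the table given earlier):

    // Requires: using System.Collections.Generic;
    private static int[] Find_factors(int arabic)
    {
        var allFactors = new[] { 1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1 };
        var found = new List<int>();
        foreach (var factor in allFactors)
        {
            while (arabic >= factor)   // the same factor may be found twice, e.g. 2000 = [M,M]
            {
                found.Add(factor);
                arabic -= factor;
            }
        }
        return found.ToArray();
    }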

And by the way: Also the acceptance tests went green:

image

Mission accomplished. At least functionality wise.

Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren´t perfected yet. But this way I saved myself from refactoring tediousness.

At the end, though, some refactoring is required. But maybe in a different way than you would expect. That´s why I rather call it “cleanup”.

First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant.

image

Which leads to a small conversion in Find_factors():

image

And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain.

However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”.

This then is my final test class:

image

And this is the final production code:

image
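
Since the final listing is a screenshot, here is a guessed reconstruction assembled from the sketches above - not the original code, but one way the pieces could fit together, with the factor map pulled out into a class constant:

    using System.Collections.Generic;
    using System.Linq;

    public class ArabicRomanConversions
    {
        // The one place where factors and roman "digits" are defined.
        private static readonly Dictionary<int, string> Map = new Dictionary<int, string> {
            {1000,"M"}, {900,"CM"}, {500,"D"}, {400,"CD"}, {100,"C"}, {90,"XC"},
            {50,"L"}, {40,"XL"}, {10,"X"}, {9,"IX"}, {5,"V"}, {4,"IV"}, {1,"I"}
        };

        public static string ToRoman(int arabic)
        {
            var factors = Find_factors(arabic);
            var digits = Translate(factors);
            return Compile(digits);
        }

        private static int[] Find_factors(int arabic)
        {
            var found = new List<int>();
            foreach (var factor in Map.Keys.OrderByDescending(f => f))
            {
                while (arabic >= factor)
                {
                    found.Add(factor);
                    arabic -= factor;
                }
            }
            return found.ToArray();
        }

        private static string[] Translate(int[] factors)
        {
            return factors.Select(f => Map[f]).ToArray();
        }

        private static string Compile(string[] digits)
        {
            return string.Concat(digits);
        }
    }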

Test coverage as reported by NCrunch is 100%:

image

Reflection

Is this the smallest possible code base for this kata? Sure not. You´ll find more concise solutions on the internet.

But LOC are of relatively little concern – as long as I can understand the code quickly. So called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as it is often the case.

That´s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design.

I also could have implemented it bottom-up, since I knew some of the bottom of the solution: the leaves of the functional decomposition tree.

Where things became fuzzy because the design did not cover any more details - as with Find_factors() - I repeated the process in the small, so to speak: fake some top level, endure red high level tests, and first solve a simpler problem.

Using scaffolding tests (to be thrown away at the end) brought two advantages:

  • Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them.
  • I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

The bottom line for me thus is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code.

Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes.

And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable – and not left to a refactoring step at the end which is skipped often for different reasons.

 

PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won´t skip them because of that. And there are no shortcuts during implementation because of that.


Saturday, May 24, 2014 #

Software development is an economic endeavor. A customer is only willing to pay for value. What makes a software valuable is required to become a trait of the software. We as software developers thus need to understand and then find a way to implement requirements.

Whether or to what extent a customer really can know beforehand what´s going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she´s certain to want something - but then is not happy when that´s delivered.

Nevertheless requirements exist. And developers will only be paid if they deliver value. So we better focus on doing that.

Although it might sound trivial, I think it´s important to state the corollary: We need to be able to trace anything we do as developers back to some requirement.

You decide to use Go as the implementation language? Well, what´s the customer´s requirement this decision is linked to? You decide to use WPF as the GUI technology? What´s the customer´s requirement? You decide in favor of a layered architecture? What´s the customer´s requirement? You decide to put code in three classes instead of just one? What´s the customer´s requirement behind that? You decide to use MongoDB over MySql? What´s the customer´s requirement behind that? etc.

I´m not saying any of these decisions are wrong. I´m just saying: whatever you decide, be clear about the requirement that´s driving your decision. You have to be able to answer the question: Why do you think X will deliver more value to the customer than the alternatives?

Customers are not interested in romantic ideals of hard working, good willing, quality focused craftsmen. They don´t care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That´s all.

Fundamental aspects of requirements

If you´re like me you´re probably not used to such scrutiny. You want to be trusted as a professional developer - and decide quite a few things following your gut feeling. Or by relying on “established practices”.

That´s ok in general and most of the time - but still… I think we should be more conscious about our decisions. Which would make us more responsible, even more professional.

But without further guidance it´s hard to reason about many of the myriad decisions we´ve to make over the course of a software project.

What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements there are then smaller blobs. With them it´s easier to check if a decision falls in their scope.

image

Sure, every project has its very own requirements. But all of them belong to just three different major categories, I think. Any requirement pertains either to functionality, non-functional aspects, or sustainability.

image

For short I call those aspects:

  • Functionality, because such requirements describe which transformations a software should offer. For example: A calculator software should be able to add and multiply real numbers. An auction website should enable you to set up an auction anytime or to find auctions to bid for.
  • Quality, because such requirements describe how functionality is supposed to work, e.g. fast or secure. For example: A calculator should be able to calculate the sinus of a value much faster than you could in your head. An auction website should accept bids from millions of users.
  • Security of Investment, because functionality and quality need not just be delivered in any way. It´s important to the customer to get them quickly - and not only today but over the course of several years. This aspect introduces time into the “requirements equation”.

Security of Investment (SoI) sure is a non-functional requirement. But I think it´s important not to subsume it under the Quality (Q) aspect. That´s because SoI has quite special properties.

For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair dryer) you find that a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly with hardly any rust spots after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like) “unchangeability” (in the face of usage) is desirable.

With software you want the contrary. Software that cannot be changed is a waste. SoI for software means “changeability”. You want to be sure that the software you buy/order today can be changed, adapted, and improved over an unforeseeable number of years so as to fit changes in its usage environment.

But that´s not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it, cannot be determined by looking at metrics like Lines of Code or Cyclomatic Complexity or Afferent Coupling. They may give a hint… but they are far, far from precise.

That´s because of the nature of changeability. It´s different from performance or scalability. Also it´s because a customer cannot tell upfront, “how much” evolvability he/she wants.

Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely. A calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users etc. That´s all very or at least comparatively easy to determine.

But changeability… That´s a whole different thing. Nevertheless over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to check the development of the frequency of “WTF”s from developers ;-)

F and Q are “timeless” requirement categories. Customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.

In closing

A customer´s requirements are not monolithic. They are not all made the same. Rather they fall into different categories. We as developers need to recognize these categories when confronted with some requirement - and take them into account. Only then can we make true professional decisions, i.e. conscious and responsible ones.


  1. I call this fundamental trait of software “changeability” and not “flexibility” to distinguish to whom it´s a concern. “Flexibility” to me means, software as is can easily be adapted to a change in its environment, e.g. by tweaking some config data or adding a library which gets picked up by a plug-in engine. “Flexibility” thus is a matter of some user. “Changeability”, on the other hand, to me means, software can easily be changed in its structure to adapt it to new requirements. That´s a matter of the software developer.


Saturday, March 22, 2014 #

In the beginning there was, well, chaos. Software had no particular anatomy, i.e. agreed upon fundamental structure. It consisted of several different “modules” which were dependent on each other in arbitrary ways:

image

(Please note the line end symbol I´m using to denote dependencies. You´ll see in a minute why I´m deviating from the traditional arrow.)

Then came along the multi-layer architecture. A very successful pattern to bring order into chaos. Its benefits were twofold:

  1. Multi-layer architecture separated fundamental concerns recurring in every software.
  2. Multi-layer architecture aligned dependencies clearly from top to bottom.

image

How many layers there are in a multi-layer architecture does not really matter. It´s about the Separation of Concerns (SoC) principle and disentangling dependencies.

This was better than before – but led to a strange effect: business logic was now dependent on infrastructure. Technically this was overcome sooner or later by applying the Inversion of Control (IoC) principle. That way the design time dependencies between layers were separated from the runtime dependencies.

image

This seemed to work – except now the implementation did not really mirror the design anymore. Also the layers and the very straightforward dependencies did not match a growing number of aspects anymore.

So the next evolutionary step in software anatomy moved away from layers and top-bottom thinking to rings. Robert C. Martin summed up a couple of these architectural approaches in his Clean Architecture:

image

It keeps and even details the separation of concerns, but changes the direction of the dependencies. They are pointing from technical to non-technical, from infrastructure to domain. The maxim is: don´t let domain specific code depend on technologies. This is to further the decoupling between concerns.

This leads to implementations like this:

For example, consider that the use case needs to call the presenter. However, this call must not be direct because that would violate “The Dependency Rule”: No name in an outer circle can be mentioned by an inner circle. So we have the use case call an interface (Shown here as Use Case Output Port) in the inner circle, and have the presenter in the outer circle implement it.

The same technique is used to cross all the boundaries in the architectures. We take advantage of dynamic polymorphism to create source code dependencies that oppose the flow of control so that we can conform to “The Dependency Rule” no matter what direction the flow of control is going in.

For Robert C. Martin the rings represent implementations as well as interfaces and calling an outer ring “module” implementation from an inner ring “module” implementation at runtime is ok, as long as design time dependencies of interfaces are just inward pointing.

While the Clean Architecture diagram looks easy, the actual code to me seems somewhat complicated at times.
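
As an illustration of the technique described in the quote, here is a minimal C# sketch (all names are invented for the example):

    // Inner circle: the use case depends only on an interface it owns.
    public interface IUseCaseOutputPort
    {
        void Present(string result);
    }

    public class SomeUseCase
    {
        private readonly IUseCaseOutputPort output;

        public SomeUseCase(IUseCaseOutputPort output) { this.output = output; }

        public void Execute(string request)
        {
            // ... domain logic ...
            output.Present("result for " + request);   // control flows outward through the port
        }
    }

    // Outer circle: the presenter implements the interface of the inner circle, so the
    // source code dependency points inward, against the flow of control.
    public class ConsolePresenter : IUseCaseOutputPort
    {
        public void Present(string result) { System.Console.WriteLine(result); }
    }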

Suggestion for a next evolutionary step

So far the evolution of software anatomy has two constants: it´s about separating concerns and aligning dependencies. Both are good in terms of decoupling and testability etc. – but my feeling is we´re hitting a glass ceiling. What could be the next evolutionary step? Even more alignment of dependencies?

No. My suggestion is to remove dependencies from the primary picture of software anatomy altogether. Dependencies are important, we can´t get rid of them – but we should stop staring at them.

Here´s what I think is the basic anatomy of software (which I call “software cell”):

image

The arrows here do not (!) mean dependencies. They are depicting data flow. None of the “modules” (rectangles, triangles, core circle) are depending on each other to request a service. There are no client-service relationships. All “modules” are peers in that they do not (!) even know each other.

The elements of my view roughly match the Clean Architecture like this:

image

Portals and Providers form a membrane around the core. The membrane is responsible for isolating the core from an environment. Portals and providers encapsulate infrastructure technologies for communication between environment and core. The core on the other hand represents the domain of the software. It´s about use cases, if you want, and domain objects.

My focus when designing software is on functionality. So all “modules” you see are functional units. They do, process, transform, calculate, perform. They are about actions and behavior.

In my view, the primary purpose of software design is to wire up functional units in such a way that a desired overall behavior (functional as well as non-functional) is achieved. In short, it´s about building “domain processes” (supported by infrastructure). That´s why I focus on data flow, not on control flow. It´s more along the lines of Functional Programming, and less like Object Oriented Programming.

Here´s how I would zoom in and depict some “domain process”:

image

Some user interacts with a portal. The portal issues a processing request. Some “chain” of functional units work on this request. They transform the request payload, maybe load some data from resources in the environment, maybe cause some side effect in some resources in the environment. And finally produce some kind of result which is presented to the user in a portal.

None of these “process steps” knows the other. They follow the Principle of Mutual Oblivion (PoMO). That makes them easy to test. That makes it easy to change the process, because any data flow can be deviated without the producer or consumer being aware of it.

In the picture of Clean Architecture Robert C. Martin seems to hint at something like this when he defines “Use Case Ports”. But it´s not explicit. That, however, I find important: make flow explicit and radically decouple responsibilities.
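
To illustrate the PoMO, here is a minimal C# sketch of two “process steps” connected purely by data flow (all names invented for the example); neither one knows the other - the wiring happens elsewhere:

    using System;

    // Each functional unit publishes its output via an event and has no idea
    // who (if anybody) consumes it.
    public class ValidateRequest
    {
        public event Action<string> Result;

        public void Process(string request)
        {
            var trimmed = request.Trim();
            if (Result != null) Result(trimmed);   // publish the output; no knowledge of consumers
        }
    }

    public class MapToResponse
    {
        public event Action<string> Result;

        public void Process(string validated)
        {
            if (Result != null) Result("response for " + validated);
        }
    }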

Two pieces are missing from this puzzle: What about the data? And what about wiring up the functional units?

Well, you got me ;-) Dependencies returning. As I said, we need them. But differently than before, I´d say.

Functional units of data flows like above surely share data which means they depend on it:

image

If data is kept simple, though, such dependencies are not very dangerous. (See how useful it is to have two symbols for relationships between functional units: one for dependencies and one for data flow.)

So far, wiring up the flows just happens. Like building dependency hierarchies at runtime just happens. Usually the code to inject instances of layer implementations at runtime is not shown. But it´s there, and a DI container knows all the interfaces and their implementations.

For the next evolutionary step of software anatomy, however, I find it important to officially introduce the authority which is responsible for such wiring up; it´s some integrator.

image

If integration is kept simple, though, such dependencies are not very dangerous. “Simple” here (as above with data) means: does not contain logic, i.e. expressions or control statements. If this Integration Operation Segregation Principle (IOSP) is followed, integration code might be difficult to test due to its dependencies – but it´s very simple to write and check during a review.
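Here´s what such an integrator could look like for the two process steps sketched above (again, the names are just illustrative). Note that it contains no expressions and no control statements – it only wires operations into a data flow:

```csharp
// Integration per the IOSP: no logic of its own, only calls to operations.
public class ProcessRequest
{
    private readonly ParseRequest parse = new ParseRequest();
    private readonly ProduceResult produce = new ProduceResult();

    public Result Run(string payload)
    {
        Request request = parse.Process(payload); // data flows out of one step...
        return produce.Process(request);          // ...and into the next
    }
}
```

The operations can be tested without mocks, because they depend on nothing; and the integrator is so trivial that a review is enough to check it.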

Stepping back you can see that my dependency story is different from the ones so far:

  • There are no dependencies between functional aspects. They don´t do request/response service calls on each other, but are connected by data flows.
  • There are only dependencies between fundamental organizational concerns completely orthogonal to any domain: integration, operation, and data.

image

This evolved anatomy of software does not get rid of dependencies. You will continue to use your IoC and DI containers ;-) But it will make testing of “work horse code” (operations) easier, much easier. And the need for using mock frameworks will decrease. At least that´s my experience of some five years designing software like this.

Also, as you´ll find if you try this out, specifications of classes will change. Even with IoC a class will be defined by 1+n interfaces: the interface it implements plus all the interfaces of “service classes” it uses.

But with software cells and flows the class specifications consist only of 1 interface: the interface the class implements. That´s it. Well, at least that´s true for the operation classes which follow the PoMO. That´s useful because those classes are heavy with logic, so you want to make it as simple as possible to specify and test them.
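Here´s a hedged sketch of that difference, reusing the illustrative types from above (the service interfaces are made up):

```csharp
// Classic IoC style: the class specification spans 1 + n interfaces -
// the one it implements plus the interfaces of the services it uses.
public interface IProduceResult { Result Process(Request request); }
public interface IDataStore     { string Load(string key); }
public interface ILog           { void Write(string message); }

public class ProduceResultWithServices : IProduceResult
{
    private readonly IDataStore store;
    private readonly ILog log;

    public ProduceResultWithServices(IDataStore store, ILog log)
    {
        this.store = store;
        this.log = log;
    }

    public Result Process(Request request)
    {
        log.Write("processing");                                  // service call
        return new Result { Text = store.Load(request.Payload) }; // service call
    }
}

// PoMO operation: the specification is just the one interface it implements.
public class ProduceResultOperation : IProduceResult
{
    public Result Process(Request request)
    {
        return new Result { Text = request.Payload.ToUpper() };
    }
}
```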

Conclusion

The evolution of a basic software anatomy has come far – but there is still room for improvement. As long as everything revolves around dependencies between technological and domain aspects, there is unholy coupling. So the next logical move is to get rid of those dependencies – and relegate them to the realm of organizational concerns. In my opinion the overall structure becomes much simpler that way. Decluttered. More decoupled.

Why not give it a chance?

 

PS: For more details on flows, PoMO, and IOSP see my blog series here – or make yourself comfortable in a chair next to your fireplace and read my Leanpub eBook on it :-)


Sunday, March 16, 2014 #

Doing explicit software architecture is nothing for the faint of heart, it seems. Software architecture carries an air of heavyweight process, of something better left to specialists – or to be eschewed altogether. Both views are highly counterproductive, though. Leaving architecture to specialists builds walls where there should be none; it hampers feedback and flexibility. And not explicitly architecting (or just designing) software hurts evolvability as well as collective software ownership.

So the question is, how to do “just enough” software design up-front. What´s the right amount? What´s an easy way to do it?

Since 2005 I´ve been on a quest to find answers to these questions. And I´m glad I have found some – at least for me ;-) I´ve lost my “fear of the blank flipchart” when confronted with some requirements document. No longer do I hesitate to start designing software. I´ve shrugged off UML shackles, I´ve gotten off the misleading path of object oriented dogma.

This is not to say, there is no value in some UML diagrams or features of object oriented technologies. Of course there is – as long as it helps ;-)

But as with many practices one never reaches the finishing line when it comes to software architecture. Although I feel comfortable attacking just about any requirements challenge, it´s one thing to feel confident – and an altogether different thing to actually live up to the challenge. So I´m on a constant lookout for exercises in software architecture to further hone my skills. That means applying my method – which is a sampling of many approaches with some added idiosyncrasies – plus reflecting on the process and the outcome.

At the Coding Dojo of the Clean Code Developer School I´ve compiled more than 50 such exercises of different sizes (from small code/function katas to architecture katas). If you like, try them out yourself or with your team (some German language skills required, though ;-).

And recently I stumbled across another fine architecture kata. It´s from Simon Brown whose book “Software Architecture for Developers” I read. First the exercise was only in the book, but then he made it public. I included it in the Coding Dojo and added line numbers and page numbers. Find the full text for the requirements of a Financial Risk System here.

image

Since Simon included pictures of architectural designs for this exercise from students of his trainings, I thought, maybe I should view that as a challenge and try it myself. If I´m confident my approach to software architecture is helpful, too, then why not expose myself with it. Maybe there are interesting similarities to be discovered – maybe there are differences that could be discussed.

Following I´ll approach the architecture for the Financial Risk System (FRS) my way. This will show how I approach a design task, but it might lack some explanation as to the principles underlying this approach. There´s not enough room here, though, to lay out my whole thought framework. But I´m working on a book… ;-) It´s called “The Architect´s Napkin – The Cheat Sheet”. But beware: the first version is in German.

Why it´s called the Architect´s Napkin you´ll see in a minute – or read here. Just let me say: If software architecture is to become a discipline for every developer it needs to be lightweight. And design drawings should fit on an ordinary napkin. All else will tend to be too complicated and hard to communicate.

And now for some design practice. Here´s my toolbox for software architects:

image

Basic duality: system vs environment

Every software project should start by focusing on what its job is, and what not. Its job is to build a software system. That´s what has to be at the center of everything. By putting something at the center, though, other stuff is not at the center. That´s the environment of what´s at the center. In the beginning (of a software project) thus there is duality: a system to build vs the environment (or context) of the system.

And further the system to build is not created equal in all parts. Within the system there is a core to be distinguished from the rest. At the core of the system is the domain. That´s the most important part of any software system. That´s what we need to focus on most. It´s for this core that a customer wants the system in the first place.

image

That´s a simple diagram. But it´s an important one as you´ll see. Best of all: you can draw it right away when your boss wants you to start a new software project ;-) It even looks the same for all software systems. Sure, it´s very, very abstract. But that helps as long as you don´t have a clue what the requirements are about.

Spotting actors

With the system put into the focus of my attention I go through the requirements and try to spot who´s actually going to use it. Who are the actors, who is actively influencing the system? I´m not looking for individual persons, but roles. And these roles might be played by non-human actors, i.e. other software systems.

  • The first actor I encounter is such a non-human actor: the risk calculation scheduler. It requests the software system to run and produce a risk report. Lines 8 and 9 in the requirements document allude to this actor.
  • Then on page 2, line 54f the second actor is described: risk report consumer. It´s the business users who want to read the reports.
  • Line 56f on the same page reveals a third actor: the calculation configurator. This might be the same business user reading a report, but it´s a different role he´s playing when he changes calculation parameters.
  • Finally lines 111ff allude to a fourth actor: the monitoring scheduler. It starts the monitoring which observes the risk calculation.

The risk calculation scheduler and the monitoring scheduler are special in so far as they are non-human actors. They represent some piece of software which initiates some behavior in the software system.

Here´s the result of my first pass through the requirements:

image

Now I know who´s putting demands on the software system. All functionality is there to serve some actor. Actors need the software system; they trigger it in order to produce results [1].

Compiling resources

During my second pass through the requirements I focus on what the software system needs. Almost all software depends on resources in the environment to do its job. That might be a database or just the file system. Or it might be some server, a printer, a webcam, or some other hardware. Here are the resources I found:

  • Page 1, line 10f: the existing Trade Data System (TDS)
  • Page 1, line 11: the existing Reference Data System (RDS)
  • Page 2, line 52f: the risk report (RR)
  • Page 2, line 54f: some means to distribute the RR to the risk report consumers (risk report distribution, RRD)
  • Page 2, line 54f: a business user directory (BUD)
  • Page 2, line 56f: external parameters (EP) for the risk calculation
  • Page 3, line 92ff: an audit log (AL)
  • Page 4, line 111ff: SNMP
  • Page 4, line 117f: an archive (Arc)

image

Nine resources to support the whole software system. And all of them require some kind of special technology (library, framework, API) to use them.

The diagram as a checklist

What I´ve done so far was just analysis, not design. I just harvested two kinds of environmental aspects: actors/roles and resources. But they are important for three reasons. Firstly they help structure the software system as you´ll see later. Secondly they guide further analysis of the requirements. And last but not least they function as a checklist for the architect.

Each environmental element poses questions, some specific to its kind, some general. And by observing how easy or difficult it is to answer them, I get a feeling for the risk associated with them.

My third pass through the requirements is not along the document, but around the circle of environmental aspects identified. As I focus on each, I try to understand it better. Here are some questions I´d feel prompted by the “visual checklist” to ask:

  • Actor “Risk calculation scheduler”: How should the risk calculation be started automatically each day? What´s the operating system it´s running on anyway? Windows offers a task scheduler, on Linux there is Crontab. But there are certainly more options. Some more research is needed – and talking to the customer.
  • Actor “Risk report consumer”: How should consumers view reports? They need to be able to use Excel, but is that all? Maybe Excel is just a power tool for report analysis, and a quick overview should be gained more easily? Maybe it´s sufficient to send them the reports via email. Or maybe just a notification of a new report should be sent via email, and the report itself can be opened with Excel from a file share? I need to talk to the customer.
  • Actor “Calculation configurator”: What kind of UI should the configurator be using? Is a text editor enough to access a text file containing the parameters – protected by operating system access permissions? Or is a dedicated GUI editor needed? I need to talk to the customer.
  • Actor “Monitoring scheduler”: The monitoring aspect to me is pretty vague. I don´t really have an idea yet how to do it. I feel this to be an area of quite some risk. Maybe monitoring should be done by some permanently running windows service/daemon which checks for new reports every day and can be pinged with a heartbeat by the risk calculation? Some research required here, I guess. Plus talking to the customer about how important monitoring is compared to other aspects.
  • Resource “TDS”:
    • What´s the data format? XML
    • What´s the data structure? Simple table, see page 1, section 1.1 for details
    • What´s the data volume? In the range of 25000 records per day within the next 5 years (see page 3, section 3.b)
    • What´s the quality of the data, what´s the reliability of data delivery? No information found in the requirements.
    • How to access the data? It´s delivered each day as a file which can be read by some XML API. Sounds easy.
    • Available at 17:00h (GMT-4 (daylight savings time) or GMT-5 (winter time)) each day.
  • Resource “RDS”:
    • What´s the data format? XML
    • What´s the data structure? Not specified; need to talk to the customer.
    • What´s the data volume? Some 20000 records per day (see page 3, section 3b)
    • What´s the quality of the data, what´s the reliability of data delivery? No information found in the requirements.
    • How to access the data? It´s delivered each day as a file which can be read by some XML API. Sounds easy – but record structure needs to be clarified.
  • Resource “Risk report”:
    • What´s the data format? It needs to be Excel compatible, that could mean CSV is ok. At least it would be easy to produce. Need to ask the customer. If more than CSV is needed, e.g. Excel XML or worse, then research time has to be allotted, because I´m not familiar with appropriate APIs.
    • What´s the data structure? No information has been given. Need to talk to the customer.
    • What´s the data volume? I presume it depends on the number of TDS records, so we´re talking about some 25000 records of unknown size. Need to talk to the customer.
    • How to deliver the data? As already said above, I´m not sure if the risk report should be stored just as a data file or be sent to the consumers via email. For now I´ll go with sending it via email. That should deliver on the availability requirement (page 3, section 3.c) as well as on the security requirements (page 3, section e, line 82f, 84f, 89f).
    • Needs to be ready at 09:00h (GMT+8) the day following TDS file production.
  • Resource “Risk report distribution”: This could be done with SMTP. The requirements don´t state, how the risk reports should be accessed. But I guess I need to clarify this with the customer.
  • Resource “Business user directory”: This could be an LDAP server or some RDBMS or whatever. The only thing I´d like to assume is, the BUD contains all business users who should receive the risk reports as well as the ones who have permission to change the configuration parameters. I would like to run a query on the BUD to retrieve the former, and use it for authentication and authorization for the latter. Need to talk to the customer for more details.
  • Resource “External parameters”: No details on the parameters are given. But I assume it´s not much data. The simplest thing probably would be to store them in a text file (XML, Json…). That could be protected by encryption and/or file system permissions, so only the configurator role can access it. Need to talk to the customer if that´d be ok.
  • Resource “Audit log”:
    • Can the AL be used for what should be logged according to page 3, section 3.f and 3.g?
    • Is there logging infrastructure already in place which could be used for the AL?
    • What are the access constraints for the AL? Does it need special protection?
  • Resource “SNMP”: I don´t have any experience with SNMP. But it sounds like SNMP traps can be sent via an API even from C# (which I´m most familiar with). Need to do some research here. The most important question is, how to detect the need to send a trap (see above the monitoring scheduler).
  • Resource “Archive”:
    • Is there any Arc infrastructure already in place?
    • What are the access constraints for the Arc? Does it need special protection?
    • My current idea would be to store the TDS file and the RDS file together with the resulting risk report file in a zip file and put that on some file server (or upload it into a database server). But I guess I need to talk to the customer about this.

In addition to the environmental aspects there is the domain to ask questions about, too:

  • How are risks calculated anyway? The requirements don´t say anything about that. Need to talk to the customer, because that´s an important driver for the functional design.
  • How long will it take to calculate risks? There is no information on that either in the requirements document; is it more like some msec for each risk or like several seconds or even minutes? Need to talk to the customer, because that´s an important driver for the architectural design (which is concerned with qualities/non-functional requirements).
  • When the TDS file is produced at 17:00h NYC time (GMT-5) on some day n it´s 22:00h GMT of the same day, and 06:00h on day n+1 in Singapore (GMT+8). This gives the risk calculation some 3 hours to finish. Can this be done with a single process? That needs to be researched. The goal is, of course, to keep the overall solution as simple as possible.

So much for a first run through the visual checklist the system-environment diagram provides.

image

The purpose of this was to further understand the requirements – and identify areas of uncertainty. This way I got a feeling for the risks lurking in the requirements, e.g.

  • The largest risk to me currently is with the domain: I don´t know how the risk calculation is done, which has a huge impact on the architecture of the core.
  • Then there is quite some risk in conjunction with infrastructure. What kind of infrastructure is available? What are the degrees of freedom in choosing new infrastructure? How rigid are security requirements?
  • Finally there is some risk in technologies I don´t have any experience with.

Here´s a color map of the risk areas identified:

image

With such a map in my hand, I´d like to talk to the customer. It would give us a guideline in our discussion. And it´s a list of topics for further research. Which means it´s kind of a backlog for things to do and get feedback on.

But alas, the customer is not available. Sounds familiar? ;-) So what can I do? Probably the wisest thing to do would be to stop further design and wait for the customer. But this would spoil the exercise :-) So I´ll continue to design, tentatively. And hopefully this does not turn out to be waste in the end.

Refining to applications – Design for agility

The FRS is too big to be implemented or even further discussed and designed as a whole. It needs to be chopped up :-) I call that slicing, in contrast to the usual layering. At this point I´m not interested in the more technical details that layers represent. I´d like to view the system through the eyes of the customer/users. So the next step for me is to find increments that make sense to the customer and can be focused on in turn.

For this slicing I let myself be guided by the actors of the system-environment-diagram. I´d like to slice the system in a way so that each actor gets its own entry point into it. I call that an application (or app for short).

image

Each app is a smaller software system by itself. That´s why I use the same symbol for them like for the whole software system. Together the apps make up the complete FRS. But each app can be delivered separately and provides some value to the customer. Or I work on some app for a while without finishing it, then switch to another to move it forward, then switch to yet another etc. Round and round it can go ;-) Always listening to what the customer finds most important at the moment – or where I think I need feedback most.

As you can see, each app serves a single actor. That means, each app can and should be implemented in a way to serve this particular actor best. No need to use the same platform or UI technologies for all apps.

Also the diagram shows how I think the apps share resources:

  • The Risk Report Calculation app needs to read config data from EP and produces a report RR to be sent to business users listed in BUD. Progress or hindrances are logged to AL.
  • The Config Editor also needs to access BUD to check, who´s authorized to edit the data in EP. Certain events are logged to AL.
  • The Report Reader just needs to access the report file. Authorization is implicit: since the business user is allowed to access the RR folder on some file share, he can load the report file with Excel. But if need be, the Report Reader could be more sophisticated and require the business user to authenticate himself. Then the report reader would also need access to BUD.
  • The Monitor app checks the folder of the report files each day to see if a file has arrived. In addition the Monitor app could be a resource to the Report Calculation, which can send it a heartbeat to signal it´s doing well.

The other resources are used just by the Report Calculation.

Now that I have sliced up the whole software system into applications, I can focus on them in turn. What´s the most important one? Where should I zoom in?

Hosting applications – Design for quality

Zooming in on those applications can mean two things: I could try to slice them up further. That would mean I pick an application and identify its dialogs and then the interactions within each dialog. That way I´d reach the function level of a software system, where each function represents an increment. Such slicing would be further structuring the software system from the point of view of the domain. It would be agile design, since the resulting structure would match the view of the customer. Applications, dialogs, and interactions are of concern to him.

Or I could zoom in from a technical angle. I´d leave the agile domain dimension of design which focuses on functionality. But then which dimension should I choose? There are two technical dimensions, in my view. One is concerned with non-functional requirements or qualities (e.g. performance, scalability, security, robustness); I call it the host dimension. Its elements describe the runtime structure of software. The other is concerned with evolvability and production efficiency (jointly called the “security of investment” aspect of requirements); I call it the container dimension. Its elements describe the design time structure of software.

So, which dimension to choose? I opt for the host dimension. And I focus on the Risk Report Calculation application. That´s the most important app of the software system, I guess.

Whereas the domain dimension of my software architecture approach decomposes a software system into ever smaller slices called applications, dialogs, interactions, the host dimension provides decomposition levels like device, operating system process, or thread.

Focusing on one app the questions thus are: How many devices are needed to run the app so it fulfills the non-functional requirements? How many processes, how many threads?

What are the non-functional requirements determining the number of devices for the calculation app of the FRS? It needs to run in the background (page 1, line 7f), it needs to generate the report within 3 hours (line 28 + line 63), it needs to be able to log certain events (page 3, line 94) and be monitored (page 4, lines 111ff).

How many devices need to be involved to run the calculation strongly depends on how long the calculations take. If they are not too complicated, then a single server will do.

And how many processes should make up the calculation on this server? Reading section 2, lines 43ff I think a single process will be sufficient. It can be started automatically by the operating system, it is fast enough to do the import, calculation, notification within 3 hours. It can have access to the AL resource and can be monitored (one way or the other).

At least that´s what I want to assume lacking further information as noted above.

image

Of course this host diagram again is a checklist for me. For each new element – device, process – I should ask appropriate questions, e.g.

  • Application server device:
    • Which operating system?
    • How much memory?
    • Device name, IP address?
    • How can an app be deployed to it?
  • Application process:
    • How can it be started automatically?
    • Which runtime environment to use?
    • What permissions are needed to access the resources?
    • Should the application code own the process or should it run inside an application server?

The device host diagram and the process host diagram look pretty much the same. That´s because both contain only a single element. In other cases, though, a device is decomposed into several processes. Or there are several devices each with more than one process.

Also, these are only the processes which seem necessary to fulfill the quality requirements. More might be added to improve evolvability.

Nevertheless drawing those diagrams is important. Each host level (device, process – plus three more) should be checked. Each can help to fulfill certain non-functional requirements, for example: devices are about scalability and security, processes are about robustness and performance and security, threads are about hiding or lowering latency.

Separating containers – Design for security of investment

Domain slices are driven directly by the functional requirements. Hosts are driven by quality concerns. But what drives elements like classes or libraries? They belong to the container dimension of my design framework. And they can only partly be derived directly from requirements. I don´t believe in starting software design with classes. They don´t provide any value to the user. Rather I start with functions (see below) – and then find abstractions on higher container levels for them.

Nevertheless the system-environment-diagram already hints at some containers. On what level of the container hierarchy they should reside, is an altogether different question. But at least separate classes (in OO languages) should be defined to represent them. Mostly also separate libraries are warranted for even more decoupling.

Here´s the simple rule for the minimum number of containers in any design:

  • communication with each actor is encapsulated in a container
  • communication with each resource is encapsulated in a container
  • the domain of course needs its own container – at least one, probably more
  • a dedicated container to integrate all functionality; usually I don´t draw this one; it´s implicit and always present, but if you like, take the large circle of the system as its representation

image

Each container has its own symbol: the actor facing container is called a portal and drawn as a rectangle, the resource facing containers are called providers and drawn as triangles, and the domain is represented as a circle.

That way I know the calculation app will consist of 9+1+1+1=12 containers. It´s a simple and mechanical separation of concerns. And it serves the evolvability as well as production efficiency.
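To make the counting concrete, here´s a hedged sketch of that minimal container set as bare C# classes. The names are my own shorthand for the actor and the resources identified above, not terms from the requirements document:

```csharp
// 1 portal for the actor triggering the calculation
public class CalculationSchedulerPortal { }

// 9 providers, one per resource
public class TdsProvider { }                  // Trade Data System
public class RdsProvider { }                  // Reference Data System
public class RiskReportProvider { }           // risk report file
public class ReportDistributionProvider { }   // risk report distribution, e.g. email
public class BusinessUserDirectoryProvider { }
public class ExternalParametersProvider { }
public class AuditLogProvider { }
public class SnmpProvider { }
public class ArchiveProvider { }

// 1 domain container (at least)
public class RiskCalculationDomain { }

// 1 integrating container (usually implicit)
public class CalculationApp { }
```

What exactly goes on inside these containers is still open; the flow design further down will shed some light on that.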

By encapsulating each actor/resource communication in its own container, it can more easily be replaced, tested, and implemented in parallel. Also this decouples the domain containers from the environment.

Interestingly, though, there is no dependency between these containers! At least in my world ;-) None knows of the others. Not even the domain knows about them. This makes my approach different from the onion architecture or clean architecture, I guess. They prescribe certain dependency orientations. But I don´t. There are simply no dependencies :-) Unfortunately I can´t elaborate on this further right here. Maybe some other time… For you to remember this strange approach here is the dependency diagram for the containers:

image

No dependencies between the “workhorses”, but the integration knows them all. Such high efferent coupling is not dangerous, though. The integration does not contain any logic; it merely calls the operations of the other containers. Integration is a special responsibility of its own.

Although I know a minimal set of containers, I don´t know much about their contracts. Each encapsulates some API through which it communicates with a resource/actor. But how the services of the containers are offered to their environment is not clear yet. I could speculate about it, but most likely that would violate the YAGNI principle.

There is a way, however, to learn more about those contracts – and maybe find more containers…

Zooming in – Design for functionality

So far I´ve identified quite some structural elements. But how does this all work together? Functionality is the first requirement that needs to be fulfilled – although it´s not the most important one [2].

I switch dimensions once again in order to answer this question. Now it´s the flow dimension I´m focusing on. How does data flow through the software system and get processed?

On page 2, section 2 the requirements document gives some hints. This I would translate into a flow design like so. It´s a diagram of what´s going on in the calculation app process [3]:

image

Each shape is a functional unit which does something [4]. The rectangle at the top left is the starting point. It represents the portal. That´s where the actor “pours in” its request. The portal transforms it into something understandable within the system. The request flows to Import, which produces the data that Calculate then transforms into a risk report.

That´s the overall data flow. But it´s too coarse grained to be implemented. So I refine it:

  • Zooming into Import reveals two separate import steps – which could be run in parallel – plus a Join producing the final output.
  • Zooming into Calculate reveals several processing steps. First the input from the Import is transformed into risk data. Then the risk data is transformed into the actual risk report, about which the business users are then informed. Finally the TDS/RDS input data (as well as the risk report) gets archived.

The small triangles hint at some resource access within a processing step. Whether the functional unit itself would do that or if it should be further refined, I won´t ponder here. I just wanted to quickly show this final dimension of my design approach.

For Import TDS and Import RDS I guess I could derive some details about the respective container contracts. Both seem to need just one function, e.g. TDSRecord[] Import(string tdsFilename).
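Speculating as little as possible, those two contracts might start out like this. The record types are placeholders; the actual field lists would have to be clarified with the customer (the TDS structure is described in section 1.1, the RDS structure is still unknown):

```csharp
// Placeholder record types - the actual fields still need to be pinned down.
public class TDSRecord { /* trade id, value, ... (see section 1.1) */ }
public class RDSRecord { /* reference data fields, structure still unclear */ }

public interface ITdsImport
{
    TDSRecord[] Import(string tdsFilename);
}

public interface IRdsImport
{
    RDSRecord[] Import(string rdsFilename);
}
```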

The other functional units hint at some more containers to consider. Report generation (as opposed to report storage) looks like a different responsibility than calculating risks, for example. Also Import and Calculate have a special responsibility: integration. They see to it that the functional units form an appropriate flow.

At least the domain thus is decomposed into at least three containers:

  • integration
  • calculation
  • report generation

Each responsibility warrants its own class, I´d say. That makes it 12-1+3=14 containers for the calculation application.
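Translated into code, the two integrating functional units could look roughly like the following sketch. It builds on the import contracts from above; all other class names are my own stand-ins, and the operation bodies are stubbed out:

```csharp
// Placeholder data types flowing between the process steps.
public class ImportedData { }
public class RiskData { }
public class RiskReportDocument { }

// Operations: this is where the logic would live (stubbed here).
public class JoinImports          { public ImportedData Process(TDSRecord[] t, RDSRecord[] r) { return new ImportedData(); } }
public class RiskCalculation      { public RiskData Process(ImportedData d) { return new RiskData(); } }
public class ReportGeneration     { public RiskReportDocument Process(RiskData r) { return new RiskReportDocument(); } }
public class ConsumerNotification { public void Send(RiskReportDocument r) { /* via the distribution provider */ } }
public class Archiving            { public void Store(ImportedData d, RiskReportDocument r) { /* via the archive provider */ } }

// Integration: Import wires the two import steps (sequential here, could be parallelized) and the join.
public class Import
{
    private readonly ITdsImport tds;
    private readonly IRdsImport rds;
    private readonly JoinImports join = new JoinImports();

    public Import(ITdsImport tds, IRdsImport rds) { this.tds = tds; this.rds = rds; }

    public ImportedData Run(string tdsFile, string rdsFile)
    {
        return join.Process(tds.Import(tdsFile), rds.Import(rdsFile));
    }
}

// Integration: Calculate wires calculation, report generation, notification, and archiving.
public class Calculate
{
    private readonly RiskCalculation calculation = new RiskCalculation();
    private readonly ReportGeneration generation = new ReportGeneration();
    private readonly ConsumerNotification notification = new ConsumerNotification();
    private readonly Archiving archiving = new Archiving();

    public void Run(ImportedData data)
    {
        RiskData risks = calculation.Process(data);
        RiskReportDocument report = generation.Process(risks);
        notification.Send(report);
        archiving.Store(data, report);
    }
}
```

Note that, following the IOSP, Import and Calculate contain no logic – only calls.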

Do you see how those containers are a matter of abstraction? I did not start out with them; rather I discovered them by analyzing the functional structure, the processing flow.

Retrospective

So much for my architectural design monologue ;-) A monologue it had to be, since the customer was not at hand to answer my many questions. Nevertheless I hope you got an impression of my approach to software design. The steps would not have been different if a customer had been available. However, the resulting structure might look different.

Result #1: Make sure the customer is close by for questions when you start your design.

The exercise topic itself I found not particularly challenging. The interesting part was missing ;-) No information on what “calculating risks” means. But what became clear to me once more was:

Result #2: Infrastructure is nasty

There are so many risks lurking in infrastructure technologies and security constraints and deployment and monitoring. Therefore it´s even more important to isolate those aspects in the design. That´s what portals and providers are for.

To write up all this took me a couple of hours. But the design itself maybe was only half an hour of effort. So I would not call it “big design up-front” :-)

Nonetheless I find it very informative. I would not start coding with less. Now I talk about focus and priorities with the customer. Now I can split up work between developers. (Ok, some more discussion and design would be needed to make the contracts of the containers more clear.)

And what about the data model? What about the domain model? You might be missing the obligatory class diagram linking data heavy aggregates and entities together.

Well, I don´t see much value in that in this case – at least from the information given in the requirements. The domain consists of importing data, calculating risks and generating a report. That´s the core of the software system. And that´s represented in all diagrams: the host diagram shows a process which does exactly this, the container diagram shows a domain responsible for this, and the flow diagram shows how several domain containers play together to form this core.

Result #3: A domain model is just a tool, not a goal.

All in all this exercise went well, I´d say. Not only did I use flows to design part of the system, I also felt my work flowing nicely. No moment of uncertainty how to move on.

Endnotes

[1] Arguably the non-human actors in this scenario don´t really need the software system. But as you´ll see it helps to put them into the picture as agents to cause reactions of the software system.

[2] The most important requirements are the primary qualities. It´s for them that software gets commissioned. Most often that´s performance, scalability, and usability. Some operations should be executed faster, with more load, and be easier to use through software, than without. But of course, before some operation can become faster it needs to be present first.

[3] If this reminds you of UML activity diagrams, that´s ok ;-)

[4] Please note my use of symbols for relationships. I used lines with dots at the end to denote dependencies. In UML arrows are used for that purpose. However, I reserve arrows for data flow. That´s what they are best at: showing from where to where data flows.


Thursday, March 13, 2014 #

Antifragility has attracted some attention lately. I, too, pressed the “I like” button. :-) A cool concept, a term that was missing. Just when Agility starts to become lame, Antifragility takes over the torch to carry on the light of change… ;-)

What I find sad, though, is that the discussion seems to have turned too technical too soon. There´s the twitter hash tag #antifragilesoftware for example. It suggests there are tools, technologies, and methods to make software antifragile. But that´s impossible, I´d say. And so the hash tag is misleading and probably doing more harm than good to the generally valuable concept of Antifragility.

Yes, I´m guilty of using the hash tag, too. At first I hadn´t thought about Antifragility enough and was just happy, someone had brought up the term. Later I used it to not alienate others with yet another hash tag; I succumbed to the established pattern.

But here I am, trying to say at least once why I think the hash tag is wrong – even though it´s well meaning.

Antifragility as a property

Antifragility is a property. Something can be fast, shiny, edible – or antifragile. If it´s antifragile it thrives on change, it becomes better due to stress. What is antifragile “needs to be shaken and tossed” to improve. “Handle with care” is poison to whatever is antifragile.

As I explained in a previous article, I think Antifragility comes from buffers – which get adapted dynamically. Increase of buffer capacity in response to stress is what distinguishes Antifragility from robustness.

In order to determine if something is antifragile, we need to check its buffers. Are there buffers for expectable stresses? How large is their capacity before stress? Is stress on a buffer-challenging level? How large are the buffer capacities after such stress? If the buffer capacities are larger after challenging stress – of course given a reasonable recuperation period – then I guess we can put the label “Antifragile” on the observed.

Antifragility is DIY

So far it seems, Antifragility is a property like any other. Maybe it´s new, maybe it´s not easy to achieve, but in the end, well, just another property we can build into our products. So why not build antifragile blenders, cars, and software?

Here´s where I disagree with the suggestion of #antifragilesoftware. I don´t think we can build Antifragility into anything material. And I include software in this, even though you might say it´s not material, but immaterial, virtual. Because what software shares with material things is: it´s dead. It´s not alive. It does not do anything out of its own volition.

But that´s what is at work in Antifragility: volition. Or “the will to live” to get Schopenhauer into the picture after I called Nietzsche the godfather of Antifragility ;-)

Antifragility is only a property of living beings, maybe it´s even at the core of the definition of what life is.

Take a glass, put it in a box, shake the box-with-glass, the glass cracks. The stress on the buffers of the box-with-glass was challenging. Now put some wood wool around the glass in the box. Shake the box-with-glass, the glass is not further damaged. Great! The box-with-glass is antifragile, right?

No, of course not. The box-with-glass is just robust. Because the box-with-glass did not change itself. It was built with a certain buffer. That buffer was challenged by stress. And that´s it. The box-with-glass did not react to this challenge – except that it suffered it. It even deteriorated (the glass cracked).

It was you as the builder and observer who increased some buffer of the box-with-glass after the stress. You just improved its robustness.

If the box-with-glass was antifragile it would have reacted itself to the stress. It would have increased its buffer itself. That way the box-with-glass would have shown it learned from the stress.

Antifragility thus essentially is a Do-it-yourself property. Something is only antifragile if it itself reacts to stresses by increasing some buffer. Antifragility thus is a property of autopoietic systems only. Or even: Autopoiesis is defined by Antifragility.

To put it short: If you´re alive and want to stay alive you better be Antifragile.

Antifragility is needed to survive in a changing environment.

So if you want to build antifragility into something, well, then you have to bring it to life. You have to be a Frankenstein of some sorts ;-) Because after you built it, you need to leave it alone – and it needs to stay alive on its own.

You see, that´s why I don´t believe in #antifragilesoftware. Software is not alive. It won´t change by itself. We as its builders and observers change it when we deem it necessary.

Software itself can only be robust. We build buffers of certain sizes into it. But those buffers will never change out of the software´s own volition. At least not before the awakening of Skynet :-)

So forget SOLID, forget µServices, forget messaging, RabbitMQ or whatever your pet principles and technologies might be. Regardless of how much you apply them to software, the software itself will not become antifragile. Not even if you purposely (or accidentally) build a god class :-)

Enter the human

Since we cannot (yet) build anything material that´s alive, we need to incorporate something that´s alive already, if we want to achieve Antifragility. Enter the human.

If we include one or more humans in the picture, we actually can build living systems, i.e. systems with the Antifragility property. Before, there were just technical systems; now there are social systems.

Not every social system is alive of course. Take the people sitting on a bus. They form a social system, but I would have a hard time to call it autopoietic. There´s nothing holding those people together except a short term common destination (a bus stop). And even this destination does not cause the group of people to try to keep itself together in case the bus breaks down. Then they scatter and each passenger tries to reach his/her destination by some other means.

But if you put together people in order to achieve some goal, then the social system has a purpose – and it will try “to stay alive” until the goal is reached. Of course for this they need to be bestowed with sufficient autonomy.

I think a proper term for such a purpose oriented social system would be organization. A company is an organization, a team is an organization.

An organization like any other system has buffers. It can withstand certain stresses. But above that it is capable of extending its buffers, if it feels that would help its survival towards fulfilling its purpose. That´s what autonomy is for – at least in part.

“Dead” systems, i.e. purely material systems (including software) usually deteriorate under stress – some faster, some slower. At best they might be able to repair themselves: If a buffer got damaged they can bring it back to its original capacity. But then they don´t decide to do this; that´s what “just happens”. That´s how they are built, that´s an automatic response to damage, it´s a reflex.

“Dead” systems however don´t deliberate over buffer extension. They don´t learn. They don´t anticipate, assess risks, and as a result direct effort this way or that to increase buffer capacity to counter future stresses.

Let´s be realistic: that´s the material systems (including software) we´re building. Maybe in some lab mad scientists are doing better than that ;-), but I´d say that´s of no relevance to the current Antifragility discussion.

Software as a tool

If we want to build antifragile systems we´re stuck with humans. Antifragility “build to order” requires a social system. So how does software fit into the picture?

What´s the purpose of a software development team? It´s to help the customer respectively the users to satisfy some needs more easily than without it. The customer for example says: “I want to get rich quick by offering an auction service to millions of people.” The customer then would be happy to accept any (!) solution, be that software or a bunch of people writing letters or trained ants. As it happens, though, software (plus hardware, of course) seems to make it easier than training ants to reach this goal :-) That´s the only reason the “solution team” will deliver software to the customer. Software thus is just a tool. It´s not a purpose, it´s not a goal, it´s a tool fitting a job at a specific moment.

Of course, a software team would have a hard time delivering something other than software. That way software looks larger than a tool, it looks like a goal, even a purpose. But it is not. It is just a tool, which a software team builds to get its job “Make the life of the users easier” done.

I don´t want to belittle what developing software means. It´s a tough job. Software is complex. Nevertheless it´s just a tool.

On the other hand this view makes it easier to see how Antifragility and software go together. Since software is a tool, it can help or hinder Antifragility. A tool is a buffer. A tool has buffers. These buffers of course serve Antifragility like any other buffers of a system.

So instead of saying software should become antifragile – which it cannot – we should say the socio-technical system consisting of humans plus software should become antifragile. A software company, a software team as organizations can have the property of Antifragility – or they can lack it.

A software producing organization can come under pressure. All sorts of buffers then can be deployed to withstand the stress. And afterwards the organization can decide to increase all sorts of buffers to cope with future stress in an even better way.

Antifragile consequences

µServices or an OR mapper or SOLID or event sourcing are certain approaches to build software. They might help to make its buffers larger. Thus they would help the overall robustness of the organization building the software. But the software itself stays a tool. The software won´t become antifragile through them. Antifragility needs deliberation.

Robustness of any order in software is just the foundation of Antifragility. Antifragility can build on it, because robustness helps survival. It´s not Antifragility, though. That´s Nassim Taleb´s point. Antifragility emerges only if there is some form of consciousness at play. Stress and buffer depletion need to be observed and assessed. Then decisions as to how to deal with the resulting state of buffers have to be taken. Attention has to be directed to some, and withdrawn from others. And energy has to be channeled to some to repair them or even increase them. Which on the other hand means energy has to be withdrawn from others. That´s why buffers shrink.

Shrinking buffers are a passive effect of energy directed elsewhere. Buffers are rarely actively deconstructed. Rather they atrophy due to lack of attention and energy. That´s what´s happening when you focus your attention on the rearview mirror while driving. You might be increasing your buffer with regard to approaching cars – but at the same time the buffer between your car and the roadside might shrink, because you diverted your attention from looking ahead and steering.

If you want to be serious about Antifragility as a property of a socio-technical system, then you have to be serious about values, strategy, reflection, efficient decision making, autonomy, and accountability. There´s no Antifragility without them. Because Antifragility is about learning, and living. Software is just a tool, and technologies and technical principles can only buy you buffer capacity. But who´s to decide where to direct attention and energy? Who´s to decide which buffers to increase in response to stress? That requires human intelligence.

First and foremost Antifragility is about people.

That´s why I suggest to drop the #antifragilesoftware in favor of something more realistic like #antifragilityinsoftwaredevelopment. Ok, that´s a tad long. So maybe #antifragileteams?