Is it true that Agile doesn’t work?
Before anybody posts my address and a photo of my house on
Facebook along with death threats, I don’t believe that Agile doesn’t work.
Personally, I believe that it is, much of the time, the best current approach
to software development when long-term development speed, responding to
changing requirements, and quality are important.
Agile is not really a methodology,
but a categorization of several approaches to software development that previously
went by the label "lightweight." Agile practices have been around for at least
a couple of decades now, but have only been called Agile since 2001. We have proponents
such as Robert C. “Uncle Bob” Martin asserting that, “The jury is in, the case
is closed. TDD works, and works well." Kent Beck, Martin,
and numerous other agile practitioners have demonstrated remarkable success
using Agile methodologies.
Even with that, we see many screeds against Agile, even claims that TDD
and Agile are dead, and that most unit tests are waste.
Note that TDD is just one part of Agile, but some use it to condemn the
whole methodology. We hear the same story with pair programming. On top of that,
someone has written an Anti-Agile Manifesto.
My intent here is to identify a commonality in the arguments
given by some that Agile does not work. My hope is that it will induce
discussion that will perhaps lead to viable solutions. We have to remember that
our common goal as professionals is producing quality software that adds value
to the business within budget constraints. We also prefer to accomplish this
without destroying our personal lives or health. I believe that the problems
people attribute to Agile's "failure" are not attributable to Agile, but to the
people attempting to practice it.
Analyzing Agile’s purported failures
The course objective of a particular discipline which I am
certified to teach is, “To develop the knowledge, skills, and attitude
necessary to [perform the discipline].” Knowledge encompasses what to do and why
to do it, or the theory; skill covers how and the ability to do it; attitude is
the maturity to apply the other two parts consistently and appropriately. Looking
at that objective statement we see a three-legged stool; all three parts are
necessary, none alone is sufficient to indicate competence in the discipline.
I believe this applies to many, if not all, disciplines
including software engineering and Agile practices.
From casual observation of teams that I have worked with and/or managed,
and from comments I have heard and read, I have identified some
shortcomings in the anti-Agile crowd and its complaints. They fall into one or
more of those three categories:
They lack the knowledge, the theoretical foundation of:
- their craft (i.e. the fundamentals of software engineering)
- the history that led us to Agile
- the problems that Agile attempts to address
They lack the skills to:
- write code following the fundamentals of software engineering
- apply appropriate practices for the problem on
which they are working
They lack the attitude to:
- overcome their predisposition to make agile fail
- embrace change
- trust the theory and fundamentals of software
engineering and Agile
- recognize (or admit) what they don’t know, then
study to improve
Before we get into specific examples, I will describe my
perspective on the problem. Agile is not a process and is not a silver bullet
(and has never been described as such by its original proponents). Various Agile proponents have described sets
of practices such as XP, Scrum, Crystal (Clear), etc. that have become known as
Agile methodologies. When somebody says that Agile is dead or that it doesn’t
work, they make a blanket statement against numerous different, albeit similar,
methodologies.
Agile describes a set of values that when followed by
competent professionals can lead to successful software projects but is also
very open to changing requirements. This means that a team does not “do” Agile,
a team strives to “be” Agile. This means that an Agile team can respond to
changing requirements without sacrificing quality or productivity.
So, then, why do Agile teams fail?
I do not believe that Agile teams fail. I believe that some
teams fail and some of those teams purport to be (or are attempting to be)
Agile. I hear teams ask, “Are we” (or say, “We are”) “doing agile” (Scrum, XP,
etc.). But agile is a set of principles, not a goal. Agile is not something we
do, it is something we are. Those questions/statements are looking for some
process or practice to identify them as Agile, so by the Agile Manifesto, those
teams are not Agile. This is at least a failure of knowledge.
One thing that none of the Agile methodologies has is a way
to make a person a software engineer. They have an unspoken, unwritten assumption
that the team has competent developers. That is not a shortcoming of Agile;
every methodology that I have seen makes the same assumption. Perhaps Agile makes that incompetence more
apparent. This is a failure of the developers' skills.
Others don’t like Agile, perhaps for some emotional reason,
so set out to prove it doesn’t work by making it fail. I have seen team members
take a passive-aggressive approach and cause a project to fail, then blame it
on Agile. Still others don't understand how to apply the tools, so they fail,
but blame the tools or practices and so, by extension, Agile. This is a failure of
attitude, and sometimes of skill.
Again, none of these reasons constitutes a failure of Agile.
They are all failures of the people.
I understand the human nature that makes some people resistant
to change. This is a significant problem in software development that Alistair
Cockburn wrote about in 1999 in, “Characterizing people as
non-linear, first-order components in software
development." In one case he describes, the
programmers had good tools at their disposal, but they refused to use them even
though they had sufficient training. That is a problem in attitude.
A colleague once was working on contract for a company that
used software as its main avenue to provide its services. Since software drove
their business and they needed fast and frequent releases, they were “doing
Agile.” One day leading up to a release, he noticed that many of his unit tests
had been disabled. He questioned why and was told that his tests were breaking
the build. Naturally he asked which requirements or assumptions had changed
that caused the tests to fail. The response: none. The tests were just breaking
the build. This is certainly a problem in attitude (they were resistant to
change their way of coding and fix their code to pass the tests) but might also
include a problem in knowledge (did they even realize that the tests failed
because their code was wrong?).
James Coplien wrote that most unit tests he sees are waste. Some
people try to use his paper as an indictment of unit testing. However, my
reading of the paper does not lead me to that conclusion. He writes that many
programmers, for various reasons, do not write good tests.
Some people are apparently predisposed to dislike Agile, so
they use Jim’s paper, apparently without reading and understanding it, to show
that Agile is no good because one of its practices is no good. But as we see,
Coplien is not saying that unit testing is no good; he is saying that it is
often poorly used, leading to poor results. (As a note, I do not consider his
paper a screed against Agile or unit testing.) Citing his paper that way is a
failure of knowledge.
We must remember that unit testing is a tool, and therefore,
no better than the programmer using it. As an example, one could use a hammer
to drive in a screw. It would work, just not the way one would intend. However,
that is the fault of neither the hammer nor the screw.
I encountered a unit test once where the programmer set up
an object then called a Boolean method and tested that it returned true. There
was no complementary test that looked for false, and nothing in the test showed
why that particular configuration of the object should return true. I had to
look at the method’s code itself to try to learn what it was supposed to do.
There I learned that the method either returned true or threw an exception; it
never returned false!
As we might guess, the method was used as a condition in an
if-test in the client code (thankfully, it only occurred once in the system). I
found a significant else-block to handle the false return, but nowhere did the
system catch the exception. This clearly demonstrates incompetence on the
programmer’s part but also a problem with knowledge and/or attitude. The
programmer might not even have recognized the lack of skill, but clearly did
nothing to address the shortcoming. This is a clear example of one thing
Coplien was describing; a poorly written unit test. But it also shows a
programmer who did not understand the exception mechanism or function returns.
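A minimal reconstruction of that shape of defect (all names here are hypothetical; the original code is not reproduced):

```python
class Order:
    """Reconstruction of the anti-pattern (hypothetical names)."""

    def __init__(self, items):
        self.items = items

    def is_valid(self):
        # Ostensibly Boolean, but this method never returns False:
        # it either returns True or raises an exception.
        if not self.items:
            raise ValueError("order has no items")
        return True


# The unit test only ever exercised the True path, with no hint of
# why this configuration of the object should be valid:
def test_is_valid():
    assert Order(["book"]).is_valid() is True


# Client code guarded by an else-block that can never execute, and
# no handler anywhere for the exception that actually signals failure:
def process(order):
    if order.is_valid():
        return "processed"
    else:
        return "rejected"  # dead code: is_valid() never returns False
```

The missing false-path test would have exposed the mismatch immediately: there is no input for which `is_valid()` returns `False`, only inputs for which it raises.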
A complaint against Agile that strikes me as interesting is
that the various originators were all super-programmers who would succeed with
whatever approach they used. A comment like that must have been made by someone
who doesn’t know the history of software development. Super-programmers going
all the way back to Winston Royce, and probably farther, have been trying to
solve the problem of creating reliable software. We wouldn’t spend the time and
effort if all it required were super-programmers.
The Commonality is Incompetence
Seeing that last complaint about super-programmers, then
looking at the other observations that I have made, I notice among them a
commonality. They all indicate a lack of competence at some level in the
three-legged stool. Notice that the three-legged stool is not unique to Agile;
it applies to almost everything of consequence that we do. A team that is weak
in any of the three areas will probably fail regardless of their approach. It
is clearly not an indictment of Agile. Agile, as with any practice, process, or
methodology, depends on competent people.
References
"Writing the Agile Manifesto," [Online].
R. C. Martin, The Clean Coder: A Code of Conduct for Professional Programmers, Prentice Hall, 2011.
"Seven Things I Hate About Agile," 22 August 2012. [Online].
"The Case Against Agile: Ten Perennial Management Objections," 17 April 2012. [Online].
D. H. Hansson, "TDD is dead. Long live testing," 23 April 2014. [Online].
"The End of Agile: Death by Over-Simplification," 17 March 2014. [Online].
J. Coplien, "Why Most Unit Testing is Waste," [Online].
"Anti Agile Manifesto," February 2014. [Online].
A. Cockburn, "Characterizing people as non-linear, first-order components in software development," 21 October 1999. [Online]. Available: http://alistair.cockburn.us/Characterizing+people+as+non-linear,+first-order+components+in+software+development.
The first day of the first semester programming class, I introduced the students to the concept that the most important part of building a system, whether it is something physical like a bridge to cross a river or a computer system, is to understand the problem.
Many novice programmers believe that if they are writing code, they are making progress, and if they are not writing code, they are wasting time.
The problem with that mindset is that, to be making progress, they must be writing the correct code, correctly. They cannot possibly know whether they are writing the correct code, or writing it correctly, if they do not fully understand the problem.
Every problem has two sides: the requirement and the solution set.
We speak of a solution set because there are potentially multiple solutions to any given problem. However, we cannot begin to discuss a solution when we do not fully understand the requirement (the problem). We solidify our understanding by asking questions.
A problem typically has constraints. You hope that the customer told you the constraints up front, but perhaps he does not even know them all. You have to discuss the problem and ask questions.
If you instead jump into proposing solutions, you waste time and might end up like this contrived example:
Get me across the river
1. Take off your shoes and wade
a. The water is too cold
2. Jump across
a. The river is 15 feet wide
3. Use a canoe
a. The river is running too fast
4. Use a ferry
a. The river is at the bottom of a 100-meter deep canyon
5. Cut a tree and use it as a bridge
a. I have a car
6. Construct a bridge from steel and concrete
Now, if the contractor had instead asked questions, it might go more like this:
Get me across the river
1. What are the river’s dimensions?
a. 15 feet wide at the bottom of a 100-meter deep canyon
2. Are you walking?
a. No, I have a car
3. Construct a bridge from steel and concrete
By thinking about the problem and asking appropriate questions, the engineer was able to expose the constraints and provide a workable solution in a fraction of the time. In both examples, however, the customer thought he had given a reasonably simple explanation of a simple problem.
Sometimes a simple requirement is complex in more subtle ways. Consider, for example, the seemingly simple problem of importing data from text files into a database. It sounds simple until you start asking questions and learn that there are multiple kinds of text files.
We can have comma-delimited files, string-delimited files, and column-delimited files; some use ASCII encoding, others use EBCDIC, some use UTF-8, others, UTF-16. Microsoft versions use a different line ending than UNIX, and the same is true for file endings.
As you ask questions about this, you might decide that it is not a single requirement with multiple constraints. Perhaps it is actually multiple requirements, with one high-level requirement that generalizes them. However, you can only learn that by asking effective questions. Additionally, analyzing the problem above shows that once we have determined the specific differences in the input files, mapping the fields to the database is probably common between the different file types. That means that the beginning and the end of the process probably use common code, but the details in the middle will have some differences.
So, by asking questions and thinking about the problem, you have identified several requirements that had been masked as one, but also commonalities in the sub-requirements that are probably areas for hidden code duplication. You probably noticed that there are potential areas that you can remove duplication by applying polymorphism, so when you have written code to import two file versions, you start looking at ways to refactor and take advantage of that.
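A sketch of that idea (hypothetical class and field names, assuming a simple two-column schema for illustration): the beginning and end of the pipeline stay common, while subclasses supply only the format-specific parsing in the middle.

```python
from abc import ABC, abstractmethod


class FileImporter(ABC):
    """Common pipeline: read lines, parse them (format-specific),
    then map the parsed fields onto database columns."""

    def import_file(self, path, encoding="utf-8"):
        # Common beginning: read the file and split it into records.
        with open(path, encoding=encoding) as f:
            records = [self.parse_line(line.rstrip("\r\n"))
                       for line in f if line.strip()]
        # Common end: map each record's fields to database columns.
        return [self.map_to_row(fields) for fields in records]

    @abstractmethod
    def parse_line(self, line):
        """Format-specific: split one line into a list of fields."""

    def map_to_row(self, fields):
        # Hypothetical two-column schema for illustration.
        return {"name": fields[0], "amount": fields[1]}


class CommaDelimitedImporter(FileImporter):
    def parse_line(self, line):
        return line.split(",")


class ColumnDelimitedImporter(FileImporter):
    def __init__(self, widths):
        self.widths = widths

    def parse_line(self, line):
        # Slice fixed-width columns out of the line.
        fields, pos = [], 0
        for width in self.widths:
            fields.append(line[pos:pos + width].strip())
            pos += width
        return fields
```

Only `parse_line` varies by file type; supporting another format means adding a subclass (or passing a different `encoding`), not copying the whole pipeline.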
These are things that, if you are not actively asking questions and discussing the problem with the customer and the other programmers, you are not as likely to realize and the system suffers as a result.
Remember, the customer wants the system badly enough to pay for it. He will appreciate these questions because that helps to assure that he gets the system that he wants. Well-reasoned questions are likely to remind the customer of details that he had forgotten to include, or perhaps has not even thought of himself.
When you are preparing for, and sitting in the planning meeting, ask questions relentlessly until you fully understand the problem. Then, ask more questions.
One of the things that I consider fundamental concerns a
method or function. Historically, a method does exactly one thing, which its
name describes. It starts at the top and ends at the bottom; i.e., it has
exactly one entrance and one exit.
With such a simple guideline, how can anybody submit a method
that deviates from it? For example, the following method names tell when it
happens, not what it does:
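For illustration (these are hypothetical names, not drawn from any particular codebase), event-style names like these describe when the code runs rather than what it does:

```python
class OrderForm:
    # Event-style names: they say WHEN the code runs, not WHAT it
    # does (hypothetical examples).
    def on_button_click(self):
        return self.save_order()  # the actual "what" hides inside

    def before_save(self):
        return self.validate_totals()

    # By contrast, names that say what the method does:
    def save_order(self):
        return "order saved"

    def validate_totals(self):
        return "totals valid"
```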
Interestingly, these names do not seem to tell us anything:
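Names of this sort (again hypothetical) might look like:

```python
# Names that convey nothing about what the function actually does
# (hypothetical examples):
def do_stuff(data):
    ...


def process(data):
    ...


def handle_it(data):
    ...
```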
One reason I have observed for programmers' difficulty in choosing an
appropriate name is that their method violates the "one thing" part of the
guideline. Their methods do several different things.
So, how can a programmer know if the method they wrote meets
these guidelines? They should be able to explain:
- first, what the method does (this should take a
very few words, and the method name should already prep us for that)
- next, (from a high level) the algorithm they
used (this will take more words than the introduction, but not many more)
If they cannot do that, it is possibly a sign that the code
is not ready for review. Send it back for refactoring. A method with more than
10 lines is probably not ready for review. The programmer will not be able to
do those two things above.
A junior programmer might not yet have developed the skills
to get all of his methods to fewer than ten lines. A code review should help
with that. Ideally, methods should be less than about five lines.
When you write a short method, it is hard to make it do more
than one thing, and easy to give it a descriptive name.
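As a sketch (a hypothetical receipt example), here is the same work written first as one method doing several things, then as short methods that each do one nameable thing:

```python
# One method doing three things at once: summing, taxing, formatting.
# Its vague name reflects that.
def handle(items):
    total = sum(i["price"] for i in items)
    tax = total * 0.08
    return f"Total: {total + tax:.2f}"


# Split so each method does one thing its name describes:
def subtotal(items):
    return sum(i["price"] for i in items)


def sales_tax(amount, rate=0.08):
    return amount * rate


def format_receipt_total(items):
    amount = subtotal(items)
    return f"Total: {amount + sales_tax(amount):.2f}"
```

Each of the short methods is trivial to name, to explain in a few words, and to test in isolation; the original `handle` is none of those.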
I regularly rant about programmers not knowing the fundamentals
without giving any concrete examples. An acronym that I coined with an
esteemed colleague during one such rant is FBC (Fully Buzzword Compliant). Many
programmers can spew buzzwords like pickup lines in a bar, and many of them can
even give a correct definition. However, when it comes time to apply the
associated technique or technology, they are clueless regarding where to start.
One thing that perplexes me about this is that there is so
much information on the Web that would help foster this capability, but these
programmers do not seem inclined to educate themselves. A second thing is that
these fundamentals have been around longer than most of these programmers have
been alive. They are nothing new or groundbreaking. Modern programming
languages and techniques build on them; hence, they are fundamental.
Furthermore, I consider internalizing these fundamentals an
ethical responsibility and necessary to becoming a true craftsman, rather than
being a code hack who is FBC. Given the plethora of writings on the Web and published in
books, I do not feel alone in this regard.