Geeks With Blogs
Chris Falter .NET Design and Best Practices

Recently a project leadership team I was assisting had the difficult task of selecting between implementation technologies for a distributed architecture.  We had brainstormed a list of candidates, and after some investigation were able to enumerate their strengths and weaknesses well enough.  But then the decision-making process started to come unhinged as we thrashed about, weighing the various options.  How do you narrow the field to a winner, anyway?

When I was able to frame the process in terms of the five steps you are about to read, though, the path to a confident decision became clear.  While I am not suggesting that this is the evaluation methodology to end all evaluation methodologies, it certainly helped us.

Step 1: Reduce the Candidate List via Head-to-Head Comparisons.  You will make your job easier if you can quickly knock out some of the contenders.  Among the options we were considering for synchronizing data were

  • using SQL Server replication, and
  • writing our own application to read database records marked with an update flag, update records in a partner node with the data, and then remove the flags.

Ultimately, we realized that the application we were thinking of writing would be remarkably similar in structure and purpose to SQL Server's merge replication.  Since we had no intention of re-inventing the wheel, we removed the idea of writing such an application from our candidate list.

Step 2: Build an Analysis Table.  Since the goal is to learn how the technology options will influence the development of your system, the next step is to build a table that will compare the options according to how well they implement potential system features.  Start by lining up the remaining technology options on the x-axis, and system features on the y-axis.  The level of difficulty in implementing a feature, using a technology option, will be the data at the intersection of an option and a feature.  Here is a sample of how an analysis grid might look for a distributed architecture:

Feature                                    | Option 1             | Option 2             | Option 3
Workflow: Overnight Batch                  | Simple               | Simple               | Relatively Easy
Workflow: Straight-Through (Central Only)  | Difficult            | Very Difficult       | Moderately Difficult
Workflow: Straight-Through (All Nodes)     | Very Difficult       | Impossible           | Difficult
Data Replication: Near Real-time           | Very Difficult       | Moderately Difficult | Moderately Difficult
Data Replication: Every Hour               | Moderately Difficult | Impossible           | Relatively Easy
Post-failure Data Re-sync: overnight       | Simple               | Relatively Easy      | Relatively Easy
Post-failure Data Re-sync: immediate       | Moderately Difficult | Relatively Easy      | Impossible

In this example we see that straight-through workflow processing at the central node will be difficult using option 1, very difficult using option 2, and moderately difficult using option 3.  The ease of implementation (for all the feature/option pairs) ranges from simple to impossible, with four intermediate values (relatively easy, moderately difficult, difficult, and very difficult).  Given the murkiness of IT crystal balls, there is probably no point in attempting a finer-grained analysis of the difficulty level.
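The analysis grid is easy to capture as a simple data structure, which makes the scoring in step 3 mechanical. Here is a minimal sketch in Python (chosen for brevity; a C# version would be analogous, and all names are illustrative):

```python
# The analysis grid from step 2: each feature maps to its difficulty
# under [Option 1, Option 2, Option 3].
OPTIONS = ["Option 1", "Option 2", "Option 3"]

analysis_grid = {
    "Workflow: Overnight Batch":                 ["Simple", "Simple", "Relatively Easy"],
    "Workflow: Straight-Through (Central Only)": ["Difficult", "Very Difficult", "Moderately Difficult"],
    "Workflow: Straight-Through (All Nodes)":    ["Very Difficult", "Impossible", "Difficult"],
    "Data Replication: Near Real-time":          ["Very Difficult", "Moderately Difficult", "Moderately Difficult"],
    "Data Replication: Every Hour":              ["Moderately Difficult", "Impossible", "Relatively Easy"],
    "Post-failure Data Re-sync: overnight":      ["Simple", "Relatively Easy", "Relatively Easy"],
    "Post-failure Data Re-sync: immediate":      ["Moderately Difficult", "Relatively Easy", "Impossible"],
}
```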

Step 3. Assign Scores.  To compare the various combinations of features and options, you need to assign a score to every combination.  Assigning a numeric score to each level of difficulty quantifies the comparison.  The Cohn scale, a pseudo-Fibonacci series in which each score is about 50% greater than its predecessor, is a good choice.  Many agile shops already use the Cohn scale to estimate user stories, so it may be familiar to your team.  Converting a level of difficulty into a score on the Cohn scale assumes that each level is about 50% harder than the level below it, as you can see below:

Description          | Points
Simple               | 1
Relatively Easy      | 2
Moderately Difficult | 3
Difficult            | 5
Very Difficult       | 8
Impossible           | 10,000

Converting Descriptions Into Points (Cohn Scale)

"Impossible" is converted to an extremely high number in order to allow a numeric comparison. 

After assigning the scores, the example table looks like this:

Feature                                    | Option 1 | Option 2 | Option 3
Workflow: Overnight Batch                  | 1        | 1        | 2
Workflow: Straight-Through (Central Only)  | 5        | 8        | 3
Workflow: Straight-Through (All Nodes)     | 8        | 10,000   | 5
Data Replication: Near Real-time           | 8        | 3        | 3
Data Replication: Every Hour               | 3        | 10,000   | 2
Post-failure Data Re-sync: overnight       | 1        | 2        | 2
Post-failure Data Re-sync: immediate       | 3        | 2        | 10,000

Step 4. Discuss Features and Effort with Your Customer.  Naturally, your customer will want to understand the choices available.  Using the analysis table from step 3, you could tell your customer that if they desperately desire straight-through workflow processing on all nodes in the distributed system, near real-time data replication, and immediate re-synchronization of data after a failure in the link between distributed nodes, the only viable technology is option 1, which will "cost" 19 points (8 + 8 + 3).  You could then point out that if it is acceptable to wait until overnight to re-synchronize data after a communications link failure, you could implement option 3 at a cost of 10 points (5 + 3 + 2), about half the effort.
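Costing a combination is just a sum over the scored grid. A self-contained Python sketch, using the relevant rows from the step-3 table (names illustrative):

```python
# Rows of the scored grid needed for the two combinations discussed above.
scored_grid = {
    "Workflow: Straight-Through (All Nodes)": {"Option 1": 8, "Option 2": 10_000, "Option 3": 5},
    "Data Replication: Near Real-time":       {"Option 1": 8, "Option 2": 3,      "Option 3": 3},
    "Post-failure Data Re-sync: immediate":   {"Option 1": 3, "Option 2": 2,      "Option 3": 10_000},
    "Post-failure Data Re-sync: overnight":   {"Option 1": 1, "Option 2": 2,      "Option 3": 2},
}

def combination_cost(features, option):
    """Total Cohn points for implementing the given features with one option."""
    return sum(scored_grid[f][option] for f in features)

# The "desperately desired" combination costs 19 points on option 1.
must_have = [
    "Workflow: Straight-Through (All Nodes)",
    "Data Replication: Near Real-time",
    "Post-failure Data Re-sync: immediate",
]
print(combination_cost(must_have, "Option 1"))  # prints 19
```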

What if your customer wants an estimate in terms of actual resources (person-days or person-weeks)?  Your customer certainly has the right to a ball-park estimate of the costs.  Since it is impractical to work up an estimate for each combination of features and options, you could instead estimate only the lowest-scoring combination, which then becomes the basis for a conversion factor between points and effort.  In the example, the lowest-scoring combination is overnight batch workflow, hourly data replication, and overnight re-synchronization of nodes after a communication failure, using option 1 (1 + 3 + 1 = 5 points).  If you estimate this combination as requiring 10 person-weeks of effort, the conversion factor is 2 person-weeks per point.  As a result, you can estimate that the desperately desired combination (costing 19 points) will require about 38 person-weeks.  Obviously, your project plan should not rely on this ball-park estimate; it is only precise enough to help the project team make an informed technology choice.

Step 5. Choose the Lowest Scoring Option for the Desired Feature Set.  In most projects, the biggest cost factor (and biggest risk) is the use of human resources, so the option which requires the least effort should usually win.  However, if there is a near tie between first and second place, you might want to weigh other factors such as licensing costs and vendor support in order to make your choice.
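Step 5 can also be expressed as code: total each option's points over the desired feature set and take the minimum. A self-contained Python sketch with the full step-3 table (all names illustrative; licensing and support costs would still be weighed by hand in a near tie):

```python
# Full scored grid from step 3.
scored_grid = {
    "Workflow: Overnight Batch":                 {"Option 1": 1, "Option 2": 1,      "Option 3": 2},
    "Workflow: Straight-Through (Central Only)": {"Option 1": 5, "Option 2": 8,      "Option 3": 3},
    "Workflow: Straight-Through (All Nodes)":    {"Option 1": 8, "Option 2": 10_000, "Option 3": 5},
    "Data Replication: Near Real-time":          {"Option 1": 8, "Option 2": 3,      "Option 3": 3},
    "Data Replication: Every Hour":              {"Option 1": 3, "Option 2": 10_000, "Option 3": 2},
    "Post-failure Data Re-sync: overnight":      {"Option 1": 1, "Option 2": 2,      "Option 3": 2},
    "Post-failure Data Re-sync: immediate":      {"Option 1": 3, "Option 2": 2,      "Option 3": 10_000},
}

def best_option(features):
    """Return (lowest-scoring option, per-option totals) for a feature set."""
    options = ["Option 1", "Option 2", "Option 3"]
    totals = {opt: sum(scored_grid[f][opt] for f in features) for opt in options}
    return min(totals, key=totals.get), totals

# The relaxed feature set from step 4: option 3 wins at 10 points.
winner, totals = best_option([
    "Workflow: Straight-Through (All Nodes)",
    "Data Replication: Near Real-time",
    "Post-failure Data Re-sync: overnight",
])
```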

What methodology have you used for choosing between technology options?  Have you used an analysis table like this?  Leave a comment!

Posted on Tuesday, August 5, 2008 12:33 PM | Software Architecture


Comments on this post: Five Steps to Evaluate Technology Options

Copyright © Chris Falter