Jamie Kurtz

Promoting architectural simplicity


Saturday, July 14, 2012

I am moving my blog to another site – WordPress.com to be exact. If you care to follow, please visit www.jamiekurtz.com.


Sunday, June 24, 2012

When was the last time you ordered a pizza like this:

“I want the high school kid in the back to do the following… make a big circle with some dough, curl up the edges, then put some sauce on it using a small ladle, then I want him to take a handful of shredded cheese from the metal container and spread it over the circle and sauce, then finally I want the kid to place 36 pieces of pepperoni over the top of the cheese” ??

Probably never. My typical pizza order usually goes more like this: “I want a large pepperoni pizza”.

In the world of software development, we try so hard to be all things agile. We:

  • Write lots of unit tests
  • Refactor our code, then refactor it some more
  • Avoid writing lengthy requirements documents
  • Try to keep processes to a minimum, and give developers freedom
  • Take pride in our constantly shifting focus (i.e. we’re “responding to change”)

Yet, after all this, we fail to really lean on and capitalize on one of agile’s main differentiators (from the twelve principles behind the Agile Manifesto):

“Working software is the primary measure of progress.”

That is, we foolishly commit to delivering tasks instead of features and bug fixes. Like my pizza example above, we fall into the trap of signing contracts that bind us to doing tasks – rather than delivering working software.

And the biggest problem here… by far the most troubling outcome… is that we don’t let working software be a major force in all the work we do. When teams manage to ruthlessly focus on the end product, it puts them on the path of true agility. It doesn’t let them accidentally write too much documentation, or spend lots of time and money on processes and fancy tools. It forces early testing that reveals problems in the feature or bug fix. And it forces lots and lots of customer interaction.

Without that focus on the end product as your deliverable… by committing to a list of tasks instead of a list of features and bug fixes… you are doomed to NOT be agile. You will end up just doing stuff, spending time on the keyboard, burning time on timesheets. Doing tasks doesn’t force you to minimize documentation. It makes it much harder to respond to change. And it will eventually force you and the client into contract haggling. Because the customer isn’t really paying you to do stuff. They’re ultimately paying for features and bug fixes. And when the customer doesn’t get what they want, responding with “well, look at the contract - we did all the tasks we committed to” doesn’t typically generate referrals or callbacks.

In short, if you’re trying to deliver real value to the customer by going agile, you will most certainly fail if all you commit to is a list of things you’re going to do. Give agile what it needs by committing to features and bug fixes – not a list of ToDo items.

So the next time you are writing up a contract, remember that the customer should be buying this:

[image: a pizza – the finished product]

Not this:

[image: a project plan full of tasks]


Sunday, September 4, 2011

When developing WCF services that interact with a custom Security Token Service (STS), you will need to create at least one X.509 certificate. If you have access to a trusted certificate authority – e.g. a Windows Active Directory domain – then this task is pretty simple. But if you don’t, or maybe you would just rather create a set of self-signed certificates, here is an approach that works well for me.

This particular scenario utilizes three separate certificates. The first one is named “localhost” and is used to create an HTTPS binding in IIS 7.5. The other two certificates are used to sign and encrypt the security token created by our custom STS. Note that the certificate used for the HTTPS binding is called “localhost” so that it will always be valid for the sites running on our laptops – since the host name of the local development sites will always be “localhost”.

The PowerShell script below essentially uses MakeCert to create the issuer certificate – which is the one called “localhost”. Then we import that certificate into the LocalMachine Trusted Root store, so that we can use it as a trusted issuer and signer of the other two certificates. When using MakeCert to create the other two certificates, we use the -in, -ir, and -is arguments to tell MakeCert to sign them with the “localhost” certificate we created (and that is now fully trusted since we imported it into the Trusted Root store).

$issuerCertificate = "localhost"
$tokenCertificates = "TokenSigningCert", "TokenEncryptingCert"

$makecert = 'C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\makecert.exe'
$certmgr = 'C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\certmgr.exe'

# Creates a self-signed issuer certificate in LocalMachine\My and imports it
# into the LocalMachine Trusted Root store so it can sign the token certificates.
function CreateIssuerCertificate {
    param($certificateSubjectName)

    $exists = ls cert:\LocalMachine\My | select subject | select-string "cn=$certificateSubjectName"
    if($exists -ne $null)
    {
        echo "$certificateSubjectName certificate already exists"
    }
    else
    {
        ls $env:temp\$certificateSubjectName.* | del
        & $makecert -r -pe -n "cn=$certificateSubjectName" -ss My -sr LocalMachine -sky exchange -sy 12 "$env:temp\$certificateSubjectName.cer"
        & $certmgr -add -c "$env:temp\$certificateSubjectName.cer" -s -r localmachine root
    }
}

# Creates a certificate in LocalMachine\My that is signed by the issuer certificate.
function CreateTokenCertificate {
    param($certificateSubjectName, $issuerCertificateSubjectName)

    $exists = ls cert:\LocalMachine\My | select subject | select-string "cn=$certificateSubjectName"
    if($exists -ne $null)
    {
        echo "$certificateSubjectName certificate already exists"
    }
    else
    {
        & $makecert -pe -n "cn=$certificateSubjectName" -ss My -sr LocalMachine -sky exchange -sy 12 -in "$issuerCertificateSubjectName" -ir LocalMachine -is My "$env:temp\$certificateSubjectName.cer"
    }
}

CreateIssuerCertificate $issuerCertificate

foreach($cert in $tokenCertificates)
{
    write-host "Creating certificate $cert (signed by $issuerCertificate)"
    CreateTokenCertificate $cert "$issuerCertificate"
}
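
Once the script has run, the WCF services and the custom STS can pull these certificates out of the LocalMachine\My store by subject name. Here's a minimal C# sketch of that lookup – the CertificateLoader class is mine, just for illustration, not part of any framework:

using System;
using System.Security.Cryptography.X509Certificates;

public static class CertificateLoader
{
    // Finds a certificate by subject name in LocalMachine\My - the store the
    // script above writes to.
    public static X509Certificate2 FindBySubjectName(string subjectName)
    {
        X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);
        try
        {
            X509Certificate2Collection matches = store.Certificates.Find(
                X509FindType.FindBySubjectName, subjectName, false);

            if (matches.Count == 0)
            {
                throw new InvalidOperationException("Certificate not found: CN=" + subjectName);
            }

            return matches[0];
        }
        finally
        {
            store.Close();
        }
    }
}

// For example, the STS might load its signing certificate like this:
//   X509Certificate2 signingCert = CertificateLoader.FindBySubjectName("TokenSigningCert");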

Thursday, August 18, 2011

My Epiphany – Part 1

After reading Continuous Delivery, by Jez Humble and David Farley, I couldn’t help but think “wow! this is the key to becoming truly agile!”. I admit that may be a little overstated, but nonetheless my minor epiphany has grown into an outright passion for enabling rapid development AND delivery of small, bite-sized pieces of applications.

As we’re all well immersed in the Agile way these days (or, at least, we’re all trying!!), our common goal should be to provide customers a continuous stream of value; value here is measured in terms of bug fixes and new features as requested by the customer. And you absolutely cannot support a continuous stream of value if you can’t rapidly and reliably move those bug fixes and new features all the way from the developer commit to the live production system.

And now with the recent focus on cloud-based hosting for our applications, it is even more important that we subscribe to the paradigm shift proposed by Jez and David. Yes, I said “paradigm shift”. And that’s the way I see it, honestly. There’s simply no way a software provider can compete these days if all of their value offerings are locked up in the source repository for months on end. Creating any other kind of delivery infrastructure – one that does NOT follow the recommendations laid out in Continuous Delivery – is a guarantee that your value – i.e. what you’re charging the customer for – will stay locked away for longer than it takes that customer to shop elsewhere.

 

And Part 2

Now, being an architect, this is where my [minor] epiphany really grows some legs. Dare I propose the following:

  1. You can’t turn internal agile practices into a continuous customer value stream without the Humble/Farley automated “deployment pipeline”. And,
  2. You can’t truly support the automated “deployment pipeline” without an architecture that is highly componentized and very loosely coupled

Before I go any further… sure, you can create an automated deployment pipeline even with a big monolithic ball-of-mud application. I’ve seen it done. But it is hard. And when I say “hard” I don’t mean challenging, or difficult-because-it’s-Friday-afternoon-and-I-want-to-go-home. I mean too hard. And costly. And frustrating. And people will get burned out quickly. If you try to push an application too quickly through a deployment pipeline when its underlying architecture can’t support it, it will be crazy hard. So there, yes, you can theoretically keep your ball-of-mud architecture, but you will be severely limited, frustrated, and probably sleep deprived!

 

Components (good) and Dependencies (bad)

The architecture I’m talking about centers around very small components. These small components can be considered the smallest possible unit of deployment for a given functional group or feature (or, group of features). That’s pretty subjective, but those of us with a little gray hair understand. If we draw a circle around a single component in the application, there can be too much in the circle. And there can be too little in the circle. We’ll defer a deep dive around componentization for later. Just know that the circle we draw has a lot to do with dependencies.

These components… they must be totally isolated starting from inception and design all the way through to delivery. That means the design teams and BAs and user experience experts aren’t hearing things like “oh, you can’t improve that piece of the UI without upgrading the ENTIRE PLATFORM” from the engineers. And they’re not being told that in order to create a new style on the buttons in one area “we’ll have to upgrade the controls toolkit which will affect EVERY SCREEN IN THE SYSTEM”.

Ok, sure, dependencies are a necessary evil. But we must always strive to minimize dependencies between components. The dependency diagram between components should look very flat, ideally with no lines between the components at all. Further, we must ruthlessly avoid implicit (and, usually, unnecessary) dependencies that creep in by way of shared “framework” libraries. Because once you have two components with a dependency on a shared library, you can’t fix a bug or add a new feature by way of the shared library without affecting both components. And the more stuff that exists in the “framework”, the higher the chances are that your change will be in that framework.

Occasionally we must remove our geeky engineering hat and strive for balance between DRY and dependencies on shared libraries. Duplicate code kills. But so do dependencies. Resist the urge to dump anything that even slightly smells like shared code into a “framework”. Sometimes it’s better to let these components live independently – even if they duplicate some code here and there. Heck, there’s duplicate code all around the world – i.e. we don’t all share the same exact “framework” libraries. It’s all in your perspective, and how big you draw your circle, that defines whether or not code is even duplicated in the first place.

One more thing about dependencies: try to depend on a contract of sorts, not on a binary. In other words, create an architecture that allows components to be tolerant of changes in other components. We want to allow components to change without requiring a complete build and regression run of all other components.
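
To make that a little more concrete, here is a rough C# sketch of what “depend on a contract, not a binary” can look like – every type name below is invented purely for illustration:

// A tiny contracts assembly that changes rarely; components reference this
// instead of referencing each other's implementation DLLs.
public interface IOrderSubmissionService
{
    string SubmitOrder(string orderXml);
}

// Component A implements the contract and can be rebuilt and redeployed on
// its own schedule.
public class OrderSubmissionService : IOrderSubmissionService
{
    public string SubmitOrder(string orderXml)
    {
        // ... do the real work here ...
        return "confirmation-id";
    }
}

// Component B depends only on the contract - typically resolved at runtime
// via an IoC container or a service endpoint - not on Component A's binary.
public class CheckoutController
{
    private readonly IOrderSubmissionService _orders;

    public CheckoutController(IOrderSubmissionService orders)
    {
        _orders = orders;
    }

    public string CompleteCheckout(string orderXml)
    {
        return _orders.SubmitOrder(orderXml);
    }
}

Component A can now change its implementation – or be rebuilt and redeployed entirely – without forcing a build and regression run of Component B, as long as the contract itself stays put.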

 

What Do We Get?

Once we have an architecture that consists of small isolated components, and we’ve minimized or eliminated dependencies, we not only can support the proposed automated deployment pipeline, we also reap other benefits. To name a few, we have an architecture and an environment that:

  • Promotes and supports agility from the entire team in terms of response to issues or new requests
  • Allows smaller bits and packages to be deployed – which lowers risk for the product owners and customers
  • Allows teams and components to utilize frameworks and patterns that best suit their own unique needs (rather than everyone depending on a lowest common denominator)
  • Avoids the situation where one component’s change in technology or development practice necessarily affects all other components.

I’m sure you’ve heard something like this: “but we’re not ready to support SuperLibrary version 2.0, we need to stay with 1.0 for the next 6 months. And since we’re sharing the SuperLibrary DLLs, you’ll have to find another (read: more difficult and expensive) way to meet your customer requests.”

We must understand that there is actual realizable value in being able to deliver independently from other components. And that value is usually higher than the value gained by trying too hard to avoid duplicate code. Not always – it’s a balance.

 

Some Common Antipatterns

To wrap it up, here are a few things I’ve noticed that might indicate you aren’t ready to create a high-powered, rapid customer-value delivery pipeline. Feel free to comment on what you’ve witnessed.

Long-running builds – If it takes tens of minutes just to compile your solution/package, then it’s too big and needs to be broken down into smaller components. And if it takes tens of minutes to compile and run all first-phase unit tests, it’s still too big! At the risk of sounding over-generalized, I think a single component should take about 15 to 30 seconds to compile and maybe another 30 seconds to run all unit tests. That’s pretty subjective, but you get the idea.

Multiple web sites / virtual directories in one solution/package – One easy way to draw a circle around a component is by way of its web application deployment package. And since two web sites are generally independent from a user’s viewpoint, you can most likely separate each web site project into its own component.

Relying on feature branches to allow teams to work independently – This is a common antipattern exhibited by an architecture that creates large components. In order for one team to fix bugs or create new features in their component, they must create a branch of the ENTIRE application. Anyone who has lived or is living through a feature-branch nightmare knows what I’m talking about. Don’t force branches for every team or every new feature. Let the components themselves be the “branch”.

Creating a patch requires a build of all components – Bugs happen. Hotfixes happen, too. And sometimes we need to create a patch and get it out quickly. But if that patch or hotfix requires me to rebuild the entire system – ouch!! We need to lower risk, not increase it. And we need to create patches that we feel confident will not introduce change into other areas of the system. And that starts with not rebuilding those other areas of the system!


Monday, July 25, 2011

As I get more and more proficient with Windows PowerShell, I find that my profile is getting more and more useful. I thought I’d post my current profile’s content here, in case anyone finds any of these functions useful.

 

set-alias gh get-help

set-alias gcmd get-command

set-alias wo where-object

set-alias ss select-string

function sync {tfpt online /adds /deletes /diff "$args" /recursive /noprompt}

function kil {kill -name $args[0]}

function e {explorer $args[0]}

function sn { & 'C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\sn.exe' $args }

function sn64 { & 'C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\x64\sn.exe' $args }

function prompt { "PS>" }

function hosts { notepad c:\windows\system32\drivers\etc\hosts }

function pro { notepad C:\Users\jkurtz\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 }

function gcp {get-content $args[0] | out-host -paging}

function edit { notepad $args[0] }

function npp { & 'C:\Program Files (x86)\Notepad++\notepad++.exe' $args }

# Replaces $find with $replace in both file contents and file names under $path
function Replace-String($find, $replace, $path, $include)
{
    ls $path -include $include -exclude *.dll,*.exe -recurse | select-string $find -list | % { echo "Processing contents of $($_.Path)"; (get-content $_.Path) | % { $_ -replace $find, $replace } | set-content $_.Path -Force }
    ls $path *$find* -include $include -recurse | % { echo "Renaming $($_.FullName) to $($_.FullName.Replace($find, $replace))"; mv $_.FullName $_.FullName.Replace($find, $replace) }
}


I also tend to keep a running list of change-directory functions handy. For example, if I am working on a project called “Popcorn”, I will create a function that lets me change the current directory to that project’s trunk – like this:

function cd-pc {cd c:\projects\popcorn\trunk; gl }

 

I realize I could use PowerShell drives. But I’m just not all that comfortable using drive letters – mapped drives in Windows or PowerShell drives.


Tuesday, January 26, 2010

In VS2010 Beta 2 you can associate your own activity designer with a custom activity in one of two ways:

  1. Using System.ComponentModel.DesignerAttribute on the custom activity
  2. Implementing System.Activities.Presentation.Metadata.IRegisterMetadata, using the Register() method to set up the association at run-time

It should be obvious that option (1) couples your custom activity directly to a specific designer. While not ideal, in many cases this is sufficient. But, there are some scenarios that require a looser coupling. For example, I may want to choose one of several designers at run-time – based on the user’s authorization. This post describes the steps needed to implement option (2) – because it’s a little tricky.
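
For contrast, option (1) is nothing more than an attribute on the activity itself. A rough sketch, using the SendEmailActivity/SendEmailActivityDesigner types introduced below:

using System.Activities;
using System.ComponentModel;

// Option (1): bind the activity to a specific designer at compile time.
// Simple, but the activity assembly now holds a hard reference to the
// designer assembly - which is exactly the coupling option (2) avoids.
[Designer(typeof(SendEmailActivityDesigner))]
public class SendEmailActivity : CodeActivity
{
    protected override void Execute(CodeActivityContext context)
    {
        // ... send the email, as shown in the full listing below ...
    }
}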

First, let’s assume we have a project in our solution called MyActivities. Let’s also assume that this project contains a custom activity called SendEmailActivity. This activity derives from CodeActivity and simply uses System.Net.Mail.SmtpClient to send an email within the Execute() method. Our SendEmailActivity activity has several InArguments – to allow the user to set the email address, subject, body, etc. The code for our SendEmailActivity might look like this:

using System;
using System.Activities;
using System.Net.Mail;

namespace MyActivities
{
    public class SendEmailActivity : CodeActivity
    {
        public InArgument<string> To { get; set; }
        public InArgument<string> From { get; set; }
        public InArgument<string> Subject { get; set; }
        public InArgument<string> Body { get; set; }
        public InArgument<string> Host { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            SmtpClient client = new SmtpClient(Host.Get(context));

            try
            {
                client.Send(
                    From.Get(context),
                    To.Get(context),
                    Subject.Get(context),
                    Body.Get(context));
            }
            catch (Exception ex)
            {
                string error = ex.Message;
                if (ex.InnerException != null)
                {
                    error = ex.InnerException.Message;
                }
                Console.WriteLine("Failure sending email: " + error);
            }
        }
    }
}

 

So now, to create an activity designer and associate it to the SendEmailActivity:

  1. Create a Workflow Activity Designer Library project in the same solution with a name that appends ".VisualStudio.Design" to the name of the project where the custom activity resides. In this example, the name would be MyActivities.VisualStudio.Design
  2. Create your activity designer. We'll call the designer class SendEmailActivityDesigner. (I’ve pasted the XAML for this designer at the end of this post)
  3. From the new designer project (MyActivities.VisualStudio.Design), reference the following:
    1. MyActivities - as a project reference
    2. PresentationFramework
    3. System.Activities.Core.Presentation
    4. System.Activities.Presentation
  4. In the code-behind for the activity designer's XAML (SendEmailActivityDesigner.xaml.cs), implement the IRegisterMetadata interface. This includes utilizing the MetadataStore in the IRegisterMetadata.Register() method. The code should look like this:
using System.Activities.Presentation.Metadata;
using System.ComponentModel;

namespace MyActivities.VisualStudio.Design
{
    public partial class SendEmailActivityDesigner : IRegisterMetadata
    {
        public SendEmailActivityDesigner()
        {
            InitializeComponent();
        }

        public void Register()
        {
            AttributeTableBuilder builder = new AttributeTableBuilder();
            builder.AddCustomAttributes(
                typeof(SendEmailActivity),
                new DesignerAttribute(typeof(SendEmailActivityDesigner)));
            MetadataStore.AddAttributeTable(builder.CreateTable());
        }
    }
}

 

Now, in order for the VS activity designer to "see" your designer for the corresponding activity you need to make sure the MyActivities.VisualStudio.Design.dll file ends up in the same folder as MyActivities.dll. I've done this so far with a simple post-build event on the MyActivities.VisualStudio.Design project.
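
For what it's worth, that post-build event is just a copy command along these lines – the target folder here is an assumption, so point it at wherever MyActivities.dll actually ends up in your solution:

xcopy /y "$(TargetPath)" "$(SolutionDir)MyActivities\bin\$(ConfigurationName)\"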

Then, finally, in the project that contains the workflow on which you want to drag-and-drop the SendEmailActivity make sure you add project references to both MyActivities and MyActivities.VisualStudio.Design.

* Note that this works only for the Visual Studio hosted workflow designer. If you re-host the designer in your own application you need to remove the "VisualStudio" part of the project/DLL name from your activity designer project. In our example this would be: MyActivities.Design.

 

Here’s the XAML code for the SendEmailActivityDesigner designer. The code-behind content is above, under step 4. (I’m certainly no WPF expert so go easy on me!)

<sap:ActivityDesigner x:Class="MyActivities.VisualStudio.Design.SendEmailActivityDesigner"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation"
    xmlns:sapv="clr-namespace:System.Activities.Presentation.View;assembly=System.Activities.Presentation"
    xmlns:sapc="clr-namespace:System.Activities.Presentation.Converters;assembly=System.Activities.Presentation">
    <sap:ActivityDesigner.Resources>
        <sapc:ArgumentToExpressionConverter
            x:Key="ArgumentToExpressionConverter"
            x:Uid="swdv:ArgumentToExpressionConverter_1" />
    </sap:ActivityDesigner.Resources>

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition></RowDefinition>
            <RowDefinition></RowDefinition>
            <RowDefinition></RowDefinition>
            <RowDefinition></RowDefinition>
            <RowDefinition></RowDefinition>
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width=".4*"></ColumnDefinition>
            <ColumnDefinition Width=".6*"></ColumnDefinition>
        </Grid.ColumnDefinitions>

        <TextBlock
            Grid.Row="0"
            Grid.Column="0"
            Text="Host: " />

        <sapv:ExpressionTextBox
            Name="HostTextBox"
            Grid.Row="0"
            Grid.Column="1"
            Expression="{Binding Path=ModelItem.Host, Mode=TwoWay,
                Converter={StaticResource ArgumentToExpressionConverter},
                ConverterParameter=In}" OwnerActivity="{Binding Path=ModelItem}"
                MinLines="1" MaxLines="1" MinWidth="100"
                HintText="Host"/>

        <TextBlock
            Grid.Row="1"
            Grid.Column="0"
            Text="From: " />

        <sapv:ExpressionTextBox
            Grid.Row="1"
            Grid.Column="1"
            Expression="{Binding Path=ModelItem.From, Mode=TwoWay,
                Converter={StaticResource ArgumentToExpressionConverter},
                ConverterParameter=In}" OwnerActivity="{Binding Path=ModelItem}"
                MinLines="1" MaxLines="1" MinWidth="100"
                HintText="From"/>

        <TextBlock
            Grid.Row="2"
            Grid.Column="0"
            Text="To: " />

        <sapv:ExpressionTextBox
            Grid.Row="2"
            Grid.Column="1"
            Expression="{Binding Path=ModelItem.To, Mode=TwoWay,
                Converter={StaticResource ArgumentToExpressionConverter},
                ConverterParameter=In}" OwnerActivity="{Binding Path=ModelItem}"
                MinLines="1" MaxLines="1" MinWidth="100"
                HintText="To"/>

        <TextBlock
            Grid.Row="3"
            Grid.Column="0"
            Text="Subject: " />

        <sapv:ExpressionTextBox
            Grid.Row="3"
            Grid.Column="1"
            Expression="{Binding Path=ModelItem.Subject, Mode=TwoWay,
                Converter={StaticResource ArgumentToExpressionConverter},
                ConverterParameter=In}" OwnerActivity="{Binding Path=ModelItem}"
                MinLines="1" MaxLines="1" MinWidth="100"
                HintText="Subject"/>

        <TextBlock
            Grid.Row="4"
            Grid.Column="0"
            Text="Body: " />

        <sapv:ExpressionTextBox
            Grid.Row="4"
            Grid.Column="1"
            Expression="{Binding Path=ModelItem.Body, Mode=TwoWay,
                Converter={StaticResource ArgumentToExpressionConverter},
                ConverterParameter=In}" OwnerActivity="{Binding Path=ModelItem}"
                MinLines="10" MaxLines="15" MinWidth="700" MinHeight="100"
                HintText="Body"/>
    </Grid>
</sap:ActivityDesigner>

I’m working on a project where we are using the Composite Application Library from Microsoft’s patterns & practices team. You can read the official documentation on that site and on MSDN for all the details, but basically the CAL allows you to build applications using totally decoupled modular components – or “modules” in CAL vernacular. These modules are discovered at runtime and are registered in the CAL container, which then handles each module’s loading, showing, unloading, etc. (I’m greatly simplifying here).

To enable runtime module discovery, you can pick between one of four different “cataloging” methods:

  • Populate from code
  • Populate from XAML
  • Populate from a configuration file
  • Populate from a directory

The fourth one, populating from a directory, is what we wanted to use. This method of cataloging allows you to drop modules into a directory and have them picked up by the CAL. Essentially, it examines all assemblies in the directory and looks for types decorated with the ModuleAttribute attribute.
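
For reference, a module that the directory catalog can discover looks roughly like this – the module name and what it registers are made up for illustration:

using Microsoft.Practices.Composite.Modularity;

// Dropping the assembly containing this type into a probed directory is
// enough for the catalog to find it.
[Module(ModuleName = "ReportingModule")]
public class ReportingModule : IModule
{
    public void Initialize()
    {
        // Register this module's views and services with the container
        // and region manager here.
    }
}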

The CAL’s implementation of this directory cataloging only allows for a single directory. It does this through the DirectoryModuleCatalog catalog. As taken from the CAL’s documentation, the following example will configure your application to search in a Modules subdirectory for all modules:

protected override IModuleCatalog GetModuleCatalog()
{
    return new DirectoryModuleCatalog() { ModulePath = @".\Modules" };
}

Very cool!! But… we need to search through multiple directories – not just a single directory.

Long story short, I was able to subclass the DirectoryModuleCatalog to create a new directory-based catalog that can search as many directories as you want to give it. Now, you might laugh at my new catalog, but it does work!! Here it is:

/// <summary>
/// Allows our shell to probe multiple directories for module assemblies
/// </summary>
public class MultipleDirectoryModuleCatalog : DirectoryModuleCatalog
{
    private readonly IList<string> _pathsToProbe;

    /// <summary>
    /// Initializes a new instance of the MultipleDirectoryModuleCatalog class.
    /// </summary>
    /// <param name="pathsToProbe">An IList of paths to probe for modules.</param>
    public MultipleDirectoryModuleCatalog(IList<string> pathsToProbe)
    {
        _pathsToProbe = pathsToProbe;
    }

    /// <summary>
    /// Provides multiple-path loading of modules over the default <see cref="DirectoryModuleCatalog.InnerLoad"/> method.
    /// </summary>
    protected override void InnerLoad()
    {
        foreach (string path in _pathsToProbe)
        {
            ModulePath = path;
            base.InnerLoad();
        }
    }
}

All you need to do is provide an IList<string> of paths – that’s it! So, to update the CAL’s sample:

protected override IModuleCatalog GetModuleCatalog()
{
    IList<string> pathsToProbe = GetPathsToProbe();
    return new MultipleDirectoryModuleCatalog(pathsToProbe);
}
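
GetPathsToProbe() isn't part of the CAL – it's simply whatever your shell uses to decide which folders to scan. A trivial sketch (the folder names below are made up, and List<string> comes from System.Collections.Generic):

private static IList<string> GetPathsToProbe()
{
    // Return whatever folders your shell should scan for module assemblies.
    return new List<string>
    {
        @".\Modules",
        @".\Modules\Reporting",
        @".\ThirdPartyModules"
    };
}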

For quite a while now I’ve been wondering if our culture of text messages, emails, messenger/chat conversations, short YouTube clips, rapid-fire news channels, PodCasts, blog posts, Twitter, and Facebook updates is somehow hurting my ability to focus on and develop deeper concepts. By that I mean really dig in, learn, and add value to some larger-than-me idea. As I write these few sentences, emails are streaming in, the phone is ringing, and no doubt my friends and family are Tweeting and writing on my Facebook wall. And sometimes the urge to check on these things is too much. Actually, most of the time it IS too much – and I give in. In computer terms I am performing a context switch, thus flushing out my cache and starting over with the next task. When I finally get back to writing this blog post, my brain struggles to re-engage and become effective again.

It has been known for a while that even answering emails – while working on other mentally taxing activities – can really set you back. It interrupts flow. So you may spend 1 minute reading an email, but then your brain spends another 20 minutes trying to get back to the point at which you switched from the task at hand to read that email. The term “flow” is often used to describe the state in which the average person is really effective. Their brain is chugging along and they are adding real value to whatever they are working on. (Well, I suppose they could be adding dis-value to whatever they’re working on, but that’s another story!)

One of the best books I’ve ever read on productivity is Getting Things Done, by David Allen. He provides a very simple and pragmatic approach to organizing and prioritizing all the “things” we have to do in life – whether small work tasks, or larger life goals. One of Mr. Allen’s tips that really stuck with me – among many – was reserving only certain times of the day for checking email. For example, maybe you check your email at 9am, 12pm, and 4pm. Then in between those times you are working (or playing or painting or walking, etc.). I learned long ago that I work much better by simply turning off the little Outlook “new mail” notifications (both visually and the little bell sound). The goal is to get into a “flow” – a state described very well by David Chaplin in his article on Maximizing Development Productivity:

Being in flow is when you are fully immersed in a task. You are so focused on it that you are almost in a trance-like state. Hours can go by without you noticing. Work gets done very fast. When you are in flow you are running at your highest velocity. It takes approximately 20 minutes to get into flow. However, if you get disturbed and knocked out of flow, it will take another 20 minutes immersion time before you are back in full flow. It is important to stay immersed in flow for long periods at a time to get anything considerable done.

Take a look at the productivity graphs in that article, and compare the “Ideal Productivity” graph to the “Disruptions” graph. Most developers, and probably most people in general, would agree with the idea of “flow”. Further, most people would probably agree that it takes time to re-engage in “flow” – following an interruption.

But I was wondering, what is the relationship between this constant interruption of flow and my ability to actually stay in flow (i.e. ignore the interruptions and stay engaged)? In other words, flip the whole theory upside down. As a people, are we learning to better deal with the constant multi-tasking demands and context switching? Or is a pervasive interruption of flow actually damaging to our ability to really focus on and develop an idea?

I can see in myself that, yes, this constant context switching is altering my modus operandi. My mind seems to crave a context switch!! Am I learning to become bored with a mere 10 minutes of “flow”? As a result of years of multi-tasking, is it now much harder to get myself to focus on anything that is not ultra exciting? Is my brain compelled – even addicted – to catching a quick news article, reading a short blog post, or firing off a Twitter update?

This morning I stumbled on this recent article, http://www.crn.com/it-channel/219401343;jsessionid=DZRE4FY5OPW5JQE1GHPCKHWATMY32JVN, titled “Multitaskers Not Very Good At Multitasking”. This study is early in the exploration of multi-tasking and its relationship with our current-age media, but its results resonate with my thinking:

The subjects were asked to perform a simple cognitive filtering test, to focus on the characteristics of a group of red triangles while ignoring a group of blue triangles. The end result: So-called multitaskers performed worse than people who were not regular media multitaskers, according to Reuters.

Similar findings occurred when the study participants took tests to measure organizational ability and task switching. Multitaskers were slower to shift their attention from one task to another.

While this doesn’t directly speak to my feelings and observations around the interruption of “flow”, it certainly debunks the idea that we are getting better at managing all of our inputs. Such practices as meditation, reading actual books (you know, with pages and chapters), taking naps, and enjoying long walks may need to become more than infrequent pleasures. We may find ourselves in a state where these activities are required just to avoid the kind of mental degeneration we are starting to see all around us. While watching 30-second news clips and reading 5-word Facebook updates is certainly enjoyable and keeps us in touch with our friends and family and the rest of the world, I don’t think society can really move forward without deep intellectual engagement in ideas and uninterrupted “flow”.


Tuesday, May 19, 2009

A few weeks ago I was in line at the grocery store behind a lady who was buying a fair amount of groceries. Pretty typical trip to the store, except that as I watched, she was placing her items (from the cart) onto the little conveyor thing quite atypically. Rather than just lining everything up one after the other, she was placing the items here and there on the conveyor. It seemed rather random to me. Until at one point, as she was closely watching the total price go up and up, she said “ok, that’s enough.” The cashier looked at her kind of funny, but the lady simply responded “that’s all I can afford”. So the cashier stopped – didn’t scan any more groceries.

Apparently, she’d prioritized her load of groceries – either in the cart and/or as they were being placed on the conveyor. She knew what she could commit to, and she didn’t overcommit.

Of course, it’s a little rude to then expect the grocery clerk to go and return all the lower-priority items that didn’t make this grocery “sprint”. But it was fun to watch!


Tuesday, February 24, 2009

I have a small confession to make...

A little TFS web service I built over the past few days was the first “real” production-ready application I’ve ever written from scratch where I’ve been good about maintaining unit tests. I even pushed myself to practice real, live TDD, and even to use Moq.

I have to tell you that I am now a firm believer in the benefits of leaning very heavily on your unit tests. Of course I've always believed others' stories, and have even worked on projects/products where I had to maintain a fair set of tests. But never having really tried it myself – on a new project where I could create real, isolated unit tests – I think my belief was kind of shallow. Sort of like… believing that the life vest will save your life before ever really needing it. And then one day you find yourself involved in a boating accident and you awake after being unconscious in the water for two hours. Then you REALLY believe it.

I think this small experience will help me in conveying to others the incredible importance of building that unit test base. Because after only a few tests were written (and hence, only a bit of code written), I found myself getting nervous (as usual) making changes. But then I could flip to the Visual Studio Test View, run all the tests, and feel some assurance that I wasn't totally hosing things. Then as bugs would crop up during user acceptance testing, the very first thing I would do is write a new unit test that I knew would fail – based on the conditions I saw in the UAT. Then I would proceed to change the code until the unit test passed. What a great feeling!! And further, I know that anyone else making changes to my code will have to pass all the same unit tests – another very assuring feeling.

It really is amazing how much we screw up in code!?!?! Even the stupidest littlest things. There were times when I would think "This is lame, I don't need a test. But I'll write one anyway - and I'll write it before I even write the code." And then it would literally take me three or four tries to get the test to pass!! Crazy. Makes me very scared about all the untested code churn going on in the world!!

Oh, and using Moq was a real treat - it only took me a few unit tests to get the hang of it. It really helps shorten the feedback cycle during development - as well as helping to isolate your tests to smaller and smaller units (i.e. "unit tests").
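
For anyone who hasn't tried Moq, here is roughly what one of these tests looks like. The types below are invented for illustration – they're not from my actual TFS web service:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public class WorkItem
{
    public string Title { get; set; }
}

public interface IWorkItemRepository
{
    void Add(WorkItem item);
}

public class WorkItemService
{
    private readonly IWorkItemRepository _repository;

    public WorkItemService(IWorkItemRepository repository)
    {
        _repository = repository;
    }

    public void Save(WorkItem item)
    {
        _repository.Add(item);
    }
}

[TestClass]
public class WorkItemServiceTests
{
    [TestMethod]
    public void Save_adds_the_work_item_to_the_repository()
    {
        // The mock stands in for the real repository, so no TFS server is needed.
        var repository = new Mock<IWorkItemRepository>();
        var service = new WorkItemService(repository.Object);

        service.Save(new WorkItem { Title = "Fix login bug" });

        // Verify the interaction instead of hitting real infrastructure.
        repository.Verify(r => r.Add(It.IsAny<WorkItem>()), Times.Once());
    }
}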


Saturday, January 3, 2009

Today some quick tricks finally came together for me with Windows PowerShell, so I can finally write scripts and quickly execute them from the PowerShell prompt.

If you haven't yet installed PowerShell, you can find it here: http://www.microsoft.com/windowsserver2003/technologies/management/powershell/download.mspx.

My simple goal was to be able to a) write a short script that can take parameters, and b) be able to easily run it from the shell. Here's what I did:

  1. Close any open PowerShell windows
  2. Add your script path to the Windows PATH environment variable. I'm using "C:\Users\jkurtz\Documents\WindowsPowerShell", but it can really be anywhere.
  3. Make sure you have configured PS to run scripts. This page gives the details, but basically do the following:
    1. Open up a new PowerShell window
    2. Execute this: Set-ExecutionPolicy RemoteSigned
  4. Create a simple script, named with a "ps1" extension, in the folder specified in step (2) above. I've included a sample script below
  5. Now simply enter the name of the script at the PowerShell command-line (including any required arguments) and hit Enter

If you want to try it out, but don't have a script in mind, you can use this one. It's not the most useful script in the world, but it works for this example. Just use Notepad to create a file in your script path called "members.ps1", and paste the following contents into it.

param (
	[string] $domainname = $(throw 'please specify a domain name (can use . for local machine)'),
	[string] $groupname = $(throw 'please specify a group name')
)

# Bind to the group via ADSI (WinNT provider), then use reflection to pull
# the Name property from each member object.
$group = [ADSI]("WinNT://" + $domainname + "/" + $groupname)
$members = @($group.psbase.Invoke("Members"))
$members | foreach {$_.GetType().InvokeMember("Name", 'GetProperty', $null, $_, $null)}

 

Then, if you've followed the 5 steps above, you can simply enter the following at the PowerShell command-line:

members -domainname DOMAIN -groupname "domain admins"

Or, leave out the -domainname and -groupname switches:

members DOMAIN "domain admins"

For references and guides to using PowerShell, here are some great resources:

Note that version 2.0 of PowerShell is coming soon.


Thursday, July 24, 2008

This really bit me recently, so I want to point it out. You can read Martin Woodward's post here: http://planetscm.org/user/22/tag/tfs%20top%20tip/, but I've copied the gist of it below.

Basically, TFS 2008 will not associate any changesets to the first build for a Build Definition. This means that if you create your branch, then create a corresponding Build Definition, then make a bunch of changes to the branch (i.e. the changesets) BEFORE running your first build, all those changesets will never be associated with any builds. Ouch!!

The reasoning for this behavior actually makes sense, as described by Martin in the link above:

The reason why Team Build 2008 works in this way is that when Team Build successfully completes a build it stores the label applied to that last good build.  The next time it runs a build that is successful it will compare the two labels to detect which changesets were included in the build.  It will then look over those changesets for any associated work items and update them to include the build number in which they were fixed.

So of course be careful when upgrading from 2005 to 2008, as you should baseline all your existing builds BEFORE making any changes to the code. If you don't, that link between your changesets and the first builds will be lost forever :(

I agree with Martin, though, that as a matter of best practice you should always run a build from a new build definition prior to starting work on the associated branch.


Thursday, June 12, 2008

General references to Visual Studio Team System

 

TFS Licensing (can't forget this!!!)

 

Some great TFS blogs and forums

 

Tools and add-ins to TFS

 

Process templates for TFS

 

Book recommendations

http://www.amazon.com/Professional-Foundation-Server-Jean-Luc-David/dp/0471919306/ref=pd_bbs_1?ie=UTF8&s=books&qid=1212692014&sr=8-1

  • How to implement IT governance such as Sarbanes-Oxley
  • How to work with mixed environments (including Java and .NET)
  • How to set up the product for large distributed environments
  • How and why to take multiple lifecycles into consideration when deploying and using Team System
  • How to create custom development tools and administer and customize work items
  • How to monitor your team project metrics using SQL Server Reporting Services

http://www.amazon.com/Visual-Studio-Team-System-Development/dp/0321418506/ref=pd_bbs_sr_2?ie=UTF8&s=books&qid=1212692014&sr=8-2

  • Using VSTS to support the transition to Agile values and techniques
  • Forming Agile teams and building effective process frameworks
  • Leveraging Team Foundation Version Control to help teams manage change and share their code effectively
  • Implementing incremental builds and integration with Team Foundation Build
  • Making the most of VSTS tools for Test-Driven Development and refactoring
  • Bringing agility into software modeling and using patterns to model solutions more effectively
  • Using the FIT integrated testing framework to make sure customers are getting what they need
  • Estimating, prioritizing, and planning Agile projects

http://www.amazon.com/Software-Engineering-Microsoft-Visual-Development/dp/0321278720/ref=pd_bbs_sr_3?ie=UTF8&s=books&qid=1212692014&sr=8-3

  • The role of the value-up paradigm (versus work-down) in the software development lifecycle, and the meanings and importance of “flow”
  • The use of MSF for Agile Software Development and MSF for CMMI Process Improvement
  • Work items for planning and managing backlog in VSTS
  • Multidimensional, daily metrics to maintain project flow and enable estimation
  • Creating requirements using personas and scenarios
  • Project management with iterations, trustworthy transparency, and friction-free metrics
  • Architectural design using a value-up view, service-oriented architecture, constraints, and qualities of service
  • Development with unit tests, code coverage, profiling, and build automation
  • Testing for customer value with scenarios, qualities of service, configurations, data, exploration, and metrics
  • Effective bug reporting and bug assessment
  • Troubleshooting a project: recognizing and correcting common pitfalls and antipatterns

http://www.codeplex.com/TFSGuide (available as a free PDF download)


Tuesday, June 10, 2008

Today I was trying to find out who has what checked out from a certain folder in TFS - call it $/Project/Folder. I happened to not have most of that folder downloaded yet - i.e. not in my workspace. I went to the command line, and typed the following:

tf status c:\Project\Folder /recursive /user:*

That returned with “There are no pending changes.” – which is interesting because I can see that there ARE pending changes. And yes, my workspace mapping / working folders are configured properly. So then I tried this instead:

tf status $/Project/Folder /recursive /user:*

And that showed all the actual pending changes.

So I guess the moral of the story is that you need to watch out when using local paths – as it seemed to be answering the question “what are all the pending changes for only the files I have in my workspace?”. Coincidentally, one of the guys on my team mentioned to me the other day that he generally uses server paths when using the tf command-line tool.


Saturday, June 7, 2008

I recently discovered that after you install the 2008 Team Explorer (here) - and you already had the 2005 Team Explorer installed - you may not be able to view or edit TFS work items in Excel. Instead, you get an error that says: "TF80076: The data in the work item is not valid or you do not have permissions to modify the data. Please correct the problem and retry."

This error is caused when two different versions of the TFS Office Integration plug-in are installed but not configured properly. More details of the problem and a workaround can be found here. However, the instructions for configuring your machine to use the 2008 add-in aren't correct. I've posted the correct details below. Make sure you close all instances of Visual Studio prior to following these steps.

 

To use the Visual Studio 2008 Team Explorer add-in

1.     At the command prompt, change directories to:
        C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\PrivateAssemblies\

2.     Unregister the Visual Studio 2005 Team Explorer add-in by running the following command:
        regsvr32 /u TFSOfficeAdd-in.dll

3.     At the command prompt, change directories to:
        C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\

4.     Register the Visual Studio 2008 Team Explorer add-in by running the following command:
        regsvr32 TFSOfficeAdd-in.dll