One of the earliest lessons I was taught in Enterprise development was "always program against an interface". This was back in the VB6 days, and I quickly learned that no code would be allowed to move to the QA server unless each of my business objects and data access objects was defined as an interface with a matching implementation class. Why? "It's more reusable" was one answer. "It doesn't tie you to a specific implementation" was a slightly more knowing answer. And let's not forget the discussion-ending "it's a standard". The problem with these responses was that the senior people didn't really understand the reason we were doing the things we were doing, and because of that, we were entirely unable to realize the intent behind the practice - we simply used interfaces and had a bunch of extra code to maintain to show for it.
It wasn't until a few years later that I finally heard the term "Inversion of Control". Simply put, Inversion of Control takes the creation of objects that used to be within the control (and therefore the responsibility) of your component and moves it to some outside force. For example, consider the following code, which follows the old "always program against an interface" rule in the manner of many corporate development shops:
ICatalog catalog = new Catalog();
IEnumerable<Category> categories = catalog.GetCategories();
In this example, I met the letter of the rule by declaring the variable as ICatalog, but I didn't achieve "it doesn't tie you to a specific implementation" because I explicitly created an instance of the concrete Catalog class. If I want to test the functionality of the code I just wrote, I need an environment in which Catalog can be created, along with any of the resources upon which it depends (e.g. configuration files, database connections, etc.). That's a lot of setup work, and it's one of the things that I think ultimately discourages real buy-in of unit testing in many development shops.
So how do I test my code without needing Catalog to work? A very primitive approach I've seen is to change the line that instantiates the catalog to read:
ICatalog catalog = new FakeCatalog();
Once the test is run and passes, the code is switched back to the real thing. This obviously poses a huge risk of test code slipping into production and, in my opinion, is worse than just keeping the dependency and its associated setup work. Another popular approach is to make use of the Factory pattern: an object whose "job" is to know how to obtain a valid instance of the requested type. Using this approach, the code may look something like this:
ICatalog catalog = CatalogFactory.GetCatalog();
The code inside the factory is responsible for deciding "what kind" of catalog is needed. This is a far better approach than the previous one, but it does make projects grow considerably: in addition to the interface, the real implementation, and the fake implementation(s) for testing, you have now added at least one factory (or at least a factory method) for each of your interfaces. Once again, developers say "that's too complicated and has me writing a bunch of useless code" and quietly slip back into just creating a new Catalog and chalking any test failures up to "it will probably work on the server".
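A minimal sketch of such a factory, assuming a hypothetical UseFakes switch (a real shop would more likely read this from configuration), might look like:

```csharp
// ICatalog, Catalog, and FakeCatalog are the types from the examples above.
public static class CatalogFactory
{
    // Hypothetical switch; in practice this might come from a config file.
    public static bool UseFakes { get; set; }

    public static ICatalog GetCatalog()
    {
        // The factory, not the caller, decides "what kind" of catalog to hand back.
        return UseFakes ? (ICatalog)new FakeCatalog() : new Catalog();
    }
}
```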
This is where software intended specifically to facilitate Inversion of Control comes into play. There are many libraries that take on the Inversion of Control responsibilities in .NET, each with its own pros and cons. From this point forward I'll discuss concepts from the standpoint of the Unity framework produced by Microsoft's Patterns and Practices team, primarily because questions about that library inspired this post.
At the core of Unity - and of most any IoC framework - is a catalog or registry of components. This registry can be configured either through code or through the application's configuration file, and in the simplest terms it says "interface X maps to concrete implementation Y". It can get much more complicated, but I want to keep things at the "what does it do" level instead of "how does it do it". The object that exposes most of Unity's functionality is the UnityContainer. This object exposes methods to configure the registry as well as the Resolve<T> method, which is used to obtain an instance of the type represented by T. When using the Resolve<T> method, Unity does not necessarily just "new up" the requested object; it can also track the dependencies of that object and ensure that the entire dependency chain is satisfied.
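As a sketch of the registration side, using Unity's code-based configuration (the ICatalog and Catalog types are carried over from the earlier examples):

```csharp
using Microsoft.Practices.Unity;

// Build the registry in code: "interface X maps to concrete implementation Y".
var container = new UnityContainer();
container.RegisterType<ICatalog, Catalog>();

// Resolve<T> hands back a Catalog; Unity also satisfies any
// dependencies that Catalog itself declares.
ICatalog catalog = container.Resolve<ICatalog>();
```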
There are three basic ways that I have seen Unity used within projects: classes directly using the Unity container, classes requiring injection of their dependencies, and classes making use of the Service Locator pattern.
The first usage of Unity is when classes are aware of the Unity container and directly call its Resolve method whenever they need the services advertised by an interface. The upside of this approach is that IoC is actually utilized; the downside is that every class has to be aware that Unity is being used and is tied directly to that implementation.
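In this style, code like the following appears throughout the codebase (StoreFront is a hypothetical class, and ApplicationContainer.Instance is an assumed static holder for the shared container):

```csharp
public class StoreFront
{
    public void ShowCategories()
    {
        // The class reaches out to the Unity container itself,
        // tying it directly to Unity as the IoC implementation.
        ICatalog catalog = ApplicationContainer.Instance.Resolve<ICatalog>();
        foreach (Category category in catalog.GetCategories())
        {
            // ...render the category...
        }
    }
}
```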
Many developers don't like as close a tie to a specific IoC implementation as using Unity within all of your classes represents, and for the most part I agree that it isn't a good idea. As an alternative, classes can be designed for Dependency Injection. Dependency Injection is where a force outside the class itself provides the implementations of the interfaces that the class needs to interact with the outside world. This is typically done either through constructor injection, where the object has a constructor that accepts an instance of each interface it requires, or through property setters accepting the service providers. When using dependency injection, I lean toward constructor injection because I view the constructor as a much better way to "discover" what is required for the instance to be ready for use. During resolution, Unity looks for an injection constructor and attempts to resolve an instance of each interface required by that constructor, throwing an exception if it is unable to meet the advertised needs of the class. The upside of this approach is that the needs of the class are very clearly advertised and the class is unaware of which IoC container (if any) is being used. The downside is that you're required to keep the objects passed to the constructor as instance variables throughout the life of your object, and that objects which coordinate with many external services require a lot of constructor arguments (this gets ugly and may indicate a need for refactoring).
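A constructor-injection version of the same hypothetical class might look like this; resolving it through the container then satisfies the whole dependency chain:

```csharp
public class StoreFront
{
    private readonly ICatalog _catalog;

    // The constructor advertises exactly what the class needs; Unity
    // (or any other container, or a unit test) supplies the ICatalog.
    public StoreFront(ICatalog catalog)
    {
        _catalog = catalog;
    }

    public void ShowCategories()
    {
        foreach (Category category in _catalog.GetCategories())
        {
            // ...render the category...
        }
    }
}

// At the composition root, Unity injects the registered ICatalog:
// StoreFront storeFront = container.Resolve<StoreFront>();
```

A unit test can construct the class directly with a FakeCatalog, no container or switched code required.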
The final way that I've seen and used Unity is through the Service Locator pattern, for which the Patterns and Practices team has also provided a Unity-compatible implementation. When using the ServiceLocator, your class calls ServiceLocator.Current.GetInstance<T>() in the places where it would have called Resolve on the Unity container. Like using Unity directly, this ties you to the ServiceLocator implementation and makes your code aware that dependency injection is taking place, but it has the upside of giving you the freedom to swap out the underlying IoC container if necessary. I'm not hugely concerned with hiding IoC entirely from the class (I view this as a "nice to have"), so the single biggest problem I see with the ServiceLocator approach is that it provides no way to proactively advertise needs the way constructor injection does, leaving more opportunity for difficult-to-track runtime errors.
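Wiring Unity behind the Common Service Locator looks roughly like this (a sketch; UnityServiceLocator is the Unity-compatible adapter mentioned above):

```csharp
using Microsoft.Practices.ServiceLocation;
using Microsoft.Practices.Unity;

// Once, at application startup:
var container = new UnityContainer();
container.RegisterType<ICatalog, Catalog>();
ServiceLocator.SetLocatorProvider(() => new UnityServiceLocator(container));

// Anywhere in the code, with no direct reference to Unity:
ICatalog catalog = ServiceLocator.Current.GetInstance<ICatalog>();
```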
This blog entry has not been intended in any way to be a definitive work on IoC, but rather as something to spur thought about why we program to interfaces and some ways to reach the intended value of the practice instead of having it just complicate your code. I hope that it helps somebody begin or continue a journey away from being a "Cargo Cult Programmer".