Dependency Injection - Good or Bad Practice?
Jacob Proffitt shares his musings about Dependency Injection on his blog. He gives an example of a data access pattern, which might be implemented either by directly managing the data access provider or by having the provider or the connection injected. He concludes that
The benefit to this pattern is that the class is now disconnected from the data provider. The disadvantage is that now my calling code has to handle the provider.
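The trade-off Jacob describes can be sketched in code. The original post is in C#; the following is a hypothetical Java sketch with made-up type names (`OrderRepositoryDirect`, `DataProvider`, and so on), not his actual example:

```java
// Variant 1: the class manages its own data provider internally.
// It is easy to call, but hard-wired to one concrete provider.
class OrderRepositoryDirect {
    java.util.List<String> loadOrders() {
        DataProvider provider = new SqlDataProvider(); // hard-wired dependency
        return provider.query("SELECT * FROM orders");
    }
}

// Variant 2: the provider is injected. The class is decoupled from any
// concrete provider -- but, as Jacob notes, the caller must now supply one.
class OrderRepositoryInjected {
    private final DataProvider provider;

    OrderRepositoryInjected(DataProvider provider) {
        this.provider = provider;
    }

    java.util.List<String> loadOrders() {
        return provider.query("SELECT * FROM orders");
    }
}

interface DataProvider {
    java.util.List<String> query(String sql);
}

class SqlDataProvider implements DataProvider {
    public java.util.List<String> query(String sql) {
        // Stand-in for real data access.
        return java.util.List.of("order-1", "order-2");
    }
}
```

In the second variant the decision about which provider to use has moved out of the repository and into whoever constructs it, which is exactly the shift in responsibility Jacob objects to.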
In his opinion Dependency Injection has mainly been “hyped” by Unit Testing frameworks using mock objects:
The real reason that DI has become so popular lately, however, has nothing to do with orthogonality, encapsulation, or other “purely” architectural concerns. The real reason that so many developers are using DI is to facilitate Unit Testing using mock objects.
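The testing payoff Jacob refers to is that an injected dependency can be replaced by a test double without any mocking library at all. A hypothetical Java sketch (all names invented for illustration):

```java
interface DataProvider {
    java.util.List<String> query(String sql);
}

class OrderService {
    private final DataProvider provider;

    OrderService(DataProvider provider) {
        this.provider = provider;
    }

    int orderCount() {
        return provider.query("SELECT * FROM orders").size();
    }
}

// A hand-rolled fake standing in for the database during a unit test.
// Libraries like NMock or Rhino.Mocks generate such stand-ins for you,
// but they still rely on the dependency being injectable.
class FakeProvider implements DataProvider {
    public java.util.List<String> query(String sql) {
        return java.util.List.of("a", "b", "c"); // canned test data
    }
}
```

Because `OrderService` accepts any `DataProvider`, the test constructs it with `FakeProvider` and never touches a real database.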
There are several mock object libraries making use of dependency injection, e.g. NMock and Rhino.Mocks. Jacob refers to TypeMock as an example of a “Superior .Net Mocking” solution, which “claims to allow you to mock objects used by your classes without having to expose those internal objects at all”. He closes by saying:
And why am I still hearing about the virtues of a pattern whose sole perceptible benefit is allowing mock objects in Unit Tests?
Nate Kohari disagrees and starts a rather lengthy discussion in the comments to Jacob’s post. He says that
The real benefits of DI appear when you use a framework (sometimes called an “inversion of control container”) to support it. When you request an instance of a type, a DI framework can build an entire object graph, wiring up dependencies as it goes.
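The container behaviour Nate describes can be reduced to a toy sketch. Real containers such as Ninject, Castle Windsor, and StructureMap use reflection and configuration to discover dependencies; this hypothetical Java version just registers factory functions by hand, but shows the same recursive resolution of an object graph from a single request:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal toy DI container: bind a type to a factory, then resolve.
// Each factory receives the container, so it can resolve its own
// dependencies -- this is how one resolve call wires a whole graph.
class Container {
    private final Map<Class<?>, Function<Container, ?>> bindings = new HashMap<>();

    <T> void bind(Class<T> type, Function<Container, T> factory) {
        bindings.put(type, factory);
    }

    @SuppressWarnings("unchecked")
    <T> T resolve(Class<T> type) {
        Function<Container, ?> factory = bindings.get(type);
        if (factory == null) {
            throw new IllegalStateException("No binding for " + type);
        }
        return (T) factory.apply(this); // factory may resolve further dependencies
    }
}
```

The bindings live in one deterministic place, as Nate argues below, while the calling code only ever asks the container for the top-level type.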
Jacob counters in the comments:

How can you say that dependency injection […] creates loosely coupled units that can be reused easily when the whole point of DI is to require the caller to provide the callee’s needs? That’s an increase in coupling by any reasonable assessment.
Nate responds to this argument on his blog by defending Dependency Injection:
This is why dependency injection frameworks like Ninject, Castle Windsor, and StructureMap exist: they fix this coupling problem by washing your code clean of the dependency resolution logic. In addition, they provide a deterministic point, in code or a mapping file, that describes how the types in your code are wired together.
He also responds to Jacob’s argument that the Factory pattern already solves all the problems DI claims to address:
Factory patterns are great for small implementations, but like dependency-injection-by-hand it can get extremely cumbersome in larger projects. Abstract Factories are unwieldy at best, and relying on a bunch of static Factory Methods (to steal a phrase from Bob Lee) gives your code “static cling” — static methods are the ultimate in concreteness, and make it vastly more difficult to alter your code.
He continues in the (absolutely worth reading) comments of his post:
Now, let’s consider the DI vs. provider model argument. The real benefit of DI over a provider model (abstract factory) is the ability to wire up multiple levels of the object graph at once. With an abstract factory, you can get different implementations for a specific dependency. However, wiring the dependencies of the dependencies (and so on) is not part of the equation, unless you have a bunch of abstract factories.
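Nate’s “dependencies of dependencies” point can be illustrated with a three-level object graph. A hypothetical Java sketch (invented types, not from either post):

```java
// A three-level graph: Service depends on Repository depends on Connection.
class Connection { }

class Repository {
    final Connection conn;

    Repository(Connection conn) {
        this.conn = conn;
    }
}

class Service {
    final Repository repo;

    Service(Repository repo) {
        this.repo = repo;
    }
}

class Wiring {
    // An abstract factory can hand back a Repository implementation, but the
    // Repository's own Connection must still come from somewhere -- either a
    // second factory or hand-wiring like this. A DI container collapses all
    // three construction steps into a single resolve call.
    static Service wireByHand() {
        Connection conn = new Connection();
        Repository repo = new Repository(conn);
        return new Service(repo);
    }
}
```

With only two levels the hand-wiring is tolerable; as the graph deepens, each extra level means either another factory or more assembly code in every caller, which is the cumbersomeness Nate describes.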
I generally have to agree with Nate, but I would suggest using dependency injection sparingly. You don’t have to (nor should you) describe every dependency between types in a central mapping file. Although your dependencies would then be described in one place and could be changed at deployment time, the remaining code becomes very generic and hard to understand: the inner workings, which are injected at runtime, are missing from it. Some dependencies are implementation details and should remain within your code. Others sit at an “upper” level and are good candidates for DI.
Posted by Hartmut Wilms at 29.08.07 10:28