Posts in "Architecture"

Special case pattern, primitive obsession and accidental design

I define accidental design as design that’s present in a software system but was not consciously chosen. Instead, it’s driven by other factors, such as tooling, co-workers, past experience, existing code and a whole lot of other things. Two of the most common cases I encounter have to do with the special case pattern and primitive obsession.

For this blog post I’ll use examples from the Payroll Case Study.

Special case pattern

The special case pattern is all about null checks. Everybody’s written code like this:
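Something along these lines, say (a sketch; the repository and method names are assumed):

```csharp
var employee = _employees.GetEmployee(employeeId);
if (employee == null) {
    return;
}
employee.Pay(payday);
```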

The problem with this code is that I have no clue about the semantic meaning of the if statement. It could be that an Employee with the specified Id wasn’t found, or it could mean that it was found but just isn’t active anymore, or both. The special case pattern is a way to make your intent more explicit:
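A sketch of how that check could read, with a special case instance on Employee (names assumed):

```csharp
var employee = _employees.GetEmployee(employeeId);
if (employee == Employee.NotFound) {
    return; // explicitly: no such employee
}
employee.Pay(payday);
```

Here Employee.NotFound is a well-known instance (or subclass) representing the “not found” case, so the intent is in the code rather than in a null.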

It’s clearer what we’re checking here: whether the employee was found or not. Personally, I prefer to decouple the Employee interface from having to know about NotFound states and such, so I’d have GetEmployee return a GetEmployeeResult:
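A sketch of such a result type (the exact shape is assumed):

```csharp
public class GetEmployeeResult {
    public bool WasFound { get; private set; }
    public Employee Employee { get; private set; }

    public static GetEmployeeResult Found(Employee employee) {
        return new GetEmployeeResult { WasFound = true, Employee = employee };
    }

    public static GetEmployeeResult NotFound() {
        return new GetEmployeeResult { WasFound = false };
    }
}
```

This way Employee itself stays oblivious to lookup concerns, and callers are forced to confront the not-found case.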

Anyway, what does this have to do with accidental design? Well, the entire reason we can check for null is that we’ve defined Employee as a reference type. That means a variable of that type can actually be null. Had it been a struct, or had the language not supported null for reference types, we would certainly have designed it in a different, probably more explicit, way. For example, let’s say we have a method printing the maximum value of a set of ints, or “Not available” if the set is empty. I’m pretty sure nobody would ever implement that as follows:

if (maxValue == 0) {
    return "Not available";
}
return maxValue.ToString();

Instead, we would probably design something to handle the special case of an empty set explicitly, be it with an IntResult, a TryGetMax method or a nullable return value. The latter might seem contradictory, but the point is that our code then makes explicit that we will not always be able to get a maximum value from a set of ints. In that case we have actually thought about representing the real range of results, whereas by just defaulting to returning null whenever an edge case occurs, we haven’t. The latter situation is accidental design: we did make a design decision, but not consciously.
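For example, a nullable return makes the empty-set case explicit in the signature (a sketch):

```csharp
public static int? Max(IReadOnlyCollection<int> values) {
    if (values.Count == 0) {
        return null; // explicitly: no maximum available
    }
    return values.Max(); // System.Linq
}

// at the call site, the compiler forces us to handle the empty case
var max = Max(numbers);
return max.HasValue ? max.Value.ToString() : "Not available";
```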

Primitive obsession

Primitive obsession is the tendency to express most of your code in terms of language primitives such as int, float, double, decimal, string, DateTime, TimeSpan, etc. There are three common sources of problems with this approach, nicely illustrated by the following code from the payroll case study:
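Think of a transaction along these lines (the signature is paraphrased from the case study; parameter names are assumed):

```csharp
public void AddCommissionedEmployee(
    int employeeId,
    string name,
    string address,
    decimal salary,
    decimal commissionRate) {
    // ...
}
```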

How often have you introduced bugs because you mixed up the order of the parameters, accidentally swapping the salary and the commission rate? I know I have. Also, what is the commission rate expressed in: a percentage or a fraction? And what if we want to change the representation of the commission rate to an int? We’ll have to track it down and change it everywhere.

The underlying problem is that we’re working at the wrong level of abstraction for this problem: ints, decimals, strings, etc. are at the level of abstraction of the language/platform, but we’re working at the level of the business here. Things would be much better if we changed it to something like this:
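A sketch of the wrapped version (all type names are assumed):

```csharp
public void AddCommissionedEmployee(
    EmployeeId employeeId,
    PersonName name,
    Address address,
    Salary salary,
    CommissionRate commissionRate) {
    // ...
}

public class CommissionRate {
    private readonly decimal _fraction;

    private CommissionRate(decimal fraction) {
        _fraction = fraction;
    }

    // an explicit factory method documents the unit
    public static CommissionRate FromPercentage(decimal percentage) {
        return new CommissionRate(percentage / 100m);
    }
}
```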

This way, type safety prevents us from mixing up the parameters, we can create nice explicit factory methods on our Salary and CommissionRate classes, and changes in data representation stay local to those objects. Besides that, weren’t we told in high school that a number without a unit is meaningless?

Again, this is accidental design at work: our environment provides us with these primitive types, so we tend to use them. These primitive types, however, were not designed by you. Had they been different, your design would’ve been very different. How much of that are you comfortable with?

One more example of this is the lack of a Date type in .NET. Because of this we tend to represent all dates as DateTimes. This has caused me headaches in more than one project, and could have been mitigated had we created a Date type ourselves (it’s even worse when you deal with Unix timestamps).
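Such a Date type doesn’t have to be much more than a sketch like this (factory method name assumed):

```csharp
public struct Date {
    private readonly DateTime _value;

    private Date(DateTime value) {
        _value = value.Date; // strip the time component once, here
    }

    public static Date FromDayMonthYear(int day, int month, int year) {
        return new Date(new DateTime(year, month, day));
    }
}
```

With this in place, “date plus accidental time-of-day” bugs are ruled out by the type system instead of by discipline.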


Accidental design might sound like a bad thing in general, but to be honest I think a lot of design “decisions” are actually made this way. If they weren’t, we’d probably never ship. Still, I think it’s good practice to be aware of as many of them as you can, and to know your options when you actually do need to make something more explicit. Other areas where I see a lot of accidental design (or architecture) are the choices of languages, frameworks and database technology. I might write about those in a later post.

With regard to the examples: in the case of the GetEmployee method, it might actually be a pretty good convention to return null when the specified Employee couldn’t be found. But by being a convention, it has become a conscious decision. Also, an argument I’ve heard against wrapping every primitive is that it introduces needless complexity and reduces the overview of your project due to the proliferation of classes. Those are sound arguments, but in my experience the benefits outweigh them.

CQRS without ES and Eventual Consistency

Command Query Responsibility Segregation (CQRS) is one of the concepts that profoundly changed the way I develop software. Unfortunately, most of the information available online conflates CQRS with event sourcing (ES) and asynchronous read-model population (introducing eventual consistency). While those two techniques can be very useful, they are orthogonal concerns to CQRS. In this post I’ll focus on the essence of CQRS, and why it helps create simpler, more maintainable software.

A little history

In 2012 I was working on a project that had some fairly involved business logic, so we tried applying DDD to control the complexity. All started out fairly well, and we actually created a pretty rich, well-factored model that handled all the incoming transactions in a way that reflected how the business actually thought about solving the problem.

There was just one problem with our solution: the (entity) classes in our domain model were littered with public get methods. Now, those getters weren’t used to make decisions in other objects (which would break encapsulation), but were only there for display purposes. So whenever I needed to display an entity on a page, I would fetch the entity from a repository and use the model’s get methods to populate the page with data (maybe via some DTO/ViewModel mapping ceremony).

While I think getters on entities are a smell in general, if the team has enough discipline not to use the getters to make any business decisions, this could actually be workable. There were, however, bigger problems. In a lot of scenarios we needed to display information from different aggregates on one page: let’s say we have Customer and Order aggregates and want to display a screen with information about customers and their orders. We were now forced to include an Orders property on the Customer, like this:
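The class ended up looking roughly like this (a sketch; property names assumed):

```csharp
public class Customer {
    public int Id { get; private set; }
    public string Name { get; private set; }

    // only here so the UI can show a customer's orders
    public ICollection<Order> Orders { get; set; }
}
```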

From then on, whenever we retrieved a Customer entity from the database, we needed to decide whether to also populate the Orders property. You can imagine that things get more complex as you increase the number of ‘joins’, or when we just needed a subset of information, like only the order count for a given customer. Rather quickly, our carefully crafted model, made for processing transactions, got hidden behind innumerable properties and the supporting code for populating them when necessary. Do I even need to mention what this meant for the adaptability of our model?

The worst thing was, I had actually seen this problem grind other projects to a halt before, but I hadn’t acted on it back then, and I almost didn’t this time. I thought it was just the way things were done™. But this time I wasn’t satisfied, so I did a little googling to see if other people were experiencing the same problem.

Enter CQRS

I ended up finding a thing or two about a concept called Command Query Responsibility Segregation, or CQRS. The basic premise of CQRS is that you shouldn’t use the code that does the reading to also do the writing. The idea is that a model optimized for handling transactions (writing) cannot simultaneously be good at handling reads, and vice versa. This is true for both performance AND code structure. Instead, we should treat reading and writing as separate responsibilities, and reflect that in the way we design our solution:


By applying this separation we can design both of them in a way that’s optimal for fulfilling their specific responsibility.

In most of the domains I’ve worked in, the complexity lies in the transaction-handling part of our system: there are complex business rules that determine the outcome of a given command. Those rules benefit from a rich domain model, so we should choose that for our write side.

On the read side, things in these systems are much simpler and usually just involve gathering some data and assembling it into a view. For that, we don’t need a full-fledged domain model; we can probably map our database directly to DTOs/ViewModels and send them to the view/client. We can easily accommodate new views by just introducing new queries and mappings, and we don’t need to change anything on the command-processing side of things.
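A read-side query can then be as simple as this sketch (all names, and the EF-style context, are assumed):

```csharp
public class CustomerOverviewQuery {
    private readonly ShopDbContext _context; // hypothetical data context

    public IList<CustomerOverviewDto> Execute() {
        // straight from the data model into a view-specific DTO,
        // no domain model involved
        return _context.Customers
            .Select(c => new CustomerOverviewDto {
                Name = c.Name,
                OrderCount = c.Orders.Count()
            })
            .ToList();
    }
}
```

Adding a new screen means adding a new query class like this one, and nothing on the write side has to change.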

I’ve recently worked on a project that actually had all the complexity on the read side, by the way. In that case the structure is actually inverted. I’ll try to blog about that soon.

Note that in the above description, the data model is shared between the read and write model. This is not a strict necessity and, in fact, CQRS naturally allows you to also create separate data models. You would typically do that for performance reasons, which is not the focus of this post.

But what if I change the data model!?

Obviously, in the above description both the read and write model are tightly coupled to the data model, so if we change something in the data model we’ll need changes on both sides. This is true, but in my experience it has not been a big problem yet. I think that’s due to a couple of reasons:

  • Most changes are actually additions on the write side. This usually means reading code keeps working as it is.
  • There is conceptual coupling between the read and write side anyway. In a lot of cases, if we add something on the write side, it’s of interest to the user, and we need to display it somewhere anyway.

The above isn’t true if you’re refactoring your data model for technical reasons. In this case, it might make sense to create a separate data model for reading and writing.


For me, the idea of CQRS was mind-blowing, since everyone I had ever worked with, talked with or read a book from used the same model for reading and writing. I had simply never considered splitting the two. Now that I do, it helps me create better-focused models on both sides, and I consider my designs clearer and thereby more maintainable and adaptable.

The value in CQRS for me is mostly in the fact that I can use it to create cleaner software. It’s one of the examples where you can see that the Single Responsibility Principle really matters. I know a lot of people also use it for more technical reasons, specifically that things can be made more performant, but for me that’s just a happy coincidence.


Greg Young first coined the term CQRS, and since I learned a lot about CQRS from his work, it would be strange not to acknowledge that in this post. The same goes for Udi Dahan. So if you’re reading this and would like to know more (probably better explained, too), please check out their blogs.

Dependency Inversion with Entity Framework in .NET

I wrote about the dependency inversion principle (DIP) a couple of weeks ago and got some questions about the practical implementation issues when applying it in a .NET environment, specifically how to apply it when using Entity Framework (EF). We came up with three solutions, which I’ll detail in this post.

Note that even though this post is written for EF, you can probably easily extrapolate these strategies to other data access tools such as NHibernate, Dapper or plain old ADO.NET as well.

Recap: Dependency Inversion for data access

Our goal is to have our data access layer depend on our business/domain layer, like this:


When trying to implement this with EF, you’ll run into an interesting issue: where do we put our entity classes (the classes representing the tables)? Both the business layer and data access layer are possible, and there are trade-offs to both. We’ve identified three solutions that we find viable in certain circumstances, and that’s exactly the topic of this post.

Strategy 1: Entities in DAL implementing entity interface

For the sake of this post we’ll use a simple application that handles information about Users. The Users are characterized by an Id, Name and Age. The business layer defines a UserDataAccess that allows the ApplicationService classes to fetch a user by its id or name, and allows us to persist the user state to disk.

Our first strategy is to keep the definition of our entity classes in the DAL:


Note that we need to introduce an IUser interface in the business layer, since the dependency runs from the DAL to the business layer. In the business layer we then implement the business logic by working with the IUser instances received from the DAL. The introduction of IUser means we can’t have business logic on the User instances themselves, so logic moves into the ApplicationService classes, for example validating a username:

public class UserApplicationService {
    public void ChangeUsername(int userId, string newName) {
        AssertValidUsername(newName);

        var user = _dataAccess.GetUserById(userId);
        user.Name = newName;
        _dataAccess.Save();
    }

    public int AddUser(string withUsername, int withAge) {
        AssertValidUsername(withUsername);

        var user = _dataAccess.New();
        user.Age = withAge;
        user.Name = withUsername;
        _dataAccess.Save();

        return user.Id;
    }

    private static void AssertValidUsername(string withUsername) {
        if (string.IsNullOrWhiteSpace(withUsername)) {
            throw new ArgumentNullException("name");
        }
    }
}
Note that this is pretty fragile, since we need to remember to run this check every time we add functionality that creates or updates a username.

One of the advantages of this style is that we can use annotations to configure our EF mappings:

class User : IUser {
    [Key]
    public int Id { get; set; }

    [Index(IsUnique = true)]
    public string Name { get; set; }
    public int Age { get; set; }
}

A peculiarity of this solution is how to handle adding users to the system. Since there is no concrete implementation of the IUser in the business layer, we need to ask our data access implementation for a new instance:

public class UserApplicationService {
    public int AddUser(string withUsername, int withAge) {
        var user = _dataAccess.New();
        user.Age = withAge;
        user.Name = withUsername;
        _dataAccess.Save();

        return user.Id;
    }
}

public class PersistentUserDataAccess : UserDataAccess {
    public IUser New() {
        var user = new User();
        _context.Users.Add(user);

        return user;
    }
}

The addition of the new User to the context, together with the fact that Save doesn’t take any arguments, is a clear manifestation that we expect our data access implementation to implement the Unit of Work pattern, which EF does.
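Taken together, the UserDataAccess contract the business layer defines for this strategy amounts to something like this sketch (method names assumed):

```csharp
public interface UserDataAccess {
    IUser GetUserById(int id);
    IUser GetUserByName(string name);
    IUser New();
    void Save();
}
```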

It should be clear that this style maintains pretty tight coupling between the database and the object representation: our domain objects will almost always map 1-to-1 to our tables, both in names and in types. This works particularly well in situations where there isn’t much (complex) business logic to begin with, and we’re not pursuing a particularly rich domain model. However, as we’ve already seen, even with something as simple as this we can run into duplication issues with the usernames. I think in general this style will lead you to a more procedural style of programming.

The full source to this example is available in the DIP.Interface.* projects on GitHub.

Strategy 2: Define entities in the business layer

Our second strategy defines the User entities directly in the business layer:


The current version of EF (6.1) handles this structure without problems, but older versions or other ORMs might not be so lenient and require the entities to be in the same assembly as the DbContext.

An advantage of this strategy is that we can now add some basic logic to our User objects, such as the validation we had earlier:

public class User {
    private string _name;

    public string Name {
        get {
            return _name;
        }
        set {
            if (string.IsNullOrWhiteSpace(value)) {
                throw new ArgumentNullException("name");
            }
            _name = value;
        }
    }
}

We can no longer use annotations to configure our mapping, so we have to configure it through the EF fluent API to achieve the same result:

class UserDbContext : DbContext {
    public DbSet<User> Users { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder) {
        modelBuilder.Entity<User>()
            .Property(u => u.Name)
            .HasColumnAnnotation(
                IndexAnnotation.AnnotationName,
                new IndexAnnotation(
                    new IndexAttribute() {
                        IsUnique = true
                    }));
    }
}
I think this is probably a bit less clean and discoverable than the annotated version, but it’s still workable.

The method of adding new Users to the system also changes slightly: we can now instantiate User objects within the business layer, but need to explicitly add them to the unit of work by invoking the AddNewUser method:

public class UserApplicationService {
    public int AddUser(string withUsername, int withAge) {
        var user = new User {
            Age = withAge,
            Name = withUsername
        };
        _dataAccess.AddNewUser(user);
        _dataAccess.Save();

        return user.Id;
    }
}

public class PersistentUserDataAccess : UserDataAccess {
    public void AddNewUser(User user) {
        _context.Users.Add(user);
    }
}

Just as with strategy 1, there is still a large amount of coupling between our table and object structure, with the same implications for programming style. We did, however, manage to remove the IUser type from the solution. In situations where I don’t need a rich domain model, I favor this strategy over the previous one, since I think the User/IUser construct is just a little weird.

The full source to this example is available in the DIP.EntitiesInDomain.* projects on GitHub.

Strategy 3: Map the entities to proper domain objects

In this case we decouple our DAL and business layer representation of the User completely by introducing a Data Mapper:


This is the most flexible solution of the three: we’ve completely isolated our EF entities from our domain objects. This allows us to have properly encapsulated, behavior-rich business objects, which have no design compromises due to data-storage considerations. It is, of course, also more complex, so use it only when you actually need the aforementioned qualities.

Validation of the username is now done in the Username factory method, and is not a responsibility of the User object at all anymore:

public struct Username {
    private readonly string _name;

    private Username(string name) {
        _name = name;
    }

    public static Username FromString(string username) {
        if (string.IsNullOrWhiteSpace(username)) {
            throw new ArgumentNullException("name");
        }
        return new Username(username);
    }
}

public class User {
    public Username Name { get; private set; }
    public UserId Id { get; private set; }
    public Age Age { get; private set; }
}

Using rich domain objects like this is extremely hard (if at all possible) with the other strategies, but can also be extremely valuable when designing business-rule heavy applications (instead of mostly CRUD).

Actually persisting Users to disk is also completely different with this strategy, since we can’t leverage EF’s change tracking/unit of work mechanisms. We need to explicitly tell our UserDataAccess which object to save and map that to a new or existing UserRecord:

public class UserApplicationService {
    public void ChangeUsername(UserId userId, Username newName) {
        var user = _dataAccess.GetUserById(userId);
        user.ChangeName(newName); // behavior lives on the domain object

        _dataAccess.Save(user);
    }
}

public class PersistentUserDataAccess : UserDataAccess {
    public void Save(User user) {
        if (user.IsNew) {
            SaveNewUser(user);
        } else {
            SaveExistingUser(user);
        }
    }

    private void SaveExistingUser(User user) {
        var userRecord = _context.Users.Find(user.Id.AsInt());
        // copy the domain object's state onto the record (mapping elided)
        _context.SaveChanges();
    }

    private void SaveNewUser(User user) {
        var userRecord = new UserRecord();
        // copy the domain object's state onto the record (mapping elided)
        _context.Users.Add(userRecord);
        _context.SaveChanges();
    }
}

In general, this style will be more work but the gained flexibility can definitely outweigh that.

The full source to this example is available in the DIP.DataMapper.* projects on GitHub.


In this post we explored three strategies for applying the dependency inversion principle in .NET applications using Entity Framework. There are probably other strategies (or mixtures of the above, such as wrapping EF entities as state objects in DDD), and I would be happy to hear about them.

I think the most important axis to evaluate each strategy on is the amount of coupling between the database schema and the object structure. High coupling results in an easier, quicker-to-understand design, but we won’t be able to design a really rich domain model around it, and it leads to a more procedural style of programming. The third strategy provides a lot of flexibility and does allow for a rich model, but might be harder to understand at first glance. As always, the strategy to pick depends on the kind of application you’re developing: if you’re doing CRUDy work, perhaps with a little bit of transaction script, use one of the first two. If you’re doing business-rule-heavy domain modelling, go for the third.

All code (including tests) for these strategies is available on GitHub.

Improving refactoring opportunities by reducing the amount of public contracts

Last week I had an interesting conversation with some of my BuildStuff friends about every developer’s favorite topic: documentation. The specific problem was how to keep documentation in sync with the code. This resulted in João referring to the docs of a project he’s working on, logstash. In logstash, most (all?) of the documentation is generated from code, and they have success with that method. Still, I think everybody knows of situations where your xmldocs, phpdocs or just plain comments diverge from the actual code they describe. This will obviously cause problems when you generate docs off of it. So why does it work for logstash?

Well, we came up with the notion that what you actually can document is code/APIs/whatever that are publicly available. The reason is that this stuff is already being used by other people, and therefore you shouldn’t be breaking it anyway. When you change something, you need to make sure that other people’s stuff keeps working. In these scenarios, documentation will not get out of sync, since the code being documented simply cannot change.

In the case of logstash, they better make sure they are backward compatible with older versions (at least within a major version), otherwise nobody would ever upgrade or even use it; it would be too much of a hassle reconfiguring on every release. For that reason I think it actually is viable to generate documentation for things like configuration files: they shouldn’t change in a breaking way anyway.


So what does this have to do with refactoring? Well, basically, everything. It turns out that the only code you can refactor is code that nobody else directly depends on, that is, code that is non-public. The key is the word public: as soon as a piece of code becomes public you introduce a contract with the (potential) users of this code. It doesn’t really matter whether you expose it via a class library, API or even a user interface: as long as it’s publicly exposed, you have a contract with its users. That also means that, from then on, you won’t be able to refactor the interface of whatever you’ve exposed.

One example I really like to use to illustrate this point is the bowling game example from Uncle Bob’s Agile Software Development: Principles, Patterns, and Practices. In this example, Uncle Bob implements a game of bowling doing TDD and pair programming, and in the course of implementing it, he lays out the pair’s thinking patterns. I’m not going to recount the entire story here, but I’d like to focus on one of the acceptance tests (see the book for the full code):

public void test_example_game() {
    int[] throws =  new int[] { 1,4,4,5,6,4,5,5,10,0,1,7,3,6,4,10,2,8,6 };

    var game = new Game();

    foreach(var throwScore in throws) {
        game.Add(throwScore);
    }

    Assert.AreEqual(133, game.Score());
}

Their goal is to make this test pass, and they proceed by thinking about (actually exploring) a couple of ways of designing a solution. They start with a solution that uses Frame and Throw classes (nouns from the domain of bowling) next to the Game class, but can’t really find a way to make that work nicely, so they decide not to use those classes. The final solution they presented was very simple and used only two classes. What’s important here is that from the point of view of a consumer, it doesn’t really matter how it’s implemented. The consumer, as defined by the acceptance tests, only cares about the Game class. The way you implement it doesn’t really matter, as long as you adhere to the contract specified by the acceptance test.

No problems so far, but what I see happening a lot is that when someone comes up with an implementation that does use a Frame and/or Throw class, those classes are also made publicly available. This happens, for example, when you reference them in a unit test driving the development. Doing that, however, implicitly introduces a contract you now need to maintain. And this contract has nothing to do with the behavior of the system, only with this specific implementation. As a side effect, you’ve now significantly reduced the opportunity to refactor, because the implementation that doesn’t use the Throw and Frame classes is no longer available as a solution. Instead, you now need to maintain three classes and their methods (at least four, I guess), instead of just the one class with two methods. In a real project, this build-up of contracts becomes a burden really fast, and it can completely kill the adaptability of your application, because you have to respect all the existing contracts.

Testing & mocks

All this stuff is also very tightly related to testing with mocks. I’ve personally stepped into the trap of testing every interaction between collaborating objects more than once. The canonical way of testing those interactions is using mocks and substituting one of the collaborators by a mock of its interface. The problem with this strategy is that every class and interface used to implement some sort of behavior must become public:


On the left, we have 6 public interfaces, while on the right we have just 1. This means maintaining 6 contracts on the left, against only 1 on the right. In my experience, maintaining one contract is just easier than six. DHH, Kent Beck and Martin Fowler talked about tests being coupled to implementation in their “Is TDD Dead?” sessions, and this is exactly that. By mocking every interaction, you completely couple your tests to the implementation, leaving no room for refactoring. You experience the binding of this contract directly when refactoring the code forces you to also refactor the tests. That’s not how it should work.

Also, from an integrated product point of view, you need to set up elaborate IoC container configurations to get your app running the right way, and it’s the client that makes those decisions. All this just doesn’t feel like information hiding to me.

So, to be honest, in the classicist vs. mockist discussion, I really think you should lean strongly toward the classicist side. I do use mocks, but only for things that involve the external world, or are otherwise slow, such as a mail server, database or web service. This is something, however, that I continuously re-evaluate (can I run an in-process mail server/database, for example?).


I call the contracts of my system its Public Surface Area. For the reasons described above, I always try to optimize for an as-small-as-possible public surface area (as few contracts as possible). By reducing the surface area, I have fewer contracts to maintain, which gives me more opportunities to refactor. In general, reducing surface area gives me a system that is more maintainable and testable and therefore has a better design.

BTW, note that there can be multiple contracts in the same system calling upon each other. For example: a UI layer may call into an application layer. Since UI testing sucks, we’d rather test the application layer independently, thereby introducing new contracts besides the ones implicit in the UI. If UI testing were easy, I might not want the additional burden of maintaining a separate set of contracts for the application layer.

Dependency Injection: are you doing it wrong?

A lot of people apply dependency injection (DI) in their software designs. Like anything, DI has its proponents and opponents, but I believe that given the right context it can actually help your software design by making dependencies more explicit and better testable.

A lot of DI implementations I encounter have a problem though: the actual dependency runs in the wrong direction. In the case of business software, for example, this would be the business layer depending on the data layer instead of the other way around. This can cause some pretty significant problems downstream.

In this post I’ll review the basics of dependency injection, dependency inversion and the problems that occur when the dependency is running the wrong way.

Injecting data access code in business apps

One of the contexts in which I’ve had success applying DI is data access in business software. In order to keep our business rule tests running fast enough, we need an in-memory data source instead of a disk-based one, since disks are usually (still) not fast enough. You usually end up with something like this:

interface DataAccess {
    Record GetRecordById(int id);
    void Save(Record record);
}

class Application {
    DataAccess _access;

    public Application(DataAccess access) {
        _access = access;
    }

    public void UpdateRecordName(int id, string newName) {
        var rec = _access.GetRecordById(id);
        rec.Name = newName;
        _access.Save(rec);
    }
}
In our tests, we then initialize Application with some kind of InMemoryDataAccess implementation, and in our production system we use a PersistentDataAccess. All pretty standard stuff, and something I see happening in one form or another in most code bases I encounter.
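The in-memory implementation for the tests can be as simple as this sketch (an Id property on Record is assumed):

```csharp
class InMemoryDataAccess : DataAccess {
    private readonly Dictionary<int, Record> _records =
        new Dictionary<int, Record>();

    public Record GetRecordById(int id) {
        return _records[id];
    }

    public void Save(Record record) {
        _records[record.Id] = record;
    }
}
```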

Dependency Inversion

Dependency Injection is closely related to the dependency inversion principle (the D of SOLID). This principle states two things:

  1. High-level modules should not depend on low-level modules. Both should depend on abstractions.
  2. Abstractions should not depend on details. Details should depend on abstractions.

Now, this is a principle I don’t see widely applied. Instead, I usually see something like this:


Here, we have the business layer depending on the data access code in the data access layer. When using Entity Framework code first on .NET, for example, we would have PersistentDataAccess being the class deriving from DbContext (maybe MyDomainDbContext) and from that we would extract an interface like IMyDomainDbContext representing the DataAccess interface.
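A minimal sketch of that extraction (the Customer entity is assumed):

```csharp
// both the abstraction and the implementation live in the data layer
public interface IMyDomainDbContext {
    IDbSet<Customer> Customers { get; }
    int SaveChanges();
}

public class MyDomainDbContext : DbContext, IMyDomainDbContext {
    public IDbSet<Customer> Customers { get; set; }
}
```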

This is a clear violation of the dependency inversion principle: in this case the high-level module is the module where the business logic lives (i.e. the business layer) and the low-level module is the module containing the data access code (i.e. the data layer). Adhering to the principle means that our business logic layer should not depend on our data access layer, which it does.

How does this cause us problems?

1. The data layer cannot access our domain classes

Since the business layer depends on the data layer, the data layer cannot reference types from the business layer without creating a circular dependency between the two modules. While some environments actually allow such cycles (and some don’t), it’s generally not considered healthy to have (a lot of) dependency cycles across modules.

But generally, I do want to be able to create and use types from my business layer in the data layer. For example, in DDD I want to return Domain Entities or Value objects from my DAL. Since those classes are defined in the domain layer, it is impossible to access those types in the DAL. So instead we end up with something like this:


As you can see, in general, I won’t be able to use any custom business types in the DAL, not even the simplest of value objects. This means all the mapping has to happen in the business layer, which in turn means the business layer has to change whenever the mapping does.
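For example, if the business layer defines an EmployeeId value object, the DAL cannot return it, so it hands back primitives and the business layer has to do the mapping itself. A sketch, with illustrative types (including an assumed Employee constructor):

```csharp
// Business layer: a value object the DAL cannot reference.
public class EmployeeId {
    public int Value { get; }
    public EmployeeId(int value) { Value = value; }
}

// Data layer: forced to expose primitives instead of domain types.
public class DalRecord {
    public int EmployeeId { get; set; }
    public string Name { get; set; }
}

// Business layer: the mapping clutter ends up here.
public class RecordMapper {
    public Employee ToDomain(DalRecord record) {
        return new Employee(new EmployeeId(record.EmployeeId), record.Name);
    }
}
```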

By reversing the dependency, we can move all the mapping code to the DAL, keeping our business layer clutter-free and allowing the data layer to change independently:


This is the heart of the dependency inversion principle. Note that we moved the definition of the DataAccess interface from the DAL to the business layer. This is another key idea: we let consumers define the interface, and leave it up to others to conform to it.
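Sketched in code, with the interface now owned by the business layer and the DAL conforming to it (names are again illustrative):

```csharp
// Business layer: defines the abstraction it consumes; no reference to the DAL.
namespace MyApp.Business {
    public interface DataAccess {
        Employee GetEmployeeById(EmployeeId id);
    }
}

// Data access layer: references the business layer and implements its interface.
// All ORM-specific mapping stays behind this boundary.
namespace MyApp.DataAccess {
    public class PersistentDataAccess : MyApp.Business.DataAccess {
        public MyApp.Business.Employee GetEmployeeById(MyApp.Business.EmployeeId id) {
            // query the ORM for an EntityRecord, then map it to an Employee here
            throw new System.NotImplementedException();
        }
    }
}
```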

2. It’s hard to keep your business layer clean

As you can see in the previous section, when the dependency points the wrong way, our business layer gets polluted with all kinds of stuff that doesn’t really belong there, like the mapping code above. This makes the business layer harder to reason about, since we can no longer do so in isolation. That hurts productivity, and new entrants to your code base need more time to get going.

3. All implementations depend on our data layer

Let’s say we’re writing tests for our business layer. We’ll end up with something like this:


In this diagram we see that our Tests package depends on the DAL, and since dependencies are transitive, our Tests package therefore depends on everything the DAL depends on. In practice this means our Tests package ends up depending on the implementation chosen in the DAL. So if our production implementation of DataAccess is backed by something like Entity Framework, we also need to take that dependency into our Tests package. Since we’re probably not using anything from Entity Framework in our tests, being forced to take this dependency doesn’t feel right.

It’s even worse if we’d like to switch to a completely different DAL implementation (which, agreed, happens less often than we’re led to believe). But let’s say we’re writing a new DAL based on NHibernate instead of the current Entity Framework one. Changing ORMs is something I have actually seen happen. In this case our NHibernate implementation would end up with a dependency on Entity Framework. It’s pretty obvious that this just cannot be right.

As before, we can solve this problem by moving DataAccess to the business layer:


Here we’ve dropped the dependency, and the design is much simpler than the one before.
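With the dependency inverted, a test can exercise the business layer against a purely in-memory DataAccess and never reference the ORM. Assuming some InMemoryDataAccess test double and NUnit-style assertions (both illustrative):

```csharp
[Test]
public void UpdateRecordName_ChangesTheName() {
    // InMemoryDataAccess is a test double; no ORM assembly is referenced.
    var dataAccess = new InMemoryDataAccess();
    dataAccess.Save(new Record { Id = 1, Name = "old" });

    var app = new Application(dataAccess);
    app.UpdateRecordName(1, "new");

    Assert.AreEqual("new", dataAccess.GetRecordById(1).Name);
}
```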

4. The data layer can force changes to the business layer

To see how this is possible, let’s again consider this situation:


If we now change a mapped property of our EntityRecord class, this will break our EntityMapper, which lives in the business layer. This is obviously something we shouldn’t want, since an implementation detail of a low-level component (data access) can now break, and therefore force a change to, a high-level component (business). In general, we just don’t want something as low-level as data access changing the most important part of our software, that is, the business.
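Concretely, suppose the data layer renames a mapped property (all names are illustrative):

```csharp
// Data layer: a property is renamed, say to match a schema change.
public class EntityRecord {
    public string FullName { get; set; } // was: Name
}

// Business layer: this mapper no longer compiles, so a low-level detail
// has just forced a change in the high-level module.
public class EntityMapper {
    public Employee ToDomain(EntityRecord record) {
        return new Employee(record.Name); // error: 'Name' no longer exists
    }
}
```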


In this post I’ve shown some problems that can occur when you’re applying dependency injection without also taking dependency inversion into account. In my experience these problems only get bigger the further you get into a project.

Therefore, next time you’re using dependency injection, please also consider the direction of your dependencies and put them in a direction where low-level components depend on the high-level ones. It will save you a lot of trouble later on.

Disclaimer: In this post I mostly focused on the situation of business apps, which I’m most familiar with. I am, however, pretty sure these principles apply in other contexts as well.