Posts in ".NET"

.NET is slowing me down

.NET has been my primary development environment for a little over 5 years now. I’ve always really liked it, had a lot of success with it, and learned a lot while using it. The tooling and maturity of the platform is, and has been, right where it needs to be for me. On a lot of projects it allowed me to really focus on the domain, and I seldom had to write custom tooling for standard tasks, which I did have to do on other platforms. This allowed me to deliver a lot of value to my clients, and they’re happy about that.

There is, however, a problem that’s growing bigger and bigger with .NET: it’s getting slow to develop on. That doesn’t mean .NET itself is getting slow; it means the developer experience is getting slower. To illustrate the point, I’ve measured the time to the very first byte (that is, the time to the first response after a rebuild) for the template application of various MVC versions; a rough sketch of one way to take such a measurement follows the table:

.NET Version   MVC Version   First available date   Time to very first byte (seconds)
4.0            2             March 2010             1.00
4.0            3             January 2011           1.12
4.5.2          3             May 2014               1.45
4.0            4             August 2012            2.63
4.5.2          4             May 2014               2.89
4.5.2          5.2.3         January 2015           3.47
4.6            5.2.3         July 2015              3.58
4.6            6.0.0-beta5   July 2015              1.89
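
For reference, here’s a rough sketch of one way to take such a measurement (not necessarily my exact setup; the URL and port are placeholders): rebuild, start the site, and time the first request until its first response byte arrives.

using System;
using System.Diagnostics;
using System.Net;

class TimeToVeryFirstByte {
    static void Main() {
        // Placeholder URL: point this at the freshly rebuilt template application.
        var url = "http://localhost:12345/";

        var stopwatch = Stopwatch.StartNew();

        var request = (HttpWebRequest)WebRequest.Create(url);
        using(var response = (HttpWebResponse)request.GetResponse())
        using(var stream = response.GetResponseStream()) {
            stream.ReadByte();   // block until the very first byte of the response arrives
        }

        stopwatch.Stop();
        Console.WriteLine("Time to very first byte: {0:0.00}s", stopwatch.Elapsed.TotalSeconds);
    }
}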

So, over the course of 5 years the time to load the first page has increased by roughly a factor of 3.5, or about 2.5 seconds in absolute terms. ASP.NET 5 looks like it will bring the times down a bit, but still not to the 2010 level.

To make matters worse, something like Entity Framework is slowing down at a similar rate, and hitting a page that goes to the database can easily take somewhere between 5 and 10 seconds. The same goes for tests: running the first one easily takes a couple of seconds due to EF alone.

Environmental Viscosity

So, what’s the problem? Environmental viscosity. To quote Uncle Bob from PPP:

Viscosity of the environment comes about when the development environment is slow and inefficient. For example, if compile times are very long, developers will be tempted to make changes that don’t force large recompiles, even though those changes don’t preserve the design.

This is exactly what’s going on here. Because load times are slow, I tend to:

  • Make bigger changes before reloading
  • Write fewer tests
  • Write tests that test larger portions of functionality
  • Implement back-end code in the front-end (HTML/JavaScript)
  • Visit reddit while the page loads

All these things are undesirable. They slow me down and compromise the quality of the software. If you’ve ever worked with “enterprise” CMS software, you’ve seen this taken to the extreme (I sure have): there can be minutes between making a change and the page actually loading.

Even if you don’t do any of the above and slavishly wait for the page to load or the test to run every time, you’re still wasting your time, which isn’t good. It might not seem like a big deal, but imagine making 500 changes every day: that translates to 500 × 5 s = 2,500 seconds of waiting. That’s more than 40 minutes of waiting, every day.

Architecture

To reiterate: slow feedback compromises software quality. What I want, therefore, is feedback on my changes in under a second, preferably under 500 ms. This requirement will definitely factor into my choice of technology and tools, and it will be a strong factor.

For example, my choice for data access defaults to Dapper these days, because it’s just much faster than EF (to be fair, I also rely less on “advanced” mappings). Even something like PHP, for all its faults, tends to have a time to very first byte that’s an order of magnitude faster than that of .NET apps, and is therefore something I might consider when .NET’s other qualities aren’t that important.
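
To give an idea, here’s a minimal Dapper sketch (illustrative only; the table and column names are made up): plain SQL mapped straight onto a POCO, with no context or change tracker to warm up.

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class Invoice {
    public int Id { get; set; }
    public decimal Amount { get; set; }
}

public static class InvoiceQueries {
    public static List<Invoice> GetForCustomer(string connectionString, int customerId) {
        using(var connection = new SqlConnection(connectionString)) {
            // Dapper maps the result columns onto Invoice properties by name.
            return connection
                .Query<Invoice>(
                    "select Id, Amount from Invoices where CustomerId = @customerId",
                    new { customerId })
                .ToList();
        }
    }
}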

To me, the development experience is as much a part of software architecture as anything else: I consider anything related to building the software part of its architecture, and since slow feedback compromises software quality, the development experience is certainly part of that architecture.

The future of .NET

I certainly hope Microsoft is going to improve on these matters. There is some hope: ASP.NET 5 and Entity Framework 7 are right around the corner, and they promise to be lighter weight, which I hope translates into faster start-up times. Also, Visual Studio 2015 seems to be a bit faster than 2013 (which was and is terrible), but not as fast as VS2012. I guess we’ll have to wait and see. For the time being, though, I’ll keep weighing my options.

Starving outgoing connections on Windows Azure Web Sites

I recently ran into a problem where an application running on Windows Azure Web Apps (formerly Windows Azure Web Sites or WAWS) was unable to create any outgoing connections. The exception thrown was particularly cryptic:

[SocketException (0x271d): An attempt was made to access a socket in a way forbidden by its access permissions x.x.x.x:80]
   System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress) +208
   System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket,
     IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception) +464

And no matter how much I googled, I couldn’t find anything related. Since the problem was definitely about creating outgoing connections, and not specific to any particular service (I couldn’t connect to HTTP endpoints or to SQL), I started to consider that WAWS was limiting the number of outbound connections I could make. More specifically, I hypothesized I was running out of ephemeral ports.

So I did a lot of debugging, looking around for non-disposed connections and such, but I couldn’t really find anything wrong with my code (except the usual). However, when running the app locally I did see a lot of open HTTP connections. I’m not going to go into details, but it turns out this had something to do with a (not very well documented) part of .NET: ServicePointManager. This manager is involved in all HTTP connections and keeps connections open so they can be reused later.
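
For reference, this behaviour is governed by a couple of ServicePointManager settings. A quick sketch of the defaults involved (the new values are purely illustrative; this isn’t necessarily how I ended up fixing it):

using System.Net;

static class OutgoingConnectionSettings {
    public static void Apply() {
        // Idle connections are kept alive for 100 000 ms (100 seconds) by default;
        // lowering this makes unused connections close sooner.
        ServicePointManager.MaxServicePointIdleTime = 10 * 1000;

        // The number of concurrent connections allowed per endpoint. The client
        // default is 2; ASP.NET raises it automatically when autoConfig is enabled.
        ServicePointManager.DefaultConnectionLimit = 12;
    }
}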

When reusing connections over a secure channel with client authentication, there are specific rules about when a connection can actually be reused, and that’s exactly what bit me: every outgoing request I made opened a new connection instead of reusing one that was already open.

The connections stay open for 100 seconds by default, so with enough requests coming in (each translating to a couple of outgoing requests), the number of open connections indeed became quite high. On my local machine this wasn’t a problem, but it seems Web Apps constrains the number of open connections you can have.

As far as I know, these limits aren’t documented anywhere, so I’ll post them here. Note that these limits are per App Service plan, not per app.

App Service Plan                   Connection Limit
Free F1                            250
Shared D1                          250
Basic B1 (1 instance)              1920
Basic B2 (1 instance)              3968
Basic B3 (1 instance)              8064
Standard S1 (1 instance)           1920
Standard S1 (2 instances)          1920 per instance
Standard S2 (1 instance)           3968
Standard S3 (1 instance)           8064
Premium P1 (1 instance, Preview)   1920

I think it’s safe to say that the number of available connections is per instance, so that an S3 plan with 3 instances has 3 × 8064 connections available. I also didn’t measure P2 and P3, and I assume they are equal to the B2/S2 and B3/S3 levels. If someone happens to know an official list, please let me know.

The odd numbering of the limits might make more sense if you look at them in hex: 0x780 (1920), 0xF80 (3968) and 0x1F80 (8064).

If you run into trouble with ServicePointManager yourself, I have a utility class that might come in handy for debugging the problem.
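
The class itself isn’t included here, but as a rough sketch, the relevant ServicePoint state for a given endpoint can be dumped like this:

using System;
using System.Net;

static class ServicePointDebug {
    public static void Dump(Uri endpoint) {
        // FindServicePoint returns the ServicePoint managing connections to this endpoint.
        ServicePoint sp = ServicePointManager.FindServicePoint(endpoint);

        Console.WriteLine("Endpoint:            {0}", sp.Address);
        Console.WriteLine("Current connections: {0}", sp.CurrentConnections);
        Console.WriteLine("Connection limit:    {0}", sp.ConnectionLimit);
        Console.WriteLine("Idle since:          {0}", sp.IdleSince);
        Console.WriteLine("Max idle time (ms):  {0}", sp.MaxIdleTime);
    }
}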

 

Creating self-signed X.509 (SSL) certificates in .NET using Mono.Security

***Disclaimer***

I’m not a security expert. For that reason, I’m not completely sure in what kind of situations you can use this solution, but you should probably not use it for any production purposes. If you are an expert, please let me know if there are any problems (or not) with the solution.

*** End disclaimer ***

I recently had to programmatically create self-signed X.509 certificates in a .NET application for my WadGraphEs Azure Dashboard project. Specifically, I wanted to generate a PKCS#12 .pfx file containing the private key and the certificate, as well as a DER-encoded .cer file containing the certificate only.

Unfortunately, there doesn’t seem to be an out-of-the-box managed API available from Microsoft, but I was able to make it work using Mono.Security. To see how it’s done, let’s start with how to generate the files with makecert.exe in the first place.

Creating a self-signed certificate using makecert.exe

makecert.exe is a Microsoft tool that you can use to create self-signed certificates. It’s documented here, and the basic syntax to create a self-signed certificate is:

makecert -sky exchange -r -n "CN=certname" -pe -a sha1 -len 2048 -ss My "certname.cer"

This will do a couple of things:

  • Generate a 2048-bit, exchange-type public/private key pair
  • Generate a certificate with name “CN=certname” and signed with above-mentioned keys
  • Store the certificate + private key in the “My” certificate store
  • Store the DER format certificate only in the file “certname.cer”

So the .cer file containing the certificate is already generated using this method, and we can get to the .pfx file by exporting it (Copy to file…) from certmgr.msc.

Now, the problem is that we can’t easily do this from code. I specifically needed a managed solution, so invoking makecert.exe from my application wouldn’t do, and neither would using the Win32 APIs. Luckily, the Mono guys have created a managed port of makecert.exe, so with a bit of tuning it should be possible to generate the certificate ourselves.

Mono.Security to the rescue

The code for the makecert port is available at https://github.com/mono/mono/blob/master/mcs/tools/security/makecert.cs. To use it to generate the self-signed certificate, I extracted the code that’s actually exercised by the command line parameters given above and put it into its own class:
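
The full class isn’t reproduced here, but the core of what gets extracted looks roughly like this (a sketch based on makecert.cs and Mono.Security’s X509CertificateBuilder; the class and method names are my own shorthand, not necessarily what ends up in WadGraphEs):

using System;
using System.Security.Cryptography;
using Mono.Security.X509;

public class SelfSignedCertificateGenerator {
    // Creates a DER-encoded, self-signed X.509 v3 certificate for the given subject,
    // signed with the supplied RSA key (issuer == subject, like makecert -r).
    public static byte[] CreateRawCertificate(string subjectName, RSA key) {
        var builder = new X509CertificateBuilder(3) {
            SerialNumber = Guid.NewGuid().ToByteArray(),
            IssuerName = subjectName,
            SubjectName = subjectName,
            SubjectPublicKey = key,
            NotBefore = DateTime.UtcNow,
            NotAfter = DateTime.UtcNow.AddYears(1),
            Hash = "SHA1"               // matches the -a sha1 argument
        };
        return builder.Sign(key);
    }
}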

Generating a .pfx and a .cer is now done as follows (once you’ve installed Mono.Security from NuGet):
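
Roughly, and reusing the sketch class above (the file names and password are placeholders), it comes down to this:

using System.IO;
using System.Security.Cryptography;
using Mono.Security.X509;

class Program {
    static void Main() {
        var rsa = new RSACryptoServiceProvider(2048);   // -len 2048
        byte[] rawCert = SelfSignedCertificateGenerator
            .CreateRawCertificate("CN=certname", rsa);

        // The DER .cer file contains the certificate only.
        File.WriteAllBytes("certname.cer", rawCert);

        // The PKCS#12 .pfx file contains both the certificate and the private key.
        var pfx = new PKCS12();
        pfx.Password = "placeholder-password";
        pfx.AddCertificate(new X509Certificate(rawCert));
        pfx.AddPkcs8ShroudedKeyBag(rsa);
        pfx.SaveToFile("certname.pfx");
    }
}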

And that’s it: you have now created a .pfx/.cer pair from purely managed code.

Closing remarks on Mono.Security

There are a couple of peculiarities with the makecert port:

  • The tool initializes CspParameters subjectParams and CspParameters issuerParameters based on the command line arguments, but it does not actually seem to be using them when generating the certificate. I don’t think our set of command line parameters actually influences those two objects, but it’s still a little bit weird.
  • The tool doesn’t support the -len parameter, so I changed the way the key is generated: instead of using the RSA.Create() factory, I hard-coded it to new RSACryptoServiceProvider(2048), which should do it. I’ve also confirmed the resulting key length using both OpenSSL and certmgr.msc.

It’d be great if someone could independently verify whether the above two points indeed work as intended.

Anyway, big thanks to the Mono.Security team for providing the makecert port.

 

Dependency Inversion with Entity Framework in .NET

I wrote about the dependency inversion principle (DIP) a couple of weeks ago and got some questions about the practical implementation issues when applying it in a .NET environment, specifically how to apply it when using Entity Framework (EF). We came up with three solutions, which I’ll detail in this post.

Note that even though this post is written for EF, you can probably easily extrapolate these strategies to other data access tools such as NHibernate, Dapper or plain old ADO.NET as well.

Recap: Dependency Inversion for data access

Our goal is to have our data access layer depend on our business/domain layer, like this:

[Diagram: dip – the data access layer depends on the business/domain layer]

When trying to implement this with EF, you’ll run into an interesting issue: where do we put our entity classes (the classes representing the tables)? Both the business layer and data access layer are possible, and there are trade-offs to both. We’ve identified three solutions that we find viable in certain circumstances, and that’s exactly the topic of this post.

Strategy 1: Entities in DAL implementing entity interface

For the sake of this post we’ll use a simple application that handles information about Users. The Users are characterized by an Id, Name and Age. The business layer defines a UserDataAccess that allows the ApplicationService classes to fetch a user by its id or name, and allows us to persist the user state to disk.

Our first strategy is to keep the definition of our entity classes in the DAL:

[Diagram: dip-interface – entities in the DAL implementing an interface defined in the business layer]

Note that we need to introduce an IUser interface in the business layer since the dependency runs from DAL to business. In the business layer we then implement the business logic by working with the IUser instances received from the DAL. The introduction of IUser means we can’t have business logic on the User instances, so logic is moved into the ApplicationService classes, for example validating a username:

public class UserApplicationService {
     public void ChangeUsername(int userId,string newName) {
        AssertValidUsername(newName);
        var user = _dataAccess.GetUserById(userId);
        user.Name = newName;
        _dataAccess.Save();
    }

    public int AddUser(string withUsername,int withAge) {
        AssertValidUsername(withUsername);

        var user = _dataAccess.New();
        user.Age = withAge;
        user.Name = withUsername;
        _dataAccess.Save();

        return user.Id;
    }

    private static void AssertValidUsername(string withUsername) {
        if(string.IsNullOrWhiteSpace(withUsername)) {
            throw new ArgumentNullException("name");
        }
    }
    ...
}

Note that this is pretty fragile since we need to run this check every time we add some functionality that creates or updates a username.
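
For reference, the business-layer abstractions this strategy ends up with look roughly like this (inferred from the code in this post, so treat the exact signatures as an approximation):

// Business layer: the entity is only visible through an interface...
public interface IUser {
    int Id { get; }
    string Name { get; set; }
    int Age { get; set; }
}

// ...and the data access port hands out and persists those IUser instances.
public interface UserDataAccess {
    IUser GetUserById(int userId);
    IUser GetUserByName(string name);   // "fetch a user by its name", as described above
    IUser New();
    void Save();
}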

One of the advantages of this style is that we can use annotations to configure our EF mappings:

class User : IUser{
    [Key]
    public int Id {get;set;}

    [Index(IsUnique=true)]
    [MaxLength(128)]
    [Required]
    public string Name {get;set;}
    public int Age{get;set;}  
}

A peculiarity of this solution is how to handle adding users to the system. Since there is no concrete implementation of the IUser in the business layer, we need to ask our data access implementation for a new instance:

public class UserApplicationService {
    public int AddUser(string withUsername, int withAge) {
        var user = _dataAccess.New();
        user.Age = withAge;
        user.Name = withUsername;
        _dataAccess.Save();

        return user.Id;
    }

    ...
}

public class PersistentUserDataAccess : UserDataAccess{
    public IUser New() {
        var user = new User();

        _dbContext.Users.Add(user);

        return user;
    }

    ...
}

The addition of the new User to the context, together with the fact that Save doesn’t take any arguments, is a clear sign that we expect our data access implementation to implement a Unit of Work pattern, which EF does.
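
As an illustration (this part isn’t spelled out in the post, and the context class name is an assumption), the Save side of that unit of work is essentially a pass-through to EF:

public class PersistentUserDataAccess : UserDataAccess {
    private readonly UserDbContext _dbContext;   // assumed context class name

    public PersistentUserDataAccess(UserDbContext dbContext) {
        _dbContext = dbContext;
    }

    // EF's DbContext already tracks every entity handed out or added, so a single
    // parameterless Save flushes all pending changes: the Unit of Work.
    public void Save() {
        _dbContext.SaveChanges();
    }

    ...
}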

It should be clear that this style maintains pretty tight coupling between the database and the object representation: our domain objects will almost always map 1-to-1 to our tables, both in names and in types. This works particularly well in situations where there isn’t much (complex) business logic to begin with and we’re not pursuing a particularly rich domain model. However, as we’ve already seen, even with something as simple as this we can run into duplication issues around the usernames. I think that, in general, this style will lead you to a more procedural way of programming.

The full source to this example is available in the DIP.Interface.* projects on GitHub.

Strategy 2: Define entities in the business layer

Our second strategy defines the User entities directly in the business layer:

[Diagram: dip-entitties-in-app – the entities defined in the business layer]

The current version of EF (6.1) handles this structure without problems, but older versions or other ORMs might not be so lenient and require the entities to be in the same assembly as the DbContext.

An advantage of this strategy is that we can now add some basic logic to our User objects, such as the validation we had earlier:

public class User {
    private string _name;

    public string Name {
        get {
            return _name;
        }
        set {
            if(string.IsNullOrWhiteSpace(value)) {
                throw new ArgumentNullException("name");
            }
            _name = value;
        }
    }

    ...
}

We can no longer use annotations to configure our mapping, so we have to configure it through the EF fluent API to achieve the same result:

class UserDbContext : DbContext{
    public DbSet<User> Users{get;set;}

    protected override void OnModelCreating(DbModelBuilder modelBuilder) {
        base.OnModelCreating(modelBuilder);

        modelBuilder.Entity<User>()
            .HasKey(_=>_.Id)
            .Property(_=>_.Name)
                .HasMaxLength(128)
                .IsRequired()
                .HasColumnAnnotation("Index",   
                    new IndexAnnotation(
                        new IndexAttribute() { 
                        IsUnique = true 
                }));
            
    }
}

I think this is probably a bit less clean and discoverable than the annotated version, but it’s still workable.

The method of adding new Users to the system also changes slightly: we can now instantiate User objects within the business layer, but we need to explicitly add them to the unit of work by invoking the AddNewUser method:

public class UserApplicationService {
    public int AddUser(string withUsername, int withAge) {
        var user = new User {
            Age = withAge,
            Name = withUsername
        };
        _dataAccess.AddNewUser(user);

        _dataAccess.Save();

        return user.Id;
    }

    ...
}

public class PersistentUserDataAccess : UserDataAccess{
    public void AddNewUser(User user) {
        _context.Users.Add(user);
    }

    ...
}

Just as with strategy 1, there is still a large amount of coupling between our table and object structures, with the same implications for programming style. We did, however, manage to remove the IUser type from the solution. In situations where I don’t need a rich domain model, I favor this strategy over the previous one, since I think the User/IUser split is just a little weird.

The full source to this example is available in the DIP.EntitiesInDomain.* projects on GitHub.

Strategy 3: Map the entities to proper domain objects

In this case we decouple the DAL and business layer representations of the User completely by introducing a Data Mapper:

[Diagram: dip-data-mapper – a data mapper between the EF entities and the domain objects]

This is the most flexible solution of the three: we’ve completely isolated our EF entities from our domain objects. This allows us to have properly encapsulated, behavior-rich business objects, which have no design compromises due to data-storage considerations. It is, of course, also more complex, so use it only when you actually need the aforementioned qualities.

Validation of the username is now done in the Username factory method, and is no longer a responsibility of the User object at all:

public struct Username {
    public static Username FromString(string username) {
        if(string.IsNullOrWhiteSpace(username)) {
            throw new ArgumentNullException("name");
        }
        return new Username(username);
    }

    ...
}

public class User {
    public Username Name { get; private set; }
    public UserId Id { get; private set; }
    public Age Age { get; private set; }
    ...
}

Using rich domain objects like this is extremely hard (if at all possible) with the other strategies, but it can also be extremely valuable when designing business-rule-heavy applications (as opposed to mostly CRUD ones).

Actually persisting Users to disk is also completely different with this strategy, since we can’t leverage EF’s change tracking/unit of work mechanisms. We need to explicitly tell our UserDataAccess which object to save and map that to a new or existing UserRecord:

public class UserApplicationService {
    public void ChangeUsername(UserId userId, Username newName) {
        var user = _dataAccess.GetUserById(userId);

        user.ChangeUsername(newName);

        _dataAccess.Save(user);
    }

    ...
}

public class PersistentUserDataAccess:UserDataAccess {
    public void Save(User user) {
        if(user.IsNew) {
            SaveNewUser(user);
            return;
        }
        SaveExistingUser(user);
    }

    private void SaveExistingUser(User user) {
        var userRecord = _context.Users.Find(user.Id.AsInt());

        _userMapper.MapTo(user,userRecord);
        _context.SaveChanges();
    }

    private void SaveNewUser(User user) {
        var userRecord = new UserRecord {};
        _userMapper.MapTo(user,userRecord);
        _context.Users.Add(userRecord);
        _context.SaveChanges();
        user.AssignId(UserId.FromInt(userRecord.Id));
    }

    ...
}
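
The _userMapper itself isn’t shown here; a minimal sketch of what it might look like (the shape of UserRecord and the accessors on the value types are assumptions):

// DAL-side EF entity, mirroring the Users table (assumed shape).
public class UserRecord {
    public int Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
}

// Copies the state of a domain User onto a new or existing UserRecord.
public class UserMapper {
    public void MapTo(User user, UserRecord record) {
        record.Name = user.Name.AsString();   // assumed accessor on the Username value type
        record.Age = user.Age.AsInt();        // assumed accessor on the Age value type
        // Id is left alone: EF generates it for new records, and existing records
        // already have it (see SaveNewUser/SaveExistingUser above).
    }
}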

In general, this style is more work, but the flexibility gained can definitely outweigh that.

The full source to this example is available in the DIP.DataMapper.* projects on GitHub.

Conclusion

In this post we explored three strategies for applying the Dependency Inversion Principle to Entity Framework in .NET-based applications. There are probably other strategies (or mixtures of the above, specifically wrapping EF entities as state objects in DDD) as well, and I would be happy to hear about them.

I think the most important axis along which to evaluate each strategy is the amount of coupling between the database schema and the object structure. High coupling results in a design that’s easier and quicker to understand, but we won’t be able to design a really rich domain model around it, and it will lead you to a more procedural style of programming. The third strategy provides a lot of flexibility and does allow for a rich model, but it might be harder to understand at first glance. As always, which strategy to pick depends on the kind of application you’re developing: if you’re doing CRUDy work, perhaps with a little bit of transaction script, use one of the first two. If you’re doing business-rule-heavy domain modelling, go for the third.

All code (including tests) for these strategies is available on GitHub.