Date archives: August 2015

.NET is slowing me down

.NET has been my primary development environment for a little over 5 years now. I’ve always really liked it, had a lot of success with it, and learned a lot while using it. The tooling and maturity of the platform are, and have been, right where I need them to be. On a lot of projects, it allowed me to really focus on the domain, and I seldom had to write custom tooling for standard tasks, which I did have to do on other platforms. This allowed me to deliver a lot of value to my clients, and they’re happy about that.

There is, however, a problem that’s growing bigger and bigger with .NET: it’s getting slow to develop on. This doesn’t mean .NET itself is getting slow; it means the developer experience is getting slower. To illustrate my point, I’ve measured the time to the very first byte (that is, the time to the first byte after a rebuild) for the template application of various MVC versions:

.NET Version | MVC Version | First available date | Time to very first byte (seconds)
4.0          | 2           | March 2010           | 1.00
4.0          | 3           | January 2011         | 1.12
4.5.2        | 3           | May 2014             | 1.45
4.0          | 4           | August 2012          | 2.63
4.5.2        | 4           | May 2014             | 2.89
4.5.2        | 5.2.3       | January 2015         | 3.47
4.6          | 5.2.3       | July 2015            | 3.58
4.6          | 6.0.0-beta5 | July 2015            | 1.89

So, over the course of 5 years, the time to load the first page has increased by a factor of 3.5, or 2.5 seconds in absolute terms. Now, it seems ASP.NET 5 is going to reduce load times a bit, but still not to 2010 levels.

To make matters worse, something like Entity Framework has been getting slower at the same rate, and hitting a page that goes to the database might easily take somewhere between 5 and 10 seconds. The same goes for tests: running the first one easily takes a couple of seconds due to EF alone.

Environmental Viscosity

So, what’s the problem? Environmental viscosity. To quote Uncle Bob in Agile Software Development: Principles, Patterns, and Practices (PPP):

Viscosity of the environment comes about when the development environment is slow and inefficient. For example, if compile times are very long, developers will be tempted to make changes that don’t force large recompiles, even though those changes don’t preserve the design.

This is exactly what’s going on here. Because load times are slow, I tend to:

  • Make bigger changes before reloading
  • Write fewer tests
  • Write tests that test larger portions of functionality
  • Implement back-end code in the front-end (HTML/JavaScript)
  • Visit reddit while the page loads

All these things are undesirable. They slow me down and compromise the quality of the software. If you’ve ever worked with “enterprise” CMS software, you’ve seen this happen in the extreme (I sure have): minutes can pass between making a change and the page actually loading.

Even if you don’t do any of the above and slavishly wait for the page to load or the test to run every time, you’re still wasting your time, which isn’t good. It might not seem like a big deal, but imagine making 500 changes every day: that translates to 500 x 5s = 2,500 seconds of waiting. That’s more than 40 minutes of waiting, every day.


To reiterate: slow feedback compromises software quality. What I want, therefore, is feedback on my changes within a second, preferably within 500ms. This requirement will definitely factor into my choice of technology and tools, and it will be a strong factor.

For example, my choice for data access defaults to Dapper these days, because it’s just much faster than EF (to be fair, I also rely less on “advanced” mappings). Even something like PHP, for all its faults, tends to have a time to very first byte that’s an order of magnitude faster than that of .NET apps, making it something I might consider when other .NET qualities aren’t that important.

To me, the development experience is as much a part of software architecture as anything else: I consider anything related to building software a part of the architecture, and since slow feedback compromises software quality, the development experience is certainly part of it.

The future of .NET

I certainly hope Microsoft is going to improve on these matters. There is some hope: ASP.NET 5 and Entity Framework 7 are right around the corner, and they promise to be lighter-weight, which I hope translates into faster start-up times. Also, Visual Studio 2015 seems to be a bit faster than 2013 (which was, and is, terrible), but not as fast as VS2012. I guess we’ll have to wait and see. For the time being, though, I’ll keep weighing my options.

Start closing the end user feedback loop!

The most important feedback loop in any software development project is the feedback you get from end users. The reason is simple: they’re the ones actually using the product, and they’re the ones paying for it, either directly or indirectly. If your product isn’t being used, I can guarantee you development on it is going to end sooner rather than later.

Unfortunately, it seems this isn’t common knowledge. Instead, we tend to focus on the feedback of the client and (implicitly) assume that when the client is happy, the end user will be happy. And since it’s the client that pays us, it seems only reasonable that their feedback is what counts. Well, it turns out clients in general aren’t much better at figuring out what their users want. Instead, they rely on feedback from those actual users to decide what the next feature is going to be, or what needs to be improved.

Having efficient ways to gather such feedback is therefore extremely important, yet often overlooked. In a lot of cases, feedback is gathered only by physically talking to users. While this results in high-quality feedback, it’s not very efficient, and the probability of missing things is very high.

Luckily, we, as developers, can help: there are a lot of technological ways to gather feedback more efficiently, and it’s our responsibility to make those methods available to our clients. Below are 5 techniques you can use to start shortening the feedback loop today.

Five things you can start doing today

Analyze web server log files

Web server logs contain a wealth of information. For example, they can help you:

  • Find out which features are used most often by looking at the request path
  • Look for bad user experiences by seeing which requests have high response times or error responses
  • See when your users use the product most. Does that map to what you expect?
  • Figure out which users are heavy users
  • Track individual users as they browse through your site/app

It’s easy to analyze log files with some custom code, or you can use something like Log Parser.
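To give an idea of what that custom code might look like, here’s a minimal JavaScript sketch. The log format and sample lines are made up, so adapt the parsing to your server’s actual layout:

```javascript
// Count requests per path and flag bad experiences from simplified
// log lines of the form "METHOD PATH STATUS DURATION_MS".
// This format is a made-up example, not a real server log layout.
const lines = [
  'GET /orders 200 120',
  'GET /orders 200 95',
  'POST /checkout 500 2300',
  'GET /profile 200 45',
];

const counts = {};
const slow = [];

for (const line of lines) {
  const [method, path, status, ms] = line.split(' ');
  counts[path] = (counts[path] || 0) + 1;
  // Slow responses and server errors both point at a bad experience.
  if (Number(ms) > 1000 || Number(status) >= 500) {
    slow.push({ method, path, status, ms: Number(ms) });
  }
}

console.log(counts); // which features are used most often
console.log(slow);   // candidates for a bad user experience
```

The same loop extends naturally to the other bullets: group by timestamp to see when the product is used, or by user/IP to find heavy users and follow individual sessions.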

Set up Google Analytics events

If you’re using GA, you can use events to track user actions: what buttons they’re clicking, whether they’re scrolling, etc. Use this to figure out whether users are actually interacting with your site/app as expected.
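For illustration, here’s a minimal sketch of such event tracking using the Universal Analytics ga() call (the GA flavor current in 2015). In the browser, `ga` comes from the GA bootstrap snippet; it’s stubbed here so the example runs standalone, and the category/action/label names are made-up examples:

```javascript
// Stub for the ga() function the GA snippet normally provides,
// so the example is self-contained and inspectable.
const sent = [];
const ga = (...args) => sent.push(args);

// Report a click on a sign-up button as a GA event.
// 'button' / 'click' / 'signup' are example names, not a convention.
function onSignupClick() {
  ga('send', 'event', 'button', 'click', 'signup');
}

onSignupClick();
console.log(sent[0]); // the queued event: send / event / button / click / signup
```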

Install chat software

In-page chat widgets are popping up everywhere. You can install one to provide an easily accessible way for users to contact you. Make sure someone is actually answering the chat, though, or you might leave a bad impression.

Investigate abandoned funnels

Funnels can be abandoned for many reasons: a use case you didn’t expect, a technical problem, or a user simply changing their mind. Either way, it’s interesting for you to know why. Use any method you have available to figure out why it happens: correlate logs, events, chats, etc. If you have the user’s e-mail address, send them an e-mail to ask why.
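As a sketch, abandonment can be detected from whatever event stream you already collect; the events and step names below are made-up examples:

```javascript
// Made-up event stream: one record per funnel step a user reached.
const events = [
  { user: 'a', step: 'cart' }, { user: 'a', step: 'checkout' },
  { user: 'a', step: 'paid' },
  { user: 'b', step: 'cart' }, { user: 'b', step: 'checkout' },
  { user: 'c', step: 'cart' },
];

// Users who entered a funnel step but never completed the final one.
function abandonedAt(events, entered, completed) {
  const started = new Set(), finished = new Set();
  for (const e of events) {
    if (e.step === entered) started.add(e.user);
    if (e.step === completed) finished.add(e.user);
  }
  return [...started].filter((u) => !finished.has(u));
}

console.log(abandonedAt(events, 'checkout', 'paid')); // user 'b': reached checkout, never paid
```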

A/B testing

A/B testing can help you figure out what your users do and don’t care about. Both outcomes are equally important: if they care about something, do more of it; if they don’t, drop it and focus on the things that do work. You can write the infrastructure yourself, but there are off-the-shelf solutions available as well.
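If you do roll your own, the core of it is assigning each user a stable variant. A minimal sketch (the hash here is illustrative, not production-grade):

```javascript
// Deterministic A/B bucketing: hash the user id so the same user
// always lands in the same bucket, with no state to store.
function variant(userId) {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % 2 === 0 ? 'A' : 'B';
}

// Stable across calls, so a returning user keeps seeing the same
// version of the page.
console.log(variant('user-123') === variant('user-123')); // true
console.log(variant('user-123'), variant('user-456'));
```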

Truly care

Just setting things up is not enough; you should also deeply care about the results. If anything looks inconsistent, you should investigate it. If data isn’t being collected, you should find out why. If some hypothesis is not coming true, you should think of ways to figure out why that is. Do whatever you can to learn more about your users.

Every person on a development team should be aware of who the users are, why they’re using the product, why they keep coming back, etc. Caring about this stuff isn’t for one single role within the team (think Product Owner); everybody should feel responsible. If everybody on the team cares deeply about the users, the product will become much better, there will be more alignment, and your work will be more satisfying.

Vectors as ADT

I talked about autognostic objects a couple of weeks ago, and in that post contrasted them with abstract data types (ADTs). I promised to follow up with a post on an ADT implementation, so here it is.

First of all, let’s state the autognosis property once again: an autognostic object can only have detailed knowledge of itself. This constraint is required for objects, but not for ADTs. On the contrary: ADTs are allowed (maybe even expected) to inspect the detailed information of other values of their own type (and only of their own type).

From that point of view, it’s perfectly fine to implement the Vector add operation in an ADT as follows:
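A minimal JavaScript sketch of what that can look like (the x and y field names follow the text; the rest of the representation is an assumption on my part):

```javascript
// Vector as an ADT: add() reaches straight into the addend's data,
// which is allowed because both values are of the same type, Vector.
function Vector(x, y) {
  return {
    x, y,
    add(that) {
      // Direct access to that.x and that.y: the same-type privilege.
      return Vector(this.x + that.x, this.y + that.y);
    },
  };
}

const sum = Vector(1, 2).add(Vector(3, 4));
console.log(sum.x, sum.y); // 4 6
```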

As you can see, we blatantly access the private data (the x and y) of the addend in order to perform the calculation. We can do this because both the augend and the addend are of type Vector, and ADTs are allowed to access each other’s private data when they’re of the same type.

The name Vector denotes a type abstraction. With this kind of abstraction, the abstraction boundary is based on a type name (Vector). This means that as a client all you can see is the type and operations, but the implementation is hidden. “Within” the type, though, you have full access to the implementation and representations. It also means that, contrary to objects, you cannot easily interoperate with other values, since they have a different type and therefore have a hidden representation. All ADTs are based on type abstraction.

This also has some implications for extensibility; specifically, that an ADT has to know all possible representations. To see that, let’s say we again want to add a polar representation to the Vector. We do this so we can keep full accuracy when creating a vector from polar coordinates, accuracy that would be lost if we converted it to rectangular coordinates first. In JavaScript, we can implement that as follows:
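A sketch of one possible shape of that change, with each value tagged by its representation (the tagging scheme is my assumption):

```javascript
// Two representations of the same ADT, distinguished by a tag.
function rect(x, y) { return { tag: 'rect', x, y }; }
function polar(r, theta) { return { tag: 'polar', r, theta }; }

// Every accessor now has to branch on the tag...
function getX(v) { return v.tag === 'rect' ? v.x : v.r * Math.cos(v.theta); }
function getY(v) { return v.tag === 'rect' ? v.y : v.r * Math.sin(v.theta); }

// ...and every operation has to know about every representation.
function add(a, b) {
  return rect(getX(a) + getX(b), getY(a) + getY(b));
}

const v = add(rect(1, 0), polar(1, Math.PI / 2));
console.log(v.x, v.y); // x ≈ 1, y ≈ 1
```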

It isn’t pretty, but in languages that have sum types and static typing it tends to work a bit better.

The important thing is that we had to change the ADT significantly to support the new representation. In fact, every new representation will require changes to the ADT. Compare that to objects, where we were able to add new representations without changing any of the existing ones. The reason is that ADTs are abstracted by type, while objects are abstracted by interface.

In general, ADTs are much less suited to adding new representations than objects are. It turns out this difference in extensibility is at the heart of the differences between ADTs and objects, and I’ll dive into that further in a future post. Don’t think all is bad with ADTs, though; they have other qualities. If you’d like a sneak peek, check out the Expression Problem on Wikipedia.

Starving outgoing connections on Windows Azure Web Sites

I recently ran into a problem where an application running on Windows Azure Web Apps (formerly Windows Azure Web Sites or WAWS) was unable to create any outgoing connections. The exception thrown was particularly cryptic:

[SocketException (0x271d): An attempt was made to access a socket in a way forbidden by its access permissions x.x.x.x:80]
   System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress) +208
   System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket,
     IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception) +464

And no matter how much I googled, I couldn’t find anything related. Since it was definitely related to creating outgoing connections, and not specific to any service (I couldn’t connect over HTTP or to SQL), I started to consider that WAWS was limiting the number of outbound connections I could make. More specifically, I hypothesized I was running out of ephemeral ports.

So I did a lot of debugging, looking for non-disposed connections and such, but couldn’t really find anything wrong with my code (except the usual). However, when running the app locally, I did see a lot of open HTTP connections. I’m not going to go into details, but it turns out this had something to do with a (not very well documented) part of .NET: ServicePointManager. This manager is involved in all HTTP connections and keeps connections open so they can be reused later.

When doing this over a secure connection with client authentication, there are some specific rules on when connections can be reused, and that’s exactly what bit me: every outgoing request I made opened a new connection instead of reusing an already open one.

Connections stay open for 100 seconds by default, so with enough requests coming in (each translating to a couple of outgoing requests), the number of open connections indeed became quite high. On my local machine this wasn’t a problem, but it seems Web Apps constrains the number of open connections you can have.
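A quick back-of-the-envelope sketch shows how the default idle timeout turns modest traffic into many open connections (the traffic numbers are illustrative):

```javascript
// Illustrative traffic figures, not measurements from the app.
const requestsPerSecond = 5;    // incoming traffic to the site
const outgoingPerRequest = 4;   // outbound calls made per request
const idleTimeoutSeconds = 100; // default connection idle timeout

// If every outbound call opens a fresh socket that then idles for
// the full timeout, this many sockets are open at any moment:
const openConnections = requestsPerSecond * outgoingPerRequest * idleTimeoutSeconds;
console.log(openConnections); // 2000, already past the 1920 limit of a B1/S1 plan
```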

As far as I know, these limits aren’t documented anywhere, so instead I’ll post them here. Note that these limits are per App Service plan, not per App.

App Service Plan                 | Connection limit
Free F1                          | 250
Shared D1                        | 250
Basic B1 (1 instance)            | 1920
Basic B2 (1 instance)            | 3968
Basic B3 (1 instance)            | 8064
Standard S1 (1 instance)         | 1920
Standard S1 (2 instances)        | 1920 per instance
Standard S2 (1 instance)         | 3968
Standard S3 (1 instance)         | 8064
Premium P1 (1 instance, preview) | 1920

I think it’s safe to say that the number of available connections is per instance, so an S3 with 3 instances has 3 * 8064 connections available. I also didn’t measure P2 and P3, but I assume they’re equal to the B2/S2 and B3/S3 levels. If someone happens to know an official list, please let me know.

The odd-looking limits might make more sense if you look at them in hex: 0x780 (1920), 0xF80 (3968) and 0x1F80 (8064).

If you run into trouble with ServicePointManager yourself, I have a utility class that might come in handy to debug this problem.