Date archives "April 2015"

When not to do responsive design

The premise of responsive web design is great: by using one set of HTML, CSS and JavaScript for all device types, you can shorten development time and cut costs compared to maintaining multiple sets. While I haven’t seen any proof of this, I can imagine it being true in certain situations. In general, however, I don’t think it is, and there are a lot of situations where responsive can actually increase your development time and costs.

As part of my job, I regularly get asked by clients whether they should go responsive. My usual answer is that they probably shouldn’t, for various reasons. In this post I’ll list some of them.

You want to test new features often

As a first case, let’s say you’re introducing a new feature and are not yet completely sure whether your users are gonna value it. In this case it doesn’t make sense to go all the way right away. Instead, you want to put something into the market as quickly as possible, and only when the feature turns out to be a hit do you implement it for the other platforms.

Clients are often surprised by this reasoning, since the whole point of responsive is assumed to be that you only have to do the work once. Well, that’s just not true. No matter how you do it, you will need to design how the scaling up/down will actually work: how the pages are laid out at the different sizes, and how the interaction works. This is always extra work, apart from any coding activities. On the coding side there is extra development work too, since building HTML/CSS that works across a variety of device types is always harder than building it for a single device type. Finally, you will need to test it on all those device types.

So going responsive has increased your time to market, while what you wanted was to get the feature in front of users ASAP, not to roll it out to every platform at once.

You want to support a new class of device

You already have a mobile site, but now want to add a desktop version. You have two options here: build separate desktop HTML/CSS or rebuild both the desktop and mobile site as a responsive site.

I’d always go for the first option because of the following implications when (re)building both together:

  • I get more development work since I now need to implement two designs instead of one.
  • The mobile site needs to be retested and debugged.
  • I can probably not release both independently. That means I probably need to hold off on changes to the mobile site until I’ve released the desktop site, which might take a while.

In short, it’s gonna take more time to do a desktop version and you’re holding back mobile development, while I’m not convinced there is any benefit. Adding just the desktop version also goes hand in hand with the XP principle of baby steps, changing one thing at a time, which I strongly believe in.

One thing I might do is plan to retrofit the finished desktop version into a responsive version of the mobile site afterwards. But only when I can see the benefits of going responsive (see conclusion).

The use cases vary greatly among device classes

When I’m buying something online, I usually do this on a desktop. I’d like to browse a little bit, compare, read some reviews, etc, before making the purchase. After making the purchase, I then use my phone to regularly check up on the status of my order.

It’s pretty obvious these are two completely different use cases, and therefore they require completely different interaction models and navigation paths. This is gonna be hard to do when you’re doing responsive, since responsive pretty much assumes a 1-to-1 mapping between device classes in what you can do on each page and how you navigate. So again, without a clear benefit, you’ve seriously constrained your own ability to provide the most appropriate user experience for each device type.

You want separate teams handling mobile and desktop

As stated above, since use cases probably vary among device classes, I might want to have separate teams handling the development of each, both of them optimizing for their specific type of use cases. I want those teams to be autonomous. Having them work in the same code base is not gonna make that work: there needs to be a lot of coordination to avoid breaking each other’s work, and you can never release independently. So using responsive hurts your ability to scale/structure your teams as you like.

Conclusion

To be fair, most of the problems listed above can actually be circumvented if you try hard enough. Doing that, however, nullifies the entire argument for doing responsive in the first place, which is saving time/costs.

The underlying problem with all of the above cases is that you’re introducing coupling, and we all know that the wrong kind of coupling can really hurt your code. In the above examples, the coupling is working against you instead of for you, manifesting itself in less flexibility, less agility, longer times to market and a worse end-user experience. All this without any real, clear benefit. For me, that’s hard to justify. Especially since, in my experience, it’s not that hard to build a separate desktop or mobile variant of your site once you already have the other. Most of the time actually goes into other work, such as settling on functionality, implementing that functionality, designing basic styles (which you can share), etc. I think in a lot of situations this will actually save development time/costs compared to going responsive.

Only in situations where you have a very basic design, a small amount of functionality that’s not going to change a lot, and little need for flexibility might responsive actually reduce development time, and even then not by a lot (I dare say at most 5%, if you do reuse basic style components).

Don’t get me wrong, I strongly feel you should provide a good experience for as big an audience as possible. I just don’t think responsive (across all device classes) is the general way to do it.

On URLs

Some people think the only way to get the same URL for desktop and mobile is doing responsive. This is not true since you can detect device class server-side and then decide which HTML you’re gonna serve. And really, Google doesn’t care.
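For illustration, server-side detection can be as simple as inspecting the User-Agent header. This is a rough sketch; the regex pattern and view names are illustrative assumptions, and production-grade detection is usually handled by a dedicated library:

```csharp
using System.Text.RegularExpressions;

public static class DeviceDetector
{
    // Crude mobile detection; real-world patterns are far more extensive.
    static readonly Regex MobilePattern = new Regex(
        "Android|iPhone|iPad|Windows Phone|Mobile",
        RegexOptions.IgnoreCase | RegexOptions.Compiled);

    // Same URL for every client; the server decides which HTML to render.
    public static string SelectView(string userAgent)
    {
        return MobilePattern.IsMatch(userAgent ?? "")
            ? "MobileView"
            : "DesktopView";
    }
}
```

In an ASP.NET handler you would call `SelectView(Request.UserAgent)` and render the corresponding template, keeping one canonical URL per page.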

Afterthought: How this relates to MVC

MVC emerged as a method to have multiple views on the same data. Jeff Atwood once wrote an article arguing that on the web, HTML can be seen as the Model and CSS as the View. I don’t agree. For me, HTML is part of the view. To show multiple representations of the same data (the model), as you do when viewing on multiple devices, you create multiple views, comprising both HTML and CSS.

Creating self-signed X.509 (SSL) certificates in .NET using Mono.Security

***Disclaimer***

I’m not a security expert. For that reason, I’m not completely sure in what kind of situations you can use this solution, but you should probably not use it for any production purposes. If you are an expert, please let me know if there are any problems (or not) with the solution.

*** End disclaimer ***

I recently had to programmatically create self-signed X.509 certificates in a .NET application for my WadGraphEs Azure Dashboard project. Specifically, I wanted to generate a PKCS#12 .pfx file containing the private key and the certificate, as well as a DER .cer file containing the certificate only.

Unfortunately there doesn’t seem to be an out-of-the-box managed API available from Microsoft, but I was able to make it work using Mono.Security. To see how it’s done, let’s start with how to generate them with makecert.exe in the first place.

Creating a self-signed certificate using makecert.exe

makecert.exe is a Microsoft tool that you can use to create self-signed certificates. It’s documented here, and the basic syntax to create a self-signed certificate is:

makecert -sky exchange -r -n "CN=certname" -pe -a sha1 -len 2048 -ss My "certname.cer"

This will do a couple of things:

  • Generate a 2048-bit private/public exchange-type key pair
  • Generate a certificate with name “CN=certname” and signed with above-mentioned keys
  • Store the certificate + private key in the “My” certificate store
  • Store the DER format certificate only in the file “certname.cer”

So the .cer file containing the certificate is already generated using this method, and we can get to the .pfx file by exporting it (Copy to file…) from certmgr.msc.

Now, the problem is that we can’t easily do this from code. I specifically needed a managed solution, so invoking makecert.exe from my application wouldn’t do, and neither would using the Win32 APIs. Luckily, the Mono guys actually created a managed makecert.exe port, so with a bit of tuning it should be possible to generate the certificate.

Mono.Security to the rescue

The code of the makecert port is available at https://github.com/mono/mono/blob/master/mcs/tools/security/makecert.cs. To use it to generate the self-signed certificate, I extracted the code paths that are actually exercised by the command line parameters above and put them into their own class:
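A minimal sketch of such a class, based on the relevant code paths in Mono's makecert.cs. The class and method names here are my own, but the builder API is Mono.Security's actual X509CertificateBuilder:

```csharp
using System;
using System.Security.Cryptography;
using Mono.Security.X509;

public class SelfSignedCertificateGenerator
{
    // Roughly equivalent to:
    //   makecert -sky exchange -r -n "CN=name" -pe -a sha1 -len 2048
    // Returns the raw DER-encoded certificate bytes.
    public static byte[] Generate(string subjectName, RSA key)
    {
        var builder = new X509CertificateBuilder(3)   // X.509 v3
        {
            SerialNumber     = Guid.NewGuid().ToByteArray(),
            IssuerName       = subjectName,  // self-signed: issuer == subject
            SubjectName      = subjectName,
            SubjectPublicKey = key,
            NotBefore        = DateTime.Now,
            NotAfter         = DateTime.Now.AddYears(1),
            Hash             = "SHA1"        // matches the -a sha1 argument
        };
        // Self-signed, so we sign with the subject's own private key.
        return builder.Sign(key);
    }
}
```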

Generating a .pfx and .cer is now done as follows (once you’ve nuget installed Mono.Security):
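A sketch of that step, assuming the generator class above; the file names and password are placeholders, and `PKCS12`, `X509Certificate`, `AddCertificate`, `AddPkcs8ShroudedKeyBag` and `SaveToFile` come from Mono.Security.X509:

```csharp
using System.IO;
using System.Security.Cryptography;
using Mono.Security.X509;

class Program
{
    static void Main()
    {
        // Hard-coded 2048-bit key, since the port ignores -len (see below).
        var key = new RSACryptoServiceProvider(2048);
        byte[] rawCert =
            SelfSignedCertificateGenerator.Generate("CN=certname", key);

        // DER .cer file: the certificate only.
        File.WriteAllBytes("certname.cer", rawCert);

        // PKCS#12 .pfx file: certificate plus private key.
        var pfx = new PKCS12 { Password = "secret" };
        pfx.AddCertificate(new X509Certificate(rawCert));
        pfx.AddPkcs8ShroudedKeyBag(key);
        pfx.SaveToFile("certname.pfx");
    }
}
```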

And that’s it, you have now created a pfx/cer pair from pure managed code.

Closing remarks on Mono.Security

There are a couple of peculiarities with the makecert port:

  • The tool initializes two CspParameters objects (subjectParams and issuerParams) from the command line arguments, but it does not actually seem to use them when generating the certificate. I don’t think our set of command line parameters influences those two objects, but it’s still a little bit weird.
  • The tool doesn’t support the -len parameter, so I’ve changed the key generation to not use the RSA.Create() factory but instead hard-code new RSACryptoServiceProvider(2048), which should do it. I’ve also confirmed the key length using both OpenSSL and certmgr.msc.

It’d be great if someone can independently verify whether the above two points are indeed working as intended.

Anyway, big thanks to the Mono.Security team for providing the makecert port.