Tuesday, 16 July 2013

Tools for Assessing Software Developers

It’s been a while since I last wrote on the subject of how to hire great software developers and weed out any applicants that aren’t experienced enough for the more senior positions within your team. Given the advent of new tools that are available to conduct such interviews, I felt it was worth updating my previous advice on the subject.

Skype is probably the single biggest game-changer in technical recruiting in recent years. Particularly if distance is an issue, using Skype to conduct interviews is a no-brainer.

Previously, phone screens were the de facto best way of carrying out an initial sift of shortlisted candidates, and to be honest they were never that good a predictive indicator. What’s different about Skype is that, provided the candidate in question has an IDE at home (and most experienced developers do), you can use it to quickly screen candidates’ coding ability. There’s nothing like seeing someone actually using an IDE right from your very first ‘meeting’ to get a feel for whether the experience they profess to have on their CV actually translates into meaningful skills that they’re capable of applying to realistic business problems.

Skype allows you and the candidate to see one another. For the hirer, that enables you to get feedback from any non-verbal cues about their interest in the job and aptitude for same. It also allows you to screen-share, so you can see what they’re typing in real time in their IDE. In those respects, Skype is even better than trying to conduct a similar process in person, because you don’t need to crowd around a laptop screen or use a projector to be able to see them at work.

So, by all means don’t rule any interesting CVs out on the mere grounds that the applicant doesn’t have a webcam, a development setup at home, or a fast enough internet connection to facilitate a video call. But if they do have those assets available it makes it much easier to confirm their ability in a matter of minutes, before either party has invested any great amount of time in the process. 

The second biggest innovation in recent years, in my opinion, is Github. It’s always been desirable for candidates to provide code samples as a means of demonstrating their skill. However, previously you could never be sure that any work submitted was the candidate’s own. Most candidates are honest. Just occasionally, however, you’d identify someone that had provided an impressive ‘code sample’, but who it later transpired couldn’t program a tenner out of a cash machine. Wherever they had plagiarised such samples from, it was clear that they didn’t actually understand them themselves. (Such antics are quite probably how this guy here got his job.) It’s a waste of both parties’ time if you only discover this fact when it comes to sitting down in front of a laptop at interview and you ask the candidate to take you through their solution, only to find they can’t explain the first thing about how it works or why certain design choices have been made.

Github aids candidates’ credibility by being a freely-available online source control solution that verifiably identifies the authors of any content submitted. Not only can you freely download any complete solutions that have been placed there, but you can see the individual check-ins that went into producing each solution, and the thought processes indicated by the comments associated with same. If you know what you’re looking at, those fine details tell you much more about a candidate than a mere CV full of buzzwords and all the glowing references in the world ever could. And unlike copying whole solutions you didn’t write yourself, forging a history of the individual check-ins that go into making up a complete solution is all but impossible.

With Github, you can also confirm a demo project’s creation date. This is important. Do you ever get the impression that candidates’ CVs are merely re-wordings of your job spec? This is in some ways understandable, and arises from the fact that the standard advice jobseekers are given is to tailor their CVs to highlight relevant experience. But still, as a hiring manager you sometimes would prefer to see what a candidate felt their own strengths were, before they knew what you were actually looking for. Github gives you that insight. If you’re looking for someone that has experience in Technology ‘X’, being able to see that they’ve completed a project using that technology some months before your particular requirement even came up is a pretty convincing demonstration that the candidate actually does know what they’re talking about when it comes to the subject concerned*.

(* That said, outside of specialist contracting roles, where you do expect new hires to hit the ground running from day one, hiring software developers should rarely if ever merely be about hiring a particular skillset. It’s always better to hire for aptitude and attitude instead, and train for skill when you need to. New technologies come up all the time, and it’s no good hiring one-trick ponies that are incapable of keeping up with constantly-emerging technologies. Or, worse still, people that may be gifted as individuals but whose personality problems render them unsuitable for teamwork. You can teach people with the right aptitude and temperament almost any technical skill they need to know. The best ones will be capable of constantly improving themselves. But you can’t teach them not to try and use their one golden hammer to solve every single problem they come across. And you can’t teach them not to be an arrogant control freak that alienates their peers.)

The above are great ways to identify talent. That said, I know from working with a great many talented software developers over the years that a lot of them don’t have the time to work on open source projects on Github whilst they’re fitting a family life around being great assets to their existing employer. And some of them live in places where the internet connection is slow, making Skype a difficult option.

So, for people for whom Skype and Github aren’t options, there is a Plan ‘B’ you can use. A less-preferable secondary approach that also works is to conduct an initial phone screen using a stock list of questions. I’m loath to suggest an undue correlation between merely knowing the answers to some coding trivia questions and actual meaningful ability as a software developer. One is merely knowledge; the other is a demonstration of actual intelligence. However, there are just some basic things that you should know about any language or technology you profess to be proficient in, and that knowledge can be used as a baseline check if need be.

E.g., for a junior-level C# developer, I’d expect them to know:

  • Q. What are the access modifiers you may use to limit Field/Property visibility, and to what extent does each make those aspects of a class visible?

    A. Public, Private, Protected, Internal and Protected Internal.
    (NB: I wouldn’t fault anyone for failing to name that last as a distinct scope in its own right, whose limit is a combination of that afforded by ‘Protected’ and ‘Internal’.)
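For interviewers who want the expected answer made concrete, here’s a minimal sketch; the class and member names are my own, purely for illustration:

```csharp
using System;

// Each of the five access modifiers, annotated with the visibility it grants.
public class Account
{
    public string Name;                 // visible to all code, everywhere
    private decimal _balance;           // visible only inside Account itself
    protected string _auditNote;        // Account, plus classes derived from it
    internal int BranchCode;            // any code in the same assembly
    protected internal int RegionCode;  // same assembly, OR derived classes elsewhere

    public Account(string name, decimal openingBalance)
    {
        Name = name;
        _balance = openingBalance;
    }

    // A public property exposing the private field read-only.
    public decimal Balance { get { return _balance; } }
}
```

A junior candidate who can talk through why `_balance` is private but `Balance` is public has demonstrated the concept; naming all five modifiers verbatim matters far less.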

The key thing is that there are no trick questions here that would require knowledge of obscure parts of the .Net framework. Candidates may or may not happen to have used particular corners of the several thousand types that make up the .Net Framework, but good developers could look up and utilise any part of the Framework they needed with only a couple of hours’ research. Asking about the features of a specific namespace is therefore pretty meaningless. Questions like the one above instead concern basic, core features of the C# language. Anyone that has used C# at all should reasonably be expected to be aware of them.

Questions like these don’t help you identify whether someone is a great developer or not. Seeing how candidates write actual code using a real IDE is the only thing that enables you to do that. These questions are purely intended as a baseline negative check to help you identify any manifestly-unqualified candidates where the other preferred means of confirming ability mentioned earlier are unavailable.

For more senior C# developers, I’d expect them to know more advanced, but still core, features of the language.

For a Lead Developer or Architect, I’d expect them to be able to speak meaningfully about:

  • Can you describe some Design Patterns? (E.g., please explain what the Singleton pattern is. What is the Decorator pattern? Tell me about a time when you used either of them.)

  • What are your thoughts on Inversion of Control / Dependency Injection? What about Test Driven Development? Do you always use them on every solution?* If not, what criteria do you use when deciding whether to expend the additional effort? What are the limitations of IoC? Which of the 22-plus frameworks that presently exist have you encountered on live projects?
    (* FWIW, I personally believe that using these presently-fashionable methodologies and techniques on every single project is about as misguided as never using them.)

  •  What is an abstract class?*
    (* The observant will notice that this last is a junior developer question. It’s amazing how many Architects can recite high-level summaries of chapters from the Gang of Four, but have lost touch with how coding actually works in the trenches. It gets more difficult as your career develops to keep in touch with the front line, but my personal belief is that you can only lead great developers if you actually share their pain by hitting a keyboard yourself once in a while. You certainly shouldn’t exhibit any signs of Hero Syndrome or micro-managerial tendencies by needing to be involved in writing every line of code yourself, and you shouldn’t try to do developers’ thinking for them. You need to trust and empower those you lead by allowing them the freedom to get on with any tasks you delegate to them using their own skill. However, it is important to implement a particular feature yourself every so often, purely to keep your own skills current in an ever-changing technical landscape. Otherwise you’ll eventually lose touch with emergent technologies. A clear sign that you aren’t getting enough personal keyboard time is when you begin to lose the basic knowledge that even junior developers working under you are expected to possess.)
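As a flavour of the kind of answer the Design Patterns question is fishing for, here’s a minimal Decorator sketch (the interface and class names are mine, purely for illustration):

```csharp
using System;
using System.Collections.Generic;

// The component interface both the concrete class and the decorator implement.
public interface IMessageSender
{
    void Send(string message);
}

// A trivially simple concrete sender that just records what it was asked to send.
public class InMemorySender : IMessageSender
{
    public List<string> Sent = new List<string>();
    public void Send(string message) { Sent.Add(message); }
}

// The decorator: it wraps any other IMessageSender and adds logging behaviour
// without modifying, or even knowing about, the concrete class it wraps.
public class LoggingSender : IMessageSender
{
    private readonly IMessageSender _inner;
    public LoggingSender(IMessageSender inner) { _inner = inner; }

    public void Send(string message)
    {
        Console.WriteLine("Sending: " + message);  // the added behaviour...
        _inner.Send(message);                      // ...then delegate to the wrappee
    }
}
```

Usefully, `LoggingSender` receiving its dependency through its constructor is also the exact shape that IoC/DI containers automate, so one small example like this can open up the Dependency Injection bullet point too.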

For any one topic that I consider myself experienced enough to assess others in, I have a list of about 200 such questions that represent basic knowledge I’d expect most people to know at each level. During an initial phone screen, selecting two or three such questions as baseline checks is the next best alternative to using Skype or Github to assess whether there’s any potential.

I wouldn’t lose sleep over anyone getting any one individual question wrong. (Especially if they’re honest enough to admit they don’t know a particular fact. The very best people show awareness of things they don’t presently know, whilst less skilled individuals are often paradoxically unaware of their own current limitations. That inability to perceive their own present weaknesses leads to them failing to ever improve. This is known as the Dunning-Kruger Effect.) I still prefer actually seeing a person code via Skype, Github or even YouTube to using coding trivia as an initial screening tool, but a phone screen built on basic questions is the next best option for the initial sift of the candidates that invariably apply to almost any openly-advertised technical position. You can apologise to the ones that find it ridiculously easy afterwards, and explain the reasoning behind your using such simple baseline checks.

Skype and Github are better options because they represent positive checks for ability, whilst asking baseline questions is merely a negative check to identify the absence of basic knowledge. However, if a candidate can’t answer any of the simple baseline questions appropriate to their level of seniority, that’s clearly someone that you won’t take forward to interview.

For anyone that attends an in-person interview, I’d always recommend seeing them code using an actual IDE. (If you’ve seen them do so via Skype previously, obviously you can skip this step). The best way to do this is to attach a projector to a laptop that’s loaded up with a full IDE and an internet connection, and watch them work. I once had a hiring manager tell me that they used pen and paper coding exercises instead “because they didn’t want the candidate to have access to Intellisense, and all those other ‘cheats’ that a full IDE provides”. No, I don’t understand the logic behind that one either. I found myself wondering if they’d ask a prospective master carpenter to bang in nails wearing a blindfold, and decide from how swollen their thumbs were afterwards which was the ‘best’ at their craft.

Just like when you’re using Skype, you can record candidates’ efforts to build a quick solution using free tools like CamStudio recorder if you like. That approach can be very useful if you work in a large organisation and have a wider selection committee that will need to review the interview later on. It can also feel a little like an unfriendly interrogation, though, so you need to decide what’s right for your own organisational culture. Personally, I’d only record a coding test if there were a need to show the recording to other members of your recruitment panel afterwards. And I would explain to the candidate that the purpose was to save them having to demonstrate their ability multiple times to different people.

It’s important to make clear that the problem you’re asking them to solve constitutes realistic work, but not real work on an actual business problem. The first activity is a meaningful test of their skill. The second would merely represent unpaid work, and that would risk making you look like a freeloader. One problem I’ve seen used in the past and that I thought was a pretty fair baseline check read something like this:

“Design a system that allows you to model shapes as objects. Each shape should be capable of outputting a text description of itself. The description given in each case will be:

‘I am a _________. I have ____ sides, and ____ corners. My colour is ______. Their lengths are _______.’

There will be appropriate Properties in any classes you use to model such shapes to store the information to be supplied in the blanks in the above description.

You can implement this solution using any UI you like. Have specific classes that describe the shapes ‘triangle’, ‘square’, ‘rectangle’ and ‘circle’”

A developer should be able to come up with a simple design that has a base (possibly abstract) class that provides any shared Properties like colour, numSides, etc. They can either implement a Method in that abstract class to allow a string description to be output, or they can override the default ToString method. Classes describing the specific shapes requested should inherit from this base. Extra points for having the perception to make appropriate properties/fields read-only in more specific classes (i.e., you don’t want consumers to be able to create a triangle with four sides). Points too for using inheritance where appropriate (e.g., realising that a square is just a more specific instance of a rectangle). Nothing too taxing, and no trick questions or tasks that would take an unreasonable amount of time. Just a simple problem that lets developers show they can clear the FizzBuzz bar.
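To show the shape of the kind of answer described above, here’s one possible sketch; the exact class breakdown and names are my own, and there are plenty of other defensible designs:

```csharp
using System;

// An abstract base holds the properties every shape shares; ToString() builds
// the required description from them.
public abstract class Shape
{
    public string Name { get; private set; }
    public string Colour { get; private set; }
    public int Sides { get; private set; }
    public int Corners { get; private set; }
    public abstract string SideLengths { get; }

    protected Shape(string name, string colour, int sides, int corners)
    {
        Name = name;
        Colour = colour;
        Sides = sides;
        Corners = corners;
    }

    public override string ToString()
    {
        return string.Format(
            "I am a {0}. I have {1} sides, and {2} corners. My colour is {3}. Their lengths are {4}.",
            Name, Sides, Corners, Colour, SideLengths);
    }
}

public class Rectangle : Shape
{
    public double Width { get; private set; }
    public double Height { get; private set; }

    public Rectangle(string colour, double width, double height)
        : this("rectangle", colour, width, height) { }

    // Derived shapes reuse this constructor to rename themselves.
    protected Rectangle(string name, string colour, double width, double height)
        : base(name, colour, 4, 4)
    {
        Width = width;
        Height = height;
    }

    public override string SideLengths
    {
        get { return string.Format("{0}, {1}, {2}, {3}", Height, Width, Height, Width); }
    }
}

// The "extra points" insight: a square is just a rectangle whose sides are
// equal, and its read-only dimensions can't be mutated into anything that isn't.
public class Square : Rectangle
{
    public Square(string colour, double side) : base("square", colour, side, side) { }
}
```

A candidate who reaches for something like this in a few minutes, and can explain why `Square` inherits from `Rectangle` rather than duplicating it, has told you far more than any trivia question could.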

As this is a blog post about assessment tools, it’s worth mentioning ‘online’ tests like ProveIT, Brain Bench, and Codility. These ‘tests’ fall into two main categories:

  • Tests that attempt to assess ability based on being able to instantly recall knowledge of obscure parts of particular frameworks.
  • Tests that try to assess an actual ability to write code, but not using an actual IDE.

My opinion on using obscure trivia to assess problem-solving ability is well-documented. I’m with Einstein on this one, who, when asked what the speed of sound was, once said:

“[I do not] carry such information in my mind since it is readily available in books. ...The value of a college education is not the learning of many facts but the training of the mind to think.” *

[ * New York Times, 18 May 1921 ]

I don’t consider memorising a lot of obscure and easily-obtainable facts to be a good indicator of programming ability. Nor do I consider not being able to recall such facts at will to be an indicator of a lack of ability. Developers have Google and reference books available on the job. I’m therefore only concerned with testing those aspects of a developer’s ability that those tools can’t provide.

That leaves those online ‘tests’ that attempt to assess coding skill, such as Codility. There’s nothing wrong with the basic idea of getting candidates to write code as a demonstration of their existing ability and potential. However, there’s a big difference between writing code using an actual IDE and attempting to write code using a web browser (which is how Codility works). In a real IDE, you have Intellisense, code snippets, meaningful object navigation (e.g., if you place the caret on the usage of a class or property in Visual Studio and press the F12 key, it’ll take you to where that class/property is implemented), colour coding of keywords and objects, compilation checking as you type, etc, etc. Codility advocates believe that the “compile solution now” button at the bottom of the browser window amounts to the same thing. It simply doesn’t. Going back to my earlier analogy about inappropriate ways to assess carpentry skills, you’ve merely gone from using a blindfold to asking the candidate to wear sunglasses in a dimly-lit room.

Codility tests run in a web browser

The main problem with Codility et al, however, is simply this. They don’t give you anything that you don’t also get by watching a candidate solve a real problem using a real IDE. Because of this, you invariably find that these tools are preferred by interviewers that don’t possess skills in the language concerned themselves. Such interviewers don’t use an IDE / laptop with a projector approach, because they simply wouldn’t understand what it was they were looking at. By using Codility instead, they’re generally looking for an ‘easy’ way to understand whether a given solution is ‘right’ or ‘wrong’, without having to go to the trouble of understanding why such a value judgement has been arrived at themselves. Good candidates are aware of this, and the best of them will be concerned that if you only understand how good they are because some automagically-marked test tells you what to think, how are you going to be able to fairly assess their performance on the actual job in the absence of such feedback?

Everyone knows that good interviews are a two-way street. Candidates are assessing you and your organisation just as you are assessing them. Sending a signal that you don’t understand what it is that they do can damage your credibility and your employer/manager brand considerably. So, if you’re not technical yourself (and some managers aren’t), I’d generally recommend instead asking one of your existing staff that you trust to be able to make a meaningful assessment of a candidate's ability to accompany you when assessing candidates’ technical fit.

A second problem with Codility, in my opinion, is that solving discrete problems using technology in the real world rarely works in such black and white terms as one solution being ‘more’ or ‘less’ right than another. There are generally a great many ways to solve any one problem. Which one(s) is/are ‘correct’ is all about context. Tests that focus on an overly-narrow set of criteria when determining success may not always identify the best candidate, even if they identify someone that produces the fastest solution, or the one that uses the least (or most) lines of code to solve a problem. E.g., if someone were to write 123 << 4 to get the result 1968 instead of writing 123 * 16, that might be the genius you need to shave nanoseconds off calculations within the firmware for a graphics card, or they might just be That One Guy that writes unreadable code that produces hard-to-find bugs. (Mostly, though, they’ll just be someone that doesn’t realise low-level arithmetic tricks like bitwise operators are largely meaningless in languages like C#, where high-level code is compiled to MSIL and then JIT-compiled into machine code specific to the hardware it’s running on, with the JIT applying exactly these sorts of micro-optimisation for you.)
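For anyone who wants to check the arithmetic, the two forms really are equivalent; the only thing the shift version optimises away is readability:

```csharp
using System;

int viaShift = 123 << 4;     // shifting left by 4 bits multiplies by 2 to the power 4...
int viaMultiply = 123 * 16;  // ...which this version states in plain terms

Console.WriteLine(viaShift);     // 1968
Console.WriteLine(viaMultiply);  // 1968
```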

You can try Codility for yourself here, and I'd strongly recommend that you do so if you're considering using it to fairly assess candidates. It's not enough just to get someone else to look at the test for you, unless you ask your chosen guinea pig to work under the exact same time constraints as candidates will be asked to work to. That also means they only get one shot at the test, just like candidates.

In the interests of debunking The Emperor's New Code, when I tested Codility out as an assessment tool I found that I didn't produce a 100% solution myself first time in the time allowed. I therefore felt it'd be unfair to ask candidates to do something that I myself couldn't.

I doubt that many people could produce an 'optimal' result in the timeframe allowed, particularly when you don't get to see the criteria that will be deemed to constitute an 'optimal' solution before submitting your answer. When they only have a short window to think about the problem, candidates will be inclined to focus on providing a solution that works rather than one that shaves milliseconds off of the runtime. And even where candidates do provide an 'optimal' solution, there doesn't seem to be much allowance for readability in the simplistic percentage score returned.

I suspect that most 100% results that users might see from this tool may be best explained by the fact that there are many solutions to the tests published online, and some candidates will be inclined to copy one of those.

This deliberately-obscure and unreadable solution scores 100%


This shorter and more readable solution also scores 100%

My overall conclusion: companies that let computer algorithms select the best people to work for them rather than the other way round may well be disappointed by the results.

Sunday, 30 June 2013

BranchedMigrator : A Database Schema Management Tool

NB: Owing to YouTube's policy of trying to force Google+ on YouTube members, I no longer
        host content on YouTube. Apologies for the inconvenience.

I recently got round to doing a bit of work on an open source project of mine called BranchedMigrator. Inspired by Sean Chambers’ wonderful FluentMigrator, it’s a database schema versioning tool for use in continuous integration environments. 

You can download a copy here

Saturday, 2 March 2013

A Guide to The Cloud, Part 1 - For Muggles

In recent years, I’ve been involved in a number of cloud computing projects. Most recently, this included a very enjoyable project working for a forward-looking games company based in Glasgow. This blog post is intended to dispel some of the myths that linger about the various technologies that enable cloud computing projects to work. The content in this first part is primarily aimed at non-technical managers looking to get an understanding of what the cloud can do for them. In Part 2, aimed at a more technical audience, I’ll delve more deeply into the underlying technologies.

Let's Make Lots of Money

“The Cloud” is one of those buzzword phrases that’s been bandied around an awful lot. In the process, it’s had its meaning stretched and diluted a great deal. There’s been a lot of misinformation about what does and does not constitute a cloud computing project / platform. Common aspects of the various definitions I’ve encountered have included:

  • Applications that are web-based.

  • The hosting of those web applications, and the databases that underlie them, on remote hardware that isn’t located in the same building as the development team.

  • Lower hardware maintenance costs.

  • The ability to scale an application as its user base grows.

A difficulty with some of the discussion that has fallen under the “cloud” umbrella is that some or all of these qualities are also found in projects that are not true “cloud” applications, and never will be.

For the avoidance of doubt, when I speak of cloud computing projects, I am talking specifically about projects that encapsulate all of the following discrete qualities:

  • They are web applications that are accessible across the open internet, and are designed from the ground up to be deployed to dedicated cloud-computing platforms. This involves considering scalability, security and ease of deployment (discussed below) as primary design goals. It is not simply taking an existing Java EE 6 or ASP.Net application that was once hosted on internally-managed hardware and deploying it to a small number of servers in a single data centre.

  • Projects where the hardware to which the above solutions are deployed is not directly managed by the party that owns/writes the software. That is, an organisation that deploys a solution ‘to the cloud’ typically doesn’t know or care about where the physical server upon which their application runs resides, beyond broad geographical considerations. So, whilst it’s often possible to choose between “Asia”, “Europe”, “North America”, etc, when deciding roughly where your application will be hosted, if your hardware management is any more fine-grained than that then you are not using cloud technologies at all; you’re simply remotely-managing hardware that you are still heavily-invested in maintaining yourself. 

  • Solutions where you can scale your application to serve a greater number of users quickly and reliably. This typically involves a combination of leaving managing any physical hardware up to the third party you purchase cloud hosting services from, and an awareness within the development team of scalability issues as they apply to software design.

In Part 2 of this blog post I’ll get into some specific technical implementation details involving one particular set of cloud technologies: Windows Azure and ASP.Net MVC, in conjunction with SQL Azure. But first, let’s have a look at some general design considerations that apply whichever cloud platform you are using, and that should be clearly understood by technical and non-technical managers of cloud computing projects alike:

Security

I’ve worked on a range of types of application that have been used for a wide variety of purposes, from the very most trivial you can think of to mission-critical applications that needed to work every single time. Examples of the diverse range of problems I’ve been involved in solving include:

  • Automating precision engineering manufacturing processes for producing delicate parts that keep satellites in orbit
  • National power utility infrastructure management
  • DV-cleared national government work
  • A national police project
  • Investment banking applications aimed at Anti Money Laundering
  • A system for designing custom zombies for use in online games (seriously)

All of which is to say, I fully appreciate the need for security and I have a wide enough grounding in a diverse range of applications that required same to be able to make an informed judgement about whether cloud technologies are sufficiently well-protected to be able to use for each of the above discrete applications. I get it. Really I do. (Hey, there’s nothing more important than protecting society against the ever-present threat of a zombie apocalypse, right?)

I suspect that most if not all of the Public Sector and banking organisations with whom I’ve worked would be horrified at the idea of storing their sensitive data on hardware they didn’t physically control. (Even though many organisations in those sectors experience very serious problems anyway, even when working solely with hardware they get to fully manage in ways with which they are more comfortable.)  There’s something falsely-comforting to the uninitiated about having physical control of actual touchable hardware. It’s the same misguided illusion of security that makes some people store all their life savings under a mattress rather than putting it in a bank for safekeeping.

As well as the psychological difficulties some organisations/managers have in letting go control of physical hardware, in Europe specifically there are also some rather ill-conceived and as yet legally-untested rules concerning the processing of data outside the EU. So, if you operate there you might be forgiven for wondering whether you are allowed to store sensitive customer information on physical hardware that may be located outside Europe, even if you might wish to do so. Like the EU cookie law, it’s nonsense that’ll get over itself soon enough. But still, misguided and vague concerns like these allow people with a predilection to do so to spread worry and doubt about the security and legality of using cloud technologies they don’t fully understand, to solve problems they’d rather would just go away.

Without getting into the technical details too deeply at this juncture, in summary it is possible to encrypt data to a level where even the most sophisticated state/non-state actors can’t access it. If desired, it’s possible to encrypt all of the data you store on cloud servers, or just those parts that are particularly sensitive. (Passwords are a special case: they should be stored as salted hashes rather than being encrypted at all.) Implementation details aside, data in transit is typically protected using public key cryptography, which is used to establish the symmetric keys that then encrypt the traffic itself, whilst data at rest is usually protected with symmetric ciphers such as AES (and new approaches can and are being developed all the time). It’s the same process that allows you to safely access your bank account online, and make purchases from online retailers, without risk of a third party intercepting and misusing your details. It’s safe: if it weren’t, there would be a lot more online fraud than there is.
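To show just how straightforward the mechanics are in .Net, here’s a sketch of protecting a single sensitive field with AES before it goes to cloud storage. The class name is mine, and key management, which is the genuinely hard part, is glossed over entirely: a real system would keep the key in a secure key store, never alongside the data.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class FieldProtector
{
    // Encrypts a sensitive field with AES before it is written to cloud storage.
    public static byte[] Encrypt(string plainText, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = key;
            aes.IV = iv;
            using (var encryptor = aes.CreateEncryptor())
            {
                byte[] bytes = Encoding.UTF8.GetBytes(plainText);
                return encryptor.TransformFinalBlock(bytes, 0, bytes.Length);
            }
        }
    }

    // The reverse operation, for when the data is read back out.
    public static string Decrypt(byte[] cipherText, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = key;
            aes.IV = iv;
            using (var decryptor = aes.CreateDecryptor())
            {
                byte[] bytes = decryptor.TransformFinalBlock(cipherText, 0, cipherText.Length);
                return Encoding.UTF8.GetString(bytes);
            }
        }
    }
}
```

Note the caveat above about passwords: for those you’d reach for a salted hash (e.g., via Rfc2898DeriveBytes) rather than anything reversible.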

Some organisations that operate in the cloud include: Amazon, Google, Microsoft and the National Security Agency. So, if anyone ever tries to tell you that you shouldn’t use a cloud solution purely on the grounds of security, I suggest you point them at those examples and invite them to come up with a more supportable rationalisation for their preferred approach.

Scalability

Aside from security, scalability is probably the second most important concern for cloud applications. Scalability is the ability of a given application to adequately and reliably serve the needs of users under a diverse range of conditions. This involves several discrete design considerations, some or all of which may affect your project, depending on its nature:

The ability to support many concurrent users

First and foremost, your application must be able to support many thousands of concurrent users just as well as it supports individual users in isolation. This design consideration is very easy to overlook when you’re working on a Proof of Concept, where you’re mainly focused on providing features and the only people developers need to satisfy are their peers in the development team (hopefully augmented by some independent testers, who will have the luxury of working on a version of the system that has not yet gone live, and who are consequently not using the system under stress). To be able to have confidence that systems work under the stress of heavy concurrent use, it’s important to test for that specific design goal using appropriate methods. There are various ways to do so that typically involve using a test harness to simulate such use; more on the technical implementation details of that in Part 2.

Considering the strengths of multi-tenancy vs single tenancy

Most software written today is intended for a single end-user organisation. If that's the type of project you're working on, you can dispense with this consideration altogether, since it doesn't affect you. For some types of application, however, the same basic product gets delivered to multiple end-user organisations, each with its own user base and subtle usage considerations. In these circumstances, you must weigh the relative benefits and drawbacks of letting different organisations share instances of your application (known as multi-tenancy) against giving each customer their own instance (known as single tenancy).

There's no 'right' or 'wrong' answer that fits every situation. However, some things to consider include: will different customers want to use different versions of your application at the same time? For example, if customer 'A' buys version 1 of your application, and some time later customer 'B' purchases your latest improved version with additional features (version 2), are you going to move every customer presently on version 1 up to the latest version for free, to satisfy your latest customer's desire for the latest version? And if so, are your existing customers going to be happy to make the move?

The answers to these questions will dictate whether you should provide every customer with their own instance of your application, or attempt to cater for the needs of multiple organisations with one shared instance.
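As an illustration, here is a hedged sketch of how a multi-tenant application might work out which customer a request belongs to. The tenant table, hostnames and version numbers are all hypothetical; real systems commonly key on subdomain, header or login realm.

```python
# Multi-tenancy sketch: one shared deployment serves many organisations,
# and the request's host name decides whose data and version rules apply.
# All names and versions below are illustrative placeholders.

TENANTS = {
    "supermarket.example.com": {"tenant_id": 1, "app_version": "2.0"},
    "metalparts.example.com":  {"tenant_id": 2, "app_version": "1.0"},
}

def resolve_tenant(host):
    # Look up the organisation behind this request; unknown hosts are
    # rejected rather than silently served someone else's data.
    tenant = TENANTS.get(host)
    if tenant is None:
        raise KeyError(f"unknown tenant host: {host}")
    return tenant

# Note that two customers can even be pinned to different application
# versions -- one possible answer to the versioning question above.
assert resolve_tenant("supermarket.example.com")["app_version"] == "2.0"
```

Under single tenancy the lookup disappears, but you pay for it with one deployment (and one upgrade cycle) per customer instead.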


Deployment

As new customers of your cloud-hosted solution come on board, you'll need to consider how to provide them with the service they're paying for. Whether you take a multi- or single-tenancy approach is a separate consideration; you also need to work out how to get from the point of a customer requesting your service to that service being up and running. This typically involves, but is not necessarily limited to:

  • Setting up a database to contain the end-user organisation’s information.

  • Providing an instance of the web application that is specific to the end-user organisation. E.g., you might provide the exact same stock management solution to a supermarket as you do to a company that makes metal parts; if you do, the supermarket is unlikely to want to direct its customers to the metal parts firm's address to check the price of milk at their local superstore.

  • You don’t want to get too deeply into managing physical hardware (not having that headache is one of the advantages that cloud computing is meant to bring you). However, you may still want to take an interest in the general geographical area that your solution will be deployed to. If you acquire a customer that has a large user base in Asia, for reasons of bandwidth management you’re unlikely to want to route all the traffic to that customer’s instance of your solution via the North American cloud hub that you used to develop and test your solution.

Most importantly, as an organisation that provides a cloud-hosted Software As A Service solution to others, you do not want to waste a great deal of time and effort getting developers involved in the above matters at the time of deployment. Planning and preparation for deployment needs to be done in advance if it’s to be executed efficiently.

Ideally, you’d like it to be the case that your salespeople can speak with potential new customers, and for those customers to be up and running with a minimum of fuss as soon as a contract for service has been signed. You shouldn’t need a DBA to set up the database, a developer to create a copy of the web application, and a tester to make sure it all still works as intended, just to supply something to customer ‘B’ that you’ve already supplied to Customer ‘A’.

Fortunately, there are solutions to the deployment process that involve minimal work at deployment time. I’ll get into the technical details more in Part 2, but for now I’ll just note that there are tools that, provided they’re used correctly, make the process as simple as running a single script to achieve All Of The Above goals.
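To give a flavour of what that single script has to orchestrate, here is a sketch in Python (purely for illustration; the article's actual tooling is covered in Part 2, and the functions `create_database`, `choose_region` and `deploy_app_instance` are placeholders of my own, not a real API).

```python
# Provisioning sketch: the whole onboarding flow behind "run one script".
# Every function below is a placeholder for whatever your cloud tooling
# actually provides -- the point is the shape of the flow, not the calls.

def create_database(customer):
    # Each customer gets a database for their organisation's data.
    return f"db_{customer.lower()}"

def choose_region(customer_location):
    # Route each customer's instance to a nearby cloud region
    # (recall the Asia bandwidth example above).
    return {"asia": "ap-southeast", "europe": "eu-west"}.get(
        customer_location, "us-east")

def deploy_app_instance(customer, database, region):
    # Stand up a customer-specific instance of the web application.
    return {"customer": customer, "database": database, "region": region,
            "url": f"https://{customer.lower()}.example.com"}

def provision(customer, location):
    # The single entry point: no DBA, developer or tester required
    # at deployment time, just the script.
    db = create_database(customer)
    region = choose_region(location)
    return deploy_app_instance(customer, db, region)

instance = provision("Acme", "asia")
assert instance["region"] == "ap-southeast"
```

The value isn't in any individual step, it's that the steps are scripted once, in advance, so that onboarding customer 'B' costs no engineering time at all.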

In Part 2 I'll discuss in detail how you can use a combination of PowerShell, NAnt, and FluentMigrator to automate the deployment process. Key to the success of these is one final piece of the puzzle…

Continuous Integration and Version Control

The Joel Test has been kicking around for quite a while now, and whilst it is showing its age a little, many of the savviest developers still use questions from it when deciding where to work. (Side note: yes, believe it or not, the best developers do still get to choose, even in this economy. Think of the number of businesses that don't use the internet or IT in some way: that's the number of places good developers can't find work. Every organisation that does use IT is competing with you for the best talent.) There aren't too many organisations still operating today, thank goodness, that don't provide basic tools like bug tracking and source control. Rather fewer have testing teams that are completely independent of the development team. Fewer still ensure quiet conditions for developers, and in my experience almost no organisation has been capable of doing daily builds, or builds in one step, at will.

The ability to deploy easily and at will is covered above. Related to that, however, is the question of how you will support multiple versions of your solution, some of which may be in use by different customers simultaneously. Part of the reason most organisations aren't able to deploy different versions at will is that, as noted earlier, most software today is simply written for one group of users and will only ever be used by, and updated for, that one specific group. If that's the category your project falls into, you don't need to read any further. For organisations that produce solutions used by more than one customer at a time, sooner or later you're going to have to delve into version control and continuous integration.

Continuous Integration is the process of managing the features being developed by your R&D / development team, and determining which version(s) of your product will be the beneficiaries of the new features and bug fixes that are continually being developed. One day your R&D team might be working on Super Duper feature 'X', which will only be made available to new customers or those that upgrade to your latest version. Another day those same developers might be addressing a critical bug, the fix for which will be rolled out to users of all versions as soon as it's been through development and testing.
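The decision being described can be sketched like this; the version numbers and the policy itself are illustrative, not prescriptive:

```python
# Version-targeting sketch: a new feature lands only in the newest version,
# while a critical fix is back-ported to every version still in use.
# The version list and policy below are purely illustrative.

SUPPORTED_VERSIONS = ["1.0", "1.5", "2.0"]   # oldest to newest

def target_versions(change_type):
    # Features go forward only; critical fixes go everywhere.
    if change_type == "critical_fix":
        return list(SUPPORTED_VERSIONS)
    return [SUPPORTED_VERSIONS[-1]]

assert target_versions("feature") == ["2.0"]
assert target_versions("critical_fix") == ["1.0", "1.5", "2.0"]
```

A CI server like the TeamCity setup discussed in Part 2 is what turns a decision like this into automated builds against each targeted branch.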

There are tools available that automate and manage this process as a separate activity to development. I’ll discuss one of these tools – TeamCity – in detail in Part 2.

Friday, 4 January 2013

Turning Coffee Into Code : Delonghi Magnifica ESAM04.110.S Espresso Maker

OK, you got me. This blog is neither about software development in general, nor .Net in particular. But it is about a topic that's close to many developers' hearts: coffee.


I'll preface this by saying that I'm not a coffee snob, though I do have some pretty specific ideas about what I enjoy in a good cup. I don't like the stuff they sell at Starbucks, for example, and I find Costa's brew way too bitter. The nicest store-bought coffee I've tasted actually comes from McDonald's – their blend is really pleasant. If I didn't have to stand in a line six people deep, all trying in vain to order the food they actually want whilst some poor underpaid student embarrassedly tries to upsell them Things They Don't Want, I'd probably grab a coffee at McD's more often.

My usual brew at home until recently has been Gold Blend instant coffee. That said, I've been working in the centre of Glasgow with a great bunch of guys at a games company for the past couple of months (the first time I've been back there to work in a while). Being surrounded by every conceivable type of coffee outlet, including all of those mentioned above within 100 yards of my office door, has got me thinking about coffee seriously again. That's led me to invest in a good bean-to-cup machine, so that I can enjoy my coffee just how I want it whilst working at home once my present contract has finished.

After some research, I chose the following machine: the Delonghi Magnifica ESAM04.110.S espresso maker:

I've tried other coffee machines that use pre-ground coffee in the past. To be honest, the results were not at all good. As mentioned above, I find some brands of instant coffee very nice. However, if you're going to attempt to make the real thing, then buying pre-ground beans is frankly a step backwards rather than forwards from instant coffee. Coffee beans lose their flavour very quickly indeed: within seven days of being ground. So, unless you're grinding your own as part of the process, you're probably not getting anywhere near the intended flavour. Pre-ground coffees tend to have been sitting in a warehouse for anything up to six months by the time consumers get them, so they're already well past their best, however they've been packaged and whatever temperature or atmosphere they've been stored in.

Other unfavourable aspects of the machines I'd tried in the past (which typically cost less than £50) included the fact that they just didn't get the water hot enough to make a great cup. So, even if the ground coffee going in had been right, the results would still have been mediocre. This time I thought I'd invest in a 'proper' bean-to-cup machine, to see if better results were achievable at home.

The Good Stuff

1) Takes the grind out of grinding

The model above alleviates the issue of ground beans quickly losing their flavour by having an internal grinder that you can fine-tune to your own taste. You can set it to grind the beans more finely if you want a smoother brew (in practice, I've found that the median factory setting strikes just the right balance between a quick brew and a grind fine enough to get all the flavour). Because the coffee is ground in just the amount needed for each cup, right before it's used, the resulting flavour is generally excellent.

With this particular model, you can also use pre-ground coffee if you really want to. However, since whole coffee beans are generally cheaper than or the same price as pre-ground coffee, and taste very much better, there's really no incentive to.

2) Hot stuff

As it only makes one or two cups at a time, the coffee produced is always piping hot. You can also adjust that setting to suit your own tastes: for me, the hottest setting felt about right.

You can also adjust the strength of the coffee itself. I quite like mine about 25% of the maximum possible strength: strong enough to get a nice caffeine buzz, but without being overpowering or resulting in too bitter a brew à la Starbucks or Costa!

3) Lots of choice, and beans are inexpensive

For beans, I've tried two varieties thus far: Italian Lavazza espresso beans, and a blend called Lazy Sunday by Taylors of Harrogate. 

They're both nice and give a mild, creamy coffee. However, Lazy Sunday just edges it for me. I've also bought some Illy beans to try in the coming weeks; we'll see how that goes. I'm looking forward to trying some other blends after that to compare and contrast: that's all part of the fun.

In essence, there are two types of bean: Arabica and Robusta. There are also lots of different types of roast, typically graded from 1 (Mild) through to 5 (Dark).

Arabica beans are the older and generally milder of the two, with chocolatey tones; despite the name (which comes from the Arabian Peninsula, through which the beans were historically traded), they're grown at high altitudes in places like Ethiopia, South America and the Caribbean. If your tastes are anything like mine, they'll probably suit your palate best. Robusta beans are hardier, grow at lower altitudes, and taste earthier and stronger than Arabica, with considerably more caffeine. Some blends use a mixture of the two types: Lazy Sunday is in that category.

The roast gradings of 1 (Mild) to 5 (Dark) are a little easier to get a handle on. Counter-intuitively, the middle grade of 3 (Medium) is actually the most flavourful, and the one I like best. Roast a bean lightly and the cup will be very mild indeed; roast it very darkly and it'll taste smoky, as more of the bean's original character is burnt away in the roasting process. (Contrary to popular belief, roasting itself has only a small effect on caffeine content.) A medium roast keeps the caffeine kick with a nice, mild flavour: a winning combination for me!

4) Easy to clean

The Magnifica machine itself has proven easy to clean and maintain. The coffee grounds are condensed into little discs during the brewing process: one for each brew made.

It's easy just to empty the container out whenever the appropriate warning light comes on (about every fifteen cups or so), then give the container a rinse out and dry it ready for the next brew.

The instructions advise cleaning out the internal filter around once a month, and not using a dishwasher or any form of soap for cleaning (as soap can jam the very fine metal filter the machine uses). In practice, I find it easiest just to give the filter a rinse under the tap and dry it with a paper towel whenever the grounds container needs emptying; that sets you up for the next few days' worth of coffee.

The Bad Stuff 

1) Price 

The machine itself costs in the region of £350. There are various models in the Delonghi range that go down to about £275, and as a newcomer to espresso machines it's not easy to see what the benefits of one model over another are. Nor is it as simple as choosing the newest, whizziest-bangiest option: some of the older models in the same range are more expensive and give better results in terms of reliability, performance and features than newer ones.

I found this site to be quite useful for providing an independent guide, and for allowing you to directly compare and contrast the different models that are available.

In the end, I chose the model I did mostly because other Delonghi models in the same range had a lot of very good customer reviews on Amazon, and because it happened to be the model in the Delonghi range that was available from my local electrical retailer. With Christmas approaching, and consequent delays in the post, I wanted to enjoy my machine whilst I had a couple of weeks away from my current contract over Christmas and New Year, rather than having to wait for an item to be delivered from Amazon (which I'd have been quite happy to do at other times in the year).

I've been very happy with the model I got, though I'm sure the cheaper one available through Amazon would have been just as good.

2) Designed by committee

When I was researching which machine was right for me, I noticed a lot of variety in the options available, with costs ranging from about £250 to £900. The main thing that seemed to differentiate the more expensive models was that they had digital controls, and the ability to store and retrieve different settings. That didn't seem particularly important to me, since once I'd found the settings I liked I intended to keep them that way. However, if you're buying a machine for an office where each person has their own individual taste, the ability to store and replicate multiple preferences rather than fiddling with the controls manually each time might make more sense.

The unintuitive way that machines in the £250-£500 bracket work is my only real quibble about the Magnifica. It appears to have been designed by the same guy that was responsible for Windows Vista: the controls are horrible, and it's very easy to get confused about how to use the machine at first, particularly for accessing the type of features you'll only set infrequently, such as the water hardness setting.

These are the controls:

I won't go on ad infinitum about each individual feature, but describing just one in detail may help demonstrate the issue.

The coffee comes out of the two nozzles marked 'A' in the photograph above. These nozzles are also where water comes out around twenty seconds after you switch on the machine, as it goes through a cleaning cycle. That same cleaning cycle is repeated (with attendant unexpected hot water flow) when the machine turns itself off after two hours on standby. There's a drip tray on the machine, but it still makes a bit of a mess when a full flow of water is deluged into it unexpectedly. So, I've taken to keeping a cup positioned under those nozzles whenever I'm not making a brew, just in case it catches me unawares. The cleaning cycle can be sort of useful, as you can use the water to pre-heat your cup(s), provided you remember to position them under the nozzles before turning the machine on!

So far, so good.

Then we come to the milk frother 'wand' (labelled 'B' in the photo above). This device is meant to provide steam to froth milk for making cappuccinos. And it does. Provided you master the magical ritual for making it work first (I suspect that's how it came to be known as a "wand").

The process for getting frothy milk involves first pressing the button marked 'C', then turning the knob marked 'D' anti-clockwise. But don't go putting your milk cup under the wand straight away or anything crazy like that, because what comes out of that 'wand' at first is - yes, you've guessed it - boiling hot water!

To get the process to work properly, you need to get that same cup that you use to catch unexpected water flows from the coffee nozzles, and place it under the wand. Then you turn the knob and let the boiling water turn to steam over the next ten seconds or so. Only then do you switch over to your pre-prepared cup of milk (which should only be about a third  full, since it expands in volume as you steam/froth it). If you forget to press the button marked 'C' before you turn the knob 'D' you'll be waiting a long time for that flow of boiling water to turn to steam. I personally found that a little pointless initially: if knob 'D' had been dedicated to the function of producing steam there would have been no need for button 'C' at all.

When you're done steaming the milk, naturally you're going to want to make some coffee to put that freshly-frothed milk into. However, Cappuccino By Vista has thwarted you once again. Because it's at this juncture you'll find that the machine is too hot to make coffee, and needs to be left alone for ten minutes or so to cool down (by which time your milk will be flat and cold).

There is a solution that doesn't require access to a Tardis to go back in time and make the coffee first. That solution is to press button 'C' again until the light next to it goes out. This stops knob 'D' from producing steam and puts it into "hot water" mode. Whilst in this mode, you place a cup under the wand and turn knob 'D' until steam stops flowing and boiling water resumes (this takes about ten seconds again). This has the corollary effect of cooling the machine down enough to make coffee once more. See? I knew there must be a 'logical' purpose to requiring two controls (a button and a knob) for the frother wand when at first it seemed only one should be needed.

I think you'll agree the above isn't entirely logical or user-friendly. It has the feel of a Beta Version about it. When researching which machine to buy, I read a lot of opinion to the effect that "coffee makers just do what it takes to make great coffee; whether they're easy or intuitive to use is a secondary consideration". I don't agree with that outlook: great tools don't just need to produce great results, they also need to be user-friendly in order to make the power they offer available to us mere mortals. So, whilst the sometimes-obscure controls on the Magnifica may well double up as a useful test of debugging skills for new developers should you decide to install one in the office (see, I'm getting back on topic now), it's still a pretty inexcusable limitation in a machine that costs £350.


Bottom line, I'm glad I got one. It's nice to be able to enjoy a decent cup of Joe or two in the morning before heading out to face the day. And, minor quibbles aside, it is actually pretty easy to use to get a normal cup of coffee once you've got it set up. All-in-all, I'm pleased with the results and would recommend it.