Recently at work, I’ve been working with my colleagues to set up a Total Quality software environment. I’ve been learning a lot from my peers about topics such as VMware, RTI and Code-First EF. (I’d previously used Schema-First, but Code-First brings its own advantages and challenges.) What I brought to the party was some project experience in:
- Continuous Integration platforms (specifically in this case, TeamCity.)
- Unit Testing and Test-Driven Development techniques.
- Dependency Injection to support writing testable code.
- NAnt scripting.
- Futurama.
We’ll get to that last one in a minute. Let’s go through the others in order first.
Continuous Integration (CI)
Every geek who’s any nerd is using it these days. But lots of development teams and companies still avoid it, imagining it to be too difficult, too time-consuming, or just not worth the hassle. (For that matter, those same fallacious criticisms can be levelled at every other item in the list above too. Except Futurama.) A decade ago people used to say the same things about Source Control; thankfully there aren’t too many teams I encounter these days that haven’t got their head around how important that is.
Some teams aren’t even sure what CI is, what it does, or what advantages it brings. They’ve always worked by having developers just produce software on their own PCs, and they treat any time-consuming fallout when it comes to making that software work in the real world as part of the cost of doing business.
OK, so here’s the unique selling point if you’re trying to make the case for introducing this where you work. Are you ready? What CI adds to your team’s game is simply this: repeatable, verifiable deployment.
Unit Testing and Test-Driven Development techniques
Unit Testing has been around for a Very Long Time. I know a lot of people who are otherwise very good developers but who “don’t see the point” of unit testing. And I have been such a developer myself in the murky past.
The misconception that unit testing is pointless generally comes down to a few fallacies:
- They believe that their own code always works.
- The wider team and stakeholders place more value on quantity of new features than upon quality of existing features.
- They believe that they will always personally be around to ensure that their code doesn’t get broken in the future.
Like most good fallacies, there’s just enough truth in most of these to preserve the illusion that unit testing doesn’t provide enough advantages to the person who has to implement it; at least not when compared to the opportunity cost of learning how to do it, or the kudos of pushing out new features (that don’t work as intended).
Part of the reason more developers don’t give it a go is that you have to change the way you write code. Most code I’ve seen in the wild is tightly-coupled. This is a phrase that many developers are familiar with, but in my experience vanishingly few know what it means. Basically, it means that if you are writing Class A, and your class depends upon Class B to do its job, your class will instantiate a new instance of Class B itself. This means that if Class B stops working, all you (and your users) know is that your class “doesn’t work.” They won’t care if your code is perfect, and it’s just that damn Class B that let you down.
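To make that concrete, here’s roughly what the tightly-coupled version looks like (a minimal sketch; the method names are mine, purely for illustration):

```csharp
// Tightly coupled: ClassA creates its own ClassB, so ClassA cannot be
// exercised (or tested) without a real, working ClassB behind it.
public class ClassB
{
    public string DoJob()
    {
        // Imagine this talks to a database or a web service.
        return "real result";
    }
}

public class ClassA
{
    private readonly ClassB _b = new ClassB();

    public string DoSomething()
    {
        // If ClassB breaks, callers just see that "ClassA doesn't work".
        return _b.DoJob();
    }
}
```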
So, when doing test-driven development, developers need to add another couple of skills to their arsenal. Which brings us to…
Dependency Injection (DI)
One type of tight coupling is described above. Code is also tightly coupled when it is too closely tied to one UI. So, if you’re a developer who puts all their business logic in code-behind files or controller actions, your code won’t be testable, because it needs the UI to be present before it can do its job, let alone be verified.
Fortunately, there are frameworks and coding styles out there that help developers implement loose coupling, to make their code independently testable.
The basic idea behind all of these is that instead of your Class A consuming Class B directly to perform some function, it consumes Interface B instead. That is, some object that Class A doesn’t instantiate itself satisfies an interface representing the job Class B was doing for Class A. Typically this is achieved by making the constructor of Class A look like this:
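(A minimal sketch, with IClassB standing in for “Interface B”; the member names are mine.)

```csharp
// The job that Class B does, expressed as an interface.
public interface IClassB
{
    string DoJob();
}

public class ClassB : IClassB
{
    public string DoJob()
    {
        // The real implementation: database, web service, file system, etc.
        return "real result";
    }
}

// Loosely coupled: ClassA asks for *an* IClassB in its constructor
// and never instantiates one itself.
public class ClassA
{
    private readonly IClassB _b;

    public ClassA(IClassB b)
    {
        _b = b;
    }

    public string DoSomething()
    {
        return _b.DoJob();
    }
}
```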
The above pattern is known as Constructor Injection. What it gives you is the ability to swap out whatever is implementing Interface B when it comes to unit testing Class A. So, instead of the object that really does implement Interface B in live use, you can use what is called a mock instance of Interface B. That is typically some object that always gives you anticipated responses. So you can concentrate on testing Class A. That way, any errors you see can be wholly attributed to Class A.
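For example, here’s roughly what such a test can look like, continuing the sketch above (using NUnit and a hand-rolled mock rather than any particular mocking library):

```csharp
using NUnit.Framework;

// A hand-rolled mock: it always returns a canned answer, so any failure
// in the test below can only be ClassA's fault.
public class MockClassB : IClassB
{
    public string DoJob()
    {
        return "canned response";
    }
}

[TestFixture]
public class ClassATests
{
    [Test]
    public void DoSomething_ReturnsWhateverItsDependencyProvides()
    {
        var classA = new ClassA(new MockClassB());

        Assert.AreEqual("canned response", classA.DoSomething());
    }
}
```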
When you write your classes using the Constructor Injection pattern demonstrated above, DI frameworks provide concrete implementations of objects that implement interfaces at runtime. So, you 'magically' find a usable implementation of Interface B available in Class A's constructor. As the developer of Class A, you don't care particularly about where that implementation of Interface B comes from; that is the responsibility and concern of the developer of Interface B and your chosen DI framework.
This is just one of the techniques that developers moving from code that "just works" need to learn if they want their code to be verifiable. It can be difficult to embrace, partly because writing code that "just works" is hard enough already, and partly because using these techniques opens up the possibility of developers having to confront errors in their own code. But unit testing also brings with it a huge number of advantages: the ability to prove that a given piece of code works, not just at the time of writing but every single time you build, and protection against your work being modified in adverse ways by subsequent developers.
Unit testing and Dependency Injection are whole topics on their own, so I won't say more about them here. (I'll perhaps save that for future blogs.) With regard to understanding tight and loose coupling, though, I'll leave you with an analogy. If a traveller wants to get to some destination, they don’t need to know what the bus driver’s name will be, the vehicle registration, what type of fuel the bus uses, etc. They just need to know what bus stop to be at, what time, and what is the correct bus number to get on. Similarly, Class A doesn’t need to know everything about Class B or where it comes from. It just needs to know that when it requires an object to do some job, one will be available at an agreed time. Class A instantiating Class B itself is analogous to a traveller trying to build their own bus.
Last time I checked, there were something like 22 DI frameworks that you can use with .Net. The one I implemented at work recently is called Castle Windsor, which I’ve been using for a few years. In benchmark tests it’s not the fastest. It’s not the simplest. And it’s not the most customisable/powerful. But it is the one that for my money strikes the right balance between those competing factors. And it integrates particularly well with ASP.Net MVC and Entity Framework.
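To give a flavour, registering the earlier ClassA / IClassB sketch with Windsor looks roughly like this (a minimal, illustrative sketch; the lifestyles and class names are my own assumptions, not a recommendation):

```csharp
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public static class ContainerBootstrapper
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();

        // Tell Windsor which concrete type satisfies IClassB; ClassA is
        // registered too so the container can build it on request.
        container.Register(
            Component.For<IClassB>().ImplementedBy<ClassB>().LifestyleTransient(),
            Component.For<ClassA>().LifestyleTransient());

        return container;
    }
}

// Usage: Windsor constructs ClassA and quietly supplies the IClassB it needs.
// var classA = ContainerBootstrapper.Build().Resolve<ClassA>();
```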
NAnt Scripting
Continuous Integration platforms on their own give you a powerful way of automating builds and deployments. However, there are advantages to be gained from farming out some of that work to a more specialised tool. NAnt is one such tool.
For any system that gets developed, there are typically 10-25 individual “jobs” that are involved in setting up a copy of the system that Testers and ultimately Users can access. e.g., for a web app you might need to:
- Create some Virtual Directories in IIS.
- Copy the files that the website is made of into the folders those VDs point at.
- Customise a web config that tells the site how to access the underlying database.
- Create the underlying database in SQL Server.
- Populate the database with data.
- Create an App Pool in IIS under which the site will run.
- Grant the relevant App Pool access to the database.
You’d also be well-advised to have steps that involve:
- Running unit tests, so you don’t deploy broken code.
- Updating Assembly Information so that each build has an identifying number. That way, bugs can be reported against specific builds.
- Backing up any prior version so that you can rollback any of the above steps if the deployment fails.
If you put these in a script that lives in your project instead of in build steps on your CI server, you can more easily mirror steps between different branches in your builds.
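To sketch the idea, a stripped-down build file might look something like the following (every path, property and target name here is invented for illustration, and a real script would carry far more of the steps listed above):

```xml
<?xml version="1.0"?>
<!-- Illustrative only: paths, properties and targets are invented. -->
<project name="MyWebApp" default="deploy">

  <property name="build.dir"  value="build" />
  <property name="deploy.dir" value="C:\inetpub\MyWebApp" />

  <!-- Run the unit tests so broken code never gets deployed. -->
  <target name="test">
    <exec program="tools\nunit\nunit-console.exe">
      <arg value="${build.dir}\MyWebApp.Tests.dll" />
    </exec>
  </target>

  <!-- Keep the previous version so a failed deployment can be rolled back. -->
  <target name="backup">
    <copy todir="${deploy.dir}.previous" overwrite="true">
      <fileset basedir="${deploy.dir}">
        <include name="**/*" />
      </fileset>
    </copy>
  </target>

  <!-- Copy the new build into place once tests and backup have run. -->
  <target name="deploy" depends="test,backup">
    <copy todir="${deploy.dir}" overwrite="true">
      <fileset basedir="${build.dir}\website">
        <include name="**/*" />
      </fileset>
    </copy>
  </target>

</project>
```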
Futurama
One of the things that motivates me is getting to have a bit of fun whilst I work. In the team I joined a few months ago, there has been one common theme tying all of the above threads together: Futurama.
My colleagues and I have set up about 10 Windows Server 2012 machines that perform various jobs. e.g., One of them is a Domain Controller. Another is our CI server. Several more act as paired web and SQL servers that can be temporarily allocated to testing, by internal testers or by end users. Or they can be used by developers to test the deployment process.
Each of our VMs is named after a Futurama character and has its own distinct colour scheme. (NB: They have a fully-qualified name too, like DVL-SQLALPHA, that describes their actual role.) This helps developers stay oriented when RDP-ing around what would otherwise be nearly-identical machines. It’s also fun.
You saw how TeamCity / Professor Farnsworth looked above. This is how one of our Web Servers, themed after Zapp Brannigan, looks. As you can see, it's easy to tell which VM you're on, even from a distance:
There are Futurama-themed Easter Eggs hidden in other parts of our build process too. e.g., each CI build produces a log file, at the end of which the build gets reported as “Successful” or “Failed” for some detailed reason. One recent evening, in my own time, I wanted to try implementing custom NAnt functions. (NAnt is written in C#, and you can write functions in C# to augment what it does.) In order to test this with something non-critical, I augmented that custom “Success” or “Failure” method thus:
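In outline, a custom NAnt function of that sort looks something like this (the class name, function name and quotes below are illustrative stand-ins of mine, and it returns a one-liner rather than full ASCII art):

```csharp
using System;
using NAnt.Core;
using NAnt.Core.Attributes;

// Compiled into an assembly that NAnt loads, this exposes a
// futurama::quote(bool) function for use inside build scripts.
[FunctionSet("futurama", "Futurama")]
public class FuturamaFunctions : FunctionSetBase
{
    private static readonly Random Rng = new Random();

    private static readonly string[] GoodNews =
    {
        "Good news, everyone!",
        "Sweet zombie Jesus, it built!"
    };

    private static readonly string[] BadNews =
    {
        "Duhhh... the build is broken.",   // Hermes, with brain slug
        "I don't want to live on this planet anymore."
    };

    public FuturamaFunctions(Project project, PropertyDictionary properties)
        : base(project, properties)
    {
    }

    // Picks a semi-random line based on whether the build succeeded.
    [Function("quote")]
    public static string Quote(bool buildSucceeded)
    {
        string[] pool = buildSucceeded ? GoodNews : BadNews;
        return pool[Rng.Next(pool.Length)];
    }
}
```

It can then be called from a build script with an expression like ${futurama::quote(true)}.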
The exact piece of ASCII art that gets rendered reflects whether the build was successful or not, and is semi-random. So, you might get Hermes with a brain slug saying something dumb if the build is broken. Or you might get Professor Farnsworth announcing “Good news, everyone!” if all went as planned.
These 'features' are of course whimsical. But at worst they give developers a smile during some of the tougher moments of the job. And at best they give you a chance to test out new techniques on non-critical features. As well as giving your brain a rest between more intensive tasks.
The best teams I’ve worked with all knew their onions on a technical level, but also knew when to have fun too. I'm glad to be working in such a team at present. e.g., I recently implemented the following function:
My colleague Ian made me chuckle when I discovered this in our code repository a few weeks later: