Saturday, 25 November 2017

Product Review – Dell XPS 13

I left my old job recently, and consequently found myself in the market for a new laptop. I hadn’t bothered replacing my old Acer Aspire V3-571 (with a custom 1TB Samsung SSD), which had failed within two years, because the Toshiba Satellite Pro P50-C-12Z my old job supplied worked fine as both a home and work machine. Especially once I upgraded it with the SSD from my home laptop. However, once I left that job, I found myself relying on a 10-year-old Dell Inspiron for a couple of weeks:


That Inspiron is a loyal old workhorse that has served me well as both a dev laptop and file server for a decade, and has been my go-to machine whenever other, newer, higher-spec machines have failed. It's had its screen and hinges replaced, and has had its hard drive upgraded several times with various SSDs. So saying it has lasted 10 years is a wee bit Ship of Theseus-y. But the motherboard, fans, etc. have all stood the test of time. However, it just isn’t fast enough any more for modern development, so I needed a new machine.

After some research, I ended up opting for the Dell XPS 13, with 16GB RAM, a 512GB PCIe SSD and the new 8th generation Kaby Lake R processor (i7-8550U). I initially ordered the previous model, which was identical in all respects to the above spec except the processor (an i7-7500U) and the price tag: the old spec was £300 cheaper. However, after reading several reviews that all suggested the improvement in performance with the 8th gen processor is significant, I cancelled and ordered the new spec. It cost £1649 in early November 2017. At this time, in late November 2017, it is retailing at £1547 on Amazon. That’s quite a price change in only a few weeks. Not waiting a few weeks to see how the price settled is my only regret so far about buying this machine.

The main thing that makes the 8th generation processor so much better is that it is quad-core, compared to the previous generation’s dual-core. So, even with a lower clock speed, the newer processor burns through tasks like building Visual Studio solutions about 60% faster than its predecessor. That’s pretty significant: performance gains between adjacent generations are usually on the order of 10-20%. As the newer processors have a lower clock speed, they also run cooler. So, except when you’re building solutions, you almost never hear the fan kick in. I've found that browsing the internet, watching videos, and even using MS Office are all silent experiences on this machine.

I had heard that coil whine (a high-pitched electronic noise that accompanies processor-intensive tasks like watching videos) was a problem in the previous model. But I haven’t experienced that problem myself in the three weeks I’ve been using my new machine. Similarly, I had read reports of the wireless network card being unreliable; but my machine has performed perfectly. It hasn't dropped connection to my home wireless even once.

The main things that drew me to this model were the form factor (a 13.3 inch screen in a machined aluminium chassis that would usually only be large enough to hold an 11 inch screen) and the battery life (Dell reports 16 hours; I’m finding more like 9 or 10, but I do have the QHD screen, which is more battery-intensive). Every time I’ve bought a laptop before, I’ve gone for power over convenience. But to be honest, the sort of 15-inch laptops I’ve opted for in the past aren’t great for commuting (small tables on trains), and they typically don’t have good battery life (I’d have been lucky to get an hour or so on battery on my old Satellite Pro). By comparison, this machine runs and runs. If I take it to bed at night, I have to be careful not to surf (do people still say "surf"?) or work too long, because that battery could easily see me through til dawn. As it is, if I browse the internet or watch a video for 3 hours or so at night on battery, I find I still have 70% battery the next morning. That, to me, is better than having a 10% faster machine that doesn’t last as long.

As a development machine

Honestly, I’ve not used this laptop heavily as a development machine yet, though I have experimented with running old Visual Studio solutions on it. A solution with 36 projects builds in 1-2 seconds in Visual Studio 2015 Professional and VS2017 Community from a cold start, and rebuilds in about 15 seconds. That's amazing - better than any laptop or desktop I've ever had, full size or not, with or without SSD. On my old full-size Satellite Pro laptop with 1TB Samsung SSD, that rebuild would have taken about 20 seconds. I should warn that there are some gotchas in VS2017. e.g., if you leave the Lightweight Solution Load option on (as it is by default), then the initial build time for that same solution is about 40 seconds in VS2017 Community. Also, the solution sometimes doesn't build at all. So, my advice is to switch Lightweight Solution Load off completely. It's not stable enough at the current time to be of any use.

I’m always dubious about using versions of Visual Studio in the same year as they are named for. But VS2017 appears to be particularly bad in terms of bugginess and generally poor design decisions. (e.g., I found that the JavaScript Language Service (the part that should make Intellisense work for TypeScript and JS files) has been arbitrarily turned off in VS2017 for some of my existing solutions. Seemingly because some Microsoft developer had put it in the “too hard” pile to make JavaScript / TypeScript work properly in VS2017.) Note to Microsoft developers: if you can’t handle more than 20MB of JavaScript / TypeScript files for a single solution, you’re just wasting developers’ time. Don’t do less than that and consider the job done. Turning off features that worked perfectly well in previous versions of VS is pretty inexcusable. The main reason for using TypeScript over JavaScript is that it provides type-checking and object-oriented capabilities. But you can only leverage those features meaningfully if you have Intellisense. Disabling such a key feature at such a stupidly low threshold is like having a car whose doors fall off if you go over 30 MPH. And closing bugs about same on the basis that you meant to do something that stupid is even more stupid than the design decision was in the first place. FWIW, I fixed the problems with JS/TypeScript Intellisense that are evident in VS2017 by disabling the new Language Service completely using this option:

and including the following settings in a file named "tsconfig.json" in the root of my web project:

          {
            "compilerOptions": {
              "disableSizeLimit": true,
              "module": "commonjs",
              "allowJs": true,
              "outDir": "out"
            },
            "exclude": [
              "node_modules"
            ],
            "compileOnSave": true,
            "typeAcquisition": {
              "enable": true
            }
          }

Anyway, this review is about my XPS 13, not the poor design decisions of the Visual Studio 2017 development team. I mention these issues with VS2017 vs VS2015 purely to note how hard it is to assess new hardware if running new software too. Sometimes, it’s not the hardware that's to blame for any failings observed. Overall, like-for-like, my XPS 13 performs better than my old Satellite Pro + SATA 3 Samsung SSD. Even though that machine was no slouch. I’m glad I bought it, and will continue to use it as my main development laptop.

Battery Life

In terms of battery life, the XPS 13 is a world away from any laptop I’ve owned before. Realistically, I get about 10 hours out of it if I’m just browsing the internet or watching videos. As noted, I haven’t used it for actual development in anger yet. But going by the way the fan kicks in when I build VS solutions, I’d suspect that I’d get around 4-5 hours max out of it at full throttle, possibly less. All previous laptops I’ve used have only got around 1-1.5 hours on battery, even if I were only browsing. So, whilst I have to be careful this doesn’t make me sit up too long at night, it is a huge improvement. That battery life is the main reason I bought the XPS 13 over its big brother, the XPS 15. Every time I’ve bought a laptop in the past 10 years, my ‘sensible’ head has kicked in and coaxed me to go for raw power over portability and battery life. With this machine, I don’t need to compromise. It provides both ultra-portability and processing power in one package. Whilst the latest generation XPS 15 could no doubt out-perform this model in sheer processing time, you can’t argue with sub-5 second builds in VS2015, combined with a full day’s battery life for commuting or using in the evening for lighter tasks.

Other features

The XPS 13 has two features that I particularly like. Firstly, it has a nice, carbon fibre, rubberised keyboard surface. The keys themselves are pretty tactile, chiclet-style. As a touch typist, I find it suits me very well. And the palm rest is rubberised, which makes the keyboard pleasant to use if the laptop is cold. Had Dell opted for aluminium all round, I think that would have made for some pretty cold hands when typing a quick email first thing in the morning, or when transferring the laptop from a cold car boot to a warm office.

Secondly, the keyboard is well-lit, with differing levels of white lighting available, including "off." In a dark room, that backlighting makes positioning your hands far easier. Also, the keyboard light only comes on when you type; so if you generally want a keyboard light, it won't distract you when you're watching a video.

Minor Quibbles

My old Inspiron and new XPS 13 both have one design feature that I find annoying. Namely, there is a battery charging light right on the front of the machine. And it can't be disabled. It goes off when the battery is fully charged, but it'd have been nice to be able to switch it off electively. My old Inspiron battery has reached a stage where it doesn't hold a charge any more; it's therefore even more annoying, as it flashes orange to warn that the battery needs replaced. My new XPS 13 is too new to be able to tell if it does the same thing, though my understanding is that it will when the battery is too old to charge any more. It'd be nice not to have to use black electrical tape to switch this feature "off." 

Secondly, the webcam is badly-placed. I don't use it anyway, so it's not an issue for me. But if you do a lot of web conferencing, be aware that it is placed on the bottom-left of the screen. This is because the 13.3 inch display takes up nearly all of the height and width available. But it means that any Webex you have will involve participants looking right up your nose. Not pleasant.

Lastly, the hinge on the lid is very strong. I personally like this, as it means the screen doesn't move when you use the touchscreen. But some people have complained about having to use a whole two hands instead of one to open the lid. The main issue I do have with the lid is that it doubles as a very effective set of pliers if you place your fingers on the hinge whilst opening it. (Just as well you need both hands to open it then really, isn't it?)

Other options

Other options I considered included the Razer Blade Stealth. (In the end, I decided the lack of an 8th gen processor, combined with the fact that I could only get the “gamer” version with a green logo and rainbow-coloured keyboard lighting, was a deal breaker for me. Plus, support is US-based whilst I’m in the UK.) I also liked the HP Spectre very much indeed – it seems a very nice machine. Just not quite as capable as the XPS 13 in terms of power or battery life. Beautiful, though.

Thursday, 17 August 2017

Product Review - Wago Connectors

I was re-wiring my garage recently, when I got fed up screwing wires into choc blocks. I figured someone must have come up with a better way of connecting wires together, got Googling, and found these guys - Wago Connectors:

Wago make lots of different kinds of connectors, some of which are re-usable. Those are the ones I went for. For historical reasons, there are two kinds of re-usable connectors. The 222s:

And newer 221s:

Both kinds come in 2-way, 3-way and 5-way forms (for connecting the respective number of wires together). The 221s are slightly more expensive, and take up about 40% less space. But they do the same job of letting you join wires together, potentially wires of different gauges (such as when connecting twin and earth solid-core cable to the multi-core flex used by most appliances in the UK).

I can highly recommend these useful little guys. They sped up the job considerably, and have proven very reliable in use.

I don’t have a fidget spinner, so I kept a few of these connectors on my desk over the next month or so to footer with whilst coding, opening and closing the levers repeatedly. From that unscientific "test", I can say that the 222s are quite a bit more robust than the 221s. After a few hundred “opening and closing” operations on their levers, the more expensive 221 wouldn’t stay open fully any more. It is still usable, and I could hold it open whilst inserting a wire if I really needed to. But then it becomes just as fiddly to use as a choc block. So if you're going to be installing/uninstalling and re-building a lot, I'd say go for the 222s. If weight is a primary concern (e.g., building a drone) then use the 221s, or just solder and accept that greater build time and reduced ability to disassemble is the price you pay for less weight.

On the upside, the levers on the 221s are considerably easier to open. Though neither is particularly difficult. There is a dedicated tool for opening them that costs over £100, but really it's a ridiculously over-engineered solution that I can't imagine anybody needing. Even people that are installing these all day would have no difficulty opening them with just their fingers.

The first time you open one of the 222s, you’ll be unsure if it’s broken, because its jaws initially open to about halfway quite easily, then you need to use substantially more force to open the lever all the way. It can also give you a nasty “mouse trap” snap on your fingers if you’re not careful whilst you close the lever to clamp your wire in place.

Overall, I think I’ll be using the cheaper 222s where space isn’t a consideration. To that end, I bought a box of the 3-way and 2-way 222s, and a box of the 5-way 221s. (Since when I need to connect 5 wires together, that’s usually when space is tightest.)

With regard to their ratings, I'm honestly not quite sure what amperage / voltage they can take. The problem is there are two ratings on each model. (Presumably to satisfy more than one set of tests for different markets.) 

The 222s are rated at "20A 300V" on one side and "600V" on the other side. The 221s have labels showing they are variously rated at "450V 32A" or "20A 300V". Confused? You will be. Here is a YouTube video of someone actually burning the things out to test their limits.

In practical use, I've had no problems having about 10 of these things in the same switch. I've also used three in series on the same circuit.

2-way 222 connectors: £13.23 for a pack of 50 @ Screwfix 

3-way 222 connectors: £15.13 for a pack of 50 @ Screwfix

5-way 221 connectors: £13.80 for a pack of 25 @ Screwfix 

Addendum: Thelma quite enjoyed these little devices too. She reports that the 222s, being rounder, are 50% “more chasy” than the “boring” more square 221s. They therefore fly faster when she bats them with her paws to simulate spontaneous movement.

Monday, 17 October 2016

Another TrustPilot whitewash artist - Fenix Torch

I got this spam after making a purchase from Fenix Torch recently. I would note that I have opted out of TrustPilot spam emails several times in the past:

I particularly like the bit that says "All reviews, good, bad, or otherwise will be viewable immediately."

What it doesn't say is that any negative feedback will be immediately subjected to attempted whitewashing:

When will these companies learn? Trying to suppress negative reviews leads to worse publicity than the original review ever could have on its own.

Fenix Torch really should look up the Streisand Effect some time. Because behaviour like this only makes them look foolish.

Tuesday, 19 July 2016

"No comment" – now Amazon are removing even Neutral feedback. Do not buy from Polimil Ltd.

It’s no secret that Amazon’s review system is utterly corrupt and broken. It’s bad enough that Amazon and Sellers solicit reviews from buyers, even if your preferences have been set to specifically avoid that type of spam. But, as I and other customers have found, they’re only interested in keeping those reviews that turn out to be positive. Any negative reviews are quietly deleted without announcement or explanation. It’s hard to see this as anything other than fraud, because ratings heavily influence and inform the buying decisions of future would-be buyers. So giving the misleading impression of uniform satisfaction amounts to false advertising. Ironically, it's well known that uniformly positive reviews reduce buyers' trust in the whole system in general. At this juncture, Amazon's customer reviews have about as much credibility as North Korean news media.

The above state of affairs is bad enough. However, a recent experience has left me with an even lower opinion of Amazon than previously. They are now removing even Neutral feedback. Yes, you read that correctly: 3-out-of-5-star reviews are being deleted. It’s difficult to see this as anything other than even more desperate whitewashing than before.

I recently left a review like this*:

(* full disclosure: I had to mock the above image up from a nearly-identical review I'd left another, less needy, seller around the same time. Because my actual review was deleted, I can't show you that one. My review was exactly as shown.)

Seem innocuous to you? Me too. However, it unleashed a barrage of harassment, abuse, and unfounded claims of "fraud" by the seller, and ultimately led to my review being deleted by Amazon without explanation or notification.

The harassment started about a week after I’d left the review. When I’d made the order, I had supplied a phone number to Amazon, to be used for delivery purposes only. The item was being delivered to my place of work, so I left the main switchboard number. Experience has taught me never to trust Amazon with any more specific or personal means of contact.

Instead of using it for delivery purposes, the seller in this case (Polimil Ltd) actually had the gall to use the above number to call my place of work and attempt to demand a reason why I had left such “negative” feedback. I happened to be on annual leave (and I wouldn’t have taken such a presumptuous call anyway). However, rather than belatedly realising that they were behaving like an idiot, the seller followed up with this email:

The above message demonstrates some of the many reasons I don’t leave detailed feedback, nor engage in discussions about the feedback I do leave. Not only is it presumptuous in the extreme to ask a customer to explain why they find your service merely “Fair” rather than outstanding; I generally find people that are unhinged and socially deficient enough to demand such impertinent conversations to be factually incorrect as well. In this case, Polimil claimed that I had made allegations that I had not in fact made: namely, that the goods had been delivered late and were not as described. As you can see from my review above, I didn’t even answer the questions about whether the goods arrived on time or were as described. That’s why those sections have “N/A” against them. My review in its totality was simply “3 out of 5 stars, no comment.”

Anyway, I responded as follows. And expected this to be the end of the matter:

Apparently this clear instruction not to contact me again fell on deaf ears, however. Because a short while later, I received the following email:

By this stage, “Nick” was starting to look like this guy here:

I find the best way to deal with companies that can’t take “Do not contact me” for an answer is to ignore their repetitive ranting, and focus instead upon repeating the “Do not contact me” message. In this case, I also felt that this further contact warranted a complaint to the body that is meant to prevent this type of abuse of personal contact information: the Information Commissioner’s Office. My response was:

You’d think that would be clear enough, wouldn’t you? Apparently not:

At this stage “Nick” (Nick Dunkley, btw; I looked him up for the Restraining Order that would surely be coming if he persisted in this pathological nonsense) sent the message above. (Yes, for the record, that’s the 4th contact in one day, including the initial phone call. Two of those emails were sent after he had been told in no uncertain terms to cease and desist his harassment. I guess “Do not contact me” means “Persist until you are taken to Court” for some people):

By now, Nick Dunkley had gone full circle and ended up looking like this guy:

(That's Basil Fawlty, btw, for you millennial readers that make me feel old so often by failing to 'get' my cultural references!)

I must admit at this juncture to laughing out loud at the farce this had become. Polimil's inept display of customer disservice had turned mild, neutral feedback into a blogworthy piece of bad press.
The unprofessional way I’d been stalked by these lunatics after making this one-time, low-value purchase had already guaranteed I’d never consider purchasing from Polimil ever again. Only an idiot would persist in contacting a customer that had already said “Do not contact me” twice, and who had promised further formal action in the form of a complaint to the regulator if the harassment didn’t stop.

I think that some Amazon sellers lose track of the fact that most customers don’t purchase from them. We purchase from Amazon. They are just a small-time supplier to Amazon, of which there are many that can be easily replaced. Their business model is so fundamentally flawed that they actually need to use a third-party website, whose only function is to insulate customers from them, to sell their goods. And they need to pay Amazon for the privilege. Sellers are ultimately just small fish, whose crappy customer service Amazon has unwisely tried to outsource to customers to deal with.

Behaviour like this, I believe, will ultimately cause customers to abandon Amazon in the same way we abandoned the High Street a few years back. You can only get away with so much customer abuse and dishonesty. On the high street, it was constant obnoxious upselling at the till that forced once-dominant companies like HMV out of business. Customers made their way onto the internet to avoid that type of tedious and unpleasant behaviour. I think that in time Amazon will also lose their present place at the top of the supply chain, because of their tolerating and encouraging the type of behaviour that I and others have experienced from unprofessional sellers like this.

Anyway, as mentioned in the emails, I did contact Amazon to complain about this Seller's complete lack of professionalism, and their seemingly-unhinged staff. I asked for compensation for the harassment I had been subjected to. This was Amazon’s response:

So, a stock letter admitting no responsibility whatsoever by Amazon. Even though the seller stated that Amazon had “advised” them to contact me. Incidentally, I also found this illuminating discussion on an Amazon Sellers’ forum. It demonstrates two things. 1) Even Amazon sellers themselves find the constant nagging by some sellers for positive feedback obnoxious. 2) Amazon doesn’t just condone or turn a blind eye to sellers harassing buyers for positive feedback in this way; Amazon goes out of their way to advise sellers to do it. So, I think it’s pretty credible that Amazon have equal or greater liability than the seller for the harassment they solicit from sellers upon buyers.

Anyway, later in the day there was a somewhat ranting response posted against my feedback by the seller. I guess by that stage Amazon had advised them they couldn’t remove my feedback, as I hadn’t given them any plausible rationalisation for doing so. 

I don’t have a copy of the Seller's full rant, but basically they accused me of “lying” and suggested my review may have been "malicious." I think it’s worth re-iterating at this juncture that I had done nothing more than give them a “3 out of 5 stars – Fair” rating, and posted the simple message “No comment.” Even though their subsequent Customer Disservice insanity wouldn’t have warranted nearly such favourable feedback.

This morning, I see that my review has been deleted entirely. So, reviewing Amazon's response above, apparently my feedback being "invaluable to us" and an undertaking that it will be taken with the "utmost seriousness" is code for "We will delete it as soon as we think your back is turned. All the Seller needs to do is rant at us like Basil Fawlty and we'll fold like a cheap suit."

I’ve contacted Amazon customer services this morning for an explanation as to why my feedback was removed, and have demanded that they reinstate it. I won’t hold my breath. I guess companies like Amazon can get so big they forget that they don’t control the whole internet. So this sorry tale of customer abuse and whitewashing gets posted on other review sites and my blog. Instead of being an innocuous “3 out of 5, no comment” rating on a website whose reviews they delete at will.


Update: here is some bonus insanity from Basil. I mean Nick Dunkley. I received the following email from him today:

You can't fix stupid. I find more and more that this is the type of low-intelligence alpha personality that you encounter as a buyer on Amazon. Which is why I give it a miss whenever possible.

Still, it was something else to add to the complaint I sent ICO that demonstrates how unbalanced this buffoon of an Amazon Seller/Stalker is.

Friday, 25 December 2015

Building a Total Quality Software environment, with Continuous Integration, Unit Testing, and Dependency Injection. And Futurama.

Recently at work, I’ve been working with my colleagues to set up a Total Quality software environment. I’ve been learning a lot from my peers about topics such as VMware, RTI and Code-First EF. (I’d previously used Schema-First, but Code First brings its own advantages and challenges). What I brought to the party was some project experience in: 

  • Continuous Integration platforms (specifically in this case, TeamCity.)
  • Unit Testing and Test-Driven Development techniques.
  • Dependency Injection to support writing testable code.
  • NAnt scripting.
  • Futurama.

We’ll get to that last one in a minute. Let’s go through the others in order first.

Continuous Integration (CI)

Every geek who’s any nerd is using it these days. But lots of development teams and companies still avoid it, imagining it to be too difficult, too time-consuming, or just not worth the hassle. (For that matter, those same fallacious criticisms can be levelled at every other item in the list above too. Except Futurama.) A decade ago people used to say the same things about Source Control; thankfully there aren’t too many teams I encounter these days that haven’t got their head around how important that is.

Some teams aren’t even sure what CI is, what it does, or what advantages it brings. They’ve always worked by developers just producing software on their own PCs. And they just deal with any time-consuming fallout when it comes to making that software work in the real world as part of the cost of doing business.

OK, so here’s the unique selling point if you’re trying to make the case for introducing this where you work. Are you ready? What CI adds to your team’s game is simply this: repeatable, verifiable deployment. 

Unit Testing and Test-Driven Development techniques 

Unit Testing has been around for a Very Long Time. I know a lot of people who are otherwise very good developers but who “don’t see the point” of unit testing. And I have been such a developer myself in the murky past. 

The misconception that unit testing is pointless generally comes down to a few fallacies:

  • They believe that their own code always works.
  • The wider team and stakeholders place more value on quantity of new features than upon quality of existing features.
  • They believe that they will always personally be around to ensure that their code doesn’t get broken in the future.

Like most good fallacies, there’s just enough truth in most of these to preserve the illusion that unit testing doesn’t provide enough advantages to the person that has to implement it. Not when compared to the opportunity costs of them learning how to do it, or the kudos of pushing out new features (that don’t work as intended.)

Part of the reason more developers don’t give it a go is that you have to change the way you write code. Most code I’ve seen in the wild is tightly-coupled. This is a phrase that many developers are familiar with, but in my experience vanishingly few know what it means. Basically, it means that if you are writing Class A, and your class depends upon Class B to do its job, your class will instantiate a new instance of Class B itself. This means that if Class B stops working, all you (and Users) know is that your class “doesn’t work.” They won't care if your code is perfect, and it's just that damn Class B that let you down.
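To make that concrete, here is what tightly-coupled code looks like. This is just an illustrative TypeScript sketch (the stack under discussion is C#/.NET, but the shape is identical), and all the names in it are hypothetical:

```typescript
// Hypothetical names, matching the Class A / Class B discussion above.
class ClassB {
  fetchGreeting(): string {
    // Stands in for something fragile: a database call, a web service, etc.
    return "Hello";
  }
}

class ClassA {
  private b: ClassB;

  constructor() {
    // The tight coupling: Class A instantiates its own dependency,
    // so Class A can never be tested without dragging Class B along.
    this.b = new ClassB();
  }

  greet(name: string): string {
    return `${this.b.fetchGreeting()}, ${name}!`;
  }
}
```

If `ClassB.fetchGreeting` breaks, every test of `ClassA.greet` breaks with it, and there is no way to tell whose fault it was.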

So, when doing test-driven development, developers need to add another couple of skills to their arsenal. Which brings us to… 

Dependency Injection (DI)

One type of Tight Coupling is defined above. Code is also tightly coupled when it is too closely tied to one UI. So, if you’re a developer that puts all their business logic in code-behind files or controller actions, your code won’t be testable, because it needs the UI to be in place before it can be verified.

Fortunately, there are frameworks and coding styles out there that help developers implement loose coupling, to make their code independently testable. 

The basic idea behind all of these is that instead of your Class A consuming Class B directly to perform some function, it consumes Interface B instead. That is, some object that Class A doesn’t instantiate itself satisfies some interface that represents the job Class B was doing for Class A. Typically this is achieved by making the constructor of Class A look like this:
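In a minimal TypeScript sketch (hypothetical names; the stack under discussion is C#/.NET, but the pattern is the same), that looks like:

```typescript
// Class A depends on an interface, not on a concrete Class B.
interface IGreetingSource {
  fetchGreeting(): string;
}

class ClassA {
  // The dependency arrives ready-made through the constructor;
  // Class A never instantiates Class B itself.
  constructor(private source: IGreetingSource) {}

  greet(name: string): string {
    return `${this.source.fetchGreeting()}, ${name}!`;
  }
}

// The real implementation used in live running...
class ClassB implements IGreetingSource {
  fetchGreeting(): string {
    return "Hello";
  }
}

// ...and a mock used in a unit test: it always answers predictably,
// so any test failure can be wholly attributed to Class A.
const mockSource: IGreetingSource = { fetchGreeting: () => "Hi" };

const live = new ClassA(new ClassB());
const underTest = new ClassA(mockSource);
```

The same `ClassA` code runs in both cases; only what gets passed into the constructor changes.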

The above pattern is known as Constructor Injection. What it gives you is the ability to swap out whatever is implementing Interface B when it comes to unit testing Class A. So, instead of the object that really does implement Interface B in live use, you can use what is called a mock instance of Interface B. That is typically some object that always gives you anticipated responses, so you can concentrate on testing Class A. That way, any errors you see can be wholly attributed to Class A.

When you write your classes using the Constructor Injection pattern demonstrated above, DI frameworks provide concrete implementations of objects that implement interfaces at runtime. So, you 'magically' find a usable implementation of Interface B available in Class A's constructor. As the developer of Class A, you don't care particularly about where that implementation of Interface B comes from; that is the responsibility and concern of the developer of Interface B and your chosen DI framework.
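The register/resolve idea at the heart of every DI framework can be sketched with a toy container. To be clear, this is not Castle Windsor's actual API (nor any real framework's); it is just an illustration of the mechanism, in TypeScript with hypothetical names:

```typescript
// Hypothetical names. Real DI frameworks do this with reflection and
// much richer registration APIs; this toy only shows the core idea.
interface IGreetingSource {
  fetchGreeting(): string;
}

class ClassB implements IGreetingSource {
  fetchGreeting(): string {
    return "Hello";
  }
}

// A string-keyed toy container: register a factory now, resolve it later.
class ToyContainer {
  private factories = new Map<string, () => unknown>();

  register<T>(key: string, factory: () => T): void {
    this.factories.set(key, factory);
  }

  resolve<T>(key: string): T {
    const factory = this.factories.get(key);
    if (!factory) throw new Error(`Nothing registered for ${key}`);
    return factory() as T;
  }
}

// Composition root: done once at application start-up.
const container = new ToyContainer();
container.register<IGreetingSource>("IGreetingSource", () => new ClassB());

// Later, whoever constructs Class A asks the container rather than
// writing "new ClassB()" itself.
const source = container.resolve<IGreetingSource>("IGreetingSource");
```

Swapping `ClassB` for a different implementation then means changing one registration line, with no changes to any consuming class.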

This is just one of the techniques that developers moving from code that "just works" need to learn if they want their code to be verifiable. It is difficult to embrace, because frankly writing code that "just works" is hard enough, and because using these techniques opens up the possibility of developers having to recognise errors in their own code. But unit testing also brings with it a huge number of advantages: the ability to prove that a given piece of code works, not just at the time of writing but every single time you build. And it protects your work from being modified in adverse ways by subsequent developers.

Unit testing and Dependency Injection are whole topics on their own, so I won't say more about them here. (I'll perhaps save that for future blogs.) With regard to understanding tight and loose coupling, though, I'll leave you with an analogy. If a traveller wants to get to some destination, they don’t need to know what the bus driver’s name will be, the vehicle registration, what type of fuel the bus uses, etc. They just need to know what bus stop to be at, at what time, and which bus number to get on. Similarly, Class A doesn’t need to know everything about Class B or where it comes from. It just needs to know that when it requires an object to do some job, one will be available at an agreed time. Class A instantiating Class B itself is analogous to a traveller trying to build their own bus.

Last time I checked, there were something like 22 DI frameworks that you can use with .Net. The one I implemented at work recently is called Castle Windsor, which I’ve been using for a few years. In benchmark tests it’s not the fastest. It’s not the simplest. And it’s not the most customisable/powerful. But it is the one that for my money strikes the right balance between those competing factors. And it integrates particularly well with ASP.Net MVC and Entity Framework. 
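As a rough illustration of what Windsor's wiring looks like (the type names are the same placeholders as before; `WindsorContainer` and `Component` come from the Castle.Windsor NuGet package):

```csharp
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IInterfaceB { int DoSomeJob(int input); }

public class ClassB : IInterfaceB
{
    public int DoSomeJob(int input) { return input * 2; }
}

public class ClassA
{
    private readonly IInterfaceB _b;
    public ClassA(IInterfaceB b) { _b = b; }
    public int DoWork(int input) { return _b.DoSomeJob(input) + 1; }
}

public static class ContainerSetup
{
    public static ClassA BuildClassA()
    {
        var container = new WindsorContainer();

        // Tell Windsor which concrete type satisfies each service.
        container.Register(
            Component.For<IInterfaceB>().ImplementedBy<ClassB>(),
            Component.For<ClassA>());

        // Windsor sees that ClassA's constructor wants an IInterfaceB
        // and supplies the registered ClassB automatically.
        return container.Resolve<ClassA>();
    }
}
```

In a real ASP.Net MVC app you'd do this registration once at startup and let Windsor build your controllers, rather than calling `Resolve` by hand.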

NAnt Scripting 

Continuous Integration platforms on their own give you a powerful way of automating builds and deployments. However, there are advantages to be gained from farming out some of that work to a more specialised tool. NAnt is one such tool.

For any system that gets developed, there are typically 10-25 individual “jobs” involved in setting up a copy of the system that Testers and ultimately Users can access. e.g., for a web app you might need to:

  • Create some Virtual Directories in IIS.
  • Copy the files that the website is made of into the folders those VDs point at.
  • Customise a web config that tells the site how to access the underlying database.
  • Create the underlying database in SQL Server.
  • Populate the database with data.
  • Create an App Pool in IIS under which the site will run.
  • Grant the relevant App Pool access to the database.

You’d also be well-advised to have steps that involve:

  • Running unit tests, so you don’t deploy broken code.
  • Updating Assembly Information so that each build has an identifying number. That way, bugs can be reported against specific builds.
  • Backing up any prior version so that you can roll back any of the above steps if the deployment fails.

If you put these in a script that lives in your project instead of in build steps on your CI server, you can more easily mirror steps between different branches in your builds. 
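A skeletal NAnt build file covering a couple of the steps above might look like this (all paths, program names and target names are made up for illustration):

```xml
<?xml version="1.0"?>
<project name="MyWebApp" default="deploy">

  <!-- Run unit tests first, so broken code never gets deployed. -->
  <target name="test">
    <exec program="tools\nunit3-console.exe">
      <arg value="build\MyWebApp.Tests.dll" />
    </exec>
  </target>

  <!-- Copy the built site into the folder the IIS Virtual Directory points at. -->
  <target name="deploy" depends="test">
    <copy todir="C:\inetpub\wwwroot\MyWebApp">
      <fileset basedir="build\website">
        <include name="**/*" />
      </fileset>
    </copy>
  </target>

</project>
```

Because the script lives alongside the code, branching the code branches the deployment steps along with it.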


One of the things that motivates me is getting to have a bit of fun whilst I work. In the team I joined a few months ago, there has been one common theme tying all of the above threads together: Futurama.

My colleagues and I have set up about 10 Windows Server 2012 machines that perform various jobs. e.g., one of them is a Domain Controller. Another is our CI server. Several more act as paired web and SQL servers that can be temporarily allocated to testing, by internal testers or by end users, or used by developers to test the deployment process.

Each of our VMs is named after a Futurama character and has its own distinct colour scheme. (NB: They have a fully-qualified name too, like DVL-SQLALPHA, that describes their actual role.) This helps developers stay oriented when RDP-ing around what would otherwise be nearly-identical machines. It’s also fun.  

You saw how TeamCity / Professor Farnsworth looked above. This is how one of our Web Servers, characterised after Zapp Brannigan, looks. As you can see, it's easy to tell which VM you're on, even from a distance:


There are Futurama-themed Easter Eggs hidden in other parts of our build process too. e.g., each CI build produces a log file, at the end of which the build gets reported as “Successful” or “Failed” for some detailed reason. One recent evening, in my own time, I wanted to test implementing custom NAnt functions. (NAnt is written in C#, and you can write functions in C# to augment what it does.) In order to test this with something non-critical, I augmented that custom “Success” or “Failure” message thus:
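The rough shape of a custom NAnt function is sketched below. The `FunctionSet`/`Function` attributes and `FunctionSetBase` come from NAnt.Core; the quote-picking logic here is a simplified stand-in for the real ASCII-art version:

```csharp
using NAnt.Core;
using NAnt.Core.Attributes;

// Registers a "futurama::" prefix that NAnt build scripts can call,
// e.g. ${futurama::quote(true)} inside an <echo> task.
[FunctionSet("futurama", "Fun")]
public class FuturamaFunctions : FunctionSetBase
{
    public FuturamaFunctions(Project project, PropertyDictionary properties)
        : base(project, properties)
    {
    }

    [Function("quote")]
    public static string Quote(bool buildSucceeded)
    {
        // The real version picks semi-randomly from a pool of ASCII art.
        return buildSucceeded
            ? "Good news, everyone!"
            : "Your build is bad and you should feel bad.";
    }
}
```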

The exact piece of ASCII art that gets rendered reflects whether the build was successful or not, and is semi-random. So, you might get Hermes with a brain slug saying something dumb if the build is broken. Or you might get Professor Farnsworth announcing “Good news, everyone!” if all went as planned.

These 'features' are of course whimsical. But at worst they give developers a smile during some of the tougher moments of the job, and at best they give you a chance to test out new techniques on non-critical features, as well as giving your brain a rest between more intensive tasks.

The best teams I’ve worked with all knew their onions on a technical level, but also knew when to have fun. I'm glad to be working in such a team at present. e.g., I recently implemented the following function:

My colleague Ian made me chuckle when I discovered this in our code repository a few weeks later: