
Sunday, 14 December 2014

Product Review - LED Lenser LED7299R H14R.2 Rechargeable Head Torch




I bought one of these for running during the Winter months, when you inevitably find yourself having to make some runs in the dark or twilight.

There are plenty of options out there - ranging from an offering at £5 from Tesco, right the way through to Hollis canister diving head torches at £800. Obviously, there’s a trade-off between getting what you pay for, choosing a light that’s suitable for your purpose, and not spending more than you need to.

After checking out other reviews for several different options, I opted for the LED Lenser LED7299R H14R.2 Rechargeable Head Torch. You can spend anything from £90 to £130 depending on where and when you choose to buy this model. There’s also a similar-but-cheaper model in the same range that isn’t rechargeable. (No reason that you couldn’t buy separate rechargeable batteries of course.) However, I liked the convenience of having the recharging unit built in. It can alternatively take four conventional AA batteries, which you can use as a backup.

For running, it was important that the torch had enough light output to be able to see in pitch darkness on unlit trails with occasional tree cover that blocks ambient light. It was also important that it was comfortable to run with. A lot of runners recommended the Petzl range of head torches. I can see why. They’re a lot lighter than the one I chose (whilst at the same time being a lot dimmer - typically about a third to a quarter of the light output). My main criticism of the LED Lenser H14R.2 is that it can feel a bit hard and uncomfortable on your head, particularly the front torch holder. A softer, more padded material behind the lamp would have made it much more usable. As it is, it’s more comfortable with a beanie hat underneath, but I wouldn’t fancy trying to run with it overnight in the Summer when a hat would make you overheat.

In terms of light output, it was difficult to find reliable information. The minimum light output was fairly consistently reported by various sources to be 60 Lumens. The product box and the site where I bought it both say the maximum output is 850 Lumens. Other sources quoted figures as low as 260 to 350 Lumens. There appears, therefore, to be some confusion about what is meant by "maximum". Namely, the torch has a 'boost' setting that increases brightness for 10 seconds at a time. However, there is a second definition, which is the maximum brightness that the torch is able to consistently maintain. I suspect this distinction accounts for many of the differences reported by different sources.

60 Lumens is about as good as the majority of the Petzl range. The brightest setting for the H14R.2, whatever the real value in Lumens, is a very bright light that is uncomfortable to look at directly. The very highest setting (known as the "boost" setting) only stays on for 10 seconds at a time. Most of the rest of the time, I used it at the highest 'stable' setting.

On that highest constant-current setting, the light can be diffused over an area about 5m wide and 10m deep directly in front of you. You can also elect to have a narrower but more intense beam. The specs say it will project light up to about 260m. I found that not to be the case, though I did stick to the “wide and bright” setting throughout my run. Perhaps the boost setting, combined with the narrowest beam, would momentarily illuminate the quoted 260m distance for 10 seconds at a time; I didn't test that, because such a brief and narrow burst of brightness isn't relevant for my use case or many others I can imagine. I did test the range on the maximum consistent setting combined with a wide beam when I returned to my car. I found that whilst that setting is quite good enough for running or walking in the pitch dark, allowing you to see what's immediately in front of you, the light didn’t even make it across to the trees at the far end of the 100m or so car park I was in. I’ll try it again on the “narrow beam, temporary boost” setting during my next night run. However, whilst I suspect that the specs are technically correct and that objects can be illuminated at that distance, albeit briefly, it is only with a beam that’s about 1m wide. It's for the reader to decide whether that performance meets their actual needs.

I found the light was good enough for my use case. I ran during astronomical twilight (the darkest of the three twilight phases; pretty much pitch black for the purposes of this test). Without the torch, I would just about have been able to see my hand in front of my face in open ground, but not the path I was running on. On stretches covered by trees, it would have been completely dark. As it was, I missed a pothole in the same forested location (once on the way out, and once on the way back). I couldn’t see how I’d done this at the time, as I felt I’d been seeing the path well enough to run at a normal pace. However, I stumbled at the exact same spot again the very next day, in daylight. So, it just appeared to be a particularly well-camouflaged pothole, rather than a failing of the torch.

The final lighting feature of note in this torch is the rear red light that you can turn on to allow traffic and cyclists to see you more easily. I thought that was a nice little safety feature, although there's no real way to tell if it's on or off once you're wearing the torch, and the button is very sensitive. Other non-lighting features include a battery-power indicator (the rear LED glows red, amber or green for five seconds when you switch it on, to let you know how charged up the battery is). I've used mine for less than an hour so far, and it's still in the green from its first charge. I'll update this review with how long a full charge lasts when I've gone through a full cycle. Lastly, you can detach the battery pack (and the front torch itself if you want) and wear them as a belt attachment. I personally prefer the light being cast wherever I'm looking, and didn't find the battery pack intrusive where it was, so haven't used this option.

The last point I want to note about this product isn't about the torch itself. It's about the user manual that comes with it. For a top-of-the-range piece of kit, the quality of the instruction manual translation leaves a lot to be desired. It's some of the worst Deutsch-glish I've ever seen. Take this excerpt for example:


It's so bad that at first I thought I might have been sent a fake item, since I couldn't imagine any self-respecting manufacturer allowing such a poorly-translated document to accompany their product. But the bona fides of the supplier I used (ffx.co.uk) checked out. And, checking LED Lenser's own website, it seems that they've just done a very bad job of translating the user manual of an otherwise very good product. You can read the full manual (downloaded from LED Lenser's US site) for yourself here.


All in all, I’m glad I bought this piece of kit. It’s good enough for what I need it for. The head harness could be a little more comfortable, but it’s very usable for its intended purpose nonetheless. I feel a Petzl or other cheaper option would probably not have been bright enough for what I need. And other more expensive options would have been brighter still, but wouldn’t have been designed to be worn out of the water.

Not a bad purchase: 7/10

Sunday, 21 September 2014

Amazon deletes negative feedback that it doesn’t agree with - how can anyone trust a company that behaves that way?





Amazon has been lowering its customer service standards for quite a while. Despite being a company that in the past wisely avoided self-harming behaviour like spamming and ripping off customers, lately they seem to have Jumped The Shark. My recent experience with them demonstrates a Google-level degree of cynicism in their dealings with customers.

This month I purchased a couple of running tops from SportsShoes.com. This is SportsShoes.com’s Amazon storefront. You may, as I was, be impressed by the 4.8 out of 5 stars average review that other consumers had apparently given this vendor. You may also be particularly surprised to compare it with this 2.1 out of 5 stars rating from another popular independent review site. (Something I really wish I had done before foolishly taking Amazon’s own ratings at face value.)

How did those ‘customer ratings’ get to be so different?


Shortly after my running tops arrived, I received the following unsolicited email from SpamShoes (as I now think of them) -




OK, as First World Problems go, it’s right up there. But avoiding annoying spam like the above (begging for feedback and further business) is one of the main reasons I’ve used Amazon in the past. Amazon has a setting in their user options that allows you to opt in to receiving reminders about leaving feedback, if you want to. Like most people, I have that option set not to bother me. I don’t use Amazon to help people build their business. I use it as a consumer for my own convenience. Period. So, when an individual vendor decides to ignore my preference and contact me anyway, that rankles.

So, I sent a response back to the vendor saying that I didn’t appreciate their spam, and pointing out that Amazon themselves will send an email reminder about leaving feedback to anyone who has agreed to receive one. The vendor doesn’t need to know what my preference about feedback reminders is, only that I have one, and that I would already have received a reminder if I’d asked for one. This is the response I received:


Thank you for your email,

I am very sorry that you feel aggrieved by our email, this is an automated email sent to all our customers. It's a courtesy follow up email to our customers mainly to say thank you for ordering and we hope you're happy with the purchase. But it's also a chance for any customers who may have had a problem to contact us so that we can resolve this. We are not begging for your feedback, it's just a polite reminder for you to leave some if you wish. The setting you refer to on your buyer profile, I can only assume to be for Amazon fulfilled orders only as we are unaware of any settings on your profile.

We received your negative feedback for your order, however contacted Amazon regarding this as we felt it was unfair as no spam emails have been sent. They have agreed with us, and removed the comment as they have acknowledged no spam emails were sent.

Finally, I can assure you we're a very professional vendor with a vast customer base. As I'm sure you can see from our feedback ratings, we generally do a good job which is reflected within the percentages. We'll continue to provide the service we are currently on both Amazon and our website.

Please be assured, you'll receive no further emails from our company.

Kind Regards,
Adam


Spammer doesn’t want to recognise they're a spammer shocker. Those perpetrating the act rarely choose to recognise they're doing anything wrong. No apparently doesn't mean "no" for these people. It means you must have misunderstood their intentions. Whilst they undoubtedly know deep down that they're behaving badly, they completely fail to recognise how pathological and self-defeating their behaviour is. You made a purchase from them once. So they feel entitled to invade your inbox whenever they like. They're the date rapists of the marketing world. It's no wonder they need to pay a third party like Amazon to be able to do something as simple as communicate with potential customers.    
 
This alone would not keep me up nights - plenty of businesses do dumb things that alienate their customers, without ever recognising how dumb or self-defeating they are. (Even when, as in this particular case, their business model is so fundamentally flawed that they actually need to sell their goods through a third party website, the only benefit of which is that it allows consumers to withhold their real address from the vendor!)

The part that does surprise me, however, and I believe should surprise any consumer who uses Amazon, is the part in red where the vendor boasts about having been able to easily remove my negative feedback merely by asking Amazon to delete it.

Here is Amazon’s advice to Vendors about when feedback can be deleted. My review (which I don’t have a copy of since it was deleted) didn’t breach any of these rules. It merely stated my opinion that I had received unsolicited email from the vendor that I considered to be spam, and that as a consequence I was glad I hadn’t exposed my real address to them.

Looking around the internet, it seems I’m not the only one who’s had a problem with their reviews and feedback being deleted. (There are plenty of other examples of negative reviews of both vendors and products that you can Google on your own if you wish.) In my case, I contacted both Amazon Customer Services and Amazon CEO Jeff Bezos to ask what their policy actually is about deleting reviews they merely disagree with (as opposed to any that breach their published rules). In both cases, I specifically asked which of Amazon’s feedback guidelines my feedback had breached, and, if none, why it had been deleted anyway. Customer Services merely restated that the vendor didn’t agree with my review. In Jeff’s case, there was no response at all.

So, I’m forced to conclude that Amazon’s customer feedback ratings are nothing more than a sham. If the vendor in question (SportsShoes.com) hadn’t been dumb enough to send me further unsolicited email bragging about how easily Amazon had agreed to remove feedback they didn’t like, I wouldn’t even know the review had been deleted, since Amazon themselves didn’t have the courtesy to tell me.

So, next time you’re perusing Amazon, have a think about whether that ostensibly-5-star vendor you’re reading other consumers’ opinions about might really be a 2-star Del Boy outfit that’s just playing the system. And next time you’re considering whether to leave feedback about one of your purchases, positive or negative, to help other consumers, stop to think whether you’re contributing to an honest feedback system that actually helps fellow consumers make better purchasing decisions, or merely lending validity to an artificially-whitewashed feedback system that has no credibility whatsoever.

Thursday, 20 February 2014

Scalability, Performance and Database Clustering.


What the Exxon Valdez and database clusters have in common


I was recently asked to comment on the proposed design for a project by a prospective new customer. The project involved a high number of simultaneous users, contributing small amounts of data each, and was to be hosted in the Cloud. The exact details were To Be Decided, but Amazon EC2 and MySQL were floated as likely candidates for the hosting and RDBMS components. (Although my ultimate recommendations would have at least considered using SQL Azure instead, given some of the time constraints and other technologies involved that would have dovetailed into the wider solution.)

The discussion got me thinking about database clustering as it relates to performance and scalability concerns. During the discussion of the above project with the client’s Technical Director, it transpired that the organisation had previously used clustering in an attempt to improve performance, and that the approach had failed.

The above discussion didn’t surprise me. It’s a misunderstanding I’ve witnessed a number of times, whereby people confuse the benefit that database clustering actually bestows. In short, people often believe that using such a design aids scalability and performance. Unfortunately, this isn’t the case. What such an architecture actually provides is increased reliability, not performance: if one database goes down, another is in place to quickly take over and keep processing transactions until the failed server can be brought back online. (It’s actually less performant than a standalone database, since any CRUD operations need to be replicated out to the duplicate databases.)

The analogy I usually give people when discussing the benefits and limitations of clustering is that it’s a bit like the debate about double hulls on oil tankers. As you may know, after the Exxon Valdez disaster the US Government brought in legislation that stated every new oil tanker built for use in US ports was to be constructed with a double hull. The aim was admirable enough: to prevent such an ecological disaster from ever happening again. However, it was also a political knee-jerk reaction of the worst kind: well intentioned, but not based on measurable facts.

Of perhaps most relevance to the topic was the small fact that those parts of the Exxon Valdez that were punctured were in fact double-hulled (the ship was punctured on its underside, and it was double-hulled on that surface). Added to this is the fact that a double hull design makes ships less stable, so they’ll be that little bit more likely to collide with obstacles that more manoeuvrable designs can avoid. And, just like in database clustering, the added complexity involved actually reduces capacity. (In the case of ships, the inner hull is smaller; in databases, the extra replication required means fewer transactions can be processed in the same amount of time with the same processing power.)

As with all things, the devil is in the details. You can design clustered solutions to minimise the impact of replication (e.g., if you make sure the clustered elements of your schema only ever do INSERTs, the performance hit will be almost negligible). But many people just assume that clustering in itself will automagically increase performance, and it’s that misconception that leads to most failed designs.
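To make the INSERT-only point concrete, here's a minimal, T-SQL-flavoured sketch (the table and column names are invented for this example, not taken from any project described here): because replicas only ever receive new rows, there are no conflicting UPDATEs or DELETEs for the cluster to reconcile, and "current state" is something you query for rather than something you mutate.

-- Hypothetical append-only table: cluster nodes only ever replicate INSERTs,
-- so there are no UPDATE/DELETE conflicts to reconcile between them.
CREATE TABLE dbo.SensorReading (
    ReadingId   BIGINT IDENTITY(1,1) PRIMARY KEY,
    DeviceId    INT          NOT NULL,
    ReadingTime DATETIME2    NOT NULL DEFAULT SYSUTCDATETIME(),
    Value       DECIMAL(9,3) NOT NULL
);

-- "Current state" is derived with a query rather than stored and updated in place,
-- e.g. the most recent reading per device:
SELECT  r.DeviceId, r.Value, r.ReadingTime
FROM    dbo.SensorReading r
JOIN   (SELECT DeviceId, MAX(ReadingTime) AS LatestTime
        FROM   dbo.SensorReading
        GROUP BY DeviceId) latest
    ON  latest.DeviceId = r.DeviceId
   AND  latest.LatestTime = r.ReadingTime;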


I’ve been involved in a couple of projects that featured either large amounts of data in one transaction hitting a replicated database, or large numbers of smaller individual transactions being conducted by simultaneous users. In neither case, in my experience, was clustering a good solution to the design challenges faced.

The first project I have as a point of reference was one I worked on back in 2007, that involved a business intelligence application that collected around a million items of data a month via a userbase of 400 or so. I was the lead developer on that 7-person team, and so had complete control over the design chosen. I also had the advantage of having at my disposal one of the finest technical teams I’ve ever worked with.

The system involved a SQL Server database that was used by around 30 back office staff, OLAP cubes being built overnight for BI analysis, and certain sub-sections of the schema being replicated out to users that accessed the system via PDAs over GPRS (which of course will have been replaced by 3G / 4G now). The PDA users represented the bulk of those 400 users of the system.

The design we settled upon was one that traded off normalisation and database size for the least impact on those parts of the schema that needed to be replicated out to the PDAs. So, CRUD updates made in the back office system were only transferred to near-identical, read-only tables used by the PDAs once an hour (this interval could be fine-tuned during actual use to aid performance or to speed up propagation of information as required). This approach meant that the affected tables had fewer sequential CRUD operations to carry out whenever the remote users synced over their low-bandwidth connections. And if they were out of range of connectivity altogether, their device still worked using on-board, read-only copies of the back-office data required.
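As a rough sketch of that pattern, with made-up names (dbo.Job, dbo.Job_PDA, dbo.SyncLog) standing in for the real schema, the hourly job only moves rows that have changed since the previous run into the flattened, read-only mirror table that then gets carried out to the devices:

-- Hypothetical hourly sync: copy back-office changes into a read-only mirror
-- table, which is the only thing replicated out to the PDAs.
CREATE PROCEDURE dbo.SyncJobsToPdaMirror
AS
BEGIN
    SET NOCOUNT ON;

    -- When the last successful push happened (dbo.SyncLog is a one-row-per-job bookkeeping table).
    DECLARE @lastSync DATETIME2 =
        (SELECT LastRunUtc FROM dbo.SyncLog WHERE JobName = 'PdaMirror');

    -- Push only the rows changed since then into the flattened mirror.
    MERGE dbo.Job_PDA AS mirror
    USING (
        SELECT  j.JobId, j.Reference, c.CustomerName, j.ScheduledDate, j.Status
        FROM    dbo.Job j
        JOIN    dbo.Customer c ON c.CustomerId = j.CustomerId
        WHERE   j.LastModifiedUtc > @lastSync
    ) AS changed ON mirror.JobId = changed.JobId
    WHEN MATCHED THEN
        UPDATE SET Reference     = changed.Reference,
                   CustomerName  = changed.CustomerName,
                   ScheduledDate = changed.ScheduledDate,
                   Status        = changed.Status
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (JobId, Reference, CustomerName, ScheduledDate, Status)
        VALUES (changed.JobId, changed.Reference, changed.CustomerName, changed.ScheduledDate, changed.Status);

    UPDATE dbo.SyncLog SET LastRunUtc = SYSUTCDATETIME() WHERE JobName = 'PdaMirror';
END;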

The second main consideration in the design involved a large data import task that happened once every six weeks. One of my developers produced a solution that was algorithmically sound, but that quickly reached the limitations of what an ORM-driven approach can do. In short, it took several hours to run, grinding through thousands of individual DELETE, INSERT and UPDATE statements. And if any consistency errors were found in the data to be imported (which was not an uncommon occurrence), the whole process needed to be gone through again, and again, until eventually it ran without hiccups. It wasn’t uncommon for it to take a skilled DBA 24 hours to cleanse the data and complete the import task successfully. Meanwhile, the efficiency of those replicated parts of the schema used by the PDAs would be taking a battering. A better approach was needed.

In the end, I opted for using SQL Server’s XML data type to pass the bulk upload data into a stored procedure in a single transaction. Inside the procedure, wrapped in a reversible TRANSACTION, just those parts of the data that represented actual changes were updated. (E.g., it wasn’t uncommon in the imported data to have a DELETE instruction, followed by an INSERT instruction that inserted exactly the same data; the stored proc was smart enough to deal with that and only make those changes that affected the net state of the system). I designed the stored proc so that any errors would cause the process to be rolled back, and the specific nature of the error to be reported via the UI. The improved process ran in under a second, and no longer required the supervision of a DBA. Quite a difference from 24 hours.
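For illustration only, here is a cut-down sketch of the shape of that kind of stored procedure. The dbo.Products table, the XML element names and the column list are invented for this example (the real schema differed), and THROW assumes SQL Server 2012 or later, but the essentials are the same: one XML parameter, one transaction, only net changes applied, and a rollback with a reported error if anything fails.

-- Hypothetical sketch: bulk import passed in as a single XML document,
-- applied as net changes inside one reversible transaction.
CREATE PROCEDURE dbo.BulkImportProducts
    @payload XML
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Shred the XML payload into a relational shape once, up front.
        DECLARE @incoming TABLE (
            ProductCode NVARCHAR(32) PRIMARY KEY,
            Price       DECIMAL(10,2),
            Descr       NVARCHAR(200)
        );
        INSERT INTO @incoming (ProductCode, Price, Descr)
        SELECT  p.value('(@code)[1]',  'NVARCHAR(32)'),
                p.value('(@price)[1]', 'DECIMAL(10,2)'),
                p.value('(.)[1]',      'NVARCHAR(200)')
        FROM    @payload.nodes('/import/product') AS T(p);

        -- Apply only the net changes: a DELETE followed by an INSERT of
        -- identical data in the import results in no work here at all.
        MERGE dbo.Products AS target
        USING @incoming   AS source ON target.ProductCode = source.ProductCode
        WHEN MATCHED AND (target.Price <> source.Price OR target.Descr <> source.Descr) THEN
            UPDATE SET Price = source.Price, Descr = source.Descr
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (ProductCode, Price, Descr)
            VALUES (source.ProductCode, source.Price, source.Descr)
        WHEN NOT MATCHED BY SOURCE THEN
            DELETE;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;  -- surface the specific error so the UI can report it
    END CATCH
END;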

The second project that informs my views of clustered database designs was one that I wasn’t the design authority on. In this case, I was just using the database(s) for some other purpose. Prior to my involvement, a SQL Server cluster involving three instances of the database was set up, and kept in sync. The solution was designed for use by a vendor of tickets for all sorts of events, including popular rock concerts. It wasn’t an uncommon occurrence for the tickets to go on sale, and for an allocation of many thousands to be sold out in literally ten seconds flat, as lots of fans (and I’m sure ticket touts too) sat feverishly pressing F5, waiting for the frenzy to start. (And sometimes, if the concert organiser got their price point wrong, you’d find that only a few tickets were sold for an over-priced event, but that’s another story!)

In the case of this design, I never did see the failover capabilities come into play. Which is to say that each of the three SQL Server instances that replicated the same data for reliability reasons all stayed up all of the time. I had a feeling that if one ever went down for reasons of load, however, it wouldn’t have been long before the others would have suffered the same fate. And since it was an on-premise deployment rather than being cloud-based, something like a power cut would have stopped the show dead.

It’s not that common for hardware to fail just because a high number of requests are being made simultaneously. All that will happen is that some users won’t get through (and you as the site owner will never know that was the case). It’s not as if the server will shut down in shock. Even the recent low-tech attacks on large online retailers like Amazon using amateur tools like LOIC didn’t damage any critical infrastructure. At best, such conditions can saturate traffic for a short while. And often they don’t achieve even that much.

As a final point, I’d note that there are far greater concerns when designing an authenticated, public-facing system, such as CSRF vulnerabilities. Any attempt to address performance concerns by using clustering will inevitably cut across those security concerns, because the commonly-accepted mitigations typically rely on data being reliably saved and retrieved within short time frames (rather than becoming consistent eventually, as most clustering solutions allow for).

So, in summary, whilst there’s a place for database clustering for reasons of reliability, my earnest advice to anyone considering using that design for reasons of performance or scalability is to reconsider. There are usually changes you can make to your database schema itself that will have the same or better impact on the amount of data you can cope with in a short timeframe, and the impacts that data will have on your wider design. Don’t end up like Fry from Futurama, lamenting how your design might have worked had you only used (n+1) hulls/servers rather than n: