
Saturday, 26 November 2011

Moving, keep on moving.

The company that I’m working for at present is in the process not just of designing their latest product, but of moving offices. A small advance party has been sorting out arrangements at our new location over the past few weeks, and, on Friday, our team became the first group to take up permanent residence.

This is how the new place looks. Somewhere in that photo, there are eight developers hard at work [clue: the kitchen, containing the coffee machine, is just out of shot on the left].



It’s kind of weird having the whole place to ourselves for now. I guess that will change soon enough, as the rest of the company ships across over the next month.

The open-plan layout represents a stark contrast to the previous location, which was rather warren-like in character. It’s nice – I like it. The only thing that had me slightly perturbed was that, despite the new office being only a couple of miles from the old one, my satnav decided to get creative when I left for the evening, and took me home via an entirely different route to the one it used to recommend from the old place, involving lots of unfamiliar local roads. It turned out to be a very fast route indeed, but it did make me wonder, as I belted along in the dark, how the algorithms that satnav devices commonly use to select the ‘best route’ from A to B can reach such radically different conclusions when A moves by just a couple of miles. I think that must mean I’m a software developer at heart. :)

Saturday, 5 November 2011

Silverlight, Prism & MEF

In my latest project I’ve been working with Silverlight (version 4), in conjunction with Microsoft’s Prism framework and the Managed Extensibility Framework (MEF). Prism was designed by Microsoft's Patterns & Practices group, and is also sometimes known by the somewhat less catchy title of “Composite Application Guidance”. It's a generalised framework for composing WPF and Silverlight UIs out of loosely-coupled parts. It makes use of patterns like Dependency Injection / Inversion of Control to achieve loose coupling between the various constituent components of a solution. This means that those components can be designed by teams working in isolation, and leaves the design open to be extended in the future. Whilst Prism itself isn’t specifically tied to the MVVM pattern, the loose coupling it enables is typically used by developers working in Silverlight or WPF as a mechanism to help implement that pattern, and thereby to leverage the testability and extensibility benefits that MVVM bestows. My current project has also provided some other interesting challenges for me, in the form of technologies that I’ve been encountering for the first time in a live project. Specifically, for reasons related to the product itself, the project I’m working on utilises several Postgres databases running within a Linux environment – not an everyday challenge for a .Net developer by any means. This blog entry provides a brief overview of the above toolsets, and their purpose within the overall design.




MVVM (Model View ViewModel) is a development pattern that was first proposed by John Gossman, and is based on an earlier pattern by Martin Fowler called the Presentation Model. I won’t attempt to describe MVVM in full here (there are a great many other articles available on the web that do that far better than I could, for example this one here, as well as useful training videos that give you an introduction to some actual coding techniques for implementing the pattern – more on those later). For the purposes of this blog, I’ll just summarise by saying that MVVM is a pattern that takes code that would ordinarily reside in code behind files, and pushes that code into separate areas of the application known as ViewModels. The code that resides in these separate ViewModel classes represents the pure business logic of the application. This is where the pattern starts to become useful in ways that have made it the most popular and widely-adopted approach for extensible WPF and Silverlight applications. Separating concerns in this way makes the ViewModels in which the business logic resides independently unit testable, which in turn makes the design more resilient to ongoing development and change. So, any developers that need to alter the design in the future, after initial development has been completed and the product is in a care and maintenance phase, can have greater confidence that any changes they implement will not break existing code.
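
To make the idea a little more concrete, here’s a minimal, hand-rolled ViewModel sketch. The class and property names are purely illustrative, and in a real project you’d usually derive from a base class supplied by your MVVM framework of choice rather than implementing INotifyPropertyChanged by hand every time:

using System.ComponentModel;

// All of the 'business logic' lives here rather than in a code behind file,
// so it can be unit tested without any UI being involved.
public class CustomerViewModel : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            RaisePropertyChanged("Name");   // the View's data bindings pick this up
        }
    }

    // Pure business logic, with no dependency on any visual types.
    public bool IsValid()
    {
        return !string.IsNullOrEmpty(Name);
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void RaisePropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

The View then binds to properties like Name declaratively in XAML, and knows nothing about how they are implemented.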

Prism: Not the only option for implementing MVVM
Prism isn't the only option for implementing the MVVM pattern; there are many alternative MVVM frameworks out there, e.g. the MVVM Light Toolkit. As noted, Prism version 4 isn't in itself tied to MVVM in any way – it’s just a generalised framework for designing extensible, composite UIs that exhibit loose coupling. However, the loose coupling that Prism provides is commonly used by WPF and Silverlight developers as an enabling mechanism for implementing MVVM. Which framework and pattern you choose for a given solution is entirely a judgement call. Prism and/or MVVM may well be over-complicated for some types of simple application. Just because you're working in WPF or Silverlight doesn't mean you should automatically be using Prism, or implementing MVVM. There are overheads involved in adopting each, and they won’t be the right choice for every single project that uses those technologies. Even John Gossman himself has suggested that some of the approaches MVVM facilitates may be overkill for simpler applications that are not intended to be long-lived, extendable or scalable. Prism contains a great many discrete features for building composite UIs: it provides navigation support, the ability to group individual Views into Regions and Modules, integration with the separate Unity or MEF components for Dependency Injection (or with your own preferred DI framework, of which there are literally dozens, if you prefer), and features such as its Event Aggregator, which lets loosely-coupled components communicate in a way that leaves the publishing and consuming components freely independent of one another. However, what's nicest about Prism is that developers are free to pick and choose which specific parts of that significant feature set they will use for their own particular solution. You don’t have to use it. And, if you do choose to, you’re not forced to use it all.
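
As a taste of the Event Aggregator in particular, here’s a rough sketch of the way it’s typically used in Prism 4. The event, payload and ViewModel names are all made up for illustration; the only Prism types involved are CompositePresentationEvent, IEventAggregator and ThreadOption:

using Microsoft.Practices.Prism.Events;

// A strongly-typed event; the payload can be any class the publisher wants to share.
public class OrderSubmittedEvent : CompositePresentationEvent<OrderSubmittedPayload> { }

public class OrderSubmittedPayload
{
    public int OrderId { get; set; }
}

public class OrderEditorViewModel
{
    private readonly IEventAggregator _eventAggregator;

    public OrderEditorViewModel(IEventAggregator eventAggregator)
    {
        _eventAggregator = eventAggregator;
    }

    public void Submit(int orderId)
    {
        // The publisher holds no reference to, and has no knowledge of, any subscribers.
        _eventAggregator.GetEvent<OrderSubmittedEvent>()
                        .Publish(new OrderSubmittedPayload { OrderId = orderId });
    }
}

public class OrderHistoryViewModel
{
    public OrderHistoryViewModel(IEventAggregator eventAggregator)
    {
        // Subscribers register interest by event type alone; ThreadOption.UIThread
        // marshals the callback onto the UI thread for us.
        eventAggregator.GetEvent<OrderSubmittedEvent>()
                       .Subscribe(OnOrderSubmitted, ThreadOption.UIThread);
    }

    private void OnOrderSubmitted(OrderSubmittedPayload payload)
    {
        // React to the new order here, e.g. refresh a list of recent orders.
    }
}

The publisher and the subscriber never reference one another directly; the only thing they share is the event type itself.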

When implementing MVVM, Prism can be used in a range of discrete ways to help you achieve reliable, scalable results in a reasonable amount of time.
I won’t go into all of Prism’s more discrete areas that I touched upon above within this blog, but I can recommend this excellent guide to Prism, in conjunction with this equally-enlightening guide to Silverlight 4 development, for further reading. The Prism guide is part-written by the lovely Ward Bell, whose insightful videos on IdeaBlade’s DevForce ORM tool gave me many an enlightening chuckle a few years back. Get ’em on Kindle so that you can read them on your PC instead of lugging heavy books around – you know it makes sense. ;)

Before moving on from this discussion of Prism and MVVM, I'll just lastly note that there's a great little series of instructional videos by Microsoft's Mike Taulty available. 



It's a rare individual that can grasp technical concepts clearly and communicate them to others. Mike manages to do so beautifully over the course of just a few hours. His videos are great for getting a feel not just for what Prism and MVVM can do; along the way, they also give quite a bit of insight into how composite applications can be more complicated to understand than traditional, tightly-coupled designs. And they demonstrate how easy it is to build up an illusion of loose coupling using frameworks like Prism, whilst in reality just turning tight coupling that would otherwise cause build errors at design time into runtime failures that are harder to track down and eliminate from the design. When you get to the part of the videos where Mike is having to copy DLLs into bin folders by hand to ensure that the project still works as expected, you'll see what I mean.
I appreciate that Mike's videos do contain some contrived examples, but voluntarily immersing yourself back in the Bad Old Days of DLL Hell isn't what any pattern or practice that's meant to make developers’ lives easier should be about. That illusion of loose coupling, whilst really still having all the same interdependencies of tight coupling in disguise, is also evident when Mike doesn't attempt to type in the XAML configuration files that describe how his Modules should be loaded and what dependencies they will have on one another. He instead stops the video when it gets to those points and gets said config files from a Blue Peter-style pre-prepared example, because, in his own words, typing in such config files by hand is “prone to error”. It's ridiculously easy to get such configuration information wrong in ways that will only become apparent when a user tries to use the components controlled by them at runtime. And it's even easier to get the configs right in the first instance, only for them to be invalidated at some later point in the lifetime of the extensible application by subsequent development work. It's a judgement call for the developer/architect to make as to whether the risks vs rewards of using Prism make it the right decision for a given project.

Getting back to MVVM as a concept in its own right: as I mentioned the last time I wrote about Silverlight, one of the main strengths of and purposes behind the XAML-based design environment that WPF and Silverlight share is the separation of concerns that XAML facilitates between the related but distinct goals of design and development. Using XAML as a common medium allows designers that are primarily concerned with the look and feel of an application to work hand-in-hand with developers, whose primary responsibility is making sure the application actually does what it was functionally intended to do. MVVM adds a further refinement to that collaborative effort within WPF/Silverlight. It enables developers to protect their work against any unintended behaviour that may be introduced after initial development, by allowing their business logic code to be unit tested in isolation from the purely visual elements of the solution. For designers, MVVM allows the visual elements that they are most interested in (called the Views) to be designed independently from the application state and business logic stored in the ViewModels. MVVM is also useful for simulating application state at design time, allowing designers to refine the visual representation that the UI will need to present when it encounters those states at runtime. The final part of MVVM – Models – covers those parts of the application that are purely concerned with communicating with whichever specific back-end database is being used. This is particularly useful in allowing developers to focus on business logic in isolation, without becoming overly involved in issues that are more generally within the purview of DBAs.

Some clichés about Web Designers and Developers. And Women.
Designers use a tool called Expression Blend to work on the visual elements of WPF/Silverlight applications. As a developer, you can think of Blend as a great big Photoshop for Visual Studio 2010. It allows all those subtle little visual effects that differentiate between professional and amateur software products to be applied to functional designs in a way that takes those designs from being merely useable to being useable + refined. One of the first problems that users of Blend typically experience when working with .Net projects is that any moderately-complex business logic that is tightly coupled with UI elements has a fairly high probability of completely breaking Blend. Typically, the problems that break Blend in this way include things like database connectivity that is not available at design time being required to provide data for a UI element’s initialisation. Since Blend semi-initialises components in order to render them on its design surface, this presents an additional challenge for developers. If you’re a developer user of Blend, initialisation issues of this type are just about a manageable problem, since it’s possible to attach Visual Studio’s debugger to the Blend executable using Visual Studio’s “Debug -> Attach to Process…” feature, and thereby discover at precisely which point in the code Blend is failing to render a given control. Fixes to such problems will typically involve using the System.ComponentModel.DesignerProperties.IsInDesignTool property to intelligently protect those areas of code that are erroring out. For designers, who typically don’t touch the underlying code, this issue can present more of a brick wall – if there’s an area of code underlying a UI element that they need to style that doesn’t work, they’re pretty much stuck without a developer’s help.

By using MVVM, any code that UI elements depend upon can be separated out into its own classes (the ViewModels). By separating concerns in this way from the outset, it becomes far less likely that UI elements will be fundamentally unable to be rendered in Blend at design time. Most MVVM frameworks guard any risky constructor logic within a common ViewModel base class, using the DesignerProperties.IsInDesignTool property mentioned above. By solving this problem once and for all – avoiding, rather than dealing with the consequences of, design-time/runtime issues of this type – MVVM helps to significantly reduce the number of occasions when design members of the team have to call on busy developers for help.
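
As a rough sketch of that idea (the class and member names below are my own invention, rather than coming from any particular framework):

using System.ComponentModel;

// Illustrative ViewModel base class: derived classes get safe, canned data at
// design time, so Blend can render the View without live services or databases.
public abstract class ViewModelBase
{
    protected ViewModelBase()
    {
        if (DesignerProperties.IsInDesignTool)
        {
            LoadDesignTimeData();   // static sample data only
        }
        else
        {
            LoadRuntimeData();      // the real thing: services, database calls, etc.
        }
    }

    protected abstract void LoadDesignTimeData();
    protected abstract void LoadRuntimeData();
}

Derived ViewModels then supply canned data for Blend to display, and their real data-access logic never runs on the design surface.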

Concentrating on the developer-centric benefits of MVVM, by separating out business logic code into ViewModels developers obtain in return greater control over the stability and testability of their functional code. If you’ve worked on a software product of any level of complexity with a moderately-sized team, you’ll know that there is ample scope for code that has been written, debugged and tested to begin experiencing unanticipated behaviour at some later point in the development cycle, through no fault of the original developer or tester. Developers that are starting out (and even some that have been around long enough to know better!) are sometimes misguided enough to believe that their own code is bug-free.
This is usually not the case, but, even if we were living in some mythical Nirvana universe where we were all super-developers that produced error-free code first time every time, there is still ample scope for any developer of any level of ability to have bugs inadvertently introduced into their code by other team members as the collaborative software product you’re building together evolves over the course of the project. This is where the Unit Testing that MVVM facilitates becomes useful. Just to make my position clear on this subject: I’m not an advocate of what I would term ‘fundamentalist’ Test Driven Development. Which is to say that I don’t advocate the approach to TDD that involves writing tests before writing code, or that encourages writing lots of meaningless tests for even the most simplistic parts of the system. I do, however, support the concept of writing tests to protect existing human-tested code against unanticipated bugs being introduced by subsequent development. MVVM allows various frameworks such as the Managed Extensibility Framework (MEF) and Unity to work in conjunction with Unit Testing and Mocking frameworks such as xUnit and Moq to test ViewModels in isolation at design/test time. This allows developers to set up unit tests that automate the process of confirming ‘What should happen if…’ scenarios. [And, in the case of MEF, there is also lots more functionality on offer, including the ability to group loosely-coupled independent components of the solution into a coherent composite application. You can find lots of good info about how to use MEF in conjunction with Silverlight here. ]
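
To give a flavour of what such a test looks like, here’s a small sketch using xUnit and Moq. The ViewModel, the IOrderService interface and all of the names in it are invented for illustration – the point is simply that the ViewModel’s logic gets exercised without any UI, database or web service being involved:

using Moq;
using Xunit;

// Illustrative stand-ins for whatever your real solution defines.
public interface IOrderService
{
    bool Submit(int orderId);
}

public class OrderViewModel
{
    private readonly IOrderService _orderService;

    public OrderViewModel(IOrderService orderService)
    {
        _orderService = orderService;
    }

    public string Status { get; private set; }

    public void SubmitOrder(int orderId)
    {
        Status = _orderService.Submit(orderId) ? "Submitted" : "Failed";
    }
}

public class OrderViewModelTests
{
    [Fact]
    public void SubmitOrder_SetsStatusToFailed_WhenServiceRejectsTheOrder()
    {
        // 'What should happen if the back end rejects the order?'
        var service = new Mock<IOrderService>();
        service.Setup(s => s.Submit(42)).Returns(false);

        var viewModel = new OrderViewModel(service.Object);
        viewModel.SubmitOrder(42);

        Assert.Equal("Failed", viewModel.Status);
    }
}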


Well, so much for this overview. I’m enjoying working with Prism, as well as the other tools/environments that I mentioned such as Postgres and Linux for the first time, and thereby evolving into an ever-more-experienced developer with first-hand insight into how a range of technologies are being used on live projects. As the name of this blog suggests, it’s not just the code or the software products that I’ve been involved in building over the years that’s a Work In Progress, it’s myself and my professional experience that are ever-evolving. One of the things that makes that process of learning and personal development possible is having a great team to work with in a positive and forward-thinking environment. I’m very pleased to be working with such a knowledgeable team at this time, who have many strengths in those areas that are newest to me, and are open enough in their outlook to allow my ideas and discrete experience to be part of their already-impressive mix too.

Monday, 8 August 2011

Product Review: 120 Gb Outbacker MXP Biometric Hard Drive


This is a great little tool for using in environments where you’re required to work with sensitive data. The biometrics in the device behave flawlessly (having heard about some other less-than-impressive implementations of fingerprint recognition technology, I had been a little worried that this item would be one of those that had lots of ‘false negatives’, but I needn’t have worried – it does what it’s designed to do, and it does it every time). The drive itself is sturdy, small and absolutely silent in operation. It doesn’t heat up in the slightest, and the rubber feet and chunky construction mean that it sits steadily on whichever surface you put it on, so you can use the fingerprint reader with one hand, without having to use your other hand to steady the unit. It also comes with a power adapter (which, admittedly, looks a little flimsy and makeshift once you plug the UK adapter into the two-pronged US-format base unit). On my PC and laptop, the power adapter wasn’t needed anyway, provided I plugged it straight into a USB port (it wouldn’t work on its own through an unpowered USB hub). For £40, I think the fact that it comes packaged with an adapter at all is pretty impressive; many of the more expensive drives I’ve bought have had adapter sockets, but haven’t come with an adapter unit.

The pre-loaded software works seamlessly in Windows 7, and is well thought-out. You just plug the unit in, and a new drive appears in your list that contains the start.exe programme you need to get going. Double-clicking on start.exe guides you through the simple (less than one minute) process of registering two fingerprints and setting you up to use the drive. You choose an administrator password during that initial setup (which you can use to access the drive should the biometrics fail at any point, or should you be in a position where you can’t physically access the drive but still want to unlock it; e.g., if you’re using VPN to access a PC it’s connected to). After that initial setup, it’s just a matter of plugging the drive into whichever PC you want to use it on, swiping your finger once over the reader, and you’re good to go.

My only slight quibble is the ugly, branded “McAfee” logo that’s been plastered over the top of the drive! It’s hideous!! If you want to, you can remove this eyesore easily enough with the edge of a coin, just like using a scratch card, without damaging the underlying silver paint. To avoid any mess as the little flecks of powder-coat paint re-adhere themselves to the silver case after rubbing off the logo, I recommend covering the logo with ordinary, clear sellotape first, then rubbing gently over the tape with the edge of a coin, then peeling back the tape to remove the logo (which adheres neatly to the tape and comes off cleanly in one piece rather than turning into thousands of individual sticky dust particles).

In terms of transfer speed, the on-the-fly encryption used is just about noticeable. The transfer speed I saw on large ( > 6Gb) files was around 10Mb/s. My unencrypted SATA II drive manages around 40Mb/s by comparison. This was still a good enough performance for me, and compared favourably with the SSD + TrueCrypt Traveller Disk arrangement I had been using prior to this purchase.

All in all, minor quibbles aside, this is easily the best bargain I’ve seen in a while. £40 for a reliable 120Gb biometric drive that has reasonable transfer speed is simply superb value. I recommend this item for anyone that’s serious about security, but who doesn’t want to have the hassle of setting up TrueCrypt/Bitlocker etc, or remembering passwords, whilst moving data around.

You can buy one here.

Saturday, 19 February 2011

Code Reviews


One of the practices that’s become semi-common within development teams over the past ten years or so has been the observance of formalised management/peer reviews of team members’ work, known as Code Reviews. The specific format these reviews take, and who carries them out, varies quite a bit from workplace to workplace, and project to project. This article covers some of the approaches I’ve encountered within various working environments, and provides some personal viewpoints on what works and what doesn’t when carrying out reviews.






When I first started writing software professionally, in the late 1980s, it was a fairly solitary activity. Development ‘Teams’ tended to consist of at most two or three individuals. Often, only one of those people would be writing the actual code, and it was rarely the case that anyone other than that single coder would take an interest in the actual implementation of the solution (as distinct from a user-level understanding of the functionality being developed). Code reviews were therefore fairly superfluous to the development process – solutions either worked, or they didn’t, and it was usually fairly easy to see what was or wasn’t working without having to enter into too much analysis of the subject.

In recent years, since around about the turn of the millennium, it’s become more popular to have larger development teams, made up of more specialised (and less broadly-skilled) individuals. The reasons for this change in approach are many and varied, but perhaps one of the main drivers is that there are so many more development technologies available now in comparison to those early days, and having larger teams allows different parts of those larger and more diverse teams to better focus on more specific parts of the solution. It’s common nowadays to find 3-4 developers, a DBA (sometimes more than one, especially in the Public Sector), Business Analysts (including some that don’t have a coding background), Project Managers, and specialist users with expert knowledge of the problem being solved, all working together within dedicated teams aimed at producing a single software product. There are some advantages to this more modern approach, including greater support for individuals within the team and an increased ability to fill gaps left by any departing members, though there are some disadvantages too. Specifically, a corollary of working within those larger and more diverse teams is that, when working on multiple fronts concurrently, there is more chance of any one particular area of work getting out of sync with the rest of the team’s activity. By increasing the separation of concerns, development teams also multiply their potential points of failure. This is where Code Reviews come in.

Code Reviews, when done correctly, can have multiple benefits to a properly-functioning and well-motivated team. They can help individuals to communicate their ideas with, and engage in mutual learning from the experience of, their peers and technical management, thereby facilitating a two-way process of learning and mutual understanding. They can make the product being developed more robust and maintainable, since it stands to reason that the more members of the team that know about the logic behind the wider implementation (including parts of that implementation they may not have worked on directly themselves), the easier it’ll be for any one developer to take over another’s work if the demands of the team (sickness, holidays, re-prioritisation, etc) dictate. Perhaps most importantly, though, Code Reviews enable individual developers to understand the impact that their own work has within the context of the wider team’s direction. They enable individual members of the team to contribute their own part of the wider effort in conjunction with rather than in isolation from the concurrent activity going on around about them. In this way, they encourage each member of the team’s work to be part of a widely-understood set of greater goals, rather than just leaving each individual to contribute their part of what eventually becomes a patchwork quilt of loosely-aligned ideas. 

If carried out incorrectly, on the other hand, Code Reviews can be extremely damaging to a team’s morale. If conducted in the wrong spirit, advice passed on during code reviews can seem, from the perspective of the recipient, to merely represent unnecessary interference in, unwelcome distraction from, and unjustified criticism of, their work. They can create a climate of fear that stifles individual initiative. If done badly enough, they can breed resentment that ultimately leads otherwise skilled people to (correctly or not) conclude that they can’t do right for doing wrong, sapping their confidence and motivation, and diminishing the team’s productivity.

So, how best to do them properly? Well, in my humble opinion from two decades of being on the receiving and delivering end of same, here are some of the things that I’ve seen work worst and best during my career:


1) Keep It Private

I’ve found it best to always give reviews about individual developers’ work directly to the person concerned, face-to-face and in private. If more than one individual is involved in a particular piece of discrete work, it’s of course fine to have a conversation together with all of the parties concerned aside from the wider team, but any discussion of individuals’ work should never be done in front of uninvolved members of the team. It’s human nature to perceive even constructive criticism as a negative experience, where that constructive criticism is delivered before an audience of the recipient(s)’ peers. The self-same feedback that would be perceived as well-meaning advice if delivered in private, can sound like the harshest criticism if given in public; context is everything. I once knew a lead developer that not only gave code reviews in public at individual developers’ desks in front of the whole development office, but who also later wrote about those individuals’ work in clearly-identifying terms on his internet-facing public blog. I guess he must have mistakenly thought of the internet as an anonymous medium and considered his blog in much the same way that other people might think of their private diaries. Ouch. My advice is to never behave in this way, however tempted you may be to vent frustration and however much you may feel that an individual deserves it. Criticising your peers (or, even worse, your direct reports, who look to you for even-handed guidance and leadership) in public is never a classy way to behave. Doing so is guaranteed to immediately lose you credibility with the very people whose work you should be trying to positively influence – who wants to work for someone that appears only to be interested in embarrassing and deriding them in public? What you should aim to achieve out of each Code Review is a more motivated and better-informed team member, and an insight into that team member’s thought processes for yourself; it’s plain to see that you’re unlikely to achieve either of those outcomes by merely seeking to criticise or denigrate people in public.

Code Reviews are either Win-Win or they are Lose-Lose for the parties involved; by providing any constructive criticism you have to make in private, you’ll help ensure the former rather than the latter outcome.


2) It’s Not All About The Negatives

One of the things that can make Code Reviews become a morale-sapping chore rather than the productive exercise they should be is if a perception is allowed to develop amongst the team that they’re merely used to provide oversight of those being reviewed, to pick up on any areas where there’s room for improvement. For experienced technical professionals (who are most often experts in their particular field, and are regularly the most experienced member of the team in whatever it is that they individually specialise in) this can sometimes feel like a wholly-negative experience. Even where the technical reviewer is of equal experience in a given technology, it’ll certainly be the case that the person being reviewed has a greater knowledge of the solution as it was developed, since they will be the very person that built the thing. They’ll know precisely why they chose implementation ‘X’ over possible alternative ‘Y’, and can resent it if someone with ten minutes’ understanding of their work chooses to focus exclusively upon perceived negatives in preference to recognising the bigger picture. It’s not that reviewers should be afraid of giving relevant feedback where necessary – they should by all means do so, that’s one of the things they’re there to do – but it is highly beneficial to the process if that feedback can be delivered in a way that avoids alienating the skilled people whose initiative and expertise the reviewer should be seeking to encourage and nurture. One of the key things that makes for such an effective review is that the reviewer should make clear that any constructive criticism being passed on is being given within the context of a generally acceptable performance by the person being reviewed (if indeed that is the case, which it should be in all bar exceptional circumstances: if the reviewer was responsible for mentoring and briefing the person whose work they are now reviewing, and the reviewer has been keeping an eye on source control during the development process, any major problems discussed during a code review shouldn’t come as a total surprise to either the reviewer or the developer – unaddressed major problems should have been spotted and raised well in advance of a retrospective code review). Showing appreciation for the many positives of your direct reports’ efforts within the wider context of a rounded and balanced review of what they have produced goes a long way to lending credibility to any suggestions for improvement you may have to make.


3) Don’t Nit Pick

This is related to but subtly different from It’s Not All About The Negatives above. The first and foremost consideration when you’re giving a review should be whether the work you’re reviewing actually does what the developer functionally intended, and what any specification document envisaged. If the solution does meet those criteria, that’s the biggest single tick in the box, whether or not you may have implemented things subtly differently yourself. Within the context of that big picture, a rounded review may also encompass some more esoteric developer-level issues such as the coupling and cohesion evident in the design, how well abstraction of concerns has been achieved, whether the developer has adhered to any design patterns that had been set in advance, the readability of the code, whether the developer has included meaningful comments, etc. However, it’s advisable to avoid commenting on trivial issues, such as the variable names used, or the developer’s use of white space (unless there are glaring and inconsistent gaps that make the code hard to read). The only time it’s appropriate to comment on something as subjective and individual as variable names is if the developer is using generic names like String1, String2, etc, instead of labels that make it clear what the variable is actually for. Received wisdom on the ‘best’ naming conventions for Variables, Classes, etc varies wildly from workplace to workplace, and even from project to project within the same workplace. On the whole, it’s best not to focus too much on minor details like these during formal reviews. If you make the mistake of being over-controlling, whatever you may ‘gain’ from that approach in terms of control and code consistency, you invariably lose in terms of individual initiative and the developer’s pride of ownership in their work (and the inner motivation to do their best that professional pride brings). Trying to micromanage every aspect of a person’s work is a sure-fire way to stifle their initiative, and ruin their self-confidence, which in the long run reduces the team's productivity. It’s far more beneficial if developers walk away from code reviews with an accurate sense that the reviewer trusted their judgement enough not to sweat the small stuff, and only bothered them with feedback that was worth their time and the reviewer’s time.


4) Use The Review To Communicate Across The Team

I mentioned above that one of the most beneficial aspects of Code Reviews is that they allow individuals within the team to gain a broader perspective on where their work fits into the bigger picture of concurrent activity going on within the wider team. Code Reviews can be a good time to discuss other areas of the wider solution with developers that are uninvolved with those areas, and to explain where the work being reviewed is intended to fit in with those separate parts of the wider team’s activity. This helps provide an element of useful redundancy within the team (in case developers need to cover for one another or take over one another’s responsibilities for a time), and helps communicate a greater understanding of why certain strategic decisions have been made. From the perspective of the reviewer (who will generally be the lead developer or development manager), Code Reviews also help communicate intent from individual developers back to those guiding the overall solution; as touched upon earlier, good reviews should involve a two-way flow of useful information and feedback.


5) Write It Down

Code Reviews should be formally documented, and held confidentially for later reference only by the reviewer and the person whose work was the subject of the review. This formal record of the review should ideally only be 1-2 pages long, will contain fewer than 500 words in most cases, and no more than 1000 words in any event. The written outcome should be a list of bullet points of any positive and negative observations to be raised, that the reviewer brings two copies of to the meeting and uses to guide the discussion, then e-mails to the person being reviewed after the meeting. It’s advisable not to send this document in advance, since the brief nature of a bulleted list can be taken out of context if the issues it pertains to aren’t given context by being elaborated upon in person. Another reason for waiting until after the meeting to mail the document to the recipient of the review is that it allows you to incorporate any feedback from the meeting itself into the formal record, and to note any specific outcomes that are expected (such as any remedial work that may have been requested). If you’re feeling brave and you’d like to encourage openness and communication, it’s also an idea to include a section at the bottom of the document headed: “And how did I do?”. This encourages the recipient of the review to give you any feedback they wish on any aspect of the process. Any given project needs to have one distinct individual that is entrusted with responsibility for the overall direction of the solution, and that individual will usually be the person conducting the code reviews (though I have seen working environments where developers have been asked to review one another’s code on a regular basis, which I don’t think worked particularly well - developers like to be able to focus on their own work in the main, and they often perceive it as a distracting and unnecessary chore if they’re instead tasked with regularly commenting on their peers’ work; perhaps most importantly, though, developers’ time is too valuable to waste on administrative tasks that the technical leadership of the team should be shouldering for themselves). Meaningful communication between members of the team is the key to making sure that the main technical decision maker knows everything they need to know to do their job effectively. If communication is truly to be bi-directional, in a code review it’s advisable to document any feedback from the person being reviewed alongside any critique the reviewer may have made of their work.






So much for Code Reviews, and the best and worst of how I’ve seen them employed in various places. I of course welcome any reviews of the above that anyone out there may care to make. ;)

Thursday, 10 February 2011

ASPX Page Life Cycle


One of the most fundamental concepts of ASP.Net – and indeed of any transactional web development technology – is the concept of Postbacks. That is, the idea that an interactive web page may be used by inputting information via the interface presented at the client (i.e. web browser) side of the transaction, and then ‘submitting’ the page so-influenced by user input for processing by the web server. Web users are highly familiar with this concept from a user perspective – whether they’re shopping at Amazon, commenting on stories on the BBC News Site, posting threads on web forums, or whatever, it’s widely understood that in order to effect any change whatsoever with an interactive website, users need to input what they want at the browser UI, and ‘post’ the page back in some way (often by simply hitting a button marked ‘submit’, but also in many other ways specific to context).

For web developers, facilitating this type of interaction with users presents many challenges, not least of which is the fact that the web is essentially a ‘stateless’ medium. Which is to say that each time an interactive web page is ‘submitted’ for processing, from the perspective of the web server it’s as if it is the first time it has interacted with the user in question, and so everything about the transaction that is being attempted must be included in the page that has been submitted (including information that, from the user’s perspective, they may have submitted several steps previously, e.g., they may have entered their Amazon user account name and password previously, and have added several items to their ‘basket’ in the meantime, but they don’t expect to have to enter their credentials again when they click ‘Purchase’ as the final step in the process – they expect the page to somehow have ‘remembered’ that they had provided that information already).

This article is about how ASP.Net handles Postbacks in a stateless medium, and focuses in particular on the .Net-specific concept of the page life cycle of an ASPX page.  






As mentioned above, facilitating interaction between website owners and website users, and between website users and one another, presents many challenges, whatever development technology you’re using. The HTTP protocol upon which the web was originally based, and upon which it still relies heavily today, was really designed for allowing people to share ‘static’ content with one another. The web, as it was originally envisioned, allowed users to share simple text files of a fixed content with their peers. Each request for a ‘page’ would result in the exact same HTML being rendered, until and unless the publisher of that page deliberately changed the content manually. For this type of interaction, a stateless communication protocol is perfectly suitable: user requests a page, user gets that page; simple. However, that original concept isn’t how users expect the web to work any more. Today, they expect to be able to interact with, not merely consume, the content of the websites they use. Providing this type of interactivity has involved developers finding ways to allow an essentially static medium to somehow ‘remember’ individual users, and to interact with them in ways that make sense within the context of those preceding interactions. ‘Remembering’ previously-encountered information in this way is known as ‘persisting state’.


The main tool that ASP.Net provides to facilitate the illusion of persisting state in interactive web pages is the concept of an ASPX page. Simply put, an ASPX page, also known as a ‘web form’, is a type of container that can hold several discrete types of object – collectively known as ‘controls’ – that allow a user to input information, or otherwise interact with the page. The types of control that can be placed in an ASPX page (which I’ll refer to simply as a ‘page’ from here on in) are many and varied. There is, for example, a particular class of objects known as ‘web controls’, which are defined in the System.Web.UI.WebControls namespace of the .Net Framework. These controls provide many ready-made interactive elements, such as text boxes, radio buttons, drop down lists, and others, that retain the values entered into them by users across different postbacks of the page. Developers can additionally create more complex interactive elements of their own design, known as ‘user controls’. User controls employ many of the same paradigms as web controls, but require more developer knowledge to build, whilst providing a greater degree of control over visual presentation for the developer. The last thing that developers should know about controls (of whatever type) is that they may facilitate custom Events, specific to the type of control. That is, they can be designed/used in such a way as to ensure that a postback occurs and that specific code runs at the web server in response to users interacting with the controls in a certain pre-defined way. E.g., textboxes may be designed to respond to users entering text in them, and buttons and images may be designed to respond to being clicked, etc.


The way that ASPX pages manage the feat of appearing to persist data within the confines of the stateless medium of the internet is by using something called ViewState. In essence, ViewState is a special type of non-visible or ‘hidden’ control, contained within the ASPX page, whose job, amongst other things, is to keep track of all of the data entered in and required by each of the other data-dependent controls on the web form. It is the ViewState mechanism that allows a web server running Internet Information Services (IIS) to interpret the virtual ‘state’ of a page (a ‘state’ that was potentially built up over several preceding postbacks). If you ever look at the underlying page source of an ASPX page as it is rendered in a browser, you may well see something that looks like this somewhere in the middle of the HTML:


<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwUJNjI0NjY1NDA2DxYCHgpNeVByb3BlcnR5BQMxMjNkZNsT9/JHdta88TymHVqnYrr7vzIS8vtD3DxRcAt1+MLp" />


This is the page's ViewState. It’s not visible to the user (hence the type="hidden" attribute of the input HTML element), but it is nonetheless very important to maintaining the illusion that the system the user is interacting with ‘remembers’ key aspects of the user’s preceding interaction – in other words, the illusion of persisting state. The value contained in the value="......" attribute is a serialised representation of the data used by other controls in the form. There’s a lot more to ViewState, and indeed optimising it is a whole other topic in itself that I’ll perhaps write about another time. Within the scope of this discussion, though, it can be understood to be a serialised hash table that ‘stores’ named Properties of controls on the page, and their values, across different postbacks of the page. There’s another type of object, known as ControlState, that performs a similar job to ViewState, but which specifically exists to track properties of controls regardless of any other settings a developer may choose to implement. The main distinction between ViewState and ControlState is that, whilst ViewState may optionally be developer-configured to be disabled, ControlState cannot; where ViewState doesn’t exist, ControlState is necessary to do all the work of tracking control properties on its own.
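
To illustrate that ‘serialised hash table’ idea from a code behind perspective, here’s a small sketch (the page, property and control names are mine, not anything framework-defined):

public partial class OrderPage : System.Web.UI.Page
{
    // A page-level 'property' whose value survives postbacks because it gets
    // serialised into the hidden __VIEWSTATE field along with everything else.
    protected int BasketCount
    {
        get
        {
            object stored = ViewState["BasketCount"];
            return stored == null ? 0 : (int)stored;
        }
        set { ViewState["BasketCount"] = value; }
    }

    // AddToBasketButton is assumed to be declared in the matching .aspx markup.
    protected void AddToBasketButton_Click(object sender, System.EventArgs e)
    {
        BasketCount = BasketCount + 1;   // round-trips via ViewState on every postback
    }
}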

So, to recap, ASPX pages contain various types of control, and the page itself may be submitted by users to the web server for further processing in a process known as making a postback. From the users’ perspective, making a postback conceptually involves ‘submitting’ the page, thereby performing some data-driven action, such as processing an order, making a post on a forum, etc., in a way that is abstracted from and invisible to the user. Now, let’s look at what actually happens from a developer perspective at the web server when such a postback actually takes place.

As well as allowing developers to design the visual and non-visual elements of the user interface, at design time ASPX pages also include a separate element known as the Code Behind. This may be thought of as the part of the design that gets executed at the web server end (as opposed to anything that happens in the users’ browsers). The Code Behind part of the page facilitates interaction between disconnected users submitting pages via postbacks out in the stateless web, and those ‘back end’ parts of the system that turn those postbacks into meaningful actions, such as transactions with underlying databases. For example, this is the part of the wider system that allows product orders placed by users on commercial websites, and posts made on internet forums, to be turned into records in a more permanent medium, such as a relational database. The way that the Code Behind facilitates this interaction relies heavily on several of the elements already touched upon, as well as some others that will be discussed below. In summary, whenever a postback takes place, the following Events fire in the order presented below within the Code Behind:


1. Pre-Init
2. Init
3. InitComplete
4. PreLoad
5. Load
6. Control Events
7. LoadComplete
8. PreRender
9. PreRenderComplete
10. SaveStateComplete
11. Render*
12. Unload


*    Render isn’t an Event per se, but a Protected Method that gets called for each control on the page at this point in the proceedings.


You can ‘subscribe’ to these Events by creating an appropriate ‘handler’ in the code behind. For example, to create a handler for the page Pre-Init Event, simply enter the following code:

C#:
void Page_PreInit(object sender, EventArgs e)
{
    // Your custom code here
}


VB:

Private Sub Page_PreInit
(ByVal sender As Object, ByVal e As System.EventArgs)
Handles Me.PreInit
'Your custom code here
End Sub

In VB this is slightly easier than in C#, in that you need only select the Event you want to create a handler for from the drop-down list provided in the Visual Studio IDE, as demonstrated:


In C#, you either need to type the code by hand, or use the following method:



As noted in the C# version, using the above method will insert a Handler that is specific to the individual ASPX page, rather than one that is generally applicable to the System.Web.UI.Page object that all ASPX pages are based upon. This isn’t hugely problematic, but it is just a little neater and generates less code to be maintained if you rename the Handler using the nomenclature “Page_EventName” as suggested in the example, then delete the line reading:

this.PreInit += new EventHandler(WhateverYourPageIsCalled_PreInit);

that you used temporarily. You delete this line because, once the Handler has been renamed, the ‘WhateverYourPageIsCalled_PreInit’ method it refers to no longer exists – leaving it in place would simply earn you a compilation error saying as much. The reason that the “Page_EventName” Handler naming format is slightly more robust is that, provided you include the AutoEventWireup="true" attribute in the @ Page directive within the markup component of the ASPX page (and that attribute is included by default: see screen dump below), ASP.Net wires up any Page_EventName handlers you define behind the scenes, rather than you having to maintain your own add-handler code.



The VB environment is just a little bit smarter on this occasion than the C# one, and makes use of this useful feature without having to be overtly told to do so.

Moving on, the following sections describe what each of the Events you may create handlers for actually does, and provide suggestions for when you may want to use them.


Pre-Init

By the time this event runs, certain pre-processing will have occurred that allows the developer to decide what should happen at this juncture. Certain useful page properties will have been set, including the IsPostBack and IsCallBack properties. Respectively, these properties allow the developer to know whether the page is being ‘submitted’ for the first time by a user (in which case IsPostBack will be false), and whether the ‘submit’ action was triggered as a result of client script utilising a CallBack (a sort of mini-postback, involving processing of the code behind by the server without utilising the ‘submit’ mechanism or refreshing the page, used in AJAX-enabled applications).  It’s possible in ASP.Net to submit the results from one ASPX page into the code behind for another ASPX page; if this is the case, then the IsCrossPagePostBack property of the page will be true at this stage.

As well as using the above properties to direct which Methods of your code behind to run, it’s common to use the Pre-Init handler to dynamically set any desired aspects of the page, such as setting the Theme or Master Page (both of which can affect the appearance and layout significantly), depending on the context.

It’s important to note that at this stage of the page life cycle, the controls on the page will not have had their values retrieved from the page’s ViewState yet. So, don’t use this handler to try and retrieve and act upon user input just yet.
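
By way of a quick sketch (the query-string parameter, theme names and master page path are all made-up examples – the point is simply that this is the stage at which such things can still be changed):

void Page_PreInit(object sender, EventArgs e)
{
    // Choose a Theme and Master Page dynamically, based on context.
    if (Request.QueryString["mode"] == "accessible")
    {
        Page.Theme = "HighContrast";
        Page.MasterPageFile = "~/Masters/Accessible.master";
    }
    else
    {
        Page.Theme = "Default";
    }

    // IsPostBack is already available here, so branching on it is safe...
    if (!IsPostBack)
    {
        // ...but don't try to read control values yet – ViewState hasn't been loaded.
    }
}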


Init

Each control on the page has an Init Event, which may be separately handled. Each of the individual controls’ Init Events will run before the page’s Init Event does.

MSDN advises to use this Event to “read and initialise” control properties, and also states that “controls typically turn on view state tracking immediately after they raise their Init event”. Be advised, however, that when I double-checked the actual behaviour under Framework version 4 and IIS 7 while writing this article, what I observed was as follows:


·        You may validly set properties of controls during this Event only where the current page is not experiencing a PostBack (i.e., provided that this is the first time the user has submitted this page). Where you do initialise any properties here, you may expect those properties you initialise to be persisted to the control/page sent to the client during the first time the page is loaded only. i.e., if you set the ‘Text’ Property of a textbox during this Event, whatever you set that ‘Text’ Property to will be shown on the page as seen by the user, during the first processing of the page only, but not during any subsequent postbacks.

·        ViewState is made active for individual controls during their own individual Init events, but be advised that setting values in ViewState directly is not possible within the Init event. NB: this is usually done to model generic ‘Properties’ within code behind pages, and the syntax takes the form:

     
ViewState("MyProperty") = "123"     (VB)
ViewState["MyProperty"] = "123";    (C#)


·        If you set default values for control properties in the ASPX page Markup, you can read those initialised properties during this Event. Any dynamically-initialised properties of user controls may also be read here (which might be important, depending on what you intended your user control to do). However, please note that whilst the above is all well and good, other behaviour observed during this event is a bit odd and counterintuitive. Specifically, you should be aware that, if this is a postback and the user has changed a property of a control, that user-set value will not yet be reflected by the control during this Event. Even worse, during this event any user-set controls will incorrectly appear to show the initial values they held when the page was ‘new’; so control properties are not merely missing during postbacks, but misleading and incorrect. Please remember this fact, and guard any code you place within the Init event handler with an appropriate check on Page.IsPostBack (typically an if (!Page.IsPostBack) block) to ensure that it behaves as you intend during all stages of the page’s life cycle – see the sketch below.
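
For example, a minimal Init handler that respects that advice might look something like this (the control name is made up):

void Page_Init(object sender, EventArgs e)
{
    // Only initialise control values on the very first request; during postbacks
    // the properties read here can be stale or misleading, as described above.
    if (!Page.IsPostBack)
    {
        GreetingTextBox.Text = "Hello";   // GreetingTextBox is declared in the markup
    }

    // Writing to ViewState here is too early – tracking isn't switched on until
    // just before InitComplete (see the next section).
}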



InitComplete

Between the Init and InitComplete Events precisely one thing happens: ViewState tracking gets turned on. So, using the

ViewState["MyProperty"] = "123"

syntax mentioned earlier will cause values to be ‘permanently’ persisted to ViewState during this Event.


PreLoad

MSDN says that this Event runs after the page loads ViewState for itself and all child controls on the page. As noted above, however, I found that ViewState had already loaded during InitComplete. All postback data processing should have been completed by the time this Event runs.


Load

The page’s own Load Event runs, then the Load Event for each individual control on the page runs. Event handlers for the page’s and individual controls’ Load events may reliably be used to set control properties programmatically during postbacks.

Load Event Handlers are a good place to initialise and open any database connections that may be required for further processing during the next stage.
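
Here’s a small sketch of that first-load pattern (the connection string name, the SQL and the GridView are all illustrative). Note that in this sketch the connection is opened and disposed within Load itself; if you genuinely need a resource to stay open for the later stages, you’d hold it in a field and close it in the Unload event instead:

using System;
using System.Configuration;
using System.Data.SqlClient;

public partial class ProductsPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Query the database on the first request only; on postbacks the grid's
        // contents come back via ViewState, so there's no need to re-query.
        if (!IsPostBack)
        {
            string connectionString =
                ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT Id, Name FROM Products", connection))
            {
                connection.Open();
                ProductsGridView.DataSource = command.ExecuteReader();
                ProductsGridView.DataBind();   // ProductsGridView is declared in the markup
            }
        }
    }
}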


Control Events

Individual control Event Handlers will run at this point in the page’s processing. So, if you’ve created a TextChanged event handler for a TextBox, this is the point in the page’s life cycle where that handler’s code would run.
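
For instance, assuming a TextBox declared in the markup with AutoPostBack="true" and an OnTextChanged attribute (the control names here are made up):

// Markup, for reference:
//   <asp:TextBox ID="NameTextBox" runat="server" AutoPostBack="true"
//                OnTextChanged="NameTextBox_TextChanged" />
protected void NameTextBox_TextChanged(object sender, EventArgs e)
{
    // Runs during the 'Control Events' stage: after Load, but before LoadComplete.
    GreetingLabel.Text = "Hello, " + NameTextBox.Text;
}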


LoadComplete

LoadComplete runs for the page only, not for individual controls. Place any code that relies on individual control event handling to be complete here (e.g., you may have code that determines whether a ‘Complete Transaction’ button is enabled here, depending on whether all necessary data has been entered in other controls first).
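
A sketch of that kind of check, again with made-up control names:

void Page_LoadComplete(object sender, EventArgs e)
{
    // All of the individual control event handlers have run by now, so it's safe
    // to make decisions that depend on their combined results.
    CompleteTransactionButton.Enabled =
        NameTextBox.Text.Length > 0 && TermsCheckBox.Checked;
}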


PreRender

An important part of what an ASPX page does is that it dynamically generates HTML markup pertinent to the present state of the page and its constituent controls (remember that discussion about how the web was originally conceived to be a static medium earlier? – well, it still is, and dynamic rendering of HTML in this way is the part of the whole ‘page life cycle’ process that makes it appear that a static medium is capable of generating interactive content based on user input). At this stage in the cycle, all controls have been added to the page, have been updated with the latest data from user input and server-side processing, and are ready to be rendered as HTML.

This Event may be used to make any final changes to control properties before final rendering takes place.


PreRenderComplete

This Event fires just after all data bound controls whose DataSourceID property has been set call their DataBind methods. Data Binding is outside the scope of this article.


SaveStateComplete

The SaveStateComplete Event fires after ViewState and ControlState have been finalised. MSDN advice says that any changes to controls made during or after this Event will be rendered, but will not be persisted during the next Postback. I didn’t find this to be the case – for example, setting the ‘Text’ property of a TextBox during this event will both change what gets rendered in the textbox when the page is presented to the user, and will persist the value set through to the next postback (and beyond). It just goes to show, it pays to check advice, even advice from the horse’s mouth, with each new release of the framework and IIS.


Render

Render isn’t an Event that can be handled, but a Protected Method that exists for, and gets called on, each control on the page at this point in the proceedings. This is the Method that outputs the actual HTML that has been dynamically generated from all the foregoing processing within the page. The ASPX page passes a Framework object called an HtmlTextWriter into the Method as an argument, and the control writes its calculated HTML output, specific to its present context, into that HtmlTextWriter.

In 99% of all cases, the HTML generated by ASP.Net will be smart enough to ensure that what the developer intended to be presented to the user is what actually gets sent. However, there are a small number of cases where pages are just a little too complex, or where the evolving capabilities of the ever-more-complex browsers that are continually being developed fall out of sync with the most recent version of the .Net framework, leading to the ASP.Net rendering process sending HTML that is inappropriate for a particular user’s setup. In these cases, the Render Method may be overridden by the developer.

The following example shows one way of overriding the Render Method, which I used on a live project recently. The basic crux of the problem was that the ‘name’ attributes of the HTML tags rendered by the ASP.Net engine were of a particular format, and, for the purposes of working in harmony with another solution that I didn’t have access to the source code for, those name attributes had to take on a different naming convention. I tried a couple of approaches, but the one that produced the most satisfactory results is reproduced below. The basic gist of the solution is that the Render Method is overridden in my custom user control, and the HtmlTextWriter object that the Render Method takes as input from the ASPX page is tinkered with to ensure that each of the name attributes inserted by the ASP.Net engine is replaced by a deliberately-set name chosen by the developer instead:

protected override void Render(HtmlTextWriter writer)
{
    if (QuestionLabel != String.Empty && RenameControlsAsPerQuestionLabel == true)
    {
        // Render the control into an in-memory buffer first, rather than straight
        // to the response, so that the generated HTML can be amended before output.
        StringBuilder sb = new StringBuilder();
        StringWriter sw = new StringWriter(sb);
        HtmlTextWriter hw = new HtmlTextWriter(sw);

        base.Render(hw);

        if (ResponseControlType == ControlTypeEnum.CheckboxGroup)
        {
            // Swap the auto-generated name attribute of each checkbox for the
            // deliberately-chosen name required by the external solution.
            foreach (ResponseOption o in AvailableResponses)
            {
                sb.Replace(
                    "name=\"" + o.ResponseCheckBoxForCheckBoxGroupTypeQuestions.UniqueID + "\"",
                    "name=\"" + o.ResponseControlName + "\"");
            }
        }
        else
        {
            sb.Replace("name=\"" + responseCtl.UniqueID + "\"", "name=\"" + QuestionLabel + "\"");
        }

        // Finally, write the amended markup to the real response stream.
        writer.Write(sb.ToString());
    }
    else
    {
        // No renaming required – let ASP.Net render the control as normal.
        base.Render(writer);
    }
}
The times when developers should need to override ASP.Net’s natively-generated HTML are few and far between, but it’s useful to know how to do this should it be required.


Unload

The last thing that happens, once the page has been rendered, is that the Unload Event gets called for each control on the page, then for the page itself. This is the best place to close any open connections to databases or files at the server side (either in the page-level event for general connections, or in control-level Unload events where a specific control has exclusively used a particular resource during the processing of the page).
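
A minimal sketch of that clean-up, assuming the page opened a connection earlier in its life cycle (e.g. during Load) and kept hold of it in a field:

// Held open across the control-event stage; the field name is illustrative.
private System.Data.SqlClient.SqlConnection _connection;

void Page_Unload(object sender, EventArgs e)
{
    // The page has already been rendered by this point, so this is purely clean-up;
    // don't try to write anything to the response here.
    if (_connection != null)
    {
        _connection.Dispose();
        _connection = null;
    }
}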






That’s pretty much it for the page life cycle. It should be noted that the life cycle for a CallBack is slightly different, but similar in most of the important respects noted above (the main difference is that Callbacks don’t involve submitting the whole page for processing, only discrete elements of it, and that ViewState is consequently unaffected by Callbacks). It’s pretty amazing to think how much innovation and ingenuity has gone into making what was originally a static document delivery system into the fully-interactive medium that it’s since become.