Web Visitors vs Users, Impatient vs Bored and how they affect Website Change Management

Why are users on your site?

  • To look around?
  • To do or achieve something?

I pondered this question after reading Gerry McGovern’s discussion on Impatient vs Bored.  He suggests that people using (or rather choosing ‘not’ to use) websites are actually more likely to be impatient than just bored with your content.

I think we need to explore the different types of sites and people visiting them to understand this a bit more.

Different types of sites

IT people have traditionally used the rather woolly terms Web Site and Web Application to differentiate between something simple and something more sophisticated.  There’s no official classification here, but there are usually some characteristics that point more to one than the other.

Characteristics of a Web Application

  • Dynamic content.  This could be driven from a database or external source
  • User interaction.  Users can register/update information, upload, download.  They can ‘do’ useful things on the site.
  • Commerce.  Users can buy things

Users and Visitors

People accessing the web can also be classified.
 
You could say that a visitor is

“someone who has a passing interest in your site, looking to find some information or browsing around for comparison purposes.” 

A user is

“someone with a longer term association or affiliation who potentially logs into the site, or gains knowledge of the structure and becomes expert in achieving their tasks.” 

It’s reasonable to assume that users are a subset of visitors.  Visitors and users will also access both types of sites.  This means that whether someone is a visitor or a user depends on the specific context of their goal at the time of access, and their past history on the site.  (phew – almost drew a Venn diagram there!).

Impatient and Bored

Impatience is something more likely to be experienced by a user who’s trying to complete a meaningful task – i.e. they have a certain expectation of how a site will work and perform.  Casual browsers are more likely to switch off from the site if they don’t like what they see.
If you want a huge generalisation then:

“Users with an affiliation to a site (web application) are likely to become impatient if their progress is impeded, and casual visitors to a site (web application or content site) are more likely to become bored if the content is not engaging or visually appealing.”

Underlying Factors

There are a couple of factors that underpin all types of web access:

  • Information Architecture (IA) – the structure of the information, sections and pages on the site.
  • Usability – the ease with which people can achieve their goals on the site.

IA is equally important to simple and sophisticated sites, as a visitor to a company brochure site needs to know that they can get around speedily and find what they’re looking for without undue delay.  Bad IA on a larger site is likely to grate on users over time, and people will find themselves frustrated because their navigation around the site isn’t logical.

A good and logical IA is often a matter of being consistent with de facto standards.  For example many web users now have subconscious expectations that company sites have ‘Contact Us’, ‘About Us’ etc.  Going against the grain here leads to impatience.

Usability is the detail in every interaction.  The sections and pages on the site may be completely logical, but if the developers have produced a whizz-bang Flash product catalogue widget that takes over a minute to load, then you’ll be getting some impatient users.  The effect will be similar if the flow of a page or workflow tries to go against simple and accepted interface design principles.  This could include using non-standard form elements on a page, or collecting information in a strange order just because it suits a back-end system (but not the user).

Functionality vs Visual Design

For the most part, functionality wins over visual eye candy with users.  Business users routinely put up with desktop applications that do what they need without swooping curves, dripping in glass buttons and subtle gradients.  There is however a growing expectation of a minimum level of visual design on the web.  Maybe this  makes up for the fact that sites still rarely deliver everything a user wants.

The Pressure to Redesign

Creative agencies will often suggest a site’s poor performance is down to the visual design not being up to scratch – as they want to perform that job.  This takes advantage of the (still) general lack of understanding about the web amongst company decision-makers. This surprisingly includes a lot of marketing departments who still only think ‘print’. 

The other extreme is marketers on a constant rebranding trip, constantly quoting ‘market risks’, effectively keeping themselves gainfully employed.

The model of development on the web over the years has been largely evolutionary, with change coming without warning, and largely without consultation with users.  This was OK in previous times (I refuse to say web 1.0), when user expectations were low, and the level of engagement with any one site was also low.  This is still true for many small sites.

The price of Success

With community sites becoming more mainstream and popular, companies now often elicit feedback to work out where to go next.  Sometimes when big changes are made (Facebook) with little or no communication, things can get a little heated with petitions and protests galore.
 
Just imagine if Microsoft significantly changed the interface to Word on the millions of computers around the world without notice.  It just wouldn’t happen! 

Sites like Facebook have learned the hard way that success on the web also brings a greater responsibility to your users when it comes to change.  Users of free services can simply vote with their feet, and increasingly do.  Facebook has flooded the social web space and so hangs on to many users, as it’s become the de facto standard. 

Very few sites can rely on such a situation.

Managing Change

So how can you manage change on websites?  When do you need a lick of paint, and when do you need a complete redesign?

The following isn’t an exhaustive list, but gives some thoughts on some tools and approaches to consider.

Understand your user base.

Use web stats tools like Google Analytics to understand where your users come from and where they go on the site.  Set up goals to see how successful things like your payment workflow are – i.e. what percentage of people add something to a cart, and subsequently complete the transaction?
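The goal arithmetic itself is trivial; as a rough illustration (all the numbers below are invented, not from any real site):

```python
# Hypothetical funnel figures of the kind a web stats tool reports.
# Every number here is made up for illustration.
visits = 10_000
added_to_cart = 800
completed_checkout = 200

add_rate = added_to_cart / visits                      # visit -> cart
completion_rate = completed_checkout / added_to_cart   # cart -> purchase
overall_conversion = completed_checkout / visits       # visit -> purchase

print(f"add-to-cart {add_rate:.1%}, "
      f"checkout completion {completion_rate:.1%}, "
      f"overall {overall_conversion:.1%}")
```

A big drop at one particular step (here, only a quarter of carts convert) tells you which part of the workflow to look at first.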

Analyse the paths into your site to see if there are any opportunities for SEO improvement, like better keywording, extra landing pages etc.

Make the right changes

Sometimes it’s appropriate to do some field research to work out the right changes to make on the site.  This could be from a variety of sources.  The key is to remain light-footed throughout the process so you can react to changes as they (inevitably) occur:

  • Business Requirements.  This is typically what drives most change, but internal people are not the only people in the equation.  They don’t use the site the way external users do.
  • Usability testing with the current site and a group of users can be quite revealing to find gaps that explain poorly performing site areas, and also give rise to new ideas.
  • User surveys can be effective, but they need to be offered sparingly, and in an optional way.  Keep things small and succinct to get the best return of ‘take home’ points.  Consider offering some reward for completing the survey.

Design it right and Try before you Buy

In order to react to change and feedback, you need to get people looking at your intended changes as quickly as possible.  The following is an example of an iterative approach from detailed design to implementation for a complex change that will affect a large number of users:

WireFrame

Wireframe development is a great place to start by designing layout and visualising key elements and interactions on the site.  This is specifically tackled before any detailed visual design to test the concepts with business people and prospective users.

Prototype

A prototype created from the wireframes puts some more meat around the concept.  This could range from page images with hyperlinks to allow clicking through the flow, to a slim ‘actual’ prototype in place on the site.  You’d typically build a ‘proper’ prototype if you’ve got some technical risk to overcome – e.g. proving a technical solution is possible for a given situation.  Tools like Axure exist to facilitate wireframes and prototypes in one. 

Prototype testing

This is then performed with a control group of users and/or with business users to assess the viability of the solution, and also to get valuable feedback and other ideas. 

The wireframes and prototype would then be updated again with further rounds of testing as required to get to a point where things are formalised enough to start development. 

Visual design may also creep into this area, as some people simply can’t say ‘yes’ until they see ‘exactly’ how something’s going to look, but try and limit this.  This is where you hope for a programmer who’s design-savvy. 

This phase ends with the wireframes being signed off by the business.

Visual design

This will no doubt continue to evolve as it’s the tangible stuff that businesses can ‘feel’, but should be tied down as early as possible.  The business should sign off completed mockups (e.g. from Photoshop), based on the approved wireframes.

Completing the Job

The rest of the job is standard develop/test/implement etc, but developing small chunks and testing early, and implementing often is always a good way to go.

If the original prototype was actually ‘functional’, then you might be able to go fairly quickly to some internal or public A/B testing, and with a bit of work you could find yourself finished. 

Whether you’re catering more for visitors or users, the first step to any change is putting yourself in their shoes.

Setting up a Continuous Integration .NET Build Server without Visual Studio

We start with Windows XP SP2 (it doesn’t really need to be a ‘server’).  My requirement is to rely on the .NET framework and open source tools – i.e. not requiring a Visual Studio licence.  We’re using .NET 2.0 so everything will be in reference to that.


I assume here that you’ll be familiar with the actual software below and just want to get a Build Server up and running without having to install Visual Studio.  If you’re not familiar with Continuous Integration then start by looking at Martin Fowler’s Continuous Integration article and then the info on CruiseControl.NET, as that’s the tool that pulls everything together. 


There’s help on each of the sites below for installing and using each of the tools, so I won’t go into detail about each one.  The order of the list isn’t critical, but you’ll probably have some issues unless all are done.



  1. Install IIS from Add/Remove Windows Components (used for CruiseControl.NET) if not already present
  2. Open up a command prompt and run C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i (to install ASP.NET in IIS)
  3. Install CruiseControl.NET (1.4) from http://confluence.public.thoughtworks.org/display/CCNET/Download.  NOTE:  If you’re using a remote SubVersion repository (as in my case) then you’ll need to run the CruiseControl.NET Service as a domain user that has access to the network location of the repository  rather than LocalSystem.  This is because CruiseControl will be using the SVN client to poll for changes. 
  4. Install Microsoft .NET Framework 2.0 SP1 from http://www.microsoft.com/downloads/details.aspx?familyid=029196ed-04eb-471e-8a99-3c61d19a4c5a&displaylang=en
  5. Install latest SubVersion (1.5) from http://www.collab.net/downloads/subversion/
  6. Install appropriate NUnit from http://nunit.org/index.php?p=download (targeting .NET framework 2.0 – I have 2.4.6)
  7. Install latest FxCop from http://www.microsoft.com/downloads/details.aspx?familyid=3389F7E4-0E55-4A4D-BC74-4AEABB17997B
  8. Install .NET Framework 2.0 SDK from http://www.microsoft.com/downloads/details.aspx?familyid=fe6f2099-b7b4-4f47-a244-c96d69c35dec&displaylang=en (you’ll need this to get around a NAnt bug whereby it can’t resolve an internal property to load the .NET framework)
  9. Download latest NAnt from http://nant.sourceforge.net/
  10. Download latest NAntContrib from http://nantcontrib.sourceforge.net/ (You’ll need this to run MSBuild)
  11. To save getting errors when building Web Projects (and other types) the easiest thing is to copy c:\program files\msbuild from your dev machine to the build server (otherwise you’ll have to alter the path of where the targets are in each project).

Once you’ve got all of these set up then you’ll be able to add some builds to your ccnet.config file (most likely c:\program files\CruiseControl.NET\server\ccnet.config).
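For orientation, a minimal project entry looks something like the sketch below.  The element names follow the CruiseControl.NET 1.4 documentation, but the project name, URLs and paths are placeholders, not from a real setup:

```xml
<cruisecontrol>
  <project name="MyProject">
    <!-- poll Subversion for changes; runs as the service user discussed above -->
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/MyProject/trunk</trunkUrl>
      <workingDirectory>c:\builds\MyProject</workingDirectory>
    </sourcecontrol>
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <tasks>
      <!-- hand off to NAnt, which can call MSBuild via the NAntContrib task -->
      <nant>
        <executable>tools\nant\bin\nant.exe</executable>
        <buildFile>MyProject.build</buildFile>
      </nant>
    </tasks>
  </project>
</cruisecontrol>
```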


Test that CruiseControl’s happy by going to http://localhost/ccnet.  It should come up OK but basically show ‘no projects’.


I’m not going to go into detail about setting up the builds here but may cover that in a follow up article, as it requires some more setup with SVN and a ‘standard’ project structure (to get benefit from reuse of build scripts).  If you’re still with me then I’ll share one last thing which might help… 


Some people choose to have NAnt in a standard place and just reference it from the PATH, but I now use the power of svn:externals to drag in NAnt, NUnit and other common external dependencies like MS Enterprise Library from a shared SVN location into each project.  This means you just ‘get latest’ on a project and it has everything needed to build – no installations or assumptions about tool locations.
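As an illustration, the svn:externals property on a project root might read something like the lines below (old-style ‘dir URL’ format; the local paths and repository URLs are invented):

```
tools/nant     http://svn.example.com/shared/tools/nant
tools/nunit    http://svn.example.com/shared/tools/nunit
lib/entlib     http://svn.example.com/shared/lib/entlib
```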

Refactoring the inefficient data loop

OK. Last one for today.  I’m not going to go into too much detail except to say that the offending piece of code loads all items in a ‘matrix’ database table and builds a business object collection hierarchy.  This table has 4 fields, one or more of which may be NULL – for different meanings.  I really don’t particularly like this type of approach but changing it right now would involve quite a bit of work.


The table looks something like:

Level1ID | Level2aID | Level2bID | Level2cID

This ultimately translates to a business object hierarchy of:

  • Rows with Level1ID (only) – all other fields null – are Level1 objects
  • Rows with Level1ID and Level2a ID (only) are Level2a objects

etc…

The matrix table just contains the primary keys for each object so the code loads each one in the style below…


Load matrix
for each row in matrix
{
    if row type == Level1
        Add to Level1Collection
    else if row type == Level2a
    {
        Load Level2a object
        Add Level2a object to Level1[Level1ID].Level2aCollection
    }
    else if row type == Level2b
    {
        Load Level2b object
        Add Level2b object to Level1[Level1ID].Level2bCollection
    }
    else if row type == Level2c
    {
        Load Level2c object
        Add Level2c object to Level1[Level1ID].Level2cCollection
    }
}

This seems reasonable enough (logical anyway) given the way the data’s being retrieved.


This does however load another several hundred rows from more stored proc calls that load each new object into the child collections.  This whole thing consistently takes around 8 seconds.

The Refactoring

A wise man once told me that if you can be sure the bulk of your operation is data access then lump it all together if you can, and do the loop or other grunt processing on the client. 



  1. I created a new Stored Procedure to return 4 result sets.  These come back as 4 tables in the target DataSet.  Each resultset is qualified by the same master criteria, and just uses joins to get a set of data that can be loaded directly into the collections.  The original stored proc is no longer required and this is now the only data access call.
  2. I changed my collection classes slightly to allow a LoadData method which takes in a dataset, a tablename and a parent key.  This means I can add Level2a objects to the appropriate Level1 collection.  The pseudo code now looks like…

Load multiple result sets

if Level1 results present
    LoadData on Level1Collection

if Level2a results present
    for each Level1 row
        LoadData on Level1[Level1ID].Level2aCollection

if Level2b results present
    for each Level1 row
        LoadData on Level1[Level1ID].Level2bCollection

if Level2c results present
    for each Level1 row
        LoadData on Level1[Level1ID].Level2cCollection
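The same idea translates outside .NET too.  The toy Python/sqlite3 sketch below is an illustration only – the real code uses stored procedures and DataSets, and these table and column names are invented – but it contrasts the per-row round-trips with the set-based load:

```python
import sqlite3

# Toy stand-in for the real schema (names invented for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE level1  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE level2a (id INTEGER PRIMARY KEY, level1_id INTEGER, name TEXT);
    INSERT INTO level1  VALUES (1, 'root-1'), (2, 'root-2');
    INSERT INTO level2a VALUES (10, 1, 'child-a'), (11, 2, 'child-b');
""")

def load_chatty():
    """Original shape: one extra query per parent row (N+1 round-trips)."""
    hierarchy = {}
    for l1_id, name in conn.execute("SELECT id, name FROM level1"):
        children = conn.execute(
            "SELECT id, name FROM level2a WHERE level1_id = ?", (l1_id,)
        ).fetchall()
        hierarchy[l1_id] = {"name": name, "level2a": children}
    return hierarchy

def load_batched():
    """Refactored shape: one set-based query per result set, stitched
    together in a client-side loop."""
    hierarchy = {l1_id: {"name": name, "level2a": []}
                 for l1_id, name in conn.execute("SELECT id, name FROM level1")}
    for child_id, parent_id, name in conn.execute(
            "SELECT id, level1_id, name FROM level2a"):
        hierarchy[parent_id]["level2a"].append((child_id, name))
    return hierarchy

# Both produce the same hierarchy; only the number of round-trips differs.
assert load_chatty() == load_batched()
```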


As I said at the beginning, there are some definite improvements to be made from changing the data structure, and a lot of this code could look a lot nicer by using Typed Datasets with relationships defined.

The new approach actually completes in less than 100ms.  I couldn’t quite believe it myself, so I killed connections, cleared caches etc. to make sure the database was coming in cold.  Still the same.

This proves that for data-heavy operations, things really start to hurt when you’re making repeated client round-trips, however small each call.  The change amounts to a 99% saving in load time for this business object. 

The resulting page is also really snappy now and I’m sure the customer won’t even notice :-)

Getting into IT? – Get into Usability

I’ve been tailed by a graduate this week (who’s very switched on), and in talking to him it became apparent that as part of his rotation activities he’d been sitting in on some usability testing and interviews with users.

It struck me that this sort of experience will be incredibly valuable to him in years to come as he’s forming many of his norms and opinions about the IT world right now and to start off thinking about ‘what the user needs’ is not a bad thing!  I know I was first subjected to the partisan stereotype of users before making up my own mind that the reason many projects fail is because many programmers don’t understand basic usability concepts.  I felt I needed to understand more.

Part of the problem is that all too often companies shortsightedly exploit their new and junior starters.   If you take the example of a kitchen hand in a 19th century mansion, you’d walk in through the door, and be ushered past many rooms filled with things you’re not allowed to experience – past the ladies doing embroidery, the gents in the den smoking cigars and drinking brandy.  Before you could ask any questions (some of which may be quite insightful) you’re locked in the kitchen washing dishes!

IT and programming are still mistakenly viewed as ‘back room’ functions in many places, and thankfully my current organisation has the foresight to expose new people to things that are ultimately going to make them more valuable to the company.  Learning about usability is fundamental to understanding what makes systems work well, and also being able to address those that fall short, so the earlier you start, the better.

How do you ‘rate’ a developer or team lead?

Just going through some old notepads from previous employment and found a table that I’d come up with to ‘rate’ all the developers in the department (I was a team lead there at the time – I also rated myself and the other team leads).  The purpose was to work out who to put where – i.e. what teams.  It’s a pretty simple process, essentially a straw poll on some high-level KPIs, but it assumes you’re experienced and ‘pretty good’ yourself, and can recognise/score people objectively and consistently.  If you let personal preference get in there then your results mean nothing.


Each person is scored on 4 categories, as some may be stronger in different areas.  Each score is on a scale of 1-10, and the scores are simply added to get the overall rating (out of 40).  You can obviously make this a percentage etc. if you wish.  You could also average the scores given by multiple people.


Developer Categories:



  1. Ability. Raw development ability.  Can they achieve a technical solution?
  2. Discipline. Their ‘normal’ work practice.  Do they take pride in their work?  Do they lead others in the processes they follow?  How do they work when unsupervised?
  3. Commercial Focus / Flexibility. When faced with a deadline or changing scope, do they cope?  Do they think creatively and work with others to keep things on track, potentially adding commercial compromise into their original design?
  4. Control. How do they work under pressure? Do corners get cut, do they fall to pieces, or do they rise to the occasion? 

Team Lead (Technical) Categories:



  1. People leadership. How do they treat their team members?  Do they instil confidence and inspire their team to achieve?  Does their team ‘like’ them?  You have to be careful with the last one, because on its own it shouldn’t give a big score.  It’s the icing on the cake rather than the meat in the sandwich!
  2. Getting Stuff Done.  Do they get results?  Does their team get results?  Do they deliver regardless of obstacles and issues?
  3. Upward Management. How well do they communicate changes, issues etc to managers and stakeholders?  Do they ‘sit’ on issues and hope they’ll go away until they ‘blow up’?

  Team leads can obviously be rated as developers too…
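The arithmetic is trivial, but for completeness here’s a sketch of the scheme (category names from the lists above; the example scores are invented):

```python
# Sum four 1-10 category scores into a rating out of 40, then optionally
# express it as a percentage or average the totals from multiple raters.
def rate(scores):
    """scores maps each category to a 1-10 value."""
    assert all(1 <= s <= 10 for s in scores.values()), "scores are 1-10"
    return sum(scores.values())

dev_scores = {"ability": 8, "discipline": 6,
              "commercial focus": 7, "control": 5}   # invented example
total = rate(dev_scores)          # out of 40
percentage = total / 40 * 100

# averaging totals given by two raters
ratings = [26, 30]
average = sum(ratings) / len(ratings)
```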

Criteria for buying a Coffee

I experienced some (momentary) guilt the other day as someone at a coffee shop near work (offering concessions to staff) caught me with another shop’s coffee!!  Aargh, gasp.  I was with a group so managed to convince them I was ‘along for the ride’ with them.  Ridiculous stuff, but it made me think – what makes us go to different coffee shops? 

Well, I’m in the ‘coffee capital of Australia’, so people take it a little seriously here (I’m easier to please), and don’t take kindly to the ‘big boys’ from the US.  Here’s my list of criteria.  A ‘yes’ answer gains a point, and a ‘no’ answer deducts a point.  Some questions overlap, but sometimes you just have to peel the onion :-)

  • Company is owned and run by someone in the shop
  • Company is owned and run locally (city or country) 
  • Coffee is sourced from ethical suppliers
  • The staff ‘care’, and smile
  • There’s a shared tips jar
  • Location is convenient (this is thrown in because all other points may be good, but you’re not going to walk forever just to get a coffee!)
  • You can be ‘in and out’ in less than 4 minutes (on average) – i.e. queues
  • The price is reasonable (based on the scores for the above – a bit subjective)

You might even apply your own weighting for each point – i.e. ethical supply may be something you’re not prepared to compromise on, so give that 5 points (you get the idea).
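The weighting idea, as a quick sketch (criteria paraphrased from the list above; the weights and the two example shops are invented):

```python
# +weight for a 'yes', -weight for a 'no'.  Everything is weight 1 except
# the point you're not prepared to compromise on.
criteria = {
    "owner works in the shop": 1,
    "locally owned": 1,
    "ethical supply": 5,   # bumped up: non-negotiable
    "staff care and smile": 1,
    "shared tips jar": 1,
    "convenient location": 1,
    "in and out in 4 minutes": 1,
    "reasonable price": 1,
}

def score(answers):
    """answers maps each criterion to True ('yes') or False ('no')."""
    return sum(w if answers[c] else -w for c, w in criteria.items())

local_cafe = score({c: True for c in criteria})
big_chain = score({**{c: True for c in criteria},
                   "owner works in the shop": False,
                   "locally owned": False,
                   "ethical supply": False})
```

With these made-up weights the local café scores 12 and the chain goes negative, which is roughly the point.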

Let’s just say that, apart from the coffee tasting like water, it’ll be a long time before I go to Starbucks!

Signs of Discontent? – Increased Food and Coffee

I like observing human behaviour at work and I’ve noticed recently that the amount of ‘junk’ food appearing in the dev pod has increased quite a bit.  People are also going for more coffee breaks (as far as I can see).  I may be imagining this, but I think it’s probably a good indicator of any or all of the following:



  • Boredom
  • Lack of enthusiasm for the current work
  • Need for stress relief (mucho long hours)

I’d have to admit the current project has its frustrations (integration work with a ‘less than extendable’ 3rd party product) and I’d probably rather be doing something else – so I’ll keep my eye on this and see what happens when the supplies run out!


I think there’s a bit of synergy here with the triangle of happiness.  My own experience is that if you’re happy and engaged in what you’re doing then you often don’t even find the time to eat.

Unit Tests – They’ve got to be worth it

This is a re-hash of a post I wrote and ‘lost’ a while ago.  I was reading Charlie Poole’s first blog entry (from September 05), ‘What’s a test worth?’, and found a hard copy this morning.


It occurred to me (originally) that we tend to never remove unit tests as we have some strange and irrational fear that we should only ever move forwards with tests and the ‘rainy day’ tests should be retained as a ‘just in case’ safety net.  All this does of course is water down your test library for a number of reasons:



  1. The test library takes longer than is desired AND required to run
  2. Tests exist with a purpose that no-one’s quite sure about and thus are more difficult to maintain and fix when something causes them to fail
  3. The test library cannot possibly be well defined and categorised due to point 2, leading to more developer and tester confusion

I therefore tried to give myself a small number of practical rules to follow when adding or maintaining unit tests.  Refactoring is something that applies equally to unit tests (and I don’t just mean changes that stop your build from breaking when you change your functionality).


The question is: ‘What characteristics should a test display in order to avoid being deleted?’



  1. If the test covers unique specific functionality (not covered in any other test) in a contained and specific way, sets up its own data and tears it down whilst making a number of useful assertions, and tests logic ‘not’ data – it should stay
  2. If the test duplicates other coverage, but also covers something else at a higher level – i.e. more of a system test – it may also still be of some use.  If the new functionality is at the same logical level as that already covered, then that’s probably an indicator that some refactoring is needed so the functionality can be unit-tested specifically.
  3. We’re clutching at straws now, but if the test does ‘anything’ useful at all (that’s not been done in another test), then it may still be a good ‘catcher’ for some other high-level scenarios – you’d probably want to re-categorise the test at least in this case.

If you can’t place the test in any of the 3 categories above then you need to do one or more of the following:




    1. Remove the test
    2. Refactor the functionality you’re testing
    3. Improve the test so that there is a specific and unique purpose
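The rules above amount to a small decision procedure.  One assumption-heavy way to encode it (the predicate names are mine, not from any real framework):

```python
# Rough encoding of the three 'keep' rules; anything that matches none of
# them falls through to remove/refactor/improve.
def triage(unique_coverage, self_contained, tests_logic,
           higher_level_value, any_other_value):
    if unique_coverage and self_contained and tests_logic:
        return "keep"                       # rule 1
    if higher_level_value:
        return "keep as system test"        # rule 2
    if any_other_value:
        return "keep but recategorise"      # rule 3
    return "remove, refactor, or improve"
```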

I’ll try and add to and refine this over time, but for now there’s my starting point.