Use SQL Server Trusted Connections with ASP.NET on Windows 2003 without impersonation

Access control and troubleshooting 401 errors must be among the most annoying and recurring issues when configuring IIS.  Part of the problem is that it’s often quite a long time between incidents, and you simply forget how you solved things last time.  This time I decided to write it all down as I set things up.

Target Scenario
My target scenario is a local intranet where you want to use a ‘service account’ to access SQL Server directly from your ‘trusted’ web application, removing the need for impersonation. 

The benefits of this are, of course, that you can take advantage of connection pooling, and that you remove the need to configure passwords in web.config for SQL users (or specific, impersonated domain users).  It also removes the overhead of configuring specific domain users and their SQL Server permissions.  It may also be that you just want to simplify your security model to work solely on Windows authentication across the stack.  

SQL Server

  1. Create a new role in the database you’re accessing, for the purposes of your application
  2. Add your service domain user account to the role in SQL Server
  3. Assign permissions to objects, stored procedures etc. to the role, not directly to the user (a T-SQL sketch follows this list)
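
Assuming SQL Server 2005 or later, a minimal T-SQL sketch of those three steps might look like this (the account, database, role and object names are all placeholders):

    -- Placeholders: MYDOMAIN\svc-web is the service account, MyAppDb the database
    CREATE LOGIN [MYDOMAIN\svc-web] FROM WINDOWS;  -- once per server, if not already present
    USE MyAppDb;
    CREATE USER [MYDOMAIN\svc-web] FOR LOGIN [MYDOMAIN\svc-web];
    CREATE ROLE MyAppRole;
    EXEC sp_addrolemember 'MyAppRole', 'MYDOMAIN\svc-web';
    -- Grant to the role, not directly to the user
    GRANT EXECUTE ON dbo.usp_GetOrders TO MyAppRole;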

IIS/Web Site

  1. Set up your web site/application as you would normally – one way to do this:
    1. Create your web application root folder on the web server
    2. Copy your files (or use your deployment tools/scripts to do this)
    3. Create a new application pool to house your new web application (you can model it on the default application pool).  This is important, as the application pool is where the credentials will be set
    4. Create the new IIS web application against the root folder (if not already done as part of step 2)
    5. Associate the new IIS application with the new application pool
    6. Set the ASP.NET version of your IIS application appropriately (you may need to restart IIS here)
  2. Ensure ‘Integrated Windows authentication’ is switched ON in the Directory Security tab, and ‘Anonymous access’ is switched OFF
  3. Set the application pool’s ‘identity’ to the domain user you want to run the application (and connect to SQL Server) as
  4. Open a command window and go to the windows\Microsoft.NET\Framework\vXXXXX folder
    1. Run aspnet_regiis -ga <domain>\<user> to grant the service account the necessary access to the metabase etc. (as per http://msdn.microsoft.com/en-us/library/ff647396.aspx#paght000008_step3)
    2. In the command window, go to the inetpub\adminscripts folder and set the NTAuthenticationProviders metabase property as per the instructions at http://support.microsoft.com/kb/326985.  You can also use MetaEdit from the IIS Resource Kit to change this.  If you’re fully configured to use Kerberos then you can potentially skip this step, as it’s all about making IIS use NTLM authentication.
  5. Navigate to ‘Web Service Extensions’ in IIS Manager, and ensure that the ASP.NET version you’re targeting is ‘allowed’.  For example, ASP.NET 4.0 is ‘prohibited’ by default.
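
With the application pool identity in place, the application’s connection string just uses integrated security – no credentials need to live in web.config.  A minimal sketch (the connection name, server and database are hypothetical):

    <connectionStrings>
      <add name="AppDb"
           connectionString="Data Source=SQLSERVER01;Initial Catalog=MyAppDb;Integrated Security=SSPI"
           providerName="System.Data.SqlClient"/>
    </connectionStrings>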

Summary
So here we’ve circumvented the need to use impersonation by running the ASP.NET application as a specific domain user that is configured as a SQL Server Login, and granted the right access by means of a SQL Server role.  The main work is the plumbing to get IIS to work happily with that user in standard NTLM authentication (you may be able to use Kerberos depending on your network configuration).

Other background on creating service accounts can be found at http://msdn.microsoft.com/en-us/library/ms998297.aspx

Web Visitors vs Users, Impatient vs Bored and how they affect Website Change Management

Why are users on your site?

  • To look around?
  • To do or achieve something?

I pondered this question after reading Gerry McGovern’s discussion on Impatient vs Bored.  He suggests that people using (or rather choosing ‘not’ to use) websites are actually more likely to be impatient than just bored with your content.

I think we need to explore the different types of sites and people visiting them to understand this a bit more.

Different types of sites

IT people have traditionally used the rather woolly terms Web Site and Web Application to differentiate between something simple and something more sophisticated.  There’s no official classification here, but there are usually some characteristics that point more to one than the other.

Characteristics of a Web Application

  • Dynamic content.  This could be driven from a database or external source
  • User interaction.  Users can register/update information, upload, download.  They can ‘do’ useful things on the site.
  • Commerce.  Users can buy things

Users and Visitors

People accessing the web can also be classified.
 
You could say that a visitor is

“someone who has a passing interest in your site, looking to find some information or browsing around for comparison purposes.” 

A user is

“someone with a longer term association or affiliation who potentially logs into the site, or gains knowledge of the structure and becomes expert in achieving their tasks.” 

It’s reasonable to assume that users are a subset of visitors.  Visitors and users will also access both types of sites.  This means that whether someone is a visitor or a user depends on the specific context of their goal at the time of access, and their past history on the site.  (Phew – almost drew a Venn diagram there!)

Impatient and Bored

Impatience is something more likely to be experienced by a user who’s trying to complete a meaningful task – i.e. they have a certain expectation of how a site will work and perform.  Casual browsers are more likely to switch off from the site if they don’t like what they see.
If you want a huge generalisation then:

“Users with an affiliation to a site (web application) are likely to become impatient if their progress is impeded, and casual visitors to a site (web application or content site) are more likely to become bored if the content is not engaging or visually appealing.”

Underlying Factors

There are a couple of factors that underpin all types of web access:

  • Information Architecture (IA) – the structure of the information, sections and pages on the site.
  • Usability – the ease with which people can achieve their goals on the site.

IA is equally important to simple and sophisticated sites, as a visitor to a company brochure site needs to know that they can get around speedily and find what they’re looking for without undue delay.  Bad IA on a larger site is likely to grate on users over time, and people will find themselves frustrated because their navigation around the site isn’t logical.

A good and logical IA is often a matter of being consistent with de facto standards.  For example, many web users now have subconscious expectations that company sites have ‘Contact Us’, ‘About Us’ etc.  Going against the grain here leads to impatience.

Usability is the detail in every interaction.  The sections and pages on the site may be completely logical, but if the developers have produced a whizz-bang Flash product catalogue widget that takes over a minute to load, then you’ll be getting some impatient users.  The effect will be similar if the flow of a page or workflow tries to go against simple and accepted interface design principles.  This could include using non-standard form elements on a page, or collecting information in a strange order just because it suits a back-end system (but not the user).

Functionality vs Visual Design

For the most part, functionality wins over visual eye candy with users.  Business users routinely put up with desktop applications that do what they need without swooping curves, dripping in glass buttons and subtle gradients.  There is, however, a growing expectation of a minimum level of visual design on the web.  Maybe this makes up for the fact that sites still rarely deliver everything a user wants.

The Pressure to Redesign

Creative agencies will often suggest a site’s poor performance is down to the visual design not being up to scratch – as they want to perform that job.  This takes advantage of the (still) general lack of understanding about the web amongst company decision-makers. This surprisingly includes a lot of marketing departments who still only think ‘print’. 

The other extreme is marketers on a perpetual rebranding trip, constantly quoting ‘market risks’, effectively keeping themselves gainfully employed.

The model of development on the web over the years has been largely evolutionary, with change coming without warning, and largely without consultation with users.  This was OK in previous times (I refuse to say web 1.0), when user expectations were low, and the level of engagement with any one site was also low.  This is still true for many small sites.

The Price of Success

With community sites becoming more mainstream and popular, companies now often elicit feedback to work out where to go next.  Sometimes when big changes are made (Facebook) with little or no communication, things can get a little heated with petitions and protests galore.
 
Just imagine if Microsoft significantly changed the interface to Word on the millions of computers around the world without notice.  It just wouldn’t happen! 

Sites like Facebook have learned the hard way that success on the web also brings a greater responsibility to your users when making changes.  Users of free services can simply vote with their feet, and increasingly do.  Facebook has flooded the social web space, and so hangs on to many users because it’s become the de facto standard. 

Very few sites can rely on such a situation.

Managing Change

So how can you manage change on websites?  When do you need a lick of paint, and when do you need a complete redesign?

The following isn’t an exhaustive list, but gives some thoughts on some tools and approaches to consider.

Understand your user base

Use web stats tools like Google Analytics to understand where your users come from and where they go on the site.  Set up goals to see how successful things like your payment workflow are – i.e. what percentage of people add something to a cart, and subsequently complete the transaction?

Analyse the paths into your site to see if there are any opportunities for SEO improvement, like better keywording, extra landing pages etc.

Make the right changes

Sometimes it’s appropriate to do some field research to work out the right changes to make on the site.  This could be from a variety of sources.  The key is to remain light-footed throughout the process so you can react to changes as they (inevitably) occur:

  • Business Requirements.  This is typically what drives most change, but internal people are not the only people in the equation.  They don’t use the site the way external users do.
  • Usability testing with the current site and a group of users can be quite revealing to find gaps that explain poorly performing site areas, and also give rise to new ideas.
  • User surveys can be effective, but asking questions of users needs to be offered sparingly, and in an optional way.  Keep things small and succinct to get the best return of ‘take home’ points.  Consider offering some reward for completing the survey.

Design it right and Try before you Buy

In order to react to change and feedback, you need to get people looking at your intended changes as quickly as possible.  The following is an example of an iterative approach from detailed design to implementation, for a complex change that will affect a large number of users:

Wireframe

Wireframe development is a great place to start by designing layout and visualising key elements and interactions on the site.  This is specifically tackled before any detailed visual design to test the concepts with business people and prospective users.

Prototype

This can be created from the wireframes to put some more meat on the concept.  It could be anything from page images with hyperlinks to allow clicking through the flow, to a slim ‘actual’ prototype in place on the site.  You’d typically build a ‘proper’ prototype if you’ve got some technical risk to overcome – e.g. proving a technical solution is possible for a given situation.  Tools like Axure exist to facilitate wireframes and prototypes in one. 

Prototype testing

This is then performed with a control group of users and/or with business users, to assess the viability of the solution and also to get valuable feedback and other ideas. 

The wireframes and prototype would then be updated again with further rounds of testing as required to get to a point where things are formalised enough to start development. 

Visual design may also creep into this area, as some people simply can’t say ‘yes’ until they see ‘exactly’ how something’s going to look, but try and limit this.  This is where you hope for a programmer who’s design-savvy. 

This phase ends with the wireframes being signed off by the business.

Visual design

This will no doubt continue to evolve as it’s the tangible stuff that businesses can ‘feel’, but should be tied down as early as possible.  The business should sign off completed mockups (e.g. from Photoshop), based on the approved wireframes.

Completing the Job

The rest of the job is standard develop/test/implement etc., but developing in small chunks, testing early and implementing often is always a good way to go.

If the original prototype was actually ‘functional’, then you might be able to go fairly quickly to some internal or public A/B testing, and with a bit of work you could find yourself finished. 

Whether you’re catering more for visitors or users, the first step to any change is putting yourself in their shoes.

Using jQuery with DotNetNuke 4.x

I’m currently doing a project using DotNetNuke, and we’re using jQuery plugins to achieve certain content rotation and scroller functionality.  All was ‘almost’ good, as I’d found a way to inject the jQuery script into the page header on a per-skin basis, but in ‘edit’ mode the actions button wasn’t showing up at the top of containers in Firefox, and was causing JavaScript errors in IE.

I’d already gone through the hoops of declaring jQuery.noConflict(), but it still appeared to be conflicting with the dnn:actions (solpartactions) control.  I’d read elsewhere about Solpart code being incompatible with jQuery.

I tried one last thing: adding the noConflict() call in the jQuery library script file itself, rather than running it as a fragment on page load.  This fixed everything, as something else was obviously getting in and hijacking $ in the meantime.  Apparently with version 5 this will all be fixed, as jQuery is more integrated with the framework.  Anyway, for those interested, here’s what I had to do to get jQuery (and associated plugins) talking nicely whilst still allowing the actions menu to pop up on my containers…

  1. Amend the jQuery library (jquery.1.x.x.min.js) by adding the following line at the bottom…

    jQuery.noConflict();

  2. Amend the skin in which you want to load the jQuery library and plugins (we’ve got it only in specific skins to avoid the overhead where it’s not required).  You could also do this in the module by checking ‘if loaded’, but here’s the code for a skin (in the ascx file)…

    <script runat="server">
        Private Sub Page_Init(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Init
            'Add script references to the head section.  Each call inserts at
            'position 0, so the last script added (jQuery) ends up first.
            AddScript("/js/jquery.scrollable-1.0.2.min.js")
            AddScript("/js/jquery.mousewheel.js")
            AddScript("/js/jquery-1.3.2.min.js")
        End Sub
       
        Private Sub AddScript(ByVal fileName As String)
            Dim oLink As New HtmlGenericControl("script")
            oLink.Attributes("language") = "javascript"
            oLink.Attributes("type") = "text/javascript"
            oLink.Attributes("src") = fileName
            'Insert at the top of the head control so scripts precede skin content
            Dim oCSS As Control = Me.Page.FindControl("CSS")
            If Not oCSS Is Nothing Then
                oCSS.Controls.AddAt(0, oLink)
            End If
        End Sub
    </script>

    The order is important: we’re adding each script at the ‘top’ of the head section, so jQuery must be added last in order to be referenced first.

  3. Make sure that anywhere you use jQuery you use the jQuery(xx) syntax rather than $(xx) – see the example below.
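
For example, a minimal sketch of initialising a plugin after noConflict() (the selector and the scrollable() call are illustrative – check your plugin’s own API):

    // use the full jQuery name - $ may now belong to Solpart or other scripts
    jQuery(document).ready(function () {
        jQuery("#newsScroller").scrollable();
    });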

That’s it.

ASPNETCOMPILER The Target Directory could not be deleted. Please delete it manually or choose a different target – Subversion

I’d added some projects to Subversion today and later on added a CruiseControl.NET build.  I then started getting build failures due to the following:

ASPNETCOMPILER The Target Directory could not be deleted. Please delete it manually, or choose a different target

After a bit of looking around with the Sysinternals Process Monitor I couldn’t find anything weird (e.g. access denied due to someone locking the folder), and then saw that my output folder from the project (an ASP.NET Web Deployment Project) was under Subversion control.

Whoops!  After a swift delete of the folder (also in Subversion) normality was resumed.  That’s another good reason not to put binaries into source control!
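
If you hit the same thing, the cleanup might look something like this (the output folder name is hypothetical – use whatever your deployment project writes to):

    rem remove the output folder from version control and the working copy
    svn delete PrecompiledWeb
    svn commit -m "Remove build output from source control"
    rem stop it sneaking back in with future adds
    svn propset svn:ignore "PrecompiledWeb" .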

ASP.NET Data Binding – Accessing a parent data item from within a nested repeater

I’m maintaining an app at the moment that uses quite a few nested repeaters, and found that headers were being output even when there was no data present.  It turned out that the header was being written in the ItemTemplate of an ‘outer’ repeater, rather than as the HeaderTemplate of the ‘inner’ repeater.  The next problem was how to reference the outer repeater from the ‘inner’ HeaderTemplate…

The following will bind to a field called HeaderDescription.

<%# DataBinder.Eval(Container.Parent.Parent, "DataItem.HeaderDescription") %>

The parent of the inner item is its repeater, so you have to go to its parent to get the right RepeaterItem.  Why don’t you just do the following, you ask?

<%# DataBinder.Eval(Container.Parent.Parent.DataItem, "HeaderDescription") %>

…’cos it doesn’t work – the Eval method expects a Control as its first parameter.  There are other ways to do this server-side, but the first option is probably the easiest.
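
To make the control tree concrete, here’s a sketch of the kind of nesting involved (the control IDs and field names are made up):

    <asp:Repeater ID="Outer" runat="server">
        <ItemTemplate>
            <asp:Repeater ID="Inner" runat="server" DataSource='<%# Eval("Details") %>'>
                <HeaderTemplate>
                    <!-- Container is the header RepeaterItem; its Parent is the inner
                         Repeater, whose Parent is the outer RepeaterItem we want -->
                    <h3><%# DataBinder.Eval(Container.Parent.Parent, "DataItem.HeaderDescription") %></h3>
                </HeaderTemplate>
                <ItemTemplate><%# Eval("Description") %></ItemTemplate>
            </asp:Repeater>
        </ItemTemplate>
    </asp:Repeater>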

To complete the picture, and only show the repeater when there’s data, you can add the following to the ‘inner’ repeater declaration

OnItemDataBound="ItemDataBound" Visible="false"

then…

        protected void ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            // Only a 'data' item proves there's data - the HeaderTemplate binds too
            if (e.Item.ItemType == ListItemType.Item)
            {
                if (!e.Item.Parent.Visible)
                    e.Item.Parent.Visible = true;
            }
        }

This will ensure that the repeater only shows if you’ve bound a ‘data’ item (remember you’re doing binding in the HeaderTemplate too).  You could also hook similar things into other events, but it’s generally more convenient to put these things into events that relate to the actual control (PreRender is probably another good candidate, as it will only get called once and you can check the item count – see the sketch below).
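
As an alternative sketch, a PreRender handler on the inner repeater can do the same job in one hit (the handler name is made up – wire it up with OnPreRender="InnerRepeater_PreRender"):

        protected void InnerRepeater_PreRender(object sender, EventArgs e)
        {
            // After binding, Items contains only 'data' rows (not headers or footers),
            // so an empty collection means there's nothing worth showing
            var repeater = (Repeater)sender;
            repeater.Visible = repeater.Items.Count > 0;
        }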


Make jQuery and Prototype coexist and play together with a Greasemonkey User Script

I’ve been playing with Greasemonkey scripts recently (for Redbubble.com), and wanted to use jQuery with Greasemonkey.  This is pretty well documented, but I discovered an incompatibility between my script and the host site, as the site uses the Prototype JavaScript library (must admit I didn’t know much about it).

Prototype (like jQuery) uses the $ notation, so by default any Greasemonkey user script that loads jQuery will hijack the $ object, meaning that stuff on the original site may stop working.

I thought I was sunk, but it turns out jQuery just gets better: it can gracefully give back control of $ to whichever library originally loaded it.  Just call…

jQuery.noConflict();

You then have to use jQuery instead of $ (e.g. jQuery("#myID") instead of $("#myID")), but hey – that’s a small price to pay when the alternative is rewriting the whole thing long-hand.
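
A minimal sketch of the pattern in a user script (the $j alias is just my choice, not required):

    // hand $ back to Prototype on the host page, but keep a local alias
    var $j = jQuery.noConflict();
    $j("#myID").hide();  // behaves exactly like $("#myID").hide() did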

Removing the scrollbar Page Shift from Firefox

This had bugged me for a while.  A lot of sites (including some of the ones I develop) tend to have a fixed-width layout these days, and some browsers (IE particularly) ‘always’ have a visible scrollbar.  This means that the available screen width is constant whether the page scrolls or not. 

Firefox on the other hand (and Chrome/Opera/Safari) seems to have this off by default.  This of course seems reasonable, until you have a fixed-width, centred layout that ‘shifts’ when you switch from a non-scrolling to a scrolling page.  It’s just a bit off-putting.

Fortunately Firefox is configurable and the following will fix that up for you. (I’m sure the other browsers are capable of something similar but I’m not using them much 🙂 )

  1. Find your profile directory (it’s bound to be the ‘default’ one unless you’re developing Firefox addons).  You’ll normally find it in c:\documents and settings\username\application data\Mozilla\Profiles\xxxxx.default\
  2. Go to the ‘chrome’ subfolder and create a file called userContent.css (you’ll probably find there’s a couple of ‘example’ files there already).
  3. Add the following (Firefox-specific) line to the file:

    html { overflow: -moz-scrollbars-vertical !important; }

  4. Save the file, exit Firefox and start her up again.  You should now have a permanent scrollbar which eliminates the page shift. 

Removing references to HttpModules from ASP.NET SubFolders in web.config

If you have ASP.NET applications that live as subfolders of a larger site, you may find yourself with issues when ASP.NET tries to find assemblies and httpModules that are referenced in the parent’s web.config.

Fortunately this is something you can work around.  Matthew Nolton goes through how you ‘remove’ these references at the subfolder level using the <remove> element – a sketch follows.
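
As a hedged example, where the parent registers a module your subfolder doesn’t want, the child web.config would contain something like this (the module name is made up – you’d remove whatever the parent adds):

<httpModules>
    <remove name="ParentSiteModule"/>
</httpModules>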

This is all fine until you get an even tastier situation like I encountered the other day…

ASP.NET application ‘A’ lives as subfolder ‘B’ of both parent sites ‘C’ and ‘D’.  The configuration of ‘C’ and ‘D’ is slightly different (modules, handlers, assemblies etc.).  Why is this a problem, you ask?  We were trying to be a bit clever (and failed 🙂) by only deploying application ‘A’ once: virtual directories in sites ‘C’ and ‘D’ both point to the same physical ‘A’ folder.  This effectively means that the stuff that needs to be removed in ‘A’ varies depending on which parent site you’re accessing through.

OK – I could just fix this by duplicating the installation and varying the configuration, but…

You can also remove all modules by adding a ‘clear’ element as follows…

<httpModules>
    <clear/>
</httpModules>

This is fine, BUT if you’re using Session State or any other built-in features that are implemented as httpModules, then you’ll get exceptions – ASP.NET will give you a ‘null’ session, for instance.

The following is probably a safe list of the modules you’d normally need (maybe even only Session for simple apps), so just add them back in after the ‘clear’…

<httpModules>
    <clear/>
    <add name="OutputCache" type="System.Web.Caching.OutputCacheModule"/>
    <add name="Session" type="System.Web.SessionState.SessionStateModule"/>
    <add name="WindowsAuthentication" type="System.Web.Security.WindowsAuthenticationModule"/>
    <add name="FileAuthorization" type="System.Web.Security.FileAuthorizationModule"/>
</httpModules>

This is nicer for a couple of reasons:

  1. It shows what dependencies the application has on ASP.NET/external features, and…
  2. It gives you the power back to have the application consumed by multiple sites as you’ve effectively decoupled yourself from the parent’s dependencies.