NDepend – Visual Studio Addin: takes you as far as you want to go

First of all I’d like to point out that I was kindly given a license by the folks at NDepend (not very often that sort of thing happens I can assure you!) and I’m under absolutely no obligation to write anything about it.

In the beginning…

The funny thing is that it was probably over a year ago when I first installed the product, without any specific requirement or expectation. I had a little play with it (on Visual Studio 2008 as I recall), then the work I ‘had’ to do overtook my will to learn this new product and it lay gathering dust on my hard drive. This probably explains why I haven’t posted in all that time!

But then…

Recently, I picked up an existing project (on Visual Studio 2010) and wanted to have a good look inside to see what I was getting myself into. I dusted off NDepend and told myself I’d give it a good go this time…

First Impressions

The first thing I learned is that this is one significant addin, and you realistically need to ‘know you need it’ before you get it (see the ‘gathering dust’ comment above). This also means you need to know what it can do for you – which is plenty!

If you’re reading this and thinking of trialling NDepend, then you either have problems to solve or you want to invest in ongoing improvement to your code. Both are very good reasons, as it happens.

NDepend has few limitations in what it can do, as it has your entire codebase, Visual Studio extensibility and its own powerful rules engine at its disposal. It also employs its own CQL (Code Query Language) to let you find all sorts of patterns and complexity problems in your code.

The biggest problem is knowing where to start, or discovering the first task you want to achieve with it. It’s easy to get overwhelmed by the information it bombards you with when you spin it up.

To be fair, there are plenty of links trying to lead you to ‘what you’re looking at is…’

Reasons to try/buy

If you’re interested in the quality of your code I believe there really is no equal. This is the tool you need. You may already be using FxCop in your build process to check for certain syntactical rules, and ReSharper for sorting out your code as you go, but NDepend can do all sorts of ‘different’ funky stuff (through CQL) that digs deep into your code to enforce things that would otherwise be difficult to do. It can obviously also do all the simple stuff, like showing you where the dependencies are between methods, classes and projects, redundant code and so on.

Some highlights I quite like – made possible through CQL:

  • Enforcing of layering constraints – i.e. ‘this UI project cannot directly reference that DAL project’
  • Simple spot check stuff like queries on a ‘lines of code’ threshold – indicating complexity
  • Code not meeting an acceptable test coverage
  • For all the possibilities you’ll need to look here.
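
To give a flavour, CQL rules read a lot like SQL. The following are illustrative sketches only (the MyApp.* names are invented), not rules lifted verbatim from the product:

// Flag methods that are getting too long
WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 30

// Layering: nothing in the UI namespace should touch the DAL directly
WARN IF Count > 0 IN SELECT TYPES FROM NAMESPACES "MyApp.UI" WHERE IsDirectlyUsing "MyApp.DAL"

// Flag methods falling below a test coverage threshold
WARN IF Count > 0 IN SELECT METHODS WHERE PercentageCoverage < 80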

Things to be aware of

  • It’s a technical tool, and it’s easy to get a little overwhelmed with what it can do and where to start.
  • Time is needed to understand some of the concepts and power of the product.
  • You’ll need a beefy machine to avoid things slowing down with the addin loaded (I had to disable it for a while when I was using a solution with 60 projects, as I was starting to experience memory issues). If you don’t want to run it inside Visual Studio, you can use the standalone ‘Visual NDepend’ application instead.
  • I’ll admit I haven’t spent a lot of time with the interactive reports, and I don’t find some of the graphical representations of the metrics that easy to use.
  • I think like most products, you get comfortable with what you see as valuable, and tend to only try other things when you have time.

Summary

Clearly NDepend’s a very impressive tool for any serious development team to be using. It will help you to learn about reducing complexity, dependencies and generally designing your code in an efficient way. It’s basically all about improving quality.

It’s also a big product that’s not for the faint hearted. You basically get out what you put in as far as effort in understanding what it’s trying to achieve for you.

I think the key is finding the right balance between all the technical information it presents, the time you have available, and the business benefit you’ll get from code improvements.

As I said at the start, it can basically take you as far as ‘you’ want to go.

Worth taking a look at: http://www.ndepend.com/

Generate SQL Server Inserts from existing data – with Identity Inserts

A good while ago I posted a stored proc that would generate insert statements for table data, along with simple filtering capability.

I broke this out again today, as I needed to recreate part of a database on a local machine. I didn’t have knowledge of the schema, so I just went about fixing each constraint error and adding the reference tables as required to my script. After manually adding ‘SET IDENTITY_INSERT xx’ about a dozen times I added the functionality to the stored proc – so if your table has an identity column it will now wrap the results in IDENTITY_INSERT statements, saving you a bit more time and a few headaches.

An interesting side effect of explicitly inserting identity values is that the order of your inserts may become pertinent – especially if you’ve got foreign keys referencing the same table – so I also added the capability to ‘order by’.
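
For example, to script a filtered, ordered subset (the filter here is hypothetical – any valid WHERE/ORDER BY fragment should work):

usp_InsertGenerator Employees, @where='Country=''UK''', @order='EmployeeID'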
 
There’s no automatic way to switch off column headers, so you’ll need to configure that in the Results Output options in Query Analyzer (or Management Studio).

If you point the proc at ‘Northwind’ and run:

usp_InsertGenerator Employees, @order='EmployeeID'

You’ll get…

SET IDENTITY_INSERT Employees ON

INSERT Employees(EmployeeID,LastName,FirstName,Title,TitleOfCourtesy,BirthDate,HireDate,
Address,City,Region,PostalCode,Country,HomePhone,Extension,Photo,Notes,ReportsTo,PhotoPath)
  VALUES('1','Davolio','Nancy','Sales Representative','Ms.',convert(dateti
INSERT Employees(EmployeeID,LastName,FirstName,Title,TitleOfCourtesy,BirthDate,HireDate,
Address,City,Region,PostalCode,Country,HomePhone,Extension,Photo,Notes,ReportsTo,PhotoPath)
  VALUES('2','Fuller','Andrew','Vice President, Sales','Dr.',convert(datet
INSERT Employees(EmployeeID,LastName,FirstName,Title,TitleOfCourtesy,BirthDate,HireDate,
Address,City,Region,PostalCode,Country,HomePhone,Extension,Photo,Notes,ReportsTo,PhotoPath)
  VALUES('3','Leverling','Janet','Sales Representative','Ms.',convert(date
INSERT Employees(EmployeeID,LastName,FirstName,Title,TitleOfCourtesy,BirthDate,HireDate,
Address,City,Region,PostalCode,Country,HomePhone,Extension,Photo,Notes,ReportsTo,PhotoPath)
  VALUES('4','Peacock','Margaret','Sales Representative','Mrs.',convert(da
INSERT Employees(EmployeeID,LastName,FirstName,Title,TitleOfCourtesy,BirthDate,HireDate,
Address,City,Region,PostalCode,Country,HomePhone,Extension,Photo,Notes,ReportsTo,PhotoPath)
  VALUES('5','Buchanan','Steven','Sales Manager','Mr.',convert(datetime,'1
INSERT Employees(EmployeeID,LastName,FirstName,Title,TitleOfCourtesy,BirthDate,HireDate,
Address,City,Region,PostalCode,Country,HomePhone,Extension,Photo,Notes,ReportsTo,PhotoPath)
  VALUES('6','Suyama','Michael','Sales Representative','Mr.',convert(datet
INSERT Employees(EmployeeID,LastName,FirstName,Title,TitleOfCourtesy,BirthDate,HireDate,
Address,City,Region,PostalCode,Country,HomePhone,Extension,Photo,Notes,ReportsTo,PhotoPath)
  VALUES('7','King','Robert','Sales Representative','Mr.',convert(datetime
INSERT Employees(EmployeeID,LastName,FirstName,Title,TitleOfCourtesy,BirthDate,HireDate,
Address,City,Region,PostalCode,Country,HomePhone,Extension,Photo,Notes,ReportsTo,PhotoPath)
  VALUES('8','Callahan','Laura','Inside Sales Coordinator','Ms.',convert(d
INSERT Employees(EmployeeID,LastName,FirstName,Title,TitleOfCourtesy,BirthDate,HireDate,
Address,City,Region,PostalCode,Country,HomePhone,Extension,Photo,Notes,ReportsTo,PhotoPath)
  VALUES('9','Dodsworth','Anne','Sales Representative','Ms.',convert(datet

SET IDENTITY_INSERT Employees OFF

Here’s the script.

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[usp_InsertGenerator]') and 
OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[usp_InsertGenerator]
GO


CREATE PROC dbo.usp_InsertGenerator
(@tableName varchar(100), @where varchar(1000) = NULL, @order varchar(1000) = NULL) as

SET NOCOUNT ON

--Check whether the table has an identity column
DECLARE @TableHasIdentityColumn BIT
SELECT @TableHasIdentityColumn = 1
from INFORMATION_SCHEMA.COLUMNS
where TABLE_SCHEMA = 'dbo'
and COLUMNPROPERTY(object_id(TABLE_NAME), COLUMN_NAME, 'IsIdentity') = 1
AND TABLE_NAME = @tableName

IF ISNULL(@TableHasIdentityColumn, 0) = 1
    PRINT 'SET IDENTITY_INSERT ' + @tableName + ' ON'

--Declare a cursor to retrieve column specific information for the specified table
DECLARE cursCol CURSOR FAST_FORWARD FOR 
SELECT column_name,data_type FROM information_schema.columns WHERE table_name = @tableName
OPEN cursCol
DECLARE @string nvarchar(3000) --for storing the first half of INSERT statement
DECLARE @stringData nvarchar(3000) --for storing the data (VALUES) related statement
DECLARE @dataType nvarchar(1000) --data types returned for respective columns
SET @string='INSERT '+@tableName+'('
SET @stringData=''
 
DECLARE @colName nvarchar(50)
 
FETCH NEXT FROM cursCol INTO @colName,@dataType
 
IF @@fetch_status<>0
 begin
 print 'Table '+@tableName+' not found, processing skipped.'
 close curscol
 deallocate curscol
 return
END
 
WHILE @@FETCH_STATUS=0
BEGIN
IF @dataType in ('varchar','char','nchar','nvarchar')
BEGIN
 SET @stringData=@stringData+''''+'''+isnull('''''+'''''+'+ @colName+'+'''''+''''',''NULL'')+'',''+'
END
ELSE
if @dataType in ('text','ntext') --if the datatype is text or ntext
BEGIN
 SET @stringData=@stringData+'''''''''+isnull(cast('+ @colName+' as varchar(2000)),'''')+'''''',''+'
END
ELSE
IF @dataType = 'money' --because money doesn't get converted from varchar implicitly
BEGIN
 SET @stringData=@stringData+'''convert(money,''''''+isnull(cast('+ @colName+' as varchar(200)),''0.0000'')+''''''),''+'
END
ELSE 
IF @dataType='datetime'
BEGIN
 SET @stringData=@stringData+'''convert(datetime,'+'''+isnull('''''+'''''+convert(varchar(200),'+ @colName+',121)+'''''+''''',''NULL'')+'',121),''+'
END
ELSE 
IF @dataType='image' 
BEGIN
 SET @stringData=@stringData+'''''''''+isnull(cast(convert(varbinary,'+ @colName+') as varchar(6)),''0'')+'''''',''+'
END
ELSE --presuming the data type is int,bit,numeric,decimal 
BEGIN
 SET @stringData=@stringData+''''+'''+isnull('''''+'''''+convert(varchar(200),'+ @colName+')+'''''+''''',''NULL'')+'',''+'
END
 
SET @string=@string+@colName+','
 
FETCH NEXT FROM cursCol INTO @colName,@dataType
END
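--Assemble the final dynamic SELECT that emits one INSERT per row
--(the SUBSTRINGs below trim the trailing separator characters from the column and VALUES lists)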
DECLARE @Query nvarchar(4000)
 
SET @query ='SELECT '''+substring(@string,0,len(@string)) + ') VALUES(''+ ' + substring(@stringData,0,len(@stringData)-2)+'''+'')'' FROM '+@tableName
IF @where IS NOT NULL
 SET @query = @query + ' where ' + @where
IF @order IS NOT NULL
 SET @query = @query + ' order by ' + @order
 
exec sp_executesql @query
 
CLOSE cursCol
DEALLOCATE cursCol


IF ISNULL(@TableHasIdentityColumn, 0) = 1
    PRINT 'SET IDENTITY_INSERT ' + @tableName + ' OFF'
GO 

Making Web Wireframes easier with Visio – The Pixel Dimensions Shape

Many people use Adobe Illustrator or Photoshop for designing web layouts, but a lot of companies use Visio – especially for higher-level ‘wireframes’.  One of Visio’s problems (<= 2003 anyway) is that it doesn’t support pixel units.  This means you’ll always be approximating the sizes of content areas and page elements, making the developer’s job more difficult.

Enter ‘Visio Guy’ and the Pixel Unit Line Shape.  This is just a great addition for anyone using Visio.  Before you know it you’ll be laying out pixel-perfect shapes that the developers will then take great pleasure in declaring ‘this won’t work in IE6!’ 🙂

A thing to note with Visio as well: if you’re working with pretty small dimensions, the only way to get really precise is to zoom right in (like 500%); otherwise the whole ‘snap/glue’ thing will kick in based on the page units (probably mm), regardless of whether you’ve switched snap/glue off.

HOW TO: Use SQL Server DTS Transform Data Task to export to a dynamic Excel spreadsheet file

Many people (including me) are often still using SQL 2000 – which means you might be using DTS (Data Transformation Services) to perform utility jobs on your databases.
I’ve had a requirement to output data to Excel (not my choice – Excel usually isn’t!) – which is subtly different to text output as CSV*.  Another requirement was to have a ‘generated’ file name in a known folder.  This is as simple as a date/time-driven file name.  The ‘job’ runs once (or more) a day and so creates a new file each time.  Easy, right?

Well – it turns out that if you go through the normal motions of creating an Excel connection and using a Transform Data Task, it’s likely to work first time and leave you thinking ‘great’.  Unfortunately, when you run it again you’re likely to get a message similar to the following:

Table 'New_Table$' not found.

After much digging (hence the reason for writing this), I discovered that in order to map the columns successfully you basically have to create the table in the spreadsheet before running the transformation (the ‘create table’ part of creating the transformation).  If you’ve got a dynamic file name then next time you’ll effectively have a blank spreadsheet without the required receiving ‘table’, and it will fail unless you’ve recreated the sheet before the transformation.

OK so assuming you want to output some data to a spreadsheet with a dynamic filename with a Transform Data task then here’s how you do it.

  1. I put all the files in the same folder, so I use two global variables to build the filename – NetworkLocation and FinalLocation
    1. First, right-click on a blank part of your package screen and choose Package Properties, then the Global Variables tab.
    2. Create the two global variables NetworkLocation and FinalLocation.
    3. Edit the NetworkLocation value to be the folder (including a final ‘\’) where you want to store the files.
    4. Leave FinalLocation blank (we’ll create that in the next step).
  2. Add an ActiveX Script Task with the following code (where you can obviously tailor the name to your needs).  It basically creates a date/time-based file name, and stores a concatenation of the network location and file name in the FinalLocation global variable.

    '**********************************************************************
    '  Visual Basic ActiveX Script
    '************************************************************************
    Function Main()

        Dim fileName
        fileName = "MyOutput_" & DatePart("yyyy",Date) & Right("0" & DatePart("m",Date), 2) & Right("0" & DatePart("d",Date), 2) & "_" & Right("0" & DatePart("h",Now), 2) & Right("0" & DatePart("n",Now), 2) & Right("0" & DatePart("s",Now), 2)
        DTSGlobalVariables("FinalLocation").Value = DTSGlobalVariables("NetworkLocation").Value & fileName & ".xls"

        Main = DTSTaskExecResult_Success

    End Function
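
    This produces file names of the form MyOutput_yyyymmdd_hhmmss.xls (for example MyOutput_20100315_091502.xls), so each run writes to a new file.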

  3. Add your data connections
    1. Create your source connection (presumably SQL Server or wherever your data’s coming from)
    2. Create your destination connection (a Microsoft Excel 97-2000 connection) and set an initial file name (best to be in the folder specified in NetworkLocation).
  4. Create a Data Transformation Task connecting your source and destination connections as per normal
    1. Add in your Table/View or SQL Query to define the source data.
    2. In the Destination tab click ‘Create…’ next to ‘Table name:’ (a dialog will pop up – ‘Create Destination Table’).  Change the table name (or column definitions) if you like, then COPY THE ENTIRE TEXT and paste it into a text editor for safe keeping (there’s an illustrative example of this statement after the list).
    3. Click OK (This will create a new sheet in the spreadsheet you’ve
      specified in your connection, ready to receive your data).  Your fields
      should then also be populated in the table below.
    4. Check the transformations tab – you should get a nice one to one mapping between source and destination columns.
  5. Create a Dynamic Properties Task (Here’s where it all comes together)
    1. Click ‘Add…’ to create a new property.  Choose your Excel Connection and pick the ‘DataSource’ property, then click ‘Set…’
    2. Choose ‘Global Variable’ as the source and FinalLocation as the variable – Click OK.
    3. Click ‘Add…’ to create another property.  Choose the
      DTSTask_DTSDataPumpTask_? in the Tasks node (your data transformation
      task) and pick the ‘DestinationObjectName’ property.
    4. Choose ‘Query’ as the source and paste in your ‘CREATE TABLE‘ statement from earlier – Parse to check, then click OK.
  6. Create the Workflow for the task
    1. Select the Dynamic Properties Task, right-click and select Workflow –> Workflow Properties.
    2. Click ‘New’ and set the ActiveX Script Task as ‘Source Step’.
    3. Select the Data Transformation Task, right-click and choose Workflow Properties.
    4. Click ‘New’ and set the Dynamic Properties Task as ‘Source Step’.
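
For reference, the ‘Create Destination Table’ dialog generates Jet/Excel-flavoured DDL along these lines (the table and column names here are illustrative, not from a real package):

CREATE TABLE `New_Table` (
`OrderID` Numeric,
`CustomerName` VarChar (40),
`OrderDate` DateTime
)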

That’s basically it – the package should look something like this:

It should also now do the following (in order):

  1. Set the filename (stored in the global variables)
  2. Set the filename of the Excel connection and create the destination ‘sheet’ from the global variable and the query in the transformation
  3. Do the transformation and store the results in a date/time-named Excel spreadsheet in your configured folder

If you ever need to change the query (columns) then you’ll need to regenerate the ‘CREATE TABLE’ statement and reflect it in the Dynamic Properties Task (step 5.4).

*Typically an Excel export covers your back with datatypes where a CSV export won’t – so when you open a CSV in Excel you’ll have to format columns specifically yourself, like converting dates, and long numbers from scientific format to text format.

How to generate a list of Visual Studio Shortcuts

I’d seen addins that list Visual Studio shortcuts before, but seem to have lost them.  I also used to use ReSharper, so memorised its shortcuts.  I’m now back to ‘naked’ Visual Studio, so am having to re-learn the standard shortcuts.

I was surprised to find this article on MSDN that has code for a macro to produce an HTML page of all the shortcuts.  Just run the macro, open the page and search in the browser whenever you’re having problems remembering – unless you really want to print it (in which case I’d be inclined to bang a bit of CSS in there 🙂 ).  I might improve upon the whole situation if I find myself needing to refer to it more often, and will post updates here.

Removing the scrollbar Page Shift from Firefox

This had bugged me for a while.  A lot of sites (including some of the ones I develop) tend to have a fixed-width layout these days, and some browsers (IE particularly) ‘always’ have a visible scrollbar.  This means that the available screen width is constant whether the page scrolls or not.

Firefox, on the other hand (like Chrome/Opera/Safari), seems to have this off by default.  This of course seems reasonable until you have a fixed-width, centred layout that ‘shifts’ when you switch from a non-scrolling to a scrolling page.  It’s just a bit off-putting.

Fortunately Firefox is configurable and the following will fix that up for you. (I’m sure the other browsers are capable of something similar but I’m not using them much 🙂 )

  1. Find your profile directory (it’s bound to be the ‘default’ one unless you’re developing Firefox addons).  You’ll normally find it in c:\documents and settings\username\application data\Mozilla\Firefox\Profiles\xxxxx.default\
  2. Go to the ‘chrome’ subfolder and create a file called userContent.css (you’ll probably find there are a couple of ‘example’ files there already).
  3. Add the following (Firefox-specific) line to the file:

    html { overflow: -moz-scrollbars-vertical !important; }

  4. Save the file, exit Firefox and start her up again.  You should now have a permanent scrollbar which eliminates the page shift. 

Create SQL Server User and Role Script to execute Stored Procedures Only


I really thought I’d posted this before, but here’s a script I use quite often when I need to create a SQL user that needs to just have ‘execute’ access to Stored Procs. 

This is usually used for web apps where the application connects as a single user.  I’d encourage anyone to ensure that you always start off by granting the minimum access required and only add more when actually needed.  If you’ve got different roles in the application (viewer, administrator etc.) then create different roles, with a user for each.

The intention is to remove the need to give direct table access (as we only call Stored Procs right?).  The script doesn’t actually do the ‘grants’ for you, but will generate another batch that you can execute if you like.  I prefer to put the grants in the script for each proc/function anyway, just so it’s a little easier to maintain, and you can specifically grant access to the ‘admin’ procs to the ‘admin’ role, whilst limiting any ‘normal’ users to the read-only/everyday ones. 

There’s satisfaction to be had from an app that connects to the database via an appropriate user, which is in a role that can only do what is necessary for the role, and can’t access tables directly.  Not only is it neat, it’s also (relatively) secure.

Here’s the script…

-- MSIMNER: This script will generate a role and user for SQL access to a single database
-- The role will have no specific access to read or write tables but will be granted access
-- through execution of stored procedures ONLY

USE <databasename, sysname, MyDB>

DECLARE @UserName VARCHAR(20)
DECLARE @Password VARCHAR(20)
DECLARE @DBName   VARCHAR(20)
DECLARE @Role     VARCHAR(20)

--SET PASSWORD HERE
SET @Password = 'ixFg2tfk'

SET @UserName = '<databasename, sysname, MyDB>User'
SET @Role     = 'db_exec_<databasename, sysname, MyDB>'
SET @DBName   = '<databasename, sysname, MyDB>'

IF EXISTS(SELECT * FROM master.dbo.syslogins WHERE name = @UserName)
BEGIN
    exec sp_revokedbaccess @name_in_db = @UserName
    exec sp_droplogin @loginame = @UserName
    exec sp_droprole @rolename = @Role
END

--Create User
exec sp_addlogin @loginame = @UserName, @passwd = @Password
exec sp_grantdbaccess @loginame = @UserName
exec sp_defaultdb @loginame = @UserName, @defdb = @DBName
exec sp_addrole @rolename = @Role, @ownername = 'dbo'
exec sp_addrolemember @rolename = @Role, @membername = @UserName

-- The output of this command will need to be run separately – i.e. paste the output into a new window and run…

--SELECT 'GRANT EXECUTE ON ' + name + ' TO ' + @Role , *
--FROM sysobjects
--WHERE type = 'P'
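
Uncommented and run, the first column of that SELECT emits one GRANT per stored procedure, along the lines of the following (proc and database names illustrative):

GRANT EXECUTE ON usp_InsertGenerator TO db_exec_MyDB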

