Generate SQL Inserts from existing data

This is one from the archives that somehow got away.  I had need for this today, so thought I’d post it so I don’t have to look so hard next time!


It’s pretty much a case of creating the SP in your chosen database and running it with a table name as the parameter.  I originally found this on one of the SQL sites (probably SQLServerCentral), and amended it slightly to add some basic filtering via the @where parameter.


There are a few limitations from memory – with blob, image and text fields – but for your average tables this will probably do the trick just fine.


I’ve also used a different method that basically builds up a single INSERT with SELECT ‘literal values’ UNION SELECT ‘literal values’.  That works fine and is potentially easier to control transaction-wise, as everything’s in a single statement.  Anyway, here’s the code…
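As a rough sketch of that alternative – the table and values here are made up purely for illustration:

```sql
-- One statement inserts all the rows, so it's all-or-nothing transaction-wise.
-- UNION ALL (rather than UNION) avoids de-duplicating identical rows.
INSERT INTO Customers (CustomerId, CustomerName)
SELECT 1, 'Alice'
UNION ALL
SELECT 2, 'Bob'
UNION ALL
SELECT 3, 'Carol'
```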

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[usp_InsertGenerator]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[usp_InsertGenerator]
GO

CREATE PROC dbo.usp_InsertGenerator
(@tableName varchar(100), @where varchar(1000) = NULL) as

--Declare a cursor to retrieve column specific information for the specified table
DECLARE cursCol CURSOR FAST_FORWARD FOR
SELECT column_name, data_type FROM information_schema.columns WHERE table_name = @tableName
OPEN cursCol
DECLARE @string nvarchar(3000)     --for storing the first half of the INSERT statement
DECLARE @stringData nvarchar(3000) --for storing the data (VALUES) related statement
DECLARE @dataType nvarchar(1000)   --data types returned for respective columns
SET @string = 'INSERT ' + @tableName + '('
SET @stringData = ''

DECLARE @colName nvarchar(50)

FETCH NEXT FROM cursCol INTO @colName, @dataType

IF @@FETCH_STATUS <> 0
BEGIN
    PRINT 'Table ' + @tableName + ' not found, processing skipped.'
    CLOSE cursCol
    DEALLOCATE cursCol
    RETURN
END

WHILE @@FETCH_STATUS = 0
BEGIN
    IF @dataType IN ('varchar', 'char', 'nchar', 'nvarchar')
    BEGIN
        SET @stringData = @stringData + ''''+'''+isnull('''''+'''''+'+@colName+'+'''''+''''',''NULL'')+'',''+'
    END
    ELSE IF @dataType IN ('text', 'ntext') --if the datatype is text or something else
    BEGIN
        SET @stringData = @stringData + '''''''''+isnull(cast('+@colName+' as varchar(2000)),'''')+'''''',''+'
    END
    ELSE IF @dataType = 'money' --because money doesn't get converted from varchar implicitly
    BEGIN
        SET @stringData = @stringData + '''convert(money,''''''+isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+'
    END
    ELSE IF @dataType = 'datetime'
    BEGIN
        SET @stringData = @stringData + '''convert(datetime,'+'''+isnull('''''+'''''+convert(varchar(200),'+@colName+',121)+'''''+''''',''NULL'')+'',121),''+'
    END
    ELSE IF @dataType = 'image'
    BEGIN
        SET @stringData = @stringData + '''''''''+isnull(cast(convert(varbinary,'+@colName+') as varchar(6)),''0'')+'''''',''+'
    END
    ELSE --presuming the data type is int, bit, numeric or decimal
    BEGIN
        SET @stringData = @stringData + ''''+'''+isnull('''''+'''''+convert(varchar(200),'+@colName+')+'''''+''''',''NULL'')+'',''+'
    END

    SET @string = @string + @colName + ','

    FETCH NEXT FROM cursCol INTO @colName, @dataType
END

DECLARE @Query nvarchar(4000)

SET @query = 'SELECT ''' + substring(@string, 0, len(@string)) + ') VALUES(''+ ' + substring(@stringData, 0, len(@stringData) - 2) + '''+'')'' FROM ' + @tableName
IF @where IS NOT NULL
    SET @query = @query + ' where ' + @where

exec sp_executesql @query

CLOSE cursCol
DEALLOCATE cursCol
GO
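Typical usage – the table name and filter below are just examples; note the doubled-up quotes inside the @where string, since it's embedded into dynamic SQL:

```sql
-- Script out every row
EXEC dbo.usp_InsertGenerator 'Customers'

-- Script out a filtered set via the optional @where parameter
EXEC dbo.usp_InsertGenerator 'Customers', @where = 'Country = ''UK'''
```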



Data Dictionary from within SQL Server

I knew you could do this, but never took the time to find out exactly which sys… table the comments got stored in.  sysProperties is what you want, and this article on SQLServerCentral tells you how…


The easiest way to get them ‘in’ is in Enterprise Manager, and someone posted a slightly amended version of the code to get them out (I also added IsNullable)…



DECLARE @TableName VARCHAR(128)

SET @TableName = 'Catalogue'

SELECT
    T.table_name AS TableName,
    C.ordinal_position AS ColumnOrder,
    C.[column_name] AS ColumnName,
    ISNULL(P.value, '') AS ColumnDescription,
    C.DATA_TYPE AS DataType,
    COALESCE(numeric_precision, character_maximum_length, 0) AS Length,
    COALESCE(numeric_scale, 0) AS DecimalPos,
    C.is_nullable AS IsNullable,
    ISNULL(C.COLUMN_DEFAULT, '') AS DefaultValue
FROM information_schema.tables T
    INNER JOIN information_schema.columns C
        ON T.table_name = C.table_name
    LEFT JOIN ::FN_LISTEXTENDEDPROPERTY(N'MS_Description', N'user', N'dbo', N'table', @TableName, N'column', null) P
        ON C.column_name = P.objname
WHERE T.table_name = @TableName
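If you’d rather script the descriptions ‘in’ than click through Enterprise Manager, sp_addextendedproperty does the same job – the table, column and description below are examples:

```sql
-- Attach an MS_Description extended property to a single column
EXEC sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'Unique code identifying the catalogue entry',
    @level0type = N'user',   @level0name = N'dbo',
    @level1type = N'table',  @level1name = N'Catalogue',
    @level2type = N'column', @level2name = N'ProductCode'
```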




ASP.NET Open Source Content Management Systems

I’m producing a site for a client at the moment, and it quickly became apparent that I’d have a number of requirements that would lead me down the CMS path:



  1. Client makes direct updates to the site (with approval/preview possibilities if possible)
  2. Built-in membership, security, roles etc (why write this yourself? – I’ve got 2 kids and just don’t have the time or inclination!)
  3. Skinning capability (OK ASP.NET gives you that ‘kindof’ for free – but I throw it in anyway)
  4. CSS-driven layout (rather than tables) – This is a technical requirement for me rather than the client, but I want to make the site accessible and SEO-friendly
  5. Capability to customise  and add generic hooks to other office-based systems
  6. Quick time to market (building all the above from scratch would be ‘fun’ I’m sure, but would also take time)

To be fair to the customer I couldn’t really suggest that I do everything from scratch when there’s plenty of open source and commercial CMS’s that can do the job very well. 


I do .NET – so I’m sure there’s plenty of great CMS’s for other platforms, but I’m not about to use this as a re-skilling exercise, charging a customer for the pleasure (as I need to do some more ‘significant’ development around the site, so will be using .NET for that).


What follows is a personal view based on my requirements and my experience (rather than extensive research into each product).


DotNetNuke


This was the first one I considered as I’d had dealings with it a few years ago (a dev ‘community’ site at a previous employer).  After playing with just about every module on the planet at the time (‘cos you do), we realised we just needed a blog, so converted to Community Server as that seemed a bit less cluttered.  (BTW if you want to run a .NET blog without SQL Server (like me) then DasBlog’s what you want.)


Pro’s



  • Price (free)
  • Supports pretty much everything I need (open architecture, skinning, module development etc)
  • Familiar from previous use
  • Easy to use and administer – very WYSIWYG
  • Documentation’s comprehensive (if a little focused towards the high-level architectural view). 

Con’s



  • Slow (I thought I’d imagined this as my own dev machine’s not a ‘beast’ and it’s running SQL 2005 Express, but part of this is potentially due to the caching mechanism, as noted on the MojoPortal site)
  • Table-driven by default – if you want your skin to be CSS-driven you’ve got to muck around for ages finding a ‘good’ CSS skin (there’s plenty of bad/really basic ones).  This extends to the DNN standard controls, and it takes an awfully long time (using Firebug and the IE Dev toolbar) to work out why you’ve still got fixed-size fonts when you’ve gone to great lengths in your skin to make everything sizable.  (I’m sure the better CSS skins address this, but there always appears to be something that gets in the way because the original DNN design uses tabular layout throughout.)
  • It’s written in VB.NET (OK this isn’t a ‘real’ reason, as there’s no difference – I know!  You also shouldn’t need to do much (if anything) in VB.NET if you’re a C#’er and you want to extend things – I think I’m just a bit miffed I’ve got to work with a bit of VB.NET at work at the moment 🙂 )
  • Not that keen on the actual logical structure, the limited url-rewriting, or the whole anonymous tab thing you get out of the box.  It seems to me that DNN was designed before lots of newer features became available and it’s had to bolt them on as time’s gone by.
  • Lastly – it doesn’t feel very open source.  It’s clear that the business model has evolved to keep the output freely available whilst making money out of books, consulting and conferences (fair play to Shaun Walker – he deserves the kudos for the product).  I just feel that things went a little ‘corporate’ a while ago, which turned me off.  I bought an iRiver rather than an iPod for the same reason (too much marketing noise that turned me off, almost trying to bombard you so you don’t even consider other products, and also trying to deter competition).  There’s also many people now trying to make money out of skins (most of which aren’t terribly good) – another indicator that the ‘community’ isn’t very focused on open source development.

MojoPortal


Until a couple of days ago I’d not heard of this, but the performance discussion (above) sparked my interest.  It’s clearly got less out-of-the-box functionality and community support than DNN and is lacking more in the documentation department, but if you’re after a no fuss implementation then this might be the way to go.


Pro’s (from the site – not my experience)



  • Free and open source
  • Multi DB support (just about any DB you’d care to use)
  • Approval/workflow support (although haven’t looked into this yet)
  • Provides major functionality required
  • Skinning uses standard ASP.NET theme-based approach
  • Aligned relatively well with accessibility guidelines (CSS layout) – XHTML compliancy 

Con’s



  • Download is actually a little confusing.  You’ve got a number of zip files and you’re left to work out which one(s) you need.  I downloaded a couple then realised I only needed the mssql release. 
  • Documentation is a little lacking (although installation’s pretty easy once you’ve changed a config setting). 
  • More of a ‘developer’ product – some understanding of .NET is required, which makes it more difficult for non-techy people to get it off the ground  

Jury’s still out on this one as I’ve literally just installed it and that’s it.  I need to get to grips with the integration and skinning support, as I want to change the default 3-pane layout and remove the left panel, so we’ll see how that goes…


Others (That I don’t know anything about)


Rainbow Portal (looks like it might be more worth a look in a year or two when there’s a bit more support)


Umbraco (not all versions free)


 


So what am I going to do now?  Install MojoPortal and see if it ‘feels’ nicer than DotNetNuke.  I’m not really too worried about future support, and upgrades as I’m providing a site to a client who will run with it once I’m done (based on what it does ‘now’ – and how stable it is ‘now’).


I’ll post the results – or add to the list – when I’ve come to a conclusion after some more research.

SQL Server – Save all DTS Packages

Here’s another useful proc I found to back up all your DTS packages (from I can’t remember where).  Put this in your master database: 


Create procedure usp_SavePackages
    @Path varchar(128)
as
/*
    Saves every DTS package in msdb out to a structured storage file in @Path
*/

    set nocount on

    declare @objPackage    int
    declare @PackageName   varchar(128)
    declare @rc            int
    declare @ServerName    varchar(128)
    declare @FileName      varchar(128)
    declare @FilePath      varchar(128)
    declare @cmd           varchar(2000)

    select  @ServerName = @@ServerName,
            @FilePath = @Path

    -- make sure the path ends with a backslash before building file names
    if right(@FilePath, 1) <> '\'
    begin
        select @FilePath = @FilePath + '\'
    end

    -- create output directory - will fail if already exists but ...
    select @cmd = 'mkdir ' + @FilePath
    exec master..xp_cmdshell @cmd

    create table #packages (PackageName varchar(128))
    insert  #packages
            (PackageName)
    select  distinct name
    from    msdb..sysdtspackages

    select @PackageName = ''
    while @PackageName < (select max(PackageName) from #packages)
    begin
        select @PackageName = min(PackageName) from #packages where PackageName > @PackageName

        select @FileName = @FilePath + @PackageName + '.dts'

        exec @rc = sp_OACreate 'DTS.Package', @objPackage output
        if @rc <> 0
        begin
            raiserror('failed to create package rc %d', 16, -1, @rc)
            return
        end

        exec @rc = sp_OAMethod @objPackage, 'LoadFromSQLServer', null,
            @ServerName = @ServerName, @Flags = 256, @PackageName = @PackageName
        if @rc <> 0
        begin
            raiserror('failed to load package rc %d, package %s', 16, -1, @rc, @PackageName)
            return
        end

        -- delete old file
        select @cmd = 'del ' + @FileName
        exec master..xp_cmdshell @cmd, no_output

        exec @rc = sp_OAMethod @objPackage, 'SaveToStorageFile', null, @FileName
        if @rc <> 0
        begin
            raiserror('failed to save package rc %d, package %s', 16, -1, @rc, @PackageName)
            return
        end

        exec @rc = sp_OADestroy @objPackage
    end

GO
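Then it’s just one call – the path below is an example, and the account running it needs xp_cmdshell rights to create the directory and delete old files:

```sql
EXEC master..usp_SavePackages 'C:\DTSBackup'
```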



Microsoft CRM – Data Migration Framework (DMF) Bugs / Fixes

I’ve been migrating data for a client recently and part of this has required the use of the MS Data Migration Framework (DMF).  I thought I’d post a couple of bugs I found and (thankfully) how I managed to get past them with the use of SQL Profiler!


The Migration Framework works – albeit slowly, as it uses the CRM Web Service to stuff the records into the CRM database.  The CRM structure I’ve been trying to migrate to contains a large number of custom fields.  The DMF calls many stored procs to get its data (from the CDF database); these are structured by entity, e.g.


p_dm_GetContact
p_dm_GetContract


I was having a problem with Contacts, and after tracing a migration attempt with cut-down data (for speed), I found that the issue was that the lid was being blown off a VARCHAR(8000) in p_dm_GetContact giving the following SQL Error:


Error: 207, Severity: 16, State: 3


The error is ‘Invalid Column Name’.  Further investigation revealed that a slightly dodgy piece of code (replicated in all the other similar procs) effectively truncates the dynamically created field list in the ‘SELECT’ statement…


-- if there are extended fields, then attached them to the end 
--   of the base field list

if len(@metadatafields) > 0
    set @entityfields = @entityfields + ', ' + rtrim(@metadatafields)


@entityfields contains a comma delimited list of fields from the ‘cdf_Contact’ table, and @metadatafields contains fields from ‘cdf_Contact_Ext’ (custom fields).  If you concatenate the two you’re obviously asking for trouble.
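You can see the problem in isolation – on SQL Server 2000 the result of concatenating two varchars is itself capped at 8000 characters, and the overflow is silently thrown away:

```sql
DECLARE @a varchar(8000), @b varchar(8000)
SET @a = REPLICATE('x', 7999)
SET @b = REPLICATE('y', 100)
-- No error, no warning - the tail of @b simply vanishes
SELECT LEN(@a + @b)  -- 8000, not 8099
```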


The following shows the ‘fix’ to enable a bit more room (it’s still not very good – but as the code’s not available we can’t make a nicer fix right now)


CREATE procedure p_dm_GetContact
as
begin
    set nocount on

    declare @entity         varchar(100)
    declare @entityfields   varchar(8000)
    declare @metadatafields varchar(8000)

    -- set target entity name
    set @entity = 'Contact'

    -- query base field list for targeted entity
    exec p_dm_GetFields @entity, 'c', @entityfields output

    -- query extended field list for targeted entity
    exec p_dm_GetMetaDataFields @entity, 'ext', @metadatafields output

    -- if there are fields, then attached them to the end 
    --   of the base field list
    if len(@entityfields) > 0
        set @entityfields = ', ' + rtrim(@entityfields)

    -- if there are extended fields, then attached them to the end 
    --   of the base field list
    ----    if len(@metadatafields) > 0
    ----        set @entityfields = @entityfields + ', ' + rtrim(@metadatafields)

    if len(@metadatafields) > 0
        set @metadatafields = ', ' + rtrim(@metadatafields)

    -- build/execute query statement
    -- the status code must be null
    exec (
    'select top 1 '''' As CRMUpdateStatus, 
            c.ContactId As entityid,
            c.StateCode As statecode,
            dbo.p_fn_GetID(l.DestinationId) As originatingleadid, 
            dbo.p_fn_GetID(p.DestinationId) As defaultpriceLevelid, 
            dbo.p_fn_GetID(a.DestinationId) As accountid, 
            dbo.p_fn_GetID(su.DestinationId) As owninguser '
    + @entityfields + @metadatafields +
    ' from cdf_Contact c
            inner join cdf_Contact_info info 
                on c.ContactId = info.ContactId
            left outer join cdf_Contact_ext ext 
                on c.ContactId = ext.ContactId
            left outer join cdf_Lead_info l 
                on c.OriginatingLeadId = l.LeadId
            left outer join cdf_PriceLevel_info p 
                on c.DefaultPriceLevelId = p.PriceLevelId
            left outer join cdf_Account_info a 
                on c.AccountId = a.AccountId
            inner join cdf_SystemUser_info su 
                on c.OwningUser = su.SystemUserId
    where (info.StatusCode is null)
        and (info.DestinationId is null)'
    )
end
GO





 

SQL Server – Get Matrix for two different result sets

I had a need today to splice 2 different result sets together and effectively give all permutations of the two tables.  I’d seen different examples of doing this with cursors and loops etc, but I thought – there ‘must’ be an easier way – there’s lots of different join types, right?


The first issue I’d got is that there are no common fields between the two tables I need to merge, because otherwise a ‘simple’ FULL OUTER JOIN would do the biz.


In the end it didn’t amount to much more than that, because a slightly dodgy hack (but functional nonetheless) to guarantee full coverage from the join is to simply place the same literal on both sides of the join 🙂


Take 2 sets of ‘different’ data (let’s use Northwind, as I’m not about to bore you with my own data).  Say you wanted to generate a list of all products for all regions given the following result sets…


SELECT ProductName
FROM Products


SELECT RegionDescription
FROM Region


You can splice them together with this….


SELECT  R.RegionDescription,
        P.ProductName
FROM Products P
FULL OUTER JOIN Region R
    ON 1 = 1


and you get something like this… (snippet)


RegionDescription                                  ProductName                             
————————————————– —————————————-
Eastern                                            Valkoinen suklaa
Eastern                                            Vegie-spread
Eastern                                            Wimmers gute Semmelknödel
Eastern                                            Zaanse koeken
Western                                            Alice Mutton
Western                                            Aniseed Syrup
Western                                            Boston Crab Meat


If there’s actually a common field then your join can obviously use that.  Any other filtering you want to do can also go into the WHERE clause as required.
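Incidentally, the same result can be written as an explicit CROSS JOIN, which says what you mean without the literal trick:

```sql
-- Cartesian product of the two tables - every region paired with every product
SELECT  R.RegionDescription,
        P.ProductName
FROM Products P
CROSS JOIN Region R
```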


This is another reminder of the lesson that if you’re tempted to use a CURSOR, then resist that urge until you’ve exhausted your Google finger!


SQLServerCentral and SQLMag often have the answer you’re looking for.

SQL Server – Kill all users script

Found this the other day – for when you’re restoring SQL databases.  You need to kick out all current users right?


Run this little fella into your master database and away you go – just run this step before your restore.



CREATE PROCEDURE usp_KillUsers @dbname varchar(50) as
SET NOCOUNT ON

DECLARE @strSQL varchar(255)
PRINT 'Killing Users'
PRINT '-----------------'

CREATE TABLE #tmpUsers (
    spid int,
    dbname varchar(50))

INSERT INTO #tmpUsers
SELECT [Process ID] = l.req_spid, db_name(l.rsc_dbid)
FROM master.dbo.syslockinfo l WITH (NOLOCK)
WHERE db_name(l.rsc_dbid) = @dbname
GROUP BY l.req_spid, db_name(l.rsc_dbid)

DECLARE LoginCursor CURSOR
READ_ONLY
FOR SELECT spid, dbname FROM #tmpUsers WHERE dbname = @dbname

DECLARE @spid varchar(10)
DECLARE @dbname2 varchar(40)
OPEN LoginCursor

FETCH NEXT FROM LoginCursor INTO @spid, @dbname2
WHILE (@@fetch_status <> -1)
BEGIN
    IF (@@fetch_status <> -2)
    BEGIN
        PRINT 'Killing ' + @spid
        SET @strSQL = 'KILL ' + @spid
        EXEC (@strSQL)
    END
    FETCH NEXT FROM LoginCursor INTO @spid, @dbname2
END

CLOSE LoginCursor
DEALLOCATE LoginCursor

DROP TABLE #tmpUsers
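Then your restore job just does something like this first – the database name and backup path are examples:

```sql
-- Kick everyone out, then restore
EXEC master..usp_KillUsers 'MyDatabase'
RESTORE DATABASE MyDatabase FROM DISK = 'C:\Backups\MyDatabase.bak'
```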




SQL Server Tips – Optional Parameters and Find Closest Match

It’s funny how you can survive for years not knowing some really simple stuff.  I learned two useful things last week.  For instance…


Optional SQL Parameters


To have an ‘optional’ parameter in a SQL ‘search’ Stored Proc you’d often declare a parameter thus…


@MyParam VARCHAR(20) = NULL --to denote that we want to ignore this param


and you’d probably



  1. have some convoluted logic to possibly build dynamic SQL around whether to factor the parameter into your query
  2. build some nasty ‘like’ clause that defaults to ‘%’
  3. derive MIN and MAX values from the input parameter and use a BETWEEN
  4. use a ‘magic value’ in place of your NULL to denote ‘ignore’
  5. something else equally ugly I’ve not considered

…or (assuming this parameter is expecting to do a direct match) you could just simply do


WHERE MyFieldIWantToMatch = ISNULL(@MyParam, MyFieldIWantToMatch)


This will ensure that if the parameter is NULL the predicate collapses to MyFieldIWantToMatch = MyFieldIWantToMatch – a simple (IF 1 = 1) type thing – so the filter is effectively ignored.  (Just beware that rows where the column itself is NULL still won’t match.)
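In context, a search proc using the pattern might look like this – the table and columns are made up for illustration:

```sql
CREATE PROCEDURE usp_FindOrders
    @CustomerId INT = NULL,         -- NULL means 'any customer'
    @Status     VARCHAR(20) = NULL  -- NULL means 'any status'
AS
SELECT OrderId, CustomerId, Status
FROM Orders
WHERE CustomerId = ISNULL(@CustomerId, CustomerId)
  AND Status     = ISNULL(@Status, Status)
```

Only suitable for non-nullable columns, for the reason above – and note the optimiser can’t always use indexes well with this pattern, so test it on bigger tables.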


 


Find Closest Match


I had a need to find the closest match to a particular value within a tolerance of +/- 10%.  The following does that fairly simply (I did nick the idea from thescripts.com), and these sorts of things do avoid the need for nasty tempdb use with temp tables…


The following is an example – so don’t think I design tables with one column!


DECLARE @Amount INT


SET @Amount = 50


DECLARE @MinAmount INT
DECLARE @MaxAmount INT


SET @MinAmount = @Amount * 0.9
SET @MaxAmount = @Amount * 1.1


SELECT  AT.Amount
FROM AmountTable AT
WHERE AT.Amount BETWEEN @MinAmount AND @MaxAmount --within range
ORDER BY ABS(@Amount - AT.Amount) ASC --Nifty 'closest' match


You can then take the first row in your result set to be the closest match.
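If you only ever want that single closest row, the TOP 1 can go straight into the query (same illustrative table as above):

```sql
-- Returns just the closest in-tolerance amount
SELECT TOP 1 AT.Amount
FROM AmountTable AT
WHERE AT.Amount BETWEEN @MinAmount AND @MaxAmount
ORDER BY ABS(@Amount - AT.Amount) ASC
```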