SQLGeordie's Blog

Helping the SQL Server community……where I can!

SSIS SCD vs MERGE Statement – Performance Comparison — July 3, 2012

I wouldn’t class myself as an expert in SSIS, but I certainly know my way around, and I came across something today which I thought I’d share. As with a lot of things there are “many ways to skin a cat”, and I won’t go into all of them at the moment; what I will concentrate on is updating columns in a table where the data has changed in the source.

One of the projects I’m currently working on requires this very process, and when I set about doing so I created a T-SQL MERGE statement to do the business. However, the question was raised as to why I didn’t use SSIS’s built-in Slowly Changing Dimension (SCD) component. I didn’t really have an answer other than personal preference, so I decided to delve into it a bit further and compare the performance of each method.

As a test, I created a source table and a target table, each with an ID and a Name column:

USE TempDB;

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'dbo.iSource') AND type in (N'U'))
	DROP TABLE dbo.iSource;

CREATE TABLE dbo.iSource
(
   ID INT,
   Name varchar(100)
);

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'dbo.iTarget') AND type in (N'U'))
	DROP TABLE dbo.iTarget;
	
CREATE TABLE dbo.iTarget
(
   ID INT,
   Name varchar(100)
);

and populated them with some dummy data:

INSERT INTO dbo.iSource (ID,Name)
SELECT TOP 10000
ROW_NUMBER() OVER (ORDER BY t.object_id) AS rownumber
,'Name_'+convert(varchar(5),ROW_NUMBER() OVER (ORDER BY t.object_id)) --varchar(5) so 5-digit row numbers aren't truncated
FROM sys.tables t
CROSS JOIN sys.stats s; --cross join purely to generate enough rows

INSERT INTO dbo.iTarget (ID,Name)
SELECT TOP 10000 
ROW_NUMBER() OVER (ORDER BY t.object_id DESC) AS rownumber --Done in descending order so the IDs pair up with different Names
,'Name_'+convert(varchar(5),ROW_NUMBER() OVER (ORDER BY t.object_id))
FROM sys.tables t
CROSS JOIN sys.stats s;

SELECT ID, Name FROM iSource;
SELECT ID, Name FROM iTarget;

So we now have a source and target table with different Names and we’ll look to update the iTarget table with the information coming from iSource.

Method 1 – MERGE Statement

MERGE dbo.iTarget AS target
	USING (
	SELECT ID, Name
	FROM dbo.iSource
	 ) AS  source (ID, Name)
		ON (target.ID = source.ID)
		WHEN MATCHED AND target.Name <> source.Name 
		THEN 
			UPDATE SET Name = source.Name
	 WHEN NOT MATCHED THEN 
		 INSERT (ID, Name)
		 VALUES (source.ID, source.Name); 

Running this method in SSMS for simplicity, Profiler output 2 rows (Batch Starting and Batch Completing), with a CPU Time of 125ms and a Duration of 125ms, and it updated 6678 records. Top stuff, as expected.
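
If you want to sanity-check what the MERGE actually did, a quick sketch (not part of the original test; the @actions table variable is purely illustrative) is to add an OUTPUT clause and summarise the actions taken:

DECLARE @actions TABLE (MergeAction NVARCHAR(10));

MERGE dbo.iTarget AS target
	USING (SELECT ID, Name FROM dbo.iSource) AS source (ID, Name)
		ON (target.ID = source.ID)
	WHEN MATCHED AND target.Name <> source.Name THEN
		UPDATE SET Name = source.Name
	WHEN NOT MATCHED THEN
		INSERT (ID, Name) VALUES (source.ID, source.Name)
	OUTPUT $action INTO @actions (MergeAction); --one row per affected record

--Summarise how many rows were updated vs inserted
SELECT MergeAction, COUNT(*) AS RowsAffected
FROM @actions
GROUP BY MergeAction;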

Method 2 – SSIS SCD Component
I rebuilt the tables to put them back to where we started and set about creating the same thing with the SCD component, setting ID as the business key and Name as the changing attribute, and not setting inferred members. Below is a screen dump of the outcome of this:

BEFORE: [screenshot of the Profiler trace before running the package]

I cleared down Profiler, ran the SSIS package, and the outcome was quite astounding.

DURING/AFTER: [screenshot of the Profiler trace during/after running the package]

Profiler output 13456 rows, including 6678 rows of queries like this:

exec sp_executesql N'SELECT [ID], [Name] FROM [dbo].[iTarget] WHERE ([ID]=@P1)',N'@P1 int',8

as well as 6678 rows of queries similar to this:

exec sp_execute 1,'Name_3304',3304

Total Duration of 37 seconds (yes, that’s seconds, not ms!!)…….and this is on a table of only ~7k rows!

Well I’ll be damned, the SCD basically runs a cursor, looping through each record checking for a match on ID and updating that record if so. I can’t actually believe that MS have built a component which performs in this way.
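
In T-SQL terms, the pattern it generates is roughly equivalent to the sketch below (illustrative only; the actual statements issued are the prepared sp_executesql/sp_execute calls shown above):

--Illustrative only: roughly the row-by-row work the SCD component produces
DECLARE @ID INT, @Name VARCHAR(100);

DECLARE cur CURSOR FAST_FORWARD FOR
	SELECT ID, Name FROM dbo.iSource;

OPEN cur;
FETCH NEXT FROM cur INTO @ID, @Name;
WHILE @@FETCH_STATUS = 0
BEGIN
	--one lookup per source row...
	IF EXISTS (SELECT 1 FROM dbo.iTarget WHERE ID = @ID AND Name <> @Name)
		--...followed by one update per changed row
		UPDATE dbo.iTarget SET Name = @Name WHERE ID = @ID;

	FETCH NEXT FROM cur INTO @ID, @Name;
END
CLOSE cur;
DEALLOCATE cur;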

So, to answer the question asked, “why didn’t I use SSIS’s built-in Slowly Changing Dimension (SCD) component?”, I now have a definitive answer: it doesn’t perform!

I’m sure SCD has its place but for me, the requirements and the datasets I’m working on I think I’ll stick with MERGE for now….. 🙂

NOTE: This was done on SQL Server 2008 R2 Developer Edition running on Windows 7 Ultimate. I’m not sure if SQL Server 2012 has improved the SCD performance, but I’ll leave that for another day.

It’s that time of year…..Exceptional DBA Awards 2012 — June 25, 2012

Being a 2011 finalist, I felt I should try and rally all those who truly are exceptional to get their nominations in quick sharp, as the closing date is fast approaching.

I was lucky enough to be nominated for this award last year and wasn’t going to follow it through, as I felt I didn’t really stand a chance. But when I sat and thought about it: if someone is willing to think of you as being exceptional at what you do, enough so to nominate you, then why not? What’s the worst that can happen!!??!!

The level of talent out there is phenomenal, and the 4 guys I was up against last year are up there with the best in the world. Don’t let that put you off though. I feel that this award is very much focused towards those in the USA and not many actually make it through to the finals from the UK (Kevan Riley Blog / Twitter and myself are, I think, the only two!), so I think we need to give a bigger push this year and try and get more than one finalist from the UK 🙂

If you haven’t been nominated by one of your peers then nominate yourself; there’s no rule saying you can’t, and in fact Redgate encourage it.

Get entered, get the questions answered and cross your fingers!

Good luck!!!!

Querying Microsoft SQL Server 2012 Beta exam (70-461 / 71-461) – My Thoughts — April 13, 2012

Well, I’ve now done the final SQL Server 2012 exam I managed to get a slot booked for. The Querying Microsoft SQL Server 2012 exam was never going to be my strongest subject as I’m more of a DBA than a Developer, but I felt it went quite well.
The exam consisted of 55 questions, varying in structure from multiple guess to drag n drop. There were only about 5 or 6 questions I left comments on, relating to the content not being clear, typos, or in one instance an actual mistake in the question, so all in all a better setup than the Administrator exam I took first (70-462 / 71-462).

The biggest issue I found was down to my own fault: I didn’t revise the syntax of the new 2012 T-SQL functionality. Don’t get me wrong, I know I’ve got a lot of them right, but with some, although I’d narrowed the answer down to 2 of the 4, I didn’t know the syntax well enough to be 100% certain as there was only 1 word different between them, which I’m a bit disappointed with……..but no-one to blame but myself 🙂

I’m still not sure whether the pass mark is 70% or 80%, and I’m hoping I’ve answered enough of the non-2012 questions correctly to scrape through.

As always, I’d be interested to hear other people’s thoughts on any of the 2012 exams they’ve taken so far…..

Designing Database Solutions for Microsoft SQL Server 2012 Beta exam (70-465 / 71-465) – My Thoughts — April 6, 2012

After sitting the Administering Microsoft SQL Server 2012 Databases Beta exam (71-462) on Monday, I was still a little disappointed with Microsoft’s approach to questioning for these exams. So I went into this exam with pretty much the same mindset: that the questions were going to be vague and in some cases completely wrong.

Much to my surprise, I found the questions in this exam far, far better. The exam itself was split into sections: 44 questions in total, 26 of them the standard multiple choice, plus 5 further scenario-based sections, each with either 3 or 4 questions. Section one was much the same as the 71-462 exam, but I felt the questions were, in the majority, more concise and in my opinion gave enough information to make a valid judgement when answering. I did leave a few comments, as there were a few questions that could do with a bit more work and a couple of typos.

The scenario sections again provided enough information to select the relevant answers. My only criticism of these sections was on question 2 of my second scenario, where there was a major typo in the answers which didn’t pry me away from the correct one, but it requires sorting.

Now, the main thing that really got me with this exam was the amount of SQL Azure questions in section one. It was not mentioned as a skill measured, so that needs looking into in my opinion: either add it as a skill measured or remove it from the exam.

A much more enjoyable exam than the Administrator one, mainly due to the higher level of quality in the questioning, resulting in far fewer comments being left. And for me, I love the scenario-based questions!!

As always, I’d be interested to hear other people’s thoughts on any of the 2012 exams they’ve taken so far…..

How to output from invoke-sqlcmd to Powershell variable — February 3, 2012

Sorry for another Powershell post, but I’ve been doing a lot of it recently and coming up with (what I think are) a few nifty tricks.

One of the issues I encountered recently was with Kerberos delegation whilst trying to automate Log Shipping. I was trying to use an OPENROWSET query against the Primary and Secondary servers to obtain the Primary_id and Secondary_id to pass to the script to be run on the monitor server. However, seeing as the environment was not set up for Kerberos, I encountered the “double-hop” issue.

Enabling Kerberos delegation for the service account would be too high a risk without thorough testing, so it wasn’t an option in this instance. Instead, I decided to look into using invoke-sqlcmd against each of the servers to get the IDs required and pass them to the monitor script.

So how did I go about doing this, you ask? Well, it’s actually really simple. After a bit of googling I came across this blog by Allen White which gave me a starting block.

Firstly, you have to amend your TSQL script to SELECT the parameter you want to output and use within the rest of the script, something like this:

TSQL snippet to be run against the Primary Server:

--Cut down version of the script for readability
EXEC @SP_Add_RetCode = master.dbo.sp_add_log_shipping_primary_database 
		@database = N'$(Database)' 
		...
		,@primary_id = @LS_PrimaryId OUTPUT --This is what we want
		,@overwrite = 1 
		,@ignoreremotemonitor = 1 

--Need to output this in order for powershell to take it and use it in the monitor script
SELECT @LS_PrimaryId as LS_PrimaryId 

Do the same for the script to run on the secondary server but obviously for the secondary_id 🙂

So, now you’ve set up the TSQL side of things, you need to call these from Powershell and assign the output to a Powershell variable, like so:


$script = "LogShip_Primary.sql"
$PrimaryID = Invoke-Sqlcmd -InputFile $ScriptLocation$script -Variable Database=$DatabaseName, etc etc etc -ServerInstance $PrimaryServer 

$script = "LogShip_Secondary.sql" 
$SecondaryID = Invoke-Sqlcmd -InputFile $ScriptLocation$script -Variable Database=$DatabaseName, etc etc etc -ServerInstance $SecondaryServer

So, relatively simple: basically you’re setting the output to a Powershell variable and, to keep things tidy, re-assigning it to another variable. Something to note is that the output is actually a DataRow object, so make sure you use the name of the alias you used in your last TSQL statement.


#Note: $PID is an automatic variable in Powershell (the host process ID), so use a different name
$PriID = $PrimaryID.LS_PrimaryId
$SecID = $SecondaryID.LS_SecondaryId

Once this is done you can use them in the script to run against the monitor server:


$script = "LogShip_Monitor.sql" 
Invoke-Sqlcmd -InputFile $ScriptLocation$script -Variable Database=$DatabaseName, etc etc etc, PrimaryID=$PriID, SecondaryID=$SecID -ServerInstance $MonitorServer

And there you have it, nice n simple! All you then have to do is wrap it in a foreach loop over the databases you want to set up and you have a nice and simple automated log shipping build script.
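
As a rough sketch of that wrapper (illustrative only; $Databases and the server variables are assumed to be set up elsewhere, and the extra -Variable parameters are omitted as above):

#Illustrative sketch: loop the databases to be log shipped
foreach ($DatabaseName in $Databases)
{
	$script = "LogShip_Primary.sql"
	$PrimaryID = Invoke-Sqlcmd -InputFile $ScriptLocation$script -Variable "Database=$DatabaseName" -ServerInstance $PrimaryServer

	$script = "LogShip_Secondary.sql"
	$SecondaryID = Invoke-Sqlcmd -InputFile $ScriptLocation$script -Variable "Database=$DatabaseName" -ServerInstance $SecondaryServer

	$PriID = $PrimaryID.LS_PrimaryId
	$SecID = $SecondaryID.LS_SecondaryId

	$script = "LogShip_Monitor.sql"
	Invoke-Sqlcmd -InputFile $ScriptLocation$script -Variable "Database=$DatabaseName", "PrimaryID=$PriID", "SecondaryID=$SecID" -ServerInstance $MonitorServer
}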

Obviously I’ve omitted a lot of the setup / checking of scripts etc from this post as I don’t want to be doing all the work for you!

Enjoy 🙂

So then, what’s the definition of an object……..? — January 25, 2012

Not blogged for a while due to client and project commitments, but something which has surprised me when speaking with colleagues both past and present is that when I mention the built-in function OBJECT_DEFINITION, the majority of DBAs haven’t heard of it, never mind used it. So I felt it necessary to dust off the blog-typing fingers and see if I can enlighten 🙂

So, I thought it would be a good idea to show how it can be used by giving real-world examples.

Firstly, a short definition (no pun intended) from BOL (http://msdn.microsoft.com/en-us/library/ms176090.aspx) as to what exactly this function does:

Returns the Transact-SQL source text of the definition of a specified object.

It’s as simple as that!

Pass in the Object_ID, which it expects to be in the current database context, and it spits out the text. I’ll show you a couple of examples of how it works in comparison to how I’ve seen the same thing done using sp_helptext as well as some of the other system tables.

I’ll not beat around the bush; I’ll get straight into a few examples and old skool alternatives, as there’s not really much more I can say about the function itself:

Example 1 – OBJECT_DEFINITION

SELECT OBJECT_DEFINITION(OBJECT_ID('usp_StoredProcedureName'))


Example 2 – sp_helptext

EXEC sp_helptext 'usp_StoredProcedureName'


Example 3 – Using system tables to search (this is a common way I’ve seen this done)

SELECT  o.[name]
	  , o.type_desc
	  , sc.[text]
FROM  sys.objects o
INNER JOIN syscomments sc ON o.[object_id] = sc.id
WHERE o.type_desc = 'SQL_STORED_PROCEDURE'
	  AND o.[name]  = 'usp_StoredProcedureName'


Example 4 – OBJECT_DEFINITION for multiple objects

SELECT [object_ID], [Name], OBJECT_DEFINITION([object_ID]) AS ProcText
FROM sys.procedures


Example 5 – OBJECT_DEFINITION for multiple with filtering

SELECT [object_ID], [Name],  OBJECT_DEFINITION([object_ID]) AS ProcText
FROM sys.procedures
WHERE OBJECT_DEFINITION([object_ID]) LIKE '%CATCH%'


Example 6 – OBJECT_DEFINITION to Script out Procedures

SET NOCOUNT ON;

DECLARE @strSQL NVARCHAR(MAX)

SET @strSQL = ''
SELECT @strSQL += OBJECT_DEFINITION([object_ID])+CHAR(10)+'GO'+CHAR(10)
FROM sys.procedures

SELECT @strSQL


Now, this can be used for all programmability objects within SQL Server, not just procedures, so the same works for views, functions, triggers etc (see the sketch after the list below).

Again from BOL here is a full list:

C = Check constraint

D = Default (constraint or stand-alone)

P = SQL stored procedure

FN = SQL scalar function

R = Rule

RF = Replication filter procedure

TR = SQL trigger (schema-scoped DML trigger, or DDL trigger at either the database or server scope)

IF = SQL inline table-valued function

TF = SQL table-valued function

V = View
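
As a quick sketch of the same pattern against another object type (illustrative only), this pulls the definition of every trigger via sys.triggers:

--Same approach as Example 6, but listing trigger definitions
SELECT [object_id], [name], OBJECT_DEFINITION([object_id]) AS TriggerText
FROM sys.triggers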

So there you have it, short n snappy blog today and I really hope that it helps give people a new insight into how to get object text.

Powershell and Dynamic SSIS (or any for that matter) Config files — December 5, 2011

As I imagine that the majority of people who are reading this will have some level of SSIS knowledge, I’ll not go into explanations about package configurations in SSIS and its various methods but rather jump straight to it.

The majority of environments I’ve worked in where SSIS packages are utilised tend to sway towards the XML config package configuration method. As many of you are aware, in multi-tier SQL environments (ie. Integration, QA, UAT etc etc) this can be a pain when deploying the packages to the relevant environments, because at some stage you have to reconfigure the XML configuration file to hold the correct parameters to pass into the package for each environment. This can become an even worse scenario when you have tens if not hundreds of SSIS packages (and corresponding dtsConfig’s) and are upgrading your infrastructure with new servers (and/or instance names) as well as drive configurations.

If you don’t have the time to rework the SSIS packages to use a SQL table to hold the configuration parameters (depending on your process this could take a while to get through development, testing etc), which would make them easy to script, deploy and update, then here’s a simple trick using Powershell (I’m still at a very basic Powershell level so bear with me!!) that you can use for your existing dtsConfig files. The sample I’ll be using is for a dtsConfig used in a Restore package.

Unfortunately you’re still going to have to do some initial amending of one config file here :(.

Firstly, let’s amend the relevant parameter values to have a standard name for each. As a (snippet) example:

<Configuration ConfiguredType="Property" Path="\Package.Connections[master - destination].Properties[ServerName]" ValueType="String">
<ConfiguredValue>*SQLInstance*</ConfiguredValue>
</Configuration>
<Configuration ConfiguredType="Property" Path="\Package.Variables[User::DataFilesDir].Properties[Value]" ValueType="String">
<ConfiguredValue>*DataFilesDir*</ConfiguredValue>
</Configuration>

I’ve used a * to prefix and suffix the parameter so that you don’t amend anything that may have a similar name.

By doing this, you can run a bit of Powershell to update the relevant element parameter values for each instance using the code below. NOTE: I’ve excluded the setting of the parameters etc as I use this as a module and don’t want to waste space:

#Create a copy of the dtsConfig by piping the data from the static file (with its *'d parameters) to the new file to be used
(Get-Content $RestoreDatabasesDynamicConfig) |
Set-Content $RestoreDatabasesDynamicConfig_New

#Amend the dtsConfig's and create new files to use
$restoreArray = ("SQLInstance", "DataFilesDir", "LogFilesDir", "SSISRestoreLocationDBs")

#Loop through the array to replace each parameter listed
foreach($Arr in $restoreArray){
	$Replace = "\*"+$Arr+"\*" #This sets the parameter with the * prefix and suffix
	$WithThis = "$"+$Arr #What parameter name passed in to replace the text with, ie. $SQLInstance, $DataFilesDir

	switch ($Arr)
	{
		# Use $ExecutionContext.InvokeCommand.ExpandString($WithThis) to evaluate the string (ie $SQLInstance)
		"DataFilesDir" {$WithThis = $SQLInstallLocation+$ExecutionContext.InvokeCommand.ExpandString($WithThis)}
		"LogFilesDir" {$WithThis = $SQLInstallLocation+$ExecutionContext.InvokeCommand.ExpandString($WithThis)}

		#I've left the above in as an example of how to set the files to a folder location passed in as a Parameter ($SQLInstallLocation)
		default {$WithThis = $ExecutionContext.InvokeCommand.ExpandString($WithThis)}
	}

	#Now create the new dtsConfig file with updated Parameter information
   (Get-Content $RestoreDatabasesDynamicConfig_New) |
	Foreach-Object {$_ -replace "$Replace", $WithThis} |
	Set-Content $RestoreDatabasesDynamicConfig_New

}

In short, this script takes the amended dynamic file, with all the parameters prefixed and suffixed with *’s, and creates a new dtsConfig file. It then builds an array of parameters to work through and do the replacement; the values of these are (or indeed can be…) passed through to the function/module. I’ve put a switch in there to check for particular items in the array, as in my example I wanted to append a folder location to the value passed in. You don’t necessarily have to do this, but I’ve left it in to show it can be done.

Another example of using this is for a silent SQL Server 2008 install. Once you have a ConfigurationFile.ini, you can follow the same process: put a standard tag in the file and use Powershell to find and replace it with a parameter value. It works an absolute treat when installing many instances; see the sketch below.
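
As a rough sketch (the file paths and the *SQLInstance* tag are illustrative), it’s the same Get-Content / -replace / Set-Content pattern:

#Illustrative: swap a standard tag in a silent install ConfigurationFile.ini
(Get-Content "C:\Install\ConfigurationFile.ini") |
	Foreach-Object {$_ -replace "\*SQLInstance\*", $SQLInstance} |
	Set-Content "C:\Install\ConfigurationFile_Updated.ini"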

I’m sure there’ll be someone far more clever than me who can somehow query the file for the element and replace any value without standardising the parameter values, but I’m no expert with Powershell and I’m learning every day, and I hope some others out there can get some use out of this…..and yes, I also realise that you can do a “Find and Replace in Files” with Notepad++, but this technique is great for automating!

Powershell to Script out SQL Server Objects using a CMS list — September 19, 2011

I’ve recently decided I’ve got to get with the times and have started dabbling with Powershell. Up until now I’ve not really seen any use (as a DBA) for it from a SQL Server perspective, as I’ve been able to achieve pretty much everything in the past using TSQL, VBScript and/or SMO etc, but the client stated they wanted Powershell to script out all the objects from all instances in their large estate.

I’ll not bore you too much with what it’s doing, but in a nutshell it loops a list of servers from a Central Management Server (CMS), though this can be amended to loop a text file or indeed a text list. From this, the script will create a folder for each server, loop all databases (including system databases, though this can be turned off) creating a folder for each, and generate a .sql file for procedures, views and functions, each within their own folder within the database folder. You can add/remove the ability to script other objects such as tables, indexes etc, but it wasn’t a requirement at the time so I’ve not included it.

I’m not going to pretend that this script is top notch, but for a beginner I’m quite proud of it :).
There are a number of similar scripts on the web, but none of them did quite what I was after; hopefully it’ll be of some use to some.

Will I be using Powershell again? Definitely! I know I could have knocked this up quicker using other technologies, but where’s the fun in that? 🙂

And here it is. There are 2 scripts: one which is a function to get the objects, called (funnily enough) getSQLObjects (fn_getSQLObjects.ps1), and the other a script to get the server list and call the function (Run_getSQLObjects.ps1). These could have been combined, with a parameter used to dictate whether to use a text file, CMS or other method of listing the SQL Servers, but I decided to keep them separate.

One other thing to note is that I used SQLPS to run this, but the script could be tweaked to use a relevant connection to the CMS to get the server list, as sketched below.
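
For instance (illustrative only; the server name and file path are made up), the server list could come from an explicit connection to the CMS instance, or from a plain text file:

#Illustrative alternatives to running inside SQLPS:
#point Invoke-Sqlcmd at the CMS instance explicitly...
$serverList = Invoke-Sqlcmd -ServerInstance "MyCMSServer" -Query "SELECT svr.name FROM msdb.dbo.sysmanagement_shared_registered_servers_internal svr"

#...or read a plain text file with one server name per line
$serverList = Get-Content "C:\temp\serverlist.txt"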

fn_getSQLObjects.ps1:


##########################################################
# PowerShell Script of DB Objects
#
# Uses PS and SMO to script the drop and create a type of
# SQL Server object (views, tables, etc.) to individual 
# folders and files
#
# Chris Taylor		07/09/2011
##########################################################

Function getSQLObjects ($SQLInstance, $DB, $ObjType, $OutputFile)
{
	#List of schemas to exclude
		$ExcludeSchemas = @("sys","Information_Schema") 
	  

	#Create smo objects##########################################
		[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
	   
	#SQL Instance & DB object
		$server = new-object ('Microsoft.SqlServer.Management.Smo.Server') $SQLInstance
		$dbs=$server.Databases
	   
	#Set ScriptingOption object option
		$ScriptingOption = new-object ('Microsoft.SqlServer.Management.Smo.ScriptingOptions')        
		
		$ScriptingOption.SchemaQualify = 1
		$ScriptingOption.AllowSystemObjects = 0
		
	#Script Drop Objects###########################################
		$ScriptingOption.ScriptDrops = 1         
		$ScriptingOption.IncludeIfNotExists = 1
				
	#Create Directory for Database and ObjType
		$chkFolder = $OutputFile + $DB + "\"
		if (!(Test-Path -path $chkFolder))
		{
			New-Item ($chkFolder) -type directory
		}	

#write-host $chkFolder	
	#Create directory for object type within database folder
		$chkFolder += $ObjType + "\"
		if (!(Test-Path -path $chkFolder))
		{
			#write-host $chkFolder
			New-Item ($chkFolder) -type directory
		}	
	
	#Generate script for all objects##################################
		foreach ($objects in $dbs[$DB].$ObjType)
		{
			If ($ExcludeSchemas -notcontains $objects.Schema )  
			{				
				$filename = $chkFolder + $($objects.Schema) + "." + $($objects.name) + ".sql"
				$objects.Script($ScriptingOption) + "`r GO `r " | out-File $filename 
			}
		}

	#Script Create Objects#########################################
    #Append this to the file generated above with the Drop - possibly 
    #a better way to do it but works for me
		$ScriptingOption.ScriptDrops = 0         
		$ScriptingOption.IncludeIfNotExists = 0
		
		foreach ($objects in $dbs[$DB].$ObjType)
		{
			If ($ExcludeSchemas -notcontains $objects.Schema )
			{
				$filename = $chkFolder + $($objects.Schema) + "." + $($objects.name) + ".sql"
				$objects.Script($ScriptingOption) + "`r GO `r " | out-File $filename -Append
			}
		}
}		

Run_getSQLObjects.ps1:

##########################################################
# PowerShell Script of DB Objects
#
# Loop CMS passing in database list from a particular 
# server
#
# Chris Taylor		07/09/2011
##########################################################


#Loop the CMS Server to list of Servers
foreach ($sqlserver in invoke-sqlcmd -query "SELECT svr.name
											 FROM   msdb.dbo.sysmanagement_shared_registered_servers_internal svr
											 INNER JOIN msdb.dbo.sysmanagement_shared_server_groups_internal grp
											             ON svr.server_group_id = grp.server_group_id")
{
	$ExcludeServers = @("ALL","Servername")
	If ($ExcludeServers -notcontains $sqlserver.name )  
	{	
		write-host $sqlserver.name
		$SQLInstance = $sqlserver.name 
		$OutputFile = "C:\temp\"
	
		#Create Folder for Server, note that this will create a sub folder for any sql instances on that server
		#For example, servername\instance will have a folder for servername with a subfolder for instance
		$OutputFile += $SQLInstance + "\"
		if (!(Test-Path -path $OutputFile))
		{
			New-Item ($OutputFile) -type directory
		}	
		
		#Create smo objects##########################################
			[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
		   
			#SQL Instance & DB object
			$server = new-object ('Microsoft.SqlServer.Management.Smo.Server') $SQLInstance
			$dbs=$server.Databases
			
		#Set what you want to script out
		$Objects = @("StoredProcedures","Views","UserDefinedFunctions")
		
		#Loop DBs
			foreach ($db in $server.Databases)
			{
				#write-host $db.name

				#Loop the object types
				foreach ($objType in $Objects)
				{
					#write-host $objType + ' ' + $db.name + ' ' + $objType.name + ' ' + $OutputFile
					getSQLObjects $SQLInstance $db.name $objType $OutputFile	
				}	
			}
	}#Don't do anything if exceptions, could use a BREAK in the loop instead
}

Being an Exceptional DBA Award 2011 Finalist — July 25, 2011

On Wednesday 13th July at 16:06 (GMT) I got what could only be described as the biggest shock of my 31 years on this planet – an email from the Exceptional DBA Awards 2011 saying that I’d made it as one of the final five! To say it’s an honour is an understatement, especially as this award is open worldwide and looking at some of the previous finalists I knew the level of competition is always very high.

Being nominated by one of my peers was unexpected and to be honest I wasn’t going to go through the process to complete the application but I’m so glad I did. More importantly (to me), is the fact that in order to be a finalist you need to be chosen by the expert judging panel who are all highly respected members of the SQL community, and to actually get through their vetting is a phenomenal achievement which I thank them for greatly.

So, what’s next? Judging by the high calibre of the 4 other finalists, winning the award will be virtually impossible. Being (as far as I’m aware) the only UK-based finalist may make things that little bit tougher for me, but I’d love to be the first UK winner of the award and ultimately make the trip to Seattle in October. It’s a long way to travel for what my wife would call a Geek Conference; to be honest though, this is a once in a lifetime opportunity, and SQLPass is something I’ve always wanted to attend, but due to client commitments I’ve never been able to make the jump over the “pond”.

Following the words of 2008’s winner Dan McClain, “It brings another level of ‘wow’ to my resume, like having a special certification”, that is exactly what being a finalist has done for me. Even if I don’t win, it’s still an outstanding achievement and something I’m very proud of, and will be for a very long time.
So, what do you have to do to make me the first UK-based winner of the Exceptional DBA Awards? Simply click this link and “Vote for Chris>>” – it’s that simple. No registration or forms to fill out, and remember, every vote counts!! If I win and make it to SQLPass, I’d appreciate everyone giving me a shout and having a few beers. For those of you UK-based, I’ll no doubt see you at the next SQL UG in Manchester and/or Leeds.

Good luck to Jeff, Colin, Scott and Tim and hopefully see everyone in October.


NetApp Dynamic Backup Script — June 9, 2011

For those of you who’ve worked with NetApp and SnapManager for SQL Server, you will know you can generate a scheduled backup as a SQL Agent job. Simple: click through the GUI, selecting the databases you want to back up, the schedule etc etc, and a job is created with a command line script within it similar to that below:

"C:\Program Files\NetApp\SnapManager for SQL Server\SmsqlJobLauncher.exe" new-backup
–svr 'ServerToBackupName'
-d 'ServerToBackupName', '4', 'DBName1', 'DBName2', 'DBName3', 'DBName4'
-ver –verInst 'ServerToVerifyOnName' -mp
–mpdir 'C:\Program Files\NetApp\SnapManager for SQL Server\SnapMgrMountPoint'
-RetainBackups 7 -lb -bksif -RetainSnapofSnapInfo 0 -trlog –mgmt standard

This script indicates the server on which the databases reside for backing up, and the number of databases to be backed up, as well as the list of database names you selected earlier in the GUI.

Now, this looks all relatively straightforward and makes perfect sense, up until the point when you either create or drop a database on the server. The script does not pick up that this has happened: it will fail to snap any new databases and will no doubt fail trying to snap a database that no longer exists. So, every time you do this administration work you have the additional step of amending the job to match the current list of databases, as it cannot update itself – how crazy is that!?!

I’m not sure about you, but I don’t like giving myself additional work to do, so I set about rectifying this issue by creating a script that dynamically populates the database list based on what is in sys.databases. This means you can create the job once and not have to worry about it again, unless you specifically need to exclude a certain database; the script can be modified to include or exclude certain databases, i.e. system databases.

The script itself is straightforward: it assigns the list of databases to a variable and builds up a command string to be run via xp_cmdshell. Note: there are certain security risks associated with enabling the xp_cmdshell feature, so please make sure you’ve read and understood them before proceeding.
I’ve added comments to the script where necessary:

SET NOCOUNT ON;

-- Amend variables where necessary depending on how many DBs you have
-- Remembering the maximum recommendation for DBs on a LUN (from NetApp) is 35
DECLARE 	@strSQL		        VARCHAR(2000), 
		@strDBNames		VARCHAR(1000),
		@intDBCnt		VARCHAR(5),
		@strVerifServer 	VARCHAR(50),
		@intRetainBkups 	INT

SELECT		@strSQL		        = '',
		@strDBNames		= '',
		@strVerifServer 	= 'VerificationServerName',
		@intRetainBkups 	= 8
		
--Get DB Names
SELECT @strDBNames = @strDBNames +''''+ NAME +''', '
FROM sys.databases
--WHERE database_id > 4

--Get DB count
SELECT @intDBCnt = CONVERT(VARCHAR(5),COUNT(*)) 
FROM sys.databases
--WHERE database_id > 4

--remove trailing comma, probably a fancier way but it's quick and works
SET @strDBNames = LEFT(@strDBNames,LEN(@strDBNames)-1) 

--Make sure this string is all on one line!!!!
--Multi-line for readability
SET @strSQL = @strSQL + '
"C:\Program Files\NetApp\SnapManager for SQL Server\SmsqlJobLauncher.exe" new-backup  
-svr '''+@@SERVERNAME+'''
-d  '''+@@SERVERNAME+''', '''+@intDBCnt+''', '+@strDBNames+'
-ver  -verInst '''+@strVerifServer+''' -mp  
-mpdir ''C:\Program Files\NetApp\SnapManager for SQL Server\SnapMgrMountPoint''
-RetainBackups  '+CONVERT(VARCHAR(5),@intRetainBkups)+' -lb -bksif -RetainSnapofSnapInfo 0 -trlog  -mgmt standard 
'

SELECT @strSQL --Output the text being ran, can comment out
EXEC xp_cmdshell @strSQL

There we have it. A NetApp snap backup script which will continue to work even when you go crazy and decide to add/drop as many databases as you can.

Hopefully there are other out there that can benefit from this script 🙂

Feel free to leave constructive criticism or even a positive comment, but please don’t comment on things like “why have you used xp_cmdshell?” or “why did you do it so that you have to run a separate SET statement to remove trailing commas?”. Everyone has their way and this is invariably mine.

UPDATE (30th Nov 2011):
The latest versions of SnapManager for SQL have addressed this issue: if you now select all DBs to back up, it does not output all the DB names into the job script as per the above. You can still use the script above to dynamically back up a selected set of databases by amending the SELECT from sys.databases, so it’s not completely redundant!!!
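
For example (illustrative only, with made-up database names), the filter in the script could become an explicit list:

--Example: only snap a specific set of databases
SELECT @strDBNames = @strDBNames +''''+ name +''', '
FROM sys.databases
WHERE name IN ('DBName1','DBName2','DBName3')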