SQLGeordie's Blog

Helping the SQL Server community… where I can!

Administering Microsoft SQL Server 2012 Databases Beta (70-462 / 71-462) – My Thoughts… — April 4, 2012


On Monday I took my first Microsoft SQL Server exam since my SQL Server 2000 exams many moons ago, and I can't believe the quality of the questions hasn't changed one bit! Don't get me wrong, it wasn't all plain sailing, but what made it difficult for me was the vagueness of the questions and, in some cases, blatant mistakes. I understand the SQL Server 2012 exams are still in beta, but some of these errors are not simple spelling mistakes – although there were in fact quite a few typos in my exam.

As an example (without giving the game away), one question referred to SQL Server 2012 and SQL Server 2000, yet the answers did not mention SQL Server 2000 at all. I re-read the question over and over to see if I was missing something, but I'm confident it was a mistake in the question.

The exam itself consisted of 56 questions and I felt the allotted time was sufficient. What I didn't find sufficient was the time allocated for leaving comments, and from speaking to others in the field they've felt the same. During the exam I wrote down notes on around 25 of the questions in preparation for leaving comments at the end, but you're only given 10 (or was it 15?) minutes to actually enter the comments into the system, and with the machine (or possibly the application) being as slow as it was, I had to type quicker than I've ever typed in my life :).

I didn't get through them all, so I picked out the ones I felt most needed highlighting and got those done first, managing about 12 before the time ran out. I was a little disappointed with this; in my view that time should be sufficient for examinees to relay their comments fully, so Microsoft can take the points and mistakes on board and rectify them before they launch the live exams in June.

I was also a little surprised that I wasn't given my score there and then (I have to wait until the exams actually go live). I'm not quite sure why Microsoft have chosen to do it that way, but I'm sure they have their reasons. I'm guessing it's so people can't relay the answers to others who are taking the exam, but no reason was given so I can only speculate.

So, all in all, a good test of your administrative knowledge, but in my opinion there's a lot of work to be done to rectify the issues before the exams go live. It would be interesting to hear other people's thoughts on any of the 2012 exams they've taken so far…

How to output from invoke-sqlcmd to Powershell variable — February 3, 2012


Sorry for another PowerShell post, but I've been doing a lot of it recently and coming up with (what I think are) a few nifty tricks.

One of the issues I encountered recently was with Kerberos delegation whilst trying to automate Log Shipping. I was trying to use an OPENROWSET query against the Primary and Secondary servers to obtain the primary_id and secondary_id to pass to the script to be run on the monitor server. However, seeing as the environment was not set up for Kerberos, I encountered the "double-hop" issue.

Enabling Kerberos delegation for the service account would have been too high a risk without thorough testing, so it wasn't an option in this instance. Instead, I decided to look into using Invoke-Sqlcmd against each of the servers to get the IDs required and pass them to the monitor script.

So how did I go about doing this, you ask? Well, it's actually really simple. After a bit of googling I came across a blog post by Allen White which gave me a starting block.

Firstly, you have to amend your TSQL script to SELECT the parameter you want to output and use within the rest of the script, something like this:

TSQL snippet to be ran against the Primary Server:

--Cut down version of the script for readability
EXEC @SP_Add_RetCode = master.dbo.sp_add_log_shipping_primary_database 
		@database = N'$(Database)' 
		...
		,@primary_id = @LS_PrimaryId OUTPUT --This is what we want
		,@overwrite = 1 
		,@ignoreremotemonitor = 1 

--Need to output this in order for powershell to take it and use it in the monitor script
SELECT @LS_PrimaryId as LS_PrimaryId 

Do the same for the script to run on the secondary server but obviously for the secondary_id 🙂

So, now you’ve setup the TSQL side of things, you need to then call these from Powershell and assign the output parameter to a Powershell variable like so:


$script = "LogShip_Primary.sql"
$PrimaryID = Invoke-Sqlcmd -InputFile $ScriptLocation$script -Variable Database=$DatabaseName, etc etc etc -ServerInstance $PrimaryServer 

$script = "LogShip_Secondary.sql" 
$SecondaryID = Invoke-Sqlcmd -InputFile $ScriptLocation$script -Variable Database=$DatabaseName, etc etc etc -ServerInstance $SecondaryServer

So, relatively simple – you're basically assigning the output to a PowerShell variable. To keep things tidy, re-assign it to another variable; something to note is that the output is actually a DataTable object, so make sure you reference the column by the alias you used in your last TSQL statement.


#Note: $PID is a read-only automatic variable in PowerShell, so use different names here
$PriID = $PrimaryID.LS_PrimaryId
$SecID = $SecondaryID.LS_SecondaryId

Once this is done you can use these in the script you run against the monitor server:


$script = "LogShip_Monitor.sql" 
Invoke-Sqlcmd -InputFile $ScriptLocation$script -Variable Database=$DatabaseName, etc etc etc, PrimaryID=$PriID, SecondaryID=$SecID -ServerInstance $MonitorServer

And there you have it, nice and simple! All you then have to do is wrap it in a foreach loop over the databases you want to set up and you have a simple automated log shipping build script – see the sketch below.
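As a rough illustration only (this isn't the full script – it assumes $ScriptLocation, $PrimaryServer, $SecondaryServer, $MonitorServer and any extra -Variable values are already set up as in the snippets above, and the database names are placeholders), the wrapper might look something like this:

#A minimal sketch of the foreach wrapper - placeholder database names
$Databases = @("DB1", "DB2", "DB3")

foreach ($DatabaseName in $Databases)
{
	$script = "LogShip_Primary.sql"
	$PrimaryID = Invoke-Sqlcmd -InputFile $ScriptLocation$script -Variable "Database=$DatabaseName" -ServerInstance $PrimaryServer

	$script = "LogShip_Secondary.sql"
	$SecondaryID = Invoke-Sqlcmd -InputFile $ScriptLocation$script -Variable "Database=$DatabaseName" -ServerInstance $SecondaryServer

	$script = "LogShip_Monitor.sql"
	Invoke-Sqlcmd -InputFile $ScriptLocation$script -Variable "Database=$DatabaseName", "PrimaryID=$($PrimaryID.LS_PrimaryId)", "SecondaryID=$($SecondaryID.LS_SecondaryId)" -ServerInstance $MonitorServer
}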

Obviously I’ve omitted a lot of the setup / checking of scripts etc from this post as I don’t want to be doing all the work for you!

Enjoy 🙂

So then, what’s the definition of an object……..? — January 25, 2012


I've not blogged for a while due to client and project commitments, but something that has surprised me when speaking with colleagues both past and present is that when I mention the built-in function OBJECT_DEFINITION, the majority of DBAs haven't heard of it, never mind used it. So I felt it necessary to dust off the blog-typing fingers and see if I can enlighten 🙂

So, I thought it would be a good idea to show a few people how it can be used by giving real-world examples.

Firstly, a short definition (no pun intended) from BOL (http://msdn.microsoft.com/en-us/library/ms176090.aspx) as to what exactly this function does:

Returns the Transact-SQL source text of the definition of a specified object.

It's as simple as that!

Pass in the object_id, which it expects to be in the current database context, and it spits out the text. I'll show you a couple of examples of how it works in comparison to how I've seen the same thing done using sp_helptext as well as some of the other system tables.

I'll not beat around the bush and will get straight into a few examples and old-school alternatives, as there's not really much more I can say about the function itself:

Example 1 – OBJECT_DEFINITION

SELECT OBJECT_DEFINITION(OBJECT_ID('usp_StoredProcedureName'))


Example 2 – sp_helptext

EXEC sp_helptext 'usp_StoredProcedureName'


Example 3 – Using system tables to search (this is a common way I’ve seen this done)

SELECT  o.[name]
	  , o.type_desc
	  , sc.[text]
FROM  sys.objects o
INNER JOIN syscomments sc ON o.[object_id] = sc.id
WHERE o.type_desc = 'SQL_STORED_PROCEDURE'
	  AND o.[name]  = 'usp_StoredProcedureName'


Example 4 – OBJECT_DEFINITION for multiple objects

SELECT [object_ID], [Name], OBJECT_DEFINITION([object_ID]) AS ProcText
FROM sys.procedures


Example 5 – OBJECT_DEFINITION for multiple with filtering

SELECT [object_ID], [Name],  OBJECT_DEFINITION([object_ID]) AS ProcText
FROM sys.procedures
WHERE OBJECT_DEFINITION([object_ID]) LIKE '%CATCH%'


Example 6 – OBJECT_DEFINITION to Script out Procedures

SET NOCOUNT ON;

DECLARE @strSQL NVARCHAR(MAX)

SET @strSQL = ''
SELECT @strSQL += OBJECT_DEFINITION([object_ID])+CHAR(10)+'GO'+CHAR(10)
FROM sys.procedures

SELECT @strSQL


Now, this can be used for all programmability objects within SQL Server, not just procedures, so the same works for views, functions, triggers etc. – there's an example after the list of type codes below.

Again from BOL here is a full list:

C = Check constraint

D = Default (constraint or stand-alone)

P = SQL stored procedure

FN = SQL scalar function

R = Rule

RF = Replication filter procedure

TR = SQL trigger (schema-scoped DML trigger, or DDL trigger at either the database or server scope)

IF = SQL inline table-valued function

TF = SQL table-valued function

V = View
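
As a quick illustration of that (the choice of type codes here is just an example), the same approach applied to triggers and views via sys.objects:

--Script out the text of all triggers and views in the current database
SELECT  [object_id]
       ,[name]
       ,type_desc
       ,OBJECT_DEFINITION([object_id]) AS ObjectText
FROM    sys.objects
WHERE   [type] IN ('TR', 'V')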

So there you have it – a short and snappy blog today, and I really hope it helps give people a new insight into how to get object text.

Powershell and Dynamic SSIS (or any for that matter) Config files — December 5, 2011


As I imagine that the majority of people who are reading this will have some level of SSIS knowledge, I’ll not go into explanations about package configurations in SSIS and its various methods but rather jump straight to it.

The majority of environments I've worked in where SSIS packages are utilised tend to go down the XML config package configuration route. As many of you are aware, in multi-tier SQL environments (i.e. Integration, QA, UAT etc.) this can be a pain when deploying the packages to the relevant environments, because at some stage you have to reconfigure the XML configuration file with the correct parameters to pass into the package for each environment. This becomes an even worse scenario when you have tens, if not hundreds, of SSIS packages (and corresponding dtsConfigs) and are upgrading your infrastructure with new servers (and/or instance names) as well as drive configurations.

If you don't have the time to rework the SSIS packages to hold the configuration parameters in a SQL table (which makes them easy to script, deploy and update, but depending on your process could take a while to get through development, testing etc.), here's a simple trick using PowerShell (I'm still at a very basic PowerShell level, so bear with me!) that you can use with your existing dtsConfig files. The sample I'll be using is a dtsConfig used in a Restore package.

Unfortunately you’re still going to have to do some initial amending of one config file here :(.

Firstly, let's amend the relevant parameter values to have a standard name for each; as a (snipped) example:

<Configuration ConfiguredType="Property" Path="\Package.Connections[master - destination].Properties[ServerName]" ValueType="String">
  <ConfiguredValue>*SQLInstance*</ConfiguredValue>
</Configuration>
<Configuration ConfiguredType="Property" Path="\Package.Variables[User::DataFilesDir].Properties[Value]" ValueType="String">
  <ConfiguredValue>*DataFilesDir*</ConfiguredValue>
</Configuration>

I’ve used a * to prefix and suffix the parameter so that you don’t amend anything that may have a similar name.

By doing this, you can run a bit of PowerShell to update the relevant element values for each instance using the code below. NOTE: I've excluded the setting of the parameters etc. as I use this as a module and don't want to waste space:

#Create a copy of the dtsConfig by piping the data from the static file (with *'d parameters) to the new file to be used
(Get-Content $RestoreDatabasesDynamicConfig) |
Set-Content $RestoreDatabasesDynamicConfig_New

#Amend the dtsConfig's and create new files to use
$restoreArray = ("SQLInstance", "DataFilesDir", "LogFilesDir", "SSISRestoreLocationDBs")

#Loop through the array to replace each parameter listed
foreach($Arr in $restoreArray){
	$Replace = "\*"+$Arr+"\*" #This sets the parameter with the * prefix and suffix
	$WithThis = "$"+$Arr #What parameter name passed in to replace the text with, ie. $SQLInstance, $DataFilesDir

	switch ($Arr)
	{
		# Use $ExecutionContext.InvokeCommand.ExpandString($WithThis) to evaluate the string (ie $SQLInstance)
		"DataFilesDir" {$WithThis = $SQLInstallLocation+$ExecutionContext.InvokeCommand.ExpandString($WithThis)}
		"LogFilesDir" {$WithThis = $SQLInstallLocation+$ExecutionContext.InvokeCommand.ExpandString($WithThis)}

		#I've left the above in as an example of how to set the files to a folder location passed in as a Parameter ($SQLInstallLocation)
		default {$WithThis = $ExecutionContext.InvokeCommand.ExpandString($WithThis)}
	}

	#Now create the new dtsConfig file with updated Parameter information
   (Get-Content $RestoreDatabasesDynamicConfig_New) |
	Foreach-Object {$_ -replace "$Replace", $WithThis} |
	Set-Content $RestoreDatabasesDynamicConfig_New

}

In short, this script takes the amended file with all the parameters wrapped in *'s and creates a new dtsConfig file from it. It then builds an array of parameters to work through and do the replacement; the values of these are (or indeed can be) passed through to the function/module. I've put a switch in there to check for particular items in the array, as in my example I wanted to append a folder location to the value passed in. You don't necessarily have to do this, but I've left it in to show it can be done.

Another example of using this is for a silent SQL Server 2008 install. Once you have a ConfigurationFile.ini you can follow the same process: put a standard tag in the file and use PowerShell to find and replace it with a parameter value – it works an absolute treat when installing many instances. A rough sketch of that is below.
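
Purely as an illustrative sketch (the file paths, tag names and values here are assumptions, not part of the original process), the same find-and-replace idea applied to an unattended install file might look like this:

#Hypothetical example - assumes a template ConfigurationFile.ini containing *INSTANCENAME* and *SQLINSTALLDIR* tags
$TemplateFile = "C:\Install\ConfigurationFile_Template.ini"
$NewFile      = "C:\Install\ConfigurationFile.ini"
$InstanceName = "SQL2008_01"
$InstallDir   = "E:\SQLData"

(Get-Content $TemplateFile) |
	Foreach-Object { $_ -replace "\*INSTANCENAME\*", $InstanceName -replace "\*SQLINSTALLDIR\*", $InstallDir } |
	Set-Content $NewFile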

I'm sure there'll be someone far more clever than me who can somehow query the file for each element and replace any value without standardising the parameter values, but I'm no expert with PowerShell and am learning every day, and I hope some others out there can get some use out of this. And yes, I also realise that you can do a "Find and Replace in Files" with Notepad++, but this technique is great for automating!

Powershell to Script out SQL Server Objects using a CMS list — September 19, 2011


I've recently decided I've got to get with the times and have started dabbling with PowerShell. Up until now I've not really seen any use for it (as a DBA) from a SQL Server perspective, as I've been able to achieve pretty much everything in the past using TSQL, VBScript and/or SMO etc., but the client stated they wanted PowerShell to script out all the objects from all instances in their large estate.

I'll not bore you too much with what it's doing, but in a nutshell it loops a list of servers from a Central Management Server (CMS), though this can be amended to loop through a text file or an in-script list instead. From this, the script creates a folder for each server, loops all databases (including system databases, but this can be turned off) creating a folder for each, and generates a .sql file for procedures, views and functions, each within their own folder under the database. You can add or remove the ability to script other objects such as tables, indexes etc., but it wasn't a requirement at the time so I've not included it.

I'm not going to pretend that this script is top notch, but for a beginner I'm quite proud of it :).
There are a number of similar scripts on the web, but none of them did quite what I was after; hopefully it'll be of some use to someone.

Will I be using PowerShell again? Definitely! I know I could have knocked this up quicker using other technologies, but where's the fun in that 🙂?

And here it is. There are two scripts: one is a function to get the objects, called (funnily enough) getSQLObjects (fn_getSQLObjects.ps1), and the other is a script to get the server list and call the function (Run_getSQLObjects.ps1). These could have been combined, with a parameter used to dictate whether to use a text file, CMS or another method of listing the SQL Servers, but I decided to keep them separate.

One other thing to note is that I used SQLPS to run this, so Invoke-Sqlcmd connects in the current context; the script could be tweaked to make an explicit connection to the CMS to get the server list – a sketch of that tweak is below.
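
For example (a minimal sketch only – the CMS server name is a placeholder), the foreach line in Run_getSQLObjects.ps1 could point Invoke-Sqlcmd explicitly at the CMS instance rather than relying on the SQLPS context:

#Hypothetical tweak - query the CMS instance explicitly for the server list
$CMSServer = "MyCMSServer\CMSInstance"
$ServerList = Invoke-Sqlcmd -ServerInstance $CMSServer -Database msdb -Query "SELECT svr.name
	FROM   msdb.dbo.sysmanagement_shared_registered_servers_internal svr
	INNER JOIN msdb.dbo.sysmanagement_shared_server_groups_internal grp
	           ON svr.server_group_id = grp.server_group_id"

foreach ($sqlserver in $ServerList)
{
	#...same body as in Run_getSQLObjects.ps1 below
}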

fn_getSQLObjects.ps1:


##########################################################
# PowerShell Script of DB Objects
#
# Uses PS and SMO to script the drop and create a type of
# SQL Server object (views, tables, etc.) to individual 
# folders and files
#
# Chris Taylor		07/09/2011
##########################################################

Function getSQLObjects ($SQLInstance, $DB, $ObjType, $OutputFile)
{
	#List of schemas to exclude
		$ExcludeSchemas = @("sys","Information_Schema") 
	  

	#Create smo objects##########################################
		[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
	   
	#SQL Instance & DB object
		$server = new-object ('Microsoft.SqlServer.Management.Smo.Server') $SQLInstance
		$dbs=$server.Databases
	   
	#Set ScriptingOption object option
		$ScriptingOption = new-object ('Microsoft.SqlServer.Management.Smo.ScriptingOptions')        
		
		$ScriptingOption.SchemaQualify = 1
		$ScriptingOption.AllowSystemObjects = 0
		
	#Script Drop Objects###########################################
		$ScriptingOption.ScriptDrops = 1         
		$ScriptingOption.IncludeIfNotExists = 1
				
	#Create Directory for Database and ObjType
		$chkFolder = $OutputFile + $DB + "\"
		if (!(Test-Path -path $chkFolder))
		{
			New-Item ($chkFolder) -type directory
		}	

#write-host $chkFolder	
	#Create directory for object type within database folder
		$chkFolder += $ObjType + "\"
		if (!(Test-Path -path $chkFolder))
		{
			#write-host $chkFolder
			New-Item ($chkFolder) -type directory
		}	
	
	#Generate script for all objects##################################
		foreach ($objects in $dbs[$DB].$ObjType)
		{
			If ($ExcludeSchemas -notcontains $objects.Schema )  
			{				
				$filename = $chkFolder + $($objects.Schema) + "." + $($objects.name) + ".sql"
				$objects.Script($ScriptingOption) + "`r GO `r " | out-File $filename 
			}
		}

	#Script Create Objects#########################################
    #Append this to the file generated above with the Drop - possibly 
    #a better way to do it but works for me
		$ScriptingOption.ScriptDrops = 0         
		$ScriptingOption.IncludeIfNotExists = 0
		
		foreach ($objects in $dbs[$DB].$ObjType)
		{
			If ($ExcludeSchemas -notcontains $objects.Schema )
			{
				$filename = $chkFolder + $($objects.Schema) + "." + $($objects.name) + ".sql"
				$objects.Script($ScriptingOption) + "`r GO `r " | out-File $filename -Append
			}
		}
}		

Run_getSQLObjects.ps1

##########################################################
# PowerShell Script of DB Objects
#
# Loop CMS passing in database list from a particular 
# server
#
# Chris Taylor		07/09/2011
##########################################################


#Loop the CMS Server to list of Servers
foreach ($sqlserver in invoke-sqlcmd -query "SELECT svr.name
											 FROM   msdb.dbo.sysmanagement_shared_registered_servers_internal svr
											 INNER JOIN msdb.dbo.sysmanagement_shared_server_groups_internal grp
											             ON svr.server_group_id = grp.server_group_id")
{
	$ExcludeServers = @("ALL","Servername")
	If ($ExcludeServers -notcontains $sqlserver.name )  
	{	
		write-host $sqlserver.name
		$SQLInstance = $sqlserver.name 
		$OutputFile = "C:\temp\"
	
		#Create Folder for Server, note that this will create a sub folder for any sql instances on that server
		#For example, servername\instance will have a folder for servername with a subfolder for instance
		$OutputFile += $SQLInstance + "\"
		if (!(Test-Path -path $OutputFile))
		{
			New-Item ($OutputFile) -type directory
		}	
		
		#Create smo objects##########################################
			[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
		   
			#SQL Instance & DB object
			$server = new-object ('Microsoft.SqlServer.Management.Smo.Server') $SQLInstance
			$dbs=$server.Databases
			
		#Set what you want to script out
		$Objects = @("StoredProcedures","Views","UserDefinedFunctions")
		
		#Loop DBs
			foreach ($db in $server.Databases)
			{
				#write-host $db.name

				#Loop the object types
				foreach ($objType in $Objects)
				{
					#write-host $objType + ' ' + $db.name + ' ' + $objType.name + ' ' + $OutputFile
					getSQLObjects $SQLInstance $db.name $objType $OutputFile	
				}	
			}
	}#Don't do anything if exceptions, could use a BREAK in the loop instead
}

Being an Exceptional DBA Award 2011 Finalist — July 25, 2011


On Wednesday 13th July at 16:06 (GMT) I got what could only be described as the biggest shock of my 31 years on this planet – an email from the Exceptional DBA Awards 2011 saying that I'd made it as one of the final five! To say it's an honour is an understatement, especially as this award is open worldwide and, looking at some of the previous finalists, I know the level of competition is always very high.

Being nominated by one of my peers was unexpected, and to be honest I wasn't going to go through the process of completing the application, but I'm so glad I did. More importantly (to me) is the fact that in order to be a finalist you need to be chosen by the expert judging panel, who are all highly respected members of the SQL community, and to actually get through their vetting is a phenomenal achievement which I thank them for greatly.

So, what's next? Judging by the high calibre of the four other finalists, winning the award will be virtually impossible. Being (as far as I'm aware) the only UK-based finalist may make things that little bit tougher for me, but I'd love to be the first UK winner of the award and ultimately make the trip to Seattle in October. It's a long way to travel for what my wife would call a Geek Conference; to be honest, though, this is a once-in-a-lifetime opportunity, and SQLPass is something I've always wanted to attend, but due to client commitments I've never been able to make the jump over the "pond".

Following the words of 2008's winner Dan McClain, "It brings another level of 'wow' to my resume, like having a special certification" – that is exactly what being a finalist has done for me. Even if I don't win, it's still an outstanding achievement and something I'm very proud of and will be for a very long time.
So, what do you have to do in order to make me the first UK-based winner of the Exceptional DBA Awards? Simply click this link and "Vote for Chris>>" – it's that simple. No registration or forms to fill out, and remember, every vote counts!! If I win and make it to SQLPass, I'd appreciate everyone giving me a shout and having a few beers. For those of you UK-based, I'll no doubt see you at the next SQL UG in Manchester and/or Leeds.

Good luck to Jeff, Colin, Scott and Tim and hopefully see everyone in October.

Exceptional DBA Awards 2011 - Vote

NetApp Dynamic Backup Script — June 9, 2011


For those of you who've worked with NetApp and SnapManager for SQL Server, you will know you can generate a scheduled backup as a SQL Agent job. Simple: click through the GUI, selecting the databases you want to back up, the schedule etc., and a job is created with a command-line script within it similar to that below:

"C:\Program Files\NetApp\SnapManager for SQL Server\SmsqlJobLauncher.exe" new-backup
–svr 'ServerToBackupName'
-d 'ServerToBackupName', '4', 'DBName1', 'DBName2', 'DBName3', 'DBName4'
-ver –verInst 'ServerToVerifyOnName' -mp
–mpdir 'C:\Program Files\NetApp\SnapManager for SQL Server\SnapMgrMountPoint'
-RetainBackups 7 -lb -bksif -RetainSnapofSnapInfo 0 -trlog –mgmt standard

This script indicates the server on which the databases to be backed up reside, the number of databases to be backed up, as well as the list of database names you selected earlier in the GUI.

Now, this all looks relatively straightforward and makes perfect sense up until the point when you either create or drop a database on the server. The script does not pick up that this has happened: it will fail to snap any new databases and will no doubt fail trying to snap a database that no longer exists. So, every time you do this administration work you have the additional step of amending the job to match the current list of databases, as it cannot update itself – how crazy is that!?

I'm not sure about you, but I don't like giving myself additional work to do, so I set about rectifying this issue by creating a script that dynamically populates the database list based on what is in sys.databases. This means you can create the job once and not have to worry about it again unless you specifically need to exclude a certain database, and it can be modified to include or exclude certain databases, e.g. the system databases.

The script itself is straightforward: it assigns the list of databases to a variable and builds up a command string to be run via xp_cmdshell. Note: there are certain security risks associated with enabling the xp_cmdshell feature, so please make sure you've read and understood these before proceeding.
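For reference (and only if you've accepted those risks), xp_cmdshell is enabled via sp_configure in the usual way – this isn't specific to this script:

--Enable xp_cmdshell (requires 'show advanced options' first)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
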
I’ve added comments to the script where necessary:

SET NOCOUNT ON;

-- Amend variables where necessary depending on how many DBs you have
-- Remembering the maximum recommendation for DBs on a LUN (from NetApp) is 35
DECLARE 	@strSQL		        VARCHAR(2000), 
		@strDBNames		VARCHAR(1000),
		@intDBCnt		VARCHAR(5),
		@strVerifServer 	VARCHAR(50),
		@intRetainBkups 	INT

SELECT		@strSQL		        = '',
		@strDBNames		= '',
		@strVerifServer 	= 'VerificationServerName',
		@intRetainBkups 	= 8
		
--Get DB Names
SELECT @strDBNames = @strDBNames +''''+ NAME +''', '
FROM sys.databases
--WHERE database_id > 4

--Get DB count
SELECT @intDBCnt = CONVERT(VARCHAR(5),COUNT(*)) 
FROM sys.databases
--WHERE database_id > 4

--remove trailing comma, probably a fancier way but its quick and works
SET @strDBNames = LEFT(@strDBNames,LEN(@strDBNames)-1) 

--Make sure this string is all on one line!!!!
--Multi-line for readability
SET @strSQL = @strSQL + '
"C:\Program Files\NetApp\SnapManager for SQL Server\SmsqlJobLauncher.exe" new-backup  
-svr '''+@@SERVERNAME+'''
-d  '''+@@SERVERNAME+''', '''+@intDBCnt+''', '+@strDBNames+'
-ver  -verInst '''+@strVerifServer+''' -mp  
-mpdir ''C:\Program Files\NetApp\SnapManager for SQL Server\SnapMgrMountPoint''
-RetainBackups  '+CONVERT(VARCHAR(5),@intRetainBkups)+' -lb -bksif -RetainSnapofSnapInfo 0 -trlog  -mgmt standard 
'

SELECT @strSQL --Output the text being ran, can comment out
EXEC xp_cmdshell @strSQL

There we have it. A NetApp snap backup script which will continue to work even when you go crazy and decide to add/drop as many databases as you can.

Hopefully there are others out there who can benefit from this script 🙂

Feel free to leave constructive criticism or even a positive comment, but please don't comment on things like "why have you used xp_cmdshell?" or "why did you do it so that you have to run a separate SET statement to remove trailing commas?" – everyone has their own way and this is invariably mine.

UPDATE (30th Nov 2011):
The latest versions of SnapManager for SQL have addressed this issue: if you now select all DBs to back up, it no longer outputs all the DB names into the job script as per the above. You can still use the script above to dynamically back up a selected set of databases by amending the SELECT from the sys.databases query, so it's not completely redundant – see the example below.
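
For instance (the database names here are only placeholders), the filter could be amended to pick up a chosen set of databases, or to skip the system databases; the same WHERE clause would also need applying to the @intDBCnt query so the count stays in step with the list:

--Get DB Names for a selected set of databases only (placeholder names)
SELECT @strDBNames = @strDBNames +''''+ NAME +''', '
FROM sys.databases
WHERE name IN ('DBName1', 'DBName2')   --or: WHERE database_id > 4 to exclude system DBs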

SnapManager Epic Fail – oh dear! — April 7, 2011


An ex-colleague and good friend of mine has recently been having trouble with SnapManager for SQL – has anyone had similar troubles? It would be interesting to see people's thoughts on this, as I share his concerns…

Bear in mind that this may very well be down to human (sysadmin) error and a need to RTFM, but their tech support neglected to mention this issue when it was discussed before the work was carried out! I would have thought something as big as this would have been mentioned by them during that call…

UPDATE: from some of the responses I've had on #sqlhelp, this issue is actually by design and not a bug. I still can't understand why a tool that claims to do this job fully automatically doesn't make some of the fundamental changes required when moving a database. Thanks to @GiulianoM and @BrentO for their words of wisdom and, in Brent's case (as per usual), sarcasm :o)

Why I don’t like NULLs and Nullable Columns — March 25, 2011


As I imagine (or hope), the majority of us adhere to some form of standards in the environment we work in – whether that's having to wear a shirt and tie, being in for 9am or, for those of us in a database or development environment, coding standards.

Everyone who has worked with me knows that one of my standards is NO nullable columns in any new development unless there is an extremely good, valid reason, and to date I've only ever had one developer convince me that their column should be nullable – and that was because the existing system design meant we had to have it without a complete re-write of that part of the system. I don't really want this blog to turn into a big debate as to whether I'm right or wrong on this; it's purely my view based on the many years I've been working with SQL Server and development teams.

There are many blogs out there where DBAs share the same or similar views, such as Thomas LaRock (Blog|Twitter), who talks about the harm NULLs cause when a person doesn't know they are there. What I'm going to focus on in this blog is where people know the NULL values are there and tailor their T-SQL around them – specifically, when the value is NULL and they don't want to show NULL on screen, so they replace it with another value.

I've knocked up a quick script to populate a table EMPLOYEE with dummy data for the purposes of this blog. Yes, I've purposely left MODIFIEDBY_ID and MODIFIEDBY_DATE as nullable, and no, I'm not saying this is the table design anyone should be using in their systems, especially later on.

CREATE TABLE EMPLOYEE
   (
      EMP_ID INTEGER IDENTITY(1000,1)
                     PRIMARY KEY
                     NOT NULL
     ,DATE_OF_BIRTH DATETIME NOT NULL
     ,SALARY DECIMAL(16,2) NOT NULL
     ,EMP_NAME VARCHAR(50) NOT NULL
     ,INSERTED_ID INT NOT NULL
     ,INSERTED_DATE DATETIME NOT NULL
     ,MODIFIEDBY_ID INT NULL
     ,MODIFIEDBY_DATE DATETIME NULL
   )
GO

SET NOCOUNT ON ;

DECLARE @Counter BIGINT ;
DECLARE @NumberOfRows BIGINT
SELECT  @counter = 1
       ,@NumberOfRows = 70000
WHILE ( @Counter < @NumberOfRows )
      BEGIN
            INSERT  INTO EMPLOYEE
                    (
                      DATE_OF_BIRTH
                    ,SALARY
                    ,EMP_NAME
                    ,INSERTED_ID
                    ,INSERTED_DATE
                    )
                    SELECT  CONVERT(VARCHAR(10) , getdate() - ( ( 18 * 365 ) + RAND() * ( 47 * 365 ) ) , 103)
                           ,CONVERT(DECIMAL(16,2) , ( 50000 + RAND() * 90000 ))
                           ,'Dummy Name' + CONVERT(VARCHAR(100) , @Counter)
                           ,CONVERT(INT , rand() * 10000)
                           ,CONVERT(VARCHAR(10) , getdate() - ( ( 18 * 365 ) + RAND() * ( 47 * 365 ) ) , 103)

            SET @Counter = @Counter + 1
      END ; 
     
SELECT  @counter = 1
       ,@NumberOfRows = 25000
WHILE ( @Counter < @NumberOfRows )
      BEGIN
            INSERT  INTO EMPLOYEE
                    (
                      DATE_OF_BIRTH
                    ,SALARY
                    ,EMP_NAME
                    ,INSERTED_ID
                    ,INSERTED_DATE
                    ,MODIFIEDBY_ID
                    ,MODIFIEDBY_DATE
                    )
                    SELECT  CONVERT(VARCHAR(10) , getdate() - ( ( 18 * 365 ) + RAND() * ( 47 * 365 ) ) , 103)
                           ,CONVERT(DECIMAL(16,2) , ( 50000 + RAND() * 90000 ))
                           ,'Dummy Name' + CONVERT(VARCHAR(100) , @Counter)
                           ,CONVERT(INT , rand() * 10000)
                           ,CONVERT(VARCHAR(10) , getdate() - ( ( 18 * 365 ) + RAND() * ( 47 * 365 ) ) , 103)
                           ,CONVERT(INT , rand() * 10000)
                           ,CONVERT(VARCHAR(10) , getdate() - ( ( 18 * 365 ) + RAND() * ( 47 * 365 ) ) , 103)

            SET @Counter = @Counter + 1
      END ;    

I also create a couple of non-clustered indexes that should be utilised by the sample queries below:

CREATE INDEX idx1 ON employee ( MODIFIEDBY_DATE , inserted_date ) INCLUDE ( emp_id ,salary )
CREATE INDEX idx2 ON employee ( MODIFIEDBY_DATE ) INCLUDE ( emp_id ,salary )

So, the table is there and the data populated, now for one of my biggest bugbears:

Query 1:

SELECT  emp_id
       ,salary
       ,isnull(MODIFIEDBY_DATE , inserted_date)
FROM    EMPLOYEE
WHERE   isnull(MODIFIEDBY_DATE , inserted_date) > dateadd(yy , -20 , getdate())

Arrrrgggghhhh!!!! Don't get me wrong, I do understand why tables and columns like this exist, but for a DBA it's a nightmare to tune, as any index you may put on these columns will ultimately end in a scan as opposed to a seek.

Here is the execution plan:

As a comparison here is the query and execution plan for the query without the ISNULL function:

Query 2:

SELECT  emp_id
       ,salary
       ,MODIFIEDBY_DATE
FROM    EMPLOYEE
WHERE   MODIFIEDBY_DATE > dateadd(yy , -20 , getdate())

Execution Plan:

And a comparison of the two:

SELECT  emp_id
       ,salary
       ,isnull(MODIFIEDBY_DATE , inserted_date)
FROM    EMPLOYEE
WHERE   isnull(MODIFIEDBY_DATE , inserted_date) > dateadd(yy , -20 , getdate())

SELECT  emp_id
       ,salary
       ,MODIFIEDBY_DATE
FROM    EMPLOYEE
WHERE   MODIFIEDBY_DATE > dateadd(yy , -20 , getdate())

That's 95% against 5%!! What a difference!

Now, for bugbear #2, I'll need to add a new column with new values. Again, I'm not saying you should or would ever do this, but for the purposes of the blog:

ALTER TABLE EMPLOYEE
ADD   PreviousSalary DECIMAL(16,2)

So I've now added a new column with all NULL values; however, I don't want my users seeing the word NULL on the front end. Easy – I'll replace it with a zero:

SELECT  emp_id
       ,salary
       ,isnull(MODIFIEDBY_DATE , inserted_date) AS LastModifiedDate
       ,isnull(PreviousSalary,0) AS PreviousSalary
FROM    EMPLOYEE
WHERE   isnull(MODIFIEDBY_DATE , inserted_date) > dateadd(yy , -20 , getdate())

From this you can see that the SQL optimiser will now ignore the index created and scan the clustered index:

Obviously I could amend idx1 to factor in this column, or create a new index:

CREATE INDEX idx3 ON employee ( MODIFIEDBY_DATE , inserted_date ) INCLUDE ( emp_id ,salary, PreviousSalary )

And sure enough it will choose that index, but again, it's a scan!

Ok, so you may well be thinking that this doesn't happen often or wouldn't cause too much of an issue on your system, which may very well be correct. However, what about a highly transactional financial system with hundreds of millions of rows? Yes, it does happen! What makes matters even worse is if you decide you want to search for (in this example) PreviousSalary > 10000: you then have to handle the NULL values and convert them to another value (typically 0), which begs the question – why is it not set to an arbitrary default value in the first place?
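
For what it's worth, a minimal sketch of that idea on this example table (purely an aside – the demo below carries on with the nullable column, and the constraint name is arbitrary) would be to add the column as NOT NULL with a default of 0 rather than leaving it nullable:

--Sketch: add the column NOT NULL with a default instead of allowing NULLs
--Existing rows are populated with the default value
ALTER TABLE EMPLOYEE
ADD PreviousSalary DECIMAL(16,2) NOT NULL
    CONSTRAINT DF_EMPLOYEE_PreviousSalary DEFAULT (0)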

I’ve updated a random selection of PreviousSalary records to now have non-NULL values and added a new index:

CREATE INDEX idx4 ON employee ( PreviousSalary ) INCLUDE ( emp_id ,salary )

Running the query to handle the NULLs will still produce an index scan:

SELECT  emp_id
       ,salary
       ,isnull(PreviousSalary,0)
FROM    EMPLOYEE
WHERE   isnull(PreviousSalary,0) > 10000   

If I now update all the NULL values to 0 (I could have used an arbitrary figure such as -1, -100 or even -99999 to indicate a NULL value) and amend the query above, we now get the execution plan:

SELECT  emp_id
       ,salary
       ,isnull(PreviousSalary,0)
FROM    EMPLOYEE
WHERE   isnull(PreviousSalary,0) > 10000   

SELECT  emp_id
       ,salary
       ,PreviousSalary
FROM    EMPLOYEE
WHERE   PreviousSalary > 10000

I think by now everyone will be getting the picture. So how do you rectify this? First and foremost, it's education. Speak with your developers, do "Lunch 'n' Learn" sessions – anything you can to express how much of an issue this can potentially be – and make the time to sit with them and (re)design that part of the system so that you don't have to handle NULLs either at the query level or at the front end. If they understand that using the ISNULL function in this way means their query will not be SARGable and will ultimately hurt performance, I'm confident you won't have any issues converting them to this method. For the earlier query, a SARGable alternative is sketched below.
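
As a rough sketch (not one of the original examples, and whether you actually get seeks still depends on the indexes available – there's no index leading on inserted_date in this demo), the first query could be rewritten so the columns aren't wrapped in a function:

--A SARGable alternative to wrapping the column in ISNULL() in the WHERE clause
SELECT  emp_id
       ,salary
       ,ISNULL(MODIFIEDBY_DATE , inserted_date) AS LastModifiedDate
FROM    EMPLOYEE
WHERE   MODIFIEDBY_DATE > DATEADD(yy , -20 , GETDATE())
   OR   ( MODIFIEDBY_DATE IS NULL
          AND inserted_date > DATEADD(yy , -20 , GETDATE()) )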

I know that the majority of people who read this may not take anything away from it, but even if a handful of people can take this and speak with their development teams about this very simple concept of database design, then you could save yourself major headaches in the future!

Database Mail – Collection of SQL Server Scripts — March 21, 2011


As part of my blogging I’ll be periodically providing readers with a collection of my scripts which I use on a daily basis (collected over the years) and hopefully others can gain from these and utilise them going forward.

Today I'm providing a list of Database Mail scripts I use when troubleshooting issues. These are predominantly SQL Server 2005, but I believe they will work on SQL Server 2008; as I don't have a 2008 instance available to test, though, I can't guarantee this!

Firstly, a good place to start is to check the status of your Database Mail:

EXEC msdb.dbo.sysmail_help_status_sp

It may be possible that it isn't sending mail simply because it hasn't been started 🙂

Following on from this, if you discover that Database Mail has in fact stopped then you can start it again with:

EXEC msdb.dbo.sysmail_start_sp

NOTE: It can also be stopped by simply running

EXEC msdb.dbo.sysmail_stop_sp

If Database Mail is running then it may be that Service Broker has not been enabled:

SELECT is_broker_enabled FROM sys.databases WHERE name = 'msdb';
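
If that returns 0, Service Broker can be re-enabled on msdb (a standard fix rather than something from my script collection – note it needs exclusive access to the database, so pick a quiet moment):

ALTER DATABASE msdb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;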

Another technique I use when troubleshooting mail issues (if it's running and the settings are correct) is to check whether anything is actually stuck in the SQL Server mail system – it can be that mail is sat queued but unable to send. A quick and easy way to determine this is to run:

EXEC msdb.dbo.sysmail_help_queue_sp @queue_type = 'mail';

If the value is 0 then you can determine that there isn’t actually anything in the queue. If the value is anything other than 0 then you can investigate this further by running one or all of the below:

Test to see if your mail has actually been sent

SELECT * FROM msdb.dbo.sysmail_sentitems where sent_date > dateadd(d,-1,getdate());

Check the mail eventlog for further errors

SELECT * FROM msdb.dbo.sysmail_event_log;

Check for any mail item which is sitting at the unsent status

SELECT * FROM msdb.dbo.sysmail_allitems WHERE sent_status = 'unsent';

A favourite of mine, determine the reasons why a mail item has actually failed

SELECT items.subject,
    items.last_mod_date
    ,l.description ,*
FROM msdb.dbo.sysmail_faileditems as items
INNER JOIN msdb.dbo.sysmail_event_log AS l
    ON items.mailitem_id = l.mailitem_id
WHERE items.last_mod_date > getdate()-1

The script above tends to be the ideal one to run in order to find out why an email isn't being sent; I tend to find 90% of my issues by running that script!

Something else worth checking is whether the user actually has permissions to be sending emails via SQL Server:

Check members of the DatabaseMailUserRole

EXEC msdb.sys.sp_helprolemember 'DatabaseMailUserRole'

These are just a handful of queries that can be run when troubleshooting Database Mail issues. By no means is it a complete list, but from my experience I rarely have to go elsewhere. There have been times when I've had to venture into amending retry attempts or querying sys.dm_broker_queue_monitors and sys.service_queues, but those have been very rare circumstances so I've not included the queries for them – a couple of quick starting points are sketched below.
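
Purely as starting points rather than part of my usual list (the retry value of 3 is just an example), retry attempts can be changed with sysmail_configure_sp and the broker queues inspected from msdb:

--Increase the number of times Database Mail retries a failing account
EXEC msdb.dbo.sysmail_configure_sp 'AccountRetryAttempts', '3';

--Have a look at the Service Broker queues and their monitors
USE msdb;
SELECT * FROM sys.dm_broker_queue_monitors;
SELECT * FROM sys.service_queues;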

If anyone has anything they feel worthwhile adding to this then feel free to email me or comment and I’ll add them if relevant.