
Merge csv files – quick PowerShell snippet

During a bit of work I’ve been doing this evening for SQLRelay 2014, I used something from my arsenal of PowerShell scripts which I thought I’d share because I love its simplicity :) It’s nothing big and fancy, but it is extremely useful. Tasked with merging a large number of csv files, there is (as always) a quick and easy way to do this with PowerShell:

Get-ChildItem *.csv |
ForEach-Object {Import-Csv $_} |
Export-Csv -NoTypeInformation WhateverYouWantToCallTheFile.csv

There are ways to make this a little more dynamic, which I’ll update the post with in the coming weeks….
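
In the meantime, here’s a minimal sketch of what a more dynamic version might look like. The function name, parameters and paths are my own illustration, not part of the original snippet:

function Merge-CsvFiles
{
 param
 (
  [string]$SourceFolder = '.',
  [string]$OutputFile = 'Merged.csv'
 )

 #Exclude the output file itself in case it lives in the source folder
 Get-ChildItem -Path $SourceFolder -Filter *.csv |
  Where-Object { $_.Name -ne (Split-Path $OutputFile -Leaf) } |
  ForEach-Object { Import-Csv $_.FullName } |
  Export-Csv -NoTypeInformation $OutputFile
}

#Usage:
Merge-CsvFiles -SourceFolder 'C:\Data\Csv' -OutputFile 'C:\Data\Merged.csv'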

2013 in review

The WordPress.com stats helper monkeys prepared a 2013 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 12,000 times in 2013. If it were a concert at Sydney Opera House, it would take about 4 sold-out performances for that many people to see it.


Output SQL Server data from multiple tables to Tab Delimited text files using Powershell

I had a request this morning for something I thought was actually very simple:

Client: “Can you extract all data for these particular tables including column headers to a tab delimited .txt file?”
Chris: “Sure, no problem, I’ll just run bcp querying sys.tables using a COALESCE loop to output the statements”
Client: “Top stuff, let me know when it’s done”

So, away I went generating my script, which took a matter of minutes, and ran it…….where are the column headers? Bugger – forgot that bcp doesn’t output column headers without doing some funky stuff: creating a header record in a separate file and merging that with the file of data.
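
For completeness, the workaround I’m referring to looks something like this – a sketch only, with placeholder server, database, table and column names throughout (bcp’s default field terminator is already a tab, so no -t switch is needed):

#1. Export a single header row (placeholder column names)
bcp "SELECT 'Col1','Col2'" queryout C:\work\header.txt -c -T -S ServerInstanceHere

#2. Export the data without headers
bcp "SELECT Col1, Col2 FROM DBNameHere.dbo.TableNameHere" queryout C:\work\data.txt -c -T -S ServerInstanceHere

#3. Stitch the two files together
Get-Content C:\work\header.txt, C:\work\data.txt | Set-Content C:\work\TableNameHere.txt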

With this in mind I knew creating an SSIS package (or using the Export Data wizard to generate one – very manual unless I delved into the realms of BIML) could do this, but I thought I’d have a look at PowerShell invoking sqlcmd.

Again, this all seemed to be going very well until I came to outputting the data to a tab-delimited .txt file. As far as I’m aware PowerShell does not have an Export-Txt, so I had to look into how I could use Export-Csv to output a tab-delimited .txt file as opposed to comma-separated, and found the parameter -Delimiter "`t" – excellent!!! Added this in and ran the script…………and the first row consisted of "#TYPE System.Data.DataRow" – wtf!?!?!

A quick search on my search engine of choice showed that there is a parameter you can pass in to remove this from the export: -NoTypeInformation.

Ran it again with -NoTypeInformation and everything worked as expected, apart from all column headers and data having quotes (") around them, which was not part of the requirement. Unfortunately (as far as I know) there is no switch or parameter that suppresses these, so I had to change the Export-Csv to ConvertTo-Csv and run a -replace of '"' with an empty string, which managed to do the trick.

I’ve included the script below which can be tailored to your needs:

$server = 'ServerInstanceHere'
$database = 'DBNameHere'
$path = 'c:\work\ToDelete\'
$query = "SELECT name FROM sys.tables WHERE name IN (
 'TableNameHere_1',
 'TableNameHere_2'
 --etc etc
 )"

#Get list of table names to output data
$Tables = Invoke-Sqlcmd -Query $query -Database $database -ServerInstance $server
foreach ($Table in $Tables)
{
 $TableName = $Table["name"]
 Write-Host -ForegroundColor Green "Creating File $TableName.txt"

 #Build the query inside the loop so $TableName has a value when it expands
 $queryToOut = "SELECT * FROM $TableName"

 #ConvertTo-Csv (as opposed to Export-Csv) so the quotes can be stripped out
 Invoke-Sqlcmd -Query $queryToOut -Database $database -ServerInstance $server |
  ConvertTo-Csv -NoTypeInformation -Delimiter "`t" |
  ForEach-Object {$_ -replace '"',''} |
  Out-File "$path$TableName.txt"
 #Export-Csv -NoTypeInformation -Delimiter "`t" -Path "$path$TableName.txt"
}

Apologies for the formatting, but the PowerShell script tag doesn’t seem to format it the way I want it to :(

SQL Server NorthEast – New Usergroup!!

SQLBits in Nottingham was where it all began. A short conversation with Richard Douglas (@SQLRich) on whether there were any plans for a SQL usergroup in Newcastle, and whether there was scope for me to begin setting one up, quickly moved on to conversations with Chris Testa-O’Neill (@ctesta_oneill) and eventually Jonathan Allen (@fatherjack). This became quite a lengthy chat regarding the ins, outs, ups and downs of setting up such a thing.

This was back in May. Four months of venue hunting, speaker negotiations and marketing led to the first ever SQL Usergroup in Newcastle – #sqlnortheast :)

The schedule was set for Gavin Campbell and Neil Hambly to make their merry way up to the north east and give two fantastic sessions. So with venue sorted, speakers sorted, food sorted and attendees sorted, we were all set. Boooooom! Then the bombshell hit: a few days before the UG, Neil announced he couldn’t make it :(

With the first SQL UG in the north east hanging in the balance, up stepped a very good friend of mine, Chris McGowan (@ckwmcgowan), who was willing to make the trip from Manchester at such short notice and save the day! With 18 people registered, this was about 17 more than I was expecting; with no initial indication of the level of interest we could generate in the north east, I was over the moon with the uptake. The interaction within the group was phenomenal, and there was such a broad range of knowledge and skills, from hardcore SQL internals DBA types to developers to Azure, that it made for great conversations.

So, September 3rd came along and surprisingly all seemed to be going well. Both speakers turned up on time, the food turned up and, most importantly, 15 people turned up on the evening, which was gobsmacking. All in all it turned out to be an extremely good evening/night. A few beers with everyone afterwards on the Quayside led to far too many beers with Chris and Gavin back at the hotel bar – it wasn’t a pretty sight the next morning!

What an experience and from the excellent all round feedback received from the attendees, this is something that they hope will continue.

Michael Robson (@heymiky) and I are currently trying to work out dates for next year and organise speakers. We’ve had a bit of a break from the UG due to taking on a leg of sqlrelay in Newcastle (November 25th), but we do have a “SQL on the Lash” evening session set up for December to end the year on a high.

I’ll be reporting back on how it all goes with sqlrelay and with any further announcements on dates for the sqlnortheast UG in 2014.

A few pics from the session:

[photos from the evening]

Making sure your Triggers fire when they should

As some of you may be aware, triggers are not my favourite thing in the world but, like most things, they do have their place.

Whilst onsite with one of my clients, I found one of their processes fires a trigger on insert which ultimately runs an SSRS subscription to email a report. All sounding fairly feasible so far. However, this process is also used as part of an overnight batch process, which runs a separate insert statement (actually another stored procedure in another job step) instead of the “onDemand” insert. Ok, still doesn’t sound like too much of an issue.

Now, they started experiencing occasional failures of this job during the day, the error relating to the fact that the SSRS subscription job was being called while it was already running. Interesting; in theory this shouldn’t ever happen, because the process ran the jobs either from the overnight batch or from the one-off onDemand insert.

Stepping through the process led me to an AFTER INSERT trigger. Upon opening it I spotted the issue straight away: something that, as I’ve found over the years as a consultant, a lot of DBAs and developers fail to understand (from MSDN):

These triggers fire when any valid event is fired, regardless of whether or not any table rows are affected. This is by design.

So, the issue was that step 3 ran a procedure which ultimately ran the insert statement for the onDemand insert, and step 4 ran a procedure to do the insert for the overnight batch process. On this occasion the batch process had no records to insert, but it still fired the trigger and ran the SSRS subscription again! There are a number of ways to fix this, but I’ve tended to stick with a basic check of the “inserted” table for results, RETURNing out if no records are there to process.

I’ve supplied a bit of test code below for people to try this out.

Let’s create a test table and an audit table:

USE tempdb
GO

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[TestTable]') AND type in (N'U'))
DROP TABLE [dbo].[TestTable]
GO
CREATE TABLE [dbo].[TestTable]
(
	TestTableID INT IDENTITY(1,1),
	TestTableDescr VARCHAR(20)
)
GO

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[AuditTrigger]') AND type in (N'U'))
DROP TABLE [dbo].[AuditTrigger]
GO
CREATE TABLE [dbo].[AuditTrigger]
(
	AuditTriggerID INT IDENTITY(1,1),
	AuditTriggerDescr VARCHAR(20),
	DateCreated DATETIME
)
GO

INSERT INTO dbo.TestTable (TestTableDescr)
VALUES ('Test1'), ('Test2'), ('Test3');

SELECT * FROM dbo.TestTable;

Now let’s create the trigger with no checking:

USE [tempdb]
GO

IF  EXISTS (SELECT * FROM sys.triggers WHERE object_id = OBJECT_ID(N'[dbo].[trTestTable]'))
DROP TRIGGER [dbo].[trTestTable]
GO

CREATE TRIGGER [dbo].[trTestTable] ON [dbo].[TestTable]
   AFTER INSERT
AS 
BEGIN

	--Log the fact the trigger fired
	INSERT INTO [dbo].[AuditTrigger] (AuditTriggerDescr, DateCreated)
	SELECT 'Trigger Fired', GETDATE()
	
END
GO

Test Inserting a record that exists:

--Valid Insert
INSERT INTO dbo.TestTable (TestTableDescr)
SELECT TestTableDescr
FROM dbo.TestTable
WHERE TestTableDescr = 'Test1';

SELECT  *
FROM    [dbo].[AuditTrigger];

Test Inserting a record that doesn’t exist:

--Not a Valid Insert
INSERT INTO dbo.TestTable (TestTableDescr)
SELECT TestTableDescr
FROM dbo.TestTable
WHERE TestTableDescr = 'Test4';

SELECT  *
FROM    [dbo].[AuditTrigger];

You’ll now see that there are two entries in the AuditTrigger table, because the trigger fired on the second insert even though no records were actually inserted.

So, let’s amend the trigger to check for valid inserts:

USE [tempdb]
GO

IF  EXISTS (SELECT * FROM sys.triggers WHERE object_id = OBJECT_ID(N'[dbo].[trTestTable]'))
DROP TRIGGER [dbo].[trTestTable]
GO

CREATE TRIGGER [dbo].[trTestTable] ON [dbo].[TestTable]
   AFTER INSERT
AS 
BEGIN
	
	--Check to see if any records were inserted
	IF NOT EXISTS (SELECT 1 FROM INSERTED) 
		RETURN 
		
	--Log the fact the trigger fired
	INSERT INTO [dbo].[AuditTrigger] (AuditTriggerDescr, DateCreated)
	SELECT 'Trigger Fired', GETDATE()
	
END
GO

and test the inserts again:

Test Inserting a record that exists:

--Valid Insert
INSERT INTO dbo.TestTable (TestTableDescr)
SELECT TestTableDescr
FROM dbo.TestTable
WHERE TestTableDescr = 'Test2';

SELECT  *
FROM    [dbo].[AuditTrigger];

Test Inserting a record that doesn’t exist:

--Not a Valid Insert
INSERT INTO dbo.TestTable (TestTableDescr)
SELECT TestTableDescr
FROM dbo.TestTable
WHERE TestTableDescr = 'Test4';

SELECT  *
FROM    [dbo].[AuditTrigger];

No new entry will have been added to the AuditTrigger table by the final insert statement!

Let’s clean up our tempdb:

USE [tempdb]
GO

--Clean up
IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[TestTable]') AND type in (N'U'))
DROP TABLE [dbo].[TestTable]
GO
IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[AuditTrigger]') AND type in (N'U'))
DROP TABLE [dbo].[AuditTrigger]
GO

Hopefully this will help clear up the misconception that triggers only fire when records are actually inserted :)

As per usual, I’d like to hear people’s thoughts/experiences on this topic.

DBCC CheckTable, Spatial Indexes and incorrect compatibility mode…..

Just a very quick blog today regarding an issue that has arisen with one of my clients. During integration it became apparent that one table in particular was failing the weekly consistency checks; the error output was:

DBCC results for 'sys.extended_index_1696529623_384000'.

There are 313423 rows in 1627 pages for object "sys.extended_index_1696529623_384000".

DBCC results for 'schema.Table'.

There are 312246 rows in 12192 pages for object "schema.Table".

Msg 0, Level 11, State 0, Line 0

A severe error occurred on the current command. The results, if any, should be discarded.

Msg 0, Level 20, State 0, Line 0

A severe error occurred on the current command. The results, if any, should be discarded.

A bit of background: the server is running SQL Server 2008R2 SP1 CU2, and the database in question is still at compatibility level 90 (SQL Server 2005). The table that fails has a spatial index on a Geography column.

So, how do we fix this? Well, there are a couple of options:

  1. Change the database compatibility level to 100 (a quick sketch of this is below)
  2. Install SQL Server 2008R2 SP1 CU3…
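
Option 1 is the quicker of the two to try; a minimal sketch, assuming the affected database is called YourDatabase (a placeholder name):

-- Sketch only: YourDatabase is a placeholder
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 100;
GO

-- Confirm the change
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'YourDatabase';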

This is a documented issue (KB 2635827) and the fix can be found on Microsoft’s Support pages:

FIX: Access violation when you run a DBCC CHECKDB command against a database that contains a table that has a spatial index in SQL Server 2008 or in SQL Server 2008 R2 

As to which fix we deploy, well that’s for tomorrow’s fun and games ;)

Developing Microsoft SQL Server 2012 Databases (70-464) – My Thoughts

I finally pulled my finger out and took the last exam of the MCSE SQL Server 2012 – Data Platform certification last week, and passed with a score of 876, which is pretty respectable in my opinion ;)

The exam consisted of 3 sections:

  • Section 1 – 6 scenario-based questions
  • Section 2 – 7 scenario-based questions
  • Section 3 – 32 generic questions

So, those of you who are clever enough will have worked out that’s a grand total of 45 questions. I’m pleased to say that once again the quality of the questioning was up to Microsoft’s usual standard – not in terms of difficulty, but more in terms of complete irrelevance!

I’m bound by NDA so can’t go into specifics, but I’ll try and provide an example of this complete and utter irrelevance without giving the game away:

Q: What is your favourite holiday destination?

A (Select one of the following):

  • Green
  • Green with a bit of Yellow
  • Green with a bit of Blue
  • Sky Blue with Pink dots

Hmmmm, bit of a tricky one here. From what I can tell, none of the answers bear any relevance whatsoever to the question…….hmmmmm……..suppose I’ll have to take a random guess and hope that the answer I select is the one they’ve set as being correct!!!

It was just a good job I wasn’t on the threshold of pass/fail, as this could potentially have been the difference. From the actual question it was to be a simple answer, but I suppose I’ll never know whether or not my random guess worked out :(. I just hope the comments I left regarding this issue are taken up and fixed so others don’t have the same problem.

All in all I felt comfortable throughout the exam, but I do know for a fact that my knowledge of assemblies/CLR has slipped significantly and I need a refresher.

So that’s it. SQL Server 2012 MCSE done and dusted, and no more exams……well, for the next 3 years at least. So what’s next? Not sure. There are the MCSM exams, but I have to be honest: I’ve got no formal certifications in SQL Server 2005/2008, yet I know for a fact that I have far more knowledge and experience of their features than I do of SQL Server 2012’s, so I suppose that kind of backs up my original issue with these exams. Yes I studied, and yes I did learn a lot from doing so, but in no way, shape or form would I class myself as an “Expert” in SQL Server 2012 – I don’t believe anyone could!

As per usual, any thoughts or comments are welcome.
