SQLGeordie's Blog

Helping the SQL Server community… where I can!

Docker, Kubernetes and SQL Server on Linux series — June 20, 2018



I’ve been playing around with Docker (more specifically with SQL Server on Linux) since the beginning of 2017 to see if I can indeed “Build, Ship, Run, Any App Anywhere”, and will be putting together a series of how-tos and issues experienced so you guys don’t spend countless hours trying to investigate and work out what is going on when things don’t work.

I’ll be adding links to this main page as and when new posts are added as a central point of reference.

Little shortcut: https://shell.azure.com/ — March 9, 2022
Newcastle Power BI YouTube Channel — January 20, 2022


Let’s hit 100 subscribers!

Glen, Mark and I have had the Newcastle Power BI meetup back up and running since October 2021, hosting virtual events which have been a huge success!

We have been recording some of the sessions, where the speaker is happy for the content to be shared, and uploading them to a brand new, hot-off-the-press YouTube channel. This is still a work in progress but we wanted to get it out there 👍

Please check it out, like the videos, subscribe to the channel and just as all the other kool YouTubers say “Make sure you hit the bell!” as we’d love to have 100+ subscribers by the end of January!

Feel free to check out @SQLGeordie’s YouTube channel for some Docker/Kubernetes/DevOps recordings. Who knows, I may actually do some more this year 🤷‍♂️

tempdb – size matters! — January 9, 2022


tl;dr: Over the years I’ve read a lot of blog posts and watched a lot of videos mentioning that you should have your tempdb files all the same size. What I haven’t seen much of (if any) is what performance impact you actually see if they are not configured optimally. This blog post aims to address that 😉


We had a customer experiencing significant performance issues leading to application timeouts (30s), so they called on Datamasterminds to investigate. Although this wasn’t a constant performance issue and was only seen at its worst very infrequently, we were fortunate that they had invested in a SQL Server monitoring tool (SQL Sentry) which captured the historical performance.


We looked at the various noted dates/times where they had encountered performance issues; an example is shown below:

It is obvious that it wasn’t pretty during these times: very high PAGELATCH_UP (tempdb PFS) and the dreaded THREADPOOL waits can be seen, so we got to work setting up additional monitoring and analysing the database(s) and queries which were running at the time. Long story short, this led to a select few stored procedures creating/dropping a lot of temporary objects and, in some cases, running a very high number of inserts/deletes against them in loops/cursors. With queries continuing to come in, the wait times got higher, ultimately leading to them being queued and eventually hitting the 30s application timeout.
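If you want to check for these waits on your own instance, a quick query against the standard wait stats DMV along these lines will show the cumulative latch and threadpool waits since the last restart:

```sql
-- Cumulative PAGELATCH and THREADPOOL waits since the instance last restarted
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGELATCH%'
   OR wait_type = 'THREADPOOL'
ORDER BY wait_time_ms DESC;
```

Bear in mind these numbers are cumulative, so for a live incident you’d compare two snapshots taken a few minutes apart.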

A good explanation of what Page Free Space (PFS) is can be found over at Microsoft Docs.

At the time there were 8 tempdb data files, but they were all different sizes so the usage was skewed. This is because of SQL Server’s proportional fill algorithm, where it will try (more often than not) to write to the file with the most free space. In this case, file id 1 was significantly larger (117GB) than any of the others (25-50GB), so it became the “de facto standard” when writing to tempdb, ultimately causing the contention we were seeing.
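To check whether your own tempdb data files are evenly sized, something along these lines will do (file sizes are stored in 8KB pages, hence dividing by 128 to get MB):

```sql
-- tempdb data file sizes; uneven sizes will skew proportional fill
SELECT name         AS LogicalFileName,
       physical_name,
       size / 128.0 AS SizeMB
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';
```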

Tempdb Usage during high wait times can be seen below (taken from SQL Sentry), note the variation in Read/Writes to each file as well as the size differences:

Below is some of the output from sp_whoisactive during the high PAGELATCH_UP wait times. You will see that the majority relate to the INSERTs and DELETEs against temporary objects… all in tempdb file id 1.

NOTE: This is just a snippet of the output; the number of queries was in the 1000s 😲
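For anyone who hasn’t used it, sp_whoisactive is Adam Machanic’s free community procedure and has to be installed on the instance first. A minimal call looks like the below (the @get_task_info parameter shown is just one illustrative option for pulling in extra wait detail):

```sql
-- sp_whoisactive must be installed on the instance (whoisactive.com)
-- @get_task_info = 2 includes detailed task/wait information per session
EXEC dbo.sp_WhoIsActive @get_task_info = 2;
```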

The Fix

The interim fix was very straightforward: simply resize the tempdb files to be the same size, and the proportional fill algorithm worked far better 💪
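As a sketch of that interim fix (the logical file names are SQL Server’s defaults and the 50GB target is illustrative, not the customer’s actual values):

```sql
-- Resize each tempdb data file to the same size (repeat for every data file)
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev', SIZE = 51200MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'temp2',   SIZE = 51200MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'temp3',   SIZE = 51200MB);
-- Shrinking the larger files down may need an instance restart to fully take effect
```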

We’re still working with the customer on the performance tuning efforts to reduce resource usage and contention seen throughout.

The following list from Microsoft explains how increasing the number of tempdb data files that have equal sizing reduces contention:

  • If you have one data file for the tempdb, you only have one GAM page, and one SGAM page for each 4 GB of space.
  • Increasing the number of data files that have the same sizes for tempdb effectively creates one or more GAM and SGAM pages for each data file.
  • The allocation algorithm for GAM allocates one extent at a time (eight contiguous pages) from the number of files in a round robin fashion while honouring the proportional fill. Therefore, if you have 10 equally sized files, the first allocation is from File1, the second from File2, the third from File3, and so on.
  • The resource contention of the PFS page is reduced because eight pages at a time are marked as FULL because GAM is allocating the pages.

Hopefully this blog post gives you an insight into what sort of issue you can see if you don’t take the advice of Microsoft, Consultants or indeed anyone telling you to size all your tempdb files the same 🤔

DBCC ShrinkFile Error Message: File ID 1 of database ID ‘nn’ cannot be shrunk as it is either being shrunk by another process or is empty —


We’re currently working with a customer on an Archiving project and as part of it trying to reduce their 8.5tb database down to where it should/needs to be (~5tb) in order to be able to restore it in their dev/test environments. Unfortunately adding more disk space in these environments is not an option so as we remove data we are forced to shrink the database files.

I’ll be posting about some issues you may encounter when trying to shrink a very large database filled with a ton of (B)LOB data, but this post focuses on an issue experienced whilst trying to get the shrink to run.

Although we haven’t pinpointed the cause 100% yet, occasionally there were times when the shrink process would just bomb out after ~2mins with the message below:

File ID 1 of database ID 10 cannot be shrunk as it is either being shrunk by another process or is empty. [SQLSTATE 01000] (Message 5240)

This is usually a message seen when a backup is currently running against that database, but in this case it was not. There is a coincidence in that the last full backup to run was taking significantly longer than it should while this shrink process was trying to run (and bombing out), so the two could be linked. To get the process running again there is a little trick you can use: simply increase the database size by “a very small amount”.

In our case we just used 1MB and the script is below:

USE DBNameHere;
GO

SELECT
    name AS FileName, 
    size/128.0 AS CurrentSizeMB,  
    size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)/128.0 AS FreeSpaceMB
FROM sys.database_files
WHERE type IN (0,1)
AND DB_NAME() = 'DBNameHere';
GO

-- Use the CurrentSizeMB and add 1MB
USE [master]
GO
ALTER DATABASE DBNameHere
MODIFY FILE (NAME = N'DataFileNameHere', SIZE = 1025MB); -- CurrentSizeMB + 1

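With the file grown by that 1MB, the shrink can then be retried, e.g. (the logical file name and the target size in MB below are placeholders):

```sql
USE DBNameHere;
-- Retry the shrink down to the target size (in MB)
DBCC SHRINKFILE (N'DataFileNameHere', 500000);
```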
Hopefully this will be useful for others that may be stuck in a similar situation.

SQL Server Corrupt / Suspect database with In-Memory OLTP — August 24, 2021


The Problem

Late last week (20th) we had an emergency call from a company that had a production database go into Suspect mode and needed help. That isn’t a great situation to be in, so when they then went on to tell us that the last valid backup they had was from the 12th, and that the backup job had been failing since then, we were in even less of a great situation 😢

There are many blogs and forum posts out there showing the steps to rectify this situation, with the main options being:

  1. Restore the last valid backup or
  2. Put the DB into Emergency mode and run CHECKDB with REPAIR_ALLOW_DATA_LOSS
  3. Create a new DB and migrate everything to it
    1. The data was in a readable state from the DB in Emergency mode – we were fortunate!
  4. Other options are available in certain scenarios

Depending on your backup strategy, options 1 and 2 can put you in a situation where data loss could occur, but for this company the first option wasn’t really an option as they would be losing 8 days’ worth of data.
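For reference, option 2 normally goes along these lines (DBNameHere is a placeholder, and REPAIR_ALLOW_DATA_LOSS does exactly what the name warns):

```sql
-- Option 2 sketch: emergency-mode repair, data loss possible
ALTER DATABASE DBNameHere SET EMERGENCY;
ALTER DATABASE DBNameHere SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'DBNameHere', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS, ALL_ERRORMSGS;
ALTER DATABASE DBNameHere SET MULTI_USER;
```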

This is where it started to get interesting. As we discussed what they had tried already, they mentioned that they had attempted some of the steps in blogs/forums for option 2 but were getting an error relating to In-Memory OLTP preventing it. Like many, many others, these guys have an In-Memory OLTP filegroup from trying it out once and not being able to remove it; it had been like that for a number of years without causing an issue… until today.

Aha, so we’re now in even less of a great situation than the less of a great situation a few minutes earlier 👀. Unfortunately, In-Memory OLTP objects are not checked as part of a DBCC CHECKDB process, so option 2 is again not an option. A standard CHECKDB had been run by the guys and returned no errors, which helped narrow the issue down to the In-Memory OLTP structures, as they’re not part of the check.

Another option we did explore as a last ditch effort before option #3 was a slightly modified version of the process from one of Paul Randal’s blogs on “Creating, detaching, re-attaching, and fixing a SUSPECT database” to try re-attaching the mdf and rebuilding a new log file. That day I learned something new: this is also not an option for DBs with In-Memory OLTP. Below is a snippet of code to show what I mean for attaching a DB and rebuilding a new log file.

USE [master]
GO
CREATE DATABASE DBNameHere
    ON ( FILENAME = N'D:\DATA\DBNameHere.mdf' )
    FOR ATTACH;
GO

-- If the above doesn't work then try forcing a log rebuild...
CREATE DATABASE DBNameHere
    ON  ( FILENAME = N'D:\DATA\DBNameHere.mdf' )
    FOR ATTACH_REBUILD_LOG;

The error that you will get is:

Msg 41316, Level 16, State 0, Line 7
Restore operation failed for database ‘DBNameHere‘ with internal error code ‘0x88000001’.
Msg 41836, Level 16, State 1, Line 10
Rebuilding log is not supported for databases containing files belonging to MEMORY_OPTIMIZED_DATA filegroup.
DBCC results for ‘DBNameHere‘.
CHECKDB found 0 allocation errors and 0 consistency errors in database ‘DBNameHere‘.
Msg 7909, Level 20, State 1, Line 10
The emergency-mode repair failed. You must restore from backup.

The “Fix”

So really, the only option now is #3, to copy all the data to a new database following the steps below as a guide:

  • disable logins so the application will not be able to connect
  • create new database (without the In-Memory OLTP filegroup)
  • script schema / procs / users etc from the Emergency mode db
  • disable foreign keys
  • disable nonclustered indexes
  • migrate the data
    • We used the import method from SSMS to quickly / easily utilise the identity seed management but there are other methods available
  • enable foreign keys (WITH CHECK)
  • rebuild nonclustered indexes
  • drop the Emergency mode db
  • rename the new database back to the old name
  • enable logins
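As a rough sketch of the disable/enable steps (the table and index names below are placeholders; in practice you would generate these statements for every table):

```sql
-- Before the data migration: disable constraints and nonclustered indexes
ALTER TABLE dbo.MyTable NOCHECK CONSTRAINT ALL;
ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable DISABLE;

-- ...migrate the data...

-- Afterwards: re-validate constraints (WITH CHECK marks them trusted again)
ALTER TABLE dbo.MyTable WITH CHECK CHECK CONSTRAINT ALL;
ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable REBUILD;
```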

Although this took a while, most of it spent making sure the migrated data was indeed correct, they managed to recover their data and are back up and running, albeit with a 1 day outage. These guys were extremely lucky and have learned a valuable lesson about ignoring failed backup errors.

The Cause?

Although we can’t say with 100% certainty exactly what happened to cause this, from the error log we could see that a restore of the DB was attempted on a secondary instance, the script did not include a WITH MOVE, and it attempted to overwrite the MEM folder for the production DB. Those files were locked by SQL Server (the log indicated this too), but the problems were seen not long after, and the error from the failed backups relates to missing objects from the MEM folder, so it is a likely cause.

Couple of things to take away from this:

  • Always check and validate your backups
    • These guys did that every night by restoring this DB to another instance for reporting, their problem was ignoring the backup failures / errors
  • Be very wary when looking to implement / try out In-Memory OLTP, especially if you’re throwing it straight into production. Once the filegroup is created it cannot be removed, and if you are unfortunate enough to encounter corruption and don’t have valid backups then you are in a far more difficult situation than if you weren’t using In-Memory OLTP

Microsoft Ignite 2021 – Book of News — March 4, 2021


Microsoft Ignite 2021 is currently underway and if, like me, you can’t attend every session you would like to, well Microsoft have this covered with their Book of News.

There’s a ton of new stuff (very little for Data which is my primary focus) but this gives a great overview of all the latest information. Some of the highlights for me are:

Enjoy the rest of conference and if you miss any of it, enjoy the Book of News 😃

Should I split my SQL Server Drives on a SAN in 2021? — February 7, 2021


NOTE: This blog post references HPE as our example but is relevant to other storage vendors out there. I am in no way affiliated with HPE 😉

Back in the day, “when I was a lad“, the recommendation for SQL Server was to split your data, log and tempdb files onto separate drives/LUNs to get the most out of your storage. Jump forward to 2021: is this still relevant, and should I be splitting my SQL Server drives onto separate LUNs on new SAN storage? It’s a question often asked not just by customers but also by their 3rd party managed service providers / hosting companies. The question can also be along the lines of “Why can’t we just put everything on C:\ because the backend is all on the same LUN?“. This is slightly different as they’re questioning the drive lettering more than creating separate LUNs, but it’s still relevant to this topic.

The majority (not all) of SQL Servers will have a SAN hosting their storage, and SANs these days are super duper quick, especially those with tiered SSD or, even fancier, flash storage. The technical differences between the older spinning rust and the new fancy dan flash storage are not something we’ll delve into, as there are plenty of other blogs out there and it’s not really in the scope of this blog post.

Each storage vendor will (or should) provide their own documentation specific to how the SAN should be configured for SQL Server to get the best bang for your buck. Taking HPE as an example, they have PDFs for their various offerings, including 3PAR/Primera as well as Nimble. Although there are some slight differences, each of them suggests that you SHOULD split your drives onto separate volumes.

I won’t dissect the documents in their entirety, but below are some of the sections which help answer the question. These mostly relate to which performance policy to set for your data, logs and tempdb based on the workload (i.e. OLTP / OLAP and size of files):

  1. Array Snapshots and remote replication
    • You may not (won’t) want tempdb as part of this due to its large rate of data change
  2. Volume Block Size
    • According to the documentation, depending on the workload, you may (or may not?) want 8kb for data and 4kb for logs as per their default policy
  3. Caching
  4. Deduplication
  5. Compression
  6. Number of disks available

To provide a great overview, below is a snippet from the HPE Nimble recommendations:

Storage Layout for SQL Server Volumes
In general, using multiple volumes for SQL Server deployments offers several advantages:
• The ability to apply custom performance policy settings to specific volumes
• The ability to control cache use for volumes on adaptive flash arrays
• The choice of limiting replication of certain volumes, such as the tempdb volume
• A gain in performance for I/O-intensive databases, especially if the queue depth of a device is at risk of becoming saturated
• The ability to group databases in separate volume collections and apply custom backup and replication
• The ability to quickly recover specific databases with volume-based restores
Before designing the storage layout, carefully consider the administrative overhead of managing multiple volumes. The ideal solution provides a balance between optimization and administration.

Allocation Unit Size

Something that often comes up during these conversations is the configuration of the volume formatting. Regardless of the chosen performance policy and indeed volume block size, the default recommendation from HPE is to use a 64KB allocation unit size for your data, log and tempdb volumes. This is only a recommendation; testing in your specific environment will truly give you the answer as to what allocation unit size to set.

Additional Information

Below are some further snippets from the HPE documentation regarding default configurations:

HPE 3PAR/Primera:


Comparison of SSD / tiering uses for SQL Server files:


Should you split your SQL Server drives in 2021? The Nimble documentation has a sentence which sums it up very well:

The ideal solution provides a balance between optimization and administration.

Having everything thrown into a single pool will make life a lot easier for the SAN guy; splitting things up could lead to an administrative nightmare but may be required to get the best out of it for your workload.

What I will say is: review your storage vendor’s documentation / recommendations, compare it with the sort of environment / workload you have and, if it fits your specific setup, configure it, test it and use it as a baseline to compare with other configurations.

If testing proves that their guide/recommendations do not provide optimal performance for your environment then it is perfectly fine to deviate from their suggestions, as long as the admin overhead is manageable. All I would add is that whatever configuration decision is made, make sure it is fully documented as to why it has been done, as someone several years later will no doubt ask the question 🙂

Docker in 12 Steps – YouTube — December 30, 2020

Docker in 12 Steps – YouTube

I have eventually got around to tidying up my YouTube channel since I moved it and the process made a proper mess of everything 🤦‍♂️

Want to get started with Docker, containers and even SQL Server on Linux in just 12 easy, hands-on steps (short videos)? If you’ve moved on to this sentence then the answer to the previous question must have been YES!

Have a look, I’ve purposely kept them as short as possible, the total time for all 12 videos is less than 90 minutes so there really is no excuse 😉

SQLGeordie – Docker in 12 Steps

Microsoft Ignite 2020 – Book of News — September 24, 2020


Microsoft Ignite 2020 is currently underway and if, like me, you can’t attend every session you would like to, well Microsoft have this covered with their Book of News.

There’s a ton of new stuff but this gives a great overview of all the latest information including:

Enjoy the conference and if you miss any of it, enjoy the Book of News 😃

SQL Server / Docker Desktop and WSL2 — August 5, 2020


This isn’t anything ground-breaking, but it’s really awesome for those that run SQL Server on Linux using Docker Desktop for Windows: Windows 10 (version 2004, Build 19041 or higher) now ships with Windows Subsystem for Linux (WSL) 2.

I won’t go too much into what this is as you can read the article in the links above, but to summarise, this will improve the experience of Docker on Windows:

  • Improvements in resource consumption
  • Starting up docker daemon is significantly quicker (Docker says 10s as opposed to ~1min previously)
  • Avoid having to maintain both Linux and Windows build scripts
  • Improvements to file system sharing and boot time
  • Allows access to some cool new features for Docker Desktop users.

Some of these are improvements we’ve been crying out for over the last couple of years so in my opinion, they’re a very welcome addition.

In order to get started using WSL2, there are a couple of steps you need to run through, which I’ll try to show below with a few screenshots.



Before we can do anything, please make sure that you have the downloads from the link in the previous section

Install Docker Desktop

Nothing to report here, fairly straightforward: click, click, click.

[Screenshot: Docker Desktop installer showing “Installation succeeded”]

Once installed, depending on the current state of your machine, you may see this message regarding installing WSL2:

[Screenshot: Docker Desktop – “WSL 2 is not installed”, with a PowerShell command to install WSL and a prompt to restart the computer before using Docker Desktop]

Simply follow the information and enable the optional features, either by using the PowerShell provided:

Enable-WindowsOptionalFeature -Online -FeatureName $("VirtualMachinePlatform", "Microsoft-Windows-Subsystem-Linux")

Or you can go to Windows Features and enable it manually:

[Screenshot: Windows Features dialog with “Virtual Machine Platform” and “Windows Subsystem for Linux” ticked]

Once you have done this you will be prompted to install the Linux Kernel update package (See downloads). You can reboot before doing this (another reboot may be required after) but I managed to install it and just do a single reboot:

[Screenshot: Docker Desktop – “WSL 2 installation is incomplete”, with a link to install the WSL 2 Linux kernel update package and a prompt to restart afterwards]

Need to run the Linux kernel update and restart.

[Screenshot: Windows Subsystem for Linux Update Setup wizard welcome page]

Run the update executable; it takes about 2 seconds to update.

[Screenshot: Windows Subsystem for Linux Update Setup wizard completed]

Update now complete.

Depending on your setup, there may be a couple of additional steps if you follow this link:


Open the Docker settings and you should now see the option to use the WSL 2 based engine:

Select it and restart Docker.

If you wish to see what version of WSL you have then you can run the command below in an elevated command prompt:

wsl --list --verbose

I already have the Ubuntu distribution installed so I didn’t have to do this; you may need to install it:

If you click Launch then the distro will start:

Run SQL Server on Linux

docker run -e "ACCEPT_EULA=Y" `
    -e "SA_PASSWORD=P@ssword1" `
    --cpus="2" `
    --name SQLLinuxLocal `
    -p 1433:1433 `
    -d <<Put Your Image Name here>>
docker ps -a

I can’t say too much about the various performance improvements so far, but the daemon startup is certainly a lot quicker than previously. Perhaps some testing of the performance is in order… and another blog post 😜