SQLGeordie's Blog

Helping the SQL Server community……where I can!

SQL Server Corrupt / Suspect database with In-Memory OLTP — August 24, 2021

SQL Server Corrupt / Suspect database with In-Memory OLTP

The Problem

Late last week (the 20th) we had an emergency call from a company whose production database had gone into Suspect mode and who needed help. Now, this isn’t a great situation to be in, so when they went on to tell us that the last valid backup they had was from the 12th and that the backup job had been failing ever since – even less of a great situation to be in 😢

There are many blog and forum posts out there showing the steps to rectify this situation, with the main options being:

  1. Restore the last valid backup, or
  2. Put the DB into Emergency mode and run CHECKDB with REPAIR_ALLOW_DATA_LOSS, or
  3. Create a new DB and migrate everything to it
    • The data was in a readable state from the DB in Emergency mode – we were fortunate!
  4. Other options are available in certain scenarios

Depending on your backup strategy, options 1 and 2 can put you in a situation where data loss could occur, but for this company the first option wasn’t really an option at all as they would be losing 8 days’ worth of data.
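For reference, option 2 looks something like the below – a minimal sketch assuming a database called DBNameHere, and remember that REPAIR_ALLOW_DATA_LOSS does exactly what the name suggests, so it is very much a last resort:

USE [master]
GO
-- Emergency mode gives sysadmin read access to a SUSPECT database
ALTER DATABASE [DBNameHere] SET EMERGENCY;
GO
ALTER DATABASE [DBNameHere] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
-- WARNING: this repair level can and will throw data away to get the DB consistent
DBCC CHECKDB (N'DBNameHere', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
ALTER DATABASE [DBNameHere] SET MULTI_USER;
GO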

This is where it started to get interesting. As we discussed what they had tried already, they mentioned that they had followed the steps in blogs / forums for option 2 but were getting an error relating to In-Memory OLTP preventing it. Like many others, these guys had an In-Memory OLTP filegroup left over from trying it out once and not being able to remove it, and it had been like that for a number of years without causing an issue……..until today.

Aha, so we’re now in an even less great situation than the less great situation a few minutes earlier 👀. Unfortunately, In-Memory OLTP objects are not checked as part of a DBCC CHECKDB process, so option 2 is again not an option. A standard CHECKDB had already been run by the guys and returned no errors, which helped narrow the issue down to the In-Memory OLTP structures, as they’re not part of the check.
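If you want to check whether a database has one of these filegroups lurking, a quick query against sys.filegroups will tell you – type ‘FX’ denotes MEMORY_OPTIMIZED_DATA:

USE [DBNameHere]
GO
-- Returns a row if the database contains a memory-optimized filegroup
SELECT name, type, type_desc
FROM sys.filegroups
WHERE type = 'FX';
GO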

Another option we did explore as a last-ditch effort before option #3 was a slightly modified version of the process from one of Paul Randal’s blog posts on “Creating, detaching, re-attaching, and fixing a SUSPECT database” – trying to re-attach the mdf and rebuild a new log file. That day I learned something new: this is also not an option for DBs with In-Memory OLTP. Below is a snippet of code to show what I mean for attaching a DB and rebuilding a new log file.

USE [master]
GO
-- Attempt to attach the mdf and have SQL Server rebuild a new log file
CREATE DATABASE [DBNameHere] ON
( FILENAME = N'D:\DATA\DBNameHere.mdf' )
 FOR ATTACH_REBUILD_LOG
GO

-- If the above doesn't work then try forcing the log rebuild...
CREATE DATABASE [DBNameHere]
    ON  ( FILENAME = N'D:\DATA\DBNameHere.mdf' )
    FOR ATTACH_FORCE_REBUILD_LOG;
GO

The error that you will get is:

Msg 41316, Level 16, State 0, Line 7
Restore operation failed for database 'DBNameHere' with internal error code '0x88000001'.
Msg 41836, Level 16, State 1, Line 10
Rebuilding log is not supported for databases containing files belonging to MEMORY_OPTIMIZED_DATA filegroup.
DBCC results for 'DBNameHere'.
CHECKDB found 0 allocation errors and 0 consistency errors in database 'DBNameHere'.
Msg 7909, Level 20, State 1, Line 10
The emergency-mode repair failed. You must restore from backup.

The “Fix”

So really, the only option now is #3: copy all the data to a new database, following the steps below as a guide (a hedged T-SQL sketch of some of these steps follows the list):

  • disable logins so the application will not be able to connect
  • create new database (without the In-Memory OLTP filegroup)
  • script schema / procs / users etc. from the Emergency mode DB
  • disable foreign keys
  • disable nonclustered indexes
  • migrate the data
    • We used the import method from SSMS to quickly / easily utilise the identity seed management but there are other methods available
  • enable foreign keys (WITH CHECK)
  • rebuild nonclustered indexes
  • drop the Emergency mode DB
  • rename the new database back to the old name
  • enable logins
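As mentioned above, here is a rough T-SQL sketch of some of those steps – the login, table, index and database names are all hypothetical:

-- Disable a login so the application cannot connect
ALTER LOGIN [AppLogin] DISABLE;

-- Disable all foreign keys on a table, then disable a nonclustered index
ALTER TABLE dbo.MyTable NOCHECK CONSTRAINT ALL;
ALTER INDEX [IX_MyTable_MyColumn] ON dbo.MyTable DISABLE;

-- ...migrate the data...

-- Re-enable and re-validate the foreign keys (WITH CHECK validates the existing rows)
ALTER TABLE dbo.MyTable WITH CHECK CHECK CONSTRAINT ALL;

-- Rebuild the disabled nonclustered indexes
ALTER INDEX [IX_MyTable_MyColumn] ON dbo.MyTable REBUILD;

-- Rename the new database back to the old name and re-enable the login
ALTER DATABASE [DBNameHere_New] MODIFY NAME = [DBNameHere];
ALTER LOGIN [AppLogin] ENABLE;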

Although this took a while, most of it spent making sure the migrated data was indeed correct, they managed to recover their data and are back up and running, albeit with a 1-day outage. These guys were extremely lucky and have learned a valuable lesson about ignoring failed backup errors.

The Cause?

Although we couldn’t say with 100% certainty exactly what happened to cause this, from the error log we could see that a restore of the DB had been attempted on a secondary instance and the script did not include WITH MOVE, so it attempted to overwrite the MEM folder for the production DB. These files were locked by SQL Server (the log indicated this too), but the problems were being seen not long after, and the error from the failed backups relates to missing objects in the MEM folder, so this is the likely cause.
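For anyone wondering, the restore to the secondary instance should have looked something like the below (logical file names and paths here are hypothetical) – WITH MOVE relocates every file, including the memory-optimized container, rather than trampling over the production files:

RESTORE DATABASE [DBNameHere_Reporting]
FROM DISK = N'D:\Backup\DBNameHere.bak'
WITH MOVE N'DBNameHere' TO N'E:\DATA\DBNameHere_Reporting.mdf',
     MOVE N'DBNameHere_log' TO N'E:\LOGS\DBNameHere_Reporting.ldf',
     -- the memory-optimized container moves to a folder, not a file
     MOVE N'DBNameHere_mem' TO N'E:\DATA\DBNameHere_Reporting_MEM',
     REPLACE;
GO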

Couple of things to take away from this:

  • Always check and validate your backups
    • These guys did that every night by restoring this DB to another instance for reporting; their problem was ignoring the backup failures / errors
  • Be very wary when looking to implement / try out In-Memory OLTP, especially if you’re throwing it straight into production. Once the filegroup is created it cannot be removed, and if you are unfortunate enough to encounter corruption and don’t have valid backups then you are in a far more difficult situation than if you weren’t using In-Memory OLTP
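To illustrate that last point, below is the sort of thing you might hope would work but doesn’t – the filegroup can only truly be got rid of by dropping the database (the file and filegroup names here are hypothetical):

-- Neither of these will get rid of a MEMORY_OPTIMIZED_DATA filegroup once it's in use
ALTER DATABASE [DBNameHere] REMOVE FILE [DBNameHere_mem];
ALTER DATABASE [DBNameHere] REMOVE FILEGROUP [MemoryOptimizedFG];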
Microsoft Ignite 2021 – Book of News — March 4, 2021

Microsoft Ignite 2021 – Book of News


Microsoft Ignite 2021 is currently underway and if, like me, you can’t attend every session you would like to, well Microsoft have this covered with their Book of News.

There’s a ton of new stuff (very little for Data, which is my primary focus) but this gives a great overview of all the latest information. Some of the highlights for me are:

Enjoy the rest of the conference and if you miss any of it, enjoy the Book of News 😃

Should I split my SQL Server Drives on a SAN in 2021? — February 7, 2021

Should I split my SQL Server Drives on a SAN in 2021?

NOTE: This blog post references HPE as our example but is relevant to other storage vendors out there. I am in no way affiliated with HPE 😉

Back in the day, “when I was a lad“, the recommendation for SQL Server was to split your data, log and tempdb files onto separate drives/LUNs to get the most out of your storage. Jump forward to 2021: is this still relevant, and should I be splitting my SQL Server drives onto separate LUNs on new SAN storage? It’s a question often asked not just by customers but also by their 3rd party managed service providers / hosting companies. The question can also be along the lines of, “Why can’t we just put everything on C:\ because the backend is all on the same LUN?“. This is slightly different as they’re questioning the drive lettering more than creating separate LUNs, but it’s still relevant to this topic.

The majority (not all) of SQL Servers will have a SAN to host their storage, and SANs these days are super duper quick, especially those that have tiered SSD or, even fancier, flash storage. The technical differences between the older spinning rust and the new fancy dan flash storage are not something we’ll delve into as there are plenty of other blogs out there and it’s not really in the scope of this post.

Each storage vendor will (should) provide their own documentation on how their SAN should be configured for SQL Server to get the best bang for your buck. Taking HPE as an example, they have PDFs for their various offerings, including 3PAR/Primera as well as Nimble. Although there are some slight differences, each of them suggests that you SHOULD split your drives onto separate volumes.

I won’t dissect the documents in their entirety, but some of the sections will help with answering the question. These mostly relate to which performance policy to set for your data, log and tempdb files based on the workload (i.e. OLTP / OLAP and size of files):

  1. Array Snapshots and remote replication
    • You may not (won’t) want tempdb as part of this due to its high rate of data change
  2. Volume Block Size
    • According to the documentation, depending on the workload, you may (or may not?) want 8KB for data and 4KB for logs as per their default policy
  3. Caching
  4. Deduplication
  5. Compression
  6. Number of disks available

To provide a great overview, below is a snippet from the HPE Nimble recommendations:

Storage Layout for SQL Server Volumes
In general, using multiple volumes for SQL Server deployments offers several advantages:
• The ability to apply custom performance policy settings to specific volumes
• The ability to control cache use for volumes on adaptive flash arrays
• The choice of limiting replication of certain volumes, such as the tempdb volume
• A gain in performance for I/O-intensive databases, especially if the queue depth of a device is at risk of becoming saturated
• The ability to group databases in separate volume collections and apply custom backup and replication schedules
• The ability to quickly recover specific databases with volume-based restores
Before designing the storage layout, carefully consider the administrative overhead of managing multiple volumes. The ideal solution provides a balance between optimization and administration.

Allocation Unit Size

Something that often comes up during these conversations is how the volumes should be formatted. Regardless of the chosen Performance Policy and indeed volume block size, the default recommendation from HPE is to use a 64KB allocation unit size for your data, log and tempdb volumes. This is only a recommendation; testing in your specific environment will truly give you the answer as to which allocation unit size to set.
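If you want to verify what a volume was actually formatted with, one hedged way of checking the “Bytes Per Cluster” (i.e. the allocation unit size) from within SQL Server is to shell out to fsutil – this assumes xp_cmdshell is enabled on the instance, which it often (quite rightly) isn’t:

-- Look for "Bytes Per Cluster" in the output; 65536 = 64KB
-- Requires sysadmin and xp_cmdshell to be enabled
EXEC master..xp_cmdshell 'fsutil fsinfo ntfsinfo D:';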

Additional Information

Below are some further snippets from the HPE documentation regarding default configurations:

HPE 3PAR/Primera:

Nimble:

Comparison of SSD / tiering uses for SQL Server files:

Conclusion

Should you split your SQL Server drives in 2021? The Nimble documentation has a sentence which sums it up very well:

The ideal solution provides a balance between optimization and administration.

Having everything thrown into a single pool will make life a lot easier for the SAN guy; splitting things up could lead to an administrative nightmare but may be required to get the best out of the storage for your workload.

What I will say is: review your storage vendor’s documentation / recommendations, compare them with the sort of environment / workload you have and, if they fit your specific setup, configure it, test it and use it as a baseline to compare with other configurations.
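As a hedged starting point for that baseline, SQL Server’s own per-file I/O counters let you compare average latencies per file / drive before and after any change in layout:

-- Average read/write latency per database file, worst writers first
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
    AND mf.file_id = vfs.file_id
ORDER BY avg_write_latency_ms DESC;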

If testing proves that their guide / recommendations do not provide optimal performance for your environment then it is perfectly fine to deviate from them, as long as the admin overhead is manageable. All I would add is that whatever configuration decision is made, make sure it is fully documented along with why, as someone several years later will no doubt ask the question 🙂

Docker in 12 Steps – YouTube — December 30, 2020

Docker in 12 Steps – YouTube

I have eventually got around to tidying up my YouTube channel since I moved it and the process made a proper mess of everything 🤦‍♂️

Want to get started with Docker, containers and even SQL Server on Linux in just 12 easy, hands-on steps (short videos)? If you’ve moved onto this sentence then the answer to the previous question must have been YES!

Have a look – I’ve purposely kept them as short as possible; the total time for all 12 videos is less than 90 minutes so there really is no excuse 😉

SQLGeordie – Docker in 12 Steps

Microsoft Ignite 2020 – Book of News — September 24, 2020

Microsoft Ignite 2020 – Book of News

Microsoft Ignite 2020 is currently underway and if, like me, you can’t attend every session you would like to, well Microsoft have this covered with their Book of News.

There’s a ton of new stuff but this gives a great overview of all the latest information including:

Enjoy the conference and if you miss any of it, enjoy the Book of News 😃

Free DataSets – Kaggle — July 5, 2020

Free DataSets – Kaggle

Pretty much anything and everything that you would need for a SQL Server, Azure, AWS, GCP, Power BI etc demo dataset can be found on kaggle. The search facility is easy to use and you can also filter on the particular file type and license you want. I also like the “quicklook” option so you can quickly check to see if indeed it is the sort of data you want as opposed to having to download the data, open it and then realise it’s nothing like what you are after.

I never knew a lot about kaggle and, to be honest, other than being able to get datasets from it, I never had a need to. Taken from Wikipedia: “Kaggle, a subsidiary of Google LLC, is an online community of data scientists and machine learning practitioners. Kaggle allows users to find and publish data sets, explore and build models in a web-based data-science environment, work with other data scientists and machine learning engineers, and enter competitions to solve data science challenges.”

It’s actually no surprise that they’re a subsidiary of Google when you look at most if not all of Google’s datasets – they all seem to link to kaggle….

I’ve included a couple of useful datasets that I’ve used in the past, hopefully you’ll find them useful also:

Back to dataset main page

Free DataSets – Google —

Free DataSets – Google

For most people on the planet with internet access, the first place to visit when looking for anything on the web is Google……….other search engines are available. Not only can you search using the standard search engine, but they also have various spin-offs specifically for searching datasets.

Google tracks dataset searches, allows you to explore them, and provides the ability to download the data either directly from one of their charts or via backlinks to the site it came from. Chances are, kaggle will be one of them, which is quite obvious when they are a subsidiary of Google LLC 🙂

There are various places within google to search the datasets, depending on what it is you’re looking for, below are some examples:

Back to dataset main page

Free DataSets…..for SQL Server, Azure, AWS, Power BI etc etc Demo Purposes —

Free DataSets…..for SQL Server, Azure, AWS, Power BI etc etc Demo Purposes

This is just a short overview page of the various dataset sources I’ve used in the past for my SQL Server, Azure, AWS and Power BI demos. Most if not all of these are free as I don’t like paying for stuff, and there are some whereby you can generate your own data if there is something more specific you need……..just be careful as although it’s “randomly” generated, I have had issues where I pushed the files to GitHub and, as it happens, one of the email addresses randomly generated was that of an employee of a company which I’d never heard of. Pure chance (I believe) but something to be mindful of if that is your intention for use.


Kaggle

Pretty much anything and everything that you would need for a SQL Server, Azure, AWS, GCP, Power BI etc demo dataset can be found on kaggle. The search facility is easy to use and you can also filter on the particular file type and license you want. I also like the “quicklook” option so you can quickly check to see if indeed it is the sort of data you want as opposed to having to download the data, open it and then realise it’s nothing like what you are after.


Google

For most people on the planet with internet access, the first place to visit when looking for anything on the web is Google……….other search engines are available. Not only can you search using the standard search engine, but they also have various spin-offs specifically for searching datasets.


Power BI Sample DataSets

Not only datasets but also Power BI sample pbix files for anyone looking to get started with it.


GenerateData – Create your own

If there is a specific dataset you are looking for, with specific column names and data types, then this is a useful tool. Data is restricted to 100 rows initially unless you sign up / donate, but it’s worth a look:


data.gov.uk

At the time of writing there are 54,846 datasets available on here. Some are a bit bizarre (see below) and are specific to the UK, so unless you’re looking for that kind of thing, it may not be the place for you.

Other very specific examples (apart from that below) I’ve found are “NHS Bassetlaw CCG Expenditure over £25K” and “Planned road works on the HE road network“………..


Super Bizarre DataSets

I’ve not used any of these but saw this article via Brent Ozar’s weekly newsletter – https://thestacker.com/. Some of them sound pretty cool, others are a bit out there such as: “Abandoned Shopping Trolleys in Bristol Rivers” from data.gov.uk and “The Great British Toilet Map“…….

MSBuild 2020 – Build Book of News — May 20, 2020
“Kubernetify” your Containers — February 23, 2020

“Kubernetify” your Containers

Adding the link to GitHub containing the slides and demos from the various events I have delivered this session at:

Github – Kubernetify Your Containers

To see them all I have given the root folder; if you search for “kubernetify” you should see everything needed 👍

If you wish to see recorded demos then you can on the SQLGeordie YouTube channel; a direct link to the playlist can be found at:

Kubernetify Your Containers

Abstract

We have all now had a play around with Docker and containers, or at least heard about them.

This demo-heavy session will walk through some of the challenges around managing container environments and how Kubernetes orchestration can help alleviate some of the pain points.

We will talk about what Kubernetes is and how it works, and through the use of demos we will:

  • Highlight some of the issues with getting set up (specifically Minikube on Ubuntu)
  • Deploy / update containers in Kubernetes (on-prem as well as AKS using Azure DevOps)
  • Persist data
  • Show how to avoid making the same mistakes as I have