SQLGeordie's Blog

Helping the SQL Server community… where I can!

Delayed Durability in the wild… — November 14, 2016


Background

We have recently been working on a large data migration project for one of our clients, and I thought I would share how Delayed Durability helped us overcome a performance issue when the solution was moved to the client’s Development domain.

I won't go into details of the project or the finer detail of our proposed solution, as I have plans to put some more content together for that, but in short the migration of the data was to be run by a (large) number of BIML-generated SSIS (Child) packages, one per table to be migrated, derived from a metadata-driven framework, with each stage being run by a master package and all of them run by a MasterOfMaster package.

To maximise throughput, utilise as much processing power as possible, reduce the time the migration would take to run and control the flow, we built a series of sequence containers, each running its own collection of Child Packages. We built the framework in such a way that these could be run in parallel or linearly, and each master package could contain as many containers (no pun intended) of child packages as required. This also allowed us to handle the order that packages were run in, especially those with dependencies, whilst keeping the potential for parallelising (is that a word? No idea but I like it) the whole process as much as possible. Leaving the MaxConcurrentExecutables property at -1 meant we could push the processing to run up to 10 packages at once due to the VM having 8 cores (on Integration; 4 cores on Development), as a value of -1 allows the maximum number of concurrently running executables to equal the number of processors plus two.

A small example of how the MasterOfMaster and a Master Package for a stage looked is shown below:

Each container number could have Parallel and/or Linear Containers and both must succeed before the next Container level can start.

NOTE that this is just an example representation; the naming conventions shown do not reflect the actual solution.

Problem

During development and initial testing on our own hardware, we had the migration running at ~25 minutes for around 600 packages (i.e. tables) covering what we termed RawSource–>Source–>Staging, which was well within the performance requirements for the stage that development was at and for what was initially set out. The rest of this blog post will hone in specifically on Source–>Staging only.

However, once we transferred the solution to the client's development environment, things took a turn for the worse. In our environment we were running VMs with 8 cores, 16GB RAM and SSDs. The client environment was running SQL Server 2016 Enterprise on VMware vSphere 5.5 with 8 vCPUs and 32GB RAM (for Integration; Development was half this), but the infrastructure team had done everything in their power to force all VMs onto the lower tier (i.e. slow disks) of their 3PAR SAN and throttle them in every way possible, just to make things more of a challenge. Even though the VMs themselves were throttled, we were confident we wouldn't see too much of a performance impact, especially as this stage was only a subset of the overall processing; we needed it to be quick, because the full run would only ever get longer and longer.

How wrong we were. On the first run the processing (for Source–>Staging) took around 141 minutes. Yes, you read that right: a full 116 minutes longer than the whole process took on our hardware! Wowza, didn't see that one coming. I won't delve too much into the investigations as, again, that will be saved for another blog post, but essentially we were seeing a huge amount of the WRITELOG wait type since moving to the client environment. We believed this was due to the significant amount of parallel processing (running SSIS packages in parallel, all loading to the same DB) we were doing, and the SAN didn't seem able to handle it. One other thing to note: because truncations are not flagged as errors by the OLE DB Destination in fast load data access mode, some of the packages that weren't a direct copy where we knew the schema was exactly the same were run in non-fast load, i.e. row by row, which puts additional stress on the system as a whole.

I will be blogging at a later date regarding how we managed to get everything running in fast load and handle the truncation via automated testing instead.

Solution

Enter Delayed Durability.

I won't enter into too much detail regarding what this is or how it specifically works as it has been blogged by many others (Paul Randal and Aaron Bertrand to name just a couple), but my favourite description of delayed durability comes from the MSDN blogs, where they refer to it as a "lazy commit". Before you ask: yes, we understood the risks of implementing such a change, but the migration process was always a full drop and reload of the data, so we didn't care if we lost anything as we could simply run the process again.

By setting delayed durability at the database level, we were able to control which databases involved in the process had it enabled without altering the BIML framework or the code itself to handle it at the transaction level. By simply applying this to the Source and Staging databases we reduced the processing time from 141 minutes to 59 minutes. This wasn't exactly perfect, but shaving more than half the time off with one simple change, and pushing the WRITELOG wait stat way, way down the list, was a great start.
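For reference, forcing this at the database level is a one-liner per database; a minimal sketch (DISABLED / ALLOWED / FORCED being the valid options, and the database names here standing in for the real ones):

ALTER DATABASE [Source]  SET DELAYED_DURABILITY = FORCED ;
ALTER DATABASE [Staging] SET DELAYED_DURABILITY = FORCED ;

-- ...and to put things back to full durability once the migration run has finished
ALTER DATABASE [Source]  SET DELAYED_DURABILITY = DISABLED ;
ALTER DATABASE [Staging] SET DELAYED_DURABILITY = DISABLED ;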

As a side note, we have since managed to get the processing from ~59 minutes to ~30 minutes without changing the VM/hardware configuration, but I will leave that for another post.

Proof

When I first set out with this blog post it was only going to be a few paragraphs giving an insight into what we did. However, I thought that all this would be pointless without some visualisation of the processing both before and after.

Row-by-Row with no Delayed Durability

We needed to get a baseline, and where better to start than capturing the metrics through SentryOne. Using Adam Machanic's sp_WhoIsActive, we can see what I was talking about with the WRITELOG wait stat:

spwhoisactive_output
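For anyone wanting to reproduce the capture, the call itself was nothing special; a minimal sketch (the parameters shown are simply ones I tend to reach for, not a prescription):

EXEC dbo.sp_WhoIsActive
      @get_plans = 1            -- include the query plan for each active request
     ,@find_block_leaders = 1 ; -- surface any head blockers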

Granted, the wait times themselves were relatively low, but they were apparent almost every time we hit F5, and when running our wait stat scripts WRITELOG was in the top 3. A sample of the processing indicating this wait stat can also be seen below:

waitstats_writelog
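The wait stat scripts themselves were nothing more exotic than a query against sys.dm_os_wait_stats; a cut-down sketch of the sort of thing we were hitting F5 on (the excluded wait types are an illustrative, not exhaustive, noise filter):

SELECT TOP (10)
        wait_type
       ,waiting_tasks_count
       ,wait_time_ms
       ,signal_wait_time_ms
FROM    sys.dm_os_wait_stats
WHERE   wait_type NOT IN ( 'SLEEP_TASK' , 'LAZYWRITER_SLEEP' , 'BROKER_TO_FLUSH' )
ORDER BY wait_time_ms DESC ;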

As stated previously, overall the Source–>Staging process took 141 minutes and the overall processing from SentryOne PA was captured:

so_fullprocess_output

Row-by-Row with Delayed Durability

When we ran the same process with Delayed Durability, we could see straight away that transactions/sec ramped up from ~7000 to ~12500. Top left shows without Delayed Durability, bottom left with Delayed Durability, and right shows them side by side:

The overall process for Source–>Staging took only 59 minutes. I’ve tried to capture the before/after in the image below, the highlighted section being the process running with Delayed Durability forced:

SO_FullProcess_withDD_Output.png

You can see from this the drastic increase in Transactions/sec and reduction in Log Flushes.

Two package execution time examples (trust me that they are the same package) showing that with Delayed Durability the processing time was only ~43% (166 seconds down to 72 seconds, and 991 seconds down to 424 seconds) of that without Delayed Durability set. Apologies for the poor image quality…

To me that is a huge reduction for such a simple change!

Conclusion

So should you go out and apply this to all your production databases right this second? No, of course you shouldn't. We applied this change to fix a very specific problem in an isolated environment and were willing to take the hit on losing data if the server crashed – are you, or more importantly your company, willing to lose that data? I'm taking an educated guess that this will be a no, but for certain situations and environments this configuration could prove to be very useful.


Why I don’t like NULLs and Nullable Columns — March 25, 2011


As I imagine (or hope), the majority of us adhere to some form of standards in the environment we work in, whether that be having to wear a shirt and tie, being in for 9am or, for those of us in a database or development environment, coding standards.

Everyone who has worked with me knows that one of my standards is NO NULLABLE columns in any new development unless there is an extremely good, valid reason, and to date I've only ever had one developer convince me that they should make their column nullable – and that was because the current system design meant we had to have it without a complete re-write of that part of the system. I don't really want this blog to turn into a big debate as to whether I'm right or wrong on this; it's purely my view based on the many years I've been working with SQL Server and development teams.

There are many blogs out there where DBAs share the same or similar views, such as Thomas LaRock (Blog|Twitter), who talks about the harm NULLs cause as a result of a person not knowing they are there. What I'm going to focus on in this blog is more where people know the NULL values are there and tailor their T-SQL around them, specifically when the value is NULL and they don't want to show NULL on screen, so they replace it with another value.

I've knocked up a quick script to populate a table EMPLOYEE with dummy data for the purposes of this blog. Yes, I've purposely left MODIFIEDBY_ID and MODIFIEDBY_DATE as NULLable, and no, I'm not saying this is the table design anyone should be using in their systems, especially later on.

CREATE TABLE EMPLOYEE
   (
      EMP_ID INTEGER IDENTITY(1000,1)
                     PRIMARY KEY
                     NOT NULL
     ,DATE_OF_BIRTH DATETIME NOT NULL
     ,SALARY DECIMAL(16,2) NOT NULL
     ,EMP_NAME VARCHAR(50) NOT NULL
     ,INSERTED_ID INT NOT NULL
     ,INSERTED_DATE DATETIME NOT NULL
     ,MODIFIEDBY_ID INT NULL
     ,MODIFIEDBY_DATE DATETIME NULL
   )
GO

SET NOCOUNT ON ;

DECLARE @Counter BIGINT ;
DECLARE @NumberOfRows BIGINT
SELECT  @counter = 1
       ,@NumberOfRows = 70000
WHILE ( @Counter < @NumberOfRows )
      BEGIN
            INSERT  INTO EMPLOYEE
                    (
                      DATE_OF_BIRTH
                    ,SALARY
                    ,EMP_NAME
                    ,INSERTED_ID
                    ,INSERTED_DATE
                    )
                    SELECT  CONVERT(VARCHAR(10) , getdate() - ( ( 18 * 365 ) + RAND() * ( 47 * 365 ) ) , 103)
                           ,CONVERT(DECIMAL(16,2) , ( 50000 + RAND() * 90000 ))
                           ,'Dummy Name' + CONVERT(VARCHAR(100) , @Counter)
                           ,CONVERT(INT , rand() * 10000)
                           ,CONVERT(VARCHAR(10) , getdate() - ( ( 18 * 365 ) + RAND() * ( 47 * 365 ) ) , 103)

            SET @Counter = @Counter + 1
      END ; 
     
SELECT  @counter = 1
       ,@NumberOfRows = 25000
WHILE ( @Counter < @NumberOfRows )
      BEGIN
            INSERT  INTO EMPLOYEE
                    (
                      DATE_OF_BIRTH
                    ,SALARY
                    ,EMP_NAME
                    ,INSERTED_ID
                    ,INSERTED_DATE
                    ,MODIFIEDBY_ID
                    ,MODIFIEDBY_DATE
                    )
                    SELECT  CONVERT(VARCHAR(10) , getdate() - ( ( 18 * 365 ) + RAND() * ( 47 * 365 ) ) , 103)
                           ,CONVERT(DECIMAL(16,2) , ( 50000 + RAND() * 90000 ))
                           ,'Dummy Name' + CONVERT(VARCHAR(100) , @Counter)
                           ,CONVERT(INT , rand() * 10000)
                           ,CONVERT(VARCHAR(10) , getdate() - ( ( 18 * 365 ) + RAND() * ( 47 * 365 ) ) , 103)
                           ,CONVERT(INT , rand() * 10000)
                           ,CONVERT(VARCHAR(10) , getdate() - ( ( 18 * 365 ) + RAND() * ( 47 * 365 ) ) , 103)

            SET @Counter = @Counter + 1
      END ;    

I also created a couple of non-clustered indexes that should be utilised by the sample queries below:

CREATE INDEX idx1 ON employee ( MODIFIEDBY_DATE , inserted_date ) INCLUDE ( emp_id ,salary )
CREATE INDEX idx2 ON employee ( MODIFIEDBY_DATE ) INCLUDE ( emp_id ,salary )

So, the table is there and the data populated, now for one of my biggest bugbears:

Query 1:

SELECT  emp_id
       ,salary
       ,isnull(MODIFIEDBY_DATE , inserted_date)
FROM    EMPLOYEE
WHERE   isnull(MODIFIEDBY_DATE , inserted_date) > dateadd(yy , -20 , getdate())

Arrrrgggghhhh!!!! Don't get me wrong, I do understand why tables and columns like this exist, but for a DBA it's a nightmare to tune as any index you may put on these columns will ultimately end in a Scan as opposed to a Seek.

Here is the execution plan:

As a comparison here is the query and execution plan for the query without the ISNULL function:

Query 2:

SELECT  emp_id
       ,salary
       ,MODIFIEDBY_DATE
FROM    EMPLOYEE
WHERE   MODIFIEDBY_DATE > dateadd(yy , -20 , getdate())

Execution Plan:

And a comparison of the two:

SELECT  emp_id
       ,salary
       ,isnull(MODIFIEDBY_DATE , inserted_date)
FROM    EMPLOYEE
WHERE   isnull(MODIFIEDBY_DATE , inserted_date) > dateadd(yy , -20 , getdate())

SELECT  emp_id
       ,salary
       ,MODIFIEDBY_DATE
FROM    EMPLOYEE
WHERE   MODIFIEDBY_DATE > dateadd(yy , -20 , getdate())

That's 95% against 5%!! What a difference!
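If you are stuck with the nullable column, you can at least keep the predicate SARGable by not wrapping the column in a function. One logically equivalent rewrite of Query 1 is to split it into two seekable branches; a sketch only, and whether the optimiser actually chooses seeks will still depend on your indexes and statistics:

SELECT  emp_id
       ,salary
       ,MODIFIEDBY_DATE AS LastModifiedDate
FROM    EMPLOYEE
WHERE   MODIFIEDBY_DATE > dateadd(yy , -20 , getdate())
UNION ALL
SELECT  emp_id
       ,salary
       ,inserted_date
FROM    EMPLOYEE
WHERE   MODIFIEDBY_DATE IS NULL
        AND inserted_date > dateadd(yy , -20 , getdate())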

Now, for bugbear #2 I'll need to add a new column with new values. Again, I'm not saying you should or would ever do this, but for the purposes of the blog:

ALTER TABLE EMPLOYEE
ADD   PreviousSalary DECIMAL(16,2)

So I've now added a new column with all NULL values; however, I don't want my users seeing the word NULL on the front end. Easy, I'll replace it with a zero:

SELECT  emp_id
       ,salary
       ,isnull(MODIFIEDBY_DATE , inserted_date) AS LastModifiedDate
       ,isnull(PreviousSalary,0) AS PreviousSalary
FROM    EMPLOYEE
WHERE   isnull(MODIFIEDBY_DATE , inserted_date) > dateadd(yy , -20 , getdate())

From this you can see that the SQL optimiser will now ignore the index created and scan the clustered index:

Obviously I could amend idx1 to factor in this column or create a new index:

CREATE INDEX idx3 ON employee ( MODIFIEDBY_DATE , inserted_date ) INCLUDE ( emp_id ,salary, PreviousSalary )

And sure enough it will choose that index, but again, it's a scan!

Ok, so you may well be thinking that this doesn't happen often or wouldn't cause too much of an issue on your system, which may very well be correct. However, what about a highly transactional financial system with hundreds of millions of rows? Yes, it does happen! What makes matters even worse is that if you decided you wanted to search for (in this example) PreviousSalary > 10000, you'd then have to handle the NULL values and convert them to another value (typically 0), which begs the question: why is it not set to an arbitrary default value in the first place?

I’ve updated a random selection of PreviousSalary records to now have non-NULL values and added a new index:

CREATE INDEX idx4 ON employee ( PreviousSalary ) INCLUDE ( emp_id ,salary )

Running the query to handle the NULLs will still produce an Index Scan:

SELECT  emp_id
       ,salary
       ,isnull(PreviousSalary,0)
FROM    EMPLOYEE
WHERE   isnull(PreviousSalary,0) > 10000   

If I now update all the NULL values to 0 (I could have used an arbitrary figure such as -1, -100 or even -99999 to indicate a NULL value) and amend the query above, we now get the execution plan:
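The backfill itself is just a one-liner; a minimal sketch, using 0 as the replacement value:

UPDATE  EMPLOYEE
SET     PreviousSalary = 0
WHERE   PreviousSalary IS NULL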

SELECT  emp_id
       ,salary
       ,isnull(PreviousSalary,0)
FROM    EMPLOYEE
WHERE   isnull(PreviousSalary,0) > 10000   

SELECT  emp_id
       ,salary
       ,PreviousSalary
FROM    EMPLOYEE
WHERE   PreviousSalary > 10000
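Better still, once the backfill has been done, make the column NOT NULL with a default so the problem can't creep back in; a minimal sketch (the constraint name is purely illustrative):

ALTER TABLE EMPLOYEE
ALTER COLUMN PreviousSalary DECIMAL(16,2) NOT NULL

ALTER TABLE EMPLOYEE
ADD CONSTRAINT DF_EMPLOYEE_PreviousSalary DEFAULT (0) FOR PreviousSalary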

I think by now everyone will be getting the picture. So how do you rectify this? First and foremost, it's education. Speak with your developers, do "Lunch 'n' Learn" sessions, anything you can to express how much of an issue this can potentially be, and make the time to sit with them and (re)design that part of the system so that you don't have to handle NULLs at either the query level or the front end. If they understand that using the ISNULL function in this way means their query will not be SARGable and will ultimately hurt performance, I'm confident you won't have any issues converting them to this method.

I know that the majority of people who read this may not take anything away from it, but even if a handful of people take this and speak with their development teams about this very simple concept of database design, then you could save yourself major headaches in the future!

Poll: – Index Rebuild / Defrag, what page count filter do you set? — March 11, 2011


After reviewing a number of clients' overnight index defrag/rebuild scripts over the years, I'm intrigued to know how many of us actually set the page count filter when capturing the fragmentation levels.

I know there's a lot of debate around the value to set. Personally I set mine to 100, mainly because (in most circumstances) the client has no requirement for a separate update statistics job, and therefore it's down to the auto stats update (if we've determined it's required to be on) or the overnight index defrag/rebuild job to rebuild them. Even though 10000 pages isn't a great deal on some systems for the stats to be out of date, as a rule of thumb I'd rather play it safe and defrag/update stats on tables with pages > 100.
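For context, the filter I'm talking about is the page_count predicate when reading sys.dm_db_index_physical_stats; a cut-down sketch of the kind of capture I mean (the thresholds are just the ones I tend to use):

SELECT  OBJECT_NAME(ips.[object_id]) AS TableName
       ,i.name AS IndexName
       ,ips.avg_fragmentation_in_percent
       ,ips.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID() , NULL , NULL , NULL , 'LIMITED') ips
        INNER JOIN sys.indexes i ON i.[object_id] = ips.[object_id]
                                AND i.index_id = ips.index_id
WHERE   ips.page_count > 100                        -- the page count filter in question
        AND ips.avg_fragmentation_in_percent > 10   -- only bother with fragmented indexes
ORDER BY ips.avg_fragmentation_in_percent DESC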

So, I'm interested to see what everyone sets, and if there's enough interest I'll blog the results 🙂

I’d appreciate any comments to supplement your selection too….

SQL Server CPU’s at 100%? Anyone checked the VAS (Virtual Address Space)??? – Part 2 (The Fix!) — March 9, 2011


First of all I have to apologise to everyone on SQLServerCentral.com for the lateness of this follow-up blog, as I know a lot of you have been requesting a follow-up to Part 1, which was originally blogged there. I originally wrote Part 1 over a year ago (I've recently moved to a new blog site) and started work on Part 2 but never got to the point of actually finishing it.

In Part 1, I wrote about how every now and again our live production SQL Server's CPUs would flatline at 100%. This could happen at pretty much any time of the day; there was no consistency in when it would happen. It was becoming a major issue for the company (and also for myself!) because we could not afford any downtime at all, and I was running out of ideas as to how to troubleshoot and ultimately resolve the issue. Then, as if from nowhere, Slava Oks came to the rescue when I came across his blogs on VAS in SQL 2005. And this is where I'll start my follow-up, by attempting to go through the steps to track, find and resolve the process causing the 100% CPU.

As stated in part 1, I’d been through all the usual checks a DBA would go through when troubleshooting any spikes or flatlines in CPU with no real luck. After reading Slava’s blogs on VAS and tying some of that up to the errors we were receiving in the sql logs, I thought there was a very good chance that VAS was the issue and set about proving that fact.

NOTE: This VAS issue was on a 32-bit Windows 2003 Enterprise server with SQL Server 2005 Enterprise (Service Pack 2 CU9), 4 x 3GHz Xeon processors and 16GB RAM.

First off, in order to try and give a bit of leeway between outages, I looked into the startup parameter -g, which sets the amount of MemToLeave (MTL) for SQL Server. By default this is set to 384MB (256MB + (0.5MB x max worker threads [default: 255]) = 384MB; calculation taken from Era of Data), so I took the decision to add the startup parameter -g512 to give us a bit more to play with and hopefully allow longer periods between these outages. I'm not for one minute suggesting that this is best practice or the right thing to do on your systems, but for us at the time it was a decision we made and it ultimately paid off: our outages went from a weekly occurrence back to roughly one every three weeks. The time between outages had been getting shorter and shorter over time; it started as one every 4-5 weeks and slowly got more and more frequent, which added to the confusion.

So, with the outages now less frequent, I could start to track the VAS usage and see if I could pinpoint what was causing the issue. First of all I had to capture what the total and maximum free VAS was during the day and look for trends. The scripts below create the table to capture the results, the view used to get the VAS (it could be a CTE if you wish), which was taken from Slava's blog, and a modified script taken from Christian Bolton's blog which captures the server, the max free size, the total available and the capture date.


USE DBName
GO

--Create table to capture results
IF EXISTS(SELECT 1 FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'VASAvailability') AND type = (N'U'))
DROP TABLE dbo.VASAvailability
GO
CREATE TABLE dbo.VASAvailability
(
       VASAvailabilityID INT IDENTITY(1,1),
       ServerName VARCHAR(100),
       TotalAvailMem_KB INT,
       MaxFreeSize_KB INT,
       CreatedDate DATETIME
)
GO


--Create VAS view
IF EXISTS(SELECT 1 FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'vwVas_Summary') AND type = (N'V'))
DROP VIEW dbo.vwVas_Summary
GO
CREATE VIEW dbo.vwVas_Summary AS
SELECT
Size = Vas_Dump.Size,
Reserved = sum(case (convert (INT,Vas_Dump.Base) ^ 0) when 0 then 0 else 1 end),
Free = sum(case (convert (INT,Vas_Dump.Base) ^ 0x0) when 0 then 1 else 0 end)
FROM
(
--- combine all allocation according with allocation base, don't take into
--- account allocations with zero allocation_base
---
SELECT CONVERT (varbinary,sum(region_size_in_bytes)) AS Size,
region_allocation_base_address AS Base
FROM sys.dm_os_virtual_address_dump
WHERE region_allocation_base_address<> 0x0
GROUP BY region_allocation_base_address
UNION
(
--- we shouldn't be grouping allocations with zero allocation base
--- just get them as is
---
SELECT CONVERT (varbinary,region_size_in_bytes), region_allocation_base_address
FROM sys.dm_os_virtual_address_dump
WHERE region_allocation_base_address = 0x0)
)
as Vas_Dump
GROUP BY Size
GO
 

Now that the relevant objects were in place to start tracking, I set about creating a SQL Agent job to run every minute to capture the results. This may seem a little extreme, and some people have asked whether every 5 or 10 minutes would have been sufficient, but due to the amount of traffic passing through this system even every minute may not have actually been enough, so I stuck with it as a starter. Below is the script I used to insert the results of the VAS (total and max); all that was then required was to create the job and schedule it to run every minute.


INSERT INTO dbo.VASAvailability (ServerName, TotalAvailMem_KB, MaxFreeSize_KB, CreatedDate)
SELECT  @@SERVERNAME
       ,SUM(CONVERT(BIGINT , Size) * Free) / 1024 AS [TotalAvailMem_KB]
       ,CAST(MAX(Size) AS BIGINT) / 1024 AS [MaxFreeSize_KB]
       ,GETDATE()
FROM    vwVas_Summary
WHERE   Free <> 0
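Scheduling it was just a standard SQL Agent job wrapping that INSERT as the job step; a rough sketch of how that could be set up (the job/schedule names and the target database are illustrative only):

USE msdb ;
GO
EXEC dbo.sp_add_job         @job_name = N'Capture VAS Availability' ;
EXEC dbo.sp_add_jobstep     @job_name = N'Capture VAS Availability'
                           ,@step_name = N'Insert VAS snapshot'
                           ,@subsystem = N'TSQL'
                           ,@database_name = N'DBName'
                           ,@command = N'INSERT INTO dbo.VASAvailability (ServerName, TotalAvailMem_KB, MaxFreeSize_KB, CreatedDate)
SELECT @@SERVERNAME, SUM(CONVERT(BIGINT, Size) * Free) / 1024, CAST(MAX(Size) AS BIGINT) / 1024, GETDATE()
FROM dbo.vwVas_Summary WHERE Free <> 0;' ;
EXEC dbo.sp_add_jobschedule @job_name = N'Capture VAS Availability'
                           ,@name = N'Every minute'
                           ,@freq_type = 4              -- daily
                           ,@freq_interval = 1
                           ,@freq_subday_type = 4       -- units of minutes
                           ,@freq_subday_interval = 1 ; -- every 1 minute
EXEC dbo.sp_add_jobserver   @job_name = N'Capture VAS Availability' ;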

Over the next couple of weeks I monitored the results of the VAS capture, looking for any trends or patterns as to when the VAS would drop and not be reclaimed. The nature of VAS is that a process will claim a portion of it but should, in theory, release it once finished with it. At first I couldn't find a particular day or even time when the total VAS reduced significantly and never came back. As with the flatlined CPUs, it seemed to be happening on any day and at any time. By this time I'd lost most of my hair due to ripping it out and was on the verge of launching our system out of the window!

All I could do from here was take sample times from when the VAS dropped and investigate what processes were running at those times. The godsend at the time was having SQL Sentry's Performance Advisor for SQL Server, as that allowed me to quickly interpret what was running at any given time and saved me god knows how long in setting up server-side traces etc. to monitor this. A few of the selected time periods I chose were inconclusive due to the amount of traffic going through our system at those times; it could have been any one of over 1000 different procedures – gutted! Over the next few weeks I continued to monitor the results and finally a trend started to become apparent. We seemed to lose a large portion of VAS on a Monday evening, which I had kind of discredited at first because it wasn't every Monday evening and there wasn't anything major running at that time… or so I thought.

Every evening we had a large overnight XML file import process which ran at various times depending on when we received the file from a 3rd party; this could be anywhere from 8pm until 2am the following morning. What I hadn't factored in is that this process didn't run over a weekend, so on a Monday we had a file three times the normal size to cater for three days' worth of data. Bearing in mind that a normal-sized file was around 250MB, the file on a Monday evening would be ~700MB. Now, this normally wouldn't present an issue as the process ran a procedure using sp_xml_preparedocument and sp_xml_removedocument to clean up after itself, but what I wasn't aware of until speaking with the development team was that their service sat waiting for this file and, as soon as it saw the file, it would begin processing it – even though it hadn't fully downloaded! Being XML, the import would obviously fail due to the incomplete file, and this could potentially happen 3 or 4 times during the 20-30 minutes it took to download the file (very slow uplink from the 3rd party).

Can you see where I’m going with this?

What I uncovered was that each time the import of the file was attempted, it would run sp_xml_preparedocument, attempt to process the incomplete file and ultimately bomb out without running sp_xml_removedocument, due to error handling being done by COM+ and nothing in the stored procedure. Something I actually learned from this was that rolling back via COM+ doesn't "unprepare" the XML document; the only way to do that is to deallocate it via sp_xml_removedocument. We amended the process to handle reading the XML file by using a simple TRY...CATCH within the procedure, along the lines of the below (tidied up slightly here, with @XmlText standing in for however the XML is actually passed in):


DECLARE @iDoc INT ;
DECLARE @XmlText NVARCHAR(MAX) ;  -- placeholder for however the XML is passed in

BEGIN TRY
      -- Prepare the document handle, do the import, then tidy up
      EXECUTE sp_xml_preparedocument @iDoc OUTPUT , @XmlText ;
      -- ... shred / import the prepared document here ...
      EXECUTE sp_xml_removedocument @iDoc ;
END TRY
BEGIN CATCH
      -- Always deallocate the document handle, even when the import fails
      IF @iDoc IS NOT NULL
      BEGIN
            EXECUTE sp_xml_removedocument @iDoc ;
      END

      -- Re-raise an error with the details of the exception
      DECLARE @ErrMsg nvarchar(4000) , @ErrSeverity int ;
      SELECT  @ErrMsg = ERROR_MESSAGE()
             ,@ErrSeverity = ERROR_SEVERITY() ;
      RAISERROR(@ErrMsg , @ErrSeverity , 1) ;
END CATCH

And that was that! At the time of writing Part 1 we hadn't experienced the issue in the 2 months since implementing this error handling; at the time of writing this (well, when I started it) it was over 6 months and we had never experienced the same issue of 100% CPU due to low VAS again. We continued to monitor VAS by keeping the agent job running (at a reduced frequency) and having our bespoke proactive monitoring system alert us when it fell below 30MB, to warn us that we might be experiencing memory pressure caused by low VAS and that our systems were potentially going to go skits!
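The alerting itself was part of our bespoke monitoring so I won't reproduce it here, but the check amounts to little more than reading the latest snapshot from the capture table above; a minimal sketch of the idea (the 30MB threshold is the one mentioned above, and the mail profile and recipient are placeholders):

DECLARE @MaxFreeSize_KB INT ;

-- Latest VAS snapshot captured by the agent job
SELECT TOP (1) @MaxFreeSize_KB = MaxFreeSize_KB
FROM    dbo.VASAvailability
ORDER BY CreatedDate DESC ;

IF @MaxFreeSize_KB < 30 * 1024   -- largest free block below 30MB
BEGIN
      EXEC msdb.dbo.sp_send_dbmail
            @profile_name = N'DBA Mail Profile'
           ,@recipients = N'dba.team@example.com'
           ,@subject = N'Low VAS warning'
           ,@body = N'Largest free VAS block has dropped below 30MB.' ;
END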


SQL Server CPU’s at 100%? Anyone checked the VAS (Virtual Address Space)???

As a DBA, you can almost guarantee that at some point you'll experience a server maxing out its CPUs at 100%. Here is my latest experience with this situation, which I've decided to blog as I found whilst researching that there isn't much in the way of documentation for this. Hope it helps many others out there……

This issue was on a Windows 2003 Enterprise server with SQL Server 2005 Enterprise (Service Pack 2 CU9), 4 x 3GHz Xeon processors and 16GB RAM.

Problem:

Every now and again (roughly every 2 weeks) our live production SQL Server's CPUs would flatline at 100%. This could happen at pretty much any time of the day; there was no consistency in when it would happen: it could be first thing in the morning, mid-afternoon or late evening.

Solution:

Instinct and experience told me that CPUs at 100% would no doubt be down to a big process that probably shouldn't be running during peak times (an AV scan or the like), or a query with a lot of big table/index scans or hash joins. As I knew we didn't run AV scans during peak times, and from Task Manager I could see that the biggest CPU hogger was SQL Server itself, my primary focus was to investigate the procedures currently running. Knowing that only myself and the technical architect have access to the live production environment, and that I wasn't running anything and neither was he, it had to be something being run from our in-house VB.NET system.

Starting from the beginning, I started simple with sp_who2 and sp_lock. These themselves could take up to 30 seconds to return any output! When they did, the number of processes running was about normal, but the one thing that did stand out was the number of blocked processes. Since there were a lot of procedures blocking a number of other processes, I decided to look into sysprocesses for long-running processes and queries, open transactions and high CPU time to see if anything stood out as the problem. However, with the CPUs at 100%, EVERYTHING was running slow. Queries that normally took less than 500ms were taking up to a minute, which as you can imagine is not acceptable in a live environment. Nothing stood out as an issue; those that seemed like potential candidates were killed (risky, I know). The daily profiler trace we run was difficult to analyse because of the above-mentioned issue: every single query going through the system was off the scale. With this in mind, I stopped the Profiler and even tried restarting the agent to see if it was a transactional replication issue – but the CPUs were still at 100%.

So, to re-cap.

  • There were no external / 3rd party processes running on this box as it's a dedicated SQL Server; the process with high CPU was SQL Server itself.
  • There were no long-running queries / high CPU times that stood out; almost every procedure had an open transaction as they were taking so long to process.

The only option I had was to fail the server over, as I now had department heads lined up at my desk asking "is it fixed yet? is it fixed yet? is it fixed yet?". Sure enough, fail the servers over or restart the SQL service and all was well again.

Checking the SQL logs showed nothing other than the fact that I'd failed the server over and that the every-30-minutes (Redgate) SQL log backups had failed, so no luck there… or so I thought (see further down). I tried trawling the profiler trace around the time the high CPU issue began, but nothing jumped out. After this had happened a couple of times it was becoming a very big problem. I was in a very difficult situation: nothing I searched after the incidents showed anything, and I had roughly 5 minutes to investigate whilst the issue was apparent.

My next train of thought was a memory leak. I trawled Microsoft and Google looking for any patches or KB articles referring to memory leaks, but everything I found was either related to SQL 2000 or fixed in earlier SQL Server cumulative updates than the one we had installed, and the one I found relating to McAfee turned out to be a dead end as we had a later version than the one mentioned as having the memory leak issue.

I decided to take the slight performance hit and run disk, CPU and memory perfmon counters, as well as running Quest's Spotlight to monitor the situation. Once again, when the high CPU issue arose all I got was basically that: CPUs maxed out. Matching perfmon up to the profiler trace didn't bring anything to light. By now I was running out of ideas. I tried turning hyper-threading off and restricting affinity to 2 CPUs instead of 4, but still no luck – every couple of weeks all the CPUs were still maxing out!

So, I once again returned to the log files and delved further into the one common theme I could find when this issue arose: Redgate VDI / log backup failures.

The error log (cut down) showed:

2009-01-19 17:15:00.59 spid435 Error: 18210, Severity: 16, State: 1.
2009-01-19 17:15:00.59 spid435 BackupVirtualDeviceSet::Initialize: Request large buffers failure on backup device ‘SQLBACKUP_C9D0A658-E6F1-4A03-8CC7-AC8466A5B202’. Operating system error 0x8007000e(Not enough storage is available to complete this operation.)

This was initially seen (by myself and colleagues) as a red herring, due to there being more than enough storage available to initiate the backups. Looking a bit further (there were thousands of errors output during this failure) I found errors such as:

Event Type: Error
Event Source: SQLVDI
Event Category: None
Event ID: 1
Date: 14/02/2009
Time: 2:10:03 PM
User: N/A
Computer: SqlBox1
Description:
SQLVDI: Loc=BufferAreaManager::MapBuffers. Desc=Out Of Address Space. ErrorCode=(8)Not enough storage is available to process this command.
. Process=512. Thread=2500. Server. Instance=SQL2000. VD=Global\SQLBACKUP_690D9E54-3147-4530-B037-7BA589B1B005_SQLVDIMemoryName_

Virtual Address Space? Aha! Now we’re getting somewhere. I searched Redgate for this VDI error and came up with this forum post: http://www.red-gate.com/messageboard/viewtopic.php?t=4240&postdays=0&postorder=asc&start=0

Now, although I was having similar issues relating to VDI, we didn't seem to be having the common issue of 100% CPUs, but I did decide to monitor the server using EXEC master..sqbmemory and this proved to be very interesting. When our server was maxing out at 100%, I ran this procedure and lo and behold, the maximum free block of VAS was 3MB! I restarted the SQL service, checked it again and it was back to circa 300MB.

This led me to believe that it was almost certainly a VAS pressure issue we were experiencing, possibly created by Redgate SQL Backup. I decided to research VAS further and found a brilliant blog by Slava Oks which can be found at: http://blogs.msdn.com/slavao/

I decided to monitor VAS by capturing the free blocks and total available every minute to see if I could trap what was actually eating it up. Over the next few weeks it became apparent that we had a large XML file (>500MB) which was imported (using sp_xml_preparedocument and sp_xml_removedocument) every evening, and at some point during this import the total VAS available dropped drastically. Taking this a step further, I decided to capture the VAS at every step of this import and found that one section seemed to be the culprit. The ins and outs of this are beyond the scope of this post but I may blog it in the future.

So, we amended the procedure and all VAS issues were resolved – we've not had a 100% CPU issue in over two months now (touch wood!). I am still monitoring the VAS (with alerts in place for if it drops too low) but I'm now 99.9% certain (there's always room for doubt 🙂 ) that this issue is resolved. Took a while, but got there in the end!!!

Anyone else had similar issues?

Below I have listed a couple of links to descriptions of VAS for you to either brush up on or even learn a thing or two.

Virtual Address Space (VAS):

http://msdn.microsoft.com/en-us/library/aa366912(VS.85).aspx

http://en.wikipedia.org/wiki/Virtual_address_space