First of all I have to apologise to everyone on SQLServerCentral.com for the lateness of this follow-up blog, as I know a lot of you have been requesting a follow-up to Part 1, which was originally blogged there. I originally wrote Part 1 over a year ago (I've recently moved to a new blog site) and started work on Part 2, but never got to the point of actually finishing it.

In Part 1, I wrote about how, every now and again, our live production SQL Server's CPUs would flatline at 100%. This could happen at pretty much any time of the day; there was no consistency in when it would happen. It was becoming a major issue for the company (and also for myself!) because we could not afford any downtime at all, and I was running out of ideas as to how to troubleshoot and ultimately resolve the issue. Then, as if from nowhere, Slava Oks came to the rescue when I came across his blogs on VAS in SQL Server 2005. And this is where I'll start my follow-up, by attempting to go through the steps to track, find and resolve the process causing the 100% CPU.

As stated in Part 1, I'd been through all the usual checks a DBA would go through when troubleshooting any spikes or flatlines in CPU, with no real luck. After reading Slava's blogs on VAS and tying some of it up with the errors we were receiving in the SQL Server error logs, I thought there was a very good chance that VAS was the issue and set about proving it.

NOTE: This VAS issue was on a 32-bit Windows Server 2003 Enterprise box running SQL Server 2005 Enterprise (Service Pack 2, CU9) with 4 x 3GHz Xeon processors and 16GB of RAM.

First off, in order to try and give a bit of leeway between outages, I looked into the startup parameter -g, which sets the amount of MemToLeave (MTL) for SQL Server. By default this is 384MB (256MB + (0.5MB x max worker threads [default: 255]) = 384MB; calculation taken from Era of Data), so I took the decision to add the startup parameter -g512 to give us a bit more to play with and hopefully allow longer periods between these outages. I'm not for one minute suggesting that this is best practice or the right thing to do on your systems, but for us at the time it was a decision we made and it ultimately paid off: our outages went from a weekly occurrence back to roughly one every three weeks. The time between outages had been getting shorter and shorter over time; it started at around one every 4-5 weeks and slowly became more and more frequent, which only added to the confusion.
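
Just to illustrate where that 384MB comes from, here's a quick sketch of the default MemToLeave calculation using the instance's actual worker thread count from sys.dm_os_sys_info – the 256MB base reservation and 0.5MB thread stack size are the 32-bit figures quoted above, so treat the result as indicative rather than gospel:

-- A minimal sketch of the default 32-bit MemToLeave calculation:
-- 256MB fixed reservation + (0.5MB thread stack x max worker threads).
-- max_workers_count comes from sys.dm_os_sys_info; figures are indicative only.
SELECT  256 + (0.5 * max_workers_count) AS Default_MTL_MB,
        max_workers_count
FROM    sys.dm_os_sys_info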

So, with the outages now less frequent, I could start to track VAS usage and see if I could pinpoint what was causing the issue. First of all I had to capture the total available and maximum free VAS during the day and look for trends. The script below creates the table to capture the results and the view (it could be a CTE if you wish) used to get the VAS figures, which was taken from Slava's blog, along with a modified script taken from Christian Bolton's blog which captures the server name, the max free size, the total available and the capture date.


USE DBName
GO

--Create table to capture results
IF EXISTS (SELECT 1 FROM sys.objects WHERE object_id = OBJECT_ID(N'VASAvailability') AND type = N'U')
    DROP TABLE dbo.VASAvailability
GO
CREATE TABLE dbo.VASAvailability
(
    VASAvailabilityID INT IDENTITY(1,1),
    ServerName VARCHAR(100),
    TotalAvailMem_KB INT,
    MaxFreeSize_KB INT,
    CreatedDate DATETIME
)
GO


--Create VAS view
IF EXISTS (SELECT 1 FROM sys.objects WHERE object_id = OBJECT_ID(N'vwVas_Summary') AND type = N'V')
    DROP VIEW dbo.vwVas_Summary
GO
CREATE VIEW dbo.vwVas_Summary AS
SELECT
    Size     = Vas_Dump.Size,
    Reserved = SUM(CASE (CONVERT(INT, Vas_Dump.Base) ^ 0x0) WHEN 0 THEN 0 ELSE 1 END),
    Free     = SUM(CASE (CONVERT(INT, Vas_Dump.Base) ^ 0x0) WHEN 0 THEN 1 ELSE 0 END)
FROM
(
    --- combine all allocations according to their allocation base; don't take
    --- into account allocations with a zero allocation_base
    ---
    SELECT CONVERT(VARBINARY, SUM(region_size_in_bytes)) AS Size,
           region_allocation_base_address AS Base
    FROM   sys.dm_os_virtual_address_dump
    WHERE  region_allocation_base_address <> 0x0
    GROUP BY region_allocation_base_address
    UNION
    (
    --- we shouldn't be grouping allocations with a zero allocation base,
    --- just get them as is
    ---
    SELECT CONVERT(VARBINARY, region_size_in_bytes), region_allocation_base_address
    FROM   sys.dm_os_virtual_address_dump
    WHERE  region_allocation_base_address = 0x0
    )
) AS Vas_Dump
GROUP BY Size
GO
 

Now that the relevant objects were in place to start tracking, I set about creating a SQL Agent job to run every minute and capture the results. This may seem a little extreme, and some people have asked whether every 5 or 10 minutes would have been sufficient, but due to the amount of traffic passing through this system even every minute may not actually have been enough, so I stuck with it as a starter. Below is the script I used to insert the VAS results (total and max free); all that was then required was to create the job and schedule it to run every minute (a sketch of which follows the insert script).


INSERT INTO dbo.VASAvailability (ServerName, TotalAvailMem_KB, MaxFreeSize_KB, CreatedDate)
SELECT  @@SERVERNAME,
        SUM(CONVERT(BIGINT, Size) * Free) / 1024 AS TotalAvailMem_KB,
        CAST(MAX(Size) AS BIGINT) / 1024 AS MaxFreeSize_KB,
        GETDATE()
FROM    dbo.vwVas_Summary
WHERE   Free <> 0

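For completeness, here's a sketch of how the Agent job could be created in T-SQL – the job and schedule names are just examples (I actually created ours through Management Studio), so adjust the names and database to suit:

USE msdb
GO

-- Hypothetical job/schedule names; the step runs the insert script above every minute
EXEC dbo.sp_add_job
     @job_name = N'Capture VAS Availability'
EXEC dbo.sp_add_jobstep
     @job_name = N'Capture VAS Availability',
     @step_name = N'Insert VAS snapshot',
     @subsystem = N'TSQL',
     @database_name = N'DBName',
     @command = N'INSERT INTO dbo.VASAvailability (ServerName, TotalAvailMem_KB, MaxFreeSize_KB, CreatedDate)
SELECT @@SERVERNAME, SUM(CONVERT(BIGINT, Size) * Free) / 1024, CAST(MAX(Size) AS BIGINT) / 1024, GETDATE()
FROM dbo.vwVas_Summary WHERE Free <> 0'
EXEC dbo.sp_add_jobschedule
     @job_name = N'Capture VAS Availability',
     @name = N'Every minute',
     @freq_type = 4,               -- daily
     @freq_interval = 1,
     @freq_subday_type = 4,        -- units of minutes
     @freq_subday_interval = 1     -- every 1 minute
EXEC dbo.sp_add_jobserver
     @job_name = N'Capture VAS Availability',
     @server_name = N'(local)'
GO
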
Over the next couple of weeks I monitored the results, trying to look for any trends or patterns as to when the VAS would drop and not be reclaimed. The nature of VAS is that a process will claim a portion of it but should, in theory, release it once it has finished with it. At first I couldn't find a particular day or even time when the total VAS reduced significantly and never came back. As with the flatlined CPUs, it seemed to happen on any day and at any time. By this point I'd lost most of my hair from ripping it out and was on the verge of launching our system out the window!

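To give an idea of how I eyeballed the captured data, a simple query along these lines (just a sketch against the table created earlier) rolls the samples up by hour, which makes sustained drops that never recover much easier to spot:

-- A simple trend-spotting sketch: the worst (lowest) VAS figures per hour.
-- Sustained drops in Min_MaxFreeSize_KB that never recover are the ones to chase.
SELECT  CONVERT(VARCHAR(13), CreatedDate, 120) AS CaptureHour,  -- yyyy-mm-dd hh
        MIN(TotalAvailMem_KB) AS Min_TotalAvailMem_KB,
        MIN(MaxFreeSize_KB)   AS Min_MaxFreeSize_KB
FROM    dbo.VASAvailability
GROUP BY CONVERT(VARCHAR(13), CreatedDate, 120)
ORDER BY CaptureHour
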
All I could do from here was take sample times from when the VAS dropped and investigate what processes were running at those times. The godsend at the time was having SQL Sentry's Performance Advisor for SQL Server, as it allowed me to quickly interpret what was running at any given moment and saved me god knows how long in setting up server-side traces etc. to monitor this. A few of the time periods I chose were inconclusive due to the amount of traffic going through our system at those times; it could have been any one of over 1000 different procedures – gutted! Over the next few weeks I continued to monitor the results and finally started to see a trend becoming apparent. We seemed to lose a large portion of VAS on a Monday evening, which I had kind of discounted at first because it didn't happen every Monday evening and there wasn't anything major running at that time… or so I thought.

Every evening we had a large overnight XML file import process which ran at various times depending on when we received the file from a 3rd party; this could be anywhere from 8pm until 2am the following morning. What I hadn't factored in is that this process didn't run over the weekend, so on a Monday we had a file roughly 3 times the normal size to cater for 3 days' worth of data. Bearing in mind that a normal file was around 250MB, the Monday evening file would be ~750MB. Now, this normally wouldn't present an issue, as the process ran a procedure using sp_xml_preparedocument and sp_xml_removedocument to clean up after itself, but what I wasn't aware of until speaking with the development team was that their service sat waiting for this file and, as soon as it saw it, would begin processing it – even though it hadn't fully downloaded! Being XML, the import would obviously fail due to the incomplete file, and this could potentially happen 3 or 4 times during the 20-30 minutes it took to download the file (very slow uplink from the 3rd party).

Can you see where I’m going with this?

What I uncovered was that each time an import of the file was attempted, it would run sp_xml_preparedocument, try to process the incomplete file and ultimately bomb out without running sp_xml_removedocument, because error handling was being done by COM+ and there was none in the stored procedure. Something I actually learned from this was that rolling back via COM+ doesn't "unprepare" the XML document; the only way to do that is to deallocate it via sp_xml_removedocument. We amended the procedure to handle reading the XML file with a simple TRY...CATCH, along these lines (@xml below stands in for the incoming XML document parameter):


DECLARE @iDoc INT

BEGIN TRY
    -- Prepare the XML document and do the import (@xml stands in for the incoming XML parameter)
    EXEC sp_xml_preparedocument @iDoc OUTPUT, @xml
    -- ... OPENXML import logic ...
    EXEC sp_xml_removedocument @iDoc
END TRY
BEGIN CATCH
    -- Make sure the document handle is always deallocated, even on failure
    IF @iDoc IS NOT NULL
    BEGIN
        EXECUTE sp_xml_removedocument @iDoc
    END

    -- Raise an error with the details of the exception
    DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int
    SELECT @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY()
    RAISERROR(@ErrMsg, @ErrSeverity, 1)
END CATCH

And that was that! At the time of writing Part 1, we hadn't experienced the issue in the two months since implementing this error handling; at the time of writing this (well, when I started it), it was over six months and we had never experienced the same issue of 100% CPU due to low VAS. We continued to monitor VAS by keeping the agent job running (at a reduced frequency) and having our bespoke proactive monitoring system alert us when it fell below 30MB, warning us that we might be experiencing memory pressure caused by low VAS and that our systems were potentially going to go skits!
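
The alerting itself lived in our bespoke monitoring, but the check behind it boiled down to something like the sketch below – the 30MB threshold is the one mentioned above, and the RAISERROR is purely illustrative of wherever you'd hook your own alerting in:

-- A minimal sketch of the low-VAS check: warn when the largest free block
-- drops below 30MB (the threshold we used); how you alert on it is up to you.
DECLARE @MaxFree_KB BIGINT

SELECT  @MaxFree_KB = CAST(MAX(Size) AS BIGINT) / 1024
FROM    dbo.vwVas_Summary
WHERE   Free <> 0

IF @MaxFree_KB < 30 * 1024
    RAISERROR('Low VAS: largest free block is below 30MB', 16, 1)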
