SQLGeordie's Blog

Helping the SQL Server community… where I can!

Docker and SQL Server on Linux series — June 20, 2018

Docker and SQL Server on Linux series


I’ve been playing around with Docker (more specifically with SQL Server on Linux) since the beginning of 2017 to see if I really can “Build, Ship, Run, Any App Anywhere”, and will be putting together a series of how-tos and issues experienced so you guys don’t spend countless hours trying to investigate and work out what is going on when things do not work.

I’ll be adding links to this main page, as a central point of reference, as and when new posts are added.

MSBuild 2020 – Build Book of News — May 20, 2020

MSBuild 2020 – Build Book of News

MSBuild 2020 is currently underway, and if, like me, you can’t attend every session you would like to, Microsoft have this covered with their Build Book of News.

This gives a great overview of all the latest announcements and information.

Enjoy the conference and if you miss any of it, enjoy the Book of News 😃

How To Fix OBS Black Screen when recording SQL Server Session Demos? — April 2, 2020

How To Fix OBS Black Screen when recording SQL Server Session Demos?

Everyone loves a demo. When I say everyone, I mean I personally love seeing a demo in a session, which is why 99% of the sessions I present will have at least one demo. This is a risky business, especially when dealing with cloud stuff and conference WiFi, which is why I always record my demos in case they’re ever needed.

Those that know me know that I’m tight and don’t like spending money on stuff that I really need, tending to waste it on cars and watches instead, so when it comes to recording I use the FREE open-source Open Broadcaster Software (OBS). This software is more than I’d ever need for recording and, did I mention, it’s FREE!!

However, there has always been a bit of an issue with getting it to work, especially for those fortunate enough to have a separate GPU. My laptop has an NVIDIA Quadro P1000 GPU; not the best out there, but it came with the beast and does a job for me. This, however, is where the issue begins. When you have a separate GPU, OBS doesn’t know which one to use, so when you fire up OBS and choose your Display Capture (i.e. the screen to record) you will see nothing but a black screen. You can see by the shock on my face that I’ve seen this before, but updates seem to reset the changes you need to make (see further down):

Now, the fix used to be very simple, you would:

  1. Open the NVIDIA control panel
  2. Click on “Program Settings” tab
  3. Choose Open Broadcaster Software in “Select a program to customise”
  4. Change the preferred graphics processor to “Integrated Graphics”
  5. Click Apply and re-open OBS

However, recent updates to either Windows, NVIDIA and/or OBS mean that this alone no longer works and you will end up tearing your hair out, as per the image below.

There is now an additional step you need to take, which is to set the OBS exe to run in “Power Saving” mode.

To do this, open Windows Settings -> Graphics Settings and browse to the OBS exe, which can be found in “C:\Program Files\obs-studio\bin\64bit”. Once selected, click Options and choose “Power Saving” for the Graphics Specification, as per the image below:
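If, like me, you end up redoing this after every update, the same “Power Saving” preference can also be set from the command line, as Windows stores per-app GPU preferences in the registry. A sketch, assuming the default OBS install path and the 64-bit exe (GpuPreference=1 corresponds to “Power Saving”):

:: Set OBS to use the power-saving (integrated) GPU
reg add "HKCU\Software\Microsoft\DirectX\UserGpuPreferences" /v "C:\Program Files\obs-studio\bin\64bit\obs64.exe" /t REG_SZ /d "GpuPreference=1;" /f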

Once you have saved this, Display Capture should now show the screen you wish to record your demos on, and we’re in a happy place :).

As to why this is now required, I really don’t know at the moment, and I’m sure in a few months’ time there’ll be something else which stops it from working.

Hopefully this helps others and, if nothing else, it will act as a reference guide for when I have to do this again in a few months and have forgotten the process!

“Kubernetify” your Containers — February 23, 2020

“Kubernetify” your Containers

Adding the link to GitHub, which contains the slides and demos from the various events I have delivered this session at:

Github – Kubernetify Your Containers

To see them all I have linked the root folder; if you search for “kubernetify” you should see everything needed 👍

Abstract

We have all now had a play around with Docker and containers, or at least heard about them.

This demo heavy session will walk through some of the challenges around managing container environments and how Kubernetes orchestration can help alleviate some of the pain points.

We will be talking about what Kubernetes is and how it works and through the use of demos we will:

  • Highlight some of the issues with getting set up (specifically Minikube on Ubuntu)
  • Deploying/updating containers in Kubernetes (on-prem as well as AKS using Azure DevOps; a quick kubectl sketch follows this list)
  • Persisting data
  • How to avoid making the same mistakes as I have
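For a flavour of the deploy/update piece, below is a minimal sketch of the flow using plain kubectl against any of those clusters; the image tags and SA password are just examples, not the ones from the session:

#Deploy SQL Server on Linux, set the mandatory env vars, then roll a new image out
kubectl create deployment mssql --image=mcr.microsoft.com/mssql/server:2017-latest
kubectl set env deployment/mssql ACCEPT_EULA=Y SA_PASSWORD='P@ssw0rd!Demo'
kubectl set image deployment/mssql *=mcr.microsoft.com/mssql/server:2019-latest
kubectl rollout status deployment/mssql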

Speaking — January 22, 2020

Speaking

Upcoming and previous speaking engagements. Links to slides and demos can always be found on GitHub or YouTube. If you would like me to speak at your event, whether in person or remotely, then please contact me.

Upcoming Speaking Engagements

February 29th 2020: Scottish Summit
Scotland (Glasgow)
“Kubernetify” your Containers (45mins)

March 7th 2020: DataMinds.be
Belgium
Database CI/CD with Containers and Azure DevOps

April 3rd 2020: SQLBits London 2020
England (London)
Database CI/CD with Docker Containers and Azure DevOps (45mins)

April 25th 2020: SQLSaturday Denmark
Denmark
Database CI/CD with Docker Containers and Azure DevOps

Previous Speaking Engagements

February 1st 2020: SQLSaturday #927 Edinburgh 2020
Scotland (Edinburgh)
“Kubernetify” your Containers

December 14th 2019: SQLSaturday #910 Slovenia 2019 (Ljubljana)
Slovenia (Ljubljana)
“Kubernetify” your Containers

October 1st 2019: Techorama.nl
Netherlands (Pathé Ede)
Database CI/CD with Containers and Azure DevOps

September 13th 2019: Data Scotland
Scotland (Glasgow)
“Kubernetify” your Containers

June 20th 2019: DataGrillen 2019
Germany (Lingen)
Database DevOps with Containers and Azure DevOps

April 27th 2019: Data In Devon 2019
England (Exeter)
Database DevOps with Containers and Azure DevOps

December 8th 2018: SQLSaturday #782 Slovenia 2018 (Ljubljana)
Slovenia (Ljubljana)
Introduction to Containers

October 8th 2018: SQLRelay (Newcastle)
England (Newcastle)
AWS Glue – Let’s get “stuck” in!

October 2nd 2018: Introduction to Containers (One off for those who missed Data n’ Gravy)
England (Leeds)
Introduction to Containers

September 18th 2018: SQLNorthEast UserGroup
England (Newcastle)
AWS Glue – Let’s get “stuck” in!

September 14th 2018: SQLGLA 2018
Scotland (Glasgow)
AWS Glue – Let’s get “stuck” in!

September 6th 2018: Data Platform User Group (Leeds)
England (Leeds)
AWS Glue – Let’s get “stuck” in!

September 5th 2018: PASS Manchester SQL Server User Group
England (Manchester)
AWS Glue – Let’s get “stuck” in!

August 1st 2018: Introduction to Containers (One off for those who missed Data n’ Gravy)
England (Leeds)
Introduction to Containers

June 22nd 2018: SQLGrillen 2018
Germany (Lingen)
Introduction to Containers

June 12th 2018: Edinburgh Data Platform
Scotland (Edinburgh)
Introduction to Containers

April 28th 2018: Data n Gravy
England (Leeds)
Introduction to Containers

April 19th 2018: Glasgow SQL User Group
Scotland (Glasgow)
Introduction to Containers

November 30th 2017: SQLNorthEast User Group
England (Newcastle)
Introduction to Containers

Database CI/CD with Containers and Azure DevOps — September 29, 2019

AKS SQL Server Error – 0/1 nodes are available: 1 node(s) exceed max volume count — September 21, 2019

AKS SQL Server Error – 0/1 nodes are available: 1 node(s) exceed max volume count

Background

Whilst playing around with my session for Techorama.nl I encountered an error I hadn’t seen previously whilst deploying SQL Server on Linux in Azure Kubernetes Service (AKS):

0/1 nodes are available: 1 node(s) exceed max volume count

The YAML I used was only slightly modified (mainly names) from scripts used on minikube and docker-desktop, so I was a little confused as to why I was getting this in AKS.

As it happens, the reason is that I am tight and don’t like spending money! During testing I drop my AKS node size to the smallest I can get away with, in this case a Standard_B2s (2 vCPU / 4GB RAM), which I’d never had issues with until this particular demo.

When playing around with AKS you may have used a single PersistentVolume (or no volumes at all) but this particular setup had:

  • 1 for system dbs
  • 1 for SQL data files
  • 1 for SQL log files

Which, if you can do maths, equals 3 disks. That is fine for this particular Azure VM size, as a Standard_B2s can have a maximum of 4 data disks attached. However, the issue arises once you start adding additional deployments with the same setup but in a different namespace. This would take me over the threshold of the 4 allowed disks and gives you the error that you have exceeded the max volume count 😦
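If you want to check the data disk limit for a given VM size before you hit it, the Azure CLI will list it for you; a quick sketch (the region is just an example), look for the MaxDataDiskCount column:

az vm list-sizes --location uksouth --output table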

So how do you fix this?

The options are either to scale up your VM size or to alter your deployment to use fewer disks. In my case I could get away with having just 1 disk for system dbs/data/logs. This is a demo environment, so I can do this 🙂
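If scaling up is the route you prefer, one option (a sketch, reusing the placeholder names from my other AKS posts; pick a VM size with enough data disk slots for your volumes) is to add a larger node pool rather than rebuild the cluster:

az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name bigpool --node-count 1 --node-vm-size Standard_D4s_v3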

Upgrading AKS to higher than v1.13.10 — September 15, 2019

Upgrading AKS to higher than v1.13.10

I recently received an email from Microsoft Azure regarding some security vulnerabilities in AKS, advising an upgrade to >= v1.13.10:

Looking in the Azure Portal, there was only an option to upgrade to v1.12.8

and this was confirmed by running:

az aks get-upgrades --resource-group JCL-DevOps --name DevOps-K8s-Test --output table

As it was late at night my brain wasn’t working as it should, but I thought I’d put a quick blog out there to say that if you are on v1.11.5 and want to upgrade to >= v1.13.10 then you have to do this as a 2-stage process, upgrading to v1.12.8 first:

az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.12.8
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.13.10

After upgrading to v1.12.8 you will then have the option to upgrade to v1.13.10 and beyond:

Now that I am up to v1.14.6, there are no further upgrades available:

az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

As to why exactly this is, I haven’t managed to find out yet, but I have to assume it is like a lot of applications: it has to be a staged process. Think upgrading SQL Server 2000 to SQL Server 2019; you can’t do that in 1 upgrade step 🙂

However, I’m a little confused and disappointed that, in this day and age, and with these being minor version upgrades, it can’t be done in one go. Perhaps I’m asking too much…?

The upgrade scripts / path I used were:

az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.12.8
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.13.10
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.14.6
Minikube / SQL Server – Issues log… — September 10, 2019

Minikube / SQL Server – Issues log…

Having played around a bit with minikube running SQL Server containers on both Windows 10 and an Ubuntu Hyper-V VM, I wanted to get “down on paper” some of the issues that I have experienced, so hopefully you won’t make the same mistakes and, probably more so, so that I have a reference for when I inevitably hit the same issue again and have forgotten the fix :(.

I will add to this post as and when I can.

Minikube stop / delete… does not stop or delete

There have been many times when I’ve simply lost the ability to use the minikube Hyper-V VM that is created with a minikube start. I’m yet to figure out why, as it seems very sporadic, but it is certainly an issue if you shut down the host it is running on, in my case a Dell Precision laptop.

There are a host of links on the internet saying to do a “minikube stop” / “minikube start”, however I always seem to experience issues when doing so. My issue being, it hangs when shutting down the VM, so it never gets to the point of being able to delete it. Unfortunately there are no errors or warnings; it just hangs at the point of shutting down the VM, indicating 0% progress.

Why?

This is because I’m a complete idiot! I work primarily from my home office and have my laptop hardwired to the internet, which is great as I don’t have to faff on with WiFi. However, in order for your minikube commands to interact with the minikube VM, it uses SSH, and this only works when WiFi is connected. “Why is that, Chris?” you may ask. Well, when you create a minikube VM using Hyper-V (I believe this isn’t required with VirtualBox, but please correct me if I’m wrong) you need to create and attach an “External” NIC, that is, a virtual network switch that has access to the outside world so it can pull down images amongst a whole host of other cool stuff. My external virtual switch uses the WiFi adapter on the host laptop as its external network, so it kinda makes sense that you need WiFi connected to interact with the VM.

So, if you ever do decide to have a play around with minikube on Windows 10, make sure that even though you are connected to the internet via a wire, you also connect to WiFi 🙂 If you remember this, your life will be a hell of a lot easier.
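For reference, that external virtual switch can be scripted rather than clicked through Hyper-V Manager. A sketch from an elevated PowerShell prompt, assuming your wireless adapter is named “Wi-Fi” (check yours with Get-NetAdapter):

#Create an external virtual switch bound to the WiFi adapter
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Wi-Fi" -AllowManagementOS $true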

Example minikube start command

The command below will create a minikube VM in Hyper-V called “minikube” with 10000MB RAM and 4 vCPUs, and will attach a virtual switch called “ExternalSwitch”, which has to be created separately (in Hyper-V Manager, or via the PowerShell above).

#Start minikube
minikube start --vm-driver=hyperv --memory=10000 --cpus=4 --hyperv-virtual-switch="ExternalSwitch"

After about 4 minutes you should have your shiny new minikube VM available. Output from the terminal is below:

$ minikube start --vm-driver=hyperv --memory=10000 --cpus=4 --hyperv-virtual-switch="External"
minikube v1.3.1 on Microsoft Windows 10 Pro 10.0.18362 Build 18362
Creating hyperv VM (CPUs=4, Memory=10000MB, Disk=20000MB) …
Preparing Kubernetes v1.15.2 on Docker 18.09.8 …
Pulling images …
Launching Kubernetes …
Waiting for: apiserver proxy etcd scheduler controller dns
Done! kubectl is now configured to use “minikube”
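If you want to double-check everything is happy before deploying anything to it, a couple of quick sanity checks:

#Verify the VM is running and the node is Ready
minikube status
kubectl get nodes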

Database CI/CD with Containers (Docker) and Azure DevOps (Demo’s – YouTube) — July 4, 2019

Database CI/CD with Containers (Docker) and Azure DevOps (Demo’s – YouTube)

Recordings of my demos taken for Data In Devon and DataGrillen. Much easier to record them than to provide scripts/screenshots 🙂

CI/CD with Containers and Azure DevOps – 1 Build Pipeline Demo
CI/CD with Containers and Azure DevOps – 2 Pull and Run Image Locally
CI/CD with Containers and Azure DevOps – 3 Release Pipeline Demo
CI/CD with Containers and Azure DevOps – 4 Kubernetes Tour

Azure DevOps – Job Dependencies — March 12, 2019

Azure DevOps – Job Dependencies

I wanted to throw together a quick post here to help out others who may be having issues with running multiple jobs in a build pipeline when there is no consistency in what order they run in.

The Problem

When I first started with VSTS, and ultimately Azure DevOps, I went through many failed builds because the jobs in your pipeline don’t run in the order you’ve built them and would logically expect them to run. The image below shows two build pipeline jobs, but when the build is queued, whether manually or via CI, the second job runs before job #1. In this example the build will fail because job #2 deploys a dacpac to a SQL Server on Linux Docker container (using an Ubuntu agent host), but obviously this cannot be done until the dacpac has been created in job #1, which runs on a VS2017 agent host:

1 – Job running in wrong order

The reason for this is that when you specify multiple jobs in a build pipeline, by default they run in parallel. In my experience I never found that both jobs actually ran in parallel, always one after the other, so it doesn’t quite match what the Microsoft docs state, but it’s not something I’ve ever spent the time investigating further.

This can obviously be very frustrating, especially as (from my testing) there is no consistency in which order they run, but I did find that cancelling a build, or re-running a build straight after a failure, seemed to throw the ordering out of sync where previously it had all been running in the correct order.

The Fix

To stop this sporadic job ordering you can set job dependencies in Azure DevOps. The process is so simple to set up that, if you didn’t know about it and have been tearing your hair out over failed builds due to ordering, you’re going to kick yourself when you see the simplicity.

All you need to do is select the job you wish to start after another job completes (other specific conditions can also be applied), scroll to Dependencies, click the drop-down and the job you want to depend on will be in the list (in this example there is only one):

Select Job Dependency

That’s it, a couple of clicks and your job ordering is sorted 😉
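For anyone defining their pipeline in YAML rather than the classic editor, the same ordering is expressed with dependsOn. A minimal sketch mirroring the scenario above (job names and pool images are examples, not my actual pipeline):

jobs:
- job: BuildDacpac
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - script: echo Build the dacpac here
- job: DeployDacpac
  dependsOn: BuildDacpac   # will not start until BuildDacpac succeeds
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - script: echo Deploy the dacpac to the SQL Server container here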