SQLGeordie's Blog

Helping the SQL Server community… where I can!

How To Fix OBS Black Screen when recording SQL Server Session Demos? — April 2, 2020

Everyone loves a demo. When I say everyone, I mean I personally love seeing a demo in a session, which is why 99% of the sessions I present have at least one. Demos are a risky business, especially when dealing with cloud stuff and conference WiFi, which is why I always record mine in case they're ever needed.

Those that know me know that I'm tight and don't like spending money on stuff I really need, preferring to waste it on cars and watches instead, so when it comes to recording I use the FREE, open source Open Broadcaster Software (OBS). This software is more than I'd ever need for recording and, did I mention, it's FREE!!

However, there has always been a bit of an issue with getting it to work, especially for those fortunate enough to have a separate GPU. My laptop has an NVIDIA Quadro P1000 GPU; not the best out there, but it came with the beast and does a job for me. This, however, is where the issue begins. When you have a separate GPU, the software doesn't know which one to use, so when you fire up OBS and choose your Display Capture (i.e. the screen to record) you will see nothing but a black screen. You can see by the shock on my face that I've seen this before, but updates seem to reset the changes you need to make (see further down):

Now, the fix used to be very simple; you would:

  1. Open the NVIDIA control panel
  2. Click on “Program Settings” tab
  3. Choose the Open Broadcaster Software in “Select a program to Customise”
  4. Change the preferred graphics processor to “Integrated Graphics”
  5. Click Apply and re-open OBS

However, recent updates to either Windows, NVIDIA and/or OBS have meant that this alone no longer works and you will end up tearing your hair out, as per the image below.

There is now an additional step you need to undertake, which is to set the OBS exe to run in "Power Saving" mode.

To do this, navigate to Settings -> Graphics Settings and browse to the OBS exe, which can be found at "C:\Program Files\obs-studio\bin\64bit". Once selected, click Options and choose "Power Saving" for the Graphics Specification, as per the image below:

Once you have saved this, the Display Capture should show the screen you wish to record your demos on, and we're in a happy place :).
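As an aside, the Graphics Settings page appears to store these per-app preferences in the registry under HKCU\Software\Microsoft\DirectX\UserGpuPreferences, so in theory the change can be scripted. Treat this as an unverified sketch (the OBS exe name and install path here are my assumptions, check yours):

# Unverified sketch: "Power Saving" in the Graphics Settings UI appears to map to GpuPreference=1
$exe = 'C:\Program Files\obs-studio\bin\64bit\obs64.exe'   # assumed install path
$key = 'HKCU:\Software\Microsoft\DirectX\UserGpuPreferences'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name $exe -Value 'GpuPreference=1;'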

As to why this is now required, I really don't know at the moment, and I'm sure in a few months' time there'll be something else which stops it from working.

Hopefully this helps others and, if nothing else, will act as a reference guide for when I have to do this again in a few months and have forgotten the process!

“Kubernetify” your Containers — February 23, 2020

Adding the link to GitHub, which contains the slides and demos from the various events I have delivered this session at:

Github – Kubernetify Your Containers

To see them all I have linked the root folder; if you search for "kubernetify" you should find everything needed 👍

Abstract

We have all now had a play around with Docker and Containers or at least heard about them.

This demo heavy session will walk through some of the challenges around managing container environments and how Kubernetes orchestration can help alleviate some of the pain points.

We will be talking about what Kubernetes is and how it works and through the use of demos we will:

  • Highlight some of the issues with getting set up (specifically Minikube on Ubuntu)
  • Deploying/updating containers in Kubernetes (on-prem as well as AKS using Azure DevOps)
  • Persisting data
  • How to avoid making the same mistakes as I have

Speaking — January 22, 2020

Upcoming and previous speaking engagements. Links to slides and demos can always be found on GitHub or YouTube. If you would like me to speak at your event, whether in person or remotely, then please contact me.

Upcoming Speaking Engagements

February 29th 2020: Scottish Summit
Scotland (Glasgow)
“Kubernetify” your Containers (45 mins)

March 7th 2020: DataMinds.be
Belgium
Database CI/CD with Containers and Azure DevOps

April 3rd 2020: SQLBits London 2020
England (London)
Database CI/CD with Docker Containers and Azure DevOps (45 mins)

April 25th 2020: SQLSaturday Denmark
Denmark
Database CI/CD with Docker Containers and Azure DevOps

Previous Speaking Engagements

February 1st 2020: SQLSaturday #927 Edinburgh 2020
Scotland (Edinburgh)
“Kubernetify” your Containers

December 14th 2019: SQLSaturday #910 Slovenia 2019 (Ljubljana)
Slovenia (Ljubljana)
“Kubernetify” your Containers

October 1st 2019: Techorama.nl
Netherlands (Pathé Ede)
Database CI/CD with Containers and Azure DevOps

September 13th 2019: Data Scotland
Scotland (Glasgow)
“Kubernetify” your Containers

June 20th 2019: DataGrillen 2019
Germany (Lingen)
Database DevOps with Containers and Azure DevOps

April 27th 2019: Data In Devon 2019
England (Exeter)
Database DevOps with Containers and Azure DevOps

December 8th 2018: SQLSaturday #782 Slovenia 2018 (Ljubljana)
Slovenia (Ljubljana)
Introduction to Containers

October 8th 2018: SQLRelay (Newcastle)
England (Newcastle)
AWS Glue – Let’s get “stuck” in!

October 2nd 2018: Introduction to Containers (One off for those who missed Data n’ Gravy)
England (Leeds)
Introduction to Containers

September 18th 2018: SQLNorthEast UserGroup
England (Newcastle)
AWS Glue – Let’s get “stuck” in!

September 14th 2018: SQLGLA 2018
Scotland (Glasgow)
AWS Glue – Let’s get “stuck” in!

September 6th 2018: Data Platform User Group (Leeds)
England (Leeds)
AWS Glue – Let’s get “stuck” in!

September 5th 2018: PASS Manchester SQL Server User Group
England (Manchester)
AWS Glue – Let’s get “stuck” in!

August 1st 2018: Introduction to Containers (One off for those who missed Data n’ Gravy)
England (Leeds)
Introduction to Containers

June 22nd 2018: SQLGrillen 2018
Germany (Lingen)
Introduction to Containers

June 12th 2018: Edinburgh Data Platform
Scotland (Edinburgh)
Introduction to Containers

April 28th 2018: Data n Gravy
England (Leeds)
Introduction to Containers

April 19th 2018: Glasgow SQL User Group
Scotland (Glasgow)
Introduction to Containers

November 30th 2017: SQLNorthEast User Group
England (Newcastle)
Introduction to Containers

Database CI/CD with Containers and Azure DevOps — September 29, 2019
AKS SQL Server Error – 0/1 nodes are available: 1 node(s) exceed max volume count — September 21, 2019

Background

Whilst playing around with my session for Techorama.nl, I encountered an error I hadn't seen previously when deploying SQL Server on Linux in Azure Kubernetes Service (AKS):

0/1 nodes are available: 1 node(s) exceed max volume count

The YAML I used was only slightly modified (mainly names) from scripts used on minikube and docker-desktop, so I was a little confused as to why I was getting this in AKS.

As it happens, the reason is that I am tight and don't like spending money! During testing etc. I drop my AKS node size to as small as I can have it, in this case a Standard_B2s (2 vCPU / 4 GB RAM), which I'd never had issues with until this particular demo.

When playing around with AKS you may have used a single PersistentVolume (or no volumes at all) but this particular setup had:

  • 1 for systemdbs
  • 1 for SQL data files
  • 1 for SQL log files

Which, if you can do maths, equals 3 volumes. That's fine for this particular Azure VM size, as you can attach a maximum of 4 disks. However, the issue arises once you start adding additional deployments with the same setup but in a different namespace. This takes me over the threshold of the 4 allowed disks and gives the error that you have exceeded the max volume count 🙁
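If you hit this, the scheduler events spell it out. Something like the following (pod and namespace names here are purely illustrative) shows the Pending reason and how many disk-backed claims you already have:

# The Events section at the bottom shows the "exceed max volume count" message
kubectl describe pod <mssql-pod-name> -n <namespace>

# Each bound PVC is an attached Azure disk on the node
kubectl get pvc --all-namespaces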

So how do you fix this?

The options are to either scale up your VM size or alter your deployment to use fewer disks. In my case I could get away with only having 1 disk for systemdbs/data/logs. This is a demo environment so I can do this 🙂
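As a rough sketch of the single-disk approach (names are illustrative, not my actual demo scripts), one claim replaces the three:

# One PersistentVolumeClaim instead of three - a single Azure disk on the node
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi

In the deployment, mount that one claim at /var/opt/mssql (where SQL Server on Linux keeps system dbs, data and log files by default) and everything lands on the same disk.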

Upgrading AKS to higher than v1.13.10 — September 15, 2019

I recently received an email from Microsoft Azure regarding some security vulnerabilities in AKS, advising an upgrade to >= v1.13.10:

Looking in the Azure Portal, the only option available was to upgrade to v1.12.8

and this was confirmed by running:

az aks get-upgrades --resource-group JCL-DevOps --name DevOps-K8s-Test --output table

As it was late at night my brain wasn't working as it should, but I thought I'd put a quick blog out there to say that if you are on v1.11.5 and want to upgrade to >= v1.13.10, you have to do this in a two-stage process by upgrading to v1.12.8 first:

az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.12.8
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.13.10

After upgrading to v1.12.8 you will now have the option to upgrade to v1.13.10 and then above that:

Now that I am up to 1.14.6, there are no further updates available:

az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
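If you want to double-check what the cluster is actually running rather than relying on the portal, querying the version directly also works:

az aks show --resource-group myResourceGroup --name myAKSCluster --query kubernetesVersion --output tsv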

As to why exactly this is, I haven't managed to find out yet, but I have to assume that, like a lot of applications, it has to be a staged process – think upgrading SQL Server 2000 to SQL Server 2019, you can't do that in one upgrade step 🙂

However, I'm a little confused and disappointed that, in this day and age, and with these being minor version upgrades, it can't be done in one go – perhaps I'm asking too much…?

The upgrade scripts / path I used were:

az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.12.8
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.13.10
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.14.6
Minikube / SQL Server – Issues log… — September 10, 2019

Having played around a bit with minikube running SQL Server containers on both Windows 10 and an Ubuntu Hyper-V VM, I wanted to get "down on paper" some of the issues I have experienced, so that hopefully you won't make the same mistakes and, probably more so, as a reference for me, since I keep forgetting every time I hit the same issue :(.

I will add to this post as and when I can.

Minikube stop / delete… does not stop or delete

There have been many times when I've simply lost the ability to use the minikube Hyper-V VM that is created with a minikube start. I've yet to figure out why this is, as it seems very sporadic, but it is certainly an issue if you shut down the host it is running on, in my case a Dell Precision laptop.

There are a host of links on the internet saying to do a "minikube stop" / "minikube start", however I always seem to experience issues when doing so. My issue is that it hangs when shutting down the VM, so it never gets to the point of being able to delete it. Unfortunately there are no errors or warnings; it just hangs at the point of shutting down the VM, indicating 0% progress.
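If it does wedge itself completely, one blunt way out (destructive – it throws the whole cluster away) is to turn the VM off at the Hyper-V level and then delete it; a sketch, run from an elevated PowerShell prompt:

# Force the hung VM off at the hypervisor level, then remove the cluster entirely
Stop-VM -Name minikube -TurnOff
minikube delete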

Why?

This is because I'm a complete idiot! I work primarily from my home office and have my laptop hardwired to the internet, which is great as I don't have to faff on with WiFi. However, in order for your minikube commands to interact with the minikube VM, it uses SSH, and this only works when WiFi is connected. "Why is that, Chris?" you may ask. Well, when you create a minikube VM using Hyper-V (I believe it isn't required with VirtualBox, but please correct me if I am wrong) you need to create and attach an "External" NIC – that is, a virtual network switch that has access to the outside world so it can pull down images, amongst a whole host of other cool stuff. My external virtual switch uses the WiFi adapter on the host laptop as its external network, so it kinda makes sense that you need WiFi connected to interact with the VM.

So, if you ever do decide to have a play around with minikube on Windows 10, make sure that even though you are connected to the internet via a wire, you also connect to WiFi 🙂 If you remember this, your life will be a hell of a lot easier.
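For reference, the external switch itself can also be created from PowerShell rather than Hyper-V Manager; a sketch, assuming your wireless adapter is named "Wi-Fi":

# Create an external vSwitch bound to the WiFi adapter;
# -AllowManagementOS keeps the host's own connectivity through it
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Wi-Fi" -AllowManagementOS $true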

Example minikube start command

The command below will create a minikube VM using Hyper-V called "minikube" with 10000 MB RAM and 4 vCPUs, and will attach a virtual switch called "ExternalSwitch", which has to be created separately in Hyper-V Manager (or via the PowerShell sketch above).

#Start minikube
minikube start --vm-driver=hyperv --memory=10000 --cpus=4 --hyperv-virtual-switch="ExternalSwitch"

After about 4 minutes you should have your shiny new minikube VM available. Output from the terminal is below:

$ minikube start --vm-driver=hyperv --memory=10000 --cpus=4 --hyperv-virtual-switch="External"
minikube v1.3.1 on Microsoft Windows 10 Pro 10.0.18362 Build 18362
Creating hyperv VM (CPUs=4, Memory=10000MB, Disk=20000MB) …
Preparing Kubernetes v1.15.2 on Docker 18.09.8 …
Pulling images …
Launching Kubernetes …
Waiting for: apiserver proxy etcd scheduler controller dns
Done! kubectl is now configured to use "minikube"

Database CI/CD with Containers (Docker) and Azure DevOps (Demos – YouTube) — July 4, 2019

Recordings of my demos taken for Data In Devon and DataGrillen. Much easier to record them than provide scripts/screenshots 🙂

CI/CD with Containers and Azure DevOps – 1 Build Pipeline Demo
CI/CD with Containers and Azure DevOps – 2 Pull and Run Image Locally
CI/CD with Containers and Azure DevOps – 3 Release Pipeline Demo
CI/CD with Containers and Azure DevOps – 4 Kubernetes Tour

Azure DevOps – Job Dependencies — March 12, 2019

I wanted to throw together a quick post here to help out others who may be having issues with running multiple jobs in a build pipeline and finding no consistency in the order they run in.

The Problem

When I first started with VSTS and, ultimately, Azure DevOps, I went through many failed builds because the jobs in your pipeline don't run in the order you've built them and would logically believe them to run. The image below shows two build pipeline jobs, but when the build is queued, whether manually or via CI, the second job runs before job #1. In this example the build will fail because job #2 deploys a dacpac to a SQL Server on Linux Docker container (using an Ubuntu agent host), but obviously this cannot be done until the dacpac has been created in job #1, which runs on a VS2017 agent host:

1 – Job running in wrong order

The reason for this is that when you specify multiple jobs in a build pipeline, by default they run in parallel. In my experience I never found that both jobs ran in parallel – always one after the other – so it doesn't quite seem to match what the Microsoft docs state, but it's not something I've ever spent the time investigating further.

This can obviously be very frustrating, especially as (from my testing) there is no consistency in which order they run, but I did find that cancelling a build or re-running a build straight after a failure seemed to throw it out of sync, whereas previously it was all running in the correct order.

The Fix

To stop this sporadic job ordering you can set job dependencies in Azure DevOps. The process is so simple to set up that if you didn't know about it and have been tearing your hair out over failed builds due to ordering, you're going to kick yourself when you see the simplicity.

All you need to do is select the job you wish to start after another job completes (other specific conditions can be applied), scroll to Dependencies and click the drop-down; the job you want to depend on will be in the list (in this example there is only one):

Select Job Dependency

That's it – a couple of clicks and your job ordering is sorted 😉
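For anyone using YAML pipelines rather than the classic editor, the same thing is expressed with the dependsOn keyword; a minimal sketch with illustrative job names:

jobs:
- job: BuildDacpac
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - script: echo "build the dacpac here"
- job: DeployDacpac
  dependsOn: BuildDacpac    # won't start until BuildDacpac completes
  condition: succeeded()    # ...and only if it succeeded
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - script: echo "deploy the dacpac to the container here"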

CI/CD with Docker and Azure DevOps – Part 2 (Creating an Azure DevOps Pipeline) — February 14, 2019

For video demos please see: Database CI/CD with Containers (Docker) and Azure DevOps (Demos – YouTube)

Introduction

In Part 1 of this series we went about setting up our Azure DevOps account, creating a project and adding a database project to it. In Part 2 we will run through creating a new build pipeline, building a new Docker image and pushing it to DockerHub.

NOTE: In Part 3 we will change to using Azure Container Registry (ACR) for two reasons:

  • To highlight the issues with using DockerHub with Azure DevOps
  • Because we can 🙂

Before we begin creating our build pipeline, it is advised that a service connection to Docker Hub is created (we will also be creating one for Kubernetes in Part 3). This means we aren't entering passwords or other secure information into our YAML file.

We can do this by selecting Service Connections from the Project Settings. From the image below you can see that there is a large variety of service connections that can be created, but we will be choosing Docker Registry:

Simply give the connection a name and enter the Docker ID and Password:

NOTE: once you have created this you need to re-save your pipeline before it will work. This Resource Authorization link provides more information, but Microsoft are working on improving the user experience for this.

Now that we have created the service connection, we can look to create our build pipeline. Select Builds from the menu on the left and click "New Pipeline":

Select the repository where the project you wish to build resides; in our case it is Azure Repos:

Select the project we have created – "TestProj":

You will now be presented with the option to use a Build Pipeline template or start from scratch.

One of the templates is “Docker Image” so we will choose that one:

This will auto-generate some YAML code to get you started:

As we are using DockerHub as opposed to ACR, we have to make a change to the azure-pipelines.yml file which will be used.

This link provides more information but in short we need to change the filename:

If you have a Docker Hub account, and you want to push the image to your Docker Hub registry, use the web UI to change the YAML file in the build pipeline from azure-pipelines.yml to azure-pipelines.docker.yml. This file is present at the root of your sample repository.

https://docs.microsoft.com/en-gb/azure/devops/pipelines/languages/docker?view=azure-devops&tabs=yaml#example

Once you have made the change, annoyingly you don't seem to be able to exit from the file with a simple "Save"; you have to "Save And Run", which will initiate a failed build.

You can pull the latest changes locally and view/change the file in VS if you prefer:

NOTE: You will also need to update the Pipeline to use the new file. You can do this using the Visual Editor:

So, we now have our YAML file name updated and committed, as well as the build pipeline updated to use it. However, before we proceed we need an actual Docker image, pushed to our Docker Hub repo.

Pull the latest SQL Server 2019 on Linux image to the local repository:

docker pull mcr.microsoft.com/mssql/server #This will pull the latest version 
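If you would rather not chase whatever "latest" happens to point at, you can pin a specific tag instead; for example (check the available tags on MCR for the exact name):

docker pull mcr.microsoft.com/mssql/server:2019-latest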

Now push this image up to Docker Hub, giving it the tag "testprojsql2019":

docker tag mcr.microsoft.com/mssql/server:latest sqlgeordie/azuredevops:testprojsql2019
docker push sqlgeordie/azuredevops:testprojsql2019

Using VSCode for output:

We're not quite ready to run our build; the build pipeline doesn't create a Dockerfile, so we need to create this ourselves. If we don't, we get this error:

unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/vsts/work/1/s/Dockerfile: no such file or directory

Dockerfile

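# Start from the image we pushed to Docker Hub earlier and add a working folder for SQL scripts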
FROM sqlgeordie/azuredevops:testprojsql2019
RUN mkdir -p /usr/src/sqlscript
WORKDIR /usr/src/sqlscript
CMD /bin/bash  

Now, we have to amend the YAML file to log in to DockerHub so we are able to pull down the image in order to build using the Dockerfile. You will notice in the image below that I have highlighted "build an image"; the reason for this is relevant in the next section.

Build input:

steps:
- task: Docker@1
  displayName: 'Build an image'
  inputs:
    containerregistrytype: 'container Registry'
    dockerRegistryEndpoint: sqlgeordiedockerhub
    imageName: 'sqlgeordie/azuredevops:$(Build.BuildId)'
    command: build an image
    dockerFile: '**/Dockerfile'

Login input:

- task: Docker@1
  displayName: Login
  inputs:
    containerregistrytype: 'container Registry'
    dockerRegistryEndpoint: sqlgeordiedockerhub
    command: login

Push Input:

- task: Docker@1
  displayName: 'Push an image'
  inputs:
    command: push an image
    imageName: 'sqlgeordie/azuredevops:$(Build.BuildId)'

There are examples in the docs on GitHub which have (in my opinion) errors. For example, I mentioned earlier that I highlighted "build an image" for a reason: the command is incorrectly stated as "build" (likewise "push") on GitHub, and this gives errors.

Complete YAML File

#Docker image
#Build a Docker image to deploy, run, or push to a container registry.
#Add steps that use Docker Compose, tag images, push to a registry, run an image, and more:
#https://docs.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
- master
pool:
  vmImage: 'Ubuntu-16.04'
steps:
 - task: Docker@1 
   displayName: 'Build an image'
   inputs:
     containerregistrytype: 'container Registry'
     dockerRegistryEndpoint: sqlgeordiedockerhub
     imageName: 'sqlgeordie/azuredevops:$(Build.BuildId)'
     command: build an image
     dockerFile: '**/Dockerfile'
 - task: Docker@1
   displayName: Login
   inputs:
     containerregistrytype: 'container Registry'
     dockerRegistryEndpoint: sqlgeordiedockerhub
     command: login
 - task: Docker@1
   displayName: 'Push an image'
   inputs:
     command: push an image
     imageName: 'sqlgeordie/azuredevops:$(Build.BuildId)' 

The “incorrect” example in the docs is:

- task: Docker@1
  displayName: Build image
  inputs:
    command: build
    azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
    azureContainerRegistry: $(azureContainerRegistry)
    dockerFile: Dockerfile
    imageName: $(Build.Repository.Name)

The strange thing is that if you edit the file directly online there is no error:

However, if you edit the pipeline you see the red syntax error "squiggle":

Please make sure you change this, otherwise you will receive an error in your build.

To get back on track: in theory, we should now be able to run the build, which will pull the image from our DockerHub repository, build a new Docker image from our Dockerfile (granted, a very basic build) and push it back up to DockerHub.

We can now check the image exists in Docker Hub:

Pull it down:

Check local images:
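For anyone following along without the screenshots, the equivalent commands look something like this (the tag being whichever $(Build.BuildId) your pipeline produced):

docker pull sqlgeordie/azuredevops:<BuildId>
docker images sqlgeordie/azuredevops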

There we have it: we have successfully built a new Docker image from a Dockerfile, using a base image which resides on DockerHub, and pushed the newly created image back to DockerHub.

In Part 3 we will expand on this and incorporate it into the TestProj we created in Part 1, showing how we can push changes made to our TestDBProj to Azure DevOps to initiate a build process that provides a new Docker image with our changes applied.

I have created some videos (no audio, I'm afraid) of this process, which were used as part of my sessions at Data In Devon and DataGrillen earlier this year. I will be looking to replace these with either a full blog post (i.e. Part 3) or perhaps re-record the videos with audio.