During the quarantines, I’m looking for new ways to help you level up for free.
I decided to start live-streaming whenever I’m working on things I can share with you publicly. I wanted to add a new check to sp_Blitz, so I figured I’d do it live on camera while you watch, and talk you through my thought process.
In this 52-minute video, you’ll see me working in Azure Data Studio and GitHub as I explain why it’s hard to add DMV-related checks: SQL Server’s available columns change over time. I end up using dynamic SQL to make the check work on both SQL Server 2008 and 2012+.
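To give you the flavor of the technique (this is a simplified sketch, not the actual sp_Blitz code), the trick is to put the version-dependent column names inside a string. SQL Server 2012 replaced sys.dm_os_memory_clerks’ single_pages_kb and multi_pages_kb columns with a single pages_kb column, and if you reference pages_kb directly in a proc, the proc won’t even compile on 2008. Inside dynamic SQL, the string is only parsed when it runs:

```sql
/* Hedged sketch: branch on the major version, then build the query
   string so it only references columns that exist on that version. */
DECLARE @ProductVersionMajor INT =
    CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS NVARCHAR(128)), 4) AS INT);
DECLARE @StringToExecute NVARCHAR(4000);

IF @ProductVersionMajor >= 11  /* SQL Server 2012 and newer */
    SET @StringToExecute = N'SELECT type, SUM(pages_kb) AS cache_kb
        FROM sys.dm_os_memory_clerks
        WHERE type = ''USERSTORE_TOKENPERM''
        GROUP BY type;';
ELSE                           /* SQL Server 2008 / 2008 R2 */
    SET @StringToExecute = N'SELECT type, SUM(single_pages_kb + multi_pages_kb) AS cache_kb
        FROM sys.dm_os_memory_clerks
        WHERE type = ''USERSTORE_TOKENPERM''
        GROUP BY type;';

EXEC sys.sp_executesql @StringToExecute;
```

Because the column names live inside the string, the same stored procedure compiles everywhere and only executes the branch that matches the server it’s running on.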
Related links to keep the training going:
- The GitHub issue I’m working on: #2134: sp_Blitz and sp_BlitzFirst: add warning about high USERSTORE_TOKENPERM cache size
- sys.dm_os_memory_clerks – the DMV whose contents changed over time
- When the USERSTORE_TOKENPERM Cache Stops You From Caching Query Plans – the problem I’m alerting folks about. This page has several related reading links, too.
- A Tourist’s Guide to the sp_Blitz Source Code
- How to fork a GitHub repository and contribute to an open source project by Rob Sewell – in case you’d like to contribute to an open source project
I’m not going to schedule these ad-hoc live streams ahead of time – it’s just whenever they fit into my schedule – but if you want to be alerted when I start broadcasting, here’s how – I live stream simultaneously in all of these places:
Have you noticed that application roles make the USERSTORE_TOKENPERM issue much worse?
David – yep, click on the links in the blog post for the related reading, and they’re covered in there if I remember right. If not, holler.
So here’s my I-was-only-a-VB-dev-for-15-months intermediate assessment when I came across the issue:
The application I was working with created a temporary SQL login for each user’s Windows login, based on permissions stored in tables in the database.
We had Citrix, and the users would often disconnect their laptops and either reconnect elsewhere or *not* reconnect until many hours or days later.
What I believe was happening: users were disconnecting their sessions from the application, leaving their credentials cached in the store, which never got cleared. New connections then generated new tokens while the old ones remained, until the USERSTORE_TOKENPERM cache was full.
Of course at that point it all goes to hell because the CPU is spiking while SQL tries to create every plan on the fly and barely caches any of them.
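If you want to see whether your server is heading toward the same cliff, you can watch the size of that memory clerk directly. A minimal check (assuming SQL Server 2012+ column names; on 2008/2008 R2 you’d sum single_pages_kb + multi_pages_kb instead):

```sql
/* How big is the security token cache right now?
   Column names assume SQL Server 2012 or newer. */
SELECT type,
       SUM(pages_kb) / 1024.0 AS cache_mb
FROM sys.dm_os_memory_clerks
WHERE type = 'USERSTORE_TOKENPERM'
GROUP BY type;
```

Tracking that number across your busy period would show whether the cache is growing without ever being trimmed.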
It would be interesting to compare notes and see what other people are experiencing with this one, and what mechanisms their applications are using for security.
Honestly, that was just an amazing session. You’ve always been superb at mixing fun with excellent professional work. Thanks, and I always appreciate your work.
Yasser – you’re welcome, and glad you enjoyed it.
Really fantastic session, Brent. Are you going to do more sessions like this one while we’re stuck at home? Thanks!
Salam – yep! I talked about that in the start of the video and the bottom of the blog post. 😉
Fun to watch. And those pesky shortcuts, sometimes improving the quality of coding life, sometimes messing things up…
In the video, at 46:41, your shortcut messes up sys.dm_os_memory_clerks, at line 5381. Looks like a per mille character.
Gserdijn – thanks! I’ve corrected that in the checked-in version thanks to another eagle-eyed viewer. Congrats on being one of only two folks who caught it!
When I started in my current role, they used to stop backups during the busy time of the month because “it caused performance problems.” They’d also stopped running DBCC CHECKDB completely.
Long story short: it turns out the combination of security token cache bloat during the busy period and the memory grant required by DBCC CHECKDB starved the SQL instance of memory, causing large waits on “resource flag.” After setting “access check cache quota” and “access check cache bucket count,” I reintroduced daily CHECKDB and backups during our most critical part of the month.
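For anyone who wants to try the same mitigation: those two settings are advanced sp_configure options that cap the security token cache. The values below are purely illustrative placeholders, not recommendations; test in your own environment first.

```sql
/* Cap the access check result cache. The numbers here are
   illustrative only - tune and test for your own workload. */
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sys.sp_configure 'access check cache bucket count', 256;
EXEC sys.sp_configure 'access check cache quota', 1024;
RECONFIGURE;
```

Both options take effect without a restart, but since they trade cache hit rates for memory, it’s worth measuring plan cache behavior before and after the change.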
Thank you! Watching your process and tool use in such a candid format was very informative.