You may have heard that there’s a secret back door into your SQL Server, but have you enabled access to it properly? Do you know when you might need to use it, and how you can connect? Kendra Little explains why the Dedicated Admin Connection (or “DAC”) is so important and how to configure remote access.
Don’t have 11 minutes / don’t have YouTube access? Head on over to our blog post on the DAC.
Jeremiah says: I had to use this once. I ran into a low memory situation on a SQL Server and Windows decided to page out SQL Server in the middle of the business day. The system ground to a halt, it was even unresponsive over RDP, and we had to hard power cycle the server (hold down the power button in the server room). I set up the DAC and the next time the issue happened, I was able to jump into the SQL Server, collect information, and then issue a graceful shutdown. This made it easier to diagnose the problem, too.
Brent says: Using the DAC is like knowing how to read the basics of DBCC CHECKDB output. You’re probably never going to need it, but when you do, just ten minutes of preparation will make a world of difference.
Jes says: What Brent said. A few minutes of practice every six months can save minutes or even hours of frustration – and potentially downtime – when faced with a real emergency.
One of the blocking scenarios I find most interesting is related to online index rebuilds. Index rebuilds are only mostly online. In order to complete they need a very high level of lock: a schema modification lock (SCH-M).
Here’s one way this can become a big problem:
- An online index rebuild starts against a large index
- A modification query starts before the index rebuild finishes. At the end of the rebuild, the rebuild query tries to get a schema modification lock. It can’t, because the modification query is still running
- The index rebuild’s super-high-level lock request then causes a massive pile-up behind it, blocking even queries that want to do dirty reads*
*I’m not saying dirty reads are a good thing, just pointing out that even NOLOCK can be blocked
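The steps above can be sketched with three sessions. This is a hedged reconstruction of the repro, using the dbo.Votes table and PK_Votes index from the StackOverflow database that appear in the test later in this post; the modification query is my own stand-in, with column names following the public StackOverflow schema:

```sql
/* Session 1: a modification query takes row locks and stays open */
BEGIN TRAN;
UPDATE dbo.Votes SET BountyAmount = 100 WHERE Id = 1;
/* transaction intentionally left open */

/* Session 2: the online rebuild proceeds, then stalls at the very end,
   waiting on its schema modification (SCH-M) lock */
ALTER INDEX PK_Votes ON dbo.Votes REBUILD WITH (ONLINE = ON);

/* Session 3: even a dirty read queues up behind the pending SCH-M request */
SELECT COUNT(*) FROM dbo.Votes WITH (NOLOCK);
```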
Here’s a screenshot of a quick repro of that scenario:
Session 56’s need for a schema modification lock can cause a big backup!
Let’s Test WAIT_AT_LOW_PRIORITY and ABORT_AFTER_WAIT
SQL Server 2014 introduced some new options for index rebuilds:
- MAX_DURATION (in minutes)
- ABORT_AFTER_WAIT (SELF = kill your own rebuild, BLOCKERS = kill who’s blocking you, NONE = stop waiting at low priority and queue up normally)
This sounds pretty good, but how exactly does it impact the scenario above? Does “WAIT_AT_LOW_PRIORITY” really mean that it won’t start that nasty blocking chain? Or is it just that I can use “MAX_DURATION” to control the duration of the pain in this case?
Let’s take a look. Here’s the modified command I’m testing:
ALTER INDEX PK_Votes ON dbo.Votes REBUILD WITH (ONLINE = ON ( WAIT_AT_LOW_PRIORITY ( MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF )) );
Here are the commands I ran this time:
And here’s what the blocking looked like:
LCK_M_SCH_M_LOW_PRIORITY lock wait type saves the day
One thing to notice about the screenshot of the blocking: there’s no third query being blocked. There’s no pileup! “LOW_PRIORITY” really does mean low priority – the “NOLOCK” query is able to get a shared schema lock and move forward.
What Happened with ABORT_AFTER_WAIT?
As instructed, my online index rebuild called it quits. Here’s the error message it threw when it gave up:
I Like It!
Blocking chains during maintenance windows can be a huge pain, especially if you’ve paid top licensing dollar to get online index operations. Having more options is definitely a plus.
- Research more on your own by reading Kendra’s “Rebuild or Reorganize?” blog post
- Get into the controversy of index maintenance by watching Brent’s “Index Maintenance Doesn’t Matter” video
- Learn the foundations of SQL Server Database Maintenance online with Jes and Kendra in the Database Maintenance Starter Kit course
- Learn advanced database maintenance in our in-person Senior DBA class with sessions in Denver and Chicago in 2015
The other day I started tweeting single words you could use to start an argument. I really liked this suggestion from @SQL_Kiwi:
@Kendra_Little One word ways to start an argument: always (or never)
— Paul White (@SQL_Kiwi) December 31, 2014
It got me thinking. What are the things that I would say a DBA should always do or be ready for with SQL Server, unconditionally?
You should always:
- Know how much data it’s OK to lose and how long your databases can be offline
- Know who has the ability to drop a database in your production SQL Servers
- Monitor your production SQL Servers
- Use x64 architecture for production SQL Servers
- Be ready to back up or restore a database
- Have a plan if your SQL Server stops working (even if the plan is to turn off your cell phone)
- Document who’s next in line if you stop working (even if the person next in line is nobody)
- Remember to lock your workstation when you get up from your desk
As for the “nevers”, I’ll let you come up with that list.
Ever wanted to quickly and easily record how much CPU a query is using, how many logical reads it uses, what temporary objects are created, and whether it causes a recompile? sp_BlitzTrace™ is a free stored procedure that helps you quickly and easily use Extended Events in SQL Server 2012 and higher to observe what’s happening in one SQL Server session in close detail. Learn how to install and run this tool to see exactly what your queries are doing.
Want to read other team members’ take on the video? Scroll on down to read what people thought.
Jes thinks: What an easy way to get introduced to Extended Events! There can be so much to learn about them – this is an easy tool to get started with. This is a good way to measure the impact of queries.
Kendra thinks: I never want to write XQuery again.
Brent says: I’ve always smiled-and-nodded my way past Extended Events because it’s chock full of nuts. (“Oh hey, the ring buffer doesn’t work. Ooo look, you get to query XML. Hey, it’s different across different versions of SQL Server.”) This tool makes XE easy enough that even Brent can use it.
Jeremiah says: Extended Events are a really helpful tool, but there’s a steep learning curve. This is the kind of on ramp that I wanted when I first started using Extended Events.
Doug says: I’m a fan of this tool because 1) it gets useful information from XE without getting that XE smell on my hands, 2) it tells me all kinds of great stuff in the messages tab, including how to stop it, 3) it only does one session, so it’s not as dangerous as a tool that might attempt to watch everyone (sp_BlitzSauron).
tl;dr — Not necessarily.
One of our students asked me a great question: if you update statistics on every table in the database, is that effectively the same as dumping the procedure cache on the instance? Will every execution plan have to be recompiled the next time it is run? I thought it was a great question and it spurred an interesting discussion about maintenance and recompilation.
Updating statistics can certainly cause recompilation. But based on my observations, it won’t necessarily cause every execution plan to recompile.
To test this out, I’m going to use a free tool that Jeremiah and I wrote, sp_BlitzTrace™, to trace activity in Extended Events and show me when a recompile occurs.
Our Test Setup: Users Table with No Nonclustered Indexes
I’m running a simple, non-optimized query against a restored copy of the StackOverflow database on SQL Server 2014. To keep things simple, I have left the dbo.Users table without any non-clustered indexes. The auto-create statistics option is on with the default synchronous setting. I cleared the procedure cache before I started my test.
sp_BlitzTrace™ starts an Extended Events trace against my own session. Then I run my (somewhat troubled) query and stop the trace. Then I read the results.
exec sp_BlitzTrace @Action='start', @TargetPath='S:\XEvents\Traces\', @SessionId=@@SPID, @TraceRecompiles=1;

SELECT DisplayName, Reputation
FROM dbo.Users
WHERE DisplayName LIKE 'Kendra%'
ORDER BY Reputation DESC;

exec sp_BlitzTrace @Action='stop';

exec sp_BlitzTrace @Action='read';
I Find the Statistic To Update and Get Out My SledgeHammer
I find the statistics on the table by running sp_helpstats. This shows that I have a column statistic on the DisplayName column. I copy the name of that off to my clipboard. This is the statistic I’m going to update, because it’s in the “where” clause of my query.
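The lookup is a one-liner with the (old, but handy) sp_helpstats procedure:

```sql
/* 'ALL' returns both index statistics and auto-created column statistics */
EXEC sp_helpstats 'dbo.Users', 'ALL';
```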
I update the statistics with FULLSCAN with this code:
UPDATE STATISTICS dbo.Users _WA_Sys_00000005_1D7B6025 WITH FULLSCAN;
I’m not a fan of updating all your stats with fullscan. I just got out the sledgehammer to demonstrate whether it would cause a recompile.
… And I Don’t See A Recompile
After the statistics update, I run my test query block again (the same one above) with sp_BlitzTrace™. I do NOT see a row returned showing a recompile event:
I can run the query multiple times, and the trace never shows a recompile event.
Hmm. Maybe the stats didn’t update somehow? We can check. I query sys.dm_db_stats_properties and it shows that this statistic DID update:
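The check looks something like this (a sketch; the last_updated and rows_sampled columns confirm whether the statistic really updated):

```sql
SELECT st.name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter
FROM sys.stats AS st
CROSS APPLY sys.dm_db_stats_properties(st.object_id, st.stats_id) AS sp
WHERE st.object_id = OBJECT_ID('dbo.Users');
```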
… Let’s Fake It Out and Make a Modification
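One way to fake it out is an insert that you immediately roll back: the rollback still increments the table’s modification counter, so the next statistics update registers a change. A sketch (column names follow the public StackOverflow schema and may not match the original demo exactly):

```sql
/* Insert a row and roll it back; the modification counter still goes up */
BEGIN TRAN;
INSERT dbo.Users (DisplayName, Reputation, CreationDate)
VALUES ('Fake User', 1, GETDATE());
ROLLBACK;
GO

/* Update the statistic again: now SQL Server sees modified data */
UPDATE STATISTICS dbo.Users _WA_Sys_00000005_1D7B6025 WITH FULLSCAN;
GO
```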
There’s Our Recompile!
Following the “fake insert” and statistics update, I run my query again and the Extended Events trace shows a sql_statement_recompile event. The cause is statistics change:
Finding: Statistics Update Alone Didn’t Cause a Recompile
SQL Server was smart enough to check if the data had changed. Updating statistics alone doesn’t always invalidate execution plans.
What Does This All Mean?
Mostly, I think it just means that SQL Server is pretty clever about when to recompile (at least SQL Server 2014, since that’s where I ran this test). But don’t take this as an excuse to be lazy. I’m a fan of using the @OnlyModifiedStatistics='Y' option on Ola Hallengren’s index maintenance solution if you’re using @UpdateStatistics='Y': because why even update the statistic if it looks like nothing has changed?
- Friday, Jan 16, 2015 – 9 am to noon Central
- Friday, Feb 27, 2015 – 9 am to noon Central
- Friday, Mar 27, 2015 – 9 am to noon Central
Learn More about Database Maintenance Next Year
For $149, you get 3 hours of recorded video content on SQL Server database maintenance, and you get access to that for 18 months.
You also get to attend a 3 hour guided discussion and Q&A session with Jes and myself on the date you purchase. We keep this session full of quizzes and polls to let you test your knowledge. The questions also naturally help YOU think of questions and dig deeper into the topics we’re talking about.
We Keep Your Class Small!
Here’s what’s really unique about this course: we keep your class small. A maximum of 30 students can buy a seat each class. That means you get time to participate!
You’ll Learn the Intricacies of Database Maintenance
This course teaches you all about database maintenance, namely…
- The pros and cons of maintenance plans vs SQL Server Agent jobs
- How to select the right database recovery model
- How to plan a backup strategy
- How to check for corruption in your databases (and get notified if it happens)
- The different types of index maintenance operations (and what can go wrong)
- Why you need to be careful with index maintenance (and where to start)
I’ve never been so excited to write about licensing.
Licensing is hard, y’all. It changes often and it’s full of lots of little details. Microsoft puts out licensing datasheets, but they’re in a difficult spot — if they explain absolutely everything, the documents get longer and harder to understand. So they have to balance between brevity and clarity.
Recently I had a question I wanted to confirm for SQL Server 2014 licensing. I looked for phone numbers and email addresses that I could use and share with clients and readers, and came up with nothing. As a last resort, I decided to try out the “Chat now” feature on the SQL Server sales page to see if I could get an answer.
I Asked A Question About Passive Licenses and Cold Failover Licenses
Here’s the question I submitted:
I have a question about SQL Server 2014 licensing with Software Assurance. It is clear from the documentation that there is a single “passive” license for High Availability if you license an instance. I want to check if there is additionally a cold failover license that can be used for Disaster Recovery as described here (http://blogs.technet.com/b/volume-licensing/archive/2013/08/22/licensing-how-to-sql-server-fail-over-vs-cold-disaster-recovery-rights.aspx), or if that was removed in SQL Server 2014.
My SQL Licensing Question Was Answered
I had absolutely no idea what to expect from an on-demand chat window at 6:45 am Pacific time on a Monday. My chat request went into a queue and I waited for several minutes, which was totally fine. But then, Gilbert appeared!
He asked if it was OK for me to wait for a bit while he researched my question. (Willing to wait? I was willing to send flowers if he answered this.) After a bit, he came back and gave me an answer:
Cold disaster recovery rights have not been removed. In SQL Server 2014, you need Software Assurance to take advantage of this right, but using it doesn’t mean you lose the license for a “passive” node for failover.
Operators Are Standing By
I can practically hear angels singing. I asked a really specific licensing question before 7 am on a Monday morning and someone was there to help me almost immediately. They took the time to research my question AND I could ask follow up questions to make sure I understood properly.
When you’ve got a tough SQL Server licensing question, first check the licensing datasheet. If it’s not covered in there, head on over to that SQL Server sales page and click on “Chat now”. Cause that thing is awesome.
Extended Events have become one of my favorite tools for learning about tuning SQL Server. Whenever I wonder how something works, I can fire up a test instance and play around with XEvents. It’s always interesting, and at least 5% of the time I end up learning what I intended.
Not everything is sunshine and rainbows. Finding the right events and making sure you understand the output can be really frustrating. And you want a SQL Server 2012 or 2014 test instance to do this. The events are much richer and there’s an interface to help you along.
But like a lot of things, the interface isn’t all that intuitive.
New Session, or New Session Wizard?
I used to click on “New Session Wizard”, simply because I had no idea what the difference was between the two options, and it was first in the list.
The “Wizard” option gives you some templates that you can choose to customize. If you’re just starting out, that could be interesting to use on your test instance.
But it always left me frustrated that I couldn’t use some of the more advanced options in Extended Events. I felt like I was just scratching the surface, and trying to code sessions from scratch using Books Online was slow and painful.
“New Session” Just Gives You a Better Wizard
When you click new session, it turns out that you still get a wizard. It starts out asking you for a session name and some basic properties:
Then you can choose your events. I decided I wanted to test some plan guides, and see if I could use Extended Events to count how many times a plan guide was used. (This is something that the DMVs and perf counters don’t make accessible.) I added the plan_guide_successful event:
Extended Events Targets: I Can Pick a Histogram!
The “Data Storage” tab is where this gets good. I was always frustrated when I was using the Wizard that I had limited choices about how to set up a target. If you use this version of the Wizard (the Un-Wizard Wizard?), then you get to pick your target type and it’ll help you script that out.
In this case, I want to use the histogram target. I don’t care when the plan guides were executed; I just want a count of which ones are being used. This can help keep my trace lightweight, plus if that’s the only information I want then I don’t have to do any complicated grouping on the output.
The GUI is nice and friendly and lets me say I want my “Buckets” to be on the plan_guide_name field:
After I create my session, it shows up in my Extended Events session list in the SSMS Object Explorer. I can right click on it and script it out, start it, stop it, etc.
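Scripting the session out gives you plain T-SQL you can reuse. Mine looked something like this (a sketch, with the session name made up for this post):

```sql
CREATE EVENT SESSION [PlanGuideHistogram] ON SERVER
ADD EVENT sqlserver.plan_guide_successful
ADD TARGET package0.histogram (
    SET filtering_event_name = N'sqlserver.plan_guide_successful',
        source = N'plan_guide_name',  /* the "bucket" field */
        source_type = 0               /* 0 = event field, 1 = action */
);
GO
ALTER EVENT SESSION [PlanGuideHistogram] ON SERVER STATE = START;
GO
```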
I Ran My Plan Guides Test and … It Worked!
I fired up my session, created some plan guides, and ran some queries against my restored copy of the StackOverflow database. After building up a bit of data, I just double-clicked on package0.histogram to open the results so far:
And it worked! I created a ‘template’ plan guide that I executed via a HammerDB workload– so that got hit a ton. Then I created two other plan guides that I just ran twice and once, respectively.
Extended Events Aren’t Always This Easy
Putting together this little example went smoothly. Surprisingly smoothly. So smoothly that I really want to quit while I’m ahead.
Lies we tell ourselves: “This should be simple to do with Extended Events.”
— Kendra Little (@Kendra_Little) December 14, 2014
Recently we got an email asking for help from a DBA who was concerned about a high percentage of the PREEMPTIVE_DEBUG wait on a SQL Server. They were investigating poor performance and had searched the internet high and low and not found information on what this wait means. They were stumped.
What to Do When You Find an Unusual Wait
We have a lot of tools to explore and learn things ourselves in SQL Server. Even when you find results explaining something on the internet (like this one!) it’s always good to use those tools to explore yourself and learn.
But first, don’t panic! Just because a wait is a high percentage doesn’t necessarily mean it’s a problem. For any wait…
- Narrow in and find out when the wait occurs, and how much of it there is. Just because it’s a top wait by percentage doesn’t mean it’s always a problem. The server could be bored.
- If it actually is a problem, identify what’s running when the wait occurs.
- Blog about it! Help others.
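For step one, you can check the cumulative numbers since the last restart before you panic. A quick sketch against sys.dm_os_wait_stats, using the wait from this post:

```sql
/* How much PREEMPTIVE_DEBUG wait has accrued since the last restart? */
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000.0 AS wait_time_seconds
FROM sys.dm_os_wait_stats
WHERE wait_type = 'PREEMPTIVE_DEBUG';
```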
But what is the PREEMPTIVE_DEBUG wait, anyway?
I had a guess as to what this wait type meant, so I did a quick test to verify. In one SQL Server Session, I ran our sp_AskBrent procedure to take a 10 second sample of waits:
exec sp_AskBrent @ExpertMode=1, @Seconds=10;
GO
In another session, I ran the built in procedure sp_spaceused, but ran it with the “Debug” button at the top of SSMS and started stepping into the code. (I just picked that procedure randomly.)
Here’s a screenshot of what I saw when sp_AskBrent® completed:
Success! The PREEMPTIVE_DEBUG wait is related to using debugging tools to step through code. When you’re stepping through something, SQL Server has to wait for you to say “go on”.
My guess is that this probably isn’t specific to only the debugging tool built into SQL Server Management Studio itself. Developers can use other debugging tools when working with Visual Studio or Windbg, and I bet they all produce this wait.
What to Do if You See PREEMPTIVE_DEBUG
The list I have above still applies: do a quick assessment of how much of the wait there is. But I’d also start asking some friendly questions about whether folks use debugging tools against production often, and whether you can give them a fresh database backup in a development environment where they can debug safely, away from your customers!
When learning how Read Committed Snapshot Isolation works in SQL Server, it can be a little tricky to understand how writes behave. The basic way I remember this is “Readers don’t block writers, writers don’t block readers, but writers still block writers.”
But that’s not so easy to understand. Let’s take a look at a simple test showing what it means that “writers still block writers”.
Create our simple test schema
We’re doing a super simple test: we just need a database with READ_COMMITTED_SNAPSHOT enabled and a table with a single row.
CREATE DATABASE RCSI_Updates;
GO
ALTER DATABASE RCSI_Updates SET READ_COMMITTED_SNAPSHOT ON;
GO
USE RCSI_Updates;
GO
CREATE TABLE dbo.UpdateTest (
    i int identity primary key,
    Color varchar(256) not null
);
GO
INSERT dbo.UpdateTest (Color) VALUES ('White');
GO
Start one transaction that does an update (Session #1)
An update begins! To test this out, we’re doing an explicit BEGIN TRAN so that our update will hold locks. Run this in one session in SQL Server Management Studio:
BEGIN TRAN
    UPDATE dbo.UpdateTest SET Color='Red' WHERE Color='White';
/* Transaction left open on purpose */
Another update begins! But it’s blocked. (Session #2)
Before that update transaction completes, another session comes in. Just open up another session window in SSMS and run this:
UPDATE dbo.UpdateTest SET Color='Black' WHERE Color='White';
GO
This won’t complete right away, because it’s blocked by your first session.
Back in Session #1, commit the transaction:

COMMIT;
GO
When Session #1 commits, it finishes up and cleans up its lock. Session #2 is also able to finish, and in its output you’ll see:
(0 row(s) affected)
Let’s check the rows in the table:
SELECT * FROM dbo.UpdateTest;
GO
Here’s how things happened:
- Session #1 locked the row and turned the marble red.
- Session #2 was also doing an update, and it didn’t get to the row until Session #1 released the lock.
- At that point it found zero rows to update, because the color had already changed.
If You Wrote the Update Differently, You Might Get a Different Outcome…
Remember, writers do NOT block readers under read committed snapshot isolation. What if someone wrote this as a two part operation (instead of a single update statement)?
CREATE TABLE #marbles (i int);

INSERT #marbles (i)
SELECT i
FROM dbo.UpdateTest
WHERE Color='White';

UPDATE dbo.UpdateTest
SET Color='Black'
FROM dbo.UpdateTest u
JOIN #marbles m on u.i=m.i;
GO
When we check how things turned out…
In this case, Session #2 wasn’t completely blocked! Things happened differently:
- Session #1 locks the row, that’s the same
- Session #2 is doing a SELECT statement, so it uses the version store and isn’t blocked. It’s able to populate the temp table.
- But Session #2 becomes blocked at the point it runs the UPDATE.
- When Session #1 completes, Session #2 is unblocked and updates using the data it’s cached in its temp table.
- Session #2 changes the color of the marble.
I’m not saying that it’s always better to do a single UPDATE or that it’s always better to use a temp table. You just have to know how they behave so you can pick the right code for your situation.
If you need to use a temporary table for performance reasons, but want your read to be blocked if any writer has a lock, you can achieve that by using locking hints. Which might cause a lot of blocking, just like anytime you use locking hints.
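A sketch of that hinted read, using the READCOMMITTEDLOCK table hint. The hint forces that one read back to plain locking read committed, so it blocks behind Session #1’s open update instead of reading from the version store:

```sql
CREATE TABLE #marbles (i int);

/* The hint makes this SELECT take shared locks,
   so it waits for any open write transaction on the rows */
INSERT #marbles (i)
SELECT i
FROM dbo.UpdateTest WITH (READCOMMITTEDLOCK)
WHERE Color='White';
```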
What About Snapshot Isolation?
In this post we’ve looked at Read Committed Snapshot Isolation. There’s another isolation level, SNAPSHOT, which behaves a bit differently. If you want to test that out, you can modify the code samples above to reproduce an “update conflict” with both of the sessions using SNAPSHOT isolation.
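If you want to try it, here’s a sketch of the setup. Under SNAPSHOT isolation the second writer doesn’t quietly update zero rows; it fails with error 3960, an update conflict:

```sql
ALTER DATABASE RCSI_Updates SET ALLOW_SNAPSHOT_ISOLATION ON;
GO

/* Run in EACH of the two sessions before its UPDATE: */
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
UPDATE dbo.UpdateTest SET Color='Red' WHERE Color='White';

/* The second session blocks, then gets error 3960
   ("Snapshot isolation transaction aborted due to update conflict")
   as soon as the first session commits. */
```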
This May Sound Complicated, But It’s Not That Bad
When you talk about any isolation level in detail in SQL Server, things get complicated. I don’t think that Read Committed Snapshot Isolation is any more complicated than plain old Read Committed, it’s just that people don’t think too much about Read Committed, since it’s the default!