[Video] Office Hours 2016/02/03

In this week’s Office Hours Q&A, Erik, Tara, Doug, and the new guy Richie Rump answer fast-paced questions about logging Perfmon counters to a table, Standard Edition’s 4-socket licensing limit, whether odd or even MAXDOP numbers make a difference, and – one of our favorite questions in a long time – is it a good practice to cut your SQL Server’s max server memory in half every two hours just to, uh, clean things up?

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

When Shrinking Tempdb Just Won’t Shrink

I am not a proponent of shrinking databases, but sometimes you have to because your momma said to. Sometimes that database is tempdb.

It used to be that we were warned against shrinking tempdb because it could cause corruption, so your only recourse was to restart the SQL Server service. Paul Randal let us know that this is no longer a problem.


Like your life depended on it. Or perhaps you needed the alerts to stop.

If a user ran an atrocious ad hoc query that grew your tempdb so much that your disk space alert fired, and:

  • you needed that alert to stop
  • the storage team is not going to give you more space
  • the user promised to never do that again

So you try to shrink tempdb, but it just won’t shrink.

Try clearing the plan cache:

And then try shrinking tempdb again.
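Put together, the sequence looks roughly like this (the logical file name tempdev and the target size are assumptions; check sys.database_files inside tempdb for your own file names):

```sql
USE tempdb;
GO
-- Flush the plan cache; cached plans can pin tempdb pages and block the shrink
DBCC FREEPROCCACHE;
GO
-- Then shrink each tempdb data file to a target size in MB
DBCC SHRINKFILE (tempdev, 1024);
```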

I came across this solution recently when I had to shrink tempdb. I tried shrinking each of the 8 data files plus CHECKPOINTs, repeatedly. It would not budge. I almost threw in the towel and emailed out that the space issue would be fixed during our next patching window, but then I found David Levy’s reply. DBCC FREEPROCCACHE worked like a charm.

Brent says:

A funny thing happened on my way to set up Mirroring…

I’ve set up Mirroring about a billion times

I’m not bragging about that. I’d rather say that I set up a billion AGs, and not one of them ever failed. But then I’d be lying to you; those things fail like government programs. One thing I’d never done, though, is set up Mirroring with a Witness. I never wanted automatic failover, because it’s only one database at a time. If for some reason one database out of all that I had mirrored ever turned Ramblin’ Man and failed over to another server, there would understandably be some application consternation. Not to mention any maintenance and internal operations. They don’t react well to sudden database unavailability.

Of course, doing anything for the first time is horrible. Just ask my second wife.

Here’s where things got awkward

I have my databases! This is my top secret development environment. Stack Overflow is in an AG, and I had set up two other Mirrors: one synch and one asynch. I wanted to have a variety of setups to test some scripts against.

Everything looks good!

Alright, let’s set up Mirroring…

Configuring stuff is cool, right?

Yeah yeah next next next

Service accounts whatever BORING



This is so easy. Seriously. Why doesn’t everyone do this? Why do you complicate your short, short lives with Availability Groups? Are they AlwaysOn? Are they Always On? WHO KNOWS? Not even Microsoft.

I’m hitting this button and jumping into a mile of Laphroaig.




This is the error text:

The ALTER DATABASE command could not be sent to the remote server instance 'TCP://'. The database mirroring configuration was not changed. Verify that the server is connected, and try again.

Super Sleuth

Alright, that’s silly. I used the GUI. Instead of going to bed I’ll spend some time checking all my VM network settings. BRB.

I’m back. They were all correct. I could ping and telnet and set up linked servers and RDP. What in the name of Shub-Niggurath is going on with this thing?

I can even see the Endpoint! So close, and yet so far~~

The Endpoint is showing up on the Witness and what is this?

Where are we now?

This is a good time for a quick recap

  1. Mirroring is up and running synchronously
  2. The endpoint is configured on the witness
  3. We get an error when we try to connect the witness


I should have done this hours ago.

Well whaddya know? That’s a really good clue. Encryption and stuff. There’s no compatible algorithm. Ain’t that somethin’? You’d think that Microsoft would be cool about setting up the same kind of encryption across all the different Endpoints, if using different encryption would cause the setup to fail. Right guys? Heh. Right? Hey, hello?


Alright, let’s see what I need to be a matchmaker.
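The screenshots below come from comparing each server’s endpoint settings, which you can also do with a quick catalog-view query (run it on the principal, mirror, and witness):

```sql
-- Compare the encryption algorithm across all three servers
SELECT name, role_desc, encryption_algorithm_desc
FROM sys.database_mirroring_endpoints;
```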

The Cure – Primary

Oh. AES. Okay. Cool. Thanks.



Since we have them both scripted out already, let’s just drop and re-create the Witness Endpoint with the right encryption algorithm.
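A sketch of that drop-and-recreate, assuming the default endpoint name and port (yours may differ; use the scripted-out version from your own server):

```sql
-- On the witness: recreate the endpoint to match the partners' AES algorithm
DROP ENDPOINT Mirroring;
GO
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (
        AUTHENTICATION = WINDOWS NEGOTIATE,
        ENCRYPTION = REQUIRED ALGORITHM AES,
        ROLE = WITNESS
    );
GO
```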



That did not result in a forest fire. I’m hopeful. Sort of. It’s been a long night and I think I can see tomorrow from here.



Meanwhile, back on the Primary…
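Reconnecting the witness from the principal is a single command (the database name and witness address below are placeholders):

```sql
ALTER DATABASE YourMirroredDatabase
    SET WITNESS = 'TCP://witness.yourdomain.local:5022';
```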



It worked! Now I have a Witness, and I can shut all my VMs down. That was so much fun.

What did we learn?

Microsoft hates you and doesn’t want you to sleep. Just kidding. Mostly. But seriously, why would they do that?

It mostly goes to show that it’s always a smart idea to use that little script button at the top of (most) GUIs in SSMS. Who knows what kind of foolishness you’ll find? A little reading can save you a lot of time troubleshooting errors that make you feel insane.

Thanks for reading!

Introducing Our Latest Employee, Richie Rump (aka @Jorriss)

Richie Rump, aka @Jorriss

During our very first training class, we showed the students how we use SET STATISTICS IO ON to get the number of logical reads performed on each table in a query, and then sum ’em up to see the query’s overall impact. It’s kind of a painful, manual process.

Sitting in the back row of the class (because that’s how he rolls), Richie Rump saw that process and thought to himself, “I bet I could make that easier.”

The bad news is that he probably didn’t learn much the rest of that day in class, because he immediately started building what became StatisticsParser.

The good news is that you got StatisticsParser, hahaha.

That kind of mentality is what we’re all about. We look for SQL Server tasks that are complex, and figure out how to make them easier for free. Whether it’s our blog posts, our community presentations, or our tools, we’re always looking for new ways to make your job suck less.

So let’s talk to Richie and figure out what makes him tick.

Brent: I think of each person as a toolbox. Over time, as we experience situations and solve problems, they become tools in our toolbox. I think back to my time in hotels, my work in accounting, and even my love of cars as various tools in my toolbox that I can use to get things done. What are some of the tools in your toolbox that make up who you are?

Richie: This surprises some, but I spent almost five years as a project manager. I even earned the dreaded Project Management Professional (PMP) certification. I also started my career as an Access database programmer (way back in Access 2.0). Most of my career I spent using Microsoft tooling like Visual Basic, ASP, C#, and the .NET Framework. I also spent a fair amount of time as a software architect. After my time as a project manager I pivoted my career towards the data side and absorbed all of the SQL Server knowledge that I could. I spent almost twelve years in the supply-chain management vertical and have spent time in the accounting, payment processing, and legal areas as well. After reading all of that I feel old.

Brent: Today, you’re a serial (or maybe parallel) community activist – starting dotNet Miami, helping put on the South Florida Code Camp, co-founded the Away From the Keyboard podcast, built StatisticsParser and sp_DataProfile, give community presentations, etc. Take me back to the first time you said to yourself, “I should create something that gives back.”

Richie: That probably would be starting dotNet Miami. For years I followed the SQL Server community and the WordPress communities. I was always impressed by their generosity in not only their sharing of technical knowledge but in the way they care for each other as human beings. I looked around the developer community in Miami and wondered “Why can’t we have a .NET community like that in Miami?” So I grabbed a few people and met at a sports bar and we talked about starting a group. Six months later dotNet Miami was born. We’re still going strong and are committed to being a place where we can grow as technologists and as people. My favorite description of dotNet Miami is “We are not competitors, we are comrades.”

Brent: For years, you’ve been making our classes and webcasts more fun by poking fun at us. Now, you’re going to be on the other side. How do we help raise the next generation of the peanut gallery?

Richie: It takes commitment, that’s for sure. I think the whole peanut gallery thing came from getting to know the team personally. Twitter conversations, in-person conference discussions, and training classes all led me not just to poke fun at the Brent Ozar Unlimited team, but to have friendly conversations with friends. So you want to join the peanut gallery? Get to know us better over Twitter, chat with us at a conference, or join us for in-person training. (See what I did there?)

Brent: One last thing everybody’s gonna ask: where’s @Jorriss come from?

Richie: Ha! Back in the early days of the Internet we had these things called “handles”. For a while I was going by richier but that never sat well with me. So one night in college, a bunch of us geeky types were creating personas that would live in the Star Wars universe (don’t judge). There was a character in Timothy Zahn’s Heir to the Empire trilogy called Joruus C’baoth that I dug so I changed it a bit and out came Jorriss Orroz. From there the handle just stuck. Bonus points if you can figure out where the Orroz comes from.

[Video] Free Training of the Week: How to Configure Quorum for SQL Server

This year, we’re trying something new: every Monday, we’re going to show you one module out of our online training classes.

It’s the whole module, not a sneak peek, and it’s available the whole week. But you’d better not put it off – this post will disappear the following week, and get replaced by a new post.

This week, it’s Kendra Little explaining what quorum is, how to configure it, and what dynamic quorum and dynamic witness mean:

(If you don’t see the video above, you’re reading this somewhere that doesn’t support video embedding. You’ll need to read the post on our blog.)

Wanna see more? Check out our DBA’s Guide to High Availability and Disaster Recovery course. We cover log shipping, mirroring, quorum, failover clustering, AlwaysOn Availability Groups, and more. For $299, you get 18 months of access to stream it on your desktop, laptop, tablet, or phone.

[Video] Office Hours 2016/01/27

In this week’s Office Hours Q&A, we talk about the pains of patching lots of SQL Servers, how to move databases from one drive to another with minimal downtime, the difference between NOLOCK and READ UNCOMMITTED, and assign Doug all future calls for Dynamics tuning.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

SQL Server Agent is an application server.

Application server.

SQL Server Agent is a job scheduler.

When we first get started managing SQL Server, we usually use it for backups, index maintenance, DBCC CHECKDB, and other common maintenance tasks.

And then one day, we say, “Hey, I need this T-SQL script to run once a day.” We add an Agent job for it. It’s a rare occasion, we think. Not really a big deal.

The next thing you know, the entire business’s lifeline depends on dozens – or heaven forbid, hundreds or thousands – of intricately interwoven and ill-documented Agent jobs. Nobody knows what they do – they just have to run.

Except many of them don’t. They fail for one reason or another, but then they work the next time, and we just hope and pray everybody ignores it.

Eventually, some of them start failing permanently. When we try to figure out what’s going on with them, we ask around about who wrote the script, and it always turns out it was some guy who left the company months ago. Nobody knows how to debug his stuff, or if it even matters. We’ll just leave it in place and maybe it will start working again.

Don’t let yourself get to this place.

To make your life easier, keep application-logic Agent jobs out of your production SQL Servers.

If someone needs application logic, and it has to be an Agent job, it doesn’t actually have to be on the production database server. Give them their own Standard Edition VM where they can go to town, creating any Agent jobs they want. However, those users are 100% responsible for managing the success and failure of their Agent jobs because these are applications, not databases. There are no databases on this server – or if there are, they ain’t backed up or production-quality. This server is for jobs that can point at any SQL Server.

In addition, give them a VM in the DR environment where they can sync their jobs. Source control and deployment is totally up to them.

This way, you can focus: if there’s a failing job on the production SQL Server, you own it.

To help you pull it off, consider using my SQL Server support matrix. It sets clear expectations about what you’ll allow in dev, QA, production, and mission-critical production boxes.

Another reason why scalar functions in computed columns is a bad idea

As if there weren’t enough reasons

In my last blog post I talked about different things that cause plans, or zones of plans, to execute serially. One of the items I covered was computed columns that reference scalar functions. We know that they’ll force queries to run serially, but what about other SQL stuff?

Oh no my index is fragmented

If you’re running Expensive Edition, index rebuilds can be both online and parallel. That’s pretty cool, because it keeps all your gadgets and gizmos mostly available during the whole operation, and the parallel bit usually makes things faster.

That is, unless you have a computed column in there that references a scalar function. I decided to write my test function to not perform any data access so it could be persisted. It’s dead simple, and I’m tacking it on to a column in the PostLinks table of the Stack Overflow database.

CREATE FUNCTION dbo.PIDMultiplier (@pid INT)
RETURNS BIGINT
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @Out BIGINT;
    SELECT @Out = @pid * 2;
    RETURN @Out;
END;
GO

ALTER TABLE [dbo].[PostLinks]
ADD [Multiplied] AS dbo.[PIDMultiplier]([PostId]) PERSISTED;

For this one, all we have to do is turn on actual execution plans and rebuild the index, then drop the column and rebuild again.


ALTER TABLE [dbo].[PostLinks] DROP COLUMN [Multiplied]
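The rebuilds on either side of that drop are plain index rebuilds (ONLINE = ON assumes you’re on Expensive Edition):

```sql
-- Run before and after dropping the computed column
ALTER INDEX ALL ON [dbo].[PostLinks] REBUILD WITH (ONLINE = ON);
```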

Here are my execution plans. The rebuild I ran when the table had my computed column in it stayed serial.

Hi, I’m garbage.

Parallel, sans computed column:

Dude, you’re getting a parallel.

But there’s a bigger fish in the pond

Probably the most important maintenance item you should be doing, aside from backups, is running DBCC CHECKDB. Seriously, if you’re not doing them both, start today. Ola Hallengren has basically done all the work for you. Back when I had a real job, I used his scripts everywhere.

Before we were so rudely interrupted by a soap box, we were talking about parallelism. This part was a little bit more complicated, but don’t worry, you don’t have to follow along. Just look at the pretty pictures. Sleep now. Yes. Sleep.

The first couple times I tried, the DBCC check never went parallel. Since I’m on my laptop, and not a production server, I can set Cost Threshold for Parallelism to 0. You read that right, ZE-RO! Hold onto your drool dish.
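On a disposable test instance, that’s a quick change (please don’t do this in production):

```sql
-- Laptop/test only: a cost threshold of 0 lets every plan go parallel
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 0;
RECONFIGURE;
```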

With that set, I fire up Ye Olde Oaken sp_BlitzTrace so I can capture everything with Extended Events. You’ll need all three commands, but you’ll probably have to change @SessionId, and you may have to change @TargetPath. Run the first command to start your session up.

EXEC [dbo].[sp_BlitzTrace]
    @SessionId = 61
  , @Action = 'start'
  , @TargetPath = 'c:\temp\'
  , @TraceParallelism = 1
  , @TraceExecutionPlansAndKillMyPerformance = 1

EXEC [dbo].[sp_BlitzTrace] @Action = 'stop' 

EXEC [dbo].[sp_BlitzTrace] @Action = 'read'

With that running, toss in your DBCC command. I’m only using DBCC CHECKTABLE here to simplify. Rest assured, if you run DBCC CHECKDB, the CHECKTABLE part is included. The only checks that DBCC CHECKDB doesn’t run are CHECKIDENT and CHECKCONSTRAINTS. Everything else is included.


DBCC CHECKTABLE ('dbo.PostLinks');

ALTER TABLE [dbo].[PostLinks]
ADD [Multiplied] AS dbo.[PIDMultiplier]([PostId]) PERSISTED;

DBCC CHECKTABLE ('dbo.PostLinks');

Run DBCC CHECKTABLE, add the computed column back, and then run it again. When those finish, run the sp_BlitzTrace commands to stop and read session data. You should see execution plans for each run, and they should be way different.

Hell Yeah.

Hell No.

So even DBCC checks are serialized. Crazy, right? I’d been hearing about performance hits to varying degrees when running DBCC checks against tables with computed columns for a while, but never knew why. There may be a separate reason for regular computed columns vs. ones that reference scalar functions. When I took the equivalent SQL out of a function, the DBCC check ran parallel.

ALTER TABLE [dbo].[PostLinks]
ADD [Multiplied] AS [PostId] * 2 PERSISTED

Of course, those online index rebuilds running single threaded might be a blessing in disguise, if you haven’t patched SQL recently.

I don’t have much of a grand closing paragraph here. These things can seriously mess you up for a lot of reasons. If you’re a vendor, please get away from using scalar functions, and please please don’t use them in computed columns.

Thanks for reading!

[Video] Office Hours 2016/01/20

In this week’s Office Hours Q&A, the entire team avoids spinlocks, then discusses the new Cardinality Estimator, the virtual inserted/deleted tables, monitoring tools, restoring single tables, DBA Viagra, and more.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go. (Rate our podcast, and you get 78% off our video training this month only!)

It’s Now Easier to Query sp_AskBrent’s Historical Tables

When you want to know why the server’s slow, you can ask Brent. sp_AskBrent® checks a lot of DMVs to find common causes of server slowness.

When you turn on @ExpertMode = 1, you can see additional information like file stats, wait stats, and Perfmon counters.

To log those to a table permanently, check out the @Output parameters. In this example, I’m writing all of the working tables to the DBAtools database:

EXEC dbo.sp_AskBrent
    @OutputDatabaseName = 'DBAtools'
  , @OutputSchemaName = 'dbo'
  , @OutputTableName = 'AskBrentResults'
  , @OutputTableNameFileStats = 'AskBrentResults_FileStats'
  , @OutputTableNamePerfmonStats = 'AskBrentResults_PerfmonStats'
  , @OutputTableNameWaitStats = 'AskBrentResults_WaitStats'

That creates the tables if they don’t already exist, and then adds the result sets each time it runs so you can track performance over time.

Why was the server slow yesterday afternoon?

When it’s time to query the data back out, you can add the @AsOf parameter with a date & time to see the main result set of alert results as of that moment in time:

The query goes 88mph, and then goes back in time

That returns results within 15 minutes of either side of your time so you can see if there were any problems leading up to (or after) that moment. It’s really useful when someone says the server was slow last night.
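A call with @AsOf might look like this (a sketch; the parameter names follow the earlier example, and the timestamp format is an assumption):

```sql
EXEC dbo.sp_AskBrent
    @OutputDatabaseName = 'DBAtools'
  , @OutputSchemaName = 'dbo'
  , @OutputTableName = 'AskBrentResults'
  , @AsOf = '2016-01-25 16:00';
```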

However, that only shows the basic result set of sp_AskBrent® – not the detailed file/Perfmon/wait stats results. For those, you’re best off querying the tables directly for trending purposes.

v20: You Can Query Your File/Perfmon/Wait Statistics Over Time, Too

When you use the @OutputTableNameFileStats parameter (or its PerfmonStats or WaitStats siblings), the latest version of sp_AskBrent® automatically creates a matching view, too, with a suffix of _Deltas. For example, if your results are in DBAtools.dbo.AskBrentResults_WaitStats, then sp_AskBrent automatically creates a view called AskBrentResults_WaitStats_Deltas. Querying that view will give you trending data over time because it automatically matches up samples for you to get running averages.
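Querying the view is then an ordinary SELECT (view and sort-column names here follow the table names from the earlier output-table example, and are assumptions about your setup):

```sql
SELECT TOP 100 *
FROM DBAtools.dbo.AskBrentResults_WaitStats_Deltas
ORDER BY CheckDate DESC;
```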

This way, if you run sp_AskBrent® in an Agent job every 5 minutes, you can track your SQL Server’s statistics over time. You can query it back out with SSMS, or your favorite reporting tool.
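A sketch of that Agent job, using the standard msdb job procedures (the job and schedule names are made up; adjust the command to match your @Output parameters):

```sql
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Log sp_AskBrent';
EXEC dbo.sp_add_jobstep @job_name = N'Log sp_AskBrent'
  , @step_name = N'Run sp_AskBrent'
  , @subsystem = N'TSQL'
  , @command = N'EXEC dbo.sp_AskBrent @OutputDatabaseName = ''DBAtools'', @OutputSchemaName = ''dbo'', @OutputTableName = ''AskBrentResults'';';
EXEC dbo.sp_add_jobschedule @job_name = N'Log sp_AskBrent'
  , @name = N'Every 5 minutes'
  , @freq_type = 4              -- daily
  , @freq_interval = 1
  , @freq_subday_type = 4       -- repeat every N minutes
  , @freq_subday_interval = 5;
EXEC dbo.sp_add_jobserver @job_name = N'Log sp_AskBrent';
GO
```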

You can grab sp_AskBrent® along with all our other scripts in our handy download pack. Enjoy!