
The SQL Server community has been waiting a long time for an in-depth Windows clustering course on current versions of Windows.

See, Always On Availability Groups and good ol’ Failover Cluster Instances both rely on Windows clustering to manage uptime. If you’re going to do an AG or an FCI, you gotta know Windows.

Edwin Sarmiento, a well-respected cluster expert, has brought out an online course, and it’s big. I’m hearing great things about it from folks I know who have gone through it.

For the next five days, it’s $385. This is his new launch special that also includes his HA/DR deep dive course, Personal Lab course, and Azure HA/DR Hybrid Solutions course. That’s a heck of a deal. If you’re doing clustering, you should go take a look.

Brent Ozar
I make Microsoft SQL Server faster and more reliable. I love teaching, travel, and laughing.

I’m mostly a figurehead here at Brent Ozar Unlimited. My true skills are menu advice, interpretive dance, and reading Wikipedia.

One of the most popular things in our First Responder Kit is our HA/DR planning worksheet. Here’s page one:

Page 1 – how our servers are doing now, versus what the business wants

In the past, we had three columns on this worksheet – HA, DR, and Oops Deletes. In this new version, we changed “Oops” Deletes to “Oops” Queries to make it clear that sometimes folks just update parts of a table, or they drop an entire database. We also added a column for corruption (since your protection & recovery options are different from what they are for Oops moments).

When people first see this worksheet, they usually scoff and say, “The business is going to tell me we never want to lose data, and we’re never allowed to go down.” No problem – that’s where the second page of the worksheet comes in:

RPO/RTO cost range estimates

Find the amount of data you’re willing to lose on the left side, and the amount of downtime you’re willing to tolerate across the top. Where the boxes match up, that’s a rough price range of the solution.

In this version, we added an asterisk to a lot of supposedly synchronous solutions, because they aren’t really – for example, Always On Availability Groups don’t actually guarantee zero data loss. I still keep that sort of thing in the zero-data-loss column because most of the time, it is zero data loss – you just need to understand it’s not a guarantee.

I like printing those two pages front and back on the same piece of paper because it helps management understand that requirements and costs are two sides of the same coin. It’s management’s job to pick the right box (price range), and then it’s IT’s job to build a more detailed estimate for the costs inside the box. The third and final page of the worksheet breaks out the feature differences for each HA/DR option.

If you’re one of the tens of thousands of folks who’ve signed up for email alerts whenever we update our First Responder Kit, then you already got an email this week with these new changes. If not, head on over and pick it up now.

Brent Ozar

So you’d like to fix a bug or contribute code to the First Responder Kit, but you’re new to Github. Let’s get started.

1. Open a new issue on Github.

Go to the Github issues list for this repo, and start a new issue. Here’s the example that I’m going to work on today:

Example of a Github issue

In the issue title, put the name of the script, and some basic details so someone can see at a glance what’s going on.

In the body of the issue, put in as much detail as you can, including:

  • Description of the symptom
  • What you wanted the code to do
  • What it actually did instead
  • Ideally, how you want to fix it

And submit the issue. If you’re going to work on it yourself, assign yourself to it if you have the permissions. (If you don’t, join the Slack channel, then hop in and ask.)

2. Get the Github issue number.

After you add the issue, the number will be in the title, and at the end of the URL:

Getting the Github issue number – here, #324

Now it’s time to work on some code!

3. Download the repository.

First, get a Github account and install the Github desktop app. While this can all theoretically be done through the web, it’s a bit painful.

Then go to the First Responder Kit’s page, and click the Clone or Download button at the bottom right of this screenshot:

Download the repository

That will open your Github app and let you pick where to save the files locally.

4. Make sure you’re in the dev branch, and sync it.

There are multiple active branches – you want to start in the dev branch where pre-release fixes are happening. If you start in the master branch (release), you’re likely to be working with older code, and your changes will get rejected.

Here’s my Github app:

Get in the dev branch and sync it

At the top left of my screenshot, there’s a dropdown for branches – mine says dev. Make sure yours does too, and then click Sync at the far right:

Dev, Sync

5. Create a branch for your code.

In the Github desktop app, click on the new-branch button right next to the dev dropdown:

Creating a branch

For the name, use the issue number from Github, plus a slash and your name – for example, 324/brent. This indicates that it’s your code for that particular issue – remember that other people may end up working on this same issue.

Make sure you’re branching from dev, not master.

6. Write your code, test it, and check it in.

For code guidelines, check out the CONTRIBUTING.md file in the repo.

Test your code against a case-sensitive instance of SQL Server 2008 and 2016, the oldest and newest supported versions. Other testers will also repeat your work, and automated testing will kick in later, but trust me, you want to do a sanity check first. Don’t get your pants pulled down in public.
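
If you’re not sure what an instance is running, here’s a quick sanity check in plain T-SQL:

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('Collation') AS ServerCollation;
-- A case-sensitive instance reports a collation with CS in the name,
-- like SQL_Latin1_General_CP1_CS_AS.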

Then click on the “1 Uncommitted Change” button at the top of the Github desktop app (there may be multiple changed files) and review your changes. They’re color-coded to show which lines have been removed from scripts, and which have been added.

Only check in changes and files that relate to your issue. If you accidentally changed a file that you didn’t mean to, in the Github app, right-click on that file name and click Discard Changes.

Checking in your fix

At the bottom left is your check-in title and description.

The title should:

  • Start with the issue number, like #324 – this is Github magic to link to an issue
  • Include the names of the files that were changed – this is helpful to others when they’re scanning a long list of titles of check-ins
  • Include a brief description of the issue

Then click Commit, and click Publish at the top right to sync your changes with the rest of the world. This doesn’t merge your code with the toxic sewer that is the dev branch just yet – you’re still on an island by yourself, but at least other people can get to your code.

7. Announce that your code is ready for testing.

Go back to your Github issue, and add a comment (don’t close it) that says you’re ready for testing. Include any notes on how people can reproduce the issue, or know that your code works.

Ready for testing

In the Labels dropdown on the right side, add the label ReadyToTest so folks can find your work more easily.

Someone else has to test your code before it gets merged into the main branch, so it’s in your best interest to make it as easy and friendly as possible for someone to know that your code works.

8. Ready to merge? Add a pull request.

After someone else has tested your code, and you’re both happy that it works, open the Github app and click Pull Request at the top right:

Creating a pull request

The check-in title and description will flow in here automatically – review them, then click Send Pull Request. The maintainers will take a look at it and make sure things are kosher.

Stuff you don’t have to do

Release notes and changelog – you’ll notice that the top of each script has a changelog, version number, date, and release notes. Those are added when we consolidate a group of pull requests into a single release.

Documentation – if you don’t feel like writing it, that’s totally okay. We’re happy just to get code in at least, and we understand that the kinds of folks who write code don’t always overlap with the kinds of folks who like to write documentation. If you write things like a new check for new SQL Server problems, just know that we may not merge your code in with the dev branch until it’s also been documented.

For questions about the process, hop into the Slack channel. If you’re not already a member, sign up here. And thanks!

Brent Ozar

This August, we’re goin’ back to Texas.

Last year, we flew the team out to Round Rock, Dell’s HQ, and ran all kinds of SQL Server experiments in their hardware labs. We broadcast a whole bunch of webcasts live on the net for you to watch and see what we learned.

This year, we’re going back again, and the lineup includes sessions on:

  • How to Measure SQL Server
  • The Unbearable Lightness of BEGIN
  • Downtime Train
  • Performance Overhead of TDE, Query Store, BPE
  • And everybody’s favorite from last year: Watch SQL Server Break and Explode

Head on over and register now for free. Space is limited!

Brent Ozar

Company retreat, 2016

I always wanna be honest with you, dear reader, and let you see how running a lil’ consulting shop goes. It’s been fun sharing a lot of our happy growing moments along the way. This one, though, is a lot less fun to share.

Yesterday, we had to let go of Angie, Doug, and Jessica.

The background: we basically sell emergency room services for SQL Server, and we’ve consistently been backlogged with work. In early 2015, we figured that me doing sales was holding the company back. If we hired a salesperson, we believed we’d be able to acquire more customers faster, and sell more work to past customers. So we hired Jessica, and staffed up on consultants.

This year, we’ve learned that a salesperson can’t really bring in more emergency room work (seems obvious in retrospect, but we were hoping for follow-up work), so I had to make a tough decision. I had to right-size the company back to where we’re regularly busy, maybe even turning some work away, just to make sure that we’re profitable overall. The training side of the business is still doing really well, and the company overall isn’t in trouble – but it would have been soon.

I’m bummed out, obviously, because it’s a failure on my part. These people are my friends, and I wanted to build a place where they could thrive for the rest of their working careers if they so chose. I’d always heard it’s tough to bridge the chasm between a 3-FTE consulting shop and a 10-FTE one, and now I really understand why.

Wanna keep in touch with them? Here are their emails:

  • Angie Rudduck in Portland, Oregon – if you’re looking for an incredibly upbeat, fast-learning junior DBA or SQL Server support person, I’d highly recommend Angie. She did a great job streamlining our backup/recovery/CHECKDB education process.
  • Doug Lane in Cedar Rapids, Iowa – want to make your SQL Server faster, fast? Doug has been doing high end performance tuning for the last few years, and he’d be a huge asset to any SQL Server shop that needs a DBA who can build bridges between developers and SQL Server.
  • Jessica Connors in Chicago, Illinois – in the database world, salespeople often have a reputation for being slimy, but Jessica is totally different. She listens to your pains and matches you up with the right relief. She was a pleasure to work with.
Brent Ozar

SQL Server 2012 SP3, 2014 SP2, and 2016 users are going to find a lot of stuff to love in here. The new per-query memory grant fields in the plan cache are exposed in sp_BlitzCache, and sp_BlitzFirst also shows memory grants for live running queries with @ExpertMode = 1. Here are the details:
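
If you want to eyeball the raw columns yourself, here’s a minimal sketch against the plan cache (these columns only exist on 2012 SP3, 2014 SP2, and 2016):

SELECT TOP 10 qs.max_grant_kb, qs.max_used_grant_kb,
       qs.execution_count, t.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
ORDER BY qs.max_grant_kb DESC; -- biggest memory grants first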

sp_BlitzCache v3.1:

  • Show cost for stored procedures. #339
  • Warn about trace flags added at the query level, and global trace flags. #361
  • Add warnings about Remote Queries. #315
  • Do not show Forced Plans warning if the real cause is forced parameterization. #343
  • Fix divide-by-zero error if Cost Threshold for Parallelism is 0. #358
  • Fix warning for unparameterized query. #334

sp_BlitzFirst v25 (The Artist Formerly Known as sp_AskBrent):

  • Add new memory grants columns to 2012-2016 live queries output. #362
  • Add SQL login to live queries output. #354
  • Filter Perfmon counter display to skip counters with zeroes. Still logged to table though. #356

sp_Blitz v53.1:

  • Warn about 2016 Query Store cleanup bug in Standard, Evaluation, Express. #352
  • Updating list of supported SQL Server versions. #344
  • Fixing bug in wait stats percentages. #324

sp_BlitzIndex v4.1:

  • Compression information in @Mode = 2. #18
  • Use recently-modified check to improve indexes-not-in-use recommendations. #220
  • Alphabetical sort for @GetAllDatabases = 1, @Mode = 2 output. #351
  • Remove per-day cost filter for missing indexes in @Mode = 4. #338
  • Missing index benefit is now labeled per-day to make it more obvious. #330

To get involved with the development & testing, head over to the Github repository, or download the full First Responder Kit with our checklists & guides.

Brent Ozar

This week, Erik, Tara, Jessica, Doug, and Angie discuss queries, installing multiple instances of SQL Server on a Windows VM, using DENY database roles, migrating databases to Amazon RDS, availability groups, using filtered indexes, and more!

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.


Office Hours Webcast – 2016-07-13


How should I convert a database from Access to SQL Server?

Jessica Connors: From Steven Mitchell, he says, “For converting MS Access 2010 to SQL 2012 would using SSMA tool or upsizing tool be preferred? Or some other method?” I have a feeling that we don’t know.

Tara Kizer: We don’t use Access.

Angie Rudduck: Sorry.

Jessica Connors: That’s important.

Doug Lane: Having used Access in the past, I would shy away from that because Access has a nasty habit of rewriting things in a way that just really makes no sense. It might be logically equivalent but you’re probably better off just importing. Like if you want to do the quick and dirty way, just do import data from Access data source. Go through all the tables, import the data that way. Because you’re going to end up with—and maybe you want this, but probably not—like keys coming through, constraints, weirdness. You’ll end up with fields that get converted into datatypes you don’t want. So you’re probably better off just importing that data and manually rigging up how you want those columns to come in.

Jessica Connors: All right, enough with Access.


When will Microsoft release an updated BPA?

Jessica Connors: Question from Henrico. He says, “When will MS release BPA for 2014?” Do we know?

Erik Darling: Ask MS.

Jessica Connors: Microsoft.

Doug Lane: I’m not even sure what BPA he’s referring to.

Tara Kizer: It’s the Best Practice Analyzer but it’s already out. I used it at my last job. Maybe the question is about 2016 since that was just released?

Erik Darling: Probably.

Jessica Connors: Do not know.


What is free virtual memory?

Jessica Connors: Wow, James really wants us to answer this question. He asked it twice, three times. “What is free virtual memory? What is free virtual memory? How to resolve the issue?”

Tara Kizer: What are you trying to solve? We need some context.

Jessica Connors: Yeah, anyone know what is free virtual memory? Is that a proper question?

Erik Darling: I guess it’s when you get virtual memory for free from the virtual memory store.

[Laughter]

Erik Darling: It was on sale that day.

Angie Rudduck: Yeah.

Jessica Connors: All right, James, we need more context.


Should I enable Lock Pages in Memory on VMs?

Jessica Connors: Onward to Edward. He says, “I see conflicting posts on the internet for lock pages in memory versus VMs. Thoughts? Pros and Cons? I’m thinking of adjusting the minimum memory to one half of max.”

Tara Kizer: I always set lock pages in memory, just as a standard across servers. I worked in large enterprises so we just had a document to fall back on – lock pages in memory was always set on the dedicated database servers. As far as min memory at half of max, that’s also what our standard was at these large corporations.

Erik Darling: Yeah, I think for VMs more specifically though you have to be careful because if you’re not setting reservations at the host level, then I think stuff can get ripped away anyway.

Tara Kizer: Yeah, we never even worried about that on VMs but maybe that’s just… I don’t know.

Erik Darling: I mean, it’s a VM, you don’t care about performance anyway.

Doug Lane: Didn’t this come up as a question last week, where Brent answered it too, and he said he could go either way but he usually does it?

Tara Kizer: That’s right, yeah.

Doug Lane: Rings a bell.

Angie Rudduck: He said it was not the first thing to worry about or something like that.

Doug Lane: Yeah, yeah.

Angie Rudduck: If that’s the only thing you have to worry about, good job.
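
(Postscript: a minimal sketch of Edward’s plan – the memory numbers are placeholders, so size them for your own server. Lock Pages in Memory itself is granted in Local Security Policy, not in T-SQL.)

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 57344;
EXEC sp_configure 'min server memory (MB)', 28672; -- half of max
RECONFIGURE;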


What’s the fastest way to copy data between servers?

Jessica Connors: Question from J.H. He said, “Would select * into be the fastest way of copying large data, or some type of bulk method, or something else from one server to another via linked server?” He says, “My tests look like select into is very fast.”

Erik Darling: It depends on what version you’re on. On SQL Server 2014 and up, select into can run in parallel, which is a nice added bonus. If you’re going to move a large amount of data, I would probably shy away from select into and use a method that sort of respects your transaction log a bit more and breaks the work up into batches.

Tara Kizer: I would actually use SSIS for this since it can do it in bulk. It has a fast way of transferring data between servers.

Doug Lane: What do you guys think about minimal logging in that case?

Erik Darling: If it works, it works. It doesn’t always work. Don’t forget that TABLOCK.

Doug Lane: Right.

Tara Kizer: The problem with minimal logging is you’re reducing your recovery points though. So, yeah, you can minimally log things but you don’t have recovery points in that window.

Erik Darling: Yeah, so if it’s a one-time data move and you don’t care about anything during that window, then go for it. But otherwise, if your database is in full recovery mode, you’re not going to get much out of minimal logging.
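
(Postscript: a rough sketch of the batching Erik describes – the linked server, table, and column names are all hypothetical:)

DECLARE @maxId BIGINT = 0, @rows INT = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.SalesCopy (Id, SaleDate, Amount)
    SELECT TOP (50000) Id, SaleDate, Amount
    FROM RemoteServer.SalesDB.dbo.Sales
    WHERE Id > @maxId
    ORDER BY Id;

    SET @rows = @@ROWCOUNT;
    -- Each batch commits separately, which keeps the transaction log in check.
    SELECT @maxId = MAX(Id) FROM dbo.SalesCopy;
END;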


What’s the best way to recover free space from data files?

Jessica Connors: I’m going to butcher this name—I don’t know how to say it. “What is the best way to regain the free space in data files after deleting a large amount of data?”

Tara Kizer: Well, I mean, it’s shrinkfile. So a one-time shrink is okay if you have to delete a lot of data and you want to reclaim that space. If you don’t need to reclaim that space, leave it there. If you have enough storage, just leave it there because when you shrink the file down you’re going to be causing some performance issues, fragmentation. One time shrink though, go ahead and do it. DBCC SHRINKFILE.

Erik Darling: The thing with shrinking data files, at least when I’ve done it is that if you only shrink it to the point where you have free space, like if you just sort of truncate it to where your data ends or a little bit over that, like a gig or 512 megs over that, you don’t really see the fragmentation problems because you’re just getting rid of empty space at the end of the file. So you kind of don’t see the same problems, at least I haven’t. I just shrunk down a bunch of databases on my availability group VM thing because I blew up Stack Overflow with some horribleness. But I shrunk it down to just sort of where the [while 00:05:48] things ended and didn’t see much fragmentation change. So maybe that’s a 2016 thing though. Be careful with that.

Tara Kizer: Maybe. I’ve always just shrunk it down so that there was 10 percent or 20 percent free space that way autogrows didn’t have to occur immediately after we’re done shrinking.

Erik Darling: That’s a good plan.
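
(Postscript: the one-time shrink Tara describes looks like this – database and file names are placeholders, and the target size is in megabytes:)

USE MyDatabase;
-- Shrink to ~50GB, leaving free space so autogrow doesn't fire immediately.
DBCC SHRINKFILE (N'MyDatabase_Data', 51200);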


How can I run cross-server queries without linked servers?

Jessica Connors: Question from our friend Justin. He says, “Other than using a linked server, do any of you SQL super geniuses know of a way to run a query from instance A against instance B?”

Tara Kizer: You can use OPENQUERY but usually we want you to do this work in the application. Have the application query the two different data sources. OPENQUERY, you can use OPENQUERY, it’s just like a linked server.

Erik Darling: It’s not necessarily better, it’s just different.

Tara Kizer: Yeah. It does have a benefit of getting around an issue where with a linked server it can query the entire table even though you have a where clause on it. So if you have a ten million row table and your results set is only going to be ten rows, linked server can pull over those ten million first and then do the filtering. OPENQUERY can get around that issue.

Erik Darling: There’s a really good talk by Conor Cunningham, one of the bigwigs at Microsoft in the SQL Server department, on the SQLBits website. So the SQLBits conference, they put all their videos online. They did one a few years back on distributed queries where he talks a lot about the pros and cons of different methods of querying across servers and how it works behind the scenes. You should definitely check that out if you’re really interested in querying data across servers.
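
(Postscript: a minimal OPENQUERY sketch – InstanceB is a hypothetical linked server name. You still have to define the linked server, but the query text ships to the remote side as-is, where clause and all:)

SELECT *
FROM OPENQUERY(InstanceB,
    'SELECT Id, Reputation FROM StackOverflow.dbo.Users WHERE Reputation > 10000;');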


Why shouldn’t I install multiple instances in a VM?

Jessica Connors: Question from Raul. He says, “Are there any downsides to installing more than one SQL Server instance on a Windows VM?”

Tara Kizer: How many do you have, Erik?

[Erik puts up 10 fingers]

Tara Kizer: 10?

Erik Darling: More. I’d stick my foot up, I don’t want to gross anyone out.

Tara Kizer: On a test system, go for it. But on a production system, we don’t recommend it.

Erik Darling: Yeah, stacked instances in any environment, whether it’s physical or virtual is just bad news. If you’re already virtualized, why are you stacking instances? Spin up a new VM, it’s right there anyway.

Jessica Connors: I feel like we get that question a lot.

Angie Rudduck: Yep. That and shrinking, or replication, which there’s already one in there too.

Jessica Connors: Or, what are your favorite monitoring tools?

Erik Darling: I don’t mind answering that. That’s a nice question.

Jessica Connors: But it never changes. I mean, has it changed?

Doug Lane: Well, they’re all nice questions.

Jessica Connors: There’s no such thing as a dumb question.

Erik Darling: I just assume when someone asks that question it’s a vendor plant. “They always mention us! Do it again!”

[Laughter]

Doug Lane: “You don’t believe me? Watch, I’ll ask.”

Erik Darling: It’s the same thing every time, it’s amazing.


When would you use the DENY database roles?

Jessica Connors: Question from Samuel. He says, “What would be a scenario where you would use the DENY database roles?”

Erik Darling: Developers.

Angie Rudduck: Yeah, I think he says two things. He says, “Why does DENY even exist? If you give datareader, isn’t it the same as denydatawriter?” But that’s not necessarily true. I think the key, my understanding anyway, is that SQL applies the most restrictive permission. So if there’s a denydatawriter, even if you have datawriter as well, you’re getting denied. So I think it’s kind of a safeguard, but I don’t know.

Doug Lane: It’s an easy way of making exceptions security wise. So you say, “They should be able to write across all these databases, except I’m going to DENY this particular one.” So the DENY is very explicit whereas it can be used as sort of an exceptions tool to just kind of cut out, “All right, don’t give them this. Give them this, give them this. But they can do everything else that the other permission that I gave them allows them to do.”

Tara Kizer: We used the DENY at my last job for sensitive data. We had active directory groups. We had a lot of people in certain active directory groups and a lot of people in a different one. One specific group got the denial on certain columns in a table, like address information, emails, sensitive information. The other group was allowed to see it but if you were in both groups, you got the denial because DENY overrides the grant.

Angie Rudduck: I like that.

Doug Lane: I was going to say you’ll get a lot of questions on that if you ever take a Microsoft certification exam.

Tara Kizer: Oh really?

Doug Lane: Yeah.
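
(Postscript: a tiny sketch of DENY overriding GRANT, along the lines of Tara’s setup – all names are hypothetical:)

CREATE ROLE Readers;
CREATE ROLE NoSensitiveData;
GRANT SELECT ON dbo.Customers TO Readers;
DENY SELECT ON dbo.Customers (Email, HomeAddress) TO NoSensitiveData;
-- A user in both roles can query dbo.Customers, but any query
-- touching Email or HomeAddress fails: DENY always wins.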


How do I get data into Amazon RDS?

Jessica Connors: Question from Kimberly Lu. She’s migrating to Amazon RDS. She says, “My first experience migrating a database to Amazon RDS has been painful because I could not use backup/restore. I had to backup/restore to a local server, generate scripts, recreate users, import/export data using wizard. Do you have a less painful process?”

Tara Kizer: Nope. It’s RDS. If you’re using Amazon EC2 it would be easier since it’s a virtual machine, but yeah, RDS, it’s like Azure SQL, right? Limited features.

Erik Darling: Yep.

Angie Rudduck: Somebody else asked about going to the cloud and how to do it. I think that that’s the key. If you’re doing the VM versus the hosted database, if you go to the hosted database at all, you have to script all of your data into or out of. There’s no backup/restore to hosted databases. So if you’re not prepared for that, maybe don’t go to a hosted database.

Erik Darling: Yeah, the one sort of exception is going to Azure. Well, you can use it for RDS too I believe. There’s something called the SQL Database Migration Wizard. It’s available on CodePlex. It just does sort of a BCP out and in to Azure. I’m pretty sure you can use it for RDS as well but I haven’t tried. So it’s okay for that but it’s still slow if you’re migrating a lot of data because you are doing a BCP out, writing a file, blah, blah, blah. Not the fastest thing in the world.


Can I use different drive letters with Availability Groups?

Jessica Connors: Availability group question, question from Tim Smith. He says, “One of the benefits of availability groups is not having to have identical hardware. Does that include drive/drive letters? I tested and it looked like you could have a database on different drives on each node but a recent outage has me questioning that.”

Tara Kizer: I’m pretty sure you can. I’ve always had the same drives across because it’s just so much easier with restores but I’m pretty sure it acts just like mirroring where you can have different drives because you’re telling it what to do during the restore command. I’m not too sure what issue he’s referring to on the failover though. But I’ve always used the same drives. I’m pretty sure that it supports it across different drive letters.

Erik Darling: Yeah, anytime that I’ve had to build a cluster or anything else, I’ve always made sure that the drive—even like different servers using the same sort of hardware, I always have the drive letters be the same and do the same pathing for my database and log files, just so that at least it’s consistent across all my stuff and I know where to expect things to be. So you can do it, but it’s not really a best practice to have things be totally different across servers like that, especially when you are counting on a one-to-one copy when you get over there.

Tara Kizer: Yeah, we would need more info on what happened when you did the failover.

Angie Rudduck: He said, “On the restore of the database after failover the NDF file was gone.” I feel like there was a different problem there. Like maybe you had a problem with your drive and somebody actually deleted something. Maybe there wasn’t just a failover. Maybe that’s why you had a failover.

Tara Kizer: I wonder if the NDF came later after the availability group was set up? If another file was added and it didn’t get added to the secondaries? I don’t know. I haven’t tested that.

(Postscript: yes, this is where different drive letters will fail.)
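
(And if you do run different drive letters, the restore command on the secondary is where you remap paths – a sketch with placeholder names:)

RESTORE DATABASE MyDB
FROM DISK = N'\\FileShare\Backups\MyDB.bak'
WITH MOVE N'MyDB_Data' TO N'E:\Data\MyDB.mdf',
     MOVE N'MyDB_Log' TO N'F:\Logs\MyDB.ldf',
     NORECOVERY; -- stays restoring so it can join the availability group as a secondary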


Can you turn off auto-updating for SQL Server 2016?

Jessica Connors: Question from Scott Kelly. Isn’t that an astronaut? Scott Kelly? Didn’t he spend a year in space?

Erik Darling: Yeah.

Jessica Connors: Scott Kelly is here, guys.

Doug Lane: That’s Commander Scott Kelly to you.

Erik Darling: How’s your bone density?

Jessica Connors: Yeah, they did an interesting test on him and his twin. Let’s see here. “Is SQL 2016 auto updating not able to be turned off?”

Doug Lane: Oh, boy.

Angie Rudduck: Like Windows updates but SQL updates?

Erik Darling: You can turn SQL updates off. You can turn automatic updates off for anything. Have you had an update? Because I haven’t.

Doug Lane: So there’s the phone home thing that I think in—what is it—Developer and in Express editions you can’t turn it off. If I remember right. But I don’t know if that’s the same as the auto update. I think you can disable auto updates from SQL Server.

Erik Darling: Yeah, the Customer Experience Improvement Program.

Jessica Connors: Yeah. Scott Kelly wants us to know that he did not work for NASA.

Angie Rudduck: No, he did. He’s not the astronaut but he did work for NASA.

Doug Lane: How about that.

Jessica Connors: Oh, he did work for NASA. That’s awesome.

Tara Kizer: I went backpacking with a guy who currently works for NASA this past weekend. He’s a physicist or something like that. It was really cool.

Angie Rudduck: Did he do better at the hike?

Tara Kizer: He stayed back with the person that was having high-altitude issues, or altitude sickness issues, I should say. He actually had to carry her out on his back because it was so bad. Yeah, I’ll tell you guys later.

Jessica Connors: You did a fourteener, right? You made it to the top…?

Tara Kizer: I completed it. It was rough. It was really rough.

Jessica Connors: Good job, Tara.


Can you do columnstore indexes on top of GUIDs in SQL 2012?

Jessica Connors: From Joshua. He says, “Is there any workarounds that allow columnstore indexes with a G-U-I-D in SQL 2012?”

Tara Kizer: GUID.

Jessica Connors: I’m like, it’s not GUI.

Tara Kizer: I don’t know, Erik?

Erik Darling: What is the question? “Are there any workarounds to using a columnstore index with a GUID?”

Doug Lane: Do they not allow GUIDs in 2012? I don’t even know.

Tara Kizer: I don’t like GUIDs so I don’t use them.

Erik Darling: Yeah, columnstore in 2012 in general is kind of bogus. You couldn’t update it and there were a lot of limitations so probably not. I haven’t tried explicitly to use a GUID with it and I certainly haven’t tried to use a GUID with it in 2012 because that was just a no fun version of columnstore. It was just lackluster.


Do you recommend Converge Solutions?

Jessica Connors: Question from Julius. He says, “Do you folks recommend running SQL Server on Converge Solutions? Any experience, drawbacks, benefits, etc.?”

Tara Kizer: I don’t even know what that is. Do you guys know?

Doug Lane: I have never heard of it.

Jessica Connors: All right, never heard of it, Julius.

Doug Lane: Sorry.

Tara Kizer: Our answer is no.

[Laughter]

Tara Kizer: I’ll have to look that up.

(Postscript: I’m guessing they weren’t referring to a brand name, and instead just general converged infrastructure, which is a pretty big/broad topic.)


Why is only one of my TempDB data files growing?

Jessica Connors: Ronny… oh boy, this looks long. Do you want to read it, Angie?

Angie Rudduck: Oh sure. So Ronny says, “I have multiple tempdb files and only one file is growing, actually the second file that was created. All other files are still reflecting the initial size.” He has trace flags 1118 and 1117 enabled—oh, no, just 1118—and wants to know whether he also needs to add 1117. Does he need to delete the other files and start over before adding the additional trace flag?

Erik Darling: No. What you need to do is make sure that you grow all the other files to the same size as the second file. Elsewise, all of your queries will still be attracted to that larger data file and it will keep growing. So if you can, grow out your other files to match the size of the second file. If not, you’re going to have to deal with the kind of painful and possibly corrupting experience of shrinking a tempdb data file.

Tara Kizer: They actually are saying it hasn’t been a problem—or, Paul Randal said corruption on shrinks has not been a problem in several years. They fixed it with 2012 I believe he said, or maybe even earlier. They just never made it public.

Erik Darling: Oh, okay, well, that’s good. I hope you’re using 2012 or later then, pal.

Tara Kizer: It might have been earlier. I can’t remember if it was an email or what it was.

Angie Rudduck: Yeah, if only they publicized when corruption is not happening anymore.

Erik Darling: They should just tell people that shrinking databases will cause corruptions so everyone will shut the heck up about it.

Doug Lane: So you also want to make sure that your file growths are set to zero, otherwise your tempdb files will keep growing. You don’t necessarily want that. It’s better to set it at the right size and then just kind of let it go with that because that may be why you ran into trouble in the first place is that you’ve got file growth on that one file and not on the others.

Tara Kizer: I actually quote/unquote disagree with that.

Doug Lane: Okay.

Tara Kizer: It’s not that that’s bad, it’s just it has to do with monitoring. So on the Brent Ozar website it says to grow your files all the way out to whatever that drive is for tempdb. If you do that, then your monitoring software is going to be alerting, the on-call DBA is about to get a disk space issue. Now of course, you could have an exception for the tempdb drives but I try to avoid exceptions. So instead, grow your files out to what they need to be and then keep autogrowth on. Have the file sizes be identical and you start out with the same exact autogrowth and they should grow proportionally based upon free space. Should not have an issue. If you do have an issue, I wonder what’s going on. You might have some kind of weird thing going on. I like to keep 20 percent free space to avoid alerts. Then when you do get an alert, grow out the mount point or drive.

Angie Rudduck: That’s for good IT teams that have alerts.

Tara Kizer: Yes.

Angie Rudduck: If you don’t have alerts, Doug has a good point. If you never allow them to grow, you never have your disk go over space, but your tempdb could go offline.

Tara Kizer: That’s the problem with that solution: if some wild query runs and needs more space in tempdb, it’s going to fail. Whereas if you had free space – well, if you have alerts in place, DBAs are going to start receiving alerts that we’re running out of space, and you might be able to add the space before the query fails.
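
(Postscript: evening out the files looks something like this – the logical file names and size are placeholders; check sys.master_files for yours:)

ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev', SIZE = 8192MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev3', SIZE = 8192MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev4', SIZE = 8192MB);
-- Match every file to the largest one so allocations spread evenly again.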


How can I move all my nonclustered indexes to another filegroup?

Jessica Connors: Speaking of running out of disk space, Gordon asks, “I’m running out of disk space for a one terabyte database with a single data file. So the plan is to move clustered indexes to a new file and a new filegroup. Given that everything is currently on the primary filegroup, would it be a good idea to move all the non-clustered indexes as well?”

Erik Darling: I would set up two different filegroups actually, because if you’re doing this sort of work and you want to be able to restore the database as files or filegroups separately, you’re not going to want everything in the primary filegroup – you’re going to need that primary filegroup online first in order for the database to be accessible while you restore the other files and filegroups. So I would actually create one filegroup for clustered and one for nonclustered indexes and move them off, one into each, so I can keep the primary filegroup small with just the system objects and stuff in there. Be careful depending on what version you’re on, because not all rebuilds can be online if you have max column types prior to 2012. So if you’re planning on a seamless move, mind your datatypes.
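
(Postscript: the bones of Erik’s plan in T-SQL – filegroup, file, and index names are hypothetical:)

ALTER DATABASE BigDB ADD FILEGROUP NCIndexes;
ALTER DATABASE BigDB ADD FILE
    (NAME = N'BigDB_NC1', FILENAME = N'E:\Data\BigDB_NC1.ndf', SIZE = 100GB)
TO FILEGROUP NCIndexes;

-- Rebuild each nonclustered index onto the new filegroup:
CREATE NONCLUSTERED INDEX IX_Posts_OwnerUserId
    ON dbo.Posts (OwnerUserId)
    WITH (DROP_EXISTING = ON, ONLINE = ON) -- ONLINE needs Enterprise Edition
    ON NCIndexes;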


Are there any gotchas with SQL 2014 SP2?

Jessica Connors: Did SQL Server 2014 SP2 just come out? I thought they weren’t doing that anymore. You guys just told me they weren’t doing that anymore.

Tara Kizer: It will be rare.

Erik Darling: Starting in 2016.

Jessica Connors: You’re liars. Gotcha. “Any issues with this SP that y’all are aware of?”

Tara Kizer: Not yet.

Erik Darling: No, it’s a lot of fun.

Tara Kizer: It’s too new.

Angie Rudduck: It is too new.

Erik Darling: A lot of the stuff in Service Pack 2 was oddly catching SQL Server 2014 up to Service Pack 3 of 2012. It did add a couple of other things, but most of it was just playing catch-up with a lot of the stuff you can see in execution plans, and finding out about memory grants.

Tara Kizer: Oh, the memory grant issue. Memory grant columns are in Service Pack 2?

Erik Darling: They are and I’m going to be updating stuff.

Tara Kizer: Yeah, I’ve got a client that’s been waiting on those columns so I’ll let them know. Just real quick on the Service Pack 2 thing. When I said it’s too new to know about issues, just remember that service packs are fixing hundreds or thousands of bugs. So you’re more at risk by not installing it than you are by installing it. Service packs and hotfixes aren’t necessarily a bad thing, and you don’t necessarily have to wait a while to see how other people are doing.

Erik Darling: If you’re worried about it, install it on a dev server. Let it hang out for a month and do your thing on it and then push it to prod. If you don’t have a dev server, then quit your job.


Are there any downsides to filtered indexes?

Jessica Connors: Question from Joe. He says, “Are there any particular downsides to using filtered indexes? I’ve recently been performance tuning some queries that have non-parameterized search arguments and they’re like magic.”

Tara Kizer: They are like magic. The only issue is if you have a query that has the opposite of your filtering condition, then it’s going to have to do a scan. But you know, you want your filtered indexes to match what your queries are usually going to do. Like IsDeleted equals zero, for instance. If you have an IsDeleted equals one query, that’s probably going to be a maintenance job that runs at night to clean out data. It’s okay for those things to do a scan.

Doug Lane: The higher the cardinality on whatever it is you’re filtering on, the more effective that index will be. So if you’ve got data that’s split 50/50, it will be helpful. But if you’ve got data where you’re going to be looking for something where the filter condition there is down to say ten or fewer percent of the rows, then you’re in really good shape. Incidentally, I just put out a statistics course that does include filtered stats, which you get with filtered indexes.

Tara Kizer: All right.

Doug Lane: So you can check that out too.
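
(Postscript: a minimal filtered index sketch matching Tara’s IsDeleted example – table and column names are hypothetical:)

CREATE NONCLUSTERED INDEX IX_Orders_Active
    ON dbo.Orders (CustomerId, OrderDate)
    WHERE IsDeleted = 0;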

Jessica Connors: I think about three months ago we just included access to our entire SQL training library for anyone that does the SQL Critical Care. If you want any of these guys to take a look and assess the health of a database, let me know. Reach out. For some reason, July and August are slower for us. I think it’s because summer.

Tara Kizer: Everyone is vacationing.

Angie Rudduck: Too much beer and barbecue.

Jessica Connors: Yeah, vacations. So if you are on this webinar and you reach out to me, Jessica@BrentOzar.com, we are supplying some special deals within the next two months for our SQL Critical Care product. So not only do you get the Critical Care but also the additional training, access to our SQL training library.

Angie Rudduck: Isn’t Brent coming out with a new one? Not us three… but the other Brent?

Tara Kizer: Not these fabulous Brents.

[Laughter]

Angie Rudduck: Isn’t he coming out with a new video, Doug? I just expect you to know.

Doug Lane: I think so. I know I’ve got another one coming.

Angie Rudduck: Oh, see. There we go.

Tara Kizer: He painted his wall green so he must be working on something soon.

Doug Lane: Oh.

Tara Kizer: Not you—Brent.

Doug Lane: Oh, Brent, yeah. I’m thinking, “Is my tint that bad?” More stuff on like execution plans and querying.

Angie Rudduck: Nice.

Jessica Connors: Cool beans. All right, guys, that is all the time we have for today. I’ll let you go. Adios.

Erik Darling: Later.

Brent Ozar Unlimited Team
Brent Ozar Unlimited is a boutique consulting firm focused on understanding your environment and strategy. We partner with you to objectively identify pain points and develop solutions that align to your business goals. Your experience comes first; we share our knowledge and expertise to help you.

Our class inventory has been growing lately – especially with Doug’s awesome new video on statistics – and it’s time to make things easier:

The DBA Bundle – $399 – includes our classes on HA/DR, VM/SAN/HW, and job interviews. (Because hey, you might get this stuff wrong and need a do-over.) A $627 value.

The Performance Bundle – $499 – includes execution plans, indexes, statistics, T-SQL, and more with a list price of $984.

The Everything Bundle – $699 – our legendary package has learned a new trick: now, you also get early access to our currently-in-production training classes. Right now, that’s our upcoming Performance Tuning When You Can’t Fix the Queries class, and it’s only available in the Everything Bundle. Total value – over $1,700.

And this month, to celebrate the new lineup, the Everything Bundle is just $599. Enjoy!

Update: if you have questions about your training videos, please email us – click the Contact button at the top of this page. We don’t really do support in blog comments. Thanks!

Brent Ozar

Here at Brent Ozar Unlimited, we have a proud tradition of not blaming index fragmentation for everything. There are points where you should deal with it, but they’re probably not 5% and 30% fragmentation at 1,000 pages. That’s not what this blog post is about, though. I’m hoping to clarify why we’re more interested in up-to-date statistics, and why statistics outside of indexes aren’t really the most helpful thing. If they were, we’d all just create statistics and every query would magically blaze from select to offset without a care in the world.

Statistics: It’s What’s Inside That Counts

Statistics are what SQL Server uses to figure out how many rows to expect from accessing a data structure. You can do some things that fundamentally break this: using functions in joins or where clauses, using local variables or OPTIMIZE FOR UNKNOWN, or using table variables without recompile hints. And a sniffed parameter can let SQL run a query without guessing at all – it already guessed once. No tag backs. As you may imagine, this is important information for SQL to have for running your queries optimally.

Indexes: Pride and Vanity

Indexes of the nonclustered variety contain subsets of your table or view’s data. Clustered ones are your table or view data ordered by the clustering key(s). Focusing on the nonclustered variety, they’re the “and the [band name]” to your clustered index’s “[lead singer name]”, and they’re great for providing SQL with a more appropriate data structure for your query.

If you don’t need to select all the columns, and you do need to filter, join, order by, or aggregate a column not in the key of the clustered index, nonclustered indexes get a solo after the chorus. Give the drummer some.

Nonclustered indexes will, under normal circumstances, get a statistics object created with rolling density information for the key columns going left to right, and a histogram on only the leftmost column.
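
You can see all of that with DBCC SHOW_STATISTICS – a quick sketch, with a hypothetical index name:

DBCC SHOW_STATISTICS ('dbo.Users', 'IX_CreationDate_Reputation');
-- Three result sets come back: the header (rows, rows sampled, last updated),
-- the density vector (key columns left to right), and the histogram
-- (up to 200 steps, on the leftmost key column only).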

I AM I SAID

With that out of the way

Why do we care more about statistics being updated than indexes being fragmented? Largely, because reading pages with some empty space from a fragmented index is oh-so-very-rarely the root cause of a performance issue. Especially if those pages are already in memory. Out of date statistics can allow SQL to continue to make some really bad guesses, and keep giving you a lousy execution plan no matter which way you tune your query.

The bigger your table gets, the worse the problem gets. Prior to 2016, if you don’t turn on Trace Flag 2371, about 20% of your table’s rows need to change before an automatic statistics update kicks in. For a 100 million row table, that’s 20 million modifications – a long way off. Poor cardinality estimates here can really sink you. Rebuilding indexes for a 100 million row table is a B-U-M-M-E-R.
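
Want to see how close a table is to that threshold? sys.dm_db_stats_properties (2008 R2 SP2 / 2012 SP1 and up) exposes the modification counter – the table name here is hypothetical:

SELECT s.name, sp.rows, sp.modification_counter, sp.last_updated
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.BigTable');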

Log Shipping? Mirroring? Availability Group? Good luck with that.

Crappy server? Low memory? Slow disk? Dead man walking.

You may ultimately spend more time and expend more server resources defragmenting indexes than your queries will spend reading extra pages from fragmented indexes. Rebuilding or reorganizing large indexes can be a special kind of brutal.

Consider the process

Read a bunch of index pages with sys.dm_db_index_physical_stats to figure out if there’s fragmentation, then reorganize or rebuild based on what it tells you (a sketch of that first step follows the list below).

  • Reorganize is online but single threaded and can take FOREVER on big tables, especially if you’re compacting LOBs.
  • Rebuild is offline and single threaded in Standard, online and potentially parallel in Enterprise, but you better hope you’re patched up so you don’t corrupt anything.
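
That first step alone isn’t cheap. Here’s roughly what it looks like – the 1,000-page cutoff is the traditional (and debatable) one:

SELECT OBJECT_NAME(ps.object_id) AS table_name,
       ps.index_id,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
WHERE ps.page_count > 1000;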

Is that worth it? Every night? For every index on every table in every user database? Only if you can prove it.

The one upside to rebuilding is that it also updates statistics with a full scan. Think about that the next time you say something like “rebuilding the index fixed the problem” – you may have an epiphany on the way.

Wait for it.

Wait for it.

Wait for it.

Statistics with no indexes

SQL Server doesn’t make easy work of getting information about your Statistics, or finding out which statistics get used. Even at the query level, you have to use a spate of Trace Flags to find out what gets loaded and looked at. Cached plans don’t fare much better.

No wonder everyone cares about indexes and their fragmentation. Microsoft has made information about them easy and abundant, while Statistics are kept hidden in the basement next to piles of soft bones and a bowl of hot blood.

Head rush moment: SQL may use information from histograms outside of the index it chooses for cardinality estimation.

Back to earth: If you just create a bunch of statistics instead of indexes, you’re (at best) using your Clustered Index for everything (which is still bad), or you’re using a HEAP for everything (which is usually worse). You’re still generally better off creating good indexes for your workload. They’ll get statistics objects created and associated with them, and if SQL thinks another column is interesting, it will create a single column statistics object for it, as long as you haven’t turned off auto create stats.

Sure, you can put on your black cloak and goat mask and create some multi-column or filtered statistics, but in the words of a wise man (Doug), you end up with more stats to maintain and understanding query behavior gets more difficult.

Filtered statistics suffer from a problem where they don’t automatically update based on the filtered row count, but rather the table row count. Imagine you have a 100 million row table, and your filtered index is on 1 million rows. All million of those rows might change, but the statistics on that index won’t. 1 million is not 20% of 100 million. You’ll have to update the statistics manually, or rebuild the filtered index.

Multi-column statistics are hardly a replacement for a multi-column index, and it’s not like you get an enhanced histogram that includes the second column. It’s just like a normal histogram. All you get is the density information for the columns you throw in there. Boo hiss.

Moral of the story (B-B-B-B-B-B-BENNY WITHOUT THE JETS)

Indexes are a really important factor for performance, but index fragmentation very rarely is. Statistics are super helpful when they’re not out of date, and getting them up to date is much easier on your server’s nerves. Though not perfect, I’d rather take my chances here. Updating statistics can also cause a bad execution plan to get flushed out of the cache. On their own they can sometimes help queries, but you should only end up here after you’ve really tuned your indexes.

Unless you can establish a metric that makes nightly index defragmentation worthwhile, don’t jump to it as the default. Try just updating statistics. You may find that nothing at all changes, and you now have many extra hours a night to do maintenance. Like run DBCC CHECKDB. If you think index fragmentation is a performance problem, try corruption sometime. That’s not what Brent meant when he said “the fastest query is one you never make.”
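
A minimal starting point: sp_updatestats only touches statistics that have had modifications, using default sampling, while a per-table UPDATE STATISTICS (table name hypothetical) can force a full scan:

EXEC sp_updatestats;

UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;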

Thanks for reading!

Erik Darling
Restoring tempdb since GETDATE()

Using the StackOverflow database, let’s check out Krock’s query. He’s a competitive fella, and he’s looking to find users who signed up for StackOverflow after he did, but who have a higher reputation than he does.

I’m going to simplify the query a little here:

SELECT me.Id, u.Id
FROM dbo.Users me
JOIN dbo.Users u
  ON u.CreationDate > me.CreationDate
 AND u.Reputation > me.Reputation
WHERE me.Id = 557499;

The Users table has a clustered index on the Id field, so the initial seek on “me” (Krock’s row, Id = 557499) is super-fast. However, to find all the users with a higher creation date and reputation – man, that sucks scanning the clustered index for that, and it brings back a lot of rows. Here’s the execution plan:

Bringing back 5.3mm rows

Hover your mouse over the clustered index scan, and you get:

  • Estimated number of rows = 5,277,830
  • Actual number of rows = 5,277,831

Nice job, SQL Server, close enough for government work. Now it suggests a missing index – rightfully so – and let’s add it, because we want this query to go faster.
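
Here’s roughly the index it asks for – the name is mine, and the keys come straight from the join predicates:

CREATE INDEX IX_CreationDate_Reputation
ON dbo.Users (CreationDate, Reputation);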

After adding the index, here’s the execution plan:

New and improved execution plan

The query runs faster, make no mistake – but check out the estimates:

  • Estimated number of rows = 1
  • Actual number of rows = 165,367

Those estimates are built by SQL Server’s cardinality estimator (CE), and there have been major changes to it over the last couple of versions. You can control which CE you’re using by changing the database’s compatibility level. This particular StackOverflow database is running in 2016 compat mode – so what happens if we switch it back to 2012 compat mode?
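
Switching back is one statement (110 is the 2012 compatibility level):

ALTER DATABASE StackOverflow SET COMPATIBILITY_LEVEL = 110;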

Running in 2012 compatibility

It’s a differently shaped plan – 2012 decided to go parallel with this query. Its estimated subtree cost is 7.37 – higher than my SQL Server’s Cost Threshold for Parallelism.

The motivating factor can be discovered by hovering over that index seek:

Higher number of estimated rows

SQL Server 2012’s cardinality estimator guessed that 475,005 users would have a newer creation date and higher reputation than Krock.

This query has so many fun lessons to share.

  • Your database options matter.
  • What database you run a query in matters.
  • Neither 2012 nor 2016’s cardinality estimator is “right” in this case – they’re just differently wrong
  • Indexes create statistics on their columns, but that doesn’t mean SQL Server has perfect information on what’s inside those columns
  • Besides, SQL Server can’t know Krock’s creation date or reputation until the plan is built and the query’s already running

I find statistics to be one of the coolest topics inside the engine, and that’s why I love Doug’s newest course, Statistics: SQL Server’s Guessing Game. I bet you’re gonna love it too – go check it out.

Brent Ozar