Why I Teach Training Classes on Weekends (the next one starts Friday)

You have a day job.

And your day job comes with two big problems.

Problem #1: it’s hard to pay attention during the week. Your calendar has tons of meetings scheduled for the foreseeable future, and you can’t just reschedule them whenever you want. Even when you’re not booked in a meeting, people are constantly pinging you for stuff, servers are going down, and you just can’t focus on a training class. You turn your head to solve one urgent issue, and then bam, when you turn back to class, you’re lost.

Problem #2: some of you don’t get paid to learn. Some of you are consultants, contractors, or freelancers, and when you take time off during the week, you’re not getting paid for it. Some of you are even paying for my training out of your own pockets, and your companies won’t give you time off to learn. You’re investing in yourself because you want to eventually get a better job, and you want to be more valuable to future employers. That’s also why we provide certificates of attendance for the classes, too. When you get a Live Class Season Pass, you can attend so many of my classes and learn so much, and that looks absolutely spectacular on a resume. It’s huge bang for the buck.

That’s why I teach my weekend Mastering classes. They’re the exact same in-depth Mastering classes with challenging hands-on VMs, but I run them over Friday, Saturday, and Sunday instead of during the week. I teach a couple of weekend rotations per year, and the next one starts this Friday:

Or, if you prefer the weekday rotation, the next one is:

Weekend students – especially the consultants & contractors – also tell me they love the Instant Replays included with the Live Class Season Pass. When they hit a tough problem at work, they can pop open that particular module, start watching the recording, read through the slides, and run through the demo scripts. It’s like just-in-time learning, all year long.

Buy today, and you can start your learning journey Friday. You can show up to work better on Monday. Let’s do this!


How to Learn SQL Server in One Year

You’ve been using Microsoft SQL Server for a few years, and it’s becoming increasingly important to your job.

You can write a query, but…

You feel like something’s missing.

You attended a few local user groups or meetups before the quarantines hit, but you were disappointed at how scattered the sessions were. The material was all over the place, covering lots of different topics, and none of them went into depth. You couldn’t relate the topic to what you were actually doing at work.

You’ve been Googling to find solutions when you run into problems, but you’re starting to doubt what you read. You’ve read conflicting suggestions, and you keep hearing “It depends” – and you’re beginning to think it’s some kind of cop-out answer that people are giving when they don’t have the real answers.

I know. I’ve been there too.

If you’re going to learn,
you need a structured, timed plan.

You can’t be just randomly Googling around every now and then. You need a clear, linear plan where each session builds on top of the last one. You need a table of contents, and a recap at the end of each learning sprint.

You can’t just pick up a book, either: you’ve tried that. Nobody’s holding you accountable. The book sits on your bedside table for months, and it’s there for a reason – it puts you to sleep, and nobody’s pressuring you on a regular basis to show what you learned.

You need a combination of 3 things:

  1. Lively, fun lectures – not boring text
  2. Hands-on labs where you’re required to turn in your homework
  3. A timed schedule – you gotta show up and be there on specific days/times

If you put the dates on your calendar, show up for class, turn in your assignments to a fun instructor, and get feedback on your progress, THAT’S how you make real headway in just one year.

Get started in a 3-day sprint.

Mark these out in your calendar:

If you can’t make those, the next rotation starts February 1. I teach these 4x a year, but I’m telling you now because we gotta get you on board this train.

Those 3 classes are each 1 day long. If you can’t get the time off work, you can still do this: I teach those classes as 50% lecture, 50% hands-on labs. You can watch while I do the lectures & labs, and then keep up with your day job when I give the rest of the students time off to do the labs. (I’d recommend you do ’em yourself, though – remember, that’s part of the key to your success: actually getting your hands on the product and proving that you know what you think you know.) The hands-on labs for the Fundamentals classes can be done on your existing laptop/desktop.

Now, you might see “Fundamentals” and think that you’re already past ’em, and that you can move on. I don’t want you wasting time on fundamentals if you already know this stuff, so ask yourself these four questions as a sample test:

  • How do you measure which column goes first in an index key?
  • How do you index to support both joins AND filters?
  • When is a CTE better than a temp table, and vice versa?
  • What kinds of query operations are the most susceptible to parameter sniffing?

If you know the answers to those, you can skip Fundamentals. If not, get in there and get your fundamentals on. Don’t feel guilty: I know a lot of data professionals who’ve said they had over a decade of experience, but they were still shocked by what they learned in my Fundamentals classes.

After those 3 classes, you’re ready for Fundamentals of Columnstore on November 17th (iCal), but don’t feel like you HAVE to take that one. That one really only makes sense if you’re considering columnstore indexes. Those aren’t for everybody – they’re really targeted at reporting queries on 250GB+ databases running SQL Server 2016 or newer. If you don’t have those, feel free to skip that one.

Then, start to master SQL Server.

After you finish the Fundamentals, the really tough stuff starts, and you really need to take the first two in order – first indexing, then querying, and then you can take either #3 or #4 in either order – I teach ’em throughout the year:

  1. Dec 8-10: Mastering Index Tuning – iCal
  2. Jan 12-14: Mastering Query Tuning – iCal
  3. Optional: Feb 9-11: Mastering Server Tuning – iCal
  4. Optional: Feb 16-18: Mastering Parameter Sniffing – iCal

Those are 3-day classes, and each day goes like this:

  • 9AM-Noon: I teach you concepts with a mix of lectures & demos.
  • Noon-2PM: hands-on lab: you run a workload against the full-size Stack Overflow database and solve a performance problem, plus eat lunch. You turn in your homework to me in the company Slack room, and I give you feedback on your work.
  • 2PM-3PM: I do the same lab you just did, but live onscreen, so you can see how my work compares to yours.
  • 3PM-4PM: I teach you another concept with a mix of lectures & demos.
  • 4PM-?: hands-on lab: you get another workload challenge. You can solve it from 4PM-5PM, or later if you prefer, or even the next morning – whatever works best for you.

The Mastering classes are much tougher, and require a bigger VM to pull off. Read the instructions on setting up a lab VM – you’ll want at least 4 CPU cores, 30GB RAM, and local solid state storage with 300GB free space. You’ve got some time before you have to set that up, though. If you don’t have access to that hardware, I offer a Live Class Season Pass with a lab VM included – but honestly, you can buy a little lab server for home at a more cost-effective price.

The first time you go through ’em,
you’re gonna fall off the horse.

When I went through the Microsoft Certified Master program, it had a 75% failure rate. Most of the students needed multiple attempts at the hands-on lab tests (although I’ll giddily report that I passed the first time, woohoo!). Even though I passed, my first reaction was, “I wanna do that again and see if I can do better, and faster.”

When Microsoft folded up the MCM program and I built my Mastering classes, I wanted to replicate that experience. I wanted to give you a very, very challenging test of your skills. Some folks would pass the first time, other folks would take multiple attempts, but hopefully it’d be fun enough for everybody that they would want to do it again and again.

That’s why most students opt for my Live Class Season Pass: the ability to attend all of my classes again and again for a year straight.

You’re reading that and going, “Who could possibly attend 3-day classes over and over throughout the year?” But here’s the thing: most folks attend multiple times, paying different levels of attention depending on their schedule. When you have unlimited access to my classes, you just check out my upcoming class list, add stuff to your calendar, and then leave my live sessions up on another monitor and on your speakers while you work. If an interesting concept comes up, or if you’ve got free time available, you jump in.

By the end of the year,
you will be WAY better at SQL Server.

It’s going to require focus and attention. It’s going to require participation on your part. It’s going to take commitment and action.

You can float through another year, skating by on the bare minimum of skills, getting surprised at every turn.

Or you can decide that this is going to be the year when you start learning about problems before you encounter them, becoming proactive instead of reactive, and start to be seen as “the SQL pro” amongst your team members. You can decide that you’re going to take your career up a notch, and you’re going to let me teach you everything I know.


New Music Friday: Killa DBA’s New Release Just Dropped

Familiar readers will know that I don’t actually do New Music Friday here very often.

Okay, ever. I don’t do it ever.

Because frankly, nobody writes music about us.

Nobody except Homer McEwen, aka Killa DBA (YouTube: @KillaDBA).

You may remember him from The SQL Server 2017 Song, and if not, you should go check that out too.

His new album Hug a DBA is available now on Apple Music and Spotify, free for folks who subscribe.

The SQL Server 2019 song is way more catchy than it has a right to be, and I don’t know how he manages to stitch all the features together in a way that rhymes and flows. Nice work!


If Your Trigger Uses UPDATE(), It’s Probably Broken.

In yesterday’s blog post about using triggers to replace computed columns, a lively debate ensued in the comments. A reader posted a trigger they said I should use instead, and I pointed out that their trigger had a bug – and then lots of users chimed in, asking, “What bug?”

I’ll demonstrate.

We’ll take the Users table in the Stack Overflow database and say the business wants to implement a rule. If someone updates their Location to a new place, we’re going to reset their Reputation back to 0 points.

Here’s the trigger we’ll write:
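
A minimal sketch of that kind of trigger – the name and exact shape here are illustrative, but the UPDATE() test is the part that matters:

    CREATE OR ALTER TRIGGER dbo.Users_ResetReputation
      ON dbo.Users
      AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        /* UPDATE() only checks whether the column appeared in the
           statement's SET clause - it is NOT a row-by-row test. */
        IF UPDATE(Location)
            UPDATE u
               SET u.Reputation = 0
              FROM dbo.Users u
             INNER JOIN inserted i ON i.Id = u.Id;
    END;
    GO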

That trigger is completely broken because it doesn’t handle multi-row updates correctly.

To see what I mean, let’s look at all of the users named Brent:
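
Assuming a query along these lines:

    SELECT Id, DisplayName, Location, Reputation
      FROM dbo.Users
     WHERE DisplayName = N'Brent';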

Some of them have locations set, and some don’t:

Let’s update the ones with no location, and set it to be ‘Miami’ – it’s a nice place, after all:
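
One way to write that – note the COALESCE, which comes up again in a moment:

    UPDATE dbo.Users
       SET Location = COALESCE(Location, N'Miami')
     WHERE DisplayName = N'Brent';

The statement touches every Brent row, but only the NULL locations actually change.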

And now check their location & reputation again:

UH OH. Everyone’s Reputation was reset, even people who didn’t change Locations.

Now, you might be saying, “That’s because you changed the Location using a COALESCE.” Nope – let’s check the Richies:

And use a different UPDATE:
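
Say, a CASE expression this time – still one statement touching every Richie row:

    UPDATE dbo.Users
       SET Location = CASE WHEN Location IS NULL
                           THEN N'Miami'
                           ELSE Location END
     WHERE DisplayName = N'Richie';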

And they all get reset:

You can’t just blindly use UPDATE().

Because sooner or later, somebody’s going to run a multi-row update that changes the column for some rows and not others – and your trigger will hit everything in the inserted table.

It’s up to you to figure out specifically which rows you need to process, and what you need to do to ’em.
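
For this particular business rule, that means comparing inserted to deleted row by row. A sketch of a safer version – note the ISNULL trick treats NULL and empty string as equal, so adjust it to your data:

    CREATE OR ALTER TRIGGER dbo.Users_ResetReputation
      ON dbo.Users
      AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE u
           SET u.Reputation = 0
          FROM dbo.Users u
         INNER JOIN inserted i ON i.Id = u.Id
         INNER JOIN deleted d ON d.Id = i.Id
         /* Only the rows whose Location actually changed: */
         WHERE ISNULL(i.Location, N'') <> ISNULL(d.Location, N'');
    END;
    GO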


Using Triggers to Replace Scalar UDFs on Computed Columns

Your database is riddled with computed columns whose definitions include a scalar user-defined function. Even up to & including SQL Server 2019, which boasts faster scalar function processing, a computed column that calls a scalar function causes all access to that table to go single-threaded. In that case, a trigger can actually be a great replacement.

No, wait, come back. Triggers have a pretty bad reputation amongst the database community because:

  • You can pile a lot of work into them, leading to surprise slowdowns at scale
  • You can hold locks open for a long time as you work through a lot of tables
  • Troubleshooting can be tricky since authors rarely log errors generated in triggers
  • They almost always have a bug involving multi-record processing

But there are some places where a trigger is actually the least-worst fit, and this is one of ’em. To show the problem, I’ll take the mid-size 50GB Stack Overflow database and do a simple count of the number of rows:
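
Nothing fancy:

    SELECT COUNT(*) FROM dbo.Users;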

The execution plan goes parallel:

But add a computed column to the table, and then run our count again:
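
For example – the business logic in this function is made up, but any scalar UDF in a computed column triggers the problem:

    CREATE OR ALTER FUNCTION dbo.fn_GetSeniority (@Reputation INT)
    RETURNS INT
    AS
    BEGIN
        /* Hypothetical logic: rank users by reputation tiers. */
        RETURN CASE WHEN @Reputation >= 10000 THEN 3
                    WHEN @Reputation >= 1000 THEN 2
                    ELSE 1 END;
    END;
    GO

    ALTER TABLE dbo.Users
      ADD Seniority AS dbo.fn_GetSeniority(Reputation);
    GO

    SELECT COUNT(*) FROM dbo.Users;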

And the query can no longer go parallel, even though it’s not calling the function:

If we look at the query plan’s properties, we can see that SQL Server was unable to build a parallel execution plan, although it’s not obvious as to why:

We can fix that with a trigger.

Instead of basing the Seniority column off a computed, user-defined function, we can:

  • Remove the computed column
  • Add a real column
  • Keep it up to date with a trigger

Here’s how:
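
Continuing the made-up Seniority example, roughly:

    ALTER TABLE dbo.Users DROP COLUMN Seniority;
    ALTER TABLE dbo.Users ADD Seniority INT NULL;
    GO

    CREATE OR ALTER TRIGGER dbo.Users_SetSeniority
      ON dbo.Users
      AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        /* The scalar UDF now only runs here, against the changed rows -
           not as part of every query that touches the table. */
        UPDATE u
           SET u.Seniority = dbo.fn_GetSeniority(i.Reputation)
          FROM dbo.Users u
         INNER JOIN inserted i ON i.Id = u.Id;
    END;
    GO

    /* One-time backfill for the existing rows: */
    UPDATE dbo.Users SET Seniority = dbo.fn_GetSeniority(Reputation);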

When I insert or update the Users table, the trigger goes single-threaded as it updates the Seniority column – but that’s it. Now, when I run the count again, it can go multi-threaded:

Sure, I would rather have the application owners recompile their app and keep the Seniority column up to date themselves as they do inserts & updates, but I often don’t have that luxury – especially when lots of applications are touching the Users table, and they’re all accustomed to the Seniority computed column just being constantly updated, automatically.

This post isn’t meant to be a start-to-finish tutorial on how to write the fastest triggers – it’s just a quick 500 words to remind you that in this scenario, triggers are better than the alternative of having scalar UDFs as computed columns.


Building SQL ConstantCare®: What Database Growths Do You Care About?

SQL ConstantCare® alerts you when your databases have unusual growths.

No, not like that one on your neck. And you should really get that checked out. No, I’m talking about a database that suddenly jumps in size, making you suspicious about whether somebody created a backup table or built a set of ridiculously large indexes.

But Richie was working on an update to the unit tests, and we started wondering:

How big of growths do you personally care about?

If you were going to write the rule that alerted you when your database jumped in size, what would be your own personal size threshold? Does it change based on the database’s size? (Think about 1GB, 10GB, 100GB, 1TB, and 10TB databases.)

Let us know in the comments! We’re curious.


Updated First Responder Kit and Consultant Toolkit for October 2020

In preparation for this month’s new Fundamentals of Columnstore class, sp_BlitzIndex has a few cool new tricks! I’ll show you how to use ’em for free in the upcoming How I Use the First Responder Kit class, too.

Columnstore indexes are kinda like an index on every column (each one is called a segment), plus they’re kinda partitioned (into row groups). When you want to visualize how the data is stored, call sp_BlitzIndex for just that one table:
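
For example, to visualize the Votes table in the Stack Overflow database (swap in your own database/schema/table):

    EXEC sp_BlitzIndex @DatabaseName = N'StackOverflow',
                       @SchemaName = N'dbo',
                       @TableName = N'Votes';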

The last result set gives you a row per row group, plus columns for each column of the table, showing you what range of data is stored in each row group/segment combo. It helps you better understand what parts of the table SQL Server will be able to eliminate, and what parts it’ll still need to dig through. This also affects how we load data and how we manage index rebuilds.

Next, when you want to analyze the sizes of your columnstore dictionaries, we’re now breaking those out in the Index Size column. Columnstore indexes used to just show a total LOB size, but now it’s broken out between columnstore and dictionary data:

That “Index Size” column shows in @Modes 0, 2, 4, and table-level output. And hey, speaking of sizes, you’re probably curious about which indexes & tables are the largest ones in your database. sp_BlitzIndex @Mode = 2 now accepts a @SortOrder parameter. You can now sort by rows or size, plus more stuff too.
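
For example, to inventory the current database by row count:

    EXEC sp_BlitzIndex @Mode = 2, @SortOrder = 'rows';

(‘size’ works too – the documentation lists the full set of sort orders.)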

To get the new version:

Consultant Toolkit Changes

I updated it to this month’s First Responder Kit, but no changes to querymanifest.json or the spreadsheet. If you’ve customized those, no changes are necessary this month: just copy the /Resources/SQLServerSetup checklist folder over your existing files to replace the SQL scripts.

sp_Blitz Changes

  • Improvement: new check for >= 35 databases being VSS snapshotted at the same time. (#2581)
  • Improvement: if you’re outputting the results to a database, the @OutputXMLasNVARCHAR parameter now lets you save XML columns of the table output as NVARCHAR. Useful if you’re using a single table to store results over linked servers and locally. (#2601, thanks David Prestridge.)
  • Improvement: added warnings for non-default settings for SQL Server 2019’s new database scoped configuration options. (#2573, thanks mtndew01.)

sp_BlitzCache Changes

  • Improvement: if you use the @SortOrder parameter, rows won’t be shown unless they actually have that problem. For example, if you used @SortOrder = ‘spills’, we were ordering by spills – but if not a single query plan in cache had any spills, we were still sorting by that column, and showing plans where Spills = 0. This sounds trivial, but it’s actually a big deal for those of us who do a ton of plan cache analysis, because now we can glance quickly at the output and go, “Your plan cache has been cleared since the spills problem happened, or the plan has aged out, or it’s using a recompile hint.” (#2174, thanks Erik Darling.)
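
For example:

    EXEC sp_BlitzCache @SortOrder = 'spills';

Now that only returns plans that actually spilled.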

sp_BlitzFirst Changes

  • Fix: CHECKDB running wasn’t always shown due to a cross apply (rather than outer apply) to query plan DMFs. (#2609, thanks Erik Darling.)

sp_BlitzIndex Changes

  • Improvement: when you specify the @TableName parameter, there’s a new result set that visualizes the row groups and segments. (#2584)
  • Improvement: new @SortOrder parameter lets you sort the inventory output from @Mode = 2 by rows, reserved mb, forwarded fetches, lock time, reads, writes, and more. (#2619)
  • Improvement: the Index Size column now separates columnstore index size vs its dictionary size. (#2585)
  • Improvement: the stats histogram output now includes the modification_counter. (#2612, thanks Erik Darling.)
  • Fix: the stats histogram & columnstore visualizations weren’t working when sp_BlitzIndex was installed in a database other than master. (#2616, thanks Erik Darling.)

sp_BlitzLock Changes

  • Improvement: added @ExportToExcel parameter like sp_BlitzCache has. When set to 1, we don’t show the XML results in the output, making it easy to copy/paste the results to Excel. (#2613, thanks Rebecca Lewis.)

sp_BlitzWho Changes

  • Improvement: added @SortOrder parameter that accepts most of the column names, so you can sort by things like session_id or memory grant. (#2603, thanks Dylan Petrusma.)

sp_DatabaseRestore Changes

  • Fix: if you set @RunRecovery = 0 and also pass in a @DatabaseOwner name, we ignore you and don’t bother trying to change the database owner. (#2598, thanks Greg Dodd.)

For Support

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. If you need a free invite, hit SQLslack.com. Be patient – it’s staffed with volunteers who have day jobs.

When you find a bug or want something changed, read the contributing.md file.

When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes me!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.

Watch Us Work on the First Responder Kit

In this 12-minute video, Greg Dodd works on sp_DatabaseRestore:

In this 1-hour video, I work on the @SortOrder parameters for sp_BlitzWho and sp_BlitzIndex:



Free Webcast on Thursday: Avoiding Deadlocks with Query Tuning

To fix blocking & deadlocks, you have 3 tools:

  1. Have enough indexes to make your queries fast, but not so many that they slow down delete/update/insert operations. (I cover that in the Mastering Index Tuning class.)
  2. Use the right isolation level for your app’s needs. (I cover that in the Mastering Server Tuning class.)
  3. Tune your T-SQL to work through tables in a consistent order and touch them as few times as possible.

I’ll cover #3 in a one-hour webcast with Quest Software on Thursday at noon Eastern. Register here.


Databases Are Like Walk-In Coolers. #tsql2sday

In the back of a restaurant, there’s a walk-in cooler (or a few of ’em.)

When a chef needs to make something, she goes into the walk-in cooler, gathers the necessary ingredients, and then comes out to the kitchen to do the cooking.

She doesn’t cook inside the walk-in – that wouldn’t be efficient. She just gets her ingredients and then does her work elsewhere. The fridge isn’t a place for work to be done: it’s just for ingredient storage.

She makes as few trips into the fridge as practical in order to prepare her dishes. At the start of her shift, she’ll fetch enough ingredients to prepare several servings of something – she wouldn’t dream of making a separate trip to the fridge for each and every salad plate she needs to make. (She might even prepare several salads at once, and then keep them nearby for orders.)

When the walk-in fridge goes down,
the chef calls a fridge tech.

When the walk-in is broken, the chef is the first person to notice – but she isn’t the person to repair the fridge. She calls a separate person in, a refrigeration technician. That fridge tech might indeed know how to cook, but that’s not his job in this scenario, and nobody’s paying him to cook stuff. His job here is just to fix the fridge to get it to cool the room appropriately.

If the fridge tech arrives onsite for a support call to fix a fridge that won’t stay cold, and if he notices that she’s leaving the door propped open all the time, he’ll have a quick talk with the chef. He might say, “Listen, the fridge you bought simply isn’t powerful enough to accommodate leaving the door open, or trying to run a deep fryer inside the fridge. You’re going to have to make a compromise: either do your cooking out on the line, or buy a bigger fridge, or start closing the door.”

Small restaurants don’t need a full time fridge tech. They might even just call the fridge manufacturer for warranty service. However, as restaurants grow into chains, or their sales become large enough to warrant it, they might hire a general handyman. That jack-of-all-trades will fix fridges, air conditioners, stoves, blenders, you name it – whatever breaks, they’ll do a good-enough job to fix most restaurant technology problems. Restaurants (or restaurant groups) have to become pretty big before they can justify hiring their own full time refrigeration technician.

Proactive restaurants will bring in a refrigeration technician on a regular basis to do a check-up. The tech can do maintenance to make sure the existing gear lasts as long as possible, and also warn when it’s approaching end of life, and time to think about a more efficient model.

Database server = fridge.
Chef = developer.
DBA = refrigeration tech.

If you use the database server to generate HTML, for example, you’re trying to cook inside the fridge.

If you run a query for every row you need to render on a web page, you’re making too many trips to the fridge.

If you begin a transaction and leave it open while you work, you’re leaving the door propped open.

If you host a 1TB data warehouse on 16GB RAM, you’re trying to run a highway-side McDonald’s with a dorm room fridge.

As your DBA/fridge tech, my job is to make sure you’ve got the right size fridge for your needs, help you understand the best ways to use it, and warn you when you’re using it inappropriately. I won’t tell you how to cook – sure, I can bang out a query, but that’s not really what you wanna pay me to do.

The cloud is like
renting a strip mall restaurant.

At first, it seems like putting your database in the cloud is like renting space in someone else’s fridge. Nobody in their right mind would do that: the latency would be unworkable. You can’t go to somebody else’s building and open their fridge every time you want to make an omelette.

But you don’t rent just the fridge.

You rent an entire restaurant space.

Strip mall owners have already set up generic space that works well enough for most kinds of restaurants – burgers, pasta, Chinese food – and you can rent a space from them for a monthly fee. They’ve already got walk-in coolers, stoves, deep fryers, cash registers, booths, and most commonly required restaurant gear.

Sure, you don’t get as much flexibility – you can’t lay out the restaurant exactly the way you want. However, if you’re starting out a new restaurant, and as long as you’re not aiming for something really unusual like conveyor-belt tank-to-table sushi, it’s probably good enough for you. (And yeah, there’s still a fridge tech, because sadly, the strip mall landlord just deposits your rent and doesn’t answer your phone calls.)

If you liked this, you’ll probably like the other entries for this week’s T-SQL Tuesday hosted by Rob Volk. Scroll down to the comments to see today’s entries, or read tweets tagged #tsql2sday.


Free Live Fundamentals Classes, Registration Open Now

Update Oct 29th – classes are over. Thanks for joining in!

Well, this quarantine just isn’t letting up, so it’s time for another round of free training.

I’m running 3 free live classes on October 26-28:

  • October 26: Fundamentals of Index Tuning – 9AM-5PM Eastern – iCal
  • October 27: Fundamentals of Query Tuning – 9AM-5PM Eastern – iCal
  • October 28: How I Use the First Responder Kit – 9AM-5PM Eastern – iCal

These are live only – be there or be square. Or I guess you could also be a Recorded Class Season Pass holder, which lets you watch the recordings of these anytime you want.

Register here, and see you in class in two weeks! If you’ve got friends who would benefit from some free trainin’, feel free to send them the link as well. Heck, if nothing else, you can leave the class up in a window while you “work” from home.


Woohoo! We Hit 25,000 YouTube Subscribers.

As I finished this weekend’s live stream, I noticed that we crossed a big number.

The Brent Ozar Unlimited YouTube channel now has over 25,000 subscribers.

Well, that’s kinda awesome. I’ve had a great time doing all kinds of free live streams during the quarantines, and it’s been super fulfilling seeing so many folks show up on a regular basis for learning and fun.

To give you an idea of what I’ve been up to over there, here’s a sampling of my recent live streams:

To get alerted whenever I start streaming, subscribe to the channel. Here’s to the next 25,000 party people! Y’all make this so much fun.


What Is the SQL Server CEIP Service?

The CEIP Service is the Customer Experience Improvement Program that sends telemetry data back to Microsoft.

When you install Microsoft SQL Server, the CEIP Service is installed too. You can’t disable it during installation: it shows up in your services list whether you want it or not.

SQL Server CEIP Service

You know it’s high quality when they can’t even capitalize SQL Server correctly

What data does the CEIP Service collect?

Microsoft goes into a lot of detail here, and the short answer is things like:

  • Crash dumps – which may contain your data and the exact contents of your queries, which can contain personally identifiable information
  • Login names, email addresses, location information, server names (read the “End-user identifiable information” section, in particular the part about “may be emitted to Microsoft to deliver new hybrid and cloud features”)
  • Behavioral/usage data, server configuration, feature configuration, hardware manufacturers
  • Database names, table names, and column names

If you’d like to see examples of the exact queries it runs, I captured a trace of the CEIP Service’s queries in this post.

Can you disable the CEIP Service?

If you’re using SQL Server Standard Edition or Enterprise Edition, then on the SQL Server, go into the Start menu, Microsoft SQL Server, SQL Server Error and Usage Reporting. A dialog box will let you un-check the error reporting and feature usage reporting.

After you uncheck those boxes and click OK, the CEIP Service will still be running, but it won’t send any data back to Microsoft. The documentation says so, so it must be true, right? (Tin foil hat goes here.) If you’re managing a lot of servers in an enterprise, you can also use Group Policy to push out these registry keys to control data collection and crash dump collection.

A security-conscious customer of mine goes so far as to disable the service, change the logon account for the SQLCEIP service to an invalid account & password, and use PowerShell to check its login on a regular basis, making sure it doesn’t get re-enabled by a Cumulative Update.

However, if you’re using SQL Server Developer Edition or Express Edition, you can’t disable CEIP for SQL Server. Microsoft’s policy is that people who use free versions of SQL Server are paying for that by sending in their private data to Microsoft. The CEIP Service documentation page says:

Even if you follow the above instructions on Developer Edition, they’ll appear to work, but you’ll still be sending telemetry data back to Microsoft.


Poor Man’s Resource Governor: Database-Scoped Configurations

Let’s say you had report queries that were going wildly parallel, and you wanted to put a stop to it, but you couldn’t afford SQL Server Enterprise Edition.

What you could do is create databases with different MAXDOP settings:
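
A sketch, with made-up database names:

    CREATE DATABASE Reports_MAXDOP_1;
    GO
    USE Reports_MAXDOP_1;
    GO
    ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 1;
    GO

    CREATE DATABASE Reports_MAXDOP_8;
    GO
    USE Reports_MAXDOP_8;
    GO
    ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 8;
    GO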

Yes, MAXDOP has been a database-scoped configuration since SQL Server 2016.

Then set your reporting users’ default databases:
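
With hypothetical login names:

    ALTER LOGIN [ReportUser1] WITH DEFAULT_DATABASE = Reports_MAXDOP_1;
    ALTER LOGIN [ReportUser2] WITH DEFAULT_DATABASE = Reports_MAXDOP_8;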

And then when they each run the same query, they get different MAXDOPs:
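
Database-scoped MAXDOP follows the database context the query runs in – not the database the tables live in – so the same three-part-name query parallelizes differently for each user:

    /* Run by each reporting user from their default database: */
    SELECT COUNT(*) FROM StackOverflow.dbo.Users;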

What’s that, you say? Your users aren’t fully prefixing their objects with database names, so this wouldn’t work for you? I hate to give you two bad ideas in a single blog post, but I have one word for you: synonyms.
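
A synonym makes a cross-database table resolve like a local one, so inside each reporting database you could do something like:

    USE Reports_MAXDOP_1;
    GO
    CREATE SYNONYM dbo.Users FOR StackOverflow.dbo.Users;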

I need to point out that this “solution” – and I use that term wildly loosely – has all kinds of drawbacks:

  • It only throttles CPU – queries can still get giant memory grants
  • Each database gets its own query plans, which means you’ve also just doubled your parameter sniffing problems
  • It’s painful to deal with lots of synonyms, and really, this only makes sense (and I use that term loosely) for reporting queries, not things that call stored procedures or do inserts

Free Webcast on Wednesday: The New Robot DBAs in SQL Server 2017, 2019, and Azure

The robots are really just people wearing metallic masks.

You’ve been hearing about all these robots that are coming to take your job. They’re going to automate all the performance tuning and make your life way easier, right? Or harder, I guess…since you’ll be looking for a job.

Thing is, most of that is just plain ol’ marketing hype that Microsoft is using to try to sell your management on upgrading to newer versions or moving to the cloud.

In this Idera Geek Sync session, I will blow the marketing smoke away, show you these features in action, and show you which ones are ready for prime time. You’ll walk away better equipped to have conversations with management about why you should (or shouldn’t) upgrade, and how you can use these features not just to stay employed, but have a better career.

Register here.


Want to use columnstore indexes? Take the ColumnScore test.

When columnstore indexes first came out in SQL Server 2012, they didn’t get a lot of adoption. Adding a columnstore index made your entire table read-only. I often talk about how indexing is a tradeoff between fast reads and slow writes, but not a lot of folks can make their writes quite that slow.

Thankfully, Microsoft really poured a ton of work into improving columnstore indexes over time. The What’s New in Columnstore Indexes page of the SQL Server documentation keeps getting more “yes” answers over newer versions.

Today, with widespread SQL Server 2016 adoption, at least some of your databases could safely implement columnstore indexes to make reports go faster. But should you? And will columnstore indexes backfire and kill non-reporting-query performance? And will your data loading techniques come back to haunt you over time?

Take the 7-question test at ColumnScore.com to find out.


Announcing a New Class: Fundamentals of Columnstore

Your report queries are too slow. Will columnstore indexes help?

You’ve tried throwing some hardware at it: your production SQL Server has 12 CPU cores or more, 128GB RAM, and SQL Server 2016 or newer. It’s still not enough to handle your growing data. It’s already up over 250GB, and they’re not letting you purge old data.

In one day, you’ll learn:

  • How columnstore data is stored, and how that impacts your architecture choices
  • How to do a hands-on self-assessment of your workloads, data, and server to figure out whether columnstore indexes make sense for you
  • How to do a proof-of-concept implementation with initial creation and measuring how your queries are impacted

As this is a Fundamentals class, I need to be clear about what’s out of scope, too. We won’t be covering how to tune loads, tune query plans, design rowstore indexes atop columnstore ones, tune ongoing maintenance, feature cross-compatibility, compatibility levels, or behaviors of different SQL Server versions or editions.

Prerequisites – you need to be already thoroughly familiar with my free 90-minute class, How to Think Like the SQL Server Engine, and my 1-day Fundamentals of Index Tuning class. After you enroll, do the full prerequisites including chat setup.

Each student needs to bring their own SQL Server. You’re highly encouraged to follow along with the labs. Throughout the class, you’ll have homework where you need to complete a hands-on lab using SQL Server 2017 or 2019 and the Stack Overflow database. Read the instructions on how to set up your own lab SQL Server to follow along – a lab server is not included unless you buy my Live Class Season Pass PLUS Lab VM.

This is a live online class – but you have two buying choices:

Register now – and of course, Live Class Season Pass holders are welcome to drop in at any time, no registration required. Your pass gets you into any live online class I do during your subscription. See you in class!


Building SQL ConstantCare®: Warning You About New Large Tables & Indexes

When I’m doing a SQL Server health check, one of the things I like showing clients is a list of their top tables by size.

Inevitably, there are a bunch of:

  • Tables whose names start with “temp” or “backup” or “just_in_case”
  • Tables with no clustered index that were clearly created with a select-into, and then forgotten about
  • Tables that haven’t been accessed in months or years

We have some chuckles, make a quick calculation, and realize that our database is 10%-20% larger than it needs to be. That means our backups are taking 10-20% longer, corruption checking, restore testing, backup space, disaster recovery test time, you name it – it’s all 10-20% higher.

So now if SQL ConstantCare® sees a new table with at least a million rows, we give you a heads-up in your daily email:

The first table it found on the first day

This is such a good example of what SQL ConstantCare® is all about: giving you practical insight, not just raw metrics.


The 4 Presentations I’m Proudest Of, and What Inspired Them

I was thinking back about the presentations I’ve been the most proud of over the years. I’ve written and performed a *lot* of stuff over time, but here are some of the free public presentations I’m the most proud of, and the stories of how they came to be.

#4: Watch Brent Tune Queries (2014)

This show was inspired by the Discovery Network. I love watching people just do their job. Deadliest Catch. Dirty Jobs. Maine Cabin Masters.

I thought, “Can I build a session where someone just watches me work? Like a Dirty Jobs, but for SQL Server? And how much could I teach them in the process?”

The first version of this session was pretty rough, but as I started presenting it at conferences, I began by using a few slides to explain my query tuning process before I jumped in and went to work, taking real user-written queries from data.stackexchange.com and tuning them.

When I do it these days, I’m back to the mess: I don’t even bother starting with a slide deck. I start with a blank SSMS, type out a few key points about my query-tuning process, open up data.stackexchange.com, pick a query, and begin tuning it. (The queries aren’t selected at random – I pick them ahead of time to illustrate specific challenges.)

I like it because it’s so minimal, and it invites so much attendee interaction. I try to walk a line between getting attendees to suggest what they’d do, versus keeping enough direction in my agenda to make progress on a query before the end of the session.

#3: 500-Level Guide to Career Internals (2016)

Whenever conference schedules come out, my friends and I usually sort the schedule all kinds of different ways, looking at the contents. One of the ways we slice & dice it is by level – we wanna see what presenters think is 500-level material. (Usually, difficulty is described as 100, 200, 300, or 400 level.) We always had chuckles over things that were labeled 500-level when they were really more of an introduction – especially if it was a short session that couldn’t possibly get into technical details.

One year I thought, “What would a 500-level career hacking session look like?”

I don’t think of myself as 500-level at anything, so I reframed the question as, “If someone presented a 500-level guide to career hacking at a SQL Server conference, what credentials would that person need to have?” I suddenly realized, uh, it was me: I had cofounded and bootstrapped a multi-million-dollar consulting company, hadn’t worked for anyone else for years, and had basically crafted the job that I wanted.

I had a lot of career development material that I’d written from the FreeCons that I used to run. They were free one-day workshops done before a conference, but rather than focusing on technical training, we focused on building lifelong connections between attendees. We talked careers, blogging, presenting, and more. We did talk technology, but only for the purpose of showing different presentation delivery methods. I really enjoyed doing those.

PASS recorded the session, but there was no video camera. I hate watching sessions without a camera on the speaker, but I think this one’s worth it:

There’s nothing special about that presentation’s delivery, but it makes my list of top sessions because the material is so near and dear to my heart. There was even a moment onstage where I had to struggle to keep it together, and…I still get a little teary eyed thinking about that.

I recently updated and re-recorded this one for SQLBits 2020, where I had the challenge of picking my favorite 90 minutes of it. That was fun.

#2: An Introduction to Statistics with Playing Cards (2020)

I can’t cook worth a damn, but I loved watching Alton Brown’s show Good Eats. The publicly available Alton Brown Bakes an Apple Pie shows several good examples of his team’s camera work, putting cameras inside a mock fridge, an oven, and above his workspace.

Ever since I saw the overhead camera usage, I filed it away in a text file where I keep ideas for presentation techniques. Later, when I wondered how I could use playing cards to explain how SQL Server’s statistics work, I realized I could combine it with the overhead camera technique to do something memorable.

This session has 3 things I love:

  • The camera angle that’s rarely used in IT presentations
  • The usage of an everyday object to tell a technology story
  • The obvious truth afterwards: when you finish watching the session, you say to yourself, “Well, of course it works that way. It’s obvious now. It couldn’t have possibly worked any other way.” That, for me, is a mark of a great presentation – where afterwards, the technology seems so simple.

#1: How to Think Like the SQL Server Engine (2010)

In 2010, I sat in Edward Tufte’s day-long seminar on how to communicate. A couple of my big takeaways:

  • Communicators should try to build a single handout piece of paper that conveys a lot of information
  • Communicators should try to bring people as close as possible to the actual thing that’s happening

I thought about how I could pull those off, and I realized a happy coincidence: SQL Server stores data in 8KB pages. I could take the free public Stack Overflow database, export part of it into Excel, turn it into a printed handout, and get people to role play as the SQL Server engine.

I’ve updated it and re-recorded it a lot over the years, and here’s the latest version, fresh from a recent stream:

I’m really happy with those 4 sessions, and I hope y’all enjoy ’em too.


I Would Love a “Cost Threshold for Recompile” Setting.

I’m tired of sniffing your parameters.

In environments where complex queries can get bad plans due to parameter sniffing, it would help to say that all queries with an estimated cost over X should be recompiled every time they run.

For example, in environments where most of my workload is small OLTP queries, I’m fine with caching queries that cost under, say, 500, but above 500, I don’t care how good we think the plan is – it’s worth recompiling because those high-cost reports don’t run often, and I need them to get accurate plans for their own parameters every time they run.

This would help environments with parameter-sensitive queries where Adaptive Memory Grants backfire (like reports that sometimes use small date ranges and sometimes span years of data).
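
Until a setting like this exists, the closest workaround is a per-query recompile hint on the reports you already know are expensive – a sketch against the Stack Overflow database:

    DECLARE @StartDate DATETIME = '2008-01-01';

    SELECT u.Location, COUNT(*) AS UserCount
      FROM dbo.Users u
     WHERE u.CreationDate >= @StartDate
     GROUP BY u.Location
    OPTION (RECOMPILE); /* fresh plan for these parameters, every run */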

I would love this feature. If you would too, cast an upvote.


SQL Server Suddenly Frozen? You Might Be Snapshotting Too Many Databases.

Snapshot backup tools like Azure Site Recovery and Veeam are great for sysadmins. They let you quickly replicate a virtual machine to somewhere else without knowing too much about the server’s contents.

To pull it off, they use the Windows Volume Shadow Copy Service to do snapshot backups – often referred to as VSS snaps. In a nutshell:

  1. The VSS service tells SQL Server to freeze its writes
  2. SQL Server stops writing to each of its databases, freezing them
  3. SQL Server tells VSS, “Okay, I’m out”
  4. VSS creates a shadow copy of the data & log files
  5. VSS tells SQL Server, “You can get back to work”
  6. SQL Server begins writing again, one database at a time (called thawing)

These snapshot backups show up in system tables like msdb.dbo.backupset because when used successfully, they’re like full backups. It sounds like witchcraft, but it really does work, and it’s worked for years.

But when it goes wrong, it goes WRONG.

SQL Server freezes databases one at a time, serially, until it’s got them all frozen. This can take some time, and while databases are frozen, you’ll see blocking chains for writing queries (which can in turn block readers, too, depending on your isolation level.)

Because of that one-at-a-time problem, KB #943471 says:

We recommend that you create a snapshot backup of fewer than 35 databases at the same time.

Backup apps like Azure Site Recovery and Veeam don’t warn you about that, though. You just find out when your SQL Server suddenly feels like it’s locked up, with tons of blocking issues.

For example, here’s a screenshot from a client’s msdb.dbo.backupset table when Azure Site Recovery was trying to snapshot hundreds of databases all at once:

See how the backup_start_date and backup_finish_date are 40-50 seconds apart? That’s because SQL Server was slowly, gradually stepping through the snapshots of each database. Finally, they all “finished” at 8:08:10, but even that is a little tricky – in this case, they didn’t all thaw at the exact same time. (I’m guessing, based on the blocking storms I’ve been seeing lately, that the backup_start_date and backup_finish_date date/times don’t really correspond to the exact VSS freeze/thaw times.)

To make things worse, sysadmins just check boxes in tools like Azure Site Recovery and Veeam and tell them to take backups once an hour. The backups don’t necessarily happen exactly on the hour, so developers are left scratching their heads as to why seemingly simple queries suddenly just freeze every now and then for no reason.

How To Tell If It’s Happening to You

If you’re doing a health check, run sp_Blitz. We warn you about how many snapshot backups your server has taken recently, and in the upcoming October build, we also warn you if you’ve had snapshots spanning 35+ databases in the last two weeks.

Here’s how to see which databases are involved:
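
A starting point – VSS snaps are flagged in msdb, so something like this shows the recent ones (your exact query may vary):

    SELECT database_name, backup_start_date, backup_finish_date
      FROM msdb.dbo.backupset
     WHERE is_snapshot = 1
       AND backup_start_date >= DATEADD(DAY, -14, GETDATE())
     ORDER BY backup_start_date DESC, database_name;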

Or if you’re troubleshooting live, run sp_BlitzWho and look for blocking chains led by backup commands. You’ll have to move fast, though, because blocking storms like this can disappear within 30-60 seconds. I don’t recommend logging sp_BlitzWho to a table that often – that’s just too much overhead. That’s where 3rd party monitoring tools are more helpful.

