Tweaking the Defaults for Ola Hallengren’s Maintenance Scripts

If you’re a production database administrator responsible for backups, corruption checking, and index maintenance on SQL Server, try Ola Hallengren’s free database maintenance scripts. They’re better than yours (trust me), and they give you more flexibility than built-in maintenance plans.

However, the index maintenance defaults aren’t good for everyone. Here’s how they ship:

CREATE PROCEDURE [dbo].[IndexOptimize]
@Databases nvarchar(max),
@FragmentationLow nvarchar(max) = NULL,
@FragmentationMedium nvarchar(max) = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationHigh nvarchar(max) = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationLevel1 int = 5,
@FragmentationLevel2 int = 30,
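-- (snippet truncated; the procedure has many more parameters)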

The defaults on some of these parameters are a little tricky:

When an index is between @FragmentationLevel1 (5%) and @FragmentationLevel2 (30%), IndexOptimize performs the actions listed in @FragmentationMedium, in order. First it tries an index reorganize, which is an online operation. If the index's design doesn't allow for that, it tries to rebuild the index online. If the index's design or your SQL Server edition doesn't allow for that, it performs the last resort – an offline index rebuild.

When an index has higher fragmentation than @FragmentationLevel2 (30%), IndexOptimize tries the actions listed in @FragmentationHigh – first an online rebuild, then, if that's not possible, an offline rebuild.
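
The fragmentation percentages come from SQL Server's sys.dm_db_index_physical_stats DMV. If you want to eyeball the same numbers yourself, here's a minimal sketch (the 5% threshold is just for illustration):

-- Check fragmentation for indexes in the current database.
-- LIMITED mode is the cheapest scan level.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent >= 5
ORDER BY ips.avg_fragmentation_in_percent DESC;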

These defaults aren’t dangerous or deadly, but they do have drawbacks.

Index maintenance changes data pages and creates transaction log records. This means larger transaction log backups, which at first doesn’t sound too terribly harmful. However, if you’re using database mirroring, transaction log shipping, AlwaysOn Availability Groups, or anything else that copies your logged transactions around, you’ve put yourself behind. In one case, I saw an all-solid-state server generate ~25GB of transaction logs in under 3 minutes – all of which had to be copied out to the various reporting and disaster recovery servers. This creates an RPO/RTO problem without really solving a performance problem.

Index maintenance changes data pages, too. This also sounds harmless, but if you’re doing differential backups or storage replication, boom, you just inflated your storage and bandwidth costs. The larger your database becomes, the more you’ll start to investigate these recovery options, and the more index maintenance starts to backfire.

Offline index rebuilds cause blocking. At fragmentation levels as low as 5%, the default settings can take tables offline and block queries. I take kind of an extreme view on this: I'd rather not take tables offline when there's no human being around to watch what's happening.

So here are the defaults I prefer:

  • @FragmentationLevel1 = 30%
  • @FragmentationLevel2 = 50%
  • @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE'
  • @FragmentationHigh = 'INDEX_REBUILD_ONLINE'

Note that I raised the fragmentation thresholds by a lot, and I removed the offline rebuild operations. I’m not saying I never rebuild indexes offline – I just don’t want to do that by default, with nobody around. When I’ve got tables that are heavily fragmented, and I’m convinced that a rebuild will solve the problem, AND I can only rebuild them offline, then I want to run the stored procedure manually when I can be around to keep an eye on it.

How to Change the IndexOptimize Defaults

You could just edit the IndexOptimize creation script. This way, whenever you execute it without any parameters, it will inherit the defaults you set at creation time. If you maintain IndexOptimize across a lot of servers, and you keep a single master version in a central location, this works.

But you have to be doggone sure to remember to change those defaults every single time you download a fresh version from Ola. (You won’t. You’re too busy reading blogs.)

The safer way is to leave his script the way it is, and instead pass your preferred parameters in the SQL Server Agent job steps that run IndexOptimize.
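
For example, your everyday job step might look like this – a minimal sketch that assumes you installed IndexOptimize in a DBA utility database of your choosing:

-- Everyday Agent job step: gentler thresholds, online operations only
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLevel1 = 30,
    @FragmentationLevel2 = 50,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE';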

You should probably be doing this anyway, because it pays to create two separate Agent jobs: one for your normal lightweight optimizations, and a second for deeper optimizations (like offline rebuilds). Some folks like weekday jobs that only use low-impact techniques even on highly fragmented tables (if something manages to hit 80% fragmentation on a Tuesday, go ahead and rebuild it online), followed by more invasive techniques on the weekends.
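
The weekend job step could then allow offline rebuilds again – same hypothetical setup as the sketch above, just with the offline option back in the list:

-- Weekend Agent job step: offline rebuilds allowed, run when someone can watch
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLevel1 = 30,
    @FragmentationLevel2 = 50,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE';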

Just keep your backup schedule in mind – if you do full backups weekly on Saturday, followed by differential backups the rest of the week, don’t do your index rebuilds on Sunday. That instantly has the effect of inflating your differential backups all week long. Do the index maintenance first (like Saturday morning) followed by the full backup Saturday evening.

What Is Commodity Hardware?

You may have heard the term “commodity hardware” thrown around when describing solutions like Redis, Elasticsearch, or kCura’s new Data Grid.

Commodity hardware refers to cheap, standardized servers that are easy to buy off the shelf from any vendor. Here's a typical example of a 2U, 2-CPU commodity server: the SYS-1028R-TDW.

Say we buy a few parts to get 'er started:

  • Two Intel Xeon E5-2623 v3’s (quad core) – $900 total
  • 128GB RAM (using 8GB DIMMs) – $1,920
  • Two 512GB SSDs for fast storage – $450
  • Six 4TB hard drives for slow storage – $900
  • Grand total, including the barebones chassis: $5,070

Not bad. Want a little more power? Here’s a faster config:

  • Two Intel Xeon E5-2623 v3’s (quad core) – $900 total
  • 256GB RAM (using 16GB DIMMs) – $3,500
  • Eight 1TB SSDs – $2,600
  • Grand total, including the barebones chassis: $7,900

The term “commodity hardware” used to mean really crappy gear, but when you look at these numbers, that’s not necessarily the case anymore. You can build yourself quite the army of pizza boxes.

When vendors say, “You can deploy our solution on commodity hardware,” they’re not saying you’re going to get amazing performance with 16GB of RAM and a couple of spinning rusty frisbees. It’s time to reset your expectations about what commodity means.

SQL Server Tasks You Probably Shouldn’t Automate

Every now and then I run across an automated script that does something a little suspicious. I’m not saying you should never put these things into a script, but if you do, seriously think about the surprise gotchas when someone runs the script:

  1. DBCC commands other than CHECKDB
  2. sp_configure (and especially RECONFIGURE afterwards)
  3. ALTER SERVER CONFIGURATION
  4. ALTER DATABASE
  5. ALTER AVAILABILITY GROUP
  6. CREATE INDEX or DROP INDEX
  7. KILL
  8. SHUTDOWN
  9. And most of the database engine management stored procedures

If you’re doing any of these on a scheduled basis, take a few minutes to document what you’re doing, why, and whether it’s safe to stop doing it. Your successor will thank you.

Believe me, otherwise she’s going to throw you under the bus when you’re gone, and if you haven’t left something behind to defend you, you’re going to look like That Guy.
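
What does that documentation look like? A hypothetical sketch – the file name, ticket number, and scenario here are all made up:

/* Nightly_Kill_Stale_Sessions.sql

   What:          kills sessions idle more than 12 hours from the legacy ERP client
   Why:           the vendor's app leaks connections; see ticket #4521 for history
   Safe to stop?  Yes, once we're fully off ERP v9 – the leak is fixed in v10
*/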

Who’s Allowed to Add Indexes to ISV Apps?

Performance tuning of independent software vendor (ISV) databases is a little tricky.

To understand who does what, let’s think through all of the work required with building and hosting a database application:

[Responsibilities grid: who adds indexes and tunes queries?]

Typically the two parties involved – the vendor and the customer – start at opposite ends of the table. The software vendor starts by designing their tables, and gradually works down. The customer starts by buying hardware, and works their way up.

In the middle, things kinda fall apart.

When I was a DBA, I thought it was the vendor’s responsibility to put the right indexes on their tables. It’s their app, right? Aren’t they supposed to know how people query their own app? Why am I responsible for reinventing that wheel? Will I lose support if I add indexes?

Now that I’ve worked with a lot of ISVs, I see things differently. Here are their challenges:

Every customer uses the app differently. Some customers really hammer a particular feature hard, whereas others never use it at all. If you index for every possible feature/query, then the database will be dead slow for delete/update/insert operations.

Customers don't give feature usage or index usage data back to the ISV. Software vendors have no way of knowing which indexes are working well out in the field – or sometimes even what queries are running on the server. This comes back to the responsibilities grid at the beginning of the post: often the customers aren't running any performance monitoring software on the app at all.

Customers won't upgrade to current versions quickly. Sometimes the ISV fixes performance by improving queries or adding the right indexes, but the fix only ships in the current version of the software. It takes development time to backport fixes to older versions, and software vendors only have so much time. If you ask end users whether they'd rather pay for new features or for backports to older versions, they vote with their wallets.

Some customers have really bad DBAs (or none at all). Time and again, I’ve seen “DBAs” add dozens – or in one case hundreds – of indexes to an ISV’s tables in an effort to improve performance. The first few indexes make things better, so they start solving every problem by adding more indexes. You can guess how that ends up. As a result, the ISV’s support team starts making rules like “no customer is allowed to add indexes.”

Building a Successful Relationship With Your ISV

When you install a software vendor’s database, show them the list of responsibilities at the top of this post. Say, “I agree to do the stuff in my column – if you’ve got any guidance on things you’d like me to do for your application, send ‘em over, and I’ll be glad to follow them.” For example, the vendor may have a list of trace flags they’d like to enable for their app.

Then say, “For the stuff in the middle, who’s responsible?”

If the vendor agrees to tune indexes, then you’ll probably need to give them index usage data and plan cache data on a regular basis to help them tune. I like to be my software vendor’s best customer – I’ll say, “I’ll send you the output of sp_BlitzIndex® and sp_BlitzCache® every week or every month, whatever works for you. I’ll be glad to take my time working with you on the output, and together, we can make the software experience awesome for everybody.”

You’ll be surprised by how often they accept because they’re desperate to find customers willing to give them some diagnostic data about how their software works in the wild.
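
If you don't have those procedures handy, even the raw DMVs are a decent starting point. A minimal sketch of index usage data a vendor can work with (run it per database; the counters reset when SQL Server restarts):

-- How often each index has been read vs. written since the last restart
SELECT OBJECT_NAME(ius.object_id) AS table_name,
       i.name AS index_name,
       ius.user_seeks,
       ius.user_scans,
       ius.user_lookups,
       ius.user_updates
FROM sys.dm_db_index_usage_stats AS ius
INNER JOIN sys.indexes AS i
    ON i.object_id = ius.object_id
   AND i.index_id = ius.index_id
WHERE ius.database_id = DB_ID();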

If the vendor says you’re responsible for tuning indexes, then:

  • Instead of dropping their indexes, disable them, and let their support team know which ones you’re disabling
  • Create your own indexes with your company’s initials, like BGO_CustomerNumber_CustomerName, so you can quickly identify which ones are yours versus which shipped with the product
  • Run sp_BlitzIndex® every month to make sure your indexes are actually helping, not hurting
  • When it’s time to deploy a new version of their app, enable the vendor’s indexes again first, drop your own indexes, run their deployment scripts, and then set things back the way you had ‘em (this way you minimize the chance of their deployment scripts breaking)
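
Here's a minimal T-SQL sketch of that disable-and-replace approach – the table and index names are hypothetical:

-- Disable the vendor's index instead of dropping it (the definition stays behind)
ALTER INDEX IX_Vendor_CustomerNumber ON dbo.Customers DISABLE;

-- Create your own replacement, prefixed with your company's initials
CREATE NONCLUSTERED INDEX BGO_CustomerNumber_CustomerName
    ON dbo.Customers (CustomerNumber)
    INCLUDE (CustomerName);

-- Before the vendor's next deployment: re-enable theirs (a rebuild does that),
-- drop yours, run their scripts, then set things back the way you had 'em
ALTER INDEX IX_Vendor_CustomerNumber ON dbo.Customers REBUILD;
DROP INDEX BGO_CustomerNumber_CustomerName ON dbo.Customers;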

There's no right or wrong choice – for example, SharePoint customers aren't allowed to touch the database, whereas SAP customers are encouraged to do their own index tuning through the SAP UI. The key is knowing which method your vendor supports right from the start. Otherwise, both sides just assume the other side is taking care of the problem.

Brent’s Bad Idea Jeans [Video]

Over the years, I’ve come up with some rather “interesting” answers to SQL Server problems.

In this 20-minute video, I’ll explain what drove me to these crazy solutions, and it’ll be up to you to decide whether they’re awesome or awful.

For questions & answers about these, tune in to our Tuesday webcast.

Black Friday Sale: Free $299 for the Last Holdouts

All day long, you’ve been watching and waiting, thinking you could resist a killer deal.

Coupon LastHoldout299 gets you $299 off any of our $299 training videos.

Move fast – it’s only good for the first 5 buyers.

And with that, we’re done for the day! See you next year.

Black Friday Sale: 40% Off Training Videos

So you didn’t get in on the midnight doorbuster sale for 50-90% off our training videos, and you’re jealous of all the cool kids who got up early.

Coupon FunTrainingAtHome will take 40% off any of our online training videos.

This discount is in place for the rest of the day, with no limit on buyers. Go check ‘em out!

Black Friday Sale: 30% Off In-Person Classes

So you didn’t get in on the midnight doorbuster sale for 50-90% off our in-person classes, and now you’re kicking yourself because your boss gave you permission to attend.

Coupon EasyPainRelief will take 30% off any of our 4-5 day in-person training classes. Now, instead of the first number in their price being a 3, it’s a more palatable 2. Probably even more palatable than that cranberry salad you’re warming up.

This discount is in place for the rest of the day, with no limit on buyers.

Good luck, and hope to see you in Chicago, Denver, or Portland!

Black Friday Sale: Learn to Read Execution Plans for $99

You’re comfortable writing queries, but some of them are slow – and you need more ways to tune than just adding indexes. You’ve heard that you should read execution plans, but you don’t know where to start.

In this 5+ hour video class taught by Microsoft MVP Jeremiah Peschka, you’ll:

  • Learn how to read SQL Server execution plans
  • Understand how different query syntax changes plans
  • Explore the plan cache to find problem queries
  • Learn solutions to common query problems

You even get sample code to download, plus queries to check your own system and find the worst execution plans that need help.

You can watch it for 18 months on your desktop, laptop, iPad, or iPhone.

The first five people who use coupon code LearnToRead99 get the course for just $99. Go!

(Oh, and we may have activated a few more of those Doorbuster50 codes for 50% off, too.)

Black Friday Sale: Free $29 Coupon – And a Test

SQL Server’s Maximum Degree of Parallelism (MAXDOP) controls the number of processors that are used for the execution of a query in a parallel plan.
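
As a refresher, here's how you'd check and change it – the value 8 below is purely illustrative; pick yours from Microsoft's guidance:

-- MAXDOP is an advanced option, so surface those first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- View the current setting, then change it (0 = let SQL Server decide)
EXEC sp_configure 'max degree of parallelism';
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;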

Pop quiz: what’s our favorite knowledge base article for recommendations and guidelines for MAXDOP?

That KB# is your coupon code good for $29 off our training classes or videos – even the $29 videos! Good luck – it’s only good for the first 10 buyers.
