Are you still rebuilding all of your indexes every week – or heaven forbid, every night – and still having performance problems? Odds are, you’re not actually making things better – you’re making them worse! I explain why in this video:
For more, read my post: Stop Worrying About Index Fragmentation.
Great info, Brent. Thanks for sharing!!
And in a perfect world, I completely agree with you – but as you yourself touch upon many times, there may be issues outside one's control or decision-making sphere (so to speak) which make the world painfully not-perfect.
And then a 50% improvement in I/O is still worthwhile if the end user can actually measure or experience it.
At times, the low-hanging fruit is worth picking too – even if the delicious fruit at the top, which is harder to reach, is the best.
But in the perfect world….. oh, that would be nice. Where every server is dedicated and with enough RAM. 🙂
OK, I see
Thank you Brent
Brent – As always, very entertaining presentation. What you are saying makes total sense, in theory. Yes, it would be great if every server had enough memory to cache every database. The problem is, we don’t live in Theoretical Land. Do you have some numbers to support this approach? I think most DBAs are left-brain dominant, concrete thinkers like me. Personally, I would like some evidence before I abandon something so fundamental to my career as index maintenance. Is there a whitepaper available with supporting information?
Mike – if someone handed you a whitepaper with numbers for someone else’s environment, with someone else’s load, and someone else’s maintenance windows, why would you change your opinion?
You need to test in your environment, and furthermore, you need to:
1. Identify the problem you’re trying to solve.
2. Build a hypothesis on what you’ll do to solve it.
3. Measure the impact of your solution to prove that it worked.
Instead, you’re saying, “I’m just gonna keep doing something with no idea of whether it’s improving anything.” YOU’RE the one who needs numbers to support your approach, and that’s what this presentation was all about. 😉
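[Editor's note: the three steps above hinge on actually measuring. A minimal sketch of step 3 in Python – the `measure` helper and the lambda workloads are hypothetical stand-ins, not anything from the video; in practice you would time your real query or batch before and after a rebuild:]

```python
import statistics
import time

def measure(run_query, trials=5):
    """Time a workload several times and return the median in seconds.
    run_query stands in for whatever query or batch you are testing."""
    timings = []
    for _ in range(trials):
        start = time.perf_counter()
        run_query()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# Hypothetical usage: run the same stand-in workload before and after
# maintenance and compare medians. Using a median across several trials
# damps out one-off noise from caching or other activity.
before = measure(lambda: sum(range(100_000)))
after = measure(lambda: sum(range(100_000)))
print(f"median before: {before:.4f}s, after: {after:.4f}s")
```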
Brent – Thanks for the quick response, even though it came across a little defensive. I apologize if asking for a real-world example to support your presentation was unreasonable. I realize that you aren’t comfortable committing to a recommendation, because there is no one-size-fits-all approach to anything related to SQL Server. At the same time, I have seen multiple times in my career, in multiple environments, when the index maintenance procedures didn’t run for a couple of weeks and overall application performance suffered noticeably. While it’s true that I may have “no idea of whether it’s improving anything”, that doesn’t mean it’s not preventing things from getting worse.
Right. I’ve also seen multiple times where adding indexes made the server faster. Does that mean we should always add indexes? Nope.
Superstitions are bad. That’s the main point of this entire video – snapping people out of their “I’ve always done it this way, so it must be helping” mentality. I’m using brutally strong wording here in an effort to grab you by the shoulders, slap you across the face, and say, “Wake up!” 😀
Awesome video….very eye-opening explanation!
This video does a great job of doing what it seems you set out to do…challenging our superstitions and sacred cows, getting us to drop a lot of our hardened assumptions reinforced from years of use without challenge. So, granting, as I do, that in a lot of cases with shared and/or SSD storage index fragmentation is the least of our concerns, and in some cases we are actually harming our environments by slamming them with needless IO and traffic all through the maintenance window, do you generally recommend moving away from server-wide index defrag jobs (like Ola Hallengren’s which we use) and not sweating the fragmentation even as it creeps up to 99%? Granted, we’d have to test for our applications/infrastructure/load. Just wondering if in your mind index defragmentation should be ignored full stop, or just that we should stop believing that it is a panacea to cure all manner of database ills, which I readily grant, it is not. I suspect the real answer is that classic and ever useful DBA response, “it depends” :).
Anyway, we definitely endure a lot of issues and grief in the name of index maintenance and stats updates, to be sure, which is why this is an interesting topic! Next you’ll be telling me that burning nightly offerings of old SCSI drives to the terrible God “Iops” will not cause our storage bottleneck to dissipate!
Nic – great, glad I was able to challenge those superstitions.
You took away exactly the right message – I’m not saying defrag is evil, but I’m saying it’s a tool that you want to use to solve problems. Index maintenance is still important – but just like any tool, it needs to be used appropriately. To use it, you need to understand the problem you’re solving. So, what’s the problem you’re trying to solve in your environment?
Really excellent video Brent – content and style 10 out of 10
Thanks, glad you liked it!
Thanks Brent for this. Think about the ALTER INDEX ... REBUILD command. It drops and re-creates the index, storing the data in contiguous locations, which directly means faster retrieval in most cases (I should say more than 90%). At 14:15 in your presentation you talk about the same thing. So now, if I don’t do that, over a period of time fragmentation will occur (both internal and external). To avoid that fragmentation I will still have to REBUILD or REORGANIZE all my “carefully” planned indexes. I have experience in the banking domain, where you have direct-attached storage, even Fusion-io, but fragmentation was still leading to more I/O when the required data was not retrieved in the “most efficient” manner. This is a real-world example. So I do not really agree that we should not rebuild indexes. Even if we DBAs resolve all our slow storage issues, we will still need to rebuild indexes to keep the B-tree maintained.
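[Editor's note: the extra I/O from internal fragmentation that this comment describes can be sketched with simple arithmetic. The numbers below are illustrative, not from any SQL Server DMV – if pages average 50% full, a full scan touches roughly twice as many pages:]

```python
import math

def pages_scanned(row_count, rows_per_full_page, avg_page_fullness):
    """Pages a full scan must read when pages are only partially full.
    avg_page_fullness: fraction of each page holding rows
    (1.0 = no internal fragmentation). Illustrative model only."""
    effective_rows_per_page = rows_per_full_page * avg_page_fullness
    return math.ceil(row_count / effective_rows_per_page)

full = pages_scanned(1_000_000, 100, 1.0)    # → 10,000 pages
halved = pages_scanned(1_000_000, 100, 0.5)  # → 20,000 pages
print(full, halved)  # pages averaging 50% full double the pages read
```

Whether that extra I/O matters in practice is exactly the question the video raises: if those pages are already cached in RAM, the scan never hits the disk at all.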
I like the way you put the information together, but the issue is that you have a large (huge) number of assumptions that need to be true for re-indexing not to matter. I believe you are just putting up a straw-man argument: that fragmentation doesn’t matter. It doesn’t matter if... and there are a lot of ifs. Yes, CPU and RAM should be addressed first, but some query is going to go to disk, and when it does, it needs to be as fast as financially possible. It might be that everything on a SAN is all mixed together, but if it isn’t, then one of your assumptions is not true and the argument goes out the window. Don’t get me wrong, I understand where you are headed, but there are a large number of more junior-level DBAs who listen to this stuff, and it would be better if you presented the information without the large number of assumptions, or at the very least made the assumptions very apparent. Essentially, the message that will be taken away from this, intended or not, is that there is no point in re-indexing, when in fact next to no one will have their systems set up in such a way as to allow that.
Keith – as you’ve figured out, the point of the post and the video is to get you to think through what you’re doing. I’m glad to hear you’re putting the thought into it just as I’d intended.
hahahahahah…that was awesome. Merry Christmas.
You too, sir!
Hahaha, a little monk asks a senior monk: which is more important – the Moon or the Sun? The senior monk answered: the Moon, because it shines in the dark when needed. Think about it, fellows.
I think I just got mooned.
By the way, he says “fix that query”. That is 100% right. Bad queries can always bring down the server. Sometimes it is vendor software, so you have to get to the right people who own it to fix it. They can be defensive.
Referred here by you (Brent) from another video but… video is gone 🙁
Hopefully you can resolve, but even if not still want to say thank you so much for all you do, your courses are fantastic!
Working okay here – you may need to check the firewall at work that’s stopping videos.
Just copy/pasted the url in the browser and got a more detailed error: “This video is private”.
So… I’m thinking it’s probably not my nonexistent firewall, lol. 🙂
Just to confirm from another source, shows “private” status from here too…although plenty of other channel videos come through just fine. (comments subscription brought me back here!)
Working okay here – might be a transient thing from YouTube. May wanna check back in a few days. Cheers!
You suggest setting fill factor to 100%.
So in what scenario should I set fill factor to 90%, 80% or 70%?
What’s the problem that you’re trying to solve?
Thank you. I just saw the video and I was asking myself why Microsoft created this feature. In the past I used to set fill factor to 90% or 80% because I “believed” that this way there was always empty space to fill, so index fragmentation would occur less often.
“…I will leave empty space so SQL Server doesn’t have to go write at the end. I will eventually defrag 1 per month this way…”
I now see it was just an assumption.
So why did Microsoft put this feature in my hands?
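[Editor's note: the trade-off behind fill factor can be sketched numerically. These figures are illustrative, not SQL Server internals – a lower fill factor leaves free slots on each page so inserts don't immediately split pages, but it also inflates the index, so every scan reads more pages:]

```python
import math

def pages_for(rows, rows_per_full_page, fill_factor):
    """Pages needed for an index built with the given fill factor
    (fraction of each page filled at build time). Illustrative model."""
    return math.ceil(rows / (rows_per_full_page * fill_factor))

def free_slots(rows, rows_per_full_page, fill_factor):
    """Row slots left open for future inserts before pages must split."""
    return pages_for(rows, rows_per_full_page, fill_factor) * rows_per_full_page - rows

# Hypothetical index: 1M rows, 100 rows fit on a completely full page.
for ff in (1.0, 0.9, 0.8, 0.7):
    p = pages_for(1_000_000, 100, ff)
    s = free_slots(1_000_000, 100, ff)
    print(f"fill factor {ff:.0%}: {p:,} pages to scan, {s:,} free slots")
```

The sketch shows why the question “what problem are you trying to solve?” matters: a 70% fill factor buys headroom for inserts at the cost of roughly 43% more pages on every read, which only pays off on indexes with heavy random inserts.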