No one knows everything, especially about SQL Server. When you encounter a problem, especially for the first time, you may not know how to solve it. I’ve seen some crazy approaches taken to fix problems – like duct tape on a car. Many times, instead of fixing the root cause of a problem, such as a poorly-performing query, a server-level change is made.
Here are three rolls of duct tape to avoid.
Clearing the Plan Cache Regularly
“We found that the system would be running fine, then a user would run a report with a certain set of parameters, and it would slow down. Running DBCC FREEPROCCACHE every day solved that problem.”
When you run DBCC FREEPROCCACHE, every query execution plan in memory is dropped. The next time each query runs, SQL Server has to compile a plan from scratch, and compilation is expensive: CPU usage climbs, and queries run slower the first time they’re executed. Is it worth dumping the hundreds or thousands of plans that are saved for reuse for the sake of one or two that are bad?
If a query runs acceptably most of the time, but runs very slowly at other times, check into what else is running on the server at the same time, and also investigate statistics and parameter sniffing. Often, you’ll find that a simple change can fix the problem – without taking drastic measures that affect the whole server.
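If the culprit turns out to be a single parameter-sniffed plan, you can evict just that one plan instead of nuking the whole cache. A minimal sketch – the LIKE filter and the plan handle are placeholders for your own problem query:

```sql
-- Find the plan handle for the problem query (adjust the LIKE filter):
SELECT qs.plan_handle, st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.text LIKE '%MyProblemReport%';  -- hypothetical query text

-- Then drop only that plan from the cache, leaving everything else alone:
-- DBCC FREEPROCCACHE (0x0500...);  -- paste the plan_handle from above
```

This surgical approach (available in SQL Server 2008 and later) keeps the hundreds of good plans in cache while you dig into the statistics or parameter sniffing issue behind the one bad plan.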
Shrinking Files Regularly
“Tempdb grows from its normal 25-30 GB to 100 GB every Wednesday night. Our drive is only 100 GB in size, so I set up a job to shrink it back down every Thursday morning.”
Or, your log file doubles in size once a week, so you create a job to run DBCC SHRINKFILE weekly, too.
If this happens, something running in your database is causing the data or log file to grow. Perhaps a large index is rebuilt frequently, or a batch job inserts a large amount of data all at once instead of in smaller chunks. You need to find the source of the growth and fix it. Constantly shrinking and regrowing files leads to fragmentation, and fragmentation leads to poor performance.
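Before scheduling a shrink job, it helps to see which files are growing and how their autogrowth is configured. A hedged sketch against sys.master_files (the size math assumes the standard 8 KB pages):

```sql
-- Check current file sizes and growth settings across all databases.
-- size and growth are stored in 8 KB pages; convert to MB for readability.
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.master_files
ORDER BY size_mb DESC;
```

Run this periodically (or check the autogrowth events captured in the default trace) and you can catch the Wednesday-night offender in the act instead of cleaning up after it every Thursday.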
Combining Development and Production Databases on the Same Server
“Our database performance has been really bad the last month. We are testing new features in our development database, which is on the same server as our production database, and I think that might be the problem.”
When you write new code, you need to test it first. You absolutely should have separate development and production databases. You also should keep them on separate servers. When they share a server, they share resources, and poorly performing queries that are being tested in development can bring production to a screeching halt.
You have many options for setting up a development server. Trying to find hardware? Use an older server that has reached the end of its production lifecycle, or, if you have a virtual environment, request a virtual SQL Server. Worried about licensing? For development only, you can purchase a copy of SQL Server Developer Edition for about $50 per user.
Learning from the Past
These are only three of the valuable lessons I’ve learned from working with SQL Server. Finding the root cause of a performance problem and solving it is a much better long-term solution than implementing a short-term workaround.
There’s a new version in town. v18 adds new checks looking for:
- Backups stored on the same drive letters as user databases (Thanks @AlinSelicean!)
- Agent jobs that don’t have failure notifications set up (Thanks @Thirster42!)
- Free drive space per drive letter, shown if you pass @CheckServerInfo = 1 in. We don’t consider this a problem – we just show additional info if you use this parameter.
- Added the elevated database permissions check back in (whoops! dropped it accidentally)
Plus bug fixes and improvements including:
- Lots and lots of typos
- Ignore max file sizes for filestream data files
- Switched a lot of @@SERVERNAME usage out with SERVERPROPERTY('ServerName') because in some SQL Server cloud implementations, those don’t match, and it’s okay.
- Changed database name variables to be NVARCHAR(128) to handle those monster SharePoint databases
- Improved the non-aligned-indexes check to return results even if the index hasn’t been used
And more. Version 17 also added a neat new exclusions table parameter – if you’re the kind of DBA who wants to automate sp_Blitz data collection throughout the environment, but you want to skip certain checks or certain databases, you can do that now. I haven’t written up full instructions on that, but the basic idea is pretty obvious from reading the first 100-200 lines of the code.
As always, you can get the latest version and instructions from http://www.BrentOzar.com/blitz/. If you’d like to contribute code, you can email us at Help@BrentOzar.com. I’m still working through a backlog of about 20 more new check contributions, and we’ve got some fun stuff coming – including a thank-you page listing the contributors and their work.
Developers and database administrators frequently clash on major questions about how to scale application performance. Are they just being stubborn, or is one side really right? Jeremiah and I recently gave a talk on this topic to the AtlantaMDF user group, and we wanted to open this debate to the world.
Someone’s Right and Someone’s Wrong
Developers and DBAs are usually being stubborn, and for good reason. DBAs tend to be tasked with being the protectors of an environment, while developers are on a mission to ship more features, make changes, and bring in more money. Each side has a reason for their views, but on most controversial topics, one side actually is more right than the other.
Most DBAs believe that Object Relational Mapping tools (ORMs) write terrible code and that stored procedures are a better data access layer. This belief is widespread because database administrators struggled for years with early versions of NHibernate and Entity Framework.
Some of the growing pains hurt: everything from type mismatches to hilariously long and tangled queries has burned people. But developers embrace ORMs for good reasons. These tools really do help build, test, and ship features more quickly. If you become a SQL Server performance tuning specialist, you can get the best of both worlds – but you have to let go of some of your hangups about ugly SQL queries.
It’s hard to argue against application caching… at least unless you’ve tried to implement it and come up frustrated. DBAs argue that developers should cache everything, while developers can tell you how that isn’t as easy as it sounds.
We know that usually there IS a place for caching – you just need to know how to find the “biggest bang for your buck” opportunities. (We’ve got some slides and scripts below to get you started.)
Access to Production
Who hasn’t had this argument over and over? Usually developers are fighting to get into production and DBAs are fighting to keep them out. Lots of time gets wasted on this topic needlessly.
We created tools like sp_BlitzIndex to help people share information for performance tuning more easily. You just gotta find the right tools and the right level of access that lets everyone do their jobs (and keep their jobs).
Check out the Full Presentation
Wish you were there? Check out the slides:
Get the Scripts
Download ‘em here.
Join Us Live
We train developers and DBAs to diagnose the real bottlenecks in their SQL Servers and use data to get beyond the problems we just described. Join us at a live training event to learn how to performance tune SQL Server.
Get frustrated when you read conflicting opinions on the web? Me too — I can’t go to sleep when someone’s wrong on the Internet, but it’s tough to correct everybody. In this 30-minute session, I’ll explain the most common bad advice that I see, explain why it’s wrong, and show you how to set up your SQL Server for speed, not slowness.
Let’s say you get your hands on a brand new version of SQL Server that you haven’t played with before, and you want to know what system objects have changed – DMVs, DMFs, system stored procs, yadda yadda yadda. Sure, you could read the documentation – but ain’t nobody got time for that, and they’re probably leaving out the juicy stuff, right? Here’s what I do (thanks to improvement suggestions from Kendra):
First, create a linked server between the new version and the old version. This lets us query across servers and join things together. In my examples, I’ll be running my queries on the new SQL Server, which I’ll be calling local, and the linked server will point to the remote server.
Identify new system objects with this query:
SELECT oLocal.*
FROM master.sys.all_objects oLocal
LEFT OUTER JOIN MyOldServer.master.sys.all_objects oRemote
    ON oLocal.object_id = oRemote.object_id
WHERE oRemote.name IS NULL
ORDER BY oLocal.type_desc, oLocal.name;
We’re getting all local system objects that don’t have a matching system object on the other (remote) server. The “MyOldServer” is the server name for our older SQL Server box that we’re comparing against – our linked server name. The results give you the new functions, stored procs, you name it:
We’re joining on object_id, but that’s a gamble – Microsoft could use different object_ids between versions if they wanted to. If we wanted to filter those out, we could join on name, schema_id, and type_desc (view, stored proc, etc.), but that gets a little sketchy. For example, there are now two CHECK_CONSTRAINTS views in SQL Server 2012 with different object_ids, and we wouldn’t catch that if we were only joining on names. In the below screenshot, the top result set is from SQL Server 2012, and the bottom is from SQL Server 2008R2:
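For reference, the name-based join described above might look like this sketch – with the caveat just mentioned that it would miss cases like the duplicated CHECK_CONSTRAINTS view:

```sql
-- Variation: match objects by name, schema, and type instead of object_id.
-- COLLATE handles collation differences between the two servers.
SELECT oLocal.*
FROM master.sys.all_objects oLocal
LEFT OUTER JOIN MyOldServer.master.sys.all_objects oRemote
    ON oLocal.name = oRemote.name COLLATE DATABASE_DEFAULT
    AND oLocal.schema_id = oRemote.schema_id
    AND oLocal.type_desc = oRemote.type_desc COLLATE DATABASE_DEFAULT
WHERE oRemote.name IS NULL
ORDER BY oLocal.type_desc, oLocal.name;
```

Neither join is bulletproof on its own; running both and comparing the results is the safer play.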
Next, we want to find new or changed columns in existing objects. Here’s how:
SELECT oLocal.type_desc, oLocal.name, cLocal.*
FROM master.sys.all_objects oLocal
INNER JOIN master.sys.all_columns cLocal
    ON oLocal.object_id = cLocal.object_id
INNER JOIN MyOldServer.master.sys.all_objects oRemote
    ON oLocal.object_id = oRemote.object_id
LEFT OUTER JOIN MyOldServer.master.sys.all_columns cRemote
    ON oLocal.object_id = cRemote.object_id
    AND cLocal.column_id = cRemote.column_id
    --AND cLocal.name = cRemote.name COLLATE DATABASE_DEFAULT
WHERE cRemote.name IS NULL
ORDER BY oLocal.type_desc, oLocal.name, cLocal.column_id;
Note that we’ve commented out a join on name – if you’re worried that existing columns might have changed names, that’s how you’d catch it. The results:
Presto – we can see what new instrumentation fields Microsoft is giving us. This helps script authors do a better job of testing their changes across multiple versions. When someone sends me an sp_Blitz improvement that relies on a DMV or field I’m not familiar with, I can quickly check multiple servers to see if it’s available across all of my lab servers, or which version it appeared in.
On May 9-10, we’re doing a 2-day training course for developers who rely on SQL Server. Here’s the most frequently asked questions we’ve gotten so far.
Q: Will you be doing this training in a different city this year?
Nope – this is it. In 2014, we’re going to offer a 2-day developer course and a 2-day DBA course, and we’re going to offer them together in the same week. We’ll run that in two different cities – Chicago, and a city to be determined.
Q: That’s four days. What are you going to do on the fifth day that week?
A free event open to the public sounds like a good idea, doesn’t it?
Q: How many people are coming?
There’s a couple dozen developers signed up so far, plus a couple of DBAs too. (Hey, everybody wants to figure out how to make queries go faster.) And of course there’s all four of us, too.
Q: Do I get the scripts and handouts?
Yes, you’ll get some of our favorite troubleshooting and demo scripts – cool stuff that we save for private paid sessions. We use these exact same scripts during our SQL Server Critical Care™ sessions to help clients make their SQL Servers faster and more reliable.
Q: My schedule is unpredictable. I want to save a spot but I might not make it.
That’s not a question. Okay, here’s the answer anyway – you can hold a spot in the class with a 20% nonrefundable deposit. On the order page, click the “Show other payment options” button, and choose to pay via check. It’ll give you payment information including the address for the check. Send the 20% now, and send the remaining 80% before the event starts. (We won’t be accepting checks onsite.)
Q: How will the Q&A work?
In addition to the onsite Q&A, we’ll run two two-hour Q&A webcasts after the training. This gives you the time to go back to the office, dig into what you learned, and ask followup questions based on your specific environment. We’ll run a morning session and an afternoon session to help more people attend.
Q: What’s with the “Enter promotional code” link on the order page?
It’s where you can put in stuff to get a discount on the price.
Q: What’s one of the discount codes?
I could tell you, but then I’d have to kill you. But they probably line up with funny things that have happened to us in the past, like when we talked about our experience writing a book or we dressed up in costumes for sessions. And they would probably expire soon, like the end of this week.
Q: How do I get my boss to send me?
Check out our business justification PDF. It’s written with the pointy-haired guys in mind.
Q: Okay, I want to register. Where do I click?
Developers frequently make a big mistake: they tune out on discussions of storage performance and IO. Don’t fall into this trap! Understanding how storage impacts SQL Server performance helps you tune the right parts of your code and focus on what really matters. In this 30-minute video, Microsoft Certified Master Kendra Little demonstrates why developers need to understand the IO impact of writes and reads in SQL Server. She’ll show you how simple configuration changes can sometimes save hundreds of hours of developer time.
Interested in the tools or references in the video? Check out the links at the bottom of this post.
Links and References
I’ve gotten a lot of great questions about log shipping through these webcasts, so I’ve put together an FAQ.
- What editions of SQL Server is log shipping available in?
- 2012 – Enterprise, Business Intelligence, Standard, and Web
- 2008R2 – Datacenter, Enterprise, Standard, Web, and Workgroup
- 2008 – Enterprise, Standard, Web, and Workgroup
- 2005 – Enterprise, Standard, and Workgroup
- Does the secondary need to be licensed?
- I am not the licensing police, and I am not Microsoft – check with your licensing representative to clarify your exact situation. Generally, you can have one warm standby server. However, the second someone starts using it for reporting, testing, or anything else, you need to license it like any other server.
- Log shipping is compatible with backup compression. What edition of SQL Server do I need to take advantage of compression?
- 2012 – Enterprise, Business Intelligence, or Standard
- 2008R2 – Datacenter, Enterprise, or Standard
- 2008 – Enterprise
- 2005 – Not available
- When log shipping is set up, Agent jobs are created to alert me if a backup, copy, or restore fails. How do I get notified?
- You need to go into the Agent job, pull up Notifications, and choose your method – email an operator, or write to the event log, for example.
- Are my logins shipped from the primary to the secondary?
- No, they are not. You’ll need to set up a separate method to sync the logins.
- Does this replace, or can it be combined with, our existing daily full and log backups?
- TL;DR – no.
- You’ll still want to take regular full and/or differential backups. Log shipping only takes one full backup – at the beginning – and that’s only if you specify that it does so. It can also be initialized from an existing full backup.
- Taking two log backups in separate jobs will break the log chain, however. If you implement log shipping, it will replace your current transaction log backup job.
- What’s the difference between the secondary being in “Restoring” vs. “Standby”?
- Restoring means the database is not accessible. Standby means it is read-only. You make this decision when you set up the log shipping.
- If the database is in Standby mode, users can query it – except when a log backup is being restored. You need to decide if a restore job will disconnect users, or if the restore is delayed until after the users are disconnected.
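Those Restoring-vs.-Standby and disconnect-users choices map directly onto parameters of the log shipping setup procedure. A sketch of the relevant call, run on the secondary, with server and database names assumed for illustration:

```sql
-- Sketch: the Restoring vs. Standby choice is made when the secondary is set up.
-- @restore_mode: 0 = NORECOVERY (Restoring, not accessible)
--                1 = STANDBY (read-only between restores)
-- @disconnect_users: 1 = kick users out so the restore job isn't delayed
EXEC sp_add_log_shipping_secondary_database
    @secondary_database = N'MyDatabase',    -- hypothetical names
    @primary_server = N'PrimaryServer',
    @primary_database = N'MyDatabase',
    @restore_mode = 1,
    @disconnect_users = 1;
```

The SSMS log shipping wizard sets these same options for you; the procedure call is just what it generates behind the scenes.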
What other questions do you have?
Next month, I’ll be in Vegas at the SQLIntersection conference. Yeah yeah yeah, I’m a presenter, but forget that for a second. I want to give you a tour of the agenda and talk about why I’m so excited to be there as an attendee, too.
Here’s the pre-conference workshops:
Sunday, Kimberly Tripp is talking indexes. If you work with indexes as a DBA or a senior developer, I can’t think of a better index teacher today than Kimberly. I’m lucky to have seen most of the great SQL presenters in person around the world, and when it comes to indexes, she’s at the top. At $450 for one day, this is a screaming deal.
Monday, Kim’s got a workshop on understanding the plan cache. She’s a complete expert at that too, at the top of the game, and it’s a great session for developers and DBAs who manage internally developed applications. I’ve got a one-day workshop on SQL Server setup best practices, which seems odd because there’s a million best practices documents out there already, right? Well, things have changed a lot this year with SSDs, VMware, clusters, TempDB, and AlwaysOn Availability Groups. If you’re building a new server this year, you need to revisit what you think you know.
On Tuesday, the main conference sessions start:
There’s three track rooms, and in each room, we’ve got a host/MC. The host will be facilitating discussion in the room, helping with Q&A, and talking shop in between sessions. This is a completely new way of handling downtime between sessions, and I’m really excited about it.
The speakers are all killer, no filler. These are people who know their subjects really well, and they do a fantastic job of explaining the topics. It’s a complete all-star list, and I could write for pages about just this one list of sessions, but I’m going to focus on the people in my track just because I don’t wanna bore you to death.
Joe Sack ran the Microsoft Certified Master of SQL Server program when he worked at Microsoft, and now he’s with SQLskills. He’s one of the funniest, friendliest guys I know, and if you’ve been reading his incredibly detailed blogs, you’ll already know he’s smart too. His session topic is so relevant – if you’re not already troubleshooting SQL Server CPU issues, you will be soon. Companies are getting tighter and tighter with SQL licensing now that we pay by the core, and they want to minimize CPU use as much as possible.
Paul White is the blogger that wows other bloggers. His blog posts do an incredible job of teaching even the smartest SQL Server professionals, and I can’t wait to see his session on parallel execution. His classic post on forcing parallel execution plans contains a ton of gems, even just starting with the list of things that cause your queries to go single-threaded.
Andrew Kelly has been distilling the best parts of performance monitoring for years, and he’s at the top of his game here too. He doesn’t get the blog press that the other speakers get, but believe me, he knows his stuff forwards and backwards. One of my favorite gauges of presenter quality is the number of notes that attendees take during the session, and I always see people scribbling like crazy while Andrew’s onstage.
The day finishes up with Kevin Kline and Sumeet Bansai talking about how you can use local SSDs to achieve high availability and high performance with AlwaysOn Availability Groups. I’m not gonna lie – keynotes at conferences usually suck hard, but with Kevin onstage, that’s not going to happen here. His recent post about doing good technical demos is a clue that he not only takes his work seriously, but he wants to explain it to others well, too.
And that’s just the first conference day.
Because SQL Intersection is a brand new conference, you might be thinking, “I dunno if I should go. Maybe it’s just a flash in the pan and they don’t know what they’re doing.” Au contraire, mon frere – this conference is set up to be very different than others, and in a good way.
April is a great time to be in Vegas – the weather’s wonderful. Vegas has some of my favorite restaurants in the world, and I love seeing shows like Cirque du Soleil and Penn & Teller.
If you’d like to join me, Jeremiah, Kendra, and the sharpest, friendliest minds in the SQL Server business, register for SQL Intersection. The Show Package (conference plus one pre-conference or post-conference session) even includes a free Surface RT tablet. (I’m so not ready to see an army of attendees with Surface RTs.)
Use discount code OZAR and you’ll get $50 off, plus we’ll donate $10 to charity. We’re looking for charity ideas, so leave a comment with your favorite charity. We’ll pick our favorite suggestion(s) and report back with how much you gave to charity after the show.
Log shipping is a tried and true method in SQL Server for keeping a copy of your data on a secondary server. You have your primary and secondary server set up, and it’s working great. Are you monitoring it? Do you know what to do if you need to fail over to the secondary? Join Jes in this free 30-minute video to find out!
In case you missed it: Log Shipping Part 1: Preparing for Disaster!