Blog

Announcing Our Newest Video and In-Person Classes, and 30% Off in July

We’re really excited to share these, plus give you a discount to celebrate.

First, we’ve added a new video class - How to Read Execution Plans with Jeremiah. You’re comfortable writing queries, but some of them are slow – and you need more ways to tune than just adding indexes. You’ve heard that you should read execution plans, but you don’t know where to start. Learn more about reading execution plans.

Next up, our 2015 in-person training class lineup. Our students told us they loved our 2014 classes, but they wanted more of everything. We’ve lengthened the classes – we took the performance one from 3 days to 4, and added a couple of new 5-day classes:

Advanced Querying and Indexing: 5-day in-person class. Do you need to learn to write the fastest queries possible for SQL Server? In 2015, join us for five days of advanced T-SQL query and index optimization in Chicago or Portland.

SQL Performance Troubleshooting: 4-day in-person class. You need to speed up a database server that you don’t fully understand – but that’s about to change in four days of learning and fun in Chicago, Denver, and Portland.

Senior DBA Class of 2015: 5-day in-person class. You’re a SQL Server DBA who’s ready to advance to the next level in your career, but you aren’t sure how to fully master your environment and drive the right architectural changes. That’s about to change in one week of learning and adventure in Chicago and Denver.

Some of our students (especially the consultants) told us they wanted to really go in-depth and take two weeks of classes back-to-back. To make that easier, we lined up our classes and put them in some of our favorite cities, at the best times to spend a weekend between classes:

Denver in February (hey, it’s ski season!):

  • February 2-6 – Senior DBA Class of 2015
  • February 9-12 – SQL Server Performance Troubleshooting

Chicago in May (best time to visit our fair city):

  • May 4-8 – Advanced Querying and Indexing
  • May 11-14 – SQL Server Performance Troubleshooting

Portland in August (Oregon summers are beautiful):

  • Aug 3-7 – Advanced Querying and Indexing
  • Aug 10-13 – SQL Server Performance Troubleshooting

Chicago in September (not too hot, not too cold):

  • Sept 14-18 – Senior DBA Class of 2015
  • Sept 21-24 – SQL Server Performance Troubleshooting

Check out our full catalog of in-person training events and online training videos – and all of our videos and classes, not just the new ones, are 30% off with coupon code Launch2015 during July. Come join us!

How to Back Up Terabytes of Databases

When you weren’t looking, your databases went and grew up. Now your backup window has grown so large that you’re about ready to open it and jump.

Time to make a choice.

The Native Way: Tuning SQL Server Backups

You can theoretically pull this off by using a combination of tactics:

Back up as infrequently as the business will allow. Run your full backups once a week (or if you want to go wild and crazy, once per month) and differential backups periodically. As Jes explains in her backup and recovery class, differentials back up the data pages that have changed since the last full backup. When disaster strikes, you only need to recover the most recent full backup, the most recent differential backup, and all of the log backups after the differential. This can shave a lot of time off your restores – but only if you minimize the number of changed pages in the database. This means…
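
A quick aside before that: here’s a rough sketch of the weekly full / periodic differential / frequent log rhythm and the restore sequence just described. The database name and file paths are made up for illustration.

    -- Hypothetical example: database name and paths are placeholders.
    BACKUP DATABASE SalesDB TO DISK = N'X:\Backup\SalesDB_Full.bak'
        WITH CHECKSUM, STATS = 10;                 -- weekly full
    BACKUP DATABASE SalesDB TO DISK = N'X:\Backup\SalesDB_Diff.bak'
        WITH DIFFERENTIAL, CHECKSUM, STATS = 10;   -- periodic differential
    BACKUP LOG SalesDB TO DISK = N'X:\Backup\SalesDB_Log.trn'
        WITH CHECKSUM;                             -- log backups in between

    -- After a disaster: most recent full, most recent differential,
    -- then every log backup taken after that differential.
    RESTORE DATABASE SalesDB FROM DISK = N'X:\Backup\SalesDB_Full.bak' WITH NORECOVERY;
    RESTORE DATABASE SalesDB FROM DISK = N'X:\Backup\SalesDB_Diff.bak' WITH NORECOVERY;
    RESTORE LOG SalesDB FROM DISK = N'X:\Backup\SalesDB_Log.trn' WITH RECOVERY;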

Change the database as little as possible. We can’t change what the users do, but we can change what we DBAs do. Stop doing daily index defrag/rebuild jobs – you’re just changing pages in the database, which means instantly inflating the size of your differential backups. In a scenario like this, index maintenance has to be your absolute last resort – only do it when you’re sure it’s the only way to solve a specific performance problem.

My backup camera collection has started to grow out of control.

Tune the data file read speeds. You need to read the pages off disk as fast as possible to back them up. Use tools like CrystalDiskMark and SQLIO to measure how fast you’re going, and then tune your storage to go faster.

Compress the data as much as possible. It’s not just about minimizing the size of your backup file – it’s about minimizing the amount of data we have to write to disk. Bonus points for using index compression inside the database so that it’s compressed once, not recompressed every time we do a backup, although that doesn’t really help with off-row data.
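
If you go the data compression route, it’s a one-time rebuild per index. Here’s a minimal sketch assuming a made-up table and index name (and keep in mind data compression is an Enterprise Edition feature on these versions):

    -- Hypothetical example: estimate the savings first, then compress once.
    EXEC sys.sp_estimate_data_compression_savings 'dbo', 'Orders', NULL, NULL, 'PAGE';

    ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
        REBUILD WITH (DATA_COMPRESSION = PAGE);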

Tune the backup target write speeds. If you’re using a small pool of SATA drives in RAID 5 as a backup target, it’s probably not going to be able to keep up with a giant volume of streaming writes, even if those writes are compressed versions of the database. Problems will get even worse if multiple servers are backing up to the same RAID 5 pool simultaneously because the writes will turn random, which is the worst case scenario for RAID 5.

Tune the bottleneck between the reads and the writes. If you’re backing up over the network, use 10Gb Ethernet to avoid the pains of trying to push a lot of data through a tiny 1Gb straw.

Tune your backup software settings. If you’re using native backups, start with using multiple files and the built-in options, and graph your results. Third party compression products usually offer all kinds of knobs to tweak – you’ll need to use that same level of graphing diligence.
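
With native backups, that usually means striping across several files and experimenting with the built-in knobs. Something along these lines is a reasonable starting point – the paths and numbers are made up, and you’d tune them based on your own graphs:

    -- Hypothetical starting point: adjust one knob at a time and graph the results.
    BACKUP DATABASE SalesDB
        TO DISK = N'X:\Backup\SalesDB_1.bak',
           DISK = N'Y:\Backup\SalesDB_2.bak',
           DISK = N'Z:\Backup\SalesDB_3.bak'
        WITH COMPRESSION, CHECKSUM,
             BUFFERCOUNT = 50,            -- more buffers = more memory used by the backup
             MAXTRANSFERSIZE = 4194304,   -- 4MB per transfer (the maximum)
             STATS = 5;                   -- progress output to compare run over run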

Whew. I got tired just typing all that stuff. And if you’re lucky, at the end of it, your backups will complete in an hour or two, but the server might be darned near unusable while you’re beating the daylights out of it. Then the fun balancing act starts, trying to figure out the right point where the system is still usable but the backups complete quickly.

Or Just Cheat with SAN Snapshots.

In my Virtualization, SAN, and Hardware video class, I explain how SAN snapshots are able to take a full database backup of any size in just a couple of seconds.

I’m a big believer in snapshots.

Seriously.

Sort of.

See, while it’s technically a backup, I don’t really consider it a backup until it’s off the primary storage device. Your SAN storage, expensive as it was, is still vulnerable to failure, and you need to get that data out as quickly as possible. The good news is that you can move that data out without dragging it through the SQL Server’s storage connections, CPU, and network ports. You can simply (simply?) hook a virtual tape library, actual tape library, or another storage device to the same storage network, and copy directly between the two.

Your data read speeds may degrade during that process, but it’s up to you – if you want to architect your storage so that it’s fast enough to do these full backups without any noticeable performance impact to the end user, it’s possible by inserting enough quarters in the front.

You still have to pay attention, though, because your backup process will look like this:

  • Daily full backups via SAN snapshots – all writes are quiesced for 1-10 seconds during this time
  • Conventional log backups every X minutes – where X is dictated by the business

If you push a big index rebuild job through, you can still bloat the transaction log, and your log backups may take longer than X minutes to complete. This is where our RPO/RTO planning worksheet is so important – if your RPO is 1 minute, you simply may not be able to do index rebuild jobs.

SAN snapshots have one other drawback: depending on your storage make/model, snapshots may not be included in your licensing pricing. You may have to spend a lot more (typically tens of thousands of dollars) to unlock the feature. Ask your SAN admin if snapshots are right for your wallet.

Building Terabyte Servers Means Starting with Backups First

When I’m building a SQL Server to hold multiple terabytes of databases, this backup question is the very first one we have to address – even before we talk about the speed of end user queries.

Otherwise, we could end up designing a server with all local solid state drives, which is very inexpensive and satisfies end user performance goals – but then we can’t back the data up fast enough.

Our Senior DBA Training Class: Attendee Feedback

When we finish our courses, we ask attendees what they thought. Here’s what they said about our How to Be a Senior DBA class in Chicago:

“I’ve attended many trainings.  This is the most valuable I’ve ever attended.” – Tim Costello, Consultant

“ROI for training is always a gamble.  However, I felt like I received my money’s worth before lunch on the first day.  Outstanding!” – Zach Eagle, DBA

“The single best database class I’ve ever attended.  I feel challenged to step up not only my DBA game, but my presentation & training game as well.” – Ben Bausili, Consultant

Goat and Jedi not always included. But sometimes. Like in Chicago.

“I would tell people that this is the perfect course to take if you want to gain control of your environment and impress management.” – George Larkin, DBA

“This course is very in-depth and helpful to get information on the things you need to do in order to take your DBA skills to the next level.  The course is a bit overwhelming at times, but you need to have all of this detail.  Thanks to Brent and his team for answering any questions that came up and I would definitely recommend this course to others.” – Mike Hewitt, Software Engineer/DBA

“Outstanding training!  I knew very little about SANs & SSD.  I am now confident that I can troubleshoot on a higher level to determine if storage is ever our problem.  Thanks Ozar group!  You guys rock!” – Judy Beam, DBA

“Excellent format, content and team of presenters.  More like a conversation between peers than a boring lecture.  Fun and very meaningful.  You have a great team!” – Greg Noel, COO/CIO

“I came into the class with unfairly high expectations; Brent and Kendra somehow managed to exceed them.  Thank you guys!” – Ben Wyatt, DBA/Consultant

“How to be a Senior DBA is a consistently delivered training regimen that gives you useful tools and knowledge in a digestible and fun format.” – Luther Rochester, DBA

“Brent & the gang are approachable and easy to talk to.  They break down complex subject matter to easier to understand presentations.” – Kevin Murphy, Data Architect

“Far and away better than the MS courses.  The Ozar team knows their stuff and doesn’t pretend to know while they Google the answer on a break.” – Jon Worthy, IT

“This class is a must for every DBA.  All modules are well prepared & presented by two very well-respected SQL gurus in the community.  All questions & demos are answered & illustrated clearly.  I’ve learned a lot!” – Chai W., DBA

What will you say? Sign up now for the next one in Philadelphia this September or our Make SQL Server Apps Go Faster class in Seattle.

New sp_Blitz® v35 Checks for Unsupported and Dangerous SQL Server Builds

News broke recently of a dangerous data loss bug in SQL Server 2012 and 2014, and Aaron Bertrand explained which patch levels are affected. It’s kinda tricky, and I’m afraid most people aren’t even going to know about the bug – let alone whether or not they’re on a bad version.

I added build number checking into the latest version of sp_Blitz® so that you can just run it. If your build has the dangerous data loss bug, you’ll get a priority 20 warning about a dangerous build. If you’re running an out-of-support version, like SQL 2012 RTM with no service packs, you’ll get a priority 20 warning about that as well.
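
Running it is the easy part. If you’d rather double-check your build number by hand as well, SERVERPROPERTY will tell you what you’re on:

    EXEC dbo.sp_Blitz;   -- look for Priority 20 rows about dangerous or unsupported builds

    SELECT SERVERPROPERTY('ProductVersion') AS build_number,
           SERVERPROPERTY('ProductLevel')   AS service_pack_level;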

Other changes in the last couple of versions:

Changes in v35 – June 18, 2014:

  • John Hill fixed a bug in check 134 looking for deadlocks.
  • Robert Virag improved check 19 looking for replication subscribers.
  • Russell Hart improved check 34 to avoid blocking during restores.
  • Added check 126 for priority boost enabled. It was always in the non-default configurations check, but this one is so bad we called it out.
  • Added checks 128 and 129 for unsupported builds of SQL Server.
  • Added check 127 for unnecessary backups of ReportServerTempDB.
  • Changed fill factor threshold to <80% to match sp_BlitzIndex.

Changes in v34 – April 2, 2014:

  • Jason Pritchard fixed a bug in the plan cache analysis that did not return results when analyzing for high logical reads.
  • Kirby Richter @SqlKirby fixed a bug in check 75 (t-log sizes) that failed on really big transaction log files. (Not even gonna say how big.)
  • Oleg Ivashov improved check 94 (jobs without failure emails) to exclude SSRS jobs.
  • Added @SummaryMode parameter to return only one result set per finding.
  • Added check 124 for Performance: Deadlocks Happening Daily. Looks for more than 10 deadlocks per day.
  • Moved check 121 for Performance: Serializable Locking to a lower priority (down to 100 from 10); it now only triggers when more than 10 minutes of that wait have accumulated since startup.
  • Changed checks 107-109 for Poison Waits to have higher thresholds, now looking at more than 5 seconds per hour of server uptime. Been up for 10 hours, we look for 50 seconds, that kind of thing.

Download the latest sp_Blitz®, or head over to the introduction page. Enjoy!

Announcing a New 2-Day Event: Make SQL Server Apps Go Faster

We started with our top-selling PASS Summit pre-con, and then doubled it to two days long!

You’re a developer or DBA stuck with a database server that’s not going fast enough. You’ve got a hunch that the biggest bottleneck is inside the database server somewhere, but where? In just two days, you’ll learn how to use powerful scripts to identify which queries are killing your server, what parts of the database server are holding you back, how to tackle indexing improvements, and how to identify query anti-patterns.
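
To give you a taste of the “powerful scripts” part, here’s a rough sketch (not the actual class script) of a plan cache query that surfaces the queries burning the most CPU since they were cached:

    -- Top CPU consumers currently in the plan cache; text is the whole cached batch.
    SELECT TOP 10
           qs.total_worker_time / 1000 AS total_cpu_ms,
           qs.execution_count,
           qs.total_logical_reads,
           st.text AS batch_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;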

On Monday and Tuesday before the PASS Summit conference, we’ll dive deeply into:

  • How wait stats tell you where to focus your tuning
  • How the plan cache shows you which queries are the worst
  • How to make fast improvements by picking the right indexes
  • How to identify and fix the most common query anti-patterns
  • And you’ll even get execution plan, query, and index challenges to solve during the class

Make yourself comfortable and get your learn on.

You’ll be plenty comfortable because we’re holding this independent event in the Big Picture Theater in downtown Seattle, one mile from the Convention Center. Coffee, beverages, snacks, and lunch are all covered. (Sorry, no popcorn – you know we wouldn’t be able to stop eating it once we get started.)

Registration is $695, and for another $200, we’ll throw in our Developer’s Guide to SQL Server Performance video class. This way, you can get started learning right away without waiting for the class!

Act fast, save $100 - if you register during July with coupon code GOFAST, registration is only $595.

Space is limited to just 100 seats – way fewer folks than we had last year, so you’d better move fast if you want to get in. This is a totally independent training event run by Brent Ozar Unlimited – you can’t get in with your PASS Summit ticket, and you can’t register for it as part of the PASS check-out process.

Check out the details now.

Let’s Test Your Backups Together [Video]

You manage SQL Server databases, but you never get the chance to take time out of your busy day to test your backups. You assume that just because the jobs are succeeding, you’ll be able to restore your databases when disaster strikes. Join Brent Ozar as he walks you through several queries of your MSDB backup history tables, checks your RPO and RTO live, and helps you build a recovery strategy for your production databases in this one-hour video.
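
As a small taste of what the video covers, here’s a hedged example of the kind of msdb history query it walks through – when did each database last get a full, differential, and log backup?

    -- backupset.type: D = full, I = differential, L = log
    SELECT  d.name,
            MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full,
            MAX(CASE WHEN b.type = 'I' THEN b.backup_finish_date END) AS last_diff,
            MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log
    FROM sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS b ON b.database_name = d.name
    WHERE d.name <> 'tempdb'
    GROUP BY d.name
    ORDER BY d.name;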

For more videos like this:

Our Sessions at the #SQLPASS Summit 2014

The session list has been published, and we’re excited to say all of us have been selected to speak at the PASS Summit again this year. Here are our sessions:

Are Your Indexes Hurting You or Helping You? – Jes Schultz Borland – Queries need your help! Your mission, should you choose to accept it, is to make great decisions about what indexes are best for your workload. In this session, we’ll review the difference between clustered and nonclustered indexes, show when to use included columns, understand what sargability means, and introduce statistics. You’ll leave this session with the ability to confidently determine why, or why not, SQL Server uses your indexes when executing queries.

Developers: Who Needs a DBA? – Brent Ozar – You store data in SQL Server, but you don’t have enough work to keep a full-time DBA busy. In just one session, you’ll learn the basics of performance troubleshooting, backup, index tuning, and security. Brent Ozar, recovering developer, will teach you the basic care and feeding of a Microsoft SQL Server 2005, 2008, 2012, or 2014 instance and give you scripts to keep you out of trouble.

Dynamic SQL: Build Fast, Flexible Queries – Jeremiah Peschka – Dynamic SQL is a misunderstood and much maligned part of a DBA’s tool kit – it can be used to solve difficult business problems, respond to diverse data needs, and alleviate performance problems. Many DBAs reject dynamic SQL outright as a potential source of SQL injection, a performance problem, or just a hacky solution in general. Not so! Jeremiah Peschka has been making extensive use of dynamic SQL throughout his career to solve a variety of problems. In this session, we’ll be dispelling these misconceptions and demonstrating how dynamic SQL can become a part of every DBA’s tool kit.
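
For a flavor of the “done right” side of that argument, here’s a minimal sketch of parameterized dynamic SQL via sp_executesql. The table and columns are made up; the pattern is the point – the user-supplied value stays a parameter, so it can’t inject, and the plan can be reused:

    -- Hypothetical table dbo.Orders; only the optional filter is concatenated, never the values.
    DECLARE @CustomerID int = 42,
            @IncludeShippedOnly bit = 1,
            @sql nvarchar(max);

    SET @sql = N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID';
    IF @IncludeShippedOnly = 1
        SET @sql += N' AND ShippedDate IS NOT NULL';

    EXEC sys.sp_executesql @sql, N'@CustomerID int', @CustomerID = @CustomerID;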

From Minutes to Milliseconds: High-Performance SSRS Tuning – Doug Lane – Even though you’re an experienced report developer or administrator, performance tuning for SQL Server Reporting Services still feels as bewildering and hopeless as folding a fitted bed sheet. You’ve made your data sets smaller and timeouts longer, but it’s not enough to remove the slowness dragging down your reporting environment. In this session, you’ll learn how design and configuration choices put pressure on your report server and techniques to relieve that pressure. You’ll see how to configure your Reporting Services databases for speed, streamline your subscription schedules, and use caching for high-demand reports. You’ll also learn some design strategies to lighten your report processing load. If you want to maximize the speed of your Reporting Services environment and minimize the pain of performance tuning, this session is for you.

Lightning Talk: Conquer CXPACKET and Master MAXDOP – Brent Ozar – CXPACKET waits don’t mean you should set MAXDOP = 1. Microsoft Certified Master Brent Ozar will boil it all down and simplify CXPACKET to show you the real problem – and what you should do about it – in one quick 10-minute Lightning Talk.

Why Does SQL Server Keep Asking For This Index? – Kendra Little – SQL Server says you’d really benefit from an index, but you’d like to know why. Kendra Little will give you scripts to find which queries are asking for a specific missing index. You’ll learn to predict how a new index will change read patterns on the table, whether you need to add the exact index SQL Server is requesting, and how to measure performance improvements from your index changes. If you’re comfortable querying SQL Server’s missing index DMVs and looking at an execution plan here and there, this session is for you.
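
If you haven’t poked at those DMVs before, this rough sketch shows the raw material the session starts from (the class scripts go further and tie each request back to the queries asking for it):

    -- SQL Server's running wish list of indexes it thinks it wants.
    SELECT  d.[statement] AS table_name,
            d.equality_columns, d.inequality_columns, d.included_columns,
            s.user_seeks, s.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;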

World’s Worst Performance Tuning Techniques – Kendra Little – Could one of your tricks for making queries faster be way off base? Kendra Little is a Microsoft Certified Master in SQL Server and a performance tuning consultant, which means she’s learned lots of lessons from her mistakes. In this session, you will learn how to stop obsessively updating or creating statistics, find alternatives to forcing an index, and deal with an addiction to ‘recompile’ hints.

Pre-Conference Session: psych! We didn’t get picked for an official pre-con this year (I know, right?) so we’re building our own lunar lander. Stay tuned – we’ll get things ironed out in the next week or two so you can make your official plans.

How to Pick a Monitoring Tool

Step 1: Make a list of 5 problems you’ve faced in the last couple of months that you needed alerting on. If you’ve got a help desk ticket system, look at the ticket types that occur most frequently and cause the most outage times.

For me as a DBA, that might be:

  • SQL Server service down
  • Deadlock occurs
  • A runaway query consumes high CPU
  • An Agent job is running more than 2x the time it usually takes to run, and it’s still going
  • Log shipping gets more than 15 minutes behind

Step 2: Set up a lab to repro those problems on demand. This is actually a great way to learn about these problems, by the way – the more you understand how to create these situations, the better you’ll be at detecting and reacting to them.
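
For example, a runaway CPU query is easy to manufacture on demand in the lab – something like this cross join will happily peg a core for quite a while (lab server only, obviously):

    -- Burns CPU by counting an enormous cross join of system objects.
    SELECT COUNT_BIG(*)
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b
    CROSS JOIN sys.all_objects AS c;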

Step 3: Download the eval editions of the tools you want. All monitoring software vendors give away short-term (10-15 day) versions. Install & configure them to monitor your test lab.

Step 4: Actively evaluate them. Build a spreadsheet with a column for each monitoring tool, and a group of rows for each failure scenario. For each problem, when you trigger it, document:

My favorite monitoring dashboard. I’m looking for four 7’s of uptime.

  • How long it takes to alert you about the correct underlying problem
  • How many false alarms you get (alarms that are unrelated to the real problem)
  • How intuitively obvious the real problem is when looking at the tool’s dashboard (are all the lights flashing red, or is there a single light flashing red exactly where the problem is?)

Step 5: Pick winners and negotiate price. Out of the tools you evaluated, pick at least two that you’re willing to live with. Call each of the vendors and say, “I did a tool evaluation, and it was a tie between you and ___. What’s your best price? I’m going to be asking the other guys too.”

They’re going to want to start talking about value differentiators, like how they’re so much better than the other company because they do ___. Doesn’t matter – you’ve already picked the two tools you’re willing to live with. Let them talk, listen fairly, and then repeat the question: what’s your best price?

You don’t have to pick the cheapest one – there may be one tool you like much more – but at least now you’ve gotten good prices on both, and you can make an informed decision.

Join me in my SQL Server Tools That Cost Money webcast to learn more about what tools are out there and how to evaluate them.

How to Get “Senior” in Your Title [Video]

In my blog post Sample SQL Server Database Administrator Job Descriptions, I explained how production DBAs are like fridge maintenance engineers, and development DBAs are like chefs.

In this 35-minute video, I explain how this plays into the concept of a “senior DBA” – on its own, the title means about as much as “senior fridge user.” There are a lot of kinds of database administrators, and they each have different job tasks.

In the video, I use this grid to lay out common job duties:

DBA Job Duties

Then in the video, I use a 10-question self-assessment test to check your skills across each job duty, figure out where your biggest strengths are, and guide your training to focus on one of the most common job descriptions.

For more videos like this:

How to Use Our HA/DR Planning Worksheets [Video]

You’re in charge of making sure the database never goes down, but your managers won’t give you any budget. You’re tired of feeling guilty, as if every outage is your fault – and now there’s help. Microsoft Certified Master Brent Ozar helps businesses understand that they have to pay the piper if they want high availability. He’ll share his simple two-page worksheet to get the business users on your side in this 30-minute webcast.

Get the worksheets here.

For more videos like this:
