You’ve been doing this database thing for a while, and you’re ready to get serious about it. What’s the next step?
Step 1: Define your specialty in one sentence.
If you say you do everything, you compete with everyone.
You want to be the only one they want. That means you’re:
- Actively sought-after
- Uniquely qualified
- Highly valuable for short bursts of time
- Respected for your opinion
- Worth more than your competitors (more on that later)
This sounds selfish, but remember – it’s not about you. It’s about your customers (whether they’re internal or external) and your ability to help them.
To pick your specialization, watch my webcast archive How to Get Senior In Your Title. I talk about the different types of DBAs and what they specialize in. Here’s one of the important slides from that session:
Most of you out in the crowd are going to say, “But I do all of these.” Sure you do – today. But we’re talking about where you want to be two years from now if you’re going to really stand out. Not only am I encouraging you to pick one of the columns, but I’m even encouraging you to focus on a specific horizontal row.
Examples of specialties include:
- “This server has to be reliable. We need AlwaysOn Availability Groups. I know just who to call.”
- “We need to manage thousands of servers with easier automation. I know the right person for the job.”
- “Our SQL Servers in VMware are just too slow, and nobody knows whose fault it is. I know who can tell.”
- “We need to offload our full text search, but we have no idea what to use. I know somebody who does.”
Notice that I’m phrasing these in a one-sentence pain point. You need to be known for resolving someone’s pain. This is the funny thing about business and consulting – you get paid the most to relieve urgent pain, not to provide keeping-the-lights-on maintenance.
The first step in your two-year plan is to write the one-sentence pain you want to resolve.
Step 2: Assess your current skills and your target skills.
Thinking about your one-sentence pain point:
- How many times have you relieved that pain?
- How many times have you failed to relieve it?
- When you hit an impasse, who did you escalate it to?
- Have you sketched out a process for diagnosing it? Has anyone?
- Have you documented the process for others to follow?
The more answers you have, and the more confident you are giving those answers aloud to someone else, the better your skills are. What, you expected a multiple-choice assessment test? Technology moves so fast that often the questions aren’t even right, let alone the answers.
Here’s a longer version of that assessment that I use for my own skills testing:
- I don’t know where the pain is coming from.
- I can identify the pain in clear terms.
- I know several possible root causes of the pain.
- I can identify exactly which one is at fault here.
- I know several ways to relieve that pain.
- I can identify exactly which one is right here.
- I’ve documented my triage process.
- I’ve hit situations where my process has been wrong, and I’ve learned from it.
From those levels, what level do you think you get paid for?
Surprise – it’s #1. You know plenty of people who are getting paid right now even though they have absolutely no idea where the pain is coming from. However, the higher your level, the easier it is to get paid more. (Don’t think that just because you’re on level 7, you’re making a bazillion dollars – there’s plenty of folks who aren’t great at negotiating their value, either.)
Figure out what level you’re at today, and get a rough idea of what level you want to be at in two years. Now let’s figure out how to get there.
Step 3: Build a 2-year learning plan to make that leap.
Divide the number of levels you want to jump by the amount of time you have. If you want to go up four levels, and you’ve got two years to do it, then you need to progress a level every 6 months.
This sounds really easy, but there’s a problem: you’re probably not repeatedly solving this pain point at your day job. You probably solve it every now and then, but not over and over in a way that helps you refine your technique.
That’s why a 2-year learning plan is really a 2-year sharing plan.
Nothing teaches you something like being forced to teach it to someone else. Heck, even building this blog post (and a presentation on it a few weeks ago) made me flesh out my own philosophies!
But to share, you have to get permission. Start by having this discussion with your manager:
Dear Manager – Recently, we ran into the problem of ____. To get relief, I did ____. Are you happy with that relief? If so, I’d like to talk about what I learned at my local SQL Server user group. I won’t mention our company name. Is that OK? Sincerely, Me
By having that discussion, you’re also making sure the manager is really satisfied with your pain relief efforts and that they saw value in your work. (After all, think of them as one of your first pain relief clients.)
Once you’ve got permission, here’s how you build the 2-year sharing plan: every level jump equals one user group presentation.
- Write the user group presentation agenda in 4-5 bullet points.
- Write a blog post about each bullet point. (The words in your blog post are what you’ll say out loud in your session – think about it as writing your script.)
- Build slides that help tell the story, but the slides are not the story. Don’t transcribe your blog posts word-for-word on the slide.
For example, if you need to hit the level “I know several ways to relieve that pain,” and your specialization is improving the performance of virtual SQL Servers, your user group session could be titled “5 Ways to Make Your Virtual SQL Server Faster.” You’d then write a blog post about each of the 5 ways. Presto, there’s your session resources.
At the end of your 2-year sharing plan, you’ve built up a solid repertoire of material, plus built up your own level of expertise. (You’ve also built up a little bit of a reputation – but more on that later.)
Step 4: Decide what lifestyle works best for you.
How much risk can you tolerate?
- Some. I could miss a couple of paychecks a year and manage my own benefits if I earned more.
- Lots. I’d be willing to go without income for a month or two per year if I could earn lots more.
- None. A very predictable salary and benefits are absolute requirements for me.
This determines whether you should be a full time employee, a long-term contractor that switches positions periodically, or a short-term consultant. In a nutshell, the differences are:
Consultants tell you what to do. They listen to your business problems, come up with solutions, and guide your staff on how to do it. They are typically short-term stints – a couple of days per month at a client, multiple clients at a time.
Contractors do what they’re told. They get a list of required solutions from the business and implement those solutions. They typically work long stints, showing up at the same client every day for months at a time, with only one live client relationship.
Full time employees do a mix of this. They come up with ideas, plus implement some of those ideas.
There’s no one answer that’s better for everyone. Heck, I’ve even changed my answer a few times over the last several years! It comes down to finding the right risk/reward balance for your own lifestyle needs, and then bringing the right customers in the door.
Step 5: Decide how you’ll market yourself.
Consultants sell advice, not manual labor, so they have many clients – which means doing a lot of sales.
Contractors sell labor, so they have fewer clients – which means less sales effort.
Full time employees (FTEs) only have one sales push every few years when they change jobs.
Our company is a good example of the work required to do marketing and sales when you want to scale beyond one or two people:
- We have tens of thousands of regular blog readers
- Thousands of them attend the weekly webcasts
- Hundreds of them email us per month asking for help
- A few dozen turn into serious sales opportunities
- Around a dozen will book consulting engagements with us
This funnel approach demonstrates inbound marketing – using lots of free material to get the word out about your services and inviting readers to contact you for personal help. It’s a lot of hard work – very hard work.
The other approach is outbound marketing – cold calls to strangers asking if they’ve got your specialized pain point, and then trying to convince them that you’re the right person to bring pain relief. (You can kinda guess how I feel about outbound marketing.) Sure, it sounds slimy – but the takeaway is that it’s hard work, and every bit as hard as doing inbound marketing.
But only one of those options polishes your skills.
Inbound marketing is a rare two-for-one in life – it’s both your 2-year sharing plan, and your 2-year marketing plan. You don’t have much spare time, so you need every bit of it to count. Choose inbound marketing, do your learning and sharing in public, and you’ll write your own ticket.
Presto – You’re two years away from success.
No matter what pain you want to solve, how you want to solve it, or how you want to get paid for it, this simple plan will have you on the road to success. Now get started on writing down that one-sentence pain point!
kCura Relativity is an e-discovery product that law firms use to search huge volumes of documents for evidence that helps their case. To get you up to speed on what it does, here are some of the posts I’ve written about Relativity in the past:
- Performance Tuning kCura Relativity – explains what the product is for and how DBAs can help make it faster
- Tiering kCura Relativity Databases – how to manage hundreds or thousands of Relativity databases
- Using Partitioning to Make Relativity Faster – when you have a 1TB+ workspace, this technique makes backups and maintenance easier
Today, I’m going to talk about the database mechanics – where data lives, and which tables you need to care about.
Relativity’s EDDS* Databases
The EDDS database stores Relativity’s system-wide configuration. (Just plain EDDS, no numbers or letters after it.) All of the users and processes will hit this database at various points of their work.
The EDDSResource database is like Relativity’s TempDB. I’m a huge, huge fan of this approach – this lets DBAs tune the EDDSResource independently from TempDB.
Each of the EDDS12345 databases (with a bunch of different numbers) is a legal matter, or in Relativity terms, a workspace. Think lawsuit or case, basically. As your lawyers take on new cases, each one will get its own new EDDS database.
You may also have an EDDSPerformance database, which houses kCura Performance Dashboard - a product that gathers performance metrics about your environment.
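If you’re inheriting a Relativity server and want a quick inventory, here’s a minimal sketch – it leans purely on the EDDS naming convention described above, so treat the role labels as a guess, not gospel:

```sql
-- Rough inventory of Relativity databases on an instance,
-- based purely on the EDDS* naming convention described above.
SELECT name,
       CASE
           WHEN name = 'EDDS'            THEN 'System-wide configuration'
           WHEN name = 'EDDSResource'    THEN 'Temporary scratch space'
           WHEN name = 'EDDSPerformance' THEN 'Performance Dashboard'
           WHEN name LIKE 'EDDS%'        THEN 'Workspace (legal matter)'
       END AS likely_role
FROM sys.databases
WHERE name LIKE 'EDDS%'
ORDER BY name;
```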
Distributed Environments: Spreading the Load Between SQL Servers
When you first get started, all of these databases could be on the same server. In the e-discovery business, though, growth happens really fast. Right from the get-go, you probably want to plan to separate them onto multiple SQL Server instances – in Relativity terms, a distributed environment with a couple of different SQL Servers:
We’ve got two SQL Servers, each with a couple of workspaces. (Obviously, Relativity scales WAY bigger than two workspaces on a single server, but I only wanna make these images so big, people.)
The EDDS database – the central config data – only lives on SQL1.
Both servers have their own EDDSResource database, and that’s for temporary scratch space.
But two standalone SQL Servers would be an insanely bad idea because if either server goes down, you’re screwed. Instead, you want to build a failover cluster of SQL Server instances, each instance living on a different physical box:
The databases live on shared storage, so if either box dies, the SQL Server instance can start up on the other box. Of course, this means you’ll have twice as many workspaces living on the same hardware – not a recipe for high performance. You can mitigate that by buying a separate passive node. I’m not going into the intricacies of failover clustering here – for that, see our clustering resources.
How the EDDS Databases Affect Cluster Design
The EDDS database will consume performance resources. As your distributed environment grows larger and larger, the load on the EDDS database will increase. If other workspaces are sharing that instance, they may be bullied around by the EDDS database. In very large environments, the EDDS database may grow to the point where it needs its own SQL Server instance – or rather, you just don’t want to put any workspaces on that instance.
Any one workspace is confined to one database server. If you have a massive case going on with tens of terabytes of data, the load isn’t spread across the servers – one database still lives on just one host. Technically, SQL Server AlwaysOn Availability Groups lets you spread load across multiple servers, and technically Relativity 8.2 supports AlwaysOn Availability Groups, but only for failover purposes – not for spreading load.
You can move databases between servers to balance load, but it requires some downtime and work on the application side. You can’t just back up a workspace on one server, restore it onto another SQL Server, and take off. kCura has administration tools to help with this task, but it’s up to you to figure out which databases should be on which servers. This is where the concept of tiered workspaces comes in.
How to Performance Tune the EDDS Database
Because this database holds configuration data, and because the queries that hit here aren’t usually user-created, you generally don’t want to touch this database.
A few weeks after deploying a new release of Relativity, I recommend checking SQL Server’s index usage DMVs and plan cache to find out if there are any new problems that pop up. There may be a new query that needs a new index, or a unique way of doing a query in your environment that hasn’t been seen out in the wild yet.
When issues like that pop up, start by opening a support case with kCura rather than making index changes in here. In your support case, include the query text from the plan cache (if applicable) and evidence from your index DMVs to support the index change you want to make. You could actually make index changes inside this database, but generally speaking, that’s not a good first step. Let the Relativity support folks make the call there because any index changes here can dramatically affect all workspaces.
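As a sketch of the kind of evidence to attach, here’s one way to pull SQL Server’s missing index suggestions out of the DMVs – the sort order is just my assumption, so sanity-check the output before sending it along:

```sql
-- Evidence for a kCura support case: SQL Server's own missing
-- index suggestions for the EDDS configuration database.
USE EDDS;

SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON g.index_group_handle = s.group_handle
WHERE d.database_id = DB_ID()
ORDER BY s.user_seeks * s.avg_user_impact DESC;
```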
How to Tune the EDDSResource Databases
You don’t. It’s a temporary staging ground. You can skip this guy under most circumstances.
How to Tune the EDDSPerformance Database
Performance Dashboard is a relatively new product – at least compared to Relativity. Early versions of it desperately needed a few indexes, so tools like sp_BlitzIndex® pay off big time here. I highly recommend checking missing indexes in this database, but after a couple of low-hanging-fruit indexes are applied, this database won’t be a performance issue.
Before making changes here, again, start a support case with the changes you’d like to make. In most cases, this is just a simple known issue and easy to fix.
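To see what you’d be asking them about, here’s a one-line sketch using sp_BlitzIndex® (assuming the default EDDSPerformance database name):

```sql
-- Surface missing-index candidates in the Performance Dashboard
-- database; include the findings in your support case.
EXEC dbo.sp_BlitzIndex @DatabaseName = 'EDDSPerformance';
```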
How to Tune the Workspaces (EDDS* Databases)
Ah, now here’s the fun stuff.
Expanding on the process I discussed in my Performance Tuning kCura Relativity post, every database is a new case that goes through a lifecycle:
- The database is created. It’s technically restored from a “template”, a database that the users have set up as their standard starting point. Larger Relativity shops may have several templates to choose from.
- Some documents are loaded. We need to load data into the database server as quickly as possible.
- Users review the documents. They run searches looking for terms and attributes that might indicate evidence that would bolster their case. As they review documents, they make small edits to the metadata fields at the document level, like marking whether the document has been reviewed, who reviewed it, and whether or not it was interesting. We need to audit everything the users do (as well as things the system does, too.)
- We go back to step 2 a few times. More documents get loaded, and users run more searches. This cycle continues for some time, until the flow of new documents trickles to a halt, but searches still continue for a while.
- The case becomes dormant. Legal matters can drag on for years, but we may need to keep this database online the whole time. The volume of changes drops dramatically – sometimes with no data changes for months or years – but the database has to be online, and it has to be backed up.
Some of the major tables in each EDDS* database include (and remember, lawyers, I’m simplifying this for the DBAs in the house):
- Document, File – things we loaded into Relativity to search through, like Excel files and Outlook PSTs
- AuditRecord_PrimaryPartition – a log of most Relativity events, like document loads or end user searches (when this is a problem, start by partitioning it out)
- Artifact – think of this like a system table for Relativity that lists every Document, plus other system objects
- CodeArtifact – prior to Relativity v8, this one table stored records for all choices for every Document. (Think multiple-choice fields, like what kind of file type it was.) This had scalability limits because it had multiple times more rows than the Document table, and query plans could get ugly. This was changed in Relativity v8, but I’m mentioning it here in case any of you out there are still on 7.5. (Get on this level.)
Index tuning isn’t necessary on most of these tables because the queries that hit these tables are all managed by Relativity itself. The kCura developers sit around the office trying to figure out how to make those queries go faster, and they come up with some pretty good ideas. (Well, they also surf my site when they’re bored. Did I mention that they’re attractive people?) You shouldn’t need to touch indexes or queries here, other than the same every-new-version check that I described about EDDS.
Except for the Document table.
Oh, boy, the Document table.
Why the Document Table is Fun to Tune
Relativity lets end users write whatever crazy searches they want against the Document table. Wanna find every email with “S” in the email address? You got it. Need to see every PowerPoint created in June last year? Can do. Interested in every file whose extension is MP4 but the data is actually a PowerPoint slide deck and has hidden slides? No problem. You can build these searches in a GUI without understanding anything whatsoever about how SQL Server works, and Relativity will build the T-SQL for you.
To make matters even more fun (HA! see what I did there, lawyers? “matters”, oh, I kill me), the end users can add new fields to the Document table any time they want. If they want to add a new decimal field called LooksSuspicious, it happens with no DBA intervention or change request. Relativity generates the ALTER TABLE commands on the fly, and then users can populate that field and run searches against it.
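Purely for illustration – the actual T-SQL Relativity generates will differ, and the EmailFrom column here is a made-up example – the effect is roughly this:

```sql
-- Hypothetical example of what Relativity does behind the scenes.
-- A user adds a new decimal field in the GUI...
ALTER TABLE dbo.Document ADD LooksSuspicious decimal(18, 2) NULL;

-- ...and then builds a search against it, with zero DBA involvement.
SELECT ArtifactID
FROM dbo.Document
WHERE LooksSuspicious >= 1.0
  AND EmailFrom LIKE '%S%';  -- EmailFrom is a made-up field name
```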
Index tuning becomes really challenging because we may never be done loading documents. To load documents, we want as few indexes as possible for faster insert speeds. To search for documents, we want lots of indexes so we don’t have to scan the Document table. As DBAs, we’d like to ask the users, “Are you done loading now? Because I can add indexes to make this go fast.” The answer with Relativity may always be, “No, I might load some more tomorrow.”
And what I find the most interesting is that every EDDS* database can be wildly different. Every team that’s involved with every legal matter may have totally different approaches to loading, searching, and managing their documents. That means you have to treat every EDDS* database as its own unique indexing challenge.
At any given time, you might have a hundred EDDS* databases, each for a different legal matter, each with its own Document table – and each with a different number of fields and indexes.
You can’t conquer each of these databases individually. You simply have to use my tiered workspace approach, define the small databases that will work just fine on their own, and go tackle the largest and most active databases with traditional index performance tuning methods.
We’re really excited to share our new classes, plus give you a discount to celebrate.
First, we’ve added a new video class - How to Read Execution Plans with Jeremiah. You’re comfortable writing queries, but some of them are slow – and you need more ways to tune than just adding indexes. You’ve heard that you should read execution plans, but you don’t know where to start. Learn more about reading execution plans.
Next up, our 2015 in-person training class lineup. Our students told us they loved our 2014 classes, but they wanted more of everything. We’ve lengthened the classes – we took the performance one from 3 days to 4, and added a couple of new 5-day classes:
Advanced Querying and Indexing: 5-day in-person class. Do you need to learn to write the fastest queries possible for SQL Server? In 2015, join us for five days of advanced T-SQL query and index optimization. Join us in Chicago or Portland.
SQL Server Performance Troubleshooting: 4-day in-person class. You need to speed up a database server that you don’t fully understand – but that’s about to change in four days of learning and fun in Chicago, Denver, and Portland.
Senior DBA Class of 2015: 5-day in-person class. You’re a SQL Server DBA who is ready to advance to the next level in your career but aren’t sure how to fully master your environment and drive the right architectural changes. That’s about to change in one week of learning and adventure in Chicago and Denver.
Some of our students (especially the consultants) told us they wanted to really go in-depth and take two weeks of classes back-to-back. To make that easier, we lined up our classes and put them in some of our favorite cities, at the best times to spend a weekend between classes:
Denver in February (hey, it’s ski season!):
- February 2-6 – Senior DBA Class of 2015
- February 9-12 – SQL Server Performance Troubleshooting
Chicago in May (best time to visit our fair city):
- May 4-8 – Advanced Querying and Indexing
- May 11-14 – SQL Server Performance Troubleshooting
Portland in August (Oregon summers are beautiful):
- Aug 3-7 – Advanced Querying and Indexing
- Aug 10-13 – SQL Server Performance Troubleshooting
Chicago in September (not too hot, not too cold):
- Sept 14-18 – Senior DBA Class of 2015
- Sept 21-24 – SQL Server Performance Troubleshooting
Check out our full catalog of in-person training events and online training videos - and all of our videos & classes, not just the new classes, are 30% off with coupon Launch2015 in July. Come join us!
When you weren’t looking, your databases went and grew up. Now your backup window has grown so large that you’re about ready to open it and jump.
Time to make a choice.
The Native Way: Tuning SQL Server Backups
You can theoretically pull this off by using a combination of tactics:
Back up as infrequently as the business will allow. Run your full backups once a week (or if you want to go wild and crazy, once per month) and differential backups periodically. As Jes explains in her backup and recovery class, differentials back up the data pages that have changed since the last full backup. When disaster strikes, you only need to recover the most recent full backup, the most recent differential backup, and all of the log backups after the differential. This can shave a lot of time off your restores – but only if you minimize the number of changed pages in the database. This means…
Change the database as little as possible. We can’t change what the users do, but we can change what we DBAs do. Stop doing daily index defrag/rebuild jobs – you’re just changing pages in the database, which means instantly inflating the size of your differential backups. In a scenario like this, you can only do index maintenance when you’re sure it is the only way to solve a performance problem, and it absolutely has to be your last resort.
Tune the data file read speeds. You need to read the pages off disk as fast as possible to back them up. Use tools like CrystalDiskMark and SQLIO to measure how fast you’re going, and then tune your storage to go faster.
Compress the data as much as possible. It’s not just about minimizing the size of your backup file – it’s about minimizing the amount of data we have to write to disk. Bonus points for using index compression inside the database so that it’s compressed once, not recompressed every time we do a backup, although that doesn’t really help with off-row data.
Tune the backup target write speeds. If you’re using a small pool of SATA drives in RAID 5 as a backup target, it’s probably not going to be able to keep up with a giant volume of streaming writes, even if those writes are compressed versions of the database. Problems will get even worse if multiple servers are backing up to the same RAID 5 pool simultaneously because the writes will turn random, which is the worst case scenario for RAID 5.
Tune the bottleneck between the reads and the writes. If you’re backing up over the network, use 10Gb Ethernet to avoid the pains of trying to push a lot of data through a tiny 1Gb straw.
Tune your backup software settings. If you’re using native backups, start with using multiple files and the built-in options, and graph your results. Third party compression products usually offer all kinds of knobs to tweak – you’ll need to use that same level of graphing diligence. (See the sketch after this list.)
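Here’s a minimal sketch of several of those tactics in plain T-SQL – the database name, file paths, and schedule are all placeholders you’d fit to your own RPO/RTO:

```sql
-- Weekly full, nightly differential, frequent log backups:
-- compressed, checksummed, and striped across files for throughput.
-- Database name, paths, and cadence are placeholders.

-- Sunday: full backup, striped across two files on separate volumes
BACKUP DATABASE BigDB
TO DISK = N'X:\Backup\BigDB_Full_1.bak',
   DISK = N'Y:\Backup\BigDB_Full_2.bak'
WITH COMPRESSION, CHECKSUM, STATS = 5;

-- Weeknights: differential – just the pages changed since the full
BACKUP DATABASE BigDB
TO DISK = N'X:\Backup\BigDB_Diff.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM;

-- Every few minutes: transaction log backups
BACKUP LOG BigDB
TO DISK = N'X:\Backup\BigDB_Log.trn'
WITH COMPRESSION, CHECKSUM;
```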
Whew. I got tired just typing all that stuff. And if you’re lucky, at the end of it, your backups will complete in an hour or two, but the server might be darned near unusable while you’re beating the daylights out of it. Then the fun balancing act starts, trying to figure out the right point where the system is still usable but the backups complete quickly.
Or Just Cheat with SAN Snapshots.
In my Virtualization, SAN, and Hardware video class, I explain how SAN snapshots are able to take a full database backup of any size in just a couple of seconds.
See, while it’s technically a backup, I don’t really consider it a backup until it’s off the primary storage device. Your SAN storage, expensive as it was, is still vulnerable to failure, and you need to get that data out as quickly as possible. The good news is that you can move that data out without dragging it through the SQL Server’s storage connections, CPU, and network ports. You can simply (simply?) hook a virtual tape library, actual tape library, or another storage device to the same storage network, and copy directly between the two.
Your data read speeds may degrade during that process, but that’s up to you – if you want to architect your storage so that it’s fast enough to do these full backups without any noticeable performance impact to end users, it’s possible by inserting enough quarters in the front.
You still have to pay attention, though, because your backup process will look like this:
- Daily full backups via SAN snapshots – all writes are quiesced for 1-10 seconds during this time
- Conventional log backups every X minutes – where X is dictated by the business
If you push a big index rebuild job through, you can still bloat the transaction log, and your log backups may take longer than X minutes to complete. This is where our RPO/RTO planning worksheet is so important – if your RPO is 1 minute, you simply may not be able to do index rebuild jobs.
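One way to catch that is a sketch against msdb’s backup history – the RPO value here is a placeholder, and LAG requires SQL Server 2012 or newer:

```sql
-- Find log backups whose gap from the previous one blew the RPO.
DECLARE @RPOMinutes int = 15;  -- placeholder: use your worksheet's number

WITH log_gaps AS (
    SELECT database_name,
           backup_finish_date,
           DATEDIFF(MINUTE,
                    LAG(backup_finish_date) OVER (PARTITION BY database_name
                                                  ORDER BY backup_finish_date),
                    backup_finish_date) AS gap_minutes
    FROM msdb.dbo.backupset
    WHERE type = 'L'   -- log backups only
      AND backup_finish_date > DATEADD(DAY, -7, GETDATE())
)
SELECT database_name, backup_finish_date, gap_minutes
FROM log_gaps
WHERE gap_minutes > @RPOMinutes
ORDER BY gap_minutes DESC;
```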
SAN snapshots have one other drawback: depending on your storage make/model, snapshots may not be included in your licensing pricing. You may have to spend a lot more (typically tens of thousands of dollars) to unlock the feature. Ask your SAN admin if snapshots are right for your wallet.
Building Terabyte Servers Means Starting with Backups First
When I’m building a SQL Server to hold multiple terabytes of databases, this backup question is the very first one we have to address – even before we talk about the speed of end user queries.
Otherwise, we could end up designing a server with all local solid state drives, which is very inexpensive and satisfies end user performance goals – but we can’t back the data up fast enough.
When we finish our courses, we ask attendees what they thought. Here’s what they said about our How to Be a Senior DBA class in Chicago:
“I’ve attended many trainings. This is the most valuable I’ve ever attended.” – Tim Costello, Consultant
“ROI for training is always a gamble. However, I felt like I received my money’s worth before lunch on the first day. Outstanding!” – Zach Eagle, DBA
“The single best database class I’ve ever attended. I feel challenged to step up not only my DBA game, but my presentation & training game as well.” – Ben Bausili, Consultant
“I would tell people that this is the perfect course to take if you want to gain control of your environment and impress management.” – George Larkin, DBA
“This course is very in-depth and helpful to get information on the things you need to do in order to take your DBA skills to the next level. The course is a bit overwhelming at times, but you need to have all of this detail. Thanks to Brent and his team for answering any questions that came up and I would definitely recommend this course to others.” – Mike Hewitt, Software Engineer/DBA
“Outstanding training! I knew very little about SANs & SSD. I am now confident that I can troubleshoot on a higher level to determine if storage is ever our problem. Thanks Ozar group! You guys rock!” – Judy Beam, DBA
“Excellent format, content and team of presenters. More like a conversation between peers than a boring lecture. Fun and very meaningful. You have a great team!” – Greg Noel, COO/CIO
“I came into the class with unfairly high expectations; Brent and Kendra somehow managed to exceed them. Thank you guys!” – Ben Wyatt, DBA/Consultant
“How to be a Senior DBA is a consistently delivered training regimen that gives you useful tools and knowledge in a digestible and fun format.” – Luther Rochester, DBA
“Brent & the gang are approachable and easy to talk to. They break down complex subject matter to easier to understand presentations.” – Kevin Murphy, Data Architect
“Far and away better than the MS courses. The Ozar team knows their stuff and doesn’t pretend to know while they Google the answer on a break.” – Jon Worthy, IT
“This class is a must for every DBA. All modules are well prepared & presented by two very well-respected SQL gurus in the community. All questions & demos are answered & illustrated clearly. I’ve learned a lot!” – Chai W., DBA
What will you say? Sign up now for the next one in Philadelphia this September or our Make SQL Server Apps Go Faster class in Seattle.
News broke recently of a dangerous data loss bug in SQL Server 2012 and 2014, and Aaron Bertrand explained which patch levels are affected. It’s kinda tricky, and I’m afraid most people aren’t even going to know about the bug – let alone whether or not they’re on a bad version.
I added build number checking into the latest version of sp_Blitz® so that you can just run it. If your build has the dangerous data loss bug, you’ll get a priority 20 warning about a dangerous build. If you’re running an out-of-support version, like SQL 2012 RTM with no service packs, you’ll get a priority 20 warning about that as well.
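Running it takes nothing fancy – just execute the proc. The @SummaryMode parameter shown here is the one added in v34, described in the change list below:

```sql
-- Full results, including the new dangerous-build warnings:
EXEC dbo.sp_Blitz;

-- Or condensed output, one result per finding:
EXEC dbo.sp_Blitz @SummaryMode = 1;
```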
Other changes in the last couple of versions:
Changes in v35 – June 18, 2014:
- John Hill fixed a bug in check 134 looking for deadlocks.
- Robert Virag improved check 19 looking for replication subscribers.
- Russell Hart improved check 34 to avoid blocking during restores.
- Added check 126 for priority boost enabled. It was always in the non-default configurations check, but this one is so bad we called it out.
- Added checks 128 and 129 for unsupported builds of SQL Server.
- Added check 127 for unnecessary backups of ReportServerTempDB.
- Changed fill factor threshold to <80% to match sp_BlitzIndex.
Changes in v34 – April 2, 2014:
- Jason Pritchard fixed a bug in the plan cache analysis that did not return results when analyzing for high logical reads.
- Kirby Richter @SqlKirby fixed a bug in check 75 (t-log sizes) that failed on really big transaction log files. (Not even gonna say how big.)
- Oleg Ivashov improved check 94 (jobs without failure emails) to exclude SSRS jobs.
- Added @SummaryMode parameter to return only one result set per finding.
- Added check 124 for Performance: Deadlocks Happening Daily. Looks for more than 10 deadlocks per day.
- Moved check 121 for Performance: Serializable Locking to a lower priority (down to 100 from 10), and made it trigger only when more than 10 minutes of that wait have accumulated since startup.
- Changed checks 107-109 for Poison Waits to have higher thresholds, now looking at more than 5 seconds per hour of server uptime. Been up for 10 hours, we look for 50 seconds, that kind of thing.
We started with our top-selling PASS Summit pre-con, and then doubled it to two days long!
You’re a developer or DBA stuck with a database server that’s not going fast enough. You’ve got a hunch that the biggest bottleneck is inside the database server somewhere, but where? In just two days, you’ll learn how to use powerful scripts to identify which queries are killing your server, what parts of the database server are holding you back, how to tackle indexing improvements, and how to identify query anti-patterns.
On Monday and Tuesday before the PASS Summit conference, we’ll dive deeply into:
- How wait stats tell you where to focus your tuning
- How the plan cache shows you which queries are the worst
- How to make fast improvements by picking the right indexes
- How to identify and fix the most common query anti-patterns
- And you’ll even get execution plan, query, and index challenges to solve during the class
You’ll be plenty comfortable because we’re holding this independent event in the Big Picture Theater in downtown Seattle, one mile from the Convention Center. Coffee, beverages, snacks, and lunch are all covered. (Sorry, no popcorn – you know we wouldn’t be able to stop eating it once we get started.)
Registration is $695, and for another $200, we’ll throw in our Developer’s Guide to SQL Server Performance video class. This way, you can get started learning right away without waiting for the class!
Act fast, save $100 - if you register during July with coupon code GOFAST, registration is only $595.
Space is limited to just 100 seats – way fewer folks than we had last year, so you’d better move fast if you want to get in. This is a totally independent training event run by Brent Ozar Unlimited – you can’t get in with your PASS Summit ticket, and you can’t register for it as part of the PASS check-out process.
You manage SQL Server databases, but you never get the chance to take time out of your busy day to test your backups. You assume that just because the jobs are succeeding, that you’ll be able to restore your databases when disaster strikes. Join Brent Ozar as he walks you through several queries of your MSDB backup history tables, checks your RPO and RTO live, and helps you build a recovery strategy for your production databases in this one-hour video.
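As a taste of the queries in the video, here’s a minimal sketch against the msdb history tables – the exposure math is simplified, so treat it as a starting point:

```sql
-- How long since each database's last full backup?
-- Big gaps mean big data-loss exposure if disaster strikes now.
SELECT d.name AS database_name,
       MAX(b.backup_finish_date) AS last_full_backup,
       DATEDIFF(HOUR, MAX(b.backup_finish_date), GETDATE()) AS hours_exposed
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
   AND b.type = 'D'            -- 'D' = full database backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
ORDER BY last_full_backup;     -- never-backed-up databases sort first
```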
For more videos like this:
The session list has been published, and we’re excited to say all of us have been selected to speak at the PASS Summit again this year. Here are our sessions:
Are Your Indexes Hurting You or Helping You? – Jes Schultz Borland – Queries need your help! Your mission, should you choose to accept it, is to make great decisions about what indexes are best for your workload. In this session, we’ll review the difference between clustered and nonclustered indexes, show when to use included columns, understand what sargability means, and introduce statistics. You’ll leave this session with the ability to confidently determine why, or why not, SQL Server uses your indexes when executing queries.
Developers: Who Needs a DBA? – Brent Ozar – You store data in SQL Server, but you don’t have enough work to keep a full-time DBA busy. In just one session, you’ll learn the basics of performance troubleshooting, backup, index tuning, and security. Brent Ozar, recovering developer, will teach you the basic care and feeding of a Microsoft SQL Server 2005, 2008, 2012, or 2014 instance and give you scripts to keep you out of trouble.
Dynamic SQL: Build Fast, Flexible Queries – Jeremiah Peschka – Dynamic SQL is a misunderstood and much maligned part of a DBA’s tool kit – it can be used to solve difficult business problems, respond to diverse data needs, and alleviate performance problems. Many DBAs reject dynamic SQL outright as a potential source of SQL injections, being poorly performing, or just for being a hacky solution in general. Not so! Jeremiah Peschka has been making extensive use of dynamic SQL throughout his career to solve a variety of problems. In this session, we’ll be dispelling these misconceptions and demonstrating how dynamic SQL can become a part of every DBA’s tool kit.
From Minutes to Milliseconds: High-Performance SSRS Tuning – Doug Lane – Even though you’re an experienced report developer or administrator, performance tuning for SQL Server Reporting Services still feels as bewildering and hopeless as folding a fitted bed sheet. You’ve made your data sets smaller and timeouts longer, but it’s not enough to remove the slowness dragging down your reporting environment. In this session, you’ll learn how design and configuration choices put pressure on your report server and techniques to relieve that pressure. You’ll see how to configure your Reporting Services databases for speed, streamline your subscription schedules, and use caching for high-demand reports. You’ll also learn some design strategies to lighten your report processing load. If you want to maximize the speed of your Reporting Services environment and minimize the pain of performance tuning, this session is for you.
Lightning Talk: Conquer CXPACKET and Master MAXDOP – Brent Ozar – CXPACKET waits don’t mean you should set MAXDOP = 1. Microsoft Certified Master Brent Ozar will boil it all down and simplify CXPACKET to show you the real problem – and what you should do about it – in one quick 10-minute Lightning Talk.
Why Does SQL Server Keep Asking For This Index? – Kendra Little – SQL Server says you’d really benefit from an index, but you’d like to know why. Kendra Little will give you scripts to find which queries are asking for a specific missing index. You’ll learn to predict how a new index will change read patterns on the table, whether you need to add the exact index SQL Server is requesting, and how to measure performance improvements from your index changes. If you’re comfortable querying SQL Server’s missing index DMVs and looking at an execution plan here and there, this session is for you.
World’s Worst Performance Tuning Techniques – Kendra Little – Could one of your tricks for making queries faster be way off base? Kendra Little is a Microsoft Certified Master in SQL Server and a performance tuning consultant, which means she’s learned lots of lessons from her mistakes. In this session, you will learn how to stop obsessively updating or creating statistics, find alternatives to forcing an index, and deal with an addiction to ‘recompile’ hints.
Pre-Conference Session: psych! We didn’t get picked for an official pre-con this year (I know, right?) so we’re building our own lunar lander. Stay tuned – we’ll get things ironed out in the next week or two so you can make your official plans.
Step 1: Make a list of 5 problems you’ve faced in the last couple of months that you needed alerting on. If you’ve got a help desk ticket system, look at the ticket types that occur most frequently and cause the most outage times.
For me as a DBA, that might be:
- SQL Server service down
- Deadlock occurs
- A runaway query consumes high CPU
- An Agent job is running more than 2x the time it usually takes to run, and it’s still going (see the sketch after this list)
- Log shipping gets more than 15 minutes behind
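For that Agent job scenario, here’s a rough sketch of the detection query I’d want a monitoring tool to replicate – the runtime math is approximate (run_duration is stored as HHMMSS), and it assumes your msdb job history is intact:

```sql
-- Find Agent jobs currently running more than 2x their average runtime.
WITH history AS (
    SELECT job_id,
           AVG(run_duration / 10000 * 3600          -- hours   -> seconds
               + run_duration / 100 % 100 * 60      -- minutes -> seconds
               + run_duration % 100) AS avg_seconds -- seconds
    FROM msdb.dbo.sysjobhistory
    WHERE step_id = 0                               -- job outcome rows only
    GROUP BY job_id
)
SELECT j.name AS job_name,
       a.start_execution_date,
       DATEDIFF(SECOND, a.start_execution_date, GETDATE()) AS running_seconds,
       h.avg_seconds
FROM msdb.dbo.sysjobactivity AS a
JOIN msdb.dbo.sysjobs AS j ON j.job_id = a.job_id
JOIN history AS h ON h.job_id = a.job_id
WHERE a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL                 -- still running
  AND DATEDIFF(SECOND, a.start_execution_date, GETDATE()) > 2 * h.avg_seconds;
```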
Step 2: Set up a lab to repro those problems on demand. This is actually a great way to learn about these problems, by the way – the more you understand how to create these situations, the better you’ll be at detecting and reacting to them.
Step 3: Download the eval editions of the tools you want. All monitoring software vendors give away short-term (10-15 day) versions. Install & configure them to monitor your test lab.
Step 4: Actively evaluate them. Build a spreadsheet with a column for each monitoring tool, and a group of rows for each failure scenario. For each problem, when you trigger it, document:
- How long it takes to alert you about the correct underlying problem
- How many false alarms you get (alarms that are unrelated to the real problem)
- How intuitively obvious the real problem is when looking at the tool’s dashboard (are all the lights flashing red, or is there a single light flashing red exactly where the problem is?)
Step 5: Pick winners and negotiate price. Out of the tools you evaluated, pick at least two that you’re willing to live with. Call each of the vendors and say, “I did a tool evaluation, and it was a tie between you and ___. What’s your best price? I’m going to be asking the other guys too.”
They’re going to want to start talking about value differentiators, like how they’re so much better than the other company because they do ___. Doesn’t matter – you’ve already picked the two tools you’re willing to live with. Let them talk, listen fairly, and then repeat the question: what’s your best price?
You don’t have to pick the cheapest one – there may be one tool you like much more – but at least now you’ve gotten good prices on both, and you can make an informed decision.
Join me in my SQL Server Tools That Cost Money webcast to learn more about what tools are out there and how to evaluate them.