It has come to my attention that some people bought training from us last week and they paid full price.
While that’s cool – we really do appreciate your generosity – you can do better.
During the month of July, use the coupon code Launch2015 to save 30% off any of our training, including:
- Make SQL Apps Go Faster – our pre-PASS class in Seattle, an incredible value for 2 days of our training – just $416.50 with the coupon
- How to Read Execution Plans – Jeremiah’s awesome new 5+ hour video class – a killer launch price at $209.30
- SQL Server Performance Troubleshooting 2015 – the expanded 4-day version of this year’s 3-day class
So in the 3 days remaining to use the coupon code, be smart: shop for our training videos & classes, but don’t pay full price. And use the savings to buy yourself a little something. You’re welcome.
You’ve been doing this database thing for a while, and you’re ready to get serious about it. What’s the next step?
Step 1: Define your specialty in one sentence.
If you say you do everything, you compete with everyone.
You want to be the only one they want. That means you’re:
- Actively sought-after
- Uniquely qualified
- A very high value for short bursts of time
- Respected for your opinion
- Worth more than your competitors (more on that later)
This sounds selfish, but remember – it’s not about you. It’s about your customers (whether they’re internal or external) and your ability to help them.
To pick your specialization, watch my webcast archive How to Get Senior In Your Title. I talk about the different types of DBAs and what they specialize in. Here’s one of the important slides from that session:
Most of you out in the crowd are going to say, “But I do all of these.” Sure you do – today. But we’re talking about where you want to be two years from now if you’re going to really stand out. Not only am I encouraging you to pick one of the columns, but I’m even encouraging you to focus on a specific horizontal row.
Examples of specialties include:
- “This server has to be reliable. We need AlwaysOn Availability Groups. I know just who to call.”
- “We need to manage thousands of servers with easier automation. I know the right person for the job.”
- “Our SQL Servers in VMware are just too slow, and nobody knows whose fault it is. I know who can tell.”
- “We need to offload our full text search, but we have no idea what to use. I know somebody who does.”
Notice that I’m phrasing each of these as a one-sentence pain point. You need to be known for resolving someone’s pain. This is the funny thing about business and consulting – you get paid the most to relieve urgent pain, not to provide keeping-the-lights-on maintenance.
The first step in your two-year plan is to write the one-sentence pain you want to resolve.
Step 2: Assess your current skills and your target skills.
Thinking about your one-sentence pain point:
- How many times have you relieved that pain?
- How many times have you failed to relieve it?
- When you hit an impasse, who did you escalate it to?
- Have you sketched out a process for diagnosing it? Has anyone?
- Have you documented the process for others to follow?
The more answers you have, and the more confident you are giving those answers aloud to someone else, the better your skills are. What, you expected a multiple-choice assessment test? Technology moves so fast that often the questions aren’t even right, let alone the answers.
Here’s a longer version of that assessment that I use for my own skills testing:
- I don’t know where the pain is coming from.
- I can identify the pain in clear terms.
- I know several possible root causes of the pain.
- I can identify exactly which one is at fault here.
- I know several ways to relieve that pain.
- I can identify exactly which one is right here.
- I’ve documented my triage process.
- I’ve hit situations where my process has been wrong, and I’ve learned from it.
Of those levels, which one do you think you get paid for?
Surprise – it’s #1. You know plenty of people who are getting paid right now even though they have absolutely no idea where the pain is coming from. However, the higher your level, the easier it is to get paid more. (Don’t think that just because you’re on level 7, you’re making a bazillion dollars – there are plenty of folks who aren’t great at negotiating their value, either.)
Figure out what level you’re at today, and get a rough idea of what level you want to be at in two years. Now let’s figure out how to get there.
Step 3: Build a 2-year learning plan to make that leap.
Divide the number of levels you want to jump by the amount of time you have. If you want to go up four levels, and you’ve got two years to do it, then you need to progress a level every 6 months.
This sounds really easy, but there’s a problem: you’re probably not repeatedly solving this pain point at your day job. You probably solve it every now and then, but not over and over in a way that helps you refine your technique.
That’s why a 2-year learning plan is really a 2-year sharing plan.
Nothing teaches you something like being forced to teach it to someone else. Heck, even building this blog post (and a presentation on it a few weeks ago) made me flesh out my own philosophies!
But to share, you have to get permission. Start by having this discussion with your manager:
Dear Manager – Recently, we ran into the problem of ____. To get relief, I did ____. Are you happy with that relief? If so, I’d like to talk about what I learned at my local SQL Server user group. I won’t mention our company name. Is that OK? Sincerely, Me
By having that discussion, you’re also making sure the manager is really satisfied with your pain relief efforts and that they saw value in your work. (After all, think of them as one of your first pain relief clients.)
Once you’ve got permission, here’s how you build the 2-year sharing plan: every level jump equals one user group presentation.
- Write the user group presentation agenda in 4-5 bullet points.
- Write a blog post about each bullet point. (The words in your blog post are what you’ll say out loud in your session – think about it as writing your script.)
- Build slides that help tell the story, but the slides are not the story. Don’t transcribe your blog posts word-for-word on the slide.
For example, if you need to hit the level “I know several ways to relieve that pain,” and your specialization is improving the performance of virtual SQL Servers, your user group session could be titled “5 Ways to Make Your Virtual SQL Server Faster.” You’d then write a blog post about each of the 5 ways. Presto, there’s your session resources.
At the end of your 2-year sharing plan, you’ve built up a solid repertoire of material, plus built up your own level of expertise. (You’ve also built up a little bit of a reputation – but more on that later.)
Step 4: Decide what lifestyle works best for you.
How much risk can you tolerate?
- Some. I could miss a couple of paychecks a year and manage my own benefits if I earned more.
- Lots. I’d be willing to go without income for a month or two per year if I could earn lots more.
- None. A very predictable salary and benefits are absolute requirements for me.
This determines whether you should be a full-time employee, a long-term contractor who switches positions periodically, or a short-term consultant. In a nutshell, the differences are:
Consultants tell you what to do. They listen to your business problems, come up with solutions, and guide your staff on how to do it. They are typically short-term stints – a couple of days per month at a client, multiple clients at a time.
Contractors do what they’re told. They get a list of required solutions from the business and implement those solutions. They typically work long stints, showing up at the same client every day for months at a time, with only one live client relationship.
Full time employees do a mix of this. They come up with ideas, plus implement some of those ideas.
There’s no one answer that’s better for everyone. Heck, I’ve even changed my answer a few times over the last several years! It comes down to finding the right risk/reward balance for your own lifestyle needs, and then bringing the right customers in the door.
Step 5: Decide how you’ll market yourself.
Consultants sell advice, not manual labor, so they have many clients – which means doing a lot of sales.
Contractors sell labor, so they have fewer clients – which means less sales effort.
Full time employees (FTEs) only have one sales push every few years when they change jobs.
Our company is a good example of the work required to do marketing and sales when you want to scale beyond one or two people:
- We have tens of thousands of regular blog readers
- Thousands of them attend the weekly webcasts
- Hundreds of them email us per month asking for help
- A few dozen turn into serious sales opportunities
- Around a dozen will book consulting engagements with us
This funnel approach demonstrates inbound marketing – using lots of free material to get the word out about your services and inviting people to contact you for personal help. It’s a lot of hard work – very hard work.
The other approach is outbound marketing – cold calls to strangers asking if they’ve got your specialized pain point, and then trying to convince them that you’re the right person to bring pain relief. (You can kinda guess how I feel about outbound marketing.) Sure, it sounds slimy – but the takeaway is that it’s hard work, and every bit as hard as doing inbound marketing.
But only one of those options polishes your skills.
Inbound marketing is a rare two-for-one in life – it’s both your 2-year sharing plan, and your 2-year marketing plan. You don’t have much spare time, so you need every bit of it to count. Choose inbound marketing, do your learning and sharing in public, and you’ll write your own ticket.
Presto – You’re two years away from success.
No matter what pain you want to solve, how you want to solve it, or how you want to get paid for it, this simple plan will have you on the road to success. Now get started on writing down that one-sentence pain point!
kCura Relativity is a software product law firms use to find interesting stuff to help their case. To get you up to speed on what it does, here are some of the posts I’ve written about Relativity in the past:
- Performance Tuning kCura Relativity – explains what the product is for and how DBAs can help make it faster
- Tiering kCura Relativity Databases – how to manage hundreds or thousands of Relativity databases
- Using Partitioning to Make Relativity Faster – when you have a 1TB+ workspace, this technique makes backups and maintenance easier
Today, I’m going to talk about the database mechanics – where data lives, and which tables you need to care about.
Relativity’s EDDS* Databases
The EDDS database stores Relativity’s system-wide configuration. (Just plain EDDS, no numbers or letters after it.) All of the users and processes will hit this database at various points of their work.
The EDDSResource database is like Relativity’s TempDB. I’m a huge, huge fan of this approach – this lets DBAs tune the EDDSResource independently from TempDB.
Each of the EDDS12345 databases (with a bunch of different numbers) is a legal matter, or in Relativity terms, a workspace. Think lawsuit or case, basically. As your lawyers take on new cases, each one will get its own new EDDS database.
You may also have an EDDSPerformance database, which houses kCura Performance Dashboard – a product that gathers performance metrics about your environment.
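By the way, if you want a quick inventory of the workspace databases on an instance, the naming convention makes it easy to query. Here’s a minimal sketch using the standard catalog views – the name filters are just based on the convention above:

```sql
-- List Relativity workspace databases, excluding the system-wide ones.
SELECT name, create_date
FROM sys.databases
WHERE name LIKE N'EDDS%'
  AND name NOT IN (N'EDDS', N'EDDSResource', N'EDDSPerformance')
ORDER BY create_date DESC;
```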
Distributed Environments: Spreading the Load Between SQL Servers
When you first get started, all of these databases could be on the same server. In the e-discovery business, though, growth happens really fast. Right from the get-go, you probably want to plan to separate them onto multiple SQL Server instances – in Relativity terms, a distributed environment with a couple of different SQL Servers:
We’ve got two SQL Servers, each with a couple of workspaces. (Obviously, Relativity scales WAY bigger than two workspaces on a single server, but I only wanna make these images so big, people.)
The EDDS database – the central config data – only lives on SQL1.
Both servers have their own EDDSResource database, and that’s for temporary scratch space.
But two standalone SQL Servers would be an insanely bad idea because if either server goes down, you’re screwed. Instead, you want to build a failover cluster of SQL Server instances, each instance living on a different physical box:
The databases live on shared storage, so if either box dies, the SQL Server instance can start up on the other box. Of course, this means you’ll have twice as many workspaces living on the same hardware, which is not a recipe for high performance – you can mitigate that by buying a separate passive node. I’m not going into the intricacies of failover clustering here – for that, see our clustering resources.
How the EDDS Databases Affect Cluster Design
The EDDS database will consume performance resources. As your distributed environment grows larger and larger, the load on the EDDS database will increase. If other workspaces are sharing that instance, they may be bullied around by the EDDS database. In very large environments, the EDDS database may grow to the point where it needs its own SQL Server instance – or rather, you just don’t want to put any workspaces on that instance.
Any one workspace is confined to one database server. If you have a massive case going on with tens of terabytes of data, the load isn’t spread across the servers. One database still lives on just one host. While technically SQL Server 2012 AlwaysOn Availability Groups do let you spread load across multiple servers, and technically Relativity 8.2 supports SQL Server 2012, kCura doesn’t support AGs for failover or spreading load yet.
You can move databases between servers to balance load, but it requires some downtime and work on the application side. You can’t just back up a workspace on one server, restore it onto another SQL Server, and take off. kCura has administration tools to help with this task, but it’s up to you to figure out which databases should be on which servers. This is where the concept of tiered workspaces comes in.
How to Performance Tune the EDDS Database
Because this database holds configuration data, and because the queries that hit here aren’t usually user-created, you generally don’t want to touch this database.
A few weeks after deploying a new release of Relativity, I recommend checking SQL Server’s index usage DMVs and plan cache to find out if there are any new problems that pop up. There may be a new query that needs a new index, or a unique way of doing a query in your environment that hasn’t been seen out in the wild yet.
When issues like that pop up, start by opening a support case with kCura rather than making index changes in here. In your support case, include the query text from the plan cache (if applicable) and evidence from your index DMVs to support the index change you want to make. You could actually make index changes inside this database, but generally speaking, that’s not a good first step. Let the Relativity support folks make the call there because any index changes here can dramatically affect all workspaces.
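If you’re gathering that DMV evidence, here’s a minimal sketch of the kind of missing-index query you might attach to the support case – standard SQL Server DMV plumbing, nothing Relativity-specific:

```sql
USE EDDS;
GO
-- Missing-index suggestions for this database, roughest estimated impact first.
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
  ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
  ON g.index_group_handle = s.group_handle
WHERE d.database_id = DB_ID()
ORDER BY s.user_seeks * s.avg_user_impact DESC;
```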
How to Tune the EDDSResource Databases
You don’t. It’s a temporary staging ground. You can skip this guy under most circumstances.
How to Tune the EDDSPerformance Database
Performance Dashboard is a relatively new product – at least compared to Relativity. Early versions of it desperately needed a few indexes, so using a tool like sp_BlitzIndex® pays off big time here. I highly recommend checking missing indexes in this database, but after a couple of low-hanging-fruit indexes are applied, this database won’t be a performance issue.
Before making changes here, again, start a support case with the changes you’d like to make. In most cases, this is just a simple known issue and easy to fix.
How to Tune the Workspaces (EDDS* Databases)
Ah, now here’s the fun stuff.
Expanding on the process I discussed in my Performance Tuning kCura Relativity post, every database is a new case that goes through a lifecycle:
- The database is created. It’s technically restored from a “template”, a database that the users have set up as their standard starting point. Larger Relativity shops may have several templates to choose from.
- Some documents are loaded. We need to load data into the database server as quickly as possible.
- Users review the documents. They run searches looking for terms and attributes that might indicate evidence that would bolster their case. As they review documents, they make small edits to the metadata fields at the document level, like marking whether the document has been reviewed, who reviewed it, and whether or not it was interesting. We need to audit everything the users do (as well as things the system does, too.)
- We go back to step 2 a few times. More documents get loaded, and users run more searches. This cycle continues for some time, until document loading trickles to a halt, but searches still continue for a while.
- The case becomes dormant. Legal matters can drag on for years, but we may need to keep this database online the whole time. The amount of changes drops dramatically – sometimes with no data changes for months or years – but the database has to be online, and it has to be backed up.
Some of the major tables in each EDDS* database include (and remember, lawyers, I’m simplifying this for the DBAs in the house):
- Document, File – things we loaded into Relativity to search through, like Excel files and Outlook PSTs
- AuditRecord_PrimaryPartition – a log of most Relativity events, like document loads or end user searches (when this is a problem, start by partitioning it out)
- Artifact – think of this like a system table for Relativity that lists every Document, plus other system objects
- CodeArtifact – prior to Relativity v8, this one table stored records for all choices for every Document. (Think multiple-choice fields, like what kind of file type it was.) This had scalability limits because it had multiple times more rows than the Document table, and query plans could get ugly. This was changed in Relativity v8, but I’m mentioning it here in case any of you out there are still on 7.5. (Get on this level.)
Index tuning isn’t necessary on most of these tables because the queries that hit these tables are all managed by Relativity itself. The kCura developers sit around the office trying to figure out how to make those queries go faster, and they come up with some pretty good ideas. (Well, they also surf my site when they’re bored. Did I mention that they’re attractive people?) You shouldn’t need to touch indexes or queries here, other than the same every-new-version check that I described about EDDS.
Except for the Document table.
Oh, boy, the Document table.
Why the Document Table is Fun to Tune
Relativity lets end users write whatever crazy searches they want against the Document table. Wanna find every email with “S” in the email address? You got it. Need to see every PowerPoint created in June last year? Can do. Interested in every file whose extension is MP4 but the data is actually a PowerPoint slide deck and has hidden slides? No problem. You can build these searches in a GUI without understanding anything whatsoever about how SQL Server works, and Relativity will build the T-SQL for you.
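To give you a feel for it, that first search might come out the other side looking vaguely like this. This is a heavily simplified, hypothetical sketch – the real generated T-SQL is far longer, and the column name here is made up:

```sql
-- Roughly what a GUI-built "email address contains S" search could generate:
SELECT TOP 1000 D.ArtifactID
FROM dbo.Document AS D
WHERE D.EmailFrom LIKE N'%S%'   -- EmailFrom is a hypothetical field name
ORDER BY D.ArtifactID;
```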
To make matters even more fun (HA! see what I did there, lawyers? “matters”, oh, I kill me), the end users can add new fields to the Document table any time they want. If they want to add a new decimal field called LooksSuspicious, it happens with no DBA intervention or change request. Relativity generates the ALTER TABLE commands on the fly, and then users can populate that field and run searches against it.
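Behind the scenes, that LooksSuspicious field turns into DDL along these lines – a sketch, since the exact commands Relativity emits are its own business:

```sql
-- Relativity generates something like this against the workspace database:
ALTER TABLE dbo.Document ADD LooksSuspicious DECIMAL(18, 2) NULL;
```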
Index tuning becomes really challenging because we may never be done loading documents. To load documents, we want as few indexes as possible for faster insert speeds. To search for documents, we want lots of indexes so we don’t have to scan the Document table. As DBAs, we’d like to ask the users, “Are you done loading now? Because I can add indexes to make this go fast.” The answer with Relativity may always be, “No, I might load some more tomorrow.”
And what I find the most interesting is that every EDDS* database can be wildly different. Every team that’s involved with every legal matter may have totally different approaches to loading, searching, and managing their documents. That means you have to treat every EDDS* database as its own unique indexing challenge.
At any given time, you might have a hundred EDDS* databases, each for a different legal matter, each with their own Document table. Each has different numbers of fields and indexes in each case.
You can’t conquer each of these databases individually. You simply have to use my tiered workspace approach, define the small databases that will work just fine on their own, and go tackle the largest and most active databases with traditional index performance tuning methods.
We’re really excited to share these, plus give you a discount to celebrate.
First, we’ve added a new video class – How to Read Execution Plans with Jeremiah. You’re comfortable writing queries, but some of them are slow – and you need more ways to tune than just adding indexes. You’ve heard that you should read execution plans, but you don’t know where to start. Learn more about reading execution plans.
Next up, our 2015 in-person training class lineup. Our students told us they loved our 2014 classes, but they wanted more of everything. We’ve lengthened the classes – we took the performance one from 3 days to 4, and added a couple of new 5-day classes:
Advanced Querying and Indexing: 5-day in-person class. Do you need to learn to write the fastest queries possible for SQL Server? In 2015, join us for five days of advanced TSQL query and index optimization. Join us in Chicago or Portland.
SQL Performance Troubleshooting: 4-day in-person class. You need to speed up a database server that you don’t fully understand – but that’s about to change in four days of learning and fun in Chicago, Denver, and Portland.
Senior DBA Class of 2015: 5-day in-person class. You’re a SQL Server DBA who is ready to advance to the next level in your career but aren’t sure how to fully master your environment and drive the right architectural changes. That’s about to change in one week of learning and adventure in Chicago and Denver.
Some of our students (especially the consultants) told us they wanted to really go in-depth and take two weeks of classes back-to-back. To make that easier, we lined up our classes and put them in some of our favorite cities, at the best times to spend a weekend between classes:
Denver in February (hey, it’s ski season!):
- February 2-6 – Senior DBA Class of 2015
- February 9-12 – SQL Server Performance Troubleshooting
Chicago in May (best time to visit our fair city):
- May 4-8 – Advanced Querying and Indexing
- May 11-14 – SQL Server Performance Troubleshooting
Portland in August (Oregon summers are beautiful):
- Aug 3-7 – Advanced Querying and Indexing
- Aug 10-13 – SQL Server Performance Troubleshooting
Chicago in September (not too hot, not too cold):
- Sept 14-18 – Senior DBA Class of 2015
- Sept 21-24 – SQL Server Performance Troubleshooting
Check out our full catalog of in-person training events and online training videos – and all of our videos & classes, not just the new classes, are 30% off with coupon Launch2015 in July. Come join us!
You know how to design indexes, but you’re not sure how good your skills really are. In this quiz-packed session you’ll get a chance to test your skills! Kendra Little will walk you through a set of index design challenges. You’ll have time to answer each problem on your own, then find out whether the SQL Server engine thinks you made the right choice. At the end of the session you’ll get to tally your score (but you can keep it a secret if you like).
Take the quiz while you watch the video. If you’d like to dive straight into the questions, skip to 2:30.
Compare your answers to the group
Our webcast had 416 attendees and 80% of people answered each question. To find out the right answers (and why) watch the video above, but here’s the stats on how webcast viewers answered each question.
This is NOT the answer key– don’t assume the majority of people got every question right! This just lets you compare what you thought to what others thought.
Q1) A NonClustered Index is…
Q2) What will this query probably do given this table definition?
Q3) What is most likely the biggest problem with this table?
Q4) INCLUDED columns are…
Q5) A unique constraint is…
A only) 11%
A and B) 11%
A, B, and C) 16%
A, B, C, and D) 14%
A and C) 41%
A, C and D) 8%
A and D) 4%
(There were some other combos, but they all got low percentages)
Q6) Which one of these statements is ALWAYS true?
Q7) How many indexes were written to?
Q8) Why wouldn’t SQL Server Ask for an Index?
When you weren’t looking, your databases went and grew up. Now your backup window has grown so large that you’re about ready to open it and jump.
Time to make a choice.
The Native Way: Tuning SQL Server Backups
You can theoretically pull this off by using a combination of tactics:
Back up as infrequently as the business will allow. Run your full backups once a week (or if you want to go wild and crazy, once per month) and differential backups periodically. As Jes explains in her backup and recovery class, differentials back up the data pages that have changed since the last full backup. When disaster strikes, you only need to recover the most recent full backup, the most recent differential backup, and all of the log backups after the differential. This can shave a lot of time off your restores – but only if you minimize the number of changed pages in the database. This means…
Change the database as little as possible. We can’t change what the users do, but we can change what we DBAs do. Stop doing daily index defrag/rebuild jobs – you’re just changing pages in the database, which means instantly inflating the size of your differential backups. In a scenario like this, you can only do index maintenance when you’re sure it is the only way to solve a performance problem, and it absolutely has to be your last resort.
Tune the data file read speeds. You need to read the pages off disk as fast as possible to back them up. Use tools like CrystalDiskMark and SQLIO to measure how fast you’re going, and then tune your storage to go faster.
Compress the data as much as possible. It’s not just about minimizing the size of your backup file – it’s about minimizing the amount of data we have to write to disk. Bonus points for using index compression inside the database so that it’s compressed once, not recompressed every time we do a backup, although that doesn’t really help with off-row data.
Tune the backup target write speeds. If you’re using a small pool of SATA drives in RAID 5 as a backup target, it’s probably not going to be able to keep up with a giant volume of streaming writes, even if those writes are compressed versions of the database. Problems will get even worse if multiple servers are backing up to the same RAID 5 pool simultaneously because the writes will turn random, which is the worst case scenario for RAID 5.
Tune the bottleneck between the reads and the writes. If you’re backing up over the network, use 10Gb Ethernet to avoid the pains of trying to push a lot of data through a tiny 1Gb straw.
Tune your backup software settings. If you’re using native backups, start with using multiple files and the built-in options, and graph your results. Third party compression products usually offer all kinds of knobs to tweak – you’ll need to use that same level of graphing diligence.
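As a starting point for that graphing exercise, here’s a minimal native-backup sketch that stripes to multiple files with compression – the database name and paths are placeholders:

```sql
BACKUP DATABASE BigDB
TO DISK = N'F:\Backups\BigDB_1.bak',
   DISK = N'G:\Backups\BigDB_2.bak',
   DISK = N'H:\Backups\BigDB_3.bak',
   DISK = N'I:\Backups\BigDB_4.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;  -- progress messages every 10%
```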
Whew. I got tired just typing all that stuff. And if you’re lucky, at the end of it, your backups will complete in an hour or two, but the server might be darned near unusable while you’re beating the daylights out of it. Then the fun balancing act starts, trying to figure out the right point where the system is still usable but the backups complete quickly.
Or Just Cheat with SAN Snapshots.
In my Virtualization, SAN, and Hardware video class, I explain how SAN snapshots are able to take a full database backup of any size in just a couple of seconds.
See, while it’s technically a backup, I don’t really consider it a backup until it’s off the primary storage device. Your SAN storage, expensive as it was, is still vulnerable to failure, and you need to get that data out as quickly as possible. The good news is that you can move that data out without dragging it through the SQL Server’s storage connections, CPU, and network ports. You can simply (simply?) hook a virtual tape library, actual tape library, or another storage device to the same storage network, and copy directly between the two.
Your data read speeds may degrade during that process, but it’s up to you – if you want to architect your storage so that it’s fast enough to do these full backups without any noticeable performance impact to the end user, that’s possible by inserting enough quarters in the front.
You still have to pay attention, though, because your backup process will look like this:
- Daily full backups via SAN snapshots – all writes are quiesced for 1-10 seconds during this time
- Conventional log backups every X minutes – where X is dictated by the business
If you push a big index rebuild job through, you can still bloat the transaction log, and your log backups may take longer than X minutes to complete. This is where our RPO/RTO planning worksheet is so important – if your RPO is 1 minute, you simply may not be able to do index rebuild jobs.
SAN snapshots have one other drawback: depending on your storage make/model, snapshots may not be included in your licensing pricing. You may have to spend a lot more (typically tens of thousands of dollars) to unlock the feature. Ask your SAN admin if snapshots are right for your wallet.
Building Terabyte Servers Means Starting with Backups First
When I’m building a SQL Server to hold multiple terabytes of databases, this backup question is the very first one we have to address – even before we talk about the speed of end user queries.
Otherwise, we could end up designing a server with all local solid state drives, which is very inexpensive and satisfies end user performance goals – but we can’t back the data up fast enough.
Let’s say you want to have an addition built on your house. You contact a contractor, who comes to your house, looks at your lot, looks at what exists, and asks you questions about exactly what you want. He then gives you an estimate of the work – approximately how long he thinks it will take his crew, what supplies will be needed, and the cost of those supplies.
SQL Server gives you a similar option. When you write a query, before committing to it, you can request an estimated execution plan.
The estimated execution plan is designed to show what SQL Server would most likely do if it were to execute the query. Using statistics, it estimates how many rows may be returned from each table. It chooses the operators it will use to retrieve the data – scans or seeks. It decides how to best join the tables together – nested loops, merge joins, hash joins. It is a reasonably accurate guide to what SQL Server will do.
You can view an estimated execution plan for an individual statement, a stored procedure, or a function. The plan can be text, XML, or graphical. To view text, use SET SHOWPLAN_TEXT ON. For an XML version, use the command SET SHOWPLAN_XML ON. To view a graphical plan in SSMS, click the Display Estimated Execution Plan button in SSMS.
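For example, here’s the text version in action – the query itself is just a stand-in:

```sql
SET SHOWPLAN_TEXT ON;
GO
-- This statement is NOT executed; only the estimated plan comes back.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;
GO
SET SHOWPLAN_TEXT OFF;
GO
```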
There are some cases in which SQL Server can’t create an estimated execution plan. If your query has parameters in it and values aren’t passed in, SQL Server can’t interpret those – it needs literal values. If the query references a temp table that is not declared, the plan also can’t be generated.
Now, let’s go back to our construction project. We’ve signed the contract and the contractor begins work. While the addition is being framed, you decide you want to add an extra room, or add windows, or make the ceilings higher. The contractor has to adjust for this in terms of time and cost. This will change how long the project takes, and how expensive it is.
Executing a query in SQL Server is no different. The actual execution plan is shown after a query is executed. The difference here is that SQL Server can tell you exactly how many reads were performed, how many rows were read, and what joins were performed.
If it’s a long-running query, it will take a while to get the execution plan. Parameters, if required, must be passed in.
The text plan is generated using SET STATISTICS PROFILE ON. The XML version of the actual plan can be viewed by using SET STATISTICS XML ON. A graphical version can be generated in SSMS by using the Include Actual Execution Plan button.
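Same stand-in query, this time actually executed, with the XML plan returned alongside the results:

```sql
SET STATISTICS XML ON;
GO
-- This statement IS executed; the actual plan comes back with the rows.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;
GO
SET STATISTICS XML OFF;
GO
```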
There are some cases in which things that show up in the estimated plan will not show in the actual plan. For example, when you call a scalar-valued function, the estimated plan will show it – the actual plan will not. (This is why the impact of functions can be very misunderstood.)
“IF I VIEW AN ESTIMATED PLAN, THEN IMMEDIATELY RUN THE QUERY AND VIEW THE ACTUAL EXECUTION PLAN, WHY DO I SEE DIFFERENCES?”
The query optimizer is going to use statistics on the tables and indexes to decide how to perform the actual query execution. If statistics change for any reason between the time you estimate a plan and when the query is actually run, you can see differences.
Changes to table schema, indexes, or even the data can affect the statistics. If a new index is added, or rows are updated in the table, when the query optimizer executes the query, it could choose a different set of operators than it did during estimation. Sometimes the differences between the estimated and actual plans can be large!
How can you prevent this from being a problem? Make sure statistics are updated on your tables and indexes. Auto update stats will automatically refresh statistics if a specific number of rows in a table change – after a table reaches 500 rows, roughly 20% of the rows need to change. (Exact details about that are here http://support.microsoft.com/kb/195565/en-us). The more rows that your table contains, the more changes that need to be made for them to automatically refresh – on large tables, you may need to set up more frequent stats updates.
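The commands themselves are simple – the table name is a placeholder, and how often you run them depends on your data churn:

```sql
-- Refresh statistics on one busy table with a full scan...
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
-- ...or refresh statistics across the whole current database.
EXEC sp_updatestats;
```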
You also want to be aware that estimates for table variables can be wildly inaccurate on large result sets – SQL Server always estimates a very low row count for them.
TRIVIA: WHICH TYPE OF PLAN IS STORED IN THE PLAN CACHE?
The estimated plan is stored in the plan cache. If you review the XML (doesn’t that sound like fun?!), you will see “ParameterCompiledValue” listed near the end. This is what value the query was run with when the plan was stored. Ensuing executions may use different values, which can lead to less-than-optimal performance if bad parameter sniffing happens.
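If you want to see this for yourself, you can fish cached plans out with the plan-cache DMVs and look for ParameterCompiledValue in the XML. A minimal sketch – swap in whatever query text you’re chasing:

```sql
SELECT st.text, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE N'%Orders%';  -- placeholder filter
```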
Estimated execution plans can be very useful as you are writing and tuning queries, giving you an idea of how SQL Server will most likely perform query execution. However, if you need to know exactly what steps SQL Server will take, executing the query and reviewing the actual execution plan is the only way to be certain.
Learn More in Our Execution Plan Training
Our How to Read Execution Plans Training explains how to get an execution plan, how to read the operators, and learn solutions to common query problems.
SQL Server transactional replication is a blessing and a curse. It’s a great developer tool that lets you scale out data to multiple servers, even using Standard Edition. But as your business picks up, your datasets get larger, and your customers grow more demanding, replication can start to fall behind. You need to learn how to tune it to keep up.
Before You Start Tuning Replication….
Make sure you can prove when changes you make to replication improve performance. Or find out quickly if you make something worse. (It’s gonna happen.)
If I’m going to make changes with something as complex as replication, here are my basic requirements:
- Monitoring must alert the DBA team when replication latency exceeds allowed thresholds
- Monitoring needs to track historical latency to show if my changes reduce latency
- I need a production-like staging environment to test my changes.
If you haven’t configured monitoring for transactional replication, read how to do it here. The “easy” and “medium” steps are a small amount of work and are incredibly useful.
Don’t Skip “Normal” SQL Server Performance Tuning!
I’m going to give you a lot of transactional replication specific performance tuning tips in this post. But don’t skip other elements of SQL Server performance tuning! Wait statistics, virtual file stats, and identifying bottlenecks are still important. Get started with SQL Server performance tuning here.
1) Are You Using the right versions of Windows and SQL Server for Replication?
For replication performance, you want to be on Windows Server 2008 and SQL Server 2008 minimum. It really makes a difference.
2) Have You Scaled up your distributor?
When replication performance is important, use a dedicated distributor server so that your distributor doesn’t have to fight with a publisher or subscriber for CPU, memory, network, or storage resources. If you need high availability for the distribution database, you have limited options: failover clustering is pretty much the only way to go.
3) Is Replication really what you need?
4) Are You Using the right type of subscriptions and tools for replication over the WAN?
Argenis Fernandez shares what he learned from tuning transactional replication over wide area networks. This is a great use of “pull” subscriptions.
5) Have You Made sure Articles are NOT in more than one publication?
Kendal Van Dyke shows that having articles in multiple publications can also magnify the number of commands in your distribution database. That bloats your distribution database and will slow you way down as activity picks up.
(Note: If you’re using row-filtering on your articles, you may be the exception to this rule.)
6) Do You Only run Snapshots when you need them?
I’ve come across cases where the Snapshot agents for a publication were set to run on a schedule, even when replication wasn’t being initialized. I believe that when someone was setting up replication, they had checked the option to “Schedule the Snapshot Agent to run at the following times” without realizing it wasn’t needed. Don’t run snapshots on a schedule – they’ll lock up the publishing database. (When you open “Job Activity Monitor”, these jobs show up with the category “REPL-Snapshot”.)
7) Are you Using “Immediate Sync” to your Advantage?
The immediate sync option is hard to spot when you first set up replication. Setting this to false can help minimize the impact of running a replication snapshot if you need to add new articles, or even remove and re-add a few articles. Learn more about it here. As always, test your changes outside of production first! (I personally have a fear of having immediate sync set to true because of this old bug from SQL Server 2005.)
For information on how the immediate_sync can also impact your distribution database, read more here.
Thanks to Allen McGuire for his comment reminding us on the benefits of this setting!
8) Are You Replicating only the articles and columns you need on the Subscriber?
Don’t just “add all.” For scalability, replicate only the articles that must be in replication, and only the columns that need to be replicated. This not only helps overall performance, it also reduces the impact of times when you may need to re-initialize replication. This goes especially for large object / LOB data types. If you must replicate LOB types, Microsoft recommends that you use newer types such as nvarchar(max), varbinary(max), etc.
9) Do You Set the ‘Replicate Schema Changes’ subscription option to false when needed?
New columns being added to a published article shouldn’t be replicated to the subscriber unless they really need to be there. You can turn off the replication of schema changes by setting the ‘Replicate Schema Changes’ subscription option to ‘false’. (It defaults to ‘true’.)
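Under the hood this is the replicate_ddl publication property, even though SSMS surfaces it on the Subscription Options page. A sketch, assuming a publication named MyPublication:

```sql
-- Run at the publisher, in the published database:
EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'replicate_ddl',
    @value = 0;  -- 0 = don't replicate schema changes (1 is the default)
```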
10) Have You Considered static row filters?
“Static row filters” allow you to include only certain rows in a given publication. One gotcha: the row-filter is only evaluated when the row is inserted, not when the row is updated, so you really want this to be a value that doesn’t change. There is overhead to applying the row filter itself: Microsoft only recommends you use the row filters if your replication setup can’t handle replicating the full dataset. Be careful with indexing if you use row filters.
11) Have You Optimized your subscriber databases for re-initialization?
Face it: re-initialization happens. Unfortunately, it usually happens when you least expect it and had plans to be doing something else. There are a few things you can do to keep re-initialization from making your publication database unusable for long periods.
- Isolate your replication subscriber tables into their own database, and only keep replicated articles in there. This also typically lets you use recovery models optimized for minimal logging in that database to speed up bulk inserts. Consider using synonyms to quickly “repoint” to replicated articles to give you flexibility.
- Evaluate whether initializing replication from backup could help.
12) Have you Considered Using multiple publications?
There are pros and cons to splitting out publications. The pros:
- You can isolate large tables that are the biggest problems for snapshotting into their own publications so that they get their own snapshots. That can be helpful if there are other tables you might need to remove and re-add to replication more frequently. (The immediate_sync setting can also help with this – see #7 above.)
- This gives you multiple distribution agents, so changes can be applied to your subscribers in parallel. (This one’s a pro and a con, as you’ll see.)
And the cons:
- It’s more work to manage. You should be checking scripts for your entire replication setup into source control and have documentation on everything. More publications make that whole process harder.
- All those distribution agents can backfire if you don’t have the resources to support them working on the subscriber at the same time.
- Be mindful not to put non-row-filtered articles in more than one publication, as noted above.
13) Are “Subscription Streams” Right for You? (or not?)
This option allows you to raise the number of connections that the distribution agents use to apply changes to the subscriber. But there’s overhead to managing all these threads, and you can get into situations where transactions aren’t fully consistent if you hit problems.
This feature is primarily recommended for situations where you have high network latency and aren’t changing the setting often. Keep in mind that if you’re splitting your articles into multiple publications for other reasons, you’ve already got multiple distribution agents running in parallel.
14) Are You replicating non-clustered indexes blindly?
Confession: I did this wrong for years. It’s very easy to set up transactional replication and send all the indexes over to the subscriber: you just set “Copy nonclustered indexes” in the article properties to “true”. But you’re only required to replicate the primary key and unique indexes. There are two big problems with replicating all the nonclustered indexes:
- It can make re-initialization slower. By default the subscriber will have objects created, bulk load the articles, create “extra” nonclustered indexes, then “catch up” on any changes that came in after the snapshot was pushed. You definitely want to make sure that all “extra” nonclustered indexes are disabled or don’t exist while that bulk load is happening. But if a lot may have changed since the snapshot ran, you may not want the indexes to be created until the very end, anyway. Handling the nonclustered index creation outside of replication gives you that flexibility.
- It’s very rare for the exact same queries to run on the publisher and subscriber. You usually want nonclustered indexes that are specific to the workload on the subscriber, anyway.
Identify the “extra” nonclustered indexes specific to the queries that run on the subscriber. Script them out, check them into your source control, and have a process to deploy them whenever replication needs to be re-initialized.
15) Could publishing stored procedure execution work?
If your workload is run entirely by stored procedures, this can be a great option. (FYI, there is a bug/hotfix for this in SQL Server 2012/2014 listed below.)
16) Are You Using Read Committed Snapshot Isolation on replication subscribers?
It’s common for the distribution agent to have to fight with other processes while it tries to insert, update, and delete rows in the subscriber database. One DBA I worked with removed a huge amount of blocking and sped up processing by using RCSI on a subscriber database. Learn more about RCSI here.
17) Have You Ruled Out Virtual Log File fragmentation on the publisher database?
If you’ve got more than 10K virtual log files on your publication database, it could slow down replication. I’ve particularly seen this cause replication to get behind when a large operation like an index rebuild was run. Our free sp_Blitz® script will diagnose high numbers of VLFs for you.
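If you’d rather eyeball the VLF count yourself, DBCC LOGINFO returns one row per virtual log file – the database name below is a placeholder:

```sql
USE MyPublishedDatabase;
GO
DBCC LOGINFO;  -- count the rows returned = number of VLFs
```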
18) You Haven’t Been Randomly Fiddling with Settings on the Distribution Agent, Have You?
There are a lot of little settings you can change on the Agent Profiles in replication. I’m not a huge fan of changing them unless you can prove they helped your performance, though. In my experience, these are typically “fine tuning” settings for after you have the right architecture in place.
19) Have You Looked Out for Replication Bugs / Fixes?
Man, these can get you. Like any other complicated tool, things can go wrong. Here are a few highlights:
- KB 2674882 – Deadlocking distribution agents OR replication queries with very high memory grants. This can occur on SQL Server 2005, 2008, or 2008R2.
- Unexpectedly inactive subscriptions. There are many performance reasons to upgrade from SQL Server 2005, but if you must be on it then you shouldn’t run anything less than SP4.
- KB 2958429 – Service Packs Matter. SQL Server 2012 SP2 added some features to replication logging (and even a few improvements to Peer to Peer replication, oddly enough). Scroll to “Highlights for SQL Server 2012 SP2” and expand “Replication” to see the list. If you apply this service pack, you may also want to apply KB 2969896.
- KB 2897221 - Stack dumps/ non-yielding schedulers if you’re replicating stored procedures. SQL Server 2012 or 2014.
- KB 949296 – Replication Agents cannot run when you have many agents and the Windows desktop heap is “used up”. (Thanks to Michael Bourgon for suggesting we link to this one.)
Public Safety Announcement: Replication Needs Special Treatment for Hotfixes and Upgrades
With any hotfixes, it’s always good to review KB 941232, “How to apply a hotfix for SQL Server in a replication topology.” (You’ll need special steps if you’re using Database Mirroring or AlwaysOn Availability Groups.)
Be careful with the steps you take to upgrade and ‘drain’ replicated transactions. If you don’t do this, in some cases you may have big problems during an upgrade.
Got a Transactional Replication Horror Story or Recommendation?
Or do you have a favorite setting I haven’t mentioned here, or something you disagree with? Replication is full of controversy! Share it with us in the comments.
Sometimes you end up in a good plan / bad plan situation: an important query runs just fine most of the time. The query is parameterized, a good execution plan gets re-used, everything is cool.
But sometimes, a “bad plan” gets compiled and starts to be reused. This is “bad” parameter sniffing. “Bad plans” can come in a few varieties: maybe it’s slow for some parameter combinations and sometimes causes page timeouts. Maybe the “bad” query plan has a very large workspace memory grant that just isn’t needed, and it causes problems because lots of different queries are using it – then you get all sorts of nasty Resource Semaphore waits and everything gets slow.
Whatever the situation is, sometimes you want to stabilize a particular execution plan that’s “good” for all the different parameters that the query can run with.
Option 1: Change the code
The very best option is changing the code so you don’t have to resort to shenanigans behind the scenes. You can rewrite the TSQL, change indexes, or use hints to get a specific plan. But sometimes this is difficult to do: maybe it’s vendor code you can’t change. Maybe there’s a long code release process and it will take a very long time to get the code changed.
If you can tune the code, absolutely do it. If you can’t, at least get the request to fix the code noted by the vendor or software development team. Don’t skip it altogether, because the options I describe below aren’t all that fantastic.
Option 2: Plan guide that thing
Plan guides are like duct tape: it’s something you want to have on hand for emergency quick fixes, but you don’t want to rely on it long term as a building material. It’s also not suited for every kind of fix.
Plan guides let you do a few things:
- Apply query hints like “optimize for value”, “optimize for unknown”, “recompile”, and “maxdop” to a query
- Selectively turn on trace flags for a query, such as TF 4199 (query optimizer hotfixes), TF 9481 (older cost-based optimizer if running on SQL Server 2014), and TF 2312 (newer cost-based optimizer if running on SQL Server 2014)
- Add selected table hints, like forceseek and specific index hints. (You cannot add NOLOCK or change isolation levels – don’t get too excited.)
- “Template” plan guides let you “force parameterize” a specific query, or enforce “simple parameterization” for a certain query if the database is using forced parameterization
But duct tape isn’t perfect. Here’s the biggest gotchas that I’ve found with plan guides:
- I’ve found that trying to use an index hint in a plan guide can cause queries to silently fail. That’s awkward.
- Plan guides don’t work with all types of queries. I haven’t been able to get them to work with temporary tables or table variables referenced in the query, for example.
- Making sure that a plan guide is working and is picking up your query is tricky. Plan guides are very picky about matching query text exactly.
- Plan guides can make code releases fail. If you’ve got a plan guide that references a stored procedure and something goes to alter it, SQL Server’s going to stop the ALTER with an error.
My biggest advice for plan guides: test them out on a non production system first. Verify that the plan guide is working and that the queries are doing exactly what you want before deploying to production. Treat the plan guide like real code as much as possible: put it into all environments, check it into source, use change control, and document it.
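For reference, here’s the shape of a simple plan guide – a hedged sketch where the procedure, statement, and hint are all placeholders, and remember the statement text has to match the module’s text exactly:

```sql
EXEC sp_create_plan_guide
    @name = N'PG_GetOrders_OptimizeForUnknown',
    @stmt = N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID;',
    @type = N'OBJECT',            -- the statement lives inside a module
    @module_or_batch = N'dbo.usp_GetOrders',
    @params = NULL,               -- must be NULL for OBJECT plan guides
    @hints = N'OPTION (OPTIMIZE FOR UNKNOWN)';
```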
Option 3: Manually Force the “Right” Parameter Sniffing
If you can’t change the code and a plan guide doesn’t work, you can get a little creative. (By “get a little creative”, I mean that everything I’m about to describe can go horribly wrong.)
First, find the bad plan in cache. In SQL Server 2008 and higher, we get a nifty plan_hash for query plans. You can identify the “bad plan hash” that you don’t want to keep in cache. When it’s in cache, you then:
- Remove it from the cache. You can do this using DBCC FREEPROCCACHE and the plan_handle value (you can get this by running: sp_BlitzCache @results='expert'). Or if it’s a stored procedure, you can use the sp_recompile procedure against the stored procedure to blow away the plan. (Thanks @DBArgenis for this tip!)
- Run a version of the query that puts the plan you want into the cache. Usually this means running the query with the parameters that give you the “right” plan, as in the sketch below.
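For a stored procedure, that two-step dance can look like this – placeholder names throughout, and heed the locking warning in the next paragraph:

```sql
-- 1) Evict the cached plan for one stored procedure:
EXEC sp_recompile N'dbo.usp_GetOrders';
-- 2) Immediately re-run it with a parameter that compiles the "good" plan:
EXEC dbo.usp_GetOrders @CustomerID = 42;
```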
You’ve got the burden of doing some testing. Be careful with how you remove plans from cache: sp_recompile requires a schema-level (exclusive) lock, so I don’t recommend running that against tables.
You need to generate the plan in a way that’s going to be re-used and make sure it works. You need to make sure that the plan you’re putting in cache really is good for re-use!
Stabilizing execution plans with NORECOMPUTE
If this doesn’t sound quite crazy enough for you, you can go a little farther and try to increase the chances of your “good” plan staying in cache longer. One tool you can use for this is NORECOMPUTE.
When you update statistics on a table with NORECOMPUTE, you tell SQL Server not to automatically update statistics on the table as data changes. Normally that automatic update kicks in when approximately 20% of the rows in the table have been modified (the algorithm is more complicated, but 20% is pretty easy to remember). Updated statistics cause the optimizer to consider a new execution plan for your query, so NORECOMPUTE reduces the chance of your “good” plan being bumped out.
If you use NORECOMPUTE, be aware that it could have a negative effect on other queries against the table, leaving them with terrible estimates. You probably want to manually update statistics for the table at least once a day if data changes in it. You can do this using a built-in command like sp_updatestats, custom code you write yourself, or Ola Hallengren’s free index maintenance scripts (see Examples B & D for two options). Just please don’t do it with a maintenance plan.
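The syntax itself is tiny – placeholder table name, and remember this pins the stats until your next manual update:

```sql
-- Update stats now, and tell SQL Server not to auto-update them afterward:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN, NORECOMPUTE;
```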
Document the heck out of this. It’s easy for people to have no idea it’s in place, or find it and change it without knowing what it impacts. If plan guides are duct tape, this is more like Scotch Tape.
These Are Emergency Fixes – Don’t Start Your Performance Tuning Process With These Techniques
A lot of times you don’t need any of this crazy stuff. Remember: many times you can fix these issues with good indexing or simple code changes. Safety first!
Learn More in Our Execution Plan Training
Our How to Read Execution Plans Training explains how to get an execution plan, how to read the operators, and learn solutions to common query problems.
At some point you’re going to be confronted with an Oracle installation, or even just an Oracle DBA. Communicating with a DBA who works on a different product can be difficult – it’s like speaking US English and having a conversation with a native English speaker from Scotland. The words are the same, but they have different meanings.
While this is by no means an exhaustive list, it will help SQL Server DBAs have a meaningful conversation with their Oracle colleagues.
Database
Oracle refers to the database as the data files on disk that store data.
Database instance
The set of memory structures and system processes that manage database files. Basically, the instance is executables and memory. Oracle has some different terms to separate out plan cache, buffer pool, and other concepts. But at a high level, executables and memory make a database instance.
So far things seem the same. Up until Oracle 12c, though, these two concepts were close to one and the same – one instance of Oracle housed one database (things like Oracle RAC not included). One thing to take note of: Oracle on Windows runs within one process, just like SQL Server. On Linux, however, there will be multiple Oracle processes, each with a clearly defined purpose.
Tablespaces
A tablespace is roughly analogous to a filegroup. You can create tables and indexes inside a tablespace. Like a filegroup, you can take tablespace backups separate from the rest of the database.
Unlike SQL Server, each tablespace can have many different options – some tablespaces can be logged while others are not. During tablespace creation, DBAs can manage a variety of features of each tablespace including a separate undo tablespace (see below), per user disk quotas, logging, or even on-disk block size (this can be helpful when dealing with LOB data).
In short, Oracle DBAs can customize database behavior at the tablespace level as well as at the database level. This can be useful for controlling archive data performance, blobs, or managing other aspects of storage.
Default tablespaces
Every user is created with a default tablespace. The default tablespace defines where that user’s tables and indexes will be created unless a different location is specified. This is like setting up a default filegroup, but it can be set per user instead of per database, and it provides finer-grained control. A default tablespace is not like a default schema in SQL Server – users can create objects with different schemas inside their default tablespace. This isn’t related to object ownership like schemas in SQL Server; it’s related to object placement on disk.
Temporary tablespaces
You know how SQL Server has one tempdb? Within Oracle, database administrators can specify a different temporary work space on a user-by-user basis. Fast OLTP workloads can have access to SSD temporary tablespace while data warehouse queries and ETL jobs can have their own temporary tablespace that uses rotational disks. Heck, you could even allocate PCI-Express storage for executives’ temporary tablespace if they needed lightning-fast joins or just wanted to foot the bill for PCI-Express storage.
Undo tablespace
Oracle uses MVCC by default (in SQL Server you’d call it READ COMMITTED SNAPSHOT ISOLATION). Row versions have to be stored somewhere, but there’s no tempdb. The undo tablespace is used to track changes that have been made and to put the database back into a consistent state if a transaction is rolled back. Although it is possible to create multiple undo tablespaces, only one undo tablespace will be used by any single Oracle instance at a time.
If only one undo tablespace can be active per Oracle instance, why have multiple undo tablespaces? Oracle RAC can contain multiple Oracle instances reading the same database. Each of the Oracle RAC instances can have a separate undo tablespace. If this sounds confusing, don’t worry – Oracle RAC is complex and deserves a separate blog post.
Once upon a time, Oracle DBAs had to configure the undo tablespace by hand. This was called the rollback segment. Poorly configured rollback segments led to “snapshot too old” errors and grumpy DBAs. If you ever encounter Oracle using a rollback segment, kindly ask the DBA why they aren’t using automatic rollback management (undo tablespaces).
Redo log files
It’s a transaction log file! A key Oracle difference is that everything gets logged, even the undo information. Redo log files are used just like SQL Server transaction log files.
Like SQL Server’s transaction log, Oracle can have multiple redo log files. These log files are written to in a circular fashion – the log files are written to in order and, when all log files are full, Oracle will circle around to the beginning again.
Unlike SQL Server, having multiple redo log files is the preferred way to manage Oracle logging. By default, there are two groups of redo log files, but this can and should be configured, based on RPO/RTO needs.
Archived redo log files
These are redo log files that have been backed up. There are a number of ways to have Oracle manage creating backups of redo log files, ranging from manual to completely automated. If the disks storing these files fill up, Oracle will not be able to write to the data files – active redo log files can’t be archived any more. To ensure safety, writes are stopped.
Temporary tables
Oracle temporary tables are similar to SQL Server’s with one major exception – they’re statically defined. Even though an Oracle temp table definition will stick around until dropped, the data only persists for the duration of a session (or transaction, if the table is configured that way).
The data inside a temporary table exists only for the current session – you can’t view data in another session’s temp table. The upside is that temp table metadata is always available for other users to query.
Backups
Oracle backups are very different from SQL Server backups – they’re both simpler and more complex than SQL Server’s at the same time. Many Oracle shops use a tool called Oracle Recovery Manager (RMAN) to handle database and redo log backups, archival, and even the expiration and deletion of backup files.