Blog

How to Back Up Terabytes of Databases

When you weren’t looking, your databases went and grew up. Now your backup window has grown so large that you’re about ready to open it and jump.

Time to make a choice.

The Native Way: Tuning SQL Server Backups

You can theoretically pull this off by using a combination of tactics:

Back up as infrequently as the business will allow. Run your full backups once a week (or if you want to go wild and crazy, once per month) and differential backups periodically. As Jes explains in her backup and recovery class, differentials back up the data pages that have changed since the last full backup. When disaster strikes, you only need to recover the most recent full backup, the most recent differential backup, and all of the log backups after the differential. This can shave a lot of time off your restores – but only if you minimize the number of changed pages in the database. This means…

Change the database as little as possible. We can’t change what the users do, but we can change what we DBAs do. Stop doing daily index defrag/rebuild jobs – you’re just changing pages in the database, which means instantly inflating the size of your differential backups. In a scenario like this, you can only do index maintenance when you’re sure it is the only way to solve a performance problem, and it absolutely has to be your last resort.
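To make that restore sequence concrete, here’s a rough sketch with a made-up database name and backup paths – every name below is a placeholder, not a prescription:

    -- Weekly full backup (database name and paths are placeholders)
    BACKUP DATABASE BigDB
        TO DISK = N'\\BackupShare\BigDB\BigDB_full.bak'
        WITH CHECKSUM, STATS = 5;

    -- Periodic differential: only the pages changed since the last full
    BACKUP DATABASE BigDB
        TO DISK = N'\\BackupShare\BigDB\BigDB_diff.bak'
        WITH DIFFERENTIAL, CHECKSUM, STATS = 5;

    -- Disaster strikes: restore the full, the most recent differential,
    -- then every log backup taken after that differential
    RESTORE DATABASE BigDB FROM DISK = N'\\BackupShare\BigDB\BigDB_full.bak' WITH NORECOVERY;
    RESTORE DATABASE BigDB FROM DISK = N'\\BackupShare\BigDB\BigDB_diff.bak' WITH NORECOVERY;
    RESTORE LOG BigDB FROM DISK = N'\\BackupShare\BigDB\BigDB_log_1.trn' WITH NORECOVERY;
    RESTORE LOG BigDB FROM DISK = N'\\BackupShare\BigDB\BigDB_log_2.trn' WITH RECOVERY;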

My backup camera collection has started to grow out of control.

Tune the data file read speeds. You need to read the pages off disk as fast as possible to back them up. Use tools like CrystalDiskMark and SQLIO to measure how fast you’re going, and then tune your storage to go faster.

Compress the data as much as possible. It’s not just about minimizing the size of your backup file – it’s about minimizing the amount of data we have to write to disk. Bonus points for using index compression inside the database so that it’s compressed once, not recompressed every time we do a backup, although that doesn’t really help with off-row data.

Tune the backup target write speeds. If you’re using a small pool of SATA drives in RAID 5 as a backup target, it’s probably not going to be able to keep up with a giant volume of streaming writes, even if those writes are compressed versions of the database. Problems will get even worse if multiple servers are backing up to the same RAID 5 pool simultaneously because the writes will turn random, which is the worst case scenario for RAID 5.

Tune the bottleneck between the reads and the writes. If you’re backing up over the network, use 10Gb Ethernet to avoid the pains of trying to push a lot of data through a tiny 1Gb straw.

Tune your backup software settings. If you’re using native backups, start with using multiple files and the built-in options, and graph your results. Third party compression products usually offer all kinds of knobs to tweak – you’ll need to use that same level of graphing diligence.
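As a starting point for that graphing exercise, here’s what a native backup striped across multiple files with compression might look like – the paths, file count, and knob values are all placeholders to test, not recommendations:

    -- Stripe the backup across several files on separate paths, compress it,
    -- and expose a couple of the knobs worth graphing (values are guesses to test)
    BACKUP DATABASE BigDB
        TO  DISK = N'G:\Backup\BigDB_1.bak',
            DISK = N'H:\Backup\BigDB_2.bak',
            DISK = N'I:\Backup\BigDB_3.bak',
            DISK = N'J:\Backup\BigDB_4.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 5,
             BUFFERCOUNT = 50, MAXTRANSFERSIZE = 4194304;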

Whew. I got tired just typing all that stuff. And if you’re lucky, at the end of it, your backups will complete in an hour or two, but the server might be darned near unusable while you’re beating the daylights out of it. Then the fun balancing act starts, trying to figure out the right point where the system is still usable but the backups complete quickly.

Or Just Cheat with SAN Snapshots.

In my Virtualization, SAN, and Hardware video class, I explain how SAN snapshots are able to take a full database backup of any size in just a couple of seconds.

I’m a big believer in snapshots.

Seriously.

Sort of.

See, while it’s technically a backup, I don’t really consider it a backup until it’s off the primary storage device. Your SAN storage, expensive as it was, is still vulnerable to failure, and you need to get that data out as quickly as possible. The good news is that you can move that data out without dragging it through the SQL Server’s storage connections, CPU, and network ports. You can simply (simply?) hook a virtual tape library, actual tape library, or another storage device to the same storage network, and copy directly between the two.

Your data read speeds may degrade during that process, but it’s up to you – if you want to architect your storage so that it’s fast enough to do these full backups without any noticeable performance impact to the end user, it’s possible by inserting enough quarters in the front.

You still have to pay attention, though, because your backup process will look like this:

  • Daily full backups via SAN snapshots – all writes are quiesced for 1-10 seconds during this time
  • Conventional log backups every X minutes – where X is dictated by the business

If you push a big index rebuild job through, you can still bloat the transaction log, and your log backups may take longer than X minutes to complete. This is where our RPO/RTO planning worksheet is so important – if your RPO is 1 minute, you simply may not be able to do index rebuild jobs.

SAN snapshots have one other drawback: depending on your storage make/model, snapshots may not be included in your licensing pricing. You may have to spend a lot more (typically tens of thousands of dollars) to unlock the feature. Ask your SAN admin if snapshots are right for your wallet.

Building Terabyte Servers Means Starting with Backups First

When I’m building a SQL Server to hold multiple terabytes of databases, this backup question is the very first one we have to address – even before we talk about the speed of end user queries.

Otherwise, we could end up designing a server with all local solid state drives, which is very inexpensive and satisfies end user performance goals – but we can’t back the data up fast enough.

Comparing Estimated and Actual Execution Plans in SQL Server

Let’s say you want to have an addition built on your house. You contact a contractor, who comes to your house, looks at your lot, looks at what exists, and asks you questions about exactly what you want. He then gives you an estimate of the work – approximately how long he thinks it will take his crew, what supplies will be needed, and the cost of those supplies.

SQL Server gives you a similar option. When you write a query, before committing to it, you can request an estimated execution plan.

The estimated execution plan is designed to show what SQL Server would most likely do if it were to execute the query. Using statistics, it estimates how many rows may be returned from each table. It chooses the operators it will use to retrieve the data – scans or seeks. It decides how to best join the tables together – nested loops, merge joins, hash joins. It is a reasonably accurate guide to what SQL Server will do.

You can view an estimated execution plan for an individual statement, a stored procedure, or a function. The plan can be text, XML, or graphical. To view text, use SET SHOWPLAN_TEXT ON. For an XML version, use the command SET SHOWPLAN_XML ON. To view a graphical plan in SSMS, click the Display Estimated Execution Plan button in SSMS.
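For example, against a placeholder table (no rows are touched – the query is compiled but never executed):

    -- Estimated plan as text
    SET SHOWPLAN_TEXT ON;
    GO
    SELECT FirstName, LastName
    FROM dbo.Customers              -- placeholder table
    WHERE LastName = N'Ozar';
    GO
    SET SHOWPLAN_TEXT OFF;
    GO

    -- Estimated plan as XML instead
    SET SHOWPLAN_XML ON;
    GO
    SELECT FirstName, LastName
    FROM dbo.Customers
    WHERE LastName = N'Ozar';
    GO
    SET SHOWPLAN_XML OFF;
    GO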


There are some cases in which SQL Server can’t create an estimated execution plan. If your query has parameters in it and values aren’t passed in, SQL Server can’t interpret those – it needs literal values. If the query references a temp table that is not declared, the plan also can’t be generated.

Now, let’s go back to our construction project. We’ve signed the contract and the contractor begins work. While the addition is being framed, you decide you want to add an extra room, or add windows, or make the ceilings higher. The contractor has to adjust for this in terms of time and cost. This will change how long the project takes, and how expensive it is.

Executing a query in SQL Server is no different. The actual execution plan is shown after a query is executed. The difference here is that SQL Server can tell you exactly how many reads were performed, how many rows were read, and what joins were performed.

If it’s a long-running query, it will take a while to get the execution plan. Parameters, if required, must be passed in.

The text plan is generated using SET STATISTICS PROFILE ON. The XML version of the actual plan can be viewed by using SET STATISTICS XML ON. A graphical version can be generated in SSMS by using the Include Actual Execution Plan button.
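The same placeholder query, this time actually executed so the plan includes real row counts:

    -- Actual plan as text, with per-operator actual row counts
    SET STATISTICS PROFILE ON;
    GO
    SELECT FirstName, LastName
    FROM dbo.Customers              -- placeholder table
    WHERE LastName = N'Ozar';
    GO
    SET STATISTICS PROFILE OFF;
    GO

    -- Actual plan as XML
    SET STATISTICS XML ON;
    GO
    SELECT FirstName, LastName
    FROM dbo.Customers
    WHERE LastName = N'Ozar';
    GO
    SET STATISTICS XML OFF;
    GO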


There are some cases in which things that show up in the estimated plan will not show in the actual plan. For example, when you call a scalar-value function, the estimated plan will show it – the actual plan will not. (This is why the impact of functions can be very misunderstood.)

“IF I VIEW AN ESTIMATED PLAN, THEN IMMEDIATELY RUN THE QUERY AND VIEW THE ACTUAL EXECUTION PLAN, WHY DO I SEE DIFFERENCES?”

The query optimizer is going to use statistics on the tables and indexes to decide how to perform the actual query execution. If statistics change for any reason between the time you estimate a plan and when the query is actually run, you can see differences.

Changes to table schema, indexes, or even the data can affect the statistics. If a new index is added, or rows are updated in the table, when the query optimizer executes the query, it could choose a different set of operators than it did during estimation. Sometimes the differences between the estimated and actual plans can be large!

How can you prevent this from being a problem? Make sure statistics are updated on your tables and indexes. Auto update stats will automatically refresh statistics if a specific number of rows in a table change – after a table reaches 500 rows, roughly 20% of the rows need to change. (Exact details about that are here http://support.microsoft.com/kb/195565/en-us). The more rows that your table contains, the more changes that need to be made for them to automatically refresh – on large tables, you may need to set up more frequent stats updates.
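If you want to check how stale statistics are and refresh them yourself, here’s a quick sketch (the table name is a placeholder):

    -- When were the statistics on this table last updated?
    SELECT  s.name AS stats_name,
            STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM    sys.stats AS s
    WHERE   s.object_id = OBJECT_ID(N'dbo.Customers');   -- placeholder table

    -- Refresh them ahead of the automatic threshold on a big table
    UPDATE STATISTICS dbo.Customers WITH FULLSCAN;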

You also want to be aware that row estimates for table variables can be wildly inaccurate on large result sets – SQL Server always estimates a low number of rows for them.
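A quick way to see that for yourself – run both halves with the actual execution plan turned on and compare estimated versus actual rows on the final SELECTs:

    -- Table variable: the optimizer guesses a tiny number of rows
    DECLARE @t TABLE (id INT);
    INSERT @t (id) SELECT object_id FROM sys.all_objects;
    SELECT * FROM @t;

    -- Temp table: statistics exist, so the estimate is much closer to reality
    CREATE TABLE #t (id INT);
    INSERT #t (id) SELECT object_id FROM sys.all_objects;
    SELECT * FROM #t;
    DROP TABLE #t;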

TRIVIA: WHICH TYPE OF PLAN IS STORED IN THE PLAN CACHE?

The estimated plan is stored in the plan cache. If you review the XML (doesn’t that sound like fun?!), you will see “ParameterCompiledValue” listed near the end. This is the value the query was compiled with when the plan was stored. Ensuing executions may use different values, which can lead to less-than-optimal performance if bad parameter sniffing happens.
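If you want to go spelunking for those compiled values yourself, one way to start is pulling cached plans out of the DMVs and opening the XML – the text filter below is just a placeholder:

    -- Find cached plans for a query, then inspect ParameterCompiledValue in the XML
    SELECT TOP (20)
            cp.usecounts,
            st.text,
            qp.query_plan
    FROM    sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE   st.text LIKE N'%Customers%'      -- placeholder search string
    ORDER BY cp.usecounts DESC;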

An example of compiled values in the execution plan XML

Estimated execution plans can be very useful as you are writing and tuning queries, giving you an idea of how SQL Server will most likely perform query execution. However, if you need to know exactly what steps SQL Server will take, executing the query and reviewing the actual execution plan is the only way to be certain.

Learn More in Our Execution Plan Training

Our How to Read Execution Plans Training explains how to get an execution plan, how to read the operators, and learn solutions to common query problems.

Performance Tuning SQL Server Transactional Replication: A Checklist

Swim faster!

SQL Server transactional replication is a blessing and a curse. It’s a great developer tool that lets you scale out data to multiple servers, even using Standard Edition. But as your business picks up, your datasets get larger, and your customers grow more demanding, replication can start to fall behind. You need to learn how to tune it to keep up.

Before You Start Tuning Replication…

Make sure you can prove when changes you make to replication improve performance. Or find out quickly if you make something worse. (It’s gonna happen.)

If I’m going to make changes with something as complex as replication, here are my basic requirements:

  • Monitoring must alert the DBA team when replication latency exceeds allowed thresholds
  • Monitoring needs to track historical latency to show if my changes reduce latency
  • I need a production-like staging environment to test my changes.

If you haven’t configured monitoring for transactional replication, read how to do it here. The “easy” and “medium” steps are a small amount of work and are incredibly useful.

Don’t Skip “Normal” SQL Server Performance Tuning!

I’m going to give you a lot of transactional replication specific performance tuning tips in this post. But don’t skip other elements of SQL Server performance tuning! Wait statistics, virtual file stats, and identifying bottlenecks are still important. Get started with SQL Server performance tuning here.

1) Are You Using the Right Versions of Windows and SQL Server for Replication?

For replication performance, you want to be on Windows Server 2008 and SQL Server 2008 minimum. It really makes a difference.

2) Have You Scaled Up Your Distributor?

When replication performance is important, use a dedicated distributor server so that your distributor doesn’t have to fight with a publisher or subscriber for CPU, memory, network, or storage resources. If you need high availability for the distribution database, you have limited options: failover clustering is pretty much the only way to go.

3) Is Replication Really What You Need?

Check your assumptions. Could you get by with Transaction Log Shipping? Are you a good candidate for Availability Groups?

4) Are You Using the Right Type of Subscriptions and Tools for Replication over the WAN?

Argenis Fernandez shares what he learned from tuning transactional replication over wide area networks. This is a great use of “pull” subscriptions.

5) Have You Made Sure Articles Are NOT in More Than One Publication?

Kendal Van Dyke shows that having articles in multiple publications can also magnify the number of commands in your distribution database. That bloats your distribution database and will slow you way down as activity picks up.

(Note: If you’re using row-filtering on your articles, you may be the exception to this rule.)

6) Do You Only Run Snapshots When You Need Them?

I’ve come across cases where the Snapshot agents for a publication were set to run on a schedule, even when replication wasn’t being initialized. I believe that when someone was setting replication up, they had checked off the option to “Schedule the Snapshot Agent to run at the following times” without realizing that it wasn’t needed. Don’t run snapshots on a schedule – it will lock up the publishing database. (When you open “Job Activity Monitor”, these jobs show up with the category “REPL-Snapshot”.)
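One way to spot scheduled Snapshot Agent jobs is to query msdb for that category – a sketch:

    -- Snapshot Agent jobs and whether they have schedules attached
    SELECT  j.name AS job_name,
            j.enabled,
            s.name AS schedule_name
    FROM    msdb.dbo.sysjobs AS j
    JOIN    msdb.dbo.syscategories AS c ON c.category_id = j.category_id
    LEFT JOIN msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
    LEFT JOIN msdb.dbo.sysschedules AS s ON s.schedule_id = js.schedule_id
    WHERE   c.name = N'REPL-Snapshot';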

7) Are You Using “Immediate Sync” to Your Advantage?

The immediate sync option is hard to spot when you first set up replication. Setting this to false can help minimize the impact of running a replication snapshot if you need to add new articles, or even remove and re-add a few articles. Learn more about it here. As always, test your changes outside of production first! (I personally have a fear of having immediate sync set to true because of this old bug from SQL Server 2005.)

For information on how immediate_sync can also impact your distribution database, read more here.
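Here’s a rough sketch of checking and changing the setting – the publication name is a placeholder, allow_anonymous generally has to be turned off first, and as always, test it outside production:

    -- Run in the publication database: shows immediate_sync and allow_anonymous
    EXEC sp_helppublication @publication = N'MyPublication';

    -- immediate_sync usually requires allow_anonymous to be turned off first
    EXEC sp_changepublication
         @publication = N'MyPublication',
         @property    = N'allow_anonymous',
         @value       = N'false';

    EXEC sp_changepublication
         @publication = N'MyPublication',
         @property    = N'immediate_sync',
         @value       = N'false';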

Thanks to Allen McGuire for his comment reminding us of the benefits of this setting!

8) Are You Replicating Only the Articles and Columns You Need on the Subscriber?

Don’t just “add all.” For scalability, replicate only the articles that must be in replication, and only the columns that need to be replicated. This not only helps overall performance, it also reduces the impact when you need to re-initialize replication. This goes especially for large object / LOB data types. If you must replicate LOB types, Microsoft recommends that you use newer types such as nvarchar(max), varbinary(max), etc.

9) Do You Set the ‘Replicate Schema Changes’ Subscription Option to False When Needed?

New columns being added to a published article shouldn’t be replicated to the subscriber unless they really need to be there. You can turn off the replication of schema changes by setting the ‘Replicate Schema Changes’ subscription option to ‘false’. (It defaults to ‘true’.)
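In T-SQL terms, this maps to the publication’s replicate_ddl property – a sketch, with a placeholder publication name:

    -- Run in the publication database: stop replicating DDL (schema) changes
    EXEC sp_changepublication
         @publication = N'MyPublication',   -- placeholder name
         @property    = N'replicate_ddl',
         @value       = 0;                  -- 1 re-enables it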

10) Have You Considered Static Row Filters?

“Static row filters” allow you to include only certain rows in a given publication. There is overhead to applying the row filter itself: Microsoft recommends you only use row filters if your replication setup can’t handle replicating the full dataset.

11) Have You Optimized Your Subscriber Databases for Re-initialization?

Face it: re-initialization happens. Unfortunately, it usually happens when you least expect it and have plans to be doing something else. There are a few things you can do to keep re-initialization from making your publication database unusable for long periods.

  • Isolate your replication subscriber tables into their own database, and only keep replicated articles in there. This also typically helps you use recovery models that are optimized for minimal logging in that database to speed up bulk inserts. Consider using synonyms to quickly “repoint” to replicated articles to give you flexibility.
  • Evaluate whether initializing replication from backup could help.

12) Have You Considered Using Multiple Publications?

There are pros and cons to splitting out publications. Here are the pros:

  • You can isolate large tables that are the biggest problems for snapshotting into their own publications so that they get their own snapshots. That can be helpful if there are other tables you might need to remove and re-add to replication more frequently. (The immediate_sync setting can also help with this, see #7 above.)
  • This will give you multiple distribution agents so changes can be applied to your subscribers in parallel. (This one’s a pro and a con, as you’ll see.)

Cons:

  • This is more work to manage. You should be checking scripts for your entire replication setup into source control and documenting everything. More publications make that whole process harder.
  • All those distribution agents can backfire if you don’t have the resources to support them working on the subscriber at the same time.
  • Be mindful not to put non row-filtered articles in more than one publication as noted above.

13) Are “Subscription Streams” Right for You? (or not?)

This option allows you to raise the number of connections that the distribution agents use to apply changes to the subscriber. But there’s overhead to managing all these threads, and you can get into situations where transactions aren’t fully consistent if you hit problems.

This feature is primarily recommended for use in situations where you have high network latency and are not changing the setting often. Keep in mind that if you’re splitting your articles into multiple publications for other reasons, you’ve already got multiple distribution agents running in parallel.

14) Are You Replicating Nonclustered Indexes Blindly?

Confession: I did this wrong for years. It’s very easy to set up transactional replication and send all the indexes over to the subscriber: you just set “Copy nonclustered indexes” in the article properties to “true”. But you’re only required to replicate the Primary Key and unique indexes. There are two big problems with replicating all the nonclustered indexes:

  • It can make re-initialization slower. By default the subscriber will have objects created, bulk load the articles, create “extra” nonclustered indexes, then “catch up” on any changes that came in after the snapshot was pushed. You definitely want to make sure that all “extra” nonclustered indexes are disabled or don’t exist while that bulk load is happening. But if a lot may have changed since the snapshot ran, you may not want the indexes to be created until the very end, anyway. Handling the nonclustered index creation outside of replication gives you that flexibility.
  • It’s very rare for the exact same queries to run on the publisher and subscriber. You usually want nonclustered indexes that are specific to the workload on the subscriber, anyway.

Identify the “extra” nonclustered indexes specific to the queries that run on the subscriber. Script them out, check them into your source control, and have a process to deploy them whenever replication needs to be re-initialized.
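A starting point for finding those “extra” indexes on the subscriber (anything that isn’t a primary key or unique index) is a query along these lines:

    -- Nonclustered indexes that are neither primary keys nor unique constraints:
    -- candidates to script out and manage outside of replication
    SELECT  OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
            OBJECT_NAME(i.object_id)        AS table_name,
            i.name                          AS index_name
    FROM    sys.indexes AS i
    WHERE   i.type = 2                 -- nonclustered
      AND   i.is_primary_key = 0
      AND   i.is_unique = 0
      AND   OBJECTPROPERTY(i.object_id, 'IsMSShipped') = 0
    ORDER BY schema_name, table_name, index_name;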

15) Could Publishing Stored Procedure Execution Work?

If your workload is run entirely by stored procedures, this can be a great option. (FYI, there is a bug/hotfix for this in SQL Server 2012/2014 listed below.)

16) Are You Using Read Committed Snapshot Isolation on Replication Subscribers?

It’s common for the distribution agent to have to fight with other processes while it tries to insert, update, and delete rows in the subscriber database. One DBA that I worked with removed a huge amount of blocking and sped up processing by using RCSI on a subscriber database. Learn more about RCSI here.

17) Have You Ruled Out Virtual Log File Fragmentation on the Publisher Database?

If you’ve got more than 10K virtual log files on your publication database, it could slow down replication. I’ve particularly seen this cause replication to get behind when a large operation like an index rebuild was run. Our free sp_Blitz® script will diagnose high numbers of VLFs for you.
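If you’d rather count VLFs by hand, DBCC LOGINFO returns one row per virtual log file in the current database – this sketch assumes the SQL Server 2012/2014 column list, so adjust for other versions:

    -- Count VLFs in the publication database (one DBCC LOGINFO row per VLF)
    CREATE TABLE #vlf
    (
        RecoveryUnitId BIGINT,   -- column added in SQL Server 2012
        FileId         BIGINT,
        FileSize       BIGINT,
        StartOffset    BIGINT,
        FSeqNo         BIGINT,
        [Status]       BIGINT,
        Parity         BIGINT,
        CreateLSN      NUMERIC(38, 0)
    );

    INSERT #vlf
    EXEC (N'DBCC LOGINFO');

    SELECT COUNT(*) AS vlf_count FROM #vlf;

    DROP TABLE #vlf;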

18) You Haven’t Been Randomly Fiddling with Settings on the Distribution Agent, Have You?

There are a lot of little settings you can change on the Agent Profiles in replication. I’m not a huge fan of changing them unless you can prove they helped your performance, though. These are typically “fine tuning” settings after you have the right architecture in place, in my experience.

19) Have You Looked Out for Replication Bugs / Fixes?

Man, these can get you. Like any other complicated tool, things can go wrong. Here are a few highlights:

  • KB 2674882 – Deadlocking distribution agents OR replication queries with very high memory grants. This can occur on SQL Server 2005, 2008, or 2008R2.
  • Unexpectedly inactive subscriptions. There are many performance reasons to upgrade from SQL Server 2005, but if you must be on it then you shouldn’t run anything less than SP4.
  • KB 2958429 – Service Packs Matter. SQL Server 2012 SP2 added some features to replication logging (and even a few improvements to Peer to Peer replication, oddly enough). Scroll to “Highlights for SQL Server 2012 SP2” and expand “Replication” to see the list. If you apply this service pack, you may also want to apply KB 2969896.
  • KB 2897221 – Stack dumps/ non-yielding schedulers if you’re replicating stored procedures. SQL Server 2012 or 2014.
  • KB 949296 – Replication Agents cannot run when you have many agents and the Windows desktop heap is “used up”. (Thanks to Michael Bourgon for suggesting we link to this one.)
  • The Log Reader Agent may fail to start on SQL Server 2012 Service Pack 1.
  • KB 2655789 – FIX: Replication Log Reader Agent fails with “18773” and “18805” error messages if you set the ‘large value types out of row’ table option to ON for the publication article table.

Public Safety Announcement: Replication Needs Special Treatment for Hotfixes and Upgrades

With any hotfixes, it’s always good to review KB 941232, “How to apply a hotfix for SQL Server in a replication topology.” (You’ll need special steps if you’re using Database Mirroring or AlwaysOn Availability Groups.)

Be careful with the steps you take to upgrade and ‘drain’ replicated transactions. If you don’t do this, in some cases you may have big problems during an upgrade.

Got a Transactional Replication Horror Story or Recommendation?

Or do you have a favorite setting I haven’t mentioned here, or something you disagree with? Replication is full of controversy! Share it with us in the comments.

Stabilizing Execution Plans: Plan Guides and NORECOMPUTE

What could go wrong?

Sometimes you end up in a good plan / bad plan situation: an important query runs just fine most of the time. The query is parameterized, a good execution plan gets re-used, everything is cool.

But sometimes, a “bad plan” gets compiled and starts to be reused. This is “bad” parameter sniffing. “Bad plans” can come in a few varieties: maybe it’s slow for some parameter combinations and causes page timeouts sometimes. Maybe the “bad” query plan has a very large workspace memory grant that just isn’t needed, and it causes problems because lots of different queries are using it — then you get all sorts of nasty Resource Semaphore waits and everything gets slow.

Whatever the situation is, sometimes you want to stabilize a particular execution plan that’s “good” for all the different parameters that the query can run with.

Option 1: Change the code

The very best option is changing the code so you don’t have to resort to shenanigans behind the scenes. You can rewrite the TSQL, change indexes, or use hints to get a specific plan. But sometimes this is difficult to do: maybe it’s vendor code you can’t change. Maybe there’s a long code release process and it will take a very long time to get the code changed.

If you can tune the code, absolutely do it. If you can’t, at least get the request to fix the code noted by the vendor or software development team. Don’t skip it altogether, because the options I describe below aren’t all that fantastic.

Option 2: Plan guide that thing

Plan guides are like duct tape: it’s something you want to have on hand for emergency quick fixes, but you don’t want to rely on it long term as a building material. It’s also not suited for every kind of fix.

Plan guides let you do a few things:

  • Apply query hints like “optimize for value”, “optimize for unknown”, “recompile”, and “maxdop” to a query (there’s an example plan guide after this list)
  • Selectively turn on trace flags for a query, such as TF 4199 (performance optimizer changes), TF 9481 (older cost-based optimizer if running on SQL Server 2014), TF 2312 (newer cost-based optimizer if running on SQL Server 2014)
  • Add selected table hints, like forceseek and specific index hints. (You cannot add NOLOCK or change isolation levels, don’t get too excited.)
  • “Template” plan guides let you “force parameterize” a specific query, or enforce “simple parameterization” for a certain query if the database is using forced parameterization
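Here’s a sketch of what the query-hint flavor looks like with sp_create_plan_guide – the statement text, parameter definition, and values are all placeholders, and remember the statement text has to match what the application sends exactly, whitespace included:

    -- Pin an OPTIMIZE FOR hint to a parameterized statement (all names are placeholders)
    EXEC sp_create_plan_guide
         @name            = N'PG_Customers_ByRegion',
         @stmt            = N'SELECT CustomerID, Name FROM dbo.Customers WHERE Region = @Region',
         @type            = N'SQL',
         @module_or_batch = NULL,
         @params          = N'@Region int',
         @hints           = N'OPTION (OPTIMIZE FOR (@Region = 42))';

    -- Confirm it exists and is enabled
    SELECT name, is_disabled, scope_type_desc
    FROM sys.plan_guides;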

But duct tape isn’t perfect. Here are the biggest gotchas that I’ve found with plan guides:

  • I’ve found that trying to use an index hint in a plan guide can cause queries to silently fail. That’s awkward.
  • Plan guides don’t work with all types of queries. I haven’t been able to get them to work with temporary tables or table variables referenced in the query, for example.
  • Making sure that a plan guide is working and is picking up your query is tricky. Plan guides are very picky about matching query text exactly.
  • Plan guides can make code releases fail. If you’ve got a plan guide that references a stored procedure and something goes to alter it, SQL Server’s going to stop the ALTER with an error.

My biggest advice for plan guides: test them out on a non production system first. Verify that the plan guide is working and that the queries are doing exactly what you want before deploying to production. Treat the plan guide like real code as much as possible: put it into all environments, check it into source, use change control, and document it.

Option 3: Manually Force the “Right” Parameter Sniffing

If you can’t change the code and a plan guide doesn’t work, you can get a little creative. (By “get a little creative”, I mean that everything I’m about to describe can go horribly wrong.)

First, find the bad plan in cache. In SQL Server 2008 and higher, we get a nifty plan_hash for query plans. You can identify the “bad plan hash” that you don’t want to keep in cache. When it’s in cache, you then:

  1. Remove it from the cache. You can do this using DBCC FREEPROCCACHE and the plan_handle value (you can get this by running: sp_BlitzCache @results=’expert’). Or if it’s a stored procedure, you can use the sp_recompile procedure against the stored procedure to blow away the plan. (Thanks @DBArgenis for this tip!)
  2. Run a version of the query that puts the plan you want to be in cache. Usually this means running a version of the query with the parameters that give you the “right” plan.

You’ve got the burden of doing some testing. Be careful with how you remove plans from cache: sp_recompile requires a schema level (exclusive) lock, so I don’t recommend running that against tables.

You need to generate the plan in a way that’s going to be re-used and make sure it works. You need to make sure that the plan you’re putting in cache really is good for re-use!
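Here’s a sketch of the find-and-evict part – the text filter is a placeholder, and the plan_handle literal is fake, so paste in the real one from the first query:

    -- Find the plan you want to evict and grab its plan_handle
    SELECT  qs.plan_handle,
            qs.query_plan_hash,
            st.text
    FROM    sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE   st.text LIKE N'%Customers%';   -- placeholder filter

    -- Evict just that one plan (replace with the real plan_handle from above)
    DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);

    -- Then immediately run the query with the "good" parameter values
    -- so the plan you want gets compiled and cached.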

Stabilizing execution plans with NORECOMPUTE

NORECOMPUTE – flimsy tape

If this doesn’t sound quite crazy enough for you, you can go a little farther and try to increase the chances of your “good” plan staying in cache longer. One tool you can use for this is NORECOMPUTE.

When you update statistics on a table with NORECOMPUTE, you tell SQL Server not to automatically update statistics on that table as data changes. Normally, that automatic update happens when approximately 20% of the rows in the table have been modified (the algorithm is more complicated, but 20% is pretty easy to remember), and updated statistics will cause the optimizer to consider a new execution plan for your query. So NORECOMPUTE reduces the chance of the “good” plan being bumped out.

If you use NORECOMPUTE, be aware that it could have a negative effect on some queries by giving them terrible row estimates. You probably want to manually update statistics for the table at least once a day if data changes in it. You can do this using a built-in command like sp_updatestats, custom code you write yourself, or Ola Hallengren’s free index maintenance scripts (see Examples B & D for two options). Just please don’t do it with a maintenance plan.
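The pattern looks something like this (the table name is a placeholder, and keep the flag on every manual refresh if you want the behavior to stick):

    -- Tell SQL Server not to auto-update statistics on this table as data changes,
    -- and refresh them manually instead (e.g. from a nightly Agent job)
    UPDATE STATISTICS dbo.Customers WITH FULLSCAN, NORECOMPUTE;   -- placeholder table

    -- Note: running UPDATE STATISTICS later without NORECOMPUTE
    -- quietly re-enables automatic updates on those statistics.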

Document the heck out of this. It’s easy for people to have no idea it’s in place, or find it and change it without knowing what it impacts. If plan guides are duct tape, this is more like Scotch Tape.

These Are Emergency Fixes – Don’t Start Your Performance Tuning Process With These Techniques

A lot of times you don’t need any of this crazy stuff. Remember: many times you can fix these issues with good indexing or simple code changes. Safety first!

Learn More in Our Execution Plan Training

Our How to Read Execution Plans Training explains how to get an execution plan, how to read the operators, and learn solutions to common query problems.

Oracle Terminology for the SQL Server DBA

The translation team hard at work.

At some point you’re going to be confronted with an Oracle installation or even just an Oracle DBA. Communicating with a DBA who works on a different product can be difficult – it’s like speaking US English and having a conversation with a native English speaker from Scotland. The words are the same, but they have different meanings.

While this is by no means an exhaustive list, it will help SQL Server DBAs have a meaningful conversation with their Oracle colleagues.

Database
Oracle refers to the database as the data files on disk that store data.

Database instance
The set of memory structures and system processes that manage database files. Basically, the instance is executables and memory. Oracle has some different terms to separate out plan cache, buffer pool, and other concepts. But at a high level, executables and memory make a database instance.

So far things seem the same. Up until Oracle 12c, though, these two concepts were close to one and the same – one instance of Oracle housed one database (things like Oracle RAC not included). One thing to take note of – Oracle on Windows runs within one process, just like SQL Server. On Linux, however, there will be multiple Oracle processes, each with a clearly defined purpose.

Tablespace
A tablespace is roughly analogous to a filegroup. You can create tables and indexes inside a tablespace. Like a filegroup, you can take tablespace backups separate from the rest of the database.

Unlike SQL Server, each tablespace can have many different options – some tablespaces can be logged while others are not. During tablespace creation, DBAs can manage a variety of features of each tablespace including a separate undo tablespace (see below), per user disk quotas, logging, or even on-disk block size (this can be helpful when dealing with LOB data).

In short, Oracle DBAs can customize database behavior at the tablespace level as well as at the database level. This can be useful for controlling archive data performance, blobs, or managing other aspects of storage.

This tablespace is vast and untouched.

Default tablespace
Every user is created with a default tablespace. The default tablespace defines where that user’s tables and indexes will be created unless a different location is specified. This is like setting up a default filegroup, but it can be set per user instead of per database, and it provides finer-grained control. A default tablespace is not like a default schema in SQL Server – users can create objects with different schemas inside their default tablespace. This isn’t related to object ownership like schemas in SQL Server – it’s related to object placement on disk.

Temporary tablespace
You know how SQL Server has one tempdb? Within Oracle, database administrators can specify a different temporary work space on a user by user basis. Fast OLTP workloads can have access to SSD temporary tablespace while data warehouse queries and ETL jobs can have their own temporary tablespace that uses rotational disks. Heck, you could even allocate PCI-Express storage for executives’ temporary tablespace if they needed lightning fast joins or just wanted to foot the bill for PCI-Express storage.

Undo tablespace
Oracle uses MVCC by default (in SQL Server you’d call it READ COMMITTED SNAPSHOT ISOLATION). Row versions have to be stored somewhere, but there’s no tempdb. The undo tablespace is used to track changes that have been made and to put the database back into a consistent state if a transaction is rolled back. Although it is possible to create multiple undo tablespaces, only one undo tablespace will be used by any single Oracle instance at a time.

If only one undo tablespace can be active per Oracle instance, why have multiple undo tablespaces? Oracle RAC can contain multiple Oracle instances reading the same database. Each of the Oracle RAC instances can have a separate undo tablespace. If this sounds confusing, don’t worry – Oracle RAC is complex and deserves a separate blog post.

Rollback
Once upon a time, Oracle DBAs had to configure the undo tablespace by hand. This was called the rollback segment. Poorly configured rollback segments led to “snapshot too old” errors and grumpy DBAs. If you ever encounter Oracle using a rollback segment, kindly ask the DBA why they aren’t using automatic rollback management (undo tablespaces).

Redo log files
It’s a transaction log file! A key Oracle difference is that everything gets logged, even the undo information. Redo log files are used just like SQL Server transaction log files.

Like SQL Server’s transaction log, Oracle can have multiple redo log files. These log files are written to in a circular fashion – the log files are written to in order and, when all log files are full, Oracle will circle around to the beginning again.

Unlike SQL Server, having multiple redo log files is the preferred way to manage Oracle logging. By default, there are two groups of redo log files, but this can and should be configured, based on RPO/RTO needs.

Archived redo log files
These are redo log files that have been backed up. There are a number of ways to have Oracle manage creating backups of redo log files, varying from manual to completely automated. If the disks storing these files fill up, Oracle will not be able to write to the data files – active redo log files can’t be archived any more. To ensure safety, writes are stopped.

Temporary tables
Oracle temporary tables are similar to SQL Server’s with one major exception – they’re statically defined. Even though an Oracle temp table definition will stick around until dropped, the data only persists for the duration of a session (or transaction if the table is configured that way).

The data inside a temporary table exists only for the current session – you can’t view data in another session’s temp table. The upside is that temp table metadata is always available for other users to query.

Backups
Oracle backups are very different from SQL Server backups – they’re both simpler and more complex than SQL Server backups at the same time. Many Oracle shops use a tool called Oracle Recovery Manager (RMAN) to handle database and redo log backups, archival, and even the expiration and deletion of backup files.

Our Senior DBA Training Class: Attendee Feedback

When we finish our courses, we ask attendees what they thought. Here’s what they said about our How to Be a Senior DBA class in Chicago:

“I’ve attended many trainings.  This is the most valuable I’ve ever attended.” – Tim Costello, Consultant

“ROI for training is always a gamble.  However, I felt like I received my money’s worth before lunch on the first day.  Outstanding!” – Zach Eagle, DBA

“The single best database class I’ve ever attended.  I feel challenged to step up not only my DBA game, but my presentation & training game as well.” – Ben Bausili, Consultant

Goat and Jedi not always included. But sometimes. Like in Chicago.

“I would tell people that this is the perfect course to take if you want to gain control of your environment and impress management.” – George Larkin, DBA

“This course is very in-depth and helpful to get information on the things you need to do in order to take your DBA skills to the next level.  The course is a bit overwhelming at times, but you need to have all of this detail.  Thanks to Brent and his team for answering any questions that came up and I would definitely recommend this course to others.” – Mike Hewitt, Software Engineer/DBA

“Outstanding training!  I knew very little about SANs & SSD.  I am now confident that I can troubleshoot on a higher level to determine if storage is ever our problem.  Thanks Ozar group!  You guys rock!” – Judy Beam, DBA

“Excellent format, content and team of presenters.  More like a conversation between peers than a boring lecture.  Fun and very meaningful.  You have a great team!” – Greg Noel, COO/CIO

“I came into the class with unfairly high expectations; Brent and Kendra somehow managed to exceed them.  Thank you guys!” – Ben Wyatt, DBA/Consultant

“How to be a Senior DBA is a consistently delivered training regimen that gives you useful tools and knowledge in a digestible and fun format.” – Luther Rochester, DBA

“Brent & the gang are approachable and easy to talk to.  They break down complex subject matter to easier to understand presentations.” – Kevin Murphy, Data Architect

“Far and away better than the MS courses.  The Ozar team knows their stuff and doesn’t pretend to know while they Google the answer on a break.” – Jon Worthy, IT

“This class is a must for every DBA.  All modules are well prepared & presented by two very well-respected SQL gurus in the community.  All questions & demos are answered & illustrated clearly.  I’ve learned a lot!” – Chai W., DBA

What will you say? Sign up now for the next one in Philadelphia this September or our Make SQL Server Apps Go Faster class in Seattle.

New sp_Blitz® v35 Checks for Unsupported and Dangerous SQL Server Builds

News broke recently of a dangerous data loss bug in SQL Server 2012 and 2014, and Aaron Bertrand explained which patch levels are affected. It’s kinda tricky, and I’m afraid most people aren’t even going to know about the bug – let alone whether or not they’re on a bad version.

I added build number checking into the latest version of sp_Blitz® so that you can just run it. If your build has the dangerous data loss bug, you’ll get a priority 20 warning about a dangerous build. If you’re running an out-of-support version, like SQL 2012 RTM with no service packs, you’ll get a priority 20 warning about that as well.

Other changes in the last couple of versions:

Changes in v35 – June 18, 2014:

  • John Hill fixed a bug in check 134 looking for deadlocks.
  • Robert Virag improved check 19 looking for replication subscribers.
  • Russell Hart improved check 34 to avoid blocking during restores.
  • Added check 126 for priority boost enabled. It was always in the non-default configurations check, but this one is so bad we called it out.
  • Added checks 128 and 129 for unsupported builds of SQL Server.
  • Added check 127 for unnecessary backups of ReportServerTempDB.
  • Changed fill factor threshold to <80% to match sp_BlitzIndex.

Changes in v34 – April 2, 2014:

  • Jason Pritchard fixed a bug in the plan cache analysis that did not return results when analyzing for high logical reads.
  • Kirby Richter @SqlKirby fixed a bug in check 75 (t-log sizes) that failed on really big transaction log files. (Not even gonna say how big.)
  • Oleg Ivashov improved check 94 (jobs without failure emails) to exclude SSRS jobs.
  • Added @SummaryMode parameter to return only one result set per finding.
  • Added check 124 for Performance: Deadlocks Happening Daily. Looks for more than 10 deadlocks per day.
  • Moved check 121 for Performance: Serializable Locking to be lower priority (down to 100 from 10) and only triggers when more than 10 minutes of the wait have happened since startup.
  • Changed checks 107-109 for Poison Waits to have higher thresholds, now looking at more than 5 seconds per hour of server uptime. Been up for 10 hours, we look for 50 seconds, that kind of thing.

Download the latest sp_Blitz®, or head over to the introduction page. Enjoy!

Monitoring SQL Server Transactional Replication

“It will catch up eventually. I hope.”

It was the best of times, it was the worst of times. I was a SQL Server DBA, and if something went wrong in Transactional Replication I needed to find out about it right away and help keep things healthy, day or night. Here’s what I learned from that experience about monitoring replication.

If you’re just getting started and need an introduction to transactional replication, head over here.

Tracer Tokens Aren’t Really Your Friend

“Tracer Tokens” were introduced in SQL Server 2005. They sound awfully good. Books Online explains that you can automate them using sys.sp_posttracertoken and report on them using sp_helptracertokenhistory.

There’s a big problem: tracer tokens are too patient.

Let’s say my replication is incredibly overwhelmed and I send out a tracer token. I won’t hear back until it reaches its destination or definitively fails. That could be a very, very long time. The fact that it’s potentially unknown means I don’t want to rely heavily on it for monitoring.

Don’t Rely Too Much on Replication Monitor (REPLMON.exe)

When replication is behind, it’s natural to turn to Replication Monitor. The first five links in “Monitoring Replication” in Books Online point to it, after all.

Replication Monitor isn’t all bad. But don’t depend on it too much, either.

  • Replication Monitor is a tool to help you answer the question “how are things doing right now?” It doesn’t baseline or give the kind of historical info that your manager wants to see.
  • Replication Monitor may run queries to count the number of undistributed commands, and those queries can take a while to run and be performance intensive (particularly when things get backed up in the distributor).

I’ve personally seen some cases where running more than one instance of Replication Monitor while a publication snapshot was being taken also caused blocking. Too many people checking to see “how much longer will this take?” actually caused things to take longer. It’s not just me – Microsoft recommends you avoid running multiple instances of Replication Monitor.

ReplMon protip: You can disable automatic refreshing for the Replication Monitor UI, and just refresh the data when you need it. More info in Books Online here. (Thanks to John Samson for this tip.)

Replication Monitor is useful, but you’re better off if people can get information on replication health without everyone having to run Replmon. You can do this fairly easily by using simpler tools to create dashboards to chart replication latency.

Easy Replication Monitoring: Alert on Latency with Canary Tables

It’s easy to build your own system for tracking replication latency for each publication. Here are the ingredients for the simplest version:

  • Add a table named dbo.Canary_PubName to each publication
  • dbo.Canary_PubName has a single row with a datetime column in it
  • A SQL Server Agent job on the publisher updates the datetime to the current timestamp every minute
  • A SQL Server Agent job on the subscriber checks dbo.Canary_PubName every minute and alerts if the difference between the current time and the timestamp is greater than N minutes

It’s easy to extend this into a simple dashboard using a third-party monitoring tool or SQL Server Reporting Services: poll all the dbo.Canary tables and report on the number of minutes of latency on each server.
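Here’s a minimal sketch of those ingredients with made-up names – how you raise the alert (email, ticket, monitoring tool) is up to you:

    -- On the publisher, in the published database: the canary table (add it to the publication)
    CREATE TABLE dbo.Canary_PubName
    (
        ID          INT NOT NULL CONSTRAINT PK_Canary_PubName PRIMARY KEY,
        LastUpdated DATETIME2(0) NOT NULL
    );
    INSERT dbo.Canary_PubName (ID, LastUpdated) VALUES (1, SYSUTCDATETIME());

    -- Agent job step on the publisher, every minute:
    UPDATE dbo.Canary_PubName SET LastUpdated = SYSUTCDATETIME() WHERE ID = 1;

    -- Agent job step on the subscriber, every minute: how far behind are we?
    DECLARE @latency_minutes INT;
    SELECT @latency_minutes = DATEDIFF(MINUTE, LastUpdated, SYSUTCDATETIME())
    FROM dbo.Canary_PubName
    WHERE ID = 1;

    IF @latency_minutes > 10   -- placeholder threshold; use what the business allows
        RAISERROR (N'Replication latency is %d minutes.', 16, 1, @latency_minutes);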

This simple process gets around the weaknesses of tracer tokens, and also gives you immediate insight into how much latency you have on each subscriber. Bonus: this exact same technique also works well with logshipping and AlwaysOn Availability Groups. Tastes great, less filling.

Medium Replication Monitoring: Notify when Undistributed Commands Rise in the Distribution Database

The distribution database is a special place for Transactional Replication. The log reader agent pulls information on what’s changed from the transaction log of the publication database and translates it into commands that hang out in the distribution database before the changes go out to subscribers.

If you have a lot of data modification occurring on the publisher, you can get a big backlog of commands in the distribution database.

If replication performance is important, set up a SQL Server Agent job on your distribution server to regularly check the amount of undistributed commands with a script like Robert Davis provides here. Have it alert you when the commands go above a given threshold.

Real world example: When I was the DBA for an environment with mission-critical replication, we would warn when undistributed commands rose above 500K and create a severity-1 ticket when they rose above 1 million. We did this after setting up dashboards to baseline replication latency and also baselining the amount of undistributed commands in distribution, so that we knew what our infrastructure could recover from and what might need DBA attention to recover in time.
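If you’d rather not roll the query from scratch, the distribution database also ships with sp_replmonitorsubscriptionpendingcmds, which estimates the pending command count per subscription – all the parameter values below are placeholders:

    USE distribution;   -- assumes the default distribution database name
    GO
    -- Estimated commands not yet delivered to one subscriber, plus a rough
    -- estimate of how long it will take to catch up
    EXEC dbo.sp_replmonitorsubscriptionpendingcmds
         @publisher         = N'MyPublisher',
         @publisher_db      = N'MyPublishedDB',
         @publication       = N'MyPublication',
         @subscriber        = N'MySubscriber',
         @subscriber_db     = N'MySubscriberDB',
         @subscription_type = 0;   -- 0 = push, 1 = pull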

Difficult Replication Monitoring: Alert When Individual Articles are Unhealthy

Here’s where things get tricky. It’s very difficult to prove that all articles in replication are healthy. The steps up to this point have tracked latency for the entire publication and bottlenecks in the distribution database. Things get pretty custom if you need to prove that individual tables are all up to date.

I once had a situation where a code release removed some articles from replication, modified the tables and data significantly, then re-added the articles to replication.

There was an issue with the scripts and one of the articles didn’t get put back into replication properly at the end of the process. Replication was working just fine. No script had explicitly dropped the table from the subscriber, so it just hung out there with stale data. The problem wasn’t discovered for a few days, and it was a bit difficult to track down. Unfortunately, the next week was kind of a downer because a lot of data had to be re-processed after that article was fixed.

Here’s what’s tricky: typically some articles change much more often than others. Monitoring individual articles typically requires baselining “normal” latency per article, then writing custom code that checks each article against the allowed latency. This is significantly more difficult for any large articles that don’t have a “Last Modified Date” style column.

(Disclaimer: in the case that you don’t have a “Last Modified” date on your subscriber, I do not suggest layering Change Tracking on top of the replication subscriber. If you are tempted to do that, first read my post on Performance Tuning Change Tracking, then go through all the steps that you would do if you needed to re-initialize replication or make schema changes on articles. You’ll change your mind by the end.)

Special Cases: The “Desktop Heap” is Used Up

This is a special case for replication. If you have a large number of replication agents on a single server (such as 200 or more), you may run into issues where things just silently stop working due to desktop heap exhaustion. This is an issue that can be hard to identify because the agents just stop working!

Canary tables can help monitor for this, but you’ll need a lot of them since this can happen on an agent-by-agent basis. Read more about fixing desktop heap problem in replication in KB 949296. (Thanks to Michael Bourgon for suggesting we include this.)

Test Your Monitoring out in Staging

The #1 mistake I find with transactional replication is ignoring the staging environment. This is critical to supporting replication and creating effective monitoring for it.

The staging environment isn’t the same thing as development or QA. It’s a place where you have the same number of SQL Server instances as production, and the same replication setup as production. You test changes against staging before they go to production. You can also use it to test replication changes.

Staging is also where you confirm that your replication monitoring works. Data probably doesn’t constantly change in your staging environment, but that’s OK. Use canary tables and get creative to simulate load for test purposes.

Do You Have a Technique for Monitoring Replication Not Listed Here?

Tell us about it in the comments!

Let’s Test Your Backups Together [Video]

You manage SQL Server databases, but you never get the chance to take time out of your busy day to test your backups. You assume that just because the jobs are succeeding, you’ll be able to restore your databases when disaster strikes. Join Brent Ozar as he walks you through several queries of your MSDB backup history tables, checks your RPO and RTO live, and helps you build a recovery strategy for your production databases in this one-hour video.
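If you want a head start before pressing play, here’s a sketch of the kind of msdb history query the video builds on – when was each database last backed up, and how fresh is the newest log backup:

    -- Most recent full (D) and log (L) backups per database, from msdb history
    SELECT  d.name AS database_name,
            MAX(CASE WHEN bs.type = 'D' THEN bs.backup_finish_date END) AS last_full_backup,
            MAX(CASE WHEN bs.type = 'L' THEN bs.backup_finish_date END) AS last_log_backup
    FROM    sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS bs ON bs.database_name = d.name
    WHERE   d.name <> N'tempdb'
    GROUP BY d.name
    ORDER BY d.name;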


Our Sessions at the #SQLPASS Summit 2014

The session list has been published, and we’re excited to say all of us have been selected to speak at the PASS Summit again this year. Here are our sessions:

Are Your Indexes Hurting You or Helping You? – Jes Schultz Borland – Queries need your help! Your mission, should you choose to accept it, is to make great decisions about what indexes are best for your workload. In this session, we’ll review the difference between clustered and nonclustered indexes, show when to use included columns, understand what sargability means, and introduce statistics. You’ll leave this session with the ability to confidently determine why, or why not, SQL Server uses your indexes when executing queries.

Developers: Who Needs a DBA? – Brent Ozar – You store data in SQL Server, but you don’t have enough work to keep a full-time DBA busy. In just one session, you’ll learn the basics of performance troubleshooting, backup, index tuning, and security. Brent Ozar, recovering developer, will teach you the basic care and feeding of a Microsoft SQL Server 2005, 2008, 2012, or 2014 instance and give you scripts to keep you out of trouble.


Dynamic SQL: Build Fast, Flexible Queries – Jeremiah Peschka – Dynamic SQL is a misunderstood and much maligned part of a DBA’s tool kit – it can be used to solve difficult business problems, respond to diverse data needs, and alleviate performance problems. Many DBAs reject dynamic SQL outright as a potential source of SQL injections, being poorly performing, or just for being a hacky solution in general. Not so! Jeremiah Peschka has been making extensive use of dynamic SQL throughout his career to solve a variety of problems. In this session, we’ll be dispelling these misconceptions and demonstrating how dynamic SQL can become a part of every DBA’s tool kit.

From Minutes to Milliseconds: High-Performance SSRS Tuning – Doug Lane – Even though you’re an experienced report developer or administrator, performance tuning for SQL Server Reporting Services still feels as bewildering and hopeless as folding a fitted bed sheet. You’ve made your data sets smaller and timeouts longer, but it’s not enough to remove the slowness dragging down your reporting environment. In this session, you’ll learn how design and configuration choices put pressure on your report server and techniques to relieve that pressure. You’ll see how to configure your Reporting Services databases for speed, streamline your subscription schedules, and use caching for high-demand reports. You’ll also learn some design strategies to lighten your report processing load. If you want to maximize the speed of your Reporting Services environment and minimize the pain of performance tuning, this session is for you.

Lightning Talk: Conquer CXPACKET and Master MAXDOP – Brent Ozar – CXPACKET waits don’t mean you should set MAXDOP = 1. Microsoft Certified Master Brent Ozar will boil it all down and simplify CXPACKET to show you the real problem – and what you should do about it – in one quick 10-minute Lightning Talk.

Why Does SQL Server Keep Asking For This Index? – Kendra Little – SQL Server says you’d really benefit from an index, but you’d like to know why. Kendra Little will give you scripts to find which queries are asking for a specific missing index. You’ll learn to predict how a new index will change read patterns on the table, whether you need to add the exact index SQL Server is requesting, and how to measure performance improvements from your index changes. If you’re comfortable querying SQL Server’s missing index DMVs and looking at an execution plan here and there, this session is for you.

World’s Worst Performance Tuning Techniques – Kendra Little – Could one of your tricks for making queries faster be way off base? Kendra Little is a Microsoft Certified Master in SQL Server and a performance tuning consultant, which means she’s learned lots of lessons from her mistakes. In this session, you will learn how to stop obsessively updating or creating statistics, find alternatives to forcing an index, and deal with an addiction to ‘recompile’ hints.

Pre-Conference Session: psych! We didn’t get picked for an official pre-con this year (I know, right?) so we’re building our own lunar lander. Stay tuned – we’ll get things ironed out in the next week or two so you can make your official plans.
