SQL Server Common Table Expressions

Common table expressions are a feature of SQL that lets a developer create a query that can be referenced multiple times. This feature gives developers another tool to add flexibility or just to simplify code.

Why Common Table Expression?

Why is it called a Common Table Expression (CTE)? Because that’s what the documentation calls it!


No, seriously, that’s it. Different databases have different terms for this feature. That’s what we call it in SQL Server.

What does a CTE do?

A CTE effectively creates a temporary view that a developer can reference multiple times in the underlying query. You can think of the CTE as if it were an inline view.

Here’s a slight re-write of a StackExchange query to find interesting and unanswered questions.

WITH unanswered AS (
    SELECT p.Id
    FROM dbo.Posts p
    WHERE (SELECT COUNT(*)
           FROM dbo.Posts a
           WHERE a.ParentId = p.Id
                 AND a.Score > 0) = 0
          AND CommunityOwnedDate IS NULL
          AND ClosedDate IS NULL
          AND ParentId IS NULL
          AND AcceptedAnswerId IS NULL
)
SELECT TOP 2000 unanswered.Id AS [Post Link] ,
       (COUNT(pt.PostId) /* tag-count term reconstructed; an assumption */
       / 10.0 + u.Reputation
       / 200.0 + p.Score * 100)
       AS Weight
FROM   unanswered
       JOIN dbo.Posts p ON unanswered.Id = p.Id
       JOIN PostTags pt ON pt.PostId = unanswered.Id
       JOIN Users u ON u.Id = p.OwnerUserId
GROUP BY unanswered.Id, u.Reputation, p.Score

The CTE, lines 1 – 12, effectively creates a temporary view that we can use throughout the rest of the query. You can also think of it in the same way that you’d think of a derived table (a join to a subquery).

CTEs make it possible to simplify code, express complex ideas more concisely, or just write code quickly without having to worry as much about structure. They’re a great feature of the SQL language that many people overlook.

If you want to learn more about CTEs, check out our upcoming Advanced Querying and Indexing class in both Chicago and Portland. We’ll cover the basics of CTEs, pitfalls, and performance tuning options.

Comparing SQL Server and Oracle Log Shipping

Both Oracle and SQL Server offer several ways to support disaster recovery scenarios. One of the simplest ways to handle DR in the SQL Server world is to use SQL Server log shipping. Oracle doesn’t offer log shipping as a part of the core product, but it’s possible to set up log shipping in Oracle.

I thought it would be fun to take a look at Oracle and SQL Server log shipping side by side, so let’s take a look!

Log Shipping Set Up

SQL Server wins this one. Reason: there’s a wizard. Even a novice DBA can get log shipping working using the wizard.

Congratulations, Oracle professional, you’re setting this up by hand! Setting up Oracle log shipping for the first time might take you a while, but if you’re at all handy with scripting, you can then port your code to any other Oracle installation and be up and running in a few minutes.

Although SQL Server wins for ease of setup, it’s worth repeating – an Oracle log shipping setup can easily be moved from server to server by deploying scripts and changing one or two variables.

Day to Day Operations

SQL Server’s log shipping adds extra requirements for database administrators. Log shipping sets up its own set of transaction log backup jobs – existing log backup jobs will need to be disabled or deleted. If a DBA takes a log backup outside of the normal log shipping jobs, that can potentially break log shipping. Switching SQL Server into the SIMPLE recovery model will also break log shipping – and yes, this happens more often than you’d think.

Oracle’s log shipping works by copying archived redo log files. There are no extra backup jobs to add. Instead, the log shipping jobs copy archived redo log files from the flash recovery area. Rotating out active redo log files will move them into the archive redo log file area. DBAs can take advantage of their existing backup strategy. It is still possible for an Oracle DBA to break log shipping by using NOARCHIVELOG mode or adding tablespaces or files without adding them on the secondary.

Of course, a DBA can also use the FORCE LOGGING option to prevent users from switching to NOARCHIVELOG mode and breaking the log shipping.

One other note – Oracle log shipping will carry security along to the standby, while SQL Server’s log shipping won’t (server-level logins live outside the database being shipped).

Reversing Log Shipping

In both cases, there’s a lot of work that has to be done when you want to reverse log shipping.

With SQL Server, you can gracefully reverse log shipping by using a “tail of the log” backup, but not a lot of people know how to do this properly and avoid the overhead of re-initializing with a full backup. In the end, people frequently re-initialize log shipping, lose data, or just get very upset until they stumble upon the answer.

To reverse Oracle log shipping, we can use the same set up scripts, but change the name of the primary and standby servers. Depending on how things are set up, it may be possible to rapidly start the Oracle instance using the fast recovery area to bring the standby online and immediately resume log shipping.

If you’ve set up log shipping using robust scripts, the Oracle log shipping approach can be easily reversed. If you haven’t, reversing log shipping poses just as many problems on one system as the other.

Querying the Replica

It’s not uncommon to use log shipping to provide a readable replica. This can be very helpful when users can tolerate stale data for reports, but you have not been able to develop a reporting schema.

With SQL Server, we have to put the secondary into STANDBY mode. As Kendra Little explains in Reporting From a Log Shipping Secondary in STANDBY Mode, putting SQL Server into STANDBY mode is necessary to read from the secondary, but it can make recovery take longer, which could, in some scenarios, put you outside of your recovery point objective.

With Oracle, this process is easy – at any point during log shipping, we just open the database in read only mode using the command ALTER DATABASE OPEN READ ONLY. Users will be able to read up to the last restored transaction. Once it’s time to start restoring data again, you restart the database in recovery mode.
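
As a sketch (assuming script-based log shipping rather than Data Guard managed recovery – the exact steps depend on how your scripts apply logs), the round trip on the standby looks something like this:

```sql
-- On the standby: open for read-only reporting
ALTER DATABASE OPEN READ ONLY;

-- ... reports run here ...

-- When it's time to resume restoring: bounce back into recovery mode
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
RECOVER AUTOMATIC STANDBY DATABASE;
```

Note that SHUTDOWN, STARTUP, and RECOVER are SQL*Plus commands rather than plain SQL.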


Licensing

Licensing is annoying and complex. If I get this wrong, sound off in the comments and I’ll do my best to clarify and correct.

SQL Server licensing may vary depending on when you licensed your SQL Server, but in general you get one “free” standby instance, as long as you don’t read from that instance. You do have to pay for Software Assurance on the log shipping secondary.

Oracle’s licensing is simpler: if it’s running, you pay for it. Also, you pay for support.

Oracle wins through simple licensing. If you’re using Oracle Enterprise Edition, you have many more options for disaster recovery, and much more entertaining ways to pay.

Compression and Encryption

When you’re moving backups across the network, compression can help meet your recovery point objective.

SQL Server can compress backups in the Standard Edition of the product. This can either be enabled as a default SQL Server level setting or in the log shipping jobs.
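
If you want compression everywhere by default, the instance-wide setting is a one-liner (this changes the default for all backups, not just log shipping):

```sql
-- Turn on backup compression by default for the whole instance
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```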

Oracle’s backup compression is only available in Oracle Enterprise Edition or through Oracle’s backup-to-the-cloud feature – Oracle Database Backup Service. However, it’s trivial to leverage in-flight compression when moving files between Oracle instances. In a recent deployment, we used rsync to move files between the primary and standby and enabled the -z flag to get compression.

Starting with SQL Server 2014, SQL Server supports backup encryption. Oracle Standard Edition, on the other hand, does not have backup encryption. In the Oracle world DBAs are far more likely to use SSH, scp, or rsync to move files between servers – all of these support encrypting data transferred between servers.

There’s no clear winner here – both compression and encryption are possible with both products, but they’re built in to SQL Server. The Oracle approach lets system administrators adopt the practices already in use across the enterprise.


Oracle and SQL Server both have log shipping options available. The two are built on different technologies, and each has its own strengths and weaknesses.

  • Set Up: SQL Server – wizard. Oracle – manual, but scriptable.
  • Operations: SQL Server – additional caveats to prevent breaking the log chain. Oracle – mostly automatic; caveats apply, but preventative measures are available.
  • Reversing Log Shipping: SQL Server – can be done with rocket science and custom scripts. Oracle – scripts can be deployed on both primary and standby; reversing requires switching several commands.
  • Querying the Standby: SQL Server – yes, but only in STANDBY mode; STANDBY can make restores slow, which you can mitigate with complex scripts. Oracle – yes, in READ ONLY mode; Oracle must be restarted to resume log shipping.
  • Licensing: SQL Server – requires Software Assurance. Oracle – requires a second, fully licensed Oracle instance.
  • Compression: SQL Server – backup compression. Oracle – OS, storage, or network compression.

Brent Says: WOOHOO, SQL Server actually wins a couple of these!

Kendra Says: Ain’t nothin’ wrong with a wizard, especially when you’re just getting started with something.

Managing Oracle Performance with Statistics

Oracle maintains statistics about data stored on disk. The statistics and histograms help guide the optimizer during query plan compilation. These statistics are usually helpful. Sometimes the statistics are not helpful.

As data changes, the statistics collected by Oracle may become less accurate. If I’ve created a histogram on the column transaction_date, my histogram won’t know about data that’s been added since the histogram was last created or updated. Over time, that histogram becomes less helpful. DBAs can update statistics, but another problem can arise – the new statistics might be worse than the old statistics. DBAs need tools to manage stats and verify success before moving new statistics into production.

Historical Statistics

Oracle gives DBAs a powerful tool in managing statistics – historical statistics retention. By retaining historical copies of statistics, a DBA can ensure that there’s always a plan to back out changes that have caused a performance problem. You can view the current historical retention period by running:

SELECT dbms_stats.get_stats_history_retention() FROM dual;

On my Oracle 12c install, the retention period is set to 31 days. Larger systems may want to set up smaller retention periods.

Changing the statistics retention is as easy as running:

EXEC dbms_stats.alter_stats_history_retention(5);

Once you have identified the statistics that you want to restore, you can put them back in place using DBMS_STATS.RESTORE_TABLE_STATS:

EXEC dbms_stats.restore_table_stats (
         ownname         => 'SH',
         tabname         => 'CUSTOMERS',
         as_of_timestamp => TO_DATE('2015-01-01 11:38',
                                    'YYYY-MM-DD HH24:MI')
);

Of course, you would need to know when statistics were last collected. Thankfully, you can review when statistics were collected with the DBA_TAB_STATS_HISTORY view.
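
For example, to see when statistics were gathered on the CUSTOMERS table from the earlier example:

```sql
SELECT table_name, stats_update_time
FROM   dba_tab_stats_history
WHERE  owner = 'SH'
       AND table_name = 'CUSTOMERS'
ORDER BY stats_update_time DESC;
```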

Keep in mind that the historical statistics have to go somewhere. Monitor the size of the SYSAUX tablespace and adjust the retention period as needed. You may find that you don’t need all of those historical statistics.
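
The statistics history lives in SYSAUX under the SM/OPTSTAT occupant, so a quick check of its footprint looks like this:

```sql
SELECT occupant_name, space_usage_kbytes
FROM   v$sysaux_occupants
WHERE  occupant_name = 'SM/OPTSTAT';
```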

Locking Statistics

Once you’ve found the good statistics, you probably don’t want to keep restoring them over and over again. The next step you can take is to lock the statistics in place. Statistics can be locked at the schema, table, or partition level. Once an object has been locked, anything depending on that object will be skipped during system wide statistics updates.

Locking the statistics on a table is as easy as:

EXEC dbms_stats.lock_table_stats('SH', 'CUSTOMERS');

If an index on the CUSTOMERS table needs a statistics update, we can force Oracle to update its statistics using the force flag:

EXEC dbms_stats.gather_index_stats(
         ownname => 'SH',
         indname => 'IX_CUST_FNAME',
         force   => TRUE
);

By locking some stats in place, we can make sure that important queries don’t slip off a good plan. Using dbms_stats.restore_table_stats makes it possible to restore known good stats. There has to be a way to know if your statistics are going to be a problem before you move them into production, right?

Pending Statistics

Some tables are more volatile than others. When data changes and statistics are re-gathered, the data may be sampled in a way that causes problems. This is why we have the ability to lock statistics. In some cases, you may want to test changes to statistics before they go into production, rather than locking statistics in place or rolling back to your last known good statistics. In this case, you can create pending statistics.

To create pending statistics, the first step is to disable publishing statistics for a table:

EXEC dbms_stats.set_table_prefs('SH', 'SALES', 'PUBLISH', 'false');

Once publish is set to false, newly gathered statistics will be placed in a holding area rather than being immediately made available to the optimizer. At this point, the statistics are present, but unused. Pending statistics can be tried out by setting the optimizer_use_pending_statistics parameter to TRUE:

ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
/* amazing query goes here */

Once it’s been determined that the stats are good, they can be published. DBAs have the option to publish all stats or just the stats for a single table:

/* publish all stats */
EXEC dbms_stats.publish_pending_stats(NULL, NULL) ;
/* publish single object stats */
EXEC dbms_stats.publish_pending_stats('SH', 'SALES') ;

Or, if the statistics stink, you can delete them with the dbms_stats.delete_pending_stats procedure.
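
Continuing the SH.SALES example, throwing away the pending statistics is a single call:

```sql
/* discard pending stats for a single table */
EXEC dbms_stats.delete_pending_stats('SH', 'SALES');
```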

Exporting Statistics

There’s one other tool DBAs have for managing stats – exporting statistics. After creating a stats table using dbms_stats.create_stat_table, a DBA can export statistics using one of several EXPORT_*_STATS stored procedures. Statistics can be exported for columns, indexes, schemas, and tables, as well as several other database and system level statistics.

The EXPORT_*_STATS procedures push data into the statistics table; that table can then be copied to another Oracle system, where an IMPORT_*_STATS procedure imports the stats. This can be important for development, testing, and staging environments where it may not be possible to restore a large volume of data, but developers still need to assess how queries will run in the production environment.
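
A sketch of the round trip, using a stats table named STATS_EXPORT (the table name is arbitrary):

```sql
-- On the source system: create a holding table and export into it
EXEC dbms_stats.create_stat_table('SH', 'STATS_EXPORT');
EXEC dbms_stats.export_table_stats(
         ownname => 'SH',
         tabname => 'SALES',
         stattab => 'STATS_EXPORT');

-- Copy SH.STATS_EXPORT to the other system (Data Pump, a database
-- link, etc.), then import on the target:
EXEC dbms_stats.import_table_stats(
         ownname => 'SH',
         tabname => 'SALES',
         stattab => 'STATS_EXPORT');
```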

Statistics Maintenance Doesn’t Have to be a Drag

DBAs have powerful tools in Oracle to make sure statistics don’t cause performance problems. By combining these tools, it’s possible to create a stable set of statistics that lead to predictable performance. To learn more about managing Oracle database statistics, head over to the Managing Optimizer Statistics: Basic Topics or refer to the DBMS_STATS documentation for reference material.

The Basics of Oracle Statistics

Databases use statistics about data to determine the best way to query tables. Should the database seek or scan? How many rows will be returned from tables that are being joined? Oracle is no exception, but the way Oracle treats statistics is different from SQL Server.

Oracle Statistics Collection

Just like SQL Server, Oracle can collect statistics manually. Unlike SQL Server, Oracle includes an automatic statistics collection job.

The Oracle automatic stats collection job is scheduled to run, by default, during configured maintenance windows. These maintenance windows are configured for each Oracle database that you’re running. This gives the people who know the data best, the DBAs, a chance to schedule statistics collection whenever it’s needed and whenever it won’t cause problems for production workloads.

Oracle also supplies manual stats gathering at the index, table, schema, or even database level through the DBMS_STATS package:

    EXEC DBMS_STATS.gather_table_stats('SH', 'SALES');

The statistics gathering utilities have a wealth of options available that allow DBAs to choose to only rebuild existing stats, change the number of buckets in the histogram, choose sample percent, and control how new statistics are generated.
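
For instance, a manual gather that controls the sample size, histogram creation, and parallelism might look like this (the parameter values are just illustrative):

```sql
EXEC dbms_stats.gather_table_stats(
         ownname          => 'SH',
         tabname          => 'SALES',
         estimate_percent => dbms_stats.auto_sample_size,
         method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
         degree           => 4);
```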

Controlling Statistics Collection

How much control do you have over how statistics are collected in SQL Server? Your options are basically: automatic generation is on or off, automatic updates are on or off, and the time your statistics collection job is allowed to run.

There are a number of options that are available in Oracle for determining how statistics are collected. The options exist at multiple levels – from the entire database down to a single table. This allows DBAs to tune for specific scenarios where one table may need to gather statistics with a much higher degree of parallelism, or at a different sample rate, than other tables.
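
Those preferences can be set globally and then overridden per table; for example (the preference names are real, the values are just examples):

```sql
-- Database-wide default degree of parallelism for stats gathering
EXEC dbms_stats.set_global_prefs('DEGREE', '8');

-- One volatile table gets a more aggressive stale threshold
EXEC dbms_stats.set_table_prefs('SH', 'SALES', 'STALE_PERCENT', '5');
```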

Data Change Threshold

SQL Server will opt to update statistics if 20% of the data, plus 500 rows, has changed.

Oracle, on the other hand, will not update statistics the moment data changes. Instead, the percentage of stale rows is tracked for each table and index; when it goes beyond the STALE_PERCENT value, the statistics are considered stale and need to be collected again. Although the statistics update will not happen immediately, it is possible to monitor this value and take appropriate action, if necessary.
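
Both the threshold and Oracle’s current opinion are visible from the data dictionary (reusing the SH schema from earlier examples):

```sql
-- What's the stale threshold for this table?
SELECT dbms_stats.get_prefs('STALE_PERCENT', 'SH', 'SALES') FROM dual;

-- Which objects does Oracle currently consider stale?
SELECT owner, table_name, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'SH'
       AND stale_stats = 'YES';
```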

The Ascending Key Problem

What happens when you add new data that the engine isn’t aware of? Oracle will make a best guess based on the current distribution of data in the table. There’s additional logic built-in to assume that fewer rows exist the further the queried value gets from the minimum or maximum value that’s currently tracked. This is probably true for many systems, but it could cause problems when a lot of data is coming in. On the bright side, as a DBA, you have full control over when statistics get updated.

Learning More About Oracle Statistics

You can learn more about Oracle’s table statistics in the Optimizer Statistics Concepts documentation, the DBMS_STATS package, and through the DBA_TAB_HISTOGRAMS view.

Oracle Flashback: Undeleting Data

Or, “Oops, I didn’t mean to do that.”

We’ve all had to deal with an accidental deletion or even a dropped table. Sometimes you leave off a where clause, sometimes you drop a table and find out that the change request listed the wrong table. Oracle uses a feature called Flashback to give database professionals the power to view data at different moments in time and take action based on what they find.

Using Flashback on a Dropped Table

No matter how rigorous your practices are, mistakes happen. How do you deal with accidentally dropping a table?

In the SQL Server world, we’d reach for our most recent backup, restore a copy of the database, and hope that the outage didn’t take too long. With Oracle, we can look in the recycle bin.

First we create a table:

CREATE TABLE test ( id INT, things VARCHAR2(50) );

INSERT INTO test VALUES (1, 'ONE');
INSERT INTO test VALUES (2, 'TWO');

SELECT * FROM test ;


        ID THINGS
---------- --------------------------------------------------
         1 ONE
         2 TWO

And then we drop the table:

DROP TABLE test ;

SELECT * FROM test ;

ERROR at line 1:
ORA-00942: table or view does not exist

Oh no, I meant to drop test_two! Now what?

Thankfully, I can take a look in the recycle bin using the command SHOW RECYCLEBIN:

ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
---------------- ------------------------------ ------------ -------------------
BIG_TABLE        BIN$AZEU5yqWPZXgUwpdqMBL1Q==$0 TABLE        2014-08-26:15:51:31
TEST             BIN$CyLJjKhZPPDgUwpdqMAvJw==$0 TABLE        2014-12-26:09:23:46

Okay, now I can see that back in August I dropped a table named BIG_TABLE and our table named TEST is still hanging around. Let’s restore the TEST table:

FLASHBACK TABLE test TO BEFORE DROP RENAME TO test_dropped ;
Crisis averted! I’ve been able to restore the previously deleted table with a new name. I could leave out the RENAME TO portion and restore the table with its original name, but it’s always possible that someone else has re-created the table in the meantime. Instead, it’s probably a better practice to safely restore the table with a different name and then change the name with an ALTER TABLE:

ALTER TABLE test_dropped RENAME TO test;

Fixing Row Level Mistakes with Flashback

Let’s keep using our test table. Here’s what the data looks like now:

        ID THINGS
---------- --------------------------------------------------
         1 ONE
         2 TWO
         3 THREE
         4 FOUR
         5 FIVE
         6 FIVE

That data isn’t right. I need to know when the right data was present so I can get the right row in place. In order to do that, I can use the VERSIONS clause to tell Oracle to bring back specific versions of rows present for this particular table:

SELECT versions_startscn, versions_endscn,
       versions_starttime, versions_endtime,
       versions_operation, id, things
FROM   test
       VERSIONS BETWEEN TIMESTAMP
                TO_TIMESTAMP('2014-12-26 09:30:00', 'YYYY-MM-DD HH24:MI:SS')
                AND TO_TIMESTAMP('2014-12-26 09:49:00', 'YYYY-MM-DD HH24:MI:SS') ;

All the rows, all the time.


Now we can see a history of the row with an id of 6!

  • The row was inserted at 9:38AM.
  • At 9:41, someone corrected the things column to read SIX.
  • At 9:48, someone updated the things column back to FIVE – which we know is wrong.

How do we get the correct row back?

If we’d configured the database to track supplemental log data, we could use the view flashback_transaction_query and grab the UNDO_SQL column to rollback that one command. Since I have not configured tracking supplemental log data, we’ll have to be a little bit trickier.

Thankfully, we can use some UPDATE shenanigans to get exactly the rows that we want:

UPDATE  test t1
SET     things = (SELECT  t2.things
                  FROM    test AS OF TIMESTAMP
                          TO_TIMESTAMP('2014-12-26 09:42:00', 'YYYY-MM-DD HH24:MI:SS') t2
                  WHERE   t2.id = t1.id)
WHERE   t1.id = 6 ;

SELECT * FROM test ;

        ID THINGS
---------- --------------------------------------------------
         1 ONE
         2 TWO
         3 THREE
         4 FOUR
         5 FIVE
         6 SIX

Holy cow, we got our row back! Make sure you COMMIT the transaction (remember – Oracle won’t auto-commit like SQL Server).

Looking Into the Past

If you hadn’t guessed, it’s even possible to view the contents of a table as of a moment in time using the AS OF TIMESTAMP clause:

SELECT * FROM test AS OF TIMESTAMP
       TO_TIMESTAMP('2014-12-26 09:40:00', 'YYYY-MM-DD HH24:MI:SS');


        ID THINGS
---------- --------------------------------------------------
         1 ONE
         2 TWO
         3 THREE
         4 FOUR
         5 FIVE
         6 FIVE

This functionality makes it possible for analysts to undo changes to the database, review changes for auditing purposes, or even to recalculate historical reports based on newer formulas.

The Tip of the Iceberg

Undeleting tables and reviewing previous rows is only a small portion of what you can accomplish with Oracle Flashback. Entire transactions can be flashed back, databases can be rolled back to a point in time, or the Data Archive feature can be used to house all changed versions of rows for all time. But even without diving into more advanced features, Oracle Flashback gives database professionals the powerful ability to correct for “oops” deletions in the application.

Oracle Wait Events

Knowing what a query has been waiting on helps you figure out where you need to tune that system. Oracle’s wait interface provides a significant amount of information to help a DBA decide where to put tuning resources. There are multiple ways to see what Oracle has been waiting on:

Wait Classes

Oracle waits are split into multiple classes. We can review instance-wide totals for each wait class by using the v$system_wait_class view. This view aggregates information about an entire instance of Oracle since start up. The view only shows events at a wait class level – this can make it valuable for determining whether a problem is related to I/O or some other portion of the system.
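
A simple query against the view looks like this – TIME_WAITED is stored in centiseconds, so we multiply by 10 to get milliseconds:

```sql
select   wait_class,
         total_waits,
         time_waited * 10 AS time_waited_ms
from     v$system_wait_class
where    wait_class != 'Idle'
order by time_waited desc ;
```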

This screen shot shows a relatively idle Oracle instance that isn’t up to much at all. It is, however, immediately apparent that waits are broken up into multiple wait classes. An explanation of the different wait classes is available in the Oracle Database Reference – Appendix C – Classes of Wait Events. Most scripts ignore events in the Idle wait class and we’re going to be doing the same.

System Waits using v$system_event

It’s possible to get more detail about waits by using the v$system_event view. This view contains wait event timings across the system since the last restart. Although it only gives timings and averages since the system started up, the information helps discover immediate trends:

select  wait_class,
        total_waits AS bg_total_waits,
        time_waited_micro / 1000.0 AS time_waited_ms,
        average_wait * 10 AS average_wait_ms ,
        total_waits_fg AS fg_total_waits ,
        time_waited_micro_fg / 1000.0 AS fg_time_waited_ms ,
        average_wait_fg * 10 AS fg_average_wait_ms
from    v$system_event
where   wait_class != 'Idle'
order by time_waited_micro desc ;

Wait, why are we multiplying the average wait columns by 10? The AVERAGE_WAIT columns are stored in hundredths of a second, and we’d like everything in one uniform unit. We also divide TIME_WAITED_MICRO by 1,000 – microseconds are a mindbogglingly tiny duration – so everything here is converted to milliseconds.

There are more columns reported by v$system_event, but the ones in this query are likely to be the most interesting to beginners. The FG columns display the waits from foreground (user facing) sessions. Other waits are happening elsewhere in Oracle. They may be slowing down the system, but they are not directly caused by end users.

Single Session Waits

What if we want to know what’s going on within one session? What then?

The v$session_event view contains information about currently executing sessions. It’s remarkably similar to the v$system_event view:

select SID,
       total_waits,
       max_wait * 10 AS max_wait_ms ,
       time_waited_micro / 1000.0 AS time_waited_ms,
       average_wait * 10 AS average_wait_ms
from   v$session_event
where  wait_class != 'Idle' ;

v$session_event displays all of the waits for each active session. Obviously, we want to filter out the Idle waits since they’re idle and, for our purposes, ignorable.

v$waitclassmetric_history: A History of Waiting

What if we want finer granularity about how an Oracle instance has been performing? v$system_event lets us review waits since the beginning of database time. v$session_event lets us see waits for currently running queries. What else is there?

The view v$waitclassmetric_history gives us some insight into what’s happened over the last hour, but only by wait class. This isn’t as limiting as you might think – sometimes it’s only necessary to know that we were waiting on disk or concurrency. The data in v$waitclassmetric_history is broken down into one minute buckets.

select  swc.wait_class,
        begin_time ,
        end_time ,
        SUM(wcmh.average_waiter_count) avg_waiter_count,
        SUM(wcmh.dbtime_in_wait) AS dbtime_in_wait ,
        SUM(wcmh.time_waited) * 10 AS time_waited_ms,
        SUM(wcmh.wait_count) AS wait_count ,
        SUM(wcmh.time_waited_fg) * 10 AS time_waited_fg_ms,
        SUM(wcmh.wait_count_fg) AS wait_count_fg
from    V$SYSTEM_WAIT_CLASS swc, v$waitclassmetric_history wcmh
where   swc.wait_class != 'Idle'
        AND swc.wait_class_id = wcmh.wait_class_id
group by swc.wait_class, begin_time, end_time
order by 2 desc, 3 desc, 6 desc ;

Once again, we’re multiplying time_waited by 10 to get the time in milliseconds. These times are stored as floating point numbers, so we aren’t losing observable fidelity in our data.

The last minute of waiting


Reviewing the screen shot, we can see a lot of information. This screen shot only shows the previous minute (which you could get from v$waitclassmetric), but when you query your own systems, you’ll see what I mean.

dba_hist: The Long View

If you’ve been licensed for the Oracle tuning and diagnostic packs, you have another tool at your disposal – the dba_hist views. Specifically, the dba_hist_system_event view. This contains a history of wait stats, summarized at one hour intervals, since you cleared out the data. This is part of Oracle’s AWR (Automatic Workload Repository).

The values stored in dba_hist_system_event are a running total of wait information. You can query it, but remember that these are cumulative values. To get helpful metrics for each hour, you have to do something like this:

select  snap.END_INTERVAL_TIME,
        dhse2.event_name ,
        dhse2.TOTAL_WAITS - dhse1.total_waits AS number_of_waits,
        (dhse2.time_waited_micro - dhse1.time_waited_micro) / 1000 AS time_waited_ms
from    DBA_HIST_SYSTEM_EVENT dhse1
        JOIN DBA_HIST_SYSTEM_EVENT dhse2 ON dhse2.snap_id = dhse1.snap_id + 1
                                            AND dhse2.wait_class_id = dhse1.wait_class_id
                                            AND dhse2.event_id = dhse1.event_id
        JOIN DBA_HIST_SNAPSHOT snap ON dhse2.snap_id = snap.snap_id
order by dhse2.snap_id desc ;

Remember – this feature costs money, so don’t run it on your production machines unless you’re positive that you’re already licensed for it.

Putting it all together

By choosing the right view (and maybe the right licensing), you can review Oracle wait stats and start getting to the bottom of performance issues. Wait events aren’t the be-all and end-all of Oracle performance tuning, but they’re a good first step toward determining where the problems lie.

Oracle HA & DR Basics [With Video]

Oracle has different availability and recovery options from SQL Server. Being aware of what these options are and how you can use them will go a long way toward keeping your Oracle boxes just as safe as your SQL Server boxes.

Here’s my 17-minute video explaining it, or you can keep reading and get your learn on silently.

Protect the precious data!


Data Guard

Oracle Data Guard is a way to set up and maintain standby databases. There’s a primary (yay!) and one or more standby databases. A standby is initialized from a backup of the primary.

If Data Guard sounds like SQL Server log shipping to you, you’d be close to right.

If Data Guard sounds like SQL Server database mirroring, you’d also be close to right.

Oracle Data Guard is something of a hybrid between SQL Server log shipping and database mirroring.

When you’re using a physical standby database, Data Guard works by streaming redo log records from the primary to the secondaries. The redo logs are applied at the secondary and the secondary database is kept up to date (for values of up to date). The physical standby is an exact copy of the primary database.

A logical standby database is logically the same as the production database. It has the same data, but the structures can differ. In a logical standby, redo logs are transformed into SQL statements and applied to the standby database. This flexibility gives the DBA options like creating reporting focused indexes, using Data Guard for rolling upgrades, or other helpful scenarios. There are some limitations and considerations for using a logical standby, but the option is there.

It’s also possible to use a snapshot standby. A snapshot standby only gets changes applied periodically. The difference from logical and physical standbys is that the snapshot standby is fully updatable. Users can make changes and validate different scenarios. Whenever you’re ready to resynchronize, the changes made are discarded, the database is converted into a physical standby and changes can be applied again.
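
The conversion itself is a pair of ALTER DATABASE commands run on a mounted standby (a sketch – managed recovery has to be stopped first):

```sql
-- Turn a physical standby into an updatable snapshot standby
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

-- Later: discard the changes and return to being a physical standby
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```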

Holy cow, that’s a lot to think about. Plus, there are three modes of operation for the Data Guard set up:

  • Maximum performance
  • Maximum availability
  • Maximum protection

Maximum performance is the default configuration for Data Guard. In this mode, log records are transported asynchronously. Some data loss is possible, depending on network latency and how much redo the standby has been able to process.

Maximum availability tries to stay online and provide zero data loss. The maximum availability mode will synchronously send log records to the standby, but if the standby isn’t available, the primary will wait until a timeout expires. Data Guard will regularly attempt to contact the standby (whenever logs are switched). Once the standby comes back online, Data Guard will start sending logs to the standby. Some data loss is also possible in maximum availability mode.

Maximum protection is synchronous. If the standby goes away, the primary will stall and eventually fail. If there is more than one standby database, the primary is going to wait for at least one standby to respond. Needless to say, it’s recommended to have more than one standby if you’re going to be operating in maximum protection mode.
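If the databases are managed by the Data Guard broker, switching between these three modes is a one-liner in DGMGRL. A sketch – this assumes you’ve already built a broker configuration:

```
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
DGMGRL> SHOW CONFIGURATION;
```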

To complicate matters even further, it’s possible to have a Data Guard Broker as a witness to help ensure automatic failover, both for the databases and for clients connecting to Oracle.

We can use Oracle Data Guard to achieve very small windows of data loss with short failovers.

Active Data Guard

Active Data Guard is an additional licensed feature for Oracle Data Guard. Active Data Guard makes it possible to read from the standby servers while log changes are still being applied to the instance. Active Data Guard is roughly analogous to SQL Server’s AlwaysOn Availability Groups.

Active Data Guard also includes a feature called Far Sync. In this scenario, the DBA sets up a limited Far Sync instance close to the primary. The Far Sync instance itself is a limited Oracle instance – it can’t be queried. Its purpose is to stand in as an intermediary: log records are shipped to the Far Sync instance, which then forwards them to the geographically distant standby.

Basically – the Far Sync instance can be used for disaster recovery. Active Data Guard, with the Far Sync option, gives you one solution for both HA and DR. Depending on how Active Data Guard has been configured, it can be used to allow zero to minimal data loss and support very fast recovery times.

If you lose one piggy bank, you’ve got more, right?

Oracle RAC

Oracle Real Application Clusters (RAC) is a way to ensure high availability with minimal failover. You can think of it as somewhat similar to SQL Server’s failover clustering, but with RAC all RAC nodes are active and serving data. Oracle RAC does this by having all RAC nodes share the same set of disks. This can lead to some complexities, but it presents a lot of availability options for DBAs looking for a combination of performance and availability.

RAC provides several options for failover. Clients can be made RAC aware – a client can move between different RAC nodes if one node goes down. Depending on requirements, the query can return an error to the client or keep running on the new RAC node. RAC also provides a RAC listener that enables automatic failover between RAC nodes.
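On the client side, that failover behavior is configured in the connect descriptor. Here’s a minimal tnsnames.ora sketch using Transparent Application Failover – the alias, host, port, and service name are hypothetical, and TYPE = SELECT is what lets an in-flight query keep running on the surviving node:

```
RAC_APP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl_app)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
    )
  )
```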

DBAs can upgrade RAC with minimal downtime. Individual nodes can be upgraded through a rolling upgrade. The other nodes will see increased load during this time, but that’s the price to pay for the ability to keep serving data during an upgrade. Occasionally updates will come along that can’t be applied through a rolling update process, so make sure you read the release notes.

Maximum Availability Architecture

You can’t read more than a few paragraphs about Oracle HA & DR without seeing the phrase “Maximum Availability Architecture.” Oracle publishes a document about the Oracle Maximum Availability Architecture (MAA). Basically, the MAA is a pre-defined set of features that combine multiple Oracle technologies to create a highly available and redundant Oracle environment. The MAA covers instance failure, storage failure, human error, and the loss of a data center. But, like all HA solutions, start with the business requirements and map them to the technology solutions.

When Does Index Fragmentation Matter?

A while back, we posted about how SQL Server index fragmentation doesn’t matter. That blog post generated a lot of heated discussion and has served to make people aware that fragmentation may not be the source of their woes. Sometimes, though, there are data access patterns where fragmentation really does matter.

Perpetual Fragmentation

Some data access patterns lead to tables and indexes that will always be fragmented, no matter what you do. If you have frequent data changes in the middle of a table or index, you can see heavy fragmentation.

What’s this pattern look like? Check it out!

CREATE TABLE process_list
    (
      transaction_id INT IDENTITY(1, 1)
                         PRIMARY KEY ,
      status_flag BIT ,
      last_update DATETIME2 ,
      transaction_type INT ,
      transaction_desc VARCHAR(25)
    ) ;

CREATE INDEX IX_giant_index_process_list
ON dbo.process_list (transaction_id, status_flag);

TRUNCATE TABLE process_list ;

/* Random string generation code taken from:
   "T-SQL: Random String" (linked at the end of this post) */
/* insert another 1000 rows */
DECLARE @string_max_length TINYINT = 25;

INSERT  INTO process_list (status_flag, last_update, transaction_type, transaction_desc)
        SELECT  0 ,
                GETUTCDATE() ,
                v.number % 4 + 1 ,
                x.txt
        FROM    master.dbo.spt_values v
                JOIN ( SELECT TOP 1
                                LEFT(REPLACE(CAST (NEWID() AS NVARCHAR(MAX)),
                                             '-', ''),
                                     ABS(CHECKSUM(NEWID())) % @string_max_length + 1) AS txt
                       FROM     sys.objects a
                                CROSS JOIN sys.objects b
                     ) AS x ON 1 = 1
        WHERE   v.type = 'P'
                AND v.number < 1001;

/* Look at table/index size.
   The table gains 9 pages.
   The index only gains 1 page from its previous state */
SELECT  OBJECT_SCHEMA_NAME(o.object_id) AS schema_name , AS table_name , AS index_name ,
        ddps.reserved_page_count AS reserved_pages
FROM    sys.objects AS o
        JOIN sys.dm_db_partition_stats AS ddps ON ddps.object_id = o.object_id
        JOIN sys.indexes AS i ON i.object_id = ddps.object_id AND i.index_id = ddps.index_id
WHERE = 'process_list' ;

-- Change the status of 70%ish of transactions to TRUE
UPDATE process_list
SET     status_flag = 1
WHERE   transaction_id % 10 < 7 ;

/* Get rid of processed rows */
DELETE FROM process_list
WHERE status_flag = 1 ;

/* Look at table/index size.
   No page size changes... */
SELECT  OBJECT_SCHEMA_NAME(o.object_id) AS schema_name , AS table_name , AS index_name ,
        ddps.reserved_page_count AS reserved_pages
FROM    sys.objects AS o
        JOIN sys.dm_db_partition_stats AS ddps ON ddps.object_id = o.object_id
        JOIN sys.indexes AS i ON i.object_id = ddps.object_id AND i.index_id = ddps.index_id
WHERE = 'process_list' ;
GO 10

/* And now we rebuild the table.
   Before the rebuild:
     index - 25 pages
     table - 57 pages

   If you just run ALTER TABLE process_list REBUILD:
     index - 25 pages
     table - 25 pages

   After ALTER INDEX ALL ... REBUILD:
     index -  7 pages
     table - 25 pages */
ALTER INDEX ALL ON dbo.process_list REBUILD ;

SELECT  OBJECT_SCHEMA_NAME(o.object_id) AS schema_name , AS table_name , AS index_name ,
        ddps.reserved_page_count AS reserved_pages
FROM    sys.objects AS o
        JOIN sys.dm_db_partition_stats AS ddps ON ddps.object_id = o.object_id
        JOIN sys.indexes AS i ON i.object_id = ddps.object_id AND i.index_id = ddps.index_id
WHERE = 'process_list' ;

TL;DR – The code in this example inserts a large number of rows at the end of the table. 70% of rows are marked as processed and are then deleted. This causes a lot of free space or, as you call it, fragmentation.

The only way to get rid of this fragmentation is to rebuild both the non-clustered index and the clustered index. For this example, the defragmentation reduces the size of the index by more than 3x (from 25 pages to 7 pages) and the table consumes just over 50% less space.

For workloads with a lot of random deletes (or updates that change the size of a row in a way that causes data movement), index defragmentation is necessary. Without index defragmentation, these databases will continue to grow in size and exhibit the usual problems that we associate with fragmentation.
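If you want to watch this happen on your own tables, sys.dm_db_index_physical_stats reports both fragmentation and how full each page is. A quick sketch against the demo table above (SAMPLED mode is needed to get the page-fullness column):

```sql
SELECT  i.name AS index_name ,
        ips.avg_fragmentation_in_percent ,
        ips.avg_page_space_used_in_percent ,
        ips.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.process_list'),
                                       NULL, NULL, 'SAMPLED') AS ips
        JOIN sys.indexes AS i ON  i.object_id = ips.object_id
                              AND i.index_id = ips.index_id ;
```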

What Doesn’t Cause Perpetual Fragmentation

At this point you might be saying “Ah ha! You’ve proved that I should always defragment my indexes! Constantly! ZOMGWTFBBQ!!!11!11!shiftone!!!!!”

You’re wrong.

For some workloads, you can still avoid index fragmentation – if you’re adding data to the tail end of the table and deleting data from the beginning of the table, you won’t need to defragment your indexes. Ghost record clean up should take care of deleted rows in this case.

For most workloads, your best bet is still to analyze SQL Server wait statistics and understand the basic problem before you start defragmenting indexes on a regular basis.


The original idea for this blog post comes from Hemant K Chitale’s Index Growing Larger Than The Table. Even though SQL Server handles this situation differently from Oracle, the underlying pattern is still interesting for database practitioners.

The random length string code was adapted from T-SQL: Random String.

Using sp_BlitzCache™ Advanced Features

sp_BlitzCache™ is a fantastic tool for finding the worst queries in your SQL Server. Did you know that it has a number of advanced options available to you? These options can help you dig into your worst queries, slice and dice the data, and get back exactly what you need, without wading through hundreds of results.

Only/Ignore Query Hashes

Version 2.3 of sp_BlitzCache™ introduced two parameters that let you focus on, or ignore, individual query hashes. Reader Brian Baker had the idea of filtering out a single stored procedure by supplying all of its query hashes. There’s one problem – there’s no easy way to filter an entire procedure (well, not without making sp_BlitzCache™ really complex).

Here’s how we’d go about answering Brian’s question:

Step 1: sp_BlitzCache™ in expert mode.

EXEC dbo.sp_BlitzCache @results = 'expert' ;
Keep scrolling right.

Find the stored procedure that you want in the plan cache and then scroll all the way to the right until you get just past the Query Plan column. There you’ll find the Plan Handle. This lets us find a stored procedure. Copy it to your clipboard and then paste it into the following script:

/* Step 1: Run sp_BlitzCache @results = 'expert' */
EXEC dbo.sp_BlitzCache @results = 'expert' ;

/* Step 2: Grab the plan_handle of the plan you want to examine. */
DECLARE @plan_handle AS varbinary(64) ;

/* Step 3: Paste the plan handle from your clipboard: */
SET @plan_handle = 0x... ;

/* Step 4: Pull the query hashes out of the plan XML. */
WITH XMLNAMESPACES ('http://schemas.microsoft.com/sqlserver/2004/07/showplan' AS p)
SELECT query_hash
FROM   ( SELECT n.value('@QueryHash', 'nvarchar(max)') AS query_hash
         FROM   sys.dm_exec_procedure_stats AS deps
                CROSS APPLY sys.dm_exec_query_plan(deps.plan_handle) AS deqp
                OUTER APPLY deqp.query_plan.nodes('//p:StmtSimple') AS q(n)
         WHERE  deps.plan_handle = @plan_handle
       ) AS x
WHERE  query_hash IS NOT NULL ;

Alright, now that we have our query hashes, we can plug them into sp_BlitzCache™:

EXEC sp_BlitzCache @only_query_hashes = '0x6A26E5C5E3825653' ;

Or, if you feel like ignoring that particular set of statements:

EXEC sp_BlitzCache @ignore_query_hashes = '0x6A26E5C5E3825653' ;

And, there you have it – how to zoom in on a single procedure, or completely ignore it.

Only Look at Queries or Procedures

This one is much simpler: to narrow down the scope of sp_BlitzCache™, you can use the @query_filter parameter. This parameter can be one of three values: procedures, statements, or all. The default is all – sp_BlitzCache™ will look at stored procedures, statements, and triggers. By specifying either “procedures” or “statements”, you can force sp_BlitzCache™ to narrow down its scope.

This can be particularly helpful if you have encapsulated your business logic into stored procedures and you want to know where to focus your tuning, rather than worrying about individual statements.
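For example, to see only the stored procedures in the plan cache:

```sql
EXEC dbo.sp_BlitzCache @query_filter = 'procedures' ;
```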

Rapidly Re-Running sp_BlitzCache™

Have you ever run sp_BlitzCache™ and noticed that it takes a while on production systems? And then you realized, after you wandered off and got coffee, that you forgot to use the @export_to_excel parameter?

I’ve done that, too.

sp_BlitzCache™ now ships with a @reanalyze feature. By default, sp_BlitzCache™ will analyze your plan cache fresh every time the query is executed. By using @reanalyze = 1, you’re telling sp_BlitzCache™ to skip collecting data and immediately report on the information that it already has. This is great when you want to export to Excel, send the report to your co-workers, and then immediately get to work on the results. It also lets you re-sort the already collected data in different ways.
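The two parameters combine nicely – re-report on the data sp_BlitzCache™ already collected, formatted for Excel this time:

```sql
EXEC dbo.sp_BlitzCache @reanalyze = 1, @export_to_excel = 1 ;
```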

There You Have It!

Three fancy features in sp_BlitzCache™ that you can use to make your life easier and more fun.

What are you waiting for? Download it right now!

Announcing sp_BlitzCache™ v2.4

Welcome to sp_BlitzCache™ v2.4. This release brings a few changes and bug fixes.

  • Fixed a logical error in detecting the output table. Thanks to Michael Bluett for pointing that out.
  • Sorting by executions per second finally works. Thanks to Andrew Notarian and Calvin Jones for submitting this week.
  • Added a @query_filter parameter – this allows you to only look at “statements” or “procedures” in the plan cache.
  • A check was added to identify trivial execution plans that have been cached. If you’re seeing a lot of these, you need to fix it.
  • The @reanalyze parameter was added. When set to 0, sp_BlitzCache™ will pull fresh data from the plan cache. When set to 1, though, sp_BlitzCache™ will re-read the results temporary table. This is helpful if you want to save off results in Excel and display results so you can tune queries.
  • Added the ability to see a query’s SET options. This is hidden just to the right of the plan in the results grid.
  • Moved the #procs temp table to a global temp table named ##procs. This shouldn’t be a problem because you’d probably get angry if two DBAs were running this stored procedure at the same time any way.

Download it right now!

Update: Denis Gobo noticed that sp_BlitzCache™ could potentially clobber global temp tables. Global temp table names have been updated in sp_BlitzCache™ to avoid this in the future. Make sure you’re using v2.4.1.