
Getting BLOBS out of the Database

Raise your hand if you’re storing BLOBs in the database.

I get to say that a lot during our training. Every time I say it, nearly every hand in the room goes up. Some hands go up faster than others, but eventually nearly every hand is up.

It’s a design that happens far more often than it should, but it does happen.

Why Store BLOBs in the Database?

People put binary data in the database because they need the data to be point in time consistent with the rest of the database. It’s not enough to save space in the database if you can’t recover the file to a moment in time.

Think about this scenario:

  1. 8:00AM – Brent saves a new contract.
  2. 9:00AM – Kendra modifies the contract and saves it.
  3. 9:30AM – Jeremiah modifies the contract and saves it.

If the contract is being stored inside the database, we can recover to any point in time and have the appropriate version of the document. It may not be the most current version of the contract, but it’s the version of the document that’s consistent with the rest of the database.

Why Not Use the Filesystem?

File systems are great. They do an excellent job of storing files and organizing them into folders. File systems don’t do a great job of being point in time consistent with a relational database. There’s no transaction log to help us roll back writes that are in flight.

It’s a lot of work to get a full database backup and a file system backup to be remotely close to the same point in time. Restoring a database to a point in time is easy. Restoring a file system to a point in time is close to impossible.

Why Not Use an Appliance?

There’s a third option available – some kind of appliance that sits between the database and the file system. The appliance should manage file metadata and provide all access to the files in the file system.

Commercial databases ship with features that sound similar. SQL Server has a FILESTREAM data type and Oracle has both BFILE and ORD data types. Both of these let the database interact with files in the file system. But they still have a problem – you’re stuck managing data through the database. Let’s be clear: this is not an appliance.

Content Addressable Storage (CAS) is a mature technology. The idea behind CAS is that a hardware device handles the metadata for a given incarnation of a file. A developer sends a file into the CAS appliance and the CAS appliance returns a pointer to the file. Whenever the file changes, a new copy is created and a new handle is returned to the developer. Files can’t be modified, so anything stored in the database can only point to the right version of the file.

We can combine this with a database pretty easily. Instead of storing a file path in the database, we store the handle that we get back from the CAS system.
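
Here’s a rough sketch of what that can look like in the schema – the table and column names are made up for illustration, and the handle is stored as an opaque string sized for whatever identifier your CAS system hands back:

CREATE TABLE contract_versions (
    contract_id  INTEGER      NOT NULL,
    version_no   INTEGER      NOT NULL,
    cas_handle   VARCHAR(128) NOT NULL,  /* immutable handle returned by the CAS appliance */
    PRIMARY KEY (contract_id, version_no)
);

/* each time the contract is saved, insert a new row pointing at the new handle */
INSERT INTO contract_versions (contract_id, version_no, cas_handle)
VALUES (42, 3, 'b1946ac92492d2347c6235b4d2611184');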

How Does CAS Solve the Problem?

The main reason people store BLOBs in the database is so they can get BLOBs that are consistent with a point in time in the database. By using a storage device that cannot be modified (the CAS), we can make sure that the location we’ve stored in the database is always the right location – there’s no way to tamper with the files that we’re storing, so whatever gets stored in the database is correct.

There’s overhead to this approach – old data may never get cleared out. Typically, though, CAS systems store data on large, slow disks. There’s little need for the throughput we expect from relational database storage. Do those cat pictures really need to be stored on RAID 10 SSDs? Moving BLOB storage outside of the relational database will free up resources for serving queries. Picking the right way to store your BLOB data will make it easier to scale your system.

Kendra says: Finding the right storage for large objects is a huge architectural decision that impacts performance and availability. Choose wisely!

Brent says: Want your SQL Server’s DBCCs and backups to run faster? This can help a lot.

Doug says: “It’s a lot of work to get a full database backup and a file system backup to be remotely close to the same point in time.” -> This is a major drawback that’s easily overlooked. Make sure everyone’s okay with that possibility when choosing the file system for BLOB data.

Which Tables are Being Used in Oracle?

When we’re making changes to a database, it’s important to know which tables and indexes are in use. If we don’t need an index, maybe we can drop it. If nobody is using a table, do we still need it?

We can find out which tables and indexes are in use through two different mechanisms.

V$SEGMENT_STATISTICS

Of course Oracle tracks the tables in use. In this case, we can see this information in the view V$SEGMENT_STATISTICS. This view contains a lot of information about all of the different segments in your Oracle database. A segment is the set of extents (data blocks) allocated to a single database object. The V$SEGMENT_STATISTICS view needs to be pivoted to get the information we need – it contains one row for reads, one for writes, etc.

This basic query will let us see the volume of activity on different segments in the database:

SELECT  vss.owner,
        vss.object_name,
        vss.subobject_name,
        vss.object_type ,
        vss.tablespace_name ,
        SUM(CASE statistic_name WHEN 'logical reads' THEN value ELSE 0 END
            + CASE statistic_name WHEN 'physical reads' THEN value ELSE 0 END) AS reads ,
        SUM(CASE statistic_name WHEN 'logical reads' THEN value ELSE 0 END) AS logical_reads ,
        SUM(CASE statistic_name WHEN 'physical reads' THEN value ELSE 0 END) AS physical_reads ,
        SUM(CASE statistic_name WHEN 'segment scans' THEN value ELSE 0 END) AS segment_scans ,
        SUM(CASE statistic_name WHEN 'physical writes' THEN value ELSE 0 END) AS writes
FROM    v$segment_statistics vss
WHERE   vss.owner NOT IN ('SYS', 'SYSTEM')
GROUP BY vss.owner,
        vss.object_name ,
        vss.object_type ,
        vss.subobject_name ,
        vss.tablespace_name
ORDER BY reads DESC;

This approach will show us information about reads, writes, and segment scans (full table scans) since the dynamic view was last cleared. Typically, this is going to show us information since the database was last started.

The downside of V$SEGMENT_STATISTICS is that it covers the entire lifetime of the instance. If you happen to have great database uptime, a table or index could show up in V$SEGMENT_STATISTICS even though it hasn’t been used in weeks or months.

Query Waits

In order to determine which queries are hitting which tables, we can start sampling the current waits in the system. V$SESSION_WAIT contains a bunch of information including the cryptic p1, p2, and p3 columns. For disk related waits, we only need the p1 and p2 values – these refer to the file and block number involved in the wait. To see what’s going on, we can just do something like:

SELECT  vs.username ,
        vsw.wait_class,
        vsw.EVENT AS wait_type ,
        vsw.WAIT_TIME_MICRO / 1000 AS wait_time_ms ,
        vsw.TIME_REMAINING_MICRO / 1000 AS time_remaining_ms ,
        vsw.STATE ,
        de.SEGMENT_NAME ,
        de.SEGMENT_TYPE, 
        de.OWNER ,
        de.TABLESPACE_NAME
FROM    V$SESSION_WAIT vsw
        JOIN V$SESSION vs ON vsw.SID = vs.SID
        LEFT JOIN DBA_EXTENTS de ON vsw.p1 = de.file_id
                                    AND vsw.p2 BETWEEN de.BLOCK_ID AND (de.BLOCK_ID + de.BLOCKS)
WHERE   vsw.wait_class <> 'Idle'
        AND vs.username IS NOT NULL 
ORDER BY wait_time_ms DESC;

This query will show us:

  • Who’s running a query
  • What their current wait is
  • If it’s a disk wait, which tablespace, object, and object type is being waited on.

The problem with this approach, though, is that it will only catch the queries that are running at the moment you sample. With enough samples, this can be effective, but sampling the system is going to put a small load on it.

Active Session History

If you’re licensed for the Oracle Diagnostic Pack, you’ll have access to the Active Session History. If you aren’t licensed for the Diagnostic Pack, keep reading, but don’t run these queries on any of your Oracle systems – your Oracle auditors will love you, but your accountants won’t.

Active Session History samples data over time and lets us know which queries have been running inside a given time window. The upside of this is that we can dig into the data collected and figure out if the tables we’re interested in have been used inside a given time window.

SELECT  du.username,
        s.sql_text, 
        MAX(ash.sample_time) AS last_access ,
        sp.object_owner ,
        sp.object_name ,
        sp.object_alias as aliased_as ,
        sp.object_type ,
        COUNT(*) AS access_count 
FROM    v$active_session_history ash
        JOIN v$sql s ON ash.force_matching_signature = s.force_matching_signature
        LEFT JOIN v$sql_plan sp ON s.sql_id = sp.sql_id
        JOIN DBA_USERS du ON ash.user_id = du.USER_ID
WHERE   ash.session_type = 'FOREGROUND' 
        AND ash.SQL_ID IS NOT NULL
        AND sp.object_name IS NOT NULL
        AND ash.user_id <> 0
GROUP BY du.username, 
        s.sql_text, 
        sp.object_owner, 
        sp.object_name, 
        sp.object_alias, 
        sp.object_type 
ORDER BY 3 DESC;

You can use this query as a basis to help you isolate who is using which tables and how frequently the queries are being run. If you’re looking for one particular table or index, you can start filtering through the query plan operations to find the object in question.

Three Ways to Find the Truth

So, there you have it – three ways to find some version of the truth. Looking at V$SEGMENT_STATISTICS will let you understand the way tables have been used historically, but it won’t help you understand if a table has been used recently. V$SESSION_WAIT can be sampled to get a better idea of which tables and indexes are being used and who is using them. And if you need an accurate view, and you’re licensed for the Diagnostic Pack, you can use V$ACTIVE_SESSION_HISTORY to review how frequently a table or index has been used over time.

Formatting Oracle SQL*Plus Output (video)

Just because it’s plain text, that doesn’t mean it has to be ugly. Join Jeremiah Peschka and learn the basics of formatting Oracle SQL*Plus output.

Kendra says: That ability to eliminate repeating values is pretty rad!

SQL Server Common Table Expressions

Common table expressions are a feature of SQL that lets a developer create a query that can be referenced multiple times. This feature gives developers another tool to add flexibility or just to simplify code.

Why Common Table Expression?

Why is it called a Common Table Expression (CTE)? Because that’s what the documentation calls it!

No, seriously, that’s it. Different databases have different terms for this feature. That’s what we call it in SQL Server.

What does a CTE do?

A CTE effectively creates a temporary view that a developer can reference multiple times in the underlying query. You can think of the CTE as if it were an inline view.

Here’s a slight re-write of a StackExchange query to find interesting and unanswered questions.

WITH unanswered AS (
    SELECT p.id
    FROM dbo.Posts p
    WHERE (SELECT COUNT(*)
           FROM dbo.Posts a
           WHERE a.ParentId = p.Id
                 AND a.Score > 0) = 0
          AND CommunityOwnedDate IS NULL
          AND ClosedDate IS NULL
          AND ParentId IS NULL
          AND AcceptedAnswerId IS NULL
)
SELECT TOP 2000 unanswered.Id AS [Post Link] ,
       (SUM(t.[Count])
           / 10.0 + u.Reputation
           / 200.0 + p.Score * 100)
       AS Weight
FROM   unanswered
       JOIN dbo.Posts p ON unanswered.Id = p.Id
       JOIN PostTags pt ON pt.PostId = unanswered.Id
       JOIN Tags t ON t.Id = pt.TagId
       JOIN Users u ON u.Id = p.OwnerUserId
GROUP BY unanswered.Id, u.Reputation, p.Score
ORDER BY Weight DESC ;

The CTE, lines 1 – 12, effectively creates a temporary view that we can use throughout the rest of the query. You can also think of it in the same way that you’d think of a derived table (a join to a subquery).
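
If the derived table comparison helps, here’s the same unanswered-questions filter written inline as a subquery in the FROM clause (a sketch trimmed down to just the join, for illustration):

SELECT p.Id
FROM   (SELECT Id
        FROM   dbo.Posts
        WHERE  ParentId IS NULL
               AND AcceptedAnswerId IS NULL) AS unanswered
       JOIN dbo.Posts p ON p.Id = unanswered.Id;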

CTEs make it possible to simplify code, express complex ideas more concisely, or just write code quickly without having to worry as much about structure. They’re a great feature of the SQL language that many people overlook.

If you want to learn more about CTEs, check out our upcoming Advanced Querying and Indexing class in both Chicago and Portland. We’ll cover the basics of CTEs, pitfalls, and performance tuning options.

Comparing SQL Server and Oracle Log Shipping

Both Oracle and SQL Server offer several ways to support disaster recovery scenarios. One of the simplest ways to handle DR in the SQL Server world is to use SQL Server log shipping. Oracle doesn’t offer log shipping as a part of the core product, but it’s possible to set up log shipping in Oracle.

I thought it would be fun to take a look at Oracle and SQL Server log shipping side by side, so let’s take a look!

Log Shipping Set Up

SQL Server wins this one. Reason: there’s a wizard. Even a novice DBA can get log shipping working using the wizard.

Congratulations, Oracle professional, you’re setting this up by hand! If you’re at all handy with scripting, though, once you’ve written the log shipping scripts for the first time, you can port them to any other Oracle installation and be up and running in a few minutes. Setting up Oracle log shipping the first time might just take you a while.

Although SQL Server wins for ease of set up, it’s worth reminding you – Oracle log shipping set up can be easily moved from server to server by deploying scripts and changing one or two variables.

Day to Day Operations

SQL Server’s log shipping adds additional requirements for database administrators. Log shipping sets up an extra set of transaction log backup jobs – existing jobs will need to be disabled or deleted. If a DBA takes a log backup outside of the normal log shipping jobs, that can potentially break log shipping. Switching SQL Server into the SIMPLE recovery model will break log shipping – and yes, this is more common than you’d think.

Oracle’s log shipping works by copying archived redo log files. There are no extra backup jobs to add. Instead, the log shipping jobs copy archived redo log files from the flash recovery area. Rotating out active redo log files will move them into the archive redo log file area. DBAs can take advantage of their existing backup strategy. It is still possible for an Oracle DBA to break log shipping by using NOARCHIVELOG mode or adding tablespaces or files without adding them on the secondary.

Of course, a DBA can also use the FORCE LOGGING option to make sure unlogged (NOLOGGING) operations can’t slip past the redo stream and quietly break log shipping.
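
Turning that on (and checking that it stuck) is a one-liner on the primary:

ALTER DATABASE FORCE LOGGING;

SELECT force_logging FROM v$database;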

One other note – Oracle log shipping will move security between both instances while SQL Server’s log shipping won’t.

Reversing Log Shipping

In both cases, there’s a lot of work that has to be done when you want to reverse log shipping.

With SQL Server, you can gracefully reverse log shipping by using a “tail of the log” backup, but not a lot of people know how to do this properly and avoid the overhead of re-initializing with a full backup. In the end, people frequently re-initialize log shipping, lose data, or just get very upset until they stumble upon the answer.

To reverse Oracle log shipping, we can use the same set up scripts, but change the name of the primary and standby servers. Depending on how things are set up, it may be possible to rapidly start the Oracle instance using the fast recovery area to bring the standby online and immediately resume log shipping.

If you’ve set up log shipping using robust scripts, the Oracle log shipping approach can be easily reversed. If you haven’t, reversing log shipping poses just as many problems on one system as the other.

Querying the Replica

It’s not uncommon to use log shipping to provide a readable replica. This can be very helpful when users can tolerate stale data for reports, but you have not been able to develop a reporting schema.

With SQL Server, we have to put the SQL Server into STANDBY mode. As Kendra Little explains in Reporting From a Log Shipping Secondary in STANDBY Mode, putting SQL Server into STANDBY mode is necessary to read from the secondary, but it can make recovery take longer, which could, in some scenarios, put you outside of your recovery point objective.

With Oracle, this process is easy – at any point during log shipping, we just open the database in read only mode using the command ALTER DATABASE OPEN READ ONLY. Users will be able to read up to the last restored transaction. Once it’s time to start restoring data again, you can put the database back in recovery mode.

Licensing

Licensing is annoying and complex. If I get this wrong, sound off in the comments and I’ll do my best to clarify and correct.

With SQL Server licensing, this may change depending on when you licensed your SQL Server. However, you get one “free” standby instance, as long as you don’t read from that instance. You do have to pay for software assurance on the log shipping secondary.

Oracle’s licensing is simpler: if it’s running, you pay for it. Also, you pay for support.

Oracle wins through simple licensing. If you’re using Oracle Enterprise Edition, you have many more options for disaster recovery, and much more entertaining ways to pay.

Compression and Encryption

When you’re moving backups across the network, compression can help meet your recovery point objective.

SQL Server can compress backups in the Standard Edition of the product. This can either be enabled as a default SQL Server level setting or in the log shipping jobs.
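
If you want compression to be the default for every backup on the instance, it’s a quick change (a sketch – you can also just add WITH COMPRESSION to the individual backup jobs instead):

EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;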

Oracle’s backup compression is only available in Oracle Enterprise Edition or for customers using Oracle’s backup to the cloud feature – Oracle Database Backup Service. However, it’s trivial to leverage in-flight compression when moving files between Oracle instances. In a recent deployment, we used rsync to move files between primary and standby and enabled the -z flag to ensure we got compression.

Starting with SQL Server 2014, SQL Server supports backup encryption. Oracle Standard Edition, on the other hand, does not have backup encryption. In the Oracle world, DBAs are far more likely to use SSH, scp, or rsync to move files between servers – all of these support encrypting data transferred between servers.

There’s no clear winner here – both compression and encryption are possible with both products, but they are built into SQL Server. The Oracle approach lets system administrators adopt the practices that are in use across the enterprise.

Summary

Oracle and SQL Server both have log shipping options available. Both tools are built on different sets of technologies and they both have different strengths and weaknesses.

  • Set Up – SQL Server: Wizard. Oracle: Manual, but scriptable.
  • Operations – SQL Server: Additional caveats to prevent breaking the log chain. Oracle: Mostly automatic; caveats apply, but preventative measures are available.
  • Reversing Log Shipping – SQL Server: Can be done with rocket science and custom scripts. Oracle: Scripts can be deployed on both primary and standby; reversing requires switching several commands.
  • Querying the Standby – SQL Server: Yes, only in STANDBY mode; STANDBY can make restores slow; mitigate with complex scripts. Oracle: Yes, in READ ONLY mode; Oracle must be restarted to resume log shipping.
  • Licensing – SQL Server: Requires software assurance. Oracle: Requires a second, fully licensed, Oracle instance.
  • Compression – SQL Server: SQL Server backup compression. Oracle: OS, storage, or network compression.

Brent Says: WOOHOO, SQL Server actually wins a couple of these!

Kendra Says: Ain’t nothin’ wrong with a wizard, especially when you’re just getting started with something.

Managing Oracle Performance with Statistics

Oracle maintains statistics about data stored on disk. The statistics and histograms help guide the optimizer during query plan compilation. These statistics are usually helpful. Sometimes the statistics are not helpful.

As data changes, the statistics collected by Oracle may become less accurate. If I’ve created a histogram on the column transaction_date, my histogram won’t know about data that’s been added since the histogram was last created or updated. Over time, that histogram becomes less helpful. DBAs can update statistics, but another problem can arise – the new statistics might be worse than the old statistics. DBAs need tools to manage stats and verify success before moving new statistics into production.

Historical Statistics

Oracle gives DBAs a powerful tool in managing statistics – historical statistics retention. By retaining historical copies of statistics, a DBA can ensure that there’s always a plan to back out changes that have caused a performance problem. You can view the current historical retention period by running:

SELECT dbms_stats.get_stats_history_retention() FROM dual;

On my Oracle 12c install (12.1.0.2), the retention period is set to 31 days. Larger systems may want to use a shorter retention period.

Changing the statistics retention is as easy as running:

EXEC dbms_stats.alter_stats_history_retention(5);

Once you have identified the statistics that you want to restore, you can put them back in place using DBMS_STATS.RESTORE_TABLE_STATS:

EXEC dbms_stats.restore_table_stats (
         ownname         => 'SH',
         tabname         => 'CUSTOMERS',
         as_of_timestamp => TO_DATE('2015-01-01 11:38',
                                    'YYYY-MM-DD HH24:MI')
     );

Of course, you would need to know when statistics were last collected. Thankfully, you can review when statistics were collected with the DBA_TAB_STATS_HISTORY view.
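
Something along these lines will show the recent gather history for a single table:

SELECT table_name,
       stats_update_time
FROM   dba_tab_stats_history
WHERE  owner = 'SH'
       AND table_name = 'CUSTOMERS'
ORDER BY stats_update_time DESC;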

Keep in mind that the historical statistics have to go somewhere. Monitor the size of the SYSAUX tablespace and adjust the retention period as needed. You may find that you don’t need all of those historical statistics.

Locking Statistics

Once you’ve found the good statistics, you probably don’t want to keep restoring them over and over again. The next step you can take is to lock the statistics in place. Statistics can be locked at the schema, table, or partition level. Once an object has been locked, anything depending on that object will be skipped during system wide statistics updates.

Locking the statistics on a table is as easy as:

EXEC dbms_stats.lock_table_stats('SH', 'CUSTOMERS');
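
Schema and partition level locks look almost identical – the partition name below is just an example, so substitute one of your own:

EXEC dbms_stats.lock_schema_stats('SH');
EXEC dbms_stats.lock_partition_stats('SH', 'SALES', 'SALES_Q1_2000');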

If an index on the CUSTOMERS table needs a statistics update, we can force Oracle to gather its statistics using the force flag:

EXEC dbms_stats.gather_index_stats(
         ownname => 'SH',
         indname => 'IX_CUST_FNAME',
         force   => TRUE
     );

By locking some stats in place, we can make sure that important queries don’t slip off a good plan. Using dbms_stats.restore_table_stats makes it possible to restore known good stats. There has to be a way to know if your statistics are going to be a problem before you move them into production, right?

Pending Statistics

Some tables are more volatile than others. When data changes and statistics are re-analyzed it may be possible that data is sampled in a way that can cause problems. This is why we have the ability to lock statistics. In some cases, you may want to test changes to statistics before they go into production, rather than locking statistics in place or rolling back to your last known good statistics. In this case, you can create pending statistics.

To create pending statistics, the first step is to disable publishing statistics for a table:

EXEC dbms_stats.set_table_prefs('SH', 'SALES', 'PUBLISH', 'false');

Once publish is set to false, newly gathered statistics will be placed in a holding area rather than being immediately made available for use by Oracle. At this point, the statistics are present, but unusable. Pending statistics can be used by changing the optimizer_use_pending_statistics parameter to TRUE.

ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
/* amazing query goes here */

Once it’s been determined that the stats are good, they can be published. DBAs have the option to publish all stats or just stats for a single table:

/* publish all stats */
EXEC dbms_stats.publish_pending_stats(NULL, NULL) ;
/* publish single object stats */
EXEC dbms_stats.publish_pending_stats('SH', 'SALES') ;

Or, if the statistics stink, you can delete them with the dbms_stats.delete_pending_stats procedure.
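
Throwing away the pending stats is one call, mirroring the publish procedure:

EXEC dbms_stats.delete_pending_stats('SH', 'SALES');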

Exporting Statistics

There’s one other tool DBAs have for managing stats – exporting statistics. After creating a stats table using dbms_stats.create_stat_table, a DBA can export statistics using one of several EXPORT_*_STATS stored procedures. Statistics can be exported for columns, indexes, schemas, and tables, as well as several other database and system level statistics.

The EXPORT_*_STATS procedures will push data into the statistics table, and that table can then be moved to another Oracle system where an IMPORT_*_STATS procedure can be used to bring the stats in. This can be important for development, testing, and staging environments where it may not be possible to restore a large volume of data, but developers need to assess how queries will run in the production environment.
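
Here’s a rough sketch of the round trip, assuming a stats table named STATS_TAB in the SH schema:

/* on the source system: create a stats table and export into it */
EXEC dbms_stats.create_stat_table('SH', 'STATS_TAB');
EXEC dbms_stats.export_table_stats('SH', 'SALES', stattab => 'STATS_TAB');

/* copy the STATS_TAB table to the other system (Data Pump, database link, etc.), then: */
EXEC dbms_stats.import_table_stats('SH', 'SALES', stattab => 'STATS_TAB');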

Statistics Maintenance Doesn’t Have to be a Drag

DBAs have powerful tools in Oracle to make sure statistics don’t cause performance problems. By combining these tools, it’s possible to create a stable set of statistics that leads to predictable performance. To learn more about managing Oracle database statistics, head over to Managing Optimizer Statistics: Basic Topics or refer to the DBMS_STATS documentation for reference material.

The Basics of Oracle Statistics

Databases use statistics about data to determine the best way to query tables. Should the database seek or scan? How many rows will be returned from tables that are being joined? Oracle is no exception, but the way Oracle treats statistics is different from SQL Server.

Oracle Statistics Collection

Just like SQL Server, Oracle can collect statistics manually. Unlike SQL Server, Oracle includes an automatic statistics collection job.

The Oracle automatic stats collection job is scheduled to run, by default, during configured maintenance windows. These maintenance windows are configured for each Oracle database that you’re running. This gives the people who know the data best, the DBAs, a chance to schedule statistics collection whenever it’s needed and whenever it won’t cause problems for production workloads.

Oracle also supplies manual stats gathering at the index, table, schema, or even database level through the DBMS_STATS package:

BEGIN
    DBMS_STATS.gather_table_stats('SH', 'SALES');
END;
/

The statistics gathering utilities have a wealth of options available that allow DBAs to choose to only rebuild existing stats, change the number of buckets in the histogram, choose sample percent, and control how new statistics are generated.
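
For example, here are a few of those knobs on a single gather call – the parameter values are only for illustration, not a recommendation:

BEGIN
    DBMS_STATS.gather_table_stats(
        ownname          => 'SH',
        tabname          => 'SALES',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, /* let Oracle choose the sample size */
        method_opt       => 'FOR ALL COLUMNS SIZE 254',  /* histogram bucket count */
        degree           => 4                            /* degree of parallelism */
    );
END;
/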

Controlling Statistics Collection

How much control do you have over how statistics are collected in SQL Server? Your options are basically: automatic generation is on or off, automatic updates are on or off, and the time your statistics collection job is allowed to run.

There are a number of options that are available in Oracle for determining how statistics are collected. The options exist at multiple levels – from the entire database down to a single table. This allows DBAs to tune for specific scenarios where one table may need to gather statistics with a much higher degree of parallelism, or at a different sample rate, than other tables.

Data Change Threshold

SQL Server will opt to update statistics if 20% of the data, plus 500 rows, has changed.

Oracle won’t update statistics the moment data changes; instead, the percentage of stale rows is tracked on an index by index basis. When the percentage of stale rows goes beyond the STALE_PERCENT value, the statistics are considered stale and need to be collected again. Although the statistics update won’t happen immediately, it is possible to monitor this value and take appropriate action, if necessary.
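
You can check the threshold and see which objects have crossed it with a couple of quick queries (STALE_STATS is only refreshed when the monitoring data is flushed, so treat it as approximate):

SELECT dbms_stats.get_prefs('STALE_PERCENT', 'SH', 'SALES') AS stale_percent
FROM   dual;

SELECT owner, table_name, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'SH'
       AND stale_stats = 'YES';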

The Ascending Key Problem

What happens when you add new data that the engine isn’t aware of? Oracle will make a best guess based on the current distribution of data in the table. There’s additional logic built-in to assume that fewer rows exist the further the queried value gets from the minimum or maximum value that’s currently tracked. This is probably true for many systems, but it could cause problems when a lot of data is coming in. On the bright side, as a DBA, you have full control over when statistics get updated.

Learning More About Oracle Statistics

You can learn more about Oracle’s table statistics in the Optimizer Statistics Concepts documentation, the DBMS_STATS package, and through the DBA_TAB_HISTOGRAMS view.

Oracle Flashback: Undeleting Data

Or, “Oops, I didn’t mean to do that.”

We’ve all had to deal with an accidental deletion or even a dropped table. Sometimes you leave off a WHERE clause, sometimes you drop a table and find out that the change request listed the wrong table. Oracle has a feature called Flashback that gives database professionals the power to view data at different moments in time and take action based on what they find.

Using Flashback on a Dropped Table

No matter how rigorous your practices are, mistakes happen. How do you deal with accidentally dropping a table?

In the SQL Server world, we’d reach for our most recent backup, restore a copy of the database, and hope that the outage didn’t take too long. With Oracle, we can look in the recycle bin.

First we create a table:

CREATE TABLE test ( id INT, things VARCHAR2(50) );
INSERT INTO test VALUES ( 1, 'ONE' );
INSERT INTO test VALUES ( 2, 'TWO' );
COMMIT;

SELECT * FROM test ;

Results:

        ID THINGS
---------- --------------------------------------------------
         1 ONE
         2 TWO

And then we drop the table:

DROP TABLE test;
COMMIT;

SELECT * FROM test;

Results:

SELECT * FROM test
              *
ERROR at line 1:
ORA-00942: table or view does not exist

Oh no, I meant to drop test_two! Now what?

Thankfully, I can take a look in the recycle bin using the command SHOW RECYCLEBIN:

ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
---------------- ------------------------------ ------------ -------------------
BIG_TABLE        BIN$AZEU5yqWPZXgUwpdqMBL1Q==$0 TABLE        2014-08-26:15:51:31
TEST             BIN$CyLJjKhZPPDgUwpdqMAvJw==$0 TABLE        2014-12-26:09:23:46

Okay, now I can see that back in August I dropped a table named BIG_TABLE and our table named TEST is still hanging around. Let’s restore the TEST table:

FLASHBACK TABLE test TO BEFORE DROP RENAME TO test_dropped ;

Crisis averted! I’ve been able to restore the previously deleted table with a new name. I could leave out the RENAME TO portion and restore the table back with its original name, but it’s always possible that someone else has restored the table. Instead, it’s probably a better practice to safely restore the table with a different name and then change the name with an ALTER TABLE:

ALTER TABLE test_dropped RENAME TO test;

Fixing Row Level Mistakes with Flashback

Let’s keep using our test table. Here’s what the data looks like now:

        ID THINGS
---------- --------------------------------------------------
         1 ONE
         2 TWO
         3 THREE
         4 FOUR
         5 FIVE
         6 FIVE

That data isn’t right. I need to know when the right data was present so I can get the right row in place. In order to do that, I can use the VERSIONS clause to tell Oracle to bring back specific versions of rows present for this particular table:

SELECT versions_startscn,
       versions_starttime,
       versions_endscn,
       versions_endtime,
       versions_xid,
       versions_operation,
       test.*
FROM   test
       VERSIONS BETWEEN TIMESTAMP
                TO_TIMESTAMP('2014-12-26 09:30:00', 'YYYY-MM-DD HH24:MI:SS')
                AND TO_TIMESTAMP('2014-12-26 09:49:00', 'YYYY-MM-DD HH24:MI:SS') ;

All the rows, all the time.

Now we can see a history of the row with an id of 6!

  • The row was inserted at 9:38AM.
  • At 9:41, someone corrected the things column to read SIX.
  • At 9:48, someone updated the things column back to FIVE – which we know is wrong.

How do we get the correct row back?

If we’d configured the database to track supplemental log data, we could use the flashback_transaction_query view and grab the UNDO_SQL column to roll back that one statement. Since I haven’t configured supplemental log data tracking, we’ll have to be a little bit trickier.

Thankfully, we can use some UPDATE shenanigans to get exactly the rows that we want:

UPDATE  test t1
SET     things = (SELECT  t2.things
                  FROM    test AS OF TIMESTAMP
                  TO_TIMESTAMP('2014-12-26 09:42:00', 'YYYY-MM-DD HH24:MI:SS') t2
                  WHERE   t1.id = t2.id )
WHERE t1.id = 6 ;

SELECT * FROM test;

Results:

        ID THINGS
---------- --------------------------------------------------
         1 ONE
         2 TWO
         3 THREE
         4 FOUR
         5 FIVE
         6 SIX

Holy cow, we got our row back! Make sure you COMMIT the transaction (remember – Oracle won’t auto-commit like SQL Server).

Looking Into the Past

If you hadn’t guessed, it’s even possible to view the contents of a table as of a moment in time using the AS OF TIMESTAMP clause:

SELECT * FROM test AS OF TIMESTAMP
TO_TIMESTAMP('2014-12-26 09:40:00', 'YYYY-MM-DD HH24:MI:SS');

Results:

        ID THINGS
---------- --------------------------------------------------
         1 ONE
         2 TWO
         3 THREE
         4 FOUR
         5 FIVE
         6 FIVE

This functionality makes it possible for analysts to undo changes to the database, review changes for auditing purposes, or even to recalculate historical reports based on newer formulas.

The Tip of the Iceberg

Undeleting tables and reviewing previous rows is only a small portion of what you can accomplish with Oracle Flashback. Entire transactions can be flashed back, databases can be rolled back to a point in time, or the Data Archive feature can be used to house all changed versions of rows for all time. But even without diving into more advanced features, Oracle Flashback gives database professionals the powerful ability to correct for “oops” deletions in the application.

Oracle Wait Events

Knowing what a query has been waiting on helps you figure out where you need to tune that system. Oracle’s wait interface provides a significant amount of information to help a DBA decide where to put tuning resources. There are multiple ways to see what Oracle has been waiting on:

Wait Classes

Oracle waits are split into multiple classes. We can review instance-wide totals for each wait class by using the v$system_wait_class view. This view aggregates information about an entire instance of Oracle since start up. The view only shows events at a wait class level – this can make it valuable for determining whether a problem is related to I/O or some other portion of the system.
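
Pulling those class-level totals is a quick query – TIME_WAITED is stored in hundredths of a second, so we multiply by 10 to get milliseconds:

select  wait_class,
        total_waits,
        time_waited * 10 AS time_waited_ms
from    v$system_wait_class
where   wait_class != 'Idle'
order by time_waited desc ;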

On a relatively idle Oracle instance, this won’t show much activity at all. It is, however, immediately apparent that waits are broken up into multiple wait classes. An explanation of the different wait classes is available in the Oracle Database Reference – Appendix C – Classes of Wait Events. Most scripts ignore events in the Idle wait class and we’re going to be doing the same.

System Waits using v$system_event

It’s possible to get more details about waits by using the v$system_event view. This view contains wait event timings across the system since the last restart. Although this only gives timings and averages since the system started up, the information helps us spot trends:

select  wait_class,
        event,
        total_waits AS bg_total_waits,
        time_waited_micro / 1000.0 AS time_waited_ms,
        average_wait * 10 AS average_wait_ms ,
        total_waits_fg AS fg_total_waits ,
        time_waited_micro_fg / 1000.0 AS fg_time_waited_ms ,
        average_wait_fg * 10 AS fg_average_wait_ms
from    v$system_event
where   wait_class != 'Idle'
order by time_waited_micro desc ;

Wait, why are we multiplying by 10 for the average wait columns? Well, the AVERAGE_WAIT columns are stored in hundredths of a second, and we’d like everything to be in one uniform unit. We also divide the microsecond columns by 1,000 since a microsecond is a mindbogglingly tiny duration. I’ve gone and converted everything to milliseconds here.

There are more columns reported by v$system_event, but the ones in this query are likely to be the most interesting to beginners. The FG columns display the waits from foreground (user facing) sessions. Other waits are happening elsewhere in Oracle. They may be slowing down the system, but they are not directly caused by end users.

Single Session Waits

What if we want to know what’s going on within one session? What then?

The v$session_event view contains information about currently executing sessions. It’s remarkably similar to the v$system_event view:

select SID,
       wait_class,
       event,
       total_waits,
       max_wait * 10 AS max_wait_ms ,
       time_waited_micro / 1000.0 AS time_waited_ms,
       average_wait * 10 AS average_wait_ms
from   v$session_event
where  wait_class != 'Idle' ;

v$session_event displays all of the waits for each active session. Obviously, we want to filter out the Idle waits since they’re idle and, for our purposes, ignorable.

v$waitclassmetric_history: A History of Waiting

What if we want finer granularity about how an Oracle instance has been performing? v$system_event lets us review waits since the beginning of database time. v$session_event lets us see waits for currently running queries. What else is there?

The view v$waitclassmetric_history gives us some insight into what’s happened over the last hour, but only by wait class. This isn’t as limiting as you might think – sometimes it’s only necessary to know that we were waiting on disk or concurrency. The data in v$waitclassmetric_history is broken down into one minute buckets.

select  swc.wait_class,
        begin_time ,
        end_time ,
        SUM(wcmh.average_waiter_count) avg_waiter_count,
        SUM(wcmh.dbtime_in_wait) AS dbtime_in_wait ,
        SUM(wcmh.time_waited) * 10 AS time_waited_ms,
        SUM(wcmh.wait_count) AS wait_count ,
        SUM(wcmh.time_waited_fg) * 10 AS time_waited_fg_ms,
        SUM(wcmh.wait_count_fg) AS wait_count_fg
from    V$SYSTEM_WAIT_CLASS swc, v$waitclassmetric_history wcmh
where   swc.wait_class != 'Idle'
        AND swc.wait_class_id = wcmh.wait_class_id
group by swc.wait_class, begin_time, end_time
order by 2 desc, 3 desc, 6 desc ;

Once again, we’re multiplying time_waited by 10 to get the time in milliseconds. These times are stored as floating point numbers, so we aren’t losing observable fidelity in our data.

The last minute of waiting

Reviewing the output, we can see a lot of information. This only shows the previous minute (which you could also get from v$waitclassmetric), but when you query your own systems, you’ll see what I mean.

dba_hist: The Long View

If you’ve been licensed for the Oracle tuning and diagnostic packs, you have another tool at your disposal – the dba_hist views. Specifically, the dba_hist_system_event view. This contains a history of wait stats, summarized at one hour intervals, since you cleared out the data. This is part of Oracle’s AWR (Automatic Workload Repository).

The values stored in dba_hist_system_event are a running total of wait information. You can query it, but remember that these are cumulative values. To get helpful metrics for each hour, you have to do something like this:

select  snap.END_INTERVAL_TIME,
        dhse2.wait_class,
        dhse2.event_name ,
        dhse2.TOTAL_WAITS - dhse1.total_waits AS number_of_waits,
        (dhse2.time_waited_micro - dhse1.time_waited_micro) / 1000 AS time_waited_ms
FROM    DBA_HIST_SYSTEM_EVENT dhse1
        JOIN DBA_HIST_SYSTEM_EVENT dhse2 ON dhse2.snap_id = dhse1.snap_id + 1
                                            AND dhse2.wait_class_id = dhse1.wait_class_id
                                            AND dhse2.event_id = dhse1.event_id
        JOIN DBA_HIST_SNAPSHOT snap ON dhse2.snap_id = snap.snap_id
WHERE   snap.END_INTERVAL_TIME > SYSDATE - 1
order by dhse2.snap_id desc ;

Remember – this feature costs money, so don’t run it on your production machines unless you’re positive that you’re already licensed for it.

Putting it all together

By choosing the right view (and maybe the right licensing), you can review Oracle wait stats and start getting to the bottom of performance issues. Wait events aren’t the be-all and end-all of Oracle performance tuning, but they are a good first step toward determining where the problems lie.

Oracle HA & DR Basics [With Video]

Oracle has different availability and recovery options from SQL Server. Being aware of what these options are and how you can use them will go a long way toward keeping your Oracle boxes just as safe as your SQL Server boxes.

Here’s my 17-minute video explaining it, or you can keep reading and get your learn on silently.

Protect the precious data!

Data Guard

Oracle Data Guard is a way to set up and maintain standby databases. There’s a primary (yay!) and one or more standby databases. A standby is initialized from a backup of the primary.

If Data Guard sounds like SQL Server log shipping to you, you’d be close to right.

If Data Guard sounds like SQL Server database mirroring, you’d also be close to right.

Oracle Data Guard is something of a hybrid between SQL Server log shipping and database mirroring.

When you’re using a physical standby database, Data Guard works by streaming redo log records from the primary to the secondaries. The redo logs are applied at the secondary and the secondary database is kept up to date (for values of up to date). The physical standby is an exact copy of the primary database.

A logical standby database is logically the same as the production database. It has the same data, but the structures can differ. In a logical standby, redo logs are transformed into SQL statements and applied to the standby database. This flexibility gives the DBA options like creating reporting focused indexes, using Data Guard for rolling upgrades, or other helpful scenarios. There are some limitations and considerations for using a logical standby, but the option is there.

It’s also possible to use a snapshot standby. A snapshot standby only gets changes applied periodically. The difference from logical and physical standbys is that the snapshot standby is fully updatable. Users can make changes and validate different scenarios. Whenever you’re ready to resynchronize, the changes made are discarded, the database is converted into a physical standby and changes can be applied again.
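
If you’re curious what that looks like, the conversion in each direction is a single command run against the standby (a sketch – the standby needs to be mounted, and converting back relies on Flashback Database to discard your changes):

ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
/* test to your heart's content, then throw the changes away: */
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;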

Holy cow, that’s a lot to think about. Plus, there are three modes of operation for the Data Guard set up:

  • Maximum performance
  • Maximum availability
  • Maximum protection

Maximum performance is the default configuration for Data Guard. In this mode, asynchronous log record transport is used. Some data loss is possible, depending on network latency and how much data the secondary has been able to process.

Maximum availability tries to stay online and provide zero data loss. The maximum availability mode will synchronously send log records to the standby, but if the standby isn’t available, the primary will wait until a timeout expires. Data Guard will regularly attempt to contact the standby (whenever logs are switched). Once the standby comes back online, Data Guard will start sending logs to the standby. Some data loss is also possible in maximum availability mode.

Maximum protection is synchronous. If the standby goes away, the primary will stall and eventually fail. If there is more than one standby database, the primary is going to wait for at least one standby to respond. Needless to say, it’s recommended to have more than one standby if you’re going to be operating in maximum protection mode.

To complicate matters even further, it’s possible to have a Data Guard Broker as a witness to help ensure automatic failover for both databases and for clients connecting to Oracle.

We can use Oracle Data Guard to achieve very small windows of data loss with short failovers.

Active Data Guard

Active Data Guard is an additional licensed feature for Oracle Data Guard. Active Data Guard makes it possible to read from the standby servers while still applying log changes to the instance. Active Data Guard is roughly analogous to SQL Server’s AlwaysOn Availability Groups.

Active Data Guard also includes a feature called Far Sync. In this scenario, the DBA sets up a limited Far Sync instance so that Active Data Guard can be used for off-site purposes. The Far Sync instance itself is a limited Oracle instance – it can’t be queried. The purpose of the Far Sync instance is to stand in as an intermediary: log records are moved to the Far Sync instance and then sent on to the geographically distant standby.

Basically – the Far Sync instance can be used for disaster recovery. Active Data Guard, with the Far Sync option, gives you one solution for both HA and DR. Depending on how Active Data Guard has been configured, it can be used to allow zero to minimal data loss and support very fast recovery times.

If you lose one piggy bank, you’ve got more, right?

Oracle RAC

Oracle Real Application Clusters (RAC) is a way to ensure high availability with minimal failover. You can think of it as somewhat similar to SQL Server’s failover clustering, but with RAC all RAC nodes are active and serving data. Oracle RAC does this by having all RAC nodes share the same set of disks. This can lead to some complexities, but it presents a lot of availability options for DBAs looking for a combination of performance and availability.

RAC provides several options for failover. Clients can be made RAC aware – a client can move between different RAC nodes if one node goes down. Depending on requirements, the query can return an error to the client or keep running on the new RAC node. RAC also provides a RAC listener that enables automatic failover between RAC nodes.

DBAs can upgrade RAC with minimal downtime. Individual nodes can be upgraded through a rolling upgrade. The other nodes will see increased load during this time, but that’s the price to pay for the ability to keep serving data during an upgrade. Occasionally updates will come along that can’t be applied through a rolling update process, so make sure you read the release notes.

Maximum Availability Architecture

You can’t read more than a few paragraphs about Oracle HA & DR without seeing the phrase “Maximum Availability Architecture.” Oracle publishes a document about the Oracle Maximum Availability Architecture (MAA). Basically, the MAA is a pre-defined set of features that combines multiple Oracle technologies to create a highly available and redundant Oracle environment. The MAA covers instance failure, storage failure, human error, and the loss of a data center. But, like all HA solutions, start with the business requirements and map them to the technology solutions.
