
Moving Databases Made Easy – SQL Server on a File Share

We’ve all had to move a database. For databases above a small size, this is a painful process. While there are a number of ways that you can move a database without downtime, they all involve a lot of extra work. What if there were something easier?

Every database is unique and precious.

What’s the Real Problem?

For most people, moving a database is a one-time thing – you’re moving to a new server. That’s pretty much it. I spent 5 minutes staring at my ceiling trying to figure out other reasons that most people might move a database from one SQL Server to another.

For some people, though, databases could move on a regular basis. Think about it like this – you host a software as a service (SaaS) application. You’ve got one database per relative unit of work (customer, forum, cat picture collection). These databases can sit idle for months before, suddenly, there’s tremendous activity.

Normally, you wouldn’t make each SQL Server capable of handling peak load for any section of your SaaS. Instead, you would probably have a few servers ready to handle the worst load and move your busiest database over to those servers.

Typically, this is going to involve a small outage and setting up something like database mirroring or log shipping in order to move the database with minimal down time.

SMB to the Rescue!

Microsoft has supported housing SQL Server databases on a file share for a while now. If you’re not familiar with the idea, I recommend checking out the MSDN article Install SQL Server with SMB Fileshare as a Storage Option.

Moving a database with SMB is straightforward – detach the database from the old server, attach it to the new server, and bring it online. For most of us, this quick change is much preferred to other methods of moving the database.
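
If you haven’t done it before, the move is only a few statements. Here’s a minimal sketch, assuming a hypothetical database named SalesDB whose files already live on \\FileServer\SQLData:

/* On the old server: kick everyone out and detach the database */
ALTER DATABASE SalesDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_detach_db @dbname = N'SalesDB';

/* On the new server: attach the same files straight from the share */
CREATE DATABASE SalesDB
ON (FILENAME = N'\\FileServer\SQLData\SalesDB.mdf'),
   (FILENAME = N'\\FileServer\SQLData\SalesDB_log.ldf')
FOR ATTACH;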

SMB File Shares and You

Although SMB file shares may be a new idea to many SQL Server DBAs, they’re worth considering for new SQL Servers. SMB file shares make it easy to move databases between SQL Servers. If you want to learn more about using SMB, check out the MSDN article on using SMB and SQL Server.

Brent says: If your storage area network (SAN) is using older connectivity methods like 2Gb or 4Gb FC, and your Ethernet is 10Gb, you might even see a performance boost.

How Do I Know My Query Will Be Fast In Production?

We’ve all had this problem before – the production server is significantly more powerful than the development server. You tune a query and you’re pretty sure you’ve made it faster, but there’s that doubt in the back of your mind. There are so many things that can change performance – memory, CPU count, disk performance – how can you tell that you’ve made a query better?

Measuring Slowness

When you know you have a slow query that you need to tune, run it in production one last time. But, before you do that, gather CPU and IO metrics using STATISTICS IO and STATISTICS TIME:

SET STATISTICS IO, TIME ON; 

EXEC dbo.MyAwfulQuery @slow = 'yes';

You’ll get a bunch of jibber jabber in the “Messages” tab that documents how much CPU time was spent compiling and running your query as well as information about logical IO. Save this off to a text file for now.

Measuring Fastness

Go to your dev server. Tune your query. I’ll wait.

As you’re tuning your query, pay attention to the CPU time and logical IO numbers that you’re getting back from STATISTICS IO and STATISTICS TIME. These are a relative indication that things are getting better. If the dev hardware has different CPUs, you’ll likely see different CPU numbers on that dev machine, but that’s OK. We’re looking to compare total logical IO and total CPU time in milliseconds.

If you want an easy route to compare your results, you can paste the output from STATISTICS IO and STATISTICS TIME into statisticsparser.com. This will go through the results and push the data into a nice table for your perusal.

How Much Faster Will My Query Be?

Using STATISTICS IO and STATISTICS TIME to tune helps me figure out how well I’m doing compared to where I started. Some queries move around a lot of data. In these cases, it can help to know how much faster production is than dev.

If you really want a reasonable comparison between dev and production, you can find some guidance in the Fast Track Data Warehouse Reference Guide. There’s a section of this document about measuring the MCR (Maximum Consumption Rate) and BCR (Base Consumption Rate) of your SQL Server. These numbers reflect 100% cache reads and 100% disk reads, respectively.

Once you have MCR and BCR, you can make effective predictions about how query performance could differ between the two environments. This gives you the power to predict, with some margin of error, how fast newly developed queries can run.
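
As a hypothetical example: if dev’s MCR works out to 200 MB/sec and production’s works out to 800 MB/sec, a query that chews through 2,000 MB of cached data in roughly 10 seconds on dev could plausibly finish in around 2.5 seconds in production, all else being equal. The numbers are invented – the point is that the ratio between the two environments gives you a rough scaling factor.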

But is it Faster?

Between STATISTICS IO and STATISTICS TIME you can determine if a query is going to use fewer resources. The MCR and BCR give you relative numbers for how fast a server can process data. Between these measurements, you can predict how a query will perform between environments and what kind of impact your changes will have.

Brent says: as you get used to doing these measurements, you’ll also be able to tell if a new dev-only query is going to be fast enough in production, too. You’ll know how many logical reads counts as slow (or fast) in your environment.

Erik says: The hardest thing for me to ever get was commensurate data to test on in dev. If you don’t have it, get it. If you have it, don’t waste it.

Identity Columns in Oracle

Many databases have the ability to create automatically incremented numbers. In SQL Server, we can use an `IDENTITY` property on a column to get autoincrementing numbers. But what can we do in Oracle?
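
For reference, this is the SQL Server pattern we’re trying to reproduce (the table is just an example):

CREATE TABLE my_table (
    id INT IDENTITY(1,1) PRIMARY KEY,
    whatever VARCHAR(30)
);

/* No id needed on the insert - SQL Server generates it */
INSERT INTO my_table (whatever) VALUES ('Erik');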

Sequences

Sequences work in all recent versions and editions of Oracle. The default way to do this is pretty simple:

CREATE TABLE my_table (
    id NUMBER PRIMARY KEY,
    whatever VARCHAR2(30)
);

CREATE SEQUENCE my_table_seq ;

CREATE OR REPLACE TRIGGER my_table_insert 
BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
    SELECT my_table_seq.nextval 
    INTO   :new.id
    FROM   dual ;
END ;
/

/* Or, if you hate triggers, you can make sure all inserts look like: */
INSERT INTO my_table (id, whatever)
VALUES (my_table_seq.nextval, 'Erik');

Now that SQL Server supports sequences, you could use sequences to help with portability between both systems, but there’s a gotcha – SQL Server’s next value syntax is different from Oracle’s. That means you’d need to use a trigger-based solution if you didn’t want code to change.
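
To make the gotcha concrete, here’s the same insert written for each product (the sequence name is the one from the example above):

/* Oracle */
INSERT INTO my_table (id, whatever)
VALUES (my_table_seq.nextval, 'Erik');

/* SQL Server */
INSERT INTO my_table (id, whatever)
VALUES (NEXT VALUE FOR my_table_seq, 'Erik');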

Identities

What about identities? With SQL Server, we just mark a column as having an identity value. Magic! It’s done for us. Oracle is a commercial database, surely it has a magic option, right?

Oracle introduced identity columns with Oracle 12c. This lets developers keep using the code they’re used to. Check this out:

CREATE TABLE my_table (
    id NUMBER GENERATED AS IDENTITY PRIMARY KEY,
    whatever VARCHAR2(30)
);

This lets you keep using the same kind of inserts that you would normally use for SQL Server, with one exception. Oracle gives you some flexibility for generating identities – you can generate them `ALWAYS` (the default) or `BY DEFAULT` (when a value isn’t provided).

If you really want to mimic SQL Server’s identity behavior, including `IDENTITY_INSERT`, then you can create your table like this:

CREATE TABLE my_table (
    id NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    whatever VARCHAR2(30)
);

You can even go so far as to protect yourself from trying to insert `NULL` values into the table using the `ON NULL` clause:

CREATE TABLE my_table (
    id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY PRIMARY KEY,
    whatever VARCHAR2(30)
);

Sequences and Identities in Oracle and SQL Server

If you want to make sure you can reuse as much code as possible between Oracle and SQL Server, identity columns are a safe bet. The vast majority of code won’t need to change and your application will be none the wiser.

Brent says: when people talk about porting their application from one database back end to another, this is a good example of how it’s so tricky. Sure, both databases have exactly the same functionality available, but even something as simple as an identity column is syntactically different between them.

Erik says: In case anyone’s wondering, it didn’t hurt getting inserted into an Oracle table. The bathrooms in Tablespace were very clean.

Getting BLOBS out of the Database

Raise your hand if you’re storing BLOBs in the database.

I get to say that a lot during our training. Every time I say it, nearly every hand in the room goes up. Some hands go up faster than others, but eventually nearly every hand is up.

It’s a design that happens far more often than it should, but it does happen.

Why Store BLOBs in the Database?

People put binary data in the database because they need the data to be point in time consistent with the rest of the database. It’s not enough to save space in the database if you can’t recover the file to a moment in time.

Think about this scenario:

  1. 8:00AM – Brent saves a new contract.
  2. 9:00AM – Kendra modifies the contract and saves it.
  3. 9:30AM – Jeremiah modifies the contract and saves it.

If the contract is being stored inside the database, we can recover to any point in time and have the appropriate version of the document. It may not be the most current version of the contract, but it’s the version of the document that’s consistent with the rest of the database.

Why Not Use the Filesystem?

File systems are great. They do an excellent job of storing files and organizing them into folders. File systems don’t do a great job of being point in time consistent with a relational database. There’s no transaction log to help us roll back writes that are in flight.

It’s a lot of work to get a full database backup and a file system backup to be remotely close to the same point in time. Restoring a database to a point in time is easy. Restoring a file system to a point in time is close to impossible.

Why Not Use an Appliance?

There’s a third option available – some kind of appliance that sits between the database and the file system. The appliance should manage file metadata and provide all access to the files in the file system.

Commercial databases ship with features that sound similar. SQL Server has a FILESTREAM data type and Oracle has both BFILE and ORD data types. Both of these types let the database interact with files in the file system. But they still have a problem – you’re stuck managing data through the database. Let’s be clear: this is not an appliance.

Content Addressable Storage (CAS) is a mature technology. The idea behind CAS is that a hardware device handles the metadata for a given incarnation of a file. A developer sends a file into the CAS appliance and the CAS appliance returns a pointer to the file. Whenever the file changes, a new copy is created and a new handle is returned to the developer. Files can’t be modified, so anything stored in the database can only point to the right version of the file.

We can combine this with a database pretty easily. Instead of storing a file path in the database, we store the handle that we get back from the CAS system.
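
Here’s a minimal sketch of what that might look like – the table and column names are made up, and CasHandle is just the opaque string the appliance hands back:

CREATE TABLE dbo.Contracts (
    ContractId INT IDENTITY(1,1) PRIMARY KEY,
    CustomerId INT NOT NULL,
    CasHandle  VARCHAR(128) NOT NULL, /* immutable handle returned by the CAS appliance */
    SavedAt    DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

Each new version of the contract gets a new handle and a new row, so a point-in-time restore of the database always points at the matching version of the file.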

How Does CAS Solve the Problem?

The main reason people store BLOBs in the database is so they can get blobs that are consistent with a point in time in the database. By using a storage device that cannot be modified (the CAS), we can make sure that the location we’ve stored in the database is always the right location – there’s no way to tamper with the files that we’re storing, so whatever gets stored in the database is correct.

There’s overhead to this approach – old data may never get cleared out. Typically, though, CAS systems store data on large, slow disks. There’s little need for the throughput that we’d use for a relational database storage system. Do those cat pictures really need to be stored on RAID 10 SSDs? Moving BLOB storage outside of the relational database will free up resources for serving queries. Picking the right way to store your BLOB data will make it easier to scale your system.

Kendra says: Finding the right storage for large objects is a huge architectural decision that impacts performance and availability. Choose wisely!

Brent says: Want your SQL Server’s DBCCs and backups to run faster? This can help a lot.

Doug says: “It’s a lot of work to get a full database backup and a file system backup to be remotely close to the same point in time.” -> This is a major drawback that’s easily overlooked. Make sure everyone’s okay with that possibility when choosing the file system for BLOB data.

Which Tables are Being Used in Oracle?

When we’re making changes to a database, it’s important to know which tables and indexes are in use. If we don’t need an index, maybe we can drop it. If nobody is using a table, do we still need it?

We can find out which tables and indexes are in use through two different mechanisms.

V$SEGMENT_STATISTICS

Of course Oracle tracks the tables in use. In this case, we can see this information in the view V$SEGMENT_STATISTICS. This view contains a lot of information about all of the different segments in your Oracle database. A segment is the set of extents (data blocks) allocated to a single database object. The V$SEGMENT_STATISTICS view needs to be pivoted to get the information we need – it contains one row for reads, one for writes, etc.

This basic query will let us see the volume of activity on different segments in the database:

SELECT  vss.owner,
        vss.object_name,
        vss.subobject_name,
        vss.object_type ,
        vss.tablespace_name ,
        SUM(CASE statistic_name WHEN 'logical reads' THEN value ELSE 0 END
            + CASE statistic_name WHEN 'physical reads' THEN value ELSE 0 END) AS reads ,
        SUM(CASE statistic_name WHEN 'logical reads' THEN value ELSE 0 END) AS logical_reads ,
        SUM(CASE statistic_name WHEN 'physical reads' THEN value ELSE 0 END) AS physical_reads ,
        SUM(CASE statistic_name WHEN 'segment scans' THEN value ELSE 0 END) AS segment_scans ,
        SUM(CASE statistic_name WHEN 'physical writes' THEN value ELSE 0 END) AS writes
FROM    v$segment_statistics vss
WHERE   vss.owner NOT IN ('SYS', 'SYSTEM')
GROUP BY vss.owner,
        vss.object_name ,
        vss.object_type ,
        vss.subobject_name ,
        vss.tablespace_name
ORDER BY reads DESC;

This approach will show us information about reads, writes, and segment scans (full table scans) since the dynamic view was last cleared. Typically, this is going to show us information since the database was last started.

The downside of V$SEGMENT_STATISTICS is that it covers the lifetime of the instance. If you happen to have great database uptime, a table or index could show up in V$SEGMENT_STATISTICS even though it hasn’t been used in weeks or months.

Query Waits

In order to determine which queries are hitting which tables, we can start sampling the current waits in the system. V$SESSION_WAIT contains a bunch of information, including the cryptic p1, p2, and p3 columns. For disk-related waits, we only need the p1 and p2 values. These refer to the file and block number that are part of the wait. To see what’s going on, we can just do something like:

SELECT  vs.username ,
        vsw.wait_class,
        vsw.EVENT AS wait_type ,
        vsw.WAIT_TIME_MICRO / 1000 AS wait_time_ms ,
        vsw.TIME_REMAINING_MICRO / 1000 AS time_remaining_ms ,
        vsw.STATE ,
        de.SEGMENT_NAME ,
        de.SEGMENT_TYPE, 
        de.OWNER ,
        de.TABLESPACE_NAME
FROM    V$SESSION_WAIT vsw
        JOIN V$SESSION vs ON vsw.SID = vs.SID
        LEFT JOIN DBA_EXTENTS de ON vsw.p1 = de.file_id
                                    AND vsw.p2 BETWEEN de.BLOCK_ID AND (de.BLOCK_ID + de.BLOCKS)
WHERE   vsw.wait_class <> 'Idle'
        AND vs.username IS NOT NULL 
ORDER BY wait_time_ms DESC;

This query will show us:

  • Who’s running a query
  • What their current wait is
  • If it’s a disk wait, which tablespace, object, and object type is being waited on.

The problem with this approach, though, is that it will only catch the queries that are running at the moment you sample. With enough samples, this can be effective, but sampling the system is going to put a small load on it.

Active Session History

If you’re licensed for the Oracle Diagnostic Pack, you’ll have access to the Active Session History. If you aren’t licensed for the Diagnostic Pack, keep reading, but don’t run these queries on any of your Oracle systems – your Oracle auditors will love you, but your accountants won’t.

Active Session History samples data over time and lets us know which queries have been running inside a given time window. The upside of this is that we can dig into the data collected and figure out if the tables we’re interested in have been used inside a given time window.

SELECT  du.username,
        s.sql_text, 
        MAX(ash.sample_time) AS last_access ,
        sp.object_owner ,
        sp.object_name ,
        sp.object_alias as aliased_as ,
        sp.object_type ,
        COUNT(*) AS access_count 
FROM    v$active_session_history ash
        JOIN v$sql s ON ash.force_matching_signature = s.force_matching_signature
        LEFT JOIN v$sql_plan sp ON s.sql_id = sp.sql_id
        JOIN DBA_USERS du ON ash.user_id = du.USER_ID
WHERE   ash.session_type = 'FOREGROUND' 
        AND ash.SQL_ID IS NOT NULL
        AND sp.object_name IS NOT NULL
        AND ash.user_id <> 0
GROUP BY du.username, 
        s.sql_text, 
        sp.object_owner, 
        sp.object_name, 
        sp.object_alias, 
        sp.object_type 
ORDER BY 3 DESC;

You can use this query as a basis to help you isolate who is using which tables and how frequently the queries are being run. If you’re looking for one particular table or index, you can start filtering through the query plan operations to find the object in question.
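
For example, to check on one particular table (the schema and table names here are hypothetical), you could narrow things down to just that object:

SELECT MAX(ash.sample_time)          AS last_access ,
       COUNT(DISTINCT ash.sample_id) AS sample_count
FROM   v$active_session_history ash
       JOIN v$sql_plan sp ON ash.sql_id = sp.sql_id
WHERE  sp.object_owner = 'SH'      /* hypothetical schema */
AND    sp.object_name  = 'SALES';  /* hypothetical table */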

Three Ways to Find the Truth

So, there you have it – three ways to find some version of the truth. Looking at V$SEGMENT_STATISTICS will let you understand the way tables have been used historically, but it won’t help you understand if a table has been used recently. V$SESSION_WAIT can be sampled to get a better idea of which tables and indexes are being used and who is using them. And if you need an accurate view, and you’re licensed for the Diagnostic Pack, you can use V$ACTIVE_SESSION_HISTORY to review how frequently a table or index has been used over time.

Formatting Oracle SQL*Plus Output (video)

Just because it’s plain text, that doesn’t mean it has to be ugly. Join Jeremiah Peschka and learn the basics of formatting Oracle SQL*Plus output.

 Kendra says: that ability to eliminate repeating values is pretty rad!

SQL Server Common Table Expressions

Common table expressions are a feature of SQL that lets a developer create a query that can be referenced multiple times. This feature gives developers another tool to add flexibility or just to simplify code.

Why Common Table Expression?

Why is it called a Common Table Expression (CTE)? Because that’s what the documentation calls it!


No, seriously, that’s it. Different databases have different terms for this feature. That’s what we call it in SQL Server.

What does a CTE do?

A CTE effectively creates a temporary view that a developer can reference multiple times in the underlying query. You can think of the CTE as if it were an inline view.

Here’s a slight re-write of a StackExchange query to find interesting and unanswered questions.

WITH unanswered AS (
    SELECT p.id
    FROM dbo.Posts p
    WHERE (SELECT COUNT(*)
           FROM dbo.Posts a
           WHERE a.ParentId = p.Id
                 AND a.Score > 0) = 0
          AND CommunityOwnedDate IS NULL
          AND ClosedDate IS NULL
          AND ParentId IS NULL
          AND AcceptedAnswerId IS NULL
)
SELECT TOP 2000 unanswered.Id AS [Post Link] ,
       (SUM(t.[Count])
           / 10.0 + u.Reputation
           / 200.0 + p.Score * 100)
       AS Weight
FROM   unanswered
       JOIN dbo.Posts p ON unanswered.Id = p.Id
       JOIN dbo.PostTags pt ON pt.PostId = unanswered.Id
       JOIN dbo.Tags t ON t.Id = pt.TagId
       JOIN dbo.Users u ON u.Id = p.OwnerUserId
GROUP BY unanswered.Id, u.Reputation, p.Score
ORDER BY Weight DESC ;

The CTE, lines 1 – 12, effectively creates a temporary view that we can use throughout the rest of the query. You can also think of it in the same way that you’d think of a derived table (a join to a subquery).

CTEs make it possible to simplify code, express complex ideas more concisely, or just write code quickly without having to worry as much about structure. They’re a great feature of the SQL language that many people overlook.

If you want to learn more about CTEs, check out our upcoming Advanced Querying and Indexing class in both Chicago and Portland. We’ll cover the basics of CTEs, pitfalls, and performance tuning options.

Comparing SQL Server and Oracle Log Shipping

Both Oracle and SQL Server offer several ways to support disaster recovery scenarios. One of the simplest ways to handle DR in the SQL Server world is to use SQL Server log shipping. Oracle doesn’t offer log shipping as a part of the core product, but it’s possible to set up log shipping in Oracle.

I thought it would be fun to take a look at Oracle and SQL Server log shipping side by side, so let’s take a look!

Log Shipping Set Up

SQL Server wins this one. Reason: there’s a wizard. Even a novice DBA can get log shipping working using the wizard.

Congratulations, Oracle professional: you’re setting this up by hand! Setting up Oracle log shipping for the first time might take you a while, but if you’re at all handy with scripting, you can port those scripts to any other Oracle installation and be up and running in a few minutes.

Although SQL Server wins for ease of set up, it’s worth reminding you – Oracle log shipping set up can be easily moved from server to server by deploying scripts and changing one or two variables.

Day to Day Operations

SQL Server’s log shipping adds additional requirements for database administrators. Log shipping sets up an extra set of transaction log backup jobs – existing jobs will need to be disabled or deleted. If a DBA takes a log backup outside of the normal log shipping jobs, that can potentially break log shipping. Switching SQL Server into the SIMPLE recovery model will break log shipping – and yes, this is more common than you’d think.

Oracle’s log shipping works by copying archived redo log files. There are no extra backup jobs to add. Instead, the log shipping jobs copy archived redo log files from the flash recovery area. Rotating out active redo log files will move them into the archive redo log file area. DBAs can take advantage of their existing backup strategy. It is still possible for an Oracle DBA to break log shipping by using NOARCHIVELOG mode or adding tablespaces or files without adding them on the secondary.

Of course, a DBA can also use the FORCE LOGGING option to keep NOLOGGING operations from skipping the redo log and quietly breaking log shipping.

One other note – Oracle log shipping will move security between both instances while SQL Server’s log shipping won’t.

Reversing Log Shipping

In both cases, there’s a lot of work that has to be done when you want to reverse log shipping.

With SQL Server, you can gracefully reverse log shipping by using a tail-of-the-log backup, but not a lot of people know how to do this properly so they can avoid the overhead of re-initializing with a full backup. In the end, people frequently re-initialize log shipping, lose data, or just get very upset until they stumble upon the answer.
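
The rough shape of it looks like this – the database name and path are placeholders:

/* On the old primary: back up the tail of the log and leave the database in RESTORING */
BACKUP LOG SalesDB
TO DISK = N'\\FileServer\Backups\SalesDB_tail.trn'
WITH NORECOVERY;

/* On the old secondary: restore that tail, then bring it online as the new primary */
RESTORE LOG SalesDB
FROM DISK = N'\\FileServer\Backups\SalesDB_tail.trn'
WITH RECOVERY;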

To reverse Oracle log shipping, we can use the same set up scripts, but change the name of the primary and standby servers. Depending on how things are set up, it may be possible to rapidly start the Oracle instance using the fast recovery area to bring the standby online and immediately resume log shipping.

If you’ve set up log shipping using robust scripts, the Oracle log shipping approach can be easily reversed. If you haven’t, reversing log shipping poses just as many problems on one system as the other.

Querying the Replica

It’s not uncommon to use log shipping to provide a readable replica. This can be very helpful when users can tolerate stale data for reports, but you have not been able to develop a reporting schema.

With SQL Server, we have to put the secondary into STANDBY mode. As Kendra Little explains in Reporting From a Log Shipping Secondary in STANDBY Mode, putting SQL Server into STANDBY mode is necessary to read from the secondary, but it can make restores take longer, which could, in some scenarios, put you outside of your recovery point objective.

With Oracle, this process is easy – at any point during log shipping, we just open the database in read-only mode using the command ALTER DATABASE OPEN READ ONLY. Users will be able to read up to the last restored transaction. Once it’s time to start restoring data again, you can start the database in recovery mode.
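
In SQL*Plus on the standby, the flow looks roughly like this – this assumes a manually scripted standby like the one described above, not Data Guard:

-- Let the report users in
ALTER DATABASE OPEN READ ONLY;

-- When it's time to apply more archived redo, drop back to MOUNT and resume recovery
SHUTDOWN IMMEDIATE
STARTUP MOUNT
RECOVER STANDBY DATABASE;
-- depending on how the standby was built, this last step may instead be
-- RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;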

Licensing

Licensing is annoying and complex. If I get this wrong, sound off in the comments and I’ll do my best to clarify and correct.

With SQL Server, the licensing details may change depending on when you licensed your SQL Server. However, you get one “free” standby instance, as long as you don’t read from that instance. You do have to pay for software assurance on the log shipping secondary.

Oracle’s licensing is simpler: if it’s running, you pay for it. Also, you pay for support.

Oracle wins through simple licensing. If you’re using Oracle Enterprise Edition, you have many more options for disaster recovery, and much more entertaining ways to pay.

Compression and Encryption

When you’re moving backups across the network, compression can help meet your recovery point objective.

SQL Server can compress backups in the Standard Edition of the product. This can either be enabled as a default SQL Server level setting or in the log shipping jobs.
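
If you want compression on by default for every backup, it’s a quick server-level change:

EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;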

Oracle’s backup compression is only available with Oracle Enterprise Edition or for customers using Oracle’s backup-to-the-cloud feature – Oracle Database Backup Service. However, it’s trivial to leverage in-flight compression when moving files between Oracle instances. In a recent deployment, we used rsync to move files between primary and standby and enabled the -z flag to ensure we got compression.

Starting with SQL Server 2014, SQL Server supports backup encryption. Oracle Standard Edition, on the other hand, does not have backup encryption. In the Oracle world DBAs are far more likely to use SSH, scp, or rsync to move files between servers – all of these support encrypting data transferred between servers.

There’s no clear winner here – both compression and encryption are possible with both products, but they’re built in to SQL Server. The Oracle approach lets system administrators adopt the practices that are already in use across the enterprise.

Summary

Oracle and SQL Server both have log shipping options available. Both tools are built on different sets of technologies and they both have different strengths and weaknesses.

Here’s how SQL Server and Oracle stack up, feature by feature:

  • Set Up – SQL Server: Wizard. Oracle: Manual, but scriptable.
  • Operations – SQL Server: Additional caveats to prevent breaking the log chain. Oracle: Mostly automatic. Caveats apply, but preventative measures are available.
  • Reversing Log Shipping – SQL Server: Can be done with rocket science and custom scripts. Oracle: Scripts can be deployed on both primary and standby – reversing requires switching several commands.
  • Querying the Standby – SQL Server: Yes, only in STANDBY mode. STANDBY can make restores slow. Mitigate with complex scripts. Oracle: Yes, in READ ONLY mode. Oracle must be restarted to resume log shipping.
  • Licensing – SQL Server: Requires software assurance. Oracle: Requires a second, fully licensed Oracle instance.
  • Compression – SQL Server: SQL Server backup compression. Oracle: OS, storage, or network compression.

Brent Says: WOOHOO, SQL Server actually wins a couple of these!

Kendra Says: Ain’t nothin’ wrong with a wizard, especially when you’re just getting started with something.

Managing Oracle Performance with Statistics

Oracle maintains statistics about data stored on disk. The statistics and histograms help guide the optimizer during query plan compilation. These statistics are usually helpful. Sometimes the statistics are not helpful.

As data changes, the statistics collected by Oracle may become less accurate. If I’ve created a histogram on the column transaction_date, my histogram won’t know about data that’s been added since the histogram was last created or updated. Over time, that histogram becomes less helpful. DBAs can update statistics, but another problem can arise – the new statistics might be worse than the old statistics. DBAs need tools to manage stats and verify success before moving new statistics into production.

Historical Statistics

Oracle gives DBAs a powerful tool in managing statistics – historical statistics retention. By retaining historical copies of statistics, a DBA can ensure that there’s always a plan to back out changes that have caused a performance problem. You can view the current historical retention period by running:

SELECT dbms_stats.get_stats_history_retention() FROM dual;

On my Oracle 12c install (12.1.0.2), the retention period is set to 31 days. Larger systems may want to set up smaller retention periods.

Changing the statistics retention is as easy as running:

/* keep only 5 days of statistics history */
EXEC dbms_stats.alter_stats_history_retention(5);

Once you have identified the statistics that you want to restore, you can put them back in place using DBMS_STATS.RESTORE_TABLE_STATS:

EXEC dbms_stats.restore_table_stats (
         ownname         => 'SH',
         tabname         => 'CUSTOMERS',
         as_of_timestamp => TO_DATE('2015-01-01 11:38',
                                    'YYYY-MM-DD HH24:MI')
     );

Of course, you would need to know when statistics were last collected. Thankfully, you can review when statistics were collected with the DBA_TAB_STATS_HISTORY view.
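
For example, to see when statistics were gathered on the SH.CUSTOMERS table from the earlier example:

SELECT table_name ,
       stats_update_time
FROM   dba_tab_stats_history
WHERE  owner = 'SH'
AND    table_name = 'CUSTOMERS'
ORDER BY stats_update_time DESC;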

Keep in mind that the historical statistics have to go somewhere. Monitor the size of the SYSAUX tablespace and adjust the retention period as needed. You may find that you don’t need all of those historical statistics.

Locking Statistics

Once you’ve found the good statistics, you probably don’t want to keep restoring them over and over again. The next step you can take is to lock the statistics in place. Statistics can be locked at the schema, table, or partition level. Once an object has been locked, anything depending on that object will be skipped during system wide statistics updates.

Locking the statistics on a table is as easy as:

EXEC dbms_stats.lock_table_stats('SH', 'CUSTOMERS');
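
Locking an entire schema works the same way – dbms_stats.lock_schema_stats just takes the owner:

EXEC dbms_stats.lock_schema_stats('SH');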

If there’s an index on the CUSTOMERS table that needs a statistics update, we can force Oracle to update its statistics using the force flag:

EXEC dbms_stats.gather_index_stats(
         ownname => 'SH',
         indname => 'IX_CUST_FNAME',
         force   => TRUE
     );

By locking some stats in place, we can make sure that important queries don’t slip off a good plan. Using dbms_stats.restore_table_stats makes it possible to restore known good stats. There has to be a way to know if your statistics are going to be a problem before you move them into production, right?

Pending Statistics

Some tables are more volatile than others. When data changes and statistics are re-analyzed it may be possible that data is sampled in a way that can cause problems. This is why we have the ability to lock statistics. In some cases, you may want to test changes to statistics before they go into production, rather than locking statistics in place or rolling back to your last known good statistics. In this case, you can create pending statistics.

To create pending statistics, the first step is to disable publishing statistics for a table:

EXEC dbms_stats.set_table_prefs('SH', 'SALES', 'PUBLISH', 'false');

Once publish is set to false, newly gathered statistics will be placed in a holding area rather than being immediately made available for use by Oracle. At this point, the statistics are present, but unusable. Pending statistics can be used by changing the optimizer_use_pending_statistics parameter to TRUE.

ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
/* amazing query goes here */

Once it’s been determined that the stats are good, they can be published. DBAs have the option to publish all stats or just the stats for a single table:

/* publish all stats */
EXEC dbms_stats.publish_pending_stats(NULL, NULL) ;
/* publish single object stats */
EXEC dbms_stats.publish_pending_stats('SH', 'SALES') ;

Or, if the statistics stink, you can delete them with the dbms_stats.delete_pending_stats procedure.
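
That call mirrors the publish calls above:

EXEC dbms_stats.delete_pending_stats('SH', 'SALES');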

Exporting Statistics

There’s one other tool DBAs have for managing stats – exporting statistics. After creating a stats table using dbms_stats.create_stat_table, a DBA can export statistics using one of several EXPORT_*_STATS stored procedures. Statistics can be exported for columns, indexes, schemas, and tables, as well as several other database and system level statistics.

The EXPORT_*_STATS procedures push data into the statistics table, and that table can be moved to another Oracle system, where the matching IMPORT_*_STATS procedure loads the stats. This can be important for development, testing, and staging environments where it may not be possible to restore a large volume of data, but developers still need to assess how queries will run in the production environment.
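
A minimal sketch of that round trip, using a made-up staging table name:

/* On the source system: create a staging table and export stats into it */
EXEC dbms_stats.create_stat_table('SH', 'STATS_STAGE');
EXEC dbms_stats.export_table_stats('SH', 'CUSTOMERS', stattab => 'STATS_STAGE');

/* Move SH.STATS_STAGE to the other system (Data Pump, a database link, etc.), then: */
EXEC dbms_stats.import_table_stats('SH', 'CUSTOMERS', stattab => 'STATS_STAGE');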

Statistics Maintenance Doesn’t Have to be a Drag

DBAs have powerful tools in Oracle to make sure statistics don’t cause performance problems. By combining these tools, it’s possible to create a stable set of statistics that lead to predictable performance. To learn more about managing Oracle database statistics, head over to the Managing Optimizer Statistics: Basic Topics or refer to the DBMS_STATS documentation for reference material.

The Basics of Oracle Statistics

Databases use statistics about data to determine the best way to query tables. Should the database seek or scan? How many rows will be returned from the tables that are being joined? Oracle is no exception, but the way Oracle treats statistics is different from SQL Server.

Oracle Statistics Collection

Just like SQL Server, Oracle can collect statistics manually. Unlike SQL Server, Oracle includes an automatic statistics collection job.

The Oracle automatic stats collection job is scheduled to run, by default, during configured maintenance windows. These maintenance windows are configured for each Oracle database that you’re running. This gives the people who know the data best, the DBAs, a chance to schedule statistics collection whenever it’s needed and whenever it won’t cause problems for production workloads.

Oracle also supplies manual stats gathering at the index, table, schema, or even database level through the DBMS_STATS package:

BEGIN
    DBMS_STATS.gather_table_stats('SH', 'SALES');
END;
/

The statistics gathering utilities have a wealth of options available that allow DBAs to choose to only rebuild existing stats, change the number of buckets in the histogram, choose sample percent, and control how new statistics are generated.
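
For example, a more heavily customized gather might look something like this – the parameter values here are purely illustrative:

BEGIN
    DBMS_STATS.gather_table_stats(
        ownname          => 'SH',
        tabname          => 'SALES',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -- let Oracle choose the sample size
        method_opt       => 'FOR ALL COLUMNS SIZE 254',  -- up to 254 histogram buckets
        degree           => 4,                           -- gather in parallel
        cascade          => TRUE                         -- gather index statistics too
    );
END;
/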

Controlling Statistics Collection

How much control do you have over how statistics are collected in SQL Server? Your options are basically: automatic generation is on or off, automatic updates are on or off, and the time your statistics collection job is allowed to run.

There are a number of options that are available in Oracle for determining how statistics are collected. The options exist at multiple levels – from the entire database down to a single table. This allows DBAs to tune for specific scenarios where one table may need to gather statistics with a much higher degree of parallelism, or at a different sample rate, than other tables.

Data Change Threshold

SQL Server will opt to update statistics if 20% of the data, plus 500 rows, has changed.

While Oracle won’t update statistics on the fly the way SQL Server will, the percentage of changed rows is tracked on an object-by-object basis. When the percentage of stale rows goes beyond the STALE_PERCENT value, the statistics are considered stale and will be picked up by the next automatic collection run. It’s also possible to monitor staleness yourself and take action sooner, if necessary.
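
You can check both the threshold and the current staleness flags – the schema and table here are hypothetical:

/* What STALE_PERCENT applies to this table? */
SELECT dbms_stats.get_prefs('STALE_PERCENT', 'SH', 'SALES') FROM dual;

/* Which objects does Oracle currently consider stale? */
SELECT owner, table_name, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'SH'
AND    stale_stats = 'YES';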

The Ascending Key Problem

What happens when you add new data that the engine isn’t aware of? Oracle will make a best guess based on the current distribution of data in the table. There’s additional logic built-in to assume that fewer rows exist the further the queried value gets from the minimum or maximum value that’s currently tracked. This is probably true for many systems, but it could cause problems when a lot of data is coming in. On the bright side, as a DBA, you have full control over when statistics get updated.

Learning More About Oracle Statistics

You can learn more about Oracle’s table statistics in the Optimizer Statistics Concepts documentation, the DBMS_STATS package, and through the DBA_TAB_HISTOGRAMS view.
