Blog

When Does Index Fragmentation Matter?

A while back, we posted about how SQL Server index fragmentation doesn’t matter. That blog post generated a lot of heated discussion and has served to make people aware that fragmentation may not be the source of their woes. Sometimes, though, there are data access patterns where fragmentation really does matter.

Perpetual Fragmentation

Some data access patterns lead to tables and indexes that will always be fragmented, no matter what you do. If you have frequent data changes in the middle of a table or index, you can see heavy fragmentation.

What’s this pattern look like? Check it out!

CREATE TABLE process_list
    (
      transaction_id INT IDENTITY(1, 1)
                         PRIMARY KEY ,
      status_flag BIT ,
      last_update DATETIME2 ,
      transaction_type INT ,
      transaction_desc VARCHAR(25)
    );
GO

CREATE INDEX IX_giant_index_process_list
ON dbo.process_list (transaction_id, status_flag);
GO

TRUNCATE TABLE process_list ;
GO

/* Random string generation code taken from:

http://social.technet.microsoft.com/wiki/contents/articles/21196.t-sql-random-string.aspx

*/

/* insert another 1000 rows */
DECLARE @string_max_length TINYINT = 25;

INSERT  INTO process_list (status_flag, last_update, transaction_type, transaction_desc)
        SELECT  0 ,
                GETUTCDATE() ,
                v.number % 4 + 1 ,
                x.txt
        FROM    master.dbo.spt_values v
                JOIN ( SELECT TOP 1
                                LEFT(REPLACE(CAST (NEWID() AS NVARCHAR(MAX)),
                                             '-', ''),
                                     ABS(CHECKSUM(NEWID()))
                                     % @string_max_length + 1) AS txt
                       FROM     SYS.OBJECTS A
                                CROSS JOIN SYS.OBJECTS B
                     ) AS x ON 1 = 1
        WHERE   v.type = 'P'
                AND v.number < 1001;


/* Look at table/index size.
   The table gains 9 pages.
   The index only gains 1 page from its previous state
 */
SELECT  OBJECT_SCHEMA_NAME(o.object_id) AS schema_name,
        o.name, 
        i.name,
        ddps.reserved_page_count AS reserved_pages
FROM    sys.objects AS o
        JOIN sys.dm_db_partition_stats AS ddps ON ddps.object_id = o.object_id
        JOIN sys.indexes AS i ON i.object_id = ddps.object_id AND i.index_id = ddps.index_id
WHERE   o.name = 'process_list' ;



-- Change the status of roughly 70% of transactions to TRUE
UPDATE process_list
SET     status_flag = 1
WHERE   transaction_id % 10 < 7 ;



/* Get rid of processed rows */
DELETE FROM process_list
WHERE status_flag = 1 ;



/* Look at table/index size.
   The reserved page counts don't shrink...
 */
SELECT  OBJECT_SCHEMA_NAME(o.object_id) AS schema_name,
        o.name, 
        i.name,
        ddps.reserved_page_count AS reserved_pages
FROM    sys.objects AS o
        JOIN sys.dm_db_partition_stats AS ddps ON ddps.object_id = o.object_id
        JOIN sys.indexes AS i ON i.object_id = ddps.object_id AND i.index_id = ddps.index_id
WHERE   o.name = 'process_list' ;
GO 10


/* And now we rebuild the table 
   Before the rebuild:
     index - 25 pages
     table - 57 pages
   
   If you just run ALTER TABLE process_list REBUILD:
     index - 25 pages
     table - 25 pages
 
   After the rebuild with the index:
     index -  7 pages
     table - 25 pages
 */
ALTER INDEX ALL ON dbo.process_list REBUILD ;

SELECT  OBJECT_SCHEMA_NAME(o.object_id) AS schema_name,
        o.name, 
        i.name,
        ddps.reserved_page_count AS reserved_pages
FROM    sys.objects AS o
        JOIN sys.dm_db_partition_stats AS ddps ON ddps.object_id = o.object_id
        JOIN sys.indexes AS i ON i.object_id = ddps.object_id AND i.index_id = ddps.index_id
WHERE   o.name = 'process_list' ;
GO

TL;DR – The code in this example inserts a large number of rows at the end of the table. 70% of rows are marked as processed and are then deleted. This causes a lot of free space or, as you call it, fragmentation.

The only way to get rid of this fragmentation is to rebuild both the non-clustered index and the clustered index. For this example, the defragmentation reduces the size of the index by more than 3x (from 25 pages to 7 pages) and the table consumes just over 50% less space.
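
If you want to see this kind of empty-space fragmentation on your own tables, sys.dm_db_index_physical_stats shows both logical fragmentation and how full each page actually is. Here’s a minimal sketch against the demo table (SAMPLED mode is needed to populate the page density column):

/* Sketch: measure fragmentation and page density for the demo table.
   avg_page_space_used_in_percent is the interesting number here -
   low values mean lots of empty space on each page. */
SELECT  OBJECT_NAME(ips.object_id) AS table_name,
        i.name AS index_name,
        ips.index_type_desc,
        ips.page_count,
        ips.avg_fragmentation_in_percent,
        ips.avg_page_space_used_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.process_list'),
                                       NULL, NULL, 'SAMPLED') AS ips
        JOIN sys.indexes AS i ON i.object_id = ips.object_id
                             AND i.index_id = ips.index_id ;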

For workloads with a lot of random deletes (or updates that change the size of a row in a way that causes data movement), index defragmentation is necessary. Without it, these databases will continue to grow in size and exhibit the usual problems that we associate with fragmentation.

What Doesn’t Cause Perpetual Fragmentation

At this point you might be saying “Ah ha! You’ve proved that I should always defragment my indexes! Constantly! ZOMGWTFBBQ!!!11!11!shiftone!!!!!”

You’re wrong.

For some workloads, you can still avoid index fragmentation – if you’re adding data to the tail end of the table and deleting data from the beginning of the table, you won’t need to defragment your indexes. Ghost record cleanup should take care of the deleted rows in this case.
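
If you want to check whether ghost cleanup is actually keeping up with your deletes, the same DMV exposes ghost record counters. A quick sketch (these columns are only populated in SAMPLED or DETAILED mode):

/* Sketch: are deleted rows still hanging around as ghost records?
   ghost_record_count should trend toward zero once cleanup has run. */
SELECT  OBJECT_NAME(ips.object_id) AS table_name,
        ips.index_id,
        ips.ghost_record_count,
        ips.version_ghost_record_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.process_list'),
                                       NULL, NULL, 'DETAILED') AS ips ;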

For most workloads, your best bet is still to analyze SQL Server wait statistics and understand the basic problem before you start defragmenting indexes on a regular basis.

Acknowledgements

The original idea for this blog post comes from Hemant K Chitale’s Index Growing Larger Than The Table. Even though SQL Server handles this situation differently from Oracle, the underlying pattern is still interesting for database practitioners.

The random length string code was adapted from T-SQL: Random String.

Using sp_BlitzCache™ Advanced Features

sp_BlitzCache™ is a fantastic tool for finding the worst queries in your SQL Server. Did you know that it has a number of advanced options available to you? These options can help you dig into your worst queries, slice and dice the data, and get back exactly what you need, without wading through hundreds of results.

Only/Ignore Query Hashes

Version 2.3 of sp_BlitzCache™ introduced two parameters so you can ignore individual query hashes. Reader Brian Baker had the idea of filtering out a single stored procedure by supplying all of the query hashes for a stored procedure. There’s one problem – there’s no easy way to filter an entire procedure (well, not without making sp_BlitzCache™ really complex).

Here’s how we’d go about answering Brian’s question:

Step 1: sp_BlitzCache™ in expert mode.

EXEC dbo.sp_BlitzCache @results = 'expert' ;
Keep scrolling right.

Find the stored procedure that you want in the plan cache, then scroll all the way to the right until you get just past the Query Plan column. There you’ll find the Plan Handle, which identifies the stored procedure. Copy it to your clipboard and paste it into the following script:

/* Step 1: Run sp_BlitzCache @results = 'expert' */
EXEC dbo.sp_BlitzCache @results = 'expert' ;

/* Step 2: Grab the plan_handle of the plan you want to examine. */
DECLARE @plan_handle AS varbinary(64) ;

SET @plan_handle = 0xEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE ;

WITH XMLNAMESPACES('http://schemas.microsoft.com/sqlserver/2004/07/showplan' AS p)
SELECT query_hash
FROM   ( SELECT n.value('@QueryHash', 'nvarchar(max)') AS query_hash
         FROM   sys.dm_exec_procedure_stats AS deps
                CROSS APPLY sys.dm_exec_query_plan(deps.plan_handle) AS deqp
                OUTER APPLY deqp.query_plan.nodes('//p:StmtSimple') AS q(n)
         WHERE  deps.plan_handle = @plan_handle
       ) AS x
WHERE  query_hash IS NOT NULL ;

Alright, now that we have our query hash(es), you can plug them into sp_BlitzCache™:

EXEC sp_BlitzCache @only_query_hashes = '0x6A26E5C5E3825653' ;

Or, if you feel like ignoring that particular set of statements:

EXEC sp_BlitzCache @ignore_query_hashes = '0x6A26E5C5E3825653' ;

And, there you have it – how to zoom in on a single procedure, or completely ignore it.

Only Look at Queries or Procedures

This one is much simpler: to narrow down the scope of sp_BlitzCache™, you can use the @query_filter parameter. This parameter accepts one of three values: procedures, statements, or all. The default is all, which means sp_BlitzCache™ will look at stored procedures, statements, and triggers. By specifying either “procedures” or “statements”, you can force sp_BlitzCache™ to narrow down its scope.

This can be particularly helpful if you have encapsulated your business logic into stored procedures and you want to know where to focus your tuning, rather than worrying about individual statements.
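
For example, using the parameter exactly as described above:

/* Only report on stored procedures in the plan cache */
EXEC dbo.sp_BlitzCache @query_filter = 'procedures' ;

/* Or only look at individual statements */
EXEC dbo.sp_BlitzCache @query_filter = 'statements' ;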

Rapidly Re-Running sp_BlitzCache™

Have you ever run sp_BlitzCache™ and noticed that it takes a while on production systems? And then realized, after you wandered off and got coffee, that you forgot to use the @export_to_excel parameter?

I’ve done that, too.

sp_BlitzCache™ now ships with a @reanalyze feature. By default, sp_BlitzCache™ analyzes your plan cache fresh every time it’s executed. By using @reanalyze = 1, you’re telling sp_BlitzCache™ to skip collecting data and immediately report on the information it already has. This is great when you want to export to Excel, send the report to your co-workers, and then immediately get to work on the results. It also lets you re-sort the already collected data in different ways.
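
Here’s what that workflow looks like in practice. The first call does the heavy lifting; the second one just re-reports the data it already collected:

/* First pass: analyze the plan cache (this is the slow part) */
EXEC dbo.sp_BlitzCache @results = 'expert' ;

/* Forgot the Excel-friendly output? Re-use the collected data
   instead of hitting the plan cache again. */
EXEC dbo.sp_BlitzCache @export_to_excel = 1, @reanalyze = 1 ;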

There You Have It!

Three fancy features in sp_BlitzCache™ that you can use to make your life easier and more fun.

What are you waiting for? Download it right now!

Announcing sp_BlitzCache™ v2.4

Welcome to sp_BlitzCache™ v2.4. This release brings a few changes and bug fixes.

  • Fixed a logical error in detecting the output table. Thanks to Michael Bluett for pointing that out.
  • Sorting by executions per second finally works. Thanks to Andrew Notarian and Calvin Jones for submitting this week.
  • Added a @query_filter parameter – this allows you to only look at “statements” or “procedures” in the plan cache.
  • A check was added to identify trivial execution plans that have been cached. If you’re seeing a lot of these, you need to fix it.
  • The @reanalyze parameter was added. When set to 0, sp_BlitzCache™ will pull fresh data from the plan cache. When set to 1, though, sp_BlitzCache™ will re-read the results temporary table. This is helpful if you want to save off results in Excel and display results so you can tune queries.
  • Added the ability to see a query’s SET options. This is hidden just to the right of the plan in the results grid.
  • Moved the #procs temp table to a global temp table named ##procs. This shouldn’t be a problem because you’d probably get angry if two DBAs were running this stored procedure at the same time any way.

Download it right now!

Update: Denis Gobo noticed that sp_BlitzCache™ could potentially clobber global temp tables. Global temp table names have been updated in sp_BlitzCache™ to avoid this in the future. Make sure you’re using v2.4.1.

A Guide to Contributing Code

So you’ve got a great idea for a new feature to add to sp_BlitzSomethingOrOther, what’s the best way to get started?

The Documentation Says…

If you read our code contribution guidelines, you should write new code, write a test, sign an agreement, and then send us your code. That’s technically correct, but it’s a daunting task. After all, five people are going to look over your code, and then thousands more might be reading it.

The documentation is technically correct (like most documentation), but it assumes a lot.

If you go by what the documentation suggests, we’ll definitely see your code, but there’s a decent chance that we’re not going to accept your code contribution.

Start Small

The best way to get started on any existing project is to start small. It’s rare to write a major feature on your first contribution to any new project or product – there’s a significant barrier to entry around code formatting and style, future features, and work in progress.

The best way to help out is to fix a bug in the code.

Right now you’re saying, “Fix someone else’s bugs? No way!”

Hear me out.

When you find and fix a bug in the code, you’re signaling a few things. The first thing you signal is that you have a better eye for detail than the moron who wrote the code. The second thing you signal is that you want to help that moron make their software a little bit better.

Build Trust and Understanding

Contributing small fixes to an existing code base goes a long way to establishing trust. It’s one of the ways that we all work together and bring new employees up to speed with our tools. We don’t throw each other into the deep end (much). Instead we get familiar with our own software tooling by looking for issues and fixing them. We build up trust in each other as we’re building up knowledge.

By fixing bugs, you’re building trust and establishing a working knowledge around a particular code base.

Beyond building trust, you’re also getting an understanding of how a particular piece of code is put together. As an example, sp_BlitzCache is riddled with dynamic SQL, XQuery, XPath, and strange DMV ordering. It’s all done for a reason, and that reason is performance. A few changes would take sp_BlitzCache from finishing in 30-90 seconds to finishing in 30-90 minutes – I should know, I’ve introduced those changes before.

As you’re in the code fixing bugs, you’ll spot places to add more features and functionality. This is a great place to strike up a conversation with the authors about adding those new features, or at least getting them on a roadmap.

Sometimes, we’re already working on a feature but we haven’t made anything about it public yet. You don’t want to spend hours writing a new feature only to see it come out in a completely different format. Building up that relationship of trust means we’ll be chatting with you about our ideas and you’ll be aware of our crazy ideas as they happen.

Code review is hard!

…But Test First

The best reason to start out by fixing bugs is that we have a very strange test setup. By testing your changes the same way we test our changes, you can rest assured that your changes will be accepted on their merits, and not rejected on a technicality.

We test our code changes on multiple versions of SQL Server, and we use case-sensitive instances. A simple mistake in a column name can stop a query from running for some users, and we’d rather be safe than sorry.
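
If you want to catch those mistakes before we do, one cheap approximation is to test your changes in a database created with a case-sensitive collation (the database name and collation below are just examples). It won’t catch everything a truly case-sensitive instance would, but it flags most miscased object and column names:

/* Sketch: a throwaway case-sensitive database for testing.
   Miscased names that a case-insensitive server happily accepts
   will throw errors here. */
CREATE DATABASE BlitzTestCS COLLATE Latin1_General_100_CS_AS ;
GO

USE BlitzTestCS ;
GO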

Too Long; Didn’t Read

In short, the best way to get started contributing to sp_BlitzWhatever is:

  1. Find a bug.
  2. Fix the bug.
  3. Submit your fixes.
  4. Rinse. Repeat.
  5. Work up to implementing bigger fixes & features.

Get started today, head over to http://support.brentozar.com and pick out a bug that someone has found. Submit your ideas at http://brentozar.com/contributing-code.

Introduction to the Oracle Data Dictionary

If you’re going to be working with Oracle, you need to be able to get a better handle on what’s going on with the Oracle database. Just like other database platforms, Oracle provides a data dictionary to help users interrogate the database system.

Looking at System Objects

Database administrators can view all of the objects in an Oracle system through the DBA_% prefixed objects.

You can get a list of all available views through the dba_objects system view:

/* There's a gotcha here:
   if you installed Oracle as suggested, you'll be using a
   case sensitive collation. That's not a big deal, just
   don't forget that while you don't need to capitalize object
   names in SQL*Plus, you do need to capitalize the names while
   you're searching.
 */
SELECT COUNT(DISTINCT object_name)
FROM dba_objects
WHERE object_name LIKE 'DBA_%';

And the results:

COUNT(DISTINCTOBJECT_NAME)
--------------------------
                      1025

Just over 1,000 views, eh? That’s a lot of system views. If you just want to examine a list of tables stored in your Oracle database, you can use the dba_tables view to take a look. Here we’ll look at the tables in the EXAMPLE tablespace:

SELECT owner,
       tablespace_name,
       table_name
FROM   dba_tables
WHERE  tablespace_name = 'EXAMPLE'
ORDER BY owner,
       tablespace_name,
       table_name ;

The curious can use the desc command to get a list of all columns available, either in the dba_tables view, or any of the tables returned by querying dba_tables.
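
For example, in SQL*Plus (the HR table below is just an example and assumes the sample schemas are installed):

-- Describe the data dictionary view itself...
DESC dba_tables

-- ...or one of the tables it told you about
DESC hr.employees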

User Objects

A user shouldn’t have access to the DBA_ views. Those are system level views and are best left to people with administrative access to a system. If a user shouldn’t have that level of access, what should they have? Certainly they should have access to their own objects.

Users can view their own data with the USER_ views. There’s a user_objects view that shows information about all objects owned by the current user. If you just want to see your own tables, you can use the user_tables view instead:

SELECT table_name,
       tablespace_name
FROM   user_tables ;

Of course, users may have access to more than just the database objects they own. In these cases, they can use the ALL_ views to see everything they have access to:

SELECT COUNT(DISTINCT object_name) FROM all_objects
UNION ALL
SELECT COUNT(DISTINCT object_name) FROM dba_objects ;

Running this query nets 52,414 distinct object names in all_objects and 54,325 in dba_objects. Clearly there are a few things that I don’t have direct access to, and that’s a good thing.

System Status with V$ Views

Oracle’s V$ views record current database activity. They provide insight into current activity and, in some cases, they also provide insight into historical activity. There are a number of dynamic performance views (Oracle’s term for the V$ views) covering everything from waits to sessions to data access patterns and beyond.

As an example, you can view all sessions on an Oracle database using the v$session view:

SELECT sid, username, machine
FROM v$session
WHERE username IS NOT NULL ;

Oracle has a wait interface, just like SQL Server. Waits are available at either the system level or the session level. The v$system_event view shows cumulative wait information for the life of the Oracle instance. The v$session_event view shows total wait time at the session level (what has this session waited on since it started). You can look at the current (or most recent) wait for each session using v$session_wait.

Using this, we can look into my session on the system with:

SELECT  wait_class,
        event,
        total_waits,
        time_waited,
        average_wait,
        max_wait,
        time_waited_micro
FROM    v$session_event
WHERE   wait_class <> 'Idle'
        AND SID = 255 ;

 

Sample output from the Oracle v$session_event view: I’m waiting on me.
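
The instance-wide view mentioned above, v$system_event, can be sliced the same way. A quick sketch of the top non-idle waits since the instance started:

SELECT   wait_class,
         event,
         total_waits,
         time_waited,
         average_wait
FROM     v$system_event
WHERE    wait_class <> 'Idle'
ORDER BY time_waited DESC ;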

Don’t be afraid to explore on your local installation. There’s no harm in playing around with different Oracle features to determine how they work and what kind of information you can glean from them.

You can also use the GV$ views (thanks to Jeff Smith for pointing out my omission). These views are designed for Oracle RAC so you can see the health of every node in the RAC cluster. The upside is that you can get a big picture of an entire cluster and then dive into individual nodes using the V$ views on each node. You can still execute queries that use the GV$ views even if you don’t have RAC, and you’ll be just fine.
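
For example, here’s a quick sketch of session counts per node (on a non-RAC system you’ll just see a single inst_id):

SELECT   inst_id,
         COUNT(*) AS session_count
FROM     gv$session
WHERE    username IS NOT NULL
GROUP BY inst_id
ORDER BY inst_id ;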

A Word of Warning

Be careful with both the data dictionary and the V$ views – querying certain views may trigger license usage that shows up in the dba_feature_usage_statistics view. Before using features like Active Session History or the Automatic Workload Repository, make sure that you have the proper features licensed for your Oracle database. Using these optional features for your own education is fine.
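
A quick sketch for checking what Oracle thinks you’ve been using:

-- Which licensable features have been detected as in use?
SELECT   name,
         version,
         detected_usages,
         currently_used,
         last_usage_date
FROM     dba_feature_usage_statistics
WHERE    detected_usages > 0
ORDER BY name ;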

Choosing a Cloud Deployment Model [Video]

Tune in here to watch our webcast video for this week! To join our weekly webcast for live Q&A, make sure to watch the video by 12:00 PM EST on Tuesday, October 7! Not only do we answer your questions, we also give away a prize at 12:25 PM EST – don’t miss it!

Have questions? Feel free to leave a comment so we can discuss it on Tuesday!

Five Oracle Myths

It’s Hard to Configure

Historically speaking, Oracle was a bit painful to configure. A DBA needed to be able to size internal components like the rollback segment, buffer cache, large object cache, sort area, and a number of other memory structures. This gave Oracle a reputation for being difficult to configure. Rightfully so – compared to SQL Server at the time, Oracle was difficult to configure.

Starting with Oracle 9i, the database included limited automatic memory management features. Instead of having to size many separate memory structures, Oracle DBAs only had to size two areas (the SGA and the PGA). And with the introduction of Oracle 11g, Oracle memory management became a matter of configuring a single max memory target.
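
On 11g and later, that single knob looks roughly like this (a sketch; the 4G figure is only an example, and memory_max_target is static, so the change takes effect after a restart):

-- Sketch: Automatic Memory Management on 11g and later.
ALTER SYSTEM SET memory_max_target = 4G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target = 4G SCOPE = SPFILE;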


A database is a series of tubes, right?

Tuning is Complicated

Database tuning is hard. Thankfully databases just come with GUI wizards that work every time, right?

Database tuning is difficult in both SQL Server and Oracle. Oracle DBAs have a wealth of system views to choose from when designing performance reports. There are the usual tools to get information about instance-level CPU, disk, and other waits.

On top of the system views, Oracle users who have licensed the Diagnostics Pack have access to the Automatic Workload Repository (AWR). AWR constantly collects information about Oracle performance and lets DBAs get a fine-grained view of performance at a number of levels. On top of the system views it exposes, AWR can also produce reports that summarize and analyze database performance over a period of time.
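
If you’re licensed for it, generating one of those reports is a short trip into SQL*Plus; the report script ships under $ORACLE_HOME:

-- Take a manual AWR snapshot, then build a report
-- between two snapshots interactively.
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

@?/rdbms/admin/awrrpt.sql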

The User Interface is Bad

SQL Server DBAs and developers who are used to SQL Server Management Studio are initially horrified when they’re exposed to Oracle’s command line user interface through SQL*Plus or RMAN. Although the command line is a rough introduction to a product, it’s also a rich environment where users can run scripts, prompt for input mid-script, and create full featured applications with little more than PL/SQL. Although the command line tools appear unforgiving, they offer a wealth of information, built-in help, and query editing capabilities that tie into the user’s primary tools.

Users who refuse to get on the command line aren’t left out in the cold. Oracle has a pair of tools – Enterprise Manager and SQL Developer – that provide additional tooling for DBAs and developers. Enterprise Manager provides a dashboard for DBAs and system administrators to review server health at many different levels – from the enterprise through the datacenter and all the way down to a single server. SQL Developer is a development tool with built-in reports; SQL Server professionals will find SQL Developer very familiar.

It Doesn’t Run Well on Windows

“Oracle just doesn’t run well on Windows.” I’ve heard this phrase a lot. Oracle runs on Windows and Windows is officially supported by Oracle for production deployments. Anecdotally, there are very few Windows only bugs for the Oracle database proper; most bugs are cross-platform.

However, you will find that almost all Oracle examples assume you’re running Oracle on a Linux or UNIX system. A quick scan of various forums, blogs, and other online resources indicates that maybe 20% of Oracle deployments are on Windows. Don’t let that stop you from learning about Oracle – most functionality can be accessed with only minimal knowledge of the operating system. For everything else, there’s always your favorite search engine.

You Need a Team of DBAs

Everyone knows that a SQL Server DBA can manage far more SQL Servers than an Oracle DBA, right? After all, with all that manual memory management, lack of tuning, and no Windows support, you need a team of talented UNIX system administrators to keep Oracle running well.

While it may have required a village to run an Oracle database in the past, it hasn’t been that way for some time. Recent versions of Oracle have automated many of the involved processes. Other features like RMAN and AWR reports provide time-saving features that make it easier for DBAs to do more work.

Your Turn

What other misconceptions have you heard about Oracle’s place in the world of databases?

Is Azure Really 60% Faster?

Microsoft just announced a new round of D-grade VMs that have 60% faster CPUs and local SSDs that can hit up to 7,000 IOPS in a canned Iometer test. Before jumping to conclusions or, even worse, picking a cloud provider, it’s best to look at these numbers critically.

CPU Speeds

The new CPU is being advertised as 60% faster than the previous generation of processors. Clearly this has got to be some next generation hardware, right? Maybe we’ll get access to the new Xeon v3 – it’s not that outlandish of an idea; Amazon Web Services (AWS) had Xeon v2s in their datacenters before the chips were generally available.

Glenn Berry, a consultant who digs into computers for fun, did some initial testing with these new Azure instance types. In his investigations, he saw 2.2GHz E5-2660 chips. These aren’t even the slower end of the new generation of Intel Xeon v2 chips – they’re the previous generation of CPU… from 2012. Azure trades raw power for power efficiency.

If these not-so-fast CPUs are 60% faster, what are your current Azure VMs and SQL Database instances running on? Anecdotal evidence indicates that the current generation of A and P series VMs are running on older AMD Opteron hardware. Older AWS hardware is in the same boat, but it’s slowly being phased out.

When 7,000 IOPS really means 437.5 64KB IOPS

SSD Speeds

Microsoft is reporting performance of up to 7,000 IOPS per local Azure SSD, but persistent storage is still rotational. During the D-Series SSD VMs interview, a screenshot of Iometer hitting 7,000 IOPS is shown, but no additional information is provided. Iometer typically uses a 4KB read/write block size, which is a great size for random file access. It’s not awesome for SQL Server, but we can divide that number by 16 to get a representative 64KB figure for SQL Server…

437.5 64KB IOPS.

Or so the Azure Product Manager says in the original interview. I don’t believe what I hear, and you shouldn’t either, so I fired up an Azure D14 VM to see for myself. What I saw was pleasantly surprising:

All the MBps

If we dig into the IOPS provided by Crystal Disk Mark, we see a decent looking picture unfold:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
 Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
Sequential Read : 705.103 MB/s
 Sequential Write : 394.053 MB/s
 Random Read 512KB : 528.562 MB/s
 Random Write 512KB : 398.193 MB/s
 Random Read 4KB (QD=1) : 16.156 MB/s [ 3944.4 IOPS]
 Random Write 4KB (QD=1) : 26.506 MB/s [ 6471.1 IOPS]
 Random Read 4KB (QD=32) : 151.645 MB/s [ 37022.8 IOPS]
 Random Write 4KB (QD=32) : 167.086 MB/s [ 40792.5 IOPS]

 Test : 4000 MB [D: 2.0% (16.2/800.0 GB)] (x5)
 Date : 2014/09/23 0:24:10
 OS : Windows Server 2012 R2 Datacenter (Full installation) [6.3 Build 9600] (x64)

What’s it really mean? It means that the 7,000 IOPS number reported was probably for 4KB random writes. It’s hardly representative of SQL Server workloads, but we also can see what kind of numbers the drives will pull under significant load.

Comparing AWS and Azure Performance

AWS offers an instance called the r3.4xlarge. It comes with 16 cores and 122GB of memory. The AWS instance type is about the same as the D14 (16 cores and 112GB of memory). The D14 is $2.611 / hour. The AWS instance is $1.944 / hour.

All prices include Windows licensing.

So far, the Azure D-grade instance costs about 67 cents more per hour for 4.8GHz fewer aggregate clock cycles and 10GB less memory. Not to mention the computational differences between the current generation of CPUs and what Azure is running.

Surely the SSD must be amazing…

Not so fast. Literally.

Some AWS local SSD benchmarks have reported numbers as high as 20,000 16KB IOPS for random writes and 30,000 16KB IOPS for sequential reads. Sure, the AWS instance only has a 320GB disk, but it’s capable of performing 5,000 64KB IOPS compared to the 440 IOPS (I rounded up to be generous) that Azure supplies.

In my testing, the AWS local SSD beat out the Azure SSD on random I/O by a reasonable margin:

A reasonable margin (or 100MB/s faster)

How about those IOPS?

-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

Sequential Read : 404.856 MB/s
Sequential Write : 350.255 MB/s
Random Read 512KB : 348.770 MB/s
Random Write 512KB : 349.176 MB/s
Random Read 4KB (QD=1) : 21.337 MB/s [ 5209.3 IOPS]
Random Write 4KB (QD=1) : 38.448 MB/s [ 9386.7 IOPS]
Random Read 4KB (QD=32) : 261.320 MB/s [ 63798.8 IOPS]
Random Write 4KB (QD=32) : 237.201 MB/s [ 57910.4 IOPS]

Test : 4000 MB [Z: 0.0% (0.1/300.0 GB)] (x5)
Date : 2014/09/23 1:05:22
OS : Windows Server 2012 R2 Server Standard (full installation) [6.3 Build 9600] (x64)

So… First – Azure offers really good local SSD performance if you decide to purchase the entire instance. Using a D14 instance type is a reasonable expectation for customers deploying SQL Server – SQL Server is a power hungry monster and it deserves to be fed.

The Azure numbers may be accurate, but they aren’t all they’re cracked up to be. Comparing the D14 to the r3.4xlarge using the CrystalDiskMark results above (QD=32 for the random numbers), here’s how it breaks down:

  • Cost: 34% more expensive
  • Sequential reads: 74% faster
  • Sequential writes: 12.5% faster
  • Random reads: 42% slower/fewer IOPS
  • Random writes: 30% slower/fewer IOPS

Azure has a history of mediocre performance, but it’s well-documented mediocre performance. Azure persistent storage currently maxes out at 500 no-unit-given IOPS per disk (compared to AWS’s 4,000 256KB IOPS for EBS volumes), but these limits are well-documented.

The Bottom Line

Not all clouds are created equal, and “60% faster” doesn’t mean it’s any better than it was before. It’s up to you, dear reader, to determine what 60% faster means and how that applies to your environment. For companies dipping their toes in the cloud waters, be very wary of the new, improved Azure performance. You may find that you’re deploying far more VMs than you thought, just to handle the same workload.

Getting Started with Oracle

Let’s assume you want to get started with Oracle. Maybe your employer is switching to Oracle, maybe you just want a career change. Where do you go to get started?


There’s no need to feel lost.

Getting the Database

You can get a hold of the Oracle database in two main ways – using a VM or installing it yourself. Using a VM is definitely the easiest way to get started. Oracle has provided an Oracle VM VirtualBox image that you can install. If you’re not familiar with VirtualBox, that’s okay; Oracle has set up instructions that will get you up and running quickly.

What if you want to install Oracle yourself?

You can get started with Oracle Express Edition. Hit that link and scroll all the way to the bottom to download Oracle Express Edition 11g Release 2. 11gR2 is the previous release of Oracle, but it’s good for learning basic Oracle concepts, and you’ll find a lot of people happily running Oracle 11gR2 in production.

If you want to be on the latest and greatest version of Oracle, you’ll need to download a full edition of Oracle. Even though there’s no Developer Edition of Oracle, there are five editions available to choose from. Personal Edition contains most of the features of Oracle Enterprise Edition and can be purchased from the Oracle store. If you want practice with complex DBA tasks, you’ll want to use Enterprise Edition. Otherwise, Personal Edition is the right choice.

You can also download and install the binaries directly from the Oracle database download page and run a full copy of Oracle while you evaluate the software. To the best of my knowledge, it’s only servers that are part of the development-production cycle that need to be fully licensed.

If you’re even lazier, you can spin up an instance of Oracle in one of many different clouds. Both Microsoft Azure and Amazon Web Services have a variety of different Oracle database configurations available for you to choose from.

Finding Exercises

Some people are self-directed, others prefer guided learning. I find that I’m in the second camp until I develop some skills. If you need to get started quickly, guided labs are a great way to ramp up your skills.

Oracle has created a huge amount of content about the Oracle database. The Oracle Documentation Library is the Oracle equivalent of TechNet. In addition to product documentation, ODL contains several courses – the 2 Day DBA is a good place to get started. From there you can head off into various tuning or development courses or even explore on your own.

Wrapping Up

It’s easy to get started with Oracle. You can either use the pre-built Oracle VM VirtualBox image, install Oracle Express Edition (or a full edition) yourself, or spin up an instance in the cloud.

Once you’re set up, training is available through the 2 Day DBA course, and there’s a wealth of information in the Oracle Documentation Library. A summary of training options is also available through the Oracle Learning Library.

Oracle Backup Basics for SQL Server DBAs [Video]

To get ready for Tuesday’s webcast, here’s what you have to do:

  1. Watch the video below, but watch it today (or over the long weekend). There will be no live presentation this week and we won’t be rehashing all of the material in the video.
  2. Write down your questions or comments. (You don’t have to do this, but it’ll make it more fun.)
  3. Attend the live webcast on Tuesday at the usual time (11:30AM Central). Register here.
  4. During the first 10 minutes of the webcast, we’ll give away a prize. The catch is that you have to be there to win.

The live discussion of the video and Q&A won’t be recorded and published, and you also need to be present to win the prize. See you on Tuesday!
