
Querying Extended Events Target XML [Video]

If you’ve poked at Extended Events in the last five years, you know the targets store data as…XML. Has XML been the reason you haven’t adopted Extended Events? Don’t let it stop you! In this video, Jes breaks down her method for unraveling the XML of targets.

The scripts I used can be downloaded here.

Jeremiah says: This is great! I wish I’d had this kind of tutorial when I started working with Extended Events.

Kendra says: XQuery is rough, and totally weird when you’re first looking at it. It’s great to get a guided tour of this strange part of the T-SQL language.

How to Query Extended Events Target XML

When Extended Events (XE) were released with SQL Server 2008, I was excited – something new to learn! I read Microsoft articles about them, read blog posts about them, and tried to use them. At that time, there was no GUI for XE. I had to create the sessions with T-SQL. That wasn’t awful – I learned the syntax after some trial and error, and could start and stop sessions. There was also no GUI to view the saved data – and it was all saved as XML.

I ran headfirst into a brick wall.

This is the wall I ran into.

I gave up when faced with XML. When SQL Server 2008 R2 came out, I once again looked at XE. With still no GUI for creating sessions or viewing data, I retreated in defeat again. With SQL Server 2012 came the GUI – a new session wizard, a new session dialog box, the (not-recommended) ability to view live data, and the ability to see the information you collected. But that didn’t solve the root problem for me – all the good data, all the results I wanted to view, aggregate, and slice and dice, were still in XML.

So, I learned to query XML with T-SQL. It hasn’t been easy for me to understand. But, by breaking it down into small steps I was able to conquer it. If querying XML has been the reason you’ve been avoiding XE, I’m here to help you get past it!

Create a session

I’m going to create a session that collects wait statistics and puts them in the Ring Buffer target, then run a small workload to capture waits.

CREATE EVENT SESSION [Ring Buffer - Track Waits] ON SERVER 
ADD EVENT sqlos.wait_info(
    ACTION(sqlserver.database_id,sqlserver.database_name,sqlserver.is_system,sqlserver.session_id,sqlserver.sql_text)) 
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE=OFF)
GO

/* Start session */
ALTER EVENT SESSION [Ring Buffer - Track Waits] ON SERVER STATE=START;

/* Run a workload or series of queries. */ 
USE StackOverflow_20130906;
GO 
SELECT Id, DisplayName, CreationDate, LastAccessDate 
FROM Users 
WHERE UpVotes > 1000; 

SELECT V.PostId, V.VoteTypeId 
FROM Votes V 
  INNER JOIN Users U ON U.Id=V.UserId 
WHERE U.DownVotes > 100;

I used the default query from Books Online to view the data. This is not helpful!

SELECT name, target_name, CAST(xet.target_data AS xml)
FROM sys.dm_xe_session_targets AS xet
JOIN sys.dm_xe_sessions AS xe
   ON (xe.address = xet.event_session_address)
WHERE xe.name = 'Ring Buffer - Track Waits';

ring buffer default query

Even if I click the target_data column (which I can do!), I just get a page of XML I have to read through. Yuck!

ring buffer XML

Querying XML

So how do I make this data understandable, so I can count how many times each wait type occurred? The goal is to get the XML into a format we can use. I’m going to start by inserting the data into a temp table and selecting it.

SELECT CAST(target_data as xml) AS targetdata
INTO #capture_waits_data
FROM sys.dm_xe_session_targets xet
JOIN sys.dm_xe_sessions xes
    ON xes.address = xet.event_session_address
WHERE xes.name = 'Ring Buffer - Track Waits'
  AND xet.target_name = 'ring_buffer';

SELECT * 
FROM #capture_waits_data;

This gives us the same link to a page of XML. Now that the data is in a temp table, though, we can “shred” the XML. XML is just nodes of data. You only need to understand the hierarchy of the nodes to query it.

Let’s look at this XML. I can see there is one element, event name=”wait_info” (highlighted in green) that I want to dig into. Under that, I want to get the “wait_type”, “duration”, and “signal_duration” (highlighted in yellow) attributes.

xml nodes i want to see
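
Trimmed way down, a single wait_info event in the ring buffer XML looks something like this (abbreviated by me – your events will carry more data elements, plus the actions we added):

<RingBufferTarget>
  <event name="wait_info" package="sqlos" timestamp="2015-01-05T15:30:00.123Z">
    <data name="wait_type">
      <value>65</value>
      <text>PAGEIOLATCH_SH</text>
    </data>
    <data name="duration">
      <value>12</value>
    </data>
    <data name="signal_duration">
      <value>0</value>
    </data>
  </event>
</RingBufferTarget>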

To do so, I use the XML value() method in T-SQL to pull out wait_type, like so.

SELECT xed.event_data.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)') AS wait_type
FROM #capture_waits_data
  CROSS APPLY targetdata.nodes('//RingBufferTarget/event') AS xed (event_data);

Once I understand how to use the value function, I can do the same for other data I want to see.

SELECT xed.event_data.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
  xed.event_data.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)') AS wait_type, 
  xed.event_data.value('(data[@name="duration"]/value)[1]', 'int') AS wait_type_duration_ms, 
  xed.event_data.value('(data[@name="signal_duration"]/value)[1]', 'int') AS wait_type_signal_duration_ms 
FROM #capture_waits_data
  CROSS APPLY targetdata.nodes('//RingBufferTarget/event') AS xed (event_data);

I can read this!

Level up: Aggregating data

For this XE session (and wait statistics in general), having every wait listed out isn’t helpful. I want to aggregate the data so I can see which wait occurred the most frequently. What happens when I try to aggregate this query?

SELECT xed.event_data.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)') AS wait_type, 
  COUNT (xed.event_data.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)')) AS count_wait_type, 
  SUM (xed.event_data.value('(data[@name="duration"]/value)[1]', 'int')) AS sum_wait_type_duration_ms, 
  SUM (xed.event_data.value('(data[@name="signal_duration"]/value)[1]', 'int')) AS sum_wait_type_signal_duration_ms 
FROM #capture_waits_data
  CROSS APPLY targetdata.nodes('//RingBufferTarget/event') AS xed (event_data) 
GROUP BY xed.event_data.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)');

I get an error.

I have to take a different approach. I’m going to use a derived table. I will put my initial query to select the data from the temp table into the FROM clause, then aggregate on the results.

SELECT xei.wait_type, 
  COUNT(xei.wait_type) AS count_wait_type, 
  SUM(xei.wait_type_duration_ms) AS sum_wait_type_duration_ms, 
  SUM(xei.wait_type_signal_duration_ms) AS sum_wait_type_signal_duration_ms 
FROM 
  (SELECT xed.event_data.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
    xed.event_data.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)') AS wait_type, 
    xed.event_data.value('(data[@name="duration"]/value)[1]', 'int') AS wait_type_duration_ms, 
    xed.event_data.value('(data[@name="signal_duration"]/value)[1]', 'int') AS wait_type_signal_duration_ms 
  FROM #capture_waits_data
    CROSS APPLY targetdata.nodes('//RingBufferTarget/event') AS xed (event_data)) AS xei 
GROUP BY xei.wait_type
ORDER BY SUM(xei.wait_type_duration_ms) DESC;

I have usable data!


This is one sleepy system.

Last but not least, I’ll clean up after myself.

/* Clean up */ 
DROP TABLE #capture_waits_data; 
GO 
/* Stop session */
ALTER EVENT SESSION [Ring Buffer - Track Waits] ON SERVER STATE=STOP;
/* Drop session */
DROP EVENT SESSION [Ring Buffer - Track Waits] ON SERVER;

Querying the XML output from Extended Events sessions is easy if you take it step-by-step. When you are looking at the raw XML, establish which elements you want to extract. (These will vary by the events you choose to capture.) Move the data into a temp table you can work with. Use the XML value() method to extract one element at a time, until you have the data you need. At last, if you need or want to, you can aggregate.

Don’t let XML be the reason you won’t use Extended Events – it isn’t for me any longer!

Want to learn more about Extended Events? Start here! 

How to Use SQL Server’s Dedicated Admin Connection (DAC) – Video

You may have heard that there’s a secret back door into your SQL Server– but have you enabled access to it properly? Do you know when you might need to use it, and how you can connect? Kendra Little explains why the Dedicated Admin Connection (or “DAC”) is so important and how to configure remote access.

Don’t have 11 minutes / don’t have YouTube access? Head on over to our blog post on the DAC.
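
If you just want the short version: remote DAC access is a single sp_configure setting, and you connect by putting ADMIN: in front of the server name (in a single SSMS query window, or with sqlcmd’s -A switch). A minimal sketch:

/* Allow the DAC over the network, not just from the server itself */
EXEC sp_configure 'remote admin connections', 1;
GO
RECONFIGURE;
GO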

Jeremiah says: I had to use this once. I ran into a low memory situation on a SQL Server and Windows decided to page out SQL Server in the middle of the business day. The system ground to a halt – it was even unresponsive over RDP – and we had to hard power cycle the server (hold down the power button in the server room). I set up the DAC and the next time the issue happened, I was able to jump into the SQL Server, collect information, and then issue a graceful shutdown. This made it easier to diagnose the problem, too.

Brent says: Using the DAC is like knowing how to read the basics of DBCC CHECKDB output. You’re probably never going to need it, but when you do, just ten minutes of preparation will make a world of difference.

Jes says: What Brent said. A few minutes of practice every six months can save minutes or even hours of frustration – and potentially downtime – when faced with a real emergency.

Testing ALTER INDEX REBUILD with WAIT_AT_LOW_PRIORITY in SQL Server 2014

One of the blocking scenarios I find most interesting is related to online index rebuilds. Index rebuilds are only mostly online. In order to complete, they need a very high level of lock: a schema modification lock (SCH-M).

Here’s one way this can become a big problem:

  1. An online index rebuild starts against a large index
  2. A modification query runs against the table and is still going when the rebuild finishes building the new index. The index rebuild then tries to get its schema modification lock, and it can’t, because the modification query is still running
  3. The index rebuild’s super-high-level lock request then causes a massive pile-up behind it– blocking even queries that want to do dirty reads*

*I’m not saying dirty reads are a good thing, just pointing out that even NOLOCK can be blocked

Here’s a screenshot of a quick repro of that scenario:

Traditional Online Index Rebuild
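
If you’d like to recreate the pile-up, here’s a minimal sketch of the three sessions (my approximation, not the exact script from the screenshot – the timing is what matters, because the update has to still be running when the rebuild finishes):

/* Session 1: start a long online index rebuild */
ALTER INDEX PK_Votes ON dbo.Votes REBUILD WITH (ONLINE = ON);

/* Session 2: while the rebuild runs, modify the table and leave the transaction open */
BEGIN TRAN;
UPDATE dbo.Votes SET VoteTypeId = VoteTypeId WHERE Id = 1;
/* ...no COMMIT yet... */

/* Session 3: even a dirty read queues up behind the rebuild's SCH-M request */
SELECT COUNT(*) FROM dbo.Votes WITH (NOLOCK);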

And here’s what the blocking looks like in Adam Machanic’s free tool, sp_WhoIsActive, if those three queries are left running for a bit. It can stay ugly for a long time.

Traditional Online Index Rebuild-Blocking

Session 56’s need for a schema modification lock can cause a big backup!

Let’s Test WAIT_AT_LOW_PRIORITY and ABORT_AFTER_WAIT

SQL Server 2014 introduced some new options for index rebuilds:

  • WAIT_AT_LOW_PRIORITY
  • MAX_DURATION (in minutes)
  • ABORT_AFTER_WAIT (NONE / SELF / BLOCKERS – keep waiting, kill your own rebuild, or kill whoever’s blocking you)

This sounds pretty good, but how exactly does it impact the scenario above? Does “WAIT_AT_LOW_PRIORITY” really mean that it won’t start that nasty blocking chain? Or is it just that I can use “MAX_DURATION” to control the duration of the pain in this case?

Let’s take a look. Here’s the modified command I’m testing:

ALTER INDEX PK_Votes 
ON dbo.Votes
REBUILD WITH (ONLINE = ON (
    WAIT_AT_LOW_PRIORITY ( MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF ))
 );

Here are the commands I ran this time:

2014 Online Index Rebuild-Blocking

And here’s what the blocking looked like:


LCK_M_SCH_M_LOW_PRIORITY lock wait type saves the day

One thing to notice about the screenshot of the blocking: there’s no third query being blocked. There’s no pileup! “LOW_PRIORITY” really does mean low priority – the “NOLOCK” query is able to get its schema stability (Sch-S) lock and move forward.

What Happened with ABORT_AFTER_WAIT?

As instructed, my online index rebuild called it quits. Here’s the error message it threw when it gave up:

abort_after_wait_self

I Like It!

Blocking chains during maintenance windows can be a huge pain– especially if you’ve paid high licensing dollars to get online index operations. Having more options is definitely a plus.

Learn More

Eight Things A DBA Should Always Do with SQL Server

The other day, I started tweeting various single words you could start an argument with. I really liked this suggestion from @SQL_Kiwi:

It got me thinking. What are the things that I would say a DBA should always do or be ready for with SQL Server, unconditionally?

You should always:

As for the “nevers”, I’ll let you come up with that list.

Oracle Flashback: Undeleting Data

Or, “Oops, I didn’t mean to do that.”

We’ve all had to deal with an accidental deletion or even a dropped table. Sometimes you leave off a WHERE clause, sometimes you drop a table and find out that the change request listed the wrong table. Oracle uses a feature called Flashback to give database professionals the power to view data at different moments in time and take action based on what they find.

Using Flashback on a Dropped Table

No matter how rigorous your practices are, mistakes happen. How do you deal with accidentally dropping a table?

In the SQL Server world, we’d reach for our most recent backup, restore a copy of the database, and hope that the outage didn’t take too long. With Oracle, we can look in the recycle bin.

First we create a table:

CREATE TABLE test ( id INT, things VARCHAR2(50) );
INSERT INTO test VALUES ( 1, 'ONE' );
INSERT INTO test VALUES ( 2, 'TWO' );
COMMIT;

SELECT * FROM test ;

Results:

        ID THINGS
---------- --------------------------------------------------
         1 ONE
         2 TWO

And then we drop the table:

DROP TABLE test;
COMMIT;

SELECT * FROM test;

Results:

SELECT * FROM test
              *
ERROR at line 1:
ORA-00942: table or view does not exist

Oh no, I meant to drop test_two! Now what?

Thankfully, I can take a look in the recycle bin using the command SHOW RECYCLEBIN:

ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
---------------- ------------------------------ ------------ -------------------
BIG_TABLE        BIN$AZEU5yqWPZXgUwpdqMBL1Q==$0 TABLE        2014-08-26:15:51:31
TEST             BIN$CyLJjKhZPPDgUwpdqMAvJw==$0 TABLE        2014-12-26:09:23:46

Okay, now I can see that back in August I dropped a table named BIG_TABLE and our table named TEST is still hanging around. Let’s restore the TEST table:

FLASHBACK TABLE test TO BEFORE DROP RENAME TO test_dropped ;

Crisis averted! I’ve been able to restore the previously dropped table with a new name. I could leave out the RENAME TO portion and restore the table with its original name, but it’s always possible that someone else has restored the table. Instead, it’s probably a better practice to safely restore the table with a different name and then change the name with an ALTER TABLE:

ALTER TABLE test_dropped RENAME TO test;

Fixing Row Level Mistakes with Flashback

Let’s keep using our test table. Here’s what the data looks like now:

        ID THINGS
---------- --------------------------------------------------
         1 ONE
         2 TWO
         3 THREE
         4 FOUR
         5 FIVE
         6 FIVE

That data isn’t right. I need to know when the right data was present so I can get the right row in place. In order to do that, I can use the VERSIONS clause to tell Oracle to bring back specific versions of rows present for this particular table:

SELECT versions_startscn,
       versions_starttime,
       versions_endscn,
       versions_endtime,
       versions_xid,
       versions_operation,
       test.*
FROM   test
       VERSIONS BETWEEN TIMESTAMP
                TO_TIMESTAMP('2014-12-26 09:30:00', 'YYYY-MM-DD HH24:MI:SS')
                AND TO_TIMESTAMP('2014-12-26 09:49:00', 'YYYY-MM-DD HH24:MI:SS') ;


All the rows, all the time.

Now we can see a history of the row with an id of 6!

  • The row was inserted at 9:38AM.
  • At 9:41, someone corrected the things column to read SIX.
  • At 9:48, someone updated the things column back to FIVE – which we know is wrong.

How do we get the correct row back?

If we’d configured the database to track supplemental log data, we could use the view flashback_transaction_query and grab the UNDO_SQL column to roll back that one command. Since I haven’t configured supplemental log data tracking, we’ll have to be a little bit trickier.
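
For reference, that query would look roughly like this (a sketch – I can’t run it here, since supplemental logging isn’t enabled on this database):

SELECT xid, start_scn, operation, undo_sql
FROM   flashback_transaction_query
WHERE  table_owner = USER
AND    table_name = 'TEST'
ORDER  BY start_scn;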

Thankfully, we can use some UPDATE shenanigans to get exactly the rows that we want:

UPDATE  test t1
SET     things = (SELECT  t2.things
                  FROM    test AS OF TIMESTAMP
                  TO_TIMESTAMP('2014-12-26 09:42:00', 'YYYY-MM-DD HH24:MI:SS') t2
                  WHERE   t1.id = t2.id )
WHERE t1.id = 6 ;

SELECT * FROM test;

Results:

        ID THINGS
---------- --------------------------------------------------
         1 ONE
         2 TWO
         3 THREE
         4 FOUR
         5 FIVE
         6 SIX

Holy cow, we got our row back! Make sure you COMMIT the transaction (remember – Oracle won’t auto-commit like SQL Server).

Looking Into the Past

If you hadn’t guessed, it’s even possible to view the contents of a table as of a moment in time using the AS OF TIMESTAMP clause:

SELECT * FROM test AS OF TIMESTAMP
TO_TIMESTAMP('2014-12-26 09:40:00', 'YYYY-MM-DD HH24:MI:SS');

Results:

        ID THINGS
---------- --------------------------------------------------
         1 ONE
         2 TWO
         3 THREE
         4 FOUR
         5 FIVE
         6 FIVE

This functionality makes it possible for analysts to undo changes to the database, review changes for auditing purposes, or even to recalculate historical reports based on newer formulas.

The Tip of the Iceberg

Undeleting tables and reviewing previous rows is only a small portion of what you can accomplish with Oracle Flashback. Entire transactions can be flashed back, databases can be rolled back to a point in time, or the Data Archive feature can be used to house all changed versions of rows for all time. But even without diving into more advanced features, Oracle Flashback gives database professionals the powerful ability to correct for “oops” deletions in the application.

Does Updating Statistics Cause a Recompile if No Data Has Changed?

tl;dr — Not necessarily.

One of our students asked me a great question: if you update statistics on every table in the database, is that effectively the same as dumping the procedure cache on the instance? Will every execution plan have to be recompiled the next time it is run? I thought it was a great question and it spurred an interesting discussion about maintenance and recompilation.

Updating statistics can certainly cause recompilation. But based on my observations, it won’t necessarily cause every execution plan to recompile.

To test this out, I’m going to use a free tool that Jeremiah and I wrote, sp_BlitzTrace™, to trace activity in Extended Events and show me when a recompile occurs.

Our Test Setup: Users Table with No Nonclustered Indexes

I’m running a simple, non-optimized query against a restored copy of the StackOverflow database on SQL Server 2014. To keep things simple, I have left the dbo.Users table without any non-clustered indexes. The auto-create statistics option is on with the default synchronous setting. I cleared the procedure cache before I started my test.
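
For reference, clearing the procedure cache is a one-liner – on a test instance only, please:

/* Throw out every cached plan on the instance. Don't run this in production! */
DBCC FREEPROCCACHE;
GO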

sp_BlitzTrace™ starts an Extended Events trace against my own session. Then I run my (somewhat troubled) query and stop the trace. Then I read the results.

exec sp_BlitzTrace @Action='start', @TargetPath='S:\XEvents\Traces\', @SessionId=@@SPID, @TraceRecompiles=1;
GO
SELECT DisplayName, Reputation
FROM dbo.Users
WHERE DisplayName LIKE 'Kendra%'
ORDER BY Reputation DESC;
GO
exec sp_BlitzTrace @Action='stop';
GO
exec sp_BlitzTrace @Action='read';
GO

Here are the results from the first run. This compiled a plan and put it into cache:

01-initial run-fresh compile

I Find the Statistic To Update and Get Out My SledgeHammer

I find the statistics on the table by running sp_helpstats. This shows that I have a column statistic on the DisplayName column. I copy that statistic’s name to my clipboard. This is the statistic I’m going to update, because DisplayName is in the WHERE clause of my query.

01b-related row in statistics
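
If you want to run the same lookup, sp_helpstats takes the table name plus 'ALL' to include auto-created column statistics:

/* List all statistics on dbo.Users, including column stats */
EXEC sp_helpstats 'dbo.Users', 'ALL';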

I update the statistics with FULLSCAN with this code:

UPDATE STATISTICS dbo.Users _WA_Sys_00000005_1D7B6025 WITH FULLSCAN;
GO

I’m not a fan of updating all your stats with FULLSCAN. I just got out the sledgehammer to demonstrate whether it would cause a recompile.

… And I Don’t See A Recompile

After the statistics update, I run my test query block again (the same one above) with sp_BlitzTrace™.  I do NOT see a row returned showing a recompile event:

02-after statistics update

I can run the query multiple times, and the trace never shows a recompile event.

Hmm. Maybe the stats didn’t update somehow? We can check. I query sys.dm_db_stats_properties and it shows that this statistic DID update:

03-updated statistics
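
Here’s roughly the query I used for that check (a sketch – swap in your own table name):

SELECT s.name, sp.last_updated, sp.rows, sp.rows_sampled, sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.Users');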

… Let’s Fake It Out and Make a Modification

Nothing has been inserting, updating, or deleting data in my copy of the StackOverflow database. I run an insert statement (and roll it back, just for fun), and then update the statistics.

04-fake out insert and update stats
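
The “fake insert” was nothing fancy – something along these lines, with the column list trimmed to whatever your copy of the Users table requires (a rollback doesn’t reset the modification counter, which is the whole trick):

BEGIN TRAN;
INSERT dbo.Users (DisplayName, Reputation, CreationDate, LastAccessDate)
VALUES ('Fake User', 1, GETDATE(), GETDATE());
ROLLBACK;  /* the modification counter still ticks up */
GO
UPDATE STATISTICS dbo.Users _WA_Sys_00000005_1D7B6025 WITH FULLSCAN;
GO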

There’s Our Recompile!

Following the “fake insert” and statistics update, I run my query again and the Extended Events trace shows a sql_statement_recompile event. The cause is statistics change:

05-recompile

Finding: Statistics Update Alone Didn’t Cause a Recompile

SQL Server was smart enough to check if the data had changed. Updating statistics alone doesn’t always invalidate execution plans.

What Does This All Mean?

Mostly, I think it just means that SQL Server is pretty clever about when to do recompiles — at least SQL Server 2014, since that’s where I ran this test. But don’t take this as an excuse to be lazy. I’m a fan of using the @OnlyModifiedStatistics='Y' option in Ola Hallengren’s index maintenance solution if you’re using @UpdateStatistics='Y': because why even update the statistic if it looks like nothing has changed?

Announcing New 2015 Sessions: Database Maintenance Starter Kit

Jes and I are thrilled to announce three new sessions of our interactive Database Maintenance Starter Kit course! The new sessions will be held on:

Learn More about Database Maintenance Next Year

For $149, you get 3 hours of recorded video content on SQL Server database maintenance– and you get access to that for 18 months.

You also get to attend a 3-hour guided discussion and Q&A session with Jes and me on the date you purchase. We keep this session full of quizzes and polls to let you test your knowledge. The questions also naturally help YOU think of questions and dig deeper into the topics we’re talking about.

We Keep Your Class Small!

Here’s what’s really unique about this course: we keep your class small. A maximum of 30 students can buy a seat in each class. That means you get time to participate!

You’ll Learn the Intricacies of Database Maintenance

This course teaches you all about database maintenance, namely…

  • The pros and cons of maintenance plans vs SQL Server Agent jobs
  • How to select the right database recovery model
  • How to plan a backup strategy
  • How to check for corruption in your databases (and get notified if it happens)
  • The different types of index maintenance operations (and what can go wrong)
  • Why you need to be careful with index maintenance (and where to start)

Learn more about the class here.

How to Ask Microsoft a SQL Server Licensing Question

I’ve never been so excited to write about licensing.

Licensing is hard, y’all. It changes often and it’s full of lots of little details. Microsoft puts out licensing datasheets, but they’re in a difficult spot — if they explain absolutely everything, the documents get longer and harder to understand. So they have to balance between brevity and clarity.

Recently I had a question I wanted to confirm for SQL Server 2014 licensing. I looked for phone numbers and email addresses that I could use and share with clients and readers, and came up with nothing. As a last resort, I decided to try out the “Chat now” feature on the SQL Server sales page to see if I could get an answer.

licensing-chat-00

I clicked the blue “Chat now” button on the right side of the page

I Asked A Question About Passive Licenses and Cold Failover Licenses

Here’s the question I submitted:

I have a question about SQL Server 2014 licensing with Software Assurance. It is clear from the documentation that there is a single “passive” license for High Availability if you license an instance. I want to check if there is additionally a cold failover license that can be used for Disaster Recovery as described here (http://blogs.technet.com/b/volume-licensing/archive/2013/08/22/licensing-how-to-sql-server-fail-over-vs-cold-disaster-recovery-rights.aspx), or if that was removed in SQL Server 2014.

My SQL Licensing Question Was Answered

I had absolutely no idea what to expect from an on-demand chat window at 6:45 am Pacific time on a Monday. My chat request went into a queue and I waited for several minutes, which was totally fine. But then, Gilbert appeared!

He asked if it was OK for me to wait for a bit while he researched my question. (Willing to wait? I was willing to send flowers if he answered this.) After a bit, he came back and gave me an answer:

licensing-chat-05

Cold disaster recovery rights have not been removed. In SQL Server 2014, you need Software Assurance to take advantage of this right, but using it doesn’t mean you lose the license for a “passive” node for failover.

Operators Are Standing By

I can practically hear angels singing. I asked a really specific licensing question before 7 am on a Monday morning and someone was there to help me almost immediately. They took the time to research my question AND I could ask follow up questions to make sure I understood properly.

When you’ve got a tough SQL Server licensing question, first check the licensing datasheet. If it’s not covered in there, head on over to that SQL Server sales page and click on “Chat now”. Cause that thing is awesome.

Tip for Learning Extended Events – Use “New Session” (Not “New Session Wizard”)

Extended Events have become one of my favorite tools for learning about tuning SQL Server. Whenever I wonder how something works, I can fire up a test instance and play around with XEvents. It’s always interesting, and at least 5% of the time I end up learning what I intended.

Not everything is sunshine and rainbows. Finding the right events and making sure you understand the output can be really frustrating. And you’ll want a SQL Server 2012 or 2014 test instance to do this – the events are much richer and there’s an interface to help you along.

But like a lot of things, the interface isn’t all that intuitive.

New Session, or New Session Wizard?

New Session - No Wizard

I used to click on “New Session Wizard”, simply because I had no idea what the difference was between the two options, and it was first in the list.

The “Wizard” option gives you some templates that you can choose to customize. If you’re just starting out, that could be interesting to use on your test instance.

But it always left me frustrated that I couldn’t use some of the more advanced options in Extended Events. I felt like I was just scratching the surface, and trying to code sessions from scratch using Books Online was slow and painful.

“New Session” Just Gives You a Better Wizard

When you click “New Session”, it turns out that you still get a wizard. It starts out asking you for a session name and some basic properties:

It is still wizard enough

Then you can choose your events. I decided I wanted to test some plan guides, and see if I could use Extended Events to count how many times a plan guide was used. (This is something that the DMVs and perf counters don’t make accessible.) I added the plan_guide_successful event:

plan guide successful

Not all events are created equal– some can slow down your SQL Server, just like in Profiler/SQLTrace. Be careful out there.

Extended Events Targets: I Can Pick a Histogram!

The “Data Storage” tab is where this gets good. I was always frustrated when I was using the Wizard that I had limited choices about how to set up a target. If you use this version of the Wizard (the Un-Wizard Wizard?), then you get to pick your target type and it’ll help you script that out.

In this case, I want to use the histogram target. I don’t care when the plan guides were executed– I just want a count of which ones are being used. This can help keep my trace lightweight, plus if that’s the only information I want then I don’t have to do any complicated grouping on the output.

The GUI is nice and friendly and lets me say I want my “Buckets” to be on the plan_guide_name field:

Histogram Target
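
Script the finished session out and you get something along these lines (a sketch from memory – the session name and option values are mine):

CREATE EVENT SESSION [Count Plan Guide Usage] ON SERVER
ADD EVENT sqlserver.plan_guide_successful
ADD TARGET package0.histogram (
    SET filtering_event_name = N'sqlserver.plan_guide_successful',
        source = N'plan_guide_name',
        source_type = 0 /* 0 = event field, 1 = action */ )
WITH (STARTUP_STATE = OFF);
GO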

After I create my session, it shows up in my Extended Events session list in the SSMS Object Explorer. I can right click on it and script it out, start it, stop it, etc.

Start Session

I Ran My Plan Guides Test and … It Worked!

I fired up my session, created some plan guides, and ran some queries against my restored copy of the StackOverflow database.  After building up a bit of data, I just double-clicked on package0.histogram to open the results so far:

doubleclick on the package
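
If you’d rather query the target than double-click it, the XML-shredding approach from earlier works here too – the histogram target stores each bucket as a Slot node (a sketch, using my session name from above):

SELECT xed.slot_data.value('(value)[1]', 'nvarchar(128)') AS plan_guide_name,
       xed.slot_data.value('(@count)[1]', 'int') AS execution_count
FROM (SELECT CAST(xet.target_data AS xml) AS targetdata
      FROM sys.dm_xe_session_targets AS xet
      JOIN sys.dm_xe_sessions AS xes
          ON xes.address = xet.event_session_address
      WHERE xes.name = 'Count Plan Guide Usage'
        AND xet.target_name = 'histogram') AS t
  CROSS APPLY t.targetdata.nodes('//HistogramTarget/Slot') AS xed (slot_data);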

And it worked! I created a ‘template’ plan guide that I executed via a HammerDB workload– so that got hit a ton. Then I created two other plan guides that I just ran twice and once, respectively.

voila-histogram-target-plan-guide-execution-count

Extended Events Aren’t Always This Easy

Putting together this little example went smoothly. Surprisingly smoothly. So smoothly that I really want to quit while I’m ahead.
