
Three reasons to use Extended Events

I have a new mission: to convince you, the long-time Profiler user, to switch to Extended Events. I realize I have a long, difficult task ahead of me. Profiler has been around for a long time. You’re comfortable with it. You know it’s not perfect, but you have learned to live with its imperfections. Now I want you to step outside your comfort zone and learn a new tool, with new terminology, a new interface and new capabilities.

I’m going to start with a few reasons for you to consider Extended Events. I want you to think about these things and whether they could make your job easier.

1. The number of events in Profiler has remained the same since SQL Server 2008. The number of events in Extended Events (XE) has more than tripled.

Number of events

What kind of events are being added to XE, but not Profiler? Events for any new features, such as Availability Groups, In-Memory OLTP, and Windows Azure Storage. You can query sys.trace_events to see which events are available for Profiler, and sys.dm_xe_objects to see the events for XE.

/* Profiler */
SELECT trace_event_id, name
FROM sys.trace_events;

/* XE */
SELECT pkg.name AS PkgName, 
  pkg.description AS PkgDescr, 
  obj.name AS EventName, 
  obj.description AS EventDescr
FROM sys.dm_xe_objects obj
  INNER JOIN sys.dm_xe_packages pkg ON pkg.guid = obj.package_guid
WHERE obj.object_type = 'event'
  AND pkg.name NOT IN ('qds', 'ucs', 'SecAudit');

If I am using XE and have an Availability Group, I can track availability_replica_state_change to see a replica’s previous and current states – such as when a replica goes from secondary to primary. If I’m using SQL Server 2014’s buffer pool extensions, I can count how many times pages are read, using buffer_pool_extension_pages_read, or evicted, using buffer_pool_extension_pages_evicted. Even without any special features, XE lets me dig deeper into what SQL Server is doing. I can track the async_io_requested and async_io_completed events, which I can’t do with Profiler. I can count page_split events as they happen, to see if page splits are causing a performance problem. XE has many, many events!
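For example, a quick session to count page splits as they happen might look something like this. This is a sketch I put together for illustration – the session name is mine, and the event_counter target I use here gets more attention in the next point.

/* Sketch: count page splits as they happen. Session name is my own. */
CREATE EVENT SESSION [Count Page Splits] ON SERVER
ADD EVENT sqlserver.page_split
ADD TARGET package0.event_counter
WITH (STARTUP_STATE=OFF);
GO

ALTER EVENT SESSION [Count Page Splits] ON SERVER STATE=START;
GO

/* Peek at the running counts - the target_data XML holds one count per event. */
SELECT xes.name, CAST(xet.target_data AS xml) AS counter_data
FROM sys.dm_xe_session_targets AS xet
JOIN sys.dm_xe_sessions AS xes ON xes.address = xet.event_session_address
WHERE xes.name = 'Count Page Splits'
  AND xet.target_name = 'event_counter';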

2. XE gives you multiple options for saving and viewing the data. With Profiler, you can view the action live on-screen (and risk bringing the server down), or you can save it to a .trc file and review it after the fact. XE lets you choose one or more targets for each session, which gives you a lot of flexibility.

You can “View Live Data” and watch the events you’ve selected scroll across the screen (although I still wouldn’t recommend this approach). You can save your events to a file on disk, then review them later. You can also collect data in ways Profiler didn’t allow. For example, the event_counter target lets you count the instances of an event as it happens – no going back and aggregating after the fact. The histogram target is similar, but lets you put the events into groups – again, as they happen, not later (there’s a sketch of one after the screenshot below). You can also use the pair_matching target to pair up related events and show the ones left unmatched.


A sample of the histogram, capturing the number of SELECT statements executed per database.
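The exact session behind that screenshot isn’t shown here, but a simplified sketch of a histogram session that buckets completed statements by database looks something like this – the session name and slot count are my own choices:

CREATE EVENT SESSION [Statements Per Database] ON SERVER
ADD EVENT sqlserver.sql_statement_completed(
    ACTION(sqlserver.database_id))
ADD TARGET package0.histogram(
    SET slots=32,
        source=N'sqlserver.database_id',
        source_type=1)  /* 1 = group on an action, 0 = group on an event column */
WITH (STARTUP_STATE=OFF);
GO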

 

3. XE sessions are easily scriptable & portable. Yes, you could script out a Profiler session. Does this really make sense to you?

Do you know, off the top of your head, what a line like exec sp_trace_setevent @TraceId, 15, 10, @on means?

An XE script is simple to read and understand. I can create this once and deploy it to any server.
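Here’s a sketch of that kind of session – the file path is a placeholder, and the original screenshot may have used slightly different options:

CREATE EVENT SESSION [Track Statement Completions] ON SERVER
ADD EVENT sqlserver.sp_statement_completed(
    ACTION(sqlserver.sql_text, sqlserver.session_id)),
ADD EVENT sqlserver.sql_statement_completed(
    ACTION(sqlserver.sql_text, sqlserver.session_id))
ADD TARGET package0.event_file(SET filename=N'D:\XE\TrackStatementCompletions.xel')  /* placeholder path */
WITH (STARTUP_STATE=OFF);
GO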

I can clearly tell I'm capturing sp_statement_completed and sql_statement_completed events.


It’s time to consider moving away from Profiler and using Extended Events. In a future release of SQL Server, Profiler will be gone – so learn more about Extended Events starting today!

Querying Extended Events Target XML [Video]

If you’ve poked at Extended Events in the last five years, you know the targets store data as…XML. Has XML been the reason you haven’t adopted Extended Events? Don’t let it stop you! In this video, Jes breaks down her method for unraveling the XML of targets.

The scripts I used can be downloaded here.

Jeremiah says: This is great! I wish I’d had this kind of tutorial when I started working with Extended Events.

Kendra says: XQuery is rough, and totally weird when you’re first looking at it. It’s great to get a guided tour of this strange part of the TSQL language.

How to Query Extended Events Target XML

When Extended Events (XE) were released with SQL Server 2008, I was excited – something new to learn! I read Microsoft articles about them, read blog posts about them, and tried to use them. At that time, there was no GUI for XE. I had to create the sessions with T-SQL. That wasn’t awful – I learned the syntax after some trial and error, and could start and stop sessions. There was also no GUI to view the saved data – and it was all saved as XML.

I ran headfirst into a brick wall.

This is the wall I ran into.

I gave up when faced with XML. When SQL Server 2008 R2 came out, I once again looked at XE. With no GUI for creating sessions or viewing data, I retreated in defeat again. With SQL Server 2012 came the GUI – a new session wizard, a new session dialog box, the (not-recommended) ability to view live data, and the ability to see the information you collected. But that didn’t solve the root problem for me – all the good data, all the results I wanted to view, aggregate, and slice and dice, were still in XML.

So, I learned to query XML with T-SQL. It hasn’t been easy for me to understand. But, by breaking it down into small steps I was able to conquer it. If querying XML has been the reason you’ve been avoiding XE, I’m here to help you get past it!

Create a session

I’m going to create a session that collects wait statistics and puts them in the Ring Buffer target, then run a small workload to capture waits.

CREATE EVENT SESSION [Ring Buffer - Track Waits] ON SERVER 
ADD EVENT sqlos.wait_info(
    ACTION(sqlserver.database_id,sqlserver.database_name,sqlserver.is_system,sqlserver.session_id,sqlserver.sql_text)) 
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE=OFF)
GO

/* Start session */
ALTER EVENT SESSION [Ring Buffer - Track Waits] ON SERVER STATE=START;

/* Run a workload or series of queries. */ 
USE StackOverflow_20130906;
GO 
SELECT Id, DisplayName, CreationDate, LastAccessDate 
FROM Users 
WHERE UpVotes > 1000; 

SELECT V.PostId, V.VoteTypeId 
FROM Votes V 
  INNER JOIN Users U ON U.Id=V.UserId 
WHERE U.DownVotes > 100;

I used the default query from Books Online to view the data. This is not helpful!

SELECT name, target_name, CAST(xet.target_data AS xml)
FROM sys.dm_xe_session_targets AS xet
JOIN sys.dm_xe_sessions AS xe
   ON (xe.address = xet.event_session_address)
WHERE xe.name = 'Ring Buffer - Track Waits';

ring buffer default query

Even if I click the target_data column (which I can do!), I just get a page of XML I have to read through. Yuck!

ring buffer XML

Querying XML

So, how do I make this data understandable, so I can count the number of each wait type that happened? I want to get the XML into a format I can use. I’m going to start by inserting the data into a temp table and selecting it.

SELECT CAST(target_data as xml) AS targetdata
INTO #capture_waits_data
FROM sys.dm_xe_session_targets xet
JOIN sys.dm_xe_sessions xes
    ON xes.address = xet.event_session_address
WHERE xes.name = 'Ring Buffer - Track Waits'
  AND xet.target_name = 'ring_buffer';

SELECT * 
FROM #capture_waits_data;

This gives us the same link to a page of XML. Now that the data is in a temp table, though, I can “shred” the XML. XML is just nodes of data. You only need to understand the hierarchy of the nodes to query it.

Let’s look at this XML. I can see there is one element, event name=”wait_info” (highlighted in green) that I want to dig into. Under that, I want to get the “wait_type”, “duration”, and “signal_duration” (highlighted in yellow) attributes.

xml nodes i want to see

To do so, I use the xml data type’s value() method to pull out wait_type, like so.

SELECT xed.event_data.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)') AS wait_type
FROM #capture_waits_data
  CROSS APPLY targetdata.nodes('//RingBufferTarget/event') AS xed (event_data);

Once I understand how to use the value() method, I can do the same for the other data I want to see.

SELECT xed.event_data.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
  xed.event_data.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)') AS wait_type, 
  xed.event_data.value('(data[@name="duration"]/value)[1]', 'int') AS wait_type_duration_ms, 
  xed.event_data.value('(data[@name="signal_duration"]/value)[1]', 'int') AS wait_type_signal_duration_ms 
FROM #capture_waits_data
  CROSS APPLY targetdata.nodes('//RingBufferTarget/event') AS xed (event_data);

I can read this!

Level up: Aggregating data

For this XE session (and wait statistics in general), having every wait listed out isn’t helpful. I want to aggregate the data so I can see which wait occurred the most frequently. What happens when I try to aggregate this query?

SELECT xed.event_data.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)') AS wait_type, 
  COUNT (xed.event_data.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)')) AS count_wait_type, 
  SUM (xed.event_data.value('(data[@name="duration"]/value)[1]', 'int')) AS sum_wait_type_duration_ms, 
  SUM (xed.event_data.value('(data[@name="signal_duration"]/value)[1]', 'int')) AS sum_wait_type_signal_duration_ms 
FROM #capture_waits_data
  CROSS APPLY targetdata.nodes('//RingBufferTarget/event') AS xed (event_data) 
GROUP BY xed.event_data.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)');

I get an error.

I have to take a different approach. I’m going to use a derived table. I will put my initial query to select the data from the temp table into the FROM clause, then aggregate on the results.

SELECT xei.wait_type, 
  COUNT(xei.wait_type) AS count_wait_type, 
  SUM(xei.wait_type_duration_ms) AS sum_wait_type_duration_ms, 
  SUM(xei.wait_type_signal_duration_ms) AS sum_wait_type_signal_duration_ms 
FROM 
  (SELECT xed.event_data.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
    xed.event_data.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)') AS wait_type, 
    xed.event_data.value('(data[@name="duration"]/value)[1]', 'int') AS wait_type_duration_ms, 
    xed.event_data.value('(data[@name="signal_duration"]/value)[1]', 'int') AS wait_type_signal_duration_ms 
  FROM #capture_waits_data
    CROSS APPLY targetdata.nodes('//RingBufferTarget/event') AS xed (event_data)) AS xei 
GROUP BY xei.wait_type
ORDER BY SUM(xei.wait_type_duration_ms) DESC;

I have usable data!

This is one sleepy system.


Last but not least, I’ll clean up after myself.

/* Clean up */ 
DROP TABLE #capture_waits_data; 
GO 
/* Stop session */
ALTER EVENT SESSION [Ring Buffer - Track Waits] ON SERVER STATE=STOP;
/* Drop session */
DROP EVENT SESSION [Ring Buffer - Track Waits] ON SERVER;

Querying the XML output from Extended Events sessions is easy if you take it step by step. When you are looking at the raw XML, establish which elements and attributes you want to extract. (These will vary by the events you choose to capture.) Move the data into a temp table you can work with. Use the value() method to extract one item at a time, until you have the data you need. Finally, aggregate if you need or want to.

Don’t let XML be the reason you won’t use Extended Events – it isn’t for me any longer!

Want to learn more about Extended Events? Start here! 

[Video] Statistics: Beyond Performance

Tables and indexes in your databases have statistics associated with them, and you know they help the query optimizer determine the best way to execute a query. But do you know where the information about them is stored, what types of maintenance you should be performing on them, how to check if they need to be updated, and what causes them to change? Jes will explain the care and feeding of database statistics in this webinar.
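If you want a head start before the webinar, a query along these lines shows when each statistic was last updated and how many modifications have piled up since. This is my sketch, not from the webinar, and sys.dm_db_stats_properties requires SQL Server 2008 R2 SP2 or 2012 SP1 and later.

SELECT OBJECT_NAME(s.object_id) AS table_name,
  s.name AS stats_name,
  sp.last_updated,
  sp.rows,
  sp.rows_sampled,
  sp.modification_counter
FROM sys.stats AS s
  CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.modification_counter DESC;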

Corrupt Your Database – On Purpose! [Video]

Corruption: it can strike at any time. You know this, so you have your page verification option set to CHECKSUM and you run DBCC CHECKDB regularly. When the dreaded day finally arrives, what do you do? If you haven’t faced corruption yet, or you want to brush up your repair skills, Jes will show you how to corrupt your database with a sample database and a hex editor – and how to fix it!
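Before you play along, it’s worth confirming your starting point. Something like this – a sketch, with a placeholder database name – checks the page verify setting and runs a full consistency check:

/* Confirm the page verify option for every database. */
SELECT name, page_verify_option_desc
FROM sys.databases;

/* Run a full consistency check. The database name is a placeholder. */
DBCC CHECKDB (N'YourTestDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;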

Want to learn more about repair options? Read up on how to restore a page in SQL Server Standard and Enterprise Edition.

The script is available for download, with the understanding that this is purely for test purposes, and will never, ever be used on a production database. Ever.

How to Restore a Page in SQL Server Standard and Enterprise Edition

One of the many restore features in SQL Server is the ability to restore one or more pages of data. This can be very convenient in some narrow situations – for example, corruption occurs on one page or an oops update is made to one record.

The page restore process is not straightforward, however, and, as I recently discovered, the Books Online article about it is confusing. See, you have to perform the restore offline in all editions except Enterprise Edition – but the only example Books Online gives is…Enterprise Edition.

Here’s a straightforward breakdown of how to do a page-level restore both offline and online. For the sake of brevity, let’s say I have two databases – TestRestoreOnline and TestRestoreOffline. Both are in Full recovery. Each has one damaged page, which I’m going to restore. (For a full demo script, click here.)
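If you’re not sure which page is damaged, one place to look (my addition, not part of the demo script) is msdb.dbo.suspect_pages, which SQL Server populates when it hits a bad page:

SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;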

Offline

You should already have an existing full backup – mine is at D:\SQL Backups\TestRestoreOffline-Backup1.bak. I also have one transaction log backup, D:\SQL Backups\TestRestoreOffline-LogBackup1.trn.

/* This example uses an OFFLINE restore, which is applicable to all versions of SQL Server. */
USE master;
GO

/* The NORECOVERY option on this tail-log backup puts the database into a restoring state - it appears "offline", but you don't actually set it OFFLINE. */
BACKUP LOG TestRestoreOffline TO DISK=N'D:\SQL Backups\TestRestoreOffline-LogBackup2.trn'
WITH NORECOVERY;
GO

/* Restore full backup, specifying one PAGE.
I used sys.dm_db_database_page_allocations to find the page number. */
RESTORE DATABASE TestRestoreOffline
PAGE='1:293' --Have multiple? Separate with commas.
FROM DISK=N'D:\SQL Backups\TestRestoreOffline-Backup1.bak'
WITH NORECOVERY;

/* Restore log backups */
RESTORE LOG TestRestoreOffline FROM DISK=N'D:\SQL Backups\TestRestoreOffline-LogBackup1.trn'
WITH NORECOVERY;

RESTORE LOG TestRestoreOffline FROM DISK=N'D:\SQL Backups\TestRestoreOffline-LogBackup2.trn'
WITH NORECOVERY;

/* Bring database "online" */
RESTORE DATABASE TestRestoreOffline
WITH RECOVERY;

That is an offline page restore. Because the tail-log backup taken WITH NORECOVERY puts the database into a restoring state before the restores begin, the database can’t be accessed.

Online

An online page restore is only available in Enterprise Edition. This will allow users to access other objects in the database while you are restoring the page(s) needed.

You should already have an existing full backup (D:\SQL Backups\TestRestoreOnline-Backup1.bak) and log backup(s) (D:\SQL Backups\TestRestoreOnline-LogBackup1.trn).

/* This is an example of an online page restore. */
USE master;
GO

/* Restore full backup, specifying one PAGE.
I used sys.dm_db_database_page_allocations to find the page number. */
RESTORE DATABASE TestRestoreOnline
PAGE='1:293' --Have multiple? Separate with commas.
FROM DISK=N'D:\SQL Backups\TestRestoreOnline-Backup1.bak'
WITH NORECOVERY;

/* Restore log backups */
RESTORE LOG TestRestoreOnline FROM DISK=N'D:\SQL Backups\TestRestoreOnline-LogBackup1.trn'
WITH NORECOVERY;

/* With Enterprise Edition, the tail-log backup - taken WITH NORECOVERY - goes here, near the end of the sequence. */
BACKUP LOG TestRestoreOnline TO DISK=N'D:\SQL Backups\TestRestoreOnline-LogBackup2.trn'
WITH NORECOVERY;
GO

/* Restore the last log backup */
RESTORE LOG TestRestoreOnline FROM DISK=N'D:\SQL Backups\TestRestoreOnline-LogBackup2.trn'
WITH NORECOVERY;

/* Restore database */
RESTORE DATABASE TestRestoreOnline
WITH RECOVERY;

The steps for an online restore differ slightly. The tail-log backup is taken after all the other log backups are applied, instead of at the beginning of the sequence.

Want to know more?

Backup and recovery skills are essential for every DBA, but they can be complicated.

Are SQL Server Functions Dragging Your Query Down?

In most coding languages, functions are often-used blocks of code that can be reused from multiple locations, leading to less code – and cleaner code. SQL Server also lets us create functions that can be used the same way. They are reusable blocks of code that can be called from multiple locations. So, if you need to format phone numbers a certain way, or parse for specific characters, you can do so using a function.

The question is, how much work is SQL Server doing when you call a function? If it’s in the SELECT clause, is it called once, processing all rows, or once for each row in the result set, regardless of whether that’s 1 row or 100,000? What if it’s in the WHERE clause?

I’ll let you in on a little secret: if a function is used in the SELECT or WHERE, the function can be called many, many times. If the function is very resource-intensive, it could be causing your query to be very slow – and you would never see the execution of the function within the execution plan of the calling query.

Yep, SQL Server’s execution plans can be a bit vague when it comes to functions – and by “a bit vague”, I mean, “They don’t show up at all”. You need to dig deeper!

I’m going to run a few demos against the AdventureWorks2012 sample database in a SQL Server 2014 instance to show this!

First, I create a scalar-valued function that returns the five leftmost characters of a LastName.

CREATE FUNCTION [dbo].[ParseLastName](@LastName VARCHAR(50))
RETURNS VARCHAR(5)
AS
-- Returns the 5 leftmost characters of the last name
BEGIN
    DECLARE @ret VARCHAR(5);

    SET @ret = LEFT(@LastName, 5);

    RETURN @ret;
END;

Then, I create an Extended Events session to track statement completion. (Note: I have only tested this on SQL Server 2014, not lower versions. Using SQL Server 2008 R2 or earlier? You could create a server-side trace to capture the equivalent statement-completed events, but it won’t give you some functionality I’ll show later.)

CREATE EVENT SESSION [CaptureFunctionExecutions] ON SERVER
ADD EVENT sqlserver.sp_statement_completed(
    ACTION(sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.sql_statement_completed(
    ACTION(sqlserver.sql_text,sqlserver.tsql_stack))
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF);
GO

I start the Extended Events session, and then turn on actual execution plans.
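Starting the session is one line of standard syntax:

ALTER EVENT SESSION [CaptureFunctionExecutions] ON SERVER STATE=START;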

I start with a simple query, which returns 19,972 rows.

SELECT LastName
FROM Person.Person;

The execution plan shows an index scan and has a cost of 0.10451.

function 1

Looking at the details of the index scan, I see Estimated Number of Executions is 1, and Number of Executions is 1.

Let’s look at the same query when it performs the same calculation as the function – LEFT(LastName, 5).

SELECT LastName, LEFT(LastName, 5)
FROM Person.Person;

There’s now an additional operator – a compute scalar. The cost has risen slightly to 0.106508.

function 2

Now, I will modify the query to call the function from the SELECT clause.

SELECT LastName, dbo.ParseLastName(LastName)
FROM Person.Person;

Looking at the execution plan, I see an index scan and a compute scalar. The cost is the same as before – 0.106508.

function 3

Expanding the properties for the compute scalar, I see the function, but it says there is only one execution.

function 4

A quick glance at my Extended Events live feed tells a different story.

function 5

If I add grouping by statement, I can see the function was actually executed 19,972 times – once for each row in the result set.

function 6

That’s a lot more work than advertised!
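If you’d rather confirm the counts with T-SQL instead of the Live Data viewer, you can reuse the ring-buffer shredding pattern from my earlier post. This is a rough sketch – the temp table name is mine, and remember that the ring buffer keeps a limited history, so the counts can fall short of the true totals.

SELECT CAST(target_data AS xml) AS targetdata
INTO #capture_function_data
FROM sys.dm_xe_session_targets xet
JOIN sys.dm_xe_sessions xes
    ON xes.address = xet.event_session_address
WHERE xes.name = 'CaptureFunctionExecutions'
  AND xet.target_name = 'ring_buffer';

/* Count how many times each event fired. */
SELECT xed.event_data.value('(@name)[1]', 'varchar(60)') AS event_name,
  COUNT(*) AS occurrences
FROM #capture_function_data
  CROSS APPLY targetdata.nodes('//RingBufferTarget/event') AS xed (event_data)
GROUP BY xed.event_data.value('(@name)[1]', 'varchar(60)');

DROP TABLE #capture_function_data;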

Does the same thing happen if the function is in the WHERE clause?

SELECT FirstName, LastName
FROM Person.Person
WHERE dbo.ParseLastName(LastName) = 'McCar';

Two rows are returned. The execution plan now has an index scan, a compute scalar, and a filter. The cost is 0.118091.

function 7

The Extended Events session again shows 19,972 executions – once for each row in the index.

function 8

The data isn’t filtered out until after the function is called, so it is executed once for each row.

Conclusion

These examples show that whether one row or many rows are returned in the query result set, a function used in the SELECT or WHERE clause can be called many, many times. It could be one of the top resource-consuming queries on your server!

How can you see if a function is bringing your server’s performance down? Look at the top queries in your plan cache using our sp_BlitzCache tool, by total CPU and by number of executions, to see if this is happening to you.
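For example – a sketch, with parameter names written from memory, so check the documentation that ships with the proc:

EXEC dbo.sp_BlitzCache @SortOrder = 'cpu', @Top = 10;
EXEC dbo.sp_BlitzCache @SortOrder = 'executions', @Top = 10;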

Email Query Results Using a SQL Server Agent Job

SQL Server Agent is one of my favorite tools – it lets you schedule jobs automatically, alerts you if things are going badly, and captures information into database tables.

One common request is to execute a query and send the results via email on a regular basis. This task is a great match for SQL Server Agent and sp_send_dbmail. By setting up a job step that uses sp_send_dbmail with the @query parameter, and building on it with other available parameters, you can accomplish this easily.

Let’s take a look at a job I’ve created. The first step is to gather metrics from the sys.dm_os_sys_memory DMV. I insert the data into a table in my master database.


Job Step 1
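The post doesn’t show the exact table definition, so here’s my best guess at what that first step runs, based on the columns queried in step 2 – treat the table definition as an assumption:

/* Table definition is my assumption, matching the columns used in step 2. */
IF OBJECT_ID('dbo.MemoryDMVHistory') IS NULL
    CREATE TABLE dbo.MemoryDMVHistory (
        total_physical_memory_kb BIGINT,
        available_physical_memory_kb BIGINT,
        system_memory_state_desc NVARCHAR(256),
        collection_date_time DATETIME
    );

INSERT INTO dbo.MemoryDMVHistory (total_physical_memory_kb, available_physical_memory_kb, system_memory_state_desc, collection_date_time)
SELECT total_physical_memory_kb, available_physical_memory_kb, system_memory_state_desc, GETDATE()
FROM sys.dm_os_sys_memory;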

The second step in this job uses sp_send_dbmail to send the results of the query from today to a database mail profile account.


Job Step 2

Let’s look at the command more closely.

EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Main DB Mail profile',
@recipients = 'jes@brentozar.com',
@subject = 'Memory Values',
@query = N'SELECT total_physical_memory_kb, available_physical_memory_kb, system_memory_state_desc, collection_date_time
FROM MemoryDMVHistory
WHERE CAST(collection_date_time AS Date) = CAST(GETDATE() AS DATE)
ORDER BY collection_date_time DESC;',
@attach_query_result_as_file = 1,
@query_attachment_filename = 'Memory Values.txt';

  • @profile_name is the Database Mail profile you want to use to send the email. You must have at least one profile already set up.
  • @recipients is the list of email addresses to send the email to. If you have more than one, separate them with a semi-colon.
  • @subject is the email subject.
  • @query is the query you wish to run. MSDN says it “can contain any valid Transact-SQL statements”, but I haven’t tested the limits of this statement yet.
  • @attach_query_result_as_file has two options: 0 or 1. When set to 0, the query results are included in the body of the email. When set to 1, the results are included as an attachment.
  • @query_attachment_filename is optional, and is the name and file extension of the attachment. I recommend setting this, with a .txt or .csv extension.

There are other options you can add as well, such as @execute_query_database, @query_result_header, @query_result_width, and more.

When I execute the job, I receive this email:

agent job query email

When I open the .txt attachment, I will see my query results:
agent job query text file

This isn’t well-formatted, so I may want to modify the results with @query_result_header and @query_result_width.
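For example, here’s a sketch of the same call with those two parameters added – the width value is arbitrary:

EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Main DB Mail profile',
@recipients = 'jes@brentozar.com',
@subject = 'Memory Values',
@query = N'SELECT total_physical_memory_kb, available_physical_memory_kb, system_memory_state_desc, collection_date_time
FROM MemoryDMVHistory
WHERE CAST(collection_date_time AS Date) = CAST(GETDATE() AS DATE)
ORDER BY collection_date_time DESC;',
@attach_query_result_as_file = 1,
@query_attachment_filename = 'Memory Values.txt',
@query_result_header = 1,   /* include column headers */
@query_result_width = 400;  /* widen the line length so columns wrap less */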

Both SQL Server Agent jobs and sp_send_dbmail give you a lot of flexibility for creating and emailing query results. Learn to use them effectively!

Get in the (SQL Server) Community Spirit [Video]

Thanks for tuning in to our webcast again this week! To join our weekly webcast for live Q&A, make sure to watch the video by 12:00 PM EST on Tuesday, September 9! Not only do we answer your questions, we also give away a prize at 12:25 PM – don’t miss it!

The SQL Server community is unlike any other technical community. We have so many helpful people and our pick of resources. When you want to contribute, you have many options – from speaking to writing, attending conferences to running user groups, answering forum questions to writing articles. Which should you invest your time in to help your career and personal growth? Let me share my experiences with all of these so you can make the best choice.

Have questions? Feel free to leave a comment so we can discuss it on Tuesday!

SQL Server Transparent Data Encryption (TDE) Common Questions

Transparent Data Encryption is a way to encrypt your data in SQL Server. It affects the data and log files of the database. I recently gave a webcast on this topic, and got some great questions about TDE.


Lock that data up tight!

What versions of SQL Server is TDE available in?
It’s available in SQL Server 2008 and later. (Let me ask again…why are you still using SQL Server 2005?)

Is TDE only available in Enterprise Edition?
Yes, this is an expensive-edition-only feature.

Can TDE encrypt at the table or column level?
No, it’s all or nothing.

Does TDE provide encryption of backups?
When you enable TDE on a database, the backups will, by nature of the feature, be encrypted. However, it’s not possible to encrypt only the backup files and not the data. Native backup-only encryption is available as of SQL Server 2014 (Enterprise, BI, and Standard editions).
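As a side note, a native encrypted backup in SQL Server 2014 and later looks something like this – a sketch, with placeholder names, and it assumes a backup certificate already exists in master:

BACKUP DATABASE SalesDB  /* placeholder database name */
TO DISK = N'D:\SQL Backups\SalesDB-Encrypted.bak'
WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert),
  COMPRESSION;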

When I turn TDE on, will all the data pages need to be read? Will this take some time?
Yes, and yes. Test it in your development environment first to determine how long it will take to perform this in production.
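For reference, here’s a rough sketch of what enabling TDE looks like – the database name, certificate name, passwords, and file paths are all placeholders:

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'S0meStr0ngP@ssw0rd!';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
/* Back up the certificate and private key right away - without them,
   the encrypted database and its backups can't be restored anywhere else. */
BACKUP CERTIFICATE TDECert TO FILE = 'D:\SQL Backups\TDECert.cer'
WITH PRIVATE KEY (FILE = 'D:\SQL Backups\TDECert.pvk',
    ENCRYPTION BY PASSWORD = 'An0therStr0ngP@ssw0rd!');
GO
USE SalesDB;  /* placeholder database name */
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE SalesDB SET ENCRYPTION ON;
GO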

Does TDE work with…
Failover clustering? Yes – there is only one copy of shared data.
AlwaysOn Availability Groups? Yes, but you have to set it all up with T-SQL.
Database mirroring? Yes – the data will be encrypted on both instances.
Log shipping? Yes – the data is encrypted on both instances.
Replication? Technically…but it’s a pain. The certificate must be installed on all subscribers, and the data is not encrypted as it is distributed.

What other questions do you have?
