A funny thing happened on my way to set up Mirroring…

I’ve set up Mirroring about a billion times

I’m not bragging about that. I’d rather say that I set up a billion AGs, and not one of them ever failed. But then I’d be lying to you; those things fail like government programs. One thing I’d never done, though, is set up Mirroring with a Witness. I never wanted automatic failover, because it’s only one database at a time. If for some reason one database out of all that I had mirrored ever turned Ramblin’ Man and failed over to another server, there would understandably be some application consternation. Not to mention any maintenance and internal operations. They don’t react well to sudden database unavailability.

Of course, doing anything for the first time is horrible. Just ask my second wife.

Here’s where things got awkward

I have my databases! This is my top secret development environment. Stack Overflow is in an AG, and I had set up two other Mirrors: one synch and one asynch. I wanted to have a variety of setups to test some scripts against.

Everything looks good!

Alright, let’s set up Mirroring…

Configuring stuff is cool, right?

Yeah yeah next next next

Service accounts whatever BORING

GREEN LIGHT GO!

This is so easy. Seriously. Why doesn’t everyone do this? Why do you complicate your short, short lives with Availability Groups? Are they AlwaysOn? Are they Always On? WHO KNOWS? Not even Microsoft.

I’m hitting this button and jumping into a mile of Laphroaig.

HIGH FIVES ALL ARO-

Y THO?

This is the error text:

The ALTER DATABASE command could not be sent to the remote server instance 'TCP://ORACLEDB.darling.com:5022'. The database mirroring configuration was not changed. Verify that the server is connected, and try again.

Super Sleuth

Alright, that’s silly. I used the GUI. Instead of going to bed I’ll spend some time checking all my VM network settings. BRB.

I’m back. They were all correct. I could ping and telnet and set up linked servers and RDP. What in the name of Shub-Niggurath is going on with this thing?

I can even see the Endpoint! So close, and yet so far~~

The Endpoint is showing up on the Witness and what is this?

Where are we now?

This is a good time for a quick recap

  1. Mirroring is up and running synchronously
  2. The endpoint is configured on the witness
  3. We get an error when we try to connect the witness

TO THE ERROR LOG!

I should have done this hours ago.

Well whaddya know? That’s a really good clue. Encryption and stuff. There’s no compatible algorithm. Ain’t that somethin’? You’d think that Microsoft would be cool about setting up the same kind of encryption across all the different Endpoints, if using different encryption would cause the setup to fail. Right guys? Heh. Right? Hey, hello?

NOPE.

Alright, let’s see what I need to be a matchmaker.
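Rather than squint at the GUI, you can ask each instance what its endpoint is using. Here’s a quick look at the relevant catalog view; run it on the principal, the mirror, and the witness:

```sql
-- Run on each instance involved in Mirroring.
-- The encryption algorithms need to match up across all of them.
SELECT [name] ,
       [role_desc] ,
       [state_desc] ,
       [encryption_algorithm_desc]
FROM   [sys].[database_mirroring_endpoints];
```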

The Cure – Primary

Oh. AES. Okay. Cool. Thanks.

RC4 WHY WOULD YOU DO THAT?

Since we have them both scripted out already, let’s just drop and re-create the Witness Endpoint with the right encryption algorithm.
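Roughly, that looks like the below. The endpoint name and port here are from my VMs, so adjust to match yours, and script out your existing endpoint first so you keep any authentication settings it has:

```sql
-- Witness only: drop the mismatched endpoint,
-- then put it back with AES to match the partners.
DROP ENDPOINT [Mirroring];
GO

CREATE ENDPOINT [Mirroring]
    STATE = STARTED
    AS TCP ( LISTENER_PORT = 5022 )
    FOR DATABASE_MIRRORING ( ROLE = WITNESS ,
                             ENCRYPTION = REQUIRED ALGORITHM AES );
GO
```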

INTO THE TREES

That did not result in a forest fire. I’m hopeful. Sort of. It’s been a long night and I think I can see tomorrow from here.

Scriptin’

Meanwhile, back on the Primary…

I AM SO SMRT

It worked! Now I have a Witness, and I can shut all my VMs down. That was so much fun.

What did we learn?

Microsoft hates you and doesn’t want you to sleep. Just kidding. Mostly. But seriously, why would they do that?

It mostly goes to show that it’s always a smart idea to use that little script button at the top of (most) GUIs in SSMS. Who knows what kind of foolishness you’ll find? A little reading can save you a lot of time troubleshooting errors that make you feel insane.

Thanks for reading!

Another reason why scalar functions in computed columns is a bad idea

As if there weren’t enough reasons

In my last blog post I talked about different things that cause plans, or zones of plans, to execute serially. One of the items I covered was computed columns that reference scalar functions. We know that they’ll force queries to run serially, but what about other SQL stuff?

Oh no my index is fragmented

If you’re running Expensive Edition, index rebuilds can be both online and parallel. That’s pretty cool, because it keeps all your gadgets and gizmos mostly available during the whole operation, and the parallel bit usually makes things faster.

That is, unless you have a computed column in there that references a scalar function. I decided to write my test function to not perform any data access so it could be persisted. It’s dead simple, and I’m tacking it on to a column in the PostLinks table of the Stack Overflow database.

CREATE FUNCTION dbo.PIDMultiplier (@pid INT)
RETURNS INT
    WITH RETURNS NULL ON NULL INPUT,
         SCHEMABINDING
AS
    BEGIN
        DECLARE @Out INT;
        SELECT  @Out = @pid * 2;
        RETURN @Out;
    END;
GO

ALTER TABLE [dbo].[PostLinks]
ADD [Multiplied] AS dbo.[PIDMultiplier]([PostId]) PERSISTED

For this one, all we have to do is turn on actual execution plans and rebuild the index, then drop the column and rebuild again.

ALTER TABLE [dbo].[PostLinks] REBUILD WITH (ONLINE = ON);

ALTER TABLE [dbo].[PostLinks] DROP COLUMN [Multiplied];

ALTER TABLE [dbo].[PostLinks] REBUILD WITH (ONLINE = ON);

Here are my execution plans. The rebuild I ran when the table had my computed column in it stayed serial.

Hi, I’m garbage.

Parallel, sans computed column:

Dude, you’re getting a parallel.

But there’s a bigger fish in the pond

Probably the most important maintenance item you should be doing, aside from backups, is running DBCC CHECKDB. Seriously, if you’re not doing them both, start today. Ola Hallengren has basically done all the work for you. Back when I had a real job, I used his scripts everywhere.

Before we were so rudely interrupted by a soap box, we were talking about parallelism. This part was a little bit more complicated, but don’t worry, you don’t have to follow along. Just look at the pretty pictures. Sleep now. Yes. Sleep.

The first couple times I tried, the DBCC check never went parallel. Since I’m on my laptop, and not a production server, I can set Cost Threshold for Parallelism to 0. You read that right, ZE-RO! Hold onto your drool dish.
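If you want to play along on a sandbox (and only a sandbox), the knob lives in sp_configure:

```sql
-- Dev box only! Setting this to 0 makes every plan eligible for parallelism.
EXEC [sys].[sp_configure] N'show advanced options', 1;
RECONFIGURE;

EXEC [sys].[sp_configure] N'cost threshold for parallelism', 0;
RECONFIGURE;
```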

With that set, I fire up Ye Olde Oaken sp_BlitzTrace so I can capture everything with Extended Events. You’ll need all three commands, but you’ll probably have to change @SessionId, and you may have to change @TargetPath. Run the first command to start your session up.

EXEC [dbo].[sp_BlitzTrace]
    @SessionId = 61 ,
    @Action = 'start' ,
    @TargetPath = 'c:\temp\' ,
    @TraceParallelism = 1 ,
    @TraceExecutionPlansAndKillMyPerformance = 1;

EXEC [dbo].[sp_BlitzTrace] @Action = 'stop' 

EXEC [dbo].[sp_BlitzTrace] @Action = 'read'

With that running, toss in your DBCC command. I’m only using DBCC CHECKTABLE here to simplify. Rest assured, if you run DBCC CHECKDB, the CHECKTABLE part is included. The only checks that DBCC CHECKDB doesn’t run are CHECKIDENT and CHECKCONSTRAINTS. Everything else is included.

DBCC CHECKTABLE('dbo.PostLinks') WITH NO_INFOMSGS, ALL_ERRORMSGS

ALTER TABLE [dbo].[PostLinks]
ADD [Multiplied] AS dbo.[PIDMultiplier]([PostId]) PERSISTED

Run DBCC CHECKTABLE, add the computed column back, and then run it again. When those finish, run the sp_BlitzTrace commands to stop and read session data. You should see execution plans for each run, and they should be way different.

Hell Yeah.

Hell No.

So even DBCC checks are serialized. Crazy, right? I’d been hearing about performance hits to varying degrees when running DBCC checks against tables with computed columns for a while, but never knew why. There may be a separate reason for regular computed columns vs. ones that reference scalar functions. When I took the equivalent SQL out of a function, the DBCC check ran parallel.

ALTER TABLE [dbo].[PostLinks]
ADD [Multiplied] AS [PostId] * 2 PERSISTED

Of course, those online index rebuilds running single threaded might be a blessing in disguise, if you haven’t patched SQL recently.

I don’t have much of a grand closing paragraph here. These things can seriously mess you up for a lot of reasons. If you’re a vendor, please get away from using scalar functions, and please please don’t use them in computed columns.

Thanks for reading!

Still Serial After All These Years

With each new version of SQL comes a slew of new stuff

Some changes are cosmetic, others are bewildering, and the rest fall somewhere between “who cares about JSON?” and “OH MY GOD TAKE MY MONEY!” (but not really my money, because I only buy Developer Edition). Aaron Bertrand has done a better job of finding new features, and teaching you how to find them, than I could. Head over to his blog if you want to dive in.

What I wanted to look at was something much more near and dear to my heart: Parallelism.

(It’s so special I capitalized it.)

In the past, there were a number of things that caused entire plans, or sections of plans, to be serial. Scalar UDFs are probably the first one everyone thinks of. They’re bad. Really bad. They’re so bad that if you define a computed column with a scalar UDF, every query that hits the table will run serially even if you don’t select that column. So, like, don’t do that.

What else causes perfectly parallel plan performance plotzing?

Total:

  • UDFs (Scalar, MSTVF)
  • Modifying table variables

Zone:

  • Backwards scan
  • Recursive CTE
  • TOP
  • Aggregate
  • Sequence
  • System tables

Other:

  • CLR functions (that perform data access)
  • Dynamic cursors
  • System functions

We’ll just look at the items from the first two lists. The stuff in Other is there because I couldn’t write a CLR function if I had Adam Machanic telling me what to type, cursors don’t need any worse of a reputation, and it would take me a year of Sundays to list every internal function and test it.

I’m going to depart from my normal format a bit, and put all the code at the end. It’s really just a mess of boring SELECT statements. The only thing I should say up front is that I’m leaning heavily on the use of an undocumented Trace Flag: 8649. I use it because it ‘forces’ parallelism by dropping the cost threshold for parallelism to 0 for each query. So if a parallel plan is possible, we’ll get one. Or part of one. You get the idea.

Just, you know, don’t use it in production unless you really know what you’re doing. It’s pretty helpful to use as a developer, on a development server, to figure out why queries aren’t going parallel. Or why they’re partially parallel.

All of this was run on 2016 CTP 3.1, so if RTM comes along, and something here is different, that’s why. Of course, backwards scans are probably close to 15 years old, so don’t sit on your thumbs waiting for them to get parallel support.

Backwards scan!

This is what happens when your ORDER BY is the opposite of your index.

This is what happens when your ORDER BY is the opposite of your index.

Scalar!

Not only do they run serial, but they run once for every row returned. Have a nice day!

Not only do they run serial, but they run once for every row returned. Have a nice day!

Table with computed column

Hint: the Id column isn’t the computed one.

MSTVF

Multi-statement Table Valued Garbage

Table Variable Insert

You probably can’t name one good reason to use a table variable.

Top

Top o’ the nothin’!

Aggregating

You’re making me wanna use Excel, here.

Row Number (or any windowing/ranking function)

You’re gonna get all those row numbers one by one.

Accessing System Tables

Twisted SYS-ter

The recursive part of a recursive CTE

This part was fun. I liked this part.

Picture time is over

Now you get to listen to me prattle on and on about how much money you’re wasting on licensing by having all your queries run serially. Unless you have a SharePoint server; then you have… many other problems. If I had to pick a top three list that I see people falling victim to regularly, it would be:

  1. Scalar functions
  2. Table variables
  3. Unsupported ORDER BYs

They’re all relatively easy items to fix, and by the looks of it, we’ll be fixing them on SQL Server 2016 as well. Maybe Query Store will make that easier.

Thanks for reading!

Das Code

USE [StackOverflow];
SET NOCOUNT ON;
GO

/*Scalar*/
CREATE FUNCTION [dbo].[ScalarValueReturner] ( @id INT )
RETURNS INT
       WITH RETURNS NULL ON NULL INPUT,
            SCHEMABINDING
AS
    BEGIN
        RETURN @id * 1;
    END;
GO

/*MSTVF*/
CREATE FUNCTION [dbo].[MSTVFValueReturner] ( @id INT )
RETURNS @Out TABLE ( [Returner] INT )
       WITH SCHEMABINDING
AS
    BEGIN
        INSERT  INTO @Out
                ( [Returner] )
        SELECT  @id;
        RETURN;
    END;
GO

SELECT  [Id]
FROM    [dbo].[Users]
WHERE   [Id] BETWEEN 1 AND 10000
ORDER BY [Id] DESC
OPTION  ( QUERYTRACEON 8649, RECOMPILE );


SELECT  [Id] ,
        [dbo].[ScalarValueReturner]([Id])
FROM    [dbo].[Users]
WHERE   [Id] BETWEEN 1 AND 10000
OPTION  ( QUERYTRACEON 8649, RECOMPILE );

SELECT  [Id] ,
        [ca].[Returner]
FROM    [dbo].[Users]
CROSS APPLY [dbo].[MSTVFValueReturner]([Id]) AS [ca]
WHERE   [Id] BETWEEN 1 AND 10000
OPTION  ( QUERYTRACEON 8649, RECOMPILE );

DECLARE @t TABLE ( [Id] INT );
INSERT  @t
SELECT  [Id]
FROM    [dbo].[Users]
WHERE   [Id] BETWEEN 1 AND 10000
OPTION  ( QUERYTRACEON 8649, RECOMPILE );

SELECT TOP ( 10000 )
        [u].[Id] ,
        [ca].[Id] AS [Returner]
FROM    [dbo].[Users] AS [u]
CROSS APPLY ( SELECT    [u2].[Id]
              FROM      [dbo].[Users] AS [u2]
              WHERE     [u2].[Id] = [u].[Id] ) AS [ca]
WHERE   [u].[Id] BETWEEN 1 AND 10000
OPTION  ( QUERYTRACEON 8649, RECOMPILE );

SELECT  AVG(CAST([dt].[Reputation] AS BIGINT)) AS [WhateverMan]
FROM    ( SELECT    [u].[Id] ,
                    [u].[Reputation]
          FROM      [dbo].[Users] [u]
          JOIN      [dbo].[Posts] [p]
          ON        [u].[Id] = [p].[OwnerUserId]
          WHERE     [p].[OwnerUserId] > 0
                    AND [u].[Reputation] > 0 ) AS [dt]
WHERE   [dt].[Id] BETWEEN 1 AND 10000
OPTION  ( QUERYTRACEON 8649, RECOMPILE );

SELECT  [Id] ,
        ROW_NUMBER() OVER ( PARTITION BY [Id] ORDER BY [Id] ) AS [NoParallel]
FROM    [dbo].[Users]
WHERE   [Id] BETWEEN 1 AND 10000
OPTION  ( QUERYTRACEON 8649, RECOMPILE );


SELECT TOP ( 10000 )
        [sc1].[column_id]
FROM    [master].[sys].[columns] AS [sc1]
CROSS JOIN [master].[sys].[columns] AS [sc2]
CROSS JOIN [master].[sys].[columns] AS [sc3]
CROSS JOIN [master].[sys].[columns] AS [sc4]
CROSS JOIN [master].[sys].[columns] AS [sc5]
OPTION  ( QUERYTRACEON 8649, RECOMPILE );

CREATE TABLE [dbo].[NorseGods]
    (
      [GodID] INT NOT NULL ,
      [GodName] NVARCHAR(30) NOT NULL ,
      [Title] NVARCHAR(100) NOT NULL ,
      [ManagerID] INT NULL ,
      CONSTRAINT [PK_GodID] PRIMARY KEY CLUSTERED ( [GodID] )
    );

INSERT  INTO [dbo].[NorseGods]
        ( [GodID], [GodName], [Title], [ManagerID] )
VALUES  ( 1,    N'Odin',  N'War and stuff', NULL )
,       ( 2,  N'Thor',    N'Thunder, etc.', 1 )
,       ( 3,  N'Hel',     N'Underworld!',   2 )
,       ( 4,  N'Loki',    N'Tricksy',       3 )
,       ( 5,  N'Vali',    N'Payback',       3 )
,       ( 6,  N'Freyja',  N'Making babies', 2 )
,       ( 7,  N'Hoenir',  N'Quiet time',    6 )
,       ( 8,   N'Eir',    N'Feeling good',  2 )
,       ( 9,   N'Magni',  N'Weightlifting', 8 );

WITH    [Valhalla]
          AS ( SELECT   [ManagerID] ,
                        [GodID] ,
                        [GodName] ,
                        [Title] ,
                        0 AS [MidgardLevel]
               FROM     [dbo].[NorseGods]
               WHERE    [ManagerID] IS NULL
               UNION ALL
               SELECT   [ng].[ManagerID] ,
                        [ng].[GodID] ,
                        [ng].[GodName] ,
                        [ng].[Title] ,
                        [v].[MidgardLevel] + 1
               FROM     [dbo].[NorseGods] AS [ng]
               INNER JOIN [Valhalla] AS [v]
               ON       [ng].[ManagerID] = [v].[GodID]
             )
    SELECT  [Valhalla].[ManagerID] ,
            [Valhalla].[GodID] ,
            [Valhalla].[GodName] ,
            [Valhalla].[Title] ,
            [Valhalla].[MidgardLevel]
    FROM    [Valhalla]
    ORDER BY [Valhalla].[ManagerID]
    OPTION  ( QUERYTRACEON 8649, RECOMPILE );

Filtered Indexes: Just Add Includes

I found a quirky thing recently

While playing with filtered indexes, I noticed something odd. By ‘playing with’ I mean ‘calling them horrible names’ and ‘admiring the way other platforms implemented them‘.

I sort of wrote about a similar topic in discussing indexing for windowing functions. It turns out that a recent annoyance could also be solved by putting the column my filter expression is predicated on in the included columns definition. That’s the fanciest sentence I’ve ever written, BTW. If you want more, don’t get your hopes up.

Ready for Horrible

Let’s create our initial index. As usual, we’re using the Stack Overflow database. We’ll look at a small group of users who have a Reputation over 400k. I dunno, it’s a nice number. There are like 8 of them.

CREATE UNIQUE NONCLUSTERED INDEX [Users_400k_Club] 
ON dbo.[Users] ([DisplayName], [Id]) 
WHERE [Reputation] > 400000;

With that in place, we’ll run some queries that should make excellent use of our thoughtful and considerate index.

--Will I Nill I?
SELECT [u].[Id], [u].[DisplayName]
FROM [dbo].[Users] AS [u]
WHERE [u].[Reputation] > 400000

SELECT [u].[Id], [u].[DisplayName]
FROM [dbo].[Users] AS [u]
WHERE [u].[Reputation] > 400000 AND [u].[Reputation] < 450000

SELECT [u].[Id], [u].[DisplayName]
FROM [dbo].[Users] AS [u]
WHERE [u].[Reputation] > 400001

SELECT [u].[Id], [u].[DisplayName]
FROM [dbo].[Users] AS [u]
WHERE [u].[Reputation] > 500000

If you were a betting organism, which ones would you say use our index? Money on the table, folks! Step right up!

Yes, Sorta, No, No.

That didn’t go well at all. Only the first query really used it. The second query needed a key lookup to figure out the less than filter, and the last two not only ignored it, but told me I need to create an index. The nerve!

Send me your money

Let’s make our index better:

CREATE UNIQUE NONCLUSTERED INDEX [Users_400k_Club] 
ON dbo.[Users] ([DisplayName], [Id])
INCLUDE([Reputation]) 
WHERE [Reputation] > 400000
WITH (DROP_EXISTING = ON)
;

Run those queries again. You don’t even have to recompile them.

Can’t you tell by the way I run every time you make eyes at me?

They all magically found a way to use our New and Improved index.

What was the point?

When I first started caring about indexes, and filtering them, I would get so mad when these precious little Bloody Mary recipes didn’t get used.

I followed all the rules!
There were no errors!

But why oh why didn’t SQL use my filtered indexes for even smaller subsets of the filter condition? It seemed insane to me that SQL would know the filter on the index is (x > y), but wouldn’t use it for a predicate like (x > z), even when z > y guarantees every qualifying row is already in the index.

The solution was to put the filtered column in the include list. This lets SQL generate statistics on the column, and much like getting rid of the predicate key lookup, allows you to search within the filtered index subset for even more specific information.

Tracking tempdb growth using Extended Events

Everyone knows tempdb is a weird place

Brent refers to it as a public toilet. I agree with that sentiment. It will let anyone in to do whatever they want.

Recently I was trying to track down what was causing tempdb’s log to grow. I was thinking about using a Trace to do it, but then I remembered that it’s at least 2015, and I should be using Extended Events. Don’t worry, there won’t be any PowerShell. You can keep reading.

I use the below command to fire up my session. It worked on my 2012 and 2014 instances. Anything older or newer than that, and YMMV. You will likely have to change the output directory to whatever exists on your server. Most people have c:\temp, though.

CREATE EVENT SESSION [PublicToilet] ON SERVER
ADD EVENT [sqlserver].[database_file_size_change] (
    ACTION ( [sqlserver].[session_id], [sqlserver].[database_id],
    [sqlserver].[client_hostname], [sqlserver].[sql_text] )
    WHERE ( [database_id] = ( 2 )
            AND [session_id] > ( 50 ) ) ),
ADD EVENT [sqlserver].[databases_log_file_used_size_changed] (
    ACTION ( [sqlserver].[session_id], [sqlserver].[database_id],
    [sqlserver].[client_hostname], [sqlserver].[sql_text] )
    WHERE ( [database_id] = ( 2 )
            AND [session_id] > ( 50 ) ) )
ADD TARGET [package0].[asynchronous_file_target] (  SET filename = N'c:\temp\publictoilet.xel' ,
                                                    metadatafile = N'c:\temp\publictoilet.xem' ,
                                                    max_file_size = ( 10 ) ,
                                                    max_rollover_files = 10 )
WITH (  MAX_MEMORY = 4096 KB ,
        EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS ,
        MAX_DISPATCH_LATENCY = 1 SECONDS ,
        MAX_EVENT_SIZE = 0 KB ,
        MEMORY_PARTITION_MODE = NONE ,
        TRACK_CAUSALITY = ON ,
        STARTUP_STATE = ON );
GO

ALTER EVENT SESSION [PublicToilet] ON SERVER STATE = START;

With that fired up, let’s kick tempdb around a little. Don’t do this in production, it’s going to suck. We’re going to shrink files and run an insert and update in a loop. You can increase the number of loops if you want, but 3 is good enough to illustrate how this works.

USE [tempdb]
SET NOCOUNT ON

IF (OBJECT_ID('tempdb..#Users')) IS NOT NULL
DROP TABLE [dbo].[#Users]

DBCC SHRINKFILE('tempdev', 1)
DBCC SHRINKFILE('tempdev2', 1)
DBCC SHRINKFILE('tempdev3', 1)
DBCC SHRINKFILE('tempdev4', 1)

DBCC SHRINKFILE('templog', 1)

CREATE TABLE [#Users](
  [Id] [int] NULL,
  [Age] [int] NULL,
  [DisplayName] [nvarchar](40) NULL,
  [AboutMe] [nvarchar](max) NULL
  )

DECLARE @i INT = 0;
DECLARE @rc VARCHAR(20) = 0

WHILE @i < 3
      BEGIN

            INSERT  [#Users]
                    ( [Id] ,
                      [Age] ,
                      [DisplayName] ,
                      [AboutMe] )
            SELECT TOP ( 75000 )
                    [Id] ,
                    [Age] ,
                    [DisplayName] ,
                    [AboutMe]
            FROM    [StackOverflow].[dbo].[Users];

            UPDATE  [u1]
            SET     [u1].[DisplayName] = SUBSTRING(CAST(NEWID() AS NVARCHAR(MAX)), 0, 40) ,
                    [u1].[Age] = ( [u2].[Age] * 2 ) ,
                    [u1].[AboutMe] = REPLACE([u2].[AboutMe], 'a',
                                             CAST(NEWID() AS NVARCHAR(MAX)))
            FROM    [#Users] [u1]
            CROSS JOIN    [#Users] [u2];

      SET @rc = @@ROWCOUNT
      RAISERROR ('Current # rows %s', 0, 1, @rc) WITH NOWAIT

            SET @i += 1;			
      RAISERROR ('Current # runs %i', 0, 1, @i) WITH NOWAIT
      END;

Now you can either stop the XE session, or keep it running. I don’t care. It’s your ~~~DEVELOPMENT SERVER~~~ and definitely ~~~NOT PRODUCTION~~~, right?

Right.

Let’s see what kind of germs are on the toilet seat

SELECT  [evts].[event_data].[value]('(event/action[@name="session_id"]/value)[1]',
                                    'INT') AS [SessionID] ,
                                    'INT') AS [SessionID] ,
        [evts].[event_data].[value]('(event/action[@name="client_hostname"]/value)[1]',
                                    'VARCHAR(MAX)') AS [ClientHostName] ,
        COALESCE(DB_NAME([evts].[event_data].[value]('(event/action[@name="database_id"]/value)[1]',
                                                            'BIGINT')), ';(._.); I AM KING KRAB') AS [OriginatingDB] ,
        DB_NAME([evts].[event_data].[value]('(event/data[@name="database_id"]/value)[1]',
                                                            'BIGINT')) AS [GrowthDB] ,
        [evts].[event_data].[value]('(event/data[@name="file_name"]/value)[1]',
                                    'VARCHAR(MAX)') AS [GrowthFile] ,
        [evts].[event_data].[value]('(event/data[@name="file_type"]/text)[1]',
                                    'VARCHAR(MAX)') AS [DBFileType] ,
        [evts].[event_data].[value]('(event/@name)[1]', 'VARCHAR(MAX)') AS [EventName] ,
        [evts].[event_data].[value]('(event/data[@name="size_change_kb"]/value)[1]',
                                    'BIGINT') AS [SizeChangeInKb] ,
        [evts].[event_data].[value]('(event/data[@name="total_size_kb"]/value)[1]',
                                    'BIGINT') AS [TotalFileSizeInKb] ,
        [evts].[event_data].[value]('(event/data[@name="duration"]/value)[1]',
                                    'BIGINT') AS [DurationInMS] ,
        [evts].[event_data].[value]('(event/@timestamp)[1]', 'VARCHAR(MAX)') AS [GrowthTime] ,
        [evts].[event_data].[value]('(event/action[@name="sql_text"]/value)[1]',
                                    'VARCHAR(MAX)') AS [QueryText]
FROM    ( SELECT    CAST([event_data] AS XML) AS [TargetData]
          FROM      [sys].[fn_xe_file_target_read_file]('c:\temp\publictoilet*.xel',
                                                        NULL, NULL, NULL) ) AS [evts] ( [event_data] )
WHERE   [evts].[event_data].[value]('(event/@name)[1]', 'VARCHAR(MAX)') = 'database_file_size_change'
        OR [evts].[event_data].[value]('(event/@name)[1]', 'VARCHAR(MAX)') = 'databases_log_file_used_size_changed'
ORDER BY [GrowthTime] ASC;

Look, I never said I was good at this XML stuff. If you are, feel free to tidy up all the VARCHAR(MAX) and BIGINT types into something more sensible. If you don’t like my King Krab emoji, blame Jeremiah.

But hey! The results are in. That didn’t take long.

The first file to get it is the log file. All sorts of helpful stuff got collected. If you can’t figure out what was going on from this, go home. Seriously. The query text is right there.

Meaningful data.

Yay. There’s more. The log file wasn’t the only one that blew up. All four tempdb files got used, as well. Is there an important lesson here? Probably. Even on my laptop, multiple tempdb files help. Don’t let me catch you out there with one tempdb data file. Or two tempdb log files. I’m watching you.

Happy Birthday!

In case you were wondering…

Yes, it will capture shrinking files, too. So if anyone is being a horrible, you can catch them, and throw them out several windows.

For the love of Milton Berle, please stop shrinking files.

You can use this on other databases as well, just change the database ID in the XE session definition. Just, you know, use it sparingly. There’s overhead for any type of monitoring, and XE is no XcEption (GET IT?). If you have a growth-heavy environment, capturing them all and the accompanying information could be a real burden on your server.
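If you’d rather not hard-code a database ID, you can look it up first. A tiny sketch, using the test database from earlier:

```sql
-- Plug this value into the [database_id] predicate in the session definition
SELECT DB_ID(N'StackOverflow') AS [database_id];
```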

Building a better BAND-AID®

Stuff like this is good to diagnose problems short term, but it’s not meant to be a replacement for a full time monitoring solution. A lot of people spent a lot of time making nice ones. Go try them out and find one you like.

Thanks for reading!

Trace Flag 2330: Who needs missing index requests?

Hey, remember 2005?

What a great year for… not SQL Server. Mirroring was still a Service Pack away, and there was an issue with spinlock contention on OPT_IDX_STATS or SPL_OPT_IDX_STATS. The KB for it is over here, and it’s pretty explicit that the issue was fixed in 2008, and didn’t carry over to any later versions. For people still on 2005, you had a Trace Flag: 2330.

Like most things, there are trade offs. When you enable it, SQL stops collecting missing index requests. Probably not a great idea unless you’re working with extreme edge cases where you’ve already tuned your indexes really well. Most people will never fall into that category, though many of them will think they have.

The issue that it quite commonly addressed was around the creation of many objects in tempdb. You probably don’t need missing index details on a bunch of temp tables. What a lot of people didn’t realize was that it also made SQL stop collecting them for every other database.

Here in 2015

That Trace Flag can still be enabled with the same effect. Some people may not be aware that it’s not fixing anything, and still hurting things. Below is a script to reproduce the lousiness in a user database.

First we’ll clear out missing index requests by rebuilding the index (you know this happens, right?), and verify that there are no missing index requests with sp_BlitzIndex®. Running it in @mode = 3 will generate only missing index request details.

USE [StackOverflow]

ALTER INDEX [PK_Users] ON [dbo].[Users] REBUILD;

EXEC [master].[dbo].[sp_BlitzIndex] 
    @DatabaseName = N'StackOverflow' , -- nvarchar(128)
    @Mode = 3

Once we verify that’s empty, we’ll run a query that will generate a missing index request in the Stack Overflow database, and verify it registers with sp_BlitzIndex® again.

DECLARE @id INT

SELECT @id =  id 
FROM [dbo].[Users] AS [u]
WHERE [u].[LastAccessDate] > '2015-01-01'
ORDER BY [u].[LastAccessDate]

EXEC [master].[dbo].[sp_BlitzIndex] 
    @DatabaseName = N'StackOverflow' , -- nvarchar(128)
    @Mode = 3

Whaddya know? We got ourselves a missing index.

You look helpful! What’s your name?

Now comes the ugly

We’ll enable TF 2330 globally, and use some GO magic to run the same query 5x consecutively, then check back in on sp_BlitzIndex®.

DBCC TRACEON(2330, -1)

GO 

DECLARE @id INT

SELECT @id =  id 
FROM [dbo].[Users] AS [u]
WHERE [u].[LastAccessDate] > '2015-01-01'
ORDER BY [u].[LastAccessDate]
GO 5

EXEC [master].[dbo].[sp_BlitzIndex] 
    @DatabaseName = N'StackOverflow' , -- nvarchar(128)
    @Mode = 3

The compiles ticked up one, but SQL stopped counting requests for the index.

Bummer.

There is a light

But hey! We can turn that off. And… Well, you know the drill.

DBCC TRACEOFF(2330, -1)
GO 

DECLARE @id INT

SELECT @id =  id 
FROM [dbo].[Users] AS [u]
WHERE [u].[LastAccessDate] > '2015-01-01'
ORDER BY [u].[LastAccessDate]
GO 5

EXEC [master].[dbo].[sp_BlitzIndex] 
    @DatabaseName = N'StackOverflow' , -- nvarchar(128)
    @Mode = 3

We have stopped breaking SQL!

1 + 5 = 6, FYI

What did we learn?

2005! What a lousy year for hair, and Trace Flags, and, uh, I don’t remember much else about 2005. Feel free to tell me what you hated about 2005 in the comments. Just check all your servers to make sure no one turned on this Trace Flag as an April Fool’s Day joke and then never came back from lunch, first.
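Checking is painless. A quick sketch:

```sql
-- Show every trace flag enabled globally; no rows means you're clean
DBCC TRACESTATUS(-1);

-- Or ask about 2330 specifically
DBCC TRACESTATUS(2330, -1);
```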

Thanks for reading!

The case for Query Store in tempdb

Query Store is so cool

Billed as a flight data recorder for SQL Server, the Query Store is a repository of execution plan information, like the plan cache, except a bit more actionable. And it has a GUI.

“Try looking into that place where you dare not look! You’ll find me there, staring out at you!”

You can read all about what it does and what you can do with it around the internet. You can be suitably impressed and enchanted by the promise of data that’s persisted between restarts, being able to quickly and easily address plan regression issues, and so forth.

I’m here to bum you out when you’re done with all that.

You can’t use it in tempdb

On tempdb? Whatever.

What? Really? With all the stuff that goes on there? Aside from the many selfish demo reasons I have for wanting it, I know lots of people who make tempdb their default database for user logins so they don’t go messing things up in the real world, or more fragile system databases. I’m also aware of several vendor apps that dump users into tempdb and then use fully qualified names to point them to application data.
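For reference, the setup those vendors use looks roughly like this; the login and database names here are made up for illustration:

```sql
-- Hypothetical login parked in tempdb so stray ad-hoc queries land somewhere harmless
ALTER LOGIN [SomeAppUser] WITH DEFAULT_DATABASE = [tempdb];

-- The application then reaches its real data with fully qualified names
SELECT TOP (10) *
FROM [StackOverflow].[dbo].[Users];
```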

I found this all out by accident. I had just nabbed CTP 3 of SQL Server 2016, and was settling in for an evening of staring contests with execution plans and new features.

That’s when tragedy struck: I got an error message:

ALTER DATABASE tempdb SET QUERY_STORE = ON; 

Query Data Store cannot currently be enabled on tempdb

Msg 12438, Level 16, State 1, Line 1
Cannot perform action because Query Store cannot be enabled on system database tempdb.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.

And yeah, it’s an official no-go

Oh, but model and msdb are fair game?

I’m all disillusioned and stuff, man.

This is going to be frustrating out in the wild for a lot of people. Not just fancy-pants camo shorts consultants who like to make demos where they treat tempdb like it owes them money. Real people. Who pay real money, for real servers.

Anyway, I started a Connect item to get this changed. Vote for it. Or don’t. Just don’t vote more than once, or if you’re dead.

SELECT INTO and non-nullable columns

SELECT…INTO is one of my favorite SQL Server features.

It’s great for creating table skeletons with false WHERE clauses (1=2), moving a real table to a staged/temp location for testing, etc.
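That table-skeleton trick is a one-liner; the source table name here is just for illustration:

```sql
-- WHERE 1 = 2 is never true, so we get the column definitions with zero rows
SELECT *
INTO #UsersSkeleton
FROM [dbo].[Users]
WHERE 1 = 2;
```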

In SQL Server 2014

It acquired the ability to go parallel, which is pretty neat, but that’s not what we’re talking about here.

It has some limitations

Chief among them is this:

Indexes, constraints, and triggers defined in the source table are not transferred to the new table, nor can they be specified in the SELECT…INTO statement. If these objects are required, you can create them after executing the SELECT…INTO statement.

Which is… Sort of true. There’s a trick, and I’ll show you a quick example here with another of my favorite things: a Numbers table.

;WITH E1(N) AS (
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL  ),                          
E2(N) AS (SELECT NULL FROM E1 a, E1 b, E1 c, E1 d, E1 e, E1 f, E1 g, E1 h, E1 i, E1 j),
Numbers AS (SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N FROM E2)
SELECT [N].[N]
INTO #NumbersTest
FROM [Numbers] N

ALTER TABLE #NumbersTest ADD CONSTRAINT [PK_Numbers] 
PRIMARY KEY CLUSTERED (N) WITH (FILLFACTOR = 100)

Trying to add the PK constraint here fails, because the column is NULLable

Msg 8111, Level 16, State 1, Line 37
Cannot define PRIMARY KEY constraint on nullable column in table '#NumbersTest'.
Msg 1750, Level 16, State 0, Line 37
Could not create constraint or index. See previous errors.

We can verify this by looking at the table metadata:

SELECT [columns].[name], [columns].[is_nullable]
FROM tempdb.sys.columns 
WHERE [object_id] = OBJECT_ID(N'tempdb..#NumbersTest');


name     is_nullable
N           1

So how do we fix this? We could alter the table, but that won’t leave us with the lasting satisfaction of proving BOL wrong on a technicality. We’ll adjust our code a bit, and try again.
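For the record, the alter-the-table route we just dismissed would be a one-liner:

```sql
-- ROW_NUMBER() returns BIGINT, so that's the type SELECT INTO gave the column
ALTER TABLE #NumbersTest ALTER COLUMN [N] BIGINT NOT NULL;
```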

;WITH E1(N) AS (
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL  ),                          
E2(N) AS (SELECT NULL FROM E1 a, E1 b, E1 c, E1 d, E1 e, E1 f, E1 g, E1 h, E1 i, E1 j),
Numbers AS (SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N FROM E2)
SELECT 
ISNULL([N].[N], 0) AS N  /* <--- The magic is here! */
INTO #NumbersTest_IN
FROM [Numbers] N

ALTER TABLE #NumbersTest_IN ADD CONSTRAINT [PK_Numbers] 
PRIMARY KEY CLUSTERED (N) WITH (FILLFACTOR = 100)

This time, with the addition of an ISNULL check on the column, it ‘inherits’ a NOT NULL property, and the PK constraint adds successfully. We can verify that by checking the table metadata, if you don’t believe me:

name     is_nullable
N           0

Note that this same behavior does not occur if you replace ISNULL() with COALESCE().
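If you don’t want to take my word for that either, here’s a quick sketch built the same way as the tables above:

```sql
-- Same trick, but with COALESCE instead of ISNULL
SELECT COALESCE([n].[N], 0) AS N
INTO #NumbersTest_CO
FROM (SELECT TOP (100) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N
      FROM [sys].[objects]) AS [n];

SELECT [columns].[name], [columns].[is_nullable]
FROM tempdb.sys.columns
WHERE [object_id] = OBJECT_ID(N'tempdb..#NumbersTest_CO');
-- is_nullable comes back 1, so the PK constraint would fail with Msg 8111 again
```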

And with that, you can clean up the test tables. Or keep them around. Do some math. Whatever blows your hair back.

Got a favorite use for a numbers table outside of string splitting or doing date math? Let me know in the comments! I may write a follow up.

DROP TABLE [#NumbersTest]
DROP TABLE [#NumbersTest_IN]

Brent says: Wow. That is a really slick trick.

Kendra says: I know some people have bias against SELECT INTO because it seems sloppy and you have to go the extra mile to get the right types, but it can be a great tool. Don’t rule it out.

Jeremiah says: I really like this method – it’s a quick way to copy tables without using other cumbersome techniques.

Mirror Mirror on the Server, Who is the Principal of Us All? [video]

Have you ever wished your SQL Server could have an identical twin, holding the same data, in case you ever needed it? SQL Server mirroring provides just that, and you can choose if you want it for high availability or disaster recovery. If you’ve ever been curious about what mirroring really is, and what the pros and cons are, this is the session for you.

Resources 

Database Mirroring (Books Online) 

Configure SQL Server Database Mirroring Using SSMS 

Database Snapshots 

From the Dept of Corrections: During the webcast a viewer asked in Q&A if automatic page repair was a one way or two way street. Kendra answered that if the mirror gets an IO error it will go into a suspended state. This is somewhat correct but incomplete: the mirror will also try to correct the issue with a page from the principal afterward and attempt to resume mirroring. More info here.
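If you’re curious whether automatic page repair has ever fired on your own mirrors, SQL Server keeps a record of it in a DMV:

```sql
-- One row per automatic page repair attempt on mirrored databases on this instance
SELECT DB_NAME([database_id]) AS [database_name],
       [file_id],
       [page_id],
       [error_type],
       [page_status],
       [modification_time]
FROM sys.dm_db_mirroring_auto_page_repair;
```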

SQL Server Table Partitioning Tutorial: Videos and Scripts

There’s a secret to learning about SQL Server’s table partitioning feature: you need to get your hands on some code and really play with it in a test environment.

In this tutorial, Kendra will walk you through test scripts that set up a sample table partitioning environment. These scripts demonstrate gotchas to look out for and the cool features that can dramatically change your database’s performance.

Want to run the demos yourself? Download Table Partitioning Tutorial: Scripts.

Part 1: Our Mission

How can you use the demo and scripts from today in the best way possible? Find out here. 3.5 minutes

Part 2: How to create your test database and helper objects

It’s demotime! We create a test database with views that will help us quickly see critical information about our partitioned object. 3.5 minutes

Part 3: How to create partitioned objects

Next up we create a partition function. We create filegroups and files, and use a partition scheme to map the filegroups. We then create a partitioned object and indexes. Our helper views (created in Part 2), help us make sure things went as expected. 15 minutes
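To give you a taste of what that part covers, the core DDL pattern looks like this. The names are illustrative, and a real setup would usually map each range to its own filegroup rather than dumping everything in PRIMARY:

```sql
-- Boundary points for yearly ranges; RANGE RIGHT means each boundary
-- value belongs to the partition on its right
CREATE PARTITION FUNCTION [pf_OrderDate] (DATETIME)
AS RANGE RIGHT FOR VALUES ('2013-01-01', '2014-01-01', '2015-01-01');

-- Map every partition to PRIMARY to keep the sketch simple
CREATE PARTITION SCHEME [ps_OrderDate]
AS PARTITION [pf_OrderDate] ALL TO ([PRIMARY]);

-- Creating the table on the scheme is what partitions it
CREATE TABLE [dbo].[Orders]
(
    [OrderId]   BIGINT IDENTITY(1, 1) NOT NULL,
    [OrderDate] DATETIME NOT NULL
) ON [ps_OrderDate] ([OrderDate]);
```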

Part 4: Switch IN! How to use partition switching to add data to a partitioned table

Now for the cool stuff. In this session we explore how partition switching can allow us to snap a pile of data quickly into a partitioned table, and a major gotcha which can derail the whole process. 12 minutes
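The shape of that switch, with hypothetical table names:

```sql
-- dbo.OrdersStaging must be an exact structural twin of dbo.Orders, sitting on
-- the target partition's filegroup, with a trusted CHECK constraint proving its
-- rows fall inside partition 2's boundary range (that's the classic gotcha)
ALTER TABLE [dbo].[OrdersStaging]
SWITCH TO [dbo].[Orders] PARTITION 2;
```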

Part 5: Switch OUT! How to move data out of a partitioned table and merge boundary points

Deleting data is never fun… unless you’re switching data out of a partitioned table. But make sure to mind those best practices. In this section we show how to remove a lot of data from a partitioned table quickly and safely. 8 minutes
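The switch-out and merge steps, again with hypothetical names:

```sql
-- Move partition 2's rows into an empty twin with a metadata-only operation
ALTER TABLE [dbo].[Orders]
SWITCH PARTITION 2 TO [dbo].[OrdersArchive];

-- Then remove the now-empty partition's boundary point
ALTER PARTITION FUNCTION [pf_OrderDate]()
MERGE RANGE ('2014-01-01');
```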

Part 6: Table partitioning takeaways

What are the big things we covered that you need to remember? Make sure you got all the important points here. 3 minutes

Learn More

The good news: we’ve got tons more information on table partitioning at http://brentozar.com/go/partitioning

The bad news: things get even more complicated than you see in this post. If you’re looking at switching out and merging partitions, before you get started on that read my post here on best practices and merging boundary points.

Looking for the scripts? Download Table Partitioning Tutorial: Scripts.

See http://BrentOzar.com/go/eula for our End User Licensing Agreement which governs use of the scripts.
