[Video] Free Training of the Week: Number and Date Tables

This year, we’re trying something new: every Monday, we’re going to show you one module out of our online training classes.

It’s the whole module, not a sneak peek, and it’s available the whole week. But you’d better not put it off – this post will disappear the following week, and get replaced by a new post.

This week, it’s Doug Lane explaining number and date tables:

EXPIRED!

(If you don’t see the video above, you’re reading this somewhere that doesn’t support video embedding. You’ll need to read the post on our blog.)

Now with closed captioning and transcripts! The eagle-eyed amongst you will notice a CC button in the player – you can turn on closed captioning for the T-SQL Level Up class. Plus, when you’re a student watching the video in your class page, we’ve got the full text of the module:

Transcripts below the videos

Wanna see more? Check out Doug’s T-SQL Level Up course. He covers case expressions, computed columns, windowing functions, and replacing cursors. For $29, you get 18 months of access to stream it on your desktop, laptop, tablet, or phone.

[Video] Office Hours 2015/12/30

Every Wednesday, we run a free Office Hours session where you can wander in, ask SQL Server questions, and get answers of dubious quality.

In this week’s Office Hours, Angie and Brent talk about how to configure TempDB, when to use SAN snapshot backups, how sequential guids work, which service packs to install, and more.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Announcing the Office Hours Podcast (And 78% Off Videos)

Every Wednesday, we get together on a live webcast to take your questions. (We don’t actually give answers, we just admire the questions.) Hundreds of folks attend every week to hear us flail about.

People kept emailing in saying, “I really wanna attend Office Hours, but I’m working/asleep/drunk. Can you turn it into a podcast so I can download it and play it on the way to church/home/therapy?”

You got it:

And if you rate us on iTunes before January 31, 2016, we’ll give you a coupon for 78% off any of our online courses. (Why 78? Because we record the show masters on 78 rpm vinyl. We’re hipster like that.)

Here’s how to rate us and claim your 78% off coupon. Enjoy the podcast!

Does index fill factor affect fragmentation?

Everybody wants to know about index fragmentation

It is an inescapable vortex of malaise and confusion. Like that swamp in The Neverending Story that killed the horse. Sorry if you haven’t seen that movie. The horse wasn’t that cool, anyway.

Neither is index fragmentation, but it’s not worth losing sleep over. Or a horse.

I see a lot of people messing with the fill factor of their indexes. Sometimes you gotta. If you use GUIDs for a clustering key, for example. If you don’t lower fill factor from 100, you’re going to spend a lot of time splitting pages when you insert records. GUIDs are hard to run out of, but they’re even harder to put in order.

Setting fill factor under 100 tells SQL Server to leave free space on leaf-level index pages for new records to be added to. If you don’t, and a record needs to be added to a full page, SQL Server splits it roughly 50/50 across two pages.
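
For the GUID scenario, here’s a minimal sketch of what lowering fill factor up front looks like (dbo.Orders is a made-up table for illustration): random NEWID() values land all over the index, so leaving 20% free space on each leaf page delays the splits.

-- Hypothetical table with a GUID clustering key.
-- Random NEWID() inserts land all over the index, so leave room on each page.
CREATE TABLE dbo.Orders
(
    OrderID   uniqueidentifier NOT NULL DEFAULT NEWID(),
    OrderDate datetime2        NOT NULL DEFAULT SYSDATETIME(),
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
        WITH (FILLFACTOR = 80)
);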

When does it hurt?

Like most things, not at first. To prove it, let’s rebuild an index at different fill factors and insert fragmentation information into a table after each rebuild. It’s pretty easy: create a table, rebuild the index, insert a row into the table, repeat. I could have done this in a loop, but I’m kind of lazy today.

CREATE TABLE [dbo].[FillFactor](
  [TableName] [nvarchar](128) NULL,
  [IndexName] [sysname] NULL,
  [index_type_desc] [nvarchar](60) NULL,
  [avg_fragmentation_in_percent] [float] NULL,
  [page_count] [bigint] NULL,
  [fill_factor] [tinyint] NOT NULL
) ON [PRIMARY]
GO

INSERT [dbo].[FillFactor]
        ( [TableName] ,
          [IndexName] ,
          [index_type_desc] ,
          [avg_fragmentation_in_percent] ,
          [page_count] ,
          [fill_factor] )
SELECT  OBJECT_NAME([ddips].[object_id]) AS [TableName] ,
        [i].[name] AS [IndexName] ,
        [ddips].[index_type_desc] ,
        [ddips].[avg_fragmentation_in_percent] ,
        [ddips].[page_count] ,
        [i].[fill_factor]
FROM    [sys].[dm_db_index_physical_stats](DB_ID(N'StackOverflow'),
                                           OBJECT_ID('dbo.Votes'),
                                           NULL,
                                           NULL,
                                           'LIMITED') AS [ddips]
JOIN    [sys].[tables] AS [t]
ON      [t].[object_id] = [ddips].[object_id]
JOIN    [sys].[indexes] AS [i]
ON      [i].[object_id] = [ddips].[object_id]
        AND [i].[index_id] = [ddips].[index_id];

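-- Rerun the INSERT above after each rebuild below, so the logging table gets one row per fill factor.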
ALTER INDEX [PK_Votes] ON [dbo].[Votes] REBUILD WITH (FILLFACTOR = 100)
ALTER INDEX [PK_Votes] ON [dbo].[Votes] REBUILD WITH (FILLFACTOR = 80)
ALTER INDEX [PK_Votes] ON [dbo].[Votes] REBUILD WITH (FILLFACTOR = 60)
ALTER INDEX [PK_Votes] ON [dbo].[Votes] REBUILD WITH (FILLFACTOR = 40)
ALTER INDEX [PK_Votes] ON [dbo].[Votes] REBUILD WITH (FILLFACTOR = 20)

SELECT [ff].[TableName] ,
       [ff].[IndexName] ,
       [ff].[index_type_desc] ,
       [ff].[avg_fragmentation_in_percent] ,
       [ff].[page_count] ,
       [ff].[fill_factor]
FROM [dbo].[FillFactor] AS [ff]
ORDER BY [ff].[fill_factor] DESC

Put on your thinking cap. Fragmentation percent doesn’t budge. Granted, we rebuilt the index, so that’s expected. But look at the page counts: every time we reduce fill factor, the page count gets higher. Why does that matter? Each page is 8KB, and the more pages in your index, the more you’re reading from disk into memory. The lower your fill factor, the more blank space you’re reading from disk into memory. Lowering fill factor without cause can waste a lot of space, both on disk and in memory.

Space Age Love Song is probably the best Flock of Seagulls song, just so you know.

Let’s do math!

Because everyone loves math. Let’s take page count, multiply it by 8, and then divide it by 1024 twice to get the size of each index in GB.

SELECT [ff].[TableName] ,
       [ff].[IndexName] ,
       [ff].[index_type_desc] ,
       [ff].[avg_fragmentation_in_percent] ,
       [ff].[page_count] ,
       [ff].[fill_factor],
       ([ff].[page_count] * 8.) / 1024. / 1024. AS [SizeGB]
FROM [dbo].[FillFactor] AS [ff]
ORDER BY [ff].[fill_factor] DESC

Even reducing this index to a fill factor of 80 takes up about an extra 600MB. That can really add up. Granted, disk and memory are cheap, but they’re not infinite. Especially if you’re on Standard Edition.

Ack! Numbers!

It’s in everything

Reading extra pages doesn’t just slow down queries. DBCC CHECKDB, backups, and index and statistics maintenance all have to deal with those pages too. Lowering fill factor without good reason puts you in the same boat as index fragmentation does, except regular maintenance won’t “fix” the problem.

You can run sp_BlitzIndex® to help you find indexes that have fill factor set to under 100.
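
If you’d rather roll your own check, every index’s setting also lives in sys.indexes. A minimal sketch (keep in mind that fill_factor = 0 means the default, which behaves like 100):

-- Find indexes with an explicit fill factor below 100.
-- fill_factor = 0 means "use the default," which behaves like 100.
SELECT  OBJECT_NAME([i].[object_id]) AS [TableName] ,
        [i].[name] AS [IndexName] ,
        [i].[fill_factor]
FROM    [sys].[indexes] AS [i]
WHERE   [i].[fill_factor] NOT IN (0, 100)
ORDER BY [i].[fill_factor];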

What’s Covered in the Senior DBA Class of 2016?

You’re a SQL Server DBA who is ready to advance to the next level in your career but aren’t sure how to fully master your environment and drive the right architectural changes. That’s about to change in one week of learning and adventure with me, Brent Ozar.

Join me to learn how to size, architect, maintain, and troubleshoot your SQL Servers.

During the training, we’ll provide you with breakfast, mid-morning breaks, lunch, and afternoon snacks. Dinner is on you, but during the training, we’ll organize Birds of a Feather groups so you can dine with fellow folks who share the same interests.

Schedule subject to change in the event of natural disasters, food poisoning, or exploding demos:

Monday – Enterprise-Grade Processes

Building an Inventory and Support Matrix – You have dozens of SQL Servers, and you don’t have the time to dig into each one of them to understand what’s going on. You need a fast way to explain to management, “Here’s what we’re up against, and here’s how much time I need to get us to a safe place.”

Architecture Deep Dive for High Availability and Disaster Recovery – You’re building a new SQL Server, and you need to protect it. You want to learn when it’s right to use clustering, AlwaysOn Availability Groups, virtualization, or storage replication. You don’t have an unlimited budget, either, so you need to learn how 2014-2016 licensing impacts these choices.

How to Test Your High Availability and Disaster Recovery Setups – Once you’ve built your new HA/DR setup, you need to test planned failovers, unplanned failovers with data loss, and planned fail-backs without data loss. Get Brent’s battle-tested checklists for testing high availability inside the same data center, plus disaster recovery failovers to another data center.

Activity: Deciding Between Availability Solutions – You’ll play architect, and Brent will give you a set of requirements. Work together in groups to sketch out what you believe is the right HA and DR setup, and then see Brent’s recommendations. We’ll discuss the options chosen by different groups, and their pros and cons.

What’s New in SQL Server 2014 and 2016 – Microsoft is pouring on new features – even in service packs! You’ll get a fast-paced tour of what’s new, understand the drawbacks, and learn when you should consider – or avoid – each of the new features.

Tuesday – The Plumbing: Hardware, Storage, and Virtualization

Performance Workload Analysis – Before you try to guesstimate how much hardware a SQL Server needs, you need to understand its workloads. You’ll learn how to analyze an existing server’s Perfmon counters and DMVs to determine whether it’s doing OLTP queries, DW queries, or what the rough mix is between the two.

Server Hardware Sizing – Armed with details about your workload, now you can make a better prediction about the kinds of hardware required. You’ll see the differences between 2-socket and 4-socket servers, learn how much memory Enterprise Edition needs, and understand when local solid state storage is a better fit than shared storage.

Shared Storage for SQL Server DBAs – Your data lives on the SAN, but…what does that even mean? Learn the components of shared storage, what connects them together, and how to do real performance testing to identify the bottleneck.

Advanced SAN Features – Shared storage offers automated tiering, snapshots, and replication. Learn how these features work, discover why the data and log file choices are different, and understand when these features make sense for your SQL Server sizes and workloads.

Virtualization Management & Troubleshooting – Virtualization changes the way you build, size, and troubleshoot SQL Server. Understand the basic differences, and learn how to use wait stats to identify the most common VMware and Hyper-V bottlenecks.

Activity: Size Your Environment – You’ll get a list of client requirements, then pick the right hardware size, VM vs. physical, local or shared storage, and SQL Server version and edition.

Wednesday – High Availability – Clusters, AGs, Mirroring, Cloud

How to Design Quorum for Windows Failover Clusters – Before you touch anything with AlwaysOn in the name, you need to understand how cluster quorum works and how it can bring your SQL Server down. Learn how dynamic quorum and dynamic witness should be configured and see 3 real-world scenarios.

Triaging Failures in Availability Groups – You’ve built an AlwaysOn Availability Group, and it hasn’t failed yet – so what’s the big deal? Watch as Brent deals with a broken Availability Group, stepping through his thought process during a failure, and get a glimpse into just how complex clustering can become – even with a simple 2-node AG.

Field Medic’s Guide to Database Mirroring – Sure, database mirroring is technically deprecated – but it still works just as great as ever, and it’s still a good option for single-database failover scenarios. You’ll see why we still recommend it, discover a few common implementation gotchas, and learn how to test your failovers before going live.

Cloud HA and DR Options – Your executives have mumbled the terms “AWS” and “Azure” at meetings, and you want to know how to react. Running a SQL Server up there really is different, and just knowing what HA and DR options exist can help you talk more comfortably with management.

Activity: Recovering from Failover – You just failed over from one data center to another – how much data are you really going to lose? Can you get any of it back? You’ll role-play a scenario from Hurricane Sandy using simple pieces of paper and see exactly how aftermath recovery really works.

Thursday – The Daily Grind: Backups, Maintenance, Monitoring

Troubleshooting Backup and Restore Problems – You’re already familiar with full, differential, and log backups – but what happens when someone says they accidentally deleted 15 records an hour ago? What happens when one log file backup disappears? What happens if you need to restore to an exact point in time? Learn some surprising answers before you get bitten by a real emergency.

Optimizing Transaction Log Shipping – In theory, log shipping is easy: just restore your transaction log backups. In reality, you need to understand how to deal with unplanned failovers, reversing log shipping the other direction, and fixing a lagging secondary.

Optimizing CHECKDB – DBCC CHECKDB is easy with small databases – just run it every day. But as your database grows, this quickly stops becoming an option, and you need to start cutting corners. Learn how to cut them safely by running DBCC CHECKDB at a realistic frequency, discover how to run it faster, and see which SQL Server options like backup checksums and page checksums can give you CHECKDB-like protection without the long maintenance windows.

The Right Counters for Monitoring Performance and Availability – Managers say things like “We want five nines of uptime” and “We can’t build an SLA, just make queries fast.” Senior DBAs are stuck in the middle between screaming users and moaning managers. Learn what metrics to monitor in order to reduce the noise.

Index Maintenance for Enterprise Environments – You’ve learned over time that maintenance plans are the wrong way to go, and you’ve implemented index maintenance scripts from Ola Hallengren, the MidnightDBAs, or Michelle Ufford. However, you’re not really sure exactly what you’re fixing, or if it’s the least invasive way to improve performance. Learn the difference between internal and external fragmentation, see how fill factor destroys performance, and discover a simple set of parameters that can keep everybody happy.

Activity: AlwaysOn Availability Group Backup and CHECKDBs – You’ve decided to implement AlwaysOn Availability Groups, but now the really tricky part starts: where should you run backups and DBCCs? You’ll get a set of client requirements, design your own maintenance strategy, and then Brent will throw surprise curveballs at your design to see how it handles various real-world emergency scenarios.

Friday – Troubleshooting and Building Health Check Reports

Design Your Troubleshooting Process – When the brown stuff hits the fan, reach for an easy-to-understand, time-tested troubleshooting checklist. You’ll get a starter document that Brent Ozar Unlimited hands out to their own clients, learn how to use it, and see how to modify it based on your own environment. When you get back to the office, you can give this same checklist to your junior staff and sysadmins in order to make your on-call rotation easier.

Building a Health Check Report with sp_Blitz® – Your end users, managers, and sysadmins want a simple, easy-to-read report that shows whether or not their SQL Server is screwed. Learn the easy process we use every week to build assessment reports for clients around the world.

Building a Performance Check Report with sp_AskBrent®, sp_BlitzCache®, and sp_BlitzIndex® – You’ve got a slow SQL Server – but what’s the primary bottleneck? You don’t need in-depth performance analysis, T-SQL tuning advice, or index improvements, but you just need to know where to focus your efforts. Learn how to use our completely free stored procedures to get started performance tuning.

Afternoon: Open Q&A – Bring your toughest senior DBA questions, your project requirements, and your end user complaints. As long as you can talk through your evidence in front of the group, Brent will discuss his approaches to the problem. Everybody learns from real-world scenarios. (Or, if you need to fly out early, that’s fine too.)

Here Are the 2016 Cities and Dates

  • January 11-15, 2016 in Newark, NJ – at the Hilton Penn Station, an easy train ride from the NYC area
  • March 7-11, 2016 in Chicago, IL – at one of our favorite training spots, Catalyst Ranch downtown
  • May 2-6, 2016 in Denver, CO – at The Curtis Doubletree, a really fun, funky hotel downtown with lots of great eating options in walking distance
  • August 1-5, 2016 in Philadelphia, PA – at the Hilton at Penn’s Landing

These are the only cities & dates for that class in 2016, so if you’re interested, here’s your chance to burn up the last of your 2015 training budget – or next week, jump in to make sure you get a seat with your fresh 2016 budget.

Register now. See you there!

Who’s Really Doing Continuous Integration of SQL Server Databases?

One of our training class attendees asked a great question: who’s really doing this stuff? We talked about it in class, and I figured it’d make a great blog post question too.

So if you’re doing CI (automatic deployment of database changes), answer in the comments with:

  1. How many database developers do you have?
  2. How many person-hours did it take to get CI working?
  3. What tools do you use?
  4. What benefits have you achieved by using CI?
  5. (New) Is the application deployed in only one database, or across many nearly-identical copies of the same database? (Could be internal or external.)

(Update 12/24 – I added question #5 because there’s a bit of a separate topic there. If you manage an application that gets deployed to customers – either hosted internally or externally – like if you call yourself an ISV, or sell access to these databases & applications to multiple clients – mention that in your answer.)

Give Your T-SQL a Semicolonoscopy

In theory, all of your T-SQL statements are supposed to end with a semicolon, like this:

SELECT * FROM dbo.Posts;
SELECT * FROM dbo.Comments;
GO

Why? Well, SQL Server 2005’s Books Online says:

Many code examples use a semicolon (;) as a Transact-SQL statement terminator. Although the semicolon is not required, using it is considered a good practice.

You know, like flossing your teeth, and puff-puff-give. But SQL Server 2008 marked a change:

Although the semicolon is not required for most statements in this version of SQL Server, it will be required in a future version.

Whoa. When exactly is “a future version”? The current BOL page for T-SQL syntax conventions still has 2008’s vague language, so at least Microsoft has given us a heck of a long grace period to get our act together.

The gambler in me doubts that Microsoft will ever be able to enforce this rule. A ton of legacy T-SQL code would suddenly start failing, which means businesses just wouldn’t adopt a new version of SQL Server (or Azure SQL DB) that included this rule. However, the DBA in me isn’t a gambler, so when I write new code, I try to end every statement with a semicolon.
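
There are also already corners of T-SQL where the semicolon isn’t optional: the statement before a common table expression (or a THROW) has to be terminated, or the batch fails. A minimal sketch, reusing the tables from above:

-- Delete the semicolon on the first line, and the batch fails with something like
-- "Incorrect syntax near the keyword 'WITH'."
SELECT COUNT(*) FROM dbo.Comments;
WITH RecentPosts AS
(
    SELECT TOP (10) * FROM dbo.Posts
)
SELECT * FROM RecentPosts;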

We Are Ready For Risk-Taking Presenters

June 2, 2011. I woke up around 5:00 AM, got dressed, left my cruise ship cabin and went downstairs to find a place to write about the dream I’d just had: A database murdered. Suspects isolated together on a ship. Technical sleuthing.

I knew immediately this was a presentation I needed to give. It would be a huge departure from anything I’d done before. And I wanted to do it on the biggest stage I knew of: the PASS Summit.

My first murder mystery session at the 2013 SQLSaturday in Denver (left), and at the 2015 PASS Summit in Seattle (right)

Four and a half years later, I did exactly that; I presented SQL Server Mystery Hour: Dead Reports Don’t Talk at the PASS Summit. It’s been a couple of months since the Summit; the euphoria has worn off, and I want to share some of the risks I took — both in getting there and delivering the session — and why that matters to you as a presenter (or potential presenter).

Risks I’m glad I took

Field Testing: I had to run this session several times before feeling reasonably confident I could do it at the Summit. That meant delivering it at SQL Server User Group meetings and SQLSaturdays. They didn’t always go well — sometimes I was done in 45 minutes (too quickly), sometimes I read my lines very obviously just so I didn’t miss something important. I had to learn what worked and what didn’t, and the only way to find out was to do it live.

Call for Speakers: I started submitting my SSRS mystery session to events in 2012, but I also submitted conventional sessions at the same time. I was competing with myself as well as all the other abstracts in that track. This year, if selected, would be my fifth year speaking at the PASS Summit. I hoped my established speaker history might make the Program Committee trust me with a more off-beat abstract (as long as it was well-written). I decided to go all-in this year and submit nothing but murder mystery abstracts. Attempting to up my chances, I did some A/B testing by not explicitly titling them all mysteries (compare the SSRS abstract with this one: Living and Dying by Dynamic SQL). I was incredibly fortunate to get two of them selected for the 2015 Summit.

(Honestly, I’m still shocked they trusted me twice.)

Multimedia: I got a lot of positive comments on the A/V aspect of it. Both sessions begin with a faux Skype call from the CEO (the one and only Buck Woody!) explaining the situation. The SSRS mystery ends with a dramatic re-enactment video featuring the perpetrator. There’s background music while attendees discuss the case. I think that went well and hopefully made the session (and its content) more memorable.

Casting: For a mystery like this to go smoothly, I can’t do all the talking or merely recap the suspect interviews; they have to happen live. I got experienced speakers to fill the cast: Mark Vaillancourt, Jes Borland, Mickey Stuewe, Jason Strate, Gina Meronek, Bill Fellows, Hope Foley, Jason Horner, and Allen White. There’s no way I could have pulled it off without their help. (Thank you, cast!)

Risks I wish I had mitigated

Rehearsal: Even though I chose experienced speakers to be the suspects, the sessions felt unpolished to attendees. For SQL Saturdays and user group meetings (where I have audience volunteers read the parts), unpolished is okay. At a major conference, I needed to do more to make sure we didn’t stumble through lines. It’s on me to make sure that, as busy as other speakers are with their own sessions, we get together — if only once briefly — to run through our lines together.

Having said that, spontaneity created the most memorable moments — Mark Vaillancourt’s stream of puns, Jes Borland’s unicorn-flipping exit, and a joke that had Mark crying just a few minutes into the session.

Outline and notes taken during my first dynamic SQL mystery. Discussion gaps were problematic right away.

Not planning the gaps well enough: There are breaks in between each chapter of the mystery where attendees turn to each other and discuss the clues and interviews they just saw. This can result in dead time if the groups either veer off-topic or just don’t talk much. I need to do a better job of keeping things on track, perhaps by shortening those discussion intervals.

Pushing the Networking Aspect: The abstract defines the murder mystery as part technical presentation, part networking event. I had roughly 80 and 50 attendees for the two sessions. This was great because the people who showed up were willing to talk to others. However, I could have toned down the networking angle and drawn more attendees who were put off by that expectation.

Risks I wish I had taken

Slide Decks: Before presenting at the PASS Summit, I had given the SSRS session several times at user group meetings and SQL Saturdays. Each time, I’d done them without any supporting slides. No bullet points, no summary slides. The only reasons I needed a projector at all were for the demo and re-enactment video at the end.

At the Summit, I panicked a little and decided not to go with an empty deck. I saw this as a make-or-break year for my mystery sessions and I wasn’t willing to risk screwing it up by forgetting material. I also didn’t want to risk getting skewered by attendees for not having bullet points to follow. I’m not afraid of that anymore (and I shouldn’t have been in the first place, honestly).

Marketing: I went out of my way to say as little about myself and my real-life company as I could. There was already some chatter about the free magnetic poetry we were giving away and I didn’t want to make any more waves. I removed the About Me slide from my decks and didn’t mention my BlitzRS script, even though it would’ve been a natural fit (and of some benefit) to those in my SSRS session. I did have SQL Sleuth badge ribbons that I offered to people for having come to the session, but those don’t advertise anything except the session itself. In hindsight, I could’ve left the About Me slides in without any fuss.

We are ready for risk-takers and storytellers

Whether it’s a user group presentation or a major worldwide conference, our SQL Server community has settled into a comfortable spot regarding session format. The expectation is that we sit in a crowd and give a speaker our attention for 60-75 minutes. We take notes, maybe mention something about the session on Twitter. That’s perfectly all right.

But what would happen if we took more risks with that model, or broke from it entirely? I found at least one way it can be done. There’s another group who’s been doing something similar (for much longer than I have), presenting a collection of short stories — with demos even! — and the audience absolutely loves it.

We need more storytellers. Audiences love storytellers.

Be a storyteller.

Weaving technical details into a story makes your content memorable for months after it’s delivered. If you have the technical topic in mind but need help with the storytelling or how to convey the material more memorably, read these books:

I’m only beginning to transition from presenter to storyteller. I’m still looking for ways to make the content I share stick in your mind long after the session is over. I’ve found these books to be invaluable and will be working more of their concepts into my 2016 talks.

Risks are risky — what if I bomb?

You’ll always be taking risks, but preparation and practice will mitigate the largest ones. You can prepare alone, but if you’re going to try something our community has never seen before, you need to practice with a real-live audience. Find a user group to present to; that’s about as low-stakes as you can go while still having your target audience. If sixty minutes of daring is too much, try thirty. Try ten. Just give it a shot. From there, you’ll get feedback on what works and what doesn’t. Even if you bomb, it won’t hurt much, and you will be closer to realizing your vision.

Stories want to be told

There wasn’t a single day out of the 1,610 days between conception and realization that I didn’t think about this mystery session. I couldn’t get it off my mind. Instead of me having the idea, it’s like the idea chose me — I was just along for the ride.

Is there an idea, a story that has you captivated? Something nagging that won’t let you rest? Stop trying to rest.

Write it!

Accept that you will have to take risks. Know that the risks are worth it. Tell us a story the way only you can. We’re ready for you to take risks.

Improved diagnostics for query execution plans that involve residual predicate pushdown

I love stuff like this!

Even though it’s not on my list of dream features, it’s pretty neat. Getting new views into what SQL is doing when queries execute is pretty cool. You can read the short and gory details at the KB here: Improved diagnostics for query execution plans that involve residual predicate pushdown in SQL Server 2012

The wording might seem a little odd at first. The phrase “predicate pushdown” might make your face scrunch like you just stepped in something wet when the only things on your feet are your lucky socks, and “residual” might just remind you of the time you left Tupperware in your desk drawer before a long weekend.

I would imagine this is coming to SQL Server 2014 in an upcoming CU or SP, but it hasn’t yet. As of this writing, the newest CTP for SQL Server 2016 is 3.1, and it works there as well.

You’re rambling, man. Get on with it.

Let’s take a look at what this ol’ dog does, and how we can use it to troubleshoot query problems. We’ll start with a small-ish table and some indexes.

USE [tempdb]

SET NOCOUNT ON

IF OBJECT_ID(N'dbo.RowTest', N'U') IS NOT NULL DROP TABLE [dbo].[RowTest];

;WITH E1(N) AS (
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL  ),                          
E2(N) AS (SELECT NULL FROM E1 a, E1 b, E1 c, E1 d, E1 e, E1 f, E1 g, E1 h, E1 i, E1 j),
Numbers AS (SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N FROM E2)
SELECT
        IDENTITY (BIGINT, 1,1) AS [ID] , 
        ISNULL(SUBSTRING(CONVERT(VARCHAR(255), NEWID()), 0, 9) , 'AAAAAAAA') AS [PO] ,
        ISNULL(CONVERT(DATE, DATEADD(HOUR, -[N].[N], GETDATE())),     '1900-01-01') AS [OrderDate] ,
        ISNULL(CONVERT(DATE, DATEADD(HOUR, -[N].[N], GETDATE() + 1)), '1900-01-01') AS [ProcessDate] ,
        ISNULL(CONVERT(DATE, DATEADD(HOUR, -[N].[N], GETDATE() + 3)), '1900-01-01') AS [ShipDate] 
INTO [RowTest]
FROM    [Numbers] [N]
ORDER BY [N] DESC;

ALTER TABLE [dbo].[RowTest] ADD CONSTRAINT [PK_RowTest] PRIMARY KEY CLUSTERED ([ID]) WITH (FILLFACTOR = 100)

CREATE UNIQUE NONCLUSTERED INDEX [IX_RowsRead] ON dbo.RowTest ([PO], [OrderDate], [ProcessDate]) WITH (FILLFACTOR = 100)

A million rows and nothing to do

Before the addition of this element, the only way to get information like this right in SSMS was to SET STATISTICS IO ON, but that only told a partial story about the pages read by our query.
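
For contrast, here’s a minimal sketch of that old approach; the Messages tab reports reads per table, but says nothing about how many rows each operator actually touched:

-- The old, partial story: logical reads per table, no per-operator row counts.
SET STATISTICS IO ON;

SELECT COUNT(*) FROM [dbo].[RowTest];

SET STATISTICS IO OFF;
-- Messages tab: Table 'RowTest'. Scan count 1, logical reads ..., physical reads ...

With that in mind, let’s look at a couple of examples using the new diagnostics.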

SELECT [rt].[ID]
FROM [dbo].[RowTest] AS [rt]
WHERE [rt].[PO] LIKE '%0%'
AND [rt].[OrderDate] >= CAST('2015-01-01' AS DATE)
AND [rt].[ProcessDate] <= CAST('2015-12-31' AS DATE)

Running the above query with the actual execution plan turned on, the tool tip that pops up over the index scan looks like this:

We read all million rows! Bummerino duder.

This is something Brent wrote about years ago. Great post, Brent! It’s further evidenced here by the new Number of Rows Read line. We had to scan the index, all million rows of it, to evaluate the double-wildcard LIKE predicate.
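
If you’d rather not hover over tool tips, the same counter appears in the actual plan XML as an ActualRowsRead attribute on the operator’s runtime counters (assuming you’re on a build that emits it). One way to grab the XML:

-- Capture the actual execution plan as XML; on patched builds, the scan's
-- RunTimeCountersPerThread element carries an ActualRowsRead attribute.
SET STATISTICS XML ON;

SELECT [rt].[ID]
FROM [dbo].[RowTest] AS [rt]
WHERE [rt].[PO] LIKE '%0%';

SET STATISTICS XML OFF;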

If we alter the query slightly, we can cut down dramatically on the number of rows we’re looking at. I get that this changes the logic, but really, you need to take care when allowing people to run double-wildcard searches like the one above.

SELECT [rt].[ID]
FROM [dbo].[RowTest] AS [rt]
WHERE [rt].[PO] LIKE '0%'
AND [rt].[OrderDate] >= CAST('2015-01-01' AS DATE)
AND [rt].[ProcessDate] <= CAST('2015-12-31' AS DATE)

Note that we’re no longer scanning the index, either.

Granted, getting away from double wildcard searches and towards more sane search methods is a big leap. What if we just tighten our last query’s predicates up a bit? Say that we only needed POs that start with ’00’, and we only needed results since June. We’re filtering on [ProcessDate] to make sure that the order was actually fulfilled, or something. It’s a dream!

SELECT [rt].[ID]
FROM [dbo].[RowTest] AS [rt]
WHERE [rt].[PO] LIKE '00%'
AND [rt].[OrderDate] >= CAST('2015-06-01' AS DATE)
AND [rt].[ProcessDate] <= CAST('2015-12-31' AS DATE)

That’s way less, boss.

Now we’re down to reading just a few thousand rows to find what we need.

So what?

If you’re on SQL Server 2012 or, for some reason, on CTP 3.1 of SQL Server 2016, you should take a new look at troublesome queries. Perhaps you can track down similar predicate inefficiency using this new tool. You may be reading way more data than you think. Anything you can do to cut down on data movement will very likely speed things up. The three queries above ran for an average of 110ms, 47ms, and 3ms, respectively. Small changes can make a big difference.

To get this to work, I had to be on the SSMS November preview. It wasn’t showing up in other versions of SSMS for me.

I’d also like to thank everyone who voted for my Connect Item. It’s nice to know that 162 of you are at least as weird as I am about tempdb.

[Video] Office Hours 2015/12/09

Every Wednesday, we run a free Office Hours session where you can wander in, ask SQL Server questions, and get answers of dubious quality.

In this week’s Office Hours, Angie, Doug, Jessica, and Brent talk about how to measure database administrators, how we come up with blog post topics, what we do when we encounter clustered index corruption, how to tell what’s filling up TempDB, and more.

Here’s the video on YouTube:

Wanna join in and ask your own question? Register for this week’s Office Hours.
