
An Update to SQL Server Management Studio is Available

That’s the message you get in your tray if you open SSMS 2016 today:

SSMS Update Available

Then click Tools, Check for Updates, and you get:

Check for Updates

What’s awesome is that I didn’t even install SQL Server Management Studio from the new standalone installer – this is my full CTP2.1 installation of SQL Server, and it’s still smart enough to know that SSMS can be updated separately.

Upon clicking the Update link, I’m taken to the Download SQL Server Management Studio page.

Some folks will probably say that it should update itself the way most apps do these days, by downloading and applying the update without launching a web browser. I think launching a browser is the right call, though – there are release notes to read, important gotchas to be aware of, and hey, who doesn’t like reading documentation? Okay, maybe that last part is just me.

The Fastest Way to Reconfigure a Bunch of Servers

… is to use a SQL Server where a robot does it for you!

I stole that from Kendra – someone else already wrote the sentence for me and why not keep re-using things, right?

Configuring SQL Servers the Usual Way

What’s the usual way to do this?

There are a few commercial tools on the market that let you deploy scripts to multiple servers; they all vary in how they apply the changes and deal with errors.

You could also roll your own tools. This gives you full control of the process, but you have to spend time implementing the tooling and then maintaining it over time.

Finally, you could just hire junior DBAs and make them implement the changes individually. This will give all of your servers a delightful hand-crafted flair. Sure, the changes may or may not all be applied with equal rigor, but hey, you’re a job creator now.

SQL Server RDS Parameter Groups

SQL Server RDS is part of Amazon’s database as a service offering. We’ve covered RDS a couple of times before, and we’re still in love with it all these years later.

One of my favorite features of RDS is the idea of parameter groups – a unified set of configuration settings covering a bunch of things that we’d normally set up through trace flags and sp_configure changes.
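
For reference, here’s a sketch of the kind of thing a parameter group saves you from scripting by hand on a self-managed instance. The specific values (MAXDOP 4, cost threshold 50, trace flag 1118) are just examples, not recommendations:

-- Manual equivalent on a self-managed SQL Server (example values only)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;

-- Trace flags still have to be enabled separately (or as startup parameters)
DBCC TRACEON (1118, -1);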

Parameters everywhere

How Can I Change a Parameter Group?

Changing parameters in a parameter group is incredibly easy. The first step is to create a new parameter group.

Creating parameter groups is hard!

Once that parameter group is created, we can highlight the parameter group and click “Edit Parameters”.

Edit parameters with ease!

The UI makes it relatively easy to sort and filter. Here, I’m looking to adjust how parallelism is handled on these particular RDS instances. These parameters are applied to all instances assigned to the parameter group; make sure that you put similar SQL Server instances in the same parameter group.

Change doesn’t have to be hard.

If you’re unsure about what you’ve changed, you can even select “Preview Changes” and get a quick overview of current and new values.

It’s almost like we can see the future.

Applying Changes with a Parameter Group

Once you’ve changed a parameter group, you’d be tempted to think the changes are applied to the servers immediately. They aren’t – you’ll still need to schedule a restart of each SQL Server RDS instance for the new parameter group settings to be applied.

This approach works well for many applications – you can restart a small portion of the servers in the system, make sure that the application is performing correctly, and then restart more servers. By leveraging the power of the cloud – scale out, not up – we can make operations easier and simpler.

What’s it mean for you?

Once you’ve gotten the hang of parameter groups, new SQL Server RDS instances can be created with the parameter group already applied, and existing SQL Server RDS instances can be moved into a parameter group. Managing settings across tens or hundreds of instances just got significantly easier – change a parameter group and you’re off to the races.

Brent says: But I put all this work into learning PowerShell! Oh, wait, no, I didn’t. Carry on.

Jeremiah retorts: Good news! There’s a PowerShell suppository toolkit.

Indexing for GROUP BY

it’s not glamorous

And on your list of things that aren’t going fast enough, it’s probably pretty low. But you can get some pretty dramatic gains from indexes that cover columns you’re performing aggregations on.

We’ll take a quick walk down demo lane in a moment, using the Stack Overflow database.

query outta nowhere!

SET NOCOUNT ON

SET STATISTICS TIME, IO ON 

SELECT [v].[UserId], [v].[BountyAmount], SUM([v].[BountyAmount]) AS [BountyTotal]
FROM [dbo].[Votes] AS [v]
WHERE [v].[BountyAmount] IS NOT NULL
GROUP BY [v].[UserId], [v].[BountyAmount]

Looking at the plan, it’s pretty easy to see what happened. Since the data is not ordered by an index (the clustered index on this table is on an Id column not referenced here), a Hash Match Aggregate was chosen, and off we went.

Look how much fun we’re having.

Zooming in a bit on the Hash Match, this is what it’s doing. It should look pretty familiar to you if you’ve ever seen a Hash Match used to JOIN columns. The only difference here is that the hash table is built, scanned, and output. When used in a JOIN, the probe input is also scanned against the hash buckets (with any residual predicates applied), and then the results are output.

It’s basically wiping its hands on its pants.

It took quite a bit of activity to do a pretty simple thing.


/*
Table 'Votes'. Scan count 5, logical reads 315406, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 3609 ms, elapsed time = 1136 ms.
*/

Since this query is simple, our index is simple.

CREATE NONCLUSTERED INDEX [IX_GRPBY] ON dbo.[Votes]
(
[BountyAmount], [UserId]
)

I’m using the BountyAmount column in the first position because we’re also filtering on it in the query. We don’t really care about the SUM of all NULLs.

Taking that new index out for a spin, what do we end up with?

Stream Theater

The Hash Match Aggregate has been replaced with a Stream Aggregate, and the Scan of the Clustered Index has been replaced with a Seek of the Non-Clustered Index. This all took significantly less work:


/*
Table 'Votes'. Scan count 1, logical reads 335, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 278 ms.
*/

Let’s zoom in on the Stream Aggregate operator, too, since we gave the Hash Match so much attention. Good behavior should be rewarded.

You make it look so easy, Stream Aggregate.

Filters, filters, filters

If we want to take it a step further, we can filter the index to avoid the NULLs altogether.

CREATE NONCLUSTERED INDEX [IX_GRPBY] ON dbo.[Votes]
(
[BountyAmount], [UserId]
) WHERE [BountyAmount] IS NOT NULL
WITH (DROP_EXISTING = ON)

This results in very slightly reduced CPU and IO. The real advantage of filtering the index here is that it takes up nearly 2 GB less space than without the filter. Collect two drinks from your SAN admin.
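
If you want to check the space savings on your own copy of the database, a quick query like this against sys.dm_db_partition_stats shows the size of each index on the Votes table (just a sketch; adjust as needed):

-- Approximate size of each index on dbo.Votes, in MB
SELECT i.[name] AS index_name,
       SUM(ps.used_page_count) * 8 / 1024 AS size_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON i.[object_id] = ps.[object_id]
    AND i.index_id = ps.index_id
WHERE ps.[object_id] = OBJECT_ID('dbo.Votes')
GROUP BY i.[name];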


/*
Table 'Votes'. Scan count 1, logical reads 333, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 233 ms.
*/

And, because I knew you’d ask, I did try making the same index with the column order reversed. It was not more efficient, because it ended up doing a Scan of the Non-Clustered Index instead, which results in a bit more CPU time.

If you like this sort of thing, you might be interested in our Advanced Querying & Indexing class this August in Portland, OR.

Should You Put Things in the master Database?

When I was growing up, my elementary school teachers always warned me about the perils of putting stuff in master. But what’s the real deal? Why is this such a bad idea?

System databases are often located on space-limited drives. The default installation parameters for SQL Server throw master, model, and msdb all on the C drive. I’d rather not create tables that might grow when I’m not looking, and then possibly run my server’s limited boot drive out of space. Starting Windows with a full C drive is not a lot of fun.

In the event of a disaster, we don’t usually restore master. While you can indeed restore one server’s master database over to another, it’s just not a good idea. You’re failing over to another server because things are already going wrong – why make things worse by doing something you rarely practice? Plus, if you’re failing over due to corruption, you probably don’t want to bring over a possibly corrupt master database.

Even if you could restore it, you’ll lose data. You can’t do transaction log backups for the master database, so you’ll lose whatever changes were made since the last full backup.

Now, having said all this, I’m completely okay with putting utility stored procedures in the master database – things like sp_WhoIsActive, sp_Blitz®, and the like. After all, those objects take hardly any space, and we don’t need to restore them over to our DR environment.

Kendra says: You may have more in your master database than you think! It just takes one ‘oops’ on a deployment script, and there you go.
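
If you’re curious whether that’s already happened on your servers, a quick query like this lists anything in master that Microsoft didn’t ship (your utility procedures like sp_WhoIsActive will show up here too):

-- User-created objects sitting in the master database
SELECT [name], type_desc, create_date
FROM master.sys.objects
WHERE is_ms_shipped = 0
ORDER BY create_date DESC;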

Want to Learn More About High Availability and Disaster Recovery?

We just launched our new DBA’s Guide to SQL Server High Availability and Disaster Recovery – a 6-hour online video course that teaches you about clustering, AlwaysOn AGs, quorum, database mirroring, log shipping, and more.

How to Configure AlwaysOn AG Backups with Ola Hallengren’s Scripts

With SQL Server AlwaysOn Availability Groups, you can offload backups to a replica rather than running them on the primary. Here’s how to do it:

1. Install Ola Hallengren’s utility scripts on all of the replicas. During the install, it creates a laundry list of SQL Agent jobs, but doesn’t set up schedules for any of them. More on that in a couple of steps.

Backup preference settings

2. Set your backup preferences. In SSMS, right-click on your Availability Group, click Properties, and click the Backup Preferences pane.

The first option, “Prefer Secondary,” means that your backups will be taken on a secondary server unless all secondaries go offline, at which point they’ll be taken on the primary. There’s some risk here: if communication falls behind, your secondary may be running backups of old data, as Anthony Nocentino explains. In that case, you won’t get backup failure alerts, but you won’t be able to meet your RPO. Monitoring for that is an exercise left for the reader.
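
If you want a starting point for that monitoring, something like this on each secondary shows how far its send and redo queues have fallen behind – the thresholds you alert on are up to you:

-- How far behind is each AG database on this replica?
-- Large send/redo queues mean backups taken here cover old data.
SELECT DB_NAME(drs.database_id) AS database_name,
       drs.synchronization_state_desc,
       drs.log_send_queue_size AS log_send_queue_kb,
       drs.redo_queue_size AS redo_queue_kb,
       drs.last_commit_time
FROM sys.dm_hadr_database_replica_states AS drs
WHERE drs.is_local = 1;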

In the “Replica backup priorities” window, rank your replicas to choose who will do the backups first.

Say I have three servers – two in my primary data center, and one in my disaster recovery site. I’d rather have my full backups running in my primary data center because if I need to do a restore, I want the backups nearby. (You can also run backups in both places – and I would – but more on that in a future post.)

To configure that, I’d set priorities as follows (there’s a T-SQL sketch of the same settings after the list):

  • SQLPROD1 and SQLPROD2 (my primary data center replicas) – both 50 priority
  • SQLDR1 (my disaster recovery replica) – priority 40
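
If you’d rather script those settings than click through SSMS, here’s a rough T-SQL equivalent – the Availability Group name [MyAG] is made up, and the replica names match the example above:

-- Prefer secondaries for backups, and rank the replicas (run on the primary)
ALTER AVAILABILITY GROUP [MyAG] SET (AUTOMATED_BACKUP_PREFERENCE = SECONDARY);

ALTER AVAILABILITY GROUP [MyAG] MODIFY REPLICA ON N'SQLPROD1' WITH (BACKUP_PRIORITY = 50);
ALTER AVAILABILITY GROUP [MyAG] MODIFY REPLICA ON N'SQLPROD2' WITH (BACKUP_PRIORITY = 50);
ALTER AVAILABILITY GROUP [MyAG] MODIFY REPLICA ON N'SQLDR1' WITH (BACKUP_PRIORITY = 40);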

3. Test your backup preferences. Run this query on each replica:

SELECT d.database_name, 
  sys.fn_hadr_backup_is_preferred_replica (d.database_name) AS IsPreferredBackupReplicaNow
FROM sys.availability_databases_cluster d

This returns a list of databases in an AG, and whether or not they’re the preferred backup replica right now. Check that on all of your replicas to make sure backups are going to run where you expect, and if not, revisit the backup preferences you set in the previous step.

4. Configure Ola’s Agent full backup jobs. On any replica, in the Agent job list, right-click on the “DatabaseBackup – USER_DATABASES – FULL” job, click Properties, click Steps, and edit the first step. By default, it looks like this:

sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @Directory = N'C:\Backup', @BackupType = 'FULL', @Verify = 'Y', @CleanupTime = NULL, @CheckSum = 'Y', @LogToTable = 'Y'" -b

You need to change these parts:

  • @Directory – set this to your backup path. I like using a UNC path that all of the replicas can access.
  • @Verify – I’d recommend turning this off to make your backup jobs go faster. If you really want to verify your backups, restore them on another server.
  • If you want to run the backups on a secondary replica rather than the primary, add a parameter for @CopyOnly=’Y’
  • If you only want to back up specific databases, modify the @Databases parameter. I don’t like doing that – I’d rather have one job for all of my backups. The way this is set now (USER_DATABASES), this one job will back up all of my databases that aren’t in an AG, plus it’ll back up the AG-contained databases where this replica is the preferred backup right now.

With AlwaysOn AGs, secondary replicas can only run copy-only full backups, and people often think that’s a problem. It’s only a problem if you want to use differential backups – otherwise, with AGs, it doesn’t affect transaction log handling at all. I don’t recommend using differentials with AlwaysOn AGs, but if you insist on doing it, you’ll be running your full backups on your primary replica.
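
Under the hood, a copy-only full on a secondary is just a regular BACKUP DATABASE with the COPY_ONLY option – roughly like this sketch, where the database name and file name are made up and the path reuses the UNC share from the job example below:

-- A copy-only full backup of an AG database on a secondary (example names)
BACKUP DATABASE [YourAGDatabase]
TO DISK = N'\\FILESERVER1\SQLBackups\YourAGDatabase_FULL.bak'
WITH COPY_ONLY, CHECKSUM, COMPRESSION, STATS = 10;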

Other parameters you may want to set:

  • @CleanupTime – how many hours of backup files you want to keep around
  • @Compress – Ola inherits the default compression setting at the server level, but you can hard code this to Y if you want to make sure backups are always compressed

So here’s what my Agent job script ends up looking like with my changes in place:

sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @Directory = N'\\FILESERVER1\SQLBackups', @CopyOnly = 'Y', @CleanupTime = 48, @Compress = 'Y', @BackupType = 'FULL', @Verify = 'N', @CheckSum = 'Y', @LogToTable = 'Y'" -b

5. Copy these changes to all replicas and test the full jobs. Make sure each replica that isn’t supposed to run the fulls, doesn’t, and the replica that IS supposed to run the fulls, DOES. In really mission-critical environments where we’re building the new AG servers from the ground up, we actually fail the AG around to different servers to test behavior when different servers are the primary – and when entire data centers go down.

6. Repeat the setup with your log backup jobs. Right-click on the “DatabaseBackup – USER_DATABASES – LOG” job and click Properties, Steps, and edit the first step. Set the @Directory and @Verify parameters as we did earlier.

Here’s where things get a little tricky – you don’t have to add @CopyOnly=’Y’ for the log backup steps. There’s no such thing as a copy-only log backup in an Availability Group secondary, much to my dismay.

You might also consider setting the @ChangeBackupType parameter to Y. By default, if Ola can’t do a transaction log backup (like if it’s a brand new database that has never had a full backup before), then the log backup is skipped. If you set @ChangeBackupType=’Y’, then Ola will do a full backup in that situation, and then do a log backup. However, if it’s a large database, this might take a while to perform the full, and this will tie up your log backup job while it runs. Say the full takes 20 minutes to perform – this might blow your RPO/RTO commitments.
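
Putting those pieces together, the log backup step ends up calling Ola’s procedure with something like this – a sketch that reuses the same UNC path as the full backup job, with @ChangeBackupType left in as an optional extra:

-- Log backup step: note there's no @CopyOnly parameter here
EXECUTE [dbo].[DatabaseBackup]
    @Databases = 'USER_DATABASES',
    @Directory = N'\\FILESERVER1\SQLBackups',
    @BackupType = 'LOG',
    @ChangeBackupType = 'Y',
    @Verify = 'N',
    @CheckSum = 'Y',
    @CleanupTime = 48,
    @LogToTable = 'Y';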

7. Copy these changes to all replicas and test the jobs. Same thing here that we did with the full backups.

8. Configure your DBCC CHECKDB jobs. You need to check for corruption on any server where you’re running backups – here’s why.
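
Ola’s scripts cover this part too – a sketch of the integrity check step might look like this:

-- Run CHECKDB against user databases via Ola's DatabaseIntegrityCheck
EXECUTE [dbo].[DatabaseIntegrityCheck]
    @Databases = 'USER_DATABASES',
    @CheckCommands = 'CHECKDB',
    @LogToTable = 'Y';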

9. Design your monitoring. Sadly, SQL Server doesn’t centralize backup history, so it’s up to you to poll all of your replicas to find out where backups are happening for any given Availability Group. In one case, I had a DBA change the backup preferences and Ola’s job settings incorrectly, and all of the backup jobs were succeeding – but none of them were backing up one of his Availability Groups.
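
As a starting point, a query like this run on every replica shows when each database was last backed up there – you’d still have to collect and compare the results centrally:

-- Most recent full ('D') and log ('L') backup taken on THIS replica, per database
SELECT bs.database_name,
       MAX(CASE WHEN bs.[type] = 'D' THEN bs.backup_finish_date END) AS last_full_backup,
       MAX(CASE WHEN bs.[type] = 'L' THEN bs.backup_finish_date END) AS last_log_backup
FROM msdb.dbo.backupset AS bs
GROUP BY bs.database_name
ORDER BY bs.database_name;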

10. Set yourself a weekly reminder to test restores. AG backups are notoriously complex, and if you cared enough to set up this whole expensive infrastructure, then you should care enough to test it. Make sure you have a good, restorable set of backups.
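
Even a bare-bones test on a scratch server beats finding out at 3 AM that the backups don’t restore. Here’s a sketch – the file path, database name, and logical file names are all made up:

-- Verify the backup file, then actually restore it under a test name
RESTORE VERIFYONLY FROM DISK = N'\\FILESERVER1\SQLBackups\YourAGDatabase_FULL.bak';

RESTORE DATABASE [YourAGDatabase_RestoreTest]
FROM DISK = N'\\FILESERVER1\SQLBackups\YourAGDatabase_FULL.bak'
WITH MOVE 'YourAGDatabase' TO N'D:\Data\YourAGDatabase_RestoreTest.mdf',
     MOVE 'YourAGDatabase_log' TO N'D:\Data\YourAGDatabase_RestoreTest.ldf',
     STATS = 10;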

Kendra says: In a complex environment, I’m a fan of some paid third party tools that help you control the compression on your backups and which can keep a central repository of backup history, to help you monitor. Good news: Ola’s solution integrates with them! You can have your backup cake and eat it, too.

Want to Learn More About AlwaysOn Availability Groups?

We just launched our new DBA’s Guide to SQL Server High Availability and Disaster Recovery – a 6-hour online video course that teaches you about clustering, AlwaysOn AGs, quorum, database mirroring, log shipping, and more.

New SQL Server Management Studio 2015/June – with Check for Updates!

This morning, Microsoft announced a new standalone SQL Server Management Studio download:

This is our first effort to release SQL Server Management Studio (SSMS) in a mechanism outside of the SQL Engine releases. Our goal is to update this frequently with new features, fixes and support for the newest SQL Server features in SQL Server Engine and Azure SQL Database.

Awww yeah! Lightweight installer, and here’s the best part: there’s a built-in check for updates.

Check for Updates in SSMS

That’s right: Microsoft might start shipping updates to SSMS outside of the regular SQL Server Service Pack scenario!

This telegraphs a couple of interesting things: first, they had to do it because Azure SQL Database ships updates much faster than SQL Server does, so this gives Microsoft a way to enable those updates in SSMS without releasing a separate service pack.

But more interestingly – to me at least – is that this means SQL Server Management Studio is now considered The Way to Manage Azure SQL Database. There isn’t a separate tool coming, nor will designer/developer tools like Visual Studio become a management tool.

SSMS is finally getting some much-needed love. Rejoice, dear readers.

Save 50% ($149.50) On Our New High Availability and Disaster Recovery Course – Today Only.

It’s your fault if the SQL Server goes down.

You don’t build a lot of clusters or AGs, and you aren’t confident that your SQL Servers will survive a patching problem, a SAN outage, or a corruption hiccup. Brent and Kendra’s new 6-hour online course has advanced modules on:

  • Choosing the right HA/DR features for your needs
  • Learning pitfalls to avoid so you can be confident your server will stay up
  • Troubleshooting advanced features when they fall apart

And to celebrate, it’s 50% off Tuesday only – and the discount will drop by 10% each day.

  • Wednesday, 40% off
  • Thursday, 30% off
  • Friday, 20% off
  • Saturday, just 10% off

Every day, the price goes up. Isn’t shrinking awful?

Better move fast. And hey, it’s better to ask for forgiveness than permission. The clock is ticking down for the next price increase…

New Updates for sp_Blitz®, sp_BlitzCache™

We’ve just updated our First Responder Kit to include new versions of these tools:

sp_Blitz® v41 – June 18, 2015:

  • Added check 162 for CMEMTHREAD waits on servers with >= 8 logical processors per NUMA node.
  • Added check 159 for NUMA nodes reporting dangerously low memory in sys.dm_os_nodes.
  • Added check 161 for a high number of cached plans per KB 3026083.
  • Fixed a bug in the SkipChecks routines. Reported by Kevin Collins.
  • Backup-to-same-drive-as-databases check (93) now includes the number of backups that were done so you can tell if it was a one-off problem, or if all backups are going to the wrong place.
  • Bug fixes and improvements.

sp_BlitzCache™ v2.4.6 – June 18, 2015:

  • Temporary object cleanup will actually occur – thanks to Bob Klimes for spotting this
  • Adding memory grants to expert mode and export to excel mode
  • Parent object name is now displayed next to statements that come from a stored procedure
  • Run clean up in ##bou_BlitzCacheProcs before executing – this should prevent duplicate records from building up over multiple executions on the same SPID
  • Added a @sql_handle parameter to filter on queries from a specific sql_handle or sql_handles
  • Added support for filtering on database name

How will the sp_BlitzCache™ changes look?

When you have a stored procedure with an expensive statement inside it, you’ll now see the parent procedure’s name in the Query Type column:


Database name filtering works just like you’d expect – use the @database_name parameter to limit the analysis to a single database.

EXEC sp_BlitzCache @hide_summary = 1, 
                   @results = 'expert', 
                   @database_name = 'AdventureWorks2014' ;

And, finally, if you want to look for a single stored procedure or batch of statements, you can grab the SQL handle. A SQL handle identifies a procedure or batch, but they’re only included in the expert mode results. Use @results = 'expert' and scroll very far to the right in the results. You’ll see a bunch of binary gibberish in there – one of the columns will be SQL Handle. Grab that and add a @sql_handle parameter to your execution of sp_BlitzCache™, just like this:

EXEC sp_BlitzCache @hide_summary = 1, 
                   @results = 'expert', 
                   @sql_handle = '0x030005006D9E2E1F6CCBE500BBA4000001000000000000000000000000000000000000000000000000000000';

What next?

If you’ve already subscribed to email updates about our tools, you got an email this morning with a direct link to the zip file. For the rest of you, hit up the download/EULA page, and enjoy! Hope this makes your database work easier.

Indexing for Windowing Functions

Hooray Windowing Functions

They do stuff that used to be hard to do, or that took weird self-joins or correlated sub-queries with triangular joins to accomplish – the kind with a standalone inequality predicate, usually used for getting a running total.

With Windowing Functions, a lot of the code complexity and inefficiency is taken out of the picture, but they still work better if you feed them some useful indexes.

What kind of index works best?

In general, what’s been termed a POC Index by Itzik Ben-Gan and documented to some extent here.

POC stands for Partition, Order, Covering. When you look at your code, you want to first index any columns you’re partitioning on, then any columns you’re ordering by, and then cover (with an INCLUDE) any other columns you’re calling in the query.

Note that this is the optimal indexing strategy for Windowing Functions, and not necessarily for the query as a whole. Supporting other operations may lead you to design indexes differently, and that’s fine.
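
To make the mapping concrete, here’s a sketch using a made-up Orders table: the PARTITION BY column goes first in the key, the ORDER BY column next, and everything else the query touches goes in the INCLUDE.

-- Hypothetical query: running total of Amount per CustomerId over time
SELECT [CustomerId],
       [OrderDate],
       SUM([Amount]) OVER ( PARTITION BY [CustomerId] ORDER BY [OrderDate] ) AS [RunningTotal]
FROM [dbo].[Orders];

-- POC index: Partition column, then Order column, then Cover the rest
CREATE NONCLUSTERED INDEX [IX_POC_Orders] ON [dbo].[Orders]
(
[CustomerId], [OrderDate]
) INCLUDE ([Amount])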

Everyone loves a demo

Here’s a quick example with a little extra something extra for the indexing witches and warlocks out there. I’m using the Stack Exchange database; you can find out how to make it your favorite new test database here.

SET NOCOUNT ON

SET STATISTICS IO, TIME ON  

SELECT  [p].[OwnerUserId] ,
        [p].[CreationDate] ,
        SUM([p].[ViewCount]) OVER ( PARTITION BY [p].[OwnerUserId] ORDER BY [p].[CreationDate] ) AS [TotalViews]
FROM    [dbo].[Posts] AS [p]
WHERE   [p].[PostTypeId] = 1
        AND [p].[Score] > 0
        AND [p].[OwnerUserId] = 4653
ORDER BY [p].[CreationDate]
OPTION  ( RECOMPILE );

/*
Table 'Posts'. Scan count 5, logical reads 488907, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 1180, logical reads 7095, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 2328 ms,  elapsed time = 760 ms.
*/

The query above runs against the Posts table, which only has a clustered index on the Id column – and that does us absolutely no good here. There are tons of access operations and logical reads. Taking a look at the plan doesn’t offer much:

I am a plan. Love me.

Let’s try a POC index to fix this up. I’m keeping ViewCount in the key because we’re aggregating on it. You can sometimes get away with just using it as an INCLUDE column instead.

CREATE NONCLUSTERED INDEX [IX_POC_DEMO] ON [dbo].[Posts] 
(
[OwnerUserId], [CreationDate], [ViewCount]
) 

We can note with a tone of obvious and ominous foreshadowing that creating this index on the entire table takes about 15 seconds. Insert culturally appropriate scary sound effects here.

Here’s what the plan looks like running the query again:

I’m a Scorpio. I like Datsuns and Winston 100s.

That key lookup is annoying.

Not all key lookups are due to output columns. Some of them are predicates.

We did a good job of reducing a lot of the ickiness from before:

/*
Table 'Worktable'. Scan count 1180, logical reads 7095, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Posts'. Scan count 1, logical reads 4973, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 31 ms,  elapsed time = 168 ms.
*/

But we’re not happy. Why? Because we’re DBAs. Or developers. Or we just have to use computers, which are the worst things ever invented.

Behold the filtered index

CREATE NONCLUSTERED INDEX [IX_POC_DEMO] ON [dbo].[Posts] 
(
[OwnerUserId], [CreationDate], [ViewCount]
) 
WHERE [PostTypeId] = 1 AND [Score] > 0
WITH (DROP_EXISTING = ON)

Cool. This index only takes about three seconds to create. Marinate on that.

This query is so important and predictable that we can roll this out for it. How does it look now?

Well, but, WHY?

That key lookup is still there, and it’s now 100% of the estimated magickal query dust cost. For those keeping track at home, this is the entirely new missing index SQL Server thinks will fix your relationship with your dad:

/*
USE [StackOverflow]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[Posts] ([OwnerUserId],[PostTypeId],[Score])
INCLUDE ([CreationDate],[ViewCount])
GO
*/

But we took a nice chunk out of the IO and knocked a little more off the CPU, again.

/*
Table 'Worktable'. Scan count 1180, logical reads 7102, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Posts'. Scan count 1, logical reads 9, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 91 ms.
*/

What can we do here?

Include!

CREATE NONCLUSTERED INDEX [IX_POC_DEMO] ON [dbo].[Posts] 
(
[OwnerUserId], [CreationDate], [ViewCount]
) INCLUDE ([PostTypeId], [Score])
WHERE [PostTypeId] = 1 AND [Score] > 0
WITH (DROP_EXISTING = ON)

Running the query one last time, we finally get rid of that stinky lookup:

Bully for you!

And we’re still at the same place for IO:

/*
Table 'Worktable'. Scan count 1180, logical reads 7102, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Posts'. Scan count 1, logical reads 9, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 90 ms.
*/

What did we learn? Windowing functions are really powerful T-SQL tools, but you still need to be hip to indexing to get the most out of them.

Check out our free resources on Windowing Functions here.

If you like this sort of thing, you might be interested in our Advanced Querying & Indexing class this August in Portland, OR.

Jeff Moden talks at length about triangular joins here (registration required).

Announcing the Dell DBA Days: August 25-28

If you want to work on the cutting edge, you can’t just read best practices – you have to go create them. Somebody’s gotta teach you how to use SQL Server 2016, right?

We asked Dell if we could fly out to their headquarters in Round Rock, Texas and borrow a bunch of hardware and storage gear for a week. Doug, Erik, Jeremiah, Kendra, and I will set up shop to do lab tests, experiments, and learning with the latest hardware, shared storage gear, and SQL Server 2016.

And we bet you wanna watch online.

So we’re bringing you Dell DBA Days. Twice a day, we’ll take a break from our work to do a one-hour webcast with you, talking about how we’re testing and what we’re learning.

Here’s the webcast lineup:

  • Tuesday morning – How Do You Fix a Slow TempDB?
  • Tuesday afternoon – TempDB Load Testing with SQL 2012, 2014, 2016, and SSDs
  • Wednesday morning – Introducing This Week’s Lab
  • Wednesday afternoon –  Advanced Shared Storage Features
  • Thursday morning – Find Your Slow SQL Server’s Bottlenecks Fast
  • Thursday afternoon – How to Prove Hardware is a Problem
  • Friday morning – SQL Server High Availability Options Explained
  • Friday afternoon – Watch SQL Server Break and Explode

Register now to watch us live.
