Blog

Foreign Keys in SQL Server (video)

Learn the basics of Foreign Keys in SQL Server: trust, join elimination, cascading deletes, and indexing child keys. Short on time? Check out the Table of Contents below.

Table of Contents:

03:46 – Foreign keys change performance
04:12 – Let’s say we have no foreign key
04:41 – This query has a join
05:05 – The plan with no foreign key
05:34 – Add a foreign key relationship
05:54 – Run the same query
06:30 – Foreign key join elimination
06:38 – Try to add an invalid PostTypeId
07:08 – You can use NOCHECK
07:54 – Now can I add that row?
08:16 – But when I try to re-enable my untrusted foreign key…
08:29 – I get an error if I use WITH CHECK
09:29 – Does foreign key join elimination work now?
09:37 – Nope, the foreign key isn’t trusted
09:51 – Delete the row that violates the key
10:11 – Checking keys needs heavy locks
11:42 – Join elimination may not happen
12:24 – What about cascading updates/deletes?
13:31 – Example syntax – cascading deletes
14:34 – Behind the scenes…
15:51 – Cascading deletes
17:03 – Do I always need to index foreign keys?
18:14 – Creating the foreign key
19:12 – Indexes on the child table can help
19:38 – Takeaways
20:24 – BrentOzar.com/go/Trust
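
If you’d rather skim than watch, here’s a minimal sketch of the trust dance the video walks through. The table, constraint, and index names here are made up for illustration:

-- Add the key without checking existing rows: fast, but untrusted.
ALTER TABLE dbo.Posts WITH NOCHECK
    ADD CONSTRAINT FK_Posts_PostTypes
    FOREIGN KEY (PostTypeId) REFERENCES dbo.PostTypes (Id);

-- is_not_trusted = 1 at this point, so no foreign key join elimination:
SELECT name, is_not_trusted FROM sys.foreign_keys;

-- Re-validate existing rows so the optimizer trusts the key again.
-- (The doubled CHECK is correct: WITH CHECK validates, CHECK CONSTRAINT enables.)
ALTER TABLE dbo.Posts WITH CHECK CHECK CONSTRAINT FK_Posts_PostTypes;

-- SQL Server doesn’t index the child column automatically, so do it yourself:
CREATE INDEX ix_Posts_PostTypeId ON dbo.Posts (PostTypeId);
GO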

Brent says: when you’re deciding whether or not you should include foreign keys in your database, this video will help.

Upgrading to SQL Server 2014: Frequently Asked Questions

Can I upgrade an existing instance without migrating?

Keep your columns and rows inside the database while you prepare for your upgrade.

This is nothing against SQL Server 2014, but I can’t stand in-place upgrades. Over the years I’ve had in-place upgrades work flawlessly on a few instances, and then had an install issue cause it to fail in the middle on other instances. Usually the critical instances, just because I’m not always lucky. And when upgrade fails, it doesn’t always roll back completely, or allow you to just re-run it. You may be down for a good long time.

But you’ve got backups, right? Really recent ones? Even so, how long does it take to restore them, if you need to do that to a different location? (And did you set it up right?)

While in-place upgrades may be fine for test and dev environments, they aren’t a good fit for your production instances where RPO and RTO are critical.

Should I raise the database compatibility level to 120 to use the new cost-based optimizer?

If you can test it against all your queries and confirm it’s right for you, go ahead and turn it on. Most folks can’t be that sure, so they start with it off to reduce risk during the migration.

The new cost based optimizer is very exciting, but there’s definitely a chance you can hit performance regressions. If you can’t test in advance, turning it on at the same time you migrate makes your troubleshooting more complex if you hit a problem.
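
For reference, opting in or out is just a compatibility level change. A sketch, with a hypothetical database name:

-- Stay on the old cardinality estimator right after migrating:
ALTER DATABASE MyApp SET COMPATIBILITY_LEVEL = 110;
GO
-- Later, once you’ve tested, opt in to the new cost-based optimizer:
ALTER DATABASE MyApp SET COMPATIBILITY_LEVEL = 120;
GO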

Can we restore a full backup with NORECOVERY, run CHECKDB, and then later restore a differential?

You can’t run CHECKDB until you’ve recovered the database and made it writable, and once the database is recovered, you can no longer apply a differential backup.

You can potentially mitigate the risk by running a full CHECKDB against the database prior to running the backup. You may also run the backup with CHECKSUM (not a substitute for CHECKDB but it does apply some protection), and then run CHECKDB in a maintenance window shortly after the migration.
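
Here’s roughly what that mitigation looks like end to end (database name and paths are made up):

-- On the old server: run CHECKDB first, then back up with page checksums.
DBCC CHECKDB (MyApp) WITH NO_INFOMSGS;
BACKUP DATABASE MyApp TO DISK = N'X:\Backups\MyApp_full.bak' WITH CHECKSUM;
GO
-- On the new server: restore, leaving the door open for the differential.
RESTORE DATABASE MyApp FROM DISK = N'X:\Backups\MyApp_full.bak'
    WITH NORECOVERY, CHECKSUM;
RESTORE DATABASE MyApp FROM DISK = N'X:\Backups\MyApp_diff.bak'
    WITH RECOVERY;
GO
-- Only now that the database is recovered (writable) can CHECKDB run:
DBCC CHECKDB (MyApp) WITH NO_INFOMSGS;
GO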

It’s all about your risk tolerance.

Is it still helpful to run DBCC UPDATEUSAGE after a migration or upgrade?

DBCC UPDATEUSAGE is typically no longer needed and only impacts output from sp_spaceused, anyway. Check the ‘remarks’ section on its page in books online for the full details: https://msdn.microsoft.com/en-us/library/ms188414.aspx

People got into the habit of this because it was needed to upgrade to SQL Server 2005. But it’s OK, you can let go of that habit (and it’s worth it, this command can be pretty slow).

Should I run sp_updatestats after a migration or upgrade?

This was a big deal when upgrading to 2005 because of changes they made to statistics, but it’s not needed specifically for SQL Server 2014 upgrades. Some folks like to do this to kick the tires, but don’t go crazy or think it’s magical.

What should I set ‘max server memory’ to for SQL Server 2014 if I’m running Standard Edition?

Possibly to more than you think. Max server memory in Standard Edition is limited to 128GB for the buffer pool in SQL Server 2014, but you may want to set it higher so that other parts of SQL Server can access memory above that level. Read more here.
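
As a sketch: on a Standard Edition box with, say, 192GB of RAM, that might look like the following. The number is illustrative; size it to your own server:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- Above the 128GB buffer pool cap, so other memory clerks can use the rest:
EXEC sp_configure 'max server memory (MB)', 180224;
RECONFIGURE;
GO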

How do I turn on the tempdb IO enhancement in SQL Server 2014?

You don’t have to; it’s just on.

Are there any known upgrade issues?

Microsoft keeps a list here, along with a pre-upgrade checklist: https://msdn.microsoft.com/en-us/library/bb933942.aspx

Which cumulative update should I use?

You should definitely use one if you care about performance, but the choice can be very complicated, especially if you’re using Availability Groups. There is no easy answer: read all the Cumulative Update articles and test heavily before you go live.

Want more SQL Server setup help? Check out our setup guide for SQL Server.

And while you’re here, please don’t forget to think about your version of Windows.

How to Become an Expert at Anything in SQL Server

First, decide what you want to get really good at. Then try to break it in as many ways as you can, and analyze why it broke.

I think it’s going to explode

The more you break something, the more you’ll understand it.

I use this technique all the time. Last year, I encountered a problem with a lack of worker threads in SQL Server in a client environment. That issue was particularly difficult, because when the problem occurred it was difficult to observe the SQL Server without using the Dedicated Admin Connection (DAC). At the time, I built some repro scripts in my own environment to show how the issue started, why it was tricky to observe, and how to observe it and confirm the queries at the root of it all without restarting the SQL Server. And just recently I wrote new scripts breaking the same thing in different ways — and showing how parallelism can be a factor — for our 2015 Performance Troubleshooting class. Taking the time to break it myself taught me nuances about workers in SQL Server that I wouldn’t have learned otherwise.
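
If you want to poke at the same thing yourself: once regular connections hang, the DAC (connect as ADMIN:YourServer, or sqlcmd -A) is the way in, and a query along these lines shows the pileup. This is a sketch, not the exact scripts from the class:

-- A rough view of scheduler pressure when worker threads run out:
SELECT scheduler_id,
       current_tasks_count,
       runnable_tasks_count,
       work_queue_count      -- tasks waiting for a worker thread
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';
GO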

“But Kendra, I don’t have time for this!”

Here’s how to make time. Mix and match these three ideas:

1) Make it a team exercise.

One person breaks something in pre-production, the other has to fix it. You save and publish notes on the error messages and how you responded.

2) Tie it to a learning program.

Purchase one of our training videos or in-person classes, and design a learning program that uses the class as a launching pad. Set a goal to write your own scripts breaking something for at least 5 of the modules.

If you really want to lock in the knowledge, write down a summary of what you’ve learned in your own words. You can blog it or publish it as notes for your team at work to reference. Your own notes will help you over time more than you expect.

3) Set goals for mastering specific items for the year.

Talk through it with your manager and document why you want to master the topic, and three things you want to achieve at work after you’re done.

Local Variables vs. Parameterized Stored Procedures in SQL Server (Video)

Local variables don’t behave like true parameters in stored procedures in SQL Server. Join Kendra to explore why local variables are so tricky in this free 10 minute video.
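
The short version, as a sketch (the Votes table is borrowed from the StackOverflow database; the procedure names are made up):

-- A true parameter: SQL Server sniffs the value and uses the histogram.
CREATE PROCEDURE dbo.CountVotes_Param @VoteTypeId INT AS
    SELECT COUNT(*) FROM dbo.Votes WHERE VoteTypeId = @VoteTypeId;
GO
-- A local variable: the value is unknown at optimization time, so SQL Server
-- estimates from average density instead of the histogram.
CREATE PROCEDURE dbo.CountVotes_Local @VoteTypeId INT AS
BEGIN
    DECLARE @v INT = @VoteTypeId;
    SELECT COUNT(*) FROM dbo.Votes WHERE VoteTypeId = @v;
END
GO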

How to Tell if You Have a Development Environment

Take my three question survey…

[Slides: a three-question survey]

Brent Says: SQL Server Developer Edition licensing is crazy cheap, and it has no per-core licensing fees. I like taking the oldest server I have with a huge number of cores, something that doesn’t make licensing sense for any other purpose, and throwing Developer Edition on there.

Monitoring Snapshot Isolation with Perfmon in SQL Server (video)

Snapshot Isolation can be a great way to reduce lock waits and speed up your SQL Server, but long running transactions could still slow you down. Join Kendra Little to learn how to monitor for these sneaky killers using performance monitor.

Brent says: if you’re a developer and you’re getting started building a new SQL Server application, you should totally check out RCSI as a default setting. Learn more about isolation levels here.
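
Turning RCSI on is a one-liner, though it needs a quiet moment. The database name is illustrative; ROLLBACK IMMEDIATE kicks out in-flight transactions so the ALTER doesn’t wait forever:

ALTER DATABASE MyApp SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;
GO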

Quiz: Are You the Next Brent Ozar Unlimited Consultant?

Hey SQL Server DBAs — we’re hiring!

Here’s a quick five question quiz. Give yourself one point for every ‘yes’:

  1. Have you been a database administrator for a few years?
  2. Does planning how to keep an application reliable and safe from disasters sound like fun?
  3. Do you love helping people make SQL Server go faster?
  4. Would you like to have a cartoon character of yourself?
  5. Can you see yourself dropping into our Office Hours sessions occasionally to help people for free?

If you got five points, you just might be the next Brent Ozar Unlimited employee!

Here’s what to do next:

Read more about the job here to find out what we’re looking for and what benefits we offer.

Then tell us about you!

We plan to accept applications through Friday, March 6, but don’t wait too long to submit — life moves fast sometimes.

Update March 7 – applications are closed, and we’re interviewing candidates. Thanks!

Extended Events Sessions: Messing Up Filters

Recently, I wanted to play around with the auto_stats event against a test system running SQL Server 2014. I ran through the session setup GUI and added the auto_stats event. I configured it with a filter (predicate) to only show me auto_stats events in a database named AutoStatsTest. There’s a cost to events that may fire frequently and a cost to filters, but this is my test box and I was just using this to learn, so no biggie, right?

The wizard worked just fine. It created a session which scripted out as this:

CREATE EVENT SESSION [AutoStatsTest] ON SERVER 
ADD EVENT sqlserver.auto_stats(
    WHERE ([database_name]=N'AutoStatsTest')) 
ADD TARGET package0.event_file(SET filename=N'S:\XEvents\Traces\AutoStatsTest')
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY=5 SECONDS, MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,
    TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF)
GO

I started up the session and started running some code to create the AutoStatsTest database, populate data in a table, and then run some SELECT queries. I left the auto_create_statistics property enabled on the database so that the SELECT queries should trigger automatic creation of stats. I then deleted data and ran the selects again.

After running my code, I opened up my trace file and saw… nothing.

Nothing to see here, folks

Hmm. I thought maybe it was a latency issue. I waited and re-checked the file. I reran the code. No change. I set it up from the beginning, same thing happened. I drank more coffee.

Then I realized I had probably screwed up my filter.

I had set up a filter on my “Filter (predicate)” tab, but I hadn’t actually told it to collect database_name. I went back in and checked that off.

Whoops, I didn’t ask for that!

Now the session scripts out a bit differently — we’ve got SET collect_database_name=(1) in there:

CREATE EVENT SESSION [AutoStatsTest] ON SERVER 
ADD EVENT sqlserver.auto_stats(SET collect_database_name=(1)
    WHERE ([database_name]=N'AutoStatsTest')) 
ADD TARGET package0.event_file(SET filename=N'S:\XEvents\Traces\AutoStatsTest')
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=5 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF)
GO

After recreating the trace with the new definition and saving it off, I re-ran my test script and this time it worked as expected: I got trace data to explore!

Oh, that’s interesting.

The Extended Events wizard can be a little rough around the edges. It didn’t warn me that I was doing something problematic. But it also made it pretty easy to set up my trace, and also test my theory about what was wrong with it.

Summing up: when you set up an Extended Events trace, just because you set a filter or predicate on a field doesn’t mean the session can actually use that filter. If the field is optional, you may also have to specify that it should be collected.
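
One more tip: you can sanity-check what a session actually captured straight from T-SQL. Here’s a rough sketch against the trace file above (the XML shredding assumes database_name shows up as a data element once collect_database_name is enabled):

SELECT x.event_data.value('(event/@name)[1]', 'nvarchar(60)') AS event_name,
       x.event_data.value('(event/data[@name="database_name"]/value)[1]',
                          'nvarchar(128)') AS database_name
FROM (
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(
        N'S:\XEvents\Traces\AutoStatsTest*.xel', NULL, NULL, NULL)
) AS x;
GO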

“OR” and “IN” with SQL Server’s Filtered Indexes

You can’t do everything with filtered indexes in SQL Server. For instance, you can’t create the following index:

CREATE INDEX IX_Votes_filter ON dbo.Votes (PostId) 
    WHERE (VoteTypeId = 1 OR VoteTypeId = 2);
GO

If you try, you’ll get the error message:

Msg 156, Level 15, State 1, Line 5
Incorrect syntax near the keyword 'OR'.

Instead, you can use ‘IN’ and create the index this way:

CREATE INDEX IX_Votes_filter ON dbo.Votes (PostId) 
    WHERE (VoteTypeId IN (1,2));
GO

That works, and good news: even queries written with ‘OR’ can use that filtered index, because SQL Server is clever like that. Here’s an execution plan that shows it in action.

Index found!

The filtered index can also be used for a subset of the VoteTypeIds, too. Here’s an example where I use it just for VoteTypeId = 1. (Note that I had to force it to use the index; it preferred a different plan.)

[Screenshot: the filtered index used for a subset of the IN list]
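
If you want to reproduce that yourself, an index hint does the trick (same table and index as above):

-- The optimizer preferred a different plan here, so force the filtered index:
SELECT PostId
FROM dbo.Votes WITH (INDEX (IX_Votes_filter))
WHERE VoteTypeId = 1;
GO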

Takeaway: SQL Server’s filtered indexes can be complicated and limited sometimes, but don’t get too tripped up by the fact that they don’t allow “OR”.

Need to learn more about indexes?

Measuring Performance Can Slow SQL Server Down

I’ve written a terrible query. It might be the worst query in the world. Sure, there are other queries that may be slower, but this query is terrible and elegantly short.

The query is gloriously bad for two reasons:

  1. It’s incredibly inefficient in the way it uses a scalar function
  2. Observing the query’s performance can make it much, much slower

Don’t believe me on #2? Let’s take a look!

Here’s the query:

SELECT TOP 10
    Id, dbo.BadgeCountForUser(Id)
FROM dbo.Users
WHERE dbo.BadgeCountForUser(Id) > 50
ORDER BY dbo.BadgeCountForUser(Id) DESC
GO

The query uses a restored copy of the StackOverflow database, and the following (terrible) function and related index.

IF OBJECT_ID('dbo.BadgeCountForUser') IS NOT NULL
    DROP FUNCTION dbo.BadgeCountForUser;
GO
CREATE FUNCTION dbo.BadgeCountForUser (@UserId INT)
RETURNS INT
AS
BEGIN
    DECLARE @badgecount INT;
    SELECT @badgecount=COUNT(*)
    FROM dbo.Badges
    WHERE UserId=@UserId;

    RETURN @badgecount;
END
GO

CREATE NONCLUSTERED INDEX ix_FunctionAbuser ON [dbo].[Badges] (UserId)
GO

Now, let’s run it and see how long it takes. All my tests are running against a virtual machine on a laptop with 6GB of RAM, SSD storage, and no other activity running. I ran each test a few times and made sure I got consistent results.

SQL Server Dynamic Management Views (DMVs)

The easiest way to measure performance stats for a query using the dynamic management views is to dump the procedure cache, run the query, and then run our free tool, sp_BlitzCache®.

Note that I’m dumping the ENTIRE execution plan cache. I’m running this on a dedicated test instance, so my FREEPROCCACHE can’t harm anyone else.

DBCC FREEPROCCACHE; 
GO
SELECT TOP 10
    Id, dbo.BadgeCountForUser(Id)
FROM dbo.Users
WHERE dbo.BadgeCountForUser(Id) > 50
ORDER BY dbo.BadgeCountForUser(Id) DESC
GO
exec sp_BlitzCache @hide_summary=1;
GO

Here’s the duration and CPU from the output:

[Screenshot: sp_BlitzCache® duration and CPU results]

This gives me a lot of info — my total duration is over 14 seconds. Of that, 9+ seconds were the SELECT statement from inside the scalar function. And by the way, that executed 2.3 million times. WHOA. 2.3 million times? Yep, that’s how bad my query is.
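
If you don’t have sp_BlitzCache® handy, a rough do-it-yourself version of the same measurement against the DMVs looks like this (the DMV reports times in microseconds, hence the division):

SELECT TOP 10
    qs.execution_count,
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.total_elapsed_time / 1000 AS total_duration_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
GO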

Extended Events (XEvents)

I can easily run an Extended Events trace against my session using our tool, sp_BlitzTrace™ on SQL Server 2012 and higher. Here’s the syntax to do that:

exec sp_BlitzTrace @Action='start', 
    @SessionId=@@SPID,
    @TargetPath='S:\XEvents\Traces\';
GO
SELECT TOP 10
    Id, dbo.BadgeCountForUser(Id)
FROM dbo.Users
WHERE dbo.BadgeCountForUser(Id) > 50
ORDER BY dbo.BadgeCountForUser(Id) DESC
GO
exec sp_BlitzTrace @Action='stop';
GO
exec sp_BlitzTrace @Action='read';
GO

According to the trace, the query’s duration and CPU time are in the same ballpark as we saw before:

[Screenshot: sp_BlitzTrace™ output]

But I only have one line here. That’s because by default sp_BlitzTrace™ only collects sql_batch_completed and does not collect statement level completion events. You can turn that on, but remember how the dynamic management views told us the scalar function was executed 2.3 million times?

If you capture sp_statement_completed, that means capturing 2.3 million events. Which means your trace starts churning large amounts of data to the file system. And yes, if you collect the statement level stuff, you’ll slow the query down — and besides, querying the trace files is just awful.

I generated these trace files pretty quickly by collecting statement level events for this terrible query.

How to generate 1GB of XEvents trace files with a single SELECT statement… oops!

STATISTICS TIME AND STATISTICS IO

Another very common way to measure query performance is to turn on STATISTICS TIME and STATISTICS IO. I’m lazy, so I usually do these in a single statement, like this:

SET STATISTICS TIME, IO ON;
GO
SELECT TOP 10
    Id, dbo.BadgeCountForUser(Id)
FROM dbo.Users
WHERE dbo.BadgeCountForUser(Id) > 50
ORDER BY dbo.BadgeCountForUser(Id) DESC
GO
SET STATISTICS TIME, IO OFF;
GO

Here’s how that performed:

[Screenshot: STATISTICS TIME and STATISTICS IO output]

This performance is consistent across multiple executions. I am not making this up.

Wait just a second: those times were different!

Yep, there’s something weird about these numbers. Let’s sum up the CPU time measurements:

  • Dynamic Management Views via sp_BlitzCache®: ~14 seconds
  • XEvents sql_batch_completed via sp_BlitzTrace™: ~14 seconds
  • SET STATISTICS IO, TIME ON: 70+ seconds
  • XEvents sp_statement_completed: gave up after one query generated 1GB of trace data

Dang, this stuff is complicated.

I still like STATISTICS IO and STATISTICS TIME, and sometimes I like looking at statement level trace data, too! But with any method of measuring performance, I try to be aware that watching what’s going on can impact the results I’m seeing.

This post was inspired by Grant Fritchey’s recent post. If you’ve got your own story about “the observer effect” slowing things down, I’d love to hear it, too!

Brent Says: Whenever I’m there to watch a client’s server, it seems to go faster, and they’re mad because the problem is gone. Hmm.
