Breaking News, Literally: 2014 SP1 CU6 Breaks NOLOCK

Microsoft just announced on the Release Services blog that if you run a SELECT query with the NOLOCK hint and your query goes parallel, it can block other queries – or get blocked itself.

This is a bug, and it will be fixed soon, but it is a very big deal for people who think NOLOCK means, uh, NOLOCK.

More technical details:

  • While one transaction is holding an exclusive lock on an object (Ex. ongoing table update), another transaction is executing parallelized SELECT (…) FROM SourceTable, using the NOLOCK hint, under the default SQL Server lock-based isolation level or higher. In this scenario, the SELECT query trying to access SourceTable will be blocked.
  • Executing a parallelized SELECT (…) INTO Table FROM SourceTable statement, specifically using the NOLOCK hint, under the default SQL Server lock-based isolation level or higher. In this scenario, other queries trying to access SourceTable will be blocked.
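For illustration, the second scenario boils down to a statement shaped like this – the table and database names here are hypothetical, and you'd only see the problem on an affected CU6 build when the plan goes parallel:

```sql
/* Hypothetical repro shape: a parallelized SELECT ... INTO with NOLOCK.
   On an affected CU6 build, other queries against dbo.SourceTable
   can end up blocked while this runs - the opposite of what
   NOLOCK is supposed to mean. */
SELECT *
INTO dbo.StagingCopy
FROM dbo.SourceTable WITH (NOLOCK);
```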

If you haven’t already installed CU6, don’t.

If you have installed it, Microsoft recommends that you leave it in place unless you experience this exact issue, at which point you’d need to uninstall CU6.

To know when a fix comes out, watch the CU6 download page, or subscribe to SQLServerUpdates.com and we’ll email you.

sp_Blitz v51: @IgnorePrioritiesAbove = 50 gets you a daily briefing

You have a monitoring tool, but you’ve set up an email rule to dump all the alerts into a folder.

You’re not particularly proud of that, but it is what it is. You’re just tired of the spam.

Group query in the registered servers list

But when you get in in the morning, you want a simple screen that shows you if anything is really and truly broken in your environment.

Step 1: set up a list of registered servers or a Central Management Server. This lets you execute a single query across multiple servers.

Step 2: start a group query. Right-click on the group of servers, and click New Query.

Step 3: run sp_Blitz @IgnorePrioritiesAbove = 50, @CheckUserDatabaseObjects = 0. This gets you the fast headline news, especially when used with the improved priorities in the latest version in our SQL Server download kit:

sp_Blitz across multiple servers
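Spelled out, the step 3 call is:

```sql
EXEC dbo.sp_Blitz
    @IgnorePrioritiesAbove = 50,      /* headline-level findings only */
    @CheckUserDatabaseObjects = 0;    /* skip the slower per-database object checks */
```

Run against the server group, SSMS adds a Server Name column to the results so you can see which box each finding came from.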

You’ll discover when:

  • Corruption has been detected
  • Databases in full recovery mode aren’t getting log backups
  • You’re running a known dangerous build of SQL Server
  • Poison waits have struck
  • And much more

On Monday mornings, start here. I know, you’re probably not going to find anything, because your servers are in flawless shape and nothing ever goes wrong.

But just in case….

SQL Interview Question: “Tell me what you see in this screenshot.”

You’re a data professional working with (or at least applying to work with) a company using the StackOverflow database (I’m using the March 2016 version today). Your users are complaining that this stored procedure is slow:

usp_GetPostsByOwnerUserId

They didn’t give you parameter 26837 – I’m just giving you that so you can see an execution plan.

You don’t have to talk me through the query itself, or what you’d want to do to fix it. In fact, I want you to avoid that altogether.

Instead, tell me what things you need to know before you start tuning, and explain how you’re going to get them.

I’ll follow up in a few days with my thoughts.


Update 2016/05/28 – 71 comments, holy smokes! One thing is clear: you folks like interviewing for jobs. Because there were so many comments, here’s what I’ll do this week: I’ll start by talking about what I had in mind when I wrote the interview question, without looking at your answers. Then I’ll read yours – because I bet you had even better ideas than I did.

For me, the most important part is, “Can you follow directions?” It’s so tempting to jump in and start fixing the query, but I asked two very specific questions, and I was looking for the answers to those.

Before I start tuning this query, I want to know:

  • What parameters make this query run slow?
  • What does “slow” mean – in concrete time terms?
  • Is it always slow for those same parameters, or does the time vary widely?
  • How fast does it need to run when I’m done? Or, how long should I spend tuning it? (Typically, I ask the user to tell me if I should spend 15 minutes, 1 hour, or 1 day tuning the query.)
  • How often does it run?
  • Could we cache the data in the application layer?

To get these answers, I’m going to:

  • Talk to the users for speed guidance
  • Query the execution plan cache using sp_BlitzCache® to see if this query shows up in our top 10 most resource-intensive queries, and if so, whether it has warnings for Frequent Executions, Parameter Sniffing, and/or Long-Running Queries
  • Look at the execution plan to see what parameters it was compiled for
  • Talk to the developers to see if caching in the app tier is an option

Now, let’s see what you submitted, and look for neat ideas.

  • Mazhar wanted to know the table’s size – great idea! You’d want to tune indexes on a 1GB table differently than you’d tune a 1TB one.
  • Raul @SQLDoubleG was one of the first to point out that this code and execution plan are a perfect example of a parameter sniffing problem, good job.
  • Mike Taras asked who’s running this proc, users or apps? Good question – you might tune end-user facing code differently than service-based code. He also asked if we really need to return all of these fields.
  • Russ suggested zooming out and checking the server’s health overall. A+! That’s the first step in my BE CREEPY tuning process, blitzing the box with sp_Blitz®.
  • James Anderson turned it around on the users and said, how do you know this is the proc? Are you running traces to find out that it’s slow? I like James. I bet he has ways of making his users talk.
  • Thomas Pullen asked if it was lunchtime yet. I’ll meet him at the pub for a pint and we can laugh about the slow queries in our shops.
  • Mike F wondered what the users’ SET operations are, because that’s one of the things that can make troubleshooting parameter sniffing harder.
  • Jason Strate emphasized getting real numbers because without that, it’s like “knocking on doors at random in Chicago hoping to find Brent’s swag penthouse.” For the record, I have a doorman, but you’re on the guest list, Jason.
  • Stephen Falken wants to know what’s changed on that server recently, who has elevated permissions on the box, and what they changed.
  • Chintak Chhapia asked how frequently data is added to & updated in this table, and which columns are updated. Very good question, because it determines your indexing strategy.
  • And If_Luke_Skywalker_Was_A_Troll gets very high marks for asking excellent followup and challenge questions throughout, getting folks to think deeper through their suggested answers.

SQL Server 2016 Maintenance Plan Improvements

Let’s just get one thing out of the way first.

Yes, I understand that you, dear reader, probably hate maintenance plans. You associate them with everything dumb that you did early on in your career, like setting up a single plan that reorganized indexes, then rebuilt them, then updated their statistics. You are older and wiser now, and you swear by tools like Ola Hallengren’s maintenance scripts or Minion Reindex.

This blog post, however, is not about you.

It’s about all of the SQL Servers out there that have not yet had the wonderful opportunity to feel the tender loving hand of a qualified database administrator such as yourself. It’s about the tools that accidental DBAs will use over the years to come.

So let’s start with the index rebuild task:

Index rebuilds in SQL Server 2016

Be still, my beating heart.

You can tell it’s kinda slapped together haphazardly – note the awkward spacing of the “Used in last” line at the bottom – but God bless ’em, Microsoft’s heart is in the right place. We have new options to only rebuild indexes if they’re a certain percent fragmented, or a certain size, or they’ve been used recently.

The goodness continues on the reorg screen:

Reorganizing indexes in SQL Server 2016

Same nice options about only optimizing indexes that are in use, or are in bad shape.

The CHECKDB screen shows off its new MAXDOP capabilities, now that DBCC CHECKDB can take a hint:

maintenance-plan-checkdb
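Behind the GUI, that checkbox maps to the MAXDOP option CHECKDB gained in SQL Server 2016 – database name and degree of parallelism here are hypothetical:

```sql
/* SQL Server 2016: DBCC CHECKDB can take its own degree of
   parallelism instead of inheriting the server-level MAXDOP. */
DBCC CHECKDB (N'MyDB') WITH MAXDOP = 4, NO_INFOMSGS;
```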

Part of me is happy because undereducated database caretakers now have new, more powerful tools at their disposal.

The other part of me is a little bit sad because it’s still not easy to use. If maintenance plans are designed for the accidental and junior DBAs amongst us, I don’t think a lot of this stuff should even be an option. It should just default to the right thing, and take care of the database with Microsoft’s best practices set up as standard.

But that is a really, really small part of me. Maintenance plans are getting better, and that means something good.

What to Do When DBCC CHECKDB Reports Corruption

The instant you encounter corruption in a production SQL Server database, here’s what to do:

1. Query the msdb.dbo.suspect_pages table. This system table tracks up to the last 1,000 corrupt pages detected. This will give you a quick idea of how widespread the corruption might be. If it spans multiple databases, you need to consider getting off this hardware rather than repairing the corruption. The storage itself may be bad, and your corruption repair efforts may be fighting a losing battle. If you see corruption in multiple databases, send an email to your manager, team, and app stakeholders:

SQL Server (INSERT NAME HERE) just reported corruption across multiple databases. This indicates that it has storage problems. We need to fail over to another SQL Server immediately – come to my cube (OR ONLINE MEETING OR WHATEVER) to discuss our options.

At that point, look at your disaster recovery plan to figure out where you’re going to fail over (like a log shipping secondary.) You’ll need to check that location for corruption as well, but if we’re dealing with widespread, multi-database corruption, it’s beyond the scope of what I can teach you to do in a blog post. You can keep reading for single-database corruption recovery options, and use these same tactics across multiple databases, but it’s going to be a ton of work.
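The step 1 check is a quick query – the columns shown are the ones in msdb.dbo.suspect_pages:

```sql
/* How many corrupt pages, in which databases, and how recently? */
SELECT DB_NAME(database_id) AS database_name,
       file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages
ORDER BY last_update_date DESC;
```

If more than one database_name shows up, you're in the multi-database scenario above, and it's time to send that first email.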

2. If it’s just one database, send an email to your manager and your team. Copy/paste this in:

SQL Server (INSERT NAME HERE) just reported corruption in one of our databases. This is really bad – we may be facing data loss, and in order to find out, I’m going to have to drop everything and do emergency troubleshooting. I’ll let you know more in half an hour.

In this step – and the next couple – I’ve got you sending emails, and people are likely to come running in asking for more details. Be polite but firm – you need to do 30 minutes of homework before you can give answers.

3. If you’re using a SAN, alert the storage team. If your data and log files reside on shared storage, there may be a more widespread issue. Multiple databases or servers may be affected if the shared storage is having problems. Copy/paste this into an email:

SQL Server (INSERT NAME HERE) just reported corruption in a database that lives on the SAN. I don’t know yet whether this is due to a storage issue or a SQL Server bug. I’m dropping everything to do emergency troubleshooting, and I’ll let you know more in half an hour, but if you hear anything from other server admins about SAN issues, please let me know.

4. Alert the application owners and stakeholders. Copy/paste this into an email:

SQL Server (INSERT NAME HERE) just reported database corruption. I’m dropping everything to do emergency troubleshooting to find out if we’ve lost data, and how we’ll recover it. I’ll let you know more in half an hour, but in the meantime, I would recommend taking the applications offline. If more data is added after this point, I may not be able to recover it.

Step -1: pour a glass of wine. That’s for your spouse, while you tackle the bottle.

This sounds paranoid, but as an example, here’s a corruption case I had recently: shortly after detecting corruption, the team realized they’d have to revert to a backup of the database from a few days ago. Rather than telling users about that possibility, they let the users keep adding data into the already-corrupt database while the DBAs did troubleshooting. Several days later, as the corruption got worse, even Microsoft couldn’t repair the corruption – and the affected tables went completely offline, permanently. If the users had been alerted earlier, they could have avoided even more data loss.

5. Turn off your backup-delete jobs: you’re gonna need ’em. If you have any kind of job that automatically deletes backups older than a certain number of days, turn off that job. Depending on the type of corruption, you may have to restore a clean full backup from before the corruption occurred, plus all of the transaction log backups since.

6. Review the history of your CHECKDB and backup jobs. Fill out these answers:

  • When was the most recent clean CHECKDB? (Meaning, when did CHECKDB run against this database without errors?) ________
  • Do we have a full backup available from prior to that date? (For example, if CHECKDB said this database was okay on January 15th at 10PM, what’s the latest backup we have BEFORE that date – like January 14th at 11PM?) ________
  • Do we have all of the transaction log backups since that date, yes/no? ________

If you can pass all three of these, great news! You’re going to have an alternate recovery path. If any of these backups are not online (like if they’re on a slow virtual tape library, or have been sent offsite), get one of your Windows admins to start retrieving these backups and putting them on a very fast network share as quickly as possible. You may need to restore all of these backups in a subsequent step.

7. If you have the full CHECKDB output, look for an easy fix. DO NOT RUN CHECKDB in this step – that can be a long process, and before you go down that road, I need you to do other stuff first. I’m just saying that if you do happen to have the CHECKDB output that shows the corruption, look at the index IDs involved. If the only corruption involves an index with an ID of 2 or higher, that means it’s only corruption in a nonclustered index, and you’ll be able to repair that relatively quickly by:

  • Run sp_BlitzIndex® to list out the index definitions on the affected table:
    EXEC sp_BlitzIndex @DatabaseName = 'MyDB', @SchemaName = 'dbo', @TableName = 'MyTable'
  • Find the corrupt index by its ID
  • Copy out the CREATE INDEX statement from sp_BlitzIndex’s output
  • Drop the index
  • Recreate it
  • To verify that no other corruption exists in the database, run:
    DBCC CHECKDB('MyDB') WITH NO_INFOMSGS, ALL_ERRORMSGS
  • If it comes back clean, you’re good, and you can email the teams that things are OK, but jump to the Aftermath section at the end of this post
  • If it doesn’t come back clean, keep going
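Put together, the nonclustered-index fix looks something like this sketch – the index, table, and column names are hypothetical, so script the real definition out of sp_BlitzIndex’s output before dropping anything:

```sql
/* Hypothetical names - copy the actual CREATE INDEX statement
   from sp_BlitzIndex before running the DROP. */
DROP INDEX IX_MyTable_MyColumn ON dbo.MyTable;
GO
CREATE NONCLUSTERED INDEX IX_MyTable_MyColumn ON dbo.MyTable (MyColumn);
GO
/* Then verify nothing else is corrupt: */
DBCC CHECKDB('MyDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```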

8. Fork your recovery efforts into multiple teams. Grab other members of your team and assign them tasks that will be done in parallel:

  • Team 1: Restore clean backups to another server – this team grabs a different SQL Server of the same version as production. For example, if production is a SQL 2012 box, grab a development server that’s also SQL 2012. Restore the full and log backups from the dates specified in step 6. After it finishes, run DBCC CHECKDB to make sure there’s no corruption in this copy. This may take a long time, but if it works, you’ve got an excellent plan B.
  • Team 2: Open a support call with Microsoft. If you don’t have a support agreement with Microsoft, grab a credit card and call professional support. It’s about $500, and they work the problem with you until it’s done. Don’t be afraid to say, “We’re not making progress fast enough – I’d like to escalate this call to the next support level.” (But at the same time, you’d better be doing what they tell you to do.)
  • Team 3: Run CHECKDB if you don’t have the output. If the corruption was detected by an end user’s query or a monitoring system alert, run this, save the output to a text file, and then analyze the output with Gail Shaw’s corruption recovery instructions. Depending on the database’s size, this can take hours:
    DBCC CHECKDB('MyDB') WITH NO_INFOMSGS, ALL_ERRORMSGS
  • Team 4: If you don’t have a clean backup, restore the most recent backups to yet another server. If you’re going to try your own attempts at corruption repair, you’re going to want another copy of the corrupt database on a different server where you can test first before trying in production. As soon as that restore finishes, if the other teams aren’t ready to try their corruption repair commands yet, create a new database on this same server, and start copying as many tables as you can from the corrupt database into the new one. You want to be able to salvage as many tables as you can.
  • Team 5: Contact a data recovery company. If things are looking bad, and management wants another option, contact Kroll Ontrack for a quote for database recovery. The pricing isn’t public, and it will likely vary based on the complexity of the situation.

9. Send out a status update as work starts. As soon as the above team task assignments are done – not when the work stops, but when it starts – it’s time to communicate your status to your manager, team members, and the application stakeholders.

Your update email will be based on what you’ve learned so far, but here’s an example:

Update on the corruption issue: we have identified that the corruption is confined to the SalesSite database, and it occurred at some point after 2016/05/04. We have split up our recovery efforts into three teams. Based on a quick assessment, we believe Team 1 will be the first to finish, and we hope to have a clean copy of the database up on DevSQL2012. If we have to fail over to that, we’ll have a one-hour outage. We’ll update you in another half-hour as we know more.

10. Keep working the plan. Don’t panic – let each of the teams do their thing independently. Keep doing check-ins across teams every 30 minutes, see which team is the closest to a solution the business is comfortable with, and keep the end users informed.

The Aftermath: Avoiding Corruption Next Time

As soon as practical, get on the most current service pack and cumulative update for your SQL Server version. You may have hit a bug like the notorious online index rebuild bug or the NOLOCK UPDATE bug, and patching will reduce the chances that you hit that issue again.

If you can’t rule out a SQL Server bug as the cause of the corruption, work with your storage teams to make sure your storage gear is patched up to its most recent firmware and driver versions. Check to see if there were any storage errors over the last several weeks – not just in the Windows event logs, but in the storage gear’s logs as well.

Corrupted system tables may require more drastic measures

Once you’ve hit corruption on a server, consider taking a few proactive measures:

  • Run CHECKDB more often, like every night (which sounds crazy – until you hit a corruption scenario)
  • Keep your backups for a longer length of time – if you’re only running CHECKDB every week, you need at least a week’s worth of full and transaction log backups so that you can recover via restores
  • Run transaction log backups more frequently, like every minute
  • Put your RPO and RTO in writing – the business always assumes databases can’t lose data, and you assume that the business understands technology breaks sometimes. Get everybody on the same page.
  • Based on your RPO and RTO, revisit how many databases you have on each server. If your hardware isn’t able to keep up with backups and corruption checks, it’s time to up your horsepower or switch to snapshot backups.

Erik says: Brent is spot on with how to react to corruption scenarios. I’m all about not being past the point of no return in the first place. Whether it’s missing alerts for errors 823, 824, and 825, backups that aren’t running, or corruption checks that never happen, sp_Blitz® can help you find all sorts of dangers and inadequacies with your server setup. Run it on all your servers. You may be surprised what you find.

Tara says: I’ve encountered corruption 3 times in my career. The last event was the most memorable, and I don’t mean that as a good thing. If we had set up alerts for the 823, 824, and 825 errors as Erik mentioned, we would have been notified of the corruption sooner and had less data loss. We implemented the alerts through SCOM after the event, but we should have also created SQL Server Agent alerts for those errors, as SCOM wasn’t always reliable (your mileage may vary). Reacting to corruption quickly is very important.
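For reference, the Agent alerts Tara describes can be created with a sketch like this – the alert names and response delay are choices I’m making up here, and you’d also add sp_add_notification calls to route each alert to an operator:

```sql
USE msdb;
GO
/* Agent alerts on the classic corruption errors: 823, 824, 825.
   Pair each with sp_add_notification so someone actually gets paged. */
EXEC dbo.sp_add_alert @name = N'Error 823 - Hard I/O Error',
     @message_id = 823, @severity = 0, @delay_between_responses = 60;
EXEC dbo.sp_add_alert @name = N'Error 824 - Logical I/O Error',
     @message_id = 824, @severity = 0, @delay_between_responses = 60;
EXEC dbo.sp_add_alert @name = N'Error 825 - Read-Retry Required',
     @message_id = 825, @severity = 0, @delay_between_responses = 60;
GO
```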

SQL Interview Question: “Talk me through this screenshot.”

After writing about “For Technical Interviews, Don’t Ask Questions, Show Screenshots”, lots of folks asked what kinds of screenshots I’d show. Here’s this week’s example.

I show each screenshot on a projector (or shared desktop) to the candidate and say:

  1. What’s this screen from?
  2. What does the screen mean?
  3. If it was a server you inherited from someone else, would there be any actions you’d take?
  4. What questions might you want to ask before you take those actions?
  5. Would there be any drawbacks to your actions?
  6. What would be the benefits of your actions?
Rorschach test

After a few days, I’ll follow up with my own thoughts.


UPDATE 2016/05/20 – Great thoughts, everybody. This one was fun because it stems from real-life scenarios I’ve seen several times. You wouldn’t believe how long it takes folks to recognize this screen in real-life interviews – often it takes DBAs tens of seconds to realize they’re looking at TempDB. (They often start by talking about some other database because the file name tempdev is so deceiving.)

The DBA heard that they were supposed to create a file for each core, but they misunderstood the difference between cores and processors. The server had 2 processors, each with 4 cores – but they created 2 data files originally.

They had a super-fast SSD attached to the SQL Server as E:, and it’s got a relatively limited amount of space – say 128GB – so they started with small file sizes and let them autogrow.

At some point, the SSD ran out of space, so the DBA added another emergency overflow file on a slower drive (M:). Maybe they shrank it back manually, or maybe they have a job to shrink it – in either case, I get a little suspicious when I see small file sizes because there’s probably shrinking going on.

I got a chuckle out of the answer about the server being a dev box because the database file is named tempdev – even though I see a ton of servers, the default “tempdev” makes me pause every time because it was such an odd file name choice by Microsoft. Funny how everybody’s just as bad at naming conventions as I am.

So to answer the questions:

3. Would I take actions? I’d check to see if there are shrink jobs set up on TempDB, and if so, I’d start by disabling those. I might consider adding more TempDB data files, although if it only had one data file, I’d be a little more careful because it can have a funny side effect.
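Adding a file, if you go that route, is a one-liner – the logical name, path, and sizes below are hypothetical, and you’d match them to the existing files:

```sql
/* Hypothetical file name, path, and sizes - size new TempDB
   files the same as the existing ones so round-robin fills evenly. */
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev3',
          FILENAME = N'E:\MSSQL\DATA\tempdev3.ndf',
          SIZE = 4096MB,
          FILEGROWTH = 512MB);
```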

4. What questions would I ask? What wait types is this server facing? Is the E drive actually a good spot for TempDB? How are the file stats looking on that drive? Have we had a history of running out of space here? How big are the user databases? Are we sorting indexes in TempDB?

5. Any drawbacks? If TempDB is getting regularly hammered, and it runs out of space and needs the overflow file, I might not know it due to the shrinks. I’d start by disabling the shrink jobs so that I can see if this thing grows, and what it ends up growing to. That’ll help me plan for capacity.

6. Benefits to my actions? Some folks mentioned adding files or pre-growing files can make it faster for end users, but be really careful there. Anytime you say something will be faster, then as an interviewer, I’m going to challenge you to define what you would measure, and how it would change. If you don’t have metrics at the ready, then I’m going to suspect cargo cult programming.

What I Look For When I’m Hiring Database Professionals

Matan, Guy, and I recording the podcast

On today’s episode of the SQL Server Radio podcast, I talk with Guy Glantser and Matan Yungman about what we look for when we’re hiring.

In the broadest sense, don’t think junior or senior:

  • I’m hiring someone for what they already know, or
  • I’m hiring someone for their capability to learn

(In reality, it’s usually a blend of both, but just think big picture for now.)

If I’m hiring you for what you already know, then I’ve got a list of skills, and I want to see your proficiency in those skills. If one of those skills includes communication, then I’m going to judge you based on how you communicate your mastery of the other skills. For example, I might be looking at your blog posts, presentations, or webcasts about the topics you’re great at.

If I’m hiring your excellent learning skills, then I want to see what you’ve been interested in learning in the past, and how you’ve gone about learning those topics. It doesn’t have to be technical, either – maybe you were interested in perfecting an Eggs Benedict recipe. Show me what resources you used, your preferred style of learning, what lessons you picked up along the way, and how you would recommend that I learn that same thing as fast as possible.

To hear more about my philosophies on that, and hear how Guy and Matan approach hiring for their own companies, check out the half-hour SQLServerRadio podcast.

New Online Classes, Including Performance Tuning When You Can’t Fix the Queries

Every time we announce a new SQLSaturday Pre-Con, the number one question I get is, “When are you going to teach Performance Tuning When You Can’t Fix the Queries in my city?”

Since I can’t get to all cities, let’s do this online!

The hip bone’s connected to the … wait, this is the wrong set of slides.

Performance Tuning When You Can’t Fix the Queries – Friday, May 20th Online, $299 – Your users are frustrated because the app is too slow, but you can’t change the queries. Maybe it’s a third party app, or maybe you’re using generated code, or maybe you’re just not allowed to change it. Learn more.

SQL Server Performance Tuning – Tuesday-Friday, July 5-8 Online, $3,995 – You’re a developer or DBA who needs to speed up a database server that you don’t fully understand – but that’s about to change in a class of learning and fun with Brent. Learn more.

The Senior DBA Class of 2016 – Tuesday-Friday, July 26-29 Online, $3,995 – You’re a SQL Server DBA who is ready to advance to the next level in your career but aren’t sure how to fully master your environment and drive the right architectural changes. Learn more.

These online training classes run from 9AM to 5PM Eastern time using GoToWebinar, live with Brent Ozar, with a one-hour lunch break. Audio can come through either your computer audio (headset recommended), or by dialing into a US phone number. For best performance, test your computer’s compatibility before the meeting to make sure you’re not blocked by a corporate firewall. After purchasing your ticket, you’ll receive an email with your personal GoToWebinar link – don’t share this between users, because only one person will be able to log into the meeting with it. During the session, you can ask questions via text chat, or at the end during the audio Q&A as well.

Or if you’d like to join me at an in-person event, check out our upcoming classes.

Creating Tables and Stored Procedures in TempDB – Permanently

No, not #tables – actual tables. Here’s how:

USE tempdb;
GO
/* This one is only available during my session: */
CREATE TABLE #myTempTable (ID INT IDENTITY(1,1), Stuffing VARCHAR(100));
GO


/* This one is global, meaning it's available to other sessions: */
CREATE TABLE ##myTempTable (ID INT IDENTITY(1,1), Stuffing VARCHAR(100));
GO
/* You can create both of those at the same time. They're different. */


/* This one is just like a user table, but in TempDB: */
CREATE TABLE dbo.myTempTable (ID INT IDENTITY(1,1), Stuffing VARCHAR(100));
GO

The first one disappears when my session is over, but the latter two persist until the SQL Server is restarted.

Why would you ever do the latter two? Say you need to share data between sessions, or between different applications, or staging tables for a data warehouse, or just faster tables that live on local SSDs in a cluster (as opposed to slower shared storage), or you wanna build a really crappy caching tier.

If you use global temp tables or user-space tables, though, you have to check for duplicates before creating your tables. Local temp tables are just all yours, and you can have a thousand users with the exact same-name local temp tables.
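For example, guarding a global temp table creation might look like this sketch:

```sql
/* Only create the global temp table if no other session already has.
   Note OBJECT_ID needs the tempdb.. prefix to find temp objects. */
IF OBJECT_ID('tempdb..##myTempTable') IS NULL
    CREATE TABLE ##myTempTable (ID INT IDENTITY(1,1), Stuffing VARCHAR(100));
```

(There’s still a race window between the check and the create, so be ready to handle error 2714 – “There is already an object named…” – if two sessions collide.)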

Next up, the ever-so-slightly different magic of temporary stored procedures:

USE tempdb;
GO

/* This one is only available during my session: */
CREATE PROC #usp_myTempWorker AS
  SELECT * FROM sys.databases;
GO

/* This one is global, meaning it's available to other sessions,
    but ONLY as long as my session is available: */
CREATE PROC ##usp_myTempWorker AS
  SELECT * FROM sys.databases;
GO

/* This one is just like a user stored proc, but in TempDB: */
CREATE PROC dbo.usp_myTempWorker AS
  SELECT * FROM sys.databases;
GO

Here, the first TWO disappear when my session is over, and only the latter one sticks around. Diabolical. So the ## temp stored proc doesn’t really help me here because I can never tell when the creator’s session is going to finish. (Not God. His session keeps right on going.)

So why would you ever create stored procedures – temporary or user – in TempDB? You might not have permissions in the user databases, or might not be technically allowed to change things, or maybe you’ve got monitoring queries that you want to hide, or you want to create procs temporarily to check parameter sniffing issues.

All of the above will disappear when the SQL Server is restarted – or will they? Not if you create them permanently in the model database, which is the source of TempDB’s creation when SQL Server restarts:

USE model;
GO
CREATE PROC dbo.usp_myTempWorker AS
  SELECT * FROM sys.databases;
GO
CREATE TABLE dbo.myTempTable (ID INT IDENTITY(1,1), Stuffing VARCHAR(100));
GO

/* Now restart your SQL Server, and check in TempDB */
USE tempdb;
GO
EXEC dbo.usp_myTempWorker;
GO
SELECT * FROM dbo.myTempTable;
GO

Why would you ever wanna do this? Well, say you need to make sure that, uh, in case … look, I’m just an idea man. Somebody, somewhere, is looking for a really bad idea. That’s what I’m here for.

SQL Interview Question: “Talk me through this query.”

Last month’s post “For Technical Interviews, Don’t Ask Questions, Show Screenshots” was a surprise hit, and lots of folks asked for more details about the types of screenshots I’d show. Over the next few weeks, I’ll share a few more.

Normally I’d show this query as a screenshot, but for easier copy/pasting into comments, I’m showing it as code here.

CREATE PROC dbo.usp_ByCategory @Category NVARCHAR(20) AS
IF @Category = NULL SET @Category = 'Default'
SELECT i.itemID, i.itemName,
   COALESCE(po.Price, i.Price, 0) AS Price
FROM Items I
   LEFT OUTER JOIN PriceOverrides po
   ON i.itemID = po.itemID
   AND po.SaleStartDate >= GETDATE()
   AND po.SaleEndDate <= GETDATE()
WHERE i.Category = @Category

I’d say to the job candidate, “You’ve been asked to take a quick look at this code as part of a deployment. Explain what the business purpose of the code is, and tell me if there’s anything that concerns you.”

After a few days, I’ll follow up with my own thoughts in the comments.
