
Upgrading to SQL Server 2014: Frequently Asked Questions

Can I upgrade an existing instance without migrating?

Keep your columns and rows inside the database while you prepare for your upgrade.


This is nothing against SQL Server 2014, but I can’t stand in-place upgrades. Over the years I’ve had in-place upgrades work flawlessly on a few instances, and then had an install issue cause it to fail in the middle on other instances. Usually the critical instances, just because I’m not always lucky. And when upgrade fails, it doesn’t always roll back completely, or allow you to just re-run it. You may be down for a good long time.

But you’ve got backups, right? Really recent ones? Even so, how long does it take to restore them, if you need to do that to a different location? (And did you set it up right?)

While in-place upgrades may be fine for test and dev environments, they aren’t a good fit for your production instances where RPO and RTO are critical.

Should I raise the database compatibility level to 120 to use the new cost-based optimizer?

If you can test all of your queries in advance and confirm the new behavior is right for you, go ahead and turn it on. Most folks can't be that sure, so they start with it off to reduce risk during the migration.

The new cost-based optimizer is very exciting, but there's definitely a chance you'll hit performance regressions. If you can't test in advance, turning it on at the same time you migrate makes your troubleshooting more complex if you do hit a problem.
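
If you decide to start with the old behavior, the switch is a single command per database. A minimal sketch, assuming a hypothetical database named SalesDB:

-- Keep the pre-2014 cardinality estimator by leaving the database at the
-- old compatibility level (110 = SQL Server 2012 behavior):
ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 110;

-- Later, once you've tested your workload, flip to the new optimizer:
ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 120;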

Can we restore a full backup with NORECOVERY, run CHECKDB, and then later restore a differential?

You can't run CHECKDB until you've completed the restore and brought the database online in a writable state. Once you've done that, you can't apply a differential backup afterwards.

You can potentially mitigate the risk by running a full CHECKDB against the database prior to running the backup. You may also run the backup with CHECKSUM (not a substitute for CHECKDB but it does apply some protection), and then run CHECKDB in a maintenance window shortly after the migration.
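
To make the ordering concrete, here's a rough sketch of that approach; the database name and backup paths are made up:

-- Back up with CHECKSUM so page checksums are verified as the backup is written:
BACKUP DATABASE SalesDB
    TO DISK = N'\\backupshare\SalesDB_full.bak'
    WITH CHECKSUM, COMPRESSION;

-- Restore the full backup WITH NORECOVERY so a differential can still follow:
RESTORE DATABASE SalesDB
    FROM DISK = N'\\backupshare\SalesDB_full.bak'
    WITH NORECOVERY;

-- Later: apply the differential and bring the database online:
RESTORE DATABASE SalesDB
    FROM DISK = N'\\backupshare\SalesDB_diff.bak'
    WITH RECOVERY;

-- Only now that the database is writable can CHECKDB run:
DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS;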

It’s all about your risk tolerance.

Is it still helpful to run DBCC UPDATEUSAGE after a migration or upgrade?

DBCC UPDATEUSAGE is typically no longer needed, and it only affects output from sp_spaceused anyway. Check the 'remarks' section on its page in Books Online for the full details: https://msdn.microsoft.com/en-us/library/ms188414.aspx

People got into the habit because it was needed when upgrading to SQL Server 2005. It's OK to let go of that habit now (and it's worth it: this command can be pretty slow).
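
If you ever do suspect stale space-used numbers, both fixes are on-demand commands. A quick sketch (the table name is hypothetical):

-- Correct the counts for a single table while reporting its size:
EXEC sp_spaceused @objname = N'dbo.Orders', @updateusage = N'true';

-- The heavier option: fix page and row counts for every object in the current database.
DBCC UPDATEUSAGE (0);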

Should I run sp_updatestats after a migration or upgrade?

This was a big deal when upgrading to 2005 because of changes they made to statistics, but it’s not needed specifically for SQL Server 2014 upgrades. Some folks like to do this to kick the tires, but don’t go crazy or think it’s magical.
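
If you do want to kick the tires, the command is a one-liner; it only touches statistics that have had row modifications, using the default sampling rate:

EXEC sys.sp_updatestats;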

What should I set ‘max server memory’ to for SQL Server 2014 if I’m running Standard Edition?

Possibly to more than you think. Max server memory in Standard Edition is limited to 128GB for the buffer pool in SQL Server 2014, but you may want to set it higher so that other parts of SQL Server can access memory above that level. Read more here.
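
For example (the numbers here are illustrative only, so size them for your own server): on a box with 192GB of RAM, you might set max server memory above the 128GB buffer pool cap so other memory clerks can still use the rest.

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

-- Example value: 160GB (163840 MB) on a 192GB server.
EXEC sys.sp_configure N'max server memory (MB)', 163840;
RECONFIGURE;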

How do I turn on the tempdb IO enhancement in SQL Server 2014?

You don’t have to, it’s just on.

Are there any known upgrade issues?

Microsoft keeps a list here, along with a pre-upgrade checklist: https://msdn.microsoft.com/en-us/library/bb933942.aspx

Which cumulative update should I use?

You should definitely apply one if you care about performance, but the choice can be very complicated, especially if you're using Availability Groups. There's no easy answer: read all the Cumulative Update articles and test heavily before you go live.

Want more SQL Server setup help? Check out our setup guide for SQL Server.

And while you’re here, please don’t forget to think about your version of Windows.

How to Become an Expert at Anything in SQL Server

First, decide what you want to get really good at. Then try to break it in many possible ways and analyze why it broke.


I think it’s going to explode

The more you break something, the more you’ll understand it.

I use this technique all the time. Last year, I encountered a problem with a lack of worker threads in SQL Server in a client environment. That issue was particularly difficult, because when the problem occurred it was difficult to observe the SQL Server without using the Dedicated Admin Connection (DAC). At the time, I built some repro scripts in my own environment to show how the issue started, why it was tricky to observe, and how to observe it and confirm the queries at the root of it all without restarting the SQL Server. And just recently I wrote new scripts breaking the same thing in different ways — and showing how parallelism can be a factor — for our 2015 Performance Troubleshooting class. Taking the time to break it myself taught me nuances about workers in SQL Server that I wouldn’t have learned otherwise.
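
If you want to watch worker usage yourself, a couple of DMVs make a good starting point. A minimal sketch:

-- How many worker threads this instance is allowed to create:
SELECT max_workers_count
FROM sys.dm_os_sys_info;

-- How busy each scheduler is; a growing work_queue_count means tasks are
-- queuing up, waiting for a worker to become available:
SELECT scheduler_id,
       current_tasks_count,
       current_workers_count,
       active_workers_count,
       work_queue_count
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';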

“But Kendra, I don’t have time for this!”

Here’s how to make time. Mix and match these three ideas:

1) Make it a team exercise.

One person breaks something in pre-production, the other has to fix it. You save and publish notes on the error messages and how you responded.

2) Tie it to a learning program.

Purchase one of our training videos or in-person classes, and design a learning program that uses the class as a launching pad. Set a goal to write your own scripts breaking something for at least 5 of the modules.

If you really want to lock in the knowledge, write down a summary of what you’ve learned in your own words. You can blog it or publish it as notes for your team at work to reference. Your own notes will help you over time more than you expect.

3) Set goals for mastering specific items for the year.

Talk through it with your manager and document why you want to master the topic, and three things you want to achieve at work after you’re done.

Erik Darling Joins Brent Ozar Unlimited

Kendra here– we’re really excited to announce that we’ve hired Erik Darling on to join our team as a consultant. Erik’s first day is today.

To introduce Erik, we're sharing some of his answers to our call for job applications. We were so impressed, we knew we had to talk to him right away.

What are your favorite things about your DBA job?

Getting a 10 production server/100tb of data environment configured to best practices and maintained and (WEEKLY) DBCC checked.

Migrating legacy apps to new VMs.

Helping the devs with in-house applications and reports (lots of code and index tuning).

Setting up Failover Clusters, install and config SQL.

Learning, learning, learning.

Being the only DBA.

What parts of the DBA job do you not enjoy?

Filling out paperwork, archiving to tape, being on call from 7am to midnight seven days a week, being the only DBA.

Tell us about a time you broke a production server.

So this one time I had to move a 5tb database to a near-line SATA LUN, and I was all like, “GUI? GU you! I’m gonna PoSH this ogre to icy hell.”

That’s when I found out that the Move-Item command basically eats like all the memory it can if you don’t have LPIM set up for SQL, and this dual 12, 512GB behemoth basically churned to a halt, couldn’t RDP, couldn’t VNC, nothing.

On the plus side, I could use PowerShell to kill the PowerShell process remotely.

After that I just restored a copy of the database to the new LUN and dropped the old one. That went a lot better.

Why would you be great at this job?

I like solving big problems. I like teaching people how to solve little problems. I once taught a developer who barely spoke English how to cross apply with a drawing. It may or may not have been a New Order album cover. I can talk about SQL all day. Most of it is right, too, especially if I think about it first. I am all self taught and have never had anyone senior to me, so I have plenty of straight up horribly grim determination to learn things and put them into practice.

What do you think will be the hardest part of this job for you?

Not sneaking onto Jeremiah’s Twitter account to change his name to Bodega Mayonnaise.

Brent says: I swear, we didn’t just hire him because of the tattoos. But I’m not going to say they weren’t a contributing factor.

Jeremiah says: I’m glad to have a co-worker who shares my appreciation of plaid, tattoos, and crazy T-SQL.

Local Variables vs. Parameterized Stored Procedures in SQL Server (Video)

Local variables don’t behave like true parameters in stored procedures in SQL Server. Join Kendra to explore why local variables are so tricky in this free 10 minute video.
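
The short version: the optimizer can sniff the value of a true parameter at compile time and use the statistics histogram for that value, but it can't see the value of a local variable, so it falls back to an average-density guess. A minimal sketch against a hypothetical Orders table:

-- Parameterized: the plan is built for the sniffed @CustomerID value.
CREATE PROCEDURE dbo.GetOrders_Parameter
    @CustomerID int
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END
GO

-- Local variable: the value is unknown at compile time, so the row estimate
-- comes from the density vector instead of the histogram.
CREATE PROCEDURE dbo.GetOrders_LocalVariable
    @CustomerID int
AS
BEGIN
    DECLARE @LocalCustomerID int = @CustomerID;

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @LocalCustomerID;
END
GO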

How to Tell if You Have a Development Environment

Take my three question survey…

(Three survey slides.)

Brent says: SQL Server Developer Edition licensing is crazy cheap, and it has no per-core licensing fees. I like taking the oldest server I have with a huge number of cores, something that doesn't make licensing sense for any other purpose, and throwing Developer Edition on there.

Overheard in Performance Tuning Training Class

A sampling of things overheard at our 2015 Performance Troubleshooting Class in Denver:

  • “I’m not a GUID hater.”
  • “Oh, God help me. I’ve clicked on XML.”
  • “Reporting Services looks around at the available memory, makes itself at home, maybe takes its pants off.”
  • “Where would I go to check if I — hypothetically — accidentally set my fill factor to 10%?”
  • “I use the Database Tuning Advisor for humor.”
  • “SQL Server gives you this great set of [partitioning] tools and then sprinkles broken glass all over them.”
  • <sirens outside> “The partitioning police are here!”
  • “If the index fits, I sits.”
  • “You don’t need foreign keys in a data warehouse. Because you have a source system.”
  • “This is the title slide. It is also the agenda slide.”
  • “It sounds like a bad ad, but the more you query the more you save!”
  • <Execution plan recognizes an indexed view and swaps it in> “Let it be known that for once, SQL Server did the right thing!”
  • “Great questions. You should have prizes. We don’t have any prizes left, so have some smug.”
  • “When there’s no uninstall, there’s your clue that this feature may not be a delicious lava cake. It may in fact be a disease.”
  • Attendee: “So how do we know Hekaton ran out of memory?”
    Kendra Little: “It stops working.”
  • “If your query lasts more than four hours, discontinue using Hekaton.”
  • “Tune queries with this one simple trick: TRUNCATE TABLE”
  • “There’s a lot of empty lines at the end of this file. I keep scrolling down, expecting to find a bonus album track or something.”
  • “Even SHAREPOINT knows better!”
  • “The SQL Server is hung. We need a back door.”

The Top 3 Mistakes T-SQL Developers Make

Over the years, I’ve done all kinds of awful things with T-SQL and made countless mistakes. Some were harmless; others were borderline catastrophic (exciting times!). I was curious what kind of horrible mistakes other people make, so I threw the question out to Twitter.

Doug: Worst Dev Mistakes?

Every answer I got was unique, which was both telling (so many ways for developers to mess up) and fascinating (no consensus which was worst). Since I didn’t get any winners by popular vote, here are the top three answers I agree with most, with the worst mistake first:

#1: CODING FOR TODAY

MidnightDBA: Growth

We’ve all been there — we just need a script or stored procedure to run and we’re under the gun to get it deployed. The problem here is that as soon as it works and passes QA — you do have QA, don’t you? — we call it a success and don’t look back. We don’t account for any kind of future growth in data size or user base. Instead of moving on, this is the perfect time to check the working copy into source control and then start refactoring to improve performance.

If we don’t revisit and revise our code, we end up with a server full of code that “ran well enough at the time” but now won’t scale. Future-proofing is never at the top of anyone’s list of priorities but it’s just like exercise and healthy eating. We all know we need to do it. It’s a matter of making the commitment to write leaner, healthier code before it clogs the SQL Server’s arteries or gives it a heart attack outright.

There is no better time to refactor than when you get it to meet requirements. If you don’t start refactoring right away, you’ll forget all of the nuance and context involved with the code, you probably won’t remember how/why you arrived at the code that’s there now, and you may not even get permission to spend time on it later. Future You will thank Present You for it.

THE FIX: Buffer your development time to include refactoring, and make sure you have a performance goal in mind. For example: “We need this procedure to return the top five recommendations for the specified account in under 1500 milliseconds.”

#2: NOLOCK

Nic Cain: NOLOCK

Do you care if your query results are wrong? No? Congratulations, NOLOCK might be right for you!

The trouble with NOLOCK is twofold: developers usually don’t fully understand the risks involving dirty reads, and when they do, they often leave it in code because it’s better to risk sketchy results than move back to the suburbs of Slowville.

There are appropriate circumstances for NOLOCK, but developers often add it blindly as a performance enhancer, not a legitimate solution.

THE FIX: If the risks that come with NOLOCK aren’t acceptable, you can usually fix those performance problems either with code or index changes. For example, if a SELECT and UPDATE are fighting over the same index and locking each other out, an index tailored to the SELECT statement will prevent it from waiting for the UPDATE to release its lock.
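
A sketch of that last scenario, with hypothetical table, column, and index names:

-- The SELECT was blocking behind UPDATEs on the clustered index. A narrow
-- covering index lets it seek and read only the columns it needs, so the
-- NOLOCK hint stops looking attractive:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, OrderTotal);

SELECT OrderDate, OrderTotal
FROM dbo.Orders              -- no NOLOCK needed
WHERE CustomerID = 42;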

#3: CURSORS

Aaron Bertrand: cursors


“I shall hunt my cursors like silent death.”

Cursors do terrible, terrible things to performance. Every developer has this moment of enlightenment at some point. As soon as we learn cursors are performance killers, we hunt them down like a starving owl in a field of mice. We shouldn’t be embarrassed that we wrote them in the first place; we were solving the problem the best way we knew how at the time. Still, there’s a universal sense of shame when one developer sees another developer’s cursor, as if the neighbor passed by our open garage and spotted the recycling bin full of cheap beer cans and Cat Fancy magazines.

Like NOLOCK, there are times it’s appropriate to use a cursor, but those occasions are very rare.

THE FIX: Write code that operates in sets, rather than one row at a time.
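
For example, a hypothetical row-by-row price update and its set-based equivalent:

-- Cursor version: touches one row per loop iteration.
DECLARE @ProductID int;

DECLARE product_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT ProductID FROM dbo.Products WHERE CategoryID = 4;

OPEN product_cursor;
FETCH NEXT FROM product_cursor INTO @ProductID;

WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Products
    SET Price = Price * 1.10
    WHERE ProductID = @ProductID;

    FETCH NEXT FROM product_cursor INTO @ProductID;
END

CLOSE product_cursor;
DEALLOCATE product_cursor;

-- Set-based version: one statement, one pass.
UPDATE dbo.Products
SET Price = Price * 1.10
WHERE CategoryID = 4;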

HONORABLE MENTION:

  • Overuse of dynamic SQL
  • Security by obscurity
  • No indexes
  • Incorrect data types, leading to implicit conversions
  • Not following their own best practices
  • The same mistakes they made two years ago
  • Hundreds of columns in a table
  • Triggers

Monitoring Snapshot Isolation with Perfmon in SQL Server (video)

Snapshot Isolation can be a great way to reduce lock waits and speed up your SQL Server, but long running transactions could still slow you down. Join Kendra Little to learn how to monitor for these sneaky killers using performance monitor.

Brent says: if you’re a developer and you’re getting started building a new SQL Server application, you should totally check out RCSI as a default setting. Learn more about isolation levels here.
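
As a sketch (the database name is hypothetical): turning on RCSI is one command, and the counters perfmon exposes are also visible from T-SQL, which is handy for a quick check on version store growth and long-running transactions.

-- Needs exclusive access to the database, so run it during a quiet window:
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;

-- The same counters perfmon shows under SQLServer:Transactions:
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%:Transactions%'
  AND counter_name IN (N'Longest Transaction Running Time',
                       N'Version Store Size (KB)',
                       N'Snapshot Transactions');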

Signing off

Dear Brent Ozar Unlimited readers: thanks for following me for three great years. I’m off on a new adventure, but I hope to see you at a user group, a SQL Saturday, or a conference in the future!

I’m leaving with a lot of great memories – and here are a few of my favorites from the last three years.

My first company retreat was also my first SQL Cruise!

Coffee at Victrola before leaving port


Lunch in Ketchikan with Jeremiah and Kendra and other cruisers


I got to co-present my first full-day pre-con at PASS Summit with Brent, Jeremiah, and Kendra in 2013.


This was the only time I’ve ever been told I wasn’t talking loudly enough.

Body boarding at the company retreats in Mexico is always a blast.

We could do this all day. And we did.


We had awesome training weeks in Chicago in 2013 and 2014.

Teaching in Chicago. Talking with my hands, as usual.


We ate at Little Goat on May the 4th. Or, Doug usually dresses up for nights out. You can decide.

Little Goat! We love Stephanie Izard.


And I’ll always fondly remember our mascot from Mexico: donk!


Kendra says: Jes has been amazing and fun to work with. But you can tell that from the pictures! We wish her all the best and are proud to have her as an alumnus.

Doug says: Thanks to you, Jes, I’ve learned a whole new repertoire of workplace-appropriate donkey jokes. (Not appropriate for other workplaces. Just this one.) We’re gonna miss you!

Jeremiah says: It’s been a blast working with Jes – for once I wasn’t the loudest person in the room! I’ll miss your energy and your love of log shipping.

Brent says: I can’t believe it’s been three years already since we first talked to Jes about the job – and in a coffee shop, of course. I’m so happy we got to be part of her journey, and I can’t wait to see what she does next.

Are “bad” statistics the reason my query is slow?

How often are “bad” statistics to blame for a “slow” or “bad” query? Chances are, not nearly as often as you think.

What are “bad” statistics?

Most often, when people say statistics are “bad”, what they mean is “out-of-date”. But what does that mean?

Let’s compare two tables – dbo.OrderStatuses and dbo.Orders. OrderStatuses is a 25-row table that contains a master list of all possible order statuses. Only one or two changes occur per year. Orders is a 2-million-plus-row table that contains information about every order placed in our system. There are, on average, 10,000 new orders placed per day (currently).

Let’s say that statistics were last updated on dbo.OrderStatuses 90 days ago, and last updated on dbo.Orders 9 days ago. Are either of those “out of date”? The date the statistics were last updated isn’t as important as how many changes have occurred since the last update.

Roughly: any table with more than 500 rows needs a number of changes equal to 20% of its rows plus 500 before statistics update automatically. (The formula for updating statistics is documented here.) This applies to tables of 1,000 rows, 560,000 rows, or even 2,000,000 rows. For the 2-million-row Orders table, that works out to 2,000,000 * 0.20 + 500 = 400,500 changes; at 10,000 new orders per day, that's roughly 40 days of activity before an automatic update kicks in. Larger tables require more changes to automatically update – thus, large tables are more susceptible to “bad” statistics.

How do I tell if statistics are my problem?

Or, as the question was asked in a different way, is there a way to tell if statistics are my problem without looking at the query plan? When asked that way, I would answer, “Not easily.”

I most commonly find tables that have statistics problems by looking at a query’s actual execution plan and comparing the estimated and actual rows returned.

Finding out-of-date stats in other ways would require some digging. I’d have to find the number of rows in an index, calculate the change threshold, and check the modification_counter column returned by sys.dm_db_stats_properties to see whether it’s close to that threshold. Then I’d have to decide whether or not to manually update the stats.
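
The heart of that digging is a query like this (the table name is hypothetical):

-- Rows modified since the last statistics update, for every statistic on a table:
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.Orders');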

I prefer to approach the problem on a query-by-query basis instead.

What can you do to combat this?

Leave the default options to auto-create and auto-update stats turned on.

Run Ola Hallengren’s index maintenance script – or, his entire set of scripts. Ola’s index maintenance script has the parameter @UpdateStatistics, which allows you to choose index stats, column stats, or all. You can also choose to set @OnlyModifiedStatistics to true, so stats are only modified if rows have changed since the last stats update. This is a great way to include stats maintenance along with index maintenance in your regular routine.
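
A sketch of what that job step might look like (parameter names are from Ola's documentation; adjust the database list and schedule to suit your environment):

EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @UpdateStatistics = 'ALL',          -- index and column statistics
    @OnlyModifiedStatistics = 'Y';      -- skip stats with no row changes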

If you have large tables which have frequent updates, but automatic stats updates and a weekly stats job aren’t enough to keep up, it’s acceptable to create a SQL Server Agent Job that updates the stats on that table on a regular basis – perhaps once a day.

In SQL Server 2008 R2 SP1, Microsoft introduced trace flag 2371. With this trace flag enabled, rather than a set 20% change threshold, the more rows you have in a table, the lower the change percentage gets for an auto update. What is the formula? That’s not published yet, but one example is given in the KB – “For example, if the trace flag is activated, update statistics will be triggered on a table with 1 billion rows when 1 million changes occur.” Be careful with this option, though, and test it thoroughly before putting it in production. Frequent stats updates can cause frequent query recompilations – queries can take a bit longer to run, and CPU will be used more.
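
If you do decide to test it, the trace flag can be enabled globally without a restart, or added as a startup parameter to make it stick:

-- Enable trace flag 2371 instance-wide until the next restart:
DBCC TRACEON (2371, -1);

-- To make it permanent, add -T2371 to the SQL Server startup parameters instead.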

What if statistics aren’t my problem?

The chances of poor query performance being caused by out-of-date statistics are very low.


There are so many things that factor into query performance that you should look at first. Is the query slow only for you, or for one set of values? The problem could be parameter sniffing. Evaluate the overall health of your SQL Server by checking your wait statistics, or using a tool like sp_AskBrent to see what is happening in your server right now. Is your server suffering from blocking? Review the query execution plan – are there obvious issues such as large scans, index seeks combined with key lookups, large sorts, or other common problems? And last but not least, do you have baselines to compare to?

Remember: bad (or out-of-date) statistics can cause problems, but it’s rare. A poorly performing query is always worth tuning!

Want to learn more about statistics? Check out my video Statistics: Beyond Performance. Need performance tuning help? Check out Developer’s Guide to SQL Server Performance!

Kendra says: I completely agree with Jes– I was once a “statistics blamer” and thought it was the source of all my slow query problems. It turns out that there was just a lot I didn’t understand.
