
“Breaking” News: Don’t Install SQL Server 2014 SP1

Yesterday, Microsoft announced the availability of Service Pack 1, saying:

As part of our continued commitment to software excellence for our customers, this upgrade is available to all customers with existing SQL Server 2014 deployments via the download links below.

Yeah, about that commitment to software excellence.

This morning, the download is gone:

Notice: The SQL SSIS team has found an issue with SP1 installation if SSIS catalog is present in the SQL Server instance. They are currently investigating this issue including possible workarounds and fixes.

Oof – the term “possible workarounds and fixes” doesn’t sound good for those who jumped in and applied the patch. A commenter on the MS Data Platform Insider blog reported that it hosed the master database and broke the instance – suspiciously similar to a bug in SQL Server 2012 SP2.

(And jeez, what is it with service packs lately? Remember the SQL 2012 SP1 100% CPU issue? I’m starting to think you’re safer with cumulative updates than with service packs.)

Remember, kids, don’t rush into patching. If your servers are mission critical, test in your staging environment first – staging is the DBA’s development. (No, your development environment isn’t staging – because your developers make their living in the dev environment, and if you broke that with SP1 yesterday, you’ll be slaving away today to get your dev instance back up and running.)

Update: the fix is in. If you applied SP1, follow the instructions in this StackExchange post.

Moving Databases Made Easy – SQL Server on a File Share

We’ve all had to move a database. For databases above a small size, this is a painful process. While there are a number of ways that you can move a database without downtime, they all involve a lot of extra work. What if there were something easier?


Every database is unique and precious.

What’s the Real Problem?

For most people, moving a database is a one-time thing – you’re moving to a new server. That’s pretty much it. I spent five minutes staring at my ceiling trying to come up with other reasons most people might move a database from one SQL Server to another.

For some people, though, databases could move on a regular basis. Think about it like this – you host a software as a service (SaaS) application. You’ve got one database per relative unit of work (customer, forum, cat picture collection). These databases can sit idle for months before, suddenly, there’s tremendous activity.

Normally, you wouldn’t make each SQL Server capable of handling peak load for any section of your SaaS. Instead, you would probably have a few servers ready to handle the worst load and move your busiest database over to those servers.

Typically, this is going to involve a small outage and setting up something like database mirroring or log shipping in order to move the database with minimal down time.

SMB to the Rescue!

Microsoft has supported housing SQL Server databases on a file share for a while now. If you’re not familiar with the idea, I recommend checking out the MSDN article Install SQL Server with SMB Fileshare as a Storage Option.

Moving a database with SMB is straightforward – detach the database from the old server, attach it to the new server, and bring the database online. For most of us, this quick change is much preferred to other methods of moving the database.
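
Here’s a minimal sketch of that move – the share path and database name are hypothetical, and both servers’ service accounts need full control of the share:

-- On the old server: detach the database (assumes nobody is connected).
USE master;
GO
EXEC sp_detach_db @dbname = N'SalesApp';
GO

-- On the new server: attach the same files, still sitting on the SMB share.
CREATE DATABASE [SalesApp]
ON (FILENAME = N'\\fileserver\sqldata\SalesApp.mdf'),
   (FILENAME = N'\\fileserver\sqldata\SalesApp_log.ldf')
FOR ATTACH;
GO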

SMB File Shares and You

Although putting databases on SMB file shares is a relatively new option, it’s worth considering for new SQL Servers. SMB file shares make it easy to move databases between SQL Servers. If you want to learn more about using SMB, check out the MSDN article on using SMB and SQL Server.

Brent says: If your storage area network (SAN) is using older connectivity methods like 2Gb or 4Gb FC, and your Ethernet is 10Gb, you might even see a performance boost.

SELECT INTO and non-nullable columns

SELECT…INTO is one of my favorite SQL Server features.

It’s great for creating table skeletons with false WHERE clauses (1=2), moving a real table to a staged/temp location for testing, etc.
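
For example, this sketch (dbo.Orders is a hypothetical source table) copies the column definitions with zero rows, because 1 = 2 is never true:

-- Creates an empty table skeleton with the source table's columns and types,
-- but no indexes, constraints, or triggers.
SELECT *
INTO #OrdersSkeleton
FROM dbo.Orders
WHERE 1 = 2;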

In SQL Server 2014

It acquired the ability to go parallel, which is pretty neat, but that’s not what we’re talking about here.

It has some limitations

Chief among them is this:

Indexes, constraints, and triggers defined in the source table are not transferred to the new table, nor can they be specified in the SELECT…INTO statement. If these objects are required, you can create them after executing the SELECT…INTO statement.

Which is… Sort of true. There’s a trick, and I’ll show you a quick example here with another of my favorite things: a Numbers table.

;WITH E1(N) AS (
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL  ),                          /* 10 rows */
E2(N) AS (SELECT NULL FROM E1 a, E1 b, E1 c, E1 d, E1 e, E1 f, E1 g, E1 h, E1 i, E1 j), /* 10 cross joins of E1 = 10^10 rows */
Numbers AS (SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N FROM E2)
SELECT [N].[N]
INTO #NumbersTest
FROM [Numbers] N;

ALTER TABLE #NumbersTest ADD CONSTRAINT [PK_Numbers] 
PRIMARY KEY CLUSTERED (N) WITH (FILLFACTOR = 100);

Trying to add the PK constraint here fails, because the column is NULLable:

Msg 8111, Level 16, State 1, Line 37
Cannot define PRIMARY KEY constraint on nullable column in table '#NumbersTest'.
Msg 1750, Level 16, State 0, Line 37
Could not create constraint or index. See previous errors.

We can verify this by looking at the table metadata:

SELECT [columns].[name], [columns].[is_nullable]
FROM tempdb.sys.columns 
WHERE [object_id] = OBJECT_ID(N'tempdb..#NumbersTest');


name     is_nullable
N           1

So how do we fix this? We could alter the table, but that won’t leave us with the lasting satisfaction of proving BOL wrong on a technicality. We’ll adjust our code a bit, and try again.

;WITH E1(N) AS (
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL  ),                          
E2(N) AS (SELECT NULL FROM E1 a, E1 b, E1 c, E1 d, E1 e, E1 f, E1 g, E1 h, E1 i, E1 j),
Numbers AS (SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N FROM E2)
SELECT 
ISNULL([N].[N], 0) AS N  /* <--- The magic is here! */
INTO #NumbersTest_IN
FROM [Numbers] N

ALTER TABLE #NumbersTest_IN ADD CONSTRAINT [PK_Numbers] 
PRIMARY KEY CLUSTERED (N) WITH (FILLFACTOR = 100)

This time, with the addition of an ISNULL check on the column, it ‘inherits’ the not NULLable property, and the PK constraint adds successfully. We can verify that by checking the table metadata, if you don’t believe me:

name     is_nullable
N           0

Note that this same behavior does not occur if you replace ISNULL() with COALESCE() – the column comes out nullable again.
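
If you don’t believe that one either, here’s a minimal sketch – the same pattern with COALESCE swapped in, and the column lands as nullable:

-- COALESCE doesn't confer NOT NULL the way ISNULL does,
-- even though this expression can never actually return NULL.
SELECT COALESCE([N].[N], 0) AS N
INTO #NumbersTest_CO
FROM #NumbersTest N;

SELECT [columns].[name], [columns].[is_nullable]
FROM tempdb.sys.columns 
WHERE [object_id] = OBJECT_ID(N'tempdb..#NumbersTest_CO');
/* name = N, is_nullable = 1 */

DROP TABLE [#NumbersTest_CO];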

And with that, you can clean up the test tables. Or keep them around. Do some math. Whatever blows your hair back.

Got a favorite use for a numbers table outside of string splitting or doing date math? Let me know in the comments! I may write a follow up.

DROP TABLE [#NumbersTest]
DROP TABLE [#NumbersTest_IN]

Brent says: Wow. That is a really slick trick.

Kendra says: I know some people have bias against SELECT INTO because it seems sloppy and you have to go the extra mile to get the right types, but it can be a great tool. Don’t rule it out.

Jeremiah says: I really like this method – it’s a quick way to copy tables without using other, more cumbersome techniques.

The @DBAreactions Guide to In-Memory OLTP (Hekaton)

At SQLBits last month, I presented a new session: Cool Story, Bro – The DBAreactions Guide to SQL Server 2014. I wanted to have some fun while educating folks about the surprise gotchas of the newest features.

Here’s the In-Memory OLTP (Hekaton) section of the session:

Our sp_Blitz® has long warned you if Hekaton is in use, and its Hekaton detail page shows some of the limitations.

How Do I Know My Query Will Be Fast In Production?

We’ve all had this problem before – the production server is significantly more powerful than the development server. You tune a query and you’re pretty sure you’ve made it faster, but there’s that doubt in the back of your mind. There are so many things that can change performance – memory, CPU count, disk performance – how can you tell that you’ve made a query better?

Measuring Slowness

When you know you have a slow query that you need to tune, run it in production one last time – but first, turn on STATISTICS IO and STATISTICS TIME so that run captures CPU and IO metrics:

SET STATISTICS IO, TIME ON;

EXEC dbo.MyAwfulQuery @slow = 'yes';

SET STATISTICS IO, TIME OFF;

You’ll get a bunch of jibber jabber in the “Messages” tab that documents how much CPU time was spent compiling and running your query as well as information about logical IO. Save this off to a text file for now.

Measuring Fastness

Go to your dev server. Tune your query. I’ll wait.

As you’re tuning your query, pay attention to the CPU time and logical IO numbers that you’re getting back from STATISTICS IO and STATISTICS TIME – they’re a relative indication that things are getting better. If the dev machine has different CPUs, the raw CPU numbers will differ from production, but that’s OK: we’re comparing total logical IO and total CPU time in milliseconds.

If you want an easy route to compare your results, you can paste the output from STATISTICS IO and STATISTICS TIME into statisticsparser.com. This will go through the results and push the data into a nice table for your perusal.

How Much Faster Will my Query Be?

Using STATISTICS IO and STATISTICS TIME to tune helps me figure out how well I’m doing compared to where I started from. Some queries move around a lot of data. In these cases, it can help to know how much faster dev is than production.

If you really want a reasonable comparison between dev and production, you can find some guidance in the Fast Track Data Warehouse Reference Guide. There’s a section of this document about measuring the MCR (Maximum Consumption Rate) and BCR (Base Consumption Rate) of your SQL Server. These numbers reflect 100% cache reads and 100% disk reads, respectively.

Once you have MCR and BCR, you can make effective predictions about how query performance will differ between the two environments. This gives you the power to predict, with some margin of error, how fast newly developed queries will run.
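
As a rough sketch of the MCR half of that exercise (dbo.BigTable is a hypothetical, fully cached table – see the Fast Track guide for the real methodology):

-- Warm the cache with one run first, then measure the second run.
SET STATISTICS IO, TIME ON;

SELECT COUNT_BIG(*)
FROM dbo.BigTable
OPTION (MAXDOP 4);

SET STATISTICS IO, TIME OFF;

-- From the STATISTICS output:
--   MCR (MB/sec) = (logical reads * 8 / 1024) / (CPU time in seconds)
-- For example, 1,280,000 logical reads over 10 seconds of CPU time:
--   (1,280,000 * 8 / 1024) / 10 = 1,000 MB/sec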

But is it Faster?

Between STATISTICS IO and STATISTICS TIME you can determine if a query is going to use fewer resources. The MCR and BCR give you relative numbers for how fast a server can process data. Between these measurements, you can predict how a query will perform between environments and what kind of impact your changes will have.

Brent says: as you get used to doing these measurements, you’ll also be able to tell if a new dev-only query is going to be fast enough in production, too. You’ll know how many logical reads is slow (or fast) in your environment.

Erik says: The hardest thing for me to ever get was commensurate data to test on in dev. If you don’t have it, get it. If you have it, don’t waste it.

Updated sp_Blitz®, sp_AskBrent® Tutorial Videos [Video]

Every month, tens of thousands of people get our First Responder Kit – a free SQL Server download pack with scripts, e-books, worksheets, and posters. They usually get started by watching our tutorial videos, so we figured it’s time to update ’em.

Here’s how to use sp_Blitz®:

And here’s how to use sp_AskBrent®:

To get ’em, get our First Responder Kit now. Enjoy!

Defragmentation Proven to Completely Fix Performance Issues

SPRINGFIELD – Local systems administrator Jackie Toole was carried through city streets today on the backs of ecstatic Acme Tooling employees.

For months, Acme’s accounting department had complained of slow performance in their payroll application. “Saving a new hire took minutes, and we had no idea what was going on. Jackie did something, and next thing you know, everything was just blazing fast. It was incredible!”

Fellow admin Marianne O’Reilly took a moment out of the parade to say, “We’d been throwing hardware at it left and right, from a terabyte of memory to a huge new solid state storage device. We just couldn’t crack the problem. Jackie just nailed it right away with that defrag. Even the worst database queries just fly now!”

In related news, local training schools have suffered a dramatic decrease in enrollment as word continues to spread that programming quality just doesn’t matter.

Online Index Creation and Cruel Defaults

Have you ever been watching queries crawl, feeling sort of helpless because you can’t change them? And even if you could, it’s hitting a new custom column in an ISV table that no one bothered to tell you about. I’ve been there. I’ve been pomade deep there.

I’ve also had my Robot tell me: “there’s an index for that, Erik.”

And Robot was right! But how could I sneak an index on this 180 million row table without making things worse? It’s the middle of a production day. Slow is bad; blocked is worse.

Well, creating an index ONLINE just needs a lil’ lock at the beginning and then a lil’ lock at the end, and just like the Ghost of Milton Berle that watches you while you sleep, you’ll never know it was there.

But you have to specify the ONLINE option – even in Enterprise Edition, which your company lovingly spent your bonuses for the next decade to license. Here’s a contrived example:

[Screenshot: a query with no business use whatsoever]

AdventureWorks went out of business because of queries like this.
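
The query itself lived in that screenshot; a hypothetical stand-in like this one does the job, filtering on SalesOrderDetailID – only the second key column of the clustered PK (SalesOrderID, SalesOrderDetailID) – so it forces a scan:

USE AdventureWorks2014;
GO
-- Hypothetical reconstruction: filtering on the second clustered key column
-- alone scans the table and coughs up a missing index hint.
SELECT *
FROM Sales.SalesOrderDetail
WHERE SalesOrderDetailID = 42;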

Seems reasonable. What say you, execution plan?

[Screenshot: the execution plan, complete with a missing index hint – “>Implying I need an index”]

Hey, look, it’s that thing Robot told me about. Thanks, Robot.

Except unlike Robot, I have a feeling. Just one. And today that feeling cares about end user experience and satisfaction. I want to get this index into the party pronto. This is the index that SQL and Robot agree on. But c’mon, we both know there’s WAYYYY more stuff going on here.

/*
Missing Index Details from SQLQuery4.sql - NADA.AdventureWorks2014 (sa (59))
The Query Processor estimates that implementing the following index could improve the query cost by 99.0347%.
*/

/*
USE [AdventureWorks2014]
GO
CREATE NONCLUSTERED INDEX [IX_SOD_SODID]
ON [Sales].[SalesOrderDetail] ([SalesOrderDetailID])

GO
*/

To prove it, we can script this simple index out from the GUI.

[Screenshot: scripting the index out from the SSMS GUI]

And here’s what that gives us. Note that ONLINE = OFF here.

[Screenshot: the GUI-generated script, with ONLINE = OFF]
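
The screenshot is gone, but the script SSMS generates looks something like this (the WITH options are approximate – the point is the ONLINE = OFF default, which locks the table for the whole build):

CREATE NONCLUSTERED INDEX [IX_SOD_SODID]
ON [Sales].[SalesOrderDetail] ([SalesOrderDetailID])
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
GO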


If we go back in and check out the options, we can turn ONLINE = ON here.


[Screenshot: turning ONLINE = ON in the GUI index options]

And now, looking at how it scripts out: ONLINE = ON.

[Screenshot: the regenerated script, now with ONLINE = ON]
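
Again approximating the screenshot – the same statement with the one option flipped, so the build only takes its brief locks at the beginning and the end:

CREATE NONCLUSTERED INDEX [IX_SOD_SODID]
ON [Sales].[SalesOrderDetail] ([SalesOrderDetailID])
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      DROP_EXISTING = OFF, ONLINE = ON, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
GO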

Which means it’s totally safe to just roll this out any ol’ time.

Just kidding – follow change management procedures and TEST TEST TEST. Also remember that robots and missing index hints are sometimes just plain dumb. They’re not terribly aware of existing indexes, and they don’t think hard about the columns they’re asking you to index. I’ve seen SQL Server demand indexes on 4-5 (MAX) columns, or ask for just one column that could easily be covered or included in an existing index.

If you want to see what’s up with your indexes, missing, existing, hypothetical, or somewhere in the middle of that Bizarre Love Triangle, click here to check out sp_BlitzIndex®.

Kendra says: I keep trying to use ONLINE=MOSTLY, because that seems accurate, but it never works.

Brent says: Once I got started typing out CREATE INDEX by hand, I fell into the rut of doing it every single time, and I forgot about all the cool options available in the GUI. I’m conditioned to just say ONLINE = ON, but there are other cool options too – like 2014’s amazing performance improvements if you sort in tempdb.

Why You Shouldn’t Upgrade SQL Server

Stop. Before you think about upgrading your existing SQL Server to the latest version, think it through.

In retrospect, putting a Prius engine in the Grand Prix wasn’t my best idea.

Your hardware is old. Even if it’s just a couple of years old, the latest advances in processors, memory, and storage mean it’s practically an antique.

Your Windows is old, too. Over the years, you installed a bunch of crap that you probably shouldn’t have put on the production box. You could uninstall it, but even that will leave leftovers all over the place.

You’ve learned a lot since the first time. When you first installed it, you didn’t know that you shouldn’t install extra services you don’t need, or that you should use 4 TempDB data files, or that you should format the drives with 64K NTFS allocation units. Now that you know about really good setup checklists, you wish you could do it all over again – and now is your chance.
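
For instance, here’s the tempdb piece of that checklist as a minimal sketch – the drive letter, sizes, and growth settings are hypothetical placeholders, so size them for your own workload:

-- Assuming tempdb started with one data file, add three more to reach four.
ALTER DATABASE tempdb ADD FILE 
    (NAME = N'tempdev2', FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE 
    (NAME = N'tempdev3', FILENAME = N'T:\TempDB\tempdev3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE 
    (NAME = N'tempdev4', FILENAME = N'T:\TempDB\tempdev4.ndf', SIZE = 4GB, FILEGROWTH = 512MB);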

Patching takes forever. You need to bring both Windows and SQL Server completely up to date, but it can take minutes or hours of downtime to get all the right patches applied. And then there’s the BIOS and firmware, too. Instead of guessing how long the old box will take to shore up, just build a new one and get it perfect.

You need to test the new Cardinality Estimator. For several versions, your query plans haven’t changed when you upgraded, but those days are over. SQL Server 2014 brings dramatic changes to the Cardinality Estimator, and you need to check your queries against it to prevent a 100%-CPU-surprise on go-live day.
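
One way to run that check (the table and predicate here are placeholders; trace flag 9481 is Microsoft’s documented switch back to the legacy estimator):

-- With the database at compatibility level 120, this uses the new 2014 CE...
SELECT COUNT(*) FROM dbo.SomeBigTable WHERE SomeColumn = 42;

-- ...and this runs the same query under the legacy CE, so you can compare plans.
SELECT COUNT(*) FROM dbo.SomeBigTable WHERE SomeColumn = 42
OPTION (QUERYTRACEON 9481);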

You need a plan B. Patching and upgrading has a serious potential for failure. In theory, you could restore the OS backups and try again, but have you ever tested that? What happens if it fails, and you have to try it all on the fly? What’s your recovery time objective?

Kendra says: It’s funny, in-place upgrades always worked fine for me, unless it was on a production server.

Watch Brent Tune Queries [Video]

Ever wonder how somebody else does it? Watch over my shoulder for this beautifully recorded one-hour session at SQLRally Nordic in Copenhagen from a couple weeks ago:

The resources, scripts, and my Be Creepy process are all free too. Enjoy, and check out all of the session recordings for even more free learnin’.
