
And Party and Alt Shift

This is a cool SSMS trick I picked up a while back.

Though not nearly as far back as I wish I had. It’s so cool I made a .gif of it in action. When you’re done putting your socks back on, I’ll tell you how it works.

[AltShiftDemo.gif: the ALT+SHIFT multi-line edit in action]

This .gif was brought to you by the Cool SSMS Tricks Foundation, in association with Worldwide .gifs


Pure ALT+SHIFT magic.

Hold down both keys at the same time, and use your up and down arrow keys to navigate vertically. A thin grey line shows you exactly which rows you’ve grabbed. Then just type normally. I, uh, simulated a typing error to illustrate that you can also delete text this way. Yeah.

It really makes doing simple multi-line edits a breeze, especially if you don’t feel like setting up Excel formulas to do similar tasks. These are random Massachusetts zip codes, which is why they get a leading zero, and quotes.
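For instance, say you’ve pasted a column of zip codes out of a spreadsheet and need them quoted and comma-separated for an IN list. The table and column names here are hypothetical, but it’s the same kind of edit as in the .gif:

/* Before: bare zip codes, one per line. After a few seconds of
   ALT+SHIFT editing, every line gets its quotes and comma at once. */
SELECT *
FROM [dbo].[Customers]      /* hypothetical table */
WHERE [ZipCode] IN (
    '02108',
    '02134',
    '02467'
);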

Can you feel the efficiency?!

Kendra says: What in the… holy cow, that actually works!

Brent says: I knew about that trick, but ZOMG PEOPLE THERE IS A PRODUCTIVITY GIF IN OUR BLOG

Happy Fourth Birthday to Us, 40% Off Presents for You

Four years ago this month, we turned this blog into a consulting company.

We’ve had so much fun over the last few years, and we’re really proud of what we’ve built:

  • Over 500 SQL Critical Care® patients
  • Thousands of training video customers
  • Thousands of in-person training attendees at dozens of classes and pre-cons
  • Hundreds of free YouTube videos watched by over a million people
  • Almost 50,000 email subscribers
  • Several million web site viewers
  • A handful of awesome full time employees

Let’s celebrate. Between now and Tuesday 4/28, discount code Our4thBirthday gets you 40% off online sales of our training videos. Enjoy!

How to Evaluate NoSQL Case Studies [Video]

Every now and then, one of our clients considers adopting an alternative database platform – sometimes NoSQL, sometimes a brand new relational database. They’ll ask for our help in evaluating the vendor’s solution.

One of the best ways to do it is to ask the database vendor to set us up on a WebEx or GoToMeeting with one of their happy customers. I want to hear from the technical folks, not management.

I run the call with six PowerPoint slides – here’s how:

“Breaking” News: Don’t Install SQL Server 2014 SP1

Yesterday, Microsoft announced availability of Service Pack 1, saying:

As part of our continued commitment to software excellence for our customers, this upgrade is available to all customers with existing SQL Server 2014 deployments via the download links below.

Yeah, about that commitment to software excellence.

This morning, the download is gone:

Notice: The SQL SSIS team has found an issue with SP1 installation if SSIS catalog is present in the SQL Server instance. They are currently investigating this issue including possible workarounds and fixes.

Oof – the term “possible workarounds and fixes” doesn’t sound good for those who jumped in and applied the patch. A commenter on the MS Data Platform Insider blog reported that it hosed the master database and broke the instance, in a way suspiciously similar to a bug in SQL Server 2012 SP2.

(And jeez, what is it with service packs lately? Remember the SQL 2012 SP1 100% CPU issue? I’m starting to think you’re safer with cumulative updates than with service packs.)

Remember, kids, don’t rush into patching. If your servers are mission critical, test in your staging environment first – staging is the DBA’s development. (No, your development environment isn’t staging – because your developers make their living in the dev environment, and if you broke that with SP1 yesterday, you’ll be slaving away today to get your dev instance back up and running.)
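If you’re not sure which build a given server is on before and after patching, SERVERPROPERTY will tell you – these are standard built-ins:

/* Check the build and patch level before and after applying a service pack */
SELECT SERVERPROPERTY('ProductVersion') AS [BuildNumber], /* e.g. 12.0.xxxx for SQL Server 2014 */
       SERVERPROPERTY('ProductLevel')   AS [PatchLevel],  /* RTM, SP1, ... */
       SERVERPROPERTY('Edition')        AS [Edition];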

Update: the fix is in. If you applied SP1, follow the instructions in this StackExchange post.

Moving Databases Made Easy – SQL Server on a File Share

We’ve all had to move a database. For databases above a small size, this is a painful process. While there are a number of ways that you can move a database without downtime, they all involve a lot of extra work. What if there were something easier?

Every database is unique and precious.

What’s the Real Problem?

For most people, moving a database is a one-time thing – you’re moving to a new server. That’s pretty much it. I spent five minutes staring at my ceiling trying to come up with other reasons most people might move a database from one SQL Server to another.

For some people, though, databases could move on a regular basis. Think about it like this – you host a software as a service (SaaS) application. You’ve got one database per relative unit of work (customer, forum, cat picture collection). These databases can sit idle for months before, suddenly, there’s tremendous activity.

Normally, you wouldn’t make each SQL Server capable of handling peak load for any section of your SaaS. Instead, you would probably have a few servers ready to handle the worst load and move your busiest database over to those servers.

Typically, this is going to involve a small outage and setting up something like database mirroring or log shipping in order to move the database with minimal down time.

SMB to the Rescue!

Microsoft has supported housing SQL Server databases on a file share for a while now. If you’re not familiar with the idea, I recommend checking out the MSDN article Install SQL Server with SMB Fileshare as a Storage Option.

Moving a database with SMB is straightforward – detach the database from the old server, attach the database to the new server, and bring the database online. For most of us, this quick change is much preferred to other methods of moving the database.
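Here’s a rough sketch of what that looks like – the database name and UNC paths are made up, and you’d still need to handle logins, jobs, and connection strings separately:

/* On the old server: detach the database */
USE [master];
EXEC sp_detach_db @dbname = N'SalesDB';

/* On the new server: attach the same files, still sitting on the SMB share */
CREATE DATABASE [SalesDB]
ON (FILENAME = N'\\FileServer\SQLData\SalesDB.mdf'),
   (FILENAME = N'\\FileServer\SQLData\SalesDB_log.ldf')
FOR ATTACH;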

SMB File Shares and You

Although putting databases on SMB file shares is a relatively new option, it’s worth considering for new SQL Servers. SMB file shares make it easy to move databases between SQL Servers. If you want to learn more about using SMB, check out the MSDN article on using SMB and SQL Server.

Brent says: If your storage area network (SAN) is using older connectivity methods like 2Gb or 4Gb FC, and your Ethernet is 10Gb, you might even see a performance boost.

SELECT INTO and non-nullable columns

SELECT…INTO is one of my favorite SQL Server features.

It’s great for creating table skeletons with false WHERE clauses (1=2), moving a real table to a staged/temp location for testing, etc.
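If you haven’t seen the skeleton trick, it looks like this – the table names are hypothetical:

/* Copies dbo.Sales' column names and types, but no rows –
   and, as we're about to see, no indexes or constraints either */
SELECT *
INTO [dbo].[Sales_Staging]
FROM [dbo].[Sales]
WHERE 1 = 2;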

In SQL Server 2014

It acquired the ability to go parallel, which is pretty neat, but that’s not what we’re talking about here.

It has some limitations

Chief among them is this:

Indexes, constraints, and triggers defined in the source table are not transferred to the new table, nor can they be specified in the SELECT…INTO statement. If these objects are required, you can create them after executing the SELECT…INTO statement.

Which is… Sort of true. There’s a trick, and I’ll show you a quick example here with another of my favorite things: a Numbers table.

;WITH E1(N) AS (
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL ),                        /* 10 rows */
E2(N) AS (SELECT NULL FROM E1 a, E1 b, E1 c, E1 d, E1 e, E1 f, E1 g, E1 h, E1 i, E1 j), /* up to 10^10 rows */
Numbers AS (SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N FROM E2)
SELECT [N].[N]
INTO #NumbersTest
FROM [Numbers] N;

ALTER TABLE #NumbersTest ADD CONSTRAINT [PK_Numbers] 
PRIMARY KEY CLUSTERED (N) WITH (FILLFACTOR = 100);

Trying to add the PK constraint here fails, because the column is nullable:

Msg 8111, Level 16, State 1, Line 37
Cannot define PRIMARY KEY constraint on nullable column in table '#NumbersTest'.
Msg 1750, Level 16, State 0, Line 37
Could not create constraint or index. See previous errors.

We can verify this by looking at the table metadata:

SELECT [columns].[name], [columns].[is_nullable]
FROM tempdb.sys.columns 
WHERE [object_id] = OBJECT_ID(N'tempdb..#NumbersTest');


name     is_nullable
N           1

So how do we fix this? We could alter the table – a quick sketch of that is below – but that won’t leave us with the lasting satisfaction of proving BOL wrong on a technicality.
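If you just want it fixed, the boring route works fine – note the different constraint name, so it won’t collide with the one we create in a minute:

/* Flip the column to NOT NULL in place, then add the PK
   (ROW_NUMBER() returns bigint, so that's the column's type) */
ALTER TABLE #NumbersTest ALTER COLUMN N BIGINT NOT NULL;

ALTER TABLE #NumbersTest ADD CONSTRAINT [PK_NumbersTest_Altered]
PRIMARY KEY CLUSTERED (N) WITH (FILLFACTOR = 100);

But where’s the fun in that? Instead, we’ll adjust our code a bit, and try again.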

;WITH E1(N) AS (
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL  ),                          
E2(N) AS (SELECT NULL FROM E1 a, E1 b, E1 c, E1 d, E1 e, E1 f, E1 g, E1 h, E1 i, E1 j),
Numbers AS (SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N FROM E2)
SELECT 
ISNULL([N].[N], 0) AS N  /* <--- The magic is here! */
INTO #NumbersTest_IN
FROM [Numbers] N;

ALTER TABLE #NumbersTest_IN ADD CONSTRAINT [PK_Numbers] 
PRIMARY KEY CLUSTERED (N) WITH (FILLFACTOR = 100);

This time, with the addition of an ISNULL check on the column, it ‘inherits’ the NOT NULLable property, and the PK constraint adds successfully. We can verify that by checking the table metadata, if you don’t believe me:

name     is_nullable
N           0

Note that this same behavior does not occur if you replace ISNULL() with COALESCE() – under the hood, COALESCE is expanded to a CASE expression, and CASE expressions are considered nullable.
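Here’s a quick way to prove that to yourself:

/* Same pattern with COALESCE: the new column still comes out nullable */
SELECT COALESCE([N].[N], 0) AS N
INTO #NumbersTest_CO
FROM #NumbersTest_IN N;

SELECT [columns].[name], [columns].[is_nullable]  /* is_nullable = 1 this time */
FROM tempdb.sys.columns
WHERE [object_id] = OBJECT_ID(N'tempdb..#NumbersTest_CO');

DROP TABLE #NumbersTest_CO;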

And with that, you can clean up the test tables. Or keep them around. Do some math. Whatever blows your hair back.

Got a favorite use for a numbers table outside of string splitting or doing date math? Let me know in the comments! I may write a follow up.

DROP TABLE [#NumbersTest];
DROP TABLE [#NumbersTest_IN];

Brent says: Wow. That is a really slick trick.

Kendra says: I know some people have bias against SELECT INTO because it seems sloppy and you have to go the extra mile to get the right types, but it can be a great tool. Don’t rule it out.

Jeremiah says: I really like this method – it’s a quick way to copy tables without resorting to other, more cumbersome techniques.

The @DBAreactions Guide to In-Memory OLTP (Hekaton)

At SQLbits last month, I presented a new session: Cool Story, Bro – The DBAreactions Guide to SQL Server 2014. I wanted to have some fun while educating folks about the surprise gotchas of the newest features.

Here’s the In-Memory OLTP (Hekaton) section of the session:

Our sp_Blitz® has long warned you if Hekaton is in use, and its Hekaton detail page shows some of the limitations.

How Do I Know My Query Will Be Fast In Production?

We’ve all had this problem before – the production server is significantly more powerful than the development server. You tune a query and you’re pretty sure you’ve made it faster, but there’s that doubt in the back of your mind. There are so many things that can change performance – memory, CPU count, disk performance – how can you tell that you’ve made a query better?

Measuring Slowness

When you know you have a slow query that you need to tune, run it in production one last time. But before you do that, gather CPU and IO metrics using STATISTICS IO and STATISTICS TIME:

SET STATISTICS IO, TIME ON; 

EXEC dbo.MyAwfulQuery @slow = 'yes';

You’ll get a bunch of jibber jabber in the “Messages” tab that documents how much CPU time was spent compiling and running your query as well as information about logical IO. Save this off to a text file for now.
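The output looks something like this – illustrative numbers, and a made-up table name:

Table 'Posts'. Scan count 1, logical reads 101420, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 2387 ms,  elapsed time = 4103 ms.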

Measuring Fastness

Go to your dev server. Tune your query. I’ll wait.

As you’re tuning your query, pay attention to the CPU time and logical IO numbers that you’re getting back from STATISTICS IO and STATISTICS TIME. These are a relative indication that things are getting better. If there are CPU differences, you’re likely to see different CPU numbers on that dev machine, but that’s OK. We’re looking to compare total logical IO and total CPU time in milliseconds.

If you want an easy route to compare your results, you can paste the output from STATISTICS IO and STATISTICS TIME into statisticsparser.com. This will go through the results and push the data into a nice table for your perusal.

How Much Faster Will my Query Be?

Using STATISTICS IO and STATISTICS TIME to tune helps me figure out how well I’m doing compared to where I started from. Some queries move around a lot of data. In these cases, it can help to know how much faster dev is than production.

If you really want a reasonable comparison between dev and production, you can find some guidance in the Fast Track Data Warehouse Reference Guide. There’s a section of this document about measuring the MCR (Maximum Consumption Rate) and BCR (Base Consumption Rate) of your SQL Server. These numbers reflect 100% cache reads and 100% disk reads, respectively.

Once you have MCR and BCR, you can make effective predictions about how query performance could differ between the two. This gives you the power to predict, with some margin of error, how fast newly developed queries can run.
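As a made-up example: if your dev server’s MCR works out to 200 MB/sec and production’s to 800 MB/sec, a fully cached query that churns through 4,000 MB should need roughly 20 seconds of CPU time in dev, but only about 5 seconds in production. That 4x ratio, plus a healthy margin of error, becomes your scaling factor for newly developed queries.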

But is it Faster?

Between STATISTICS IO and STATISTICS TIME you can determine if a query is going to use fewer resources. The MCR and BCR give you relative numbers for how fast a server can process data. Between these measurements, you can predict how a query will perform between environments and what kind of impact your changes will have.

Brent says: As you get used to doing these measurements, you’ll be able to tell whether a new dev-only query is going to be fast enough in production, too. You’ll know how many logical reads is slow (or fast) in your environment.

Erik says: The hardest thing for me to get was ever having data in dev commensurate with production. If you don’t have it, get it. If you have it, don’t waste it.

Updated sp_Blitz®, sp_AskBrent® Tutorial Videos [Video]

Every month, tens of thousands of people get our First Responder Kit – a free SQL Server download pack with scripts, e-books, worksheets, and posters. They usually get started by watching our tutorial videos, so we figured it’s time to update ’em.

Here’s how to use sp_Blitz®:

And here’s how to use sp_AskBrent®:

To get ’em, get our First Responder Kit now. Enjoy!

Defragmentation Proven to Completely Fix Performance Issues

SPRINGFIELD – Local systems administrator Jackie Toole was carried through city streets today on the backs of ecstatic Acme Tooling employees.

For months, Acme’s accounting department had complained of slow performance in their payroll application. “Saving a new hire took minutes, and we had no idea what was going on. Jackie did something, and next thing you know, everything was just blazing fast. It was incredible!”

Fellow admin Marianne O’Reilly took a moment out of the parade to say, “We’d been throwing hardware at it left and right, from a terabyte of memory to a huge new solid state storage device. We just couldn’t crack the problem. Jackie just nailed it right away with that defrag. Even the worst database queries just fly now!”

In related news, local training schools have suffered a dramatic decrease in enrollment as word continues to spread that programming quality just doesn’t matter.
