
SQL Server Version Detection

SQL Server
22 Comments

Every now and then, you need to figure out which version of SQL Server you’re using without knowing in advance. This might happen in a script (like sp_BlitzCache) or during a migration.

Getting the SQL Server Version with @@VERSION

The first thing that comes to mind is @@VERSION. On my SQL Server 2014 RTM installation, this returns an ugly string. Unless you like parsing multiline strings in T-SQL, you’re going to be left using LIKE on a hideous string like this one (reconstructed from a 12.0.2000.8 instance – edition and OS details will vary):
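
Microsoft SQL Server 2014 - 12.0.2000.8 (X64)
    Feb 20 2014 20:04:26
    Copyright (c) Microsoft Corporation
    Developer Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: )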

Imagine if you needed to get the build number of SQL Server out of there to validate against a list of builds to see if you had the current version of SQL Server installed. Pulling the build out of the major.minor.build.revision string isn’t easy when you first have to pull that string out of a bigger string.

There’s a better way than mucking around in @@VERSION.

Getting the SQL Server Version with SERVERPROPERTY

The SERVERPROPERTY built-in function has a lot of great functionality. We’re only going to concern ourselves with the ProductVersion portion of SERVERPROPERTY. How do we use it?
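
Like this:

SELECT SERVERPROPERTY('ProductVersion');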

That should return nothing but the major.minor.build.revision for the current version of SQL Server. In my case, this returns: 12.0.2000.8.

Rather than parse the values myself, I created a temporary table with computed columns:
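
(A sketch – PARSENAME over the dotted version string is one reasonable way to split it, and the column names match the output below.)

CREATE TABLE #checkversion (
    version NVARCHAR(128),
    -- the first "major.minor" chunk as a number, e.g. 12.0 or 10.5
    common_version AS CAST(SUBSTRING(version, 1, CHARINDEX('.', version) + 1) AS DECIMAL(10,1)),
    -- PARSENAME treats the dotted string like a four-part object name: part 4 is the major version
    major AS PARSENAME(CONVERT(VARCHAR(32), version), 4),
    minor AS PARSENAME(CONVERT(VARCHAR(32), version), 3),
    build AS PARSENAME(CONVERT(VARCHAR(32), version), 2),
    revision AS PARSENAME(CONVERT(VARCHAR(32), version), 1)
);

INSERT INTO #checkversion (version)
SELECT CAST(SERVERPROPERTY('ProductVersion') AS NVARCHAR(128));

SELECT version, common_version, major, minor, build, revision
FROM #checkversion;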

The common_version column will display the version of SQL Server as a floating point number – 12.0 for SQL Server 2014 or, in the case of SQL Server 2008 R2, 10.5. When you’re targeting scripts for specific families of SQL Server, knowing the high-level version can make scripting a lot easier.

What’s the final output look like for me?

version      common_version  major  minor  build  revision
12.0.2000.8  12.0            12     0      2000   8

There you have it

That’s really all there is to finding and parsing the SQL Server version. Just a simple insert into a temporary table and a few computed columns and you’re ready to go.


Microsoft Ignite SQL Server Keynote Liveblog #MSIgnite

SQL Server
5 Comments

Yes, there are actually multiple keynotes here at Ignite, Microsoft’s new infrastructure conference. With so many products (Windows, O365, SharePoint, Dynamics, mobile, Azure, etc.) and so many attendees (over 20K), they had to break things up a little. This morning, CEO Satya Nadella and friends delivered the overall keynote. This afternoon, Microsofties are delivering a handful of different keynotes by topic, and I’m sitting in the SQL Server one.

Post-Event Summary: Microsoft demoed a few of the features discussed in the SQL Server 2016 data sheet:

  • Always Encrypted – data is encrypted by the client drivers, and unencrypted data is never seen by SQL Server (or SSMS, or Profiler, or XE)
  • Operational Analytics – means you can put an in-memory columnstore index atop your in-memory OLTP tables, and do reporting against Hekaton transactional data
  • R analytical workloads will live inside the SQL Server engine just like Hekaton does
  • Support for Windows Server 2016 and up to 12TB of RAM (which is great, given all the in-memory features)
  • The newly acquired DataZen mobile-friendly reports will ship in the box with SQL Server
  • Stretch tables are like table partitioning, but some of the partitions will live in Azure, while the hot/current data lives on-premises in your SQL Server

My Liveblog Archive

Here are my notes from the session, oldest to newest:

3:15PM – Shawn Bice, Engineering Partner Director, taking the stage to talk about SQL Server 2016 and Azure SQL Data Warehouse.

3:18 – “In this new social environment, people are tweeting.” YOLO.

3:20 – Paraphrasing Shawn: “In the cloud, we want an engineer to come to work, have an idea, and fail as fast as possible. You need to figure out what works and what doesn’t. In the boxed product, you can’t do that. We can try all kinds of things in the cloud, figure out what works, and then put that stuff into the boxed product.”

3:22 – Talking about how the cloud has made things more reliable. I’d be a lot more impressed with this if they hadn’t just totally borked SQL 2014 SP1.

3:23 – “By the time you get SQL Server 2016, we’ll have been running it at cloud scale for many, many months.”

3:24 – The big 3 pillars for this release: mission critical performance, deeper insights across data, and hyperscale cloud.

3:26 – You have a transactional system, then you get the data out via ETL and put it in a data warehouse. It’s delayed by 2-24 hours. If you’re doing your fraud detection in the data warehouse, you’re detecting things a day too late. How can we bring both rows (Hekaton) and columns (ColumnStore) together faster?

3:27 – Rohan Kumar onstage to demo applying an in-memory columnstore index to an in-memory OLTP table. Instant applause just at the explanation of what’s about to happen. Shawn: “Alright, we’re off to a good start!”

3:30 – “Close to zero impact on your OLTP transactional systems.” I know some people will go, “wait, that means there’s an impact!” Well, you never get something for nothing, and with today’s hardware, it’s pretty easy to buy faster gear to pay for eliminating an ETL process altogether.

3:33 – For the record, we can’t see the build number onscreen, but his instance name included CTP2X.

3:34 – On to Always Encrypted. Interesting that there’s a space between these words, unlike AlwaysOn. “Always Encrypted is literally about always encrypting the data.” Requires an enhanced ADO.NET library that uses a column encryption key.

3:36 – Rohan back onstage to show Always Encrypted with…Profiler. I LOVE MY PROFILER. It will never die.

3:42 – To migrate your existing data into Always Encrypted, they’re suggesting using SSIS to pull the table down, encrypt it on the client side, and then push it back into a new table. Not exactly seamless, but it points out that the SQL Server engine is simply not going to be involved in the encryption/decryption process. You’re not going to be directly accessing this data via T-SQL queries in stored procedures – it’s gonna be encrypted there.

3:43 – Support for Windows Server 2016 with up to 12TB of memory. I’m gonna go out on a limb and guess that’s not in SQL Server Standard Edition.

3:44 – PolyBase – query relational and non-relational data with T-SQL by introducing the concept of external tables. “We did it a few years ago but it was only available in our APS/PDW, but it’ll be available to you in the box in SQL Server 2016.”

3:45 – About the acquisition of Revolution Analytics – we’re adding a 4th workload to SQL Server. When we added in-memory OLTP, we didn’t ask you to build separate servers. It’s a feature, you just turn it on. R analytics will be the same thing, hosted in the SQL Server process.

3:48 – Yvonne Haarloev coming onstage to demo DataZen.

3:52 – Looks like the DataZen demos are happening on a Surface. Was rather hoping the mobile strategy demos would be done on, uh, a mobile device.

3:55 – Yay, mobile dashboard demos! The downside is that you have to design a dashboard for each device’s form factor – they’re not responsive design. Still, way better than what we had. But like SSIS, SSAS, and SSRS, this is yet another designer and a totally different code base that BI pros will have to deal with. No cohesive story here – but you just can’t expect one with the recency of that acquisition.

3:56 – “We are very committed to Advanced Analytics and bringing it into the engine. If you found in-memory OLTP to be favorable…” If you did, I would actually love to talk to you. Seriously. Talk to me.

3:59 – Stretch tables – basically, table partitioning that splits the table between on-premises SQL Server and Azure storage. It will support Always Encrypted and Row Level Security. The entire table is online and remains queryable from on-premises apps. Transparent to applications. (I’m just typing from the slide, no opinion here.)

4:00 – Stretch tables are activated by sp_configure ‘remote data archive’, 1 – it’s not turned on by default. From there, you can enable a specific database to stretch to Azure using a wizard that starts with a login to Azure.

4:03 – As part of the Azure sign-in process, Rohan got a call from Microsoft because he has two-factor authentication turned on. He had to put in his PIN, and then the SSMS wizard kept going. Spontaneous applause from the audience – great to see enthusiasm for security features.

4:08 – The stretch tables demo failed due to firewall configs. Oops.

4:13 – Demoing a restore of a database that has stretch tables. The Azure connection metadata is apparently stored in the SQL Server database, so it just gets attached again after the local part of the data is restored.

4:16 – Now switching to Azure SQL Data Warehouse. It has separate storage and compute – you don’t need compute power all the time for data warehouses, like during big batch jobs. You only have to pay for that compute power when you need it for the jobs. It scales elastically, so you can crank up big recommendation horsepower during peak holiday shopping seasons, for example.

4:21 – Rohan back onstage for the first public demo ever of Azure Data Warehouse.

4:27 – Discussing what work retailers have to do when they have discount battles. What would it really cost us to match another company’s discount? Could we profitably pull that off? Power BI helps answer that question with large amounts of compute quickly.

4:28 – Like this morning, they’re talking up the differentiators between Azure SQL Data Warehouse and Amazon Redshift.

4:29 – Shawn: “This is all real, none of this is faked out.” “I’m more excited than ever.” Yeah, there’s a lot of really good stuff going on here this year. And that’s a wrap!


Reading the SQL Server 2016 Data Sheet

SQL Server
42 Comments

The SQL Server 2016 PDF data sheet is out, giving us the first public look at the next round of features in Microsoft’s database product.

Quick disclaimer: I’m a Microsoft MVP and Certified Master, and we’ve got clients that work with Microsoft. As such, I’m bound by a few non-disclosure agreements. For this post, I’m going to take off my regular hat, and put on my reverse-engineering hat.

“In-memory enhancements: 30X in-memory OLTP + 100X in-memory ColumnStore = Real-time operational analytics” – SQL Server 2014 brought us each of these features individually. ColumnStore indexes really can get you 100X performance on your analytical warehouse-type queries, and Hekaton may or may not give you rocket speed transactional performance. Both of these storage types are very specialized – for serious performance reasons, you don’t want to run analytical queries against your Hekaton tables, and you don’t really want to do transactional delete/update/insert activity against your ColumnStore tables. The graphics in the brochure show one transaction going into both tables. While this might at a glance seem like you get the best of both worlds, keep in mind that both of these say “In-memory” – and you’ll probably need two worlds’ worth of memory to pull this off. Fortunately, servers like that are available today.

“Always Encrypted technology helps protect your data at rest and in motion, on-premises and in the cloud, with master keys sitting with the application, without any application changes.” In the past, we’ve basically had two places we could encrypt or decrypt data:

  • In the database – in which case, the database administrator could see private data, thereby giving security professionals nightmares
  • In the application – which also gave the security pros nightmares, because our developers would be rolling their own encryption code

The key phrase here says “protect your data at rest and in-motion…without any application changes.” Microsoft’s implying app-level encryption, keeping those prying DBA admins out. However, if the data is encrypted before it hits the database, you need to consider the following scenarios:

  • Analytics queries – your analytics and reporting apps will have to be equipped to decrypt the data if you need to show it onscreen
  • Range queries – if you need to see all of the users whose last names are like Smi%, you need to decrypt them
  • Replication – if you want to copy the data from one SQL Server to another, it usually needs to be decrypted on the way out

“High Availability…with the ability to have up to 3 synchronous replicas, DTC support, and round-robin load balancing of the secondaries.” No reading between the lines necessary here – these are hotly requested features for AlwaysOn Availability Groups.

“Manage document data with native JSON support.” While this has also been hotly requested, I’d caution you to be careful what you ask for. Users demanded XML support inside SQL Server, and then proceeded to use SQL Server as an XML query engine, sending CPU through the roof. SQL Server is one of the world’s most expensive application servers.

“Scale and Manage – Enhanced performance, scalability and usability across SQL Server Enterprise Information Management tools and Analysis Services.” Careful reading this one, and note which column it’s in – “Deeper Insights Across Data.” While you might read this as a better/faster/stronger SQL Server Management Studio, that’s not what Enterprise Information Management means.

“Powerful Insights on any device – Business insights through rich visualizations on mobile devices. Native apps for Windows, iOS and Android. New modern reports for all browsers.” Microsoft recently acquired DataZen, a reporting tool that focuses on mobile devices, and promptly gave it away free to most Enterprise Edition customers. The simplest way to accomplish this feature would be to simply throw DataZen’s existing code in free with the SQL Server boxed product. Like any acquisition, I wouldn’t expect this to become a seamless part of SQL Server for a year or two at least. (Think Microsoft’s acquisition of DATAllegro back in 2008, which gradually transitioned into Parallel Data Warehouse, aka APS. Some will say, “Well, that was a tougher product to build,” but native apps for mobile BI ain’t an easy nut to crack either – as evidenced by Microsoft’s difficulty in getting to that sweet pistachio. Look, this analogy had to go off the rails at some point, we all saw that coming.)

“Advanced Analytics at massive scale – Built-in advanced analytics provide the scalability and performance benefits of running your “R” algorithms directly in SQL Server.” Just like you can run your C# code in the database, too. Or your XML parsing. Or your JSON. If you wanna do your processing in a $7k USD per core database server, Microsoft wants to empower you to do it. I would too, if I was wearing their exceedingly good-looking and expensive shoes. I like the way they think.

“Stretch Database technology keeps more of your customers’ historical data at your fingertips by transparently [sic] and stretching your warm and cold OLTP data to Microsoft Azure on-demand without application changes.” Database admins often come to me and say, “I’ve got a ton of data in a table, but most of it just isn’t queried that often. I need to archive it. Will table partitioning help?” Sadly, it usually doesn’t, because good table partitioning requires changes to your application. If you want great performance, you have to make sure your WHERE clause specifically excludes the archival partitions. Otherwise, SQL Server isn’t terribly good at excluding those partitions, and it ends up scanning all of the partitions.

Microsoft sounds like they’re moving the cold table partitions up into Azure storage. That had damn well better not be the only portion of the solution that they deliver – they have gotta deliver better partition elimination. If not, this feature is going to be legendarily bad, because scanning partitions up in Azure is going to be even worse than scanning partitions locally. Questions around a feature like this would include:

  • How will backups work?
  • How will restores work?
  • How will DBCCs work?
  • How will data be moved from on-premise to Azure?
  • Can multiple servers attach to the Azure partitions? (For example, if you regularly back up production and restore it over to development, how will that work?)

“Temporal Database: Track historical changes” – Whenever Jeremiah shows me an Oracle feature, I get so jealous. Oracle’s got this really cool feature where you can query a table to see what it looked like at 5PM yesterday, or at the start of the last financial quarter. It ain’t free/cheap – after all, you have to store all the historical data to make those queries work. For scenarios that need that level of archival detail, though, that’s killer.

“Row Level Security: Apply fine-grained access control to table rows based on users rights” – I blogged about this when it was unveiled in Azure SQL Database.

“Dynamic Data Masking: Real-time obfuscation of data to prevent unauthorized access.” – We’ve got clients who work with payroll, health, and financial data. They need to restore their production databases into their development and QA environments, but it’s critical that the developers not see personally identifiable information like social security numbers. These numbers are stored in the database. So what’s a DBA to do? Currently, the solution involves scrambling the data as soon as the restore completes, but that’s painful.

“Enhanced Database Caching: Cache data with automatic, multiple TempDB files per instance in multi-core environments” – First off, I would just like to point out that Microsoft uses my preferred capitalization of TempDB. Take that, rest of Brent Ozar Unlimited®. But back to the issue at hand – TempDB (BOOM!) configuration has long been a sore spot for database administrators. TempDB defaults to just one data file, no matter how many cores you have. In a perfect world, Microsoft would fix the SGAM and PFS page contention problem. Or they could just duct tape it by automatically adding multiple files when necessary. Sounds like they chose the latter. I’m okay with that.

“Enterprise-grade Analysis Services” – wait, what was it before?

“PowerBI with on-premises data: New interactive query with Analysis Services. Customer data stays behind your firewall.” – PowerBI looks great, but it’s for Office365 customers only right now. A lot of our clients have smiled and nodded past that. The cloud is indeed the future, and most of us believe most of the data will end up there eventually, but Microsoft’s acknowledging the reality that for most of the market, it’s still the future, not the present.

“Easy migration of on-premises SQL Server: Simple point and click migration to Azure.” Migrating databases isn’t the problem. Moving all of the application servers and related services up to the cloud, that’s where the hard work is.

And now it’s off to the afternoon SQL Server keynote here at Ignite! Wow, this is really turning out to be the conference to be at if you want to see the future of SQL Server.


Microsoft Ignite Morning Keynote Liveblog #MSIgnite

SQL Server
0 Comments

Summary: Microsoft announced SQL Server 2016, with the first public preview coming this summer. Release dates for the download were not announced. Here’s the SQL Server 2016 PDF data sheet, and here’s the SQL 2016 announcement.

As it happened:

8:00AM – Good morning, folks. This week, Microsoft’s new infrastructure conference, Ignite, opens in my current hometown of Chicago. Gotta love it when the conference comes to you – along with a sold-out crowd of 20,000 attendees.

Before talking about this morning’s keynote, let’s put the timing in context. Last week at the Microsoft Build conference in San Francisco, Microsoft made a few data-related announcements.

Given that Build is a developer-focused conference, where the audience would really have loved to hear dates, see demos, etc., and didn’t get them, I don’t expect to see more details coming out of this morning’s Ignite keynote.

However, there are two types of keynotes today. First up this morning, we have Satya Nadella doing the overall keynote. With the infrastructure focus of Ignite, I’d expect to see a heavy focus on Windows, O365, SharePoint, System Center, and Azure.

Later this afternoon, we have another set of keynotes: foundation keynotes, including one dedicated just to SQL Server. I’m much more excited about that one, and I’ll liveblog that one as well.

This morning, though, it’s all about Windows. So why am I here? Well, I got my start as a sysadmin, and I’ve never seen Satya Nadella in person before, and hey, it’s only about 20 blocks south of my house.

The keynote starts in about 90 minutes, and you can watch online at Ignite.Microsoft.com.

8:18AM – the logistics of filling a room this big, wow. It’s an air traffic control style job, with dozens of staffers waving lit batons around, filling up one section at a time. The front sections are reserved for press, a tech leaders forum, and so on.

8:30AM – the button-down DJ explained that the live online feed is about to start, so he left the stage. He’s a warmup act, evidently. The music continues even though he’s not onstage. This is actually a political statement that human beings are no longer needed for block-rockin’ beats, and Microsoft is about to unveil their new product, IntelliDJ.

8:36AM – The audience is cackling about the awkward transcriptions onscreen.

8:38AM – How the hell is the WiFi still working this well? Witchcraft!

8:40AM – A Microsoft Band on one hand and an old-school watch on the other is the new socks with sandals.

8:43AM – Transcriptions good for laughs. “So whale you’re watching the keynote…”

8:48AM – Sounds like the keynote streaming site is suffering issues already. If there’s 20K people here, I wonder how many people are trying to stream the video live. I don’t envy Microsoft – that’s hard work.

8:53AM – The pre-keynote-show is a neat idea, but talking to people about the motivation behind their tweets? Really?

8:58AM – It took two days to lay out the 15,000 chairs in this room. I believe it. Wow.

9:00AM – The WiFi failed me briefly, but only briefly. Nice job keeping this up and running!

9:03AM – The room goes dark, and they’re playing Microsoft’s version of the Dolby theater intro.

9:06AM – Common walks off the stage, and Satya Nadella takes over. The air conditioning just kicked on big time, too. Or maybe it’s the cloud coming in.

9:09AM – Satya: “We are the only company that feels deeply about both companies and organizations, and bringing them together to empower transformations.” Uhh, okay, no.

9:11AM – Mobile first, cloud first – but it’s not about the mobility of a single device. It’s about the mobility of the experience. It’s about the cloud back end, and adding intelligence to your experiences. (Interesting take – one way to read that is, me: “We’re giving up on one device to rule them all, and just putting our experiences on every device.”)

9:14AM – Satya: “There are going to be more devices than people on the planet.” In an environment like that, Microsoft can be a winner. Not THE winner, just one of multiple winners, and that’s a good thing. Microsoft has had a tough time selling devices that consolidate – for example, Surface Pro simply isn’t selling as the tablet that can replace your laptop, but if it’s just one of your devices, it makes much better sense.

9:17AM – Satya: IT is Innovation & Transformation. Cute. “Microsoft is the productivity and platform company that will thrive in the mobile-first, cloud-first world.”

9:20AM – Satya: “It is important to us to build trust into the core of the operating system…that’s not a bolt-on, you have to build that in.” That’s a tough sell here – it hasn’t historically been one of this platform’s strengths.

9:22AM – Announcing Windows Update for Business, but no details on that yet.

9:25AM – Satya’s talking a lot about letting people make their own technology choices, but letting IT staff get the control and compliance they need.

9:26AM – Office 2016 is going into a new public preview, Skype for Business will have live broadcasting features, Office Delve will have more organizational analytics. Will show you all of these as the keynote proceeds.

9:27AM – There’s been multiple announcements so far, but just kinda glossed over, no clapping.

9:28AM – Satya says SQL Server 2016 is “the biggest database breakthrough you’ve ever seen.” Makes the version number official, apparently – doesn’t seem to be a code name.

9:34AM – Satya telling the story about Fujitsu selling lettuce grown in their old clean rooms. “So think about it,” he says to the attendees. Okay. I’m thinking. How is Microsoft reinventing themselves, is that what you want me to think about?

9:35AM – “Real Madrid is an amazing brand. Everyone knows about them.” I have no idea who they are. The attendees apparently do, though – the first round of applause we’ve had yet as the CEO comes onstage.

9:38AM – Satya: “You’re like a software company now.” Talking about the work Real Madrid does in analyzing data about their fans. Pretty vague.

9:40AM – Details are starting to come out about Windows Update for Business.

9:41AM – Joe Belfiore coming out to talk about Windows 10, then Gurdeep Singh Pall will talk about reinventing productivity with Windows, O365, and Dynamics. Brad Anderson will talk about security.

9:44AM – Joe: “My mission is to convince you, and give you the tools to go back with, to get your end users to love Windows 10.” Sounds perfect for this audience. Alright, I’m in. Show me what you’ve got.

9:48AM – Joe says about 5-8% of users use Alt-Tab to switch between apps. Adding multiple desktop support. Hold down control/Windows/right-arrow and switch between desktops, or left-arrow to go back. Dragging an app from one desktop to another got the first spontaneous audience applause.

9:51AM – Demoing Cortana, asking her questions aloud. This is especially useful in today’s workplace with open cubes and meeting rooms. Oh, wait, hold on, I’m being told…

9:54AM – Demoing Cortana connecting to PowerBI. “What’s the number of people at Ignite by country?”  Pops them up on a map with circle graphs and no numbers, not really useful at all.

9:58AM – Announcing SQL Server 2016 – and the feature list!

10:04AM – Demoing Continuum, a new way of changing the Windows user interface to change between tablet mode and laptop mode depending on how you hold the device.

10:12AM – Demoing Windows Hello, a new authentication mechanism that works through the webcam. He takes a cloth off a webcam, and it unlocks his laptop in a matter of seconds without the user doing anything.

10:18AM – Belfiore off the stage. Good demos, relevant to this audience. Nothing jaw-dropping or magical, just steady progress. I don’t think he delivered on his commitment to get users excited about the Windows 10 UI, though.

10:19AM – Gurdeep: “Even as I speak, some of you are swiping right on Tinder.” HA!

10:21AM – Gurdeep is explaining how millennials are different by saying that his kid came up and told him “yolo.” Uhhhh..

10:23AM – Millennials work wherever they are. (News flash – seriously, this is not a millennial thing.)

10:24AM – Office 2016 Public Preview now available.

10:25AM – At Microsoft, we consider ourselves the custodians of productivity. (Again, news flash – millennials would probably tell you otherwise.)

10:27AM – Users can now create their own groups and teams on the fly, share things with them, schedule meetings. (I literally can’t even.)

10:29AM – “We’re making a huge bet on video for meetings….Polycom, Crestron…” (What year is this?)

10:32AM – Showing a HoloLens video and saying it’s a real, live product that’s that good. The reviews say otherwise.

10:36AM – Julia White coming onstage to do demos.

10:37AM – “This is the YouTube for the enterprise.” Well, in our company, I know what we would have: cats. Lots and lots and lots of cats.

10:40AM – Demoing the use of Delve to store files, share expertise, contact people. Isn’t this what SharePoint was supposed to be? When would I use one over the other?

10:46AM – Demoing live collaboration in Word that works just like Google Docs. Widespread applause. Have that many people really not seen Google Docs? Or are they clapping because Microsoft’s finally getting closer after all these years?

10:47AM – Sway will be part of O365 Business and Education soon.

10:52AM – Some awkward demos around collaboration and inking.

10:55AM – Julia’s off the stage. Her part seemed extreeeeemely rushed – guessing the prior presenters ran too long. This keynote is supposed to be over in 5 minutes. Oops, my bad. Thought it was 9AM-11AM, but it’s til 11:45AM.

10:59AM – Brad Anderson taking the stage to talk security. After 2 hours in a folding chair, I’m out. Time to grab some coffee and snacks, then I’ll blog the afternoon keynote that focuses on SQL Server.


New sp_Blitz® and sp_BlitzCache® Updates

SQL Server
16 Comments

Ah, spring, when a young man’s fancy lightly turns to thoughts of updating his DMV queries.

sp_Blitz®, our free health check stored procedure, brings several new checks and a whole crop of fixes and improvements. I’d like to call your attention to one in particular.

Julie Citro fixed check 1, THE VERY FIRST CHECK IN THE SCRIPT, which had a logic bug when looking for databases that had never been backed up. Sure, the next line had a workaround, but nobody caught this for years. All of you who ever read the source code, Julie Citro is officially a better tester than you.

sp_BlitzCache™, our performance tuning tool, has new updates too.

You can now run with @reanalyze = 1 first and sp_BlitzCache™ won’t break. You can run sp_BlitzCache™ from multiple SPIDs! And if you had triggers, sp_BlitzCache™ would start picking itself up in the results – that’s been stopped.

Also, an integer overflow has been fixed – queries using more than 28,000 days of CPU since startup will no longer cause an arithmetic overflow.

You can grab the updated scripts in our First Responder Kit.



Basic VMware and Hyper-V Terminology for the SQL Server DBA

Virtualization
3 Comments

SQL Server DBAs often need to work with virtualization – and frequently need to work with multiple virtualization platforms. Many platforms have similar features, but they often have different names, and it’s hard to remember what everything’s called.

I’ve got a little cheat sheet for you for two of the platforms! (No offense, Xen fans, please don’t come after me.)

Each item below gives the feature, then the VMware name, then the Hyper-V name:

  • Management tools, free – VMware: vSphere / web client; Hyper-V: Hyper-V Manager (snap-in)
  • Management tools, paid – VMware: vCenter Server; Hyper-V: System Center Virtual Machine Manager
  • Automatic migration to another host if one fails – VMware: High Availability (HA); Hyper-V: High Availability (configured via a role in a Windows Failover Cluster)
  • Moving a VM from host to host – VMware: vMotion; Hyper-V: Live Migration
  • Moving a VM from one storage subsystem to another – VMware: Storage vMotion; Hyper-V: Storage Migration
  • Automatic load balancing – VMware: DRS (Distributed Resource Scheduler); Hyper-V: Dynamic Optimization in VMM (Virtual Machine Manager, a part of System Center)
  • Rules to keep VMs from being too close – VMware: Affinity Rules for DRS; Hyper-V: Availability Sets, part of Intelligent Placement in VMM



Microsoft SQL Server Licensing Simplified into 7 Rules

Licensing, SQL Server
203 Comments

Licensing is really complex, but as long as you know these seven rules, you can go a long way:

  1. If you query it, you have to license it.
  2. “It” means the Windows environment – all of the processor cores that Windows sees. (Things get a little weirder under virtualization.)
  3. Offloading a backup or a DBCC is considered querying until you license SQL Server 2019 with Software Assurance.
  4. If you license it, and you pay Software Assurance, you get exactly one standby server of equivalent size. (Standby means you’re not querying it.) Details here.
  5. Standard Edition costs about $2k USD per core, but all currently supported versions cap out at 24 cores and 128GB RAM.
  6. Enterprise Edition costs about $7k USD per core.
  7. Software Assurance is an additional annual fee that gives you free upgrades as long as you keep paying for it.

To learn more, check out Microsoft’s SQL Server licensing resources.


Should You Be a SQL Server DBA? (video)

SQL Server
24 Comments

Have you ever wondered if you should be a SQL Server Database Administrator? If you think this might be the right career path for you, watch this video and take our quiz!

Got limited time or need to restart the quiz? No problem, we’ve got a table of contents with time indicators below.

“Should You Be a SQL Server DBA?” Table of Contents:

00:21 – Getting the wrong job stinks
00:44 – Getting the right job is awesome
01:44 – Get paper and pen, it’s quiz time!
02:11 – Question 1
02:38 – Question 2
03:09 – Question 3
03:22 – Question 4
03:39 – Question 5
03:59 – Question 6
05:39 – Question 7
06:41 – Question 8
07:26 – Time to score this thing!
07:33 – Scoring Question 1
08:45 – Scoring Question 2
10:25 – Scoring Question 3
11:38 – Scoring Question 4
12:49 – Scoring Question 5
13:43 – Scoring Question 6
15:37 – Scoring Question 7
17:25 – Scoring Question 8
19:28 – What does this all mean?
20:29 – Traits of a happy DBA
21:33 – Learn more at BrentOzar.com/go/Career

Brent says: if you think this quiz is just for people who aren’t in the DBA business already, think again. Kendra’s title is “SHOULD you be a DBA?” And, uh, some of us probably shouldn’t. (I love this stuff though.)


Are Index ‘Included’ Columns in Your Multi-Column Statistics?

Indexing, SQL Server
3 Comments
Internals Cat populates the density vector

When you create an index in SQL Server with multiple columns, behind the scenes it creates a related multi-column statistic for the index. This statistic gives SQL Server some information about the relationship between the columns that it can use for row estimates when running queries.

But what if you use ‘included’ columns in the index? Do they get information recorded in the statistics?

Here’s my Index

To test, we’ll create an index with multiple key columns and included columns. I’m using a restored copy of the StackOverflow database.
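
A sketch of the index in question – the key and included columns match the ones discussed below, but the index name is my own invention:

CREATE INDEX ix_Posts_OwnerUserId_PostTypeId
    ON dbo.Posts (OwnerUserId, PostTypeId)
    INCLUDE (FavoriteCount, LastEditDate);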

First Stop: sys.Stats and Friends

The sys.stats DMV lets me query metadata about my statistics, and I can join up to find out how many columns each statistic has, what order they’re in, when they were updated, and all sorts of info:
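
(A sketch of one way to write that join – column aliases are illustrative.)

SELECT s.name AS statistic_name,
       sc.stats_column_id,                              -- position of the column within the statistic
       c.name AS column_name,
       s.auto_created,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
    ON  s.object_id = sc.object_id
    AND s.stats_id  = sc.stats_id
JOIN sys.columns AS c
    ON  sc.object_id = c.object_id
    AND sc.column_id = c.column_id
WHERE s.object_id = OBJECT_ID('dbo.Posts')
ORDER BY s.stats_id, sc.stats_column_id;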

Here are the first few columns of the results:

[screenshot: the statistics and their columns, listed in index key order]

This doesn’t show FavoriteCount or LastEditDate. My index key columns are in the statistic, in the same order they appear in the index.

What About DBCC SHOW_STATISTICS?

We can see a representation of the statistic with more information. We just plug in the name of the table and the statistic to DBCC SHOW_STATISTICS, like this:
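
DBCC SHOW_STATISTICS ('dbo.Posts', ix_Posts_OwnerUserId_PostTypeId);

(The statistic behind an index shares the index’s name, so this passes the index name from the sketch earlier.)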

This returns a few result sets. We get information about the header of the statistics, some density information about the relationship between the columns, and a histogram that shows the distribution of rows for the leading column in the statistic.

[screenshot: DBCC SHOW_STATISTICS output – header, density vector, and histogram]


That Density Vector Has Three Rows!

Check that out, the density vector has a third row to help SQL Server estimate data distribution for OwnerUserId, PostTypeId, and the Id column. We didn’t specify the Id column anywhere in our index!

The Id column is the key of the clustered index on this table. SQL Server decided that it would also be interesting to know the density information of that as a third column.

What if the Index Was Unique? Would it Still Sneak In the Clustering Key?

The index above doesn’t have unique data, but let’s test a different index:
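
(A sketch – the table and index names here are illustrative; what matters is that the column’s data is unique and the index is created UNIQUE.)

CREATE UNIQUE INDEX ux_VoteTypes_Name
    ON dbo.VoteTypes (Name);

DBCC SHOW_STATISTICS ('dbo.VoteTypes', ux_VoteTypes_Name);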

Here’s the density vector:

[screenshot: the density vector for the unique index, showing rows for Name and Name + Id]

SQL Server decided that tracking the density information for the combination of Name and Id was still useful, even though this is a unique index and SQL Server didn’t have to sneak that column into the key of the index to make it unique behind the scenes.

Included Columns Weren’t in the Density Vector, but My Clustering Key Was!

In other words, included columns won’t be in the density vector unless they happen to also be in the key of your clustered index.


And Party and Alt Shift

SQL Server
53 Comments

This is a cool SSMS trick I picked up a while back

Though not nearly as far back as I wish I had. It’s so cool I made a .gif of it in action. When you’re done putting your socks back on, I’ll tell you how it works.

[animated .gif: highlighting multiple lines with ALT+SHIFT and typing across all of them at once]
This .gif was brought to you by the Cool SSMS Tricks Foundation, in association with Worldwide .gifs


Pure ALT+SHIFT magic.

Hold down both keys at the same time, and use your up and down arrow keys to navigate vertically. There will be a thin grey line showing you exactly which rows you’ve grabbed. Then just type normally. I uh, simulated a typing error, to illustrate that you can also delete text doing this. Yeah.

It really makes doing simple multi-line edits a breeze, especially if you don’t feel like setting up Excel formulas to do similar tasks. These are random Massachusetts zip codes, which is why they get a leading zero, and quotes.

Can you feel the efficiency?!

Kendra says: What in the…. holy cow, that actually works!

Brent says: I knew about that trick, but ZOMG PEOPLE THERE IS A PRODUCTIVITY GIF IN OUR BLOG


Happy Fourth Birthday to Us, 40% Off Presents for You

Company News, SQL Server
15 Comments

Four years ago this month, we turned this blog into a consulting company.

We’ve had so much fun over the last few years, and we’re really proud of what we’ve built:

  • Over 500 SQL Critical Care® patients
  • Thousands of training video customers
  • Thousands of in-person training attendees at dozens of classes and pre-cons
  • Hundreds of free YouTube videos watched by over a million people
  • Almost 50,000 email subscribers
  • Several million web site viewers
  • A handful of awesome full time employees

Let’s celebrate. Between now and Tuesday 4/28, discount code Our4thBirthday gets you 40% off online sales of our training videos. Enjoy!


How to Evaluate NoSQL Case Studies [Video]

SQL Server
3 Comments

Every now and then, one of our clients considers adopting an alternative database platform – sometimes NoSQL, sometimes a brand new relational database. They’ll ask for our help in evaluating the vendor’s solution.

One of the best ways to do it is to ask the database vendor to set us up on a WebEx or GoToMeeting with one of their happy customers. I want to hear from the technical folks, not management.

I run the call with six PowerPoint slides – here’s how:


When is a Hyper-V Virtual Machine Not Really a Virtual Machine?

SQL Server
2 Comments
Former Sysadmins

When you set up a Windows Server, you might be tempted to add a bunch of roles, just in case you want to use them in the future. Don’t do it: things can get weird.

If you add the Hyper-V role to Windows Server 2012 and higher, it changes some things about how Windows behaves. Your server is now a host. And the host starts to behave in some ways like a virtual machine — it’s now the “Parent Partition”, which is a “quasi-virtual machine”.

There are limits to the number of logical processors that any virtual machine partition might see, so if you have more than 64 logical processors, this can limit what’s available in the Parent Partition / host operating system, and really confuse you.

When SQL Server starts up, it checks to see if you’re virtualized. If the Hyper-V role is enabled, it will think it’s running in a VM. This could end up in a sticky situation if you have a licensing audit and hyper-threading is enabled — you’re supposed to license all the logical processors in a virtual machine regardless of hyper threading.
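
You can see what SQL Server decided by checking sys.dm_os_sys_info (this column is available in SQL Server 2008 R2 SP1 and later):

SELECT virtual_machine_type_desc   -- NONE, or HYPERVISOR when SQL Server believes it's in a VM
FROM sys.dm_os_sys_info;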

But good news, this is an easy problem to solve:

  • Don’t install the Hyper-V role “just in case” on a SQL Server
  • If someone else sets up your SQL Servers, check the installed roles before going live
  • Don’t ever design an architecture where a Hyper-V host does double-duty and is also a SQL Server
  • If you log onto a Hyper-V host, look at ‘Logical processors’ to see the total count (‘Host logical processors’ may be lower)

“Breaking” News: Don’t Install SQL Server 2014 SP1

Yesterday, Microsoft announced availability of Service Pack 1, saying:

As part of our continued commitment to software excellence for our customers, this upgrade is available to all customers with existing SQL Server 2014 deployments via the download links below.

Yeah, about that commitment to software excellence.

This morning, the download is gone:

Notice: The SQL SSIS team has found an issue with SP1 installation if SSIS catalog is present in the SQL Server instance. They are currently investigating this issue including possible workarounds and fixes.

Oof – the term “possible workarounds and fixes” doesn’t sound good for those who jumped in and applied the patch. A commenter on the MS Data Platform Insider blog reported that it hosed the master database, breaking the instance in a way suspiciously similar to a bug in SQL Server 2012 SP2.

(And jeez, what is it with service packs lately? Remember the SQL 2012 SP1 100% CPU issue? I’m starting to think you’re safer with cumulative updates than with service packs.)

Remember, kids, don’t rush into patching. If your servers are mission critical, test in your staging environment first – staging is the DBA’s development. (No, your development environment isn’t staging – because your developers make their living in the dev environment, and if you broke that with SP1 yesterday, you’ll be slaving away today to get your dev instance back up and running.)

Update: the fix is in. If you applied SP1, follow the instructions in this StackExchange post.


Moving Databases Made Easy – SQL Server on a File Share

Storage
12 Comments

We’ve all had to move a database. For databases above a small size, this is a painful process. While there are a number of ways that you can move a database without downtime, they all involve a lot of extra work. What if there were something easier?

Every database is unique and precious.

What’s the Real Problem?

For most people, moving a database is a one-time thing – you’re moving to a new server. That’s pretty much it. I spent 5 minutes staring at my ceiling trying to figure out other reasons that most people might move a database from one SQL Server to another.

For some people, though, databases could move on a regular basis. Think about it like this – you host a software as a service (SaaS) application. You’ve got one database per relative unit of work (customer, forum, cat picture collection). These databases can sit idle for months before, suddenly, there’s tremendous activity.

Normally, you wouldn’t make each SQL Server capable of handling peak load for any section of your SaaS. Instead, you would probably have a few servers ready to handle the worst load and move your busiest database over to those servers.

Typically, this is going to involve a small outage and setting up something like database mirroring or log shipping in order to move the database with minimal down time.

SMB to the Rescue!

Microsoft has supported housing SQL Server databases on a file share for a while now. If you’re not familiar with the idea, I recommend checking out the MSDN article Install SQL Server with SMB Fileshare as a Storage Option.

Moving a database with SMB is straightforward – detach the database from the old server, attach the database to the new server, bring the database online. For most of us, this quick change is much preferred to other methods of moving the database.
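
A sketch of the move, assuming a database named SalesDB whose files live on a share both servers can reach (all the names and paths here are illustrative):

-- On the old server:
EXEC sp_detach_db @dbname = N'SalesDB';

-- On the new server, attach the same files in place:
CREATE DATABASE SalesDB
ON (FILENAME = N'\\fileserver\sqldata\SalesDB.mdf'),
   (FILENAME = N'\\fileserver\sqldata\SalesDB_log.ldf')
FOR ATTACH;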

SMB File Shares and You

Although SMB file shares are a relatively new storage option for SQL Server, they’re worth considering for new SQL Servers. SMB file shares make it easy to move databases between SQL Servers. If you want to learn more about using SMB, check out the MSDN article on using SMB and SQL Server.

Brent says: If your storage area network (SAN) is using older connectivity methods like 2Gb or 4Gb FC, and your Ethernet is 10Gb, you might even see a performance boost.


SELECT INTO and non-nullable columns

SQL Server
22 Comments

SELECT…INTO is one of my favorite SQL Server features.

It’s great for creating table skeletons with false WHERE clauses (1=2), moving a real table to a staged/temp location for testing, etc.

In SQL Server 2014

It acquired the ability to go parallel, which is pretty neat, but that’s not what we’re talking about here.

It has some limitations

Chief among them is this:

Indexes, constraints, and triggers defined in the source table are not transferred to the new table, nor can they be specified in the SELECT…INTO statement. If these objects are required, you can create them after executing the SELECT…INTO statement.

Which is… Sort of true. There’s a trick, and I’ll show you a quick example here with another of my favorite things: a Numbers table.
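
Here’s a sketch of the setup – the table name matches the error messages below, and the numbers come from a cross join of system objects:

SELECT TOP (10000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N
INTO #NumbersTest
FROM sys.all_objects AS o1
CROSS JOIN sys.all_objects AS o2;

ALTER TABLE #NumbersTest
    ADD CONSTRAINT PK_NumbersTest PRIMARY KEY CLUSTERED (N);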

Trying to add the PK constraint here fails, because the column is NULLable

Msg 8111, Level 16, State 1, Line 37
Cannot define PRIMARY KEY constraint on nullable column in table '#NumbersTest'.
Msg 1750, Level 16, State 0, Line 37
Could not create constraint or index. See previous errors.

We can verify this by looking at the table metadata:
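
SELECT name, is_nullable
FROM tempdb.sys.columns                               -- temp tables live in tempdb, so we ask its catalog views
WHERE object_id = OBJECT_ID('tempdb..#NumbersTest')
  AND name = 'N';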


name     is_nullable
N           1

So how do we fix this? We could alter the table, but that won’t leave us with the lasting satisfaction of proving BOL wrong on a technicality. We’ll adjust our code a bit, and try again.
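
Same sketch as before, with ISNULL() wrapped around the expression:

IF OBJECT_ID('tempdb..#NumbersTest') IS NOT NULL
    DROP TABLE #NumbersTest;

SELECT TOP (10000)
       ISNULL(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 0) AS N
INTO #NumbersTest
FROM sys.all_objects AS o1
CROSS JOIN sys.all_objects AS o2;

ALTER TABLE #NumbersTest
    ADD CONSTRAINT PK_NumbersTest PRIMARY KEY CLUSTERED (N);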

This time, with the addition of an ISNULL check on the column, it ‘inherits’ the not NULLable property, and the PK constraint adds successfully. We can verify that by checking the table metadata, if you don’t believe me:

name     is_nullable
N           0

Note that this same behavior does not occur if you replace ISNULL() with COALESCE()

And with that, you can clean up the test tables. Or keep them around. Do some math. Whatever blows your hair back.

Got a favorite use for a numbers table outside of string splitting or doing date math? Let me know in the comments! I may write a follow up.

Brent says: Wow. That is a really slick trick.

Kendra says: I know some people have bias against SELECT INTO because it seems sloppy and you have to go the extra mile to get the right types, but it can be a great tool. Don’t rule it out.

Jeremiah says: I really like this method – it’s a quick way to copy tables without using other cumbersome techniques


The @DBAreactions Guide to In-Memory OLTP (Hekaton)

At SQLbits last month, I presented a new session: Cool Story, Bro – The DBAreactions Guide to SQL Server 2014. I wanted to have some fun while educating folks about the surprise gotchas of the newest features.

Here’s the In-Memory OLTP (Hekaton) section of the session:

Our sp_Blitz® has long warned you if Hekaton is in use, and its Hekaton detail page shows some of the limitations.


Can DBCC SHRINKFILE Cause Blocking in SQL Server?

It sure can.

The lock risks of shrinking data files in SQL Server aren’t very well documented. Many people have written about shrinking files being a bad regular practice – and that’s totally true. But sometimes you may need to run a one-time operation if you’ve been able to clear out or archive a lot of data. And you might wonder what kind of pains shrinking could cause you.

One pain it could cause you is blocking and lock waits. It’s easy to reproduce, here’s how.

A Recipe for LCK_M_X Lock Waits and Blocking

I was able to easily reproduce blocking in my restored copy of the StackOverflow database on SQL Server 2014 by doing a few things.

I rebuilt a non-clustered index on the Posts table with this command:
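
ALTER INDEX kl_Posts_OwnerUserId_CreationDate
    ON dbo.Posts REBUILD;  -- the index name matches the lock waits shown below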

This ate up some space, and gave shrink something to do!

I then started a data modification and left an open transaction running:
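
BEGIN TRAN;

UPDATE dbo.Posts                 -- any modification that takes locks on the table will do
SET LastActivityDate = GETDATE()
WHERE Id = 1;

-- No COMMIT on purpose: the transaction stays open and the locks stay held.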

And then, in another session, I started to shrink the data file:
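
DBCC SHRINKFILE (N'StackOverflow', 102400);  -- logical file name and target size (in MB) are illustrative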

Behold, Blocking!

Shrink starts up and slogs its way through things, and soon enough, lock waits appear. This view is from Adam Machanic’s sp_WhoIsActive:

[screenshot: sp_WhoIsActive results showing LCK_M_X lock waits]
I’ll take LCK_M_X Waits for 15,000, Alex!

If I run that with @get_locks=1, I can see details on exactly what lock SHRINKFILE is after:
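
EXEC sp_WhoIsActive @get_locks = 1;

(The @get_locks parameter adds an XML column listing every lock each session holds or is waiting on.)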

Yep, it’s waiting on a page lock in kl_Posts_OwnerUserId_CreationDate.

But How Many Locks did SHRINKFILE Really Need?

Maybe it needed just a few locks… right?

Well, I ran a trace against my session that did that shrinkfile command, and here’s the number of locks by database and type that it caught…

[screenshot: trace results showing thousands of locks taken, grouped by database and lock mode]
Lock mode: XTreme

And this was a FAST run, except for that blocking!

Shrinking files is painful

It can actually be worse than this – on a few runs, I was able to reproduce blocking with SCH-M locks that could cause even nastier blocking chains.

Shrinking is slow, tedious work. It eats CPU, steals your IO, and always takes longer than you want. And if it has work to do and you’ve got users or processes modifying that data, lock contention ensues.

Worst of all, at the end of shrinking a database, nobody will ever thank you. Be careful out there, and if you must run it make sure you don’t leave it unattended.

Brent says: whoa, I never even thought about that. Maybe we should design indexes to improve DBCC SHRINKDATABASE speed.

Erik says: if you do this more than once, I will come find you.


Is Your SAN Administrator Out to Get You?

SQL Server
9 Comments
WE WANT RAID 10!!!!

DBAs often complain about SAN Administrators. “You can’t trust them.” I’ve seen established DBAs write publicly that they resort to lying about how SQL Server works to get what they want from a SAN Administrator.

That’s pretty toxic. Other DBAs start to think it’s normal to have a terrible relationship with your SAN Administrator.

But it’s not normal. And if there’s a terrible relationship between DBAs and the SAN team, that’s the DBA team’s fault, too.

The first SAN team I ever worked with

I’m pre-disposed to like SAN Administrators because I’ve worked with great people. When I started out with SQL Server at a small dot com, we had a two person SAN team and many terabytes of data across both SQL and NOSQL solutions. Our software team and our data grew insanely quickly.

The SAN team had more fun than almost anyone in the building. They had to work super hard. They got paged a lot. They were sometimes grumpy on the phone at 3 am, just like the DBAs. But they were funny and smart and the kind of people who could magically turn a crappy situation into a good time.

They didn’t always want to make everything RAID 10, and they didn’t always automatically believe the problem was the SAN when I said, “the SQL Server’s slow.” But they worked with me every time when I had a problem, and we always found a solution.

Over time, I learned ways to show them real metrics when I really needed more storage speed rather than just saying, “it’s slow.”

Most SAN Administrators I Work With Today

I still work with SAN Administrators frequently. They’re usually helpful – in fact, they’re often happy that someone would like to hear how the SAN is configured and why it’s set up that way.

Most SAN Admins I meet work alone or in small groups. They’re super busy, and sometimes mistakes get made (just like DBAs). But also like DBAs, I’ve found them to be pretty happy when there’s a real reason that justifies investing in better hardware. They’re able to admit when something’s not right, fix it, and move on.

Remember, The SAN Admin had just as much training as you did

That’s right, they probably didn’t get any training either. The storage world changes fast, and they have to try to keep up.

Yes, they get taken out to nice dinners by the SAN vendor, and you don’t. But think about how your job looks to the people over at the helpdesk. Ever gotten a bottle of scotch from the developers as a reward for saving the day? Ever had flextime and rolled into the office late? It’s not just the SAN admins who have some perks.

Your SAN Admin isn’t out to get you. They just have a lot of customers.

Your mission as a DBA is to make your databases perform as well as the business needs them to, and protect the RPO and RTO of your customers. The SAN Administrator’s goal is to provide as much storage capacity and performance as the business needs. They’ve got a lot of customers – the performance of every one of your databases isn’t at the top of their list. When the database is slow, it’s hard for them to know that the issue is the storage.

I’m not saying that there aren’t bad SAN Admins out there. There certainly are.

But don’t be the person who misrepresents things and thinks someone’s out to get them. Aspire to be more like my old SAN team: the kind of person who can turn a crappy situation into a good time. That’s usually a lot more effective.

Brent says: wait a minute – I never got a bottle of Scotch from my developers. I mean, I got hit over the head with a bottle once, but there wasn’t any alcohol left in it.


How Do I Know My Query Will Be Fast In Production?

SQL Server
9 Comments

We’ve all had this problem before – the production server is significantly more powerful than the development server. You tune a query and you’re pretty sure you’ve made it faster, but there’s that doubt in the back of your mind. There are so many things that can change performance – memory, CPU count, disk performance – how can you tell that you’ve made a query better?

Measuring Slowness

When you know you have a slow query that you need to tune, run it in production one last time. But, before you do that, gather CPU and IO metrics using STATISTICS IO and STATISTICS TIME:
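
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

/* run the query you're about to tune here */

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;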

You’ll get a bunch of jibber jabber in the “Messages” tab that documents how much CPU time was spent compiling and running your query as well as information about logical IO. Save this off to a text file for now.

Measuring Fastness

Go to your dev server. Tune your query. I’ll wait.

As you’re tuning your query, pay attention to the CPU time and logical IO numbers that you’re getting back from STATISTICS IO and STATISTICS TIME. These are a relative indication that things are getting better. If there are CPU differences, you’re likely to see different CPU numbers on that dev machine, but that’s OK. We’re looking to compare total logical IO and total CPU time in milliseconds.

If you want an easy route to compare your results, you can paste the output from STATISTICS IO and STATISTICS TIME into statisticsparser.com. This will go through the results and push the data into a nice table for your perusal.

How Much Faster Will my Query Be?

Using STATISTICS IO and STATISTICS TIME to tune helps me figure out how well I’m doing compared to where I started from. Some queries move around a lot of data. In these cases, it can help to know how much faster dev is than production.

If you really want a reasonable comparison between dev and production, you can find some guidance in the Fast Track Data Warehouse Reference Guide. There’s a section of this document about measuring the MCR (Maximum Consumption Rate) and BCR (Base Consumption Rate) of your SQL Server. These numbers reflect 100% cache reads and 100% disk reads, respectively.

Once you have MCR and BCR, you can make effective predictions about how query performance could differ between the two. This gives you the power to predict, with some margin of error, how fast newly developed queries can run.

But is it Faster?

Between STATISTICS IO and STATISTICS TIME you can determine if a query is going to use fewer resources. The MCR and BCR give you relative numbers for how fast a server can process data. Between these measurements, you can predict how a query will perform between environments and what kind of impact your changes will have.

Brent says: as you get used to doing these measurements, you’ll also be able to tell if a new dev-only query is going to be fast enough in production, too. You’ll know how many logical reads is slow (or fast) in your environment.

Erik says: The hardest thing for me to ever get was commensurate data to test on in dev. If you don’t have it, get it. If you have it, don’t waste it.