Blog

Why We Made a Video About SQL Critical Care®

Yesterday, we unveiled our 106-second video about our SQL Critical Care® service. Today, let’s talk about why we did it.

Our web site explains who we are. It’s our storefront. The vast majority of our clients find us in Google, get to know us by reading our blogs, start to trust us by using our scripts, and then finally contact us for personalized help and training.

For that to work well, our web site needs to say, “Brent Ozar Unlimited is a boutique database consulting firm that helps you make SQL Server faster and more reliable. They’re brilliant, fun, and relatable.”

That sounds really simple now, but a few years ago, we had no idea what we wanted the web site to say. To figure it out, we hired Pixelspoke to build our brand identity, logo, and web site. They interviewed us, talked to our clients, and checked out our competitors. They painted a really clear picture of who we are, what we do, and what makes us different. The end result is the web site you see today. They did a magical job, and I think it says a lot that we love their work just as much today as we did two years ago. I still grin when I see our logo.

Last year, we upped our game by having Eric Larsen draw more funny portraits of the team, and rotating those on the site’s home page. We loved the results – the guy is crazy talented. He drew us teaching various SQL Server concepts, and his illustrations managed to capture exactly what we wanted the site to say.

Leveling Up: Explaining Our Process in 106 Seconds

Some customers don’t have time to do the long courtship of gradually getting to know us. Their SQL Server is desperately sick right now, and they don’t have the luxury of getting to know all the possible health providers out there. They’re Googling and judging people by what they see in the first minute.

So this year, we wanted to take our web game up another notch by hiring a partner to build us an amazing 2-minute video. Eric pointed us to his friends at Epipheo. Like us, they’re a company that focuses on exactly one thing – but theirs is making short videos. After examining their portfolio (my personal favorite is Projetech’s) and talking to them, we were hooked.

Epipheo’s mission was to distill who we are, what we do, and what makes us different down into a single video. I’m absolutely convinced they did it.

The video tells the viewer:

  • Your SQL Server is sick, and you don’t know how to get relief.
  • You could throw money at it, but that doesn’t usually work. (Many of our clients even tell us that they’ve already tried this by the time they contact us.)
  • You could hire The Specialist, but he won’t explain what he’s doing, and you have no idea if it’s good or bad.
  • You could bring in The Large IT Firm, but it costs a fortune, and they’ll overwhelm you with irrelevant data.

Or you could hire us:

  • We focus on the pain that’s bothering you, not a gazillion unrelated “best practices.”
  • We work side by side with you, explaining your SQL Server’s symptoms as we go.
  • We teach you our techniques so you can repeat the process on your other SQL Servers.
  • And then we leave you alone, because we’re not here to drain you dry.

Yep. That’s exactly what our SQL Critical Care® is, and why it’s unique. Next up, we’ll talk about our favorite parts of the video where Epipheo really captured our personalities.

So What Exactly Does Brent Ozar Unlimited® Do, Anyway?

We’ll sum it up in a 106-second video about our SQL Critical Care®:

In tomorrow’s post, we’ll talk about why we made a video. In the meantime, enjoy. I know I certainly do – I’ve re-watched this dozens of times, and I still giggle. It’s on our home page, and we can’t wait for everyone to see it and giggle too.

I’m on the latest RunAs Radio Podcast

I love going on the RunAs Radio podcast because Richard Campbell is so much fun to talk to. In this show, we talk about the amazing new hardware that has come out lately for SQL Server. Dell and HP have brought out some amazing gear – support for 1.8″ SSDs, 64GB DIMMs, and more. The 2-socket server market is such an amazing space today.

Head over to listen now.

Choosing a Cloud Deployment Model [Video]

Tune in here to watch our webcast video for this week! To join our weekly webcast for live Q&A, make sure to watch the video by 12:00 PM EST on Tuesday, October 7! Not only do we answer your questions, we also give away a prize at 12:25 PM EST – don’t miss it!

Have questions? Feel free to leave a comment so we can discuss it on Tuesday!

How SQL Server Licensing Works [Video]

You have a sneaking suspicion that your servers aren’t all paid for, and you need to get a rough idea of how SQL Server licensing works. You’ve never bought a box of SQL Server before, and you have no idea where to get started.

Microsoft Certified Master Brent Ozar will break it down into a few simple, easy-to-understand slides and show you the most popular licensing options. He’ll also explain 3 classic licensing mistakes and help you avoid ‘em in this 20-minute video.

To ask licensing questions after watching the video, join our weekly webcast for live Q&A. Not only do we answer your questions, we also give away a prize at 12:25 PM EST – don’t miss it!

Have questions? Feel free to leave a comment so we can discuss it on Tuesday!

Change Your #SQLPASS Password Right Now.

What you need to do: log into SQLpass.org and change your password. If you use the same password anywhere else, change it on all of those sites immediately as well.

Why you need to do it: anytime you ever logged into SQLpass.org or SQLsaturday.com, or updated your password, or created an account, your password (along with everything else) went unencrypted between your computer and PASS’s web servers. Anyone sniffing network packets along the way saw your username, email, password, etc in clear text. (Think about what a security gold mine this would have been for someone sniffing WiFi at a SQLSaturday or the PASS Summit.) There’s a nice side benefit to updating your account – you also become eligible to vote in the current PASS elections.

Who you need to thank: the vulnerability was discovered by George Stocker on Friday, and PASS HQ finished the fixes & testing on Saturday. That’s a fantastic turnaround time – kudos to PASS for reacting so fast!

Who you should blame: yourself, for not noticing for years that you were putting login information into a web site that wasn’t using https. What kind of data professional are you, anyway? You’re probably even using that same password on multiple web sites, or heaven forbid, your company email. Get it together and use 1Password or LastPass.

Who you should not blame: the current PASS Board of Directors because this has likely been in place ever since PASS set up their current web site, and the current management inherited this little surprise. (You know how it goes – it’s like your ancient SQL Server 2000 in the corner where everybody knows the SA password.)

SSL test results for SQLpass.org – click here to rerun the test (takes about 30 seconds)

What’s still left to do: PASS needs to clearly, concisely communicate the severity of this issue to members as well – I really wish they’d set all passwords in the database to null, and force everybody to go through the password-reset process. The SSL setup still needs some work, as shown in the SSL tests, but considering the whole thing was done in 24 hours, it’s one heck of a good first step. (SQLsaturday.com also fails that test.)

Where to go for more information: logged-in PASS members can ask public questions in the comments of PASS’s blog post, email info@sqlpass.org privately, or ask questions here in the comments.

Is it Cheating to Solve A Performance Problem By Adding Memory?

We had a comment recently on Five Things That Fix Bad SQL Server Performance that got me thinking. The comment came from a frustrated system administrator, who wrote:

Chucking resources at a problem is not solving that problem, it’s just temporarily masking the symptoms of that problem.

Funnily enough, I’ve heard the exact same thing from developers who are facing a tough problem on a dramatically undersized SQL Server. The basic gist of what they ask is:

Doesn’t it make me a bad developer if I can’t solve this problem with code?

Everybody’s worried that adding memory is somehow cheating.

Performance Problems and Memory

There are three general categories of performance problems relating to memory:

  1. Problems you can solve with memory OR code / indexes
  2. Problems you can solve only with code / indexes and NOT just memory
  3. Problems you can solve only with memory and NOT just code / indexes

For category #1, solving the problem with memory is often cheaper than changing code and indexes: the operational cost of the hours of development and testing can be quite high.

But I specifically started thinking about category #3. This is the category that the frustrated sysadmin and proud developer think doesn’t exist!

Here are three examples of problems that you can solve with memory, and not simply with code and indexes:

1. Memory Reduces IO On a Read-Hungry Workload

You’ve got a critical OLTP application. You’re using shared storage. Most of the time, your storage throughput is pretty good. But sometimes, other users of the storage get busy, and your read and write latencies go through the roof. Your queries slow down, blocking increases, and timeouts start hitting the application. You’ve tuned your indexes, but parts of the application rely on realtime reports that need to scan large tables.

SQL Server is designed so that in this case you can add more memory and reduce read IO to the storage, giving you more stable performance. It’s absolutely not cheating to give the server more memory — speeding up the storage would be much more expensive.
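If you’re wondering whether more memory would actually soak up that read IO, one quick, hedged check is to see how much of each database is cached right now. Here’s a rough sketch using the sys.dm_os_buffer_descriptors DMV – fair warning, it can take a while on servers with a lot of RAM:

-- Roughly how much of each database is in the buffer pool right now?
-- (Pages are 8KB each. This can be slow on servers with lots of memory.)
SELECT d.name AS database_name,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors AS b
JOIN sys.databases AS d ON b.database_id = d.database_id
GROUP BY d.name
ORDER BY cached_mb DESC;

If your hot databases are much bigger than what’s cached, added memory has room to help.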

2. Some Queries Need Lots of Workspace Memory

In another database, you do lots of reporting and aggregation. You’ve got large tables. Your queries frequently join them, do sorts, and use parallelism. All of these operations need memory, and the more data you’re aggregating and querying, the more memory each of these queries can need.

Before a query starts running, it figures out how much of this workspace memory it needs, and looks at what is available given other queries that are running. SQL Server has to balance the memory used for data pages, execution plans, and this workspace memory: it doesn’t want to let one query take over! If not enough memory is available, your query has to wait.

You’ve optimized your queries and indexes and made sure the memory estimates are realistic, but when lots of people run reports, your queries can’t even get started because of a memory crunch. Solving this problem by adding more memory isn’t cheating: it’s helping the SQL Server do what it’s designed to do.
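If you suspect that crunch is happening right now, a rough sketch against the sys.dm_exec_query_memory_grants DMV will show queries stuck in line – a NULL grant_time means the query is still waiting for its workspace memory:

-- Queries waiting on a workspace memory grant right now
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       wait_time_ms
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;  -- NULL = still waiting in line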

3. Some Features Need Lots of Memory Periodically

You’re managing a data warehouse where you’re using table partitioning to implement a sliding window. This has helped your users a lot: you can add and remove large amounts of data with minimal locking and blocking.

But for query performance reasons, you’ve had to add one or two non-aligned indexes that are present during the day. To do partition switching, you have to disable those indexes before each night’s sliding window load and archive, then re-create them afterward – and re-creating a large index can consume huge amounts of memory.
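Here’s a rough sketch of that nightly dance – the table and index names are made up for illustration:

-- Hypothetical names: dbo.Sales is the partitioned table, and
-- dbo.SalesArchive is an empty table matching the outgoing partition.
ALTER INDEX IX_Sales_Reporting ON dbo.Sales DISABLE;  -- non-aligned indexes block switching

ALTER TABLE dbo.Sales SWITCH PARTITION 1 TO dbo.SalesArchive;

ALTER INDEX IX_Sales_Reporting ON dbo.Sales REBUILD;  -- this rebuild is the memory hog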

Using Enterprise features can require investing in Enterprise levels of memory.

Adding Memory Isn’t Cheating

Memory is an important tool for performance in SQL Server: it isn’t just something that covers up bad code, it’s often a solution to many different types of problems. Memory isn’t the only tool you need to help your queries go fast in SQL Server, but it’s certainly NOT a tool you should ignore.

Is Azure Really 60% Faster?

Microsoft just announced a new round of D-grade VMs that have 60% faster CPU and local SSD that can go up to 7,000 IOPS in a canned IOmeter test. Before jumping to conclusions or, even worse, picking a cloud provider, it’s best to look at these numbers critically.

CPU Speeds

The new CPU is being advertised as 60% faster than the previous generation of processors. Clearly this has got to be some next generation hardware, right? Maybe we’ll get access to the new Xeon v3 – it’s not that outlandish of an idea; Amazon Web Services (AWS) had Xeon v2s in their datacenters before the chips were generally available.

Glenn Berry, a consultant who digs into computers for fun, did some initial testing with these new Azure instance types. In his investigations, he saw 2.2GHz E5-2660 chips. Those aren’t even slow members of the new Intel Xeon v2 generation – they’re the previous generation of CPU… from 2012. Azure trades raw power for power efficiency.

If these not-so-fast CPUs are 60% faster, what are your current Azure VMs and SQL Database instances running on? Anecdotal evidence indicates that the current generation of A and P series VMs are running on older AMD Opteron hardware. Older AWS hardware is in the same boat, but it’s slowly being phased out.

When 7,000 IOPS really means 437.5 64KB IOPS

SSD Speeds

Microsoft is reporting up to 7,000 IOPS per local Azure SSD, but persistent storage is still rotational. During the D Series SSD VMs interview, a screenshot of IOmeter showing 7,000 IOPS appears, but no additional information is provided. IOmeter tests typically use a 4KB read/write block size, which is a great size for random file access. It’s not awesome for SQL Server, which tends to do I/O in 64KB chunks, so we can divide that number by 16 to get a representative SQL Server figure…

437.5 64KB IOPS.

Or so the Azure Product Manager says in the original interview. I don’t believe what I hear, and you shouldn’t either, so I fired up an Azure D14 VM to see for myself. What I saw was pleasantly surprising:

All the MBps

If we dig into the IOPS provided by Crystal Disk Mark, we see a decent looking picture unfold:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

Sequential Read : 705.103 MB/s
Sequential Write : 394.053 MB/s
Random Read 512KB : 528.562 MB/s
Random Write 512KB : 398.193 MB/s
Random Read 4KB (QD=1) : 16.156 MB/s [ 3944.4 IOPS]
Random Write 4KB (QD=1) : 26.506 MB/s [ 6471.1 IOPS]
Random Read 4KB (QD=32) : 151.645 MB/s [ 37022.8 IOPS]
Random Write 4KB (QD=32) : 167.086 MB/s [ 40792.5 IOPS]

Test : 4000 MB [D: 2.0% (16.2/800.0 GB)] (x5)
Date : 2014/09/23 0:24:10
OS : Windows Server 2012 R2 Datacenter (Full installation) [6.3 Build 9600] (x64)

What does it really mean? It means the 7,000 IOPS number reported was probably for 4KB random writes. That’s hardly representative of SQL Server workloads, but we can also see what kind of numbers the drives will pull under significant load.
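As a sanity check, you can back into that conclusion from the throughput CrystalDiskMark reports (per its own note, 1 MB/s here is 1,000,000 bytes/s, and a 4KB block is 4,096 bytes):

-- 26.506 MB/s of 4KB random writes works out to roughly 6,471 IOPS –
-- right in the neighborhood of the advertised 7,000.
SELECT 26.506 * 1000000 / 4096 AS approx_4kb_random_write_iops;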

Comparing AWS and Azure Performance

AWS offers an instance called the r3.4xlarge. It comes with 16 cores and 122GB of memory. The AWS instance type is about the same as the D14 (16 cores and 112GB of memory). The D14 is $2.611 / hour. The AWS instance is $1.944 / hour.

All prices include Windows licensing.

So far, the Azure D-grade instance costs 70 cents more per hour for 4.8GHz fewer clock cycles and 10GB less memory. Not to mention the computational differences between the current generation of CPU and what Azure is running.

Surely the SSD must be amazing…

Not so fast. Literally.

Some AWS local SSD benchmarks have reported numbers as high as 20,000 16KB IOPS for random writes and 30,000 16KB IOPS for sequential reads. Sure, the AWS instance only has a 320GB disk, but it’s capable of performing 5,000 64KB IOPS compared to the 440 IOPS (I rounded up to be generous) that Azure supplies.

In my testing, the AWS local SSD beat out the Azure SSD on random I/O by a reasonable margin:

A reasonable margin (or 100MB/s faster)

How about those IOPS?

-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

Sequential Read : 404.856 MB/s
Sequential Write : 350.255 MB/s
Random Read 512KB : 348.770 MB/s
Random Write 512KB : 349.176 MB/s
Random Read 4KB (QD=1) : 21.337 MB/s [ 5209.3 IOPS]
Random Write 4KB (QD=1) : 38.448 MB/s [ 9386.7 IOPS]
Random Read 4KB (QD=32) : 261.320 MB/s [ 63798.8 IOPS]
Random Write 4KB (QD=32) : 237.201 MB/s [ 57910.4 IOPS]

Test : 4000 MB [Z: 0.0% (0.1/300.0 GB)] (x5)
Date : 2014/09/23 1:05:22
OS : Windows Server 2012 R2 Server Standard (full installation) [6.3 Build 9600] (x64)

So… First – Azure offers really good local SSD performance if you decide to purchase the entire instance. Using a D14 instance type is a reasonable expectation for customers deploying SQL Server – SQL Server is a power hungry monster and it deserves to be fed.

The Azure numbers are true, but they aren’t all they’re cracked up to be. Here’s how the D14 stacks up against the r3.4xlarge:

  • Cost: 34% more expensive
  • Sequential Reads: 74% faster
  • Sequential Writes: 12.5% faster
  • Random Reads: 42% slower/fewer IOPS
  • Random Writes: 30% slower/fewer IOPS

Azure has a history of mediocre performance, but it’s well-documented mediocre performance. Azure persistent storage currently maxes out at 500 no-unit-given IOPS per disk (compared to AWS’s 4,000 256KB IOPS for EBS volumes), but these limits are well-documented.

The Bottom Line

Not all clouds are created equal, and “60% faster” doesn’t mean much if the baseline was slow. It’s up to you, dear reader, to determine what 60% faster means and how it applies to your environment. For companies dipping their toes in the cloud waters, be very wary of the new, improved Azure performance. You may find that you’re deploying far more VMs than you thought, just to handle the same workload.

Our #SQLPASS Schedule (And a Downloadable Calendar)

Want to see our sessions in Seattle? Add all of our calendar events to your schedule:

Or if you’d rather add individual events:

Kendra Little – World’s Worst Performance Tuning Techniques
Wednesday 1:30PM – 2:45PM, Room 6A

Brent Ozar – CXPACKET and MAXDOP Lightning Talk
Wednesday 3PM-4:15PM, Room 606

Kendra Little – Why Does SQL Server Keep Asking for this Index?
Thursday 3PM-4:15PM, Room 6C

Brent Ozar – Developers: Who Needs a DBA?
Thursday 5PM-6:15PM, Room 6C

Doug Lane – From Minutes to Milliseconds: High-Performance SSRS Tuning
Friday 8AM-9:15AM, Room 6A

Jeremiah Peschka – Dynamic SQL: Build Fast, Flexible Queries
Friday 8AM-9:15AM, Room 6B

Jes Schultz Borland – Are Your Indexes Hurting or Helping You?
Friday 10:15AM-11:30AM, Room 618-620

The Very First Concern with SQL Virtualization: Licensing

Every single thing you do with a virtual SQL Server starts with one very important concern: how are you going to license it?

The 3 Common Ways to License Virtual SQL Servers

The software licensing section of my local library

Developer Edition, by the guest – Developer Edition has all the power of Enterprise Edition, but it’s way cheaper (like in the $50/server range) and you’re not allowed to use it for production purposes. Because the licensing is so cheap, just mix these VMs in with the rest of your guests on a shared pool of hosts. Developer Edition licensing is handled differently than other SQL Server editions, typically bundled with your developers’ MSDN subscriptions.

Enterprise Edition, by the host – When you have 5 or more guests, it usually makes sense to buy a dedicated pair of host servers in an active/passive configuration. For example:

  • Dell R720 Primary – 2 sockets, 4 cores each, 768GB memory – ~$25k USD
  • Dell R720 Secondary – same hardware config – ~$25k USD
  • 8 physical cores of SQL Server Enterprise Edition licensing and Software Assurance – ~$55k USD (you only have to pay licensing on the physical cores regardless of whether you enable hyperthreading)
  • Hypervisor, Windows, management software licensing, shared storage – left as an exercise for the reader

You can configure your hypervisor (VMware, Hyper-V, Xen) to use one of the hosts as the primary, and when it fails, automatically boot up the guests on the secondary R720. This way, you only have to license one of the two hosts, and you can run an unlimited* number of SQL Servers on this host. (* – Not really.) Even better, even though they may be small VMs, you can still use Enterprise Edition on these guests and get access to cool EE-only features like online index rebuilds.

As your SQL Server needs grow, you can license the Secondary and balance the VM load across both, and even start to add additional licensed hosts. The key, though, is to avoid running any non-SQL-Server loads here, because these are your most expensive host cores.

Standard Edition, by the guest – When you have 4 or fewer SQL Server instances, and they don’t need enough hardware horsepower to merit a dedicated host pool, then you’re usually best off licensing each guest with Standard Edition. This way, you can intermingle them on the same hardware that the rest of your virtual guests use.

At around $2k USD per core, it’s roughly 1/4 the price of Enterprise Edition, but you have to keep an eye on the number of virtual cores you’re using. You may read training material that says “just add vCPUs when you have performance pressure,” but this is expensive advice when you’re licensing each guest by the core. As you approach 16 vCPUs of Standard Edition licensing, you need to build a plan to migrate over to the host-based Enterprise Edition licensing approach. 28 vCPUs of Standard costs about the same as 8 vCPUs of Enterprise, but only the latter lets you run an unlimited number of guests, use over 64GB of memory on SQL 2012 and prior, do online index rebuilds, etc.
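To make that break-even point concrete, here’s the rough math, assuming street prices of roughly $2k per core for Standard and $7k per core for Enterprise – hypothetical round numbers, not a quote:

-- Rough break-even sketch with hypothetical list prices
SELECT 28 * 2000 AS standard_28_vcpus_usd,   -- $56,000
       8 * 7000 AS enterprise_8_cores_usd;   -- $56,000 – same money, far more headroom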

How Licensing Affects Your Administration

If you need SSRS, SSIS, or SSAS, and you’re licensing hosts with Enterprise Edition, you simply deploy these other services on separate virtual servers. You never make these other services fight with the core SQL Server engine for the same memory. However, if you’re on Standard, you saved money on licensing, but you spend extra performance troubleshooting time. You’ll need to install these services on the same VM as the engine to keep costs down, but you’ll need to understand how to triage performance.
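When everything’s fighting over one box, a quick, hedged first check is whether the engine is feeling memory pressure from outside – the sys.dm_os_sys_memory DMV shows how the OS is doing overall:

-- Is something else on this VM (SSRS? SSAS?) squeezing the engine?
SELECT total_physical_memory_kb / 1024 AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;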

When you design a high availability solution, the edition determines your choices as well. If you’re on Standard, and you need automatic failover for OS & SQL problems, you’ve backed yourself into a corner of either failover clustered instances or synchronous database mirroring. These solutions add complexity that shops without a DBA often can’t afford, so they end up going without OS/SQL protection and sticking with the simpler hypervisor-based failovers. On the other hand, once you’ve decided to license the hosts with Enterprise Edition, you can take advantage of AlwaysOn Availability Groups and asynchronous database mirroring without extra licensing costs – even for your smallest VMs.

Knowing your licensing means faster decision-making on architecture, and that’s why you want to start your virtualization projects here. Licensing is usually the most expensive and most restrictive part of the project – get it right first. To learn more about licensing SQL Server, check out the 2014 Licensing Guide PDF.

Your next concern with virtualization: recoverability.
