One of the many best practices for SQL Server’s table partitioning feature is to create “extra” empty partitions around your data. This is explained in SQL Server Books Online in the page on altering a Partition Function:
Always keep empty partitions at both ends of the partition range to guarantee that the partition split (before loading new data) and partition merge (after unloading old data) do not incur any data movement.
Ever wondered why that’s the case, and what might happen if you don’t follow it? Check out this sample case to learn the nitty gritty.
A Quick Summary of Partition Switching
One of the biggest benefits to table partitioning is the ability to move a lot of data in or out of the table very quickly. You can write a bunch of new data into a table, index it, and put some constraints on it without your users ever being the wiser. Then, you can “switch” the data into the table as a new partition super quickly.
Similarly, you can create an empty table, add minimum indexes to match the partitioned table, and “switch” a whole partition of data out into the empty table. SHAZAM! That data is now out of sight of your applications and users and you haven’t run a single delete.
This is called “partition switching”.
(Need more of a general refresher on table partitioning? Check out our reference page on partitioning, or our article, “How to Decide if You Should Use Table Partitioning”.)
When You Switch Out, Will SQL Server Always Remove the Filegroup You Want?
Splitting and merging boundary points can be complicated business. Today, we’re going to focus on merging. Here’s our sample scenario. (Want to see this hands-on? Scripts that do this are available for download at the end of this article.)
- Create a Partition Function (Range RIGHT) with two boundary points: 2 and 3
- Map three filegroups to the Partition Function using a Partition Scheme. The filegroups are named: FG1, FG2, and FG3
- Create a clustered table on the partition scheme, partitioned by ID
- Add one row with ID=1. Based on the boundary points, it’ll automatically go into Partition 1 on FG1
- Add two rows with ID=2. Those two rows automatically go into Partition 2 on FG2
- Add three rows with ID=3. Those three rows automatically go into Partition 3 on FG3
So we’ve got 1 row in FG1, two rows in FG2, and three rows in FG3.
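The setup above can be sketched in T-SQL roughly like this (object, filegroup, and column names are illustrative — the downloadable scripts at the end of the article are the authoritative version):

```sql
-- RANGE RIGHT: each boundary value belongs to the partition on its right
CREATE PARTITION FUNCTION pf_demo (INT)
    AS RANGE RIGHT FOR VALUES (2, 3);

-- Map the three resulting partitions to three filegroups
CREATE PARTITION SCHEME ps_demo
    AS PARTITION pf_demo TO (FG1, FG2, FG3);

-- Clustered table partitioned by ID (non-unique index: ID values repeat)
CREATE TABLE dbo.PartitionedTable (ID INT NOT NULL) ON ps_demo (ID);
CREATE CLUSTERED INDEX cx_PartitionedTable
    ON dbo.PartitionedTable (ID) ON ps_demo (ID);

INSERT dbo.PartitionedTable (ID)
VALUES (1), (2), (2), (3), (3), (3);
```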
Now, the data in Partition 1 on FG1 is old and we don’t need it anymore. We decide to switch out the data, and then we want to remove FG1 from the partition function. So here’s what we do:
- Create a new table for switching out with the same definition and clustered index as our partitioned table
- Switch partition 1 out to the other table (magic!)
After the magical switch out, we now have zero rows in Partition 1 on FG1, two rows in FG2, and three rows in FG3. We’re halfway there!
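The two switch-out steps look roughly like this (assuming a partitioned table named dbo.PartitionedTable clustered on ID; names are illustrative):

```sql
-- Staging table: same columns and clustered index,
-- created on the same filegroup as partition 1 (FG1)
CREATE TABLE dbo.SwitchTarget (ID INT NOT NULL) ON FG1;
CREATE CLUSTERED INDEX cx_SwitchTarget ON dbo.SwitchTarget (ID);

-- Metadata-only operation: no data is copied, just pointers updated
ALTER TABLE dbo.PartitionedTable
    SWITCH PARTITION 1 TO dbo.SwitchTarget;
```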
We want to remove FG1 altogether, but it is still mapped to our Partition Function with our Partition Scheme. We need to tell SQL Server to just get rid of one of the partitions altogether, and to do this we have to alter our Partition Function.
We have no rows “below” the boundary point of 2 — those are all gone. So we merge the boundary point of 2. That command is successful, but to get things done SQL Server may not do exactly what you would think.
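The merge itself is a one-liner (partition function name assumed from the earlier sketch):

```sql
-- Remove boundary point 2; partitions 1 and 2 become a single partition
ALTER PARTITION FUNCTION pf_demo() MERGE RANGE (2);
```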
What SQL Server Actually Does in this Case
You might assume that SQL Server would see that FG1 has 0 rows on it, and remove that filegroup. After all, FG2 has rows in it, so why not just keep that one? It’s less work, right?
Well, it might be less work, but SQL Server doesn’t stop to figure that out. Instead, SQL Server decides to move the two rows from FG2 onto FG1, and then it un-maps FG2 from the partition scheme. It doesn’t ask before it does this– there’s no warning that “data movement” is going to happen. When you issue the command to merge the boundary point, you don’t have the option to CHOOSE where you want the data to go.
Whoops! Not such a big deal when it comes to two rows, but in production you’re going to have a lot more than two rows, and moving them around between filegroups can be pretty painful. It’s literally copying all the data from one filegroup into another. And if you do this by accident, by the time you figure it out, well, you may not have a lot of great options!
The fix for this is simple– follow the best practice of having extra boundary points at both ends of your partition range. Just like Books Online says, this prevents unexpected data movement. (We’ve got demo code for that as well– check it out in the download at the end of this article.)
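In sketch form, the best-practice version of the partition function brackets the real data (IDs 1 through 3) with boundary points that never hold rows, so merges at the ends are metadata-only (values illustrative):

```sql
-- Partitions below 0 and from 100 up stay empty, so merging
-- boundary point 0 after a switch-out moves no data
CREATE PARTITION FUNCTION pf_demo_safe (INT)
    AS RANGE RIGHT FOR VALUES (0, 2, 3, 100);
```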
Why Did This Happen?
This happened because the rules of this game are complicated. We defined the partition function as RANGE RIGHT. That means that our boundary points are “lower” boundaries– the boundary point of 2 in this case is tied to all the rows with the value of 2.
Here’s how MERGE RANGE works. When you merge a range, you provide a boundary point. SQL Server then does the following (the emphasis on the last sentence is mine):
Drops a partition and merges any values that exist in the partition into one of the remaining partitions. RANGE (boundary_value) must be an existing boundary value, into which the values from the dropped partition are merged. The filegroup that originally held boundary_value is removed from the partition scheme unless it is used by a remaining partition, or is marked with the NEXT USED property. The merged partition resides in the filegroup that originally did not hold boundary_value.
Clear as mud, right? (I’m pretty sure that paragraph was written by a German Philosopher.)
Decoding those sentences: when you MERGE a boundary point, you are telling SQL Server to get rid of the partition associated with that boundary point. Which partition that is depends on whether the partition function was defined with RANGE RIGHT or RANGE LEFT.
In this case, boundary point 2 is a lower boundary point (RANGE RIGHT), and it’s associated with FG2. When we merged boundary point 2, we directly told SQL Server “ditch FG2!”. And it directly did as we asked.
But even though it had to get rid of FG2, it knew better than to lose the data on it. Based on our Partition Function, it had no choice but to move that data into FG1.
How to Stay Sane with Table Partitioning
The way to stay sane with table partitioning is to always start out with the belief that your memory is imperfect. Whenever you ask a “how do I…” or “what happens if I…” question, build a quick script using a small amount of data and work out your issue in a development environment.
While you’re testing changes, go beyond just memorizing the best practices. Make sure you’re looking at details about how much data lies in each partition and where each partition is mapped. Always check your end state and make sure that what happened is actually what you expected!
Get the scripts here: Table Partitioning Tutorial – Unexpected Data Movement.
The scripts show two examples:
- The initial scenario where merging the boundary point causes “unplanned data movement”
- A second scenario where the best practice of using empty partitions at the end of your partition range makes this much simpler, and merging the boundary point goes just like you’d think.
They are elusive and hard to spot in the wild, but they exist: rare index options that can enhance the performance of your queries. If you have a solid knowledge of index basics and want to add to your collection, this is the session for you! In this 30-minute video, Jes will show you how filtered indexes, compressed indexes, and indexed views work and can improve performance.
Want to get a closer view of the demos? Try full-screening the video – it’s available in 720p and 1080p.
First things first: don’t register for this until after January 31st, because otherwise, you’ll have to deal with a free Microsoft Surface RT, and you know how I feel about those.
Now, about the conference – this new conference covers SQL Server, Visual Studio, ASP.NET, HTML5, mobile, and Windows Azure. It’s great for developers because they need to work with (and sometimes manage!) SQL Server, but they don’t really want to go to a 100% SQL Server conference. This one show covers both development and databases. It’s at the MGM Grand in Las Vegas on April 8-12, 2013.
I’m especially excited because for the first time in history, the conference has track MCs. MCs, y’all, with real working microphones, and you’re looking at MC BrentO. I’ll be stationed in the HA/DR room throughout the conference, carrying a microphone around. I’ll facilitate Q&A for the speakers, plus take your questions and run small mini-sessions during breaks. Costumes? We can neither confirm nor deny that there will be costumes.
But enough about fun – let’s talk about learning:
This Year’s SQL Server Setup Best Practices
All-Day Workshop with Brent Ozar, $449
You don’t build a lot of SQL Servers, but this year, you need to build one for your company – and it’d better work. You need it to be reliable and fast the very first time. Unfortunately, everything keeps changing, and yesterday’s best practices don’t cover solid state drives, FusionIO, AlwaysOn Availability Groups, and clusters without shared storage. In this all-day session, Microsoft Certified Master Brent Ozar will bring you up to speed. He’ll teach you how to pick storage, when it’s okay to use virtual servers, how to turn business requirements into clustering/mirroring/AlwaysOn/log-shipping, and where to put your data files, log files, and TempDB. This session is for developers and DBAs who need to pick SQL Server hardware and aren’t sure about the options today.
Scale Up or Scale Out: When NOLOCK Isn’t Enough
All-Day Workshop with Brent Ozar, Jeremiah Peschka, and Kendra Little, $449
Partitioning, replication, caching, sharding, AlwaysOn Availability Groups, Enterprise Edition, bigger boxes, or good old NOLOCK? You need to handle more data and deliver faster queries, but the options are confusing. In this full-day workshop, Brent, Kendra, and Jeremiah will share the techniques they use to speed up SQL Server environments both by scaling up and scaling out. We’ll share what features might save you hundreds of development hours, what features have been a struggle to implement, and how you can tell the difference. This workshop is for developers and DBAs who need to plan long term changes to their environment.
BOTH WORKSHOPS, the Whole Conference,
AND A MICROSOFT SURFACE RT: $2,294
If you register by January 31st, you can get BOTH our Monday pre-conference workshop and our Friday post-conference workshop, plus get three days of great sessions in between. You can pay with credit card, company purchase order, or check. Register now.
But then you’ll have to deal with a Surface RT. Maybe you should wait until February 1st, just to be safe.
There’s a secret to learning about SQL Server’s table partitioning feature: you need to get your hands on some code and really play with it in a test environment.
In this tutorial, Kendra will walk you through test scripts that set up a sample table partitioning environment. These scripts demonstrate gotchas to look out for and the cool features that can dramatically change your database’s performance.
Want to run the demos yourself? Download Table Partitioning Tutorial: Scripts.
Part 1: Our Mission
How can you use the demo and scripts from today in the best way possible? Find out here. 3.5 minutes
Part 2: How to create your test database and helper objects
It’s demo time! We create a test database with views that will help us quickly see critical information about our partitioned object. 3.5 minutes
Part 3: How to create partitioned objects
Next up we create a partition function. We create filegroups and files, and use a partition scheme to map the filegroups. We then create a partitioned object and indexes. Our helper views (created in Part 2) help us make sure things went as expected. 15 minutes
Part 4: Switch IN! How to use partition switching to add data to a partitioned table
Now for the cool stuff. In this session we explore how partition switching can allow us to snap a pile of data quickly into a partitioned table– and a major gotcha which can derail the whole process. 12 minutes
Part 5: Switch OUT! How to move data out of a partitioned table and merge boundary points
Deleting data is never fun… unless you’re switching data out of a partitioned table. But make sure to mind those best practices. In this section we show how to remove a lot of data from a partitioned table quickly and safely. 8 minutes
Part 6: Table partitioning takeaways
What are the big things we covered that you need to remember? Make sure you got all the important points here. 3 minutes
We’ve got tons more information on table partitioning at http://brentozar.com/go/partitioning
Looking for the scripts? Download Table Partitioning Tutorial: Scripts.
See http://BrentOzar.com/go/eula for our End User Licensing Agreement which governs use of the scripts.
Databases exist to store data and answer questions about that data. The way we ask questions has to change based on the database itself – SQL changes between different RDBMS vendors. Things get even stranger when you venture outside the RDBMS world and start storing your data in Hive. Even though Hive Query Language (HiveQL) looks like SQL, there are some subtle but important differences to querying that first timers will want to keep in mind.
It All Starts With A Question
Everything starts with a business requirement. In this case the requirement was simple: “We want to see a rolling average of price ranges over the last 6 months.”
Depending on which version of SQL Server you’re using, this is a pretty easy query to build; it can be accomplished using a CTE or a sliding window function. Since a CTE is the simpler example, we’ll start using it for our sample query.
WITH cte AS (
    SELECT city_id, date, price
    FROM the_table
)
SELECT x.city_id, x.date, AVG(y.price) AS average_price
FROM cte AS x
JOIN cte AS y
    ON x.city_id = y.city_id
    AND y.date > DATEADD(month, -6, x.date)
GROUP BY x.city_id, x.date;
As a user of the database, we can safely assume that the database server is going to compute an average for the six months prior to x.date. SQL Server’s optimizer is going to do a lot of work behind the scenes to make sure this query gets executed efficiently. The upside of using a database with a complex query optimizer is that the optimizer does the hard work for us – it determines the best way to deliver the data and we only have to worry about the questions we want answered. Things don’t work quite the same way with Hive.
Rolling Averages with Big Data
Hive can be queried using the Hive Query Language (HiveQL). While based on the SQL-92 standard, HiveQL doesn’t conform to the standard; even where Hive supports SQL functionality, there are gotchas to keep in mind. The first of these is language support: Hive doesn’t support CTEs. Admittedly, CTEs aren’t a part of SQL-92 but developers will find many language features missing or slightly different. Back to the topic at hand: it’s possible to write this query without using a CTE. The first step is transforming the body of the CTE into a sub-select and joining to the derived table:
SELECT x.city_id, x.date, AVG(y.price) AS average_price
FROM the_table x
JOIN (SELECT city_id, date, price FROM the_table) y
    ON x.city_id = y.city_id
    AND y.date > DATEADD(month, -6, x.date)
GROUP BY x.city_id, x.date;
Unfortunately, this won’t fly in Hive – only equality conditions are supported in a join. There’s a valid reason for this – it’s very difficult to translate this type of join into a map/reduce job behind the scenes in Hive. Hive 0.10, just released last week, has support for CROSS JOIN, but that version of Hive isn’t a viable option since it is still under heavy development.
Although not quite as simple, it’s still possible to make this query work so we can produce rolling averages:
FROM (
    SELECT city_id, date,
           date_sub(from_unixtime(date), 180) AS `180_days_ago`
    FROM table
) y
JOIN table x
    ON x.city_id = y.city_id
SELECT y.city_id, y.date, AVG(x.price)
WHERE x.date > y.`180_days_ago`
GROUP BY y.city_id, y.date;
Tuning the Rolling Average
This query isn’t going to win any awards for performance. In fact, that rolling average query is a good candidate to win an award for terrible performance. Although Hive will perform some optimizations to the query we provided, the re-write process is nothing like what happens with full fledged cost-based optimizer. This query will produce a large amount of intermediate data before results are delivered and moving that data around the cluster takes a lot of time. What if there were a way to make this execute faster?
Let’s re-write this query so that instead of computing a rolling average, we only compute a six month average based on today’s date. The first step is to grab just the data for today. Experienced SQL Server developers would say “Ah ha, I can do this using some kind of date math function in conjunction with GETDATE().” Hive has its own function names that accomplish the same things:
SELECT *
FROM table t
WHERE date > date_sub(cast(unix_timestamp() as TIMESTAMP), 180);
What we’re trying to accomplish here is to tell Hive, “Hey, I want to see all rows where date is greater than the current time (in seconds since 1970), converted to a TIMESTAMP, minus 180 days.” This doesn’t work, but not for lack of syntactic correctness. There’s a bug in how Hive handles converting Unix time to a TIMESTAMP (HIVE-3454). The documentation and spec say that during a CAST, Hive should treat an INTEGER as seconds since epoch and other numeric types as milliseconds since epoch.
What HIVE-3454 means is that our query will give us an average of data since some time in early January, 1970. While interesting, that’s not what we’re looking for. To make sure that we get the right time, the best approach is to perform date comparison using seconds instead of time functions. This next query almost works (it finds anything less than 180 days from right now):
SELECT *
FROM table t
WHERE unix_timestamp() - created < 180 * 24 * 60 * 60;
Since we really want an average based on the previous 180 days, we tell Hive to round the current time to the nearest day with from_unixtime(unix_timestamp(),'yyyy-MM-dd'), subtract 180 days with date_sub(value, 180), and then convert that date string back into seconds for the comparison:
SELECT *
FROM table t
WHERE created > unix_timestamp(date_sub(from_unixtime(unix_timestamp(),'yyyy-MM-dd'), 180), 'yyyy-MM-dd');
What’s The Best Solution?
Through developing these queries, I found that the best approach to computing a rolling average in Hive is to approach the problem in chunks. It’s necessary to be more explicit in the queries presented to Hive. While Hive doesn’t have a sophisticated query optimizer, Hive does give developers the ability to run queries across many servers and perform a massive amount of data processing very quickly. Although we have a lot of power at our disposal, we have to make a bit of extra effort when developing queries to ensure that they perform well.
It’s finally time– you’ve been given the thumbs up to upgrade to SQL Server 2012. You’ve selected your SQL Server Edition and worked out your licensing agreement. You’ve sized new hardware in production and sketched out your migration plan. You’re ready to install developer edition in the test environment. You grab your installation media and…
Suddenly you realize you’re not quite ready. You need to make sure that all testing goes against the same version of SQL Server you’re going to go live with. But which version of SQL Server do you choose?
What’s the SQL Server Version?
The SQL Server “version” is often called the “patch level” of SQL Server. When a new SQL Server version is released, you get the “RTM” version of the product. Over time, lots of fixes are released for each version: Service Packs, Cumulative Updates, and hotfixes.
You can find your current SQL Server version by going into Server Properties in Management Studio, or by running a quick T-SQL command:
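For example, the standard version query (the SERVERPROPERTY calls are optional extras that break out the individual pieces):

```sql
SELECT @@VERSION;

-- Or the individual pieces:
SELECT SERVERPROPERTY('ProductVersion') AS Version,
       SERVERPROPERTY('ProductLevel')   AS PatchLevel,
       SERVERPROPERTY('Edition')        AS Edition;
```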
To decode your version number, check out this big list of SQL Server Builds.
Identify SQL Server Service Packs
First, check if a service pack is available for your version of SQL Server, and which one is the latest. SQL Server 2012 Service Pack 1 was released in November 2012.
Service Packs contain important fixes for a product and are tested even more rigorously than incremental releases like cumulative updates, so they’re pretty important to install. But like everything else, they may contain something that has unexpected side effects– so you need to test your application with the service pack before you change it in production.
Sometimes new features get snuck into Service Packs these days — check out what features were added in SQL Server 2012 SP1.
If you forgot to install the service pack at the beginning of testing, you may wonder if it’s worth it to go back and install the service pack and re-test. If you have any of these doubts, read the list of bugs the Service Pack fixes– but note that you’ll also have to read the release notes for all the cumulative update packages it includes as well.
Cumulative Updates
You’ve found your service pack, so you’re good, right? Not so fast.
You also need to consider the Cumulative Updates that Microsoft releases for SQL Server. These updates are more frequent than service packs, and technically Microsoft recommends that you only install them if you are impacted by an issue that the Cumulative Update corrects. However, there are many people who prefer to test and go live on their chosen version of SQL Server + Latest Service Pack + Latest Cumulative Update– because these updates do fix some doozies.
My favorite resource for cumulative updates is the SQL Server Release Services blog. I subscribe to it by RSS, and anytime I need to research cumulative updates, I head to their blog and use their posts to find all the related documentation links.
For SQL Server 2012, you’ll find that Cumulative Update 1 has been released post-SP1, so you can consider the version SQL Server 2012 SP1 CU1. This CU has some pretty critical-sounding fixes in it, including out-of-memory errors, incorrect results on parallel queries, and databases going offline if an index rebuild is performed when the transaction log is full. That sounds pretty compelling, right?
Don’t get too comfortable – there’s even more to consider.
Critical Hotfixes – And a New Way to Find ‘Em!
Microsoft works hard to release fixes regularly– hence those Cumulative Updates. But sometimes they still need to release out-of-band hotfixes for big issues, and it can take a while for those critical fixes to be rolled into a cumulative update.
This is the trickiest part of selecting a version, really, because you need to find out whether there are critical hotfixes that might impact you, and what to do about them. There typically hasn’t been an easy way to learn about these. Even for people like us who work on and talk about SQL Server every single day, we’ve been pretty dependent on word of mouth in the community to find out about things.
Good news: Microsoft has recently created some new RSS feeds for KB articles that you can use to research and stay informed of new hotfixes for Windows and SQL Server! (We’re talking about SQL Server versions in this post, but guess what? The same stuff applies to Windows versions. File that one under ‘job security’.)
Make sure to research your choices well, because you can make the wrong choice.
Hard Choices: When Service Packs and Critical Updates Collide
Here’s an example of a sticky situation.
SQL Server 2012 SP1 has a nasty little bug that only happens sometimes. The symptoms are that after the service pack is installed, processes named msiexec.exe keep running and use up lots and lots of CPU– up to 100%. This persists after rebooting the instance.
This doesn’t happen every time that you install the service pack. But when it happens, it can be very tricky to resolve it– for evidence of that, just read the notes in the Microsoft Connect item on the issue.
Microsoft responded to the problem and released a hotfix for the issue– KB2793634. That’s great, but if you read carefully and check that Connect item, you’ll find that this hotfix should only be installed if you experience the issue and have NOT installed Cumulative Update 1 for Service Pack 1. A hotfix is planned to be released later that is valid for that situation.
So, at this point, you have multiple options:
- If you’re really compelled by the fixes in SQL Server 2012 Service Pack 1 + Cumulative Update 1, you could install them all everywhere and gamble that you won’t get hit by the msiexec.exe bug in production. If you even consider this one, you need to be prepared with the steps you’re going to take to monitor and make sure you know if the issue is occurring, and how you’re going to respond if it happens.
- If you’d like to go the safer route, you may choose to install only SQL Server 2012 + Service Pack 1. You still want to know the steps you’re going to take to monitor for the msiexec.exe bug and how you’re going to respond if it happens, but you have more options. In this case you’ll be able to use the published hotfix if needed. (By the way, it’s not always straightforward– read the user reports in the Connect item.)
- If you’re scared off by this entirely, you may choose to install SQL Server 2012 RTM version and skip all updates. That may seem like the safest option, but research all the fixes that were made in the cumulative updates and service packs– you could burn yourself with this choice as well.
Update (Feb 16, 2013) – Potential Software Registry Bloat from SQL Server 2012 SP1
There’s more you should know about the potential problems introduced by SQL Server 2012 SP1. If you hit the issue with the msiexec.exe installer process running post-installation (which does not happen all the time), you may experience growth of your software registry hive over time.
Symptoms include server level instability — and yes, those three words are terrifying! Learn more about how to diagnose if you have the issue and repair it from Remus Rusanu.
There’s Often No “Perfect” Choice
Many factors go into a version selection– what your vendor supports, the release/change frequency your organization allows, your business’ tolerance of risk, and the amount of testing and validation you can do are all critical factors. Not to mention your upgrade surface and whether your high availability options could be impacted by the change! (If you’re like me, one of the last things you ever want to risk is getting stuck doctoring up a Windows Registry in a critical downtime situation.)
But no matter how perfect your change process, you’ll always need to keep current on fixes as they are released AND do research to find the bugs which may impact your choice. Things change fast!
Don’t have time to do all the research, or want some expert advice? Drop us a line– we often help clients figure out the right path to take their environment to the next level.
I served with High Availability. I knew High Availability. High Availability was a friend of mine. VMware HA, you’re no High Availability.
See, for us database administrators, high availability means protection when:
- The system drive fills up because some potato decided to download a bunch of files
- An operating system or database server update goes horribly awry
- Or even when an OS or SQL update goes right – because the beauty of real high availability solutions is that they let you patch the standby node first, make sure it works, and then fail over to it so you can patch the other node.
Don’t get me wrong – I love VMware, and I love using VMware HA for database servers. It’s a fantastic way to get higher availability for those old dinosaur database servers running SQL Server 2000 that we just can’t kill, yet still run important apps. But in systems where uptime really matters, a single virtual machine isn’t the answer to high availability. That’s where solutions like clustering, database mirroring, replication, and AlwaysOn Availability Groups come into play.
Thankfully, there’s good news: when VMware HA is paired with SQL Server technologies, they can both work even better. Two standalone physical database servers running AlwaysOn Availability Groups are more reliable than just one server, but two virtual machines doing the same thing are even more reliable. They’re protected from hardware failures because they can be spun up on any VMware host in the datacenter. They’re more flexible because we can add CPU power or memory quickly based on demand.
I’ve blogged about why your SQL Server cluster shouldn’t be virtualized, and that still holds true. If you need to build a hybrid AlwaysOn solution involving both failover clustered instances (FCIs) and standalone instances, I would rather not put the FCIs in VMware first. But if you’re under pressure from management to cut costs and cut your datacenter footprint, put the rest of the instances in virtual machines. You’ll gain the power and comfort you want from physical machines while getting even higher availability from the virtual machines. Everybody wins, and the future will be better tomorrow.
Even if your SQL Server is the only guest on a host, it still might not be as fast as bare metal.
One of the reasons is NUMA, which stands for Not Ur Momma’s Architecture. Okay, no, smart reader, you caught me – it actually stands for Non-Uniform Memory Access. In your momma’s architecture (Symmetric Multi-Processing), any CPU could access any memory all at the same low price. In today’s NUMA servers, a single motherboard with two CPUs and 128GB of memory can actually be divided into different nodes.
When a process running on CPU #1 wants to access memory that’s directly connected to it, that’s local access, and it’s fast. However, when that same process wants to grab data stored in CPU #2’s memory, that’s remote access, and it’s not as fast.
The performance penalty of remote memory access varies greatly from system to system, and you can measure it with Coreinfo from Sysinternals. (That Russinovich knows everything.) Blogger Linchi Shea went so far as to test the overhead of local versus remote access on one particular system, and he saw about a 5% performance reduction. He considered that the worst-case scenario for the server hardware he was using, but keep in mind that the situation will be much worse on servers with higher costs for remote memory access like IBM’s x3850 and x3950.
Windows exposes NUMA configuration details to applications, and it’s up to the app to tune itself appropriately. SQL Server has been NUMA-aware since 2005, and Microsoft’s continued to add improvements for it through 2008 and 2012. To learn more about how SQL Server handles NUMA, check out Gavin Payne’s SQLbits presentation, The NUMA Internals of SQL Server 2012.
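To see what NUMA layout SQL Server itself detected, one quick starting point (on SQL Server 2008 and later) is a couple of DMV queries – a sketch, not a full diagnostic:

```sql
-- One row per memory node SQL Server detected
SELECT memory_node_id, virtual_address_space_reserved_kb
FROM sys.dm_os_memory_nodes;

-- Which schedulers (CPUs) belong to which NUMA node
SELECT parent_node_id, scheduler_id, cpu_id
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';
```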
How Virtualization Screws Things Up
The good thing about virtualization is that it abstracts away the hardware. You can run any virtual SQL Server on any server in the datacenter without a reinstall. You can even move virtual machines from one host to another, live, without a service restart – even if the underlying hardware is completely different. You can use multiple VMware hosts with completely different NUMA architectures – different numbers of cores per NUMA node, different amounts of memory per node, etc.
In order to pull this off, virtualization just presents a lump of CPUs and memory to our guest. Our virtual machine has no idea what the underlying NUMA configuration is – and it can’t, because it could change at any time when we’re moved from one host to another. This isn’t a performance problem for most apps because they don’t need to know anything about NUMA. They just want a lump of CPUs and memory.
Unfortunately, this is a performance problem for SQL Server because it actually wants to know the underlying configuration – and wants to tune itself for it. This is why when even running on a host with no other guests involved, performance still won’t match bare metal.
How vSphere 5’s Virtual NUMA Fixed Things Up Again
There are three key decisions that will make your life easier (and possibly your performance better).
First, isolate your virtual SQL Servers onto their own hosts. With SQL Server 2012’s licensing, when you buy Enterprise Edition for the host’s CPUs, you get unlimited virtual machines on that host. For a while, this wasn’t easily doable in VMware because of their incredibly stupid memory limits with licensing, but thank goodness they fixed that license stupidity recently. I can’t imagine a software vendor being dumb enough to limit their product to 64GB of memory in this day and age. <cough>sqlserverstandardedition</cough> I’m so glad VMware listened to their end users and fixed that limitation. <cough>microsoftpayattention</cough> Restricting a database server to just $500 worth of memory, why, that’d be like releasing a tablet with 4 hours of battery life. <cough>mylastpostasanmvp</cough>
Second, in that pool of hosts, use identical hardware running vSphere 5. All of the hosts need to have the same NUMA architecture. This does come with a drawback: it’s harder to do hardware refreshes. Most shops just buy new hardware as it becomes available, throw it into the VMware cluster, and let VMware DRS automatically rebalance the load. Unfortunately, the SQL Servers won’t be able to vMotion onto this hardware if it has a different NUMA configuration. The guests will need to be shut down at the next maintenance window, get a different NUMA config, and then be booted on the appropriate hosts.
Finally, configure vSphere 5’s Virtual NUMA on your guests. This is done automatically for guests with more than 8 vCPUs, but at 8 or fewer, you’ll need to enable it manually. Presto, SQL Server will see the underlying architecture and tune itself appropriately. (Well, not entirely appropriately – now SQL Server just solves the easy problems for you, and creates new hard problems.)
To enable virtual NUMA on VMs with 8 or fewer vCPUs, follow the instructions on page 41 of the Performance Best Practices for VMware vSphere 5.1 PDF. And hey, while you’re in there, get your learn on – it’s an excellent resource for SQL Server DBAs who want to know if their shop is doing things right.
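For the curious, the knob the PDF walks you through is a per-VM advanced configuration setting. As a sketch only – verify the exact setting name and value against the PDF and your vSphere version before touching production – lowering the vCPU threshold in the VM’s advanced settings (its .vmx configuration) tells vSphere to expose virtual NUMA to smaller guests:

```
numa.vcpu.min = "8"
```

The default threshold is higher than 8, which is why small VMs don’t get virtual NUMA out of the box.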
Without documentation, you could probably do your taxes, build a new piece of “some assembly required” furniture, or manage a database instance – but you’d probably end up living in prison, living with broken furniture, or living in fear of your servers.
Writing documentation might not seem as important or exciting as developing or tuning code, but it keeps database administrators out of prison. Join Jes to learn what you need to capture and what tools to use.
Hungry for more goods on documentation in SQL Server?
Check out http://brentozar.com/go/documentation!
Some parts of our jobs are not glamorous, but necessary. For example, I have to brush Brent’s Bob Dylan wig weekly to make sure it’s shiny and perfect. Documentation is a task many people roll their eyes at, procrastinate about starting, have a hard time keeping up-to-date, and in general avoid.
Stop avoiding it, and embrace the benefits!
The most important part of documentation is starting, so I’d like to help you by giving you a list of things to document. It’s going to take time and won’t be as fun as tuning queries from 20 minutes to 2 seconds, but it could save the day sometime in the future.
You can call this your SQL Server Run Book, your SQL Server Documentation, your SQL Server Best Practices Guide – whatever works for your environment. Make sure it’s filled in for each server, and kept up to date, and you’ll soon realize the benefits.
Click here to download a template SQL Server Documentation Run Book.
Need an example? Here is a sample SQL Server Documentation Run Book!
Server and Instance Information
You’ll want basic information about your instance recorded so that if you have to rebuild it, the new environment will match the old one.
- Number and type of CPUs
- Amount of memory
- Storage – type and amount
- SAN settings (if applicable)
- Server name
- Instance name
- SQL Server version and edition
- Service account
- Min and max memory settings
- Trace flags
- Any non-default options that have been configured, such as MAXDOP or Optimize for ad hoc workloads.
- Virtual server name
- Virtual IP address
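Much of the instance-level list above can be pulled straight from SQL Server rather than typed by hand. As a sketch (run it on your own instance and paste the results into the run book), `SERVERPROPERTY` covers the naming and version details, and `sys.configurations` covers memory settings and other non-default options:

```sql
-- Sketch: gather instance details for the run book.
SELECT SERVERPROPERTY('MachineName')    AS server_name,
       SERVERPROPERTY('InstanceName')   AS instance_name,
       SERVERPROPERTY('ProductVersion') AS version,
       SERVERPROPERTY('Edition')        AS edition;

-- Min/max memory, MAXDOP, and optimize for ad hoc workloads:
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('min server memory (MB)', 'max server memory (MB)',
               'max degree of parallelism', 'optimize for ad hoc workloads');
```

CPU count, physical memory, and storage details still need to come from the OS or the SAN team, but this captures the SQL Server side in seconds.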
Database Information
Each database on your server – including the system databases – should be well-documented. Beyond knowing what’s in each database, you should also know why it exists and whom to contact with questions.
- What application or program it is for
- What person, business unit, or department is the owner
- Are two or more databases on the server related? Do they need to be restored on the same server, or in a specific order?
- Location of data and log files
- Recovery model
- Backup schedule and file locations
- Information about tables, views, stored procedures, and functions, such as table names, field names, data types, default values, or parameters
- Database roles
- Database users and permissions
- Is log shipping enabled?
  - What instance are you restoring to or from?
  - Frequency of backups
- Is mirroring enabled?
  - Operating mode
  - Is this server the principal or the mirror?
  - Which instance is the mirroring partner?
- Is replication enabled?
  - Type of replication
  - Is this the publisher, subscriber, or distribution database?
  - Distribution server name
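A few of the items in this list – recovery model and file locations in particular – can be scripted instead of copied by hand. As a sketch (verify on your own instance), joining `sys.databases` to `sys.master_files` captures both for every database at once:

```sql
-- Sketch: recovery model and physical file locations for every database.
SELECT d.name AS database_name,
       d.recovery_model_desc,
       mf.type_desc AS file_type,   -- ROWS = data file, LOG = log file
       mf.physical_name
FROM sys.databases d
JOIN sys.master_files mf ON mf.database_id = d.database_id
ORDER BY d.name, mf.type;
```

Re-run and re-save this whenever files move or databases are added, so the run book stays current.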
Security Information
A secure SQL Server is critical. Grant access only to the users who need it, and give them only the minimum rights required.
- Authentication mode
- Logins – Windows and SQL authentication
- What roles a login is granted
- Databases the login has permissions to and the roles in those databases
- Server roles
- Linked Servers
- Login/Security Context
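The login and server-role portion of this list is also scriptable. As a sketch (check the output against your own instance), `sys.server_principals` and `sys.server_role_members` list every login and any server roles it belongs to:

```sql
-- Sketch: logins and the server roles they belong to.
SELECT sp.name      AS login_name,
       sp.type_desc AS login_type,   -- SQL_LOGIN, WINDOWS_LOGIN, WINDOWS_GROUP
       r.name       AS server_role   -- NULL if the login is in no server role
FROM sys.server_principals sp
LEFT JOIN sys.server_role_members rm ON rm.member_principal_id = sp.principal_id
LEFT JOIN sys.server_principals r    ON r.principal_id = rm.role_principal_id
WHERE sp.type IN ('S', 'U', 'G')     -- SQL logins, Windows logins, Windows groups
ORDER BY sp.name;
```

Database-level users and permissions need a similar query run inside each database, so document those per-database.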
Maintenance and Jobs
There are many important tasks that can be set up and scheduled on your database server, such as backups, restores, and maintenance. Document these as well, so you know what each job is for and can recreate it in the event of a disaster.
- Tasks performed
- Are any related, and do they need to be run in a specific order?
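A starting point for this section can come from msdb itself. As a sketch (run against your own instance), `msdb.dbo.sysjobs` and `msdb.dbo.sysjobsteps` list every Agent job and its steps, which you can then annotate with the why and the ordering dependencies:

```sql
-- Sketch: Agent jobs and their steps, from msdb.
SELECT j.name      AS job_name,
       j.enabled,
       s.step_id,
       s.step_name,
       s.subsystem   -- TSQL, CmdExec, SSIS, etc.
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobsteps s ON s.job_id = j.job_id
ORDER BY j.name, s.step_id;
```

The query tells you what the jobs do; only your documentation can say why they exist and which must run in order.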
Anything That Goes Against Best Practices, and Why
Do you have xp_cmdshell enabled because a third-party application requires it? Are all of your linked servers configured with an sa account? Any special circumstances should be documented and explained. In the case of an emergency, you don’t want to have to think about why these things were turned on. You also want this information readily available for anyone else who is or will be responsible for the instance.
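For the xp_cmdshell example specifically, it’s easy to verify the current state before documenting the justification. A quick check (sketch – run on your own instance) against `sys.configurations`:

```sql
-- Sketch: is xp_cmdshell enabled on this instance?
SELECT name, value_in_use   -- value_in_use = 1 means enabled
FROM sys.configurations
WHERE name = 'xp_cmdshell';
```

If it comes back enabled, the run book entry should name the application that requires it and who approved the exception.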
Documentation is Worth the Effort
You will thank yourself for doing this work. Don’t let the task overwhelm you – start with one server, and work your way through your environment. The next time you have to rebuild a server, train a new DBA, or explain what a specific Agent job does, you’ll appreciate the time you put into building your run book!