Last Friday, I did a series of presentations with Jason Hall (Twitter) and Geoff Hiten (Twitter | Blog) on performance tuning SQL Server.  The slide decks are online at SlideShare along with a few other SQLServerPedia presentations.

Here are the links we discussed:

The archived video/audio copy of the webcast should be online this week, and I’ll let you know when those are up.  In the meantime, here’s a recap of some of the questions that came up outside of the decks:

How do Perfmon memory guidelines change between 32-bit and 64-bit OSes?

In my presentation, I note that if you’re averaging less than 250MB of free memory, you need to ease off SQL Server’s maximum memory setting.  That guideline holds for both 32-bit and 64-bit systems.  One of the attendees noted that he wants to see a much higher number (like 1GB free), and I agree, that’s fantastic.  The 250MB is a fire alarm number, though, and the presentation focuses on things you need to focus on right away when you see them.
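
If you’d rather check this from inside SQL Server than stare at Perfmon, here’s a rough sketch.  It’s not from the deck, and it assumes SQL Server 2008 or later, where sys.dm_os_sys_memory exists:

    -- Free physical memory on the box right now.  If available_mb is routinely
    -- under the ~250MB fire-alarm line, consider lowering 'max server memory (MB)'.
    SELECT
        available_physical_memory_kb / 1024 AS available_mb,
        total_physical_memory_kb / 1024     AS total_mb,
        system_memory_state_desc
    FROM sys.dm_os_sys_memory;

    -- Current max server memory setting, for reference.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)';

Keep in mind that’s a point-in-time snapshot; the 250MB guideline is about what you’re averaging, so watch it over time rather than reacting to a single sample.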

How do Performance Monitor guidelines change for virtual systems?

Perfmon numbers are nearly meaningless under virtualization.  100% CPU use doesn’t mean that your virtual SQL Server is actually using 100% of the possible CPU power – it means 100% of the available CPU power.  Instead, it might mean:

  • You’re hitting the CPU limit that the virtualization sysadmin assigned to your virtual server (yes, they can limit you, and you’ll never know).
  • Another guest is using a lot of available CPU power, leaving less for you, and you’re using 100% of what’s available to you.
  • And if you’re looking at yesterday’s numbers, you were hitting the CPU limit of the physical CPUs you were on at the time, but you might be on different CPUs now.  The virtualization sysadmin or the virtualization software can magically move your virtual server to another physical host without you knowing.  (It’s called vMotion for VMware, or Live Migration for Microsoft Hyper-V.)

Instead of using percentage metrics, I recommend that DBAs focus on time-focused metrics and (some) queue-length metrics for virtual servers.  For storage monitoring, focus on seconds per read and seconds per write, because those tell you if the storage is responding fast enough.  For processor and memory, focus on queue length metrics, because those tell you how many things are waiting on those types of resources in order to execute.
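
You can get at seconds per read and seconds per write from inside SQL Server as well.  This is just a sketch using sys.dm_io_virtual_file_stats (not the exact Perfmon counters from the presentation), and the numbers are cumulative since the last restart, so sample twice and compare if you want a current picture:

    -- Rough per-file latency, a stand-in for Perfmon's Avg. Disk sec/Read
    -- and Avg. Disk sec/Write.
    SELECT
        DB_NAME(vfs.database_id)                              AS database_name,
        mf.physical_name,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
        AND mf.file_id    = vfs.file_id
    ORDER BY avg_read_ms DESC;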

Should TempDB be placed on local disk or on the SAN?

If it’s currently on local disk, use Perfmon monitoring to determine whether that disk is currently a bottleneck.  If it is, and if you’ve got free performance (not just space, but performance) available on the SAN arrays, then yes, you should consider moving it to the SAN.

Keep in mind that if you’re doing SAN-to-SAN replication, you don’t want to put TempDB on the same replicated array that you’re copying to a remote datacenter.  Use a different array that isn’t being replicated.
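
If you do decide to move it, here’s a hypothetical sketch of the relocation.  The logical file names (tempdev, templog) are the SQL Server defaults and the T:\ path is just a placeholder for your non-replicated array, so verify your own names first, and remember the move doesn’t take effect until the SQL Server service restarts:

    -- Check the logical names and current locations first.
    SELECT name, physical_name FROM tempdb.sys.database_files;

    -- Point TempDB at the new (non-replicated) array.  Placeholder path.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');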

In VMware, how many CPUs should I use for SQL Server?

As few as you can get away with.  If the virtual server is set up with 4 virtual CPUs, then VMware will wait until there are 4 available cores before it gives any CPU time to the guest OS, even if the guest OS only needs a single thread.  I’ve personally experienced faster performance after turning down the number of virtual CPUs assigned to a server, and that lines up with what VMware forum users have seen.  Avoid using 4 virtual CPUs unless you’re actually saturating 2 virtual CPUs, and don’t try running too many 4-virtual-CPU guests on a single host.

This is counter-intuitive for us power-hungry DBAs, who like to assign every server 4 virtual CPUs just because we can.  Ease off, though.
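
One way to keep yourself honest before asking for more vCPUs is to check whether the schedulers you already have are backed up.  This is just a sketch against sys.dm_os_schedulers: consistently non-zero runnable task counts suggest real CPU pressure, while steady zeros mean more cores probably won’t help:

    -- How many tasks are queued up waiting for CPU on each scheduler?
    SELECT
        scheduler_id,
        current_tasks_count,
        runnable_tasks_count
    FROM sys.dm_os_schedulers
    WHERE status = 'VISIBLE ONLINE';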
