At #SQLBits, I was quite excited to attend this session because of the presenter. Christian Bolton of COEO (Twitter) is the lead author on our upcoming book SQL Server 2008 Internals and Troubleshooting, and he’s one of only eight Microsoft Certified Masters on SQL Server working outside of Microsoft.
Physical vs Virtual Memory
Physical memory means the fast, volatile storage of the chips themselves. We’re finding 64GB is very common these days. If every app tried to access this memory directly, we’d run into performance problems quickly. That’s why virtual address space came about. On 32-bit OSs, the default settings leave 2GB for the kernel and 2GB for applications. On 64-bit OSs, that breaks out to up to 8TB of kernel memory and 8TB of application (user mode) space.
All applications just work with virtual address space first – they don’t know whether they’re working with physical memory or the page file. The allocation of memory to either chips or the page file is the job of the Virtual Memory Manager.
What Uses SQL’s Virtual Memory
The Buffer Pool is SQL Server’s main memory consumer, and it’s managed by the min server memory and max server memory settings in SQL Server.
The rest of SQL Server’s memory consumers allocate directly with VirtualAlloc – which covers anything that doesn’t go through the buffer pool: the heap manager, thread stacks, and multi-page allocations (anything that needs more than one 8KB page to store its data). For example, execution plans are normally quite small, but they can exceed one page, and we need to store them in memory somewhere. Linked servers and extended stored procs also use VirtualAlloc to grab memory.
The buffer pool will never be bigger than physical memory. If you’ve got 32GB of memory and a 64GB page file, your buffer pool won’t be bigger than 32GB. On an x86 box, this can be problematic since you won’t have more than 2GB of user mode address space by default. Because memory is so tight there, SQL Server has to reserve some of it before the buffer pool grabs hold of the rest, and that reservation is called memtoleave. It’s calculated as MemReserved + (NumThreads * StackSize). By default, MemReserved is 256MB. NumThreads is the configured maximum number of worker threads. StackSize is 0.5MB on x86 and 2MB on x64. Christian had an excellent visual demo showing how the memory gets used up with different CPU counts and memory amounts – I can’t begin to convey that here, but hey, that’s where his great memory chapter in the book comes in.
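To make the memtoleave formula concrete, here’s a back-of-the-envelope sketch. I’m assuming the common x86 default of 256 worker threads – check sp_configure ‘max worker threads’ on your own box, because the real number depends on your CPU count:

```sql
-- memtoleave = MemReserved + (NumThreads * StackSize)
-- x86 defaults assumed: 256MB reserved, 256 worker threads, 0.5MB stacks
SELECT 256 + (256 * 0.5) AS memtoleave_x86_mb;  -- 384MB carved out of a 2GB user mode space
```

That 384MB comes straight out of the 2GB of user mode address space before the buffer pool sees a byte of it, which is why x86 boxes feel the squeeze.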
Christian demoed the RML Utilities to stress test SQL Server and eat into the memtoleave space. He asked how many attendees were using wait stats for performance analysis, and the number was about the same as in my presentation – around 10%. Folks just aren’t using wait stats yet. But if you turn around and ask how many of those 10% use them as their primary troubleshooting tool, it’s usually 100%. Folks who use them love them – I’ve blogged about wait stats before.
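If you want to join that 10%, the place to start is the sys.dm_os_wait_stats DMV. A simple first query – the numbers are cumulative since the last service restart (or since the stats were cleared):

```sql
-- Top waits since the last restart; wait_time_ms includes signal_wait_time_ms,
-- which is time spent waiting for CPU after the resource became available
SELECT TOP 10
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```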
SQL Server’s Memory Model
Memory nodes are the lowest level allocator, and you can view info about them in sys.dm_os_memory_nodes.
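A quick look at the nodes – note these column names are from SQL Server 2008; later versions collapse the single/multi page split into a single pages_kb column:

```sql
-- One row per memory node (NUMA node), SQL Server 2008 column names
SELECT memory_node_id,
       virtual_address_space_reserved_kb,
       virtual_address_space_committed_kb,
       single_pages_kb,   -- single-page (buffer pool) allocations
       multi_pages_kb     -- multi-page allocations outside the buffer pool
FROM sys.dm_os_memory_nodes;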
Memory clerks are the next level, and they’re used to access the nodes. When something in SQL Server wants memory, it asks a clerk, and the clerk allocates memory from the nodes. There’s a general MEMORYCLERK_SQLGENERAL clerk, and then heavy memory users get their own clerks, like MEMORYCLERK_SQLBUFFERPOOL and MEMORYCLERK_SQLQUERYPLAN. You can check what they’re using with the DMV sys.dm_os_memory_clerks.
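Here’s a sketch of a clerk query that shows who the big consumers are, again using the SQL Server 2008 column names (2012+ replaces the single/multi split with pages_kb):

```sql
-- Which clerks hold the most memory
SELECT type,
       SUM(single_pages_kb) AS single_pages_kb,
       SUM(multi_pages_kb)  AS multi_pages_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY SUM(single_pages_kb) + SUM(multi_pages_kb) DESC;
```

On most servers MEMORYCLERK_SQLBUFFERPOOL will dominate the top of this list.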
He talked about the different kinds of caches, and you can query them in sys.dm_os_memory_cache_counters. He touched on how the plan cache has hard-coded maximum sizes based on your memory size – you don’t want plans running you out of buffer cache. When you throw in the buffer pool and query memory too (for joins/hashes), it’s easy to see how SQL Server really needs all the memory it can possibly get.
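You can eyeball the cache sizes yourself with something like this (SQL Server 2008 column names again):

```sql
-- Cache sizes and entry counts, biggest first
SELECT name,
       type,
       single_pages_kb + multi_pages_kb AS cache_kb,
       entries_count
FROM sys.dm_os_memory_cache_counters
ORDER BY cache_kb DESC;
```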
Christian tied it all together by showing a slick DMV query that breaks out cache space usage by database. I glanced through his blog but I couldn’t find it – I’ll hit him up for that later.
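I don’t know exactly which query Christian showed, but a commonly used query of the same shape breaks the buffer pool out by database using sys.dm_os_buffer_descriptors:

```sql
-- Buffer pool usage per database; each buffer page is 8KB
SELECT CASE database_id
            WHEN 32767 THEN 'ResourceDb'
            ELSE DB_NAME(database_id)
       END AS database_name,
       COUNT(*) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;
```

Fair warning: on a server with a big buffer pool this DMV has a lot of rows to scan, so don’t hammer production with it every few seconds.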
Best Practice: Lock Pages in Memory
Locking pages in memory makes sure SQL Server’s working set isn’t paged out to disk. Otherwise, Windows may push it out – often triggered by badly behaved drivers demanding big chunks of memory. When you run into SQL Server’s memory getting swapped out, the first thing to do is check for updated drivers, then tweak down SQL Server’s max memory setting, then use AWE/lock pages in memory, then consider upgrading to Windows Server 2008. Even if you still have bad drivers, Windows 2008 won’t be as aggressive about trimming SQL Server’s working set, and memory allocated through the AWE mechanisms can’t be trimmed at all. Should you lock pages in memory on 64-bit systems? Yes, because it locks the pages in the working set. Only the data cache is locked, though – other memory like the plan cache can still be trimmed.
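On SQL Server 2008 you can check whether locked pages are actually in effect with the sys.dm_os_process_memory DMV:

```sql
-- A nonzero locked_page_allocations_kb means Lock Pages in Memory is working
SELECT physical_memory_in_use_kb,
       locked_page_allocations_kb,
       page_fault_count,
       process_physical_memory_low
FROM sys.dm_os_process_memory;
```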
You can lock pages in memory on Standard Edition, but you need to get on the right patch levels – see this knowledge base article for details.
Christian asked how many DBAs set max server memory, and I was surprised that the vast majority hadn’t. I’m a big cheerleader for setting that. Glenn Berry recently wrote a set of guidelines for max server memory. Christian recommended leaving 2GB for the OS, plus the worker thread memory we calculated earlier, plus 1GB for the multi-page allocations/linked servers/etc, plus 1-3GB for other applications (if necessary). This is a worst-case-scenario guideline because we just want to make sure our server isn’t paging to disk.
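Here’s that guideline worked through for a hypothetical dedicated 64-bit box with 32GB of RAM and 576 configured worker threads (your thread count will differ – check sp_configure):

```sql
-- 32768MB total - 2048MB OS - (576 threads * 2MB x64 stacks) - 1024MB multi-page
-- allocations = 28544MB, so set max server memory to roughly that
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28544;
RECONFIGURE;
```

Round down further if anything else runs on the box – the goal is simply to never page to disk.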
Bottom line – I learned a lot from this session, and this one alone was worth the price of admission!