
Over the last few years, I’ve put together a list of best practices for installing Microsoft SQL Server 2005, 2008, and 2012. Other folks do more, but for small businesses, this is a good start.

I don’t show the exact point-and-click steps because my target audience (you smart folks) aren’t complete newbies. I know you folks understand HOW to do most of these things, but you take shortcuts because you don’t understand WHY you’re doing these things. There are plenty of places on the web where you can find examples of how to execute these steps (if not, email me and I’ll post ‘em) – instead, this series of articles will explain the reasoning behind the gruntwork.

Create at least one new Active Directory account for this SQL Server.

Using a dedicated account is a best practice, but a popular shortcut is to run all of the database servers under the same service account. I’ve run into two problems with this approach. First, I’ve had a sysadmin repeatedly fat-finger the service account password, thereby locking out the service account. When all of the SQL Servers suddenly run into service account problems, that’s a really ugly outage.

Second, there have been times when I’ve needed to give permissions to a specific SQL Server that other servers shouldn’t get. If xp_cmdshell is enabled, then DOS commands will be run with the SQL Server’s service account permissions. I hate giving any more permissions than necessary, so if a user needs to write to remote locations using xp_cmdshell, it’s nice to lock down the permissions on that one database server’s account instead of enabling every SQL Server to write to that remote location.
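
If you’re ever unsure whose credentials those commands inherit, there’s a quick check (assuming xp_cmdshell is already enabled and you’re a sysadmin):

EXEC master..xp_cmdshell 'whoami';
-- Returns the service account, e.g. DOMAIN\sqlsvc-yourserver (name hypothetical)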

I say “at least one new account” because best practice dictates using separate accounts for the database service, fulltext service, agent, etc. I usually slack here and just use one account for all SQL Server services on a given server.

Keep in mind that if you have to change the service accounts later, you must use the SQL Server Configuration Manager to make this change. Don’t use the Services control panel – it won’t grant the new account the rights SQL Server needs.

Configure local security settings for the SQL Server 2005/2008 service account.

Go into Local Security Settings (click Start, Run, type SECPOL.MSC and hit enter) and click Local Policies, User Rights Assignment.

In the permission “Perform Volume Maintenance Tasks”, add your SQL Server service account (or a domain group if you use that). This enables SQL 2005’s Instant File Initialization, which lets SQL Server instantly grow data files without erasing the physical disk first. This is a huge performance gain for large databases like data warehouses and for cases when you need to suddenly grow a file out.  This is a new server, so you’ll probably be creating and restoring databases as part of going live, so this switch will help.  (If you’ve ever restored a database and it seems to take forever before it gets to 10% complete, this is why.)

In the permission “Lock pages in memory”, some companies prefer to add the SQL Server service account or domain group. In theory, this lets SQL Server keep all of its data in physical memory instead of swapping it out to disk.  Configuring “Lock pages in memory” used to be a best practice.  It’s not such a great idea anymore, especially for virtual machines: the host may come under real memory pressure in an emergency. Think about multiple simultaneous blade server failures – like if your blade chassis were to go down and all of the VMs were booted on another blade chassis. You’ll be overcommitted on RAM somewhere and you want SQL Server to give up memory to let other guests boot in an emergency.  The other problem is that this setting only affects the buffer pool, but that’s not the only way SQL Server uses memory.  Bottom line – this isn’t such a good setting to use by default anymore on new boxes.
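
If you inherit a box and want to know whether locked pages are actually in effect, SQL 2008 and newer expose it in a DMV – a quick check (not available on SQL 2005):

SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;
-- Anything above zero means the buffer pool is holding locked pages.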

If the SQL Server uses SAN storage….

Update the HBA firmware. Downlevel HBA firmware has caused me all kinds of nasty problems, especially in clustered servers. Generally, these updates can’t be done online while the server accesses data, so it’s better to get the code up to date before the box goes into production. For HP servers, this firmware isn’t shown in the System Homepage: install Emulex HBAnyware on the server instead, and it will flash the HBAs inside of Windows without a reboot. HBAnyware is available on the HP Support site by searching for downloads for the HBA’s part number instead of the server’s part number. This is the only driver/firmware at HP that works this way.

Set up multipathing drivers. Sometimes this is done by the storage team, but the DBA should get involved enough to understand whether the multipathing is active/active or just failover.

Test the multipathing & failover. Start a huge file copy to each array, and do them all simultaneously. Go into the datacenter and pull one fiber cable out. Watch to make sure the file copy continues. Some SAN drivers will take 10-15 seconds to fail over, but the file copies should not be disrupted, period. If they’re disrupted, the multipathing didn’t work. Then plug the cable back in, and pull another cable. Again, the file copy should continue. Finally, while a file copy is running, ask the SAN admin to disable one of the SAN zones for the server – that way, the fiber cable will still be lit up, but the path to the storage will be gone. (This is a tougher failover method than just pulling the fiber.)

If the SQL Server uses iSCSI storage….

Set up multipathing. Database servers can’t rely on one single network connection for iSCSI any more than a fiber-connected SAN can rely on one single host bus adapter. Ideally, two (or more) network cards will be connected to two different switches for redundancy, but at the very least, we need two network cards dedicated to iSCSI storage. The multipathing method can be active/active (meaning 2 gigs of throughput for two 1-gig NICs) or active/passive. Teamed network cards are not supported by Microsoft for iSCSI.

Test the multipathing. In my experience, I’ve usually seen active/passive on a per-array basis – meaning, if you have two different iSCSI drive letters, then the multipathing drivers will put each drive on its own network card. The EMC and LeftHand multipathing appears to do this by default. Start multiple simultaneous drive copies and go into Task Manager, in the Network tab. Look at the bandwidth used by each network card. If a network card is sitting idle, then you’re leaving performance on the table. Now is the time to tweak the multipathing software and ask questions of the vendor – it’s easier to troubleshoot file copy performance than it is to troubleshoot SQL Server performance.

Test the failover. As with the fiber cable testing, start multiple simultaneous file copies to/from the network drives and pull one network cable out. If the file copy fails (if Windows throws an error), then SQL Server would have crashed. Tweak the multipathing software until it can fail over seamlessly – ideally, it should fail back and forth and return to higher bandwidth levels as the networks come back online.

Use DISKPART to create aligned partitions.

If you’re using Windows 2008 or newer, you can skip this step.  Microsoft KB article 929491 covers some of this in technical detail, but the Microsoft Exchange Team blog does a much better job of explaining why disk partition alignment is important for performance. Use this command in Diskpart.exe:

CREATE PARTITION PRIMARY ALIGN=1024

The 1024 number will work with every major SAN out there. Gurus can use smaller numbers when they know a specific SAN very well, but small numbers only save you a few hundred kilobytes in the entire disk. Be safe, be sure, and use 1024, and you won’t get burned if the underlying SAN structure changes, like with today’s virtual storage.

Format the Drives with 64K NTFS Allocation Units.

When you format the drives that will be used to store SQL Server’s user databases and system databases, the very last step in the Windows format wizard asks for the drive name. Right under that is a dropdown for allocation unit size, and it says Default. Default is 4KB, and that sucks for SQL Server in most cases – use 64K instead. (A rare exception is NetApp storage, where it doesn’t make much of a difference if it’s 4K or 64K.)

If you don’t get this right before you lay down database files, the only way to correct it is to back up the databases, erase the drives, format them correctly, and restore the databases. That’s painful.

You don’t have to worry about this setting on the OS boot drive or where the SQL Server installation files are stored.
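
Not sure how an existing drive was formatted? One way to check without leaving Management Studio – assuming xp_cmdshell is enabled, which it isn’t by default – is to shell out to fsutil:

EXEC master..xp_cmdshell 'fsutil fsinfo ntfsinfo T:';
-- Look at "Bytes Per Cluster" in the output: 65536 means 64K allocation units.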

For 32-bit servers, configure AWE and PAE.

One of my favorite sites, SQL-Server-Performance.com, has the most straightforward explanation of how to configure these obscure settings in boot.ini:

  • 4GB RAM: /3GB (AWE support is not used)
  • 8GB RAM: /3GB /PAE
  • 16GB RAM: /3GB /PAE
  • 16GB + RAM: /PAE

You can read more about AWE and PAE at SQL-Server-Performance, which does a better job of explaining this than I ever could.

Set the page file on the C drive to 2GB.

When a SQL Server is configured correctly, it doesn’t use the page file for memory.  (We’ll configure memory correctly in a later step.)  In theory, we don’t need a page file at all, but there’s a catch.

For advanced troubleshooting, there has to be a page file on the C drive big enough to do a minidump. This isn’t a strict requirement, but if SQL Server ever crashes hard, that dump file will be invaluable. I have actually had this happen, and I’ve even been able to reproduce the crash on several different servers – we even did it at the Microsoft Technology Center in Chicago. It was some stunningly bad code coupled with an ugly database, and I got a lot of laughs out of that. Anyway, the point was we needed a dump to troubleshoot it.

Configure antivirus software to skip LDF/MDF/NDF files.

SQL Server stores its data in files with LDF, MDF and NDF extensions.  These files should not be scanned by antivirus software.  Depending on your antivirus configuration, you may need to exclude these files network-wide at the central antivirus management server.

This isn’t just a performance issue; it’s an outage issue. I’ve seen cases at boot-up time when the antivirus software grabbed a lock on a particular LDF/MDF/NDF file for virus scanning before the SQL Server service started up. The database couldn’t be brought online since SQL Server couldn’t open the file, and due to the huge file size, the antivirus software took several minutes to release it. During a server startup, this type of thing isn’t obvious to the sysadmin – they just see a problem with a database and go into panic mode. Avoid panic: fix the antivirus before installing SQL Server.

Time to Start the Install

But you can’t just run the setup.exe and go into production – oh no. Next up, we’ll cover what you need to do after the install finishes.

Install not just the service packs, but also the cumulative updates.

Starting with SQL Server 2005’s Service Pack 2, Microsoft releases hotfixes in cumulative packs. These updates do more than just fix bugs: they improve how SQL Server performs. These updates are free performance benefits – and who doesn’t like that?

To find the latest service packs and cumulative updates, check out SQLServerBuilds.Blogspot.com. It’s got version numbers, build numbers, and download links for all versions of SQL Server in one place.
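
Before you go patch hunting, check what build you’re on now – these server properties have been around since SQL 2005:

SELECT SERVERPROPERTY('ProductVersion') AS BuildNumber,
       SERVERPROPERTY('ProductLevel') AS PatchLevel,  -- RTM, SP1, SP2...
       SERVERPROPERTY('Edition') AS Edition;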

Double-check that Instant File Initialization is enabled.

Paul Randal wrote an excellent blog post on how to tell if instant initialization is enabled.  Follow the instructions in his post, and you’ll know for sure.
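
The short version of his technique, as a sketch: turn on two trace flags that log file-zeroing activity to the error log, create a throwaway database, and see which files got zeroed.

DBCC TRACEON (3004, 3605, -1);  -- log zeroing operations to the error log
CREATE DATABASE IFI_Test;
EXEC sp_readerrorlog 0, 1, 'Zeroing';
-- If only the .ldf shows zeroing, instant file initialization is working
-- (log files are always zeroed). If the .mdf shows up too, it isn't.
DROP DATABASE IFI_Test;
DBCC TRACEOFF (3004, 3605, -1);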

Best Practice: Move TempDB to its own drive.

By default, the TempDB files are put on the same drive as the SQL Server binaries. Even if the user chooses a custom install, TempDB still goes on the same drive as the other data files, and that’s not a good idea either. Instead, the TempDB data files should be on their own dedicated drive.

Fix this by first moving TempDB to its own drive.  In this example, I put the data file on the T drive and the log file on the L drive.  (Be aware that the directory paths must already exist.)

USE master;
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev', FILENAME = 'T:\MSSQL\DATA\tempdb.mdf', SIZE = 1MB);
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = 'templog', FILENAME = 'L:\MSSQL\LOGS\templog.ldf', SIZE = 1MB);
GO

I only set a 1MB file size because SQL Server does something tricky: even though we’re telling it to use a different drive letter, it will look for this amount of free space on the drive TempDB currently uses! If SQL Server was installed on the server’s C drive, for example, and we try to create a 10GB TempDB file on the T drive, that SQL command will fail if there isn’t 10GB of free space on the C drive. Yep, it’s a bug – get over it.

After this code runs, restart the SQL Server. That will create the new TempDB files on the new drives. Manually delete the old TempDB files on the original drive, because SQL Server doesn’t delete them itself.

Now that TempDB is on the right drive, expand it to the full size you want, and then create additional TempDB files. The current guidance from Paul Randal is to create between 1/4 and 1/2 as many TempDB data files as you have processor cores. If you’ve got a quad-socket, quad-core box, that’s 16 cores, so you need 4 to 8 TempDB files. Generally I start on the lower end unless I know the server will be under heavy TempDB pressure from its applications.

Here’s the code to create one additional TempDB data file – you can modify this for more files:

USE [master]
GO
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\MSSQL\DATA\tempdev2.ndf', SIZE = 10GB, FILEGROWTH = 0);
GO

The data file creation should only take a couple of seconds – if it takes more than ten seconds, then instant file initialization isn’t configured correctly. We talked about this back in the pre-installation checklist, so go back and revisit that before you create the next TempDB file. Fix the security to allow for instant file initialization now – it has a huge performance impact on database growth.

Assuming that file creation only took a couple of seconds, go ahead and create the rest of the TempDB data files.

Notice that I don’t have filegrowth enabled.  You want to proactively create the TempDB files at their full sizes to avoid drive fragmentation.  Say you have a dual-CPU quad-core server (8 cores total) and an 80GB array for TempDB data: following the guidance above, you’d create four 20GB files.  That way, each file is contiguous, all laid out in one big chunk.  If you create them as smaller files and let them autogrow, then the disk will be fragmented all over the place because the files will be growing at random times.  Plus, you could end up with differently sized TempDB files if one of them happened to grow faster than the rest.  That’s why I pre-grow all of the TempDB files ahead of time and get them at exactly the right size.

Configure SQL Server memory for best practices.

Sounds easy, right? Go into SQL Server Management Studio, right-click on the server name and click Properties, go into Memory, and just configure it. There’s only a couple of fields – how hard could it be?

(Screenshot: the SQL Server 2005 memory settings screen in SSMS)

Oh, this screen is full of danger and pitfalls.

First, that tricky checkbox that says “Enable AWE”. Check that box if you’re using a 32-bit server with more than 4 gigs of memory.

Second, the minimum and maximum memory amounts are important, especially if you gave the SQL Server account permission to lock pages in memory. If other applications are running on this server, we need to specify how much memory we want SQL Server to take.

Ideally, no one would ever remote desktop into a SQL Server and run programs. Unfortunately, this happens, and we have to plan for it by leaving enough free memory for people to run things like SQL Server Management Studio. When I’m first building a server that isn’t running any other applications at all, I like to leave 10% of the memory free, or 4gb, whichever is larger. Then I monitor the free memory over the course of a month or two, and adjust it up or down during the next outage window.

If the server does multiple duties like act as a web server or application server, we have to be much more conservative with memory. Application owners never seem to know how much memory they’ll really use in production: SAP BW’s Netweaver, for example, tends to use anywhere from 10% to 50% of the memory on our production server, and it’s tough to predict. As a result, we have to leave the SQL Server’s memory allocation at just 50% of the available memory on the server.

I set the minimum server memory to 50% of the server’s total memory.  This will let SQL Server release memory if the server comes under memory pressure, like if someone remote desktops in and runs a very poorly written application.
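
You can make these changes in the GUI, or script them with sp_configure. A sketch for a hypothetical 32GB dedicated server, following the 10%-or-4GB rule above:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;  -- 32GB total minus 4GB free
EXEC sp_configure 'min server memory (MB)', 16384;  -- 50% of total memory
RECONFIGURE;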

The only way to know the right answer long term is to use Perfmon or a performance monitoring utility to watch the server’s free memory. I’ve written up a separate blog post on using Perfmon for SQL Server monitoring.

Set the Default Database Path

Even if you chose this during setup, we need to revisit it because SQL Server puts both the data files and the log files in the same directory. In SSMS, right-click on the server name and click Database Settings. The paths for the data files and log files can be configured from there.

Of course, this assumes that we have separate drives for the data and log files, which is the right way to go for performance purposes.
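
If you’d rather script this than click through SSMS, the Database Settings dialog just writes two registry values, which you can set with the (undocumented) xp_instance_regwrite – the paths here are examples:

EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'DefaultData', REG_SZ, N'T:\MSSQL\Data';  -- example data path
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'DefaultLog', REG_SZ, N'L:\MSSQL\Logs';   -- example log path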

Tweak the model database.

This tip comes courtesy of reader John Langston.  Whenever a new database is created, SQL Server uses the “model” database as – well, as the model.  You can make changes to that database, and those changes will automatically happen to any new databases.  John writes:

I also like to go to model and change the recovery model from FULL since we use SIMPLE a lot, even in production and also change the datafile autogrowth setting from 1 MB.

Great tip!
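
If you’d rather script John’s tweaks than click through SSMS, something like this does it – verify your model data file’s logical name with sp_helpfile first, and pick a growth increment that fits your shop:

ALTER DATABASE model SET RECOVERY SIMPLE;
ALTER DATABASE model MODIFY FILE (NAME = 'modeldev', FILEGROWTH = 256MB);  -- instead of the 1MB default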

Configure Database Mail with public & private profiles.

Database Mail is a pure SMTP solution that beats the daylights out of SQL 2000’s SQL Mail. It doesn’t require Outlook to be installed on the database server, doesn’t need any MAPI code, and works with any company email server that can be accessed via SMTP.

There’s plenty of sites on the web that explain how to configure Database Mail, but I want to address something: be aware that developers can use Database Mail for things that SQL Server shouldn’t be doing. For example, they may decide to use Database Mail to send out mass emails to your end users or customers. There’s nothing technically wrong with that, but it increases the load on the database server and it sends all outgoing email with the SQL Server’s Database Mail account.

At our shops, we use internal emails like (servername)@ourcompany.com to identify which server is sending the database mail. Those email addresses make sense to us because we just need to know where the alerts are coming from – we would never hit Reply to a server-generated email.

However, if developers use SQL Server to send out emails directly to customers, those customers will indeed reply. I had a nasty problem where a couple of developers decided to purge old customer accounts, and they used SQL Server’s Database Mail to broadcast an announcement to those users. The email read something like, “You haven’t used your account in 30 days, so we’re deleting it. Please contact us for questions.” Of course a lot of customers got aggravated and sent some nastygram replies, which landed in the IT team’s inboxes – and the IT team had no idea what was going on. After some confusion, we were able to track down the guilty party, but those emails never should have gone out from the IT staff.

Bottom line: if you decide to use Database Mail (and you should), consider setting up separate private and public email profiles. The public profile used by the developers should send from the developer management team’s group email address – that way, they can handle any replies themselves.
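
As a sketch of what the public side looks like in T-SQL – the account name, addresses, and SMTP server here are all hypothetical:

-- (Assumes Database Mail XPs has been enabled via sp_configure.)
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name = 'DevMail',
    @email_address = 'dev-team@ourcompany.com',  -- the developers' group address
    @mailserver_name = 'smtp.ourcompany.com';    -- hypothetical SMTP relay
EXEC msdb.dbo.sysmail_add_profile_sp @profile_name = 'Developer Public';
EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'Developer Public',
    @account_name = 'DevMail',
    @sequence_number = 1;
-- Profiles are private until granted. Granting to "public" is what makes
-- this one available to everyone who can send Database Mail; skip this
-- step on your DBA alert profile to keep it private.
EXEC msdb.dbo.sysmail_add_principalprofile_sp
    @profile_name = 'Developer Public',
    @principal_name = 'public',
    @is_default = 0;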

Configure SQL Server Agent’s failsafe operator.

After configuring Database Mail, create at least one SQL Server Agent operator. This operator’s email address should be a distribution list for the database administrator group. Even if the company only has one DBA, never use an individual person’s email address – use a distribution list instead. When the DBA goes on vacation or gets a job at another company (or heaven forbid, gets fired), it’s easier to add someone to a single distribution list instead of modifying operators on dozens or hundreds of servers.

Then right-click on the SQL Server Agent, configure the alerting system to use Database Mail, and set up that DBA group as the failsafe operator. That way if anything happens and SQL Server doesn’t know who to alert, it can alert the group.
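
Scripted, that looks something like this – the operator name and distribution list are examples, and sp_MSsetalertinfo is undocumented (the supported route is the SQL Server Agent properties dialog):

EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',
    @email_address = N'dba-team@ourcompany.com';  -- always a distribution list
-- Undocumented, but a common scripted way to set the failsafe operator:
EXEC master.dbo.sp_MSsetalertinfo
    @failsafeoperator = N'DBA Team',
    @notificationmethod = 1;  -- 1 = notify by email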

Create default alerts for severities 16 through 25 and for errors 823, 824, and 825.

SQL Server’s alerting system has the ability to notify operators whenever major things break inside the database. These include running out of space in log files, backup failures, failed logins and other things DBAs just need to be aware of. Don’t rely on this as your only SQL Server monitoring system, because it only sends alerts when it’s too late to take proactive action, but still, it’s better than nothing.

For scripts to configure alerts in SQL Server, check out the page for “No SQL Server Alerts Configured” from our sp_blitz script.
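
If you’d rather see the pattern than run the whole script, here’s one alert as a sketch – repeat for each severity 16 through 25, and use @message_id = 823/824/825 (with @severity = 0) for the corruption errors:

EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 016',
    @severity = 16,
    @delay_between_responses = 60,      -- throttle repeat notifications
    @include_event_description_in = 1;  -- include the error text in the email
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Severity 016',
    @operator_name = N'DBA Team',       -- the operator created above
    @notification_method = 1;           -- 1 = email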

SQL 2005: Install the SQL Server 2005 Performance Dashboard Reports.

These are an insanely cool and free extension for SQL Server Management Studio.

You run the setup.exe on your personal workstation, and then you have to execute the setup.sql script on each server that you want to monitor. It only takes a few minutes, but the information that it gathers will help you manage your server better throughout its lifetime.

Set Up Maintenance Plans

This is where things start to get different on a shop-by-shop basis. Some places use native backups, some places use backup compression software, some places use index defrag software, and so on. I’ve written a lot of articles about my personal backup best practices, and one about why SQL native backups suck. (Hey, if I was politically correct, I’d be writing manuals instead of reviews.)

Benchmark It, and Find the Slowest Link

Before the server goes into production, load test and stress test it to find where you’ll need to improve performance down the road. This is your one chance to really bang the daylights out of it without anybody complaining.

Learning More About Building and Testing SQL Server

sp_Blitz®: Free SQL Server Health Check – You’ve inherited a SQL Server from somebody, and you have no idea why it’s slow. sp_Blitz® gives you a prioritized list of health and performance issues, plus gives you URLs for more details about each issue.

Our Free 6-Month DBA Training Plan – Every Wednesday, you get an email with our favorite free SQL Server training resources. We start at backups and work our way up to performance tuning.

SQL Critical Care® – Don’t have time to learn the hard way? We’re here to help with our quick, easy 3-4 day process that gets to the root cause of your database health and performance pains. Contact us for a free 30-minute sales consultation.
