SQL Server 2005 Setup Checklist Part 2: After The Install


I’ve covered what needs to happen before you install SQL Server – now, let’s talk about what to do immediately after the setup finishes.

Install not just the service packs, but also the cumulative updates.

Starting with SQL Server 2005's Service Pack 2, Microsoft releases hotfixes in cumulative update packs. These do more than just fix bugs: they improve how SQL Server performs. That's a free performance benefit – and who doesn't like that?

To find the latest service packs and cumulative updates, check out SQLServerUpdates.com. It’s got version numbers, build numbers, and download links for all versions of SQL Server in one place.
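To see exactly what build you're on before and after patching, just ask the server itself:

SELECT SERVERPROPERTY('ProductVersion') AS Build,
    SERVERPROPERTY('ProductLevel') AS ServicePack,
    SERVERPROPERTY('Edition') AS Edition
GO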

Double-check that Instant File Initialization is enabled.

Paul Randal wrote an excellent blog post on how to tell if instant initialization is enabled.  Follow the instructions in his post, and you’ll know for sure.  (While you’re there, subscribe to his blog – it’s chock full of SQL-y goodness.)
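If you just want the punch line, here's a minimal sketch of the trace flag technique from Paul's post: trace flag 3004 makes SQL Server log its file-zeroing work, and 3605 routes that output to the error log. The IFITest database name is just a throwaway:

DBCC TRACEON(3004, 3605, -1)
GO
CREATE DATABASE IFITest   -- throwaway database, any name will do
GO
EXEC sp_readerrorlog      -- look for 'Zeroing' messages
GO
DROP DATABASE IFITest
DBCC TRACEOFF(3004, 3605, -1)
GO

If the error log shows the data (.mdf) file being zeroed, instant file initialization is not enabled – the log file always gets zeroed no matter what.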

Best Practice: Move TempDB to its own drive.

By default, the TempDB files are put on the same drive as the SQL Server binaries. Even if the user chooses a custom install, TempDB still goes on the same drive as the other data files, and that’s not a good idea either. Instead, the TempDB data files should be on their own dedicated drive.

Fix this by first moving TempDB to its own drive.  In this example, I put the data file on the T drive and the log file on the L drive.  (Be aware that the directory paths must already exist.)

use master
go
alter database tempdb modify file (name='tempdev', filename='T:\MSSQL\DATA\tempDB.MDF', size = 1mb)
go
alter database tempdb modify file (name='templog', filename='L:\MSSQL\LOGS\templog.LDF', size = 1mb)
go

I only set a 1mb file size because SQL Server does something tricky: even though we’re telling it to use a different drive letter, it will look for this amount of free space on the drive TempDB currently uses! If SQL Server was installed on the server’s C drive, for example, and we try to create a 10gb TempDB file on a T: drive, that SQL command will fail if there isn’t 10gb of free space on the C drive. Yep, it’s a bug – get over it.

After this code runs, restart the SQL Server service. That will create the new TempDB files on the new drives. Manually delete the old TempDB files on the original drive, because SQL Server doesn't clean them up itself.
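After the restart, it's worth double-checking that the files actually landed where you expect:

SELECT name, physical_name
FROM tempdb.sys.database_files
GO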

Now that TempDB is on the right drive, expand it to the full size you want, and then create additional TempDB files.  The current guidance from Paul Randal is to create between 1/4 and 1/2 as many TempDB files as you have processor cores.  If you've got a quad-socket, quad-core box, that's 16 cores, so you need 4 to 8 TempDB files.  Generally I start on the lower end unless I know the server will be under heavy TempDB pressure from its applications.
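Not sure how many cores the box has? SQL Server will tell you – just remember this DMV reports logical processors, so hyperthreading inflates the count:

SELECT cpu_count
FROM sys.dm_os_sys_info
GO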

Here’s the code to create one additional TempDB data file – you can modify this for more files:

USE [master]
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev2', FILENAME = N'T:\MSSQL\DATA\tempdev2.ndf' , SIZE = 10GB , FILEGROWTH = 0)
GO

The data file creation should only take a couple of seconds – if it takes more than ten seconds, then instant file initialization isn’t configured correctly. We talked about this back in the pre-installation checklist, so go back and revisit that before you create the next TempDB file. Fix the security to allow for instant file initialization now – it has a huge performance impact on database growth.

Assuming that first file only took a couple of seconds to create, go ahead and create the rest of the TempDB data files.

Notice that I don’t have filegrowth enabled.  You want to proactively create the TempDB files at their full sizes to avoid drive fragmentation.  If you have a dual-cpu quad-core server (8 cores total) and an 80-gb array for TempDB data, you would create eight 10gb files for TempDB.  That way, each file is contiguous, all laid out in one big chunk.  If you create them as smaller files and let them autogrow, then the disk will be fragmented all over the place because the files will be growing at random times.  Plus, you could end up with differently sized TempDB files if one of them happened to grow faster than the rest.  That’s why I pre-grow all of the TempDB files ahead of time and get them at exactly the right size.
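If typing out eight ADD FILE statements sounds tedious, here's a sketch that generates the rest in a loop. It assumes the T: drive path and 10gb sizing from the example above – adjust the counter and sizes for your own core count and array:

DECLARE @i INT, @sql NVARCHAR(500)
SET @i = 3   -- tempdev and tempdev2 already exist in this example
WHILE @i <= 8
BEGIN
    SET @sql = N'ALTER DATABASE [tempdb] ADD FILE (NAME = N''tempdev' + CAST(@i AS NVARCHAR(2))
        + N''', FILENAME = N''T:\MSSQL\DATA\tempdev' + CAST(@i AS NVARCHAR(2))
        + N'.ndf'', SIZE = 10GB, FILEGROWTH = 0)'
    EXEC sp_executesql @sql
    SET @i = @i + 1
END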

Configure SQL Server memory for best practices.

Sounds easy, right? Go into SQL Server Management Studio, right-click on the server name and click Properties, go into Memory, and just configure it. There’s only a couple of fields – how hard could it be?

The minimum and maximum memory amounts are important, especially since we gave the SQL Server account the permission to lock its pages in memory. If other applications are running on this server, we need to specify how much memory we want SQL Server to take.

Ideally, no one would ever remote desktop into a SQL Server and run programs. Unfortunately, this happens, and we have to plan for it by leaving enough free memory for people to run things like SQL Server Management Studio. When I’m first building a server that isn’t running any other applications at all, I like to leave 10% of the memory free, or 4gb, whichever is larger. Then I monitor the free memory over the course of a month or two, and adjust it up or down during the next outage window.

If the server does multiple duties like act as a web server or application server, we have to be much more conservative with memory. Application owners never seem to know how much memory they’ll really use in production: SAP BW’s Netweaver, for example, tends to use anywhere from 10% to 50% of the memory on our production server, and it’s tough to predict. As a result, we have to leave the SQL Server’s memory allocation at just 50% of the available memory on the server.

I set the minimum server memory to 50% of the server’s total memory.  This will let SQL Server release memory if the server comes under memory pressure, like if someone remote desktops in and runs a very poorly written application.
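If you'd rather script the memory settings than click through SSMS, sp_configure does the same thing as the dialog. The numbers below are just an example for a dedicated server with 32gb of RAM – 28gb max (leaving 4gb free) and a 16gb minimum – so plug in your own:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'max server memory (MB)', 28672
EXEC sp_configure 'min server memory (MB)', 16384
RECONFIGURE
GO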

The only way to know the right answer long term is to use Perfmon or a performance monitoring utility to watch the server’s free memory. I’ve written up a separate blog post on using Perfmon for SQL Server monitoring.

Set the Default Database Path

Even if you chose this during setup, we need to revisit it because SQL Server puts both the data files and the log files in the same directory. In SSMS, right-click on the server name, click Properties, and go to the Database Settings page. The paths for the data files and log files can be configured from there.
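This can be scripted too: the dialog just writes a couple of registry values, so a sketch like the following (with hypothetical D: and L: paths that must already exist) accomplishes the same thing:

EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'DefaultData', REG_SZ, N'D:\MSSQL\DATA'
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'DefaultLog', REG_SZ, N'L:\MSSQL\LOGS'
GO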

Tweak the model database.

This tip comes courtesy of reader John Langston.  Whenever a new database is created, SQL Server uses the “model” database as – well, as the model.  You can make changes to that database, and those changes will automatically happen to any new databases.  John writes:

I also like to go to model and change the recovery model from FULL since we use SIMPLE a lot, even in production and also change the datafile autogrowth setting from 1 MB.

Great tip!
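Scripted out, John's tweaks look something like this – the SIMPLE recovery model and the growth increments here are examples, not gospel, so pick values that match your shop:

USE [master]
GO
ALTER DATABASE [model] SET RECOVERY SIMPLE
GO
ALTER DATABASE [model] MODIFY FILE (NAME = N'modeldev', FILEGROWTH = 256MB)
ALTER DATABASE [model] MODIFY FILE (NAME = N'modellog', FILEGROWTH = 128MB)
GO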

Configure Database Mail with public & private profiles.

Database Mail is a pure SMTP solution that beats the daylights out of SQL 2000’s SQLmail. It doesn’t require Outlook to be installed on the database server, doesn’t need any MAPI code, and works with any company email server that can be accessed via SMTP.

There’s plenty of sites on the web that explain how to configure Database Mail, but I want to address something: be aware that developers can use Database Mail for things that SQL Server shouldn’t be doing. For example, they may decide to use Database Mail to send out mass emails to your end users or customers. There’s nothing technically wrong with that, but it increases the load on the database server and it sends all outgoing email with the SQL Server’s Database Mail account.

At our shops, we use internal emails like (servername)@ourcompany.com to identify which server is sending the database mail. Those email addresses make sense to us because we just need to know where the alerts are coming from – we would never hit Reply to a server-generated email.

However, if developers use SQL Server to send out emails directly to customers, those customers will indeed reply. I had a nasty problem where a couple of developers decided to purge old customer accounts, and they used SQL Server’s Database Mail to broadcast an announcement to those users. The email read something like, “You haven’t used your account in 30 days, so we’re deleting it. Please contact us for questions.” Of course a lot of customers got aggravated and sent some nastygram replies, which arrived in the IT team’s inboxes, who had no idea what was going on. After some confusion, we were able to track down the guilty party, but those emails never should have gone out from the IT staff.

Bottom line: if you decide to use Database Mail (and you should), consider setting up separate private and public email profiles. The public email profile used by the developers should be sent from the developer management team’s group email address – that way, they can address any replies themselves.
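If you want to script a profile rather than click through the wizard, the msdb stored procedures below handle it. The account, profile, and server names here are placeholders – swap in your own:

EXEC msdb.dbo.sysmail_add_account_sp
    @account_name = N'DBA Alerts',
    @email_address = N'servername@ourcompany.com',
    @mailserver_name = N'smtp.ourcompany.com'
EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = N'DBA Private Profile'
EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = N'DBA Private Profile',
    @account_name = N'DBA Alerts',
    @sequence_number = 1
GO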

Configure SQL Server Agent’s failsafe operator.

After configuring Database Mail, create at least one SQL Server Agent operator. This operator’s email address should be a distribution list for the database administrator group. Even if the company only has one DBA, never use an individual person’s email address – use a distribution list instead. When the DBA goes on vacation or gets a job at another company (or heaven forbid, gets fired), it’s easier to add someone to a single distribution list instead of modifying operators on dozens or hundreds of servers.

Then right-click on the SQL Server Agent, configure the alerting system to use Database Mail, and set up that DBA group as the failsafe operator. That way if anything happens and SQL Server doesn’t know who to alert, it can alert the group.
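Here's a sketch of both steps in T-SQL. The distribution list address is a placeholder, and be aware that sp_MSsetalertinfo – the procedure the SSMS dialog calls to set the failsafe operator – is undocumented:

EXEC msdb.dbo.sp_add_operator
    @name = N'Database Team',
    @enabled = 1,
    @email_address = N'dba-team@ourcompany.com'
GO
EXEC master.dbo.sp_MSsetalertinfo
    @failsafeoperator = N'Database Team',
    @notificationmethod = 1   -- 1 = notify by email
GO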

Create default alerts for severities 16 through 25.

SQL Server’s alerting system has the ability to notify operators whenever major things break inside the database. These include running out of space in log files, backup failures, failed logins and other things DBAs just need to be aware of. Don’t rely on this as your only SQL Server monitoring system, because it only sends alerts when it’s too late to take proactive action, but still, it’s better than nothing.

The script below will set up an alert for severity 16. Copy this and repeat the same thing for severities 17 through 25 (or use the loop sketched after the script), but change 'Database Team' to the name of your default operator. Notice that @delay_between_responses is set to 60 – that means if it sends out an alert, it won't repeat that same alert for 60 seconds. This is useful because when a database runs out of drive space, for example, all hell will break loose for a minute or two, and you don't want hundreds of emails and pages per minute.

USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 016',
@message_id=0,
@severity=16,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 016', @operator_name=N'Database Team', @notification_method = 7
GO
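Rather than pasting that script nine more times, here's a sketch that loops through severities 17 to 25 – again, change 'Database Team' to your own operator:

DECLARE @sev INT, @name SYSNAME
SET @sev = 17
WHILE @sev <= 25
BEGIN
    SET @name = N'Severity 0' + CAST(@sev AS NVARCHAR(2))
    EXEC msdb.dbo.sp_add_alert @name = @name, @message_id = 0,
        @severity = @sev, @enabled = 1, @delay_between_responses = 60,
        @include_event_description_in = 1
    EXEC msdb.dbo.sp_add_notification @alert_name = @name,
        @operator_name = N'Database Team', @notification_method = 7
    SET @sev = @sev + 1
END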


Set Up Maintenance Plans

This is where things start to get different on a shop-by-shop basis. Some places use native backups, some places use backup compression software, some places use index defrag software, and so on. I’ve written a lot of articles about my personal backup best practices, and one about why SQL native backups suck. (Hey, if I was politically correct, I’d be writing manuals instead of reviews.)

Benchmark It, and Find the Slowest Link

Before the server goes into production, load test and stress test it to find where you'll need to improve performance down the road.  This is your one chance to really bang the daylights out of it without anybody complaining.

Last: the First Responder Kit

sp_Blitz, sp_BlitzCache, sp_BlitzIndex and more give you a SQL Server health check in a matter of seconds. They give you a prioritized list of health and performance issues, plus URLs with more details about each issue.
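Once the procedures are installed, getting that health check is a one-liner:

EXEC sp_Blitz
GO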


98 Comments

  • Check the error logs. Windows and SQL. I tell all my dbas to save and empty the log before install and then again when they are complete and attach those files to the as built doco.

    add the domain DBA group (same as the mailing list group) and ditch builtin\administrators.

    Enable success and failure login logging.

    Give sa a 64 character random password, test it, store it and disable it.

  • How about setting up these:

    * Set up the Operators

    * Configure the SQL Errorlogs to save more than just the last 6

  • So is there anything that you have from MS about how AWE handles memory differently? Everything out there says you don’t need to enable it on 64 bit systems and now this tells us differently.

  • No, nothing they’d give me on paper – just the instruction to turn it on, unfortunately!

  • Robert Gearhart
    November 21, 2008 4:34 pm

    Yeah, that bit about AWE is a bogus statement without documentation. I too have read you don’t need to enable it for 64-bit systems. Where I work we run Windows 2003 Standard RC2 x64 with 32GB RAM on the machine, we don’t enable AWE but SQL does use the extra memory on the box. Also, we set the max and min memory for SQL.

  • You’re totally correct in that it uses the memory without enabling the switch, but what I’ve heard (again, w/o documentation though) is that it handles the memory differently when AWE is enabled.

  • time to update this post with SP3 🙂

    • Scott – good catch! I updated the checklist to point to SQLServerPedia's release date calendar, which has all the latest download links for SQL Server. Better to update one page than a bunch!

  • Thanks Brent,

    I hope this isn't too much of a huge question, but here goes:

    I'm setting up a 2K3 Enterprise server with SQL2K5 Enterprise, 32Gb RAM. AWE is on, Min and Max memory set to 30720MB.

    How can I be sure that SQL is using the RAM, and not writing to the Page File?

    I figured it was, as some perfmon stats seemed to show 30Gb of memory eventually being committed, but am confused by two things:

    Firstly, the PF Usage in Task Mgr seems to rise and rise till it gets to 30Gb (although looking at the pagefile.sys seems to show it remaining at 2GB) – is task mgr being a little misleading/misunderstood? Is Task Mgr PF Usage really talking about virtual memory, or is it including physical memory as well?

    Secondly, SQLAGENT.OUT claims "[310] 8 processor(s) and 4096 MB RAM detected"; perhaps this is "pre-AWE"?

    I hope you can shed some light here, my interweb searching has mostly shown up more people wondering similar things without resolution.

  • For anyone interested, Brent and I have had a look under the hood of SQL vs Task Mgr, and the short version is:

Task Mgr is a confusing place to be. Don't take its claims about page file usage literally; the stats are very useful, but the way they're presented is misleading.

    There's a good rundown of how to interpret its statistics here: http://articles.techrepublic.com.com/5100-10878_11-5071997.html

    Thanks heaps to Brent for taking the time.

  • Anders Gregersen
    February 6, 2009 4:09 pm

    Any updates if things should be done the same or differently on SQL Server 2008?

  • hello, your link is not accessible, http://articles.techrepublic.com.com/5100-10878_1…, can you let me know which one is a better way to check on the page file?

  • Rosie – I updated Scott’s comment to point to the right URL.

  • Brent,

    In the section regarding configuring memory, the image shows that you have the Min and Max set to the same amount. Is this the recommended best practice?

    • Like with everything else in SQL, the answer is that it depends. For servers that only have SQL Server running on them and nothing else, this is how I usually configure memory. If the server shares memory with other applications, though, you’ll need to leave some gap between min and max.

  • Any more on the specific SQLAGENT message [310] 8 processor(s) and 4096 MB RAM detected. This is on W2K3 32 bit Ent Ed with 32Gb and the /PAE switch set. SQL Server 2005 SP2 Std Ed. The OS is seeing 32Gb. SQL Server memory set to 24Gb. we have issues with SQL Agent jobs starting on this server.

  • John Langston
    October 22, 2009 4:38 pm

    I also like to go to model and change the recovery model from FULL since we use SIMPLE a lot, even in production and also change the datafile autogrowth setting from 1 MB.

  • Brent, the comment about enable the AWE option even on 64bit servers is wrong. There is a great post of the CSS SQL Engineer’s blog talking about it (http://blogs.msdn.com/psssql/archive/2009/09/11/fun-with-locked-pages-awe-task-manager-and-the-working-set.aspx) and

  • Hi Brent,

First let me thank you for all of the outstanding information you're passing on here. I have a question about placement of the tempdb. As you've stated here, the conventional wisdom is to put tempdb on its own disks. We're deploying our new server to an EMC SAN, and in another post you advocate putting data, logs, and tempdb on a single array of dedicated disks (the more the better). If I were to have 18 total disks to play with, would you create a single RAID 10 for all of the files, or put tempdb on its own RAID 10?

    Thanks in advance.

    • It always depends – depends on the IO profile, depends on how much load you put on TempDB, etc. If I knew I was dealing with apps that didn’t do much work in TempDB, I’d rather have, say, a 10-disk RAID 10 array for data and an 8-disk RAID 10 array for logs.

  • This article is awesome, Thanks a lot 🙂

I have a question regarding the tempdb files and the Max Degree of Parallelism. Some CPUs have HyperThreading, which makes it confusing to configure the number of tempdb files and the MAXDOP setting.

    For example, if I have a Dual-Core CPU with HT, there will be ‘4’ Cores appears in the Task Manager. How many tempdb files should I set and what optimal value for MAXDOP should I set?
    physical processor = 1
    physical core = 2
    logical core = 4

    Thanks in advance 🙂

    • Hi, Panco. Like everything else in SQL Server, the answer is – it depends. First, I would generally not enable hyperthreading by default unless you’ve got a really good reason to. If you do repeatable benchmarks with your application loads, and you find that hyperthreading increases performance, then go with 4 files, but only after you’ve verified that performance actually goes up – not down – with hyperthreading. Hope that helps!

  • Thayal Muhunthan
    February 24, 2010 5:36 am

    Hi Brent,

    Thank you for this excellent article. I managed to install the first SQL 2005 64 bit server on an ESXi VM server. See the below email from our network guy, who setup the VM server :

    The server has 3 drives, just for conformity purposes, c: D: – databases E: – Logs.
    As it is a VM it does not have physical drives, so splitting logs, data and the os drives makes no performance difference as the virtual disks runs on the same physical hardware anyway.

Do you agree with his comments? Or is it always best practice to leave the TempDB on a separate drive?

Secondly, is it OK to leave the system DBs on the C drive, or what is the best practice regarding the system DBs? By default they are always saved on the C drive. How do we move them?

    Advance thank you for your advice.

    Thayal

  • Hi Brent,

    I have got SQL Server 2005 SP3 standard edition.
I have done some research but I couldn't find answers.

    Would it be possible to email performance dashboard reports on a scheduled basis from the same instance?

    In other words, could I create a subscription on the top of a dashboard report?

    Could I run it from SSRS?

    Thank you.

  • One thing I was hoping to find in either this post or the previous one, was a discussion about granting rights to DBAs.
    In my org, we have two types: DBAs that have admin rights to the SQL server, and then DBAs who have “just the rights they need” to create and manage the databases.
    The first group is easy, but the second group… It’s always difficult to make sure they have everything they need.
    SQL sets up a group called SQLServerMSSQLUser$$ at install and it places the service account for SQL in there. If I place my rank and file DBAs into that group (as well as the RDP group), does that pretty much accomplish everything I’m looking to do? Is that too much or too little?

    Thank you!

  • Hi,

    we are planning to migrate our SQL Server 2000 Database to SQL Server 2008 R2. We just have the possibility to split the installation on two disks, one of them running the system (Windows Server 2008 R2).

    How can we get the best perfomance in this setup?
    Which component is the bottleneck?

    What do you think about:
    Disk 1:
    System
    TempDB
    Logs

    Disk 2:
    Database
    Pagefile?

    Or is it better to move the Logs to the Database Disk? Or should I change Database and TempDB?

    Feels like the setup could be turned round any way one likes and it still stays slow.

    Thanks and Greets

    Max

  • Great site Brent!

    Use it a lot, concise good advice and not too much overload on the techy jargon.
    Been a DBA for a long time, simple is good.

    These install checklists are really handy
    🙂

    Like Panco above, I was curious about the no. of tempDB data files to allocate?
    eg.for a server with 2 x quad core CPU with HT:
    physical processor = 2
    showing as 16 CPUs in Task Manager

    I’d reckon go with 8 tempDB data files, unless as you say testing shows an improvement with HT.
    We shall see.

    Keep up the good work!
    Ewan

  • Hi Brent,

    I have a question for creating and placing TempDB data and log files.

    We have 2 Quad core CPUs (8 processors) and the storage is SAN

    We have assigned one disk say T:\ to keep TempDB data and logs on same disk T.

    Here, if I want to make sure to have 8 data files (1 per core), then Do I need to create 1 mdf & 7 ndf files and 1 ldf file on same disk T? or

    One mdf file on T drive and the 7 ndf files need to be created on separate 7 other disks?

    or 8 data files (including 1 mdf & 7 ndf files on T) on T disk and Ldf file on other separate disk say L?

    Please clarify me..
    Thanks

Hi Brent, I was under the impression that if we have the SQL Server account added to Lock Pages in Memory, SQL Server would not release any memory, so I am confused by the statement
    “set the minimum server memory to 50% of the server’s total memory. This will let SQL Server release memory if the server comes under memory pressure, like if someone remote desktops in and runs a very poorly written application.”

  • Hi, I’m a developer but I’ve been getting into SQL tuning more and more on some projects … Read the internal troubleshooting 2008 book very helpful and well written !

I'm working on a small 64 bit server and only have two drives to dedicate to SQL 🙁 … Where should I put the tempdb? All with the mdf, all with the ldf, or should I split it?

    • David – thanks, glad you liked the book. If you only have two drives, I’m assuming you mean two RAID arrays. It all depends on the amount of user database reads, writes, and TempDB load. You need to strike a balance for your needs. I’ve seen shops that put user databases and log files on one array, and TempDB on its own array because they incur so much TempDB load.

      If you literally only have two drives, not two RAID arrays, then make a mirror out of them (RAID 1) and protect your data from failing hard drives.

      Hope that helps!

  • Does anyone know how to add extra accounts to the SQL Administrators group on a cluster install?? I added the 3 (current) DBA domain accounts during the install on the DB Engine Configuration screen but now need to add a new starters account. Where can I do this??

  • Doesn’t matter. Managed to connect to Management Studio (didn’t know I had to look for DB Engine on the cluster name, not the server name) and added it in there.

    🙂

  • Hello Brent,

I just want to say my hearty thanks to you for your valuable contribution to SQL Server. The best part is that you are sharing your hard-earned knowledge with everyone, and everyone benefits from it. I pray to GOD to give you a long life and send people like you to this earth, and to make all your good desires come true.
Just keep it up, and we would also like to share our knowledge with you, if it adds some value for you.

    Million Thanks again,
    Nikesh

  • Brent,

    I am the IT Manager for a GC (Commercial Construction). This 2 part set may have saved me plenty of heartache.

    I know you don’t do the crystal ball thing but I need to try and get some clarification if possible.
This is “Tabula Rasa” for us. No database history, never had a SQL installation, NOTHING to go on.

    We are putting in SQL2008R2 Standard Ed. with NOS=MS Server 2008R2; Server has 24GBRAM, 2xQuadCore Processors; NOS & SQL Programs on 250GB Mirrored set C:; Raid5(4TB set)E:(200GB)for TEMPDB, F:(400GB)Contract Manager,(Accounting, Proj Mgt, Inventory, Document Mgt.),G:(1.8TB)SQL Data. Plus 6TB mass storage standing by for Document Mgt. scan images; NAS Expansion and SQL data move if needed.

    Vendor Sys. Analyst states that with 20 users he estimates we can expect to generate upwards of ~3TB data in about 3 years not including the document scans which go to shared storage separate.

    Have read same “best practice: Move TempDB to its own Drive” on MSDN “Scaling Up Your Data Warehouse w/ SQL Server 2008”. However, I get the idea that this is for fairly substantial SQL environments.

    I totally like your perspective on creating fixed TempDB files, according to # cores, separate partition all plays to my sense of order. BTW – Our vendor’s System Analyst says all this is foreign to him…

    Here’s the question: How do I really determine whether or not (given the growth input I have from the vendor Systems Analyst) to use AUTOGROWTH with a single TempDB or implement multiple fixed, and calculate what size the fixed files should be?

    Mind you we do about 1200 construction jobs a year and 3 years is nothing, this is a very document intensive business and every job generates the same data – its just a matter of scale be it a $25,000 job or a $10 million $ job.

And by the way, what is the naming convention used to keep multiple temp files “straight”? Can you use tempdb0, tempdb1, etc. – as simple as that, or what?

    Thank you for such a clear and valuable resource and any further light you can shed to help.

    Sincerely, Tom W.

    • Hi, Tom. Even though this is a new app for YOU, it isn’t a new app for the vendor. Ask them to hook you up with other customers who have similar sizes to your company, and talk to their IT staff about their experiences with the application. When we’re talking 3 terabytes of data in 3 years, there’s no way for me to architect a blank-sheet system for you in a blog comment – I wish I had a fast & easy answer, but you’re talking about something that usually takes me 2-4 hours minimum, often 2-4 days.

      You’ve got a lot of good questions in there, but with the quantity of questions, I think you’re going to be better off served by starting a relationship with a DBA or a consultant. I’m not trying to give you a sales pitch here, but just set expectations about what you can get on the web for free.

      • Brent,

        Thanks for the quick reply. No worries mate! Have been in your shoes too many times (23 years consulting). Have been researching fast & furious but thought I’d ask. Already talking to local SQL consultants and have request in to vendor for other like GC end users. Appreciate your input. Will make this a regular stop for continuing education “on the fly” as I bring this new service to fruition.

        All the best, Tom W.

  • Brendan Bartlett
    March 17, 2011 1:59 pm

    Brent,

    In Windows Server 2008 there are a variety of “Server Roles” that can be installed through the OS’s “Server Manager” tool. When installing SQL Server on this platform are there any of these roles that should be installed when it will be primarily a SQL Server platform?

    Perhaps “Web Server (IIS)” if I’m installing Reporting Services?

    • Brendan – I don’t recommend using a SQL Server box for anything other than SQL Server. Given the licensing costs for SQL Server, if you need a web server or anything else, it’s best to put that on another box.

  • Brendan Bartlett
    March 17, 2011 3:49 pm

    Good point regarding IIS but I was actually just trying to use that as an example rather than asking about that role in particular.

    My understanding of the roles is that usually a server performs a particular function such as a file server, print server, application server etc. rather than just being a generic server doing 20 different tasks. (‘Cause that never happens, right?) So, for each particular function there is set of OS utilities/options that need to be configured. For example, if it’s a file server you have to set up shares, security on the shares, security on the folder (and whatever else I’m forgetting. I’m not a sysadmin!)

    What I was wondering was if any of these roles were designed with an app like SQL Server in mind. My guess has always been that they could just be ignored because the SQL Server install program is designed to check for all the required components and install them if they are missing.

  • Hi Brent

    under “Configure SQL Server memory for best practices”,
you refer to the AWE setting. Does this still hold true for Windows 2008 R2 running SQL Server 2008 R2?

    Thanks

  • Thanks for the tips!

    Mike Duncan
    MCSD

  • I modify tempdb and logfiles with

    use master
    go
alter database tempdb modify file (name='tempdev', filename='T:\MSSQL\DATA\tempDB.MDF', size = 1mb)
go
alter database tempdb modify file (name='templog', filename='L:\MSSQL\LOGS\templog.LDF', size = 1mb)
    go

then i create additional tempdbs and log file, my question is; should i delete the files i created of 1mb? or what should i do?

  • I set up a server 2008 standard 64 bits with 8 gb of ram. I created the partitions T for tempdb and L for logs, then i created the files using:

    use master
    go
alter database tempdb modify file (name='tempdev', filename='T:\MSSQL\DATA\tempDB.MDF', size = 1mb)
go
alter database tempdb modify file (name='templog', filename='L:\MSSQL\LOGS\templog.LDF', size = 1mb)
    go

then i created tempdb files depending on my cpu cores:

    USE [master]
    GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev2', FILENAME = N'T:\MSSQL\DATA\tempdev2.ndf' , SIZE = 10GB , FILEGROWTH = 0)
    GO

i did the same with the log, but i see that I have 1 tempdb file with 1mb and 4 with 5 gb, may i delete the dbfile of 1mb and the log file of 1mb i created initially? as to memory.. what should be my min memory and max memory taking into account i have 64 bits and 8 gigs? no awe?

    • Hmm – based on what it looks like you’re doing, you’re adding additional files rather than replacing the ones you’ve already got. By adding more files, you’re not getting rid of the initial ones.

Hello, does anyone have a format or template for presenting a technical report on a server's OS and SQL diagnostics?

Server: Windows 2008, SQL 2008 Standard. What is the memory consumption per user? If I connect users through Terminal Server, what are the memory and bandwidth consumption of the server with Windows 2008 and SQL installed? Thank you

    • Sorry, I don’t have a number for that – it would depend on what services you start on each user’s login, what build of SSMS they’re using, if they’re using any SSMS plugins, etc.

  • You mention setting up alerts for sev 16-25, but sev 16 is “other user fixable errors” isn’t it?

    Are there particular errors that come through as a 16 that we don’t want to miss, or is 17-25 better?

    I ask because after configuring 16-25 on a box our alerts system was flooded with Sev16’s for “DESCRIPTION: An exception occurred while enqueueing a message in the target queue. Error: 15404, State: 19. Could not obtain information about Windows NT group/user…”

    I admit this was an issue that needed to be brought to light, but the system owner is questioning if we need email alerts for such messages or not.

    What are your thoughts on Sev 16 specifically?

    Thanks!

  • Hi Brent,

    We just bought an IBM 3850 X5 with a pair of Intel E7-4870 CPU’s (20 cores total), 256GB RAM, and 8x300GB SAS 10K DAS. The DAS connects to a single M5015 SAS controller, which supports up to 1MB stripe size, and has two internal SAS ports. The server has 8 internal drive slots, 4 drives per backplane/channel/port. We’ll be using Windows 2008 R2 and SQL 2008, unsure if R2. The SQL db & logs will be hosted on dedicated FC NetApp LUNs. Will it be fine to make each channel a 4-drive RAID 10, and use logical drives for Windows Temp & SQL Temp DB, or do we really need to consider using a different RAID setup just to place the temp files on dedicated spindles? I haven’t found any tests for >128KB stripe size, but I’ve read to use 256K stripe for 2008. Should I consider the 512K or 1M stripe size since it’s supported?

    Thanks!

    Jim

  • On best practice, “Move TempDB to its own drive”, when you say “Drive” do you mean physical drive?

    I have 2 drives in RAID 1 (C:) with Win 2008 R2 and the OS resides there. Then the remaining drives (4) I have in RAID 10 and partitioned up as logical drives D: (where SQL 2008 R2 64bit installed/resides) and E: (SQL Data folder\files reside). So in my situation, partitioning another logical drive (F:) and putting the TempDB there would not gain me anything – correct?
    Thanks.

Hello, how can I install SQL Server 2008 R2 without getting the error creating a new instance? Thanks

  • Vikki McCormick
    September 3, 2013 9:16 am

Thanks for this article. I have referred to it many times. Now that we are getting into 2012, I just heard an interesting comment from the SAN guy. He ran into an issue, and read that in 2012 there is a bug and you can no longer set tempdb up on its own drive on a Windows Server 2012 cluster. The SQL Server can't find the tempdb. I found a TechNet article that says it was supported with some tweaking. Seems to me that this should work. I have not been able to sit down with him to see exactly the errors he is getting yet, but when I do I was hoping to go in with some info. Has anyone else had any trouble setting up a 2012 SQL Server on a Windows 2012 Server cluster and not been able to set up tempdb on its own drive? Regards.

    • Vikki – I haven’t had any issues with that, no.

      • Vikki McCormick
        September 3, 2013 1:17 pm

        Thanks. I think I missed his point. He is saying it relates to mount points specifically. In 2008 you could use the installer to separate out user databases, system databases, logs and tempdb to separate mount points, but in 2012 it just gives you an error. There is a bug listed for the initial release. But I believe we are using SP1.

        SP1 BUG FIX for 2012 – 878531 You cannot uninstall SQL Server 2012 that has dependencies on multiple mount points.

        http://support.microsoft.com/kb/2674319

        I suppose I will have to set up a test. lol. Thanks again.

        • Vikki McCormick
          September 3, 2013 1:44 pm

          Sad Face!!!

          TITLE: Microsoft SQL Server 2012 Service Pack 1 Setup
          ——————————

          The following error has occurred:
          Could not find the Database Engine startup handle.

          If I figure this out. I will post.

          • Vikki McCormick
            September 3, 2013 2:02 pm

Easy Peasy. Folder permissions needed to be set up correctly on the Mount Points. Not sure it was the same issue that the SAN team experienced yet. I am not testing a totally parallel environment. But I was able to install to two mount points. Thanks again.

What is the recommended SAN configuration for the following scenario? I've read conflicting opinions on RAID 1/5/6/10, so your advice would be appreciated. Would the following system, since the SAN only has a single controller, benefit from a single RAID 10 config?

    Virtualized server with 1 quadcore processor, 16 GB RAM, Windows Server 2008 R2 SE
    Small SAN with a single controller and 8 disks.

    The main purpose for this server is the SQL Server 2008 R2 SE backend for a web application.

    IIS, SSIS, SSRS and CA ArcServe backup applications installed and running on the server.

    Also, what would you set the min and max sql memory to in this scenario?

    • Todd – unfortunately this is beyond the scope of something we can do in a blog post comment. Feel free to contact us with the form at the top of our site if you’d like consulting assistance.

Hi Brent, what are the best practices for configuring the tempdb and model databases?

My standard practice is pretty much the typical: move tempdb to its own LUN, split off all data and log files onto their own LUNs as well, and adjust the model db settings to simple recovery. But the one thing I've been doing is installing the shared features to C, which is the default, and installing the system databases, sqlservr.exe, etc. on their own drive. This has worked for my team for a very long time, but I figured I'd ask if that is a good practice – leaving the shared features on C and keeping the system components (.exe, master db, msdb db) on the same drive, but not on C.

    • Eric – no, definitely not, don’t keep any databases at all on the system drive.

So you're saying it's best to install sqlservr.exe and all those components on one drive (separate from the OS of course), and the master, model, and msdb on a different drive as well.

        • It’s totally okay to install SQL Server’s binaries on any drives you like – those aren’t really touched after the system starts. However, no databases should be on the OS drive. Thanks!

  • Oscar Gonzalez
    February 20, 2014 7:22 pm

What I'm wondering is, if we have three LUNs, is it okay to have the setup below?

    TEMPDB
    DATA (user and system databases)
    LOG

    I’m trying to find specific reasons on why we should separate master, model, and msdb from user data files. Other than segregating the IO activity on msdb during backup jobs and other maintenance plans I don’t see much of a benefit. Would anyone be willing to share their input on this matter? Thanks.

    I realize this an old thread, but it appears to have some recent activity, so I thought I’d give it a shot.

    • Oscar – correct, if master/model/msdb represent serious IO throughput issues for you, you probably have some interesting workload issues that should be addressed. 😉

  • abu elshabab
    April 1, 2015 5:29 am

    Thanks for this post!, but I have a question:

    You said: “If you have a dual-cpu quad-core server (8 cores total) and an 80-gb array for TempDB data, you would create eight 10gb files for TempDB.”.

I didn't understand the formula here. How did you arrive at 10GB for each file?
    And excuse my knowledge, but what do you mean by “80GB array for TempDB”? it seems that I’m so shallow in SQL Server 😛

    • An 80GB array means someone gave you an 80GB drive for TempDB. If you have 80GB of space, and you need to put 8 files on it, 80 / 8 = 10GB each. Hope that helps!

  • The Link “The Best Free SQL Server Downloads List ” does not work

  • […] a quick search for Checklist and sure enough Brent had a Pre-Installation Checklist as well as a Post-Installation Checklist for SQL Server 2005.  My personal installation/configuration steps differ slightly from Brent’s […]

  • What is the general rule for splitting database files? At what size should you consider splitting a database into multiple files?

  • Dear Brent,

    Now that Windows Server 2019 is adopted, should we format our disks into 2MB allocation sizes or stick to 64k?
    I haven’t found an article that has either tested this or does/doesn’t recommend this. Do you have any thoughts about this?

    Thanks in advance!
    Teun

  • Hi Brent,

Can you help me? I have one of the production servers showing the process SQL Server Windows NT – 64 Bit consuming 92% of the server memory in Task Manager. Can you tell me what could be the cause? How can I reduce this percentage to normal?

Hey Brent, I see the link to Paul's blog on instant file initialization is broken (due to the extension changing from .aspx with no redirect 🙁 )

    New Link is:

    https://www.sqlskills.com/blogs/paul/how-to-tell-if-you-have-instant-initialization-enabled/
