Blitz Result: User Databases on C Drive
By default, SQL Server’s user databases are installed on the system drive. This is a recipe for disaster in several situations:
- If your users do something that requires the databases to grow (like never purging backup history from MSDB or building giant temp tables), they can grow until the system drive runs out of space, and Windows will stop dead. It’ll be your fault.
- If someone else does something to run the system drive out of space (like downloading huge files to their desktop or running big Windows Updates), the system will stop dead. It won’t be your fault – but actually, it still will be.
- If transaction log backups aren’t taken frequently enough, regular day-to-day activity or index maintenance can grow the log files until the drive fills up. You guessed it – your fault.
This part of our SQL Server sp_Blitz script checks sys.master_files for database files on the C drive. If your system has a non-C system drive, you’ll need to perform this check separately yourself.
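You can run a similar check by hand. This is an illustrative sketch (not the exact sp_Blitz code) that lists every database file whose physical path starts with C:

```sql
/* Find database files living on the C drive.
   sys.master_files holds one row per file for every database. */
SELECT d.name AS database_name,
       mf.name AS logical_file_name,
       mf.physical_name
FROM sys.master_files AS mf
JOIN sys.databases AS d
  ON mf.database_id = d.database_id
WHERE UPPER(LEFT(mf.physical_name, 1)) = 'C';
```

If your system drive is a different letter, swap it into the WHERE clause.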
To Fix the Problem
Read the Microsoft knowledge base article on how to move SQL Server databases, but user databases (which are often larger than system databases) call for special considerations. Consider log shipping the database under another name: take a full backup and transaction log backups of the existing database, then restore those backups as a database with a different name on a different drive. During an outage window, you can:
- Put the original database in read-only mode
- Take and restore the last set of transaction log backups onto the other database
- Detach the original database
- Rename the newly restored database to the original name
You’re off and running in a matter of minutes rather than hours.
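The cutover steps above can be sketched in T-SQL. All names and paths here (SalesDB, the D drive, the logical file names) are hypothetical placeholders – substitute your own, and rehearse this on a test server first:

```sql
/* Seed a copy on the new drive under a temporary name. */
BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\SalesDB_full.bak';
RESTORE DATABASE SalesDB_New
  FROM DISK = N'D:\Backup\SalesDB_full.bak'
  WITH MOVE 'SalesDB'     TO N'D:\Data\SalesDB.mdf',
       MOVE 'SalesDB_log' TO N'D:\Data\SalesDB_log.ldf',
       NORECOVERY;  -- stays restoring so more logs can be applied

/* Keep restoring log backups WITH NORECOVERY until the outage window.
   Then, during the window: */
ALTER DATABASE SalesDB SET READ_ONLY WITH ROLLBACK IMMEDIATE;
BACKUP LOG SalesDB TO DISK = N'D:\Backup\SalesDB_final.trn';
RESTORE LOG SalesDB_New
  FROM DISK = N'D:\Backup\SalesDB_final.trn' WITH RECOVERY;

/* Detach the original, then take over its name. */
ALTER DATABASE SalesDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_detach_db @dbname = N'SalesDB';
ALTER DATABASE SalesDB_New MODIFY NAME = SalesDB;
```

Keep the detached files around until you’ve confirmed the renamed database is healthy.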
After this change, you’ll have less reliability risk, and your system may even perform faster if the C drive was on slow storage. You can gauge that through the PhysicalDisk Avg. Disk sec/Read and Avg. Disk sec/Write counters in Performance Monitor, or through the file activity DMVs.
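For the DMV route, here’s one common way to eyeball per-file latency (a sketch, not the only approach – the stall counters are cumulative since the last service restart, so compare snapshots over time for real analysis):

```sql
/* Average I/O latency per database file since SQL Server last started. */
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
ORDER BY avg_read_ms DESC;
```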