SQL Server Tasks You Probably Shouldn’t Automate


Every now and then I run across an automated script that does something a little suspicious. I’m not saying you should never put these things into a script, but if you do, seriously think about the surprise gotchas when someone runs the script:

  1. DBCC commands other than CHECKDB
  2. sp_configure (and especially RECONFIGURE afterwards)
  3. ALTER SERVER CONFIGURATION
  4. ALTER DATABASE
  5. ALTER AVAILABILITY GROUP
  6. CREATE INDEX or DROP INDEX
  7. KILL
  8. SHUTDOWN
  9. And most of the database engine management stored procedures

If you’re doing any of these on a scheduled basis, take a few minutes to document what you’re doing, why, and whether it’s safe to stop doing it. Your successor will thank you.
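
Item 2 is a good illustration of why: sp_configure changes look harmless sitting in a setup script, but for dynamic options RECONFIGURE puts the change into effect server-wide the moment the script runs. A minimal sketch of the pattern to watch for (the MAXDOP value here is purely illustrative):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- This changes behavior for every query on the instance as soon as it runs,
    -- which is a nasty surprise when the script gets rerun on the wrong server.
    EXEC sp_configure 'max degree of parallelism', 1;  -- illustrative value only
    RECONFIGURE;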

Believe me, otherwise she’s going to throw you under the bus when you’re gone, and if you haven’t left something behind to defend you, you’re going to look like That Guy.


24 Comments

  • Hrm. I use a few of these in different automated scenarios.

    DBCC DBINFO to get a last good DBCC date for my random DBCC checker.

    Alter database to set DB to single user before final backup/drop before archiving to tape.

    Create index to make sure an index exists in an R-word-ativity workspace where a custom email body parsing script I wrote gets used. It isn’t for every case so no use having it in 300 DBs when it’s only needed in maybe 5.

    Not sure there are many good alternatives where I have them implemented though.
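
    A minimal sketch of the DBCC DBINFO approach mentioned above. DBCC DBINFO is undocumented, so treat the column and field names here as assumptions rather than a supported interface:

    CREATE TABLE #dbinfo (ParentObject varchar(255), [Object] varchar(255), Field varchar(255), [Value] varchar(255));

    -- Dump DBCC DBINFO for one database and pull out the last known good CHECKDB date.
    INSERT INTO #dbinfo
    EXEC ('DBCC DBINFO (''YourDatabase'') WITH TABLERESULTS, NO_INFOMSGS');

    SELECT [Value] AS last_known_good_checkdb
    FROM #dbinfo
    WHERE Field = 'dbi_dbccLastKnownGood';

    DROP TABLE #dbinfo;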

    Reply
  • I’ve seen ALTER DATABASE automated somewhat often – 2 specific use cases:
    1.) Dev, test, and UAT SQL Agent jobs that restore databases – they kill connections, set the database to single user mode, do the restore, manipulate data, and then set it back to multi-user (a rough sketch of this appears below).
    2.) A client had a reporting setup with two databases: DB1 took the loads throughout the day, while DB2 was marked read only for performance and was used by the reports. At the appointed time overnight they would kill all connections to DB2, rename DB1 to DB2 and vice versa, then mark DB1 read only and start the day-long sync of DB2 (first sync’ing with DB1, then all the other loads). I don’t know that I would have done it that way, but it was actually a fairly elegant solution with minimal disruption in service.
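
    A rough sketch of use case 1, with placeholder database and file names:

    -- Kick everyone out and take exclusive control before the restore.
    ALTER DATABASE DevCopy SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    RESTORE DATABASE DevCopy
        FROM DISK = N'\\backupshare\prod\DevCopy.bak'
        WITH REPLACE, RECOVERY;

    -- ...data scrubbing / manipulation steps go here...

    -- Open the database back up to normal connections.
    ALTER DATABASE DevCopy SET MULTI_USER;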

    Reply
  • CREATE INDEX or DROP INDEX – Yes, we do script the dropping and creation of spatial indices. Why? Because the data is frequently updated and performance is impacted for our desktop and application users of the spatial data if the spatial indices are not current. This may be something specific to GIS shops but I suspect with the growth in the use of spatial data types this sort of thing will become a bigger issue. In any event, we schedule the index building to occur after hours.
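
    A minimal sketch of that kind of after-hours spatial index rebuild, with placeholder table, column, and extent values:

    -- Drop the spatial index if it exists, then rebuild it against the freshly updated data.
    IF EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = 'SIdx_Parcels_Shape'
                 AND object_id = OBJECT_ID('dbo.Parcels'))
        DROP INDEX SIdx_Parcels_Shape ON dbo.Parcels;

    CREATE SPATIAL INDEX SIdx_Parcels_Shape
        ON dbo.Parcels (Shape)
        USING GEOMETRY_AUTO_GRID
        WITH (BOUNDING_BOX = (0, 0, 500000, 500000));  -- assumption: the data's coordinate extent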

    What is the main downside to this? Any recommendations?

    Thanks for the post. Useful, as always. Cheers.

    Reply
  • Hi Brent,

    *Junior Alert*

    Can you give a few examples of gotchas that could occur when using CREATE INDEX or DROP INDEX in an automated script?

    Thank you

    Reply
    • KC – sure, say your script goes haywire and adds dozens – or in one case I saw, hundreds – of indexes. Are you actively monitoring the number of indexes on each table?
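
      A quick way to keep an eye on that, sketched with an arbitrary threshold of 10 indexes per table:

      -- Count indexes per user table and flag anything that looks runaway.
      SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
             OBJECT_NAME(i.object_id) AS table_name,
             COUNT(*) AS index_count
      FROM sys.indexes AS i
      JOIN sys.tables AS t ON t.object_id = i.object_id
      GROUP BY i.object_id
      HAVING COUNT(*) > 10  -- arbitrary threshold
      ORDER BY index_count DESC;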

      Reply
      • Oh what a tangled web we weave, when first we practice to ‘turn on auto-pilot, walk away, and scratch our heads when the monkey goes wonky’.

        Good word. Thanks.

        Reply
  • I have a vendor that has an automated kill job in production: if a SPID has been open for over two hours, it kills it. Fortunately, on 2008 R2 it doesn’t work, because they don’t pull the output of DBCC OPENTRAN properly, so the job just fails every time it runs. I still can’t talk them into removing it from their product, but I was able to talk them into letting us disable it.
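
    For reference, a DMV-based sketch of the same idea (the vendor job described above used DBCC OPENTRAN instead; the KILL is left commented out on purpose):

    -- Find sessions with a transaction that has been open for more than two hours.
    SELECT s.session_id,
           t.transaction_begin_time,
           DATEDIFF(MINUTE, t.transaction_begin_time, GETDATE()) AS open_minutes
    FROM sys.dm_tran_active_transactions AS t
    JOIN sys.dm_tran_session_transactions AS st ON st.transaction_id = t.transaction_id
    JOIN sys.dm_exec_sessions AS s ON s.session_id = st.session_id
    WHERE DATEDIFF(MINUTE, t.transaction_begin_time, GETDATE()) > 120;
    -- KILL <session_id>;  -- the part you probably shouldn't automate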

    Reply
  • I have DBCC Loginfo in an automated script to populate a temp table. Once a week I hit the entire system and check for unhealthy logs. If I don’t use DBCC Loginfo, what would you suggest as a substitute?
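
    For reference, a minimal sketch of the DBCC LOGINFO pattern described here (the column list matches SQL Server 2012 and later; older versions have no RecoveryUnitId column):

    CREATE TABLE #loginfo (
        RecoveryUnitId int, FileId int, FileSize bigint, StartOffset bigint,
        FSeqNo int, [Status] int, Parity tinyint, CreateLSN numeric(38, 0));

    -- One database at a time; loop over databases for the weekly system-wide sweep.
    INSERT INTO #loginfo
    EXEC ('DBCC LOGINFO WITH NO_INFOMSGS');

    SELECT COUNT(*) AS vlf_count FROM #loginfo;  -- a very high count suggests an unhealthy log

    DROP TABLE #loginfo;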

    Reply
  • I’m confused about the DROP/CREATE Index as well. As part of rebuilding our reporting DB we have a Scheduled Task that includes a step calling a stored procedure that drops the indexes (if they exist) and after repopulating all the tables (some 20 steps later), we have another step calling the stored procedure to create the indexes (if they don’t already exist).

    Is there something wrong with that automation?
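
    A hypothetical version of those two stored procedure bodies, with placeholder index and table names:

    -- Before the reload: drop the index if it exists.
    IF EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = 'IX_FactSales_OrderDate'
                 AND object_id = OBJECT_ID('dbo.FactSales'))
        DROP INDEX IX_FactSales_OrderDate ON dbo.FactSales;

    -- ...the twenty-odd repopulation steps run in between...

    -- After the reload: recreate the index only if it is still missing.
    IF NOT EXISTS (SELECT 1 FROM sys.indexes
                   WHERE name = 'IX_FactSales_OrderDate'
                     AND object_id = OBJECT_ID('dbo.FactSales'))
        CREATE INDEX IX_FactSales_OrderDate ON dbo.FactSales (OrderDate) INCLUDE (SalesAmount);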

    Reply
    • Is this something you run often? Rebuilding the data in a reporting database as a fallback in case of failure makes sense. Rebuilding it every night instead of loading incremental differences can often be a problem (unless you have a looong maintenance window every night).

      Reply
      • Kendra,

        We rebuild each weekend “from scratch” because we don’t have good “change” auditing in the source DBs. Also, a change in one DB could cause a different “result” to come from an unchanged record in another DB. We’ve found TRUNCATE and INSERT puts less strain on the transaction logs than UPSERTs. The job currently takes about 14 hours.

        Reply
      • Rudolf van der Heide
        December 16, 2014 3:06 am

        Kendra,

        In data warehouses this is common practice. Even moderate workloads cause excessive logging and slowdowns because of index maintenance (for example, 6M new records going into a 400M-row table). The other problem is index fragmentation; that can be fixed in a nightly job, but that just moves the problem to a different time.
        In my experience dropping and creating indexes is by far the best way to handle the effects.
        Partitioning would decrease the bad effects, but that requires enterprise edition and I don’t have it yet. It surely is a must if you want to do daily updates.

        Rudolf van der Heide
        Data warehouse architect

        Reply
        • Hi Rudolf,

          It’s a common practice in data warehouses to do incremental loads because the data is so large that reloading all the data would be time prohibitive. It’s also increasingly common to have 24×7 usage of a data warehouse which makes full reloads (and dropping indexes) impossible. Table partitioning is very popular for fact tables to help support incremental loads and deletes.

          If you have a small data warehouse with long downtime windows, then rebuilding it can work – but I wouldn’t say that’s a great pattern in general. If you build a warehouse like this at the beginning and it grows, your users are likely to become very unhappy over time.

          Kendra

          Reply
  • I have a really, really bad example, but:

    I have a corrupt database that has deferred transactions, and as a workaround, whenever a SPID pops up that is blocked by -3, I kill it.
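
    For context, blocking_session_id = -3 means the blocker is a deferred recovery transaction; a sketch of how such SPIDs could be spotted before anything gets killed:

    -- Find requests blocked by a deferred transaction (blocking_session_id = -3).
    SELECT session_id, wait_type, wait_time, blocking_session_id
    FROM sys.dm_exec_requests
    WHERE blocking_session_id = -3;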

    Reply
  • So, you are telling me that the Shrink TempDB and Shrink_Transaction_Log_Files_User_Databases jobs that the DBAs set up on my reporting server are probably a bad idea? Surely the 18 hours they spend running update statistics every day are worth it. I don’t know why they disabled their reindex and reorg jobs though; those seem important too.

    Reply
  • What do you think about changing the SQL Server maximum server memory on every restart?

    I have not implemented this, but I’m tempted to. Our SQL active/active failover cluster has 128GB of RAM on each node, but we have configured the SQL instances to use only 60GB at maximum, per Microsoft’s best practices. So, obviously, a lot of memory sits unused most of the time while each instance is running on its own dedicated node.

    http://dba.stackexchange.com/questions/50433/max-memory-settings-on-multi-instance-sql-server-2008-r2-cluster
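
    If it were scripted, it would presumably boil down to something like this at failover or startup (the 110000 MB figure is just an assumption, not a recommendation):

    -- Bump max server memory back up when this instance has a node to itself.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 110000;
    RECONFIGURE;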

    Reply
