Let’s say you’ve got a vendor application that creates tables, loads them, queries them, and drops them all the time. You can’t change the code.
The tables involved desperately need an index in order to go faster, and they’re your biggest performance bottleneck.
Enter DDL triggers. Regular DML triggers fire when you insert/update/delete rows in a table, but DDL triggers fire when you create/alter/drop tables or databases. You may remember them from the last time Jeremiah and I played with them – way back in 2008!
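To make that concrete before we get fancy, here's a minimal sketch of a database-level DDL trigger (trigger name made up for illustration) that just prints the two-part name of each table as it's created, using the EVENTDATA() function that all DDL triggers rely on:

```sql
-- Minimal illustrative DDL trigger: fires after any CREATE TABLE in this
-- database and prints the new table's schema-qualified name. EVENTDATA()
-- returns an XML document describing the event that fired the trigger.
CREATE TRIGGER trgLogTableCreates ON DATABASE
AFTER CREATE_TABLE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @EventData XML = EVENTDATA();
    PRINT 'Table created: '
        + @EventData.value('(/EVENT_INSTANCE/SchemaName)[1]', 'SYSNAME')
        + '.'
        + @EventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'SYSNAME');
END
```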
The below code fires whenever a table is created, checks to see if the table name is like CustomerExport%, looks for the existence of a CustomerName field (which I probably should have done with XQuery, but I’m lazy), and if it exists, creates an index on the CustomerName field:
CREATE TRIGGER trgTableAdded ON DATABASE AFTER CREATE_TABLE AS
BEGIN
SET NOCOUNT ON
DECLARE @TableName SYSNAME, @SchemaName SYSNAME, @StringToExecute NVARCHAR(4000)
SELECT @TableName = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]','SYSNAME')
SELECT @SchemaName = EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]','SYSNAME')
IF @TableName LIKE 'CustomerExport%'
AND EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'CustomerName' AND TABLE_SCHEMA = @SchemaName AND TABLE_NAME = @TableName)
BEGIN
SET @StringToExecute = 'CREATE INDEX IX_CustomerName ON ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName) + ' (CustomerName);'
EXEC sp_executesql @StringToExecute
END
END
Voila – automatic indexing, and a lifetime of regret and careful debugging. See, there’s a boatload of ways this can break:
No datatype checks. In the above example, I’m not checking for datatype. If the vendor changes CustomerName to an NVARCHAR(MAX), my table creation will bomb with an error that CustomerName “is of a type that is invalid for use as a key column in an index.”
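One way to hedge against that, as a sketch: also check DATA_TYPE and CHARACTER_MAXIMUM_LENGTH in INFORMATION_SCHEMA.COLUMNS before building the CREATE INDEX string, so MAX types and overly wide columns get skipped. The 450-character cutoff here is my own conservative assumption, aimed at the 900-byte index key limit on older versions of SQL Server:

```sql
-- Sketch: guard clause that only indexes CustomerName when it's a string
-- type of indexable width. 450 characters is a conservative cutoff because
-- nvarchar stores 2 bytes per character and older versions cap index keys
-- at 900 bytes. CHARACTER_MAXIMUM_LENGTH is -1 for MAX types, so the
-- BETWEEN excludes those too.
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
           WHERE TABLE_SCHEMA = @SchemaName
             AND TABLE_NAME = @TableName
             AND COLUMN_NAME = 'CustomerName'
             AND DATA_TYPE IN ('char', 'nchar', 'varchar', 'nvarchar')
             AND CHARACTER_MAXIMUM_LENGTH BETWEEN 1 AND 450)
    SET @StringToExecute = 'CREATE INDEX IX_CustomerName ON '
        + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName)
        + ' (CustomerName);'
```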
Queries can fail. We had some queries fail (particularly SELECT INTOs) because the trigger changed the table's schema after the table was created but before the rows started going in. Multi-statement transactions may also fail, because if the trigger errors out, the whole transaction gets rolled back.
No duplicate index checks. If the vendor later decides to add an index as part of their code, I could be creating a duplicate index, and now slowing my processes down. Or they might just change the way they do exports, and no longer need the index at all – and I won’t be able to catch that since the tables and indexes appear and disappear so fast. I could actually set up another DDL trigger looking for table drops of these same names, and log index usage information to a table, then periodically analyze that information to see if I needed different indexing.
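The duplicate-index half of that could be hedged with a quick metadata check before the trigger creates anything – a sketch, assuming all we care about is whether some index on the new table already leads with CustomerName:

```sql
-- Sketch: only create the index when no existing index on this table
-- already has CustomerName as its leading key column (key_ordinal = 1).
-- Assumes @SchemaName, @TableName, and @StringToExecute were populated
-- earlier in the trigger.
IF NOT EXISTS (SELECT * FROM sys.indexes i
               JOIN sys.index_columns ic
                 ON i.object_id = ic.object_id AND i.index_id = ic.index_id
               JOIN sys.columns c
                 ON ic.object_id = c.object_id AND ic.column_id = c.column_id
               WHERE i.object_id = OBJECT_ID(QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName))
                 AND ic.key_ordinal = 1
                 AND c.name = 'CustomerName')
    EXEC sp_executesql @StringToExecute
```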
Lots of risk and work here. Let’s not even go into how incredibly hard it is to catch these issues when the tables and the query plans keep disappearing from memory – it’s not even enough to just grab the slow queries, but you also have to grab the tables and their contents so you can reproduce the queries to figure out the right index to build. Obviously this isn’t a solution everyone should go implement on their vendor apps, particularly mission-critical ones, but it’s an example of how far you can go to tune vendor code when other options have been taken off the table.