Features SQL Server Needs to Add (And Drop)

When you work with the same database server day in and day out, you can get a bit blinded by your assumptions. You might just assume that the way SQL Server does it is how every database does it.

But check out these features from other database platforms:

Cache query results – SQL Server only caches raw data pages. If you join ten tables together and do an ORDER BY on the results, SQL Server doesn’t cache any of its work tables or the final results. It rebuilds that wheel again every time the query runs – even if the data hasn’t changed, and even if the data is read-only. Oracle’s SQL Result Cache does just what it says on the tin.
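
A hedged sketch of what that looks like on the Oracle side – the result cache is requested with an optimizer hint, and identical queries are then served from cache until the underlying tables change (table and column names here are made up):

```sql
-- Oracle: ask for this query's final result set to be cached.
-- Repeat executions of the identical query read from the result
-- cache instead of redoing the join and sort.
SELECT /*+ RESULT_CACHE */ c.customer_name, SUM(o.order_total) AS total_spent
FROM   customers c
JOIN   orders o ON o.customer_id = c.customer_id
GROUP BY c.customer_name
ORDER BY total_spent DESC;
```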

Transactional DDL – When you need to change a bunch of database objects as part of a deployment, better have your prayer beads handy. If any change goes wrong, especially in a long, complex script with no error handling, you can find yourself with a mess of a database where some changes worked and others didn’t. PostgreSQL’s transactional DDL support lets you start a transaction, make your table changes, and then only commit if they all worked.
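
To see why that matters, here's a sketch of a deployment in PostgreSQL (table and index names are hypothetical):

```sql
-- PostgreSQL: DDL participates in transactions like any other statement.
BEGIN;
ALTER TABLE orders ADD COLUMN ship_region text;
CREATE INDEX ix_orders_ship_region ON orders (ship_region);
-- If anything above had failed, ROLLBACK would have undone all of it,
-- new column and index included. Since it all worked:
COMMIT;
```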

Warm up the cache automatically – Restart a SQL Server or fail it over to another cluster node, and performance starts over from scratch: all queries will hit disk because nothing’s in the buffer pool yet. PostgreSQL’s pg_prewarm and pg_hibernator let you write the buffer pool to disk during a graceful shutdown, and warm up the cache on other nodes before you fail over so that they’re ready to fly.
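
A minimal sketch of the pg_prewarm side (table name hypothetical):

```sql
-- PostgreSQL: pull a table's pages into the buffer cache before
-- queries need them. pg_prewarm ships as a contrib extension.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('orders');   -- returns the number of blocks read in
```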

Refresh indexed views on your terms – SQL Server’s indexed views are persisted to disk, but they’re always updated in real time with every delete/update/insert of the underlying tables. While that sounds like a good thing, it’s not a great idea for data warehouses where we want the fastest possible loads, or reporting tables for an OLTP system. Oracle’s materialized views can be refreshed on a scheduled basis. Need more up-to-date data? Just refresh ’em. Need to hold off while you run a database-intensive Black Friday sale? No problem, leave the views as-is.
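
A hedged sketch of the Oracle version (object names are made up):

```sql
-- Oracle: a materialized view that's only refreshed when you say so.
CREATE MATERIALIZED VIEW mv_daily_sales
REFRESH COMPLETE ON DEMAND
AS
SELECT TRUNC(order_date) AS order_day, SUM(order_total) AS total_sales
FROM   orders
GROUP BY TRUNC(order_date);

-- Refresh on your own schedule: from a scheduler job, or by hand.
BEGIN
  DBMS_MVIEW.REFRESH('MV_DAILY_SALES');
END;
/
```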

Like these? I’ve got more. In a fun, relaxed webcast on Tuesday, June 23rd, I’ll show you a few features from other database platforms, plus give you a tour through my favorite Connect.Microsoft.com requests. I’ll also discuss features that just didn’t catch on – and probably need to be taken out to the farm. We’ll talk through the whole thing live over Twitter, too.

Register for the webcast now.


30 Comments

  • Thank you very much Brent.

  • +1,000,000,000,000,000,000,000,000,000 on the transactional DDL feature!

  • Hamid J. Fard
    June 3, 2015 11:04 am

    Hi Brent,

    I believe that some of these features already exist, and a DBA can make some of the others possible with a little creativity.

    1- Transactional DDL: You can turn on explicit transactions or use BEGIN TRANSACTION, then alter the table.

    create database test;
    use test;
    create table Test(ID bigint);
    begin transaction AlterTable
    Alter Table Test Add NewCol Bigint;
    select * from Test;
    Rollback Transaction AlterTable;
    Select * from Test;

    2- Pre-warm up: A DBA can create a simple Agent job to select some frequently accessed data from every database to bring its data pages into memory.

    3- Indexed Views: A DBA can disable the index, load the data, and rebuild it whenever the indexed view needs to be refreshed.

    create view VW_Sample
    With Schemabinding
    As Select ID from dbo.Test;

    Create Unique Clustered Index CIX on VW_Sample(ID Asc);

    --Check the Actual Execution Plan
    Insert Into Test Values(1);

    Alter Index CIX on VW_Sample Disable;

    --Check the Actual Execution Plan
    Insert Into Test Values(2);

    Alter Index CIX on VW_Sample Rebuild;

    --Check the Actual Execution Plan
    Insert Into Test Values(3);

    Note: This code was tested on SQL Server Express Edition.

    • This is one of those cases where it’s really helpful to check out how the other feature works. PostgreSQL’s transactional DDL goes far beyond what you’ve shown – which will only work in the default isolation level of READ COMMITTED. As soon as you start using row versioning, things start to break down.

      The cases where transactional DDL is most helpful (long-running reports) aren’t allowed when you’re using the features that help out long-running reports (SNAPSHOT isolation).

      It’d also be worth reading up on Oracle’s materialized view feature – their take on indexed views. It’s complex enough that I can’t summarize it, but it’s leaps and bounds above what we have. Kendra has produced some amusing demos where an indexed view can cause single-row inserts to take several seconds to complete. If I have a trickle-load warehouse, I don’t want that trickle load to be the primary source of load on the data warehouse.

      • Kendra Little
        June 3, 2015 11:23 am

        Correction: Insert of a single row takes 30 seconds in my favorite indexed view example. (Values inserted all in memory, laptop with SSDs.) I still giggle every time I show that.

        • Yep! 4 indexed views over a 15M row table – the insert-heavy process took 50 minutes. Drop the indexed views, insert, recreate the indexed views: 10 minutes. Indexed view performance BITES!

      • I didn’t realize that SQL Server’s transactional DDL was only allowed in Read Committed. That’s quite the gotcha. Thanks for sharing!

        I’m assuming that hasn’t changed in 2012 or above? The only reference to this restriction I can find is on the MSDN page for SQL 2008 R2.


        • It hasn’t changed with SQL Server 2012 or newer, and it’s really only transactional in that you can wrap it in a transaction – people still can’t read a table while you make a change. Postgres does behave the same way, though.

      • Hamid J. Fard
        June 3, 2015 7:12 pm

        Hi Jeremiah,

        Thanks for your reply. By the way, nobody will alter a table on a production server without a maintenance window or without considering the consequences.

  • James Lupolt
    June 3, 2015 11:21 am

    “Oracle’s SQL Result Cache does just what it says on the tin.”

    I’m curious to hear from Oracle DBAs about how happy they are with this feature and what benchmarking they’ve done. MySQL has something similar and it’s a notorious bottleneck that’s often best turned off.

    • AFAIK, it works fine in Oracle. But the feature is rather special purpose. You can best use it in (performance-wise) mature report environments. Not an obvious environment for MySQL.

  • Tim Cartwright
    June 3, 2015 11:56 am

    On the transactional DDL side, you can add transactionality and termination to your scripts by adding the script block below after all DDL/DML statements. This not only adds transactionality, but also keeps subsequent statements from running, leaving your database in a consistent state so that you don’t have to resort to a database restore.

    if @@error <> 0 and @@trancount > 0 rollback;
    if @@trancount = 0 set nocount, noexec, parseonly on;

    Like so:


    CREATE PROCEDURE dbo.spFooDoesNotExist1 -- this should succeed, as this proc does not exist yet
    AS
    SELECT [1] = 1;

    ALTER PROCEDURE dbo.spFooDoesNotExist2 -- this alter should cause an error, as this proc does not exist
    AS
    SELECT [1] = 1;

    if @@error <> 0 and @@trancount > 0 rollback;
    if @@trancount = 0 set nocount, noexec, parseonly on;


    --type ctrl+shift+m to replace template parameters
    IF OBJECT_ID (N'dbo.spFoo2') IS NULL
        EXEC (N'CREATE PROCEDURE dbo.spFoo2 AS RETURN;');
    GO

    ALTER PROCEDURE dbo.spFoo2
    AS
    SELECT [1] = 1;
    GO

    if @@error <> 0 and @@trancount > 0 rollback;
    if @@trancount = 0 set nocount, noexec, parseonly on;


    --type ctrl+shift+m to replace template parameters
    IF OBJECT_ID (N'dbo.spFoo3') IS NULL
        EXEC (N'CREATE PROCEDURE dbo.spFoo3 AS RETURN;');
    GO

    ALTER PROCEDURE dbo.spFoo3
    AS
    SELECT [1] = 1;
    GO

    if @@error <> 0 and @@trancount > 0 rollback;
    if @@trancount = 0 set nocount, noexec, parseonly on;


    SELECT * FROM sys.objects o WHERE o.name LIKE 'spFoo%';

  • Tim Cartwright
    June 3, 2015 11:57 am

    I agree it is somewhat of a PITA, and I would prefer the method you described, but it does do the job.

  • Eric Zierdt
    June 3, 2015 1:02 pm

    One of the things that seems like a simple change, and that would benefit me (and other SQL developers) greatly, is the ability to switch databases (using the USE command) with the database name coming from a variable, i.e. USE @DBName. This would greatly reduce the amount of dynamic SQL I need to write. I have requested this, and discussed it at Summit with the CAT team, but I can’t seem to get any traction on the idea.
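
    Today that means something like the following sketch (the database name is hypothetical):

    ```sql
    -- T-SQL today: USE won't take a variable, so it's dynamic SQL,
    -- and the USE only lasts for the inner batch.
    DECLARE @DBName sysname = N'SalesDB';   -- hypothetical name
    DECLARE @sql nvarchar(max) =
        N'USE ' + QUOTENAME(@DBName) + N'; SELECT DB_NAME();';
    EXEC (@sql);        -- runs in SalesDB
    SELECT DB_NAME();   -- the outer session never changed context
    ```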

    • The problem is scope. You are writing dynamic SQL and executing it in a call; the USE is only good for that call, not outside of it.

      If you seek something where you do something like this in a stored procedure:

      declare @DB sysname;

      fetch next from cursorOfDBNames into @DB;

      while @@FETCH_STATUS = 0
      begin
          use @DB;   -- hypothetical syntax: USE can't take a variable today
          fetch next from cursorOfDBNames into @DB;
      end

      I can see that being useful. Something about the scope of USE presents a limitation that MS is uncomfortable with surmounting.

      • Tim Cartwright
        June 3, 2015 1:19 pm

        You can accomplish what you are trying to do with the sp_msforeachdb proc, like so:

        EXEC sp_msforeachdb N'use [?]; /* your statements here */';

        However, that still does not solve the OP’s original desire to have the db be a variable.

        Here is my want: to be able to use a variable value passed in for the field names of a query. This is quite problematic when writing reports where you want to allow the end user to select/deselect the columns they are pulling. Something like:

        select @fieldnames from myviewOrtable

        A pipe dream, I know… until that ever happens, I’m stuck with dynamic SQL.

      • Eric Zierdt
        June 3, 2015 1:27 pm

        that is pretty much the type of situation I’m looking at, but I do similar things in scripts that aren’t procs, and in both situations it’d be nice.

  • How about a simple median function without having to do windowing (maybe I’m just not aware of one? Seems like such a simple thing)? And being able to persist min/max/medians – single values that could easily be checked on INSERT/UPDATE without resorting to triggers.

    And how about real dynamic Pivoting without resorting to dynamic sql?
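
    SQL Server 2012 did add PERCENTILE_CONT, but only as a window function rather than a plain aggregate – which is part of the complaint. A sketch with hypothetical table and column names:

    ```sql
    -- T-SQL (2012+): median per group. PERCENTILE_CONT is windowed,
    -- not a true aggregate, so DISTINCT is needed to get one row per group.
    SELECT DISTINCT
           Department,
           PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY Salary)
               OVER (PARTITION BY Department) AS MedianSalary
    FROM   dbo.Employees;
    ```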

  • Transactional DDL?

    If I need to change a bunch of database objects, I use Red Gate SQL Compare.
    It does all the changes within a transaction. If there are any errors, it rolls everything back.

    No prayer beads required. 😉

    • Tim Cartwright
      June 4, 2015 8:11 am

      You also have to pay for it, which is not always an option – some companies just don’t see the need.

      BTW, thanks for the heads-up on the foreach-db issues. Never ran into that before. Will have to keep an eye out.

  • David Warner
    June 4, 2015 3:52 am

    I would like to see something like Oracle’s RAC where there are multiple cluster nodes receiving read/write transactions and utilising the same disks – this would mean you could provide a true rolling upgrade/maintenance procedure with no downtime. From what I understand of SQL Server I don’t think this is achievable for read and write activity with zero downtime – however I will be very happy to be proven wrong! 🙂

  • Not sure what you mean regarding transactional DDL. It’s there and it works. What often breaks upgrade scripts in case of errors is that some errors do not terminate the batch. That is … an unwise design decision. It’s like ON ERROR RESUME NEXT.

    Therefore I think a missing feature would be sane transaction behavior in the case of errors. Right now it is essentially random which errors kill a transaction and which don’t. The rule should be that the transaction is never killed and only the erroring statement rolls back. Also, the batch should end, and SSMS should stop executing further batches.

  • Thomas Franz
    June 5, 2015 3:30 am

    Oracle has another nice feature called scalar subqueries cache (http://www.oracle.com/technetwork/issue-archive/2011/11-sep/o51asktom-453438.html)

    Simplified, it is a small in-memory table (256 entries) that caches the results of every scalar subquery in a SQL statement.

    select deptno, dname,
           (select count(*)
            from emp
            where emp.deptno = dept.deptno)
    from dept -- pk on this table is (country, deptno), so deptno is not unique

    SQL Server would run the subquery once for every row in dept.

    Oracle caches the result of the subquery for each parameter value (the deptno), so it runs only once per distinct deptno.

    Another example:
    -- executes the function f() for every row (in SQL Server and Oracle):
    select owner, f(owner) f from stage;

    -- Oracle runs f() only once per distinct owner, as long as there are no more than 256 distinct owners; SQL Server would still call f() once per row (if you remove the "from dual"):
    select owner, (select f(owner) from dual) f from stage;

  • Bryan Hamby
    June 5, 2015 8:56 am

    One of the things I miss from DB2 (and Postgres has very similar functionality) is the date functions. You want a string with the year, then the name of the month, then the seconds, followed by the day, and finally the quarter we’re in? Oh, and you want to display that using names from the Islamic civil calendar (in French)? No problem:

    SELECT VARCHAR_FORMAT(CURRENT_TIMESTAMP, 'YYYY Month SS DD Q', 'fr@calendar=islamic-civil')

    Or, you store a period in your database like “yyyyMM” (201505), and you want that as an actual date type? Again, easy:


    And maybe you want to add something to your dates. You just tell it the math, instead of having to use DATEADD():


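    The snippets for those two didn’t survive, but a sketch of both, assuming DB2 LUW’s TO_DATE and labeled-duration syntax (not the original examples):

    ```sql
    -- DB2: parse a yyyyMM period string into a real date value:
    SELECT TO_DATE('201505', 'YYYYMM') FROM SYSIBM.SYSDUMMY1;

    -- DB2: date math reads like math, no DATEADD() required:
    SELECT CURRENT DATE + 3 MONTHS + 2 DAYS FROM SYSIBM.SYSDUMMY1;
    ```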
  • Thanks for the insights. I’m looking forward to the Webcast and twitter session.

  • How about being able to shadow/emulate an existing session? Often we notice long-running queries, and queries getting stuck for a long time due to a bad query plan. We can’t do much other than wait an indefinite amount of time for the queries to complete, or terminate the process.

    If we have an option to be able to bring a background session live into a Management Studio query window, it can help in troubleshooting long running queries.

  • David Warner
    June 23, 2015 9:23 am

    How about Historical querying? Oracle and HP Vertica both offer these in differing forms … perhaps that may be of benefit.

  • I think the ability to commit any open/outstanding transactions in an idle session from a different session would be a feature to add.

    • Tim Cartwright
      July 6, 2015 7:46 am

      @Praveen, I think that is a really bad idea, actually. You should never commit anyone else’s transaction without an understanding of the work involved. If the session is truly idle and has left a transaction hanging, it would be better to kill the session and let the transaction roll back.