When you migrate to SQL Server 2014, rather than turning on the new cardinality estimator right away, I recommend sticking with the old cardinality estimator initially by leaving your database in SQL Server 2012 compatibility mode. Let the database settle down for a few weeks, let folks air out their grievances, and give yourself some time to make sure the system is working at least as well as the old system.
Once you’re on 2014, you can check out the new CE’s execution plans by playing around with trace flag 2312. That flag turns on the new CE in one of two ways:
- At the query level using OPTION (QUERYTRACEON 2312), or
- At the session level using DBCC TRACEON(2312)
The latter is easier if you have to test a lot of queries at once, or if you need to test a stored procedure. Just turn on the trace flag, and it’s on for only your session.*
* – I lied. It’s not.
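For reference, the two forms look like this – dbo.Posts is just a hypothetical table standing in for one of yours:

-- Query level: the new CE applies to this one statement
SELECT COUNT(*)
FROM dbo.Posts                -- hypothetical table
OPTION (QUERYTRACEON 2312);

-- Session level: the new CE applies to everything this session compiles
DBCC TRACEON(2312);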
If you set trace flag 2312 for your session, and then run a stored procedure whose execution plan isn’t already cached, that stored procedure gets compiled using the new cardinality estimator, and the resulting plan is put into cache.
Subsequent sessions will use that plan even if they don’t have trace flag 2312 enabled.
Here’s how to prove it on a SQL Server 2014 server in a 2012 (or earlier) compat mode database:
- Pick a rarely-executed stored procedure. (If you’re doing this in production, don’t use a stored procedure that will cripple your server if it happens to get compiled with the 2014 CE.)
- Recompile it using sp_recompile 'myprocname'.
- In a new SSMS window, run DBCC TRACEON(2312) to enable the new CE in your session.
- Click Query, Include Actual Execution Plan.
- Execute the stored procedure.
- Right-click on the query plan and click Show Execution Plan XML.
- In the first few lines, look for the string “CardinalityEstimationModelVersion” – if it’s 120, that’s the new CE; 70 is the old one. In this test, you’ll see 120.
- Close that query window, and start a new one. Run DBCC TRACESTATUS to verify that you don’t get a line for 2312, indicating you’re still on the old CE.
- Make sure Query, Include Actual Execution Plan is on.
- Execute the stored procedure again.
- Right-click on the query plan and click Show Execution Plan XML. Even though you’re in the old compat mode, you’ll be looking at a 120-level CE plan.
- Recompile that stored proc using sp_recompile 'myprocname' just to make sure it gets out of cache.
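If you’d rather script the whole experiment than click through SSMS, here’s a minimal sketch – it assumes a stored procedure named dbo.myprocname, so swap in your own:

-- Session 1: compile the proc under the new CE
EXEC sp_recompile 'dbo.myprocname';  -- flush any existing plan from cache
DBCC TRACEON(2312);                  -- new CE for this session (allegedly)
EXEC dbo.myprocname;                 -- compiles with CE model 120; the plan lands in cache
DBCC TRACEOFF(2312);

-- Session 2: a brand new window, no trace flag set
DBCC TRACESTATUS;                    -- returns no row for 2312
EXEC dbo.myprocname;                 -- reuses the cached CE-120 plan anyway

-- Check which CE model built the cached plan
SELECT qp.query_plan.value('declare namespace p="http://schemas.microsoft.com/sqlserver/2004/07/showplan";
    (//p:StmtSimple/@CardinalityEstimationModelVersion)[1]', 'varchar(10)') AS ce_model_version
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.objectid = OBJECT_ID('dbo.myprocname');

-- Clean up: kick the plan back out of cache
EXEC sp_recompile 'dbo.myprocname';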

So how do you test stored procedures with the new CE?
You could test on a different server with the databases set to the new 2014 compatibility level, but of course when you’re testing execution plans and query performance, the servers need to be identical. That’s not always possible.
If you have to test both the old & new CEs on the same server, you’ll need to create new stored procedures (like myprocname_2014) with OPTION (QUERYTRACEON 2312) on every single query in the stored proc – or, at least, all of the queries where you’re concerned about performance variances.
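A minimal sketch of that approach – the query inside is hypothetical, standing in for whatever the original proc does (and note that QUERYTRACEON normally requires sysadmin membership):

CREATE PROCEDURE dbo.myprocname_2014  -- side-by-side copy of dbo.myprocname for CE testing
AS
BEGIN
    SELECT o.OrderID, c.CustomerName  -- hypothetical query copied from the original proc
    FROM dbo.Orders AS o
    INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
    OPTION (QUERYTRACEON 2312);       -- this statement alone compiles under the new CE
END;
GO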
Kendra says: Well, there’s always plan guides. Kidding! I’M JUST KIDDING!
Good article, and another argument for a dedicated test environment. It doesn’t need to necessarily be exactly the same as your production environment, but an unchanging test environment lets you do A-B tests like this without impacting production.
We’re doing exactly this to test the new CE:
- Install SQL 2014 on a test server
- Restore a DB that’s set to CL 110
- Run our workload and record the numbers
- Restore that DB again
- Set the CL to 120
- If there are notes about foreign keys/indexes getting disabled, re-enable them
- Run the workload again and record the numbers
- Then compare! Since it’s all on the same test server, the only thing that changes is the CL.
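Scripted, that loop looks roughly like this – the database name and backup path are hypothetical:

-- Round 1: old CE behavior
RESTORE DATABASE TestDB FROM DISK = N'D:\Backups\TestDB.bak' WITH REPLACE;  -- hypothetical path
ALTER DATABASE TestDB SET COMPATIBILITY_LEVEL = 110;
-- ...run the workload, record the numbers...

-- Round 2: new CE, same server, same data
RESTORE DATABASE TestDB FROM DISK = N'D:\Backups\TestDB.bak' WITH REPLACE;
ALTER DATABASE TestDB SET COMPATIBILITY_LEVEL = 120;
-- ...run the workload again, then compare...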
Sean – yep, that’s a great way to do testing. I wish more folks did that. (And you make it sound so easy, heh.)
Could you not just get the plan handle for this proc and delete it? Then the next run would use the old CE?
DBCC FREEPROCCACHE (120_Plan_Handle)
Davis – yep, you just have to KNOW that it’s happening, heh.
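For the record, finding and evicting just that one plan looks like this – dbo.myprocname is a placeholder:

-- Find the plan handle for the proc
SELECT cp.plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.objectid = OBJECT_ID('dbo.myprocname');

-- Copy the plan_handle value from the output, then pass it as a literal:
-- DBCC FREEPROCCACHE (0x0600...);  -- placeholder handle; paste your own here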
We ran into a heck of a lot of problems with the 2014 cardinality estimator when we started to upgrade to 2014 from 2008, especially with more complex workloads such as those that usually run in our data warehouse. Ultimately we went forward with setting the databases at the 120 compatibility level and setting trace flag 9481 globally.
For more info on trace flag 9481
https://support.microsoft.com/en-us/kb/2801413
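Enabling it globally is a one-liner, though it only affects plans compiled after the flag is turned on:

-- Legacy (pre-2014) CE for every new compilation, server-wide
DBCC TRACEON(9481, -1);
-- To make it survive restarts, add -T9481 as a SQL Server startup parameter instead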
Just curious – why not just use 110 compatibility level?
Trace flags are cool…
Well, you would be able to take advantage of any new features that the 120 compatibility level provides while still using the old cardinality estimator. Those are listed here:
https://msdn.microsoft.com/en-us/library/bb510680.aspx – Admittedly, there are not a lot of earth-shattering new features between 110 and 120.
The SQL product team has fixed some CE-related performance issues with SQL 2014 SP1.
“FIX: Query performance issues when new cardinality estimator is enabled in SQL Server 2014”
https://support.microsoft.com/en-us/kb/3044519
Hi Brent,
I’ve read that trace flag 4199 must be used in order to turn “ON” new query optimizer features & bug fixes in new SQL Server versions. Otherwise SQL Server is not using them.
I guess the new CE is not in this list… or maybe I am a little confused.
Thanks
Alberto
Alberto – that was true for versions prior to SQL Server 2014, but in SQL Server 2014, those improvements are based on the database compatibility level.
4199 is still required to enable bugfixes though!
http://blogs.msdn.com/b/psssql/archive/2015/06/16/identifying-sql-server-2014-new-cardinality-estimator-issues-and-service-pack-1-improvement.aspx
Hi Brent/All,
I found a case where trace flag 4199 does not seem to do anything; installing SP1 and setting the compat level to 120 does the trick.
Although the example in the link works as described and needs the flag, in real life with stored procedures it does not affect the CE for the plan – it’s always the 2014 CE with the flag on or off.
Any ideas why that would be?
Thanks for your reply,
Adrian
Adrian – unfortunately, troubleshooting is kinda beyond the scope of what we can do in a blog post comment.
Here is my un-success story. We installed SQL Server 2014 without enough testing, basically because the DBA team was dragging on forever and didn’t do much of a job anyway. The dev team had like 2 weeks to test everything because of the deadline. Anyway, we are stuck with 2014 and it’s killing me with its cardinality estimator. Basically, the logic behind it seems to be that every inner join is a filter. Lookups? Never heard of them! If you have a join, the estimated number of rows is divided by 10. And I am working with a warehouse. I have a transformation query that returns 40M rows. SQL 2014 expects 15K. Of course, the plan is infested with nested loops and lazy spools, and the query now runs for 20 minutes where it used to take 5. Not one query – all of my large queries have suffered. Turning the estimator back to 2012 solves the problem, but that requires sysadmin privileges, which I don’t have on the production server. I wish there were a real switch for the estimator, so it could be downgraded without losing the rest of the 2014 functionality.
You can just use OPTION(QUERYTRACEON 9481) on queries you want to use the old estimator for.
Jonathan – doesn’t work with stored procs (and nested stored procs) though.
True. Except you need sysadmin privileges to do so.
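For reference, the query-level version Jonathan mentions looks like this – the table names are hypothetical:

SELECT f.OrderDate, SUM(f.Amount) AS total_amount
FROM dbo.FactSales AS f                        -- hypothetical warehouse table
INNER JOIN dbo.DimDate AS d ON d.DateKey = f.DateKey
GROUP BY f.OrderDate
OPTION (QUERYTRACEON 9481);                    -- compile just this statement with the legacy CE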
Any idea what the algorithm is for 2016?
For two columns, “lowest cardinality * SQRT(next highest cardinality) * cardinality” doesn’t seem to be it, in my tests, anyway.
I have had a scout around, but nada.
I’ve run into an issue with the Cardinality Estimator. Microsoft says the bug was fixed in SP1, but they didn’t mention that trace flag 4199 was required to activate the fix.
Upgraded the test server to 2014 SP2, ran ProblemQuery, and the cardinality estimator problem was present. Made the problem go away by using OPTION( QUERYTRACEON 9481 ).
The problem shows as *fixed* when using OPTION( QUERYTRACEON 4199 ) – there was a substantial difference in the plans when using TF 9481 vs. TF 4199.
If I understand it correctly, TF 9481 disables the new CE, but TF 4199 enables the new and improved CE installed with Service Pack 1.
OK.. my mistake. Microsoft *does* mention using TF 4199 to activate the fixed cardinality estimator released in SP1.
I’ve been testing SQL 2016 and SQL 2017 with a standard INFORMATION_SCHEMA query. It absolutely fails on SQL 2016/2017 but works fine on SQL 2014. Our production environment cannot upgrade because of this. This is the standard query used by an ad hoc reporting tool. It takes a few seconds on SQL 2014, but over a few minutes on each of SQL 2016 and SQL 2017. The only saving grace is trace flag 9481, but we can’t enable that for the entire production system unless there are code changes just for the ad hoc queries.
SELECT INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE.TABLE_SCHEMA,
       INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE.TABLE_NAME,
       INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE.COLUMN_NAME AS CONSTRAINT_KEYS,
       INFORMATION_SCHEMA.TABLE_CONSTRAINTS.TABLE_SCHEMA AS REFERENCE_SCHEMA,
       INFORMATION_SCHEMA.TABLE_CONSTRAINTS.TABLE_NAME AS REFERENCE_TABLE,
       unique_usage.COLUMN_NAME AS REFERENCE_COLUMN
FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE
INNER JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS
    ON INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE.CONSTRAINT_NAME = INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS.CONSTRAINT_NAME
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS
    ON INFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_NAME = INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS.UNIQUE_CONSTRAINT_NAME
INNER JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE AS unique_usage
    ON INFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_NAME = unique_usage.CONSTRAINT_NAME;
Gursanjit – works fine here on 2017. You may want to post a question on a Q&A site like https://dba.stackexchange.com with the exact error that you’re getting.
The performance issue is most likely because we have about 1,300 tables in our schema and 800 views. Is there a way you can test it with that number of objects? We have done extensive testing on our SQL 2014, SQL 2016, and SQL 2017. Only trace flag 9481 seems to help.
No, sorry, I can’t really do free consulting for that kind of thing. Your best bet there would be to contact Microsoft.
Thanks Brent! I wasn’t trying to get free consulting. I was just raising an issue with Microsoft’s SQL Server 2016 and SQL 2017, the queries on these standard system schema views are using incorrect plans. It quickly becomes evident with the number of objects. Appreciate your input as always!
We changed the database compatibility mode to 120 in UAT first and it worked as expected. However, a month later when we made the same change in production, CPU utilization went to 100%, a few of our application workflows went down, and many SP calls timed out.
As we can’t afford much downtime, we reverted the change back to database compatibility mode 110, and CPU utilization suddenly came back to 45%.
We still want to set the database compatibility mode to 120 to improve the performance of many SPs, which is the problem we’re facing now.
Any suggestion would be a great help!!
Manjunath – sure, absolutely, that’s exactly the kind of consulting work we do. Click SQL Critical Care at the top of the screen to learn more.
We are considering implementing Trace Flag 9481 to improve performance with SQL 2016, however, here are our concerns:
1) While this may improve performance with certain aspects of our applications, our fear is that it may negatively impact others. Is that a fair statement?
2) We will upgrade to SQL 2019 in Second Quarter of next year, so is Trace Flag 9481 still available in SQL 2019? If not, has it been replaced with another trace flag? Or deemed unnecessary?
Thanks!
Ray – for personalized advice about your own SQL Server, workload, and goals, you’ll want to click Consulting at the top of the site.
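For what it’s worth, on SQL Server 2016 and later the same legacy-CE behavior can also be flipped per database instead of with a server-wide trace flag – a sketch:

-- Database-scoped alternative to trace flag 9481 (SQL Server 2016 and later)
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;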
I have noticed a huge performance benefit of using OPTION (QUERYTRACEON 9481) in SQL 2016 queries. We migrated from SQL Server 2012 to SQL Server 2016 and this particular stored procedure was running slow, but using OPTION (QUERYTRACEON 9481) made a drastic change. But now the issue is we get the error “sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) (‘42000’, ‘[42000] [FreeTDS][SQL Server]Could not continue scan with NOLOCK due to data movement. (601) (SQLExecDirectW)’)” out of nowhere. There is with (nolock) in the sproc. It is frustrating because you run it again and it just runs fine. Anyone experienced the same issue?
Yes, that’s fairly common: https://www.brentozar.com/archive/2021/11/nolock-is-bad-and-you-probably-shouldnt-use-it/
I am sorry – I meant to say “there is NO with (nolock) in the sproc.” We get this error even though there is no WITH (NOLOCK) in the query.
Gotcha – in this case, you’re best off posting it at a Q&A site like https://dba.stackexchange.com with as much detail as you can, like the text of the query and execution plans.