07. The Ascending Key Problem (11m)
In data warehouse scenarios where you load new data every night and then immediately query what you just loaded, you often get really bad execution plans. We explain why.
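Here's the pattern in a nutshell, as a minimal sketch that uses a hypothetical dbo.Sales table rather than the module's actual demo script: load a batch, update statistics, load another batch whose values fall past the end of the histogram, then query the new rows and compare estimated versus actual rows in the plan.

```sql
/* Minimal sketch of the ascending key problem, using a hypothetical dbo.Sales table. */
CREATE TABLE dbo.Sales
(
    SaleId   INT IDENTITY(1,1) PRIMARY KEY,
    SaleDate DATE  NOT NULL,
    Amount   MONEY NOT NULL
);
CREATE INDEX IX_Sales_SaleDate ON dbo.Sales (SaleDate);

/* "Yesterday's" load, followed by a full statistics update, so the
   histogram's highest step is yesterday's date. */
INSERT dbo.Sales (SaleDate, Amount)
SELECT DATEADD(DAY, -1, CAST(GETDATE() AS DATE)), 10
FROM sys.all_objects;          -- just a convenient row source
UPDATE STATISTICS dbo.Sales WITH FULLSCAN;

/* "Tonight's" load: a few hundred rows dated today, beyond the end of the
   histogram, and small enough that auto-update stats does not kick in. */
INSERT dbo.Sales (SaleDate, Amount)
SELECT TOP (300) CAST(GETDATE() AS DATE), 20
FROM sys.all_objects;

/* Query the rows we just loaded and compare estimated vs actual rows in the
   actual execution plan: the legacy cardinality estimator guesses 1 row,
   because today's date falls past the last histogram step. */
SELECT COUNT(*)
FROM dbo.Sales
WHERE SaleDate = CAST(GETDATE() AS DATE);
```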
- 01. A Tale of Two Plans (7m)
- 02. Getting to Know Statistics (15m)
- 03. Combining Multiple Single-Column Statistics (8m)
- 04. Multi-Column Statistics and the Cardinality Estimators (14m)
- 05. What Is the Cardinality Estimator? (5m)
- 06. Filtered Statistics: ENHANCE! (15m)
- 08. 10 Rules for Managing Statistics (12m)
- 09. Seeks, Scans, and Sargability (9m)
- 10. How Execution Plans are Made (10m)
5 Comments
I just want to clarify my understanding. When you added the extra 20000 rows, the modification count changed but the statistics did not update? So was the new Cardinality Estimator using the modification count to make its estimate, without knowing anything about the rows that were actually added? And the old Cardinality Estimator still estimated 1 row because it didn’t take into account the modification count at all?
Late reply, but I believe you’re correct. The new CE uses the row modification counter in the statistics to produce a better row estimate. This blog post goes into a little more detail and covers the impact on the CE depending on how stats are updated: https://cloudblogs.microsoft.com/sqlserver/2014/03/17/the-new-and-improved-cardinality-estimator-in-sql-server-2014/
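A rough way to see the two estimators side by side, reusing the hypothetical dbo.Sales sketch from the top of this page (the USE HINT syntax requires SQL Server 2016 SP1 or later; on older builds, trace flag 9481 forces the legacy CE):

```sql
/* New CE (default at newer compatibility levels): takes the modification
   counter into account, so the estimate for today's rows comes out above 1. */
SELECT COUNT(*)
FROM dbo.Sales
WHERE SaleDate = CAST(GETDATE() AS DATE);

/* Legacy CE: estimates 1 row for any value past the last histogram step,
   no matter how many rows changed since the last statistics update. */
SELECT COUNT(*)
FROM dbo.Sales
WHERE SaleDate = CAST(GETDATE() AS DATE)
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```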
And waiting …
And wait …
Zzzzzz
I copied the stats query and ran it on the 1 GB Stack Overflow database I downloaded, but I am missing this sys view: sys.dm_db_stats_properties. How can I get it?
You don’t add system views – they’re already in the database server itself.
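For anyone else hitting this: sys.dm_db_stats_properties is a built-in dynamic management function rather than something you install, and it ships with SQL Server 2008 R2 SP2 and SQL Server 2012 SP1 onward. A minimal sketch that calls it with CROSS APPLY against the Stack Overflow database's dbo.Users table:

```sql
/* List every statistic on dbo.Users along with when it was last updated
   and how many modifications have happened since then. */
SELECT  s.name AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.rows_sampled,
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID('dbo.Users');
```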