Which Queries Should You Cache in the Application?

I’ve talked about how the fastest query is the one you never make, and why when you’re scaling up a database application, you probably want to start caching query results in the app tier. Caching means:

  • Lower licensing costs for SQL Server
  • Lower CPU usage both for query execution and for compilation time
  • Lower locking/blocking overhead (even NOLOCK requires a schema stability lock, for example)
  • Lower demand on query workspace memory

But how do you go about finding which queries to cache?

Find your most frequently executed queries, then ask 3 questions about them.

Run sp_BlitzCache @SortOrder = 'executions', or look on the Plans Executions tab of the Consultant Toolkit. Look at the executions per minute column – and on seriously high-volume servers, I’ll usually divide those numbers by 60 to get executions per second, which drives the point home even more. With those queries in mind, ask:

Question 1: How often does this data change? Often, we’re dealing with single-table queries that fetch a value from a global config table. The data just doesn’t change often, and could easily be fetched just once at application server startup. If we needed to change a config value, worst case, we could restart the app servers gradually and have them fetch the updated data.
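As a sketch of that load-once-at-startup approach (the class and query names here are hypothetical, not from any particular framework), the app can run the config query exactly once and serve every later read from memory:

```python
# Hypothetical sketch: load a small, rarely-changing config table once at
# startup, then serve all later reads from an in-process dictionary.
class ConfigCache:
    def __init__(self, fetch_all):
        # fetch_all is whatever function actually runs something like
        # "SELECT name, value FROM dbo.Config" against the database.
        self._fetch_all = fetch_all
        self._values = dict(fetch_all())  # one query, at app startup

    def get(self, name, default=None):
        # No database round trip: reads come from memory.
        return self._values.get(name, default)

    def refresh(self):
        # Called only on the rare config change - or just restart the app.
        self._values = dict(self._fetch_all())
```

The worst case described above maps to `refresh()`: roll through your app servers gradually, calling it (or restarting) on each one.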

When you’re not sure how often data changes, look at the table’s index usage statistics – sp_BlitzIndex will show them, as will sys.dm_db_index_usage_stats.

In particular, you’re looking at the statistics on each index to show you how many writes the table has had recently. It can be tough to discern exactly when this number resets: it varies based on your version, patch level, failovers, restores, etc., but just the ratio of reads vs writes can be really enlightening. For example, at a recent client, we noted that we were fetching employee phone numbers millions of times per day, when only a handful of changes had been made over the last several months.

Question 2: Will out-of-date data really hurt? Some data absolutely, positively has to be up to the millisecond, but you’d be surprised how often I see data frequently queried out of the database when the freshness doesn’t really matter. For example, one client discovered they were fetching live inventory data on every product page load, but the inventory number didn’t even matter at that time. During sales, the only inventory count that mattered was when the customer completed checkout – inventory was checked then, and orders only allowed if products were still in inventory then.
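When the answer to that question is “a little staleness is fine,” the simplest pattern is a time-to-live cache. A minimal sketch (the names and the 60-second TTL are illustrative assumptions, not a recommendation):

```python
import time

# Hypothetical sketch: when slightly stale data is acceptable, a simple
# time-to-live (TTL) cache avoids hitting the database on every page load.
class TtlCache:
    def __init__(self, fetch, ttl_seconds, clock=time.monotonic):
        self._fetch = fetch        # e.g. the live inventory query
        self._ttl = ttl_seconds
        self._clock = clock        # injectable for testing
        self._value = None
        self._expires_at = float("-inf")

    def get(self):
        now = self._clock()
        if now >= self._expires_at:
            # Only query the database once the cached copy has expired.
            self._value = self._fetch()
            self._expires_at = now + self._ttl
        return self._value
```

In the inventory example above, the product page would read from a cache like this, while checkout would bypass it and query the database directly – the one place freshness actually matters.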

Question 3: How much data are we talking about? Is it a config table with only a handful of rows, or is it a product list with millions of variations? The less data involved, the easier it is to cache, especially if we’re trying to cache in memory on the application servers. The more data involved, the more it’s going to hurt if we try to duplicate that data on every app server – and the more we need to start investigating a dedicated caching system like Redis, AWS ElastiCache, Azure Cache, or Google Cloud Memorystore.

Cache rules everything around me

Which leads me to the second of my caching truths:

  1. The fastest query is the one you never make, and
  2. The first query you cache is the hardest.

When you implement caching for the very first time in an application, you’re going to be torn about a million little decisions. What caching platform do you use? How do you set it up? How do you test that it’s working? How much redundancy do you need? How will you add the code to your application?

But when you see the first results – millisecond-level response times for frequently accessed data, and lower loads on your database server – you’ll be sold. You’ll cache first, and ask questions later. That’s when you’ll start running into problems with cache invalidation, but for now, find your most frequently executed queries and ask those 3 questions.


19 Comments

  • Bob Ammerman
    May 14, 2019 9:12 am

    I use a scheme wherein adds/updates/deletes to a cached table increment a counter in a control table (you could use a trigger, but I don’t). The app then reads that counter periodically, and if it has changed, I refresh the cache. Reading the counter is far cheaper than loading the whole table every time.
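    A minimal sketch of this scheme (all names hypothetical – the commenter’s actual table and queries aren’t shown):

```python
# Hypothetical sketch of the counter scheme: writes to the cached table bump
# a version counter in a control table; the reader polls the cheap counter
# and reloads the expensive full table only when the counter has moved.
class CounterInvalidatedCache:
    def __init__(self, read_counter, load_table):
        self._read_counter = read_counter  # e.g. SELECT Version FROM dbo.CacheControl
        self._load_table = load_table      # e.g. SELECT * FROM dbo.Employees
        self._version = None
        self._rows = None

    def get(self):
        current = self._read_counter()
        if current != self._version:   # table changed since our last load
            self._rows = self._load_table()
            self._version = current
        return self._rows
```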

    Reply
    • Bob – ooo, neat approach! I’d worry a little about concurrency on the control table – it sounds like you control the app doing the inserts/updates/deletes, though, so you can tune for that.

      Reply
    • Assuming you can use it, SQL Server Change Tracking

      Reply
      • Wyatt – that’s not a bad idea now that 2016 SP1 & newer all have CDC & CT built into Standard Edition. (I don’t even remember if CT was in Standard before that.)

        Reply
        • Some of our teams use change tracking. I can see we have customers running the software that uses it on SQL Server 2012 Standard, so it’s been in Standard Edition since at least then. Before that, I can’t say (or am too lazy to look up).

          Reply
  • marcussoares
    May 14, 2019 10:02 am

    Hi, Brent.

    Would In-Memory OLTP work for this?

    Reply
    • Marcus – that’s an interesting question. Have you used in-memory OLTP in production yet?

      Reply
      • marcussoares
        May 14, 2019 1:11 pm

        No, only in a test environment.

        Do you have documents or links with tutorials on using Redis?

        Reply
        • Marcus – thankfully, there are a ton of good ones on Google that can help get you started. It’s a fairly mature app that’s been around for years, so you can just Google for things like Redis tutorial and find a bunch. I don’t have anything I’d personally recommend though.

          Reply
  • Bob Ammerman
    May 14, 2019 10:26 am

    I haven’t tried it, but I think I could use a SEQUENCE instead of a table row.

    Reply
  • Bob Ammerman
    May 14, 2019 10:34 am

    It looks like SEQUENCE won’t work. Surprisingly it seems there is no way (short of selecting from sys.sequences) to get the last value generated, but only a way to get a fresh sequence number. Querying sys.sequences doesn’t seem like the best idea for performance and/or concurrency, but I could be wrong.

    Reply
    • Yeah, that doesn’t help you with updates, either. The approach I’ve seen most often is adding a LastUpdateDate column to the tables you’re caching, and updating that with a trigger (or via the updating app.) Then, when you want to check the cache, you can do a COUNT(*) WHERE LastUpdateDate >= (the last time you updated your cache.)

      Reply
  • I agree, and the code does matter.
    Once, a developer was complaining that updating all the rows to indicate the work was done was very slow. He was returning the result set, going through it one row at a time in his program, and updating the column. I said to just use an UPDATE statement. Tremendous difference.
    Better use of memory, and the work gets processed as a set operation.

    Reply
  • Brian Boodman
    May 14, 2019 11:25 am

    If you have multiple web servers sharing a database (e.g., due to load balancing), one technique to consider is the use of a cache server (e.g., Redis). This is slower than an in-application cache (and thus rarely appropriate in single-client systems) but faster than a database cache.

    Reply
  • Andrew Hill
    May 14, 2019 5:29 pm

    It’s not just “how often does this change?” – it’s also what could go wrong if the cache is unknowingly out of date, because detecting a stale cache can be complicated…
    “don’t worry, we never change the list of allowed genders, you can ‘cache’ that reference data”

    From a Quora answer concerning cache invalidation hardness:
    Cache invalidation is hard because:
    – Everything in life we want to know, changes.
    – Those changes are non-deterministic.
    – Computers are only good at deterministic processes.

    Reply
  • Rémi BOURGAREL
    May 15, 2019 12:14 am

    Be careful when caching with an ORM: most implement the unit-of-work pattern, which can misbehave if some items in the current context are loaded from something other than the database. Hibernate implements caching well, but I don’t think EF or EF Core does this.
    There is also one more question to ask before caching: how many parameter combinations are there? If there are too many (say you have a filter on GPS coordinates), your cache hit rate will be too low, and caching will hurt performance because every query generates a new cache item. Sometimes you have to rethink your query before caching: load more data and do some additional filtering in the application.

    Reply
  • Robin Wilson
    May 19, 2019 3:39 pm

    I have started to use JSON data feeds now in web development where I pass a C# object to the Json function:
    return new JsonResult(collectionWrapper);

    Then in the JavaScript I run a jQuery $.get function to receive the data and store it in browser storage using:
    localStorage.setItem("object from C# page", JSON.stringify(data));

    Then as all data is already loaded, any data searching is superfast, when before I would have executed another query to load a new resultset rather than filtering the existing one, especially taxing if search is triggered on each keypress (where typing a 10 letter word could trigger 10 database queries).

    Depending on how much data I need though the browser cache can be exceeded so I need to get creative about how many records to load.
    It has made users much less frustrated and the database much less overworked!

    Reply
