Back Up Transaction Logs Every Minute. Yes, Really.

Right now, your transaction log backups are set up to run every 15 minutes, every hour, or maybe every few hours.

It’s time for a reality check. Figure out which internal customer owns the data inside the database, and walk into their office. Ask them how much data they’re comfortable losing.

You’re probably going to have to repeat the question a couple of times because they won’t believe these words are coming out of your mouth. After they eventually recover, they’ll blurt out words like “NONE” and “ZERO” and “ZOMG”. Give them a quick overview of how much it costs to have a no-data-loss system – typically in the millions of dollars – and then ask for another answer.

They’re going to ask for a list of options and their costs. Forget the fine-grained details for now – let’s just assume one of your solutions involves transaction log backups.

The big hand is on the network cable, and the little hand is on the power cable.

Is it more expensive to run your log backups:

  • A – Every hour
  • B – Every 15 minutes
  • C – Every 5 minutes
  • D – Every minute

It’s a trick question – they all cost the same.

“BUT MORE LOG BACKUPS MEAN MORE SLOWDOWNS!”

Nope – it actually means less overhead. To keep the numbers easy, let’s say you generate 60GB of transaction log data per hour. Would you rather back up 1GB per minute in small chunks, or kick the server in the junk every hour trying to back up 60GB all at once? The users would definitely notice the latter – the server will slow to a crawl while it handles that firehose of log data for a sustained period.
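If it helps to see it, a minutely log backup is the same command you’re already running, just scheduled more often. Here’s a minimal sketch – the database name and backup share are placeholders, so adjust for your environment:

DECLARE @path NVARCHAR(260) = N'\\BackupShare\MyDatabase\MyDatabase_'
    + REPLACE(REPLACE(CONVERT(NVARCHAR(19), GETDATE(), 126), N'-', N''), N':', N'') + N'.trn';

BACKUP LOG MyDatabase
    TO DISK = @path
    WITH CHECKSUM, COMPRESSION;

Schedule that in an Agent job that runs every minute, and you’re done.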

“BUT MORE LOG BACKUPS MEAN MORE FILES!”

Yes, but if you think you’re really going to manually restore log backups through the GUI, one at a time, you haven’t been through a real mission-critical crash. Long before disaster strikes, you need to be familiar with a script-based method to automatically restore all of the log files in a directory. Test it, get comfortable with it, and then when the outage hits, you can let your script restore everything ASAP, no clicks required.
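There are lots of ways to script this – PowerShell, maintenance-solution scripts, home-grown T-SQL. As one rough sketch of the idea (xp_dirtree is undocumented, and the paths and database name here are placeholders):

-- Build RESTORE LOG commands for every .trn file in the backup folder.
DECLARE @files TABLE (subdirectory NVARCHAR(260), depth INT, [file] INT);
INSERT INTO @files
EXEC master.sys.xp_dirtree N'\\BackupShare\MyDatabase\', 1, 1;

DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql = @sql + N'RESTORE LOG MyDatabase FROM DISK = N''\\BackupShare\MyDatabase\'
    + subdirectory + N''' WITH NORECOVERY;' + NCHAR(13)
FROM @files
WHERE [file] = 1 AND subdirectory LIKE N'%.trn'
ORDER BY subdirectory; -- assumes timestamped file names sort chronologically

PRINT @sql; -- eyeball the output first, then run it

Whatever method you pick, the point is to rehearse it before the outage, not during.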

“BUT MY LOG BACKUP JOBS CAN’T FINISH IN A MINUTE!”

If you have too many databases and your jobs can’t keep up, it’s time to start breaking up the log backups into multiple jobs. Some of my clients use two jobs, for example – one to back up all odd-numbered databases by the database ID in sys.databases, and another one to back up all the evens. The log backups don’t stress out their servers by any means, and users don’t notice the impact of two simultaneous log backups in two different databases.
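The odd/even split itself is trivial – here’s a sketch of how the “odds” job might pick its list (the evens job just flips the modulo to 0):

SELECT name
FROM sys.databases
WHERE database_id % 2 = 1            -- odds only
  AND database_id > 4                -- skip the system databases
  AND recovery_model_desc <> 'SIMPLE'
  AND state_desc = 'ONLINE';

The job then loops through that list running BACKUP LOG against each database.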

“BUT SURELY YOU CAN’T BE SERIOUS!”

I’m just as serious as you were when you randomly picked 15 minutes or 1 hour as the log backup schedule. The point is that it’s not for the geeks to decide – it’s for the customer to decide. Ask your business user about the business needs for their data, and then give them a menu of options to meet that goal.

You’d think they’re all going to insist that all data is mission-critical, but you’d be surprised. Once they see the costs involved with log shipping, AlwaysOn Availability Groups, SAN replication, and so forth, they’re able to make better-informed decisions about which data really is priceless, and which data they could re-enter at minimal cost.

But until you ask them this question, they assume you’re never going to lose data.

San Diego Class Recap

We had a lot of fun this month teaching SQL Server classes at the San Diego Harbor Hilton. (We wrote about how we picked the cities earlier.) Even when we teach, we learn, and then we share what we learned, so here we are.

San Diego Harbor Hilton

We like to try new things with our products and services, and this year’s classes brought a few firsts for us. Let’s recap what’s new and how it worked.

We tried new class titles and module groupings. This year, we’re running a 2-day class on How to Be a Senior DBA and a 3-day class on SQL Server Performance Troubleshooting. We wanted to find out if we’d gotten the module mixes right.

To learn, we surveyed the bejeezus out of the attendees at the start. How did they rate their skills at each module before we started, and which modules were they looking forward to the most? Then at lunchtime and end-of-day, we gave them more surveys to find out if they would recommend each module to others.

We learned right from the first day – it was obvious from the looking-forward-to question that the attendees weren’t interested in one particular module, so we gave them the option to switch it out for another module instead. That went really well, and we’ll adapt the Chicago and Philadelphia itineraries to include that change by default.

Be Your Neighbor’s Consultant

We added a lot of attendee interaction. One of the best ways to learn something is to teach it to someone else. In the high availability architecture module, we ran a 5-minute portion where you had to describe your app’s RPO/RTO needs to the person sitting next to you, and they had to recommend the right HA/DR infrastructure.

Great side effect: this got the attendees out of their shell right from the start! Before the very first bio break, each attendee knew at least one other person’s SQL Server challenges and felt comfortable talking about it.

Kendra Discusses the Outage Post-Mortem

We added optional after-hours group lab exercises. For example, on the first night of the How to Be a Senior DBA class, we broke attendees up into teams and handed them a production outage to fix. We watched via the projector as each team worked on their servers.

After the outage was resolved, each team conducted a post-mortem meeting just as they would (maybe?) in real life, explaining what caused the outage, how they fixed it, and how they would ensure it didn’t happen again. Prizes went to the best solvers and best explainers.

Jeremiah Teaching

We used attendee questions to improve the training. While the training takes place, one of us sits in the back of the room and transcribes every attendee question.

Now that the training is over, we’ll revisit the slides and demos to see where we can build those answers right into the training to keep making each delivery better. (We do this at our conference sessions, user group sessions, and webcasts too.)

Granted, we can’t just add material like crazy – otherwise we’d run way long every day – but sometimes it makes sense to take out portions of training material that aren’t working well, add in other portions, or reorder slides to introduce answers in a way that makes more sense for attendees. We have some slide decks we’ve been working on for several years, gradually improving them this way the whole time.

Brent Teaching with Hardware

We added hands-on hardware for illustration. One of my favorite teaching icons says that if you want to teach students an abstract topic, give them something to put their hands on. I brought motherboards, memory, hard drives, and cables in to teach about how wait stats work, and as various queries ran onscreen, we talked about the physical holdup.

This class was the first time I’d tried this approach in person, and it’s a great example of a session I’ll need to tweak over time. The time budgeting failed miserably – I didn’t get to go as deep as I wanted because I tried to cover too many things in too short a time. Now the fun part starts – I get to revisit the presentation, the attendee questions, and the subject matter, and figure out how I can impart maximum learning in 60 minutes.

Kendra Teaching

We added prerequisite training videos and resources. To make sure everybody was on the same page when class started, each attendee got homework when they enrolled. They got a login for our training video system that gave them access to a group of videos we picked just for them.

When the in-person class started, their training login suddenly had access to all of the scripts and resources for the class, too. We use that same delivery system to handle all of our resources, and we love how it works.

We even extended it based on what we learned in the first class. One of the Senior DBA students suggested that we compile a list of learning resources mentioned during the class, build a PDF of the links, and add that PDF to the training site, too. Great idea! Plus, since we’ll be teaching these same classes in Chicago and Philadelphia later this year, we can even start with that same resource list because many of them will be applicable.

San Diego (the Hilton is on the left)

See, we learn a lot even when we’re teaching. We have such a great time with this, and we can’t wait to keep sharing what we learn. If it sounds like fun, join us in Chicago or Philly later this year.

Why Your Slow SQL Server Doesn’t Need a SQL Consultant (or Does It?)

Your SQL Server is slow – but should you call us in? Jeremiah and Brent had a throw-down the other day, and we figured we’d capture it here on the blog.

Brent Says You Do, and Here’s Why

5. You’ve been blindly throwing money at the problem without results. You’re on a first name basis with your local SAN salesperson. Your server racks are glistening with blue and green lights. But somehow, when users hit your app, they’re not as happy as your sysadmins – or your SAN salesperson. Before you burn another stack of Benjamins, it’s time to get an outside opinion.

4. You’re doing something for the first time. Sure, you’re pretty good at Googling your way out of trouble, but the company’s starting to make riskier and riskier gambles on data storage. Maybe you should talk to someone who’s faced this problem before.

3. You’re relying on the manual. I love Microsoft as much as the next guy – especially now that they’ve brought out a new ergonomic keyboard – but Books Online doesn’t tell the whole truth. When Microsoft unveils a new feature, they talk about all the positives, but they don’t always disclose the drawbacks. Get a reality check before you bet the farm on PowerFilePivotProRT, and hear what our other clients are doing to accomplish the same goal.

2. You need answers faster. We get together on a Monday, and by end of day Wednesday, you’ve got a prioritized action plan showing you how to make the pain go away by the end of the week. You get the knowledge and confidence to keep going without expensive long-term consultants. You’re really close – you just need our 3-day SQL Critical Care® to unlock the tips and tricks to make it work.

1. Developers can get back to adding features. Your real business isn’t SQL Server administration – it’s adding features to your app to make your end user happier. Bring us in, get the answers, and get back to work.

Jeremiah Says You Don’t, and Here’s Why

5. You’re probably blindly throwing money at the problem without results. Unless a consultant is willing to provide a list of happy customers, there’s no way to verify that they know something. Heck, even if they do provide a list of happy customers, you have no way of knowing that Luanne in IT management isn’t really someone’s Aunt Mavis.

4. Best practices aren’t universal. Every situation is different and the most complicated scenarios require a deep understanding of business goals, features, and SLAs. Consultants can help you understand best practices, but you’re the only person who knows what’s right in your environment. If you’re doing something for the first time and your Google-fu is running out, you can’t expect much help from outside.

3. Peer pressure shouldn’t change your decisions. We jokingly call this “design by Hacker News”. Just because a startup, our clients, or your next door neighbor is doing something, that doesn’t mean it’s the right fit for you. For many application feature decisions, it’s easy to build two or three prototype solutions and decide on the best one.

2. You need performance yesterday. Rather than wait a few weeks for a reputable consultant to show up, have you considered buying memory? If you’re on SQL Server Standard Edition and you have less than 64GB of memory, just buy more RAM. If you’re on EE and you have less RAM than you have data, max out that server with 16GB DIMMs – they’re cheap, and you can solve most code sins with memory. Heck, we even recommend buying memory as one of the first ways to solve problems quickly.

1. Developers: understand your features. While developers should be adding features, they also need to understand the consequences of those features. Some functionality that’s possible in SQL Server requires an understanding of how to write queries to take advantage of those features – filtered indexes, indexed views, and ColumnStore indexes immediately spring to mind. The best way to understand a feature is to get in the database, make it work, and then make it work fast.

Brent’s Upcoming User Group Presentations

In a world where servers rule their administrators, one man will change your life. It just won’t be this man.

Coming soon to a user group near you, it’s…me.

March 5 – South Florida SQL Server User Group (Remotely)

The Best Free SQL Server Downloads – You’re trying to manage SQL Server databases, and every time you Google for something, you get overwhelmed with all kinds of free tools, white papers, blogs, and newsletters. There are so many that suck, and you’re tired of wasting time on bad ones. Microsoft Certified Master Brent Ozar has been using SQL Server for over a decade, and he’s built a killer list of the very best freebies. He’ll show you where to get the good stuff and how to use it.

March 12 – Philadelphia SQL Server User Group

How to Think Like the SQL Server Engine – You’re a developer or a DBA, and you’re comfortable writing queries to get the data you need. You’re much less comfortable trying to design the right indexes for your database server. In this 90-minute session with Microsoft Certified Master Brent Ozar, you’ll role play as the database engine while Brent gives you queries. You’ll learn first-hand about clustered indexes, nonclustered indexes, execution plans, sargability, statistics, TempDB spills, and T-SQL anti-patterns. Register here.

March 13 – PASS DC (Chevy Chase) User Group

How to Think Like the SQL Server Engine – You’re a developer or a DBA, and you’re comfortable writing queries to get the data you need. You’re much less comfortable trying to design the right indexes for your database server. In this 90-minute session with Microsoft Certified Master Brent Ozar, you’ll role play as the database engine while Brent gives you queries. You’ll learn first-hand about clustered indexes, nonclustered indexes, execution plans, sargability, statistics, TempDB spills, and T-SQL anti-patterns. Register now.

March 14 – Richmond SQL Server User Group

How to Think Like the SQL Server Engine – You’re a developer or a DBA, and you’re comfortable writing queries to get the data you need. You’re much less comfortable trying to design the right indexes for your database server. In this 90-minute session with Microsoft Certified Master Brent Ozar, you’ll role play as the database engine while Brent gives you queries. You’ll learn first-hand about clustered indexes, nonclustered indexes, execution plans, sargability, statistics, TempDB spills, and T-SQL anti-patterns. Register now.

April 11-12 – SQLSaturday #267 Lisbon, Portugal

Brent is flying across the pond to go visit one of the best-regarded SQLSaturdays in Europe. Join Brent for a one-day Friday pre-con on virtualization, storage, and hardware for just 100€. Register now for the pre-con, and then register for the SQLSaturday too.

April 26 – SQLSaturday Chicago

The sessions haven’t been picked yet, but Brent’s fingers are crossed that he’ll be one of the chosen ones. You can register now.

Using SQL Server’s Table Valued Parameters

Table valued parameters (TVPs) are nothing new – they were introduced in SQL Server 2008. TVPs are a great way to get data into SQL Server, and yet very few developers use them.

Getting Started with TVPs

In order to get started, we need to create a user defined type. This type is necessary so we have a well defined interface between SQL Server and the outside world – you can’t just pass a bunch of junk in as a parameter to SQL Server… well, you could, but that’s called XML.

Let’s create that user defined type:

CREATE TYPE dbo.SalesPersonTerritory AS TABLE
(
    SalesPersonID INT,
    TerritoryID INT
);

Now we can create variables using the dbo.SalesPersonTerritory type whenever we want. It’s just this easy:

DECLARE @spt dbo.SalesPersonTerritory;
SELECT * FROM @spt;
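
The variable starts out empty, of course. Loading it looks just like loading any other table – these rows are made up:

INSERT INTO @spt (SalesPersonID, TerritoryID)
VALUES (1, 10), (1, 20);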

Using Table Valued Parameters With Stored Procedures

Having a special table type is really convenient, but it doesn’t help if you can’t use it, right? Let’s assume that you’re calling a stored procedure and you’re sick of joining lists of strings on the client and then splitting them apart on the server. What you really need is the ability to pass a fully fledged table across the wire and into SQL Server.
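
On the SQL Server side, the stored procedure just declares a parameter of our table type. One catch: TVP parameters must be marked READONLY. Here’s a minimal sketch – the procedure name and body are purely for illustration:

CREATE PROCEDURE dbo.ListSalesPersonTerritories
    @Assignments dbo.SalesPersonTerritory READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- The TVP behaves like any other table: join it, filter it, etc.
    SELECT SalesPersonID, TerritoryID
    FROM @Assignments;
END
GO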

On the client side, things are just as easy. From C#, the code would look something like this:

using System;
using System.Data;
using System.Data.SqlClient;

// ...

string cnString = @"data source=.;initial catalog=TVPTester;user id=tvptester;password=tvptester;";

using (var connection = new SqlConnection(cnString))
{
    connection.Open();
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM @tvp;", connection))
    {
        // SqlDbType.Structured is what marks this parameter as a TVP.
        var pList = new SqlParameter("@tvp", SqlDbType.Structured);
        pList.TypeName = "dbo.SalesPersonTerritory";
        pList.Value = SalesPersonTerritoryTable();

        cmd.Parameters.Add(pList);

        using (var dr = cmd.ExecuteReader())
        {
            while (dr.Read())
                Console.WriteLine("{0} - {1}", dr["SalesPersonID"], dr["TerritoryID"]);
        }
    }
}

That’s really all there is to it.

C# Tips for Table Valued Parameters

There are a few other things that developers can do to make their life easier when working with table valued parameters. A DataTable, just like a table in SQL Server, should have types declared for all columns. While it’s easy enough for developers to create these tables on the fly, that won’t cut it in production code – boilerplate code means that people can make mistakes.

To make life easier, developers can create helper methods in their application for working with table valued parameters – a small chunk of code that makes it easy to instantiate a DataTable matching the table valued parameter.

static DataTable SalesPersonTerritoryTable()
{
    // Build an empty DataTable whose columns match dbo.SalesPersonTerritory
    // in name, type, and order.
    var dt = new DataTable();
    dt.Columns.Add("SalesPersonID", typeof(int));
    dt.Columns.Add("TerritoryID", typeof(int));
    return dt;
}

Seasoned developers will even create methods that let them drop a list of objects straight into an appropriate DataTable. There are many different ways to build convenience methods into code and make it easy for development teams to work with TVPs.

For simple data type matching, this works well. If you’re dealing with more complex data types, you’ll want to check out SQL-CLR Type Mapping to make sure you get the right data type. You’ll notice that some data types (varchar, for instance) have no direct counterpart in the .NET Framework. Sometimes you just have to lose some fidelity – make wise decisions, it gets crazy out there.

These same techniques can be used with a string of ad hoc SQL, too. TVPs aren’t limited to stored procedures – they can be used anywhere you’re executing parameterized code.

Gotchas of TVPs

There are two big gotchas with TVPs.

First: the table variable that comes in as a table valued parameter cannot be changed. You’re stuck with whatever values show up. No inserts, updates, or deletes can be applied.

Second: table valued parameters are still table variables under the hood – they have no column statistics, so they get terrible cardinality estimates.

We can get around both of these problems with the same technique – copy the contents of the TVP into a temp table. Although it adds an extra step to using the TVP, I’ve found that copying the contents of the TVP to a temporary table leads to better execution plans and much less confusion during development and troubleshooting.
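
Inside a procedure, the workaround is a one-liner – a sketch using the hypothetical @Assignments parameter from the earlier example:

-- Copy the read-only TVP into a temp table we can index and modify,
-- and that gets real statistics.
SELECT SalesPersonID, TerritoryID
INTO #Assignments
FROM @Assignments;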

Summary – Using Table Valued Parameters isn’t Hard.

TVPs aren’t difficult to use. They’re just different and require a different mindset. By using TVPs, developers can pass many rows into a stored procedure and create far more complex logic and behavior than is possible using only single-value parameters.

What If You Knew The Server Would Crash Today?

This morning you woke up with a terrible premonition – you were absolutely sure your most important production database server was going to crash today.

What would you do?

Step 0: stop for coffee, because it’s gonna be a long day.

Here’s a quick list of places to start:

  1. Identify where the most recent backups are kept (the query sketch below the list is a starting point)
  2. Make sure the backup schedule and frequency lines up with what the business wants
  3. Make sure the backups are getting offsite
  4. Make sure the monitoring software is watching this server, and sending me alerts
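
For that first pair of items, msdb already knows what’s been backed up and when. Here’s a starting-point query – it checks the most recent full and log backup for every database:

SELECT d.name,
       MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full_backup,
       MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b ON b.database_name = d.name
WHERE d.name <> 'tempdb'
GROUP BY d.name
ORDER BY last_full_backup;

(Join to msdb.dbo.backupmediafamily if you also want the file paths the backups landed on.)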

When was the last time you made a list like this and checked it twice? Do it this week, and schedule yourself a recurring task in your favorite to-do software to check this every quarter.

Because you’re not going to get these visions.

Satya Slips a SQL Server Standard Secret?

Everybody wants to know the next SQL Server release date, pricing, and feature lists, but anybody who knows those facts is bound by non-disclosure agreement and can’t say it publicly. Every now and then, though, we get lucky and someone from Microsoft slips up. That’s why I pay particular attention to Microsoft interviews as we approach release dates.

Microsoft’s new CEO Satya Nadella sat down for an interview with Gigaom. It included this interesting line:

…everybody who buys a SQL server, in fact in this release, gets an always-on secondary in Azure.

Satya Nadella

Well isn’t that special? Satya would be the guy to know, too – he used to run the Server and Tools group (which includes SQL Server), and he’s since been promoted to Cloud and Enterprise (which includes both SQL Server and Azure).

Will SQL Server 2014 Standard Include AlwaysOn AGs?

Right now, SQL Server 2012 only offers AlwaysOn Availability Groups in the expensive Enterprise Edition. Standard Edition’s closest HA/DR feature is database mirroring, but that’s listed on the Walking Dead Deprecated Features List.

But what if Standard Edition includes a crippled AlwaysOn AG? Currently, SQL Server 2012 Standard Edition includes a crippled version of database mirroring that only offers synchronous replication, not asynchronous. It’s less desirable for many customers due to the transaction slowdowns if you have to commit at both servers before a transaction is considered committed. Microsoft could treat Standard AGs the same way – only offering the less-desirable version in Standard.

Standard’s AlwaysOn AGs could also be limited in numbers. While 2014 Enterprise will offer up to 8 readable replicas, perhaps Satya’s “an always-on secondary” means Standard gets exactly one secondary.

Satya could even mean that only an Azure-based replica is included – not an on-premise one. This would be an interesting turn of events because it would require vendor lock-in to just Microsoft’s cloud rather than Amazon’s or a colo vendor.

What Did Satya Mean by “Gets an AlwaysOn Secondary”?

Could he mean that Microsoft is really willing to include the AlwaysOn virtual machine? Like you get some kind of licensing key that unlocks one free Azure VM running SQL Server 2014, and an easy way to set up HA/DR between your on-premise SQL Server and your new free virtual machine?

This would be revolutionary because Microsoft would be seen as a real vendor partner to SQL Server users. Instead of buying expensive hardware and data center space to protect your SQL Servers, you could just rely on Microsoft’s cloud.

At first glance, this would look like Microsoft going into competition with server vendors like Dell, HP, and IBM (er, Lenovo), plus competing with cloud vendors and data centers like Amazon and Rackspace. But hey, that’s exactly what Microsoft has been doing lately – going directly into competition with vendors that used to be partners. The Surface competes with laptop partners, and the Nokia deal competes with Windows Phone partners. This could just be the next step, especially given another Satya quote in the interview:

So one of the things is, I have an always-on database, where is it running? The always on database is kind of on your private cloud and kind of on Azure and so the distinction even goes away.

If Microsoft can move your database into their cloud, they stand to sell more virtual machines and services. Maybe your database’s disaster recovery is becoming Microsoft’s loss leader. I’d love this, because we all need to do a better job of protecting our databases from disaster.

Or DID Gigaom Misquote Satya?

Maybe Satya said “Everybody who buys SQL Server Enterprise Edition gets an AlwaysOn Secondary in Azure.” After all, in this one quote, Gigaom managed to mis-capitalize both SQL Server and AlwaysOn. It’s not inconceivable that the quote was butchered.

What Do You Ask the Leaving DBA?

The last DBA had an unfortunate accident in Alaska.

When the database administrator turns in her notice, what questions should you ask her in her last couple of weeks?

I’m assuming, of course, that it was a friendly departure and you’ve got the full two weeks to have good conversations.

Or maybe the DBA is about to go on vacation, and you need to quickly get up to speed about things that might break while they’re gone.

One of my clients’ DBAs turned in their resignation, so I was presented with this very task. I haven’t done it in a while, so I asked Twitter, and here’s what I got:

I kind of expected answers like that – a lay of the land, and Buck Woody’s runbook is a good example – but there were so many good creative ideas:

When you look at someone else’s server, it’s so easy to think, “This guy couldn’t have known what he was doing – nobody should ever set this trace flag or this database option or use this cumulative update.” Maybe not, though – maybe it’s critical to making the application work.

How many of us left little snippets of code around that ended up becoming mission-critical? Often we don’t have source code control on these, either.

Does the company even have one?

Have you tested a restore lately?

And a followup – are you getting value out of it, or have you just set up email rules to bozo-bin all of the monitoring emails? It helps to know what to expect when the emails start coming your way.

Wow – that’s a fantastic answer. Often we just didn’t have the time to automate all of our problems, and we know that the flux capacitor will leak if we don’t clean it out weekly. Followed up with:

Now might be a good time to get that project approved.

Being a DBA takes soft skills. Getting this inside info on the squeaky wheels helps me prepare for the incoming support requests. Which brings me to:

And I’d follow that up with, “Why are you leaving?” Sometimes that helps reveal some of the biggest land mines.

Reporting in Production: SQL Server (video)

Everyone wants reports but nobody wants to build out a separate reporting server. What options do you have short of throwing up your hands in defeat? Join Jeremiah to learn about four SQL Server technologies that help with reporting in production. This session is for DBAs and developers looking for a place to get started with reporting against SQL Server.

For the links and scripts, check out the Reporting in Production: SQL Server page.

Exploring the Magic of the Plan Cache

The plan cache holds a lot of secrets about what’s going on inside SQL Server. In the First Responder Kit we shared one of our plan cache scripts to find the top resource consuming queries. That query works well, but over time we’ve added some additional functionality to the query. I figured it was time to share the new query that we’re using to analyze SQL Server performance.

Our existing query looks at individual query stats, but it doesn’t take into account stored procedure or trigger execution stats. During our SQL Critical Care checks, we’ve found it helpful to look at both procedures and triggers to figure out if they were causing problems for overall server health.

What Are We Looking At?

The original query just looked at sys.dm_exec_query_stats, examining average and total metrics across CPU, duration, and logical reads.

Plan cache query: classic flavor

This was a helpful approach, but over time it’s become apparent that we needed to look at more than just individual statements – what if a single procedure was causing problems that only show up in aggregate?

The Bigger Picture

To get a view of the bigger picture, we added two more DMVs – sys.dm_exec_trigger_stats and sys.dm_exec_procedure_stats. This gives us a big-picture view of what’s going on inside SQL Server – the only thing that would make this better would be a DMV for function execution stats.

To avoid skewing results, data is aggregated by the query_hash – unfortunately, this means the queries won’t work against SQL Server 2005. Not only do we rank queries by CPU, IO, duration, and execution count, but a second level of ranking is provided that ranks queries within their logical grouping – by statement, procedure, and trigger. If you want to see what your most expensive trigger is doing, it’s easy enough to make a quick change to the query.
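
A stripped-down sketch of the core idea (the real script does far more – this just shows the three DMVs unioned together and ranked by CPU):

SELECT TOP 20 x.[source], t.[text],
       x.total_worker_time, x.total_elapsed_time,
       x.total_logical_reads, x.execution_count
FROM (
    SELECT 'statement' AS [source], plan_handle, total_worker_time,
           total_elapsed_time, total_logical_reads, execution_count
    FROM sys.dm_exec_query_stats
    UNION ALL
    SELECT 'procedure', plan_handle, total_worker_time,
           total_elapsed_time, total_logical_reads, execution_count
    FROM sys.dm_exec_procedure_stats
    UNION ALL
    SELECT 'trigger', plan_handle, total_worker_time,
           total_elapsed_time, total_logical_reads, execution_count
    FROM sys.dm_exec_trigger_stats
) AS x
CROSS APPLY sys.dm_exec_sql_text(x.plan_handle) AS t
ORDER BY x.total_worker_time DESC;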

Check it out:

New and Improved Output

You can download the script in the usual way – by agreeing to our crazy terms of service, selling your email address to a foreign government, and clicking “download”.

Using the Query

This query is easy enough to start using right away – just download the file and run it. Once you’re familiar with it, scroll to the bottom and you’ll see two separate queries you can run. The first is suitable for pasting into Excel – it has no query plan, and the SQL text is shortened to paste easily into a single cell. The second query has everything that you want.

Since data is dumped into a temporary table during analysis, it’s easy to keep re-querying the temporary table as you re-sort data or refine what you’re looking for.
