We had a comment recently on Five Things That Fix Bad SQL Server Performance that got me thinking. The comment came from a frustrated system administrator, who wrote:
Chucking resources at a problem is not solving that problem, it’s just temporarily masking the symptoms of that problem.
Funnily enough, I’ve heard the exact same thing from developers who are facing a tough problem on a dramatically undersized SQL Server. The basic gist of what they ask is:
Doesn’t it make me a bad developer if I can’t solve this problem with code?
Everybody’s worried that adding memory is somehow cheating.
Performance Problems and Memory
There are three general categories of performance problems relating to memory:
- Problems you can solve with memory OR code / indexes
- Problems you can solve only with code / indexes and NOT just memory
- Problems you can solve only with memory and NOT just code / indexes
For category #1, solving the problem with memory is often cheaper than changing code and indexes: the operational cost of the hours of development and testing can be quite high.
But I specifically started thinking about category #3. This is the category that the frustrated sysadmin and proud developer think doesn’t exist!
Here are three examples of problems that you can solve with memory, and not simply with code and indexes:
1. Memory Reduces IO On a Read-Hungry Workload
You’ve got a critical OLTP application. You’re using shared storage. Most of the time, your storage throughput is pretty good. But sometimes, other users of the storage get busy, and your read and write latencies go through the roof. Your queries slow down, blocking increases, and timeouts start hitting the application. You’ve tuned your indexes, but parts of the application rely on real-time reports that need to scan large tables.
SQL Server is designed so that in this case you can add more memory and reduce read IO to the storage, giving you more stable performance. It’s absolutely not cheating to give the server more memory — speeding up the storage would be much more expensive.
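One rough way to see whether the buffer pool is absorbing your reads is to check Page Life Expectancy, which tells you how long data pages are surviving in memory before being evicted. This is a sketch against a standard DMV; the counter appears under `SQLServer:Buffer Manager` on a default instance, so the LIKE covers named instances too:

```sql
-- How long (in seconds) are data pages surviving in the buffer pool?
-- Chronically low values on a read-heavy workload suggest the server
-- is cycling pages out and going back to storage for them.
SELECT [object_name], counter_name, cntr_value AS seconds_in_cache
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';
```

Don’t fixate on any single magic threshold; trend this over time and correlate it with your storage latency spikes.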
2. Some Queries Need Lots of Workspace Memory
In another database, you do lots of reporting and aggregation. You’ve got large tables. Your queries frequently join them, do sorts, and use parallelism. All of these operations need memory, and the more data you’re aggregating and querying, the more memory each of these queries can need.
Before a query starts running, it figures out how much of this workspace memory it needs, and looks at what is available given other queries that are running. SQL Server has to balance the memory used for data pages, execution plans, and this workspace memory: it doesn’t want to let one query take over! If not enough memory is available, your query has to wait.
You’ve optimized your queries and indexes and made sure the memory estimates are realistic, but when lots of people run reports, your queries can’t even get started because of a memory crunch. Solving this problem by adding more memory isn’t cheating: it’s helping the SQL Server do what it’s designed to do.
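You can see this memory crunch happening directly: queries that are waiting on a workspace memory grant show up with a RESOURCE_SEMAPHORE wait, and there’s a DMV for the grants themselves. A minimal sketch:

```sql
-- Queries that have asked for a workspace memory grant but haven't
-- received one yet show up here with a NULL grant_time. These are
-- the queries that "can't even get started."
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       grant_time
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;
```

If this query regularly returns rows during your reporting rush, more memory (or smaller grants) is exactly the lever SQL Server is asking you to pull.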
3. Some Features Need Lots of Memory Periodically
You’re managing a data warehouse where you’re using table partitioning to implement a sliding window. This has helped your users a lot: you can add and remove large amounts of data with minimal locking and blocking.
But for query performance reasons, you’ve had to add one or two non-aligned indexes that are present during the day. To do partition switching, you have to disable these indexes to do your sliding window data load and archive each night. Creating the partitioned index can consume large amounts of memory.
Using Enterprise features can require investing in Enterprise levels of memory.
Adding Memory Isn’t Cheating
Memory is an important tool for performance in SQL Server: it isn’t just something that covers up bad code, it’s often a solution to many different types of problems. Memory isn’t the only tool you need to help your queries go fast in SQL Server, but it’s certainly NOT a tool you should ignore.
Tune in here to watch our webcast video for this week! To join our weekly webcast for live Q&A, make sure to watch the video by 12:00 PM EST on Tuesday, September 16! Not only do we answer your questions, we also give away a prize at 12:25 PM EST – don’t miss it!
Curious how you can give a compelling technical presentation? Join Kendra to learn five important tips on how to select the right topic for your talk, write an effective abstract, construct a coherent presentation, and make it to the podium to give your first presentation.
Have questions? Feel free to leave a comment so we can discuss it on Tuesday!
Let’s say you’re a DBA managing a 2TB database. You use SQL Server transaction log shipping to keep a standby copy of the database nice and warm in case of emergency. Lots of data can change in your log shipping primary database: sometimes it’s index maintenance, sometimes it’s a code release, sometimes it’s just natural data processing.
And when a lot of data changes, your warm standby sometimes is a lot less warm than you’d like. It can take a long time to restore all those log files!
Here’s a trick that you can use to help “catch up” your secondary faster. A quick shout-out to my old friend Gina Jen, the SQL Server DBA and log shipper extraordinaire who taught me this cool trick years ago in a land far far away.
Log Shipping’s Secret Weapon: Differential Backups
Through lots of testing and wily engineering, you’ve managed to configure nightly compressed full backups for your 2TB database that are pretty darn fast. (No, not everyone backs up this much data every night, but stick with me for the sake of the example.)
- Log shipping primary had a full backup last night at 2 am
- Log shipping secondary has transaction logs restored through 7 am
- It’s 3 pm, and you’d really like to have everything caught up before you leave the office
Here’s an option: run a compressed differential backup against your log shipping primary. Leave all the log shipping backup and copy jobs running, though — you don’t need to expose yourself to the potential of data loss.
After the differential backup finishes, copy it over to a nice fast place to restore to your secondary server. Disable the log shipping restore job for that database, and restore the differential backup with NORECOVERY. This will effectively catch you up, and then you can re-enable the log shipping restore and you’re off to the races!
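The whole catch-up amounts to two statements. The database name and file paths here are made up for the example; the important parts are DIFFERENTIAL on the backup and NORECOVERY on the restore (so the secondary stays ready to accept more log restores):

```sql
-- On the log shipping primary: a compressed differential containing
-- everything that changed since last night's 2 am full backup.
BACKUP DATABASE BigDB
TO DISK = N'X:\Backup\BigDB_diff.bak'
WITH DIFFERENTIAL, COMPRESSION;

-- On the secondary, with the log shipping restore job disabled:
-- NORECOVERY leaves the database restoring, so log shipping can
-- pick right back up afterward.
RESTORE DATABASE BigDB
FROM DISK = N'Y:\Restore\BigDB_diff.bak'
WITH NORECOVERY;
```

Once the differential restore finishes, re-enable the restore job and let the regular log restores resume from that point.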
But Wait A Second. Aren’t Those Full Backups A Problem?
Running a full backup without the COPY_ONLY keyword will reset the “differential base”. That means that each differential backup contains changes since the last full backup.
But here’s the cool thing about log shipping: restoring a transaction log brings the new differential base over to the secondary.
So as long as you’ve restored transaction logs past the point of the prior full backup, you can restore a differential to your log shipping secondary.
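If you want to verify this before restoring, you can compare the differential base the secondary currently has against the base recorded in the backup file. A sketch, using the hypothetical database and path from this example:

```sql
-- On the secondary: what differential base does the database have?
SELECT name, differential_base_lsn, differential_base_time
FROM sys.master_files
WHERE database_id = DB_ID('BigDB')
  AND type_desc = 'ROWS';

-- What base does the differential backup expect? Check the
-- DifferentialBaseLSN column in this output -- it should match.
RESTORE HEADERONLY FROM DISK = N'Y:\Restore\BigDB_diff.bak';
```

If the LSNs don’t line up, you haven’t restored logs past the primary’s last full backup yet, and the differential restore will fail.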
This Sounds Too Good To Be True. What Can Go Wrong?
This isn’t foolproof. If you haven’t run a full backup in a long time, your differential backup may be really big, and taking that backup and restoring it may take much longer than restoring the logs. (Even if you’re using log shipping, you should be doing regular full backups, by the way.)
And like I mentioned above, if your log restores are so far behind that they haven’t “caught up” with the last full backup taken on the primary, you’re not going to be able to restore that differential backup to the secondary.
What If a Transaction Log Backup File Is Missing?
A technique like this could work for you, as long as a full backup hasn’t run since the transaction log backup file went missing. (If it has, you’ll need to set up log shipping again from a full backup.)
But a word of warning: if you have missing transaction log backup files, you have a “broken” log chain. You should take a full backup of your log shipping primary database to get you to a point where you have functioning log backups after it, even if you’re going to use a differential to bridge the gap on your log shipping secondary. (And keep that differential around, too!) Keep in mind that you won’t have point-in-time recovery for a period around where the log file is missing, too.
Log Shipping Is Great
I just love log shipping. It’s quick to set up, it’s relatively easy to manage, it’s included in Standard Edition, and it’s got these surprising little tricks that make it easy to keep going. You can learn more about log shipping from Jes, and join us in person in 2015 in our Senior DBA training, which includes an advanced module on log shipping.
It’s incredibly valuable to keep your audience active. Even when you’re working in a lecture format, you should try to help give your audience moments where it’s natural for them to mentally wake up and shake themselves out of a more passive listening mode.
Don’t worry: this doesn’t require changing much as a speaker. You don’t have to make your audience square dance.
There are two big benefits to building habits that keep your audience active:
Benefit 1: Active listeners learn more
When the role of a student is limited to passive absorption, it’s easy to get bored and sleepy. The student has to constantly refocus themselves, and that takes effort on their part. By giving students built-in opportunities to be mentally active, you effectively give their brain less work to do.
Benefit 2: Active learners are More Fun to teach
Imagine standing in front of two rooms of people: in one of them, people are slumped over with glazed eyes. In the other, they’re alert, leaning forward slightly, following you with their eyes, and taking notes.
If you’re a very beginning speaker, both rooms may terrify you. That’s OK. But if you’ve got a few sessions under your belt, the room of alert people is probably much easier to work with. You get natural feedback about what they understand and it adds meaning to your experience. It helps you do a better job. You’re less tired at the end and more energized.
That’s why it’s worth it. Here’s how to get it done.
Warm up your audience
Don’t stand at the front of the room in silence before you get started. Chat with the people who are already there.
It’s OK if this doesn’t come naturally to you. I am very shy and nervous around strangers and small talk is quite difficult for me. You can overcome it! Write down a list of simple questions to ask the audience and even glance at it from time to time so you don’t have to remember:
- Where are people from?
- What other sessions have people been to? What was good? (If you’re at a conference)
- What made them want to attend the session / is there anything they’re looking forward to learning?
Remember to smile. Drawing a smiley face on a post-it note and sticking it on the desk helps, strangely enough. (People won’t even know it’s yours.) Casually welcome people as they come into the room and let them know you’re glad they’re there. If you’re making eye contact, you’re already helping your audience.
Identify your audience’s job role
One easy question to build into the beginning of your presentation is to ask your audience to identify their job title or job role by raising their hands. Build a simple slide that says, “What’s your job role?” at the top and lists a bunch of options, including “Other”. Ask people to raise their hands when you say their role name out loud. (I usually tell people it’s OK to vote twice if they do more than one thing.)
This is a nearly foolproof method to get most audiences in the United States to interact with you. The question is designed to have no “wrong” answer. It also gives you insight into the background of your audience and their interests.
It’s possible that in some settings the audience will have a hard time even answering this question. Be ready for that, and understand that it’s not you. This gives you an early tip that you may have a quiet group for cultural or situational reasons, which is very useful for you to know!
Ask questions during your presentation
When you first start speaking, audience participation may be scary. Know that you can get past that: questions and comments from the audience are one of the most fun and rewarding things you can work with as a presenter. They help you work in real-world advice and information and make your presentation relevant.
See this as something to build up to gradually over time. Some easier questions to start with:
- How many of you have worked with this feature?
- How many folks have heard of this?
- I’m about to show a big gotcha where I’m going to do ___. Do any of you think you may know what’s coming?
- Who thinks they might try this out?
For all of these questions, you need to be comfortable with the fact that nobody may raise their hand. That’s OK! You can say something like, “Great, it sounds like this is going to be new info for most of you.” Take that as useful information. If nobody says they might try it out, ask why in a friendly way.
Here’s a few pitfalls to avoid:
- If you’ve got a super quiet audience, don’t feel that you have to force the questions or make them interact with you. It’s OK. Go with what feels more natural to you.
- Avoid the question, “Does this make sense?” I’ve had to train myself out of this one. Many people hear it as a rhetorical question, and it may just fall flat.
- Also avoid, “Is anyone confused? / Does anyone not follow me?” Unless you’ve got a super-comfortable, confident, close-knit group, most confused people will be too shy to raise their hands.
I try to be very open about areas that have been very confusing to me in the past, or which may have stumped me for a while. Don’t force yourself to do this, but if you can get comfortable sharing with your audience what has been hard for you, this may help them get over the fear of “being the one to ask the dumb question”.
Give people an activity
I am a huge fan of challenges, quizzes, and interactive activities in our training classes. I’m always trying to think of new ways that I can engage learners to actively think through problems, because I believe that most people learn better when they get to try to solve a problem.
If you’ve got some presenting experience, you can build quizzes and design activities into your sessions. This does involve some risk taking, because you need a way to get people comfortable working together. I like to keep group activities short and give people a clear mission, then meet up again right away as a group to dive back into the class learning, but there are many ways to do this.
Take breaks. No really, take breaks.
As a presenter, you need breaks. So do your attendees. Getting up and moving around on a regular basis helps people focus. Don’t feel like people are better off if you blast them with learning non-stop: they aren’t!
If you’re presenting with a laptop, you can make it easier for your attendees to relax and laugh during the break. We like to put DBA reactions up on the screen in auto refresh mode during breaks.
Alternate Speakers (If You Can)
We often have multiple speakers for our longer training sessions. This is helpful to us as speakers, but it also helps the audience. Changing up between different people with different presentation styles, manners of speaking, and movement helps people learn actively if you can do it.
Humor is an advanced feature
One of the best ways to keep your audience active is to make them laugh. Laughing wakes people up, gets some fresh air to their brain, and gives the audience a sense of togetherness.
But, uh, this ain’t easy. I’ve had to accept that I’m the funniest when I don’t mean to be. If I want to get good laughs out of the audience, I just need to use some of the methods above to help the audience learn actively. If that’s happening and I’m comfortable being my geeky self, I’ll get some great laughs out of them, and we’re all happy.
You can do this!
Whatever stage of presenting you’re at, you can improve your skills at helping your audience learn actively. If you’re just starting out, it’s ok: first just get comfortable being on stage, then start adding these in gradually.
And if you’re sitting in the audience, wondering what it’s like to be on stage, why not take a chance and try it out?
Let’s play the quiz! We’re using the honor system: be honest with yourself.
Five Simple Questions
- When I say the first name “Hilary”, you immediately know his last name is ____.
- Have you successfully set up replication so that you can add and remove articles without running a snapshot for everything? (Yes or No) _____
- Do you have monitoring set up that notifies you immediately if a subscriber is behind? (Yes or No) ____
- How many books on SQL Server replication do you own (which you have read)? _____
- Do you know exactly how long it would take to re-initialize all your publications? (Yes or No) ____
Score Your Answers
- One point if you thought of this man.
- One point for Yes
- One point for Yes
- One point if the answer is 1 or higher.
- One point if your answer is “I know exactly how long, and I wish I didn’t know.”
Here’s how to interpret your score:
- 1-3 points: You’re just beginning, but you cared enough to take the quiz. You can get there.
- 4 points: You’re good at replication, AND you’re honest
- 5 points: You’re an expert
Ready to learn more about replication? We’ve written a lot about it.
Making index changes in SQL Server is tricky: you immediately want to know if the new index helped your performance and if it’s being used, but SQL Server execution plans and their related statistics can be insanely confusing.
It can be useful to run sp_recompile after you create an index, but not necessarily for the reason you might think.
It’s easier to show this than just write it out.
Let’s Fix a Slow Query By Creating an Index
Let’s say the biggest, baddest query on our SQL Server is from a stored procedure called dbo.kl_RecentlyCreatedAnswers. Our free tool, sp_BlitzCache, calls this procedure out for being our #1 CPU user:
exec sp_BlitzCache @top = 5, @results = 'narrow';
GO
I can fix that. I design and test this index, and I deploy it to production. It’s totally going to fix this bad execution plan!
CREATE NONCLUSTERED INDEX itotallyfixedit
ON [dbo].[Posts] ([CreationDate], [PostTypeId], [Score])
INCLUDE ([Id]);
GO
I’m excited to see how awesome things are, so I immediately run sp_BlitzCache again. But here’s what I see:
Wait a second. But I just… It hasn’t even changed… What the heck, SQL Server?
Why Is My Terrible Plan Still In Cache?
Creating the index doesn’t cause SQL Server to find related plans in the cache right away and flush them out. My execution plans will hang out in the cache until one of these things happens:
- The query runs again. SQL Server sees that the schema on the table has changed, decides it needs to reconsider what to do, and recompiles the execution plan*.
- The old plan gets “aged” out of cache if it isn’t used again (maybe pretty fast if there’s a bunch of memory pressure)
- The plan is cleared by DBCC FREEPROCCACHE, taking the database offline, a SQL Server restart, or a settings change that impacts the plan cache
*The fine print: Books Online lists the causes of recompilation here – note that creating an index on the table isn’t explicitly guaranteed by that list. However, the amazing Nacho Portillo recently blogged on this after looking at the source code: creating an index does flip a ‘schema changed’ bit that should reliably trigger a recompile. He also mentions that there’s really no way to query the plans that are still in cache but have been invalidated by the metadata change. Sorry, rocket scientists.
But My Plan Is STILL In Cache. Sort Of. Remember When I Said This Was Confusing?
Once the query runs again, I see something different. It did automatically decide to use my new index!
Wait a second. Something’s weird. Compare the average executions and CPU for the stored procedure (line 1) and the statement in it (line 2). They don’t seem to match up, do they?
Here’s what happened: the stored procedure ran again. The statement detected the schema change and recompiled. But the *whole* stored procedure didn’t recompile, and it’s showing me stats for 13 executions (not just the 10 since the index change). So my old performance metrics are all mixed up with my new performance metrics. I’m not loving that.
sp_recompile Can Help
Confusing, right? Because of this issue, you might want to run sp_recompile against the stored procedure after making an index change, even if it decided to use it. This forces the whole procedure to get a fresh plan and start collecting fresh execution statistics the next time it runs.
You could also take a heavier hand and run sp_recompile against the whole table, but do that with care: it requires schema level locks and can cause long blocking changes if lots of queries are reading and writing from that table.
Remember: even with sp_recompile, the execution plan stays in cache until it runs again (or is evicted for other reasons). The benefit is just that it will give you a “fresher” view of the execution stats for the whole stored procedure.
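In practice it’s a one-liner against the procedure from the example above, with a heavier table-level option you should use carefully:

```sql
-- Mark just this stored procedure for recompilation. The next time
-- it runs, the whole procedure gets a fresh plan and fresh stats.
EXEC sp_recompile N'dbo.kl_RecentlyCreatedAnswers';

-- Heavier hand: mark everything that references the table.
-- This needs a schema modification lock on dbo.Posts and can cause
-- serious blocking on a busy table -- monitor it, don't walk away.
-- EXEC sp_recompile N'dbo.Posts';
```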
Fact: It’s a Little Messy
The main thing to know here is that creating indexes won’t drop or flush plans out, so don’t be surprised if you see old plans in execution plan analysis after you make indexing changes. This isn’t a completely tidy process; sometimes things are a little bit messy.
If you’re actively looking at execution plans in your cache, then running sp_recompile after you create an index can help ensure you’re looking at consistent data. But use it with care and monitor for blocking: don’t leave it unattended.
This example used a downloaded copy of the StackOverflow database. Learn how to get your own here.
We always like to innovate — not just with the solutions we design for our consulting customers and in how we teach, but in our free videos, too.
Our YouTube channel has become super popular. Lots of folks watch the recordings of our live webcasts. We stopped recently and asked, “How can we make this even better for the folks who attend our live event?” And we realized: we can give you more time to ask questions about that week’s training topic!
Here’s your mission:
- Watch the video below today. We won’t be presenting this live this week or re-covering the material from the video; instead, we’re doing more advanced Q&A for the folks who’ve already watched it.
- Note down questions or comments you have on this post. (This is totally optional, but it means you won’t forget your question and it’s more likely we have time to talk about it with you.)
- Attend the live webcast on Tuesday at the normal time (11:30 am Central). Register here.
- During the first 10 minutes of the webcast, we’ll give away a prize, but you must be present to win!
The live discussion of the video and Q&A won’t be recorded and published, and you also need to be present to win the prize. See you on Tuesday!
“Enterprise Edition was installed for SQL Server, but it turns out that we only have a license for Standard Edition. Is that an easy change?”
I see this question a lot. The answer is well documented by Microsoft, but it seems to be really hard for folks to find! If you’d like to go straight to the source, everything I’m going to highlight here comes from the MSDN page Supported Version and Edition Upgrades.
Sometimes Edition Upgrades (SKUUPGRADES) are simple
If you want to make a supported edition change, it takes a little downtime but isn’t all that tricky. You run SQL Server Setup and just follow the steps in the Procedure section here.
Protip: The Edition Upgrade GUI lets you see and copy the current license key for that instance of SQL Server. (No, I’m not showing a screenshot with my key in it!)
You can also do this from the command line using the SKUUPGRADE parameter (and back in SQL Server 2005 and prior, that was your only option).
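For reference, on recent versions the command-line form looks roughly like this. Parameter names and required switches vary by SQL Server version, so treat this as a sketch and check the Setup documentation for yours:

```
Setup.exe /q /ACTION=EditionUpgrade /INSTANCENAME=MSSQLSERVER
    /PID=<your new license key> /IACCEPTSQLSERVERLICENSETERMS
```

Swap MSSQLSERVER for your named instance if you have one, and run it from an elevated command prompt.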
Changing the edition causes some downtime, but it’s a simple procedure. The fact that it’s relatively simple isn’t an excuse to skip testing: always run through this outside of production first so you know exactly what to expect. And always, always, always take your backups and make sure they’re on separate storage before you start. Document everything you need to know about your configuration just in case something goes wrong and you’ve got to reinstall.
It’s pretty simple. Except when it’s not supported.
What Goes Up Does Not Necessarily Come Down
The way I usually remember the rules is that you can typically change from a cheaper edition to a more expensive one, but you can’t necessarily go from a more expensive edition to a cheaper one.
So if you have SQL Server Enterprise Edition and you want to change to Standard Edition, a simple SKUUPGRADE isn’t going to work for you. (If you have the “Evaluation” Enterprise Edition, you’re probably OK though!) Check the chart for what you want to do to make sure.
Clusters are Special. (Not in a good way in this case.)
A lot of the confusion is around SQL Servers installed on failover clusters. You have to scroll waaaaay down on that page to see this:
Ouch! Changing the edition of a clustered SQL Server is not a simple thing.
While I’ve made you uncomfortable, check out KB 2547273, “You cannot add or remove features to a SQL Server 2008, SQL Server 2008 R2, or SQL Server 2012 failover cluster”.
What if I Don’t Know What Edition I Need?
Typically the answer here is to use Evaluation Edition. But if you’re running a failover cluster, be careful: as you can see above, you can’t easily change from Enterprise Evaluation to Standard Edition.
Will CHANGING THE EDITION Reset My Service Packs?
I believe this used to be true for SQL Server 2005: if you changed editions, you’d have to reapply service packs and cumulative updates afterward.
I just ran a test on SQL Server 2012 and upgraded from Developer Edition to Enterprise Edition on a test instance, and I still had version 11.0.3431 (Service Pack 1, Cumulative Update 10) after the upgrade.
But like I said, test this out with your version, even if it’s using a quick virtual environment that you don’t keep after the change has been completed successfully. There’s other real perks to doing this as well, such as making sure that your license keys really work and are the edition you think they are!
What If My Change Isn’t Supported By the GUI / Upgrade Installer?
In this case, you need to uninstall and reinstall SQL Server. It’s going to take longer and cause more downtime. You’ll have to reconfigure everything and reinstall service packs. (It’s not actually that much extra work, because you were going to take those backups and document all the special configuration just in case, right?)
What if I Can’t Take Much Downtime?
If downtime is a real issue, don’t make this change in place. Set up a new SQL instance, test the heck out of it, get it into monitoring, and plan a way to migrate to it with limited downtime using something like Database Mirroring. (If you’re considering this, read that link: it mentions that mixing editions between database mirroring partners isn’t supported by Microsoft. You can’t set it up through the GUI; you have to use TSQL. If that makes you paranoid, you could do the migration with log shipping.)
I’ve worked with a lot of features in SQL Server. I know what I think is tricky and more difficult than it looks like at first. But experiences vary, right?
So I asked the Twitterverse, “What are the Top 3 Trickiest Features in SQL Server?” Here’s what I heard back.
#1: Replication
SQL Server Replication “wins” the top spot for being mentioned by the most people. Maybe it won because it’s touched the hearts of the most people, since it works with Standard Edition. Maybe it’s just been in the product long enough to have tricked lots of us?
@Kendra_Little replication is definitely #1!
— Derik Hammer (@SQLHammer) July 8, 2014
#2: Availability Groups
Coming in second is SQL Server Availability Groups. These may have only been with us since SQL Server 2012, but their complexity has impressed quite a few people already.
@Kendra_Little replication, service broker, AG
— Jason Kyle (@JasonNKyle) July 8, 2014
#3: Database Administrators
The number three place goes to a feature I hadn’t thought of myself… database administrators themselves. I laughed out loud when I saw these tweets, but, well, there’s some truth in it. We are a tricksy bunch!
— Joe Fleming (@muad_dba) July 8, 2014
Other top tricky features that came up:
- Service Broker (guessing they worked at MySpace)
- SSIS (oh, the clicking! the clicking!)
- SQL Server Clustering
- Active Directory (ah, Kerberos authentication, you devil you)
- Resource Governor (someone actually used Resource Governor!?!?!)
- Extended Events
- SAN Administrators
- Enterprise Architects
My Personal Top 3
Turns out I’m not so different from the Twitter community. My personal top three trickiest features are: Availability Groups, Replication, and Service Broker. (I’m not really all that into queues in SQL Server, but I do like Event Notifications, which use the Broker framework.)
What are yours?
As teachers, we’re always working to maximize the skills that students can learn in a given amount of time.
Many students “learn by doing.” But what’s the best way to do this?
You may immediately think of lab exercises as an option. But labs are treacherous to implement in a classroom environment: ten minutes after you’ve begun, the person next to you is suffering from unexpected reboots, you’re not sure why the scripts aren’t working properly for you, and the fellow behind you is somehow already done. Even under the best of circumstances, labs don’t move at the same pace for everyone. By lunchtime, you’re bored and frustrated.
We’ve got a better way.
We’re building two very cool Challenges in our two day Seattle course, Make SQL Apps Go Faster, which will be held just prior to the SQL PASS Summit in 2014.
Challenge 1: Learn to Diagnose a Server
We work with SQL Server in two different ways: we’re consultants and teachers. The two parts of our business enrich one another. Through consulting we constantly expand our experience and understanding of the real challenges people face. In the classroom we’re always refining our methods to help people learn faster. Now we’re bringing these two things even closer together.
In our upcoming two day course in Seattle, “Make SQL Apps Go Faster”, we’re using challenges to help our students learn more and really engage with the class. For the first day, students will get dynamic management view information for a SQL Server with a problematic workload. It will contain information just like you can gather with our free tools in the real world:
- sp_Blitz® output
- A snapshot of wait statistics and information from sp_AskBrent®
- Index information from sp_BlitzIndex®
- Top plans in cache from sp_BlitzCache™
Your challenge: write down what you’d do to tackle this environment.
We won’t give away all the answers. We’ll train you on these topics using slightly different examples so that you still get to figure things out on your own! Then at the end of the day we’ll go over the challenge, talk through your solutions, and compare them to our suggestions and experience working with SQL Servers around the world.
Challenge 2: Digging into TSQL, Queries, and Execution Plans
On the second day of the training, you get to focus on query tuning challenges: you have three queries that you need to make faster. You’ll get the TSQL, schema, indexes, and execution plan information for all three queries, and your challenge is to figure out how YOU would make them faster.
On this day you’ll watch Kendra, Brent, and Jeremiah dig in deep to tune queries and indexes and delve into more advanced features of execution plans in SQL Server.
At the end of the day you’ll revisit the challenge. Would you make any choices differently? How do your ideas compare to the solutions we came up with? Can you make the queries even faster than we can!?!
Get involved in your training
Consulting and teaching have taught me a huge lesson: people learn better and faster when they’ve got something fun to work on. We have a blast teaching people about SQL Server — join us in Seattle and take the challenge.