Blog

Should I Run sp_recompile After I Create An Index?

Making index changes in SQL Server is tricky: you immediately want to know if the new index helped your performance and if it’s being used, but SQL Server execution plans and their related statistics can be insanely confusing.

It can be useful to run sp_recompile after you create an index, but not necessarily for the reason you might think.

It’s easier to show this than just write it out.

Let’s Fix a Slow Query By Creating an Index

Let’s say the biggest, baddest query on our SQL Server is from a stored procedure called dbo.kl_RecentlyCreatedAnswers. Our free tool, sp_BlitzCache, calls this procedure out for being our #1 CPU user:

exec sp_BlitzCache @top=5, @results='narrow';
GO

blitzcache before

I can fix that. I design and test this index, and I deploy it to production. It’s totally going to fix this bad execution plan!

CREATE NONCLUSTERED INDEX itotallyfixedit
ON [dbo].[Posts] ([CreationDate],[PostTypeId],[Score])
INCLUDE ([Id])
GO

I’m excited to see how awesome things are, so I immediately run sp_BlitzCache again. But here’s what I see:

blitzcache after

Wait a second. But I just… It hasn’t even changed… What the heck, SQL Server?

Why Is My Terrible Plan Still In Cache?

Creating the index doesn’t cause SQL Server to find related plans in the cache right away and flush them out. My execution plans will hang out in the cache until one of these things happens:

  • The query runs again. SQL Server sees that the schema on the table has changed, decides it needs to reconsider what to do, and recompiles the execution plan*.
  • The old plan gets “aged” out of cache if it isn’t used again (maybe pretty fast if there’s a bunch of memory pressure)
  • The plan is cleared by DBCC FREEPROCCACHE, taking the database offline, a SQL Server restart, or a settings change that impacts the plan cache

*The fine print: Books Online has a list of causes of recompilation here– note that creating an index on the table isn’t explicitly on that list. However, the amazing Nacho Portillo recently blogged on this after looking at the source code, and he indicates that creating an index does flip a ‘schema changed’ bit that should reliably trigger a recompile. He also mentions that there’s really no way to query all the plans that are still in the cache but are basically ‘invalidated’ due to the metadata change. Sorry, rocket scientists.

But My Plan Is STILL In Cache. Sort Of. Remember When I Said This Was Confusing?

Once the query runs again, I see something different. It did automatically decide to use my new index!

blitzcache after after

Wait a second. Something’s weird. Compare the average executions and CPU for the stored procedure (line 1) and the statement in it (line 2). They don’t seem to match up, do they?

Here’s what happened: the stored procedure ran again. The statement detected the schema change and recompiled. But the *whole* stored procedure didn’t recompile, and it’s showing me stats for 13 executions (not just the 10 since the index change). So my old performance metrics are all mixed up with my new performance metrics. I’m not loving that.

sp_recompile Can Help

Confusing, right? Because of this issue, you might want to run sp_recompile against the stored procedure after making an index change, even if SQL Server already decided to use the new index. This forces the whole procedure to get a fresh plan and start collecting fresh execution statistics the next time it runs.

You could also take a heavier hand and run sp_recompile against the whole table, but do that with care: it requires schema-level locks and can cause long blocking chains if lots of queries are reading and writing from that table.
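Here’s what that looks like, using the procedure and table from this post:

-- Mark just the one stored procedure for recompilation:
EXEC sp_recompile N'dbo.kl_RecentlyCreatedAnswers';

-- The heavier hammer: marks every plan that touches the table for recompile.
-- This needs a schema modification lock on dbo.Posts, so watch for blocking:
EXEC sp_recompile N'dbo.Posts';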

Remember: even with sp_recompile, the execution plan stays in cache until it runs again (or is evicted for other reasons). The benefit is just that it will give you a “fresher” view of the execution stats for the whole stored procedure.

Fact: It’s a Little Messy

The main thing to know here is that creating indexes won’t drop or flush plans out, so don’t be surprised if you see old plans in execution plan analysis after you make index changes. This isn’t a completely tidy process; sometimes things are a little bit messy.

If you’re actively looking at execution plans in your cache, then running sp_recompile after you create an index can help ensure you’re looking at consistent data. But use it with care and monitor for blocking– don’t leave it unattended.

This example used a downloaded copy of the StackOverflow database. Learn how to get your own here.

Developers: You Don’t Need a DBA. I’ll Prove It.

That’s right, you heard me, and I’ll prove it on a really big stage. I’m honored to announce that I’ll be the first speaker at this year’s 24 Hours of PASS: Summit Preview Edition.

On Tuesday, September 9th, join me for my newest session: “Developers: Who Needs a DBA?” Here’s the abstract:

You store data in SQL Server, but you don’t have enough work to keep a full-time DBA busy.

In just one hour, you’ll learn the basics of performance troubleshooting and index tuning. I’m a recovering developer, and I’ll teach you the basic care and feeding of a Microsoft SQL Server 2005, 2008, 2012, or 2014 instance, plus give you scripts to keep you out of trouble.

Register now for free. While you’re there, check the boxes for some of the other 24 back-to-back SQL Server training sessions from community volunteers who’d love to share their knowledge with you. See you there!

Generating Identities

The only thing you ever need to use for database identity is an IDENTITY, right? Well, maybe. There are a lot of different options and they all have different pros and cons.

IDENTITY columns

The default way to identify objects in SQL Server is to use an INT or BIGINT column marked as an IDENTITY. This guarantees relatively sequential numbers, barring restarts and failed inserts. Using identity columns puts the responsibility for creating and maintaining object identity in the database.

SQL Server will cache IDENTITY values and generate a new batch of identity values whenever it runs out. Because identity values are cached in memory, using identity values can lead to jumps in the sequence after SQL Server is restarted. Since identities are cached in memory in large batches, they make it possible to rapidly insert data – as long as disks are fast enough.
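Here’s a minimal sketch – the table and column names are invented for illustration:

CREATE TABLE dbo.Widgets (
    WidgetId BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY, -- database-owned surrogate key
    WidgetName NVARCHAR(50) NOT NULL
);

INSERT dbo.Widgets (WidgetName) VALUES (N'Sprocket');

SELECT SCOPE_IDENTITY(); -- the identity value just generated in this scope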

Sequences

Sometimes the application needs more control over identity. SQL Server 2012 added sequences. A sequence, unlike an identity value, is a separate object in the database. Both application and database code can read from the sequence – multiple tables can share a sequence for an identity column or separate sequences can be created for each table.

Developers using a sequence can use the CACHE value to cache a specific number of sequence values in memory. Or, if the application should have minimal gaps in the sequence, the NOCACHE clause should be used.
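A quick sketch of a shared sequence (object names invented):

CREATE SEQUENCE dbo.CustomerSeq
    AS BIGINT
    START WITH 1
    INCREMENT BY 1
    CACHE 100; -- trades possible gaps for insert speed; use NOCACHE to minimize gaps
GO

CREATE TABLE dbo.Customers (
    CustomerId BIGINT NOT NULL
        CONSTRAINT DF_Customers_CustomerId DEFAULT (NEXT VALUE FOR dbo.CustomerSeq)
        PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL
);
GO

-- Application code can draw from the same sequence directly:
SELECT NEXT VALUE FOR dbo.CustomerSeq;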

The Problem with Sequential Identities

Both IDENTITY and SEQUENCE values keep identity generation squarely in the database and, by using integral values, they keep the value narrow.

You can run into problems with sequential inserts on very busy systems – this can lead to latch contention on the trailing pages of the clustered index. This issue can be resolved by spreading inserts across the table using a GUID or some other semi-random clustering key. Admittedly, most systems are never going to run into this problem.

GUIDs for Object Identity

Some developers use GUIDs as a way of managing object identity. Although database administrators balk at this, there are good reasons to use GUIDs for object identity.

GUIDs let the application generate object identity. By moving object identity out to the application layer, users can do work in memory and avoid multiple round trips to the database until they’re ready to save the entire set of data. This technique gives tremendous flexibility to application developers and users.

There’s one other thing that a well designed application gets from this technique – independence from the database. An application that generates its own identity values doesn’t need the database to be online 24/7; as long as some other system is available to accept writes in lieu of the database, the application can still function.

Using GUIDs for object identity does have some issues. For starters, GUIDs are much wider than other integral data types – 16 bytes vs 4 bytes (INT) or 8 bytes (BIGINT). This is a non-issue for a single row or even for a small database, but at significant scale this can add a lot of data to the database. The other issue is that many techniques for generating sequential GUIDs in the application (see NHibernate’s GuidCombGenerator) can still run into GUID collisions.
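If GUID generation stays in the database, NEWSEQUENTIALID() (usable only as a column default) avoids NEWID()’s random insert pattern at the cost of predictability – a sketch with an invented table:

CREATE TABLE dbo.UserEvents (
    EventId UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_UserEvents_EventId DEFAULT NEWSEQUENTIALID()
        PRIMARY KEY CLUSTERED, -- 16 bytes per key, repeated in every nonclustered index
    Payload NVARCHAR(400) NULL
);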

Integral Generators

What if you could get the best of both worlds? Applications generating unique identities that are also sequential?

The point of identity generation is to abstract away some portion of identity from data attributes and provide an independent surrogate value. GUIDs can provide this, but they aren’t the perfect solution. Identity generators like flake or rustflakes promise roughly sequential identity values that are generated in the application layer and are unique across multiple processes or servers.

The problem with an external identity generator is that it is an extra piece of code that developers need to manage. External dependencies carry some risk, but these are relatively safe items that require very little effort to implement and maintain.

The Solution

There’s no right solution, there’s only a solution that works for you. You may even use each solution at different points in the lifecycle of the same product. It’s important, though, for developers and DBAs to be aware of how identity is currently being handled, the issues that can arise from the current solution, and ideas of how to handle it going forward.

Why Archive Data?

The data story so far

Meet Margot. Margot is an application developer who works for a small company. Margot’s application collects and generates a lot of data from users including their interactions with the site, emails and texts that they send, and user submitted forms. Data is never deleted from the database, but only a few administrative users need to query historical data.

The database has grown considerably because of this historical data – the production database is around 90GB but only 12GB or so is actively queried. The remaining data is a record of user activity, emails, text messages, and previous versions of user data.

Margot is faced with an important decision – How should she deal with this increase in data? Data can’t be deleted, there isn’t budget to upgrade to SQL Server Enterprise Edition and use table partitioning, and there’s a push to move to a cloud service to eliminate some operational difficulties.

Using Partitioned Views to Archive Data

A Partitioned View

One option that Margot has read about is “partitioned views” – this is a method where data is split into two or more tables with a view over the top. The view is used to provide easy access to all of the information in the database. Storing data across many tables means DBAs can store the data in many different ways – e.g. compressed tables, separate filegroups, or tiered storage.
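A bare-bones sketch with invented table names – the CHECK constraints are what let the optimizer skip a table that can’t contain the rows a query asks for:

CREATE TABLE dbo.OrdersCurrent (
    OrderId INT NOT NULL,
    OrderDate DATE NOT NULL,
    CONSTRAINT PK_OrdersCurrent PRIMARY KEY (OrderId, OrderDate),
    CONSTRAINT CK_OrdersCurrent CHECK (OrderDate >= '20140101')
);

CREATE TABLE dbo.OrdersArchive (
    OrderId INT NOT NULL,
    OrderDate DATE NOT NULL,
    CONSTRAINT PK_OrdersArchive PRIMARY KEY (OrderId, OrderDate),
    CONSTRAINT CK_OrdersArchive CHECK (OrderDate < '20140101')
);
GO

CREATE VIEW dbo.OrdersAll
AS
SELECT OrderId, OrderDate FROM dbo.OrdersCurrent
UNION ALL
SELECT OrderId, OrderDate FROM dbo.OrdersArchive;
GO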

There’s a downside to this approach – all of the data is still in one database. Any HA solutions applied to the live portion of the data set will have to be applied to the entire data set. This could lead to a significant cost increase in a hosted/cloud scenario.

Archiving Data with a Historical Database

Archive this!

The second thing that sprang to Margot’s mind was creating a separate archival database. Old data is copied into the archival database by scheduled jobs. When users need to run historical reports, the queries hit the archival database. When users need to run current reports, queries are directed to the current application database.

Margot immediately noticed one problem – what happens when a user needs to query a combination of historical and current data? She’s not sure if the users are willing to accept limited reporting functionality.

Archiving Data with Separate Data Stores

One active database. One archival octopus.

A third option that Margot considered was creating a separate database for the data that needed to be kept forever. Current data would be written to both the live database and the historical database. Any data that didn’t ever need to be in the current database (email or SMS history) would only be written to the historical database.

Although this made some aspects of querying more complex (how could row-level security from the primary database be applied to the historical database?), Margot is confident that this solves the majority of the problems they’re facing.

This solution would require application changes to make querying work, but Margot and her team thought it was the most flexible solution for their current efforts: both databases can be managed and tuned separately, plus the primary database remains small.

Other Ideas?

Not every database needs to scale in the same way. What ideas do you have to solve this problem?

Changing SQL Server Editions: Standard, Enterprise, Evaluation and More

“Enterprise Edition was installed for SQL Server, but it turns out that we only have a license for Standard Edition. Is that an easy change?”

I see this question a lot. The answer is well documented by Microsoft, but it seems to be really hard for folks to find! If you’d like to go straight to the source, everything I’m going to highlight here comes from the MSDN page Supported Version and Edition Upgrades.

Sometimes Edition Upgrades (SKUUPGRADES) are simple

If you want to make a supported edition change, it takes a little downtime but isn’t all that tricky. You run SQL Server Setup and just follow the steps in the Procedure section here.

Edition Upgrade SQL Server 2012

“How complicated can this be?” … Famous Last Words

Protip: The Edition Upgrade GUI lets you see and copy the current license key for that instance of SQL Server. (No, I’m not showing a screenshot with my key in it!)

You can also do this from the command line using the SKUUPGRADE parameter (and back in SQL Server 2005 and prior, that was your only option).

Changing the edition causes some downtime, but it’s a simple procedure. The fact that it’s relatively simple isn’t an excuse to skip testing: always run through this outside of production first so you know exactly what to expect. And always, always, always take your backups and make sure they’re on separate storage before you start. Document everything you need to know about your configuration just in case something goes wrong and you’ve got to reinstall.
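One quick sanity check before and after the change – ask the instance what it’s actually running:

SELECT SERVERPROPERTY('Edition')        AS Edition,        -- e.g. Enterprise Edition (64-bit)
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,   -- e.g. SP1
       SERVERPROPERTY('ProductVersion') AS ProductVersion; -- e.g. 11.0.3431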

It’s pretty simple. Except when it’s not supported.

What Goes Up Does Not Necessarily Come Down

The way I usually remember the rules is that you can typically change from a cheaper version to a more expensive version. But you can’t necessarily go from a more expensive version to a cheaper version.

So if you have SQL Server Enterprise Edition and you want to change to Standard Edition, a simple SKUUPGRADE isn’t going to work for you. (If you have the “Evaluation” Enterprise Edition, you’re probably OK though!) Check the chart for what you want to do to make sure.

Clusters are Special. (Not in a good way in this case.)

A lot of the confusion is around SQL Servers installed on failover clusters. You have to scroll waaaaay down on that page to see this:

The fine print on upgrading failover clusters

Ouch! Changing the edition of a clustered SQL Server is not a simple thing.

While I’ve made you uncomfortable, check out KB 2547273, “You cannot add or remove features to a SQL Server 2008, SQL Server 2008 R2, or SQL Server 2012 failover cluster”.

What if I Don’t Know What Edition I Need?

Typically the answer here is to use Evaluation Edition. But if you’re running a failover cluster, be careful– as you can see above, you can’t easily change from Enterprise Evaluation to Standard Edition.

Will Changing the Edition Reset My Service Packs?

I believe this used to be true on SQL Server 2005– if you changed editions, you’d have to reapply service packs and cumulative updates afterward.

I just ran a test on SQL Server 2012 and upgraded from Developer Edition to Enterprise Edition on a test instance, and I still had version 11.0.3431 (Service Pack 1, Cumulative Update 10) after the upgrade.

But like I said, test this out with your version, even if it’s using a quick virtual environment that you don’t keep after the change has been completed successfully. There are other real perks to doing this as well, such as making sure that your license keys really work and are for the edition you think they are!

What If My Change Isn’t Supported By the GUI / Upgrade Installer?

In this case, you need to uninstall and reinstall SQL Server. It’s going to take longer and cause more downtime. You’ll have to reconfigure everything and reinstall service packs. (It’s not actually that much extra work, because you were going to take those backups and document all the special configuration just in case, right?)

What if I Can’t Take Much Downtime?

If downtime is a real issue, don’t make this change in place. Set up a new SQL instance, test the heck out of it, get it into monitoring and plan a way to migrate to it with limited downtime using something like Database Mirroring. (If you’re considering this, read that link– it mentions that mixing editions between database mirroring partners isn’t supported by Microsoft. You can’t set it up through the GUI, you have to use TSQL. If that makes you paranoid, you could do the migration with log shipping.)

Register for our Upcoming Free Webcasts and Contests

We have a lot of fun in the pre-shows for our Tech Tuesday Triage webcasts. Technically we start at 11:30AM Central, but we usually go live at around 11AM Central. We talk about tech topics, what we’re up to, and we take questions from the audience.

We’re trying a new experiment just for fun – giving things away. (We like experiments like that.) In the pre-show, we’re going to draw names from the attendee list, and one of the early attendees will get something cool each week. It might be one of our online training videos, a book, our favorite things like T-shirts or computer peripherals, or free entry into one of our training classes. It’ll be different each week, and it’ll be a surprise during the pre-show.

To be eligible, show up at least 15 minutes before a Tech Tuesday Triage webcast starts, and be good at reading the instructions on the screen. Good luck!

Jes Schultz Borland

Aug 5 – The Mysteries of Missing Indexes

Triage Tuesday 30 minute Webcast starting at 11:30 AM Central
SQL Server is trying to help you – when you run a query, you see a missing index request. Before you run that CREATE INDEX script in production, consider a few things. How helpful will the index be? Is it similar to an existing index? Why is it recommending three indexes that are very similar? Jes will solve some common missing index mysteries in this 30-minute webcast. Register now.


Brent Ozar

Aug 12 – Conquer CXPACKET and Master MAXDOP

Triage Tuesday 30 minute Webcast starting at 11:30 AM Central
CXPACKET waits don’t mean you should set MAXDOP = 1. Microsoft Certified Master Brent Ozar will boil it all down and simplify CXPACKET to show you the real problem – and what you should do about it. Register now.


Brent Ozar

Aug 12 – The Right Perfmon Counters for SQL Server

Sponsored by Dell Software – starting at 12:30 PM Central
You need to monitor uptime and performance, but what counters should you focus on? Microsoft Certified Master Brent Ozar will share his favorite counters, explain what they mean, and give you thresholds to watch out for. Register now.


Kendra Little

Aug 19 – What Not to Architect: Application Patterns that Don’t Fit SQL Server

Triage Tuesday 30 minute Webcast starting at 11:30 AM Central
SQL Server’s great for a lot of uses, but not every application design pattern shows off SQL Server’s best side. Join Kendra in this free 30 minute webcast to learn about the top mistakes developers make in architecting solutions with SQL Server. Register now.


Brent Ozar

Aug 19 – Troubleshooting Shared Storage for DBAs

Sponsored by Idera – starting at 12:30 PM Central
Your SQL Server’s data lives on the SAN, and you’re not happy about that. All the Perfmon metrics you gather seem to point to a storage problem, but the SAN admin says the storage is sitting around bored, so it must be a SQL Server problem. Brent Ozar feels your pain – years ago, he was a DBA in the same position, so when his SAN admin quit, Brent took the job. In just 90 minutes, he’ll teach you what’s inside the SAN, why multipathing is so important, how to test storage throughput, and why TempDB should be on local SSDs. Register now.


Brent Ozar

Aug 26 – How to Get Your Query’s Execution Plan

Triage Tuesday 30 minute Webcast starting at 11:30 AM Central
You’ve got a query that’s running too slow, and you need to figure out why. The first step is admitting that you’ve got a problem, but the second step is getting the execution plan. Brent Ozar will show you how to get an estimated execution plan, why you probably shouldn’t, and several different ways to get the actual execution plan. He’ll finish with his favorite resources for learning how to read and improve execution plans. Register now.


Jeremiah Peschka

Sept 2 – Oracle Backup Basics

Triage Tuesday 30 minute Webcast starting at 11:30 AM Central
Database backups are a critical part of being a database administrator. And, let’s face it, many SQL Server DBAs are also responsible for Oracle systems. In this presentation, Jeremiah Peschka covers the basics of Oracle database backups in a way that will make sense of Oracle backups for SQL Server DBAs. Register now.


Kendra Little

Sept 16 – Top 5 Tips for Your First Presentation

Triage Tuesday 30 minute Webcast starting at 11:30 AM Central
Curious how you can give a compelling technical presentation? Join Kendra to learn five important tips on how to select the right topic for your talk, write an effective abstract, construct a coherent presentation, and make it to the podium to give your first presentation. Register now.


Brent Ozar

Sept 17 – What Queries are Killing My Server?

Sponsored by Idera – starting at 12:30 PM Central
You’ve got an in-house application that just isn’t fast enough, but why? You’ve tried running Profiler to trace queries, but it takes too much overhead, and you don’t end up with a really good understanding of what’s going on. In just one hour, you’ll learn three easy ways to catch the killer queries lurking in the shadows. Even better, they’re all completely free. Register now.


Doug Lane

Sept 23 – Introducing sp_BlitzRS

Triage Tuesday 30 minute Webcast starting at 11:30 AM Central
It’s hard to keep up with what your Report Server is doing, especially if your only tool is Report Manager. Now there’s an easier way, using the newest member of the sp_Blitz family: sp_BlitzRS! In this 30-minute webcast, you’ll learn how sp_BlitzRS can help you stay informed of how well your SSRS installation is running, who gets what subscriptions, which reports would benefit from preprocessing, and more! Register now.

The Top 3 Trickiest Features in SQL Server (Twitter Poll)

I’ve worked with a lot of features in SQL Server. I know what I think is tricky and more difficult than it looks like at first. But experiences vary, right?

So I asked the Twitterverse, “What are the Top 3 Trickiest Features in SQL Server?”  Here’s what I heard back.

#1: Replication

SQL Server Replication “wins” the top spot for being mentioned by the most people. Maybe it won because it’s touched the hearts of the most people since it works with Standard Edition. Maybe it’s just been in the product long enough to have tricked lots of us?

#2: Availability Groups

Coming in second is SQL Server Availability Groups. These may have only been with us since SQL Server 2012, but their complexity has impressed quite a few people already.

#3: DBAs

The number three place goes to a feature I hadn’t thought of myself… database administrators themselves. I laughed out loud when I saw these tweets, but, well, there’s some truth in it. We are a tricksy bunch!

Honorable Mentions

Other top tricky features that came up:

  • Service Broker (guessing they worked at MySpace)
  • Security
  • SSIS (oh, the clicking! the clicking!)
  • SQL Server Clustering
  • PIVOT
  • XML
  • CLR
  • Active Directory (ah, Kerberos authentication, you devil you)
  • Resource Governor (someone actually used Resource Governor!?!?!)
  • Extended Events
  • SAN Administrators
  • Enterprise Architects

My Personal Top 3

Turns out I’m not so different from the Twitter community. My personal top three trickiest features are: Availability Groups, Replication, and Service Broker. (I’m not really all that into queues in SQL Server, but I do like Event Notifications, which use the Broker framework.)

What are yours?

Using Triggers to Automatically Add Indexes

Let’s say you’ve got a vendor application that creates tables, loads them, queries them, and drops them all the time. You can’t change the code.

The tables involved desperately need an index in order to go faster, and they’re your biggest performance bottleneck.

Enter DDL triggers. Regular DML triggers fire when you insert/update/delete rows in a table, but DDL triggers fire when you create/alter/drop tables or databases. You may remember them from the last time Jeremiah and I played with them – way back in 2008!

The code below fires whenever a table is created, checks to see if the table name is like CustomerExport%, looks for the existence of a CustomerName field (which I probably should have done with XQuery, but I’m lazy), and if it exists, creates an index on the CustomerName field:

CREATE TRIGGER trgTableAdded ON DATABASE AFTER CREATE_TABLE AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @TableName SYSNAME, @SchemaName SYSNAME, @StringToExecute NVARCHAR(4000);
    -- Pull the new table's name and schema out of the event XML
    SELECT @TableName = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]','SYSNAME');
    SELECT @SchemaName = EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]','SYSNAME');
    -- Only touch the vendor's export tables, and only if the column exists
    IF @TableName LIKE 'CustomerExport%'
        AND EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
            WHERE COLUMN_NAME = 'CustomerName' AND TABLE_SCHEMA = @SchemaName AND TABLE_NAME = @TableName)
    BEGIN
        -- Build and run the CREATE INDEX dynamically, quoting names to be safe
        SET @StringToExecute = 'CREATE INDEX IX_CustomerName ON ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName) + ' (CustomerName);';
        EXEC(@StringToExecute);
    END
END
GO
And when Kendra found out what I’d done, she broke my arm.

Voila – automatic indexing, and a lifetime of regret and careful debugging. See, there’s a boatload of ways this can break:

No datatype checks. In the above example, I’m not checking for datatype. If the vendor changes CustomerName to an NVARCHAR(MAX), my table creation will bomb with an error that CustomerName “is of a type that is invalid for use as a key column in an index.”
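If I wanted to harden the trigger against that, INFORMATION_SCHEMA.COLUMNS reports CHARACTER_MAXIMUM_LENGTH = -1 for the MAX types, so a standalone check along these lines (hypothetical table name) could rule them out before creating the index:

-- Returns a row only if CustomerName exists with an indexable (non-MAX) type
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'CustomerName'
  AND TABLE_NAME = 'CustomerExport_Demo'    -- hypothetical
  AND (CHARACTER_MAXIMUM_LENGTH IS NULL     -- non-character types
       OR CHARACTER_MAXIMUM_LENGTH <> -1);  -- -1 means (MAX)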

Queries can fail. We had some queries fail (particularly SELECT INTOs) because the table schema changed before the rows started going in. Multi-statement transactions may also fail because the whole transaction will get rolled back if the trigger fails.

No duplicate index checks. If the vendor later decides to add an index as part of their code, I could be creating a duplicate index, and now slowing my processes down. Or they might just change the way they do exports, and no longer need the index at all – and I won’t be able to catch that since the tables and indexes appear and disappear so fast. I could actually set up another DDL trigger looking for table drops of these same names, and log index usage information to a table, then periodically analyze that information to see if I needed different indexing.
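Here’s a rough sketch of that logging idea – the table and trigger names are invented, and it only captures who dropped what and when; index usage stats would still have to be sampled separately while the table exists:

CREATE TABLE dbo.TableDropLog (
    LogId INT IDENTITY(1,1) PRIMARY KEY,
    DroppedAt DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    SchemaName SYSNAME NOT NULL,
    TableName SYSNAME NOT NULL,
    LoginName SYSNAME NOT NULL
);
GO

CREATE TRIGGER trgTableDropped ON DATABASE AFTER DROP_TABLE AS
BEGIN
    SET NOCOUNT ON;
    -- EVENTDATA() carries the details of the DROP that fired this trigger
    INSERT dbo.TableDropLog (SchemaName, TableName, LoginName)
    SELECT EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]','SYSNAME'),
           EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]','SYSNAME'),
           EVENTDATA().value('(/EVENT_INSTANCE/LoginName)[1]','SYSNAME');
END
GO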

Lots of risk and work here. Let’s not even go into how incredibly hard it is to catch these issues when the tables and the query plans keep disappearing from memory – it’s not even enough to just grab the slow queries, but you also have to grab the tables and their contents so you can reproduce the queries to figure out the right index to build. Obviously this isn’t a solution everyone should go implement on their vendor apps, particularly mission-critical ones, but it’s an example of how far you can go to tune vendor code when other options have been taken off the table.

Learning By Doing: How We’re Innovating In Our Seattle Course, “Make SQL Apps Go Faster”

A team challenge in our training in Chicago, 2013

As teachers, we’re always working to maximize the skills that students can learn in a given amount of time.

Many students “learn by doing.” But what’s the best way to do this?

You may immediately think of lab exercises as an option. But labs are treacherous to implement in a classroom environment: ten minutes after you’ve begun, the person next to you is suffering from unexpected reboots, you’re not sure why the scripts aren’t working properly for you, and the fellow behind you is somehow already done. Even under the best of circumstances, labs don’t move at the same pace for everyone. By lunchtime, you’re bored and frustrated.

We’ve got a better way.

We’re building two very cool Challenges in our two-day Seattle course, Make SQL Apps Go Faster, which will be held just prior to the SQL PASS Summit in 2014.

Challenge 1: Learn to Diagnose a Server

We work with SQL Server in two different ways: we’re consultants and teachers. The two parts of our business enrich one another. Through consulting we constantly expand our experience and understanding of the real challenges people face. In the classroom we’re always refining our methods to help people learn faster. Now we’re bringing these two things even closer together.

In our upcoming two-day course in Seattle, “Make SQL Apps Go Faster”, we’re using challenges to help our students learn more and really engage with the class. For the first day, students will get dynamic management view information for a SQL Server with a problematic workload. It will contain information just like you can gather with our free tools in the real world.

Your challenge: write down what you’d do to tackle this environment.

We won’t give away all the answers. We’ll train you on these topics using slightly different examples so that you still get to figure things out on your own! Then at the end of the day we’ll go over the challenge, talk through your solutions, and compare them to our suggestions and experience working with SQL Servers around the world.

Challenge 2: Digging into TSQL, Queries, and Execution Plans

On the second day of the training, you get to specialize in query tuning challenges: you have 3 queries that you need to make faster. You’ll get the TSQL, schema, indexes, and execution plan information for all three queries, and your challenge is to figure out how YOU would make these faster.

Always get involved in your learning and question everything. (Isn’t that what my face says here?)

On this day you’ll watch Kendra, Brent, and Jeremiah dig in deep to tune queries and indexes and delve into more advanced features of execution plans in SQL Server.

At the end of the day you’ll revisit the challenge. Would you make any choices differently? How do your ideas compare to the solutions we came up with? Can you make the queries even faster than we can!?!

Get involved in your training

Consulting and teaching have taught me a huge lesson: people learn better and faster when they’ve got something fun to work on. We have a blast teaching people about SQL Server — join us in Seattle and take the challenge.

Public Service Announcement: Please Don’t Spend Too Much.

Uncle Brent Wants You To Use Launch2015 to Save 30%

It has come to my attention that some people bought training from us last week and they paid full price.

While that’s cool – we really do appreciate your generosity – you can do better.

During the month of July, use the coupon code Launch2015 to save 30% off any of our training, including:

So in the 3 days left to use the coupon code, be smart: shop for our training videos & classes, but don’t pay full price. And use the savings to buy yourself a little something. You’re welcome.
