Key length matters in SQL Server indexes.
It’s a best practice to keep your index keys as narrow as possible, and SQL Server enforces a maximum key length of 900 bytes on most “normal” clustered and nonclustered indexes.
But what happens if you want to optimize the lookup of a wide column? You're not necessarily out of luck; you may just have to get a bit creative.
What If I Need to Do an Equality Search on a Wide Column?
Let's say I have a simple table. I have a narrow key on my clustered index, and then I have a pretty wide variable length column. I need the wide column to be unicode, which makes it even wider, since unicode characters take two bytes apiece.
Here’s our sample table with a few rows (just pretend it has a lot more):
CREATE TABLE dbo.LookupValues (
    i int identity,
    bigval nvarchar(2000) default (REPLICATE('d',700)),
    constraint pk_LookupValues_i primary key (i)
);
GO

--Insert rows with the default values
begin tran
    declare @i smallint = 0;
    while @i < 10000
    begin
        insert dbo.LookupValues default values;
        set @i=@i+1;
    end
commit
GO

--Insert a few smaller values
insert dbo.LookupValues (bigval) VALUES ('big');
insert dbo.LookupValues (bigval) VALUES ('bunny');
insert dbo.LookupValues (bigval) VALUES ('bunny bunny');
GO
Let’s say we write to this table rarely, but query it often. When this query runs, I want to make it as fast as possible:
SELECT i from dbo.LookupValues where bigval = N'bunny';
Right now, this query has to scan every row in the clustered index (the whole table) to find instances where bigval = N'bunny'. That's not ideal, and as the table grows it'll only get worse, burning more IO and CPU and taking longer over time.
There’s usually an easy way to make a query like this fast: just create a nonclustered index on the bigval column. But when I try, it doesn’t work because of restrictions on key size.
--Make my query faster!
CREATE NONCLUSTERED INDEX ix_LookupValues_bigval
    on dbo.LookupValues (bigval);
GO
Warning! The maximum key length is 900 bytes. The index 'ix_LookupValues_bigval' has maximum length of 4000 bytes. For some combination of large values, the insert/update operation will fail.
Msg 1946, Level 16, State 3, Line 1
Operation failed. The index entry of length 1400 bytes for the index 'ix_LookupValues_bigval' exceeds the maximum length of 900 bytes.
The statement has been terminated.
Terminated. Yeah. I can’t just index this to make my query fast.
Options for Indexing Wide Keys
So what’s a performance tuner to do?
My first thought when I hit this problem was that I might have to use a fulltext index. A fulltext index can work here– it lets you index large columns, but it would be kind of a bummer to have to do it. Fulltext indexes have extra overhead and are really designed for different things than doing a simple equality search, so it would be like using a jackhammer because you can’t find a mallet.
My partner Jeremiah Peschka came up with a quick and clever solution using an indexed computed column. You can work all sorts of cool magic with computed columns in SQL Server– the trick is just to remember them!
Here’s how it works: you add a computed column to the table that’s the hash of the large value. You then index the computed column and modify your query to take advantage of it.
In this example we use SHA2_512 for the hashing algorithm. This gives an output of 64 bytes– well within our limits for index key sizes.
ALTER TABLE dbo.LookupValues
    ADD bigvalhash AS HASHBYTES('SHA2_512', bigval) PERSISTED;
GO

CREATE NONCLUSTERED INDEX ix_LookupValues_bigvalhash
    on dbo.LookupValues (bigvalhash)
    INCLUDE (bigval);
GO
Now, to get the query to work, we need to change it a bit:
SELECT i
from dbo.LookupValues
where bigvalhash = HASHBYTES('SHA2_512', N'bunny')
    and bigval = N'bunny';
GO
This revised approach gives me a targeted index seek and limits my logical reads. Voila!
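If you want to verify the improvement yourself, here's a quick sanity check– a minimal sketch, assuming you've built the table and index from the scripts above. Compare the logical reads for this query against the same query filtered on bigval alone:

SET STATISTICS IO ON;

--The seek on the hash index should touch only a handful of pages
SELECT i
from dbo.LookupValues
where bigvalhash = HASHBYTES('SHA2_512', N'bunny')
    and bigval = N'bunny';

SET STATISTICS IO OFF;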
The Fine Print on This Solution
There are a few things to note:
- HASHBYTES results are dependent upon datatype. If my query used HASHBYTES('SHA2_512', 'bunny'), it would not find any rows, because the column stores hashes of unicode values and I provided the hash of a non-unicode value. (See the quick check after this list.)
- I do still include "bigval = N'bunny'" in my query. In theory there shouldn't be collisions with SHA-512, but the extra predicate doesn't add much expense to the query, so in my example I deemed it worth it. You might make a different choice.
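Here's that quick check– you can run this to see the datatype sensitivity for yourself. The same five letters hash to completely different values depending on whether the input is unicode:

--The varchar and nvarchar versions of 'bunny' produce completely
--different hashes, so the literal you hash in the query must
--match the column's datatype.
SELECT HASHBYTES('SHA2_512', 'bunny')  AS varchar_hash,
       HASHBYTES('SHA2_512', N'bunny') AS nvarchar_hash;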
Sometimes Old Tools Do the Trick
What I love most about this solution is that it’s creative, but it’s not really weird, when you think about it. It uses standard features that have been in SQL Server for a long time to create a way to do something that seems like the product wouldn’t support– and that’s really cool.
It’s difficult to define why some things are wildly successful in a sea of forgettable products. Some recipes have a mysterious umami that comes from a subtle dash or two of magical sauce. In business and technology there’s an indefinable edge that sets people and teams apart. How do you get that special something?
Here’s one strategy businesses use all the time: layer products to build new, deeply specialized meta-products. First, they create a service or application they can sell. It generates revenue, but it also generates data. Sometimes they have to adjust (or heaven forbid, pivot) the product, but they get good at it. They start making money.
After a while they apply analysis to harvest meta-information from the product. The meta-information is used to create new, different products. These products may offer extra insights to existing customers, but they may also suit new and different customers, too. BLAMMO, the business is more diverse and successful.
These techniques aren’t just for packaged products. This is also what helps companies create services that seem a little magical, because they’re simultaneously simple and complex. We use these principles to make our own SQL Server Training super sharp and effective for students.
This isn’t something that only we can do. You can use the same techniques to find your own secret sauce.
Step 1: To Get Ahead, Look Back
One of our primary consulting products is our Critical Care Sessions. We work closely with a client to identify their pain points and the bottlenecks in their SQL Server environment– issues can be anywhere from the storage layer to SQL configuration to query optimization and execution. We deliver recommendations for the client to implement within the first couple of weeks, the next month, and the next quarter. We tailor each recommendation to the client’s specific needs.
After a few weeks have passed, we meet again. We work with the client to find out how much they’ve gotten done. What’s been easy? What’s been difficult? Are there any questions we can help out with?
We listen very carefully, because this is an important source of information. Following up with your customers and finding out what has worked for them and why some tasks are trickier for them than normal is one of the most important things you can do. This tells us:
- Where misunderstandings commonly occur
- What types of data successfully get buy-in to make difficult changes
- Which scripts and tools are really critical to get people that data
The truth is this: it’s easy to know facts. It’s difficult to be effective. Because of this, we constantly collect data on what helps people get things done and evolve our tools.
This information is certainly helpful to our consulting product itself– it keeps us at the top of our game. But we also get an important component of our secret sauce from this: we can train people in the classroom to be effective using the techniques and scripts we’ve refined.
To apply this step yourself, create a habit of regularly meeting with customers and listening to feedback. Make notes. You won’t always be able to implement every idea you have, but keep track of what you find along the way. Steps 2 and 3 will give you a chance to harvest that information.
Step 2: Identify Your Differentiators
Very smart people are sometimes terrible teachers. We knew we could teach on a wide variety of subjects. But how did we build a great training?
The key is to identify what sets you apart and narrow your scope to projects where you can impact your audience deeply. We don’t want to fill a training room with warm bodies and talk about general SQL Server topics. We want to get the right people into the room so that we can address their immediate problems and have them jumping out of their seats to get going.
To create great training, we identified what we do better than anyone else. These are our differentiators:
- We’ve built tools and scripts we can quickly train people to use effectively to diagnose server health and performance and tackle tough indexing problems.
- We've created specialized materials to help developers learn the critical factors in SQL Server performance in a short amount of time.
- We have refined queries and techniques to teach developers to recommend effective changes in their environment.
- We have an army of slide decks, blog posts, and even animated gifs to help developers solve performance problems, no matter how niche– so making Q&A awesome is no problem.
Knowing these differentiators made our mission clear. We knew we could change people's careers with a short two-day class, targeted at a specific audience (developers), and give them immediately useful scripts and techniques to improve performance.
We also looked at differentiators in how we offer the training itself. What new things could we do to help people access the training and make it work for them in a new way?
- We created a business justification form to help people show the value of their training to management before signing up.
- We also included two two-hour group Q&A webcasts for just the people in the training session two weeks after the event. You know how most trainings leave you with a feeling of confusion about a few topics when you get back to your desk and try to apply what you learned? These Q&A webcasts mean our attendees can get answers for those difficult situations.
Define your differentiators for yourself. What do you do better than anyone else in your environment? What needs do you fill, or could you fill, that nobody else covers well? Where are the gaps and needs in your workplace that you find interesting?
Step 3: Don’t Kill Yourself with Focus
You’ll never get permission to have an idea.
We didn’t start our business with the plan of giving our own training events. We love consulting. We speak in front of people all the time– we give our weekly webcasts and speak at local and national conferences, but we primarily think of ourselves as people who go out and initiate change rather than people who stand behind a podium and lecture. We naturally focused on consulting as our core business.
Focus is great and it can help make you successful. But too much focus makes you predictable. It keeps you from really using your secret sauce.
We give training because we were open to a new idea. We realized that a two-day training is perfect for teaching a developer what they need to know about performance tuning SQL Server so they can design changes for real-world environments. It's just enough time to get them out of the office and teach them how to use the right tools, but not so long that they're swamped under an incredible tidal wave of email when they return.
Set aside time at least once a month to think about how your job could be dramatically different. Don’t restrict yourself by immediate needs or how much time you have available. Give yourself room to dream of what you’d be really proud of doing in six months or a year.
Here’s the key: don’t wait until you’re having problems to make your work more awesome. Be creative and act on your aspirations when you’re already successful and busy. That’s often when you’ll have your best ideas! We added our training offering at a time when we were already very successful, simply because we knew we could offer a very high quality product and help developers in a way nobody else can.
The Secret Sauce Isn’t Out of Reach
Good news: if you read our newsletter every week or even just made it to this point in the post, you’re probably a specialist already. You may sometimes feel like a small fish in the big SQL Server pond, but that’s just human nature. If you’re even in the pond you are well on your way toward becoming very good at jobs that are difficult to fill.
Skills that can set you apart, like becoming great at performance tuning SQL Servers, are completely within your reach. All it takes is a few steps to find the right path.
I’ll never think “I’ve seen it all” when it comes to SQL Server– there’s just always someone waiting around the corner who’s found a weird new thing to do with it. But there are some things I really wish I could stop finding.
One of those things I hate finding is bad fillfactor settings. Fillfactor can be a useful tool to help performance, but it’s often a performance killer if you use it incorrectly.
A quick introduction: what is fillfactor, and how is it set?
“Fillfactor” is a setting for indexes in SQL Server. When you create or rebuild an index, you can tell SQL Server what percentage of each 8K data page used in the “leaf” level of the index it should fill up.
In other words, if you set a 90% fillfactor when you rebuild a clustered index, SQL Server will try to leave 10% of each leaf page empty. The empty space is written to disk along with the page, and it takes up the same amount of room when the page sits in memory.
By default, SQL Server uses a 100% fillfactor and tries to fill up all the pages in indexes as close to full as it can. Depending on how many rows actually fit on the page, your mileage may vary.
There are two ways to set fillfactor in SQL Server:
- At the SQL Server instance level using a sys.configurations setting for fill factor. (I don’t recommend using this option, for reasons I’ll describe below.)
- At an individual index level by specifying fillfactor when you create or rebuild the index.
Index maintenance jobs sometimes automate the second of these options. If you’re using an index maintenance stored procedure you picked up from around the web, you may be setting fillfactor without realizing it.
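For example, here's a minimal sketch of the index-level approach, reusing the dbo.LookupValues demo table from earlier (the index name and the value 95 are just illustrations, not recommendations):

--Rebuild one index with an explicit fillfactor.
--Only this index is affected.
ALTER INDEX pk_LookupValues_i ON dbo.LookupValues
    REBUILD WITH (FILLFACTOR = 95);
GO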
Why do people love to set fillfactor?
DBAs and developers often read that lowering the fillfactor improves performance by reducing page splits. Perhaps they’re trying to fix a performance problem, or perhaps they’re feeling paranoid. They either lower fillfactor too much on some indexes, or apply a fillfactor change to all indexes.
Here’s the scoop: it’s true that the default fillfactor of 100% isn’t always good. If I fill my pages to the brim, and then go back and need to insert a row onto that page, it won’t fit. To make the data fit and preserve the logical structure of the index, SQL Server will have to do a bunch of complicated things (a “bad” type of page split), including:
- Add a new page
- Move about half the data to the new page
- Mark the data that was moved on the old page so it’s not valid anymore
- Update page link pointers on existing pages to point to the new page
And yep, that’s a lot of work. It generates log records and causes extra IO. And yes, if you have this happen a lot, you might want to lower the fillfactor in that index a bit to help make it happen less often.
Where do people mess up fillfactor?
Here’s the thing: having a bunch of empty space on your data pages is ALSO bad for performance. Your data is more spread out so you probably have to read more pages into memory. You waste space in cache that’s just sitting there empty. That’s not only not awesome, it can be TERRIBLE in many cases.
This is particularly wasteful because not all indexes are prone to "bad" page splits. Let's say I have a clustered index on an incrementing INT or BIGINT identity value. I insert loads of new rows, and values are rarely updated or deleted. In this case I can fill my pages very full, because I'm always putting new rows at the "end" of the index. Allocating these new pages isn't a "bad" page split– although unfortunately they are counted in the "page splits/sec" performance counter, which makes it very tricky to find just the "bad" splits.
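If you want to watch that counter, you can query it directly. Just remember the caveats: the raw value is cumulative since the instance started (sample it twice and take the difference to get a rate), and it lumps the "good" and "bad" splits together:

--Page Splits/sec counts ALL splits, including benign
--end-of-index page allocations.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Page Splits%';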
I frequently find that people have put a fillfactor setting of 80 or below on all the indexes in a database. This can waste many GB of space on disk and in memory. This wasted space causes extra trips to storage, and the whole thing drags down the performance of your queries.
Best practices for setting fillfactor
Here’s some simple advice on how to set fillfactor safely:
- Don’t set the system wide value for fillfactor. It’s very unlikely that this will help your performance more than it hurts.
- Get a good index maintenance solution that checks index fragmentation and only acts on indexes that are fairly heavily fragmented. Have the solution log to a table. Look for indexes that are frequently fragmented. Consider lowering the fillfactor gradually on those individual indexes using a planned change to rebuild the index. When you first lower fillfactor, consider just going to 95 and reassessing the index after a week or two of maintenance running again. (Depending on your version and edition of SQL Server, the rebuild may need to be done offline. Reorganize can’t be used to set a new fillfactor.)
This second option may sound nitpicky, but in most environments it only takes a few minutes to figure out where you need to make a change. You can do it once a month. And it's worth it– nobody wants to watch their database performance slow down, only to realize they've been causing extra IO by leaving many gigabytes of space in memory needlessly empty.
Now that we've covered the basics, let's get to the fine print: five details about how fillfactor is implemented, and what it does and doesn't impact.
1) Books Online warned you
The most painful thing about finding bad fillfactor settings is that the Microsoft team has tried to warn people about how bad fillfactor settings can hurt performance. Unfortunately, most people don’t seem to find the warning.
Check out this quote in Books Online: “For example, a fill factor value of 50 can cause database read performance to decrease by two times.”
That's pretty clear, right? So if I set a fillfactor of 70 (when I don't need it), I'm leaving nearly a third of every page empty and risking a similar hit to read performance. That doesn't sound great, either. Because of this delicate balance, follow the best practices above.
2) Fillfactor does not apply to heaps
The fillfactor setting only applies to indexes, not to all tables. If you have a table which does not have a clustered index, it’s called a “heap” in SQL Server. Heaps are weird in several ways. One of those ways is that fillfactor doesn’t apply– not even the fillfactor you set at the instance level. (Page splits don’t work the same way either, but this is not a good reason to have heaps.)
3) Fillfactor doesn’t impact new pages inserted at the end of an index
SQL Server only uses fillfactor when you’re creating, rebuilding, or reorganizing an index. It does not use fillfactor if it’s allocating a fresh new page at the end of the index.
Let's look at the example of a clustered index where the key is an increasing INT identity value again. We're just inserting rows, and SQL Server is adding new pages at the end of the index. The index was created with a 70% fillfactor (which maybe wasn't a good idea). As inserts add new pages, those pages are filled as much as possible– likely well over 70%. (It depends on the row size and how many rows can fit on the page.)
4) Fillfactor does not apply to LOB pages
Fillfactor applies to in-row data pages. When you create a table, depending on the data types, you have options as to whether some large types get stored in-row or off-row. When data is not stored in-row, fillfactor settings don't apply to those special pages.
A general rule of thumb is that if you infrequently read large object columns, it's better to keep those columns off-row. You'll incur extra IO every time you need to fetch the off-row data, but it keeps the frequently accessed in-row columns of your index more efficient.
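For the (max) data types, you can control this with sp_tableoption. Here's a sketch using a hypothetical dbo.Documents table– note this option applies only to large value types (varchar(max), nvarchar(max), varbinary(max), xml) and only affects data written after the change:

--Hypothetical table with a LOB column
CREATE TABLE dbo.Documents (
    DocId int identity primary key,
    Body nvarchar(max)
);
GO

--Force (max) values off-row for rows written from now on
EXEC sp_tableoption 'dbo.Documents', 'large value types out of row', 1;
GO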
5) Someone may have changed fillfactor without you realizing it
Once fillfactor is set on an index, it stays there. Further rebuilds or reorganizations of the index maintain that fillfactor unless you specify a different value. It’s easy for a change in fillfactor to sneak in. Unless you check for indexes with a fillfactor set, you might not realize what’s going on in your database.
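One simple way to check is the fill_factor column in sys.indexes (0 and 100 both mean "fill the pages completely"):

--Find indexes where someone has set a non-default fillfactor
SELECT OBJECT_NAME(object_id) AS table_name,
       name AS index_name,
       fill_factor
FROM sys.indexes
WHERE fill_factor NOT IN (0, 100);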
Like anything else, there are weird exceptions: in very rare cases, setting a super-low fillfactor on a very heavily updated table (which is probably small) can help reduce contention. These cases are very rare. Often there's a better long term change in the application layer that would handle the issue.
Bottom line: If you find you’re using a low fill factor and you don’t know exactly why it was needed, always look carefully to see if you’re hurting performance more than helping it. And if you’re using the same fillfactor on all of your indexes, you’re probably wasting lots of valuable memory.
How to check your fillfactor today
Our free sp_BlitzIndex tool looks for all sorts of insanity in your indexes at the database level. It will snoop through your metadata and warn you about low fillfactor values, plus let you know exactly how big those indexes have grown. Check it out today.
We write a lot about best practices, we teach customers best practices, and we publish popular references for best practices about all sorts of things like performance monitoring, SQL Server Setup, and Availability Groups.
But the internet is a sea of information and many SQL Server instances have unique needs. How does a SQL developer or DBA decide what’s a best practice?
How Most People Define Best Practices
In the real world, the #1 definition of a Best Practice is, in effect, this: “Something we've been doing for years without noticing any problems.”
That’s just how human nature works. We all have a set of things we do that are “normal”, and we usually put them into our mental Best Practice Bucket. If we start to have problems, we assume it’s not due to anything “normal”.
Unfortunately, human nature makes people persist all sorts of bad practices. I find everything in the wild from weekly reboots to crazy settings in Windows and SQL Server that damage performance and can cause outages. When I ask why the settings are in place, I usually hear a story that goes like this:
- Once upon a time, in a land far far away there was a problem
- The people of the land were very unhappy
- A bunch of changes were made
- Some of the changes were recommended by someone on the internet. We think.
- The problem went away
- The people of the land were happier
- We hunkered down and just hoped the problem would never come back
- The people of the land have been growing more and more unhappy over time again
Most of the time "best practices" are implemented to try to avoid pain rather than to configure things well. And most of the time they aren't thought out in terms of long term performance. Most people haven't really implemented any best practices– they've just reacted to situations.
Why Best Practices Can Get You Fired
There’s one thing you can count on: many of yesterday’s best practices aren’t true today. Maybe the change you made long ago was a best practice at the time (or at least not an unusual or bad practice), but it may be hurting you now.
It doesn’t matter if something used to be a best practice or not. If times and technology have changed and you haven’t kept up, that outdated best practice can still get you fired if you don’t learn to let go.
What You Should Do Today
We try to make it easy for people to find out where their configuration veers away from current best practices.
Here’s what you should do today: run our free tool that checks your SQL Servers for signs you’re not following a best practice.
Look carefully and critically at everything that it points out. Don’t dismiss any of the items by saying, “oh, but we need that because we’ve always done it that way.” If it’s something you’ve been doing a long time, question it even more.
Are you getting the most out of your TSQL queries? Most people still write queries like they’re using SQL Server 2000, but there’s lots of new functionality that can make your TSQL simpler to read, write, and support.
Kendra Little introduces you to five cool features that you should think about using to improve your SQL Server queries (along with the time the demo on that feature starts):
1) The glory of the OUTPUT clause – 4:53
2) Over and Partition By – 12:23
3) The APPLY operator for reusable computations – 16:56
4) EXCEPT and INTERSECT – 22:03
5) The Magic of indexed computed columns – 24:24
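If you haven't met feature #1 before, here's a minimal sketch of the OUTPUT clause, reusing the dbo.LookupValues demo table from earlier. It captures the deleted rows in one statement, with no separate SELECT:

--Capture exactly what the DELETE removed
DECLARE @deleted TABLE (i int, bigval nvarchar(2000));

DELETE dbo.LookupValues
OUTPUT deleted.i, deleted.bigval INTO @deleted
WHERE bigval = N'bunny';

SELECT i, bigval FROM @deleted;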
Looking for the scripts from the webcast? Scroll on down to below the video for the download link.
Scripts and Links
Need to check your database compatibility level? Our sp_Blitz script does that and much more.
Download demo scripts from the webcast here: 5-TSQL-Features-Youre-Missing-Out-On-Demo-Scripts.zip
Questions from the Webcast
Here are some great questions we got from folks during the webcast which I didn’t have time to answer live, but caught up with after the show.
From John on subqueries
Question: Is the difference with using a subquery vs OVER/PARTITION BY that the subquery is processed for every row and the OVER isn't?
Answer: SQL Server is too tricksy to make things so simple to remember. Subqueries won’t necessarily get processed repeatedly for every row as a general rule. The optimizer will do its darndest to rewrite things as efficiently as it can in the time it is allowed for optimization. The behavior depends on the query and the version of SQL Server.
Special things are definitely done when you bring in OVER, though. If you’d like to nerd out on how this works, good news! Paul White wrote a detailed post on this very topic. (Warning: brain melting may ensue.)
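As a tiny illustration of the windowed alternative, here's a sketch against the earlier demo table– one pass over the data, no correlated subquery in sight:

--Count how many rows share each bigval, shown per row,
--without a self-join or correlated subquery
SELECT i,
       COUNT(*) OVER (PARTITION BY bigval) AS rows_with_same_value
FROM dbo.LookupValues;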
From Riley on CROSS APPLY vs CTEs
Question: How does the CROSS APPLY performance compare to using a CTE, where the Subtotal Plus Tax is computed in the CTE and then added to in the final SELECT? Do they generally result in the same plan?
Answer: As a general rule, I don't expect CROSS APPLY and CTEs to get the same plans– I've seen many cases where they do not.
In this specific case for making computations reusable, the plans are very different. The APPLY operator lets you put in the calculation without a FROM clause or a correlation– it uses the same Compute Scalar operator that the original version of the query used. If you put the calculation in a CTE you have to essentially make it a subquery, and that’s not necessarily something SQL Server’s going to be able to normalize out. (I did a quick test of this particular scenario just to see if it was more clever than I thought in 2012, and it gets a much more expensive plan.)
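Here's a sketch of the reusable-computation pattern in question, using a hypothetical dbo.Orders table (the 1.08 tax multiplier is just an example value). CROSS APPLY computes the value once, and every later expression can reference the alias:

--Compute SubtotalPlusTax once, then reuse it by name
SELECT o.OrderId,
       calc.SubtotalPlusTax,
       calc.SubtotalPlusTax * 0.02 AS EstimatedFee
FROM dbo.Orders AS o
CROSS APPLY (SELECT o.Subtotal * 1.08 AS SubtotalPlusTax) AS calc;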
From Donald on computed columns
Question: I thought computed columns were a bad idea?
Answer: Not at all! Like anything else, you can do something terrible with them, but they can also be absolutely brilliant by simplifying code or improving performance. Persisting and sometimes indexing the computed column can be very powerful when it’s a frequently read calculation.
From Grace on portability/ANSI standard
Question: Are these statements compatible with ANSI SQL or are they specific to MS T-SQL?
Answer: All my demos and comments here are specific to TSQL, specifically the Microsoft SQL Server implementation. Some of these are likely in the ANSI spec, but SQL Server very frequently doesn't adhere to ANSI SQL specs– it likes to do its own thang. So I wouldn't count on it!
From Mike on mikes
Question: Hey guys, what microphone setup do you use.. the sound is good. I moderate some PASS VC’s and have to get a new microphone…
Answer: Brent, Jeremiah and I all use Yeti Microphones. We love ‘em. Glad the sound is good live! It suffers a little in the recordings we publish of the webcasts because the audio is recorded through our online webcast service. For trainings we record at home, the Yeti really gets to shine.
Hungry for more on making queries better? Check out our TSQL Training.
Writing queries with date and time data types can be tricky! Join Kendra to learn tips and best practices for selecting the right data types, choosing the best performing functions to round (or truncate) values, and how to avoid common pitfalls with dates and times in SQL Server. If you have one year of experience writing T-SQL queries, this free 30 minute webcast is for you.
Notes and Links
A quick note about datetimeoffset. This is one of the more confusing topics of the webcast, because figuring out how to make applications timezone aware is tricky. A rule of thumb: if you truly need to store the time zone “offset” of the value from where the data originated for legal or research purposes (“What time did the data appear to be from the perspective of the user?”), then this is the datatype for you.
For display purposes in most applications, it's more efficient to store the time zone preference of each user in a separate table, and store datetimes in a single standardized time zone like UTC. Essentially, since people travel around and governments change time zones, persisting the time zone someone was in when they did something is usually not worth it, because it's usually not valuable information.
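Here's a minimal sketch of that approach– all table and column names are hypothetical, and a fixed offset per user ignores daylight saving (a real app might store a time zone name instead). Store event times once in UTC and convert only at display time:

--Times stored once, in UTC
CREATE TABLE dbo.Events (
    EventId int identity primary key,
    OccurredAtUtc datetime2(0) NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

--Each user's display preference lives in its own table
CREATE TABLE dbo.UserPreferences (
    UserId int primary key,
    TimeZoneOffset varchar(6) NOT NULL --e.g. '-05:00'
);
GO

--Convert to the user's local time only for display
SELECT e.EventId,
       SWITCHOFFSET(CONVERT(datetimeoffset, e.OccurredAtUtc),
                    p.TimeZoneOffset) AS LocalTime
FROM dbo.Events e
CROSS JOIN dbo.UserPreferences p
WHERE p.UserId = 1;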
A great detailed discussion (with cartoons!) of circumstances where you might choose datetimeoffset vs datetime/datetime2 is in this StackOverflow question.
Poster downloads are over this way.
Developers and database administrators frequently clash on major questions about how to scale application performance. Are they just being stubborn, or is one side really right? Jeremiah and I recently gave a talk on this topic to the AtlantaMDF User group and we wanted to open this debate to the world.
Someone’s Right and Someone’s Wrong
Developers and DBAs are usually being stubborn, and for good reason. DBAs tend to be tasked with being the protectors of an environment, while developers are on a mission to ship more features, make changes, and bring in more money. Each side has a reason for their views, but on most controversial topics, one side actually is more right than the other.
Most DBAs believe that Object Relational Mapping tools (ORMs) write terrible code and that stored procedures are a better data access layer. This belief is widespread because database administrators struggled for years with early versions of NHibernate and Entity Framework.
Some of the growing pains hurt: everything from type mismatches to hilariously long and tangled queries have burned people. But developers embrace ORMs for good reasons. These tools really do help build, test, and ship features more quickly. If you become a SQL Server performance tuning specialist, you can get the best of both worlds– but you have to let go of some of your hangups about ugly SQL queries.
It’s hard to argue against application caching… at least unless you’ve tried to implement it and come up frustrated. DBAs argue that developers should cache everything, while developers can tell you how that isn’t as easy as it sounds.
We know that usually there IS a place for caching, you just need to know how to find the “biggest bang for your buck” opportunities. (We’ve got some slides and scripts below to get you started.)
Access to Production
Who hasn’t had this argument over and over? Usually developers are fighting to get into production and DBAs are fighting to keep them out. Lots of time gets wasted on this topic needlessly.
We created tools like sp_BlitzIndex to help people share information for performance tuning more easily. You just gotta find the right tools and the right level of access that lets everyone do their jobs (and keep their jobs).
Check out the Full Presentation
Wish you were there? Check out the slides:
Get the Scripts
Download ‘em here.
Join Us Live
We train developers and DBAs to diagnose the real bottlenecks in their SQL Servers and use data to get beyond the problems we just described. Join us at a live training event to learn how to performance tune SQL Server.
Developers frequently make a big mistake: they tune out on discussions of storage performance and IO. Don’t fall into this trap! Understanding how storage impacts SQL Server performance helps you tune the right parts of your code and focus on what really matters. In this 30 minute video, Microsoft Certified Master Kendra Little demonstrates why developers need to understand the IO impact on writes and reads in SQL Server. She’ll show you how simple configuration changes can sometimes save hundreds of hours of developer time.
Interested in the tools or references in the video? Check out the links at the bottom of this post.
Links and References
On Monday, March 11, Jeremiah and Kendra will be speaking at the AtlantaMDF SQL Server User group in Alpharetta, Georgia. We’ll be fresh from our company retreat in Mexico and may or may not be sporting some killer sunburns.
We'll be giving a new talk, 'Performance Tuning Challenge: DBAs vs Developers'. We're going to tackle three big issues where developers and DBAs frequently go to war about how to write and tune code using SQL Server. We'll talk about each side's perspective, and debate who's right and who's wrong. We'll also share the secrets of how we've helped teams move past all the controversy and solve their production problems.
If you’re in the Atlanta area, join us for this free event on Monday, March 11 at 6:30 PM.
Whether you’re planning a major migration, a code release, or an update to Windows security patches, there’s always the risk that something may go terribly wrong. How can you prevent disaster? In this 30 minute webcast, Kendra will share the top three elements to planning a successful change for SQL Server.