“Should a query get to use more than one CPU core?” That’s an important question for your SQL Server. If you’re not sure what parallelism is, get started by exploring the mysteries of CXPACKET with Brent. He’ll introduce you to the setting, ‘Cost Threshold for Parallelism’.
Let’s test Cost Threshold for Parallelism
I generate an estimated execution plan for the following query. I’m running against a copy of the StackOverflow database that doesn’t have many indexes.
SELECT COUNT(*) FROM dbo.Posts WHERE PostTypeId=2
I get a parallel plan with an estimated cost of 272.29. (I can tell it’s parallel because of the yellow circle with double arrows on three of the operators.)
I decide I’m going to test out ‘Cost Threshold for Parallelism’ and make this plan go serial. This is a server wide configuration, but I’m on my test instance, so no worries.
exec sp_configure 'cost threshold for parallelism', 275;
GO
RECONFIGURE
GO
I run my query again and look at my actual plan this time…
Hey wait, that’s still parallel! It has the same estimated cost, and that cost is below where I set the cost threshold for parallelism. This seems broken.
At this point, I might get confused and think SQL Server was using a cached plan. But changing configuration options like cost threshold for parallelism triggers plan recompilation – so that shouldn’t be it.
What ‘Cost’ is SQL Server Using?
The secret is in the serial plan. I need to look at the estimated cost for this query — it’s the same as our original, but I’ve added a MAXDOP 1 hint to force a serial plan:
SELECT COUNT(*) FROM dbo.Posts WHERE PostTypeId=2 OPTION (MAXDOP 1)
GO
The estimated cost for the serial version of the plan is 287.963 – over the threshold I set at 275! The serial cost is what SQL Server compares against the threshold when deciding which queries get to go parallel. I can prove it by raising my cost threshold to just above this level:
exec sp_configure 'cost threshold for parallelism', 288;
GO
RECONFIGURE
GO
And now when I run my query (with no maxdop hint to force it), I get a serial plan.
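If you want to see both estimated costs side by side without actually running the queries, you can ask SQL Server for estimated plans only. This is a sketch using the same StackOverflow copy from the example above; the root node’s EstimatedTotalSubtreeCost in each plan is the number being compared to the threshold:

```sql
-- Return estimated plans (as XML) without executing the queries
SET SHOWPLAN_XML ON;
GO
-- Parallel candidate: check EstimatedTotalSubtreeCost on the root operator
SELECT COUNT(*) FROM dbo.Posts WHERE PostTypeId = 2;
GO
-- Forced serial: this cost is what's compared to Cost Threshold for Parallelism
SELECT COUNT(*) FROM dbo.Posts WHERE PostTypeId = 2 OPTION (MAXDOP 1);
GO
SET SHOWPLAN_XML OFF;
GO
```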
Cost Threshold Doesn’t Just Apply to the Plan You See
Behind the scenes, the optimizer is considering many possible plans. The Cost Threshold for Parallelism setting comes into play during that process and uses costs that you don’t see in the final plan.
This can be tricky to remember. Special thanks to Jeremiah, who’s explained this to me in the past (more than once!). This is covered in SQL Server Books Online, but oddly enough most of us don’t seem to find it there.
“Quorum” is incredibly important to keep your SQL Server online when you use Windows Failover Clustering or AlwaysOn Availability Groups. Learn what quorum is, how to see the current quorum configuration, how to change it, and guidelines for how to configure quorum in three common real-world scenarios.
We’re a group of specialists and teachers here at Brent Ozar Unlimited. But we didn’t start that way – we started out as engineers and developers.
So what was it that changed our career trajectories? For each of us, we had a lot of smaller factors along the way, but there have been one or two big game changers that have really made a difference.
Jeremiah: The PASS Summit
The first time I went to the PASS Summit, I attended at my own expense and used up half of my vacation time to attend. I’d been involved in the SQL Server community for a few short months, and I had been chatting via email with a goofy DBA named Brent Ozar.
Brent and I both attended pre-conference sessions, but we were in different ones. It happened that our speakers took breaks at same time, and I ended up running into Brent over on one of the breaks. We chatted and ended up eating lunch together.
I’m a shy person and a big conference can be daunting when you don’t know anybody, so I ended up tagging along with Brent throughout a lot of the conference. The upside of hanging out with Brent is that he’ll talk to anybody – speakers, Microsoft employees, or random attendees. The PASS Summit was my gateway into both the SQL Server community – I made a lot of friends that week – and into my current career. The connections I made at the Summit that year planted the seeds for everything since.
Kendra: Twitter
I never would have guessed that Twitter would change my life. And I’ve never been passionate or crazy about Twitter – I think it’s useful, but if I go a day or two without using it, I don’t mind.
But Twitter is the way that I started connecting to the larger SQL Server community. I started following people, then following their friends, and started tweeting about SQL Server. I asked and answered questions. Twitter gave me a sense of what other SQL Server developers and DBAs cared about, what problems they faced, and what they were interested in.
I’m a pretty naturally shy person, and I never knew what to talk to people about at conferences and events when I first met them. Twitter has helped me solve that problem: you can sense the mood of a conference while you’re there. You can tweet about it. You can talk about the tweets with people!
To get started with Twitter, check out our free eBook.
Jes: SQL Saturday
Specifically SQL Saturday Chicago 2010 – my first! I’d been attending my “local” user group for a few months, I’d been reading blogs, I was participating in forums, and I was even on Twitter before I went to this event. I knew I loved working with SQL Server, and I knew there were people that could help me when I had questions. But getting to attend an event – for free, no less! – and getting to meet these people was amazing.
I remember attending a session by Brad McGeehee, who’d written “Brad’s Sure DBA Checklist”, which I kept in a binder at my desk. I remember getting to watch Chuck Heinzelman and Michael Steineke set up a cluster and test it. I even went to a session by one Brent Ozar about a script called sp_Blitz. At the end of the day, with a full brain, I went to the lobby and saw a group of the presenters sitting around chatting. I bravely approached and joined the circle, which included Brent and Jeremiah, and chatted with them. Had I not attended that event, had I not made the connections I did, I doubt I would be where I am in my career today!
Look for a SQL Saturday near you!
Doug: SQL Cruise
I’d recently presented at my first two SQL Saturdays when the opportunity came to go on SQL Cruise – a contest that for some cosmic reason I felt was begging me to enter: send in your story of victory with SQL Server and win a free spot on the cruise. I made a video about refactoring bad code with a magical hammer, and won the contest. But that was just the beginning.
During the cruise, I:
- Had a dream about a murder mystery happening on the cruise, which led me to write and present my SQL Server Murder Mystery Hour session.
- Got some phenomenal career advice from people like Brent, Tim Ford, and Buck Woody, that is still helping me to this day.
- Became friends with the three people who ended up hiring me three years later for the best job I have ever had.
That’s my career adventure. What will yours be?
Brent: Meeting “Real” DBAs
I was a lead developer at a growing software business and I felt like I had no idea what was going on inside the database. Our company suddenly grew large enough to hire two “real” database people – Charaka and V. They were both incredibly friendly, fun, and helpful.
I was hooked. Who were these cool people? Where did they come from? How did they learn to do this stuff?
They both took time out of their busy jobs to answer whatever questions I had, and they never made me feel stupid. They loved sharing their knowledge. They weren’t paranoid about me taking their jobs or stealing their secrets – and looking back, they probably just wanted me to be a better developer.
They insisted that sure, I had what it took to be a database admin. I went for it, and it’s been a rocket ship ever since.
If you’re going to a SQL conference for the first time, join us to learn how things work. We’ll share what happens when you walk in the door for the first time, where to go after hours, what to bring, and what NOT to bring.
I love free tools. I also love analyzing SQL Server’s wait statistics. But I’m not a fan of Activity Monitor, a free tool in SQL Server Management Studio that helps you look at wait stats.
Activity Monitor just doesn’t give you the whole truth.
I fired up a workload with HammerDB against a test SQL Server 2014 instance. My workload runs a query that’s very intensive against tempdb, and it’s really beating the SQL Server up by querying it continuously on seven threads.
Let’s look at our wait statistics. Here’s what Activity Monitor shows in SQL Server 2014:
Here’s what our free procedure, sp_AskBrent, shows for a 10 second sample while the workload is running. I ran:
exec sp_AskBrent @ExpertMode=1, @Seconds=10;
Activity Monitor groups wait types. It took a whole lot of waits and rolled them up into ‘Buffer Latch’. That isn’t necessarily a bad thing, but I’ve never seen documentation that explains what’s rolled up into which groups. By comparison, sp_AskBrent showed me the specific wait types PAGELATCH_UP, PAGELATCH_SH, and PAGELATCH_EX, with the amounts for each one. It even showed me the type of page hitting the bottleneck (GAM) and the database and data file involved (tempdb’s data file 1).
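You can spot-check those page latch waits yourself while a workload runs. Here’s a hedged sketch against sys.dm_os_waiting_tasks – in resource_description, a page reference like 2:1:2 means database 2 (tempdb), file 1, page 2, which is the file’s first GAM page:

```sql
-- Current tasks waiting on page latches, with the page each one is waiting for
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description  -- format: database_id:file_id:page_id
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE N'PAGELATCH%'
ORDER BY wait_duration_ms DESC;
```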
Activity Monitor leaves out wait types. sp_AskBrent showed that in aggregate, my #1 wait was CXPACKET over the sample. That tells me that a lot of my queries are going parallel. That’s not necessarily a bad thing, but it’s important for me to know that about my workload at the time. It helps me know that I want to learn more, and make sure that the right queries are going parallel. (In this case, the query that’s going parallel is pretty cheap, and is just as fast single threaded. My throughput goes up dramatically if I adjust the Cost Threshold setting up a bit.)
Friends don’t let friends use Activity Monitor. It may be convenient, but it doesn’t tell you the truth. Do yourself a favor and use a free tool that gives you wait statistics straight up.
It can be tricky to introduce yourself to people at conferences when you read their blogs and watch their presentations and videos. You feel like you know them, but they don’t know you yet. What the heck do you say? So you take a deep breath, you head on over, and then… you say something really awkward.
I’ve totally been there too, with my foot right in my mouth. I am truly socially awkward, so much so that reviewing posts like this helps me get better at small talk before a big event.
Dodge the Backhanded Compliment
“Your presentations are great, even though ____.”
What it sounds like: “You’re not that good.”
What to say instead: “Your presentation on [folding paper towels] was great.”
It’s OK if you already knew how to fold a paper towel – the speaker won’t assume you’ve never seen one before. Just stop before the “even though” or “but”. Less is more! And trust me, a simple concrete statement explaining that someone’s hard work helped you out will truly mean a lot to them. If you want to follow it up with a question to start a conversation, you can chase it with, “What inspired you to present on that topic?” (If you have real criticism, it’s valuable to share that too – just after a few sentences. It’s kinda weird to lead with it.)
Sneak Past that Accidental Brush Off
“I wish I could come to your talk, but _____”
What it sounds like: “I’m not coming to your presentation.”
For newer speakers, this is disheartening to hear: they just hear “I’m not coming” and immediately picture presenting to a room full of empty seats. It has kind of a “sad Eeyore” ring, because many speakers fear “what if nobody shows up?” (This one is really hard not to say by accident – I’m still training myself out of it.)
What to say instead: “I like your title and abstract for [Cool Story, Bro].” It’s OK, you don’t have to attend. But you also don’t have to explain that you’re not going, unless they specifically ask. If you want to start a longer chat, it’s also great to ask them, “I’m thinking of attending Mladen’s session on [Security for Developers]. Do you know any other good sessions on that topic?” Speakers love to help you figure out what session to attend, even if the session isn’t their own.
Ooops, Did I Just Kinda Call You Ugly?
“You look so much _______ in person!”
What it sounds like: “You look short/dumpy/frumpy/bad in some context.” Even if you’re saying they look great now, this, uh, implies that’s not always the case.
What to say instead: “It’s great to meet you face to face.” Don’t worry, they think it’s great to meet you as well. If you’re in a place where you can have a chat, just ask, “How did you get started [publishing hilarious animated gifs on the internet]?”
Pack Your Bags For A Quick Guilt Trip
“Do you remember me? We met at ________.”
What it sounds like: “It makes me feel bad that you don’t remember me. And this might be a trick question and I’m totally trolling you, you won’t know till you answer.”
Some people are really good with names and faces. Oh, how I envy those people! The rest of us truly feel guilty if we’ve met you before, you know who we are, and we don’t remember your name. The real problem is that it’s hard for the conversation not to fall flat after this question – it just doesn’t go anywhere.
What to say instead: “I think we may have met at [a store selling panty hose in Texas] a few years back. It’s great to see you again!” Just adding a little “maybe” in there automatically puts the other person at ease if they don’t remember the situation for whatever reason.
Boy, My Tribe Sure Is Dumb!
“I love your blog posts. My developers are so dumb though! They always are doing _____.”
What it sounds like: “I don’t have anything nice to say about anyone. And maybe you’re a member of that club.”
I think some folks start like this because it’s a way of saying, “we must have this in common, right?” But it just doesn’t work so well. Leading with a negative statement gives the conversation a weird vibe, and you’re gambling that the person actually agrees with you. Sometimes they don’t!
What to say instead: “I can really relate to that blog post you wrote on [juggling chainsaws].” And if you don’t remember a specific post to talk about, but you can think of a topic, just mention that. Letting people know that you read their work is pretty darn exciting for them, all by itself.
I’m Not Really Interested in You
“You talk to ___, a lot right? Where are they?”
What it sounds like: “You’re not that important to me, but your friends are.”
It’s totally normal to talk about what you’ve got in common, but don’t start with the people you have in common. Talk about what’s important to that other person – their work, their presentation, their family – or even just offer to grab a coffee together.
What to say instead: “What’s the #1 thing on your mind this week?” This lets the person share their excitement with something, and it’s contagious. You might learn something about a fun insider event too!
How to End It, Short And Sweet
Most initial conversations at conferences aren’t very long. You’ve got sessions to go to, and there are tons of people around saying hi to each other. But don’t be afraid to end it: you’ll probably run into one another again soon. Offer to trade business cards! That will help you remember each other – especially if your business card has your picture on it.
Small Talk is Just a Skill
You’ve got to start a conversation somewhere, and at a conference, you start it with small talk. We aren’t all naturally good at it, though.
Know this: even if you do end up with your foot in your mouth, it’s OK. Just smile and keep meeting new people. We’ve all been there, and it’s not nearly as big a deal as it feels like when your face turns red.
Tune in here to watch our webcast video for this week! To join our weekly webcast for live Q&A, make sure to watch the video by 12:00 PM EST on Tuesday. Not only do we answer your questions, we also give away a prize at 12:25 PM EST – don’t miss it!
Is your SQL Server wasting memory? Join Kendra to learn how to identify when memory is going to waste, and track down whether it might be due to licensing, schema problems, fragmentation, or something else. Register now.
Looking for the queries from the video?
There are two short queries that check out the sys.dm_resource_governor_workload_groups and sys.dm_os_nodes DMVs. For those, just read up on the topic linked in Books Online and write a very simple select.
The longer query that looks at memory usage in the buffer pool is a simple adaptation of the Books Online page on sys.dm_os_buffer_descriptors. Check it out and customize it for your own needs!
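As a sketch of that kind of adaptation (not the exact query from the video), here’s a buffer pool breakdown by database built on sys.dm_os_buffer_descriptors – each row in the DMV is one 8KB page currently in memory:

```sql
-- How much of the buffer pool each database is using (8KB pages -> MB)
SELECT CASE database_id
           WHEN 32767 THEN N'(Resource Database)'
           ELSE DB_NAME(database_id)
       END AS database_name,
       COUNT(*) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;
```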
We had a comment recently on Five Things That Fix Bad SQL Server Performance that got me thinking. The comment came from a frustrated system administrator, who wrote:
Chucking resources at a problem is not solving that problem, it’s just temporarily masking the symptoms of that problem.
Funnily enough, I’ve heard the exact same thing from developers who are facing a tough problem on a dramatically undersized SQL Server. The basic gist of what they ask is:
Doesn’t it make me a bad developer if I can’t solve this problem with code?
Everybody’s worried that adding memory is somehow cheating.
Performance Problems and Memory
There are three general categories of performance problems relating to memory:
- Problems you can solve with memory OR code / indexes
- Problems you can solve only with code / indexes and NOT just memory
- Problems you can solve only with memory and NOT just code / indexes
For category #1, solving the problem with memory is often cheaper than changing code and indexes: the operational cost of the hours of development and testing can be quite high.
But I specifically started thinking about category #3. This is the category that the frustrated sysadmin and proud developer think doesn’t exist!
Here are three examples of problems that you can solve with memory, and not simply with code and indexes:
1. Memory Reduces IO On a Read-Hungry Workload
You’ve got a critical OLTP application. You’re using shared storage. Most of the time, your storage throughput is pretty good. But sometimes, other users of the storage get busy, and your read and write latencies go through the roof. Your queries slow down, blocking increases, and timeouts start hitting the application. You’ve tuned your indexes, but parts of the application rely on realtime reports that need to scan large tables.
SQL Server is designed so that in this case you can add more memory and reduce read IO to the storage, giving you more stable performance. It’s absolutely not cheating to give the server more memory — speeding up the storage would be much more expensive.
2. Some Queries Need Lots of Workspace Memory
In another database, you do lots of reporting and aggregation. You’ve got large tables. Your queries frequently join them, do sorts, and use parallelism. All of these operations need memory, and the more data you’re aggregating and querying, the more memory each of these queries can need.
Before a query starts running, it figures out how much of this workspace memory it needs, and looks at what is available given other queries that are running. SQL Server has to balance the memory used for data pages, execution plans, and this workspace memory: it doesn’t want to let one query take over! If not enough memory is available, your query has to wait.
You’ve optimized your queries and indexes and made sure the memory estimates are realistic, but when lots of people run reports, your queries can’t even get started because of a memory crunch. Solving this problem by adding more memory isn’t cheating: it’s helping the SQL Server do what it’s designed to do.
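If you suspect a workspace memory crunch, you can watch the grants directly. This is a sketch against sys.dm_exec_query_memory_grants – rows where grant_time is NULL are queries stuck waiting for memory before they can even start:

```sql
-- Queries waiting for (grant_time IS NULL) or holding workspace memory grants
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       grant_time,
       wait_time_ms
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;
```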
3. Some Features Need Lots of Memory Periodically
You’re managing a data warehouse where you’re using table partitioning to implement a sliding window. This has helped your users a lot: you can add and remove large amounts of data with minimal locking and blocking.
But for query performance reasons, you’ve had to add one or two non-aligned indexes that are present during the day. To do partition switching, you have to disable these indexes before your sliding window data load and archive each night, then recreate them afterward. Creating those partitioned indexes can consume large amounts of memory.
Using Enterprise features can require investing in Enterprise levels of memory.
Adding Memory Isn’t Cheating
Memory is an important tool for performance in SQL Server: it isn’t just something that covers up bad code, it’s often a solution to many different types of problems. Memory isn’t the only tool you need to help your queries go fast in SQL Server, but it’s certainly NOT a tool you should ignore.
Tune in here to watch our webcast video for this week! To join our weekly webcast for live Q&A, make sure to watch the video by 12:00 PM EST on Tuesday, September 16! Not only do we answer your questions, we also give away a prize at 12:25 PM EST – don’t miss it!
Curious how you can give a compelling technical presentation? Join Kendra to learn five important tips on how to select the right topic for your talk, write an effective abstract, construct a coherent presentation, and make it to the podium to give your first presentation.
Have questions? Feel free to leave a comment so we can discuss it on Tuesday!
Let’s say you’re a DBA managing a 2TB database. You use SQL Server transaction log shipping to keep a standby copy of the database nice and warm in case of emergency. Lots of data can change in your log shipping primary database: sometimes it’s index maintenance, sometimes it’s a code release, sometimes it’s just natural data processing.
And when a lot of data changes, your warm standby sometimes is a lot less warm than you’d like. It can take a long time to restore all those log files!
Here’s a trick that you can use to help “catch up” your secondary faster. A quick shout-out to my old friend Gina Jen, the SQL Server DBA and log shipper extraordinaire who taught me this cool trick years ago in a land far, far away.
Log Shipping Secret Weapon: Differential Backups
Through lots of testing and wily engineering, you’ve managed to configure nightly compressed full backups for your 2TB database that are pretty darn fast. (No, not everyone backs up this much data every night, but stick with me for the sake of the example.)
- Log shipping primary had a full backup last night at 2 am
- Log shipping secondary has transaction logs restored through 7 am
- It’s 3 pm, and you’d really like to have everything caught up before you leave the office
Here’s an option: run a compressed differential backup against your log shipping primary. Leave all the log shipping backup and copy jobs running, though — you don’t need to expose yourself to the potential of data loss.
After the differential backup finishes, copy it over to a nice fast place to restore to your secondary server. Disable the log shipping restore job for that database, and restore the differential backup with NORECOVERY. This will effectively catch you up, and then you can re-enable the log shipping restore and you’re off to the races!
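Sketched out in T-SQL, with hypothetical database and path names, the catch-up looks like this:

```sql
-- 1. On the primary (leave the log shipping backup and copy jobs running):
BACKUP DATABASE [BigDb]
    TO DISK = N'\\backupshare\BigDb_catchup.dif'
    WITH DIFFERENTIAL, COMPRESSION;

-- 2. Copy the file somewhere fast, disable the restore job on the secondary, then:
RESTORE DATABASE [BigDb]
    FROM DISK = N'\\backupshare\BigDb_catchup.dif'
    WITH NORECOVERY;

-- 3. Re-enable the log shipping restore job; it picks up from the new restore point.
```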
But Wait A Second. Aren’t Those Full Backups A Problem?
Running a full backup without the COPY_ONLY keyword will reset the “differential base”. That means that each differential backup contains changes since the last full backup.
But here’s the cool thing about log shipping: restoring a transaction log brings the new differential base over to the secondary.
So as long as you’ve restored transaction logs past the point of the prior full backup, you can restore a differential to your log shipping secondary.
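You can check where the secondary stands by looking at msdb’s restore history on it (the database name here is hypothetical). If log ('L') restores newer than the primary’s last full backup show up, the differential should be restorable:

```sql
-- Most recent restores on the log shipping secondary
SELECT TOP (10)
       destination_database_name,
       restore_date,
       restore_type        -- D = database, I = differential, L = log
FROM msdb.dbo.restorehistory
WHERE destination_database_name = N'BigDb'
ORDER BY restore_date DESC;
```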
This Sounds Too Good To Be True. What Can Go Wrong?
This isn’t foolproof. If you haven’t run a full backup in a long time, your differential backup may be really big, and taking that backup and restoring it may take much longer than restoring the logs. (Even if you’re using log shipping, you should be doing regular full backups, by the way.)
And like I mentioned above, if your log restores are so far behind that they haven’t “caught up” with the last full backup taken on the primary, you’re not going to be able to restore that differential backup to the secondary.
What If a Transaction Log Backup File Is Missing?
A technique like this could work for you, as long as a full backup hasn’t run since the transaction log backup file went missing. (If it has, you’ll need to re-initialize log shipping from a new full backup.)
But a word of warning: if you have missing transaction log backup files, you have a “broken” log chain. You should take a full backup of your log shipping primary database to get you to a point where you have functioning log backups after it, even if you’re going to use a differential to bridge the gap on your log shipping secondary. (And keep that differential around, too!) Keep in mind that you won’t have point-in-time recovery for a period around where the log file is missing, too.
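One hedged way to look for that break on the primary is msdb’s backup history – wherever one log backup’s last_lsn doesn’t match the next one’s first_lsn, the chain is broken at that point:

```sql
-- Log backup chain for a database (hypothetical name); look for LSN gaps
SELECT backup_start_date,
       first_lsn,
       last_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'BigDb'
  AND type = 'L'
ORDER BY backup_start_date;
```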
Log Shipping is Great
I just love log shipping. It’s quick to set up, it’s relatively easy to manage, it’s included in Standard Edition, and it’s got these surprising little tricks that make it easy to keep going. You can learn more about log shipping from Jes, and join us in person in 2015 in our Senior DBA training, which includes an advanced module on log shipping.