We’re a group of specialists and teachers here at Brent Ozar Unlimited. But we didn’t start that way: we started out as engineers and developers.
So what changed our career trajectories? Each of us had a lot of smaller factors along the way, but one or two big game changers really made the difference.
Jeremiah: The PASS Summit
The first time I went to the PASS Summit, I paid my own way and used up half of my vacation time to attend. I’d been involved in the SQL Server community for a few short months, and I had been chatting via email with a goofy DBA named Brent Ozar.
Brent and I both attended pre-conference sessions, but we were in different ones. It happened that our speakers took breaks at the same time, and I ran into Brent on one of those breaks. We chatted and ended up eating lunch together.
I’m a shy person, and a big conference can be daunting when you don’t know anybody, so I ended up tagging along with Brent throughout a lot of the conference. The upside of hanging out with Brent is that he’ll talk to anybody – speakers, Microsoft employees, or random attendees. The PASS Summit was my gateway into both the SQL Server community – I made a lot of friends that week – and my current career. The connections I made that year planted the seeds for everything that followed.
Kendra: Twitter
I never would have guessed that Twitter would change my life. And I’ve never been passionate or crazy about Twitter – I think it’s useful, but if I go a day or two without using it, I don’t mind.
But Twitter is the way that I started connecting to the larger SQL Server community. I started following people, then following their friends, and started tweeting about SQL Server. I asked and answered questions. Twitter gave me a sense of what other SQL Server developers and DBAs cared about, what problems they faced, and what they were interested in.
I’m a pretty naturally shy person, and I never knew what to talk to people about at conferences and events when I first met them. Twitter has helped me solve that problem: you can sense the mood of a conference while you’re there. You can tweet about it. You can talk about the tweets with people!
To get started with Twitter, check out our free eBook.
Jes: SQL Saturday
Specifically SQL Saturday Chicago 2010 – my first! I’d been attending my “local” user group for a few months, I’d been reading blogs, I was participating in forums, and I was even on Twitter before I went to this event. I knew I loved working with SQL Server, and I knew there were people who could help me when I had questions. But getting to attend an event – for free, no less! – and getting to meet these people was amazing.
I remember attending a session by Brad McGehee, who’d written “Brad’s Sure DBA Checklist”, which I kept in a binder at my desk. I remember getting to watch Chuck Heinzelman and Michael Steineke set up a cluster and test it. I even went to a session by one Brent Ozar about a script called sp_Blitz. At the end of the day, with a full brain, I went to the lobby and saw a group of the presenters sitting around chatting. I bravely approached and joined the circle, which included Brent and Jeremiah. Had I not attended that event, had I not made the connections I did, I doubt I would be where I am in my career today!
Look for a SQL Saturday near you!
Doug: SQL Cruise
I’d recently presented at my first two SQL Saturdays when the opportunity came to go on SQL Cruise. The cruise ran a contest that, for some cosmic reason, I felt was begging me to enter: send in your story of victory with SQL Server and win a free spot on the cruise. I made a video about refactoring bad code with a magical hammer, and won the contest. But that was just the beginning.
During the cruise, I:
- Had a dream about a murder mystery happening on the cruise, which led me to write and present my SQL Server Murder Mystery Hour session.
- Got some phenomenal career advice from people like Brent, Tim Ford, and Buck Woody that is still helping me to this day.
- Became friends with the three people who ended up hiring me three years later for the best job I have ever had.
That’s my career adventure. What will yours be?
Brent: Meeting “Real” DBAs
I was a lead developer at a growing software business and I felt like I had no idea what was going on inside the database. Our company suddenly grew large enough to hire two “real” database people – Charaka and V. They were both incredibly friendly, fun, and helpful.
I was hooked. Who were these cool people? Where did they come from? How did they learn to do this stuff?
They both took time out of their busy jobs to answer whatever questions I had, and they never made me feel stupid. They loved sharing their knowledge. They weren’t paranoid about me taking their jobs or stealing their secrets – and looking back, they probably just wanted me to be a better developer.
They insisted that sure, I had what it took to be a database admin. I went for it, and it’s been a rocket ship ever since.
If you’re going to a SQL conference for the first time, join us to learn how things work. We’ll share what happens when you walk in the door for the first time, where to go after hours, what to bring, and what NOT to bring.
I love free tools. I also love analyzing SQL Server’s wait statistics. But I’m not a fan of Activity Monitor, a free tool in SQL Server Management Studio that helps you look at wait stats.
Activity Monitor just doesn’t give you the whole truth.
I fired up a workload with HammerDB against a test SQL Server 2014 instance. My workload runs a query that’s very intensive against tempdb, and it’s really beating the SQL Server up by querying it continuously on seven threads.
Let’s look at our wait statistics. Here’s what Activity Monitor shows in SQL Server 2014:
Here’s what our free procedure, sp_AskBrent, shows for a 10-second sample while the workload is running. I ran: EXEC sp_AskBrent @ExpertMode = 1, @Seconds = 10;
Activity Monitor groups wait types. It took a whole lot of waits and rolled them up into ‘Buffer Latch’. That isn’t necessarily a bad thing, but I’ve never seen documentation that explains what’s rolled up into which groups. By comparison, sp_AskBrent showed me the specific wait types PAGELATCH_UP, PAGELATCH_SH, and PAGELATCH_EX, with the amounts for each one. sp_WhoIsActive even showed me the type of page experiencing the bottleneck (GAM) and the database and data file (tempdb’s data file 1).
Activity Monitor leaves out wait types. sp_AskBrent showed that in aggregate, my #1 wait was CXPACKET over the sample. That tells me that a lot of my queries are going parallel. That’s not necessarily a bad thing, but it’s important for me to know that about my workload at the time. It helps me know that I want to learn more, and make sure that the right queries are going parallel. (In this case, the query that’s going parallel is pretty cheap, and is just as fast single threaded. My throughput goes up dramatically if I adjust the Cost Threshold setting up a bit.)
Friends don’t let friends use Activity Monitor. It may be convenient, but it doesn’t tell you the truth. Do yourself a favor and use a free tool that gives you wait statistics straight up.
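If you want to see ungrouped wait types yourself, one approach is to sample sys.dm_os_wait_stats directly. This is only a minimal sketch of the sampling idea, not a replacement for sp_AskBrent: it snapshots the cumulative counters, waits ten seconds, and diffs.

```sql
-- Minimal sketch: sample cumulative wait stats, wait, and diff the snapshots.
-- sys.dm_os_wait_stats is cumulative since the last restart (or manual clear),
-- so a single SELECT can't tell you what's happening right now.
IF OBJECT_ID('tempdb..#waits') IS NOT NULL DROP TABLE #waits;

SELECT wait_type, wait_time_ms, waiting_tasks_count
INTO #waits
FROM sys.dm_os_wait_stats;

WAITFOR DELAY '00:00:10';  -- 10-second sample window

SELECT w.wait_type,
       w.wait_time_ms - s.wait_time_ms               AS sample_wait_ms,
       w.waiting_tasks_count - s.waiting_tasks_count AS sample_waits
FROM sys.dm_os_wait_stats AS w
JOIN #waits AS s ON s.wait_type = w.wait_type
WHERE w.wait_time_ms > s.wait_time_ms
ORDER BY sample_wait_ms DESC;
```

In real use you’d also filter out benign background wait types (like the various SLEEP_ and BROKER_ waits), which is part of what the free procedures handle for you.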
Tune in here to watch our webcast video for this week! To join our weekly webcast for live Q&A, make sure to watch the video by 12:00 PM EST on Tuesday. Not only do we answer your questions, we also give away a prize at 12:25 PM EST – don’t miss it!
Is your SQL Server wasting memory? Join Kendra to learn how to identify when memory is going to waste, and track down whether it might be due to licensing, schema problems, fragmentation, or something else. Register now.
Looking for the queries from the video?
There are two short queries that check out the sys.dm_resource_governor_workload_groups and sys.dm_os_nodes DMVs. For those, just read up on the topic linked in Books Online and write a very simple select.
The longer query that looks at memory usage in the buffer pool is a simple adaptation of the Books Online page on sys.dm_os_buffer_descriptors. Check it out and customize it for your own needs!
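As a starting point, here’s the kind of simple adaptation that DMV lends itself to: a rough sketch that totals buffer pool pages per database (each buffer is one 8 KB page).

```sql
-- Rough sketch: how much of the buffer pool each database is using.
-- Each buffer descriptor is one 8 KB page, so COUNT(*) * 8 / 1024 gives MB.
SELECT CASE database_id
           WHEN 32767 THEN 'ResourceDb'
           ELSE DB_NAME(database_id)
       END                  AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;
```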
Brent Loves the Beer Pong
My favorite part is almost a throwaway gag – the database emergency is over, and our hero is moving on to other things:
It’s only on the screen for a second or two, just enough to make the viewer say, “Wait – is that guy doing what I think he’s doing?”
It captures the idea that nobody really wants to work with us. They want the SQL Server to be fast and reliable, and they want it to be invisible. They don’t really want to spend their time working on the database. They want to get back to adding features to their application, managing other servers, or … playing beer pong.
Doug Loves to Keep Servers Out of the E.R.
I get a lot of satisfaction from knowing that our clients not only learn to solve the problems they have, but also how to avoid them in the future. We don’t want our clients to be in the same pickle a few months later and need to call someone again. We value success stories over repeat business.
Jes Loves Teaching Rocket Surgery
This is exactly what we do:
We don’t just find and fix problems, we teach and train people so they are empowered to solve other problems going forward. At heart, I’m a teacher. I don’t hoard my knowledge, or try to be the one person who knows how to fix a problem.
Kendra Loves That Creepy Insect Moment
My favorite moment is the view of the SQL Server right after the witch doctor fixes things up. It’s smiling, but then an insect runs right over its teeth. Things are better, but you’ve got this weird feeling of dread. What’s really going on in there?
Epipheo captured this brilliantly. We never want to be the witch doctor. We built our process so that it doesn’t just make the symptoms of your problem go away, but empowers you to understand the cause and the cure.
Jeremiah Loves Saving Money
It’s true, I really do love helping our clients save money. I don’t want to be the person recommending a new infrastructure, built from the ground up, with fine Corinthian leather server racks. We joke about solving computing problems with money, but the truth is that we try to solve problems through ingenuity before we try to solve them with an AmEx.
Along the way to solving problems with our smarts and yours, we focus on just the important stuff. There’s no 4,300 page report. No incomprehensible advice. Just solutions to your problems.
So what was your favorite part?
SQL Server Agent is one of my favorite tools: it lets you schedule jobs automatically, alerts you when things go badly, and captures information into database tables.
One common request is to execute a query and send the results via email on a regular basis. This task is a great match for SQL Server Agent and sp_send_dbmail. By setting up a job step that uses sp_send_dbmail with the @query parameter, and building on it with other available parameters, you can accomplish this easily.
Let’s take a look at a job I’ve created. The first step is to gather metrics from the sys.dm_os_sys_memory DMV. I insert the data into a table in my master database.
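That first step might look something like this sketch. The MemoryDMVHistory table name comes from the query later in this post; its column list is an assumption based on what that query selects.

```sql
-- One-time setup: history table in master, as described above.
-- Columns are assumed from the SELECT used later in the email step.
CREATE TABLE master.dbo.MemoryDMVHistory (
    total_physical_memory_kb     BIGINT,
    available_physical_memory_kb BIGINT,
    system_memory_state_desc     NVARCHAR(256),
    collection_date_time         DATETIME NOT NULL DEFAULT GETDATE()
);
GO

-- Job step 1: capture the current values from the DMV.
INSERT INTO master.dbo.MemoryDMVHistory
    (total_physical_memory_kb, available_physical_memory_kb, system_memory_state_desc)
SELECT total_physical_memory_kb,
       available_physical_memory_kb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;
```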
The second step in this job uses sp_send_dbmail to send the results of the query from today to a database mail profile account.
Let’s look at the command more closely.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'Main DB Mail profile',
    @recipients = 'email@example.com',
    @subject = 'Memory Values',
    @query = N'SELECT total_physical_memory_kb, available_physical_memory_kb,
            system_memory_state_desc, collection_date_time
        FROM MemoryDMVHistory
        WHERE CAST(collection_date_time AS DATE) = CAST(GETDATE() AS DATE)
        ORDER BY collection_date_time DESC;',
    @attach_query_result_as_file = 1,
    @query_attachment_filename = 'Memory Values.txt';
- @profile_name is the Database Mail profile you want to use to send the email. You must have at least one profile already set up.
- @recipients is the list of email addresses to send the email to. If you have more than one, separate them with a semi-colon.
- @subject is the email subject.
- @query is the query you wish to run. MSDN says it “can contain any valid Transact-SQL statements”, but I haven’t tested the limits of this statement yet.
- @attach_query_result_as_file has two options: 0 or 1. When set to 0, the query results are included in the body of the email. When set to 1, the results are attached as a file.
- @query_attachment_filename is optional, and is the name and file extension of the attachment. I recommend setting this, with a .txt or .csv extension.
There are other options you can add as well, such as @execute_query_database, @query_result_header, @query_result_width, and more.
When I execute the job, I receive this email:
This isn’t well-formatted, so I may want to modify the results with @query_result_header and @query_result_width.
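For example, here’s a sketch of the same call tweaked with those formatting parameters. The specific values are just illustrations, not recommendations.

```sql
-- Same email, tweaked for readability: results inline in the body,
-- no header row, and a wider result line so columns don't wrap at
-- the default width of 256 characters.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'Main DB Mail profile',
    @recipients = 'email@example.com',
    @subject = 'Memory Values',
    @query = N'SELECT total_physical_memory_kb, available_physical_memory_kb,
            system_memory_state_desc, collection_date_time
        FROM MemoryDMVHistory
        WHERE CAST(collection_date_time AS DATE) = CAST(GETDATE() AS DATE)
        ORDER BY collection_date_time DESC;',
    @attach_query_result_as_file = 0,  -- 0 = inline in the email body
    @query_result_header = 0,          -- suppress the column-name row
    @query_result_width = 1000;        -- default is 256
```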
Both SQL Server Agent jobs and sp_send_dbmail give you a lot of flexibility for creating and emailing query results. Learn to use them effectively!
Tune in here to watch our webcast video for this week! To join our weekly webcast for live Q&A, make sure to watch the video by 12:00 PM EST on Tuesday, September 23! Not only do we answer your questions, we also give away a prize at 12:25 PM EST – don’t miss it!
It’s hard to keep up with what your Report Server is doing, especially if your only tool is Report Manager. Now there’s an easier way, using the newest member of the sp_Blitz family: sp_BlitzRS! In this webcast, you’ll learn how sp_BlitzRS can help you stay informed of how well your SSRS installation is running, who gets what subscriptions, which reports would benefit from preprocessing, and more!
Have questions? Feel free to leave a comment so we can discuss it on Tuesday!
Tune in here to watch our webcast video for this week! To join our weekly webcast for live Q&A, make sure to watch the video by 12:00 PM EST on Tuesday, September 16! Not only do we answer your questions, we also give away a prize at 12:25 PM EST – don’t miss it!
Curious how you can give a compelling technical presentation? Join Kendra to learn five important tips on how to select the right topic for your talk, write an effective abstract, construct a coherent presentation, and make it to the podium to give your first presentation.
Have questions? Feel free to leave a comment so we can discuss it on Tuesday!
You store data in SQL Server, but you don’t have enough work to keep a full-time DBA busy.
In just half an hour, you’ll learn the basics of performance troubleshooting and index tuning from me – a recovering developer myself. I’ll show you the basic care and feeding of a Microsoft SQL Server instance and give you scripts to keep you out of trouble:
In this session that I gave at the 24 Hours of PASS 2014, I cover these steps:
1. Start every new table with a clustered index.
A clustered index tells SQL Server, “Here’s how I want you to sort the data.” Without that definition, SQL Server just sloppily throws your data anywhere that it can find space. While that might be awesome for insert speeds, it’s not so good as a universal data structure when we need to make performance tuning as easy as possible.
Generally, your clustered index needs to be on fields that are narrow, unique, static, and ever-increasing (an identity column is the classic example).
As you learn more about indexes, it’s tempting to micro-optimize them by saying, “Well, I think my end users are going to query this data on a combination of date and warehouse number.” Don’t over-think that before the users start using the app. Index designs are easy to change later.
There are situations where heaps – tables without clustered indexes – can perform faster. Thing is, they have some serious gotchas as Kendra Little explains in this 30-minute video. Heaps also miss out on the magic of SQL Server’s automatically suggested indexes, and that’s especially important for us in the next step.
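As a sketch, starting a new table with a clustered index can be as simple as declaring a clustered primary key. The table and column names here are made up for illustration.

```sql
-- Hypothetical example: an identity primary key gives the table a
-- narrow, unique, ever-increasing clustered index from day one.
CREATE TABLE dbo.Orders (
    OrderId     INT IDENTITY(1,1) NOT NULL,
    CustomerId  INT               NOT NULL,
    OrderDate   DATETIME2         NOT NULL,
    TotalAmount DECIMAL(18,2)     NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
);
```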
2. Check your indexes weekly with sp_BlitzIndex®.
As it runs your beautifully crafted queries, SQL Server is constantly gathering index diagnostics. It keeps track of what indexes would have been the most helpful, how many times they would have been used, and how to create those indexes. You can query those diagnostic tables yourself, but frankly, it’s painful.
That’s where the free sp_BlitzIndex comes in. Download it, install it on your production server, and then run it in your database. It gives a funny sanity check on which indexes you could probably drop, and which ones you should add.
Run it, and then cover your eyes. It’s going to produce all kinds of heinous warnings about the awful mess that the last guy stuck you with. I only want you to focus on two warnings:
Warning: Index Hoarding – these are nonclustered indexes that are in place now, but nobody’s using them. SQL Server has decided these indexes just aren’t the most efficient way to get your data. Disable these rather than dropping them, because that way if they’re needed again, you can just enable them again as explained in that link.
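Disabling and re-enabling look like this sketch (the index and table names are placeholders). Note that re-enabling is a REBUILD, which re-creates the index data from scratch.

```sql
-- Disable an unused nonclustered index: the definition stays in metadata,
-- but the index data is dropped and SQL Server stops maintaining it.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders DISABLE;

-- If it turns out you needed it after all, bring it back with a rebuild.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;
```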
(Now would be a good time to mention that if you’ve been bossing SQL Server around by specifying index hints in your queries, you’re gonna have a bad time. Your queries will fail.)
Warning: Missing Indexes – SQL Server wants a copy of your table stored in a different order to make queries go faster. Careful adding these willy-nilly – aim for 5 or fewer nonclustered indexes per table, and each index should have 5 or fewer fields in it. Avoid indexing XML, VARCHAR(MAX), NVARCHAR(MAX), or other big data types because the index will be huuuuge.
If you’re on SQL Server Standard Edition, creating indexes can lock your table while you work, so do the actual adding in development first. Time how long it takes, and depending on your dev hardware’s speed and database size, you’ll have a rough idea of what it will look like in production. Then do the doin’ during a maintenance window or after hours. If you’re lucky enough to use SQL Server Enterprise Edition, you can use the WITH (ONLINE = ON) parameter while creating indexes to have a lower impact on your end users.
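On Enterprise Edition, that looks something like this sketch (index, table, and column names are placeholders):

```sql
-- Enterprise Edition only: build the index while readers and writers
-- keep using the table, at the cost of a longer build.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    INCLUDE (TotalAmount)
    WITH (ONLINE = ON);
```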
And remember that first step where we said to start with a clustered index? If you’re thinking SQL Server will recommend the right clustered index for you, think again.
Repeat this process every Friday. See, SQL Server empties out this diagnostic data whenever it restarts, then constantly piles it back up. On Friday, even if your server was restarted over the previous weekend, you’ve got good diagnostic data to check because users have been hitting the app all week. (This is why you can’t run it in your dev/staging environment and get the same good recommendations.) Within a month or so, you won’t believe the difference in performance.
3. Build the Suckerboard weekly with sp_BlitzCache®.
As it’s executing queries, SQL Server tracks which queries get run most often, and which ones use the most server resources. It’s like capturing a trace of your server, but even better because SQL Server is already doing this for you. You can query the dynamic management views to pull the data out, but ain’t nobody got time for that.
That’s where the free sp_BlitzCache comes in. Download it, install it on your production SQL Server, and then run it with these parameters:
EXEC sp_BlitzCache @top = 10, @sort_order = 'duration'
This will give you the top 10 queries that have run the longest in total. For example, if a query runs for 10 seconds and has been called 1,000 times, it has 10,000 total seconds of duration, so it will rank higher than a query that ran only once for 500 seconds.
This is the opposite of the leaderboard – this is the Suckerboard. It’s the Hall of Shame, the queries that we’re not too proud of. The good news is that it’s usually really easy to fix these once you know what they are. On the far right of sp_BlitzCache’s output, you can click on the query plan to see the query’s execution plan, and you’ll often find things like missing index warnings or zombies.
Repeat the process with a few other parameters to see the highest CPU users, the ones that read the most data, and the ones that ran most frequently:
EXEC sp_BlitzCache @top = 10, @sort_order = 'CPU'
EXEC sp_BlitzCache @top = 10, @sort_order = 'reads'
EXEC sp_BlitzCache @top = 10, @sort_order = 'execution'
Repeat this every Friday and email the list of queries to your team. This has an amazing effect – people see their queries on the Suckerboard, and they immediately wanna work on those queries to get them off the Hall of Shame. It becomes a competition to keep off the Suckerboard.
After a month or so, you’ll be very comfortable with the list of queries. You’ll know them right away at a glance, and you’ll know why they show up – like what kind of work they’re doing. This makes you extremely well-equipped to deal with the next step.
4. When slowness strikes, use sp_WhoIsActive and sp_AskBrent®.
When someone comes running in screaming that the SQL Server is slow, run these two stored procedures:
sp_WhoIsActive lists all of the queries that are running right now, ordered by longest-running to newest.
sp_AskBrent checks a bunch of common potential issues on your server and warns you if a data file is growing, a transaction is stuck rolling back, a backup is running, and so on.
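The emergency drill can be sketched as just two calls: sp_WhoIsActive with its defaults, and sp_AskBrent with a short sample window (the @Seconds parameter appears earlier in this post).

```sql
-- Who's running what, right now, longest-running first.
EXEC sp_WhoIsActive;

-- Sample server-level activity for 10 seconds and warn about
-- common problems (file growth, rollbacks, backups, and so on).
EXEC sp_AskBrent @Seconds = 10;
```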
5. When you need to learn more, here’s what to do.
- Explaining Clustered vs Nonclustered Indexes – Jes Schultz Borland simplifies these in a blog post with examples.
- Filtered Index Limitations – these are indexes with a WHERE clause, thereby taking up less space, but…yeah.
- How to Decide if You Should Use Table Partitioning – this advanced indexing technique has a tempting call as you start to scale. Kendra Little explains it.
- More index resources
Plan cache and execution plan resources:
- How to Use sp_BlitzCache® – Jeremiah explains how to query your plan cache and find the most resource-intensive queries.
- How to Get Your Query’s Execution Plan – a 15-minute video that explains a few different methods.
- Parameter Sniffing – if your query suddenly runs slow out of nowhere without any changes, this might be your problem.
- More execution plan resources
And if you’re coming to PASS, we’re holding a performance tuning class on Monday & Tuesday called Make SQL Server Apps Go Faster. If you learned something in this article & video, then you’ll learn a LOT in the class. Read more about it now.
DBAs reading this: stop freaking out.
I know, I’m simplifying a lot of stuff in here. When you read the sentence, “Index designs are easy to change later,” you instantly pop a vein in your forehead because you remember that time you had to change a clustered index on a 1TB data warehouse.
Take a step back and put yourself in the frame of mind of a developer who’s alone in a dark room with an application, a SQL Server, no DBA, no budget, and no time to get everything perfect. I’m trying to give them the tools to get good-enough performance while spending less than 2 hours a week worrying about their database.
Sure, there’s all kinds of interesting scenarios and tricks you’d like me to explain on this page – but remember, the time they spend reading this page and learning techniques counts against their 2 hours of spare time per week. Focus on the things that will truly make a difference in performance, and ease off the academic “well actually” stuff.
Thanks for tuning in to our webcast again this week! To join our weekly webcast for live Q&A, make sure to watch the video by 12:00 PM EST on Tuesday, September 9! Not only do we answer your questions, we also give away a prize at 12:25 PM – don’t miss it!
The SQL Server community is unlike any other technical community. We have so many helpful people and our pick of resources. When you want to contribute, you have many options – from speaking to writing, attending conferences to running user groups, answering forum questions to writing articles. Which should you invest your time in to help your career and personal growth? Let me share my experiences with all of these so you can make the best choice.
Have questions? Feel free to leave a comment so we can discuss it on Tuesday!