Erika loves having fresh flowers around the house. Every Saturday morning, I pick up a bouquet at a farmer’s market or grocery store and put it in a vase for her. I’m slowly upping my game by learning more and more about the art of arranging flowers.
When I say flowers, I bet you think about the English Garden style: a big, complex vase with all kinds of flowers crammed into every nook and cranny. It’s an explosion of color and life.
That’s way too stuffy for us. We’re into minimalism, clean lines, and letting materials speak for themselves. I like plucking one or two of the more beautiful or unusual flowers and putting them in their own vase. This leans toward the Ikebana style of Japanese flower arrangement, specifically the Nageire type. (I don’t even want to think about how badly I’m going to mispronounce these if I ever have to say them out loud.)
Writing database code is like arranging flowers.
If you show someone your bouquet, they might not like it. They might give you a million reasons why it's not right or why another way is better. That's not the point – does it produce the results you want?
If your goal is to get to market quickly and cheaply, just buy a premade bouquet from the grocery store, throw the flowers in the vase and be done with it. Use LINQ, Entity Framework, NHibernate, or whatever code tools make your job easy.
If you translate your app code into SQL code, you're building an English Garden. You start by declaring variables at the center, then populate those variables by checking configuration tables, then spin out to more and more tables, getting your results in loops and setting values one at a time. This is exactly how developers have always been taught to arrange their flowers, and it works just fine. Once you're used to doing it, you can bang that code out quickly, and the results are attractive.
But if you need it to be beautifully fast, you need Ikebana. You need very clean, very minimalist code that gets the job done in as few statements as possible. In a database environment, this means set-based code that avoids cursors and loops.
While clean, Ikebana-style database code is simple to behold, it's deceptively complex to build. The first step is moving as much logic as possible from the database server to the application server – starting with the ORDER BY clause. If you're not fetching just the TOP X rows, then do all sorting in the application server. Removing just that one clause from a query will often cut its cost dramatically. Your development platform (.NET, Java, Cobol, whatever you kids are using these days) is really good at scaling out CPU- and memory-intensive work like sorting, and you're already really good at splitting your work across multiple application servers. Leverage that capability.
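As a toy illustration of moving the sort to the app tier (the table data here is invented, in Python rather than whatever your app uses):

```python
# Rows fetched from the database WITHOUT an ORDER BY clause.
# Hypothetical data - pretend this came from a query result set.
rows = [
    {"order_id": 3, "total": 19.99},
    {"order_id": 1, "total": 250.00},
    {"order_id": 2, "total": 75.50},
]

# The sort the database would have done now runs on the app server,
# where CPU- and memory-heavy work scales out across many machines.
rows_sorted = sorted(rows, key=lambda r: r["total"], reverse=True)

print([r["order_id"] for r in rows_sorted])   # [1, 2, 3]
```

The caveat above still applies: if the query only fetches the TOP X rows, the sort has to stay in the database, because the app tier can't pick the top rows without seeing them all.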
Think of it like pruning your code – remove all the things that database servers don’t do beautifully, and what you’re left with will be gorgeous.
Are you frustrated by third party applications that you can’t change, but you have to support? Tired of beating your head against the wall when your users complain about things you can’t fix? In this 30-minute session, Brent Ozar will show you his favorite tricks to get the most performance without losing support. He’ll show you how to interact with vendors and get what you want – without getting heartburn:
Like that video? We’ve got half a dozen more scheduled for upcoming Tuesday lunches. Click the boxes you want and sign up for free.
How would you like to go to a SQL Server conference in Las Vegas where the sessions are taught by Brent Ozar Unlimited, SQLskills, SQLServerCentral, and SQL Sentry?
Yep. Me, Jeremiah, Kendra, Kimberly Tripp, Paul Randal, Jonathan Kehayias, Erin Stellato, Steve Jones, and Aaron Bertrand. Between us, that’s 3 MCMs, 2 MCM instructors, 7 MVPs, and 2 MVP Regional Directors.
If you’re serious about learning SQL Server, this should be the very first conference on your fall priority list. Check out some of these sessions:
- Troubleshooting SQL Servers in VMware and SANs (me)
- Understanding Locking, Blocking, and Isolation Levels (Kimberly)
- Understanding Logging and Recovery (Paul)
- X-Ray Glasses for Your Indexes (Kendra)
- Branding Yourself for a Dream Job (Steve)
- Deadlocking for Mere Mortals (Jonathan)
- Hadoop: The Great and Powerful (Jeremiah)
- Making the Leap from Profiler to Extended Events (Erin)
How much would you pay for three days of awesome learning at a conference like this with top-notch speakers, all killer no filler?
And hey, it’s Vegas, so it’s a great team building city, like when Jeremiah and I rented cars last time and, uh, built teams. Yeah.
But wait – there’s more! Check out the pre-con workshops:
- Accidental DBA Starter Kit (me, Jeremiah, Kendra – Pre-Con Sunday) - You’re responsible for managing SQL Servers, but you’ve never had formal training. You’re not entirely sure what’s going on inside this black box, and you need a fast education on how SQL Server works. In one day, you’ll learn how to make your SQL Server faster and more reliable. You’ll leave armed with free scripts to help you find health problems and bottlenecks, a digital set of posters that explains how SQL Server works, and an e-book that will keep your lessons moving forward over the next 6-12 months.
- Queries Gone Wild: Real-World Solutions (Kimberly – Pre-Con Sunday) - Have you ever wondered why SQL Server did what it did to process your query? Have you wondered if it could have done better? And, if so, how? Transact-SQL was designed to be a declarative language that details what data you need, but without any information about how SQL Server should go about getting it. Join order, predicate analysis – how does SQL Server decide the order or when to evaluate a predicate? Most of the time SQL Server gets the data quickly but sometimes what SQL Server does just doesn’t seem to make sense. Inevitably you’ll encounter certain workloads and queries that just aren’t performing as well as you expect. There are numerous reasons why query performance can suffer and in this full-day workshop Kimberly will cover a number of critical areas while showing you how to analyze a variety of query plans throughout the day.
- Scale Up or Scale Out: When NOLOCK Isn’t Enough (me, Jeremiah, Kendra – Post-Con Thursday) - Partitioning, replication, caching, sharding, AlwaysOn Availability Groups, Enterprise Edition, bigger boxes, or good old NOLOCK? You need to handle more data and deliver faster queries, but the options are confusing. In this full-day workshop, Brent, Kendra, and Jeremiah will share the techniques they use to speed up SQL Server environments both by scaling up and scaling out. We’ll share what features might save you hundreds of development hours, what features have been a struggle to implement, and how you can tell the difference. This workshop is for developers and DBAs who need to plan long term changes to their environment.
- Practical Disaster Recovery Techniques (Paul – Post-Con Thursday) - Disasters happen – plain and simple. When disaster strikes a database you’re responsible for, can you recover within the down-time and/or data-loss limits your company requires? What if your plan doesn’t work? This workshop isn’t about how to achieve high-availability, it’s about how to prevent or overcome the obstacles you’re likely to hit when trying to recover from a disaster – such as not having the right backups, not having valid backups, or not having any backups! In this demo-heavy workshop, you’ll learn a ton of practical tips, tricks, and techniques learned from 15 years of experience helping customers plan for and recover from disasters, including less frequently seen problems and more advanced techniques. All attendees will also receive a set of lab scenarios for further study and practice after the class with assistance from Paul.
Now how much would you pay for all this? Three thousand? Four thousand? Ten thousand? BUT WAIT, THERE’S MORE!
For $1,894 before June 24th, you can get the Show Package: the conference, PLUS a pre-con or post-con of your choice, PLUS your choice of a Surface RT, Xbox, or a $300 gift card.
For $2,294, you get all that plus ANOTHER pre-con or post-con – five days of nonstop learning from the absolute best in the business.
No? You want more? Okay, you drive a hard bargain, buddy. Use discount code OZAR and you get another $100 off. Register now. Operators are standing by.
Everyone wants to make sure they’re getting the best performance out of their solid state storage. If you’re like a lot of people, you want to make sure you’re getting what you paid for, but how do you know for sure that the drive is performing well?
Watch that Average
The first way to monitor performance is to use some perfmon counters. Although there are a lot of perfmon counters that seem helpful, we're only going to look at two:
- PhysicalDisk\Avg. Disk Sec/Read
- PhysicalDisk\Avg. Disk Sec/Write
As soon as you get a solid state drive in your server, start monitoring these numbers. You'll be able to trend performance and watch for degradation. When the SSDs fall outside your performance guidelines (and they probably will), you can pull them out of the storage one at a time and reformat them before adding them back into the RAID array. Note that this isn't strictly necessary.
Although it's risky, this approach can work well for detecting performance problems while they're happening. The downside is that we don't get any warning that the drives are about to fail – we can only observe the side effects of writing to the SSDs. As SSD health gets worse, this average will trend upward. Of course, you could also be doing something incredibly dumb with your hardware, so we can't rely on average performance alone as an indicator of impending hardware failure.
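As an illustration of the trending idea, here's a small Python sketch that flags the point where a moving average of Avg. Disk sec/Read crosses a threshold. The window size, threshold, and sample numbers are all invented for the example – they're not official guidance:

```python
from collections import deque

def latency_alarm(samples_ms, window=5, threshold_ms=20.0):
    """Return the first sample index where the moving average of
    read latency (in ms) crosses the threshold, or None if it never does.
    Window and threshold are illustrative, not vendor recommendations."""
    recent = deque(maxlen=window)
    for i, sample in enumerate(samples_ms):
        recent.append(sample)
        if len(recent) == window and sum(recent) / window > threshold_ms:
            return i
    return None

# Healthy SSD reads hover low; wear (or dumb hardware) pushes the average up.
samples = [2, 3, 2, 4, 3, 5, 18, 25, 30, 40, 35]
print(latency_alarm(samples))   # 9
```

In real life you'd feed this from logged perfmon data rather than a hardcoded list, and the single spike at index 6 correctly doesn't trip the alarm – only the sustained climb does.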
Which SMART Attributes Work for SSDs?
What if we could watch SSD wear in real time? It turns out that we’ve been able to do this for a while. Many vendors offer SMART status codes to return detailed information about the status of the drive. Rotational drives can tell you how hot the drive is, provide bad sector counts, and a host of other information about drive health.
SSDs are opaque, right? Think again.
SSD vendors started putting information in SMART counters to give users a better idea of SSD performance, wear, and overall health. Although the SMART counters vary from vendor to vendor (based on the disk controller), Intel publishes documentation on the counters available with its SSDs – check out the "SMART Attributes" section of the Intel 910 documentation. These are pretty esoteric documents; you wouldn't want to have to parse that information yourself. Thankfully, there are easier ways to get to this information; we'll get to that in a minute.
Which SMART Attributes Should I Watch?
There are a few things to watch in the SMART status of your SSDs:
- Write Amplification
- Media Wear-out Indicator
- Available Reserved Space
Write Amplification, roughly, is the ratio of writes performed by the SSD to the writes issued by your OS. A lower score is better – it can even drop below 1 when the SSD is able to compress your data. Although Write Amplification doesn't help you monitor drive health directly, it provides a view of how your usage pattern will change the SSD's lifespan.
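As a back-of-the-napkin illustration of that ratio (the GB figures are invented):

```python
def write_amplification(nand_writes_gb, host_writes_gb):
    """Write amplification = data physically written to flash
    divided by data the OS asked to write. Lower is better."""
    return nand_writes_gb / host_writes_gb

# A drive that rewrote 150 GB of flash to satisfy 100 GB of host writes:
print(write_amplification(150, 100))   # 1.5 - garbage collection overhead

# A compressing controller can physically write less than it was sent:
print(write_amplification(80, 100))    # 0.8 - below 1 thanks to compression
```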
The Media Wear-Out Indicator gives us a scale from 100 to 0 of the remaining flash memory life. This starts at 100 and drifts toward 0. It’s important to note that your drive will keep functioning after Media Wear-Out Indicator reports 0. This is, however, a good value to watch.
Available Reserved Space measures the original spare capacity in the drive. SSD vendors provide additional storage capacity to make sure wear leveling and garbage collection can happen appropriately. Like Media Wear-Out Indicator, this starts at 100 and will drift toward 0 over time.
It’s worth noting that each drive can supply additional information. The Intel 910 also monitors battery backup failure and provides two reserved space monitors – one at 10% reserved space available and a second at 1% reserved space available. If you’re going to monitor the SMART attributes of your SSDs, it’s worth doing a quick search to find out what your SSD controllers support.
How do I Watch the SMART Attributes of my SSD?
This is where things could get ugly. Thankfully, we’ve got smartmontools. There are two pieces of smartmontools and we’re only interested in one:
smartctl, a utility to view the SMART attributes of a drive. On my (OS X) laptop, I can run smartctl -a disk1 to view the SMART attributes of the drive. On Windows, you can use the drive letter for a basic disk, like this:
smartctl -a X:
Things get trickier, though, for certain PCI-Express SSDs. Many of these drives, the Intel 910 included, present one physical disk per controller on the PCI-Express card. In the case of the Intel 910, there are four. In these scenarios you’ll need to look at each controller’s storage individually. Even if you have configured a larger storage volume using Windows RAID, you can still read the SMART attributes by looking at the physical devices underneath the logical disk.
The first step is to get a list of physical devices using WMI:
wmic diskdrive list brief
The physical device name will be in the DeviceID column. Once you have the physical device name, you can view the SMART attributes with smartctl like this:
smartctl -a /dev/pd0 -q noserial
Run against my virtual machine, it looks like this:
C:\Windows\system32> smartctl -a /dev/pd0 -q noserial
smartctl 6.1 2013-03-16 r3800 [x86_64-w64-mingw32-win8] (sf-6.1-1)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     Windows 8-0 SSD
Serial Number:    0RETRD4FE6AMF823QE7R
Firmware Version: F.2FKG1C
User Capacity:    68,719,476,736 bytes [68.7 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ATA8-ACS, ATA/ATAPI-5 T13/1321D revision 1
SATA Version is:  SATA 2.6, 3.0 Gb/s
Local Time is:    Sat Apr 27 08:35:03 2013 PDT
SMART support is: Unavailable - device lacks SMART capability.
Unsurprisingly, my virtual drive doesn’t display much information. But a real drive looks something like this:
Holy cow, that's a lot of information. The Intel 910 clearly has a lot going on. There are two important attributes to watch, simply because they can mean the difference between a successful warranty claim and an unsuccessful one:
- SS Media used endurance indicator
- Current Drive Temperature
The Intel 910 actually provides more information via SMART, but to get to it, we have to use Intel’s command line tools. By using the included isdct.exe, we can get some very helpful information about battery backup failure (yup, your SSD is protected by a battery), reserve space in the SSD, and the drive wear indicator. Battery backup failure is a simple boolean value – 0 for working and 1 for failure. The other numbers are stored internally as a hexadecimal number, but the isdct.exe program translates them from hex to decimal. These numbers start at zero and work toward 100.
If you're enterprising, you can take a look at the vendor specification and figure out how to read this data in the SMART payload. Or, if you're truly lazy, you can parse the text coming out of smartctl or isdct (or the appropriate vendor tool) and use that to fuel your reports. Some monitoring packages even include all SMART counters by default.
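As a sketch of that lazy-parsing route, here's some hedged Python that enumerates device names from wmic-style output and scrapes attribute values from an ATA-style smartctl fragment. Both sample texts below are fabricated for the example; real column layouts vary by vendor, drive, and tool version:

```python
import re

# Fabricated "wmic diskdrive list brief" output:
WMIC_OUTPUT = """\
Caption    DeviceID             Model             Size
INTEL SSD  \\\\.\\PHYSICALDRIVE0  INTEL SSDSC2BB80  800164896768
INTEL SSD  \\\\.\\PHYSICALDRIVE1  INTEL SSDSC2BB80  800164896768
"""

def smartctl_commands(wmic_text):
    """Translate each PHYSICALDRIVEn into smartctl's /dev/pdn device name."""
    return [f"smartctl -a /dev/pd{n} -q noserial"
            for n in re.findall(r"PHYSICALDRIVE(\d+)", wmic_text)]

# Fabricated fragment in the shape of ATA-style smartctl attribute output:
SMART_OUTPUT = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH RAW_VALUE
233 Media_Wearout_Indicator 0x0032   097   097   000    0
232 Available_Reservd_Space 0x0033   100   100   010    0
"""

def smart_values(text):
    """Map attribute name -> normalized VALUE (starts at 100, drifts to 0)."""
    values = {}
    for line in text.splitlines():
        m = re.match(r"\s*\d+\s+(\S+)\s+0x[0-9a-fA-F]+\s+(\d+)", line)
        if m:
            values[m.group(1)] = int(m.group(2))
    return values

print(smartctl_commands(WMIC_OUTPUT))
print(smart_values(SMART_OUTPUT)["Media_Wearout_Indicator"])   # 97
```

Point this at the real tool output (via subprocess or a scheduled task dumping to a file) and you've got the raw material for a trending report.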
The Bad News
The bad news is that if you're using a hardware RAID controller, you may not be able to see any of the SMART attributes of your SSDs. If you can't get accurate readings from the drives, you'll have to resort to using the Performance Monitor counters I mentioned at the beginning of the article. RAID controllers that smartmontools supports are listed in the smartctl documentation.
Special thanks go out to a helpful friend who let us abuse their QA Intel 910 cards for a little while in order to get these screenshots.
The PASS Virtualization Virtual Chapter hosted a Q&A session with me last week. We talked about storage configuration options like VMDK/VHD vs RDM, how licensing works, what's the biggest SQL Server I'm comfortable virtualizing, and much more:
For more tips, check out our virtualization resources page.
If you write code that accesses a database one terabyte or larger, you need to know that things are different around here. When you hit the very large database (VLDB) territory, you need to pay particular attention to statistics, TempDB, and staging tables. Microsoft Certified Master Brent Ozar will share his favorite lessons for developers who work with either OLTP or data warehouses in this 25-minute webcast:
The links discussed in the webcast include:
SQL Server 2012 Query Performance Tuning by Grant Fritchey – we talked about the statistics analysis chapter as an example of what matters much more in the terabyte territory.
SQL Server 2012 Internals and Troubleshooting – the storage chapter helps you pick the right storage for your TempDB, which matters much more in VLDBs.
Download SQL Server 2012 Management Studio – free download for all Microsoft SQL Server users. I’d recommend using this even if you’re not using SQL Server 2012 on the back end – it’s a better development environment. Bonus – check out Jes’s SSMS tips and tricks video.
Our free server troubleshooting tool, sp_Blitz™, just learned some new tricks. If you call it like this:
EXEC sp_Blitz
    @OutputDatabaseName = 'DBAtools',
    @OutputSchemaName = 'dbo',
    @OutputTableName = 'BlitzResults'
It will create the table DBAtools.dbo.BlitzResults (if it doesn’t already exist) and insert the results. The table includes columns for the server name and the check date, so you can query this table to see what’s changed since the last time you ran sp_Blitz™. Plus, there’s more:
Added new checks for disabled CPU schedulers (due to licensing or affinity masking), extended stored procedures, disabled remote access to the DAC, databases in unusual states like suspect or emergency, logins with CONTROL SERVER permission.
Easier readability – whenever anyone's asked how to add checks, I've always just said, "Read the code." Last week, I read the code with fresh eyes and realized it could use some cleanup and comments. I used Red Gate SQL Prompt to clean up the formatting, which is better but could still use some work. The code is nearing 4,000 lines, so I moved the old changes out of the proc and into a new online changelog. I put in some time to explain what the proc is doing, and that should help people reading it for the first time. I've also started an sp_Blitz™ Documentation page where I'll be explaining some of the more advanced uses.
New downloadable sp_Blitz™ app for Windows - over and over, we heard from people that they wanted a quick way to generate a print-friendly copy of the sp_Blitz™ results. Jeremiah worked with a developer and built our first app:
The app creates sp_Blitz™ in TempDB on the server of your choice, executes it, shows you the results, and can output the results to PDF – complete with links to the explanations for each result.
We've got lots of ideas for the app, but we wanted to get it out there first and let you help drive it. If there's something you'd like to see added or changed, leave a comment here or contact us. The whole point of this thing is to make your life easier. I wish this tool had been available back when I was a production DBA struggling to figure out what was going on in my environment, so every little improvement helps. Hopefully this can save you from the stress I had back then. Get it from the download page and enjoy!
Key length matters in SQL Server indexes.
It’s a best practice to keep your index keys as narrow as possible, and SQL Server enforces a maximum key length of 900 bytes on most “normal” clustered and nonclustered indexes.
But what happens if you want to optimize the lookup of a wide column? You're not necessarily out of luck – you may just have to get a bit creative.
What If I Need to do an Equality Search on a Wide Column?
Let's say I have a simple table. I have a narrow key on my clustered index, and then I have a pretty wide variable-length column. I need the wide column to be Unicode, which makes it even wider, since Unicode data types take up more room.
Here’s our sample table with a few rows (just pretend it has a lot more):
CREATE TABLE dbo.LookupValues (
    i int identity,
    bigval nvarchar(2000) default (REPLICATE('d',700)),
    constraint pk_LookupValues_i primary key (i)
);
GO

--Insert rows with the default values
begin tran
declare @i smallint = 0;
while @i < 10000
begin
    insert dbo.LookupValues default values;
    set @i=@i+1;
end
commit
GO

--Insert a few smaller values
insert dbo.LookupValues (bigval) VALUES ('big');
insert dbo.LookupValues (bigval) VALUES ('bunny');
insert dbo.LookupValues (bigval) VALUES ('bunny bunny');
GO
Let’s say we write to this table rarely, but query it often. When this query runs, I want to make it as fast as possible:
SELECT i from dbo.LookupValues where bigval = N'bunny';
Right now, this query has to scan every row in the clustered index (the whole table) to find instances where bigval = N'bunny'. That's not ideal, and as the table grows it'll get worse and worse, burning more IO and CPU and taking longer over time.
There’s usually an easy way to make a query like this fast: just create a nonclustered index on the bigval column. But when I try, it doesn’t work because of restrictions on key size.
--Make my query faster!
CREATE NONCLUSTERED INDEX ix_LookupValues_bigval on dbo.LookupValues (bigval);
GO
Warning! The maximum key length is 900 bytes. The index 'ix_LookupValues_bigval' has maximum length of 4000 bytes. For some combination of large values, the insert/update operation will fail.
Msg 1946, Level 16, State 3, Line 1
Operation failed. The index entry of length 1400 bytes for the index 'ix_LookupValues_bigval' exceeds the maximum length of 900 bytes.
The statement has been terminated.
Terminated. Yeah. I can’t just index this to make my query fast.
Options for Indexing Wide Keys
So what’s a performance tuner to do?
My first thought when I hit this problem was that I might have to use a fulltext index. A fulltext index can work here – it lets you index large columns – but it would be kind of a bummer to have to do it. Fulltext indexes have extra overhead and are really designed for different things than a simple equality search, so it would be like using a jackhammer because you can't find a mallet.
My partner Jeremiah Peschka came up with a quick and clever solution using an indexed computed column. You can work all sorts of cool magic with computed columns in SQL Server – the trick is just to remember them!
Here’s how it works: you add a computed column to the table that’s the hash of the large value. You then index the computed column and modify your query to take advantage of it.
In this example we use SHA2_512 for the hashing algorithm. This gives an output of 64 bytes – well within our limits for index key sizes.
ALTER TABLE dbo.LookupValues
ADD bigvalhash AS HASHBYTES('SHA2_512', bigval) PERSISTED;
GO

CREATE NONCLUSTERED INDEX ix_LookupValues_bigvalhash
    on dbo.LookupValues (bigvalhash) INCLUDE (bigval);
GO
Now, to get the query to work, we need to change it a bit:
SELECT i
from dbo.LookupValues
where bigvalhash = HASHBYTES('SHA2_512', N'bunny')
    and bigval = N'bunny';
GO
This revised approach gives me a targeted index seek and limits my logical reads. Voila!
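Conceptually, the computed column turns a full scan into a keyed lookup. Here's a hedged Python sketch of the same idea – a dict keyed by the hash digest standing in for the nonclustered index, with a recheck of the full value to guard against collisions. The toy rows are made up:

```python
import hashlib

def h(value):
    # Stand-in for HASHBYTES('SHA2_512', ...); SQL Server hashes the
    # UTF-16LE bytes of an nvarchar, so we encode the same way here.
    return hashlib.sha512(value.encode("utf-16-le")).digest()

# Primary key -> wide value (toy data standing in for dbo.LookupValues)
rows = {1: "d" * 700, 2: "big", 3: "bunny", 4: "bunny bunny"}

# The "nonclustered index": hash digest -> list of primary keys
index = {}
for pk, bigval in rows.items():
    index.setdefault(h(bigval), []).append(pk)

# The revised query: seek on the hash, then recheck the actual value
def lookup(value):
    return [pk for pk in index.get(h(value), []) if rows[pk] == value]

print(lookup("bunny"))   # [3]
```

The dict seek touches only the matching bucket instead of every row – the same reason the index seek limits logical reads.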
The Fine Print on This Solution
There are a few things to note:
- HASHBYTES results depend on datatype. If my query used HASHBYTES('SHA2_512', 'bunny'), it would not find any rows, because the column contains hashed Unicode values and I provided a hashed non-Unicode value.
- I do still include bigval = N'bunny' in my query. In theory there shouldn't be collisions with SHA-512, but the extra predicate doesn't add much expense to the query, and in my example I deemed it worth it. You might make a different choice.
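You can see the datatype dependence outside SQL Server with a Python analogue using hashlib. SQL Server stores nvarchar as UTF-16LE, while a varchar literal is a single-byte encoding (Latin-1 is a reasonable stand-in here), so hashing the "same" string produces completely different digests:

```python
import hashlib

# nvarchar N'bunny': SQL Server hashes the UTF-16LE bytes
wide = hashlib.sha512("bunny".encode("utf-16-le")).hexdigest()

# varchar 'bunny': a single-byte encoding (Latin-1 as a stand-in)
narrow = hashlib.sha512("bunny".encode("latin-1")).hexdigest()

print(wide == narrow)            # False - the two hashes never match
print(len(bytes.fromhex(wide)))  # 64 bytes - well under the 900-byte limit
```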
Sometimes Old Tools Do the Trick
What I love most about this solution is that it's creative, but it's not really weird when you think about it. It uses standard features that have been in SQL Server for a long time to do something the product seemingly doesn't support – and that's really cool.
It’s difficult to define why some things are wildly successful in a sea of forgettable products. Some recipes have a mysterious umami that comes from a subtle dash or two of magical sauce. In business and technology there’s an indefinable edge that sets people and teams apart. How do you get that special something?
Here’s one strategy businesses use all the time: layer products to build new, deeply specialized meta-products. First, they create a service or application they can sell. It generates revenue, but it also generates data. Sometimes they have to adjust (or heaven forbid, pivot) the product, but they get good at it. They start making money.
After a while they apply analysis to harvest meta-information from the product. The meta-information is used to create new, different products. These products may offer extra insights to existing customers, but they may also suit new and different customers, too. BLAMMO, the business is more diverse and successful.
These techniques aren’t just for packaged products. This is also what helps companies create services that seem a little magical, because they’re simultaneously simple and complex. We use these principles to make our own SQL Server Training super sharp and effective for students.
This isn’t something that only we can do. You can use the same techniques to find your own secret sauce.
Step 1: To Get Ahead, Look Back
One of our primary consulting products is our Critical Care Sessions. We work closely with a client to identify their pain points and the bottlenecks in their SQL Server environment– issues can be anywhere from the storage layer to SQL configuration to query optimization and execution. We deliver recommendations for the client to implement within the first couple of weeks, the next month, and the next quarter. We tailor each recommendation to the client’s specific needs.
After a few weeks have passed, we meet again. We work with the client to find out how much they’ve gotten done. What’s been easy? What’s been difficult? Are there any questions we can help out with?
We listen very carefully, because this is an important source of information. Following up with your customers and finding out what has worked for them and why some tasks are trickier for them than normal is one of the most important things you can do. This tells us:
- Where misunderstandings commonly occur
- What types of data successfully get buy-in to make difficult changes
- Which scripts and tools are really critical to get people that data
The truth is this: it’s easy to know facts. It’s difficult to be effective. Because of this, we constantly collect data on what helps people get things done and evolve our tools.
This information is certainly helpful to our consulting product itself– it keeps us at the top of our game. But we also get an important component of our secret sauce from this: we can train people in the classroom to be effective using the techniques and scripts we’ve refined.
To apply this step yourself, create a habit of regularly meeting with customers and listening to feedback. Make notes. You won’t always be able to implement every idea you have, but keep track of what you find along the way. Steps 2 and 3 will give you a chance to harvest that information.
Step 2: Identify Your Differentiators
Very smart people are sometimes terrible teachers. We knew we could teach on a wide variety of subjects. But how did we build a great training?
The key is to identify what sets you apart and narrow your scope to projects where you can impact your audience deeply. We don’t want to fill a training room with warm bodies and talk about general SQL Server topics. We want to get the right people into the room so that we can address their immediate problems and have them jumping out of their seats to get going.
To create great training, we identified what we do better than anyone else. These are our differentiators:
- We’ve built tools and scripts we can quickly train people to use effectively to diagnose server health and performance and tackle tough indexing problems.
- We've created specialized materials to help developers learn the critical factors to SQL Server performance in a short amount of time.
- We have refined queries and techniques to teach developers to recommend effective changes in their environment.
- We have an army of slide decks, blog posts, and even animated gifs to help developers solve performance problems, no matter how niche– so making Q&A awesome is no problem.
Knowing these differentiators made our mission clear. We knew we could change people’s careers with a short 2 day class, targeted at a specific audience (developers), and give them immediately useful scripts and techniques to improve performance.
We also looked at differentiators in how we offer the training itself. What new things could we do to help people access the training and make it work for them in a new way?
- We created a business justification form to help people show the value of their training to management before signing up.
- We also included two two-hour group Q&A webcasts for just the people in the training session two weeks after the event. You know how most trainings leave you with a feeling of confusion about a few topics when you get back to your desk and try to apply what you learned? These Q&A webcasts mean our attendees can get answers for those difficult situations.
Define your differentiators for yourself. What do you do better than anyone else in your environment? What needs do you fill, or could you fill, that nobody else covers well? Where are the gaps and needs in your workplace that you find interesting?
Step 3: Don’t Kill Yourself with Focus
You’ll never get permission to have an idea.
We didn’t start our business with the plan of giving our own training events. We love consulting. We speak in front of people all the time– we give our weekly webcasts and speak at local and national conferences, but we primarily think of ourselves as people who go out and initiate change rather than people who stand behind a podium and lecture. We naturally focused on consulting as our core business.
Focus is great and it can help make you successful. But too much focus makes you predictable. It keeps you from really using your secret sauce.
We give training because we were open to a new idea. We realized that a two-day training is perfect to teach a developer what they need to know about performance tuning SQL Server to design changes for real-world environments. It's just enough time to get them out of the office and teach them how to use the right tools, but not so long that they're swamped under an incredible tidal wave of email when they return.
Set aside time at least once a month to think about how your job could be dramatically different. Don’t restrict yourself by immediate needs or how much time you have available. Give yourself room to dream of what you’d be really proud of doing in six months or a year.
Here’s the key: don’t wait until you’re having problems to make your work more awesome. Be creative and act on your aspirations when you’re already successful and busy. That’s often when you’ll have your best ideas! We added our training offering at a time when we were already very successful, simply because we knew we could offer a very high quality product and help developers in a way nobody else can.
The Secret Sauce Isn’t Out of Reach
Good news: if you read our newsletter every week or even just made it to this point in the post, you're probably a specialist already. You may sometimes feel like a small fish in the big SQL Server pond, but that's just human nature. If you're even in the pond, you're well on your way toward becoming very good at jobs that are difficult to fill.
Skills that can set you apart, like becoming great at performance tuning SQL Servers, are completely within your reach. A few steps down the right path are all it takes.
When I’m writing a presentation or blog post, I often start here:
It’s a photo of me in my office in Dallas, Texas in 2004. When I look at that picture, I remember everything like it was yesterday. I can talk at length about everything on the bookshelf, on my desk, in my drawers (the desk drawers, that is).
I can tell you what technology problems I was struggling with, plus what problems my manager was concerned about. I remember what I knew, and what I didn’t know yet. I can recite the web sites I frequented.
Next, I can turn the mental camera around and see exactly what’s outside my office door: my developers and my support team. I can tell you what they rocked at and what they wanted training on. I can remember how we decorated their cubes for their birthdays – covering Julian’s stuff in aluminum foil, building a princess’ castle for Hima.
The funniest thing, though, is that I didn’t remember any of this until I rediscovered this photo several years ago. All of a sudden, everything was clear to me.
And I realized who I was writing for.
Now, it’s really easy for me to scope my presentations and blog posts because I’m writing for 2004 Brent. 2004 Brent hadn’t studied databases and tried to turn them inside out – he just needed to store data and get it back out quickly. He wasn’t on a first name basis with book authors and MVPs – he didn’t even know what an MVP was.
You need to take this picture today.
Set up your camera with a self-timer or get a colleague to shoot a few pictures of yourself sitting in your work environment. Get pictures of the books on your shelf, the stuff on your desk, and maybe take a screenshot of your task list. Write yourself a one-page note covering:
- The stuff you’re comfortable with
- The stuff you’re uncomfortable with
- The things you want to learn this year
- The things you learned recently that surprised you
Stash these pictures and words away in a time capsule folder somewhere. A few years from now, when you’re writing a presentation covering something you’ve learned, get these back out. Think about what you knew and didn’t know, and that’s your target audience. Before you use a term or acronym, think back and ask, “Did 2013 Me know that? If not, lemme introduce the topic.”
When you’re writing, remember that you’re never writing for your current self. You’re writing for the past version of you. Having these pictures and words will help you define your audience.