Occasionally you check out job listings and wonder, “Could I have a better job?” If you’ve been working as a database administrator for a few years, it’s time to learn how to tell a dream job from a potential nightmare. Join Kendra Little for a 30-minute guide on how to read the hidden messages in job listings and find the right next step for your career.
Need to practice for your next interview?
We’ve got training! For $29 you get 18 months of access to videos helping you prepare and practice for interviews, with 100 practice DBA interview questions on your desktop, laptop, iPad, or iPhone. Learn more or buy it now.
Liked this video? Check out our upcoming free webcasts.
Fast – When developers ask how quickly a piece of code needs to run, don’t say fast. Give them a finish line so that they can know when it’s time to move on. “This query currently does over 15mm logical reads. We don’t allow production OLTP queries that do over 100k logical reads – anything higher than that needs to hit the reporting server instead.” Developers don’t want to write slow queries, but if you don’t show them how to measure their queries, they don’t know what’s slow versus fast. Show them how to measure what makes a query successful, and they’ll start measuring it long before they bring it to you.
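That “finish line” idea can be sketched in a few lines of Python. The 100k logical-read budget below is the hypothetical number from the quote above, not a universal standard:

```python
# A minimal sketch of a query "finish line" check, assuming a
# hypothetical 100k logical-read budget for production OLTP queries.
OLTP_LOGICAL_READ_BUDGET = 100_000

def query_verdict(logical_reads: int) -> str:
    """Classify a query against the shop's stated budget."""
    if logical_reads <= OLTP_LOGICAL_READ_BUDGET:
        return "fast enough for OLTP"
    return "move it to the reporting server"

# The 15mm-read query from the example above:
print(query_verdict(15_000_000))
```

The point isn’t the code; it’s that “fast” becomes a number anyone can check before the query ever reaches you.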
Sometimes - When code works unpredictably, don’t say it sometimes works. Look for things it has in common when it fails. Does it bomb every Sunday, or when it handles over ten customers, or when it’s only being called once at a time? Keep digging for environmental coincidences until we can give the developers or QA staff a lead. Sometimes (see what I did there?) I find that developers want access to the production server just because they can’t get enough specific troubleshooting help from the DBAs. “Your code fails sometimes” isn’t going to cut it.
Never - When the developer tries to deploy a trigger in production, don’t say, “We never allow that.” The business, not our emotional desire, dictates the technical solutions we use. We’re here to advise the business, and sometimes the business won’t go along with our advice. Our job is to lay out our concerns clearly and concisely, preferably with risk assessments and real-life stories, and then listen. I’d love to build every box and application perfectly, but we gotta ship if we’re going to keep paying salaries.
Fine - When you’re asked how the server is performing, don’t say it’s fine. Quantify it in terms of batch requests per second, average duration for a stored procedure, or some other metric that you can measure precisely. Bonus points if we correlate this number over time, like if we can say, “We normally average 1,400 to 1,500 batch requests per second during peak weekday hours, and we’re doing 1,475 right now.”
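A baseline comparison like that is trivial to automate. This sketch uses the hypothetical 1,400–1,500 batch requests per second range from the text:

```python
# Sketch: compare a current metric to a historical baseline range
# instead of saying "fine". The range is the hypothetical one from
# the text, not a real-world threshold.
BASELINE_LOW, BASELINE_HIGH = 1400, 1500  # batch requests/sec, peak weekday

def describe(batch_requests_per_sec: float) -> str:
    if BASELINE_LOW <= batch_requests_per_sec <= BASELINE_HIGH:
        return "within the normal peak-hour range"
    if batch_requests_per_sec > BASELINE_HIGH:
        return "above normal - investigate the extra load"
    return "below normal - check whether work is backing up"

print(describe(1475))
```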
Large – When developers ask how big a table, database, or index is, don’t say large. What’s large to you is small to someone else, and vice versa. Give exact numbers: number of terabytes in the largest database, number of databases, number of rows in the largest table, or the number of times you’ve updated your resume in terror because the backup failed.
It Depends – DBAs love giving this answer as a cop-out for tough questions, and if you’re not careful, you come off as a condescending know-it-all. For best results, immediately follow this phrase with the word “on”, as in, “It depends on the table’s access pattern – in read-focused systems like our data warehouse, we can have up to 10-15 indexes on our dimension tables, but in OLTP databases like our web site, we need to aim for 5 indexes or fewer per table.” The faster you can help someone to the answer they’re looking for, the more they’ll respect you as a partner, not an enemy combatant.
Moving between databases is hard enough, try using multiple databases in the same application and you might start thinking you’ve gone insane. Different application demands for accessibility, redundancy, backwards compatibility, or interoperability make this a possibility in the modern data center. One of the biggest challenges of running a heterogeneous database environment is dealing with a world of data type differences. There are two main ways to work through this situation:
- Using a subset of data types.
- Creating custom data type mappings.
To make comparisons easier, I’m going to focus on SQL Server, PostgreSQL, and Azure Table Services.
Using a Subset of Data Types
The ANSI standard defines a number of data types that should be supported by database vendors but, as with all standards, there’s no guarantee that vendors will support all data types or even support them equally. Among others, the SQL Standard defines the following data types:
- time (with or without time zone)
- timestamp (with or without time zone)
As an example of differences between the ANSI standard and vendor implementations, the ANSI standard defines a TIMESTAMP data type that is implemented as a date and time with an optional time zone, whereas SQL Server defines TIMESTAMP as an arbitrary auto-incrementing unique binary number.
Taking a look around, it’s easy to see that there are major differences between databases. An easy way to resolve this problem is to use only a small subset of the available data types. This choice seems attractive when we’re working with a language that doesn’t support rich data types. Some languages only support a limited number of data types (C provides characters, numeric data types, arrays, and custom structs), while more advanced languages provide rich type systems.
Comparing our database solutions, the Azure Table Services data model supports a constrained set of data types. While rich type systems are valuable, the Table Services data model provides everything needed to build complex data structures. The simple data model also makes it easy to expose Azure Table Services data as ATOM feeds that can be consumed by other applications. By opting for simplicity, Table Services can communicate with a variety of technologies, regardless of platform.
The downside of restricting an application to a limited set of data types is that it may become very difficult to store certain data in the database without resorting to writing custom serialization mechanisms. Custom serialization mechanisms make it impossible for users to reliably report on our data without intimate knowledge of how the data has been stored.
Compare the supported Azure Table Services data types with SQL Server 2008 R2’s data types and PostgreSQL’s data types. There’s some overlap, but not a lot. Limiting your application to a subset of data types is really nothing more than limiting your application to the subset of data it can accurately store, model, and manipulate. Everything else has to be transformed on the way in, or left out entirely.
Custom Data Type Mappings
Let’s assume we have an application that is built using PostgreSQL as the primary OLTP back end. We can expose a lot of our functionality through our cloud services as simple integers and strings, but there are some things that aren’t assured to work well when we move across different OLTP platforms. We can’t always map data types directly: how does inet map to SQL Server or Azure Table Services? There’s no immediately apparent way to map the inet data type to any other data type.
Clearly, custom data type mappings are not for the faint of heart. Decisions have to be made about gracefully degrading data types between databases so they can be safely reported on and reconstituted in the future. Depending on the application, inet could be stored as Edm.String in Azure Table Services, or as VARCHAR(16) in SQL Server (which only works if we’re ignoring IPv6 addresses and the netmask).
If this sounds like a recipe for confusion and disaster, you might be on to something. Using custom data type mappings across different databases can create confusion and requires custom documentation, but there is hope.
Applications using the database only need to know about the data types that are in the database. Reporting databases can be designed to work with business users’ reporting tools. As long as the data type mappings do not change, it’s easy enough to keep the reporting databases up to date through automated data movement scripts.
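One way to keep those mappings sane is to make them explicit in one place and fail loudly when a mapping hasn’t been decided yet. A minimal Python sketch, using the inet example from above (the target type names, including the wider VARCHAR(45) to accommodate IPv6, are illustrative choices, not recommendations):

```python
# A sketch of an explicit, documented type-mapping table between
# PostgreSQL and two hypothetical targets. Type names are illustrative.
PG_TO_TARGETS = {
    "inet": {
        "sqlserver": "VARCHAR(45)",   # wide enough for IPv6 text form
        "azure_tables": "Edm.String",
    },
    "integer": {
        "sqlserver": "INT",
        "azure_tables": "Edm.Int32",
    },
}

def map_type(pg_type: str, target: str) -> str:
    """Fail loudly when a mapping hasn't been agreed on yet."""
    try:
        return PG_TO_TARGETS[pg_type][target]
    except KeyError:
        raise ValueError(f"no agreed mapping for {pg_type!r} -> {target!r}")

print(map_type("inet", "azure_tables"))
```

A table like this doubles as the custom documentation mentioned above: the mapping decisions live in code that both the application and the data movement scripts can share.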
What Can You Do?
There’s a lot to keep in mind when you’re planning to deploy an application across multiple databases. Understanding how different databases handle different data types can ease the pain of querying data in multiple databases. There’s no reason to limit your application to one database; just be aware that there are differences between platforms that need to be taken into account.
Google has created its own cross-application/platform data serialization layer called protocol buffers. If you’re looking at rolling your own translation layer, protocol buffers may fit your needs.
Erika loves having fresh flowers around the house. Every Saturday morning, I pick up a bouquet at a farmer’s market or grocery store and put it in a vase for her. I’m slowly upping my game by learning more and more about the art of arranging flowers.
When I say flowers, I bet you think about the English Garden style: a big, complex vase with all kinds of flowers crammed into every nook and cranny. It’s an explosion of color and life.
That’s way too stuffy for us. We’re into minimalism, clean lines, and letting materials speak for themselves. I like plucking one or two of the more beautiful or unusual flowers and putting them in their own vase. This leans toward the Ikebana style of Japanese flower arrangement, specifically the Nageire type. (I don’t even want to think about how badly I’m going to mispronounce these if I ever have to say them out loud.)
Writing database code is like arranging flowers.
If you show someone your bouquet, they might not like it. They might give you a million reasons about why it’s not right or why another way is better. That’s not the point – does it produce the results you want?
If your goal is to get to market quickly and cheaply, just buy a premade bouquet from the grocery store, throw the flowers in the vase and be done with it. Use LINQ, Entity Framework, NHibernate, or whatever code tools make your job easy.
If you translate your app code into SQL code, you’re building an English Garden. You start by declaring variables at the center, then populating those variables by checking configuration tables, then spin out to more and more other tables, getting your results in loops and setting values one at a time. This is exactly how developers have always been taught to arrange their flowers, and it works just fine. Once you’re used to doing it, you can bang that code out quickly, and the results are attractive.
But if you need it to be beautifully fast, you need Ikebana. You need very clean, very minimalist code that gets the job done in as few statements as possible. In a database environment, this means set-based code that avoids cursors and loops.
While clean, Ikebana-style database code is simple to behold, it’s deceptively complex to build. The first step is moving as much logic as possible from the database server to the application server, starting with the ORDER BY clause. If you’re not fetching just the TOP X rows, then do all sorting in the application server. Removing just that one line from a query will often cut its cost dramatically. Your development platform (.NET, Java, COBOL, whatever you kids are using these days) is really good at scaling out CPU- and memory-intensive work like sorting, and you’re already really good at splitting your work across multiple application servers. Leverage that capability.
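Here’s a tiny sketch of that division of labor, with a hard-coded result set standing in for whatever your data access layer returns: the database streams unsorted rows, and the app tier does the sort.

```python
# Sketch: fetch unsorted rows from the database and sort in the
# application tier instead of adding ORDER BY to the query.
# `rows` stands in for a result set from any database driver.
rows = [
    {"customer": "Adventure Works", "total": 1200},
    {"customer": "Contoso", "total": 5400},
    {"customer": "Northwind", "total": 300},
]

# The app server does the CPU-intensive sort;
# the database just streams rows back as fast as it can.
rows.sort(key=lambda r: r["total"], reverse=True)
print([r["customer"] for r in rows])
```

The same idea applies to grouping, formatting, and string manipulation: anything the database doesn’t do beautifully is a candidate for the app tier.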
Think of it like pruning your code – remove all the things that database servers don’t do beautifully, and what you’re left with will be gorgeous.
Are you frustrated by third party applications that you can’t change, but you have to support? Tired of beating your head against the wall when your users complain about things you can’t fix? In this 30-minute session, Brent Ozar will show you his favorite tricks to get the most performance without losing support. He’ll show you how to interact with vendors and get what you want – without getting heartburn:
Like that video? We’ve got half a dozen more scheduled for upcoming Tuesday lunches. Click the boxes you want and sign up for free.
Celebrity cooking shows are popular around Brent Ozar Unlimited®. We watch Top Chef for the creative cooking as much as the human drama. Contestants on Top Chef face huge challenges – they’re working alone, have a limited set of tools, have a fixed set of ingredients, and operate under ridiculously strict time guidelines. What makes the creativity of chefs even more interesting is the limitations they’re under: contestants have to work with strange, disgusting or difficult ingredients in order to win the favor of the judges.
Building successful software isn’t much different. Sure, there’s a team of collaborators to help us make decisions, but everyone on that team is frequently responsible for one area of the application. Success or failure depends on your ability to work with the requirements you’re given and please the final judges: the end users.
The Rules of the Contest
Top Chef contestants work within the rules of the contest: they have a limited amount of time and a limited number of ingredients to make an appealing meal. The core ingredients and the time period are non-negotiable. The show’s producers call these the rules; in the world of software these are business requirements, and in the world of programming we might call them application invariants. But no matter what you call them, these things can’t change. Requirements might be something like:
- A picture can’t be viewed until four thumbnails have been generated.
- Property listings cannot be viewed until approved by the listing agent.
Contestants on Top Chef don’t immediately start cooking, although careful editing makes it look that way. Development teams, even agile teams, shouldn’t immediately start coding once they have their hands on requirements. It’s important to look carefully at the requirements and make sure it’s possible to deliver something that the business (the judges) is happy to see. Winning chefs don’t immediately reach for a bottle of Frank’s Red Hot to give a dish a bit of pizzazz; they consider all of the options and match their condiments to the meal. So why do we always reach for the same tools?
It’s easy to be lulled by the familiar – hot dogs and burgers are easy, but they don’t win Top Chef. While you don’t need award winning code to win the game of delivering software, you do need to make the right choices to make life easier.
Working with black chicken and monkfish liver may not be the easiest thing, but contestants on Top Chef are routinely able to turn strange ingredients into masterpieces. Business requirements spell out how our applications have to behave at the end of the day, but you’ll notice that it doesn’t matter how you get there. Just make sure you get there – solve the business problem and move on.
Start with the Ingredients
It isn’t uncommon for Top Chef contestants to scrap their first ideas after a few minutes of work. Likewise, don’t be afraid to throw away your first idea. If you’re a pack rat, write your idea on a piece of paper and hide it from yourself. It’s okay to come back to your first idea, but it’s important to think about the problem in a different way.
What are the core ingredients of your application? Just because you have chicken, that doesn’t mean you should make chicken cordon bleu. Ask yourself, “What am I supposed to create?” A few applications I’ve come across in the last year are:
- Single sign-on systems
- Hosted property listings
- Utility easement tracking
- Document tracking and signing
Each of these applications has a different set of features and functionality. Would you use the same solution for each one? Looking at it a different way: would you serve the same meal for the Super Bowl as you would for Christmas dinner?
Make an itemized list of the ingredients that you have on hand. Your requirements are your ingredients. They drive the way the users will interact with the data. As you investigate the requirements, make sure you ask the users questions like, “Do you need point in time recovery for easement property maps?” or “Is it a requirement that a user have a first name, last name, bio, and profile picture or would a user name and password be acceptable?” Understanding your requirements drives your choices.
It’s All About the Ingredients
Under all of your application code, you need somewhere to store your data. One of the Top Chef judges frequently asks “Where’s the protein?” when served a salad. As you work through application requirements, ask yourself “Where’s the data?”
Top Chef contestants typically aren’t told that they need to make sweet glazed salmon, they’re told to use a set of ingredients and produce a fine meal. It’s up to the chef to determine whether to use rémoulade or tartar sauce and it’s up to you to make technical decisions. The business user isn’t going to know the answers to your technical questions, but they do know that a user only needs a user name and password to use the application.
Use the business requirements to help make your database design decisions – if an image doesn’t need to be transactionally consistent with all other data, you don’t need to store it in your relational database. The rules of the contest – the business requirements – should shape how you design your application. They give you both the restrictions and freedom you need to be creative.
What Will They Eat?
Food falls into distinct cuisines. If I gave you a choice between sushi and tapas, you’d be able to make an informed choice because you know the ingredients and techniques used in each style of cooking.
As you evaluate the business requirements, dig deeper and imagine the types of answers that users might look for in the data. Will users look for property along the path of a tornado where repairs need to be made? Are users searching for houses with specific features – e.g. find single family homes with 2 or more bathrooms and an attached garage? Or are users’ questions difficult to predict and completely free form?
Understanding how people will use the data guides the choices we make. If users will be performing free form text searches, a full text search engine like SOLR should be considered. If an application is pure OLTP, it’s possible that you can use a key-value database. Understanding application requirements means that you can decide whether you need to use SQL Server or you can investigate other options.
Some of the database cuisines to consider are:
- Relational database (SQL Server, PostgreSQL)
- Document database (CouchDB, MongoDB)
- Text Search (Lucene/SOLR, Elastic Search)
- Key-Value database (Riak, Cassandra)
The process of picking a database can lead to conflict. Developers have their favorite new technologies they want to try, and entrenched products are frequently favored above all others. Understanding how a database meets application requirements is important: if you don’t know which ingredients you have, you don’t know what to make, and if you don’t understand the application invariants involved, you can’t know which option is the best.
Ultimately, making sure you pick the right tool for the job can lead to faster development, easier support, and better throughput.
What Do the Judges Think?
The most important thing, though, is what the judges think. It doesn’t matter if you’ve made the greatest chicken salad sandwich ever, if your work doesn’t meld with the judges’ expectations you won’t be taking home the prize. Understanding how the requirements influence the ways that users will work with data is critical if you want to be successful. Once you know how people will work with the tools, you’ll be able to make the right decisions for your application.
How would you like to go to a SQL Server conference in Las Vegas where the sessions are taught by Brent Ozar Unlimited®, SQLskills, SQLServerCentral, and SQL Sentry?
Yep. Me, Jeremiah, Kendra, Kimberly Tripp, Paul Randal, Jonathan Kehayias, Erin Stellato, Steve Jones, and Aaron Bertrand. Between us, that’s 3 MCMs, 2 MCM instructors, 7 MVPs, and 2 MVP Regional Directors.
If you’re serious about learning SQL Server, this should be the very first conference on your fall priority list. Check out some of these sessions:
- Troubleshooting SQL Servers in VMware and SANs (me)
- Understanding Locking, Blocking, and Isolation Levels (Kimberly)
- Understanding Logging and Recovery (Paul)
- X-Ray Glasses for Your Indexes (Kendra)
- Branding Yourself for a Dream Job (Steve)
- Deadlocking for Mere Mortals (Jonathan)
- Hadoop: The Great and Powerful (Jeremiah)
- Making the Leap from Profiler to Extended Events (Erin)
How much would you pay for three days of awesome learning at a conference like this with top-notch speakers, all killer no filler?
And hey, it’s Vegas, so it’s a great team building city, like when Jeremiah and I rented cars last time and, uh, built teams. Yeah.
But wait – there’s more! Check out the pre-con workshops:
- Accidental DBA Starter Kit (me, Jeremiah, Kendra – Pre-Con Sunday) - You’re responsible for managing SQL Servers, but you’ve never had formal training. You’re not entirely sure what’s going on inside this black box, and you need a fast education on how SQL Server works. In one day, you’ll learn how to make your SQL Server faster and more reliable. You’ll leave armed with free scripts to help you find health problems and bottlenecks, a digital set of posters that explains how SQL Server works, and an e-book that will keep your lessons moving forward over the next 6-12 months.
- Queries Gone Wild: Real-World Solutions (Kimberly – Pre-Con Sunday) - Have you ever wondered why SQL Server did what it did to process your query? Have you wondered if it could have done better? And, if so, how? Transact-SQL was designed to be a declarative language that details what data you need, but without any information about how SQL Server should go about getting it. Join order, predicate analysis – how does SQL Server decide the order or when to evaluate a predicate? Most of the time SQL Server gets the data quickly but sometimes what SQL Server does just doesn’t seem to make sense. Inevitably you’ll encounter certain workloads and queries that just aren’t performing as well as you expect. There are numerous reasons why query performance can suffer and in this full-day workshop Kimberly will cover a number of critical areas while showing you how to analyze a variety of query plans throughout the day.
- Scale Up or Scale Out: When NOLOCK Isn’t Enough (me, Jeremiah, Kendra – Post-Con Thursday) - Partitioning, replication, caching, sharding, AlwaysOn Availability Groups, Enterprise Edition, bigger boxes, or good old NOLOCK? You need to handle more data and deliver faster queries, but the options are confusing. In this full-day workshop, Brent, Kendra, and Jeremiah will share the techniques they use to speed up SQL Server environments both by scaling up and scaling out. We’ll share what features might save you hundreds of development hours, what features have been a struggle to implement, and how you can tell the difference. This workshop is for developers and DBAs who need to plan long term changes to their environment.
- Practical Disaster Recovery Techniques (Paul – Post-Con Thursday) - Disasters happen – plain and simple. When disaster strikes a database you’re responsible for, can you recover within the down-time and/or data-loss limits your company requires? What if your plan doesn’t work? This workshop isn’t about how to achieve high-availability, it’s about how to prevent or overcome the obstacles you’re likely to hit when trying to recover from a disaster – such as not having the right backups, not having valid backups, or not having any backups! In this demo-heavy workshop, you’ll learn a ton of practical tips, tricks, and techniques learned from 15 years of experience helping customers plan for and recover from disasters, including less frequently seen problems and more advanced techniques. All attendees will also receive a set of lab scenarios for further study and practice after the class with assistance from Paul.
Now how much would you pay for all this? Three thousand? Four thousand? Ten thousand? BUT WAIT, THERE’S MORE!
For $1,894 before June 24th, you can get the Show Package: the conference, PLUS a pre-con or post-con of your choice, PLUS your choice of a Surface RT, Xbox, or a $300 gift card.
For $2,294, you get all that plus ANOTHER pre-con or post-con – five days of nonstop learning from the absolute best in the business.
No? You want more? Okay, you drive a hard bargain, buddy. Use discount code OZAR and you get another $50 off. Register now. Operators are standing by.
This weekend, emails went out to folks who’d submitted their sessions for the PASS Summit 2013 in Charlotte.
If you’re bummed, listen up. I know what it feels like to get turned down, because I got turned down the first couple of times I submitted, too. The blessing and the curse of the SQL Server community is that there are so many people who want to help others – but of course this makes it harder to get your place up on the stage. It’s only going to get worse/better as more people continue to discover the community.
Whether you got a good email or a bad one, your work is just beginning. Either you’re prepping for this October, or you need to start prepping for the next conference. In either case, here are 51 questions you need to ask yourself about your abstract, your material, and your delivery.
- What pain is bringing the attendee to this session?
- How are they going to relieve that pain when they get back to the office?
- What does the attendee know already coming in?
- Who should not attend this session?
- Reading your abstract, are the answers to the above four questions crystal clear?
- What did you learn from Adam Machanic’s post Capturing Attention?
- Did your abstract take one thing off before it left the house?
- If you search the web for your abstract title, what comes up?
- Who else do you expect will submit on a similar topic?
- How will you show your own personality and expertise in the abstract?
- Of ProBlogger’s 52 Types of Blog Posts, which one matches your planned sessions?
- What other types of sessions from that list could you use to surprise and delight attendees?
- Are you teaching why or how?
- How would a handout make it easier for attendees to learn your lessons?
- What visualization would bring your session to life?
- Could you contract out a local design student or company to build it for you?
- Are you presenting to teach or to impress?
- Have you gotten feedback on your abstract from a proven speaker you trust?
- If a teacher graded your presentation, would you get an A?
- On that 24-point scale, what would it take to succeed at a national conference?
- What topics are you going to avoid entirely in order to save time?
- How often have you rehearsed this presentation before giving it to a local user group?
- Have you given this presentation before at local user groups and SQLSaturdays?
- Did you record the session (either video or audio)?
- Did you watch the recording to see where you can improve the material and your delivery?
- What questions did the attendees ask at those sessions?
- What feedback did the attendees give at the user group or SQLSaturday?
- How will you use that feedback to improve your session?
- If you gave attendees a test at the end of your session, what questions would be on it?
- If your session was a movie, what genre would it be?
- What other movies would be sitting next to it in the store?
- Who would play the leading role?
- What are three words you want attendees to use to describe your session?
- How do your abstract, material, and delivery inspire those three words?
- Have you clearly attributed ownership to the code and pictures in your session?
- If nobody asks any questions at all, will you still be able to fill the time slot?
- If you get many questions, which slides/sections can you skip without losing meaning?
- Where will you post all of the resources for your session?
- If people have a question while reading those resources, how will they contact you?
- If this session was a module in an all-day training class, what would the other modules be?
- What’s the worst thing that could happen in your session?
- How will you recover if that thing happens?
- Can you form an instant community of your attendees using a Twitter hash tag or chat room?
- What would your session look like with no demos whatsoever?
- What would your session look like as 100% demos and no slides?
- If you started the session with a question, what would that question be?
- What’s the easiest, simplest way for the attendee to learn the lessons?
- Could you get the presentation’s learning lessons across with a blog post or series?
- When you ask people why they linked to your post, what do they say they found compelling?
- What questions did readers ask in the comments?
- What’s stopping you from writing that blog post right now to gauge reader interest?
No, really. What’s stopping you? Don’t think for one moment that attendees will skip your session because they’ve read your work. It’s the exact opposite: readers come to your session because they like your work. Whether PASS told you yes or no, start writing your blog posts right now to find out what works and what doesn’t.
Everyone wants to make sure they’re getting the best performance out of their solid state storage. If you’re like a lot of people, you want to make sure you’re getting what you paid for, but how do you know for sure that the drive is performing well?
Watch that Average
The first way to monitor performance is to use some perfmon counters. Although there are a lot of perfmon counters that seem helpful, we’re only going to look at two:
- PhysicalDisk\Avg. Disk Sec/Read
- PhysicalDisk\Avg. Disk Sec/Write
As soon as you get a solid state drive in your server, start monitoring these numbers. You’ll be able to trend performance over time and watch for degradation. When the SSDs drift outside of your performance guidelines (and they probably will), you can pull them out of storage one at a time and reformat them before adding them back into the RAID array.
Although it’s risky, this approach can work well for detecting performance problems while they’re happening. The downside is that we don’t have any idea that the drives are about to fail – we can only observe the side effects of writing to the SSDs. As SSD health gets worse, this average is going to trend upwards. Of course, you could also be doing something incredibly dumb with your hardware, so we can’t really use average performance as a potential indicator of impending hardware failure.
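If you’re logging those two counters somewhere, a trending check can be as simple as this sketch. The 20 ms threshold and the 10-sample window are illustrative numbers, not standards; pick values from your own baseline:

```python
# Sketch: watch Avg. Disk sec/Read samples over time and flag when the
# recent average drifts above a threshold. Numbers are illustrative.
from statistics import mean

def latency_alert(samples_ms, threshold_ms=20.0):
    """True when the average of the last 10 samples exceeds the threshold."""
    recent = samples_ms[-10:]
    return mean(recent) > threshold_ms

# A history that trends upward as the drive wears:
history = [4.0, 5.0, 4.5, 6.0, 9.0, 14.0, 18.0, 22.0, 25.0, 28.0, 30.0, 31.0, 33.0, 35.0]
print(latency_alert(history))
```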
Which SMART Attributes Work for SSDs?
What if we could watch SSD wear in real time? It turns out that we’ve been able to do this for a while. Many vendors offer SMART status codes to return detailed information about the status of the drive. Rotational drives can tell you how hot the drive is, provide bad sector counts, and a host of other information about drive health.
SSDs are opaque, right? Think again.
SSD vendors started putting information in SMART counters to give users a better idea of SSD performance, wear, and overall health. Although the SMART counters will vary from vendor to vendor (based on the disk controller), Intel publishes documentation on the counters available with their SSDs – check out the “SMART Attributes” section of the Intel 910 documentation. These are pretty esoteric documents, and you wouldn’t want to have to parse that information yourself. Thankfully, there are easier ways to get to this information; we’ll get to that in a minute.
Which SMART Attributes Should I Watch?
There are a few things to watch in the SMART status of your SSDs:
- Write Amplification
- Media Wear-out Indicator
- Available Reserved Space
Write Amplification, roughly, is a measure of the ratio of writes issued by your OS compared to the number of writes performed by the SSD. A lower score is better – this can even drop below 1 when the SSD is able to compress your data. Although the Write Amplification doesn’t help you monitor drive health directly, it provides a view of how your use pattern will change the SSD’s lifespan.
The Media Wear-Out Indicator gives us a scale from 100 to 0 of the remaining flash memory life. This starts at 100 and drifts toward 0. It’s important to note that your drive will keep functioning after Media Wear-Out Indicator reports 0. This is, however, a good value to watch.
Available Reserved Space measures the original spare capacity in the drive. SSD vendors provide additional storage capacity to make sure wear leveling and garbage collection can happen appropriately. Like Media Wear-Out Indicator, this starts at 100 and will drift toward 0 over time.
It’s worth noting that each drive can supply additional information. The Intel 910 also monitors battery backup failure and provides two reserved space monitors – one at 10% reserved space available and a second at 1% reserved space available. If you’re going to monitor the SMART attributes of your SSDs, it’s worth doing a quick search to find out what your SSD controllers support.
How do I Watch the SMART Attributes of my SSD?
This is where things could get ugly. Thankfully, we’ve got smartmontools. There are two pieces to smartmontools, and we’re only interested in one: smartctl, a utility to view the SMART attributes of a drive. On my (OS X) laptop, I can run
smartctl -a disk1
to view the SMART attributes of the drive. On Windows you can use the drive letter for a basic disk, like this:
smartctl -a X:
Things get trickier, though, for certain PCI-Express SSDs. Many of these drives, the Intel 910 included, present one physical disk per controller on the PCI-Express card. In the case of the Intel 910, there are four. In these scenarios you’ll need to look at each controller’s storage individually. Even if you have configured a larger storage volume using Windows RAID, you can still read the SMART attributes by looking at the physical devices underneath the logical disk.
The first step is to get a list of physical devices using WMI:
wmic diskdrive list brief
The physical device name will be in the DeviceID column. Once you have the physical device name, you can view the SMART attributes with smartctl like this:
smartctl -a /dev/pd0 -q noserial
Run against my virtual machine, it looks like this:
C:\Windows\system32> smartctl -a /dev/pd0 -q noserial
smartctl 6.1 2013-03-16 r3800 [x86_64-w64-mingw32-win8] (sf-6.1-1)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     Windows 8-0 SSD
Serial Number:    0RETRD4FE6AMF823QE7R
Firmware Version: F.2FKG1C
User Capacity:    68,719,476,736 bytes [68.7 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ATA8-ACS, ATA/ATAPI-5 T13/1321D revision 1
SATA Version is:  SATA 2.6, 3.0 Gb/s
Local Time is:    Sat Apr 27 08:35:03 2013 PDT
SMART support is: Unavailable - device lacks SMART capability.
Unsurprisingly, my virtual drive doesn’t display much information. But a real drive looks something like this:
Holy cow, that’s a lot of information. The Intel 910 clearly has a lot going on. There are two important criteria to watch, simply because they can mean the difference between a successful warranty claim and an unsuccessful one:
- SS Media used endurance indicator
- Current Drive Temperature
The Intel 910 actually provides more information via SMART, but to get to it, we have to use Intel’s command line tools. By using the included isdct.exe, we can get some very helpful information about battery backup failure (yup, your SSD is protected by a battery), reserve space in the SSD, and the drive wear indicator. Battery backup failure is a simple boolean value – 0 for working and 1 for failure. The other numbers are stored internally as a hexadecimal number, but the isdct.exe program translates them from hex to decimal. These numbers start at zero and work toward 100.
If you’re enterprising, you can take a look at the vendor specification and figure out how to read this data in the SMART payload. Or, if you’re truly lazy, you can parse the text coming out of smartctl or isdct (or the appropriate vendor tool) and use that to fuel your reports. Some monitoring packages even include all SMART counters by default.
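If you go the text-parsing route, here’s a minimal sketch of pulling normalized values out of smartctl’s ATA attribute table. The sample text below mimics smartctl output; your drive’s attribute names and columns may differ, so treat this as a starting point rather than a robust parser:

```python
# Sketch: extract normalized VALUE numbers from smartctl's attribute table.
# The sample text imitates smartctl -a output; real output varies by drive.

sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
233 Media_Wearout_Indicator 0x0032   097   097   000    Old_age   Always       -       0
232 Available_Reservd_Space 0x0033   100   100   010    Pre-fail  Always       -       0
"""

def parse_attributes(text):
    values = {}
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():        # attribute rows start with an ID number
            values[fields[1]] = int(fields[3])    # fourth column is the normalized VALUE
    return values

attrs = parse_attributes(sample)
print(attrs["Media_Wearout_Indicator"])  # 97
```

Feed the parsed numbers into the same kind of threshold check you’d use for perfmon counters, and you’ve got a poor man’s SSD wear monitor.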
The Bad News
The bad news is that if you’re using a hardware RAID controller, you may not be able to see any of the SMART attributes of your SSDs. If you can’t get accurate readings from the drives, you’ll have to resort to using the Performance Monitor counters I mentioned at the beginning of the article. RAID controllers that support smartmontools are listed in the smartctl documentation.
Special thanks go out to a helpful friend who let us abuse their QA Intel 910 cards for a little while in order to get these screenshots.
SQL Server execution plans provide a roadmap to query performance. Once you understand how to read the execution plan, you can easily identify bottlenecks and detours. In this high level session, Jeremiah Peschka will introduce you to the concepts of reading SQL Server execution plans including how to get an actual execution plan, how to read the plan, and how to dive deeper into the details of the pieces of the plan. This session is for developers and DBAs who have never looked at SQL Server execution plans before.
In this talk I mentioned a few tools.