
Registered for the SQLpass Summit


I’ll be attending the SQLpass Summit 2007 in Denver in a couple of weeks.  It’s an annual conference for Microsoft SQL Server database administrators with seminars like “Managing Large Data Warehouses” and “Plan Cache Analysis in SQL Server 2005.”  I’m excited because this is the first national SQL conference I’ve been able to attend.

I’m also excited just to get away from the daily grind at work for a few days.  The last two and a half months were brutal, and things have finally slowed down to a normal pace again.  Not everything is back to normal – we’ve still got a couple of SANs that I’m afraid to touch – but for the most part, things are up and running.

I’ve got a new junior DBA starting the same week I’m heading out of town for SQLpass, too.  I needed the help, although I gotta say that the new guy probably isn’t going to get much SQL for the next few weeks.  After doing only SAN and VMware management for months, I’m eager to get back to SQL again.

Ugly month of outages

Hardware, Storage, Virtualization

The past month has been a real challenge, filled with 70-80 hour weeks and clawing back from one disaster after another.

In late June, IBM recommended that we upgrade the firmware on our DS4800 SAN controllers to fix some problems. We obliged, and two days later, our Exchange cluster rebooted without warning. We’d been having some other problems with those servers anyway, so we didn’t connect the dots. A few days later, the reboot happened again, and then again.

Meanwhile, in DBAville, my brand new SQL cluster started failing over from the active node to the passive node for no apparent reason. The servers didn’t reboot, just simply failed SQL over from one node to the other, and it appeared that my stuff was having different symptoms than the Exchange cluster. I worked the issue on my own, and just as I started tracing it back to disk problems…

One of our VMware LUNs got corrupted, taking down 19 servers at once. We pulled an all-weekender to rebuild and restore all of the servers before the next business day. We started working that issue with VMware, and they came up with a good action plan. We were midway through implementing it when it happened again, albeit with slightly different symptoms. There went another weekend rebuilding and restoring servers.

By this time, we had a pretty good idea that all three problems were related to clustered stuff on the DS4800 SAN controller, but it was too late. I built a new standalone SQL server (without clustering) and hooked it up to a different SAN, the DS4500. It was immediately stable, and I breathed a sigh of relief because I had time to troubleshoot the DS4800 problem.

We brought in IBM, and they recommended that we upgrade the firmware on our IBM SAN switches and another SAN – the DS4500 where I’d just moved my SQL stuff. The switch firmware upgrade went fine, but the DS4500 firmware upgrade went straight to hell – the SAN was unusable. That took down dozens of servers and brought IBM engineers into our office on a weekend to figure out how to fix it. In the mess, I lost my SQL box. Thankfully, it was a quick rebuild and I had good backups, but the SAN was unusable for a day, and we lost a lot of time troubleshooting with IBM and LSI.

Thursday, LSI engineers showed up due to a lucky coincidence. About a month ago, when our problems started, IBM had the foresight to schedule a health check. It almost turned into an autopsy. LSI’s staff fixed a lot of the problems quickly, and they have a pretty good answer on the Exchange reboots. They’re recommending a software tweak and IBM is replacing the 4800 controller cards, and if these two things don’t work, IBM is giving us another DS4800 and new servers to build a new Exchange cluster on a new SAN from scratch to see if that fixes the problem.

Unfortunately, that means my weekends at the office haven’t ended yet. This weekend we’re doing the controller cards and the software changes.

Today, I’m taking my first full day off in weeks.  I already felt much better last night, just hanging out and eating dinner with Erika and Ernie.

Bought a new Honda Mini-Minivan


Honda Fit at Miami Ink

After the Jeep developed a transmission problem last weekend, Erika and I decided to take the plunge. We traded in the Saturn for a new Honda Fit. Our plan had always been to have one really reliable car and one fun car, and for a couple of years, the Jeep was both. Now the Jeep just plays the part of the fun car, and the Honda is the reliable one. Knock on wood, anyway.

The Fit looks like a minivan that went through a copier at 50% reduction. I would have preferred something with two doors, but my coworkers seem to appreciate not having to climb into back seats when we go to lunch. It’s fun to drive, and it feels much more expensive than its $17k price would indicate.

During the research & buying process, Erika kept noting that I wasn’t really excited. She was more excited than I was, mostly because it came with a 5-day Carnival cruise for two. I figured out that I wasn’t peppy about it because this was the first 100% rational car purchase I’ve made in years. I bought our Volvo C70 out of lust, bought the Jeep out of lust, and then bought this one out of practicality. Sure, I’d love to just own awesome cars, but there’s something to be said for being mature and getting something reliable, cheap to maintain, and cheap to operate.

Argh. We’re getting old. The next one will be a power hardtop convertible.

The Jeep takes a road trip – without us


My Jeep on a tow truck

The Jeep is going on its own Memorial Day weekend road trip – to the dealer. Last night, Erika and I tried to take it to P. F. Chang’s for our payday Friday ritual only to find the shifter wasn’t connected to anything. I could slide it around freely between R, N, D, 3, 2, 1, Contact.

Of course, this brought back our thoughts of buying a second Jeep, especially now that we have a Saturn with a dying starter and a disabled Jeep.  I’ll take the Saturn to get the starter fixed on Tuesday, and I’m fighting the temptation to go pick up a lease on a Civic or something similarly inexpensive and reliable.

I still love my Jeep dearly.  I plan on keeping it for the rest of my life, albeit as a second car as it gets older.


My experience at the Apple Genius Bar


Apple retail stores have a Genius Bar where you can get tech support.  I walked over to the Apple Store on Lincoln Road (damn, I love living in Miami Beach) and bellied up to the bar.  The bartender, wearing a black t-shirt with the label “Genius” on the front, helped out a couple of customers.  While I waited, another Apple employee noticed me waiting, asked if I had a reservation, and then helped me make one with a nearby iMac.  Wow.

I sat watching the Genius tend to his customers.  He was helpful, patient, and fair.  One of the laptops needed serious repair, and he explained the customer’s options as to how to back up their data before sending in the laptop for service.  The customer was clearly in over his head, but the Genius helped him make the right decision without trying to fleece the guy for the $50 backup charge.

When my turn came up about ten minutes later, the Genius looked at my Macbook Pro for less than sixty seconds, agreed that the battery was hosed, and went into the back to fetch me a new battery – free under warranty.  He installed it, made sure the system recognized the additional charge capacity, and then sent me on my merry way.

Damn.  From here on out, I’m insisting that all of my relatives get Apples.

Take Your Kids To Work Day


Yesterday was Take Your Sons and Daughters To Work Day, and I got roped into leading sessions. I ran a presentation on desktops & servers: we opened up desktops, explained what the parts did, showed off a couple of servers, and finished things off with a round of LAN games. We were a big hit with the kids as well as the parents because evidently ours was the least boring and most interactive session, and I took pride in that.

I learned some lessons, though, and I’m passing those along here in case anybody else needs to run one of these sessions for kids:

10 kids is a great group, 16 kids is too many. The 16-kid groups quickly devolved into screamfests, and we couldn’t get everybody to stay quiet long enough to learn something. The 10-12-kid groups were more manageable – 2-3 kids per volunteer seemed about right. Remember, we’re not professional teachers – it’s harder for us to manage this number of kids.

Know how to get problem kids out fast. I had a kid try to slice his arm open with a stick of RAM, and then he tried to jam a CPU into his skin to see if it would leave a mark. He was actively disrupting the other kids, trying to get them to do the same thing, and I had no idea what on earth to do. It was an unsafe situation for him as well as the other kids. Looking back, I wish I had asked him to step into the back of the room and had the chaperone go drop him off with his dad, but it all happened so fast that I didn’t know what to do. Thank God he didn’t break his skin, because I’d have freaked out.

The chaperones need to be trained, and then be actively involved. We had some chaperones that sat in the back quietly while the kids yelled and screamed, and we had other chaperones who actively helped to keep the kids under control. We had much better results with the latter.

Don’t let parents leave with the kids during breaks. During lunch and the snack break, the parents took kids off to various places, and they didn’t all come back at the start time. As a result, we had to manage 10 bored kids for 15-20 minutes. We couldn’t start the lessons until everybody returned, and the kids who were waiting got uncontrollable while they sat around for the latecomers. An alternative would be having the chaperones run some kind of activity until all of the kids are back together.

We need separate tracks for 8-year-olds versus 12-year-olds. 8-year-olds are old enough to get something out of it, but the material has to be completely different from what you present to a 12-year-old. My 11-12 year old folks wanted to know more and more and more, but the 8-10 year olds needed shorter, more visual lessons.

Code Camp Fort Lauderdale


Me at Florida Code Camp

It’s 7:30 AM, and I’m sitting in an empty conference room. Why? Because Mike Culver from Amazon Web Services is about to present sessions on S3 and EC2, two of my favorite upcoming technologies. I’ve been getting more and more involved with virtualization over the last year – Southern is running 2/3 of their Windows servers under VMware – and Amazon Web Services attracts me for its tie-ins with scalability and virtualization. For the life of me, I can’t figure out how to use the technologies at Southern, but I’m still looking.

There’s actually a keynote starting shortly in another room, but I’m already camped out in Mike’s conference room to make sure I get a good seat.

Update after the sessions: Wow, this guy was great – funny, informative, and motivational.  Most of the attendees were interested right away, and asked great, pointed questions.  I didn’t learn much, but seeing EC2 in action made me abandon the rest of the day’s sessions and start playing with EC2 instead!  I’m curious to see if I can get it working as a low-powered DRP SQL Server using asynchronous database mirroring.

The Boutique Generation


Tara Hunt blogged about the Boutique Generation, saying:

“You may be a member of the Boutique Generation if you:

  1. Enjoy shopping at your local pharmacy, grocer, clothing store where
    you know the owners and feel yourself going out of your way to give
    them business even if their prices are higher because you prefer to
    have the personalized service over cost reduction
  2. Enjoy shopping at specialty stores, like Cheese Emporiums,
    Perfumeries, Tea Shops, Sephora (drooool), etc. that carry a single,
    narrow type of product…”

The list goes on, and as I read through it, everything really rang true. It rang so close to home that I had to stop and ask, “Doesn’t everybody feel this way?” I guess not, though, because when I drive past Wal-Mart parking lots, they’re almost always packed. That place is a zoo, and I can’t stand to set foot in there because it’s so utterly impersonal, so bland, so assembly-line. It’s not that I’m financially loaded and can afford to shop at Saks – I can’t – but at least I can choose Target, where they offer more products with the types of personal stories that Tara’s article touches on.

Tara’s article is a real thought-provoker. The only thing I would suggest is that the word “generation” probably doesn’t apply. For example, my mom exhibits these exact same personality traits, and by definition, she’s a member of a different generation. (Don’t take offense to that, Mom.)


FAQ about working with an Apple Mac


Word is getting out at the office about me using a Mac, and at least once a week now, someone comes over to my cube to ask a few questions. Here are the most common ones:

Q: Is that your personal computer, or your work one?
It’s my work one, and it’s the only computer I use.

Q: Is that a normal Microsoft keyboard & mouse, and a normal Dell flat panel?
Yes. These days, Apples use the same peripherals as regular laptops.

Q: So how’s it different from a normal laptop?
Aside from OSX, the engineering of the laptop itself is just awesome. The DVD burner loads via slot – no gimmicky sliding plastic tray. The power adapter has a magnetic connection that just pops loose if it snags on something. The keyboard glows at night, so you can see each key and its corresponding letter/number. The tiny webcam above the monitor blows everybody away. Everything you want is built in, instead of being optional, and since it’s a standard piece of hardware, all of the drivers work perfectly from the get-go.

Q: How much did it cost?
Around 10-15% more than a similarly equipped Dell laptop on our corporate discount, with the exception of add-on memory. Apple marks up memory like crazy, so it makes sense to get that from a high-quality third-party memory vendor instead. We get our Microsoft licensing for Windows and Office through our Microsoft Enterprise Agreement, so there’s no extra cost for software licensing over and above what we’d pay on a Dell.

Q: It runs Windows and Office?
Yep, there’s a Mac version of Office and Outlook (called Entourage). Since it can run both Apple OS X and Microsoft Windows simultaneously using Parallels or VMware, it can do everything I need to do at the office.

Q: Are you on the domain?

My Windows virtual machine is, but my Mac isn’t. I could add it to the domain, but I didn’t see any advantages to that, so I’m an island for now. I can access everything on the domain, though – the Mac just prompts for my username & password once, and saves it in my keychain for reuse each time I try to access a domain resource again.

Q: Do you really use that many Mac programs?
Surprisingly, yes. When I first started to switch, I vowed to try to switch over to Mac entirely. I do my email, remote desktop, Word, Excel, presentations, network diagnostics (ping, traceroute, etc), instant messaging, iTunes, and more all on the Mac side of things. I do run Windows to use SQL Server 2005 Management Studio, VMware VirtualCenter, and IBM Director, and I couldn’t do my job (or at least as good of a job) without those tools. On the weekends, though, I take my Macbook Pro home and I don’t even start Windows on it. There’s even a Mac client for our VPN.

Q: What about that iLife, iDVD, iPhoto, iWeb, GarageBand stuff?
I don’t actually use any of that stuff. I blog using the Performancing tools for Firefox, and I manage my photos with Flickr, but that’s about as close as I get to creativity. I admire those artsy folks, but I’m not one of them.

Q: How long did it take before you were productive?
What the hell are you talking about? I’ve been productive for years! Oh, wait, you mean with the Mac, sorry. It took about two weeks, and I couldn’t have done it without the excellent book Mac OS X Support Essentials. It answers a lot of tech support questions that you’ll experience along the way – things that seem obvious in Windows, but are just handled a little differently in OS X. Not saying OS X is better or worse, just a few things are different, like where your files are stored or how to set an IP address. This book is aimed at technical users who need to solve specific problems, not a how-do-I-get-started approach.

Q: How long is the battery life?
I turn the display brightness way down when I’m working off the grid, so I get around 3-3.5 hours.

Q: What are the drawbacks?

  • There’s no connector for a docking station, so I have to plug/unplug the same six cables every time I bring the machine into the office. (Power, audio, USB, ethernet, video, USB2 hard drive).
  • Microsoft Entourage (the Outlook equivalent) doesn’t do a very good job of detecting free/busy time in the company calendar, or scheduling meeting rooms as resources. Fortunately I don’t schedule many meetings myself.
  • Syncing with a handheld is easier than on Windows – but only as long as you have a supported handheld. My Cingular 8125 isn’t supported, but thankfully I use Direct Push with our Exchange server anyway, so that doesn’t really matter to me. It might matter for some folks, though.
  • Since only 3 of the 10,000 employees use Macs, it’s much harder to get peer support. For example, the authentication on our Cisco wireless access points doesn’t seem to like Macs, and the network guys don’t have the time to find out why. I’m guessing I could figure it out if I spent enough time on it, but I don’t roam with my laptop anyway (I use my 8125 for meeting notes) so I haven’t taken the time either.

Q: Would you recommend buying one?
For home use – yes, absolutely. For office use – a qualified yes, but check your applications first. If you spend most of your day in a Windows app like, say, the SAP GUI, then it probably makes sense to stick with Windows.


SQL Server Perfmon (Performance Monitor) Best Practices

Monitoring, SQL Server

Database administrators need to know each SQL Server’s weakest link so we know where to focus our hardware budgets and time. On any given server, we need to know what the slowest component is, and where it’ll pay off to invest hours and dollars. This is performance tuning, and the first place to start is by using Performance Monitor.

Performance Monitor, or Perfmon, measures performance statistics on a regular interval, and saves those stats in a file. The database administrator picks the time interval, file format, and which statistics are monitored. After the stats are gathered over a period of time (hours or days), we can do analysis by opening the results in Excel and setting up some basic formulas.

Perfmon isn’t just for SQL Server tuning: system administrators use it to monitor performance on Windows itself, Exchange, file & print servers, and anything else that can run into bottlenecks. As a result, it’s easy to find Perfmon information online, but it’s not always specific to SQL Server. Since every application has its own set of statistics, it helps to get SQL-related tips.

Setting Up Perfmon for SQL Server Tuning

On your desktop or laptop (not the SQL Server itself), go into Control Panel, Administrative Tools, Performance. The first thing that comes up is a 1990’s-looking line chart with a few basic performance stats on your workstation itself. Interesting, but not what we’re after.

On the left side, expand Performance Logs and Alerts, and click on Counter Logs. Counter Logs let us pick a set of performance counters and log them to a file periodically. One counter log is listed by default, but we’re going to add a new one. Right-click on Counter Logs and click New Log Settings. Name it with the name of your database server, because each server should have its own log settings. We could theoretically build a single counter log for all of our database servers, but then anytime we want to run the counter log against one of our servers to test its performance, it will log data for ALL servers, and that’s not usually how we want to do performance tuning. After typing in a name, we can start setting up the counters we want to log.

The Basics of SQL Server Perfmon Counters

Click the Add Counters button, and change the computer name to point to the SQL Server name instead of your workstation’s name. (If you have multiple SQL Server instances on the machine, each named instance shows up with its own set of performance objects, like MSSQL$InstanceName.) After typing in the server name, hit Tab, and the workstation will pause for a few moments. It’s gathering the list of performance objects available on that server. Each server will have a different list of performance objects depending on what software is installed on it: for example, SQL Server 2005 offers a different set of counters than SQL Server 2000.

In the Performance object dropdown, choose the “Memory” object. The list of counters will change. Select the “Pages/sec” counter, and click the Add button. It will seem like nothing happened, but try clicking the Add button again. You’ll get an error saying that the counter was already added. It’s not exactly an elegant user interface, but it works. Technically. Now scroll up to the “Available MBytes” counter, highlight it, and click Add. Those are the only two memory counters we’re going to monitor for now.

In the Performance object dropdown, choose the “Processor” object, and in the counters list, highlight the “% Processor Time” counter. Notice that in the right-hand window, we now have more instances to choose from. We can track the % Processor Time statistic for each individual processor, or for all of them combined (_Total). Personally, I like to highlight each individual processor, and then click Add. I don’t find the _Total statistic useful because it’s simply the sum of all of the individual processors. On a 4-CPU box (single cores, no hyperthreading), that means our _Total statistic can range from 0 to 400. However, 100 can mean two different things: each processor could be running at 25% utilization, or one processor could be pegged at 100% while the rest twiddle their thumbs waiting for work. It could also mean any number of other combinations, like two processors at 50%. The _Total number usually gives me more questions than answers, so I don’t bother logging it. Highlight all of the processor instances except _Total, and click the Add button.

In the Performance object dropdown, choose Physical Disk, and choose the “% Disk Time” counter. Notice that again on the right side window, we get multiple instances; this time, we get one per physical disk. In performance terms, physical disk means one disk shown in Computer Management’s Disk Management tool. One physical disk may have multiple partitions, each with its own drive letter, but for performance tuning, we want to know how hard that one physical disk is working.

This one “physical disk” may have a bunch of actual physical drives, like in RAID systems. However, Windows isn’t quite smart enough to know exactly how many drives are in the RAID array, so the term “physical disk” is a little misleading here.

Highlight all of the physical disks in the instance list (again, leave off the _Total instance) and click the Add button.

The Best SQL Server Performance Monitor Counters to Analyze

Now that you’ve got the hang of adding counters, here’s the full list we need to add, including the ones mentioned above:

  • These are listed OBJECT first, then COUNTER
  • Memory – Available MBytes
  • Physical Disk – Avg. Disk sec/Read
  • Physical Disk – Avg. Disk sec/Write
  • Physical Disk – Disk Reads/sec
  • Physical Disk – Disk Writes/sec
  • Processor – % Processor Time
  • SQLServer: General Statistics – User Connections
  • SQLServer: Memory Manager – Memory Grants Pending
  • SQLServer: SQL Statistics – Batch Requests/sec
  • SQLServer: SQL Statistics – SQL Compilations/sec
  • SQLServer: SQL Statistics – SQL Re-Compilations/sec
  • System – Processor Queue Length
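For reference, here’s the same list expressed as Perfmon counter paths. The file-writing sketch below is my addition, not part of the original walkthrough: a text file of paths like these can be fed to Windows’ built-in logman tool (logman create counter MyLog -cf counters.txt -si 15 -f csv -o C:\PerfLogs\MyLog) as a scripted alternative to clicking through the GUI. The (*) instance wildcards and the counters.txt filename are assumptions; adjust them per server.

```python
# Sketch: the counter list above as Perfmon counter paths, written out
# to a text file that logman can consume. Raw strings keep the
# backslashes literal.
COUNTERS = [
    r"\Memory\Available MBytes",
    r"\PhysicalDisk(*)\Avg. Disk sec/Read",
    r"\PhysicalDisk(*)\Avg. Disk sec/Write",
    r"\PhysicalDisk(*)\Disk Reads/sec",
    r"\PhysicalDisk(*)\Disk Writes/sec",
    r"\Processor(*)\% Processor Time",
    r"\SQLServer:General Statistics\User Connections",
    r"\SQLServer:Memory Manager\Memory Grants Pending",
    r"\SQLServer:SQL Statistics\Batch Requests/sec",
    r"\SQLServer:SQL Statistics\SQL Compilations/sec",
    r"\SQLServer:SQL Statistics\SQL Re-Compilations/sec",
    r"\System\Processor Queue Length",
]

# One counter path per line, the format logman's -cf switch expects.
with open("counters.txt", "w") as f:
    f.write("\n".join(COUNTERS) + "\n")
```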

After adding those, click the Close button, and we’re back to the counter log setup window. Under “Sample data every”, the default should be 15 seconds, and that’s fine. In the “Run As” entry, type in your domain username in the form domainname\username, and click the Set Password button to save your password. This lets the Perfmon service gather statistics using your domain permissions – otherwise, it tries to use its own credentials, and they probably won’t work on the remote server.

Click on the Log Files tab, and change the log file type to Text File (Comma delimited). This lets us import it into Excel easier. Click on the Configure button, and set the file path. I save mine in a file share called PerformanceLogs so that I can access it remotely via a mapped drive, and so that I can share it with other users who want to know how their server is performing. Click OK, and click OK again, and you’ll be back at the main Performance screen listing the counter logs.

If all went well, your counter log is now running. To make sure, right-click on your new counter log’s name, and the Start option should be grayed out. If it’s not, most likely there was a permissions problem with the username & password you used. Click Properties, and try setting the username & password again.

Let Perfmon run for a day or two to gather a good baseline of the server’s activity. It’s not that invasive on the SQL Server being monitored, and the in-depth results will pay off. The more data we have, the better job we can do of analyzing the Perfmon results.

Turning SQL Server Perfmon Logs Into Pretty Excel Spreadsheets

After Perfmon has run for a day or two, go back in and right-click on the counter log and click Stop. That will cease monitoring of the database server.

Open the results CSV in Excel. It’s going to look ugly at first:

Perfmon results in Excel
Perfmon Counters In Excel

This is the data from Perfmon. Each column represents one of the metrics, and each row represents one time period. It’s hard to see that with the way it’s currently formatted, but we’ll solve that. Time to put some lipstick on this pig. We’re going to:

  • Get rid of the half-empty row #2 and cell A1
  • Format the dates
  • Format the numbers
  • Clean up the headers
  • Add formulas across the top (my Secret Sauce part)

Excel pros can figure out how to do this, but for the folks who don’t spend too much time in spreadsheets, I’ll explain how:

Getting Rid of the Half-Empty Row and Cell A1
Row #2 in this screenshot only has numbers in the first few columns. That’s typical for Perfmon – when it first starts monitoring a server, it takes a round of checks before it gathers all of the data. Highlight row 2 and hit the Delete key on your keyboard. (I figured we’d start easy.)

Then delete the contents of cell A1, which we don’t need.

Format the Dates in Column 1

Click on cell A2 and look at the formula bar, and you’ll see that it’s a date – it’s just that Excel picks the wrong format to display it. Highlight all of column A (by clicking on the “A” button at the top of the column). Right-click on that A button and click Format Cells. Pick a Date format that includes both the date and the time, and click OK.

At that point, the column will probably show all “#######” entries, which just means it isn’t wide enough. Double-click on the line between the A and B columns, and Excel will autosize the column.

Format the Numbers

In our spreadsheet, some of the numbers have tons of stuff after the decimal place and it’s hard to read. Highlight all of the columns except A (our dates), and right-click and Format Cells. Choose the Number format, zero decimals, and check the box for Use 1000 Separator.

Some of our numbers will have valid data less than one, but we’ll fix that later. Right now we’re just getting the basics cleaned up.

At this point, your spreadsheet should look like this:

Perfmon results in Excel, a little cleaned up
Perfmon Results Formatted

Now let’s solve that mystery row of headers.

Cleaning Up the Headers

In row #1, notice how all of the cells in my screenshot start with \\MYSERV. That’s because the server is called MYSERVERNAME. (Well, not really, but I faked it for this demo.) Click on one of those cells, and you’ll see the full text that Perfmon saves in the header, like “\\MYSERVERNAME\Memory\Available MBytes”.

Do a find & replace by hitting Control-H. In the “Find What” box, type your server name with two backslashes, like \\MYSERVERNAME. In the “Replace with” box, erase everything – we just want to delete every occurrence of \\MYSERVERNAME. Click Replace All, and Excel should tell you how many replacements it made.

Now do the same for these strings, but be careful to note which ones have spaces:

  • Memory
  • PhysicalDisk
  • Processor
  • SQLServer:Buffer Manager
  • SQLServer:General Statistics
  • SQLServer:Memory Manager
  • SQLServer:SQL Statistics
  • System
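If you’d rather script this cleanup than run nine find & replace passes, here’s a rough sketch of the same idea in Python. The MYSERVERNAME default and the “Counter (instance)” output format are my choices for illustration, not anything Perfmon dictates:

```python
import re

def clean_header(header, server="MYSERVERNAME"):
    """Turn a raw Perfmon header like \\\\SERVER\\Object(instance)\\Counter
    into 'Counter (instance)', so the Excel columns stay readable."""
    pattern = (r"\\\\" + re.escape(server) +
               r"\\(?P<obj>[^\\(]+)(?:\((?P<inst>[^)]*)\))?\\(?P<ctr>.+)")
    m = re.match(pattern, header)
    if not m:
        return header  # leave non-counter columns (like the timestamp) alone
    if m.group("inst"):
        return f"{m.group('ctr')} ({m.group('inst')})"
    return m.group("ctr")
```

Run it over row 1 of the CSV and you get the counter name (plus disk or CPU instance) without the server and object clutter.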

Now you’ll be able to tell a little more about what’s in each header, but to make it better, change the font size to 8 for that header row. Then highlight all of the columns and make them wider – not terribly wide, but just wide enough that you can see most of the headers for easier analysis. It’ll end up looking like this:

Perfmon results in Excel, now with cleaner headers
Perfmon Counter Names Now Visible

Getting better. Before the next step, go ahead and save the spreadsheet in Excel format, because the calculations will be slow.

Add Formulas Across the Top

Here comes the secret sauce: we want formulas across the top to quickly summarize our findings. It’ll be easier to understand by looking at the final desired result first:

Perfmon results in Excel, after their Extreme Makeover
Perfmon Results with Formulas

In this particular example, don’t be fooled by the zeroes in columns C through E – that’s because this server didn’t actually have any activity for those metrics.

Start by right-clicking on the Row 1 button at the far left of the screen and click Insert, and repeat that process seven times so that we’ve got seven empty rows at the top.

Then type in the labels in cells A2-A6 to match my screen shot.

In the B column, put in the formulas. In my typed examples below, I assume that your data goes from row 9 to row 100, but you’ll want to change that 100 number to wherever your last data row is.

  • B2 (Average) will be =AVERAGE(B9:B100)
  • B3 (Median) will be =MEDIAN(B9:B100)
  • B4 (Min) will be =MIN(B9:B100)
  • B5 (Max) will be =MAX(B9:B100)
  • B6 (Std Deviation) will be =STDEV(B9:B100)
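The same five numbers can be computed outside of Excel, too. Here’s a minimal Python sketch using the standard statistics module; note that statistics.stdev is the sample standard deviation, which matches Excel’s STDEV():

```python
import statistics

def summarize(values):
    """The five summary formulas from the spreadsheet, applied to one
    counter column's worth of samples."""
    return {
        "average": statistics.mean(values),
        "median":  statistics.median(values),
        "min":     min(values),
        "max":     max(values),
        "std_dev": statistics.stdev(values),  # sample stdev, like Excel's STDEV()
    }
```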

Finally, the icing on the cake: hit Control-Home, then move your cursor to cell B9, which should be your first cell with data. Hit Alt-W-F-Enter, which should Freeze Panes. That way we can move around our spreadsheet while we still see the dates on the left and the formulas & headers across the top.

Now, we’ve got a gorgeous spreadsheet that’s easier to roam around and analyze.

Analyzing the Performance Monitor Results to Find SQL Server Bottlenecks

Now for the detective work! There are a lot of smart people out there who have some great ways of interpreting Perfmon results. My favorite is the Microsoft SQL Server Customer Advisory Team (SQLCAT), who published two fantastic blog posts that sum up which counters to look at and what they mean.

So now that we’ve established how the smart people do it, I’ll tell you how I do it, ha ha ho ho.

Look for Obvious CPU Bottlenecks

First, look at the Processor Queue Length for CPU pressure. If this number is averaging 1 or higher (except during the SQL Server’s full backup window if you’re using backup compression), this means things are waiting on CPUs to become available.

I’m suspicious when this number is over 1, because it often means that people have installed other software on the SQL Server, such as applications or web sites. That’s a problem. If you get pushback from management saying they don’t want to buy new servers, point out that two CPUs of SQL Server Enterprise licensing cost around $50-60k – which would pay for a separate server for the web app. If you can get applications off the SQL Server, you don’t need as much CPU power, and fewer CPUs mean lower licensing costs.

There are more in-depth Perfmon metrics you can add to your capture if the Processor Queue Length keeps showing up, but for junior DBAs, the first thing I’d recommend is to simply remote desktop into the SQL Server. Right-click on the taskbar, click Task Manager, and click on the Processes tab. Check the box that shows processes from all users, and then click on the CPU column to sort the CPU percentage from high to low. Sit and watch it for a minute or two. Which processes are using CPU power? If it’s SQL Server (sqlservr.exe), then we need to do more research, but there’s a good chance it’s another application, and we need to get that app off this server. Generally speaking, enabling hyperthreading on a SQL Server is not going to fix that kind of a problem.
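To make that rule of thumb concrete, here’s a rough Python sketch of the check I’m describing. The queue-length samples and the backup window hours are invented for illustration – swap in your real Perfmon export:

```python
# Hypothetical (hour_of_day, processor_queue_length) samples from Perfmon.
samples = [(9, 0), (10, 2), (11, 3), (2, 5), (12, 1), (13, 0)]

# Assumed full backup window (1am-3am). Queue spikes here are expected if
# you're using backup compression, so we ignore those readings.
BACKUP_WINDOW_HOURS = {1, 2, 3}

relevant = [q for hour, q in samples if hour not in BACKUP_WINDOW_HOURS]
avg_queue = sum(relevant) / len(relevant)

# Averaging 1 or higher outside the backup window means something is
# waiting on CPUs - time to go look at Task Manager.
if avg_queue >= 1:
    print(f"Possible CPU pressure: queue length averages {avg_queue:.1f}")
```

Nothing fancy – it’s the same eyeball test, just repeatable against every capture you save.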

Look for Obvious Memory Problems

On the Memory – Available MBytes statistic, look for fluctuations of a couple hundred megabytes or more. If that’s happening, then either the SQL Server’s memory is being adjusted dynamically (probably a bad idea for performance) or users are actively logging into the SQL Server via remote desktop and running software. Correlate these fluctuations with disk activity: when available memory drops, does disk activity increase? Does that disk activity hit the page file drive? If so, use it as a demonstration – show the remote desktop users that this is exactly why people shouldn’t remote desktop into database servers.

If Memory – Available MBytes dips below 100MB, that’s an indication the operating system may be getting starved for memory. Windows may page your application out to disk in order to keep some breathing room free for the OS.
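Both of those memory checks are simple enough to script against a Perfmon export. Here’s a hedged sketch in Python – the sample values are made up, and the thresholds are just the rules of thumb above:

```python
# Hypothetical Memory - Available MBytes samples over time.
available_mb = [1850, 1790, 1420, 95, 1600, 1580]

LOW_WATERMARK_MB = 100   # below this, the OS may start paging apps out
FLUCTUATION_MB = 200     # swings this large are worth investigating

# Did we ever dip low enough to starve the OS?
starved = [m for m in available_mb if m < LOW_WATERMARK_MB]

# Find sample-to-sample swings of a couple hundred MB or more.
swings = [
    abs(b - a)
    for a, b in zip(available_mb, available_mb[1:])
    if abs(b - a) >= FLUCTUATION_MB
]

if starved:
    print(f"OS may be memory-starved: dipped to {min(starved)}MB")
if swings:
    print(f"{len(swings)} swings of {FLUCTUATION_MB}MB+ - "
          "check remote desktop users or dynamic memory settings")
```

When either check fires, that’s your cue to go correlate with the disk counters and the remote desktop login history.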

Look at Disk Metrics Last

The reason we don’t look at disk performance metrics first is because memory problems can trigger disk problems. If a SQL Server doesn’t have enough memory, or if the SQL Server account doesn’t have the necessary permissions to lock pages in memory, then disk activity may be artificially high on the page file drive.

As we examine the disk numbers, make a mental note of which drive has the page file – and there may be multiple page files as well. Also, find out if one physical drive array is shared by multiple drive letters. This is especially important for servers that boot from SAN: the boot drive may contain the page file, and that boot drive may be on a shared set of spindles with several drive letters. Heavy page file activity will slow down access to all of the affected drive letters in Windows.

Another caveat: mount points. If the shop uses mount points, a drive letter may be broken up into several sets of physical drive arrays. If you don’t know what a mount point is, then you’re probably not using them, and you can ignore that advice.


DBA 101: The Always-On Workstation


DBAs need an in-office computer to run overnight and weekend performance logs, traces, and long-running queries. For example, let’s say you need to run a long series of maintenance queries to rebuild indexes over the weekend. You shouldn’t be remote-desktopping into your production servers to run this query: instead, you just need a simple workstation that you can leave logged-in, with the desktop locked, that runs the query.

If you were assigned a laptop as your primary workstation, then you’ll need an alternate desktop computer or an existing development server (never a production server). It doesn’t have to be 24/7 reliable or high-powered – a previous generation desktop or a virtual machine is fine. It should be a server that you and only you manage. Otherwise, someone else will restart the server at the most inopportune times, thereby shutting off your traces and performance logs.

Picking the Operating System

If your company uses SQL Server 2000, then this DBA machine should not run Windows Vista, since the 2000 tools don’t run on Vista. You can manage SQL Server 2000 with SQL Server 2005’s Management Studio, but sometimes managing a 2000 server is easier with the 2000 tools.

If the machine runs a workstation version of Windows (Windows 2000 Pro, Windows XP, Windows Vista) be aware that only one person will be able to remote into this machine at a time. If it runs a server version of Windows, then two people (or three, depending on how savvy they are) can remote into the machine. If you’ve got two or more database administrators, either use a server, or give each DBA their own workstation for management.

If the security folks ask tough questions about whether you need to be an administrator on the box, thank them for doing their job, and agree with them. (Hopefully they’ve been this tough about your production servers as well.) The only rights you need are the ability to remote-desktop in and run Perfmon and SQL. You don’t need to install anything else, start & stop services, etc.

Applications to Install

It does not need to be a SQL Server. It only needs the management tools installed, like 2000’s Enterprise Manager and 2005’s Management Studio. When you run traces (capturing queries and events from a SQL Server), you will need to save those events to another SQL Server, but it doesn’t have to be this machine. If you don’t have a development server, then – well, you’ve got other problems, but let’s start by fixing them and installing SQL Server 2005 Developer Edition on this machine. During installation, point the databases at a different drive, not the C drive. If you leave them on the C drive and you save a large trace to this machine, it’s easy to run out of drive space and crash the machine altogether. That would not be good. If the databases are on the D drive, for example, you can still fill up the drive and crash SQL Server, but at least the entire machine won’t crash.

On this server, set up a shared folder called PerformanceLogs, and give read-only access to everyone. When you run performance logs, set them up to be saved in this directory. That way, when other users (developers, junior DBAs, network staff) need to interpret the performance numbers, you can point them to that share and they can always grab the latest copies without your assistance.

Install the free version of LogMeIn’s remote control application.  This helps you remote control the machine from your house even when the company’s VPN is down or when you’re at a location where you can’t get the VPN software installed.  I can’t say enough good stuff about LogMeIn – and in fact, I wrote a separate blog post about it too.

Next Steps

Once you’ve got your always-on workstation, here are a couple of articles you can read for your next steps:

  • Performance Monitoring with Perfmon – Perfmon is free, and you can use it from your always-on workstation to investigate performance problems with database servers.
  • Free SQL Server tools – a list of utilities that you’ll want to install on your always-on workstation to do performance tuning, alerting, and so on.

Budgeting 101 for SQL Server database administrators


It’s that beautiful time of year when everybody watches the colors. No, not the changing browns and reds of the foliage, but the red and black numbers on P&L statements. That’s right – it’s budget time.

In order to build a good database budget, we have to ask our managers a few key questions:

Are we going to archive old data, or keep it online?

Database growth can be controlled by keeping a limited amount of history in the database. As DBAs, though, our job isn’t to limit the business, but to give them options. The business can decide whether keeping more data online is worth the additional expenditure for hard drives.

Example: take a data warehouse with 3 years of history. Near year-end, DBAs may be tasked with pruning this data out, but users often complain that they’ve decided they want the extra data online forever. Budget time is the perfect time to present this question to the end users. If the warehouse needs to go from 3 years to 4 years of storage, that’s an increase of 33%. Take the current storage costs, add 33%, and present that as the cost to keep an additional year of history online. Every year, recalculate storage costs and ask the same question again.
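The math behind that 33% is trivial, but it’s the kind of thing worth writing down so you can rerun it every budget season. A back-of-the-envelope sketch in Python – the $60k storage cost is made up, so plug in your shop’s actual number:

```python
# Back-of-the-envelope storage budgeting. The dollar figure is
# hypothetical - use your real storage cost for the current window.
current_years = 3
proposed_years = 4
current_storage_cost = 60_000

# Going from 3 to 4 years of history is a 1/3 (~33%) increase.
growth = (proposed_years - current_years) / current_years
proposed_cost = current_storage_cost * (1 + growth)

print(f"Going from {current_years} to {proposed_years} years of history "
      f"adds {growth:.0%}, for a total of ${proposed_cost:,.0f}")
```

Rerun it every year with updated storage costs, and the “do you still want all that history online?” conversation practically writes itself.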

Include backup costs in these calculations, too. A truly savvy DBA will organize their filegroups so that old, historical data lives on read-only filegroups that rarely get backed up, but if we were all that good, we wouldn’t be having this conversation.

How has sales growth changed over the last year?

Database size can sometimes be changed based on business growth. If an e-commerce book store sold 15% more books this year than last year, and if they expect to grow by another 15% next year, then that’s a lot more transactions and customers that have to be stored in the database.

I’ve found that it’s easier to budget by past growth instead of future predicted growth because – well, let’s face it, predictions are hard. It’s much easier for a manager to understand and agree that we’ve already grown by 15% over the last year, and that our infrastructure has to keep pace.

Has our company headcount grown this year, and what’s the projected growth?

If a database application supports 1,000 users, and the company projects a growth of 5% next year, then that’s 5% more users that the application has to support. Granted, most applications are built in a way that scales easily, but big growth happens fast. Changes in headcount give DBAs a clear, concise way to show why their hardware has to be able to support more load.

What pilot applications are being rolled out to more users this year?

In small to midsize companies, database administrators aren’t always brought into meetings early enough during the planning phase of application development. Proof-of-concept applications become pilot applications, which become widely-rolled-out applications. At budget time, survey the landscape to see which small applications are loved by the users, and what kinds of wide adoption their managers expect to see.

This is where being a DBA gets hard: database servers are often shared between multiple applications. Being a good DBA means knowing what percentage of the database server load is being consumed by a particular application. A new application might be a nasty resource hog, and it’s not obvious until the number of users scales up. Now is the time to run profiles and performance logs to get insight into each pilot application. The more a DBA knows about each pilot application, the better job they can do budgeting for the future.

Finally, what more could you do with more time?

Make a list of day-to-day and month-to-month DBA chores that are getting overlooked due to a lack of time. Everybody has them – whether it’s fire-drill test restores, performance tuning, or even plain old career training. Give management a list of things that the department could accomplish with one or more additional head count. The list should be unbiased, matter-of-fact items, not “give me another body or I’m going to die.” The less emotion and the more raw facts the list includes, the easier it is for managers to forward it on to their managers and get approval.

Even if another person isn’t needed, make the list anyway. That way, a year or two from now when the department really does need another DBA, it’s easy to compare last year’s list with this year’s list and see that the company’s needs are changing.

Software projects aren’t like normal projects


The ever-brilliant Jeff Atwood wrote a fascinating blog post about why building software is hard: it’s never been built before.  He was following up on a similar blog post, but in a nutshell, software projects are harder than construction projects because they’re nothing like projects at all.  Every time we build a new piece of software, it’s like building an experimental plane. We don’t even know if it will fly.  Construction projects, on the other hand, are fairly repeatable.  Sure, every now and then somebody builds an Eiffel Tower or the first Frank Gehry building, but otherwise, it’s the same basic things done over and over.

I can completely relate to this because I’m in the midst of bringing a new project – err, experiment – to life as we speak.  I’m being asked to budget dollars for a database server and guarantee that the queries will be returned within X seconds.  The problem has two parts: the queries haven’t been designed yet, and nobody has even documented exactly what they’re going to do.  In order to guarantee that an unknown query will finish in a known amount of time, I’m going to need an exorbitant amount of money – or else, I’m going to have to build an experimental airplane.  I can’t stand spending money without a really good reason, so it’s time to play Skunkworks.

This is why I own a t-shirt with Jeff’s blog logo.

First 5k Done


I finished my first 5k race last night, and I look forward to more of ’em.
My official time was 37 minutes 46 seconds. I placed 40th out of 42 in my division (30-34 year old males), 415th out of 496 males, and 866th overall out of 1163 runners. The average male ran 30 minutes 17 seconds, so I’m above average. (ahem) The overall winner did it in 14 minutes 41 seconds, and all but 2 guys did it under an hour. I would like to meet the two guys who took over an hour just to shake their hands for finishing.

You can also slice and dice the results online.

Here’s how the run looked in NikePlus:

Festival of Lights 5k

I screwed up my iPod settings where you see the first couple of red blips. I was running with a new iPod armband with plastic over the jog wheel, and I couldn’t get my jogging-training podcast back on track.  After that, I had a tough time keeping my pace up.  Lesson learned for next time – cut out the clear plastic, touch the jog wheel directly, and have a full 45 minute playlist set up so I don’t have to screw around with it.
The statistics don’t mean much to me. I hadn’t expected to do as well as I did, and I’d expected to be much more exhausted and pained. I feel good, I’m not sore, and I know I could have pushed myself a lot harder. But why? I’m not doing this to win anything, just to get up off my lazy ass.

I totally understand how these things get addictive. I feel great, and I enjoyed myself. I can see myself running several of these a year just to get the t-shirts.

My first 5k race


12th Annual Fantasy of Lights
Today, I signed up for my first 5k race – well, my first race period – the 12th Annual Fantasy of Lights 5k.

While this will technically be a race, I will in no way be judging my results by the time it takes me to finish, or where I finish in the pack.  I feel like a winner just registering for the thing, because if I finish, I’ll be “ahead” of everybody else in the city who didn’t run that day.  That’s the way I like to think of it, anyway.

I have to thank my coworker, Ben, for even mentioning the race.  I wouldn’t have thought to look up upcoming 5k’s, but when he mentioned it, I said to myself, why the hell not?  5k is only 3 miles.  I’ve been running/walking about a mile and a half, and it’s not like 5k is some huge insurmountable obstacle.  Why not?  I could do that.  And hey, it comes with a t-shirt, and I’m all about t-shirts.

The description says: “The 5K course is flat, fast, and ideal for setting a personal record.”  I burst out laughing when I read that one.  I will definitely be setting a personal record, that record being a race completion.

I’ve been loosely following the Cool Running Couch to 5k Plan for the last couple of weeks, but I didn’t think I’d actually be doing a 5k run.  The plan is supposed to take nine weeks.  The race is in a week and a half, but I’m not worried because I’m sure I’ll be walking some of it anyway.

Goodbye MovableType, hello WordPress

SQL Server

The more sober among you will notice a small change in the look & feel of this here site. I’ve been toying around with WordPress on some of my other sites for a while, and it’s time I took the plunge and changed over. For the next couple of weeks, this site will be in varying levels of workitude and decrepidation.

Why the switch? There are so many reasons I almost don’t know where to start.

I wanted one tool to manage all of the pages in my site. When I started blogging, I wrote my own blog software in .NET because I wanted to play around with Dreamweaver MX and get the functionality exactly the way I wanted it. In May of 2003, I switched to MovableType because I wasn’t developing new functionality as fast as the MT folks, and I was tired of managing my own blog software. Now, I’m tired of using one tool to manage my blog, and another tool to manage the rest of the pages in my site. WordPress lets me manage all of my content in a single web-based tool that works on both my Mac and my PCs.

Easier content management is important to me because I wanted to start introducing more business-based content in here. I’m mentoring a couple of guys at work who are interested in becoming database administrators, and I’ll be putting my training materials in here. These gems of knowledge aren’t really blog entries, and they’ll change over time as I flesh them out, but I didn’t want to use yet another content management tool. WordPress does a great job of handling both date-related things like blogs, and content-related things like articles.
Another reason: WordPress makes it easier to support microformats, which are tiny snippets of HTML that encapsulate little standards. Microformats will change the web. I’m positive of it. Microformats are going to make it easier to share and distribute content, especially like the DBA training articles I’ll be working on. I don’t intend to make any money off these articles, and I’d like to see them syndicated so other DBAs-in-the-making can have an easier time getting up to speed. I’m excited to see how microformats can help make that happen.

And yet another reason – ever since I realized that Matt was the guy behind WordPress, I’ve felt horribly guilty about not using his software on my blog. I didn’t put two and two together until I noticed a Valleywag article, and suddenly I went, “Ahhh, I’ve seen that guy before!” Duh.

And last and probably least (no offense), Lloyd reminded me of my guilt by joining Automattic, the folks who make WordPress.

Goodbye Flock, hello Firefox


The screenshot below is what you see when you try to go to a certain web site with Flock:

Bye-bye Flock

When I go to a web site, and software tries to hijack my session to go to a different web site, that’s bad software. It doesn’t matter how well-intentioned the offer is – if the software has the gumption to change my destination, that’s bad software. Bad software, bad. Go to the corner.

I first noticed this in the recent Flock builds, and I thought whoa, this can’t be right. This has to be somebody’s idea of a funny experiment. I mentioned it in the Flock mailing lists when they asked for the community’s top ten concerns, and I didn’t get an official response. I figured okay, maybe they’ll get rid of that in some coming builds. So I just downloaded a tinderbox build, gave it a shot, and it’s still there.

That means they know, and they’re okay with it.

I’m not.

So, here’s the next page I went to:

Hello Firefox

And just like that, my Flock experience is over. I’ll miss it. I had a personal tie to it – heck, my name was even in the credits for a while – but I just will not stand for software that tries, even just once, to subvert my browsing habits from one provider to another. Not gonna do it. Wouldn’t be prudent.

Now, to find the best plugin…

Movies on my Cingular 8125


This weekend, I was determined to set up my Cingular 8125 PocketPC phone to play movies, TV shows, and music videos. It was faster and easier than I’d even expected.

Step 1: get the free PocketDivxEncoder. Install this on your desktop computer. It converts your existing videos to PocketPC-friendly formats that take less space, and fit your PocketPC’s screen better. It has presets for a lot of common devices like smartphones, iPods, PlayStationPortables, and more. They’ll try to upsell you on their new product, Lathe, for $6, but the older PocketDivxEncoder works fine. Amazingly, PocketDivxEncoder can even use Quicktime files as input, which I love since I’ve got a bunch of music videos in that format.

Step 2: get the free The Core Pocket Media Player. This installs on your PocketPC to play videos. No additional codecs or plugins are needed if you’re using the PocketDivxEncoder.

Step 3: convert your movies and copy them to your PocketPC. Just open each one with PocketDivxEncoder, and use the default settings for PocketPC. It converts the video to 320 pixels wide and resamples the audio down to mono to save space. If you’re a music nut like me, you’ll probably want to crank up the default audio quality to a higher setting to keep the tracks in stereo for headphone-quality audio. Music videos ended up taking 7-12MB each, and movies take around 200MB. Works like a champ! The quality is beautiful for such a small size.