Blog

What’s the Right Server for Your Business? Tweet Chat Recap

Last week, after blogging about What Server Should You Buy, I partnered with Microsoft and Dell to host a Twitter chat about choosing the right server hardware. (Disclaimer: Dell and Microsoft paid me for my time.)

Some of the interesting questions and answers included:

There’s much more in the full Twitter chat archive here. If you’ve got more questions about Dell, Microsoft, or server hardware in general, feel free to post ’em in the comments, and I’ll work with Dell and Microsoft to get the answers posted here.

Make SQL Apps Go Faster – Only 60 Seats Left!

In just a few weeks, Brent, Jes and I will be giving a full day pre-conference session at the SQLPASS 2013 Conference in Charlotte, North Carolina. Our pre-conference will be Tuesday, October 15, 2013.

There are only a few seats left!

This pre-con’s going to be killer– we’ve got 240 folks registered and ready to learn. Capacity is capped at 300 due to logistics, so as of this writing there’s room for 60 more students.

What you’ll learn

You’re a developer or DBA stuck with a database server that’s not going fast enough. You’ve got a hunch that the biggest bottleneck is inside the database server somewhere, but where? In just one day, you’ll learn how to use powerful scripts to identify which queries are killing your server, what parts of the database server are holding you back, how to tackle indexing improvements, and how to identify query anti-patterns.

This pre-conference session will cover the following topics and more:

  • How wait stats tell you where to focus your tuning
  • How the plan cache shows you which queries are the worst
  • How to make fast improvements by picking the right indexes
  • How to identify and fix the most common query anti-patterns

How much does it cost?

When you register for the PASS Summit, our “Make SQL Server Apps Go Faster” pre-conference session is just $395.

If you’ve already registered for the Summit, email Shannon.Cunningham@sqlpass.org to get in on the fun.

Brent, Kendra and Jes in their final Pre-Con planning session.

And for those of you who excitedly scrolled to the bottom of the post looking for a call to action, here’s how to register.

High Availability Doesn’t Fix Poor Performance

Imagine this: you have a database, and it’s on a SQL Server instance, which is on a server. That server may be physical or it may be virtualized. The database, or the instance, or the server, has a high availability solution implemented. It might be failover clustering; it might be synchronous mirroring; it might be VMware HA.

One day, your queries start to run a little slower. But the performance degradation is slow, and gradual. Weeks or months slip by, and bit by bit, millisecond by millisecond, the queries are taking longer. Before you know it, queries are taking out locks, blocking each other, and leading to timeouts.

What You Want To Happen

Your server is overloaded. CPU usage is high. Available memory is low. I/O times are dragging on. You know there is another server sitting in the data center that is waiting to take on this load. It’s just sitting there, idle!

You want your HA solution to kick in and take over. Why doesn’t it realize that performance has tanked and it needs to come to the rescue?

Your SQL Server Isn’t Down, It’s Just Thinking

Chances are that if there were a disaster and your HA solution kicked in right now, you’d experience the same terrible performance on the other server, too – with the added pain of downtime during the failover.

Why?

High availability solutions are implemented to be there in case of failure. High CPU usage, low memory warnings, or excessive I/O times don’t constitute a failure. As much as you wish the workload could be transferred to the server with lower utilization, it won’t be – you have to tune your workload.

Yes, I’m telling you to roll up your sleeves and start performance tuning.

Do you have monitoring in place so you can check your baselines to see what resources are being used more than normal?

Have you checked the plan cache for the most resource-intensive queries in your server?

Are you performing regular index tuning (using a tool like sp_BlitzIndex®) so you have the right indexes for your workload?
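
If you’re not sure where to start on that plan cache question, here’s a rough sketch using the sys.dm_exec_query_stats DMV. It surfaces the heaviest queries since the plan cache was last cleared – swap the ORDER BY to whichever resource your baselines say is under the most pressure:

```sql
-- Top CPU consumers currently in the plan cache.
-- Change the ORDER BY to total_logical_reads or total_elapsed_time as needed.
SELECT TOP (10)
    qs.execution_count,
    qs.total_worker_time   AS total_cpu_time,
    qs.total_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```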

Remember, your HA solution is there to pick up the pieces when something fails – not to be a safety net for poor performance.

The Network and the Update

We work with SQL Server every day, happily sending and receiving millions of packets of data across the network. Have you stopped to think about what happens when something goes wrong?

The Situation

It’s a regular Tuesday afternoon. Users are happily submitting orders into the shopping cart. After browsing around the website for a while, a user decides it’s time to buy something. After clicking “Check out now”, the user goes through the checkout process, enters their payment information, and then clicks “Buy some stuff!” A few milliseconds later, they get a message that their order was successful and everything is zipping over to their house.

It’s something that we do every day, right? Heck, it’s simple enough that I drew a little diagram:

The state of the application… when it’s working.

The Problem

What happens when something goes wrong?

There are a lot of places where things could go wrong – a browser bug could prevent form submission, the shopping cart application could go down, the ordering system could be down for maintenance, or SQL Server might even crash (but you’ve configured a solid HA system, so that ain’t gonna happen).

What’s left to fail? The network.

Everything works as it should – the user clicks “Buy some stuff!” and a message is sent to the application servers. The application servers do their magic, approve the credit card, build an order, and save the order in SQL Server. Immediately after SQL Server acknowledges the write and commits the transaction, but before any confirmation is sent to the user, the top-of-rack switch power cycles. Randy, the operations intern, accidentally unplugged the power cord before plugging it back in.

There goes the network neighborhood.

Now What?

This is the part that’s up to you. Sound off in the comments:

  • Did the user’s write succeed or fail?
    • What does SQL Server think?
    • What does the application think?
    • Did SQL Server do the right thing?
  • What can you do about this sort of thing?
  • How can your application handle failures like this?
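
To get the conversation going, here’s one pattern worth considering, sketched with made-up table and procedure names: the application generates a unique order key before it ever calls the database. If the confirmation never arrives, the app can safely retry the same call, and the database quietly ignores the duplicate.

```sql
-- Hypothetical sketch of an idempotent order insert keyed on a client-generated GUID.
CREATE TABLE dbo.Orders (
    OrderKey   UNIQUEIDENTIFIER NOT NULL PRIMARY KEY, -- generated by the app, re-sent on every retry
    CustomerID INT              NOT NULL,
    TotalDue   MONEY            NOT NULL,
    CreatedAt  DATETIME2        NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE PROCEDURE dbo.SaveOrder
    @OrderKey   UNIQUEIDENTIFIER,
    @CustomerID INT,
    @TotalDue   MONEY
AS
BEGIN
    SET NOCOUNT ON;
    -- On a retry, the insert becomes a no-op instead of a duplicate order;
    -- the primary key backstops the rare case of two retries racing each other.
    IF NOT EXISTS (SELECT 1 FROM dbo.Orders WHERE OrderKey = @OrderKey)
        INSERT dbo.Orders (OrderKey, CustomerID, TotalDue)
        VALUES (@OrderKey, @CustomerID, @TotalDue);
END;
GO
```

It doesn’t answer what SQL Server should do, but it turns “did my write succeed?” from a mystery into a question the application can safely re-ask.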

Missed Connection: Employer Seeking Employee

You were helping a sysadmin understand the details of a request for a new virtual server. You whiteboarded a diagram to show how SQL Server uses memory. You got so excited when the sysadmin asked a hard question that you spilled coffee everywhere.

We were tuning queries on a video conference with developers from That Software Company next door. They said they wished you worked with them, because you ask such great questions at the coffee maker.

We’re looking for someone just like you.

We are: a small team of loving SQL commando medics. We parachute in when our clients need help. We diagnose what’s causing their problems and build a plan to solve their biggest pain points. We’re growing, and we’re hiring a full time SQL Server consultant.

Team Meetings

The Job: What You’ll Do

As Employee #2, you’ll help clients make their SQL Servers faster and more reliable.

Your missions will usually be short term consulting projects. You will diagnose bottlenecks in storage, problems in Windows and SQL Server configuration, and figure out when queries are killing a server. You’ll help clients understand the root causes of their problem, and you’ll prescribe a custom action plan that solves their issues in the short, medium, and long term.

Although you’ll work from home most of the time, you’ll be an integral part of our team. You’ll adopt our SQL Critical Care® toolkit and processes, but you’ll also get inspired, and suggest and build improvements for the whole team to use. You’ll work closely with me, Brent, Jeremiah, and Jes during the days: we use chat, email, and hangouts to ask questions and solve problems as a group.

You’ll contribute to our online video training. You’ll challenge yourself and learn something every week.

What We’ll Do for You

We work hard, and we take time off seriously. Our benefits include:

  • 6 weeks paid vacation per year
  • 1 paid conference, plus 1 week of learning (on projects to be announced soon) — each year. You don’t have to speak at the conference, and it doesn’t count as vacation time.
  • Health insurance
  • Full time telecommuting with a max of 1 week of travel per 2 months
  • Paid certification attempts (whether you pass or fail)
  • Home internet/VOIP/cell phone expenses paid
  • $3,000 hardware/software budget every 2 years to pick your own tools (we like Apple gear, but you can pick whatever you want – you’re your own IT department)
  • If you’re a Microsoft MVP, we pay for your travel and time for the MVP Summit each year.
  • We’ve started a tradition of awesome company trips, and we have one booked for February 2014. You and your significant other will be invited along. (You just need a passport.)

To Apply, Send Us Two Things

First, we need to know where you’ve spoken recently and what you talked about. This can include all sorts of public speaking– it doesn’t have to be technical.

Next, tell us how you give back to a technical community. (It doesn’t have to be the SQL Server community.)

Email the details (be specific!) on both of these things to help@BrentOzar.com along with your name, and you’re in the running! 

If you’ve only got the good intentions to start speaking soon, don’t apply. Get out there and do it, and tell us about it next time.

Don’t send us your resume. We only want the two items listed above right now.

Our work is never boring.

It may take us some time to decide. We lucked out last time and found Jes pretty quickly after we posted our ad. A year and a half later, we know we couldn’t have hired a better Employee #1.

Life’s not always that easy. It’s probably going to take a while to find the right person this time– and we’re willing to wait as long as we need to. But you never know, so drop us a line fast!

Why Index Fragmentation Doesn’t Matter [Video]

Are you still rebuilding all of your indexes every week – or heaven forbid, every night – and still having performance problems? Odds are, you’re not actually making things better – you’re making them worse! I explain why in this video:

For more, read my post: Stop Worrying About Index Fragmentation.

You Can’t Kill Transactional Replication

Repli-Roach

If SQL Server’s Transactional Replication was an animal, it would be a cockroach. It’s not pretty, but it’s resourceful and strong. Once you get even a small amount of replication in your environment it starts spreading everywhere. And if it gets out of control– well, in that case you’ve got no choice but to pretty much bomb everything, clear the fumes as quickly as possible, and reinitialize.

But unlike a cockroach, you can’t always just get rid of Transactional Replication. The bad news is, it’s here to stay.

In this post you’ll get a whirlwind introduction to Transactional Replication. I’ll explain why newer features like Change Tracking, Change Data Capture, and Availability Groups have failed at exterminating Transactional Replication. You’ll also learn what it takes to support replication in a mission critical environment.

SQL Server Transactional Replication basics

Transactional Replication lets you publish and filter individual database objects.

1) Publications define replicated articles. Transactional Replication lets you select individual tables to publish (called “articles”), allowing you to publish just a fraction of a very large database. You can even filter which rows and columns you publish for each article.
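
To give you a feel for how that’s defined, here’s a rough sketch using the standard replication stored procedures. The database, publication, and table names are made up, and a real setup also needs a distributor, a Snapshot Agent, and at least one subscription:

```sql
USE SalesDB;
GO
-- Mark the database as published (assumes the distributor is already configured).
EXEC sp_replicationdboption
    @dbname  = N'SalesDB',
    @optname = N'publish',
    @value   = N'true';

-- Create the publication...
EXEC sp_addpublication
    @publication = N'SalesPub',
    @status      = N'active';

-- ...and publish a single table as an article.
EXEC sp_addarticle
    @publication   = N'SalesPub',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders';
```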

2) A fleet of SQL Server Agent jobs does magic. I just learned this: a group of cockroaches is called “an intrusion of cockroaches.” That seems like it probably applies to a whole lot of SQL Server Agent jobs, too.

Exactly how many jobs? So many jobs that you’ll lose count. Replication is complicated. There’s a SQL Server Agent job to read the transaction log of each database where you’ve defined publications (the Log Reader Agent). There’s another job that runs snapshots to help you initialize or re-initialize replication (the Snapshot Agent). Another applies the snapshot and moves transactions around (the Distribution Agent).

There are also jobs for cleaning things up, refreshing monitoring, and all the general scuttling around that replication does.

3) The distribution database collects commands. Replication doesn’t send transactions directly to your subscribers. The Log Reader Agent snoops in your transaction log and figures out what changes have been made to the articles you’ve defined in your publications. When it finds changes, it sends them over to the distribution database.

You can configure distribution on the publishing server, or you can scale it out to an independent server. The important thing to know is that each change you make in your publisher gets translated into independent commands (insert, update, and delete), and these commands get inserted into the distribution database before it moves onward. No matter what you do, that process takes some time: replication isn’t instantaneous.
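
If you’re curious how far behind things are, one crude check – assuming the default distribution database name – is to count the commands currently sitting in the distributor. On a busy system this table can be enormous, so don’t run it casually:

```sql
-- How many replicated commands are queued up in the distributor right now?
USE distribution;
SELECT COUNT(*) AS pending_commands
FROM dbo.MSrepl_commands;
```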

4) Commands get applied to subscribers. You may set up many subscriptions to a single publication. This is part of what’s cool about replication– a single publication can publish a narrow group of tables to be read by many subscribers (perhaps in different datacenters around the world).

You can elect to configure pull subscriptions or push subscriptions. In the “pull” model, each subscriber runs jobs that periodically poll for new data. In the “push” model, the distributor pushes changes out to the subscribers more or less continuously.

5) Old commands get cleaned up. Hopefully. Lots of commands may collect in the distribution database. That can be a good thing – if a subscriber is offline for a few hours, wouldn’t it be sad if it couldn’t catch up?

But you don’t want to hoard too many commands, or your distribution database will bloat up like crazy. There are jobs that delete old commands. In an active environment, balancing out cleanup, new activity, and pushing data to subscribers can be… challenging.

6) Then you learn a million factoids. We’ve barely scratched the surface. I haven’t shown you Replication Monitor, a tool whose GUI is so confusing that you don’t dare take your eyes off it. (It’s hard to avoid it, but do use it with care– I’ve had it cause blocking when run by multiple people concurrently in a couple versions of SQL Server.) I haven’t talked about tweaking Replication Agent Profile Settings or the limitations of tracer tokens. We haven’t debated when you might want to initialize from backup.

Transactional Replication is a complicated monster. It’s not something you can learn from a book – it has far too many configuration options and moving parts. Replication is something you learn by trying it, having problems, fixing it, and then repeating the process.


Transactional Replication vs. Change Tracking

Two new replication-ish features were introduced in SQL Server 2008: Change Tracking and Change Data Capture. These two technologies are both completely separate from replication, and from each other.

Change Tracking, like Transactional Replication, is available in Standard Edition. That’s a good thing, but when you compare the two features, most people still pick replication.

Change tracking just answers a very basic question: has a row changed? It doesn’t capture the “old data” or send the “new data” anywhere– it just updates a marker letting you know that a row has changed since your application last polled it.

It’s up to the user to write an application to regularly poll the tables (and know what version it last tracked).
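
Here’s roughly what that looks like in practice – a minimal sketch with made-up database and table names (the tracked table needs a primary key):

```sql
-- Turn on Change Tracking for the database, then for one table.
ALTER DATABASE SalesDB
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING;

-- The application's polling query: remember where you left off...
DECLARE @last_sync_version BIGINT = 0;  -- persisted by your app after the previous poll
DECLARE @this_sync_version BIGINT = CHANGE_TRACKING_CURRENT_VERSION();

-- ...ask which rows have changed since then...
SELECT ct.OrderID, ct.SYS_CHANGE_OPERATION, ct.SYS_CHANGE_VERSION
FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync_version) AS ct;

-- ...then persist @this_sync_version as the starting point for the next poll.
```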

You’re also advised to enable snapshot isolation on the database that uses Change Tracking, and to use it explicitly when querying Change Tracking tables, so you don’t miss data or cause blocking with the Change Tracking cleanup processes. Snapshot isolation is great, but turning it on and managing it are not at all trivial.

Change Tracking also has some additional overhead. For each row modified in a tracked table, a row is added to a change table. For each transaction that commits, a row is also inserted into an internal transaction table. (It doesn’t keep the previous values in those tables, so they’re small inserts, but on super-active tables that’s extra writes hitting the transaction log of your database – and extra writes when those tables get cleaned up.)

Verdict: Replication is complicated, but Change Tracking ain’t simpler. Because replication means much less custom code to write, more freedom with isolation levels, and no extra inserts into internal tables, most people decide that Transactional Replication beats Change Tracking.


Transactional Replication vs. Change Data Capture

How about Change Tracking’s distant cousin, Change Data Capture? This is an Enterprise Edition feature – it must surely perform better than the Standard Edition-friendly Transactional Replication, right? Well, sorry, not necessarily.

I love the concept behind Change Data Capture. Imagine that you’re a database administrator managing a transactional replication publication. You have a table that holds advertising campaign keywords.

  • Each row in the table contains a CampaignID column, a keyword column, and a BidPrice column
  • Some campaigns have hundreds of thousands of keywords
  • Tools let advertisers set up campaigns quickly with LOADS of keywords – and set the BidPrice for each keyword

No magical creature here

It’s the day before a big holiday. One of the big advertisers goes in and sets up a huge campaign – 200K rows of changes roll into the table. They realize they messed up and deactivate it right away. 200K more changes roll in. Then they set up a new one. Then they tweak the BidPrice.

Suddenly replication is churning trying to get all these changes into the distribution database, not to mention handle what’s happening in any other replicated tables you have. Replicated commands start backing up.

You’re left saying, “But all I want is the most recent change they made!”

This is the scenario where Change Data Capture sounds like a great idea. The concept with “CDC” is that you write your own polling. If you poll every hour, you just pick up the latest version of each row (or whatever history you want) and take that with you. You don’t have to take every command, so it’s super lightweight.

But CDC isn’t all unicorns and magic, either.

When you enable Change Data Capture for a table, SQL Server starts tracking inserts, updates, and deletes in the transaction log (similarly to replication). But when you make changes, they’re read from the log and then inserted into “change tables” associated with the tracked tables. The change tables are automatically created in the same database as the tracked tables – and inserts into them are also logged operations. So for every modification you make to a CDC-tracked table, you’re also doing an insert (and logging it) into a tracking table.

Data in the tracking tables needs to be cleaned up periodically, of course – and that cleanup also gets logged. You need to write your own polling mechanism to pull changes, and there are limitations on how you can change the schema of a tracked table.
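
For the curious, here’s a rough sketch of what enabling and polling CDC looks like. The database, table, and capture instance names are made up, and SQL Server Agent has to be running for the capture job to do anything:

```sql
USE SalesDB;
GO
-- Enable CDC at the database level, then for one table.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema        = N'dbo',
    @source_name          = N'CampaignKeywords',
    @role_name            = NULL,   -- no gating role
    @supports_net_changes = 1;      -- lets us ask for just the "net" change per row

-- Your own polling job asks for the net changes in an LSN range.
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn(N'dbo_CampaignKeywords');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_net_changes_dbo_CampaignKeywords(@from_lsn, @to_lsn, N'all');
```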

CDC seemed like it might help with our test scenario, but after checking out a whitepaper on tuning CDC, things don’t look so hot: the whitepaper recommends avoiding scenarios where rows can be updated immediately after insert, and avoiding scenarios with large updates. Also, our dream of pulling only the latest, or “net”, change when a row has been modified multiple times has a cost – it requires an additional index on each tracking table, and the whitepaper points out that this can have a noticeable performance hit. This isn’t seeming like such a great fit anymore – and worse, it sounds like if it doesn’t go well, it’s going to slow down our whole publishing database.

This comparison highlights that although transactional replication commands can get backed up in the distributor, the fact that replication allows distribution to be offloaded to its own server independent from the publisher is a real strength.

Verdict: Due to extra logging caused by internal change tables (and cleanup), plus the need to write custom code, plus limitations on schema changes, most people decide that Transactional Replication beats Change Data Capture.


Transactional Replication vs. Availability Groups

Now here’s a real competitor, right? SQL Server 2012 AlwaysOn Availability Groups allow you to scale out reads across multiple SQL Server instances. You can set up readable secondary copies of a database which have their own independent storage, memory, and server resources. The secondaries can be in either a synchronous or asynchronous mode. You can even offload backups to secondaries.

If anything can render Transactional Replication completely outdated and useless, this is the feature.

It’s absolutely true that Availability Groups are better than transactional replication for high availability. But that’s not surprising– Transactional Replication is terrible for high availability! Even in Standard Edition, a simple two node failover cluster beats Transactional Replication when it comes to HA. So, yes, Availability Groups wins here, but replication doesn’t even make second place. (By a long shot.)

But when it comes to scaling out reads, Availability Groups still aren’t perfect. SQL Server 2012 did introduce a cool new feature that creates temporary statistics on read-only secondary databases, which helps, but it doesn’t quite do everything. Availability Group readable secondaries still have some limitations:

  • You can’t create indexes on the secondaries– all indexes must be created on the primary
  • The more indexes you create on the primary, the heavier your IO load and potential for data churn in maintenance– which makes it harder for secondaries to keep up
  • The whole database goes in the availability group. You can’t just include a couple of tables.

Verdict: Using Availability Groups for scale out reads works in some scenarios, but it does not replace Transactional Replication.


Transactional Replication doesn’t solve your HA or DR pains

If you remember one thing from this article, make it this fact: Transactional Replication is not a good way to save your bacon if a server fails or a database gets corrupted. This isn’t a feature for high availability and disaster recovery– it’s much more of a programmability feature to distribute data from a few critical tables from one instance out to many subscribers.

Transactional Replication does NOT support any of the following:

  • Failover (or fail-back)
  • Connection string enhancements
  • Load balancing
  • Automatic page repair from corruption
  • Protection of the replicated data (it can be updated, made incorrect, etc)

For high availability and disaster recovery, you want to look primarily at other features in SQL Server.


What if you need to write to multiple masters?

If you’ve got a datacenter in China, a datacenter in Europe, and a datacenter in the United States, you probably don’t want to send all of your writes to just one datacenter. You probably want to send writes to the closest local datacenter and have changes synchronize between databases in all of them.

There’s only one feature in SQL Server that helps with this: Peer-to-Peer replication. It’s an Enterprise Edition feature that works like two-way transactional replication. Peer-to-Peer replication requires that you handle conflict resolution, and it isn’t compatible with all newer features (example: Availability Groups).

Transactional replication has a feature called Updatable Subscriptions that allows changes at a subscriber to be sent back around to the publisher, but that feature was deprecated in SQL Server 2012. The recommendation is to look at Peer-to-Peer replication instead.


Requirements for supporting replication

I have a confession to make: I like Transactional Replication. There’s something about it which I admire– its flexibility, its tenacity, its ability to endlessly survive and propagate itself through more and more versions of SQL Server while other features continuously try and fail to exterminate it.

Transactional replication: a force of nature

I don’t often recommend that clients use Transactional Replication. Frequently, there are other ways to solve a problem without taking on the burden of supporting Transactional Replication. It’s just not that easy to support.

If you do think you need Transactional Replication, make sure you have the right staff in place. For mission critical data, you need to have multiple, dedicated SQL Server DBAs, an established on-call rotation, a good incident response system, root cause analysis, a pre-production environment, good Change Management processes, and the ability and desire to refine and extend production monitoring.

That may sound like a lot, but trust me: the first time someone asks, “why is the data on the subscriber so old?”, you’re going to understand why the list is so long. One thing you can count on with replication is that there are sure to be interesting times along the way.

What Server Should You Buy?

When you get a new laptop, you don’t just pick Small, Medium, or Large. You look at all kinds of specs – screen resolution and size, memory size, storage, weight, and of course, price. Eventually you pick the one that’s perfect for you, and then you shake your fist at the skies because you can’t actually afford it, and then you go back through the list again. (What, is that just me?)

When you buy a server, you face the same choices. In the old days, we just picked 1 socket, 2 sockets, 4 sockets, or 8 sockets, and we were done.

Today, if I head over to Dell and try to shuffle through the 2-socket servers, I’ve got a bewildering array of choices:

Eeny, meeny, miny, mo

Do I want flexible IO, data-intensive, or state-of-the-art flexibility? What do these terms even mean? Can I get a 50-50 and eliminate half of the choices?

Step 1: Make a list of the software features you’re going to use.

Are you building a cluster or using virtualization? That means accessing shared storage.

If you’re implementing SQL Server 2012 in a cluster, you might want to use the new capability to put TempDB on local SSDs. If you’re considering SQL Server 2014, you might want even more space for Buffer Pool Extensions, the ability to cache frequently used tables locally on SSD. If you’re sticking with old faithful SQL Server 2008 in a cluster, amp up that storage throughput – all of your data, including TempDB, is going to live on the SAN.
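
If either of those features is on your list, here’s roughly what they look like – the paths and sizes below are made up, so treat this as a sketch rather than a recipe:

```sql
-- SQL Server 2012+ clustered instances can keep TempDB on local (non-shared) disk;
-- moving the files is a MODIFY FILE plus an instance restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\TempDB\tempdb.mdf');

-- SQL Server 2014 adds Buffer Pool Extensions on a local SSD.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'E:\SSDCACHE\SQLBPE.BPE', SIZE = 64 GB);
```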

If you’re implementing SQL Server Enterprise Edition, you’ll want lots of memory – it’s dirt cheap compared to the software licensing costs. On the other hand, if you’re sticking with SQL Server Standard Edition, which is capped at just 64GB of memory, then you don’t need a box that can take a terabyte or more of memory. 

Step 2: Plan the lifespan of your server.

Servers need expiration dates. Sure, we all think we’re going to be around forever, and nobody wants to talk about old age. If we don’t plan for it, though, we end up alone and cold, shivering in the alley, trying to bum a cigarette off a passing NoSQL “admin” who was able to keep up with the changes in – 

Wait, I got off track there. Servers. But the NoSQL guys have a real point – they deploy commodity hardware in packs, knowing that as new faster stuff comes out, they can smoothly swap out boxes without taking the entire cluster down. Virtualization sysadmins can do the same thing. Heck, even us lowly Microsoft SQL Server guys can do it as long as we plan well enough, and to do that, we have to ask the business tough questions about how long this server is supposed to last.

If the business wants to deploy a server and then never touch it again for the life of the application, we can totally do that – but we have to know that going in, and it affects the kind of server we buy.

Step 3: Map your feature and lifespan list to server strengths.

Do your features need fast local storage? If you decided to use local TempDB or Buffer Pool Extensions, you’ve got a decision to make. PCI Express drives like Fusion-IO and Intel 910s produce smokin’ fast results, but you’ll need to have enough fast PCI Express slots available for those drives. If you’d rather go with 2.5″ drives like the Dell PowerEdge R720 I tested recently, you’ll want enough local drive bays, plus possibly 2 RAID controllers for maximum bandwidth.

Do you have fast shared storage? It’s easy to plug any server into a SAN with just one or two connections, but if you want to step up to more bandwidth, things change. You’ll need 4 or more Host Bus Adapters (HBAs) to saturate a dozen cores or more, and that means lots of free PCI Express slots.

Do you need lots of network speed for virtualization? The latest round of hypervisors has all kinds of cool storage migration and replication features built in, but they do their magic over the network. You’re going to need a couple of 10Gb ports built in.

Step 4: Match your strengths to server options.

The server market is changing constantly. What would have been unthinkable just a couple of years ago – like running a large data warehouse on less than $20k of solid state drives – is suddenly not just possible, but commonplace.

At the same time, I see customers running with scissors, like trying to virtualize multi-terabyte data warehouses with no recognition of the storage throughput required, or putting all of their SQL Servers on virtual hosts by themselves.

We need to talk.

We need to talk before your next server decision. Dell and Microsoft have set up a one-hour free Twitter chat focused on picking the right hardware. On September 25th at noon Central time, I’m working with them to talk about your server options, virtualization, solid state, and the future of the server market.

It doesn’t even cost you anything! Dell and Microsoft are sponsoring the whole thing, including paying me for my time. (I have to disclose that I’m getting paid to blog and tweet this stuff, but rest assured that all advice given during the chat and here on the post is 100% good old honest Brent.)

Register now.

What Does the #SQLPASS Board of Directors Do?

The PASS Board of Directors election is coming up, so I sat down with Allen Kinsel (Blog / @AllenKinsel) to get some insight. Allen knows first-hand.

Brent: The PASS Board of Directors election is coming. What exactly do Board members do?

Allen: Board Members set the direction and future vision for the organization. In a more perfect world, they might be solely responsible for the strategic direction of the organization, much as a typical corporate board of directors is. But because of the unique nature of the organization, if a particular director has a passion for how a certain area of PASS should be different, they can become quite tactical and put in the actual work to accomplish their goals.

Is it a full time job? How much commitment is involved?

Since it’s an unpaid position, thankfully it is not a full-time job. The commitment level is what each director makes of it. If a director has a real passion for something and wants to get it done, they can invest literally as much time as they have to give in order to further their agenda. Otherwise, the commitment is typically a few hours a week on average.

How many people are on the Board, and what kinds of backgrounds do they come from?

There are 14 members of the board, though only 6 are directly elected by the membership. There are 4 additional board members placed by the founding partners (CA/Microsoft), and an additional 4 on the executive committee who are elected by the board every other year. The backgrounds of board members change after every election cycle, but typically about the only thing they all have in common is a love of SQL Server. Otherwise, board members come from all backgrounds: technical, management, DBAs, developers, full-time employees, consultants, etc.

You’ve been on the Board – what was your favorite part?

My favorite part was certainly the final year of my term, when I was able to make significant changes to the way PASS supports chapters and their leaders. Laying out a vision for giving chapters a toolset that rivaled the tools we currently give SQL Saturday leaders, then executing on that vision and seeing how much easier it made new chapter leaders’ jobs, was quite rewarding. I believe that, given the support and another year, the tools would have fully delivered on their initial promise of being everything chapter leaders need to run a chapter efficiently.

What do you think makes a good Board member?

Many things add up to make a good Board member, but if I had to put one thing on the list, it would be passion. For a person to be truly successful and get things done on the board, they need passion. Passion fuels all the other needed aspects of the job. If I could add a second thing to that list, it would be experience. As with all things, experience matters. Since board-level experience is not the easiest thing to find on a resume, the first year of serving can be quite daunting for most board members as they find their way.

If you’re a community member looking at a ballot, how do you pick names? Most readers aren’t lucky enough to know people on the ballot personally.

If it’s feasible, voters should educate themselves on which candidates share their ideals through http://sqlpass.org/elections.aspx prior to voting. If that’s not feasible, a second good option is relying on the nominations committee’s vetting process to guide how you cast your votes. Prior to being put on the ballot, the candidates are vetted by a community-led group (the nominations committee). Based on its rankings, the candidates are listed on the ballot in the order of who the committee believes will be best for the position.

If somebody wanted to run for the Board a few years from now, what would you recommend they do to start building experience for it?

I’d recommend that they volunteer, get involved, and participate in the #sqlfamily any way they can. To truly understand the needs of the community, you have to have your finger on the pulse of the community, and a very effective way to do that is through participation.

How to Prove Your SQL Server Needs More Memory [Video]

Open up Performance Monitor (aka Perfmon), then:

  • Windows 2008 & newer – expand Monitoring Tools, click on Performance Monitor
  • Windows 2003 & older – click System Monitor

This brings you to the graph. Click on the metrics at the bottom and click the red X at the top to remove each of them. We want to start clean.

Then click the + button to add a new counter. At the top, where it says “Select counters from computer”, put in the name of your SQL Server starting with two backslashes, like \\MYSQLSERVER and hit the Tab key. Perfmon will pause for a few seconds while it gets the list of Perfmon counters on that server, and then shows them in a list below.

Add these counters:

  • Memory – Available MBytes
  • SQLServer: Buffer Manager – Page Life Expectancy
  • SQLServer: Memory Manager – Memory Grants Pending
  • SQLServer: Memory Manager – Target Server Memory
  • SQLServer: Memory Manager – Total Server Memory
  • SQLServer: SQL Statistics – Batch Requests/sec
  • SQLServer: SQL Statistics – SQL Compilations/sec
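
If you’d rather cross-check the SQL Server-side counters from a query window, most of them are also exposed through the sys.dm_os_performance_counters DMV – a rough sketch:

```sql
-- The SQL Server counters from the list above, queried from inside SQL Server.
-- (Available MBytes is an OS-level counter, so it only shows up in Perfmon itself,
-- and the /sec counters here are cumulative totals since the instance started.)
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN
      (N'Page life expectancy',
       N'Memory Grants Pending',
       N'Target Server Memory (KB)',
       N'Total Server Memory (KB)',
       N'Batch Requests/sec',
       N'SQL Compilations/sec');
```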

While data starts coming into the graph, watch our 30-minute video on How to Prove Your SQL Server Needs More Memory. I explain what the counters mean and how to interpret them:

The takeaways, arranged from easy to hard:

If your server is slow and you’ve got less than 64GB of memory, learn how to make the business case as I explain in the video. It’s smarter to buy $500 worth of memory than to spend days struggling with a problem and making risky changes to your server. Sure, the business is going to argue against you – that’s their job, and it’s your job to clearly explain the pros and cons of each side. You have to persuade.

If Memory Grants Pending is averaging over 0, queries can’t even start because they can’t get enough memory. You may have queries that have incorrect memory grants, or you may just desperately need more RAM.
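
If you want to catch that in the act, a quick sketch against sys.dm_exec_query_memory_grants shows which queries are stuck waiting right now:

```sql
-- Queries that have asked for a memory grant but haven't received one yet.
SELECT session_id, requested_memory_kb, wait_time_ms
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;
```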

If Compilations/sec is over 10% of Batch Requests/sec, SQL Server may not be able to cache execution plans. This causes increased CPU use and slower queries. There are a lot of gotchas with this one, though – now the hard work starts, because you have to spend time analyzing your workload to see if the execution plans can even be reused.
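
Here’s a rough way to eyeball that ratio from the same DMV. Keep in mind these values are cumulative since the instance started, so this is an average rather than a point-in-time rate:

```sql
-- Compilations as a percentage of batch requests, averaged since startup.
SELECT CAST(100.0 * c.cntr_value / NULLIF(b.cntr_value, 0) AS DECIMAL(6, 2))
           AS compile_pct_of_batches
FROM sys.dm_os_performance_counters AS b
JOIN sys.dm_os_performance_counters AS c
  ON  b.counter_name = N'Batch Requests/sec'
  AND c.counter_name = N'SQL Compilations/sec';
```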

If Page Life Expectancy is low, you may need memory to cache more data pages. In the video, I explain how to start with tuning indexes and queries first (since that’s typically a lower cost than buying more memory once you’re over 64GB). I mention these tools:
