Scaling SQL Server: Growing Out


Scaling up is hard; scaling out is even harder. We all know that we can scale reads by adding some kind of replication, read-only copies of the database, or a massive caching layer. What happens when we have to scale writes?

Borrowing Brent’s grocery store metaphor, a database that is effectively scaled for reads will have ample parking, the aisles will be wide and well lit, and there will be plenty of high-capacity shopping carts available for everyone to use. There are even 20 staffed checkout lines. The problem is that goods only enter the store through a single loading dock with space for one truck. Shoppers can get in or out quickly, but there’s no way for new groceries to get to the shelves.

If we were to redesign the grocery store to handle more shipments, we would need to ask ourselves:

  • How often do we need new shipments? Do we need more frequent shipments, or more shipments arriving at the same time?
  • Where do those shipments need to go in the store?
  • Do we need to put similar items together on the truck?

Scaling Out For More Produce

To get more data into SQL Server, we need more places to write. There comes a point when a single server can’t handle any more writes, no matter how much hardware is thrown at the problem. Writes also have the distinct problem of issuing locks and hanging on to them until the lock passes out of scope (be that an individual statement or an entire transaction).

We can approach the problem a few ways:

  • Writing to one server and reading from others
  • Writing to many servers and reading from many servers
  • Finding a new job

Since this is an article about solving the problem, we can rule out finding a new job right away. That leaves us with two possible solutions.

Write Primary

This is a tried and true way to approach the problem of scaling writes. Mixing reads and writes on a single server causes a lot of locking and blocking; moving reads to a separate server eliminates much of that contention and frees the primary to handle writes.

Moving reads to a new server solves the problem of writing quickly enough to our primary server, but it just spreads the problem out to all of the read replicas. Instead of troubleshooting locking and blocking problems on a single server, someone will have to troubleshoot locking and blocking problems combined with whatever mechanism we’re using to move data to the read servers. Instead of having one problem, we now have a lot of problems. And, to put a big red cherry on top of this sundae of problems, eventually there will be problems scaling writes on the single write server.
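To make the trade-off concrete, here’s a minimal sketch of application-side read/write splitting. The server names and connection strings are hypothetical, and the mechanism that moves data from the primary to the replicas (replication, log shipping, availability groups) is deliberately left out.

```python
# Minimal sketch of read/write splitting at the application layer.
# Server names and connection strings are hypothetical; how data gets
# from the primary to the replicas is a separate problem entirely.
import random

PRIMARY = "Server=sql-primary;Database=Sales;Trusted_Connection=yes;"
READ_REPLICAS = [
    "Server=sql-replica-01;Database=Sales;Trusted_Connection=yes;",
    "Server=sql-replica-02;Database=Sales;Trusted_Connection=yes;",
]

def connection_string(is_write: bool) -> str:
    """All writes go to the primary; reads are spread across the replicas."""
    if is_write:
        return PRIMARY
    return random.choice(READ_REPLICAS)

if __name__ == "__main__":
    print(connection_string(is_write=True))   # always the primary
    print(connection_string(is_write=False))  # one of the replicas
```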

Sharing the Load

Instead of writing to a single server, it’s possible to write to many servers. SQL Server supports peer-to-peer replication, MySQL can be sharded through a driver, PostgreSQL has the Postgres-XC project, and other databases have their own solutions. Clearly this is a problem that people have approached before.

A major difficulty with something like peer-to-peer replication is determining where to write data. Sometimes this is done with a discriminator column: some attribute of the data determines which server receives the write. This could be first name, user name, email address, or an arbitrary value assigned at object creation. Schemes like this work well with a fixed number of servers and an even distribution of data, but natural keys are prone to skew based on regional and linguistic preferences. Splitting the alphabet into even chunks won’t yield an even distribution of data.
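As a rough illustration of a discriminator scheme, the sketch below routes writes by the first letter of a user name. The letter ranges and server names are made up; the point is that evenly sized ranges of the alphabet don’t translate into evenly sized piles of data.

```python
# Sketch of discriminator-based routing: split the alphabet into even ranges.
# Ranges and server names are hypothetical. Real name distributions are
# skewed, so "even" letter ranges rarely produce even data volumes.
ALPHABET_RANGES = {
    ("a", "f"): "sql-shard-1",
    ("g", "m"): "sql-shard-2",
    ("n", "s"): "sql-shard-3",
    ("t", "z"): "sql-shard-4",
}

def route_by_username(username: str) -> str:
    """Pick a server based on the first letter of the user name."""
    first = username[0].lower()
    for (low, high), server in ALPHABET_RANGES.items():
        if low <= first <= high:
            return server
    raise ValueError(f"No range covers {username!r}")

print(route_by_username("alice"))  # sql-shard-1
print(route_by_username("zafar"))  # sql-shard-4
```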

The other option is to partition the data randomly. We could assign a random number when data is initially written, but pseudo-random numbers aren’t entirely random, and there’s no guarantee they’ll spread the data evenly.

If you can’t trust randomness, what can you trust? Math.

Hashing algorithms make it easy to take an input of any length and produce a deterministic, fixed-length output. The best part is that good hashing algorithms have the interesting side effect of distributing data (roughly) evenly across that fixed-length output space. Pairing such a hash with a stable scheme for mapping hash ranges to servers is usually called consistent hashing. However the hash is generated, it’s easy to divide the total range of hashed values into multiple buckets and spread writes out across many databases. This has been discussed at length throughout computer science and in multiple papers, including Consistent hashing and random trees: distributed caching protocols for relieving hot spots on the World Wide Web and Dynamo: Amazon’s Highly Available Key-value Store.
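Here’s a minimal sketch of the idea: hash each server (plus a few virtual nodes) onto a ring, hash each key the same way, and send the write to the first server clockwise from the key. The hash choice (MD5), the virtual node count, and the server names are all assumptions made for the example.

```python
# A minimal consistent-hash ring. Keys map to the first server clockwise on
# the ring, so adding or removing a server only remaps a fraction of the keys.
import bisect
import hashlib

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, vnodes=100):
        # Virtual nodes smooth out the distribution across servers.
        self._ring = sorted(
            (_hash(f"{server}#{i}"), server)
            for server in servers
            for i in range(vnodes)
        )
        self._points = [point for point, _ in self._ring]

    def server_for(self, key: str) -> str:
        idx = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["sql-node-1", "sql-node-2", "sql-node-3"])
print(ring.server_for("customer:42"))  # one of the three nodes, chosen deterministically
```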

Where Does the Load Split?

Implementing a combination of consistent hashing and peer-to-peer replication isn’t easy. The decision to route a write has to be made somewhere.

  • In the application: a layer of code is written to determine how a write will be routed.
  • On the wire: instead of routing writes in our application, writes are sent to the appropriate server by a router that performs packet inspection looking for a hash value.

Both approaches have their merits. Routing in the application can make this process as easy as deploying a new DLL or configuration file. A determined developer could write code to look for new databases and partition writes accordingly. On the flip side, routing writes on the wire makes this routing transparent to any calling applications – no code will need to be changed to add additional databases. Router configuration changes will be required, but the changes can be performed at any time rather than waiting for a regular application deployment.
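For the application-side option, the routing rules can live in a deployable configuration file rather than in code. The sketch below is one hypothetical layout: hash values are mapped into a fixed number of buckets, and ranges of buckets are assigned to write servers in config, so adding a database becomes a configuration change.

```python
# Sketch of application-side routing driven by a deployable config file.
# The config layout and server names are hypothetical; the point is that
# adding a database is a configuration change, not a code change.
import hashlib
import json

EXAMPLE_CONFIG = json.loads("""
{ "buckets": 1024,
  "ranges": [
    { "from": 0,   "to": 511,  "server": "sql-write-1" },
    { "from": 512, "to": 1023, "server": "sql-write-2" }
  ] }
""")

def bucket_for(key: str, buckets: int) -> int:
    """Map a key into one of a fixed number of hash buckets."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % buckets

def server_for(key: str, config=EXAMPLE_CONFIG) -> str:
    bucket = bucket_for(key, config["buckets"])
    for r in config["ranges"]:
        if r["from"] <= bucket <= r["to"]:
            return r["server"]
    raise ValueError(f"Bucket {bucket} is not mapped to any server")

print(server_for("order:20418"))
```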

Specialty Stores

Just as there are specialty stores in the real world, it may become necessary to create special-purpose data stores and separate the data. This is frequently called sharding, and it is a different approach to spreading out load. Rather than assume that all databases need the same data, sharding assumes that the data can be separated across servers by some sharding key. In applications with no reporting that aggregates across all of the data, sharding can be an easy way to scale out read and write performance: additional servers are added and new data can be written to them. Problems arise when one shard becomes heavily loaded and data needs to be moved to a new shard. This is a tricky operation at the best of times and requires careful planning.
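One common way to keep shard moves manageable is a shard directory: a lookup table that maps each sharding key to the shard that currently owns its data. The sketch below is only an illustration; the key names and shard names are hypothetical, and in practice the directory itself needs to live somewhere highly available.

```python
# Sketch of directory-based sharding: a lookup table maps each sharding key
# (here, a hypothetical tenant id) to the shard that owns its data. Moving a
# tenant means copying its rows, then updating this one directory entry.
SHARD_DIRECTORY = {
    "tenant-001": "sql-shard-a",
    "tenant-002": "sql-shard-a",
    "tenant-003": "sql-shard-b",
}

def shard_for(tenant_id: str) -> str:
    try:
        return SHARD_DIRECTORY[tenant_id]
    except KeyError:
        raise KeyError(f"{tenant_id} has not been assigned to a shard") from None

print(shard_for("tenant-003"))  # sql-shard-b
```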

Where Do I Split the Load?

Designing a scale out strategy isn’t easy. There are many factors to consider, and knowing how to scale out writes is just one thing to think about before designing your overall architecture.


2 Comments

  • A system I worked on recently used SQL Service Broker to scale. The central DB was replicated to several servers which the application used (network load balanced). Procs on these servers wrote to a queue instead of tables, and the messages were then sent on to the central DB.
    There they could be processed in batches, reducing locking, and the number of threads could be controlled.
    The downside was a slight lag in the data replicating back to the server that requested the change, though the application was a forum where that was acceptable.

  • We had these types of issues with incoming data streams. Lots of incoming data and locking/blocking issues.

    Luckily our incoming data could be easily spread across multiple DB Silos based on the facility # (the key) the data comes from. Certain facilities are assigned to specific DB Silos.

    The issue is when we have to move a facility to a different DB Silo for balancing or other business reasons. You have to bring all of the historical data with the move for that facility. THAT is a complex process.

    There are solutions for scaling out. But they can create other issues. 🙂 Be careful.
