Which Database Is Right for Me?

When you start developing a new application, how do you pick the database back end? Most people pick what they know, or what’s already installed on their system: the tried and true relational database. Let’s face it: nobody is getting fired for using a relational database. They’re safe, well understood, and there’s probably one running in the datacenter right now.

Here’s the catch: if you’re using a relational database without looking at anything else, you’re potentially missing out. Not all databases work well for all workloads. Microsoft SQL Server users are already aware of this; SQL Server ships two distinct database engines – one for transactional workloads and one for analytical workloads (SQL Server Analysis Services).

When you start a new project, or even add a substantial feature, you should ask yourself [questions about your data][1]. Here are a few sample questions:

  • Why am I storing this data?
  • How will my users query the data?
  • How will my users use this data?

The Relational Database

Relational databases became the de facto choice for a number of reasons. In addition to being based on sound mathematical theory and principles, relational databases make it easy to search for specific information, read a select number of columns, and understand structure by querying the metadata stored in the database itself.

The self-describing nature of a relational database provides additional benefits. If you’ve created a relational database with referential integrity and schema constraints, every record in the database is guaranteed to be valid. By enforcing integrity and validity at the data level, you know that any data in the database is always correct.

The adoption of SQL as a standard in the mid-1980s finalized the victory of the relational database for the next 25 years. Using SQL, it became easy to create ad hoc queries that the original database developers had never dreamed of. The self-describing nature of relational databases, combined with SQL’s relatively simple syntax, was expected to make it easy for savvy business users to write their own reports.

In short, relational databases make it easy to know that all data is always correct, query data in many ways, and use it in even more ways. With all of these benefits, you’d think that people would have no need for a database other than a relational database.

Document Databases

Document databases differ from relational databases primarily in how they store data. Relational databases are based on [relational theory][2]. While real-world databases deviate from relational theory, the important thing to remember is that relational databases structure data as rows in tables. Document databases store documents in collections. A document closely resembles a set of nested attributes or, if you’re like me, you might think of it as a relatively complete object graph. Rather than breaking an application entity out into many parts (order header and line items), you store application entities as logical units.

The upside to this is that document databases allow developers to create software that reads and writes data in a way that is natural for that particular application. When an order is placed, the order information is saved as a logical and physical unit in the database. When that order is read out during order fulfillment, one order record is read by the order fulfillment application. That record contains all of the information needed.

Unlike relational databases, document databases do not have a restriction that all rows contain the same number of columns. We should store similar objects in the same collection, but there’s no mandate that says the objects have to be exactly the same. The upside of this is that we only need to verify that data is correct at the time it is written. Our database will always contain correct data, but the meaning of “correct” has changed slightly. We can go back and look at historical records and know that any record was valid when it was written. One of the more daunting tasks with a relational database is migrating data to conform to a new schema.

While data flexibility is important, document databases may make it difficult to perform complex queries. Document databases typically do not support what many database developers have come to think of as standard operations: there are no joins or projections. Instead, querying logic has to move into the application tier.

Database developers will find the following query to be a familiar way to locate users who have never placed an order.
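A minimal version, assuming hypothetical Users and Orders tables related on a UserId column, might look like this:

    -- Users with no matching row in Orders; table and column names are assumptions.
    SELECT u.UserId, u.UserName
    FROM Users AS u
    LEFT OUTER JOIN Orders AS o ON o.UserId = u.UserId
    WHERE o.OrderId IS NULL;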

With a document database, a naive approach might be to write queries that retrieve all users and all orders and then merge the two result lists. A more practical approach is to cache a list of order IDs within the user object to improve lookup performance. This seems like a horrible idea to many proponents of relational thinking, but it allows for rapid lookups and is considered an acceptable substitute for joins in a document database. Finding the users who have never placed an order becomes as simple as looking for users without an orders property. Some document databases also support secondary indexes, making it possible to improve lookups further.
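
Here’s a minimal sketch of that lookup using Python and the PyMongo driver; the database name and field names below are assumptions, though $exists is a standard MongoDB operator:

    from pymongo import MongoClient

    client = MongoClient("localhost", 27017)
    db = client.shop  # hypothetical database name

    # Users who have never placed an order have no "orders" property.
    for user in db.users.find({"orders": {"$exists": False}}):
        print(user["name"])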

Document databases are a great fit for situations where an entire object graph will always be retrieved as a single unit. Additionally, document databases make it very easy to model data where most records share a common core but individual records may differ.

Key/Value Store

Key/value stores are simple data stores. Data is identified by a key when it is stored, and that key is used to retrieve the data at some point in the future. While key/value stores have existed for a long time, they have gained popularity in recent years.

Many data operations can be reduced to simple operations based on primary key and do not require additional complex querying and manipulation. In addition, key/value stores lend themselves well to being distributed across many commodity hardware nodes. A great deal has been written about using key/value stores. [Amazon’s Dynamo][2] is an example of a well documented and much discussed key/value store. Other examples include Apache Cassandra, Riak, and Voldemort.

Key/value stores typically only offer three data access methods: get, put, and delete. This means that joins and sorting must be moved out to client applications. The data store’s only responsibility is to serve data as quickly as possible.
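
As a rough sketch of what that pushes onto the application, here is a toy in-memory stand-in for a key/value store in Python; the interface is hypothetical, and real client libraries differ:

    import json

    class KVStore:
        """A toy get/put/delete store; real stores distribute this over a cluster."""
        def __init__(self):
            self._data = {}

        def put(self, key, value):
            self._data[key] = json.dumps(value)

        def get(self, key):
            raw = self._data.get(key)
            return json.loads(raw) if raw is not None else None

        def delete(self, key):
            self._data.pop(key, None)

    store = KVStore()
    store.put("user:42", {"name": "Alice", "order_ids": ["order:1", "order:7"]})
    store.put("order:1", {"total": 19.99})
    store.put("order:7", {"total": 5.00})

    # The application, not the database, stitches related records together.
    user = store.get("user:42")
    orders = [store.get(order_id) for order_id in user["order_ids"]]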

Of course, if key/value stores did nothing apart from serve data by primary key, they wouldn’t be terribly popular. What other features do they offer to make them desirable for production use?

Distributed

It is very easy to scale a key/value store beyond a single server. As servers are added, each server in the cluster becomes responsible for a smaller portion of the data. By distributing data, it’s possible to get higher throughput and better data durability than is possible with a monolithic server.

Partitioning

Many key/value stores use a technique known as consistent hashing to divvy up the key space. Consistent hashing divides the key space into many chunks and distributes responsibility for those chunks across many servers. Think of it like this: when you register in person at an event, check-in is frequently split alphabetically across separate tables. Splitting responsibility for check-ins across the alphabet means that, in theory, every attendee can be served faster because multiple volunteers are signing people in. Likewise, we can spread responsibility for different keys across different servers and spread the load evenly.
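
Here is a minimal consistent-hash ring in Python, with made-up server names; real implementations layer replication and rebalancing on top of this idea:

    import bisect
    import hashlib

    class HashRing:
        def __init__(self, nodes, vnodes=64):
            # Each server gets many "virtual" positions on the ring,
            # which smooths out the key distribution.
            self._ring = sorted(
                (self._hash(f"{node}#{i}"), node)
                for node in nodes
                for i in range(vnodes)
            )
            self._hashes = [h for h, _ in self._ring]

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def node_for(self, key):
            # Walk clockwise to the first virtual node at or past hash(key).
            idx = bisect.bisect(self._hashes, self._hash(key)) % len(self._ring)
            return self._ring[idx][1]

    ring = HashRing(["node-a", "node-b", "node-c"])
    print(ring.node_for("user:42"))  # the same key always maps to the same server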

Replication

Data is replicated across many servers. Replicating data has several advantages over a single monolithic, robust data store. When data is stored on multiple servers, the failure of any single server is not catastrophic; data can still be read and written while the outage is resolved.

Hinted Handoff

Hinted handoff mechanisms make it easy to handle writes during a server outage. If a server is not available to write data, other servers will pick up the load until the original server (or a replacement) is available again. Writes will be streamed back to the server responsible for the data once it comes back online. Much like replication, hinted handoff is a mechanism that helps a distributed key/value store cope with the failure of an individual server.
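
A toy sketch of the idea in Python follows; the node model here is entirely hypothetical:

    class Node:
        def __init__(self, name):
            self.name = name
            self.alive = True
            self.data = {}
            self.hints = []  # writes held on behalf of unavailable peers

        def put(self, key, value):
            self.data[key] = value

        def deliver_hints(self):
            # Stream held writes to their rightful owners once they return.
            for target, key, value in list(self.hints):
                if target.alive:
                    target.put(key, value)
                    self.hints.remove((target, key, value))

    def write(key, value, primary, fallbacks):
        if primary.alive:
            primary.put(key, value)
        else:
            fallback = next(n for n in fallbacks if n.alive)
            fallback.put(key, value)                      # serve the write now...
            fallback.hints.append((primary, key, value))  # ...hand it off later

    a, b = Node("a"), Node("b")
    a.alive = False                  # simulate an outage on the primary
    write("user:42", {"name": "Alice"}, primary=a, fallbacks=[b])
    a.alive = True                   # the primary comes back online
    b.deliver_hints()
    assert "user:42" in a.data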

Masterless

Many distributed databases use a master server to coordinate activity and route traffic. Master/coordinator servers create single points of failure as well as singular bottlenecks in a system. Many distributed key/value databases bypass this problem by using a homogeneous design that makes all nodes equal: any server can perform the duties of any other server, and communication is accomplished via gossip protocols.

Resiliency

The previous features add resiliency and fault tolerance to key/value data stores. Combined, they make it possible for any node to serve data on behalf of any other node, survive data center problems, and tolerate hardware failures.

Column-Oriented Databases

Column-oriented databases store and process data by column rather than by row. Although commonly seen in business intelligence, analytics, and decision support systems, column-oriented databases are also seeing use in wide-table databases that may have sparse columns, multi-dimensional maps, or be distributed across many nodes. The advantage of a column-oriented approach is that data does not need to be consumed as an entire row – only the necessary columns need to be read from disk.

Column-oriented databases have been around for a long time; both Sybase IQ and Vertica are incumbents, and SQL Server Apollo is Microsoft’s upcoming column store, slated for release in SQL Server Denali. Google’s Bigtable, Apache HBase, and Apache Cassandra are newer entrants into this field and are the subject of this discussion. Bigtable, HBase, and Cassandra are different from existing products in this field: these three systems allow for an unlimited number of columns to be defined and categorized into column families. They also provide additional data model and scalability features.

I have to speak in generalities and concepts here since there are implementation differences between the various column-oriented databases.

Data Model – Row Keys

A row in a column-oriented database is identified by a row key of arbitrary length. Instead of system-generated keys (GUIDs or sequential integers), column-oriented databases use arbitrary strings, and it’s up to application developers to create logical key naming schemes. Because developers choose logical key naming schemes, data locality can be guaranteed (assuming keys are stored in sorted order).

The original Bigtable white paper mentions using row keys based on the full URL of a page, with domain names reversed. For example, www.stackoverflow.com becomes com.stackoverflow.www and blog.stackoverflow.com becomes com.stackoverflow.blog. Because data is sorted by row key, this scheme makes sure that all data from Stack Overflow is stored in the same location on disk.
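
Building such a key is a one-liner in Python (the function name is mine):

    # Reverse a hostname to build a Bigtable-style row key.
    def row_key(hostname):
        return ".".join(reversed(hostname.split(".")))

    print(row_key("www.stackoverflow.com"))   # com.stackoverflow.www
    print(row_key("blog.stackoverflow.com"))  # com.stackoverflow.blog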

Data Model – Columns & Column Families

Column families are an arbitrary grouping of columns. Data in a column family is stored together on disk. It’s a best practice to make sure that all of the columns in a column family will be read using similar access patterns.

Column families must be defined during schema definition. By contrast, columns can be defined on the fly while the database is running. This is possible because column families in a column-oriented database are sparse by default; if there are no columns within a column family for a given row key, no data is stored. It’s important to note that different rows don’t need to contain the same number of columns.
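
A toy model of that sparseness using nested dictionaries in Python; all names are illustrative:

    from collections import defaultdict

    # table[row_key][column_family][column] = value
    table = defaultdict(lambda: defaultdict(dict))

    table["com.stackoverflow.www"]["anchor"]["homepage"] = "Stack Overflow"
    table["com.stackoverflow.blog"]["contents"]["html"] = "<html>...</html>"

    # Different rows hold different columns; absent cells cost nothing.
    for row, families in table.items():
        for family, columns in families.items():
            print(row, family, sorted(columns))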

Indexing

Column-oriented databases don’t natively support secondary indexes. Data is written in row key order. However, there is no rule that data can’t be written in multiple locations: disk space is cheap; the CPU and I/O needed to maintain indexes are not.

The lack of secondary indexes may seem like a huge limitation, but it frees application developers from worrying about how indexes would be maintained across multiple distributed servers in a cluster. Instead, developers can focus on writing and storing data the same way that it needs to be queried.

N.B. Cassandra supports secondary indexes as of version 0.7.

Picking the Right Database

Ultimately, picking the right database depends on workload, expertise, and future plans. It’s worth considering the alternatives before settling on a relational database, and vice versa. They all serve different purposes and fill different niches, and the decision to store your data one way will have far-reaching implications for how that data is written, retrieved, and analyzed.




Comments

  • Document databases put the load of synchronizing denormalized data on the programmer (this is a manual job). I think this model is ridiculous. “Change the name of a widget” becomes “change the name of a widget in its document, then in all other documents which have the widget’s name embedded – and better not forget one when the code changes a year from now!” High price for performance; maybe we should just stick to plain old SQL optimizations?!

    • @Tobi, you’ve hit on one of the primary complaints about non-relational databases. The question to ask is “Do we really need to update all historical versions of the data?” If what you need is a canonical data store that offers a universal single version of the truth as it exists right now, then the only option is a relational database. If you can relax that restriction then other options like document databases become viable products.

      Ultimately, it’s a matter of looking at what your application needs and making the best decisions based on all the information available.

      • I know of very few applications where relaxing data correctness is an option. The most relaxation that would be possible would be slightly wrong stats counters in a bulletin board, for example. All other changes (a user changes his avatar, all posts need to be updated now) would need to be programmed and tested for.

        • Tobi – I used to feel that same way when I worked in enterprise applications, but the more that I’ve worked with dot-coms, the more flexibility I’ve found in certain applications.

          For example, if you’re tracking environmental temperatures around the world from thousands of devices all sending data over TCP/IP, you’re already dealing with a somewhat unreliable fabric of devices and networks. In a case like that, management might be completely fine with playing fast and loose with data integrity in order to handle more incoming data faster at a lower price point.

          Once you start working with this technology, you’ll be amazed at what business managers want to gather – but couldn’t justify in the days of relational databases.

          • True. I fully admit they have a fair use case (high-perf requirements and special cases where they just make sense). However, process-centric business apps (the ones with many interconnected tables) are an anti-case for many non-relational stores. I suspect process-centric apps are the majority by far, at least in my experience.

            I tried db4o, an object-db (superset of relational), once and had to realize that it still was not a replacement for relational because the tooling I am used to was just unmatched.

            I feel that too much focus is placed on the technology side, too little on tooling and the 100 little real-world problems.

  • Nigel Ainscoe
    May 8, 2011 5:28 am

    The more I look at document databases, the more I am seeing the possibilities they offer. I think the nub of the issue lies in these two questions “Does the data you are modelling currently conform to a fixed schema?” and “What is the likely frequency of schema change?”. If you have a fixed schema with a low rate of change then a relational DB is going to be a good solution, however when you need a flexible schema such as we see with many web page models (or is that object graphs) then a document database offers a lot.

    And to Tobi: to the question “what happens when a user changes his avatar?” I would respond with another question: “Just because they have changed their avatar, why does that mean that a historic post with their old avatar should be updated to display the new one?” And this question does not just apply to avatars.

    • This is an easy excuse 😉 Just change the specifications to conform with the technology.

      In my experience, changes to a database do not occur very frequently. Even if you add 10 columns per day to the database, which would be far above average, this would not require significant development resources. Just script the statements and add the columns to your entities, done. Handling the database rarely requires a significant percentage of development time.

  • Brent,

    Interestingly, nobody seems to mention multi-value/multi-dimensional databases anymore. I still work with one every day (although I do love my relational databases), so I guess I should mention them because they are a powerful, and often overlooked, option. You could say they somewhat closely resemble “document” databases, and they fit in the NoSQL camp.

    They store data as “files”, where each file has “attributes”, and each attribute can have a multi-valued list. They typically have their own SQL-like query languages and often intrinsically support object models. Multi-value databases include IBM’s U2/UniVerse, jBase, InterSystems’ Caché, Pick/D3, and QM, to name a few.

    Sometimes I wish there was a multi-value product that had management tools as good as SQL Server but didn’t cost as much as some of the current multi-value tools.

    Jon

  • Great post, Jeremiah. I really need to look closer at column-oriented databases.

    Piotr

  • Another variant along the lines of the column-oriented approach is the hybrid database, where data is stored by rows in a column-oriented fashion. In our DBXten plug-in for Oracle, Informix, and PostgreSQL, for example, we store N rows (where N is on the order of thousands) of values for a column family, by columns, in a single database row. We then build secondary indexes on the mins/maxes of these columns for each block. Storing the data by columns offers great compression, storing thousands of rows per block reduces database row overhead (logging, etc) and the secondary indexes allow queries to quickly zero in on the blocks containing the rows of interest.

