Connection strings – boring topic, right? Well, it turns out there are a few things that can save your behind.
Tip 1: Build your application with 3 connection strings.
If you want to scale, write your application from the start with 3 different connection strings:
- Writes and real-time reads
- Reads that can tolerate data older than 15 seconds (no writes allowed with this connection string)
- Reads that can tolerate data several hours old (like reports, and no writes allowed with this string)
In the beginning, all three connection strings will have the same content – they’ll all point to your production database server. When you need to scale, though, the production DBA can use different techniques to scale out each of those tiers.
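Here's a minimal sketch of what that looks like in application config. The server, database, and tier names are made up for illustration – the point is simply that all three strings start out identical:

```python
# Three logical connection strings, one physical server (for now).
# Server/database names below are hypothetical placeholders.
PROD = "Server=ProductionSQL;Database=MyApp;Integrated Security=True;"

CONNECTION_STRINGS = {
    "realtime": PROD,  # Tier 1: writes and real-time reads
    "recent":   PROD,  # Tier 2: reads up to ~15 seconds stale, no writes
    "stale":    PROD,  # Tier 3: reads hours old (reports), no writes
}
```

Later, the DBA scales each tier out just by changing the value behind `recent` or `stale` – no application code changes required.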
1. Writes and real-time reads – this is pretty tough for DBAs to scale, so we need to keep the number of queries here to a minimum. Our only option right now for writing to two SQL Servers at once for the same database is to use replication. I like replication in theory, but it isn’t a set-it-and-forget-it option when you’re trying to scale more load than one SQL Server can handle, plus do active development. I only recommend this option for clients with full-time DBA teams that are on call around the clock. Fortunately, you probably won’t have to pursue this option because most of your load will be using the other two connection strings anyway.
2. Reads that can tolerate data older than 15 seconds – here we start to have more options to scale out. When we don’t have to worry about merging data from multiple sources, we can use easier-to-manage tools like transactional replication or SQL Server 2012’s Availability Groups. This is where the bulk of your queries should go – but I mean bulk in terms of execution count, not in terms of load generated. Think of this as the default connection string for your application.
3. Reads that can tolerate data several hours old – users love reports. As your application grows, you won’t have time to make all of the report queries clean. Users will build really crazy reports with nasty, ugly queries that burn up all kinds of CPU and IO. There will come a point when management will have to make a decision: do they allow reports to slow down production in order to get near-real-time data, or do they shove these ugly queries over to a more delayed data source? Let this be a political decision, not a technical decision: if you’ve got a separate connection string already for queries that can be hours old, then we’ve got all kinds of ways to scale those reads out. My favorite technique for this is log shipping because I can log ship to an unlimited number of read-only database servers. I do have to kick running queries out when I restore, but we’ve got tricks for that too.
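One way to make these tiers concrete in application code is a small routing helper, so developers have to state how stale their data is allowed to be. This is a hypothetical sketch – the tier names and the four-hour cutoff for "several hours old" are my own assumptions, not gospel:

```python
def pick_tier(needs_write: bool, max_staleness_seconds: float) -> str:
    """Route a query to the cheapest tier that still satisfies it.
    The 15-second threshold matches tier 2 above; the 4-hour cutoff
    for 'several hours old' is an assumed example value."""
    if needs_write or max_staleness_seconds < 15:
        return "realtime"   # Tier 1: scarce resource, protect it
    if max_staleness_seconds < 4 * 3600:
        return "recent"     # Tier 2: the default for most reads
    return "stale"          # Tier 3: reports, log-shipped copies
```

A dashboard that refreshes every minute would call `pick_tier(needs_write=False, max_staleness_seconds=60)` and land on the tier 2 string, keeping tier 1 free for actual writes.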
To be really successful, dev managers need to police the use of these 3 connection strings – even long before we implement different SQL Servers:
- Writes and real-time reads – all code that uses this connection string must be signed off by a senior developer and a DBA. Must go through change control.
- Reads older than 15 seconds – all queries must be peer-reviewed.
- Reads several hours old – free-for-all. Anybody can write craptastic queries here, like with off-the-shelf reporting tools.
Treat that #1 connection string like a valuable resource, and your database apps will scale like crazy.
Tip 2: Get connection string examples at ConnectionStrings.com.
Much like DownForEveryoneOrJustMe.com, this is one of those simple sites that just does what it says on the label. Scroll down the page on the SQL Server 2008 examples and you might be stunned at just how many cool things you can do, including my next favorite tip.
Tip 3: Set up a backup server with the Failover Partner parameter.
SQL Server Database Mirroring lets us set up a primary server and a secondary server. All logged transactions are sent to the secondary server, either synchronously or asynchronously. If we’re really serious about availability, we can even set up a witness server that will automatically fail over the database to the secondary server. Your applications don’t have to change a thing – as long as you specify a Failover Partner in the connection string like this:
Data Source=ProductionSQLServerName;Failover Partner=ProductionSQLMirrorName;Initial Catalog=myDataBase;Integrated Security=True;
When your application tries to connect to the ProductionSQLServerName and times out, it’ll automatically try the failover partner. If it’s up, it’ll keep right on working as normal. You don’t have to get up in the middle of the night to change anything.
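Conceptually, what the client library does amounts to something like this. This is a pure-Python sketch, not the actual driver code – `connect` here stands in for whatever connection call your driver exposes:

```python
def connect_with_failover(connect, primary, partner, timeout=5):
    """Try the primary first; if it's unreachable, fall back to the
    failover partner. `connect` is a hypothetical stand-in for your
    driver's connect call."""
    try:
        return connect(primary, timeout=timeout)
    except OSError:
        # Primary timed out or refused the connection -- try the partner.
        return connect(partner, timeout=timeout)
```

The real drivers handle this for you when the Failover Partner keyword is present; the sketch just shows why no human intervention is needed when the primary goes down.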
Here’s the cool part: you don’t need to use database mirroring to use Failover Partner. Whether you’re using database mirroring, replication, log shipping, or duct tape, much like the honey badger, your applications don’t care. They’ll just try to connect to the Failover Partner name whenever the primary server is down.
If you wanna get really fancy, you can even move the failover partner around from server to server using aliases. Aliases are SQL Server’s equivalent of a hosts file: your client machines check the registry first to see if a SQL Server name is actually an alias that points somewhere else. Normally aliases are set up using SQL Server Configuration Manager, but they’re just registry settings, and that means you can roll aliases out via Active Directory Group Policy. (No, I don’t expect you to know how to do that, but it’s just another tool in the arsenal of big companies that need to scale.)
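Because aliases are just registry values, a Group Policy rollout boils down to pushing a registry fragment like this one. The alias name here reuses the mirror name from the connection string above; the target server and port are hypothetical, and the exact key path can vary between 32- and 64-bit clients, so treat this as a sketch:

```
Windows Registry Editor Version 5.00

; Hypothetical alias: clients asking for ProductionSQLMirrorName
; actually connect to NewMirrorServer over TCP port 1433.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo]
"ProductionSQLMirrorName"="DBMSSOCN,NewMirrorServer,1433"
```

Repoint that one value, and every client's failover partner moves to a new server without touching a single connection string.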
When you add this Failover Partner trick, just make sure you don’t leave the production database writable on both the production server AND the failover partner at the same time. Don’t use a dev server as the failover partner, for example. If your production server goes down for a simple reboot, your applications will start connecting to the dev server and writing their transactions in dev. I know you only want your production data in one database, not two, but remember – your application is the honey badger. He don’t care.