Nearly every day, we’re faced with a decision about quality and time: do you write quick and dirty code, or do you take the time to get it right? That decision is going to stick around a lot longer than you’d like. When I’m tuning T-SQL, I often hear things like, “Ugh, I remember writing that. I meant to fix that three years ago.” We all know that we need to write good, clean code, right? Nobody is debating the merits of paying down technical debt or writing good code.
How do you know that you’re accumulating technical debt? There’s no FICO score or credit report for your code. If the application is still functioning and unit tests are still passing, how can you tell that you’ve got technical debt? There are a few symptoms that I’ve seen that are good indicators of technical debt.
The Feared Function
We’ve all run into this before. There’s one function or stored procedure that nobody wants to touch. Every time someone sees the name of the function they groan and say, “We’ve been meaning to re-write that for a while.” This function starts off as a quick fix to add a new feature. The feature grows over time until hundreds, or even thousands, of lines of business logic are embedded in the database in a scalar function.
Unfortunately, there’s no easy cure for the feared function. As painful as it sounds, someone will usually need to sit down and map out the behavior of the code in question. From there, it’s possible to rewrite the code to operate in a set-based fashion, or even to push the logic up into the application tier. Performing heavy computation in the application tier is cheaper than performing it in SQL Server and, potentially, much faster, since developers have full control of the algorithms they’re using.
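To make the set-based rewrite concrete, here’s a minimal sketch. The table and function names (`dbo.Orders`, `dbo.Customers`, `dbo.GetCustomerTotal`) are hypothetical, invented purely for illustration:

```sql
-- Hypothetical scalar function: business logic buried in the database.
CREATE FUNCTION dbo.GetCustomerTotal (@CustomerID int)
RETURNS money
AS
BEGIN
    RETURN (SELECT SUM(TotalDue)
            FROM dbo.Orders
            WHERE CustomerID = @CustomerID);
END;
GO

-- Called per row, the function runs once for every customer:
SELECT c.CustomerID,
       dbo.GetCustomerTotal(c.CustomerID) AS Total
FROM dbo.Customers AS c;

-- The set-based rewrite does the same work in a single pass:
SELECT c.CustomerID, SUM(o.TotalDue) AS Total
FROM dbo.Customers AS c
LEFT JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID
GROUP BY c.CustomerID;
```

The `LEFT JOIN` keeps customers with no orders in the result, matching the scalar function’s behavior of returning `NULL` for them.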
Applications show users fully formatted data. Numbers are formatted beautifully for the user’s locale, time is displayed correctly, and HTML is added around key elements. It’s tempting to put presentation code in the database; if one stored procedure or view is changed to display formatted and localized time we can save hours or days of development and testing time in the application tier. Just one change fixes things for everyone, right?
While it’s convenient to put formatting code in the database, keep in mind that these tiny functions add up over time. Most monitoring tools filter out queries with short execution times. This is a good thing; otherwise the tools would spend all their time dealing with small queries. But these tiny presentation operations still add up on SQL Server, and database servers are very expensive places to perform presentation-driven work. Most reporting and programming frameworks provide templating tools that make it easy to display richly formatted information to users.
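As a small illustration of the trade-off, here’s a sketch against a hypothetical `dbo.Orders` table; the column names are assumptions:

```sql
-- Presentation logic in the database: every row pays the formatting cost,
-- and the output is locked to one locale.
SELECT OrderID,
       CONVERT(varchar(10), OrderDate, 101) AS OrderDateFormatted, -- MM/DD/YYYY
       '$' + CONVERT(varchar(20), TotalDue, 1) AS TotalFormatted   -- style 1: commas
FROM dbo.Orders;

-- Cheaper: return the raw values and let the application tier
-- format them for each user's locale.
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders;
```

The second query ships fewer bytes, does less CPU work on the server, and leaves localization where it belongs: in the presentation layer.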
The Master View
How often do you see a view that pulls back every column from nearly every table in the database? Don’t be shy; I bet you’ve seen this view, or even written it, far more often than you’d care to admit. It’s okay, we all make mistakes. The master view seems like a compelling idea – it encapsulates core business logic and functionality into a single view that can be used to compose additional queries. And, after all, SQL Server should be smart enough to look at the body of the view and optimize our query based on what we actually ask for, right?
Unfortunately, the world is not perfect. Sometimes SQL Server is unable to divine our meaning from the query we write. In these cases, instead of simplifying the view, SQL Server will execute the view code as written. What should have been a simple query returning a few columns from two or three tables becomes a huge mess where SQL Server queries every table in the master view and returns every column listed in the SELECT list of the query. This is clearly a bad thing.
What can you do about this? Barring finding a new job, you can start by rewriting queries that use the master view to touch as few tables and columns as possible. This is going to be a time-consuming process. The worst part is that the idea of the master view is dangerously convenient. Views should be composable, right? Unfortunately, real-world experience shows that this isn’t the case. The master view works great when you’re rapidly writing code, but once you need performance, it’s time to dig in and optimize your queries to make sure everything runs as fast as it can while doing as little work as possible.
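Here’s what that rewrite looks like in miniature. All of the table names below are hypothetical stand-ins for whatever your master view actually touches:

```sql
-- The master view: joins "everything to everything."
CREATE VIEW dbo.MasterView
AS
SELECT c.CustomerID, c.Name, o.OrderID, o.OrderDate,
       p.ProductID, p.ProductName, a.City, a.Country
FROM dbo.Customers AS c
JOIN dbo.Orders AS o      ON o.CustomerID = c.CustomerID
JOIN dbo.OrderLines AS ol ON ol.OrderID = o.OrderID
JOIN dbo.Products AS p    ON p.ProductID = ol.ProductID
JOIN dbo.Addresses AS a   ON a.CustomerID = c.CustomerID;
GO

-- This query only needs two columns from two tables,
-- but it may pay for every join in the view:
SELECT CustomerID, OrderDate
FROM dbo.MasterView;

-- The rewrite goes straight to the base tables:
SELECT c.CustomerID, o.OrderDate
FROM dbo.Customers AS c
JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID;
```

Note that the two queries aren’t always equivalent: the view’s extra inner joins can filter out rows, which is one more reason mapping out the view’s behavior has to come before the rewrite.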
Want to Know More?
If you want to learn more and you’re in the Seattle area next week, you can drop by SQL in the City on Monday, November 5th and hear more about what you can do to boost code quality, test code, and find a better way to write database code. And, if you’re at the 2012 PASS Community Summit, I’ll be presenting A Developer’s Guide to Dangerous Queries. This session is a discussion about patterns and anti-patterns that we see every day in databases.
Strong development practices don’t spring up overnight; they take time, effort, and teamwork. Database development practices are doubly hard because they involve many moving pieces – unit testing, integration testing, and deploying changes that could have potential side effects beyond changing logic. In this session, Microsoft SQL Server MVP Jeremiah Peschka will discuss ways users can move toward a healthy cycle of database development using version control, automated testing, and rapid deployment.
Tools Mentioned in the Webcast
- SQL Server Alerts – Having basic SQL Server alerts in place can give you warnings of possible hardware failure.
- tSQLt – A database unit testing framework for SQL Server.
- Fixed database roles and user-defined database roles help define granular security.
- Many libraries exist for performing database migrations – they’ll typically be bundled with an ORM. Even Entity Framework supports database migrations!
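Since tSQLt came up in the list above, here’s a hedged sketch of what a test looks like. The test class, procedure, and the `dbo.GetCustomerTotal` function under test are all hypothetical; `tSQLt.NewTestClass`, `tSQLt.FakeTable`, `tSQLt.AssertEquals`, and `tSQLt.Run` are the framework’s real procedures:

```sql
-- Create a test class (a schema that groups related tests).
EXEC tSQLt.NewTestClass 'OrderTests';
GO

CREATE PROCEDURE OrderTests.[test GetCustomerTotal sums a customer's orders]
AS
BEGIN
    -- Replace dbo.Orders with an empty fake so the test controls its contents.
    EXEC tSQLt.FakeTable 'dbo.Orders';
    INSERT INTO dbo.Orders (CustomerID, TotalDue)
    VALUES (1, 10.00), (1, 5.00);

    DECLARE @actual money = dbo.GetCustomerTotal(1);

    EXEC tSQLt.AssertEquals @Expected = 15.00, @Actual = @actual;
END;
GO

-- Run every test in the class; each test runs in a rolled-back transaction.
EXEC tSQLt.Run 'OrderTests';
```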