Hiring a DBA? Need to get a job description for the human resources folks? Here’s how to get started.
First, decide whether it’s a production or development DBA. Think of the database in terms of a fridge. When you run a restaurant, you need at least one (and probably several) refrigerators to keep your raw ingredients and your prepared dishes cold.
Your chefs rely on the fridges to get their jobs done. They have tons of training to pick the right ingredients to put in the fridge, prepare the food correctly, and know when to take things in & out of the fridge.
If your restaurant absolutely, positively cannot go down, you’ll end up hiring a handyman or facilities guy. He has to know how fridges work, and if a fridge can’t keep the food cold enough, he steps in to diagnose and fix it.
The chefs are your developers.
When you have a LOT of chefs, you hire development DBAs to organize the fridge and clean it out. They don’t usually write code, but if they do, the code is inside the database – they’re not writing presentation-layer code in C# or Java.
The handyman or facilities guy is your production DBA. He’s more concerned about the back side of the fridge than the front side. He doesn’t do any cooking himself – his job is keeping the fridges powered, cold, and running.
They all work with the fridges, but the similarities end there. Small shops might indeed hire one guy to buy the food, put it in the fridge, cook it, and fix the fridge when it breaks. However, those shops aren’t going to win any awards for food quality, and when the fridge breaks, the cooking stops while he fixes the fridge.
Sample Production DBA Job Description
This position’s job duties include:
- Ensure all database servers are backed up in a way that meets the business’s Recovery Point Objectives (RPO) – see the sample check after this list
- Test backups to ensure we can meet the business’s Recovery Time Objectives (RTO)
- Troubleshoot SQL Server service outages as they occur, including after-hours and weekends
- Configure SQL Server monitoring utilities to minimize false alarms
- As new systems are brought in-house, choose whether to use clustering, log shipping, mirroring, Windows Azure, or other technologies
- Install and configure new SQL Servers
- Deploy database change scripts provided by third party vendors
- When performance issues arise, determine the most effective way to increase performance including hardware purchases, server configuration changes, or index/query changes
- Document the company’s database environment
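To make the backup duties concrete, here’s a minimal sketch of the kind of check this role runs – it reads the backup history in msdb and shows how stale each database’s last full backup is (thresholds and alerting are up to you):

--When was each database last backed up in full? Compare against the RPO.
SELECT d.name AS database_name,
    MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
    AND b.type = 'D' --'D' = full database backup
GROUP BY d.name
ORDER BY last_full_backup;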
To do a great job in this position, experience should include:
- On-call troubleshooting experience with at least one production SQL Server for a year. You don’t have to be the only DBA or have DBA in your job description, but you should have been the one person that the company would call if the SQL Server service stopped working.
- Finding DMV queries to answer questions about server-level performance (a sample follows this list)
- Using free tools like sp_Blitz™ and sp_WhoIsActive to diagnose server reliability and performance issues
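For example, one of the most common server-level questions is “what has this server been waiting on?” Here’s a minimal sketch of the kind of DMV query a candidate should be comfortable writing, using sys.dm_os_wait_stats:

--Top waits since the last service restart. These counters are cumulative,
--so baseline them before drawing conclusions.
SELECT TOP 10
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;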
The following skills aren’t strictly necessary, but will earn you bonus points as a well-rounded candidate:
- Tuning T-SQL queries to improve performance
- Troubleshooting hardware using tools like Dell OpenManage, HP System Insight Manager, and IBM Director
Sample Development DBA Job Description
This position’s job duties include:
- Ensure that new database code meets company standards for readability, reliability, and performance
- Each week, give developers a list of the top 10 most resource-intensive queries on the server and suggest ways to improve performance on each (a sample query follows this list)
- Design indexes for existing applications, choosing when to add or remove indexes
- When users complain about the performance of a particular query, help developers improve the performance of that query by tweaking it or modifying indexes
- Conduct SQL Server lunch-and-learn sessions for application developers
- Advise developers on the most efficient database designs (tables, datatypes, stored procedures, functions, etc)
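Here’s a minimal sketch of how that weekly top-10 list might be pulled from the plan cache using sys.dm_exec_query_stats – this version orders by total CPU, but you could just as easily order by reads:

--Top 10 statements by total CPU since their plans were cached.
SELECT TOP 10
    qs.total_worker_time AS total_cpu_time,
    qs.execution_count,
    qs.total_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset) / 2) + 1) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;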
To do a great job in this position, experience should include:
- Writing and improving SQL Server T-SQL queries for at least a year. You may have technically had “C# Developer” or “Java Developer” in your job title, but you were known around the office as the go-to person for T-SQL questions.
- Designing tables and picking datatypes
- Using Profiler traces and other tools to find the most frequently run queries
- Using free tools like sp_BlitzIndex™ and DMV queries to answer questions about index usage
The following skills aren’t strictly necessary, but will earn you bonus points as a well-rounded candidate:
- On-call troubleshooting for SQL Server service outages
- Deciding whether clustering, log shipping, mirroring, replication, etc are the right fit to solve a business problem
Things I Didn’t Include In These DBA Job Descriptions
If you’re using any of the following technologies, mention it in your job description so that the candidates know what to expect:
- Failover clustering, SAN replication, and other high availability technologies
- SQL Server merge, peer-to-peer, or transactional replication
- LINQ, Entity Framework, NHibernate, or other ORMs
- Service Broker
- Analysis Services, Integration Services, or Reporting Services
There’s nothing wrong with having your production or development DBA work with those technologies, by the way – but they’re special technologies that require prominent placement in job descriptions.
When it comes to indexes, SQL Server is really helpful. It lets you see what indexes your queries are asking for, both in execution plans and in the missing index dynamic management views (“DMVs”). I like to look at the DMV missing index requests using sp_BlitzIndex™.
When you look at missing index requests, always remember one big caveat: these requests will never ask for or recommend a specific clustered index.
Let’s take a look at what this might mean against a slightly modified version of the AdventureWorks2012 sample database.
Hey, we have a high-value missing index!
Let’s say we run sp_BlitzIndex™ and it diagnoses a high-value missing index.
--To diagnose a database, we would run:
EXEC dbo.sp_BlitzIndex @database_name='AdventureWorks';
We scroll to the right on this line to get more info and see that the index request has an overall “benefit” of over one million. That benefit is a made-up number – it’s a combination of the number of times the index could have been used, the percentage by which SQL Server thought it would help the queries generating the request, and the estimated “cost” of those requests. These factors are all multiplied together to help bubble up the biggest potentially useful requests.
We also see that the index request is really quite narrow: it only wants a single key column, City. That seems really reasonable.
Scrolling over farther, we can see that this index could have been used about 49 thousand times. The queries that could have used it (quite possibly more than one) were rather cheap – their costs were less than one on average – and it would have improved them a whole lot (around 93% overall). We currently don’t have ANY nonclustered indexes on the table in question, so we don’t have to worry about creating a duplicate nonclustered index.
This index seems really reasonable. It would be used a lot, and it would help those queries out.
If we keep going to the right, we see some sample TSQL to create the suggested index in the far-right column:
CREATE INDEX [ix_Address_City] ON [AdventureWorks].[Person].[Address] ([City]);
Sure enough, that syntax will create a nonclustered index. That seems really good, but hold on a second!
Always look at the whole table
Just to the left of the “Create TSQL” column is a really important helper – the “More Info” column.
Copy the command out of the appropriate row. It will look something like this:
EXEC dbo.sp_BlitzIndex @database_name='AdventureWorks', @schema_name='Person', @table_name='Address'
This command helps you look at the whole table itself and assess whether there’s something you’re missing.
When you run this command, you’ll see a big-picture view of the indexes on the table, the missing index requests for the table, and the number and types of the columns in the table.
In this table, note that we don’t even have a clustered index. This table is a heap! Heaps have all sorts of wacky problems in SQL Server.
In this case, we have an OLTP database and we definitely want to avoid heaps to keep our sanity.
Our queries are requesting an index on the City column, but it looks an awful lot like our table was modeled with a different clustered index in mind (AddressID). Don’t just assume that the missing index request will always make the best clustered index. Take a look at your overall workload and the queries that use the table. Based on the whole workload and the overall schema, decide what the clustered index should be, whether you have a primary key, and whether the primary key is the same as the clustered index or different. Only after you have that design should you make decisions about your nonclustered indexes.
The clustered index is special
The clustered index in any table has special uses and significance. This index is the data itself, and it will be used in every nonclustered index in the table. If you are defining a new clustered index or changing a clustered index, SQL Server will need to do IO on every nonclustered index on the table as well. Always make sure to test your indexing changes and choose your index definitions carefully.
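To put that ordering into practice on our example table, here’s a minimal sketch – assuming we decide AddressID should be both the primary key and the clustering key, which is how the table appears to have been modeled:

--First, settle the clustered index / primary key design.
ALTER TABLE [Person].[Address]
    ADD CONSTRAINT [PK_Address_AddressID] PRIMARY KEY CLUSTERED ([AddressID]);
GO
--Only then layer on the nonclustered index the missing index DMVs asked for.
CREATE NONCLUSTERED INDEX [ix_Address_City] ON [Person].[Address] ([City]);
GO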
Want to try this out on your own?
Feel free! A copy of the scripts I used to set up AdventureWorks for this very simple demo is below. Note that this is only suitable for test environments, and the script will mess up your Person.Address table (so make sure you can restore a fresh copy of AdventureWorks2012 afterward). Get sp_BlitzIndex™ here
RAISERROR('Careful there! Run in sections.',20,10) WITH LOG;
GO
--******************
-- (C) 2013, Brent Ozar Unlimited.
-- See http://BrentOzar.com/go/eula for the End User Licensing Agreement.
--WARNING:
--This script is suitable only for test purposes.
--Do not run on production servers.
--This query may have a very long runtime on some systems.
--******************

--Modify this restore to fit your file system and preferences
IF DB_ID('AdventureWorks') IS NOT NULL
BEGIN
    USE master;
    ALTER DATABASE AdventureWorks SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
END
GO
RESTORE DATABASE AdventureWorks
FROM DISK=N'C:\MSSQL11.SQL2012CS\MSSQL\Backup\AdventureWorks2012-Full Database Backup.bak'
WITH MOVE N'AdventureWorks2012_Data' TO N'C:\MSSQL11.SQL2012CS\MSSQL\DATA\AdventureWorks2012_Data.mdf',
    MOVE N'AdventureWorks2012_Log' TO N'C:\MSSQL11.SQL2012CS\MSSQL\DATA\AdventureWorks2012_log.ldf',
    REPLACE, RECOVERY;
GO

USE [AdventureWorks];
GO

---Remove all the indexes from Person.Address
ALTER TABLE Person.BusinessEntityAddress DROP CONSTRAINT FK_BusinessEntityAddress_Address_AddressID;
GO
ALTER TABLE Sales.SalesOrderHeader DROP CONSTRAINT FK_SalesOrderHeader_Address_BillToAddressID;
GO
ALTER TABLE Sales.SalesOrderHeader DROP CONSTRAINT FK_SalesOrderHeader_Address_ShipToAddressID;
GO
DROP INDEX [AK_Address_rowguid] ON [Person].[Address];
GO
DROP INDEX [IX_Address_AddressLine1_AddressLine2_City_StateProvinceID_PostalCode] ON [Person].[Address];
GO
DROP INDEX [IX_Address_StateProvinceID] ON [Person].[Address];
GO
ALTER TABLE [Person].[Address] DROP CONSTRAINT [PK_Address_AddressID];
GO

--Run a query a bunch of times
SET NOCOUNT ON;
SET STATISTICS TIME, IO OFF;
GO
DECLARE @i INT = 0;
WHILE @i < 10000
BEGIN
    DECLARE @AddressLine1 NVARCHAR(60),
        @AddressLine2 NVARCHAR(60),
        @City NVARCHAR(30),
        @StateProvinceCode NCHAR(3),
        @PostalCode NVARCHAR(15);

    SELECT @AddressLine1 = AddressLine1,
        @AddressLine2 = AddressLine2,
        @City = City,
        @StateProvinceCode = ps.[StateProvinceCode],
        @PostalCode = PostalCode
    FROM [Person].[Address] AS pa
    JOIN [Person].[StateProvince] AS ps ON pa.StateProvinceID = ps.StateProvinceID
    WHERE City = 'Kenosha';

    SET @i = @i + 1;
END
GO
DECLARE @i INT = 0;
WHILE @i < 39291
BEGIN
    DECLARE @countme BIGINT;

    SELECT @countme = COUNT(*)
    FROM [Person].[Address] AS pa
    JOIN [Person].[StateProvince] AS ps ON pa.StateProvinceID = ps.StateProvinceID
    WHERE City = 'San Diego' AND ps.StateProvinceCode = 'CA';

    SET @i = @i + 1;
END
GO

--Diagnose!
EXEC dbo.sp_BlitzIndex @database_name='AdventureWorks';
GO
--Look at the Person.Address table specifically
EXEC dbo.sp_BlitzIndex @database_name='AdventureWorks', @schema_name='Person', @table_name='Address';
GO
Sometimes, the best stories are the horror stories. Join Brent as he talks about some of the worst situations he’s seen in his years of database administration. He’ll share the inspiration behind some of his favorite entries at http://DBAreactions.tumblr.com. We’ll either laugh or cry. Or both.
Liked that webcast? We’ve got many more coming up – just check the boxes and put in your email address.
Have you ever wished your SQL Server could have an identical twin, holding the same data, in case you ever needed it? SQL Server mirroring provides just that, and you can choose if you want it for high availability or disaster recovery. If you’ve ever been curious about what mirroring really is, and what the pros and cons are, this is the session for you.
From the Dept of Corrections: During the webcast, a viewer asked in Q&A if automatic page repair was a one-way or two-way street. Kendra answered that if the mirror gets an IO error, it will go into a suspended state. This is somewhat correct but incomplete – the mirror will also try to correct the issue with a page from the principal and then attempt to resume mirroring. More info here.
Occasionally you check out job listings and wonder, “Could I have a better job?” If you’ve been working as a database administrator for a few years, it’s time to learn how to tell a dream job from a potential nightmare. Join Kendra Little for a 30 minute guide on how to read hidden messages in job listings and find the right next step for your career.
Liked this video? Check out our upcoming free webcasts.
Fast – When developers ask how quickly a piece of code needs to run, don’t say fast. Give them a finish line so that they can know when it’s time to move on. “This query currently does over 15mm logical reads. We don’t allow production OLTP queries that do over 100k logical reads – anything higher than that needs to hit the reporting server instead.” Developers don’t want to write slow queries, but if you don’t show them how to measure their queries, they don’t know what’s slow versus fast. Show them how to measure what makes a query successful, and they’ll start measuring it long before they bring it to you.
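One minimal way to hand developers that yardstick in SSMS – a sketch against the AdventureWorks Person.Address table used earlier on this site; the counts show up on the Messages tab:

--SQL Server prints one line per table on the Messages tab, in the form:
--Table 'Address'. Scan count 1, logical reads ...
SET STATISTICS IO ON;

SELECT City, COUNT(*) AS addresses
FROM Person.Address
GROUP BY City;

SET STATISTICS IO OFF;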
Sometimes - When code works unpredictably, don’t say it sometimes works. Look for things it has in common when it fails. Does it bomb every Sunday, or when it handles over ten customers, or when it’s only being called once at a time? Keep digging for environmental coincidences until you can give the developers or QA staff a lead. Sometimes (see what I did there?) I find that developers want access to the production server just because they can’t get enough specific troubleshooting help from the DBAs. “Your code fails sometimes” isn’t going to cut it.
Never - When the developer tries to deploy a trigger in production, don’t say, “We never allow that.” The business, not our emotional desire, dictates the technical solutions we use. We’re here to advise the business, and sometimes the business won’t go along with our advice. Our job is to lay out our concerns clearly and concisely, preferably with risk assessments and real-life stories, and then listen. I’d love to build every box and application perfectly, but we gotta ship if we’re going to keep paying salaries.
Fine - When you’re asked how the server is performing, don’t say it’s fine. Quantify it in terms of batch requests per second, average duration for a stored procedure, or some other metric that you can measure precisely. Bonus points if we correlate this number over time, like if we can say, “We normally average 1,400 to 1,500 batch requests per second during peak weekday hours, and we’re doing 1,475 right now.”
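Here’s a hedged sketch of how to grab that number yourself. Batch Requests/sec in sys.dm_os_performance_counters is a cumulative counter, so sample it twice and divide the difference by the elapsed seconds:

--Sample the cumulative counter twice, ten seconds apart.
DECLARE @sample1 BIGINT, @sample2 BIGINT;

SELECT @sample1 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec';

WAITFOR DELAY '00:00:10';

SELECT @sample2 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec';

SELECT (@sample2 - @sample1) / 10 AS batch_requests_per_sec;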
Large – When developers ask how big a table or database or index is, don’t say large. What’s large to you is small to someone else, and vice versa. Give exact numbers: number of terabytes in the largest database, number of databases, number of rows in the largest table, or the number of times you’ve updated your resume in terror because the backup failed.
It Depends – DBAs love giving this answer as a cop-out for tough questions, and if you’re not careful, you’ll come off as a condescending know-it-all. For best results, immediately follow this phrase with the word “on”, as in, “It depends on the table’s access pattern – in read-focused systems like our data warehouse, we can have up to 10-15 indexes on our dimension tables, but in OLTP databases like our web site, we need to aim for 5 indexes or fewer per table.” The faster you can help someone to the answer they’re looking for, the more they’ll respect you as a partner, not an enemy combatant.
Moving between databases is hard enough; try using multiple databases in the same application and you might start thinking you’ve gone insane. Different application demands for accessibility, redundancy, backwards compatibility, or interoperability make this a real possibility in the modern data center. One of the biggest challenges of running a heterogeneous database environment is dealing with a world of data type differences. There are two main ways to work through this situation:
- Using a subset of data types.
- Creating custom data type mappings.
To make comparisons easier, I’m going to focus on SQL Server, PostgreSQL, and Azure Table Services.
Using a Subset of Data Types
The ANSI standard defines a number of data types that should be supported by database vendors but, as with all standards, there’s no guarantee that vendors will support all data types or even support them equally. The SQL standard’s data types include:
- character and character varying
- numeric types like smallint, integer, bigint, numeric, decimal, float, real, and double precision
- boolean
- date
- time (with or without time zone)
- timestamp (with or without time zone)
- interval
As an example of the differences between the ANSI standard and vendor implementations, the ANSI standard defines a TIMESTAMP data type that is implemented as a date and time with an optional time zone, whereas SQL Server defines TIMESTAMP as an arbitrary auto-incrementing unique binary number.
Taking a look around, it’s easy to see that there are major differences between databases. An easy way to resolve this problem is to use only a small subset of the available data types. This choice seems attractive when we’re working with a language that doesn’t support rich data types. Some languages only support a limited number of data types (C provides characters, numeric data types, arrays, and custom structs), while more advanced languages provide rich type systems.
Comparing our database solutions, the Azure Table Services data model supports a constrained set of data types. While rich type systems are valuable, the Table Services data model provides everything needed to build complex data structures. The simple data model also makes it easy to expose Azure Table Services data as ATOM feeds that can be consumed by other applications. By opting for simplicity, Table Services can communicate with a variety of technologies, regardless of platform.
The downside of restricting an application to a limited set of data types is that it may become very difficult to store certain data in the database without resorting to writing custom serialization mechanisms. Custom serialization mechanisms make it impossible for users to reliably report on our data without intimate knowledge of how the data has been stored.
Compare the supported Azure Table Services data types with SQL Server 2008 R2’s data types and PostgreSQL’s data types. There’s some overlap, but not a lot. Limiting your application to a subset of datatypes is really nothing more than limiting your application to a subset of data that it can accurately store, model, and manipulate. Everything else has to be forced through custom serialization or left out.
Custom Data Type Mappings
Let’s assume we have an application that is built using PostgreSQL as the primary OLTP back end. We can expose a lot of our functionality through our cloud services as simple integers and strings, but there are some things that aren’t assured to work well when we move across different OLTP platforms. We can’t always map data types – how does inet map to SQL Server or Azure Table Services? There’s no immediately apparent way to map the inet data type to any other data type.
Clearly, custom data type mappings are not for the faint of heart. Decisions have to be made about gracefully degrading data types between databases so they can be safely reported on and reconstituted in the future. Depending on the application, inet could be stored as Edm.String in Azure Table Services or VARCHAR(16) (which only works if we’re ignoring IPv6 addresses and the netmask).
If this sounds like a recipe for confusion and disaster, you might be on to something. Using custom data type mappings across different databases can create confusion and requires custom documentation, but there is hope.
Applications using the database only need to know about the data types that are in the database. Reporting databases can be designed to work with business users’ reporting tools. As long as the data type mappings do not change, it’s easy enough to keep the reporting databases up to date through automated data movement scripts.
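As a sketch of what one documented mapping might look like – the table and column names here are made up, and the VARCHAR width is an assumption sized for IPv6 text plus an optional /prefix:

--PostgreSQL side: a native inet column.
CREATE TABLE client_sessions (
    session_id bigint PRIMARY KEY,
    client_ip inet NOT NULL
);

--SQL Server side: the same data degraded to a documented string format.
CREATE TABLE client_sessions (
    session_id BIGINT PRIMARY KEY,
    client_ip VARCHAR(49) NOT NULL --e.g. '2001:db8::1/64'
);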
What Can You Do?
There’s a lot to keep in mind when you’re planning to deploy an application across multiple databases. Understanding how different databases handle different data types can ease the pain of querying data in multiple databases. There’s no reason to limit your application to one database – just be aware that there are differences between platforms that need to be taken into account.
Google has created its own cross-application/platform data serialization layer called protocol buffers. If you’re looking at rolling your own translation layer, protocol buffers may fit your needs.
Erika loves having fresh flowers around the house. Every Saturday morning, I pick up a bouquet at a farmer’s market or grocery store and put it in a vase for her. I’m slowly upping my game by learning more and more about the art of arranging flowers.
When I say flowers, I bet you think about the English Garden style: a big, complex vase with all kinds of flowers crammed into every nook and cranny. It’s an explosion of color and life.
That’s way too stuffy for us. We’re into minimalism, clean lines, and letting materials speak for themselves. I like plucking one or two of the more beautiful or unusual flowers and putting them in their own vase. This leans toward the Ikebana style of Japanese flower arrangement, specifically the Nageire type. (I don’t even want to think about how badly I’m going to mispronounce these if I ever have to say them out loud.)
Writing database code is like arranging flowers.
If you show someone your bouquet, they might not like it. They might give you a million reasons about why it’s not right or why another way is better. That’s not the point – does it produce the results you want?
If your goal is to get to market quickly and cheaply, just buy a premade bouquet from the grocery store, throw the flowers in the vase and be done with it. Use LINQ, Entity Framework, NHibernate, or whatever code tools make your job easy.
If you translate your app code into SQL code, you’re building an English Garden. You start by declaring variables at the center, then populate those variables by checking configuration tables, then spin out to more and more other tables, getting your results in loops and setting values one at a time. This is exactly how developers have always been taught to arrange their flowers, and it works just fine. Once you’re used to doing it, you can bang that code out quickly, and the results are attractive.
But if you need it to be beautifully fast, you need Ikebana. You need very clean, very minimalist code that gets the job done in as few statements as possible. In a database environment, this means set-based code that avoids cursors and loops.
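As a hedged illustration – the table and columns here are made up – this is the same work done row by row, then as a single set-based statement:

--English Garden: loop through rows one at a time with a cursor.
DECLARE @CustomerID INT;
DECLARE customer_cursor CURSOR FOR
    SELECT CustomerID FROM dbo.Customers WHERE IsActive = 1;
OPEN customer_cursor;
FETCH NEXT FROM customer_cursor INTO @CustomerID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Customers
    SET LastReviewed = GETDATE()
    WHERE CustomerID = @CustomerID;
    FETCH NEXT FROM customer_cursor INTO @CustomerID;
END
CLOSE customer_cursor;
DEALLOCATE customer_cursor;

--Ikebana: one set-based statement that does the same work.
UPDATE dbo.Customers
SET LastReviewed = GETDATE()
WHERE IsActive = 1;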
While clean, Ikebana-style database code is simple to behold, it’s deceptively complex to build. The first step is moving as much logic as possible from the database server to the application server – starting with the ORDER BY clause. If you’re not fetching just the TOP X rows, then do all sorting in the application server. Removing just that one line from a query will often cut its cost dramatically. Your development platform (.NET, Java, Cobol, whatever you kids are using these days) is really good at scaling out CPU- and memory-intensive work like sorting, and you’re already really good at splitting out your work into multiple application servers. Leverage that capability.
Think of it like pruning your code – remove all the things that database servers don’t do beautifully, and what you’re left with will be gorgeous.
Are you frustrated by third party applications that you can’t change, but you have to support? Tired of beating your head against the wall when your users complain about things you can’t fix? In this 30-minute session, Brent Ozar will show you his favorite tricks to get the most performance without losing support. He’ll show you how to interact with vendors and get what you want – without getting heartburn.
Like that video? We’ve got half a dozen more scheduled for upcoming Tuesday lunches. Click the boxes you want and sign up for free.
How would you like to go to a SQL Server conference in Las Vegas where the sessions are taught by Brent Ozar Unlimited, SQLskills, SQLServerCentral, and SQL Sentry?
Yep. Me, Jeremiah, Kendra, Kimberly Tripp, Paul Randal, Jonathan Kehayias, Erin Stellato, Steve Jones, and Aaron Bertrand. Between us, that’s 3 MCMs, 2 MCM instructors, 7 MVPs, and 2 MVP Regional Directors.
If you’re serious about learning SQL Server, this should be the very first conference on your fall priority list. Check out some of these sessions:
- Troubleshooting SQL Servers in VMware and SANs (me)
- Understanding Locking, Blocking, and Isolation Levels (Kimberly)
- Understanding Logging and Recovery (Paul)
- X-Ray Glasses for Your Indexes (Kendra)
- Branding Yourself for a Dream Job (Steve)
- Deadlocking for Mere Mortals (Jonathan)
- Hadoop: The Great and Powerful (Jeremiah)
- Making the Leap from Profiler to Extended Events (Erin)
How much would you pay for three days of awesome learning at a conference like this with top-notch speakers, all killer no filler?
And hey, it’s Vegas, so it’s a great team building city, like when Jeremiah and I rented cars last time and, uh, built teams. Yeah.
But wait – there’s more! Check out the pre-con workshops:
- Accidental DBA Starter Kit (me, Jeremiah, Kendra – Pre-Con Sunday) - You’re responsible for managing SQL Servers, but you’ve never had formal training. You’re not entirely sure what’s going on inside this black box, and you need a fast education on how SQL Server works. In one day, you’ll learn how to make your SQL Server faster and more reliable. You’ll leave armed with free scripts to help you find health problems and bottlenecks, a digital set of posters that explains how SQL Server works, and an e-book that will keep your lessons moving forward over the next 6-12 months.
- Queries Gone Wild: Real-World Solutions (Kimberly – Pre-Con Sunday) - Have you ever wondered why SQL Server did what it did to process your query? Have you wondered if it could have done better? And, if so, how? Transact-SQL was designed to be a declarative language that details what data you need, but without any information about how SQL Server should go about getting it. Join order, predicate analysis – how does SQL Server decide the order or when to evaluate a predicate? Most of the time SQL Server gets the data quickly but sometimes what SQL Server does just doesn’t seem to make sense. Inevitably you’ll encounter certain workloads and queries that just aren’t performing as well as you expect. There are numerous reasons why query performance can suffer and in this full-day workshop Kimberly will cover a number of critical areas while showing you how to analyze a variety of query plans throughout the day.
- Scale Up or Scale Out: When NOLOCK Isn’t Enough (me, Jeremiah, Kendra – Post-Con Thursday) - Partitioning, replication, caching, sharding, AlwaysOn Availability Groups, Enterprise Edition, bigger boxes, or good old NOLOCK? You need to handle more data and deliver faster queries, but the options are confusing. In this full-day workshop, Brent, Kendra, and Jeremiah will share the techniques they use to speed up SQL Server environments both by scaling up and scaling out. We’ll share what features might save you hundreds of development hours, what features have been a struggle to implement, and how you can tell the difference. This workshop is for developers and DBAs who need to plan long term changes to their environment.
- Practical Disaster Recovery Techniques (Paul – Post-Con Thursday) - Disasters happen – plain and simple. When disaster strikes a database you’re responsible for, can you recover within the down-time and/or data-loss limits your company requires? What if your plan doesn’t work? This workshop isn’t about how to achieve high-availability, it’s about how to prevent or overcome the obstacles you’re likely to hit when trying to recover from a disaster – such as not having the right backups, not having valid backups, or not having any backups! In this demo-heavy workshop, you’ll learn a ton of practical tips, tricks, and techniques learned from 15 years of experience helping customers plan for and recover from disasters, including less frequently seen problems and more advanced techniques. All attendees will also receive a set of lab scenarios for further study and practice after the class with assistance from Paul.
Now how much would you pay for all this? Three thousand? Four thousand? Ten thousand? BUT WAIT, THERE’S MORE!
For $1,894 before June 24th, you can get the Show Package: the conference, PLUS a pre-con or post-con of your choice, PLUS your choice of a Surface RT, Xbox, or a $300 gift card.
For $2,294, you get all that plus ANOTHER pre-con or post-con – five days of nonstop learning from the absolute best in the business.
No? You want more? Okay, you drive a hard bargain, buddy. Use discount code OZAR and you get another $50 off. Register now. Operators are standing by.