A few years ago, Erika and I went to the VW dealership to trade in her old Jetta and get a new one. The choice of a Jetta was already a foregone conclusion – she loved VWs at the time – and it was just a matter of picking out colors and options.
We took a test drive of the base model, and then the sales guy asked, “Do you wanna also test drive the 1.8L Turbo version?”
Me (immediately): “No.”
Erika: “Sure, why not?”
Me: “You were just saying how this one is so much peppier than yours. You – and by you I mean we – don’t need a turbo.”
Erika: “Come on, let’s try it.”
As soon as she accelerated onto a highway on-ramp, felt the power surge, and heard the turbo whistle, that was the end of that. Suddenly, I felt like a SAN administrator and Erika was the DBA. “No no no,” I was saying, “You don’t need that. You’re never going to go that fast, and I know because you yell at me when I take highway on-ramps that fast. Let’s not spend the extra money if we’re not getting extra capacity. Besides, let’s bring it back to numbers – let’s measure how fast the base version is to 60mph, and then measure the turbo version.”
She couldn’t hear me because I was in the back seat and she was negotiating price with the sales guy. The seat-of-the-pants feeling of speed was enough for her, and often it’s good enough for us DBAs too.
Measuring Your SAN the Easy Way
I’ve written about how to test your SAN’s performance with SQLIO, but I’ll be honest with you: that’s the hard way. It takes knowledge and time, and you only have one of those. (I’ll be charitable and not tell you which one.)
Instead, let’s get seat-of-the-pants numbers for your storage. Go download the portable edition of CrystalDiskMark (NOT CrystalDiskInfo) and put it on a network share. Run it on an idle server (not your live SQL Server, because it’ll slow things down while it runs.) It’ll look like this:
Across the top, there are three dropdowns:
- 5 – the number of test passes you want to run. If you want a fast seat-of-the-pants guess, do 1, but keep in mind the results can vary wildly between passes if something else happens to be going on in the SAN.
- 4000MB – the test file size. I like using 4000MB to reduce the chances that I’m just hitting cache and getting artificially fast numbers. Smaller test file sizes may look fast but don’t really reflect how a large database will work.
- E: – the drive letter to test. Keep an eye on the free space there – you don’t want to create a test file that can run your server out of drive space.
After making your choices, click the All button. While it runs, here’s an explanation of each row’s results:
- Seq – long, sequential operations. For SQL Server, this is somewhat akin to doing backups or doing table scans of perfectly defragmented data, like a data warehouse.
- 512K – random large operations one at a time. This doesn’t really match up to how SQL Server works.
- 4K – random tiny operations one at a time. This is somewhat akin to a lightly loaded OLTP server.
- 4K QD32 – random tiny operations, but many done at a time. This is somewhat akin to an active OLTP server.
The more astute readers (and by that I mean you, you good-looking charmer) will notice that 4K operations don’t really measure SQL Server’s IO. SQL Server stores stuff on disk in 8KB pages, and zooming out a little, groups of eight 8KB pages into 64KB extents. We’re not looking to get an exact representation of SQL Server’s IO patterns here – we’re just trying to get a fast, one-button-click-easy measurement of how storage performs. Usually I find that during the first round of storage tests, the storage isn’t performing well, period – and it doesn’t make sense to bring SQL Server into the game just yet.
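If you want to sanity-check those sizes, the arithmetic is simple. Here’s a quick Python sketch, just to put the test sizes in context – the numbers are SQL Server’s documented storage units, nothing more:

```python
# SQL Server's basic storage units, to put CrystalDiskMark's test sizes in context.
page_kb = 8            # SQL Server stores data in 8KB pages
pages_per_extent = 8   # pages are allocated in groups of 8, called extents
extent_kb = page_kb * pages_per_extent

print(extent_kb)  # 64 - a 64KB read lines up with an extent, so the 4K tests
                  # are a worst-case random IO number, not an exact SQL Server match
```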
Interpreting CrystalDiskMark Results
For magnetic hard drives (individually or in RAID arrays), sequential operations (the top row) are often 10x-100x faster than the rest of the results. This metric is often limited by how the computer is connected to the storage, and you can get those numbers from the bandwidth rates in Kendra Little’s “How Big Is Your Pipe?” bandwidth reference poster. Keep in mind that the MB/sec numbers on the poster are theoretical limits, and in practice, we’ve got 5%-20% overhead involved.
For solid state drives, the difference between sequential and random operations isn’t always as dramatic, but it can still be 2-3x. If there’s no difference, then I’d look even closer at the connectivity method – the SSDs are probably outperforming the connection method (like 3Gb SATA, 1Gb iSCSI, or 2/4Gb FC.)
So what’s a good or bad number? If your server boots from a mirrored pair of local drives, and stores its SQL Server data somewhere else (like on a larger array or on a SAN), then test the local mirrored pair too. Compare the numbers for where you’re storing the valuable, high-performance data to where you’re storing the OS, and you might be surprised. Often I find that the OS’s local drives perform even better than the expensive shared storage, simply because we haven’t configured and tuned that storage.
Keep these original CrystalDiskMark screenshots in a shared folder for the group to access, and then challenge everyone involved to do better. Simple tuning techniques like tweaking the read/write bias on the RAID controller’s cache, right-sizing the NTFS allocation units, and working with different stripe sizes can usually yield double the storage performance without spending a dime.
Want to learn more? We’ve got video training. Our VMware, SANs, and Hardware for SQL Server DBAs Training Video is a 5-hour training video series explaining how to buy the right hardware, configure it, and set up SQL Server for the best performance and reliability. Here’s a preview:
If you want to learn more about the different join algorithms, consult Wikipedia for an introduction to the basic material and links to additional information:
Additional information is also found in the paper Implementation Techniques for Main Memory Databases.
March 31st is World Backup Day, and while most of the press will be focusing on backing up your priceless lolcat photos, it’s still a good time to talk with management about your database backups.
Your boss will have some simple questions, and you can arm yourself with a few easy queries.
Are We Backing Up Every Database?
Even if you think your backups are running successfully, you have to double-check. I had a client recently who’d copy/pasted jobs across several different servers and databases. While editing one of the jobs, someone made a typo, and they were backing up the master database instead of the user databases. The backups worked, ran successfully every night, but weren’t actually backing up the data that people really cared about.
This query will list the last full backup for all of your databases, and like all of the queries in this post, the results should come back nearly instantaneously with no blocking. They’re completely safe to run in production.
SELECT d.name ,
    MAX(b.backup_finish_date) AS last_backup_finish_date
FROM master.sys.databases d WITH (NOLOCK)
LEFT OUTER JOIN msdb.dbo.backupset b WITH (NOLOCK)
    ON d.name = b.database_name
    AND b.type = 'D'
WHERE d.name <> 'tempdb'
GROUP BY d.name
ORDER BY 2
Review that query result and double-check – then ask, “Do I really need to back up all of these databases?” If you’ve got the demo databases AdventureWorks, Northwind, or pubs on your server, and you’re backing them up daily, you’re wasting resources. Get ‘em off your production servers.
Are Our Backups Fast Enough?
In the msdb database, SQL Server saves backup size and duration for all backups. Using those two numbers and the power of math, we can get throughput:
SELECT @@SERVERNAME AS ServerName ,
    YEAR(backup_finish_date) AS backup_year ,
    MONTH(backup_finish_date) AS backup_month ,
    CAST(AVG(( backup_size / ( DATEDIFF(ss, bset.backup_start_date, bset.backup_finish_date) ) / 1048576 )) AS INT) AS throughput_MB_sec_avg ,
    CAST(MIN(( backup_size / ( DATEDIFF(ss, bset.backup_start_date, bset.backup_finish_date) ) / 1048576 )) AS INT) AS throughput_MB_sec_min ,
    CAST(MAX(( backup_size / ( DATEDIFF(ss, bset.backup_start_date, bset.backup_finish_date) ) / 1048576 )) AS INT) AS throughput_MB_sec_max
FROM msdb.dbo.backupset bset
WHERE bset.type = 'D' /* full backups only */
    AND bset.backup_size > 5368709120 /* 5GB or larger */
    AND DATEDIFF(ss, bset.backup_start_date, bset.backup_finish_date) > 1 /* backups lasting over a second */
GROUP BY YEAR(backup_finish_date) ,
    MONTH(backup_finish_date)
ORDER BY @@SERVERNAME ,
    YEAR(backup_finish_date) DESC ,
    MONTH(backup_finish_date) DESC
The awesome part of this query is that it’s not measuring backup duration alone – sure, backups run longer as you add more data. This query’s actually measuring backup throughput, meaning how fast the backups can get data out.
When I review the results of this query, I focus on the throughput_MB_sec_avg field and look for big drops (or increases) over time. If backup speed dropped by 30% in January, I start asking questions about what network or storage changes we made at that time.
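If you want to check the query’s math by hand, it’s just bytes divided by seconds divided by 1048576. Here’s the same calculation as a Python sketch – the backup size and duration below are made-up example numbers, not from any real server:

```python
# Hypothetical example: a 50GB full backup that took 10 minutes.
backup_size_bytes = 50 * 1024 * 1024 * 1024   # backup_size column in msdb.dbo.backupset
duration_seconds = 600                        # DATEDIFF(ss, backup_start_date, backup_finish_date)

# Same formula as the query: bytes per second, converted to MB/sec.
throughput_mb_sec = backup_size_bytes / duration_seconds / 1048576

print(round(throughput_mb_sec, 1))  # 85.3 MB/sec
```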
Compare that number to the bandwidth rates in Kendra Little’s “How Big Is Your Pipe?” bandwidth reference poster, and you’ll get a rough idea for comparison. If you can’t get at least the throughput of a 1Gb Ethernet connection, it’s time to start talking to your storage and network admins about teamed network cards, RAID 10 backup targets, and how to get shorter maintenance windows with backup compression.
Are We Backing Up Corrupt Data?
Just because your backups are succeeding doesn’t mean you’re backing up legit data. The backup process doesn’t do anything like the DBCC CHECKDB process, which checks that the data on disk actually makes sense.
I really, really care about this because if you get a data corruption error, you may have to restore the most recent database backup and see if the data’s corrupt there too. If it is, step back and restore the full backup from the day before that – and the day before that – and the day before that. If you’re lucky, you’ve got a copy of the data from before the IO corruption occurred. If you’re unlucky, you don’t, and then you start looking for an uncorrupted copy of your resume.
The key to success: run DBCC more often than you expire backups. If you only keep 7 days of backups, then you should run DBCC more frequently than that. If you only run DBCC once a month, you won’t find corruption until long after the clean backups are gone.
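That rule is easy to turn into a quick check: compare how long ago DBCC last came back clean against how long you keep backups. Here’s a small Python sketch of the logic – the dates and retention period are made up for illustration:

```python
from datetime import date

# Hypothetical values - plug in your own retention policy and DBCC history.
backup_retention_days = 7
last_clean_dbcc = date(2012, 3, 1)
today = date(2012, 3, 12)

days_since_dbcc = (today - last_clean_dbcc).days
if days_since_dbcc > backup_retention_days:
    # Corruption that happened after the last clean DBCC could already
    # be in every backup you still have.
    print("Danger: corruption could outlive your clean backups.")
```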
Here’s how to see the last time DBCC CHECKDB finished successfully on each database (for SQL Server 2005 and newer, and must be run in a database in compatibility level 90 or higher):
CREATE TABLE #temp (
    ParentObject VARCHAR(255) ,
    [Object] VARCHAR(255) ,
    Field VARCHAR(255) ,
    [Value] VARCHAR(255)
);

CREATE TABLE #DBCCResults (
    ServerName VARCHAR(255) ,
    DBName VARCHAR(255) ,
    LastCleanDBCCDate DATETIME
);

EXEC master.dbo.sp_MSforeachdb
    @command1 = 'USE [?]; INSERT INTO #temp EXECUTE (''DBCC DBINFO WITH TABLERESULTS'')',
    @command2 = 'INSERT INTO #DBCCResults SELECT @@SERVERNAME, ''?'', Value FROM #temp WHERE Field = ''dbi_dbccLastKnownGood''',
    @command3 = 'TRUNCATE TABLE #temp';

--Delete duplicates due to a bug in SQL Server 2008
WITH DBCC_CTE AS (
    SELECT ROW_NUMBER() OVER (
            PARTITION BY ServerName, DBName, LastCleanDBCCDate
            ORDER BY LastCleanDBCCDate
        ) RowID
    FROM #DBCCResults
)
DELETE FROM DBCC_CTE
WHERE RowID > 1;

SELECT ServerName ,
    DBName ,
    CASE LastCleanDBCCDate
        WHEN '1900-01-01 00:00:00.000' THEN 'Never ran DBCC CHECKDB'
        ELSE CAST(LastCleanDBCCDate AS VARCHAR)
    END AS LastCleanDBCCDate
FROM #DBCCResults
ORDER BY 1, 2, 3;

DROP TABLE #temp, #DBCCResults;
If you’re not happy with the results of this query, it’s time to start running DBCC CHECKDB more often. If you’re unable to run it in production due to performance/uptime issues, consider restoring your databases at least once a week to a development or QA or disaster recovery server, and run DBCC CHECKDB there. Some backup products even help schedule this for you.
Are The Backups Making It to Tape?
If you’re backing up to disk, and the network admins are backing up your disks to tape, … well, are they? How can you be sure?
The first step is to check the archive flag. Open Windows Explorer and navigate to the folder where you’re writing backups. Right-click on the bar where Name, Date Modified, and Type are shown, and click More. Choose the Attributes column, and your Explorer window will look something like this:
If you see an “A” in the Attributes column, that means the Archive bit is set – the file is ready to be archived. When your backup software runs and sweeps these files to tape, it resets the Archive bit to off, thereby telling you that your file no longer needs to be archived. If you see A bits on database backups from yesterday, that tells you they haven’t made it to tape yet, and it’s time to start asking questions about backup schedules.
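If you’d rather script this check than eyeball Explorer, the Archive flag is just bit 0x20 in a file’s Windows attributes. Here’s a small Python sketch – the constant is the documented Windows value, and on a real Windows box you’d feed the function os.stat(path).st_file_attributes:

```python
FILE_ATTRIBUTE_ARCHIVE = 0x20  # documented Windows file attribute bit

def needs_archiving(attributes):
    """True if the Archive bit is set - i.e. backup software
    hasn't swept this file to tape yet."""
    return bool(attributes & FILE_ATTRIBUTE_ARCHIVE)

# 0x20 = Archive bit on (not yet on tape); 0x00 = already swept to tape.
print(needs_archiving(0x20))  # True
print(needs_archiving(0x00))  # False
```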
Just a week ago, we asked who wanted to work with Brent Ozar Unlimited®. We talked about our benefits, our health check approach with clients, and our plans for Employee #1. The emails came in fast and furious, and we are completely floored and honored by the number of people who wanted to work with us. You people rock.
Early on, one factor helped weed out a lot of resumes: community participation. We wanted people who were so passionate and excited about SQL Server that they blogged and presented even when they weren’t being paid to do it.
Some emails started out with, “I haven’t been blogging or presenting, but I promise I’ll start once you hire me.” We just can’t recommend enough that you get started giving back to the community today in whatever form that excites you the most. Your online presence is your new resume. If you want a fun day job working with fun people, start by doing it after hours, and the day job will follow.
Several candidates fit the technical know-how and community participation requirements, and then we started narrowing ‘em down. Had we seen them present? Had we talked to them at user group meetings? Had we read – nay, subscribed to – their blogs?
Coffee-drinking candidates got bonus points if their animated behavior got us excited about technology. We like to think that we’re pretty lively geeks, but some folks out there are so upbeat and fun that we just want to spend more time with ‘em. Being in a small business, this kind of chemistry is so important. It’s not like we can go to one of our other coworkers and complain – it’s just the four of us! We can’t tolerate a whiny pessimist at this scale.
Hiring just one of the applicants was hard: even with our picky personalities, we were just overwhelmed by the number of people we wanted to hire. Every time an email came in, we’d DM each other and say things like, “Wow, I can’t believe so-and-so applied! Any employer would be honored to get ‘em, and I would totally hire them right now.”
Ultimately, the decision boiled down to making an investment in someone. We wanted to hire someone that could grow along with us as our company grows, someone that we knew would be a continuous source of fresh ideas about technology.
Let me start by introducing Employee #1 without giving her name. (See, there’s a clue – she’s a her.)
The first time Kendra saw our new hire speak was to a group of 50 people. The session was on a 200 level topic, and our new hire knew her subject front to back, and then some. She had a few authors in the audience who’ve been writing advanced books and designing solutions in her subject area for many years – and yeah, it’s a great sign that these folks wanted to see her speak! At one point, an audience member asked a 500 level question which was outside of the session’s scope. She gave a brief answer and started to move on, but the authors in the audience wanted to offer opinions. And then these authors started to talk loudly to each other. Letting this go on would have completely taken the focus away from the original topic and confused the rest of the audience.
Zillions of people would freeze up at this point. Our new hire had no trouble: she held the floor with a big smile and a friendly joke and quickly scheduled an in-depth conversation on the secondary subject after the session was over. She had everyone laughing – including the authors – and was back in business with all of her original excitement. The audience was truly with her and actively learning. Kendra was seriously impressed.
But there’s more to our new hire than just great teaching skills. She has the type of experience that’s perfect for consulting. She started out in systems administration, so she has foundational knowledge of how operating systems, storage, and networks work – knowledge that we use in performance tuning. She’s taken on database administration in large and small environments, and she’s worked with SANs and tiered storage. She has a degree in programming, and is naturally drawn to tuning whole systems: everything from the storage to the application layer.
Oh wait, there’s more!
Reporting caught her eye early on and she’s written about SQL Server Reporting Services. We’re thrilled that our first hire is as diverse as the rest of us – she’s interested in learning, presenting, writing, SQL Server, and Reporting. Finding two of these things in one person is a feat. Finding all of them in someone is just plain exciting.
Less Than Dot has been hosting her blog for a while now and it has been exciting to watch her blog change over time. She has covered a wide variety of topics with clarity. Her passion and excitement shine through in her writing.
At this point, I’m sure you’re anxious to find out who we hired. We were really excited when we made the choice. We wanted to tell everyone, but we managed to keep it a secret until her employer made the announcement internally.
New jobs are big changes for a lot of people. We’re incredibly excited to be starting this big change with her. We know that our first hire will jump right in. That’s part of why we knew we had to hire her – she’s excited and passionate about SQL Server, technology, and her hobbies.
Don’t think that we’re all about work over here. Our first hire is interested in more than just SQL Server. She was one of the first #SQLRunners in the Portland Half Marathon and she’s training for her first full marathon. If you’re a big fan of cheering, you can catch her (from the sidelines, of course) at the Wisconsin Marathon on May 5 in Kenosha, WI.
Richard Campbell’s RunAs Radio podcast is a relaxed, fun discussion between IT professionals like sysadmins and DBAs who have to manage infrastructure. Richard’s a hoot to talk to, and I really like hanging out with him. I think that comes through on the podcasts, too – I could spend hours chatting with him, and the time goes by like nothin’.
Wanna hear what it’s like? Listen to Brent Ozar on RunAs Radio and subscribe to the RunAs podcast feed. I talk about SQL 2012’s Availability Groups, running databases off NAS devices, and how the MCM program has changed over time. Enjoy!
We mentioned a lot of books in the webcast. Most of them are listed over on our list of SQL Server Books for DBAs. These are my favorite books for database developers:
We’re looking for a few good database professionals.
Well, actually, right now we’re just looking for one. Things are going really well here at Brent Ozar Unlimited®, and it’s time to grow our family. We’re hiring our first full time employee, and just like everything else we do, we figured we’d blog about it. We figure we suck a lot less than most companies, so this is a fun way to introduce ourselves as potential employers. Let’s start by covering what the job is like.
What We Do for Clients
People email us when their database applications are causing them pain. We start new clients the same way an emergency room handles new patients – well, actually, nothing like that, because emergency rooms have a lot of paperwork and they smell bad. What I really mean is that we triage the patient with a SQL Server Health Checkup: we spend 2-3 days together (usually remotely over WebEx) going through an extensive health review to pinpoint the cause of their technology pains.
This health check involves:
- Querying the DMVs
- Reviewing database schema (tables, indexes, stored procs)
- Checking the plan cache for resource-intensive queries
- Examining the server’s hardware
- Digging into storage (SAN) and VMware, and much more
Unlike an emergency room, we actually teach the patient – I mean, client – as we go along. We show them how to diagnose health and performance issues so they can repeat this same process on their other servers. We go off on wild tangents to answer questions that they’ve always wondered about how SQL Server works.
We finish the health check with a presentation that covers the source of the pain points and a prioritized to-do list for the client’s staff. We don’t just jiggle a few knobs and call it good – we teach the client what needs to be done so they can do a better job of solving their problems quickly going forward. Sometimes it’s a query change, sometimes it’s a server configuration change, and sometimes it’s rearchitecting a new way to store and access data.
Doing these health checks requires a tremendous amount of expertise in SQL Server, hardware, storage, virtualization, application coding, and frankly, politics. Your job as a consultant will eventually be to run these health checks yourself – but we don’t expect you to get there overnight.
What We’ll Do for You
If you want to relieve technology pain, you’ve gotta do research and development – and that takes time and effort. To stay current on the latest techniques, we usually work 3-4 billable days per week. Your 1-2 non-billable days per week will be spent improving our products (health checks & training), learning about SQL Server, and giving back to the community. If you’d like to blog and present, you’ll have plenty of opportunity with our blogs, our free weekly webcasts, our email newsletter, our free posters, our online video library, and giving back on #SQLhelp – we’re always looking for new ways to help people for free.
Going into consulting with us is like a turbo button for your SQL Server experience: you’ll be exposed to amazing clients doing cool things with SQL Server, hardware, virtualization, and the cloud. This week, for example, Jeremiah is helping a business move their SQL Servers into the cloud, Brent’s planning a SQL Server 2012 AlwaysOn deployment for a TV network, and Kendra’s doing a health check and training a client on index tuning. (Tim’s decided not to go the consulting route, so he’s taking the SQLCruise portion of Brent Ozar Unlimited® and going a separate way – here’s his post about it.) You get to shadow us on work to increase your skills, and when you’ve got a question, you can get in-depth answers from 3 of the funniest people in the business.
We want happy people, so we offer great benefits:
- 6 weeks paid vacation per year
- 2 fully paid domestic conferences (you don’t have to be a speaker) and that doesn’t count against your vacation time
- If you’re an MVP, we’ll pay for your travel & time to the annual MVP Summit too
- Health insurance
- Full time telecommuting with a max of 1 week of travel per 2 months
- Paid certification attempts (whether you pass or fail)
- Home internet/VOIP/cell phone expenses paid
- $3,000 hardware/software budget every 2 years to pick your own tools (we like Apple gear, but you can pick whatever you want – you’re your own IT department)
Even though it’s full time telecommuting, this position is only open to US residents already authorized to work in the US full time. We can’t sponsor visas, and most of our clients are in the US, so we need to keep US travel expenses & time zones in mind.
How We’ll Pick the Right Candidate
We’re looking for people with at least a couple of years of hands-on database administration experience. We love all kinds of database professionals, and we might hire the rest of you later too, but this role is specifically for performing and improving our database health checks, and we want DBA experience there. No, we don’t have a name for this role yet – we’re not too big on titles, and we’re actually looking for input from you to define your title. Leave your job title ideas in the comments even if you’re not applying. The current front runners are Expert, Consultant, and Bacon Wrangler. I’m thinking about merging all three into Expert Bacon Wrangler Consultant.
If you’re the kind of person who loves to share by writing and presenting even when you’re not getting paid to do it, we’ll know you’re passionate about technology, and that’s what’s most important to us. You don’t have to be a Microsoft Certified Master – you just have to be genuinely excited by Microsoft SQL Server and willing to share your enthusiasm with others.
Sound like your kind of job? Email us at Help@BrentOzar.com. If you’ve got a resume, include it, but don’t go to any work to build a new one. We’ll Google your name to see what you’ve been doing for the community and where you’ve been speaking, and that’s better than almost any resume we could read.
And if you haven’t been giving back to the community by writing and presenting, well – here’s your sign that it’s time to start.
Because this won’t be the last person we hire.
UPDATE Friday 16th: We found our dream hire! Stay tuned….and if you didn’t have time to apply, drop us a line. We’ll keep you on our radar for next time.
Do you know how much bandwidth you have from your SQL Server to your storage? Are you clear on what all the options are and how they compare?
If you’re not sure, don’t worry. In this blog post I’ll explain the basics and link you up with a poster that will make everything more clear.
How Fast is That Connection?
The first thing to know is that “Gigabit” is NOT the same as “Gigabyte”.
When I talk to clients using SAN storage with their SQL Servers, I find that usually the connection speed and type is one of these:
- 1Gb iSCSI
- 4Gb Fibre Channel
Notice the small ‘b’ in each of those? That means ‘Gigabit’. This isn’t obvious to most people because of the way this is commonly said out loud. Most people say “One Gig Eye-Scuzzy” or “Gig-E” (the E is for Ethernet) for the first option, and “Four Gig Fibre” for the second option. Based on the “gig” in there, it’s easy to think that this connection type can transfer 1 gigabyte of data per second.
That’s a long way from the truth. We’re talking one gigabit per second. That’s 1 billion bits. That sounds like a lot until you do the math. There are 8 bits per 1 byte. If we translate 1 billion bits per second into megabytes, we have a theoretical maximum of transferring only about 125 megabytes/sec through a 1Gb iSCSI connection. That’s also theoretical— in the real world, we aren’t going to get that much. Even if we could, when it comes to modern databases 125 megabytes/sec is a pretty tiny straw to slurp gigabytes and gigabytes of data through!
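Here’s that unit conversion spelled out in Python, just to show the arithmetic:

```python
bits_per_second = 1_000_000_000  # 1Gb iSCSI: one billion bits per second
bits_per_byte = 8

bytes_per_second = bits_per_second / bits_per_byte
megabytes_per_second = bytes_per_second / 1_000_000  # decimal megabytes, like the poster

print(megabytes_per_second)  # 125.0 - and that's the theoretical max, before overhead
```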
Why is 1Gb Ethernet So Common?
I find clients using 1Gb iSCSI a lot— a few are using 10Gb iSCSI, and others use Fibre Channel, but there’s still a lot of 1Gb iSCSI out there. Looking at the comparatively tiny size of this pipe on the chart, that might seem pretty strange.
1Gb connections are common because of cost. Historically, 10Gb Ethernet was super pricey. In 2002, the per-port cost of 10Gb was around $39K, compared to $1K per port for 1Gb. These days, you’re looking more at around $750 per 10Gb port— but lots of companies still have older equipment around. Change, like 1Gb iSCSI, is slow.
There’s Often More to the Story
If you have performance problems, don’t go lighting a fire in your storage administrator’s office immediately after looking at the Bandwidth Reference. There are lots of things to factor in to get to the bottom of a performance problem, and the connection speed to storage is just one of them. How many connections do you have? Can you use multi-pathing? How fast is your storage? Is storage really your big bottleneck? There’s lots to dig into.
Also, this is only the interface bandwidth. If you’re using a single magnetic hard drive doing random reads, it doesn’t really matter what interface you’re using, because you can’t fill a USB2 pipe. This chart matters most when you’re using multiple drives in a RAID array or when you’re using SSDs. (Not using RAID for your databases in production? Maybe the fire isn’t such a bad idea.)
To get started, maybe just have some friendly conversations to figure out what your infrastructure is like.
Why is Memory on the Poster?
I like the Bandwidth Reference because it illustrates something else besides how slow some of the commonly used connection types are.
Check out the theoretical maximum bandwidth for memory. The bandwidth for memory is so big that 40% of its pipe needed to bend to fit on the page! This is staggering to think about in terms of application performance. We know logically that performance is better if we don’t have to read from disk, but this graphically shows how much faster that can be.
Remember one more thing, too– you can get these same speeds reading data from memory in more places than just your database server. You can also read from data cached in memory on your application servers without even talking to the database. Those are the fastest queries of all.
Download the Poster
To get the full sized goods, you just need to create a free login to our site. We promise not to share or sell your information with others. Download the poster here.
We produce a lot of free content on SQL Server, performance tuning, storage and virtualization. I went in this week to update our Free SQL Server Training Videos Page and was blown away by how much content we’ve prepared and recorded in the last six months— 27 new training videos!
Some readers have asked what’s the best way to find training content. Here are your options on how to get your hands on all the goods.
How to Watch Training Videos on your iPad, Tablet, or Phone
The best way to watch our videos on a tablet or phone is to subscribe to our YouTube Channel. You can check out all the video options and get automatic updates as we post new content.
How to Watch Videos at Work
You can also check out our collection of training videos by category here.
How to Get Content Using Our Facebook page
If you like to catch up on technical content each day after you un-tag your name from those embarrassing photos on Facebook, our video stream is integrated into the Brent Ozar Unlimited® Facebook page. ‘Like’ the page to get updates in your Facebook stream, or view videos through the “SQL Server Videos” tab in the top bar.
Join Our Webcasts – Live!
Get into the action live each Tuesday at 11:30 AM Central. Scheduling conflicts? Add a recurring calendar reminder so you can make it each week.
How to Succeed in Database Development Without Really Trying
Tuesday, March 13, 2012 11:30 am – Jeremiah Peschka
You love working with data, but you don’t want to be a DBA. What’s the right path to work with data? Six years ago Jeremiah Peschka was a developer who wanted to work with data. Since then he’s been a consultant, development DBA, software evangelist, and is now a partner in a database consulting company. In this 30 minute session, you will learn steps to take to get started, areas of specialization, skills employers are looking for, and hints for building a great resume to get a database developer job. Register here.
Monitoring SQL Server: How to Pick Your Strategy
Tuesday, March 20, 2012 11:30 am – Kendra Little
Monitoring is critical to protect your database servers AND your job: if you don’t know when a problem is occurring, how can you respond quickly? In this 30 minute session, Kendra Little will discuss availability and performance monitoring for SQL Server. She’ll summarize the tools available and give you guidelines to plan your strategy to tackle a monitoring problem. This session is appropriate for DBAs or IT Managers with one year or more experience with a production environment. Register here.
How to Test Availability Groups in SQL Server 2012
Tuesday, March 27, 2012 11:30 am – Kendra Little
Want a jump-start on testing the hottest scale-out feature in SQL Server 2012? Kendra Little will tell you why it’s critical that you configure your own lab, and what you need to get your test environment set up. She’ll also detail how to avoid some gotchas that could cost you hours of frustration. She’ll demonstrate the steps you need to enable the Availability Group feature in SQL Server 2012 and how to create and test your first Availability Group. This session is for DBAs with one or more years experience. Register here.
You’d love to become a Database Administrator, but how do you get your foot in the door? Ten years ago, Kendra Little was in your position. Since then, she landed her first job as a SQL Server DBA, worked her way up to Senior DBA, and is now a partner in a database consulting company. Along the way, she’s hired Junior and Senior DBAs and she helps employers develop interview procedures for database administrators. In this 30 minute session you’ll learn what employers look for, steps you can take to make yourself a great candidate, and how to build the right resume to get your first DBA job.
Interested in links discussed during the webcast? Scroll on down after the video is over.
One thing I forgot to mention in the webcast: get business cards for when you attend those user group meetings! They don’t have to be expensive or fancy, they just need to help people remember your name and how to contact you.
Need to practice for your next interview?
We’ve got training! For $29 you get 18 months of access to videos helping you prepare and practice for interviews, with 100 practice DBA interview questions, on your desktop, laptop, iPad, or iPhone. Learn more or buy it now.