Blog

New Online Instructor-Led Course: Database Maintenance Starter Kit


BYO popcorn

We’re changing things up a little bit. Jes Schultz Borland and I are offering a brand new course that combines two things we love:

  • Video training where you can learn anytime
  • A live instructor-led discussion for a small class of students (just 30 people)

None of it requires you to travel. And it’s all at a price that your boss can’t resist.

Meet the SQL Server Database Maintenance Starter Kit – Class is in Session for 30 Students on December 29, 2014

You know you need good maintenance for your SQL Server, but you’re not sure exactly WHAT maintenance you should do. You’ve run our free sp_Blitz® procedure — but you’re not clear on what to prioritize out of all the rows it returns. You’ve heard of database recovery models, but aren’t sure about the risks of changing them. A little voice in the back of your head asks if you’re running backups often enough, and the right kinds of backups.

This course is for you! For $149, you get 18 months of access to 3 hours of recorded training, plus the chance to attend an additional 3-hour webinar of group discussion on database maintenance.

Our first class will hold its 3-hour webinar on Monday, December 29, 2014, 9 am to noon Central. Only 30 students can enroll!

3 Hours of Recorded Video Training on Database Maintenance (18 months of access)

Demos galore

Jes introduces you to her good friend, the SQL Server Agent

The day you purchase the course, you’ll be able to dive right into 3 hours of recorded video content. Jes and I will teach you:

  • The pros and cons of maintenance plans vs SQL Server Agent jobs
  • How to select the right recovery model
  • How to plan a backup strategy
  • How to check for corruption in your databases (and get notified if it happens)
  • The different types of index maintenance operations (and what can go wrong)
  • Why you need to be careful with index maintenance

3 Hour Instructor-Led Discussion Session (small groups: 30 students per session only)

When you enroll, you pick a date and time for a three-hour online discussion session, led by Jes and me. Each class is capped at 30 students — that means you get customized content and a chance to ask questions! Right now we’re only offering this on Monday, December 29, 2014, 9 am to noon Central. On that day you can join a session where you will:

  • Learn more about the “challenges” we give you as a downloadable PDF as part of the course
  • Take quizzes on course topics (and see how your answers stack up)
  • Ask questions and make comments to Jes and me using the GoToWebinar chat feature

The Fine Print / FAQ / LOL / The Deets

  • The 3 hour discussion session will NOT be recorded — it’s all about showing up and participating.
  • We can’t guarantee that your firewall won’t block GoToWebinar, that your computer audio will work, or that someone won’t schedule a meeting with you during that time period — those things all depend on you.
  • The course is non-refundable. Read the terms of purchase here.
  • If you’d like to test your connection to GoToWebinar before buying, join one of our free weekly Tuesday events.
  • If the date and time of the session don’t work for you, that’s OK. We may schedule more of these in the future at different times of day. Feel free to leave a comment on this post letting us know what region you’re in, so we know where interest is!
  • If you just want 18 months access to the videos, you can still buy the course and just not attend the Instructor-Led discussion session. There’s no discount price for that. (It’s still a good deal! Did we mention it’s THREE hours of videos packed with demos?)

Read more about the class or buy it now.

How many CPUs is my parallel query using in SQL Server?

Parallelism can be confusing. A single query can have multiple operators that run at the same time. Each of these operators may decide to use multiple threads.

You set SQL Server’s “max degree of parallelism” to control the number of processors that can be used, but it’s not immediately obvious what this means. Does this setting limit the total number of CPU cores the entire query statement can use? Or does it limit the total number of CPU cores that a single parallel operator within the query can use?

Good news: we can see how it works by running a simple test query and looking at some SQL Server DMVs.

My parallelism test setup

This test is run against a virtualized SQL Server with 4 virtual CPUs. SQL Server’s “max degree of parallelism” setting is at 2. My test instance is on SQL Server 2014.
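If you want to reproduce this setup, the instance-wide cap is set with sp_configure. Here’s a minimal sketch (it assumes sysadmin rights; ‘max degree of parallelism’ is an advanced option, so ‘show advanced options’ has to be enabled first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'max degree of parallelism', 2;
RECONFIGURE;
GO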

My simple parallel query

All of my observations are taken while running the following test query against a restored copy of the StackOverflow database. I have ‘actual execution’ plans turned on so that when I query sys.dm_exec_query_profiles, I get some interesting details back.

SELECT p.Id, a.Id
FROM dbo.Posts as p
JOIN dbo.Posts as a on p.AcceptedAnswerId=a.Id;
GO
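One detail that’s easy to miss: in SQL Server 2014, sys.dm_exec_query_profiles only returns rows for sessions that are collecting an actual execution plan. Turning on actual plans in Management Studio does the trick, or you can get the same effect in plain T-SQL from the session that runs the query:

/* Run this in the session that will execute the test query, before running it.
Collecting the actual plan is what makes sys.dm_exec_query_profiles
return rows for this session. */
SET STATISTICS XML ON;
GO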

Here’s what the execution plan looks like.

The plan has seven operators with parallel indicators. Two of those operators are scanning nonclustered indexes on the Posts table.

The question we’re exploring here is whether the scans of those nonclustered indexes will use multiple threads that share the same two CPU cores, or whether they will each get two different CPU cores (and use all four).

Starting simple: tasks and workers

While my parallel ‘Posts’ query is executing on session 53, I can spy on it by querying SQL Server’s DMVs from another session:


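/* From a second session: list every task and worker for session 53,
which scheduler each one is on, and any wait it's currently stuck in */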
select ost.session_id,
    ost.scheduler_id,
    w.worker_address,
    ost.task_state,
    wt.wait_type,
    wt.wait_duration_ms
from sys.dm_os_tasks ost
left join sys.dm_os_workers w on ost.worker_address=w.worker_address
left join sys.dm_os_waiting_tasks wt on w.task_address=wt.waiting_task_address
where ost.session_id=53
order by scheduler_id;

Here’s a sample of the results.

The scheduler_id column is key. Each scheduler is mapped to one of my virtual CPU cores. My query is using 2 virtual CPU cores. At this moment I have two tasks on scheduler_id 0, and three tasks on scheduler_id 2.
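If you’re curious how schedulers map to CPU cores on your own instance, a quick way to peek is sys.dm_os_schedulers (a sketch; the VISIBLE ONLINE filter skips internal and offline schedulers):

select scheduler_id,
    cpu_id,
    status,
    current_tasks_count
from sys.dm_os_schedulers
where status = 'VISIBLE ONLINE';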

But why leave it at that, when we can overcomplicate things? Let’s poke around a little more.

dm_exec_query_profiles and query plan nodes

There’s one more thing you need to know about our query plan. Each node in the plan has a number. The index scans are node 4 and node 6.
If I run my query with ‘actual execution plans’ enabled, I can spy on my query using the sys.dm_exec_query_profiles DMV like this:

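/* Same view of tasks and workers as before, now joined to
sys.dm_exec_query_profiles so each worker maps to a query plan node */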
select ost.session_id,
    ost.scheduler_id,
    w.worker_address,
    qp.node_id,
    qp.physical_operator_name,
    ost.task_state,
    wt.wait_type,
    wt.wait_duration_ms,
    qp.cpu_time_ms
from sys.dm_os_tasks ost
left join sys.dm_os_workers w on ost.worker_address=w.worker_address
left join sys.dm_os_waiting_tasks wt on w.task_address=wt.waiting_task_address
    and wt.session_id=ost.session_id
left join sys.dm_exec_query_profiles qp on w.task_address=qp.task_address
where ost.session_id=53
order by scheduler_id, worker_address, node_id;
Here’s a sample of the output.

I’ve only got two schedulers being used again – this time it happened to be scheduler_id 1 and scheduler_id 2. Looking at the node_id column, I can see that the index scan on query plan node 4 is using both scheduler_id 1 and scheduler_id 2: the very top line and the bottom line of the output show the current row_count for the runnable tasks. The scan on query plan node 6 isn’t really doing work at the instant this snapshot was taken.

Recap: maxdop limits the CPU count for the query

Even if your query has multiple parallel operators, the operators will share the CPUs assigned to the query, which you can limit by the ‘max degree of parallelism’ setting.
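If one particular query needs a different cap, the MAXDOP query hint overrides the instance setting. Here’s a sketch using the same test query, forced down to a single core:

SELECT p.Id, a.Id
FROM dbo.Posts as p
JOIN dbo.Posts as a on p.AcceptedAnswerId=a.Id
OPTION (MAXDOP 1);
GO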

Credits: thanks to Paul White for confirming that I had the basics of this concept right. If you liked this post, you’d love his great post, Parallelism Execution Plans Suck.

Reading the New Fast Track Reference Architectures from HP & EMC

James Serra caught three new SQL Server 2014 Fast Track Data Warehouse Reference Architecture designs released by EMC, HP, and Lenovo. I love reading these because they show each vendor’s state-of-the-art storage infrastructure.

Two of them have remarkably similar goals – to hold a 28 TB data warehouse.

Here’s a simplified summary of their results.

A few things to take away here – first, and obviously, the HP storage wipes the floor with the EMC storage. It’s not clear from the limited test results if the EMC solution would have been more competitive had it used modern CPUs. The EMC one was built with an HP DL580, a 4-socket server, using older CPUs, and it left two of the CPU sockets empty. That’s quite an odd choice for a benchmark test.

The EMC solution takes up dramatically more rack space than the simple HP 2U server solution, and involves far more management complexity.

However, if you want automatic failover with minimal downtime and no data loss, local solid state storage probably isn’t going to cut it. It’d be relatively easy to add high availability in the form of Windows failover clustering to the EMC solution, but complex to build reliability into HP’s. (It’d require AlwaysOn Availability Groups or database mirroring, both of which would impact the workload speeds seen here.)

Cost is a tougher question – your mileage may vary given pricing discounts on gear like this, but note that HP’s solution uses four of these $28k USD cards. The solid state storage alone is $100k, which sounds like a lot, but remember that we’re talking about 24 cores of SQL Server licensing anyway – in the neighborhood of $165k just for the software.

Solid state changes the game for everything in databases, and you don’t have to buy the ultra-expensive cards, either. I’d love to see a reference architecture built with Intel’s new PCI Express drives, but since Intel hasn’t been involved in Microsoft Fast Track Data Warehouse Reference Architectures, that’s left as an exercise for the reader.

Corrupt Your Database – On Purpose! [Video]

Corruption: it can strike at any time. You know this, so you have your page verification option set to CHECKSUM and you run DBCC CHECKDB regularly. When the dreaded day finally arrives, what do you do? If you haven’t faced corruption yet, or you want to brush up your repair skills, Jes will show you how to corrupt your database with a sample database and a hex editor – and how to fix it!
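Want to check those two prerequisites before you watch? Here’s a minimal sketch (‘YourDatabase’ is a placeholder for your own database name):

/* Confirm every database is using CHECKSUM page verification */
SELECT name, page_verify_option_desc
FROM sys.databases;

/* Run a consistency check - YourDatabase is a placeholder */
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS;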

Tune in here to watch our webcast video for this week! To join our weekly webcast for live Q&A, make sure to watch the video by 12:00 PM EST on Tuesday. Not only do we answer your questions, we also give away a prize at 12:25 PM EST – don’t miss it!


The script is available for download, with the understanding that this is purely for test purposes, and will never, ever be used on a production database. Ever.

How to Restore a Page in SQL Server Standard and Enterprise Edition

One of the many restore features in SQL Server is the ability to restore one or more pages of data. This can be very convenient in some narrow situations – for example, corruption occurs on one page or an oops update is made to one record.

The page restore process is not straightforward, however, and, as I recently discovered, the Books Online article about it is confusing. See, you have to perform the restore offline in all editions except Enterprise Edition – but the only example Books Online gives is…Enterprise Edition.

Here’s a straightforward breakdown of how to do a page-level restore both offline and online. For the sake of brevity, let’s say I have two databases – TestRestoreOnline and TestRestoreOffline. Both are in Full recovery. Each has one damaged page, which I’m going to restore. (For a full demo script, click here.)
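Before either restore, you need the file and page numbers of the damaged pages. One place to look, assuming SQL Server has already stumbled across the corruption, is msdb’s suspect_pages table:

/* Pages that hit checksum failures or 824 errors get logged here */
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;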

Offline

You should already have an existing full backup – mine is at D:\SQL Backups\TestRestoreOffline-Backup1.bak. I also have one transaction log backup, D:\SQL Backups\TestRestoreOffline-LogBackup1.trn.

/* This example uses an OFFLINE restore, which is applicable to all editions of SQL Server. */
USE master;
GO

/* Taking the tail-log backup WITH NORECOVERY puts the database into a restoring state - you don't actually set it OFFLINE. */
BACKUP LOG TestRestoreOffline TO DISK=N'D:\SQL Backups\TestRestoreOffline-LogBackup2.trn'
WITH NORECOVERY;
GO

/* Restore full backup, specifying one PAGE.
I used sys.dm_db_database_page_allocations to find the page number. */
RESTORE DATABASE TestRestoreOffline
PAGE='1:293' --Have multiple? Separate with commas.
FROM DISK=N'D:\SQL Backups\TestRestoreOffline-Backup1.bak'
WITH NORECOVERY;

/* Restore log backups */
RESTORE LOG TestRestoreOffline FROM DISK=N'D:\SQL Backups\TestRestoreOffline-LogBackup1.trn'
WITH NORECOVERY;

RESTORE LOG TestRestoreOffline FROM DISK=N'D:\SQL Backups\TestRestoreOffline-LogBackup2.trn'
WITH NORECOVERY;

/* Bring database "online" */
RESTORE DATABASE TestRestoreOffline
WITH RECOVERY; 

That is an offline page restore. Because the tail-log backup is taken WITH NORECOVERY before the restores begin, the database is left in a restoring state and can’t be accessed.
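You can verify that from another session while the restore sequence is underway – the database shows up in the RESTORING state:

/* Mid-restore, state_desc reads RESTORING and connections are refused */
SELECT name, state_desc
FROM sys.databases
WHERE name = 'TestRestoreOffline';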

Online

An online page restore is only available in Enterprise Edition. This will allow users to access other objects in the database while you are restoring the page(s) needed.

You should already have an existing full backup (D:\SQL Backups\TestRestoreOnline-Backup1.bak) and log backup(s) (D:\SQL Backups\TestRestoreOnline-LogBackup1.trn).

/* This is an example of an online page restore. */
USE master;
GO

/* Restore full backup, specifying one PAGE.
I used sys.dm_db_database_page_allocations to find the page number. */
RESTORE DATABASE TestRestoreOnline
PAGE='1:293' --Have multiple? Separate with commas.
FROM DISK=N'D:\SQL Backups\TestRestoreOnline-Backup1.bak' 
WITH NORECOVERY;

/* Restore log backups */
RESTORE LOG TestRestoreOnline FROM DISK=N'D:\SQL Backups\TestRestoreOnline-LogBackup1.trn'
WITH NORECOVERY;

/* With Enterprise Edition, the tail-log backup - taken WITH NORECOVERY - goes here, after the earlier log backups are applied. */
BACKUP LOG TestRestoreOnline TO DISK=N'D:\SQL Backups\TestRestoreOnline-LogBackup2.trn'
WITH NORECOVERY;
GO

/* Restore the last log backup */
RESTORE LOG TestRestoreOnline FROM DISK=N'D:\SQL Backups\TestRestoreOnline-LogBackup2.trn'
WITH NORECOVERY;

/* Restore database */
RESTORE DATABASE TestRestoreOnline
WITH RECOVERY; 

The steps for an online restore differ slightly. The tail-log backup is taken after all the other log backups are applied, instead of at the beginning of the sequence.

Want to know more?

Backup and recovery skills are essential for every DBA, but they can be complicated! If you want to learn more, check out my Backup & Recovery Basics or Backup & Recovery Advanced video training!

How to Configure Quorum in SQL Server (video)

“Quorum” is incredibly important to keep your SQL Server online when you use Windows Failover Clustering or AlwaysOn Availability Groups. Learn what quorum is, how to see the current quorum configuration, how to change it, and guidelines for how to configure quorum in three common real-world scenarios.
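If you just want to peek at your current quorum configuration from T-SQL before watching, one starting point (a sketch; it returns an empty result set if the instance isn’t part of a cluster) is:

/* Describes the Windows cluster this instance belongs to and its quorum */
SELECT cluster_name, quorum_type_desc, quorum_state_desc
FROM sys.dm_hadr_cluster;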

#SQLPASS Summit 2014 Keynote LiveBlog

10:03AM – Ranga back onstage to finish things up. Thanking Pier 1 for letting them share the exploratory work. And we’re out!

10:00AM – Power BI adding a new authoring and editing mode. James removes the pie chart and replaces it on the fly. Now that is knowing your audience – nice job.

9:58AM – Demoing updated Power BI dashboards. Man, these screens are always so gorgeous – Microsoft is fantastic at reporting front end demos.

9:56AM – Oops. Looks like somebody is running out-of-date Apple software that needs an update. Note the screen.

Time for some Apple updates

9:54AM – “I’ve been watching the Twitter feed, and I kinda have a pulse of where people are sitting.” Talking about why Microsoft is moving to the cloud. “I can ship that service every single week.” It gives Microsoft agility.

9:50AM – Microsoft’s James Phillips takes the stage by briefly recapping his background with Couchbase. (That name has come up a few times today.) “Data is a bucket of potential.” Love that line.

James Phillips

9:49AM – Demoing a phone app to locate items in, you guessed it, Pier 1. Was there some kind of sponsorship deal here? I do love the opportunity to tell a single story though. Just wish they’d have tied it to a single individual’s journey through the store through the entire keynote.

9:44AM – “Using 14 lines of SQL code, you can do real-time analytics.” Uh, that is the easiest part of the entire demo. How about putting Kinects in stores, building the web front end, etc?

9:40AM – Demoing a browser-based SSIS UI for Azure Data Factory.

9:37AM – Sanjay Soni taking the stage and explaining in-store analytics demos using Kinect sensors to watch where customers go using a heat map.

Sanjay Soni taking the stage to do another Pier 1 demo

9:35AM – Apparently the “wave” trick had absolutely nothing to do with the session – maybe some kind of ice-breaker? This keynote has now gone completely surreal. He’s now moved on to machine learning. No segue whatsoever.

9:32AM – Joseph Sirosh, machine learning guy at Microsoft, starts by having the community applaud themselves. And now he’s teaching us to do the wave. Uh, okay.

9:30AM – Demoing a restore of the local portion of the database. Even restoring a database got applause. Tells you how desperately rough that first stretch of the keynote was.

9:26AM – Demoing stretch tables: 750mm rows of data in Azure, plus 1mm rows of data stored locally on the SQL Server. It’s one table, with data stored both in the cloud and local storage. You can still query it as if it was entirely local.

Demoing stretch tables in Azure

9:22AM – Showing a demo of a Hekaton memory-optimized table, plus a nonclustered columnstore index built atop it. This is genuinely new.

Mike Zwilling’s demo of a nonclustered columnstore index atop a Hekaton table

9:20AM – “What if you could run analytics directly on the transactional data?” Doing a live demo of a new way to run your reports in production.

9:18AM – Ranga announces a preview coming at some point in the future. “And that’s the announcement.” Literally one clap, and it was from the blogger’s table. This is bombing bad.

9:15AM – Uh, big problem here. Ranga just said Stack Overflow is using Microsoft SQL Server’s in-memory technologies. That is simply flat out wrong – Stack does not use Hekaton, Buffer Pool Extensions, or anything like that. Just plain old SQL Server tables. Very disappointing. If you’re at PASS, look for Nick Craver, one of their database gurus who’s here at the conference.

9:12AM – Now talking about something near and dear to my heart: how Stack Overflow is pushing limits.

9:10AM – To recap the demo, Azure SQL Database does geo-replication and sharding.

9:06AM – The utter and complete dead silence in the room tells a bigger story than the demo. This is just not what people come to PASS to see. If you think about the hourly rate of these attendees, even at just $100 per hour, this is one expensive bomb.

8:58AM – Quite possibly the most underwhelming demo I’ve ever seen. If you want to impress a room full of data professionals, you’re gonna need something better than a search box.

8:56AM – Pier 1 folks up to demo searching for orange pumpkins.

Tracy Dougherty doing Pier 1 demo

8:52AM – “The cloud enables consistency.” Errr, that’s not usually how that works. Usually the cloud enables eventual consistency, whereas on-prem enables consistency. I know that’s not the message he’s aiming for – he’s talking about the same data being available everywhere – but it’s just an odd choice of words.

8:49AM – Discussing Microsoft’s data platform as a way to manage all kinds of data sources. This is actually a huge edge for Microsoft – they have the Swiss Army knife of databases. Sure, you could argue that particular platforms do various parts much better – you don’t want to run a restaurant on a Swiss Army knife – but this is one hell of a powerful Swiss Army knife.

8:45AM – Ranga’s off to a really odd start. He starts by talking about Women in Technology and says “We’ll do our best,” and then segues into an odd story about his wife refusing to use online maps. Really, really awkward.

8:41AM – Microsoft’s T.K. Ranga Rengarajan taking the stage.

Ranga Taking the Stage

8:39AM – Watching several minutes of videos about people and stuff. No clapping. Hmm.

8:34AM – Tom’s a natural up on stage, totally relaxed.

Tom LaRock at PASS Summit

8:33AM – Thanking the folks who help make the event possible: sponsors and volunteers.

8:31AM – Hands-on training sessions for 50 people per workshop are available here at PASS. Good reason to come to the conference instead of just playing along online – convince your boss by explaining that you can’t really get this hands-on experience anywhere else as easily.

8:27AM – PASS has provided 1.3 million hours of training in fiscal year 2014. (Would be interesting to see that broken out as free versus paid, and national versus local user group chapters.)

8:22AM – PASS President Tom LaRock taking the stage and welcoming folks from 50 countries, 2,000 companies.

PASS President Tom LaRock

8:18AM – From the official press release: “The PASS Summit 2014 has 3,941 delegates as of last night and 1,959 pre-conference registrations across 56 countries for a total of 5,900 registrations.” The registrations number is always a little tricky because it counts pre-con attendees multiple times, but that delegates number is phenomenal. Nearly 4,000 folks is a lot! This is far and away the biggest Microsoft data event.

8:16AM – Folks are coming in and taking seats.

8:00AM Pacific – Good morning from the Blogger’s Table at the Seattle Convention Center. It’s day 1 of the PASS Summit 2014, the biggest international conference for SQL Server professionals. A few thousand data geeks are gathered here to connect, learn, and share.

The keynote session is about to start, and here’s how this works: I’m going to be editing this blog post during the keynote, adding my thoughts and analysis of the morning’s announcements. I’ll update it every few minutes, so you can refresh this page and the news will be up at the top.

You can watch the keynote live on PASS TV to follow along.

Here’s the abstract:

Evolving Microsoft’s Data Platform – The Journey to Cloud Continues

Data is the new currency and businesses are hungrier than ever to harness its power to transform and accelerate their business. A recent IDC study shows that businesses that are investing in harnessing the power of their data will capture a portion of the expected $1.6 trillion top-line revenue growth over the coming four years. SQL Server and the broader Microsoft data platform, with the help of the PASS community, are driving this data transformation in organizations of all sizes across the globe to capture this revenue opportunity.

In this session you will hear from the Microsoft Data Platform engineering leadership team about recent innovations and the journey ahead for Microsoft’s data platform. Learn first-hand how customers are accelerating their business through the many innovations included in SQL Server 2014, from groundbreaking in-memory technologies to new highly efficient hybrid cloud scenarios. See how customers are revolutionizing their business with new insights using Power BI, Azure Machine Learning, and Azure HDInsight services. Learn about the investments we are making in Microsoft Azure across IaaS and PaaS to make it the best cloud hosting service for your database applications.

Ignore the first paragraph, which appears to be written for salespeople attending the Microsoft Worldwide Partner Conference, not the Professional Association for SQL Server’s PASS Summit. The second paragraph – not to mention the neato changes at Microsoft lately – offers a glimmer of hope for us geeks. If Microsoft wants to win market share away from Amazon’s huge lead, they’re going to have to bring out features and services that compete. The terms “investments” and “journey ahead” imply that we’ll be hearing about future features in Azure and SQL Server vNext.

Let’s see what they’re announcing today – and remember, like I wrote last week, my new perspective today is a chillaxed 1960s car show attendee. Bring on the flying cars.

For the latest updates, refresh this page and check the top.

Announcing sp_BlitzCache™ v2.4

Welcome to sp_BlitzCache™ v2.4. This release brings a few changes and bug fixes.

  • Fixed a logical error in detecting the output table. Thanks to Michael Bluett for pointing that out.
  • Sorting by executions per second finally works. Thanks to Andrew Notarian and Calvin Jones for submitting this week.
  • Added a @query_filter parameter – this allows you to only look at “statements” or “procedures” in the plan cache.
  • A check was added to identify trivial execution plans that have been cached. If you’re seeing a lot of these, you need to fix it.
  • The @reanalyze parameter was added. When set to 0, sp_BlitzCache™ will pull fresh data from the plan cache. When set to 1, sp_BlitzCache™ will re-read the results temporary table instead. This is helpful if you want to save the results off to Excel and then re-display them while you tune queries.
  • Added the ability to see a query’s SET options. This is hidden just to the right of the plan in the results grid.
  • Moved the #procs temp table to a global temp table named ##procs. This shouldn’t be a problem, because you’d probably get angry if two DBAs were running this stored procedure at the same time anyway.
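Here’s a quick sketch of calling the new version with the parameters described above, leaving everything else at its defaults:

/* Look at stored procedures only, pulling fresh data from the plan cache */
EXEC dbo.sp_BlitzCache @query_filter = 'procedures', @reanalyze = 0;

/* Re-read the saved results while you tune, without hitting the cache again */
EXEC dbo.sp_BlitzCache @reanalyze = 1;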

Download it right now!

Update: Denis Gobo noticed that sp_BlitzCache™ could potentially clobber global temp tables. Global temp table names have been updated in sp_BlitzCache™ to avoid this in the future. Make sure you’re using v2.4.1.

What Changed Our Career Trajectories?

Jeremiah has coffee at the PASS Summit

We’re a group of specialists and teachers here at Brent Ozar Unlimited. But we didn’t start that way – we started out as engineers and developers.

So what was it that changed our career trajectories? Each of us had a lot of smaller factors along the way, but one or two big game changers really made the difference.

Jeremiah: The PASS Summit

The first time I went to the PASS Summit, I attended at my own expense and used up half of my vacation time to attend. I’d been involved in the SQL Server community for a few short months, and I had been chatting via email with a goofy DBA named Brent Ozar.

Brent and I both attended pre-conference sessions, but we were in different ones. It happened that our speakers took breaks at the same time, and I ended up running into Brent during one of the breaks. We chatted and ended up eating lunch together.

I’m a shy person, and a big conference can be daunting when you don’t know anybody, so I ended up tagging along with Brent throughout a lot of the conference. The upside of hanging out with Brent is that he’ll talk to anybody – speakers, Microsoft employees, or random attendees. The PASS Summit was my gateway into both the SQL Server community – I made a lot of friends that week – and my current career. The connections I made at the PASS Summit that year planted the seeds for the rest of my career.

Kendra: Twitter

Kendra’s first tweet. #SQLHELP didn’t exist yet.

I never would have guessed that Twitter would change my life. And I’ve never been passionate or crazy about Twitter – I think it’s useful, but if I go a day or two without using it, I don’t mind.

But Twitter is the way that I started connecting to the larger SQL Server community. I started following people, then following their friends, and started tweeting about SQL Server. I asked and answered questions. Twitter gave me a sense of what other SQL Server developers and DBAs cared about, what problems they faced, and what they were interested in.

I’m a pretty naturally shy person, and I never knew what to talk to people about at conferences and events when I first met them. Twitter has helped me solve that problem: you can sense the mood of a conference while you’re there. You can tweet about it. You can talk about the tweets with people!

To get started with Twitter, check out our free eBook.

Jes: SQL Saturday

Jes at SQLSaturday Chicago

Specifically SQL Saturday Chicago 2010 – my first! I’d been attending my “local” user group for a few months, I’d been reading blogs, I was participating in forums, and I was even on Twitter before I went to this event. I knew I loved working with SQL Server, and I knew there were people that could help me when I had questions. But getting to attend an event – for free, nonetheless! – and getting to meet these people was amazing.

I remember attending a session by Brad McGeehee, who’d written “Brad’s Sure DBA Checklist”, which I kept in a binder at my desk. I remember getting to watch Chuck Heinzelman and Michael Steineke set up a cluster and test it. I even went to a session by one Brent Ozar about a script called sp_Blitz. At the end of the day, with a full brain, I went to the lobby and saw a group of the presenters sitting around chatting. I bravely approached and joined the circle, which included Brent and Jeremiah, and chatted with them. Had I not attended that event, had I not made the connections I did, I doubt I would be where I am in my career today!

Look for a SQL Saturday near you!

Doug: SQL Cruise

Dinner with Vikings

Doug as his true viking self

I’d recently presented at my first two SQL Saturdays when the opportunity came to go on SQL Cruise: a contest that, for some cosmic reason, I felt was begging me to enter — send in your story of victory with SQL Server and win a free spot on the cruise. I made a video about refactoring bad code with a magical hammer, and won the contest. But that was just the beginning.

During the cruise, I:

  • Had a dream about a murder mystery happening on the cruise, which led me to write and present my SQL Server Murder Mystery Hour session.
  • Got some phenomenal career advice from people like Brent, Tim Ford, and Buck Woody that is still helping me to this day.
  • Became friends with the three people who ended up hiring me three years later for the best job I have ever had.

That’s my career adventure. What will yours be?

Brent: Meeting “Real” DBAs

In My Native Habitat

The DBAs told Brent this hat would make his code better.

I was a lead developer at a growing software business and I felt like I had no idea what was going on inside the database. Our company suddenly grew large enough to hire two “real” database people – Charaka and V. They were both incredibly friendly, fun, and helpful.

I was hooked. Who were these cool people? Where did they come from? How did they learn to do this stuff?

They both took time out of their busy jobs to answer whatever questions I had, and they never made me feel stupid. They loved sharing their knowledge. They weren’t paranoid about me taking their jobs or stealing their secrets – and looking back, they probably just wanted me to be a better developer.

They insisted that sure, I had what it took to be a database admin. I went for it, and it’s been a rocket ship ever since.

What to Expect at the PASS Summit and SQL Intersections (Video)

If you’re going to a SQL conference for the first time, join us to learn how things work. We’ll share what happens when you walk in the door for the first time, where to go after hours, what to bring, and what NOT to bring.
