Blog

Advice to an IT newcomer

Women in Technology (yes, Clippy is a woman, too)

We recently got the following question for Kendra and Jes in the Brent Ozar Unlimited® mailbox:

Six months ago I stumbled into the IT world. Do you have any advice for someone (a woman) starting off in the IT industry, especially someone without a computer science degree? I really enjoy working with databases and data. I would love to one day be a Database Administrator and get into business intelligence.

JES SAYS…

There has been a lot written about women in technology – positive and negative. I’m happy to say that my experience as a woman in technology has been incredible. The SQL Server community has been accepting, helpful, and nurturing. If anyone, man or woman, came to me, asking how to succeed, I’d give them this advice.

Ask questions. Constantly learn. There is no such thing as too much knowledge. The most successful people I know in IT – not just the SQL Server world – are lifelong learners. They read books and blogs, attend user group meetings and conferences, and learn new programming languages in their spare time. Build your own computers. Know how to set up a network. You don’t have to be an expert at every facet of technology, but knowing enough to talk to other people in “their” language will go a long way.

Speaking of user groups…find the closest user group, and attend it regularly. Don’t have one nearby? Start one, or join a virtual user group (like those at http://www.sqlpass.org/PASSChapters/VirtualChapters.aspx). Not only will you learn about things you may not be exposed to at work, you’ll have a chance to build a network. I can’t tell you the number of questions I’ve gotten answered through this channel, or the number of people I know that have gotten new (better!) jobs this way.

Never, ever be afraid to say, “I don’t know, but I can find the answer.” People will respect you far more if you are honest. Don’t try to be a know-it-all. If you haven’t dealt with a particular technology or situation before, acknowledge that, make note of what the person is asking for, and research it later. You’ll learn something new, and you won’t get caught giving bad – or wrong – answers.

Don’t think, “I’ll never know as much as him/her.” Yes, there are some people in IT that started building computers or robots or software before they could drive a car. Instead of thinking that you will never know as much as they do, remind yourself how many years of experience they have – and how they can help you. Ask to pick their brains. Ask them what books they read or what tools they use. Learn from them.

Most of all, don’t fall victim to the so-called “impostor syndrome”. Someone always appears smarter, faster, more organized, more accomplished, less stressed, or hasn’t spilled coffee all over herself yet today. Don’t let that invalidate where you started from and what you have accomplished. Keeping a blog – even if it’s private – that you can go back and reference over the years is a great way to value yourself. I know I’ve seen a dramatic change in my writing over five years, from both a technical and editorial perspective.

Good luck! Being in IT – and especially the SQL Server field – is exciting and rewarding.

KENDRA SAYS…

Remind yourself that you’re qualified to be a DBA. You mention that you don’t have a computer science degree. Great news: that’s not required. There’s no degree or certification that is the Official DBA Training Program or Proving Ground.

I had a period where I felt I wasn’t “legitimately a DBA” because I also worked with some C# and drove some non-SQL software development processes. But I was doing some really cool things with SQL Server administration, too. I should have felt awesome about being a DBA, regardless of my job title.

Never feel that your background or your current job have to fit a specific mold for you to be a “real” DBA. There is no mold!

Remind yourself that you are designing your own career as you go. Just like there’s no set educational and certification “solution” to end up with all the right skills, job progression for DBAs is usually not linear. It’s a rare person who starts with a “Junior DBA” job title and works their way up to “Mid Level DBA” and then “Senior DBA”.

Instead, most people these days struggle to find the right training, the right mentor, and discover if they want to specialize (Virtualization? Performance Tuning? Data modeling?), be a generalist, or dive off into uncharted waters (Hadoop? Different Platforms? Business Intelligence?). There are many paths, and there are new paths all the time.

Expect to repeatedly redefine your interests and redesign your career. Make sure that every six months you have a conversation with your manager about what kind of work makes you excited, and where you’d like to be in a year or two.

Remind yourself that you do great stuff – and write it down. For a few years, I had a formal review process where I was regularly required to write out my accomplishments. And you know what? That was great for me! Each item on the list seemed small, but when I put it all together it gave me a new view of myself.

This may be difficult, but it’s worth it. Keeping track of the great things you do boosts your confidence and makes you ready when opportunity comes around.

GET INSPIRED

Is this advice only for women? Heck no! Sometimes it’s just nice to ask for advice from someone who’s part of a group you also identify with and hear their perspective.

Want to learn more about how to build a great career working with data? Hop on over to Brent’s classic post, “Rock Stars, Normal People, and You.”

Five Reasons a Production DBA Should Learn to Read Execution Plans

While chatting with attendees before my Don’t Fear the Execution Plan webcast, a question was posed: “As a DBA who mainly worries about backups, index fragmentation and checkdb, should I also be looking at execution plans?”

YES! Here’s why.

  1. Performance is important. A production DBA’s main focus should be the safety and security of the data, but the performance of servers is also something to be concerned with. Companies pay a lot of money in hardware, support, and licensing costs for these servers. Being able to squeeze every bit of performance from them will save the company money – which helps everyone in the company in the long run.
  2. Be a superhero. Your first response to an ongoing, persistent performance problem may be a hardware fix. Add more memory, add more disks, ask the SAN team to give you dedicated LUNs, beg for a new server. But at some point, it is going to be worth your time – and the company’s money – to tune bad queries. By being the person who can identify the bad queries and provide ways to improve them, you are putting on your superhero cape and saving the day. (How can you find those queries? Find out in Jeremiah’s Exploring the Magic of the Plan Cache blog, or see the plan cache sketch after this list!)
  3. You’re being proactive, not reactive. We all need to be reactive sometimes – that’s a facet of the job. But being proactive – identifying top resource-consuming queries on a server, identifying problem spots in the queries, and suggesting fixes – makes us look better. We show we are bringing value to the company.
  4. Grow your knowledge of SQL Server. You never know when you may need the information. A new developer may have been able to ask the DBAs questions at her last job; you may need the skill at your next job. You will also learn things along the way about SQL Server internals, and that’s great knowledge to have.
  5. Increase communication between DBAs and developers. If your developers already use execution plans to tune queries, you will be able to speak their language. If they don’t, you can teach them as you learn – and there is no better way to learn than to teach something! Breaking down communication barriers is a goal my clients typically have. It involves some work and willingness to give, but will make your job better.
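
If you want to try finding those queries yourself, here is a minimal sketch of a plan cache query that surfaces the top CPU consumers. The DMVs are standard, but the TOP 10 cutoff and the ordering by CPU are illustrative choices, not the only way to slice it.

-- A minimal sketch: top 10 statements by total CPU from the plan cache.
-- You could just as easily order by total_logical_reads or execution_count.
SELECT TOP 10
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    qs.total_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;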

What happens to in-flight data compression in an emergency?

Data compression can have many uses and advantages, but it also has its drawbacks. It’s definitely not a one-size-fits-all strategy. One of the things to be aware of is that initial compression of a table or index can take quite some time, and will be resource-intensive. It also is an offline operation, so the object can’t be accessed while it’s being compressed or uncompressed. (Clarification: by default, an ALTER TABLE statement is an offline operation. You can declare it online, but, as David notes in the comments, “Although the operation is ONLINE, it’s not completely idiot-proof.”)
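
If you do want to keep the object available during the rebuild, the online option looks like the sketch below. This is only an illustration against the demo index used later in this post; note that online index rebuilds require Enterprise Edition.

-- Sketch: compress an index while keeping it available (Enterprise Edition only).
ALTER INDEX IX_ProductId_TransactionDate
ON dbo.bigTransactionHistory
REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);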

So what would happen if your SQL Server service or server restarted while you were in the middle of (or, as it usually goes, 90% of the way through) compressing an index? Let’s investigate.

I have a table that is 1.1 GB in size, with a 1.0 GB nonclustered index.

Table name Index name total_pages PagesMB
bigTransactionHistory pk_bigTransactionHistory 143667 1122
bigTransactionHistory IX_ProductId_TransactionDate 131836 1029

I need to reduce the size of the nonclustered index, so I decide to compress it. After using sp_estimate_data_compression_savings, I determine that I will benefit more from page compression.
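
For reference, here is roughly what that estimate call looks like; the parameter values simply point at my demo table, and NULL for the index and partition means “estimate all of them.”

-- Estimate the space savings from PAGE compression on this table's indexes.
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'bigTransactionHistory',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';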

I apply page compression, and allow my server to work.

ALTER INDEX IX_ProductId_TransactionDate
ON dbo.bigTransactionHistory
REBUILD WITH (DATA_COMPRESSION = PAGE);

Now, let’s say there is an emergency during this operation. Perhaps a component in the server breaks; maybe the data center loses power. (Or I restart the SQL Server service.) Uh-oh! What happened?

When the service is running again, I check SSMS. I see the following error.

Msg 109, Level 20, State 0, Line 0
A transport-level error has occurred when receiving results from the server. 
(provider: Shared Memory Provider, error: 0 - The pipe has been ended.)

What happened to the data? Is compression all or nothing, or is it possible that some pages are compressed and others aren’t? I first check the index size.

Table name Index name total_pages PagesMB
bigTransactionHistory pk_bigTransactionHistory 143667 1122
bigTransactionHistory IX_ProductId_TransactionDate 131836 1029

Nothing has changed – the index is not one byte smaller. This tells me the operation was not successful.

I also check the error log to see what information it can provide.

[Screenshot: SQL Server error log showing the transactions being rolled back]

Two transactions were rolled back. As I am the only person in this instance right now (the benefits of a test environment), I know those were my transactions.

SQL Server has treated my data compression operation as a transaction. If there is a restart at any point, the operation will be rolled back to maintain data integrity.
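
If you want to double-check that nothing was left half-done, sys.partitions reports the compression setting for every partition of an index. Here is a minimal sketch of that check for the demo index:

-- Sketch: confirm the compression setting of each partition of the index.
SELECT i.name AS index_name,
       p.partition_number,
       p.data_compression_desc
FROM sys.indexes AS i
INNER JOIN sys.partitions AS p
    ON p.object_id = i.object_id
   AND p.index_id = i.index_id
WHERE i.object_id = OBJECT_ID('dbo.bigTransactionHistory')
  AND i.name = 'IX_ProductId_TransactionDate';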

Don’t Fear the Execution Plan! [Video]

Have you ever been curious about how SQL Server returns the results of your queries to you? Have you ever opened an execution plan but been bewildered by the results? Have you dabbled in query tuning, but weren’t sure where to focus your efforts? Join Jes as she takes you through the basics of execution plans. She’ll show you how to read them, how to spot common problems, how to get help, and which tools will make your job easier.

Rollback: What happens when you KILL a session?

It’s Friday afternoon. You, the DBA, are looking forward to a long weekend. You’re going to walk out the door in 30 minutes and enjoy some family time. The phone rings and you pick it up. On the other end, Joe Developer says, “There’s a query that’s been running for a long time. Can you kill it?”

You log into SSMS and use sp_who2 to confirm that the user has an open transaction. You issue the (infamous) KILL command.

Twenty-five minutes later, you’re still sitting there, waiting for the transaction to roll back. You’re sweating. You really want to leave for the weekend. Another ten minutes pass, and finally, it completes.

What happened? Why did SQL Server take so long to kill the process? What happens in a rollback situation?

What Happens in a Transaction

When a transaction is being committed, each step is written into the transaction log. Each value that is being inserted, updated, or deleted is recorded so that if there is a failure before the log is written to the data file, the appropriate rollback or roll forward can take place. The transaction may take and release locks on tables or indexes.

Want to see this in action? You can use the undocumented fn_dblog function to view the entries in the log.
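
A minimal peek at the log looks like the sketch below. Keep in mind that fn_dblog is undocumented and unsupported, and the output can be enormous on a busy database, so save it for test environments.

-- Sketch: look at recent log records for the current database.
SELECT TOP 100
    [Current LSN],
    Operation,
    Context,
    [Transaction ID],
    AllocUnitName
FROM fn_dblog(NULL, NULL);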

What Happens in a Rollback

When we issue a KILL, every step of the transaction that has been written to the transaction log must be undone. The object the transaction was changing must be put back to the state it was in prior to the change. Was a record inserted? It needs to be deleted. Was a record updated? The previous value must be restored. Was a record deleted? It needs to be added back in. You can use fn_dblog to see this in action, too.

If SQL Server is simply undoing what was once done, why does it sometimes take longer?

If the transaction had to take a lock on a row or table to perform the operation, it will have to take that lock again – and other processes might be using that object now. Also, consider that a rollback is mostly single-threaded. If the transaction ran using four cores initially, and the rollback is now only using one, it will take more time.

Try going down backwards, hopping on one foot. I dare you!

Imagine this: you’ve decided to climb a tower with 10,000 stairs. You get to stair 9,999 and decide you don’t want to complete the climb. You can’t wish yourself to the bottom step – you have to go back down. But now, you’re tired – and have to do this single-threaded. Imagine having to hop down the stairs backwards, on one foot.

Tracking Progress

You are able to track the progress of a rollback. After you issue a KILL command, run KILL [Session ID] WITH STATUSONLY. This will show you approximately how much time is remaining in the rollback – but other processes in the database could affect that. Consider the information here a guideline, not a hard-and-fast time.
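
As a sketch, with a hypothetical session ID of 57, the sequence looks like this:

-- Kill the session (57 is a made-up session ID; get the real one from sp_who2).
KILL 57;

-- Check on the rollback; run this repeatedly to watch the estimate change.
KILL 57 WITH STATUSONLY;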

Another thing to keep in mind: what you might not have known when you issued that KILL on Friday at 4:30 pm is that the transaction was going to finish at 4:32 pm. But, you had to wait until 5:05 for the rollback to complete. Always be cautious when using this command!

Solving the root cause

A one-time KILL issued to solve an “oops” may not need further investigation. But if you find yourself having to issue these commands for the same application, at the same time each week, it’s time to dig deeper. You want to work with the owners of the application or process to determine why this keeps happening. If it’s a long-standing process or transaction, what’s changed to make it take so long now? Perhaps you’re doing a mass insert or update, and could break it into chunks. If it’s a new process, look at what else is going on at that time. Could the statements be written more efficiently? Could it be scheduled at another day or time?

How to count the number of rows in a table in SQL Server

“How many rows exist in a table?”

It seems like such an innocent request. It isn’t too hard to get this information out of SQL Server. But before you open SSMS and whip out a quick query, understand that there are multiple methods to get this information out of SQL Server – and none of them are perfect!

COUNT(*) or COUNT(1)

The seemingly obvious way to get the count of rows from the table is to use the COUNT function. There are two common ways to do this – COUNT(*) and COUNT(1). Let’s look at COUNT(*) first.

SELECT COUNT(*)
FROM dbo.bigTransactionHistory;

The STATISTICS IO output of this query shows that SQL Server is doing a lot of work! Over 100,000 logical reads, physical reads, and even read-ahead reads need to be done to satisfy this query.

(1 row(s) affected)
Table 'bigTransactionHistory'. Scan count 5, logical reads 132534, 
physical reads 3, read-ahead reads 131834, lob logical reads 0,
lob physical reads 0, lob read-ahead reads 0.

Looking at the execution plan, we can see an Index Scan returning over 31 million rows. This means that SQL Server is reading every row in the index, then aggregating and counting the value – finally ending up with our result set. The cost of this query? 123.910000.

The query results: 31,263,601 rows.

Now, let’s look at the behavior of COUNT(1).

SELECT COUNT(1)
FROM dbo.bigTransactionHistory;

We can see from STATISTICS IO that we have a large number of logical reads – over 100,000.

(1 row(s) affected)
Table 'bigTransactionHistory'. Scan count 5, logical reads 132531, 
physical reads 3, read-ahead reads 131849, lob logical reads 0,
lob physical reads 0, lob read-ahead reads 0.

The execution plan again shows an index scan returning over 31 million rows for processing. The query cost is the same, 123.910000.

The results here are the same – 31,263,601 rows.

The benefit of using COUNT is that it is an accurate indicator of exactly how many rows exist in the table at the time query processing begins. However, as the table is scanned, locks are being held. This means that other queries that need to access this table have to wait in line. This might be acceptable on an occasional basis, but I frequently see applications issuing these types of queries hundreds or thousands of times per minute.

sys.tables + sys.indexes + sys.partitions

We can join several SQL Server catalog views to count the rows in a table or index, also. sys.tables will return objects that are user-defined tables; sys.indexes returns a row for each index of the table; and sys.partitions returns a row for each partition in the table or index. I am going to query for the table ID, name, and count of rows in all partitions.

DECLARE @TableName sysname
SET @TableName = 'bigTransactionHistory'

SELECT TBL.object_id, TBL.name, SUM(PART.rows) AS rows
FROM sys.tables TBL
INNER JOIN sys.partitions PART ON TBL.object_id = PART.object_id
INNER JOIN sys.indexes IDX ON PART.object_id = IDX.object_id
AND PART.index_id = IDX.index_id
WHERE TBL.name = @TableName
AND IDX.index_id < 2 -- heap (0) or clustered index (1) only, so rows aren't counted once per nonclustered index
GROUP BY TBL.object_id, TBL.name;

The output of STATISTICS IO here shows far fewer reads – 15 logical reads total.

(1 row(s) affected)
Table 'syssingleobjrefs'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysidxstats'. Scan count 2, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysrowsets'. Scan count 1, logical reads 5, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysschobjs'. Scan count 1, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

The execution plan is more complex, but much less work – the query cost here is 0.0341384.

The results of the query are also the same – 31,263,601 rows.

The benefits of using this method are that the query is much more efficient, and it doesn’t lock the table you need the count of rows for.

However, you need to be cautious when counting the rows on a table that is frequently inserted into or deleted from. The TechNet documentation for sys.partitions.rows says it indicates the “approximate number of rows for this partition”. How approximate? That information isn’t documented. Understand, though, that if you use this method, you potentially sacrifice up-to-the-moment accuracy for performance.

sys.dm_db_partition_stats

A third option is to use the dynamic management view sys.dm_db_partition_stats. This returns one row per partition for an index.

DECLARE @TableName sysname
SET @TableName = 'bigTransactionHistory'

SELECT OBJECT_NAME(object_id), SUM(row_count) AS rows
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID(@TableName)
AND index_id < 2 -- heap (0) or clustered index (1) only
GROUP BY OBJECT_NAME(object_id);

The STATISTICS IO output of this query is even lower – this time, only two logical reads are performed.

(1 row(s) affected)
Table 'sysidxstats'. Scan count 1, logical reads 2, physical reads 0, 
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

The execution plan is less complex than our second example involving the three system views. This query also has a lower cost – 0.0146517.

The query results are the same as the previous examples – 31,263,601 rows.

Using this DMV has the same benefits as the system views – fewer logical reads and no locking of the target table. The query is also simpler to write, involving only one object.

But again, the TechNet documentation for sys.dm_db_partition_stats.row_count says it is “the approximate number of rows in the partition”, and when this information is updated is not documented. Here, you are also potentially sacrificing accuracy for performance.

Time to do some digging

The questions that you need to work with the business to answer are, “How up-to-date must the row count be? What is the business purpose? How often do you insert into or delete from that table, and how often do you count the rows?” If the accuracy of the row count is crucial, work to reduce the amount of updates done to the table. If performance is more important, and the row count could be approximate, use one of the system views.

Improving the Performance of Backups

There will come a time when your database backups will not be as fast as you would like them to be. Over time, the volume of data you’re storing will grow, and backups will take longer – but your weekly or daily maintenance windows don’t grow along with it. Backups can be resource-intensive, using I/O and network resources – and if they’re running when users are trying to work, the users may notice the performance impact.

Here are two ways to make your backups go faster.

Option A: Full and Differential backups

A typical backup scenario that I see is a full backup daily and, if the database is in Full recovery model, log backups throughout the day. But what happens when that full backup begins to take hours, instead of minutes? A differential backup may help.

A full backup contains all the data in the database, and enough of the log to recover it. A differential backup contains only the extents (groups of 8K data pages) that have changed since the last full backup. Because a differential is usually smaller than a full backup, it can take less time to complete.

The commands to run the backups are very similar. A full backup:

BACKUP DATABASE AdventureWorks2012 
TO DISK=N'E:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Full.bak';

To perform a differential backup, you add the WITH DIFFERENTIAL option:

BACKUP DATABASE AdventureWorks2012 
TO DISK=N'E:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Diff.bak'
WITH DIFFERENTIAL;

Combining full and differential backups throughout the day or week can reduce the time you spend waiting for backups to complete. There are two scenarios I usually see. The first is a full backup one or two days per week, with differential backups on the other days, and regular transaction log backups. This makes sense when a full backup every day would be too time-consuming. The second is a full backup daily, differentials every few hours, and regular log backups. This usually makes sense when fast recovery is necessary, and it would be better to be able to restore a full backup, a differential backup, and a subset of log backups – rather than the full backup and all log backups for the day.

This can add a layer of complexity to a restore strategy – instead of restoring the last full backup and ensuing logs, the most recent differential must be accounted for as well. You should practice this restore sequence so you understand the steps involved. You also need to monitor the size of the differential backups closely – depending on the rate of change in the database, you may not be saving as much space as hoped.
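
As a sketch, a restore that includes a differential looks like the sequence below. The full and differential file names match the backup examples above; the log backup file name is hypothetical.

-- Restore the most recent full backup, leaving the database in a restoring state.
RESTORE DATABASE AdventureWorks2012
FROM DISK=N'E:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Full.bak'
WITH NORECOVERY;

-- Restore the most recent differential taken after that full backup.
RESTORE DATABASE AdventureWorks2012
FROM DISK=N'E:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Diff.bak'
WITH NORECOVERY;

-- Restore any log backups taken after the differential, then recover.
RESTORE LOG AdventureWorks2012
FROM DISK=N'E:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Log_1.trn'
WITH NORECOVERY;

RESTORE DATABASE AdventureWorks2012 WITH RECOVERY;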

However, when the window in which you can run backups without impacting your users is shrinking while the database keeps growing, the full + differential strategy can be used effectively.

Option B: Backing up to multiple files

The standard database or log backup is done to one backup file:

BACKUP DATABASE AdventureWorks2012 
TO DISK=N'E:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Full_20140109.bak'
WITH Name = 'AdventureWorks2012 Full One File';

When a backup is written to one file, one thread is assigned. You can back up to multiple files – in one or more locations – and multiple threads will be used. This can make a backup take less time – sometimes significantly. Here’s the command to create a full backup across four disks:

BACKUP DATABASE AdventureWorks2012 
TO DISK=N'E:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Full_1__20140109.bak',
DISK=N'F:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Full_2__20140109.bak',
DISK=N'G:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Full_3__20140109.bak',
DISK=N'H:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Full_4__20140109.bak'
WITH Name = 'AdventureWorks2012 Full 4 Files';

Having multiple backup files can make a restore more complicated. All of the files must be available, and all must be referenced in a restore session. To restore from these backup files, use the following command:

RESTORE DATABASE AdventureWorks2012 
FROM DISK=N'E:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Full_1__20140109.bak',
DISK=N'F:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Full_2__20140109.bak',
DISK=N'G:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Full_3__20140109.bak',
DISK=N'H:\SQL Backups\AdventureWorks2012\AdventureWorks2012_Full_4__20140109.bak'
WITH NORECOVERY;

RESTORE DATABASE AdventureWorks2012 
WITH RECOVERY;

However, the benefits can be significant. I performed five tests – one file on one disk, two files on one disk, four files on one disk, two files on two disks, and four files on four disks. The results show that adding more disks decreases the backup time.

[Chart: backup duration for one, two, and four files across one, two, and four disks]

Note: these tests don’t cover all scenarios. If you have a group of disks in a RAID 5 or RAID 10 array, multiple threads writing to multiple files could be faster than one thread and one file. Lesson learned: test your backup configuration in your environment to determine what is best for you!

Backups Are Your Best Friend

And just like your best friend, you want to spend some time with them on a regular basis, getting to know them better. Learning tricks like the two I’ve given you here can help you when you face a situation where you need to improve performance. Having experience with these options – even if you do them in a sandbox environment or test it on AdventureWorks – is a huge asset.

Want to know even more about backups, restores, and cool things you can do with them? Check out my Backup & Recovery Step by Step training!

Announcing Backup and Recovery, Step by Step Training

We’ve said it over and over and over again: as a DBA, the most important task you need to be doing is backing up your data.

There are a lot of things to think about. What recovery model is best for your database? How can you automate backups? What steps need to be taken to restore a database that had full and log backups taken?

To help answer some of the most common questions, I’ve put together a 115-minute, five-module course on Backup and Recovery, Step by Step! 

 

Problems My Training Will Solve

You’re new to this whole SQL Server thing. You’ve heard people say you need to be backing up your data, but you aren’t sure where to start. This training will guide you step by step through the process.

You’re the “reluctant DBA” and you know you have regular full and log backups. However, you’ve never had to restore a series of them to solve a production issue. Yet. The time to learn is not when there is a problem. Learn the steps now and be prepared when you need them!

You’ve heard of the different recovery models, like simple, full, and bulk-logged, but you aren’t sure what the differences are and when they are appropriate. All of these will be explained.

Start here if you’re interested in learning more!

Why Are You Still Using SQL Server 2005?

The year: 2005.

What was happening with the Brent Ozar Unlimited crew?

I was working on the help desk for a small company while attending Fox Valley Tech College to earn my associate degree. I think I still wanted to go into programming at that point in time! I’d never been to a user group meeting or posted on a forum. I wasn’t even a runner!

Brent took his first full-time DBA gig at Southern Wine & Spirits. He worked on his first cluster and his first 1 TB+ database. He’d never written about Perfmon or attended a conference.

Kendra worked for a dot-com in Seattle. She spent a lot of time with 32-bit servers, automating installs, restores, and configuration of SQL Server for large environments.

The hottest phone of 2005?

Doug was working on a VB6 app for insurance adjusters. At home he had a smokin’ PC with a 1.0 GHz AMD Athlon and a 19” CRT monitor. He and his wife didn’t have kids yet, and he surprised her with a trip to Venice for Christmas.

Jeremiah was (don’t laugh) a developer. While sitting on the bench, he decided to revise the company’s project management software. He rapidly prototyped a new system using SQL Server instead of Access and the .NET Framework with an ORM instead of ad hoc SQL written in VBScript and classic ASP.

The Technology World

Star Wars Episode III was released. YouTube was launched. The Xbox 360 was released.

And, at long last, Microsoft released SQL Server 2005. It had been a long five years since SQL Server 2000. There were drastic changes between the two versions. DMVs (dynamic management views), CLR, hot-add memory, ranking functions, and the XML data type were introduced. Database Mirroring, a new HA feature, was available as of SP1. SQL Server Management Studio: need I say more? These were big changes!

Star Wars III: You know you saw it in the theater

Moving On

Jump in the time machine. 9 years later, I’m writing this blog post. It’s January, and a new year is beginning.

I’ve finished college, I’ve moved up from the help desk (way up), I’ve been to – and spoken at – a couple user group meetings just in the last month, and I’ve run a couple marathons.

Brent? He’s done OK. He’s attained Microsoft Certified Master (MCM) status in SQL Server. He’s gone from DBA to software evangelist to consultant. He’s spoken all over the world.

Kendra has also attained MCM status and become a consultant. She’s learned a lot about hardware, query tuning, and when it pays to upgrade your environment instead of sinking countless people-hours into solving a problem.

Doug is the newest member of the Brent Ozar Unlimited crew. He’s spent his 9 years learning everything he knows about SQL Server, and becoming a blogger, presenter, and user group leader.

Jeremiah used his software project to set the course for his career. He learned he had a knack for databases, and he ran with it. He too has become a successful consultant, blogger, and speaker.

Technology? It’s changed a bit, too.

Our iPads are now as powerful as Doug’s computer was back in the day. The Xbox One and PlayStation 4 launched this year. Instead of carrying around a phone, a laptop, a camera, and a scanner, we carry one smartphone that does everything.

SQL Server, 9 Years, and You

SQL Server has changed dramatically, too!

SQL Server 2012 has been out for well over a year, and SP1 was released in 2013. This release brought some big changes. Internally, the way memory is handled was changed, improving operations and efficiency. There are improved T-SQL functions, such as more windowing functions. AlwaysOn Availability Groups were introduced as the latest HA/DR technology. With a GUI, Extended Events is ready to take over Profiler’s job. Columnstore indexes were introduced to make data warehouse storage and retrieval more efficient.

What’s next? We’re awaiting word on a release date for SQL Server 2014. This release is going to have the usual improvements, and then some. There’s a new method for cardinality estimation. A new engine is being introduced – In-Memory OLTP. Backups can be done directly to “the cloud” in Windows Azure. Clustered columnstore indexes will be updateable. There’s more – check out my recent webcast!

Let Me Ask You a Question

And let’s focus on your job. Are you still using the same laptop you did in 2005? If you’re on call, are you still using the same phone – or, help us all, the same pager – you had 9 years ago? Has your company’s main business application undergone change since then?

2005: HP Compaq nx9600 – loaded with a Pentium 4 650, 2048 MB RAM, 60 GB 7,200 RPM hard drive, and almost 10 pounds

You have a newer laptop. You have a newer phone. The applications have been updated, or changed entirely. Why are you still on SQL Server 2005?

Yes, the process of testing a database upgrade is time-consuming. But it’s well worth it. The changes to the internals and the new features available are nothing but positive. And let’s not forget that in just over two years – Microsoft is currently stating April 12, 2016 – extended support for SQL Server 2005 ends.

Start making plans to upgrade today, and reap the benefits!

What’s Coming in SQL Server 2014 [Video]

It’s coming…the next version of SQL Server. What surprises are in store for you? “In-memory” is a hot buzzword. Windows Azure capabilities are expanding. Data Explorer is being talked about. Jes will give you an idea of some of the new features and enhancements coming next!

Get an overview of new backup capabilities, integration with Windows Azure, new capabilities in Resource Governor, incremental statistics, and an overview of In-Memory OLTP (the artist formerly known as Hekaton)!

One question came up several times: which features will be Enterprise Edition only, and which will be available for all editions? As of today, Microsoft hasn’t released that information – but keep an eye out for “Features Supported by the Editions of SQL Server 2014” on msdn.microsoft.com.
