Blog

Building a Report to View Memory Usage

Memory is one of the most-used resources in SQL Server. Generally, the more you have, the better query performance you’ll get. This blog isn’t about the magic of the buffer pool or the plan cache, but you should understand how important they are to your server’s performance.  Since memory is such an important resource, you want to know how much of it you’re using at any time.

How can you track your server’s memory usage? One way is to use the Performance Monitor (Perfmon) counters exposed through the sys.dm_os_performance_counters DMV. One indicator of memory performance is Page Life Expectancy (PLE). You can capture basic memory usage over time by setting up a SQL Server Agent job to query this DMV, inserting the results into a table, and reporting on the table results.

I’ll show you how to collect this data and report on it!

Collecting the Data

I have a “DBAInfo” database on my instance that I use to track metrics and other information. I create a new table, MemoryHistory.

USE DBAInfo;
GO

CREATE TABLE MemoryHistory
(
    ID INT IDENTITY NOT NULL,
    CollectionDateTime DATETIME,
    PerfmonObjectName NCHAR(128),
    CounterName NCHAR(128),
    CounterValue BIGINT
);

Then, I create a new SQL Server Agent job.

memory report 1

The only step in this job is the below query, which queries the DMV and inserts the results into the table I created.

INSERT INTO MemoryHistory
    -- The explicit column list skips the IDENTITY column; without it, the insert fails.
    (CollectionDateTime, PerfmonObjectName, CounterName, CounterValue)
SELECT CURRENT_TIMESTAMP,
    object_name,
    counter_name,
    cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name = 'SQLServer:Buffer Manager';
-- Note: named instances expose this object as 'MSSQL$InstanceName:Buffer Manager'.

memory report 2

I schedule the job to run every five minutes.

memory report 3

Viewing The Data

Now, this data isn’t going to do me any good unless I view it, and make a decision or perform an action based on what I learn.

To view the data I’ve collected, I run the following query:

SELECT ID,
CollectionDateTime,
CounterName,
CounterValue
FROM MemoryHistory;

memory report 4

That’s a lot of junk to sort through when all I want to see is PLE, so I narrow down the query a bit.

SELECT ID,
CollectionDateTime,
CounterName,
CounterValue
FROM MemoryHistory
WHERE CounterName = 'Page life expectancy';

memory report 5
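When there's a lot of history, raw rows are still noisy. As an optional refinement (assuming the same MemoryHistory table from above), a small aggregate makes hourly trends stand out:

```sql
-- Average and minimum PLE per hour, so dips are easy to spot
SELECT CONVERT(DATE, CollectionDateTime)  AS CollectionDate,
       DATEPART(HOUR, CollectionDateTime) AS CollectionHour,
       AVG(CounterValue) AS AvgPLE,
       MIN(CounterValue) AS MinPLE
FROM MemoryHistory
WHERE CounterName = 'Page life expectancy'
GROUP BY CONVERT(DATE, CollectionDateTime),
         DATEPART(HOUR, CollectionDateTime)
ORDER BY CollectionDate, CollectionHour;
```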

But who wants to read through results like that each time there’s a problem to see when PLE rose or fell? Not me. I’d rather see it in a graphical format. How can I do that?

SQL Server Reporting Services

I have SSRS at my disposal. I’m going to create a very simple report that will allow me to enter start and end dates, and will display a line chart for PLE during that time.

Reporting on the Data

I set up my report to have DBAInfo as my data source. In order to choose dates, I use the following query as my dataset.

SELECT ID,
CollectionDateTime,
CounterName,
CounterValue
FROM MemoryHistory
WHERE CounterName = 'Page life expectancy'
AND CONVERT(DATE, CollectionDateTime) >= @Start
AND CONVERT(DATE, CollectionDateTime) <= @End;
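One side note on that date filter: wrapping CollectionDateTime in CONVERT prevents SQL Server from seeking on an index on that column, should you ever add one. A sargable variation uses a half-open range instead:

```sql
SELECT ID,
       CollectionDateTime,
       CounterName,
       CounterValue
FROM MemoryHistory
WHERE CounterName = 'Page life expectancy'
  AND CollectionDateTime >= @Start
  -- DATEADD pushes the upper bound to midnight AFTER @End,
  -- so the entire last day is included
  AND CollectionDateTime < DATEADD(DAY, 1, @End);
```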

I change my @Start and @End parameters to “Date/Time” so I get a date picker.

I drag a Line Chart onto the design surface and add the CounterValue as my Value and CollectionDateTime as my Category Group.

memory report 6

I can preview the report to view it:

memory report 7

Last but not least, I’ll deploy this report to Report Manager so that I and others can run it, or even schedule a regular subscription.

Homework

There are several ways to improve this report. How can you modify the query to capture date and time data individually? How do you add parameters to the report so the user running it can choose their own date range? How would you collect and display data for different instances?

Want to know more about how to use Reporting Services to create reports for your environment? Check out my 90-minute SQL Server Reporting Services Basics training video!

Who Needs an Operating System?

In the 1950s, the global economy saw a tremendous change: container ships revolutionized global commerce. Shipping costs fell by a factor of roughly 36 with the introduction of containerization. What if you could reduce operational costs and revolutionize application and database deployment in the same way?

Containers to the Future

In the last few months, the developer world has been excited about Docker, a container system. Basically, it's an easy way to do application deployments. Rather than deploying an entire VM, Docker lets developers deploy a consistent stack: they can set up individual services with independent configurations and deploy them.

Sounds like a contained database, right?

The best part about these containers is that, if you do it right, you might not need an OS under the covers. Or, if you do, it doesn't matter which one! The container holds the configuration it needs to run. It doesn't matter if you deploy on Linux, Solaris, or (maybe) even Windows: as long as that container knows what to do, you're good.

With the database, should you really care which edition of SQL Server you’re running on as long as it supports the features you need?

Surely My Storage Is Safe From This Madness!

The storage game is rapidly changing. Not that long ago, traditional storage vendors required that you buy expensive, proprietary, monolithic systems. That part of the industry was disrupted by modular technology like the Dell EqualLogic. As storage has become cheaper, it has moved back into the chassis of specialized servers; you can easily cram 24TB of SSD storage into a server with gear you can buy off the shelf.

Large scale storage is getting even more accessible: Seagate has announced its Kinetic Cloud Storage. It's storage that uses an API rather than an operating system. The drives feature new technology and Ethernet connectivity. This makes it possible for application developers to interact with the drives through an API rather than through a storage controller, cache tier, and storage tiering.

The idea might sound crazy, but third party libraries already exist that take advantage of this technology. Come launch time, developers will have drive simulators at their disposal, more libraries, and you’ll have a better idea of what this costs. You’ll be able to put 2.4 petabytes into a full rack of hardware. All without buying a complex storage appliance.

How Much Farther Can It Go?

Think about it:

  1. Applications can be deployed as containers.
  2. Storage will soon be deployed as containers.
  3. Databases are on their way to being deployed as containers.

Just as the cargo container had far reaching implications for the shipping industry, the software container has far reaching implications for our industry. The operating system and even the database platform become raw materials that are used to support the applications deployed on them.

What Does It Mean?

What’s it all mean for IT professionals? It means that the times, they are a changin’. If the application can be deployed as a container, there’s likely to be a decrease in managing the complexity of production applications. Once you can treat storage as an API, you change how storage administrators interact with storage. Over the next few years, containerization is going to change the way you manage the IT infrastructure.

What Developers Need to Know About SQL Server

What Developers Need to Know About Designing Databases

One of my favorite tips, and I never thought of it that way. Code is agile, and databases are brittle. It seems easy to refactor the database in the beginning, but as your app grows and more stuff interfaces with it, life gets a lot harder.

Johan means you should use the right data types, like using date or time when you’re just storing one of those. Plus, if you know a field is nullable or unique or relates to a field in another table, tell the database by setting up constraints or foreign key relationships. This can actually make your queries faster because SQL Server uses this knowledge to build better execution plans.

For example, DATE is a narrower field than DATETIME, requiring less storage, so you can pack more rows per page on your date indexes.
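Those storage sizes are easy to verify yourself with DATALENGTH:

```sql
-- DATE stores in 3 bytes; DATETIME needs 8
SELECT DATALENGTH(CAST('2013-11-01' AS DATE))     AS DateBytes,     -- 3
       DATALENGTH(CAST('2013-11-01' AS DATETIME)) AS DateTimeBytes; -- 8
```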

NVARCHAR is Unicode, and SQL Server uses twice the storage space for it compared to plain VARCHAR. That means you can cache half as many rows in memory, and your storage I/O takes twice as long. If you're not seriously going to store Unicode, stick with VARCHAR. (Then, of course, right-size your fields instead of using MAX, which can be difficult to index and may get stored off-row depending on data size and configuration options.) On a related note:

While database vendors will say, “Sure, you can use our database as a file server!” remember that the licensing on databases is a lot more expensive than licensing on a file server. Plus, these big files hit the transaction log, making your smaller/faster transactions compete for log file access, and slowing everything down.

Primary keys (and clustering keys) should be unique, narrow, static, and ever-increasing. Don’t go redesigning an existing database to change this, because it’s probably not your biggest bottleneck, but when starting from scratch, keep these guidelines in mind.
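Here's a minimal sketch of a table that follows those guidelines; the table and column names are made up for illustration:

```sql
-- An INT IDENTITY clustering key is unique, narrow (4 bytes),
-- static (never updated), and ever-increasing (new rows append at the end).
CREATE TABLE dbo.Orders
(
    OrderID   INT IDENTITY(1,1) NOT NULL,
    OrderDate DATETIME NOT NULL,
    Amount    MONEY NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);
```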

I wrote triggers a lot too when I was a developer, but as I moved into the database side, I realized how tough they are to scale. They’re synchronous, and they can cause concurrency problems. Consider moving to an asynchronous solution that doesn’t do as much locking/blocking.

Narrow one-field indexes are usually less useful because you’re selecting more than one field. Plus, every index you add is like another copy of the table. Insert a record into a table with ten nonclustered indexes, and you’ll be doing eleven (or more) writes to disk each time.

SQL Server Management Studio’s index suggestions have no concerns whatsoever about adding overhead to your inserts/updates/deletes, and they’re often supersets or subsets of existing indexes.

When you first get started with a new database, don’t go crazy adding indexes with guesses about what will get queried. Indexes are easy to add later – wait to see how the queries actually shape up, and which ones get run the most often.

What Developers Need to Know About Writing Queries

This is one of my favorite tips. SQL is really easy to learn, but the underlying server code behaves very differently than you might expect. We’ve gotta think set-based. On a related note:

Learn more about isolation levels.

SQL Server licensing is about $2k USD per core for Standard Edition, and $7k per core for Enterprise Edition. These are usually the most expensive cores you’ve got in the shop. Aaron’s guidance is based on minimizing the amount of CPU clock cycles we need to burn in the database tier, and thinking about moving those to the easier-to-scale-out app tier.

Sometimes the business design requires outer joins, but beware that you can get much worse execution plans as your data grows.

Databases perform very differently at scale, whether it be scale of data or scale of underlying hardware or scale of simultaneous queries.

What Features, Commands, Datatypes, Etc. Should Be Generally Avoided

If you say BETWEEN December 1st and December 31st, SQL Server sees that last parameter as the very beginning of December 31st. Any data from, say, 1AM on December 31st and afterwards is going to be excluded.
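The usual fix is to skip BETWEEN for date ranges and use a half-open interval instead. A sketch, with dbo.Sales as a hypothetical table:

```sql
-- BETWEEN '20131201' AND '20131231' stops at midnight at the START of Dec 31.
-- This half-open range covers every moment of the month instead:
SELECT SaleID, SaleDate, Amount
FROM dbo.Sales
WHERE SaleDate >= '20131201'
  AND SaleDate <  '20140101';
```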

Simplicity is a big theme here. Manhandling the engine around with hints usually results in worse performance overall.

We’re not saying nested views are wrong – sometimes they can work really well – but KBK is pointing out that you may end up dragging a bunch more tables into your query inadvertently.

Always good advice – but it’s advice for managers rather than developers, right?

Sometimes, a scan is easier – especially for grouping.

For more details, watch our There’s Something About NOLOCK video.

The Best One-Line Things to Improve Databases and Apps

That’s actually how I got started in database administration – I got tired of learning the Language Du Jour because I’m not all that bright. I have so much respect for developers who can rapidly learn new languages and frameworks and then deploy them effectively. I switched to databases because I could learn one language that’s been the same for decades, and it’s even used by multiple back end platforms. I’m lazy. On a related note:

Nick is from StackExchange, the guys who built Opserver, an open source monitoring system. They’re brilliant about watching the top resource-intensive queries on the database server, and you should be too. When something new pops up, you need to know why it’s suddenly using more resources than you expect.

Query execution plans are road maps for how SQL Server will process the data for your query. Learn more about those.

You don’t have to know how to FIX the bottlenecks, necessarily, but you have to know what’s going to be the pain point as you scale.

The best DBAs are partners, not predators. When you find a good one, they’re willing to help make you look like a star.

I’m all about this. By the time somebody brings me in for advice, the best practices guidelines aren’t usually working for them, and they need to learn how to bend the rules.

Live Video of the Twitter Chat

Here’s the 50-minute video of the Twitter chat where we built this list, plus covered a lot more tips:

I apologize for the audio – WebEx’s audio codecs are horrendous. (I’ve got a Blue Yeti microphone.)

Sign Up for the Next One:
What DBAs Need to Know About Monitoring

Page life expectancy. Disk queue length. Page splits. There’s so much old bogus advice out there around SQL Server monitoring, and it’s time to update the rules. Join Microsoft Certified Master Brent Ozar as we collaboratively write an e-book with new advice on monitoring. You can contribute too – we’ll be working together over Twitter using #DellSQL.

We’ll discuss and share:

  • What metrics every DBA should monitor
  • Which metrics are meaningless
  • Our favorite free DMV scripts

Register now to watch it live on Thursday, November 14th.

“You get free videos! And you get free videos!” #SQLintersection

This week, Jeremiah, Kendra, and I are presenting at SQL Intersection, a conference that also happens at the same place and time as DevIntersection and AngleBrackets. I like Intersections because there’s a bunch of different technologies covered – SQL Server, all kinds of development, SharePoint, cloud, and infrastructure.

Kendra Polling the Audience

For our pre-con, the Accidental DBA Starter Kit, we had a little fun with the attendees. We had a little contest, picked a couple of winners, and brought them to the front of the room. Their prize: our Make SQL Server Apps Go Faster video course, but then things got trickier. We offered to give them a different prize if they’d give that up for what was inside envelope #2. After a few minutes of toying with their emotions, we announced that they weren’t the only winners, and attendees should check under their seats. 

Attendees checking under their chairs

Exactly half of the audience – the right hand side – had coupons for the video course too. We toyed with the other half of the audience (LOSERS!) for a while, and then gave them the prizes too, complete with an Oprah moment. “You get free training videos! And YOU get free training videos! Everybody gets free training videos!”

We love this stuff.

For those of you who joined us in our PASS pre-con earlier this month, you might be asking yourselves, “Hey, how come you didn’t do this at the Summit?” Well, we tried, but PASS wouldn’t let us do it:

Your prize giveaway contained direct references to your company and services…. The training material is combined with direct links to your website, other course offerings, and has your company logo viewable….

That’s a shame – we really would have loved to give away the free training to all of the attendees. After all, it’s the exact same training material they already paid PASS for, and we just wanted to make their note-taking and ongoing learning easier.

Ah, well – guess we’d better focus on our SQL Intersection post-con on Thursday. (Don’t bother checking under your chairs – or maybe I’m just saying that to throw you off the trail.)

Upcoming User Group Sessions in Chicago and LA

Coming soon to a user group near you, it’s…me.

November 13: Chicago .NET User Group
Brent’s How to Think Like the Engine: SQL Server Internals for Developers

You’re comfortable writing queries to get the data you need, but you’re much less comfortable trying to design the right indexes for your database. In this 90-minute session with Microsoft Certified Master Brent Ozar, you’ll learn how the SQL Server engine looks at your indexes and builds your query results.

December 5, Los Angeles SQL Server User Group:
Brent’s How to Think Like the Engine

You’re a developer or a DBA, and you’re comfortable writing queries to get the data you need. You’re much less comfortable trying to design the right indexes for your database server. In this 90-minute session with Brent Ozar, you’ll learn how the SQL Server engine looks at your indexes and builds your query results.

December 11: Chicago .NET User Group
Brent’s T-SQL Antipatterns

It’s not your fault: there are some things that SQL Server just doesn’t do very well. Microsoft Certified Master (and recovering developer) Brent Ozar will explain things that compile great, work great on your desktop, but then explode when you try to scale. We’ll cover implicit conversions, string processing, functions, and sargability.

Frequently Asked Questions About TempDB

The questions came fast and furious in one of my recent TempDB webcasts, so here are the ones I wasn't able to answer during the live session:

Q: Virtualized OS, SAN, no dedicated LUNs, most likely scenario with no gotchas, theoretically: dump TempDB with everything else all on one virtual volume, including logs — or split to separate virtual volumes and separate from other databases and also from logs? (Matthew N.)

When building virtual SQL Server machines, always use a separate VMDK/VHD for TempDB. This lets your VMware and Hyper-V sysadmins manage your TempDB differently. Some storage systems can automatically move different VHD/VMDK files to different tiers of storage based on their access patterns. If you suddenly enable RCSI and need a lot more TempDB throughput, it’s easy for your VMware admin to adjust that when it’s in a separate file (or LUN).

Q: How many SGAM pages we have in one data file? (Alvaro C.)

One for every 4GB of data file space. To learn more about the internals of how that works, check out Microsoft's knowledge base article on TempDB.

Q: Can filtered indexes eliminate the use of tempdb on complex queries? (Greg J)

It’s possible, but I would want to find the exact queries causing the TempDB use before I tried to prescribe a method to get pain relief. I can envision a scenario where a query gets insufficient memory grants and then frequently spills to TempDB when doing sorting and joining to other tables, but … I’ll be honest, this is kind of a stretch. Filtered indexes wouldn’t be the first place I’d turn here.

Q: How do you diagnose TempDB problems in SQL Server 2000? (Debbie C)

I don’t. I don’t work on SQL Server 2000 anymore. It’s unsupported. If anything breaks on it, you’re out of luck. When a client comes to me with a SQL Server 2000 performance problem, step 1 is to get them onto a supported version of SQL Server.

Q: If I only have RAID 5 storage available, should I keep tempdb in 1 datafile? (Mike K)

If your server’s biggest performance bottleneck is SGAM contention in TempDB, then you need multiple TempDB data files to alleviate the bottleneck. If, on the other hand, your server’s biggest performance bottleneck is storage writes, then you shouldn’t be playing around with TempDB. :-D

Q: is there a rule of thumb on the autogrowth settings for the temp db files? (Lee T)

I actually want to pre-grow my TempDB files whenever possible. They shouldn’t be growing frequently on a production box. For example, if I bought 200GB of space for my TempDB data files, I’m going to go ahead and pre-grow out TempDB to take that space. It doesn’t really make sense to wait around for users to grow it out when I’ve got dedicated storage for it. Because of that, I don’t usually focus on TempDB autogrowth sizes.
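Pre-growing is one ALTER DATABASE per file. The logical file name and size below are illustrative; check yours in tempdb.sys.database_files first:

```sql
-- tempdev is the default logical name of the first TempDB data file
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 50GB);
```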

Q: My TempDB avg write is around 5ms, is that good enough? (Welly S)

Only your server can tell you: using wait stats, is your biggest bottleneck right now the TempDB writes? If yes, and if your users aren’t satisfied with performance, and it’s user queries hitting TempDB, then it’s not good enough. If TempDB writes aren’t your biggest bottleneck, or if your users aren’t complaining, then it’s fine.

Q: Sometime I use DBCC FREESYSTEMCACHE (‘All’) to shrink the tempdb. Any side effects to that? (Ashwani M)

What’s the problem that you’re trying to solve? Why are you shrinking TempDB? Grow it out to the space it needs to be, and leave it there. If it keeps growing larger, that’s SQL Server telling you it needs more space. If you want to permanently address the root cause, you need to find the queries that are allocating TempDB space and fix those.
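To find out who's allocating TempDB space right now, one starting point is the task space usage DMV. A sketch:

```sql
-- TempDB pages allocated per session (pages are 8KB, hence * 8 / 1024 for MB)
SELECT su.session_id,
       SUM(su.user_objects_alloc_page_count
         + su.internal_objects_alloc_page_count) * 8 / 1024 AS allocated_mb
FROM sys.dm_db_task_space_usage AS su
GROUP BY su.session_id
ORDER BY allocated_mb DESC;
```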

Q: If you have only 1 tempdb file, what is the best way to add 3 more, especially if the single file is very large already. (David S)

I’m a stickler about change control. I want to make my production changes at a controlled time when I know it’s safe. Because of that, I tend to schedule non-critical changes for a maintenance window, do all of them then, and then restart the SQL Server instance afterwards. Changing TempDB configurations is a great fit for that – keep it in a change window.
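Mechanically, it's one ALTER DATABASE per new file; keep all the files equally sized so SQL Server's proportional fill spreads activity evenly across them. The path and sizes here are illustrative:

```sql
-- Repeat (with tempdev3, tempdev4, ...) for each additional file
ALTER DATABASE tempdb
ADD FILE
(
    NAME = tempdev2,
    FILENAME = 'T:\TempDB\tempdev2.ndf',
    SIZE = 10GB,
    FILEGROWTH = 0   -- pre-grown, so no autogrowth needed
);
```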

Q: Can we start initially with 2 files of Tempdb or it should be minimum 4? (Syed A)

Microsoft’s Bob Ward recommended 4 in his TempDB talk at the PASS Summit, and I’d tend to stick with that guidance. I don’t see a benefit in using just 2.
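To see how many TempDB files you have today, and how big each one is:

```sql
-- size is reported in 8KB pages, so * 8 / 1024 converts to MB
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;
```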

Q: We tried using an SSD in various configurations and had minimal success. What should we look for? (Brad P)

In my recent post on benchmarking SSDs, I discussed the approach I use for load testing. I’d start there. We often uncover bottlenecks in the RAID controller, its settings, or the cabling.

Q: We have SQL Server 2008 R2 in Production. Would you recommend using SQL Server 2012 going forward? (Sreedhar L)

Sure. It’s got lots of nifty improvements that make it my favorite SQL Server version yet, and it’s pretty stable now that it’s had a round of updates.

Q: In a SQL Server 2008 R2 cluster of two nodes, would this affect the number of tempdb files? (William N)

No, the number of nodes in a failover cluster doesn't affect the number of TempDB files you choose.

Q: we have 24 cores and allocated 24 tempdb files as per the recommendation 3 years ago. Is this still valid? (Nageswararo Y)

Most of the storage I test seems to work worse as you throw more random load at it. Using 24 files instead of 8 makes some storage perform much worse. Because of that, I wouldn’t go with the one-file-per-core recommendation unless I was having serious SGAM contention – and then, I’d want to load test my storage under extreme random load to make sure it performed well.

Q: I am at war with my SAN admins, who deny there is a SAN issue. TempDB (using your Blitz script) shows high write stall times of 51ms to 10ms. Is there anything else I can do to check, or any other stats I can gather to help me prove we have a problem? (Ozzie B)

Sure, check out our videos on storage troubleshooting and our 6-hour training class on storage, hardware, and virtualization.

Q: what to do when tempdb is full in realtime and need to troubleshoot without sql restart? (Ravi S)

Normally, I don’t want to troubleshoot in real time – I want to kill the queries involved (try sp_WhoIsActive to see which queries have TempDB allocations) and then fix the problem long term by getting more TempDB space. Someone’s always going to run a crazy query to fill up TempDB, and we need to plan for that. I’m a big fan of using third party performance monitoring tools for this type of thing – they can jump back in time to look at what was using TempDB at any point in recent history.

Q: What are the challenges if we maintain TempDB on local storage? (Nageswararao Y)

Make sure you monitor for drive failures, make sure the hot spare drive automatically takes over for failed drives, and make sure drive performance doesn’t go down as the drives get older.

Q: What if my read is 2.5 msec but my write is like 40 msec. All on raid 5 this still means I need a raid 10 or a ssd with raid1? (Pramit S)

If wait stats show that writes are your bottleneck, then yeah, you want to investigate what it would take to make the storage go faster. This can include changing to RAID 10, adding more drives, switching to SSDs, or reconfiguring your RAID controller’s settings.

Q: You mention putting TempDB files on a local drive within a cluster for better performance – are there any drawbacks to doing that? (John H)

It’s unsupported on SQL Server 2008 R2 and prior versions. You also have to make sure the local drives are actually as fast as the SAN, too – heaven forbid you do this with three locally attached magnetic hard drives in a RAID 5 config.

Q: Any recommendations as far as transaction logs for TempDB? (Dominick S)

TempDB’s log file tuning is the same as user database log files, so no special guidance there.

Q: Have you started using SMB 3 for SQL storage? How much gain can be made with the ability to use RAM cache for SMB 3.0 shares? (Fred P)

If I’m going to invest in more memory somewhere, it’s going to be in the SQL Server, not the file server. That’ll avoid the round trip latency off to the Windows file server box, and SQL Server can use it for other things like caching execution plans and query workspace.

Q: You mention that you don’t like to store TempDB in RamDisk, Is there a big reason for that? (Jose S)

RAM drives aren’t built into Windows – they’re third party software and drivers. If I can avoid installing software on the SQL Server, I will. I wish I could find completely bug-free software, but I haven’t seen that yet. :-D

Q: should the Recovery_Model be set to FULL OR SIMPLE for tempDB? (Sushant B)

Full recovery lets you take transaction log backups. Since you won’t be doing that in TempDB – you can’t restore that database – TempDB goes in simple recovery mode.
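You can confirm this for yourself; TempDB always reports SIMPLE:

```sql
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'tempdb';   -- recovery_model_desc: SIMPLE
```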

Q: Also, we encountered a strange issue recently: after restarting our staging SQL Server 2008 R2, some queries were running slow. Why would that be? Would you need to rebuild statistics or indexes after the TempDB restart? I appreciate your help. (Sreedhar L)

You got a different execution plan after the restart because when SQL Server restarts, the execution plan cache disappears. Avoid restarting SQL Server if possible. When you do run into an issue like this, you have to investigate issues like parameter sniffing and statistics. Start your journey with Erland Sommarskog’s excellent post, Slow in the Application, Fast in SSMS.

Q: since initial tempdb data file calculations are often based on core count, how does hyperthreading impact that formula? (Allen M)

As far as SQL Server is concerned, hyperthreading just doubles the number of cores. SQL Server doesn’t know whether they’re real cores or virtual. If you give SQL Server 8 cores, it just sees 8 cores. (NUMA and SMP architectures aside.)

Q: Any risks or drawbacks of using Instant File Initialization? (John M)

There's a security risk. Say you've got a development SQL Server that houses a database with all your customer and payroll data on it. You drop the database, and then you go create a new database. The storage gear decides to use the exact same storage area as the database you just dropped. With Instant File Initialization, the contents of your newly created database file are actually the contents of the old database file. Now, granted, you won't see the tables in there in SSMS, but if you detach that database, copy the MDF somewhere else, and use a hex editor on it, you'll see the original data pages. Without Instant File Initialization, on the other hand, when you create the new database, Windows zeroes out the space before it's available to the new database.

Q: any suggestions for TempDB on Amazon EC2? (Vladimr F)

Unfortunately, if you’re seeing storage throughput as your biggest problem in EC2 (and it very often is), you’re going to have to do sysadmin-level tuning first. We’ve written a lot about EC2 tuning here at BrentOzar.com – use the search for EC2 – but the work involved changes pretty frequently as Amazon updates their infrastructure. This is one of those cases where your systems administration work is never really done. You have to figure out how to take the virtual hardware they give you and make it perform well, and those techniques change frequently.

Q: Does TempDB need to be backed up? How often? (Li Z)

No. Isn’t it nice that we have these little breaks every now and then?

Q: What type of RAID did you recommend for Local TempDB SSDs? (David W)

I suggest starting with a mirrored pair (RAID 1). If you need more space than that, you’ll need to do load testing on your server, your RAID controller, and your drives, because each combination is different. For example, in the Dell PowerEdge R720 tests we did recently with Samsung 840 drives, we found that we didn’t really pay a speed penalty for RAID 5 once we stuffed it with drives.

Q: If SQL Server recreates the tempdb when it restart, how do we configure the 4 tables every time? (Carlos M)

You only do it once: SQL Server saves the file configuration in the master database and reapplies it every time TempDB is recreated at startup.

Q: When SQL Server restarts would the transactions be lost inside the tempdb? (Mahsa A)

Yes. (It’s definitely called TempDB because it’s temporary.)

Q: best practice for putting temp DB on SMB 3.0 file share? (Fred P)

The best practice is not to do it – I’d always recommend using local mirrored solid state storage for TempDB because it’s cheap and fast. The instant I have to hit the network in order to access TempDB, performance goes down – because remember, I’m sharing that network for my data file access too.

Q: Should all this stuff not be done by the engine itself rather than be done by a poor DBA? (Lakshminarayan N)

If you’re a poor DBA, there’s never been a better time to look for a job. It’s a fantastic job market for DBAs who understand how to do this kind of performance tuning. Hey, think of it as job security – it’s a good thing! If SQL Server managed itself, you wouldn’t be getting paid to read this blog. (You *are* reading this blog from work, right?)

Learn more in our free Secrets of TempDB video.

Solving Session State Slowdowns (video)

Session state – we can all agree that it needs to be as durable as our application database, but does it need to be in the database? It’s common for teams to move from the ASP.NET in process session state provider to a SQL Server session state provider as soon as they move from a single web server to a web farm. Unfortunately, this means that the database server needs to scale with session state – even if session activity is overwhelming application activity.

Session state is inherently ephemeral – your application is storing what a user was doing at a particular moment in time. This is only necessary because the internet, specifically HTTP, is not stateful. If we were to manage state data in the browser, we would have to keep sending that information back and forth between the user and the server.

Rather than increase traffic, developers can choose to keep session data in a local data store. To start with, this is invariably an in process data store that is little more than a glorified cache living inside the web server. Once you’re successful, you need two web servers. At that point, you need somewhere else to store session state. Frequently, the SQL Server ends up as the session state server. This is a Bad Idea®.

Putting session state in SQL Server leads to a lot of problems. Every read from the session state table refreshes a last-updated column (this keeps the session alive while users are only reading data). A large number of small writes can lead to WRITELOG waits in SQL Server. Beyond that, storing a lot of data in session state can lead to other problems, since the session data can grow large enough to move off row (which can also lead to WRITELOG waits). And, to top it all off, on very busy servers, session state activity can lead to waits on reading data out of the table.
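If you suspect session state is pushing your log, the server's accumulated wait stats are a quick sanity check. Here's a minimal sketch using the sys.dm_os_wait_stats DMV – keep in mind the numbers are cumulative since the last restart (or since the stats were last cleared), so high values don't automatically mean session state is the culprit:

```sql
-- Check accumulated WRITELOG waits since the last restart.
-- A high wait_time_ms relative to other wait types suggests
-- log-write pressure, which frequent small session state
-- updates can cause.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';
```

Compare the WRITELOG numbers against your other top waits before drawing conclusions.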

In short – putting session state in SQL Server is a very bad idea.

What About Hekaton?

What about it? Even though Hekaton data is stored in memory, there are a number of characteristics of Hekaton that make it unsuitable for session state when you compare it to other options. (For those that don’t know, Hekaton is the SQL Server in-memory OLTP engine being released as part of SQL Server 2014.)

Hekaton isn’t a good fit because it ties up valuable SQL Server memory space for storing temporary state. Yes, Hekaton is fast. Yes, it avoids the locking problems. But if you want session state to persist between restarts, the log file is still involved and that can become a significant bottleneck. Hekaton tables also don’t support LOB storage mechanisms, so teams wanting to use this feature will have to write a custom session state provider.

When you’re already going down that route, why not use something else?

ASP.NET Session State Providers

Anyone could write their own session state provider – Microsoft documents the API, and it’s pretty easy to do. A number of developers have created session state providers for different databases. How would you choose between the many options available?

In the video, I cover different criteria like:

  • High Availability
  • Fault Tolerance
  • Durability
  • Latency
  • Resource Utilization

There are many different reasons why you should choose one session state provider over another, including using the SQL Server session state provider. The most important consideration is matching a set of criteria up with your application requirements.

Changing the Session State Provider

Let’s say you’ve decided to use Redis as a session state provider. How would you go about configuring your ASP.NET application to use it? As it turns out, changing the session state provider is as simple as changing a few lines in the web.config file.

By default, your web.config file will contain something like this line:

<sessionState mode="SQLServer"
        sqlConnectionString="data source=127.0.0.1;
        user id=<username>;password=<strongpassword>"
        cookieless="false" 
        timeout="20" />

This just tells ASP.NET which SQL Server to use and provides a set of credentials for login. So far, this is pretty easy, right? How do we change the session state provider?

After you’ve downloaded or installed the right DLL, you can just change the relevant lines in the web.config file:


<sessionState mode="Custom"
                 customProvider="RedisSessionStateProvider">
  <providers>
    <clear />
    <add name="RedisSessionStateProvider" 
         type="Harbour.RedisSessionStateStore.RedisSessionStateStoreProvider" 
         host="my-redis-host.contosofroyo:6379" clientType="pooled" />
  </providers>
</sessionState>

That change will configure ASP.NET to immediately start using the RedisSessionStateStore library for managing ASP.NET session state. Make sure you time the change with an outage, because the cutover will be swift and may lose user data.

…and they all lived happily ever after

It is very easy to get session state out of SQL Server – you just need to pick a new provider and set everything up. Different session state providers will have different requirements, so it’s important to think about what is involved in setting up one provider as opposed to another one. Not all session state backends are created equal – evaluate what you need from your session state provider and go from there. Developers, you can make a few simple changes today and earn the love, or at least grudging respect, of your database administrators.

Index Hints: Helpful or Harmful?

Let me ask you a question: do you want to boss the SQL Server query optimizer around?

Query Optimizer flowchart

If you answered no: good. You’re willing to let the query optimizer do its job, which is to find the least expensive way to run a query, as quickly as possible.

If you answered yes: you’re a brave person.

Maybe you’ve hit upon the perfect index for a specific query, but for some reason the optimizer won’t use it. But you know it improves performance. How can you make the query optimizer listen to you?

Index hints are a powerful, yet potentially dangerous, feature of SQL Server.

Let’s look at an example. I’m working with AdventureWorks2012. The database has two related tables, HumanResources.Employee and Person.Person. They are related through the BusinessEntityID column. I want to retrieve information about the users and their logins.

USE AdventureWorks2012;
GO

SELECT PER.FirstName,
PER.LastName,
EMP.LoginID
FROM HumanResources.Employee EMP
INNER JOIN Person.Person PER ON PER.BusinessEntityID = EMP.BusinessEntityID; 

Let’s look at the execution plan.

index hints 1

A nonclustered index seek is performed on Employee.AK_Employee_LoginID, a nonclustered index on the LoginID column. A clustered index seek is performed on Person. Note the cost is 0.217439.

I notice that the Employee table has another index, PK_Employee_BusinessEntityID, which is on the BusinessEntityID column. I want to force my query to use this index instead. I can do this by using the WITH (INDEX) hint.

SELECT PER.FirstName,
PER.LastName,
EMP.LoginID
FROM HumanResources.Employee EMP WITH (INDEX (PK_Employee_BusinessEntityID))
INNER JOIN Person.Person PER ON PER.BusinessEntityID = EMP.BusinessEntityID; 

Let’s look at this execution plan.

index hints 2

Now, a clustered index scan is performed on Employee. Note, though, that the query cost has increased – to 0.220402.

In your case, the index you force the query to use might get the better cost – for now. But what happens when more data is added to the table, and statistics change? What happens if you update SQL Server, and the query optimizer changes?

Eventually, the index you’re using may not be the best one for the job – but SQL Server is going to continue to use it anyway. You’ve told it to do so, and it’s going to keep doing it.

Think about how you would get rid of it. Is there an easy way to search all of your code – application code, stored procedures, report definitions – for this specific index hint, to remove it?
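For code that lives inside the database – stored procedures, views, functions – you can at least search the module definitions. Here's a rough sketch (it won't find hints in application code or report definitions stored outside SQL Server, and the square brackets escape the underscores so LIKE treats them literally):

```sql
-- Find database objects whose definitions mention a specific index hint.
-- Only covers code stored in this database, not application code.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%PK[_]Employee[_]BusinessEntityID%';
```

Anything hard-coded into application source or SSRS reports still has to be hunted down by hand.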

Another point to consider is what would happen to that code if the index was disabled or deleted? Would it continue to run? Let’s give it a try. I issue a disable index command.

ALTER INDEX PK_Employee_BusinessEntityID ON HumanResources.Employee DISABLE; 

Then I run the same query as before, and what happens? I get an error:

Msg 315, Level 16, State 1, Line 1
Index “PK_Employee_BusinessEntityID” on table “HumanResources.Employee” (specified in the FROM clause) is disabled or resides in a filegroup which is not online.
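Note that without the hint, the optimizer would simply ignore a disabled index and pick another plan – it's the hint that turns a disabled index into a hard failure. To get the query working again, either remove the hint or bring the index back online; a disabled index is re-enabled by rebuilding it:

```sql
-- Re-enable a disabled index by rebuilding it.
ALTER INDEX PK_Employee_BusinessEntityID
ON HumanResources.Employee
REBUILD;
```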

Index hints can be a powerful feature, but use them with caution. Look for other ways to optimize your query – perhaps a different index can be created, or the query can be rewritten. If you can’t find a better way, document the use of the index hint and its purpose. When you upgrade SQL Server, test whether the hinted index is still the most effective choice.

Remember, the query optimizer does its job really well – let it.

Announcing sp_AskBrent® for Troubleshooting Slow SQL Servers

When someone tells you the SQL Server is slow, what do you do?

  • Run sp_who looking for queries that might be blocking someone
  • Check the SQL Agent jobs to see if there’s a backup job running
  • Fire up Activity Monitor looking for problems
  • Remote desktop into the server to look at CPU use
  • Open Perfmon to check your favorite metrics
  • Run a wait stats sampling query looking for the biggest bottleneck

That’s all a lot of work, and I’m lazy. So right now, as we speak, I’m onstage at the PASS Summit unveiling a single stored procedure that does all that in ten seconds, plus more.

You already know how our free sp_Blitz® gives you a prioritized list of configuration and health problems on your SQL Server. Now when your SQL Server is slow, you can find out what’s going on just by asking Brent – with sp_AskBrent®. Here’s how it looks:

sp_AskBrent® in Action

sp_AskBrent® in Action

In this example, I’ve got three problems, and I can click on links to get more details about the specific finding, how to stop it, and the query text.

I can also turn on Expert Mode and see some of the raw data that sp_AskBrent® checked for diagnostics. Here’s Expert Mode turned on with a server having a whole mess of problems:

sp_AskBrent with Expert Mode Enabled

sp_AskBrent® with Expert Mode Enabled

It returns results including:

  • Which wait stats were the biggest during the last five seconds
  • Which data and log files are getting the most reads & writes, and how fast they’re responding
  • Which queries consumed the most resources during the sample

Using the @OutputDatabase parameters, you can also log the results to a table. This means you can set up a SQL Agent job to run it every few minutes, and then when someone wants to know why the server was slow last night, you can run it with the @AsOf parameter to get the results as of a past date and time.
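As a sketch of how that workflow looks – the exact parameter names below are my assumption, so check the download page for the version you install:

```sql
-- In a SQL Agent job: run the check and log results to a table
-- (parameter names assumed; verify against the documentation).
EXEC dbo.sp_AskBrent
    @OutputDatabaseName = 'DBAInfo',
    @OutputSchemaName   = 'dbo',
    @OutputTableName    = 'AskBrentResults';

-- Later: ask what the server looked like at a past point in time.
EXEC dbo.sp_AskBrent @AsOf = '2013-10-16 22:00';
```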

To learn more, check out sp_AskBrent®’s download page.

#SQLPASS #Summit13 Women in Technology Lunch – Live!

Welcome to the second day of #SQLPASS #Summit13! I’ve been having a blast this week – presenting a precon with Brent and Kendra, watching great speakers like Erin Stellato and Bob Ward, chatting at the Community Zone, and walking the vendor booths.

Today is one of my favorite events – the annual Women In Technology Luncheon! Our topic is Beyond Stereotypes: Equality, Gender Neutrality, and Valuing Team Diversity. We have a great panel – Erin Stellato, Rob Farley, Cindy Gross, Kevin Kline, and Gail Shaw. This is an incredibly diverse and brilliant group! 

12:10 pm – Tom LaRock is kicking off the event! He introduces our moderator, Mickey Stuewe. She reminds us to ask questions and follow along on Twitter using the #passwit hashtag. 

12:12 – Today we’re talking about diversity – making sure everyone is included. Mickey introduces the panelists! Cindy Gross is an MCM and member of AzureCAT. Rob Farley is a business owner, an MCM and MVP, and an outgoing PASS board member. Kevin Kline is a past president of PASS, an MVP, and an author – and father of several daughters! Erin Stellato is an MVP and brilliant consultant. Gail Shaw is an MCM and MVP, and contributes to the community in many ways. 

12:14 – Our first question is about fitting in. Gail tells us she doesn’t always fit in – not because she’s a woman, but because she’s a geek. She spends her weekends playing D&D! (I played many years ago. I loved it. I want to do it again!) 

12:15 – Have you experienced subtle cultural differences that make it hard to fit in? Rob says he does see people who exclude others because they are different. But that’s not him. In any environment where there is a large number of the same people, it can happen that there are assumptions. Let’s not forget about religious differences as a form of diversity and exclusion as well – important to remember at a large, international event like this too! Remember to tell people how you value them. 

12:19 – Cindy, how can you tell when you’re being treated differently, and why? She’ll go to a coworker or friend and ask for advice. She has a group of people she can go to for a second opinion. “When I react to something, would I react that way if it came from someone else?” is what she often asks. 

Kevin: There’s a lot of interesting scientific research going on right now. The average person can know about 150 people really well. Beyond that, stereotypes can save time. “They’re like the index pages in a database” gets a good laugh. He talks about the difference between introverts and extroverts. Introverts will internalize and be introspective about comments made towards them. Follow Cindy’s advice – talk to others about a situation or comment that was made and get their advice about it. 

12:26 – Kevin, how do you explain that you think you’ve been treated differently to your boss? Database professionals will spend a lot of time debugging code and resolving problems, but we don’t spend enough time figuring out the people we work with. Why do they get up in the morning? What drives them? Talk to your boss. If you ask them if you made a molehill into a mountain, and he or she says yes, you did – think of it in one of two ways. They could have a very different set of values from you. Or, we may not have properly expressed how this behavior affects our values. You need to have your own “values statement” – and share that with your boss, and even coworkers. (What I hear is: we need to realize that although our job is technology, we work with PEOPLE. You have to be willing to understand and talk to PEOPLE.) 

12:33 – Rob, how do you deal with being on a team when you’re excluded? If you see someone being excluded, you have to speak up. If you see a situation that is wrong and you don’t say something about it, it doesn’t help. You’ll be wracked by guilt. Stand up and be the person who champions what is right. 

Gail says that if the people you call friends are the ones saying you need to change, you might need to redefine who your friends are. She recently had to “break up” with a friend because she wanted Gail to be something she wasn’t. She wasn’t accepting. Sometimes it’s best to burn those bridges. Find friends who are interested in who YOU are and who listen to your concerns. 

12:40 – Cindy says there is so much more than just gender that we need to take into consideration. It could be so many things – your personality, what you support politically or socially. We all have our own stereotypes, too. 

Gail says that she has many male, geek friends. They feel excluded because they would rather stay in and play a card game instead of going to a football game. It’s not just women who feel this. 

12:45 – Erin, how do you establish and build relationships with coworkers in a diverse setting? You have to look at how you build any relationship – you build a common ground. You share interests and beliefs. Find common ground with your coworkers – we can all find something. You have to grow and celebrate that. Use the common ground to make your team better. “You can lead without being the leader.” Reach out to every member of your team – even if a “boss” isn’t doing so. Any good relationship takes time – and that’s OK, you just need to accept that. Find people’s strengths and celebrate the diversity. 

Kevin: Even if you don’t have an opinion on a topic, say that. Don’t say nothing. People take silence as you don’t care, or you agree with what is going on. Introverts will often be quiet, and let extroverts run the show – even when it’s, “Where do we go for lunch?” 

12:50 – What techniques have you seen to make events more inclusive in a professional setting? Kevin says when PASS was being founded, they didn’t have the resources other groups did – like money. They had to be better at people. “The antidote to swagger is humility.” 

12:55 – Best part of the day! Questions from the audience! 

Question from online – how do we celebrate conclusions vs. milestones? Rob says we need to recognize that different people celebrate different things. Even here this week, some people come to Summit to see friends. Some come to work on their career. Some come for specific sessions. We are all different. Recognize that we all need to celebrate – support others in the way they choose to do so. 

“How do we foster compassion in the workplace? There’s a feeling that compassion is the antithesis of a successful team.” Kevin – there’s a ton of research on how to make teams work faster, but what about “how do we make this team higher quality?” Someone has to start by demonstrating it – one person. You have to get the people with influence and authority to model those behaviors. Erin reminds us, “You can lead even if you aren’t the leader.” And if it bothers you that much, sometimes, you may be at the wrong company. If you can’t effect change, you may need to leave. 

“What is the best way to disarm someone who is applying a stereotype to you?” Gail says, there is something in common between you. Be polite. Find that something in common and work from there. You’ll never get someone to stop applying a stereotype by bashing your head against it. Cindy adds that if you try to take it head-on and convince that person that YOU aren’t that stereotype, you’re not solving that problem. The stereotype will still be in that person’s head. Rob says, on the inside, have pity for them. They are missing out on who you actually are. Care for them. Try to get to know them for who they are. 

“I was walking around the expo yesterday and had a conversation. You’re a woman, in technology, you’re a foreigner, you’re an athlete – how are you going to succeed in this industry? How would have you answered?” Erin says, “I would have said, “How am I NOT going to succeed?”” Gail would have said, “Who are you to tell me I’m not going to succeed?” Cindy had a professor call her into his office right before graduation. He commented on her hair color and told her she wouldn’t succeed in the real world. Now look at her! What other people think is irrelevant. 

1:12 – We could talk about this all day, but what is our call to action? 

Gail – take a look in the mirror and make sure you’re not the one doing it. Don’t be the problem. 

Rob – love people back regardless of how they are treating you. Don’t be part of the stereotype. 

Wow, this was inspiring! Thank you to our panelists for being on stage and sharing your experiences. Thank you to those who stepped up and asked questions. Thank you to the audience, both in person and online. I look forward to next year’s event! 
