This is new in the preview of SQL Server Management Studio 2016, available for download now, and available for playing around with at the TechNet Virtual Labs. It even works when you’re connected to SQL Server 2014 – but not 2012, because it relies on DMVs that only shipped with 2014.
It’s so cute to watch it working on multiple indexes at once:
And the bigger your plans get, the more fun it is to watch. I’m only showing 600-pixel-wide images here to make your blog reading easier, but here it is working with multiple statements in a batch:
It’s not perfect – if SQL Server believes only a few rows will come back, but its estimates are wrong, then the operator will keep showing 100%. Here’s an example:
The top right operator starts first, and SQL Server only expects 1 row for this clustered index scan. As it turns out, there’s more data – note that the line leaving the scan keeps moving, indicating we’re still getting data.
Interestingly, the clustered index seek immediately underneath it keeps changing its time numbers. It’s as if SQL Server Management Studio is saying, “Okay, we started this now and – wait, no, hang on, not yet, now. Okay now we’re – no, maybe not yet.”
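The stale-estimate situation above is easy to produce on purpose. Here’s a hedged sketch – the table name is made up, and it assumes statistics were built when the table held one row and haven’t updated yet (for example, with AUTO_UPDATE_STATISTICS off, or before the modification threshold is reached) – showing how a scan can be estimated at 1 row while actually returning thousands:

```sql
-- Hypothetical repro: statistics think the table is tiny, so the live
-- plan shows 100% done long before the scan actually finishes.
CREATE TABLE dbo.Demo (Id INT IDENTITY PRIMARY KEY CLUSTERED, Filler CHAR(200));
INSERT INTO dbo.Demo (Filler) VALUES ('x');  -- stats get created off 1 row

-- Load a lot more data. If auto-update stats hasn't fired yet,
-- the optimizer still believes the 1-row statistics:
INSERT INTO dbo.Demo (Filler)
SELECT TOP (10000) 'x'
FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;

-- Estimated rows for the clustered index scan may still be ~1,
-- while the actual row count climbs past 10,000 in the live view:
SELECT * FROM dbo.Demo;
```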
The catch (because you knew there was a catch)
Great news – it even works when connected to a server running SQL Server 2014, as long as you’re using the CTP version of SQL Server Management Studio 2016!
Bad news – you can’t just view someone else’s live query plan. The live query stats plumbing has to be turned on before the query starts.
Here’s how to start that plumbing – choose any one of the following:
- In SSMS vNext, before you run a query, click Query, Include Live Query Statistics
- In any SSMS, before you run a query, run SET STATISTICS XML ON or SET STATISTICS PROFILE ON
- Start an Extended Events session and enable the query_post_execution_showplan extended event. For example, use sp_BlitzTrace™ with @TraceExecutionPlansAndKillMyPerformance = 1. Danger: this enables live query stats on all sessions, and as you can guess by how Kendra named the parameter, there’s going to be a big performance hit for that on a busy server. (There’s a Connect request about that already.)
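The second option in the list above is the simplest if you just want to watch your own session – turn the plumbing on before running the query. A minimal sketch (the table name is a placeholder):

```sql
-- Turn on the live query stats plumbing for this session only,
-- then run the query you want to watch in Activity Monitor / SSMS.
SET STATISTICS PROFILE ON;
GO
SELECT COUNT(*) FROM dbo.Posts;  -- placeholder query to observe
GO
SET STATISTICS PROFILE OFF;
```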
I know. You want the capability to get there from something like sp_WhoIsActive, but it’s not there. Look, it can’t be all candy and roses. They have to save something for the next version.
If your application can sometimes connect to SQL Server – and sometimes not – it can be really tough to troubleshoot. If SQL Server doesn’t even hear the call, it can’t log any errors.
Here are the questions I ask to get to the root cause:
When it happens, does it happen to all applications? For example, do you have monitoring tools pointed at the SQL Server, and are they able to consistently connect to SQL Server even when the problem is happening?
Does it happen to all application servers? If you have several app or web servers, are they all affected? (If you’ve only got one, now is an excellent time to set up another one for troubleshooting, and balance the load between them.)
Are all queries in the application affected, or just some queries? Sometimes I see long-running queries keep right on going, but only new connections are affected.
Are there any errors logged in the SQL Server or application servers? In one case, we saw that all of the application servers lost network connectivity at the same time, on a regular basis. Turns out there was a bad switch involved.
Is there a pattern to the days/times of the timeouts? Start writing them down or documenting when they happen. For example, in one case, we saw that the days/times were exactly correlated to the security team’s regularly scheduled port scans.
During the timeouts, is the app server able to ping the SQL Server? When all else failed with one troubleshooting triage, we put a free network monitoring tool on the app server to ping the SQL Server every 10 seconds. Sure enough, the next time the app had query timeouts, we were able to prove that even pings weren’t working – thereby ruling out a SQL problem.
Ask those questions, and sometimes you don’t even have to troubleshoot SQL Server at all – the answers tell the whole story.
Do foreign keys hurt the performance of inserts? Okay, we all know that foreign keys do require some work, but the crazy people of the internet have wildly varying guesses as to just how much work is involved. Estimates ranged from “barely any” to “as much as 50% slower”.
Rather than guess along with them, I figured I might as well do the work and measure it.
I created two tables – the parent table has an int primary key and a fixed-width filler column. I varied the size of the filler column across multiple test runs, just to make sure I wasn’t biasing the test by using an artificially large parent table. The testing happened with a parent table that had 2 rows per page, 4 rows per page, and 8 rows per page.
You can find the test code in fk.sql.
The test code inserts 1,000,000 rows in batches of 5,000 rows. Although this isn’t strictly realistic, it’s better than timing a single batch of 1,000,000 inserts.
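For readers who don’t want to open fk.sql, here’s a rough sketch of the shape of the test. The table and constraint names here are my own illustrations, not necessarily what the real test script uses:

```sql
-- Illustrative sketch of the test setup (fk.sql has the real code).
-- Parent: int primary key plus a fixed-width filler column whose
-- size was varied to get 2, 4, or 8 rows per page.
CREATE TABLE dbo.parent (
    id     INT NOT NULL PRIMARY KEY CLUSTERED,
    filler CHAR(1000) NOT NULL DEFAULT ('x')  -- width varied per test run
);

CREATE TABLE dbo.child (
    id        INT IDENTITY NOT NULL PRIMARY KEY CLUSTERED,
    parent_id INT NOT NULL
        CONSTRAINT fk_child_parent FOREIGN KEY REFERENCES dbo.parent (id)
);

-- The test then inserts 1,000,000 rows into dbo.child in 5,000-row
-- batches, timing each batch; afterward the foreign key is dropped
-- and the insert loop repeats for the comparison numbers.
```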
What happens during the test?
It turns out that no matter the page size, it takes approximately 40ms to insert 5,000 rows with the foreign key in place. When we remove the foreign key, inserting 5,000 rows takes ~26ms.
Although the difference looks big at 5,000 row batch sizes, each insert is taking, on average, 0.0072ms with the foreign key and 0.0052ms without the foreign key. These differences are hardly worth noting. And, let’s face it, 2 microseconds is a pretty minor price to pay for data integrity.
I ran the test one last time with a batch size of 1 row, just to see how much difference there was in insert performance.
When it comes down to single-row performance, the difference is negligible. We’re spending all of our time waiting on other parts of the system.
It’s not terribly real, but it’s real enough to say “Yes, foreign keys have overhead” and follow that up with “No, you shouldn’t care, just add the keys.” The tests were done in VMware, and the hardware in question is a mid-2012 Retina MacBook Pro.
In the real world, we’ve got a lot more than inserts happening, but it’s worth quantifying the cost of a foreign key lookup and realizing that it’s worth having foreign keys.
Kendra says: Wanna know if your foreign keys are ‘trusted’? Check out our script here.
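If you just want the quick version of that trusted-foreign-key check, the sys.foreign_keys catalog view has an is_not_trusted flag (our script wraps more context around it):

```sql
-- Foreign keys SQL Server doesn't trust (e.g. re-enabled WITH NOCHECK),
-- which the optimizer can't use to simplify query plans:
SELECT OBJECT_NAME(parent_object_id) AS table_name,
       name AS foreign_key_name
FROM sys.foreign_keys
WHERE is_not_trusted = 1;
```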
Brent & Jeremiah are coming to Pittsburgh on October 2-3, 2015. We’re doing a one-day pre-con, and then sticking around to present at SQLSaturday Pittsburgh too.
Developer’s Guide to SQL Server Performance – Live!
You’re stuck with a database server that’s not going fast enough. You’ve got a hunch that the biggest bottleneck is inside the database server somewhere, but where? In this one-day class, you’ll learn how to use powerful scripts to identify which queries are killing your server, what parts of the database server are holding you back, how to tackle indexing improvements, and how to identify query anti-patterns.
In one day of training with Brent Ozar and Jeremiah Peschka, you’ll learn:
- How wait stats tell you where to focus your tuning
- How the plan cache shows you which queries are the worst
- How to make fast improvements by picking the right indexes
- How to identify and fix the most common query anti-patterns
This is a pre-con before SQLSaturday Pittsburgh. If you haven’t attended a pre-con before, there are a few things to be aware of. The session will be held at:
Pittsburgh Technical Institute
111 McKee Road
Oakdale, PA 15071
Lunch is included with your admission, but admission does not include hands-on lab exercises or copies of the slides. This is how we keep SQLSaturday pre-con costs low – these SQLSaturday pre-cons are the most cost-effective way to get a day of training from us.
Use coupon code EarlyBirdPittsburgh to save $50 for registrations this month (May) – see you there!
After the SQL 2014 SP1 mess, I thought, “How are people with a real job supposed to keep up with updates?”
Go try to find the most recent SQL Server service packs and cumulative updates on Microsoft.com. It’s miserable – they’re scattered all over the place. Eventually, a lot of us started relying on SQLServerBuilds.blogspot.com, which is cool, but I have no idea who maintains it, and there’s no way to subscribe to updates there whenever a new patch comes out.
And then just try to figure out when support for a version ends – that’s even worse.
So I built SQLServerUpdates.com:
Does what it says on the tin, and you can subscribe via RSS or email as well. Hope that helps!
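To compare your server against the lists on the site, you need your current build number and patch level. A quick query gets both (the example values in the comments are just illustrations):

```sql
-- Current build and patch level of the connected instance:
SELECT SERVERPROPERTY('ProductVersion') AS build_number,  -- e.g. 12.0.4100.1
       SERVERPROPERTY('ProductLevel')   AS patch_level;   -- e.g. SP1
```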
You want your SQL Server VM to go as fast as possible, but you don’t have a lot of time or budget money, and you’re not allowed to reinstall it or upgrade. Good news – we’ve got a webcast for you.
In part 1, my demo-focused session will explain the VM-friendly performance features of SQL Server 2012, 2014, and even the next version, 2016.
You’ll learn how to measure a running application’s performance, and then simply by turning a few knobs and switches, you’ll see how performance improves. Microsoft Certified Master Brent Ozar will explain which features are right for your workloads, plus give you resources you can use when you get back to the office to go faster and learn more.
In part 2, Veeam will explain how to get fast transaction-level recovery of SQL databases, including agentless transaction log backup and replay, so you can restore your SQL databases to a precise point in time and achieve low recovery time and point objectives (RTPO™). Learn more at Veeam Explorers™.
This session is for sysadmins and DBAs running production workloads in VMware, Hyper-V, and Xen, whose end users are unhappy with performance.
Last week at Microsoft Ignite 2015 in Chicago, I demonstrated how to tune a few SQL Server workloads with my favorite process:
- Measure how fast the SQL Server is going
- Check its bottlenecks
- Apply some easy-to-tweak configuration settings and measure the difference
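For step 2 in the list above, the usual starting point is wait statistics. A bare-bones version (our diagnostic scripts filter out the harmless wait types; this sketch doesn’t):

```sql
-- Top waits accumulated since the last restart - a first look at
-- where the server spends its time waiting:
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```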
There’s no camera in this hour-long video, just a screen capture, so you’ll have to imagine my jazz hands.
As I picked up a nutritious breakfast from my favorite bakery (love ya, Magnolia), I noticed a sign in the window saying they were certified Kosher. As I usually do, I thought about how that kind of thing applied to databases.
The Star-K symbol on food products or restaurants means that the ingredients and processes have been inspected by a rabbi or a kosher certification company. The way I understand it, given my extensive scanning of the Wikipedia entry on kosher certification agencies, a rabbinic field rep closely monitors the restaurant’s ingredient acquisition and food preparation. He makes sure everything matches up with kosher standards, and then puts his reputation on the line to vouch for it.
I’d love to have that for databases.
What if we had a “DBA Approved” stamp that covered things like data model design, index tuning processes, good query writing, and backups that matched the company’s RPO and RTO goals?
There could be a centralized list of people who were approved to give out that certification, and they’d get regular ongoing training to make sure they were qualified to put their stamp of approval on a database.
That’d be the kind of certification I could actually get behind.
The closest I’ve seen so far is kCura’s Best in Service certification for Relativity. It’s not just about DBAs passing a test – it assesses a customer’s hosting environment and actually posts the scores live every week at Trust.kCura.com, so that end users know which environments are the safest bets to host their e-discovery data. I like this approach a lot, and I wish Microsoft would take a similar approach with SQL Server.
This afternoon, we had one of my favorite sessions: the off-the-cuff Q&A with some of Microsoft’s best and brightest. I probably missed somebody, but for sure we had Conor Cunningham, Joe Yong, Jos de Bruijn, Kevin Farlee, Luis Carlos Vargas Herring, Mike Weiner, Sunil Agarwal, and Tiffany Wissner.
These sessions are fun to attend because sometimes you can get hints and details that weren’t originally scheduled to go public.
I didn’t transcribe all of the questions – often the answers are well-known amongst smart people like you, dear reader. You’re also attractive too, but I digress.
Q: How do you control engine versioning in Azure SQL Database?
Conor answered: SQL Azure is platform-as-a-service (PaaS). It gets you automatic patching. We build SQL Azure and SQL Server off the same mainline branch of code, and we ship first in SQL Azure. We can turn them on for limited sets of accounts first, then turn it on more broadly in public preview mode, and then eventually make it generally available. We have a whole fabric layer used to manage our versions and deploy them as we need it. You’re getting the next version of SQL Server before it ships to the public.
Followup Q: Can you use a database compatibility level to control the version you’re on?
Conor answered: In January, we launched the new round of v12 servers. Before v12, we didn’t expose compatibility level settings. Now, we’re starting to expose the 130 compatibility level, which will also be the compatibility level in SQL Server 2016. As we get more features aligned to the boxed product, it’ll make more sense.
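For context on what Conor’s describing: compatibility level is a per-database setting you can check and change with plain T-SQL (the database name here is a placeholder):

```sql
-- Check the current compatibility level:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'MyDatabase';

-- Move to the 130 level mentioned above:
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 130;
```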
Q: When I combine AlwaysOn AGs with replication, how do I manage jobs?
Luis answered: you have to manage those manually by copying the jobs to all servers.
Q: Is Always Encrypted going to work with Windows logins?
Conor answered: in the first version, no, but at some point, yes.
Q: What’s the future around DACPACs and deploying a whole app?
Conor answered: Windows Azure has a deployment and servicing model that they’ve been working on, and we’ve been talking about how we can make it work with DACPACs. We don’t have anything to announce today. I’ll be honest with you – there’s a lot of problems with doing this correctly, especially with complex deployments like AlwaysOn AGs.
Q: Besides a single SSDT, are there any other investments in developer tooling or ALM?
Conor answered: We haven’t been doing a ton of major investments there. Anything you’re interested in? (Data generation and automated builds.) There’s some stuff we’ve been working on with the Visual Studio team, but they’re driving it. We don’t have anything to announce in the SQL Server 2016 release time frame though.
Q: I have SQL Server sprawl. How are you helping customers consolidate and move to Azure?
Conor answered with a huge smile: We’ve got the MAPS toolkit, but there are a couple of things we’re working hard on that we can’t share yet.
Luis added: We’re working on making it easier to look at your current database and your AlwaysOn AG configuration, and recommend a set of VMs with the right power in Azure.
Q: In SQL Server 2014, there are a lot of limitations on in-memory OLTP. Is that still true in SQL 2016?
Kevin answered: In 2014, we targeted specific scenarios where we knew we could be successful, and we nailed those. In this release, we’re working on getting it as broad as we can. SQL 2016 will have foreign key constraints between in-memory tables, but not regular tables.
Q: We’re looking at rolling out MDS, but we’re not happy with the current UI. Is Microsoft improving that?
Mike: I’ll have to defer that. (There are no BI experts in this particular session.)
Q: What’s enhanced about the Backup to Azure, and can we get an easier cleanup in maintenance plans?
Luis answered: We deployed it in SQL 2014, and we heard your feedback. We’re now supporting cheaper blob storage and striping (beyond the 1TB limit, up to 32TB). Backup will be parallelized across those stripes, so it’ll be even faster. That will be out in the next CTP. For automated cleanup, consider the managed backup feature rather than maintenance plans.
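Based on Luis’s description, a striped backup to blob storage would look roughly like this – the storage account, container, and database names are placeholders, and it assumes a credential for the storage account has already been created:

```sql
-- Sketch of a backup striped across multiple Azure blob URLs,
-- as described above; each stripe is a separate blob.
BACKUP DATABASE MyDatabase
TO  URL = 'https://myaccount.blob.core.windows.net/backups/MyDatabase_1.bak',
    URL = 'https://myaccount.blob.core.windows.net/backups/MyDatabase_2.bak'
WITH COMPRESSION;
```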
Q: How will licensing work in SQL Server 2016?
Tiffany answered: We haven’t made those decisions at this point in time. We just went through some significant changes in licensing, and most of our customers are still absorbing that change.
Conor added: You’re asking, is it possible to put all features in all editions? We don’t have anything to announce today. If you look at SQL Azure, it has most of the same features in all editions, and we vary pricing by resources, and that’s something to think about. But we don’t have anything to announce today.
The end – at least of my summary. I had to leave a little early to meet Erika, Jeremiah, and Kendra for dinner. (Gotta love it when conferences happen in my home town.)
If you attended Microsoft Ignite in Chicago this week, you’re probably barely keeping your eyes open at this point. There have been so many after-hours get-togethers, and it’s been tough to get any spare time between sessions. (Jeez, these things are spread out all over McCormick.)
One of the things I love about Ignite is that the videos and presentations are available during the conference itself. That makes it easy to catch up on the sessions I wasn’t able to attend, and then I can go talk to the presenters with my questions the next day.
I realized I should turn my session notes into a slide deck, and give it away. I’ll update it a couple more times this week as more sessions finish.
Attendees – you can go back to the office, and run through this deck as a lunch-and-learn with your coworkers who couldn’t go. This way, your boss will be more likely to send you back next year because you were such a good note-taker.
User group leaders – you can use this as a quick intro for your chapter, before your next meeting, to get ’em up to speed fast on SQL Server 2016.
This deck is licensed in the public domain, meaning you can do anything you want with it. Enjoy!