[Video] Office Hours 2016/12/14 (With Transcriptions)

This week, Brent, Erik, and Richie discuss their craziest SQL Server troubleshooting issues, common causes of SSPI handshake errors, setting deadlock priority “high” on a query, vMotion for high availability, best practices for Windows Server 2012 page files, and whether a career as a database administrator will be good for the next 40 years.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher, or RSS.
Leave us a review on iTunes

Office Hours Webcast – 2016-12-14


Brent Ozar: Let’s see, I suppose we should get into questions. It is 15 minutes after the hour. We’ll start off with the first one, it says, “I have a replication question. Just kidding.” I guess this person sees that we don’t have Tara here, so we’re fresh out of replication.


Brent Ozar: Fellow human says, “I have a nephew who is very interested in what I do. Is database administrator still a good career choice for someone who’s in high school now? Is this career going to be good for the next 40 years?”

Erik Darling: In some form, but maybe not in like a traditional DBA sense. I think data is always going to need a steward. I think it’s always going to need someone to take care of it and to protect it in some way and to watch to make sure that certain things are happening that safeguard it. I don’t think that it’s always going to be the same backup and restore and corruption checks and, you know, query and index tuning that it is now. But I think in some way data is always going to need someone to make sure that the right processes are happening with it. It’s good to learn what’s happening now and maybe get into it now and get your feet wet so 40 years from now you can say, “Back when I started, I used to do backups myself. I used Ola Hallengren—the senior, not the junior.” So it’s good to get your feet in and start doing stuff now but whether this job, in this role, and the way it is now is going to last for forty years, I doubt it. Think about cab drivers five years ago.

Richie Rump: Look at me. I am the future of your data. This is it. My role.

Brent Ozar: That’s terrifying.

Richie Rump: It’s the future.

Brent Ozar: So what do you think, Richie?

Richie Rump: I think it’s going to be the person that understands the development side as well as the data side, that is going to be your steward. That is going to be the person that in the future with the cloud and all this other stuff that is going down, I think that’s going to be the person that is going to understand the data, how it works. It’s not going to be this low-level “Oh, I do the backups,” because those are all going to be automated in SQL Server 2030, whatever that is.

Erik Darling: But who runs the cloud?

Richie Rump: Someone who understands the code, someone who understands the data, that is going to be the future of the DBA.

Erik Darling: Who clouds the cloud, Richie? Who clouds the cloud?

Brent Ozar: I think if you step back generally, too, and ask whether any career is going to be good for 40 years, that answer is probably no. With our parents or grandparents, like my grandpa, he held the same job at the same factory for, I want to say, 40 years; he was a union relations rep. When he retired, he knew everybody in the community. My dad has held, I want to say, four different careers. I’ve held a bunch of different jobs even, switching every two or three years. Having to learn things so much quicker is just very different.

Erik Darling: My dad has been doing the same kind of thing forever and ever, doing sort of like technical sales writing and marketing, but the companies and the stuff he’s been doing it for have been all over the place over the years. He’s at Oracle now, but he started at a little company called Data General in Massachusetts that was making, at the time, tiddlywink hardware, back when NT clusters were like, holy crap.

Brent Ozar: Yeah.

Richie Rump: So now you don’t have to go anywhere else, Erik. You’re in your last job.

Erik Darling: That’s true. Ever.

Brent Ozar: We think he’s going to die with the rate that we eat and drink. So you can have one career, it’s just a really short lifespan now.

Erik Darling: I basically have the lifespan of a fly.

Brent Ozar: Watch him age in front of our eyes.


Brent Ozar: Next question, we’ve got some interesting soft skills questions here today. “What is the craziest thing you’ve heard someone wanting to do with SQL Server? Or the craziest troubleshooting issue that you’ve had?” Well, Richie went second last time, we’ll make him go first this time. Richie, what’s the craziest thing you ever heard someone wanting to do with a database server, or just in general?

Richie Rump: With a database server? Oh, geez, I’d have to think about that one.

Brent Ozar: Then Erik goes first.

Erik Darling: Well, a couple things that pop out in my memory since I’ve been here. One was a person who was using CDC, change data capture, but it wasn’t working the way they wanted, so they rewrote a layer over change data capture with dynamic SQL. It was like, “CDC broke for us, so we had to rewrite it.” So they basically had this layer over CDC that did stuff for them. Another one that came to mind was people who were partitioning data by day, but when they received data it could update prior days’ stuff, so all these merge statements they wrote were just going kablooey. They were just not getting the kind of partitioning magic they were hoping for. There was precious little partition elimination happening in that scenario.

Brent Ozar: How about you, Richie?

Richie Rump: The craziest thing, and actually it didn’t turn out to be so crazy, but I was working with a QSR company, fairly large. They had decided to write their own NoSQL database.

Brent Ozar: Oh, okay, which is common now these days.

Erik Darling: Everyone is doing that.

Richie Rump: Yeah, so it’s all this payment processing that’s coming in and they decided to write their own NoSQL database to store all this information. Where I came in is that they had no way to report off of that. So they started dumping it into SQL Server and that’s how I came into the project. It ended up working out for them. They put a ton of effort into building their own NoSQL stuff, but it worked out. So again, maybe crazy like a fox. I can’t wait to hear about them upgrading to Cassandra. That would be fun.

Brent Ozar: I’d say the weirdest thing—there are a couple, two come to mind. One was storing Access databases inside a SQL Server database and they were redoing a terabyte of Access databases every day. So that was spectacularly challenging. The thing that I was most horrified by was I went to go troubleshoot queries and I’m like, “Where are these queries from?” The guy says, “Perl.” He goes, “Yeah, we have a Perl application, a Perl script that goes through and generates dynamic T-SQL.” I’m like, “Okay. All right. I’m curious now. You have to show it to me.”

Erik Darling: So where does Perl sit? She writes some interesting T-SQL. What’s her email? Let’s bring her in here. Another crazy thing is a friend of mine works for—he’s a lead developer at a rather large company that does news stuff. They’re still currently, as far as I know, using Oracle. Years ago, when I first started working with databases, I was like, “Do you guys do anything with databases?” He’s like, “Yeah, we use Oracle but we’re migrating away. We’re going to start doing everything with files on disk because there’s no locks and it’s faster.” So like every six months I’ll be like, “Hey, how’s that migration going?” He’s like, “We still haven’t got off Oracle.” Still working on it.

Richie Rump: The Oracle rep was not worried about losing that account, at all.

Erik Darling: He’s like, “Oh, cool.”

Brent Ozar: “Yeah, you want another renewal?”

Erik Darling: “Whenever you’re ready, just sign them.”

Richie Rump: “No discount, sorry.”


Brent Ozar: Let’s see here, next question, “What are some common causes of SSPI handshake failed errors?” Dagnabbit, what I’ll do is I’ll add in the weekly notes a link that I’ve got. Robert Davis has a post. If you search for Robert Davis SSPI and setspn (Set Service Principal Name, just all one word, setspn), he has a great post on how to fix that.

Erik Darling: Yeah. His T-SQL Tuesday post about backups and restores was really cool.

Brent Ozar: Yeah.

Erik Darling: Because I don’t do a lot of backups and restores anymore, so it’s cool to reread someone who is like, “No, this is my thing.” He talks about LSNs and, you know, catching stuff up with differentials and logs, and restoring WITH NORECOVERY. Interesting stuff. I love reading his blogs.

Brent Ozar: You know, he has a function to convert the LSNs from fn_dump_dblog.

Erik Darling: Yeah, but it’s a scalar valued function so I can’t use it.

Brent Ozar: That’s not fast enough for me.

Erik Darling: No, it’s against my religion. I can’t do it. I can’t have the parallelism inhibited when I’m converting LSNs.

Richie Rump: So it’s against the Jedi religion? I didn’t know that. That’s good to know.


Brent Ozar: “Have any of you found useful situations in which to utilize SET DEADLOCK_PRIORITY high?” He’s got a whole paragraph of other stuff in there, but basically, have you ever thought about setting DEADLOCK_PRIORITY high on a query?

Erik Darling: No, only low, and only for like index rebuilds which I think was what it was for, for the most part. But, no.
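
Erik’s use case can be sketched like this; the table name is a placeholder, and the idea is that if the rebuild deadlocks with a user query, the rebuild loses:

```sql
-- Sketch: mark an index-maintenance session as low deadlock priority so that,
-- if it deadlocks with a user query, the maintenance session is chosen as
-- the victim. dbo.SomeBigTable is a placeholder name.
SET DEADLOCK_PRIORITY LOW;
ALTER INDEX ALL ON dbo.SomeBigTable REBUILD;
SET DEADLOCK_PRIORITY NORMAL;
```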

Brent Ozar: Yeah, I always get worried overriding someone else’s session and, like, making them roll back when they may not be the cheaper one to roll back.

Erik Darling: Yeah.

Brent Ozar: I would also say if you find yourself doing DEADLOCK_PRIORITY high and if it’s anything like a report, maybe it’s time to think about a reporting server or a different isolation level.

Erik Darling: Or at least there’s NOLOCK hints, man. Come on.

Brent Ozar: Yeah. Dirty reads.

Erik Darling: Wrong data.

Brent Ozar: We need a DBA band with a song called Dirty Reads.

Richie Rump: Yes.

Brent Ozar: Someone make this happen.

Richie Rump: Straight out of Detroit, it’s Dirty Reads.


Brent Ozar: Fellow human asks, “We run update stats with full scan on our key tables every night. I noticed the other day that there was a command update statistics with full scan that was threaded out, meaning it went parallel. My concern is that I currently have MAXDOP set to 1. Do some Microsoft commands simply ignore MAXDOP and do what they want?”

Erik Darling: Yes, they do.

Brent Ozar: Yes, they do. Does update stats do that? I don’t even know.

Erik Darling: Yeah, I want to say it’s Bob Ward, but it might have been another Bob or someone else at Microsoft, but I just associate most of the really good blogs with Bob Ward—or the Bobs, the Bob blog. But there was one, I think it might have been during his 2016 “It Just Runs Faster” talk. I think he was talking about how sometimes even on a server with MAXDOP set to 1 you’ll still see some CXPACKET waits for some things. That’s because some system commands will use parallelism behind the scenes, whether you want them to or not.

Brent Ozar: Which is cool. I like that.

Erik Darling: I think stats is one of them. There was that trace flag where you could update more than one stat at a time on a table. That was one of those things where it was like, “No, these are definitely going to go parallel because you probably need it if you’re doing all that.” But there was something about parallelism in that one too.
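
For reference, the questioner’s nightly command probably looks something like this sketch (the table name is a placeholder). A MAXDOP option for UPDATE STATISTICS only arrived in later builds, so on older versions the statement can go parallel on its own:

```sql
-- Roughly what the nightly job runs (dbo.KeyTable is a placeholder):
UPDATE STATISTICS dbo.KeyTable WITH FULLSCAN;

-- On SQL Server 2016 SP2+ / 2017 CU3+ you can cap the parallelism explicitly:
-- UPDATE STATISTICS dbo.KeyTable WITH FULLSCAN, MAXDOP = 1;
```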

Brent Ozar: Nice.


Brent Ozar: Oh, god, Tammy asks a question. I’ll try to keep the poker face on for Tammy there. “You are all smart so I thought I’d ask.” There’s your first mistake. “I want to query Active Directory.” Okay, strike one. “Using PowerShell.” Strike Two. “To get all the CNAMEs for my servers.” Strike three. “Any suggestions?” Mike Fal. You want Mike Fal. It’s like the word fall but only with one L. Mike Fal—really nice guy, PowerShell guru, knows systems-type stuff and he may be able to answer that one for you. If not, I would actually post it on Stack Exchange, dba.stackexchange.com, and I bet somebody has got a script for it already.

Erik Darling: If you’re more of a Slack person, the SQL community chat has a big PowerShell room where they work on dbatools. Someone in there might be able to help because, shot in the dark, but I’m willing to bet someone in there is doing something with log shipping where getting a CNAME alias would be really helpful.

Richie Rump: As opposed to going to DBA, Server Fault is probably the better one to post that in.

Brent Ozar: Oh yeah, you’re right. Yeah.

Richie Rump: Server Fault, so not the dba.stackexchange.

Erik Darling: It’s weird that there’s no powershell.stackexchange.com, right? You’d think they’d want to keep those people out of general population.

Brent Ozar: I thought when Erik said, “go to the Slack channel,” it’s basically because if you tell someone, “Can you do this in PowerShell?” the answer is always yes.

Richie Rump: Yes, there’s lots of Thors wielding the PowerShell hammer in that room over there.

Brent Ozar: Which is good, we love them. Nothing against them. DBAtools.io is where you’ll go find their site and their Slack channel.

Erik Darling: I change my underwear with PowerShell.

Brent Ozar: Only once a week though.

Erik Darling: True story.


Brent Ozar: “Would like to get Brent’s take on vMotion for high availability. Can it replace SQL Server clustering?” Erik, how about your take?

Erik Darling: I don’t have one. You know me and VMs, man. I never spin those up. So VM, vMotion, no, because if SQL crashes, all right, where a cluster would pick that up and say, “Hey, we’re going to try to restart SQL or I’m going to try to restart SQL on another node,” VMware doesn’t really cover that too well as far as I know. So I don’t know if that’s the way I would go with it.

Richie Rump: We could actually say that we got a call on Sunday with this exact problem, where they had a vMotion on a backup and they thought that was adequate and the whole database got corrupt and there was a whole bunch of other stuff and they couldn’t get it back. So there’s your answer.

Brent Ozar: That wasn’t even availability, let alone high availability. It was un-availability.


Brent Ozar: Next question, “What’s the best practice on Windows Server 2012 page files? My Windows admin says we only need 16 megabytes for the crash dump and then to leave the rest to physical RAM, but I’m seeing a scenario where SQL Server needs more memory and it’s showing RESOURCE_SEMAPHORE waits.” Okay, so hold that thought. So, Erik, do you want to answer that one? Would you increase the page file size…?

Erik Darling: No. Because I really wouldn’t want SQL Server paging to disk. I would probably want to add more actual memory, or increase max memory if it’s undersubscribed right now, like if you have more memory in the server than you’re giving to SQL. But really, RESOURCE_SEMAPHORE means that SQL Server does not have enough physical memory. That is not a page file problem. The page file should be teeny tiny on a SQL Server because you shouldn’t be doing anything with the page file there. You should avoid that like the plague. Eek, no. You know what’s funny? During the first DBA Days someone was like, “Should I have a 512 gig page file?” I thought that you guys were messing with me. I thought I was being hazed. It was like, “Did I hear that right? 512 gigs? No.” I thought they were screwing with me, like Jeremiah was going to come up behind me and give me an atomic wedgie after I answered the question, or an Indian burn. There’s no way this is a real question. Really, no. RESOURCE_SEMAPHORE waits are horrible. They are poisonous. You may also see RESOURCE_SEMAPHORE_QUERY_COMPILE; when that pops up, stuff just does not have enough memory to do anything. You have queries waiting to compile because they can’t get memory to compile, and queries waiting to run because they can’t get memory to run. I don’t see the question, but if you’re on a newer version of SQL Server, you can run sp_BlitzCache and sort by memory grants. So if you’re on SQL Server 2012 SP3, or 2014 SP2, or 2016 any ole version, you can run BlitzCache and sort by memory grants and get all sorts of good information about what queries are using giant chunks of memory, and maybe get a hint there. But really, if your server has like 12 gigs of memory in it, that’s your bigger problem, not the query memory grants.
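
Erik’s suggestion, sketched out, assuming sp_BlitzCache from the First Responder Kit is installed:

```sql
-- Sketch: sort the plan cache by memory grants to find the hungriest queries.
-- Memory grant info requires SQL Server 2012 SP3, 2014 SP2, or 2016+.
EXEC sp_BlitzCache @SortOrder = 'memory grant';
```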

Brent Ozar: I have to say, that’s one of my favorite questions ever. I like that quite a bit.

Erik Darling: Well then you answer it.

Brent Ozar: I saw your face go and I’m like, there he goes, that’s awesome.

Erik Darling: It just reminded me of that DBA Days thing that I got all nostalgic and I was like they asked me if I should have a 512 gig page file.

Brent Ozar: And now those of you watching on the webcams can see Jeremiah sneaking up and giving Erik an atomic wedgie.


Brent Ozar: Next, “We have a transaction log backup that keeps hanging, meaning it runs for hours until I manually cancel and restart it. Normally it only takes ten seconds to run and it just happens randomly. Any suggestions on where to start investigating?” I like sp_WhoIsActive for that. I would run sp_WhoIsActive and see if something is blocking it, because I can envision scenarios where there’s blocking in msdb and the backup isn’t able to update the msdb backup-history stuff. Or that it’s waiting for something, where a wait stat shows something: growing a file, creating a file on a network file share. That’s probably where I would start.

Erik Darling: So like what waits would you look for? Would it be backup I/O or async I/O completion or backup thread? Or just one of those things maybe just going?

Brent Ozar: I have a feeling that it’s—and of course, I’m just making stuff up—but I have a feeling that it’s creating the file. That it’s either creating a file or writing to the file. Like if you have a network blip and suddenly there’s a problem writing to the network server on the other end.

Erik Darling: Or like if it happens at a particular time where it’s kind of like it’s repeatable, I bet a bunch of stuff is writing to that drive at the same time and it’s all just kind of saturated, that could be it too.

Brent Ozar: Yeah, over and over again I see DBAs go set up all their log backups to run every 15 minutes, and everybody’s clocks are like nuclear-clock synchronized, so exactly on the 15-minute mark the file share just falls over.
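
Brent’s starting point can be sketched like this, assuming Adam Machanic’s sp_WhoIsActive is installed:

```sql
-- While the log backup is hung, run this and check the backup session's
-- wait_info and blocking_session_id columns for blocking or slow file I/O:
EXEC sp_WhoIsActive;
```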


Brent Ozar: Fellow human says, “vMotion cannot guarantee transactional integrity.” Put the crack pipe down. It totally works just fine in terms of transactional integrity. What you might be thinking is it can cause a cluster to failover, or database mirroring to fall over, but it is completely transparent to SQL Server otherwise.

Erik Darling: One thing that does come up sometimes with vMotion is that if Windows crashes, you have the reboot time while it brings it back up. So you know, it’s not as fast as a cluster, where that other node is just sitting there like, “I’ve got Windows ready to go,” and when you fail SQL Server over, it’s just pretty much the SQL Server startup time and whatever resource failover happens.


Brent Ozar: Follow-up on the page file question. He says, “We did increase the physical RAM, but earlier we’d had a paging issue and modified memory grew. Even after we added physical RAM, the modified memory is not getting released.” I’m not sure what you mean by modified memory. If you want, just hit Google and search for Brent Ozar “Sysadmin’s Guide to SQL Server Memory.” We explain how the basic memory process works in there, and whether or not memory gets freed up and why.


Brent Ozar: Next one, “I need to migrate a bunch of larger databases and I can’t take a significant outage on Saturday night.” Okay, well, so how about Sunday? Sunday works for me. Actually, Sunday does not work for me, I’m going to be somewhere else. “I was thinking about Saturday morning restoring the last full backup and diff, and then an hour before, a short outage, I take this…” All you really want to do is log shipping. Either do log shipping or database mirroring. I’m kind of a fan of log shipping for this because it works kind of like what you’re describing. Database mirroring is a little bit more painful to set up. It’s still pretty easy, but log shipping is really slick. We actually cover this exact technique for migrations in our Senior DBA class, which I happen to be teaching this week online. Yesterday, I was talking about how you use log shipping to migrate databases from one place to another, using WITH NORECOVERY for your last transaction log backups, so check into that.
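
The cutover Brent describes looks roughly like this sketch; database and share names are placeholders, the backup runs on the old server, and the restore runs on the new one:

```sql
-- On the old server: take the tail-of-the-log backup. WITH NORECOVERY leaves
-- the old database in RESTORING state so nobody can keep writing to it.
BACKUP LOG MyDB
    TO DISK = N'\\fileshare\migration\MyDB_tail.trn'
    WITH NORECOVERY;

-- On the new server: restore that final log backup and bring it online.
RESTORE LOG MyDB
    FROM DISK = N'\\fileshare\migration\MyDB_tail.trn'
    WITH RECOVERY;
```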

Erik Darling: I think I’m working again.

Brent Ozar: I don’t know that you’ve ever really worked, but your audio is working.

Erik Darling: Before I worked here. Before I got hired here rather. I did quite a bit of work, sir.


Brent Ozar: Eric asks, “I went away from Idera’s defrag manager and I went to Ola Hallengren’s scripts. However, the update usage and stats step is trashing tempdb. Why is this? I’m on SQL Server 2008 R2.”

Erik Darling: Update usage? Ola Hallengren runs update usage?

Brent Ozar: I’m surprised. I can see it in the CHECKDB step maybe; maybe the CHECKDB step runs DBCC UPDATEUSAGE, but I’d be surprised at that too.

Erik Darling: No, no.

Brent Ozar: No.

Erik Darling: I’ve never seen that. I used to run that all the time and I never ran into that.

Brent Ozar: The stats step too, I mean, updating statistics trashing tempdb? I would imagine not. So I’m going to say something horrible: anything Ola’s stuff does, Idera should have been doing too. So that would kind of make me wonder the opposite thing. If Ola’s script is doing it, I wonder why Idera’s was not.

Erik Darling: Yeah. I would also be curious about what values you’re running Ola’s scripts with. That’s what I would have some questions about, because for the years and the number of machines that I’ve run Ola’s scripts on, I never had a problem with them trashing tempdb. The one time that a stats update trashed tempdb was when I updated, with a full scan, all the stats on a table used for some kind of logging that was around 2.8 terabytes in size. So that was the one time I messed up tempdb doing a stats update. That wasn’t Ola though, that was me. That was one file that got real big.


Brent Ozar: “What kind of advice do you guys have for managing batch loads to a database while end users are still querying it? Like do you manage locking when you’re doing batch loads?” Richie, how about you? Have you got experience with that? Like doing batches?

Richie Rump: While they’re querying? No, not that I’ve had any problems with locks or anything like that. So not on some really high transactional system. We’ve always been able to dump data and locks are handling it. Usually if your hardware is good enough, you should be able to handle pretty much anything.

Brent Ozar: So they’re basically saying that your hardware isn’t good enough. That’s what Richie is saying here. Have an SSD, good luck. How about you, Erik? Any advice on that one?

Erik Darling: It depends on what kind of situation you’re in and how you’re doing your loads. If you’re using bulk insert, you can control the batch size so that maybe you don’t put a million rows in at once. You can do like smaller chunks going in. If you’re on expensive edition or 2016 SP1, you have some partitioning tricks you might be able to do so you don’t totally wonk up people’s stuff when you’re loading data into a new partition and then swapping it in. I guess as a third option, if you’re on just about any edition and you sort of have some competency, you could always try using optimistic isolation levels to cut down on the amount of locking problems you have while you’re pushing data into a thing, jam it on in there, like Al Bundy with a fat woman’s foot.
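
Two of Erik’s suggestions, sketched with placeholder database, table, and file names:

```sql
-- Option 1: optimistic concurrency, so readers don't block the load and the
-- load doesn't block readers (a one-time database setting; ROLLBACK IMMEDIATE
-- kicks out in-flight transactions, so pick a quiet moment).
ALTER DATABASE MyDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Option 2: load in smaller committed chunks instead of one giant transaction,
-- so locks are released (and log records flushed) every batch.
BULK INSERT dbo.SalesStaging
FROM 'C:\loads\sales.csv'
WITH (BATCHSIZE = 10000, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
```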

Richie Rump: I love bulk loads. It’s so awesome.

Brent Ozar: I’m just going to leave that there. I’m not going to ask anything about that. I’m just going to leave that just where it is.


Brent Ozar: J.H. asks, “I have SQL Server 2008. I’ve got the latest and last service pack applied to it. Is it possible to apply vulnerability hotfixes afterwards? Should a specific KB article be applied to fix it?” That’s a great kind of question to go ask Microsoft Support, or to post on Stack Exchange. Usually, if you go filter for those kinds of KB articles, you can find more details about them. We don’t do any patching on SQL Server 2008. Nothing against it, it’s just not something that we go dig that much into.


Brent Ozar: Fellow human says, “I’ve discovered ten custom statistics. Thank you for sp_Blitz. I deleted them. As a result of deleting my custom statistics, CPU utilization dropped by 20 percent. Does that mean that custom statistics were giving me suboptimal execution plans?”

Erik Darling: Maybe.

Brent Ozar: Maaaaybe. If you suspected that a change to SQL Server was making execution plans better, where would you go look?

Erik Darling: If a change to SQL Server was making execution plans better, where would I look? Well that’s really hard for statistics because you don’t know what statistic SQL Server is using unless you use some black magic trace flags. So it’s tough to look there for statistics. You don’t get a lot of information about stats the way you do about indexes. You don’t get like usage counts or like how fragmented they are. Just kidding. But you don’t get that kind of information about statistics unless you use some crazy trace flags out there, which you can Google about. Paul White talks about them in several blog posts. I use them in several blog posts but you should really go back to the source and look at Paul White’s blog posts about where stats are getting used.
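
The trace flags Erik alludes to, from Paul White’s posts on finding the statistics used to compile a plan, can be sketched like this. They’re undocumented, so treat this as a diagnostics-only trick; the query itself is a placeholder, and output only appears when the plan actually compiles:

```sql
-- Undocumented, diagnostics only (per Paul White's blog posts). Trace flags
-- 9292/9204 report statistics considered/loaded under the legacy cardinality
-- estimator; on the 2014+ CE, 2363 serves a similar purpose.
DBCC TRACEON (3604);  -- redirect trace output to the client

SELECT *
FROM dbo.SomeTable        -- placeholder query
WHERE SomeColumn = 42
OPTION (RECOMPILE, QUERYTRACEON 9292, QUERYTRACEON 9204);
```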

Brent Ozar: You would have to know the queries ahead of time and run them with those trace flags in order to know whether or not things were getting better or worse. If I were you, I would just take full credit for it, tell your boss, “Yep, I totally did that,” and then keep right on trucking.


Brent Ozar: On that point, “We dropped 400 hypothetical indexes the other day. Hypothetical indexes have real stats. My mind was blown. In all my years of being a DBA, I’d never seen that.” Well, yeah, because SQL Server has to figure out whether or not it would use those hypothetical indexes. So it builds stats on them when it goes and builds the hypothetical index.
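
If you’re hunting for leftover hypothetical indexes (typically abandoned by the Database Engine Tuning Advisor), sys.indexes flags them; a quick sketch:

```sql
-- Find hypothetical indexes left behind by tuning tools:
SELECT OBJECT_NAME(object_id) AS table_name,
       name                   AS index_name
FROM sys.indexes
WHERE is_hypothetical = 1;
```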


Brent Ozar: Christopher says, “I have a SQL Server 2012 Always On Availability Group created and it works well. However, a dev has asked me for a reporting server off of this data. I don’t want their reports to interfere with production. Do I make a new separate SQL Server or can I just add it to the existing secondaries and make sure they put Application Intent = Read Only and how do I police that?” That’s a good set of questions.

Erik Darling: I wish Tara was here.

Brent Ozar: So the first thing you want to know is whether it’s a synchronous AG. If you’re doing a synchronous AG, then putting queries on the secondary can totally slow it down. Even in an async AG, you can get into positions where queries can cause locks that will slow down applications on the primary. I wouldn’t let that slow me down, though. What I would let slow me down is the licensing, because if you now have to run reports against a server, it now has to be licensed. So I would want to check with management and see, and of course that’s Enterprise Edition if you’re talking readable secondaries; that’s real money.


Brent Ozar: Interesting question. “Have you guys considered talking about any add-on tools such as MDS, SSIS or Analysis Services in your podcasts?” No. Only because we don’t use it. We’re super specialized in just the engine. We just focus on the database engine. Even if you just look at the database engine, the size and scope of that thing has grown so much in the last two releases: R, PolyBase, Hadoop, columnstore indexes, CLR, DLLs. I mean you could go on and on for all the stuff in the engine that we don’t even cover. So yeah, we try to just be good about queries, indexes, high availability, disaster recovery.

Erik Darling: There are so many people out there who are already good at that stuff and specialize in that stuff that if I need to figure something out, or help someone with something, or learn something about a particular topic outside of the core engine realm, then I can go Google it and read about it. But me, I still haven’t opened a single one of those when I install SQL on my laptop. I only install the core engine, and, well, Management Studio, but that’s a separate install. I don’t touch that other stuff.

Richie Rump: If I have to touch SSIS again, I’m going to run out of this room screaming.

Brent Ozar: Every time there’s a new release I always add in all the extra features that are new and then I discover within a month that I don’t touch them. Like I installed R and I saw that it added like 20 service accounts to my engine and I’m like what in the Sam Hill is this? I don’t even know what an R script looks like, so yeah.

Erik Darling: It’s not that I wouldn’t want to learn R, because like Buck Woody talks about R and makes it super interesting. I always want to look at his blog and be like, “Oh, cool, let me see what’s going on here.” But then like at the end of the day, I just have so much regular SQL stuff to do and learn and write about that I just can’t. Unless I was really forced to, I don’t think I could invest that much time in R.

Brent Ozar: Yeah, and I’m not going to be the guy who forces you to do that.

Richie Rump: So the answer is can’t touch this, that’s pretty much…

Brent Ozar: [Hums U Can’t Touch This by MC Hammer] If you want to learn about the stuff that’s outside of SQL Server’s engine what you should do right now is, for those of you who are watching the live podcast, go over to groupby.org—as in the SQL statement—groupby.org has sessions available for voting right now. There are sessions in the running for R, columnstore, Apache Zeppelin. One of the ones that’s highly ranked right now is SSRS, a high-performance SSRS session that looks like it’s going to make the cut. So if you’re listening to the podcast afterwards, voting is already finished. Voting closes Friday, December the 16th. Then the live free webcast will be on Friday, January 13th. So you can go over to Group By and register to attend the free webcast.

Erik Darling: One of the crazy things about all these new features is that sometimes you just have to be your own expert in them, because there isn’t one. There’s no, you know, no one knows all the best practices around PolyBase and that stuff these days. There’s no one who’s an expert in it. So maybe you can be the expert in it. Maybe you figure that all out.

Richie Rump: Maybe you can be Lewis or Clark—which one shot himself in the head? Lewis?

Brent Ozar: Every time there’s a new feature out, like before when columnstore came out, I’m like, you could be the Kalen Delaney of columnstore. You can be the Paul Randal of R. It’s just wide open to anybody who wants to blog about it. But you are going to have to use it in production, get your hands dirty, and then blog and share what you learn. All right, well thanks everybody for coming and hanging out with us this week. We will see you guys next week in Office Hours. Adios.


1 Comment

  • Eric here – I’d like to clarify that we were running Ola’s scripts on a server that we were previously not monitoring with Idera defrag manager. Turns out our tempdb was rather small (20 GB); we simply added a few more gigs and everything was fine.

