[Video] Office Hours 2016/07/06 (With Transcriptions)

This week, Brent, Tara, Jessica, Richie, Doug, and Angie discuss snapshot replication, file- and table-level restores, whether you should enable lock pages in memory, redistributing data across the files of a database, and Doug’s new video class.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Office Hours Webcast – 2016-07-06

 

Why is snapshot replication always running?

Jessica Connors: Replication, that’s the first question I see. Trizzle has a question. He or she says, “I have a snapshot replication that runs once a day but it is running all the time. Is it possible to run it on a scheduled time only, not the whole time, because it is blocking when it runs all the time.”

Brent Ozar: Sounds like a big snapshot. I mean, growing, growing bigger and bigger kind of a deal. Snapshot replication you’re pushing the whole thing across. Do you really want to take snapshots all that often or do you maybe want to just incrementally push across smaller objects? What I’d ask is how big is that table and is it time to start doing something other than snapshot replication. Tara, have you ever used snapshot replication for anything?

Tara Kizer: I have not, our requirements have always been for close to real-time replication. So we’ve always used transactional replication. Anytime we could have a 24-hour delay in data, we just used backup and restore instead of snapshot replication.

Brent Ozar: Yeah.

Tara Kizer: Now of course, how you’re going to get that backup and restore done depends on your technologies and database size, but I think you’re right about snapshot replication: it’s the size, and it’s the blocking that occurs when it has to take, I believe, a schema lock to make sure the schema hasn’t changed.

Brent Ozar: The only time I’ve ever seen it make sense—and I’ve never used it, I’ve just heard other people talking about it—is when you have a ginormous database but only one tiny table or a couple of tiny tables that you want to replicate out somewhere, so it’s quick to take the snapshot. But as soon as that grows to any real percentage of the database size, you’ve got to start doing some change management.

Tara Kizer: Yeah.

 

How can I decimate tempdb?

Jessica Connors: Question from Justin, he says, “For someone who doesn’t want to create the query, does anyone know of a good site that has queries to decimate tempdb?”

Tara Kizer: Decimate? I have to go to Google real quick…

[Laughter]

Brent Ozar: Data.stackexchange.com has all kinds of crappy queries on it that are written against the Stack Overflow database. The problem is you actually have to have the Stack Overflow database. The funny part is that Erik and I write queries like this all the time because we’re always trying to break stuff so what I would do is create a table that only has two fields in it. It has like an identity key and then it also has a char 6000 field. Some people say “char,” some people say “care,” I say “care” because I’m a caring kind of guy. A char 6000 field which means it’s going to take a whole page, a whole 8k page to store every single record you put into there. Then just do select from system tables, like sys.all_columns or sys.all_objects and just insert stuff into there. You don’t even have to fill that char 6000 with anything just put like your name in there and it’s going to take a whole page to store every object. Man, you can dump a buttload of tempdb into there super fast. Then just run it from a whole lot of sessions or cross joins sys.all_columns to itself in order to produce a buttload of records. You can knock a server over pretty fast with that.
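
If you want to try it, here’s a rough sketch of the table Brent describes – the names are made up, and watch your tempdb drive’s free space before you run the insert:

```sql
-- Hypothetical filler table: char(6000) forces one row per 8KB page.
CREATE TABLE #TempdbStuffing
(
    Id      BIGINT IDENTITY(1,1) PRIMARY KEY,
    Padding CHAR(6000) NOT NULL
);

-- Cross joining a system view to itself generates a huge number of rows,
-- each of which burns a whole page in tempdb. Stop it before the drive fills.
INSERT #TempdbStuffing (Padding)
SELECT 'Brent'
FROM sys.all_columns AS ac1
CROSS JOIN sys.all_columns AS ac2;
```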

Tara Kizer: Or just have a business user run a query. That’s what we had at my last job. There was a query that took a half terabyte of tempdb space. Half terabyte.

Doug Lane: That’s more than decimating, that might be like “centimating” it or “millimating” it.

Tara Kizer: Yeah, we got an alert that the tempdb mount point was running out of space. I was the on-call DBA so I took a look. I checked WhoIsActive to see what was running, and it was just this massive query. All the data in the world.

Brent Ozar: Who does this stuff?

Richie Rump: Career-limiting query.

Angie Rudduck: End users, that’s who does it.

Tara Kizer: Yes, exactly.

 

Why do two inserted records get the same identity?

Jessica Connors: Question from Joe O’Connor, he says, “Have you ever encountered a situation where two records inserted into a table get the same identity back? Our developer is claiming that this has happened but I thought they were supposed to be unique.”

Tara Kizer: How are they querying it? Is it @@IDENTITY? What are they using to query it?

Brent Ozar: Yeah, there are bugs where @@IDENTITY returns someone else’s identity. You want to use, if I remember right, SCOPE_IDENTITY, right?

Doug Lane: Yeah.

Brent Ozar: @@SCOPE IDENTITY. Because you could…

Tara Kizer: No, no @@. Just SCOPE_IDENTITY, and it takes parentheses: SCOPE_IDENTITY().

Brent Ozar: That’s it, especially because if you have like triggers that are firing, you can get the wrong identity, like the identity that happened from the trigger too.

Tara Kizer: Yeah. I’m not even too sure why they use @@IDENTITY if it doesn’t produce the right result all the time. Just get rid of that variable then.

Brent Ozar: Job security, for people like us who got burned by it.

Doug Lane: Sequence is still part of SQL Server, right? They didn’t yank that did they?

Richie Rump: As of 2012.

Doug Lane: Okay, because that had issues, I remember that but I never hear about anyone actually using it so I wondered if maybe they’d even pulled it, but nope.
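
Back to the trigger scenario Brent mentioned – a minimal sketch with hypothetical table and trigger names, showing @@IDENTITY returning the trigger’s identity value while SCOPE_IDENTITY() returns the one from the statement you actually ran:

```sql
CREATE TABLE dbo.Orders     (OrderId INT IDENTITY(1,1)    PRIMARY KEY, Amount MONEY);
CREATE TABLE dbo.OrderAudit (AuditId INT IDENTITY(1000,1) PRIMARY KEY, OrderId INT);
GO
-- The trigger inserts into a second table with its own identity column.
CREATE TRIGGER dbo.trOrders_Audit ON dbo.Orders AFTER INSERT AS
    INSERT dbo.OrderAudit (OrderId) SELECT OrderId FROM inserted;
GO
INSERT dbo.Orders (Amount) VALUES (9.99);

SELECT @@IDENTITY       AS LastIdentityAnyScope,  -- 1000: the trigger's audit row
       SCOPE_IDENTITY() AS LastIdentityThisScope; -- 1: the Orders row we inserted
```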

 

What is Doug’s new training video about?

Jessica Connors: Speaking of you, Doug, Greg has not seen your new video.

Doug Lane: It just went out the door. So don’t feel bad.

Brent Ozar: Yes, if you go to brentozar.com and click training up at the top, why don’t you talk about what your new class is about?

Doug Lane: This one is on statistics, in particular how SQL Server makes guesses about the number of rows that are going to be involved in an operation in an execution plan. I talk about single-column stats, multi-column stats, multiple single-column stats, ascending key problems, filtered stats, all kinds of stuff like that. If you’re wondering how SQL Server comes up with some of the oddball guesses it comes up with, there are answers there. There’s also stuff on—the cardinality estimator is in that one too, right?

Brent Ozar: Yep.

Doug Lane: So differences between the 2012 and 2014 cardinality estimator are included too. So you get a better idea of why some guesses are different between the old and the new.

Brent Ozar: It’s super dense. It’s a 90-minute series of classes and I guarantee everyone is going to learn something out of it. Everybody on the team did. It was really good.

Jessica Connors: Is it as entertaining as Level Up?

Brent Ozar: Not as many costumes.

Doug Lane: Yeah, it’s entertaining. There’s more stuff following that theme that is going to be coming. It has sort of a gameshow theme but you don’t see as much of it in this particular video.

Jessica Connors: Gotcha.

 

How do I restore a single table from backup?

Jessica Connors: Question from J.H. He says, “For a table-level restore on a huge database, where there’s no temporary server to restore the database to, would a filegroup restore be the only way, and do you think SQL Server will ever support table-level restores?”

Tara Kizer: I’m going to show you my age or how long I’ve been using SQL Server but SQL Server 6.5 used to have table level restores. I’m not too sure why they pulled it out or if it was just tricky to do, but yeah, 6.5 used to have it.

Brent Ozar: I didn’t know that. Wonder why they yanked it.

Doug Lane: Is it LiteSpeed that does table restores?

Tara Kizer: I think all of them do, yeah.

Doug Lane: Do they? Okay.

Brent Ozar: And they’re so cheap. So Dell LiteSpeed, Redgate SQL Backup, Idera SQL Safe, they’re all like between $500 and $1,500. Leave it to a consultant to call that “so cheap” but compared to SQL Server licensing, it’s really, really cheap.

Tara Kizer: Per server.

Brent Ozar: Per server, yeah. So then that way, it just restores the individual table. So much faster, especially faster than shuffling stuff around. Second part, do we think that SQL Server is ever going to offer it? I had this thought that SQL Server was kind of dying on the vine there for a couple of years and then all of a sudden it’s like Microsoft woke up and went, “Oh my god, there’s a lot of money here. We should keep developing this.” They’ve been adding all kinds of stuff, like the one Erik just wrote a blogpost about, compressing encrypted data. I was like, “What? They’re actually doing that? That’s amazing.” So it’s definitely possible.

 

What’s the new name of sp_AskBrent?

Jessica Connors: Speaking of blog posts, I saw that we renamed sp_AskBrent.

Brent Ozar: Yes.

Richie Rump: Get that Brent out of here, get it out.

Jessica Connors: I was just looking through the blogpost and I saw that we asked the community what the new name should be.

Brent Ozar: Yeah. Wow, we got a lot of… what were you guys’ favorite entries? We renamed sp_AskBrent to sp_BlitzFirst when we made it open source because we wanted it to have a less Brent-y, more community-ish name. So you guys who looked through the answers, what were your favorite suggestions over there?

Tara Kizer: Well, BlitzNow of course, since that was mine. The BlitzyMcBlitzyFace or whatever it was.

Brent Ozar: Yeah, that was good.

Angie Rudduck: I liked BlitzPerf.

Doug Lane: Yeah, I was a big fan of BlitzNow too just because you’re answering the question, “What’s going on with my server right now?”

Brent Ozar: Yeah. There were so many good answers. BlitzyMcBlitzFace was one of my favorites as well.

Jessica Connors: Nothing offensive came out of that though, huh?

Brent Ozar: A lot of offensive things.

Angie Rudduck: Wasn’t there sp_BlitzWTF?

Brent Ozar: Yes, yes, so many offensive things. I’m proud of our readers. They did not let us down.

 

When should I install multiple instances of SQL Server?

Jessica Connors: Question from Samuel. He says, “In what scenarios would you have multiple SQL instances on one server? In the era where individual VMs can be split up easily, I’m struggling to argue for a case for them.”

Doug Lane: Good.

Angie Rudduck: Good, yeah. Don’t argue the case, argue against them.

Doug Lane: Instance stacking can lead to problems because you want to know how much of your resources are being consumed by SQL Server, and if you’ve got multiple instances on there, when you query one instance, it only knows about itself. It doesn’t know about any other instance. So if you’re asking, “How much memory are you using up? How busy are you keeping the CPUs?” you’re only getting part of the picture. That’s the big problem I see with instance stacking: it makes it really difficult to troubleshoot. Okay, if I’m starved for resources, where is that coming from? You have to examine all the instances on that server. It’s just so much easier to only have the one.

Tara Kizer: I don’t mind it in a non-production environment where maybe you need to support multiple versions on the host. But production, don’t instance stack.

Angie Rudduck: Someone is claiming it’s cost effective, but it’s not really cost effective if you already have the VMs, right? I mean, I guess you might… you still have to license both instances if you put them on the same box, right?

Brent Ozar: No, so this is kind of tricky. I want to say you can stack up to 49 instances if you do instance stacking, I can’t remember if it’s 49 or 50. So you could do Standard and then instance stack like crazy. But come on now.

Angie Rudduck: That’s really bad, but I was trying to figure out what cost effectiveness they meant. So maybe that’s it. They want 49 instances stacked.

Brent Ozar: We won’t use your name, the person who suggested that, but we will say this: there’s never been a better time to look for a job. It’s a great time in the DBA market. If your company is under that much pressure, it’s probably time to go check. Because they’re probably also not sending you to training classes or conferences.

Angie Rudduck: Which is why you’re here for the free Office Hours.

Brent Ozar: Yes, yes. That’s probably true.

 

How do I create a table with all countries in the world?

Jessica Connors: This is an interesting one from Wes. He says, “I need to create a table that contains every country in the world. It needs to have all three letter abbreviations of the countries as well as the name of the country. What is the best way to go about getting the names and abbreviations without having to manually type them all out?”

Tara Kizer: I would actually just post the question on stack exchange and I’m sure someone already has a table like this and they could get you the data.

Doug Lane: There’s also—I can’t remember the name of it—it’s that data service you can just go through Excel.

Brent Ozar: Melissa Data?

Doug Lane: Is it Melissa? Well, it used to have a different name.

Richie Rump: That’s one of them but they’re kind of pricey.

Doug Lane: Yeah, there’s something where—marketplace—you can go through the marketplace and you can actually purchase a dataset of something like that. I don’t think they’re that expensive, especially for what you’re talking about. That’s not going to be, you know, millions of rows. It’s going to be like 200 or something. There might actually be something in the—it used to be like Azure DB Marketplace or something crazy like that. It was an Excel plugin though that was built in. You could just click on it and go explore.

Richie Rump: Yeah, I’d also check out the USPS, the United States Postal Service. They have some datasets that are out there. Some of them for purchase, so maybe they’ve got something out there for you.

Brent Ozar: I’m so bad, so here’s what I’m going to do. I’m going to go google for a list of countries somewhere, like either Wikipedia is going to have one or USPS, something like that. They’re going to give it to you in either a comma delimited format or an Excel file and then I’m just going to put in additional columns that say “insert into:” so that I can then copy all of that, paste it into SSMS and then I’m done. Or, I would go to open source, there’s a ton of projects out on GitHub that have dropdown lists for countries and they’ll have inserts into a table that does the same thing. WooCommerce is one of the ones that I use a lot. Most insert statements are going to be not database specific, so you can cannibalize from there.
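
Whichever source you grab the list from, the end result is just a small table plus a pile of inserts – a sketch with hypothetical names, using ISO 3166-1 alpha-3 codes and truncated to three rows:

```sql
CREATE TABLE dbo.Countries
(
    CountryCode CHAR(3)       NOT NULL PRIMARY KEY,  -- three-letter abbreviation
    CountryName NVARCHAR(100) NOT NULL
);

-- Paste the full list in from Wikipedia/GitHub; only a few rows shown here.
INSERT dbo.Countries (CountryCode, CountryName) VALUES
    ('CAN', N'Canada'),
    ('MEX', N'Mexico'),
    ('USA', N'United States');
```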

Doug Lane: Looks like people are answering that in the questions with links.

Brent Ozar: Nice.

Doug Lane: One of them is Wikipedia, that’s why I laughed. Showed up right as you were talking about it.

Angie Rudduck: Can any of the attendees see the links?

Brent Ozar: Nope.

Angie Rudduck: Yeah, they can’t.

Brent Ozar: We’re keeping them secret.

Richie Rump: We will keep the links to ourselves, thank you.

Brent Ozar: I sent the Wikipedia one out to everyone, so now you can see it.

Doug Lane: Okay, there you go.

 

Where are all these transactions coming from?

Jessica Connors: Question from Graham Logan. He says, “Transaction count in the master and user databases is consistently over 200 transactions per second. The database is only 4 megabytes but the log is 180 megabytes. No long-running queries in either database, no open transactions, blocking issues, or tempdb growth. Awaiting response from the vendor regarding the crazy transaction count. What could be causing this high transaction count? Crappy queries executed by the vendor application?”

Tara Kizer: What are you trying to solve here? Transaction count isn’t something that I normally monitor anyway. It can be wildly higher than, say, batch requests per second. Just because a database is small doesn’t mean that it’s not being used. If you’re looking at the master database, I think that’s going to show all the system stuff, so you’re going to want to look at just the user database. If you don’t think there’s much activity, just run a trace or an extended event to see what kind of activity there is. But I wouldn’t recommend using transaction count for anything, really. I don’t know why it’s useful.

Brent Ozar: I love a 10-second trace. You’ll know the answer right away.

Tara Kizer: Yeah.

Brent Ozar: It could be IntelliSense, it could be your monitoring software. And 200 transactions a second isn’t a big number. In our training classes, we talk through how you can do up to 1,000 batch requests a second really easily on commodity hardware. I wouldn’t really freak out until I’m in the thousands of batch requests a second, and even then, I don’t freak out. 200 isn’t that big of a deal. You asked an interesting question, though: why is my data file 4 megs and my log file 180 megs? I’d wonder if you’re running log backups. Then I’d wonder how often you’re running them too.
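
Tara’s quick look could be an Extended Events session along these lines – a sketch where the database name is a placeholder for whichever database you’re curious about:

```sql
CREATE EVENT SESSION WhatIsRunning ON SERVER
ADD EVENT sqlserver.rpc_completed
    (WHERE (sqlserver.database_name = N'VendorDb')),
ADD EVENT sqlserver.sql_batch_completed
    (WHERE (sqlserver.database_name = N'VendorDb'))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION WhatIsRunning ON SERVER STATE = START;
-- Let it run for ten seconds or so, then stop it and read the ring buffer
-- target via sys.dm_xe_session_targets before dropping the session.
ALTER EVENT SESSION WhatIsRunning ON SERVER STATE = STOP;
```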

 

Should I enable lock pages in memory?

Jessica Connors: All right, here’s an easy one, I think. It’s from Samuel. “Is it best practice to enable lock pages in memory in SQL 2012 or above?”

Tara Kizer: Yes.

Brent Ozar: Then why?

Tara Kizer: I don’t have the why. I don’t remember it. It’s part of the setup document we had at various jobs that says to enable this when you’re setting up a server. You just do it on every single server.

Brent Ozar: It’s one of the great debates in SQL Server communities. Some people say yes, some people say no. If that’s your biggest problem, you’re in really good shape. Generally, I’m like, I don’t want to change something unless I’m solving a problem but I don’t have a problem with people setting that either.

Tara Kizer: But what about as a standard practice on a new server though?

Brent Ozar: So I don’t, because—and Kehayias and I had this big debate about it. Jonathan Kehayias of SQLskills wrote this big, glorious, long post explaining all the reasons why you should do it. He even references me in there. He’s like, “Brent says this and then I say this, and here’s why.” Glorious, long post, all kinds of details. The first comment somebody leaves is, “What happens if this happens?”—this particular scenario. Kehayias responds back and says, “Well, a few things can happen, one of which is Windows may crash due to an out of memory error.” So generally speaking, I don’t like enabling something by default that can cause blue screens of death. I love Kehayias, good friend of mine, this is just one of those personal things where, when in doubt, I’m going to err on the side away from blue screens of death.

Tara Kizer: Was that an edge case? That sounds like an edge case, I mean like really, really rare.

Brent Ozar: Well, it’s when, for example, people don’t set max memory on SQL Server. It goes and takes a huge majority of the memory, then some application starts to use some, and Windows can’t trim fast enough. Windows will try to hard trim SQL Server, not be able to, and then you’ll get the crash due to out of memory.

Tara Kizer: Got you. So when I refer to documentation, that document would have said, “Set max server memory.” So you’ve got this whole list of things that you have to do when you’re setting up a new server.

Brent Ozar: Yeah. And I don’t—if people want to set it, I’m cool with it. I’m just saying this is why I usually don’t encourage people to do it, because—everyone, just close your ears for a second—you’ve seen what kind of servers our clients have. Good god, they’re cesspool sewers. All right, you can listen again now.

Angie Rudduck: I check for that in triage but I’ve never recommended it. I would rather recommend instant file initialization before lock pages in memory. And same thing, at my last place we had a standard where it was in the GPO, just automatically granted to the SQL Server service account, that and IFI, but I don’t think it’s as important to be the first thing to look for.
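
If you inherit a server and want to know whether lock pages in memory is actually in effect, one way to check from T-SQL (the second query’s column only exists on SQL Server 2016 SP1 and later):

```sql
-- Nonzero locked pages means LPIM is in effect for this instance.
SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;

-- Newer builds also spell it out: CONVENTIONAL, LOCK_PAGES, or LARGE_PAGES.
SELECT sql_memory_model_desc
FROM sys.dm_os_sys_info;
```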

 

How do I balance out data files of different sizes?

Jessica Connors: Question from Aviola. They say, “I have a database that has multiple data files but one of the data files has grown larger than the other data files. How do I rebalance or redistribute data across other files for this database?”

Brent Ozar: No one jumped up. I thought everyone would jump up.

Angie Rudduck: I would start by checking your file growth settings to make sure that they’re growing the same amount so that you don’t run into this in the future. But as for fixing it, I just send everybody to the link that’s very long and scary. So, defer to the experts.

Brent Ozar: Yeah, assuming that that one is where you’re going to future grow to, grow the rest of the files to match it. Then you can rebuild indexes and it will balance the load across all of them. It won’t be perfect, it will just kind of round robin allocate. Proportional fill is the word I’m looking for. It’s going to proportional fill according to who has the most empty space. Once those other files have empty space, they’ll get used and hammered.

Tara Kizer: I would wonder about how these data files, what’s inside the data files. Is it a filegroup that has certain objects in it and that filegroup doesn’t have the multiple data files? I would take a look at your filegroups as well.

Brent Ozar: I like that.
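
Brent’s grow-then-rebuild approach, sketched with placeholder database, file, and table names – size the smaller files to match whatever the big one has grown to:

```sql
-- Grow the smaller files so every file has roughly the same free space.
ALTER DATABASE MyDb MODIFY FILE (NAME = N'MyDb_Data2', SIZE = 100GB);
ALTER DATABASE MyDb MODIFY FILE (NAME = N'MyDb_Data3', SIZE = 100GB);

-- Rebuilding indexes reallocates their pages, and proportional fill steers
-- new allocations toward the files with the most empty space.
ALTER INDEX ALL ON dbo.BigTable REBUILD;
```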

 

What do you look for when tuning a query?

Jessica Connors: Another question from J.H. “What are the first things you look for when query tuning, and is the largest cost percentage usually the place to focus on, or is it not necessarily so?”

Doug Lane: The bigger question is, what hurts? If you’re doing the query tuning, start with the ones that are causing people agony as opposed to the most expensive ones. Because a lot of times if you look—using sp_BlitzCache, for example—you’ll see really high-cost queries that executed once, maybe overnight, and nobody saw them. So start with what hurts the most, what people are complaining about. Then from there, you can look at queries that have a higher cost, and maybe you just want to tune them up to make a process faster that’s running into other processes or something like that.

Brent Ozar: When you find a query that does suck, the other thing is you might be looking at which operators in the plan cost the highest percentage. You’ve got to know that those are the estimates, not the actuals. So when you see a really high cost percentage, that doesn’t mean that’s the thing that’s actually costing the most in the plan. SQL Server’s estimates could be wrong. We actually talk about that in more detail in our performance tuning classes, which is funny, because I’m giving one as soon as I step out of here. We’re talking about this exact scenario. If you go to brentozar.com/go/tunequeries, you’ll be able to see a video of me in Copenhagen talking through this exact scenario.
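
For Doug’s “start with what hurts” pass, sp_BlitzCache can sort the plan cache a few different ways, for example:

```sql
EXEC dbo.sp_BlitzCache @SortOrder = 'cpu';        -- biggest CPU burners
EXEC dbo.sp_BlitzCache @SortOrder = 'reads';      -- heaviest readers
EXEC dbo.sp_BlitzCache @SortOrder = 'executions'; -- most frequently run
```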

 

Why does one database keep coming up in recovery?

Jessica Connors: Andrew just migrated SQL 2008 R2, their QA server, from a Windows 2008 R2 server, to a Windows 2012 R2 server. He just did a bunch of migrating. Now whenever the SQL engine service is restarted, there is one database that keeps coming up in recovery. To resolve it, “I put the database in single user mode, then multi user mode, then it comes up fairly quickly. Any ideas what could cause this issue?”

Tara Kizer: I don’t know how switching it from single user mode to multi user mode would have fixed it, but it sounds like you have a VLF issue. When you start SQL Server, databases go through crash recovery, you know, rolling forward, rolling back transactions. If you have a lot of VLFs, that can take a long time. I had a production database take 45 minutes to complete crash recovery because of the number of VLFs. This was many, many years ago and I didn’t know about VLFs back then. This is how I learned about VLFs, because this was mission critical, it had huge, huge SLAs, and it was bad. So I would run DBCC LOGINFO inside that database, and however many rows it returns, that’s how many VLFs you have. If it’s really high, you know, thousands, I’m pretty sure it’s a VLF issue, but I don’t know that you could resolve it by going from single user mode to multi user. That doesn’t even make sense.

Angie Rudduck: Have you checked it for corruption?

Brent Ozar: Oh god, that’s horrible. And run sp_Blitz. sp_Blitz will tell you if a database has too many VLFs and it will also give you a more info link as well. I like that answer though.
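
Checking the VLF count per Tara’s suggestion is a one-liner, and newer builds add a DMV:

```sql
-- Old school: the number of rows returned is the number of VLFs
-- in the current database's log.
DBCC LOGINFO;

-- SQL Server 2016 SP2 / 2017 and later:
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());
```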

 

Should I be worried about 5-9ms latency in tempdb?

Jessica Connors: Let’s take one more from Dan. Dan with a temp log question. “We’re experiencing high latency reads above 5-9 milliseconds on tempdb templog.ldf. Is this something to be concerned about?”

Tara Kizer: 5-9 milliseconds isn’t that far off from what your optimal value should be.

Brent Ozar: And it’s reads, it’s not even writes. If you’re reading from tempdb, I’d be like, “Hey, what are you doing reading from tempdb?” So start tuning the queries that are putting stuff into tempdb. Start at wait stats. Run sp_BlitzFirst—see I almost said sp_BlitzNow—run sp_BlitzFirst and it will tell you what the emergency performance problems are on the SQL Server right now. If tempdb’s latency is the biggest deal, that will tell you, but otherwise it will tell you which wait stats are big and then go focus your tuning there.
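
To see the latency Dan describes for yourself, sys.dm_io_virtual_file_stats tracks cumulative stalls per file – these averages cover everything since the last restart, so treat them as a rough sketch:

```sql
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name                  AS file_name,
       1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads,  0) AS avg_read_ms,
       1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(N'tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;
```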

Jessica Connors: I guess we’ll take one more, how about that?

Brent Ozar: All right, one more, then I’ve got to go get food before my class starts.

Jessica Connors: Oh yeah, okay.

 

Should NTFS allocation unit size be 64KB?

Jessica Connors: James asks, “Should the allocation unit size be formatted to 64 kilobytes for data, logs, and tempdb drives?”

Doug Lane: Are you on a SAN, yes or no? If you’re on a SAN, they might have their own recommendations for that. A lot of SAN firmware is smart enough to say just stick with the default 4096 and we’ll handle it from there. If you’re on physical storage—or I should say local, physical storage—then the recommendation is yeah, 64k. That hasn’t changed, guys, has it?

Richie Rump: Nope.

Brent Ozar: Nope, and it’s still in our setup guide. We find there are fewer people noticing a difference as they switch to 64k. There’s some controversy about, “Hey, does it really matter that much?” So we’re still waiting to see. That’s one of the things I want to test at DBA Days actually. Dell DBA Days is coming up.

Tara Kizer: We had it as a standard as part of our GPO at the last few jobs as well. On a server that I did not set up, I would always check to see what it was, because some people would forget to do this step. Did it make a difference? I don’t know. Just make sure that you’re not setting it on, like, the C drive, a place where you have smaller files. You want this only on your database files, so you want separate mount points or separate drives for these files.

Jessica Connors: All right, Brent. Go get some food.

Brent Ozar: Whoo hoo. Thanks, everybody. See you guys next week. Adios.


3 Comments

  • “6.5 used to have it”
    Prandal should confirm, but the reason table-level restores met the big bit bucket in the sky is that they’re liable to create logical inconsistencies.

    Saying “Your honor, this database is _usually_ consistent” just doesn’t cut the relational/transactional mustard. As lawyers might say, “Better to let some other poor sod testify…” (hence why others sell it).

    Logically, a table restore can magically change a child into an orphan. Uhh, yeah, there’s CHECKCONSTRAINTS (when the RI stars align), but deciding what to do with CHECKCONSTRAINTS’ complaints is still a PITA (“Do I kill this baby or do I kill that baby?”). The reality could be that you are left with an orphaned child that should have died with its parents (in a car crash, back on that LSN highway).

    Physically, a table restore created a new allocation structure, the object IDs within the LOPs for pages were read into the new structure, each page’s forward and backward pointers were stitched together, and ba-da-ba-bing – there you go – a table!

    But … what about the changes? Pondering what to do with the tlog left SQL Server developers (actually, all 6.5 users) with a quandary: “Should a log restore replay _all_ LSNs, or _just_ the LSNs for the table being restored?”

    All LSNs need to be considered (for the sake of logical consistency). But how to determine what LSN had already been applied to any page in the database, especially when that page now has a far newer LSN, that is the question. Do you read the tlog once for each page, or build a hierarchy, hash and pray? Should mommy have died and daddy have survived in that car crash?

    To retain logical consistency, you still have to start with a known good checkpoint and replay it all. You can’t sweep that logical need _under_ the physical carpet, no matter how many dead bodies you roll up in it 🙂

    Physically, table level restores were a mixed blessing in 6.5: Restoring a table could magically remove all allocation & page header corruption. In-row corruption remained, but lots of table level corruption could be magically wiped away. Who cares if a few rows should have been deleted, but now are mangled? (Hint: your lawyers might).

  • GitHub has a list of countries as a series of insert statements – https://github.com/raramuridesign/mysql-country-list/blob/master/mysql-country-list.sql

  • re: WHEN SHOULD I INSTALL MULTIPLE INSTANCES OF SQL SERVER?

    For production:
    I have managed one and am working on another:

    1) a VM with SQL EE, with one instance dedicated to a vendor data warehouse (their requirement) and one instance dedicated to dozens of vendor databases of different types and varieties

    The theory here was that the vendor data warehouse ETL code was not optimal, had extreme requirements for the ETL, and hogged the VM from midnight to 7am. Then everyone would come in, get their reports, charts, etc., and the VM with significant resources attached to it would sit idle from 9am until midnight the next day. The dozens of databases in the other instance were busy from 8am until roughly 10pm. So it made sense to us to use those expensive EE licenses instead of watching them sit idle for 14 hours…

    2) a read-only Always On availability group replica in one instance, operational data stores and data marts in the other instance

    We need a license for the operational data stores and data marts anyway, and like most operational data stores and data marts, they’re busy during the ETL and occasional reporting processes and then sit idle. So why not have a read-only replica of the production vendor database on the same server? Point all reporting to that server, leaving the primary server to serve the vendor database…?

    3) I’ve also seen organizations use multiple instances when there simply wasn’t money for additional licensing per OSE, and/or architectures changed, and/or Microsoft changed their licensing (socket 2008 – core 2012 – BI edition 2014 – no more BI edition 2016)…

    Reply
