[Video] Office Hours 2017/08/02 (With Transcriptions)

This week, Brent, Erik, and Richie discuss unit testing, tSQLt, availability groups, duplicate indexes, Reporting Services 2016 on availability groups, troubleshooting Entity Framework queries, joining views to tables, SQL Server caching issues, and why procedures have multiple plans in the cache.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.

Leave us a review in iTunes

 

Office Hours Webcast – 2017-08-02

 

How do I sync settings across SQL Servers?

Brent Ozar: We’ll start with Rowdy. Hello, Rowdy. He says, “Hi folks, I’m migrating off a well-designed…” He’s only saying that because he doesn’t want to insult his predecessor. “…aging availability group. It’s SQL 2012 on Windows 2008 R2, moving to new hardware and a new OS, but unfortunately it won’t involve a SQL Server version change.” He says, “Are there any tools you’d use to check to make sure that all the settings and users and jobs are identical in the new availability group before cutting over?”

Erik Darling: Well, knowing Rowdy, I’m surprised that he hasn’t stumbled on DBA Tools for that because I know that they have instance comparison functions for SQL Server. I’d be surprised because I know he’s a PowerShell guy.

Brent Ozar: Erik and I were talking about this as part of the Faux Pas Project, the thing that we’re doing – these days it’s log shipping at scale, but for a while, we were talking about building availability groups at scale. And the idea was, could we conceptually script out all of the contents of a server, like all the metadata, trace flags, sp_configure settings, dump all that out into one file and then use it to import again into another one. It’s not out there; the functionality to do it just isn’t there. There’s nothing to compare two servers identically. The closest two things that I’ve seen are these: a Redgate schema compare will check and see the differences between DMVs, if you want it to, but of course, it’s going to produce all kinds of false positives, false change differences that are just garbage. Or Kendal Van Dyke wrote this SQL Power Doc that dumps out a lot of configuration into Excel. It’s built atop PowerShell. Kendal hasn’t maintained it for at least a year or two. He went to work for Microsoft and just stopped working on the project…

Richie Rump: People get busy at Microsoft, Brent, come on now.

Brent Ozar: I actually wrote a blog post about that this morning, it’s very funny how that works. But yes, so that’s a way you could go down and start, but I haven’t seen anything that catches all the trace flags, sp_configure settings, master database objects…

Erik Darling: I don’t even think there’s a good framework for it. Like, if I had to do it from a purely T-SQL point of view, I couldn’t see creating a linked server or creating synonyms to all the DMVs that point to the other server and then just doing like an INTERSECT or EXCEPT query that would catch things that are different between the two. That’s a nightmare because I would miss something; I would miss lots of things.

Brent Ozar: And even that, we’re talking about just differencing scripts. We’re not even talking about applying them either – you’d have to apply the differences too, and that’s even harder. Rowdy says, “I have found DBAtools.io…” So you know what I would do is, DBAtools.io has a super lively participation group in Slack. I would ask over there because I think – I’m sure that they don’t have an answer either, but I bet that they know people who would be interested in building that kind of thing, just long term.
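
For what it’s worth, a bare-bones sketch of that linked-server-plus-EXCEPT idea – [OLDSERVER] is a made-up linked server name, and you’d repeat this for every DMV you care about:

-- Settings that differ between this instance and the old one.
-- [OLDSERVER] is a hypothetical linked server pointing at the source instance.
SELECT name, CONVERT(BIGINT, value_in_use) AS value_in_use
FROM sys.configurations
EXCEPT
SELECT name, CONVERT(BIGINT, value_in_use)
FROM [OLDSERVER].master.sys.configurations
ORDER BY name;
-- Swap the two SELECTs around to see the differences from the other direction.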

 

Is there a benefit to duplicate indexes?

Brent Ozar: Let’s see, next up, Hannah says, “Is there any reason that I should want to keep duplicate indexes? I’m a junior DBA at a new software development company. I’m only working with developers and I’m the only DBA. They create stuff nobody has ever gone back to review or maintain. We have thousands of duplicate indexes in an instance, some because of uniqueness, some clustered with nonclustered indexes, some because there was a typo. So is there any reason I should keep those?”

Erik Darling: She mentioned a couple of the reasons that I think would be good if they were enforcing referential integrity or uniqueness. You know, if they were backing up a foreign key or something – but exact duplicates, where you see one has all the reads and the same amount of writes as another one that has none of the reads, then I would just start disabling those.

Brent Ozar: There’s one gotcha, which is if some developer has named the index in an index hint, their query will simply break. It won’t adapt and use another identical index, it will simply not work. Me, I look at that as a benefit, because you shouldn’t be hard coding index names inside your queries. So if you do it, just disable the indexes rather than drop them. The only reason I say that is that if somebody runs in and says my queries are all failing, you can do an ALTER INDEX REBUILD and it’s hydrated back up in a relatively short period of time.
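
A minimal sketch of the disable-first approach Brent describes – the table and index names here are made up for illustration:

-- Disable a suspected duplicate instead of dropping it.
ALTER INDEX IX_Users_DisplayName_Dupe ON dbo.Users DISABLE;

-- If somebody's hard-coded index hint breaks, bring it right back:
ALTER INDEX IX_Users_DisplayName_Dupe ON dbo.Users REBUILD;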

Richie Rump: Yeah, I love that problem, because us devs, we take a look at it, we run in a query and all of a sudden it says, hey you may be missing this index. And then we’re like, oh okay, let’s go ahead and throw that on there; let’s throw that shrimp on the barbie, I don’t know what it does, we’ll just go ahead and do it. And sure enough, it probably was there already, it just didn’t use it at that point and here we go, duplicate, duplicate, duplicate, duplicate.

Erik Darling: Now you have 40 indexes called <Name of Missing Index, sysname,>.

Brent Ozar: So I go to a client and go onsite and I’m in the room with the DBA. They’re having all these performance problems, and there’s nobody in there, just me and the DBA. One of the things I run is sp_BlitzIndex and we’re looking at it and one of the tables has 174 identical indexes. 174 indexes, not like on different tables, 174 on the same table. So you try to keep the poker face as a consultant and I’m like, hey so what’s up with this? Can you tell me a little bit about where we got to these 174 indexes? And the developer says, well when someone brings me a slow query, I go look at it, figure out – it says indexes are recommended and then I go and add those indexes. Again, I’m trying to keep the poker face and I’m like, well but all these are identical. He says, “Yeah it always recommends that I create an index, but the name already exists; I just figured it’s the name because you’re using tools like the DTA and the missing indexes [crosstalk].”

He would just put ASDF, ASDF, you know, bang on the keyboard at the end of the index name to give himself a name. I said okay, so where are you doing this at? Where are you doing your index tuning? He says, “I’m doing it over in development. I know better than to do it over on the production environment.” And I said, “When was the last time you refreshed development from the production environment? When was the last time you restored it over?” And he stopped and thought and he goes, “I don’t think I ever have.”

So the poor guy was repeatedly recreating exactly the same index because it was missing in dev and never actually bringing it over from production. You’d be amazed at what you see out there. The good part was, I said, look, I’m explaining to you what’s going wrong. It’s just you and me in this room, so what we’re going to do is drop 173 of these indexes and we’re going to go back and tell your boss that we found the magic go faster button and everyone’s just going to love you to death; and they did. Everybody learns somehow.

 

How can I easily troubleshoot EF queries?

Brent Ozar: Next up, Grahame says, “I’m troubleshooting Entity Framework-generated SQL queries. Is there a methodology or a tool I could use to make this easier? I’ve suggested that the developer start using LINQPad, but is there something else?”

Richie Rump: I guess that’s my question, right? There are a couple of monitoring tools out there; I can’t think of them off the top of my head right now…

Erik Darling: ANTS, Redgate ANTS. Just kidding…

Richie Rump: Well, ANTS is for memory and some other stuff. If you run your code through it, it will actually give you times – as you’re writing your code and debugging it, how long this ran and how long that ran. If you’re dealing with specific Entity Framework problems, it’s not going to tell you what the query was or anything like that. It’s just going to tell you that the function you ran it in took 30 seconds. And then it will tell you, probably, that the query itself ran for 29 seconds. That’s what ANTS will give you. I love ANTS, I think ANTS is a great product.

There are a couple of Entity Framework tools, and it depends if you’re using Core versus Entity Framework 6 – they’re essentially different products – that you can use to actually go and monitor stuff. I’ve used them. If I needed to do an EF thing right now, I probably wouldn’t. I’d probably just go into dev, run Profiler and start grabbing some of the queries from there and just start tuning some of that stuff. Probably actually run sp_BlitzIndex or something and start taking a look at the indexes, because that’s probably what it is. That’s where I go first, then I start looking at the queries themselves and see what’s going on inside of them. BlitzCache is another good one for that, if I start taking a look at what’s going on inside of the cache and where my queries are and how long things are running.

That’s probably where I would start, but there are some monitoring tools out there that you can use for Entity Framework to kind of see what’s going on inside and what’s taking so long. It’s probably some really complex query that’s generating some suboptimal SQL and you probably just need to back away from that and use a stored procedure, or figure out what exact data you need in that LINQ query.
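
If you want to go digging in the plan cache the way Richie describes, here’s a rough sketch – sp_BlitzCache’s parameters and the [Extent1] alias pattern are based on the First Responder Kit and Entity Framework 6 as of this recording, so treat them as assumptions:

-- Heaviest queries in the plan cache, EF-generated ones included.
EXEC sp_BlitzCache @SortOrder = 'duration', @Top = 10;

-- Or search the cache for text that looks EF-generated
-- (EF6 typically aliases tables as [Extent1], [Extent2], ...):
SELECT TOP (20)
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
       qs.execution_count,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%[[]Extent1]%'
ORDER BY avg_elapsed_time DESC;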

 

Any gotchas to adding SSRS to an AG?

Brent Ozar: Let’s see, Matt has a follow-up. He says, “I already have a 2016 availability group for a production website and this will just be on the same group of servers but in a different AG.” Yeah, in that case, I don’t think we’ve got anything, since we don’t do reporting services work. I would just read the Books Online stuff on adding Reporting Services into availability groups, because I want to say that there are some gotchas there. But we don’t do a whole lot of that, so you got me there.

 

How many cluster nodes can go down at once?

Brent Ozar: Gordon says, “I’m somewhat confused as to the minimum number of cluster nodes that need to be online for the cluster to still be up.” And he’s got a whole bunch of information down there about his specific scenarios. He’s got “Two nodes and a file share witness with dynamic quorum enabled, what happens if I lose one node, what happens if I lose another?” Now, unfortunately, this is beyond the scope of what I can answer fast. I actually have a whole 30-minute session on it in the senior DBA class, where I walk through the problems with dynamic quorum and dynamic witness. In theory, if you’re lucky and the right node owns the vote at the time, you can get all the way down to last man standing. If you’re unlucky, in a two node plus witness configuration, and the wrong node goes down, then you can’t get down to last man standing. So that’s why things are a little bit tricky there in terms of the articles you’ll see out there. Some of them just assume that the right node happens to own the vote. Some of them explain it, if you don’t have the right node owning the vote, you go down hard.
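
For reference, you can see which nodes currently hold a quorum vote from inside SQL Server (2012 and later) with a quick DMV query:

-- Which cluster members currently have a quorum vote?
SELECT member_name, member_type_desc, member_state_desc, number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;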

 

Are views slower when joined to tables?

Brent Ozar: Don asks, “I have a query that’s joining a view, then joining a table to add another column that probably should have been added in the view. Performance is terrible. Is there a known issue with joining views to tables?”

Erik Darling: No, there’s not a known issue with views in tables, but if those views call other views, like if you have nested views, you could end up with a whole weird thing. What I would want to do is start with the query plan for that and I would want to start with taking a look at that to see if there’s anything obvious jumping out at me there for why this might be so god awful slow if you’re joining just to add one column in. I know this is going to sound weird and hand wavey, but every time there’s been a case with me as a DBA where a developer has said, I’m just going to add this one column in. Two days later, this query is still running with, like, this awful key lookup and I’m sitting there like, yeah you just added that one column in, thanks, boss, saved the day.

Richie Rump: Yeah, I love the whole views on views things, because as a developer, that’s kind of what you want to build, right. Hey, I’ve got this one little piece of code and I’m going to build it on top of that, and now I have this, you know, flexibility. So if something changes here, it automatically goes up here and – all of a sudden I start nesting all these views and it makes great sense, right, from a developer’s perspective. But for performance, man, it’s just dog, it is terri-bad, just don’t even do it; don’t even think about it.
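
If you suspect nested views are in play, here’s a rough sketch for sniffing them out – nothing here is specific to Don’s schema:

-- Find views that reference other views, a hint that nesting is going on.
SELECT OBJECT_SCHEMA_NAME(d.referencing_id) + '.' + OBJECT_NAME(d.referencing_id) AS referencing_view,
       OBJECT_SCHEMA_NAME(d.referenced_id) + '.' + OBJECT_NAME(d.referenced_id) AS referenced_view
FROM sys.sql_expression_dependencies AS d
WHERE EXISTS (SELECT 1 FROM sys.views AS v WHERE v.object_id = d.referencing_id)
  AND EXISTS (SELECT 1 FROM sys.views AS v WHERE v.object_id = d.referenced_id)
ORDER BY referencing_view;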

Erik Darling: I’d really want to get the difference too between the query that the view produces on its own versus the query that it produces when it’s joined to the other table because that could also give you some unique insight into what’s changing underneath.

Brent Ozar: It can be – judging that you’re asking this question too, what you probably want next is help from someone looking at the views or looking at the execution plans. This is why we made PasteThePlan.com, and by we, I mean Richie. So PasteThePlan.com is where you can go paste in execution plans on our site and then they’re uploaded and shared with the public. Now, the first thing you need to know is these are now public execution plans. Anyone can see them. So if you have stuff in there that the company’s kind of ashamed of or social security numbers, credit card numbers, you probably don’t want to paste the plan.

If it’s totally public and you don’t mind sharing them, then put both of those two execution plans on there, upload both of them and then go to DBA.StackExchange.com. And at DBA.StackExchange.com you can ask questions about SQL Server that span multiple paragraphs, include the links to Paste The Plan and people will go through and look at those differences. If your query plans are private or you’re not supposed to show any of that data, go get SQL Sentry Plan Explorer. [crosstalk]

Erik Darling: SentryOne, rebranding…

Brent Ozar: They’re going to be cursed with this for years. SentryOne’s Plan Explorer is totally free and one of the features that it has is the ability to anonymize your plan. So it takes your execution plan, replaces all the tables with object one, object two, object three. Now from there, you can take that execution plan and post it on Paste the Plan. But here’s the thing, you don’t want to bother doing that, because no one wants to read your hideous execution plan once it says things like object one, object two, field one, field two.

So instead, at the top of SentryOne’s Plan Explorer, there’s a button that says post to SQLPerformance.com. This is where people, smart people like Paul White and Aaron Bertrand are sitting around waiting for your execution plans. And because they’re paid to do it, they are much more likely to look at that heinously ugly execution plan through object one, object two, object three. They can give you advice, but know that their advice is going to be things like, you should join object three field 60 to object two field 11 in order to improve your cardinality, and it just won’t be as easy as having somebody look at the real plan.

Erik Darling: And to be fair, they are also sitting around doing the same thing on DBA.StackExchange.com for free, so the advice is way worse, because it’s free, so you really can’t ask for a refund on that. Paul gives great answers, for free.

Richie Rump: Crazy great answers. It’s like whoa, I need to clear my mind, go do some yoga, read it, go do some more yoga, meditate on what he just wrote… Yeah.

Brent Ozar: I have read whole book chapters that are easier to get through than one of Paul White’s Stack Exchange answers…

Erik Darling: But you also learned less.

Brent Ozar: By far, yeah, yeah. Whenever I see Paul White has a new answer that’s more than a page long or he posts a blog post, I’m like alright, let me clear out a spot in my calendar here because I’m going to need to focus for a while.

Erik Darling: Pressing the print button…

Brent Ozar: Can I bring this with me to the plane?

Richie Rump: You’ve got four different color pens, you know, and writing notes on it.

Brent Ozar: I find stickers to be really helpful, you know, stickers with little arrows, maybe a unicorn for the good parts.

Erik Darling: I hired some paralegals to just go through and make sure everything – cited it and annotated it and put headers and footers on all these things. I don’t feel dumb…

Brent Ozar: I still feel dumb when I finish but – no I feel smart enough.

 

Why does the plan cache use so much memory?

Brent Ozar: Kelly says, “I have one SQL Server instance, 2012 Service Pack 1…” Well first off, you should get on a supported Service Pack, because that’s not even supported anymore. “Where the optimizer accumulates and holds memory over time, maybe weeks, until there’s very little memory available for database caching. I’m seeing this via IDERA SQL Diagnostic Manager. When I restart the instance, that resolves it, but then it just starts accumulating more cached plans again. What should I do?”

Erik Darling: Stop restarting your SQL Server instance. So that sounds like a pretty natural way that SQL Server works to me. Over time you read more things into memory and all of a sudden you’re fully cached, or at least you have as much stuff cached as you can fit into memory. I’d probably want – Brent has a good article on that, it’s called the Sysadmin’s Guide to SQL Server Memory.

Brent Ozar: That’s true.

Erik Darling: That’s true, he does have that. I think, that sounds like the tree you’re barking up, where SQL Server keeps using more and more memory and you don’t like it.

Brent Ozar: I wonder, when you say that SQL Server has little memory available to cache data, I would question, what’s the problem there? Are you seeing page I/O latch waits? Page I/O latch waits means that SQL Server has to go out to storage in order to pull data back and it’s waiting for that. If you’re looking at page life expectancy and going, this is way too low I have a problem, don’t do that; that’s not the real problem there. I’m going to give you a truly horrible answer too, if you wanted to reduce the amount of memory used to cache execution plans – is anybody looking? You could run DBCC FREEPROCCACHE, which will clear out all the execution plans stored in memory. This is a spectacularly bad idea, but if you were determined to do bad things, that would be a bad thing that you could do.

Erik Darling: Also, I’d be concerned about what IDERA Diagnostic Manager is telling me, because SQL will only reserve so much space for caching query plans, and it will evict those plans quite frequently. You see all sorts of memory pressure stuff coming in. So I guess what I would be concerned about, maybe, is that maybe I have a query that’s asking for a gigantic memory grant that’s clearing out the buffer cache and the plan cache. But still, SQL is pretty aggressive about the way it chooses where memory goes – you know, it will want to cache data for queries that are going to read stuff versus caching stuff like execution plans or DMV info or other things like that.

Brent Ozar: Brian Maynard also says, “Maybe check ad hoc plans and use optimize for ad hoc.” Yeah, that will reduce the issue. I would just kick back and go – I don’t think it’s really an issue at all.

Richie Rump: Maybe the answer is, what’s the problem?
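
If you want to check whether single-use ad hoc plans really are what’s eating the cache before flipping the setting Brian mentions, a rough sketch looks like this (optimize for ad hoc workloads is an advanced sp_configure option):

-- How much of the plan cache is single-use plans?
SELECT objtype,
       COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS BIGINT)) / 1024 / 1024 AS total_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
GROUP BY objtype
ORDER BY total_mb DESC;

-- If it's mostly single-use ad hoc plans, this is the setting Brian is talking about:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;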

 

Why does CHECKDB cause a restart?

Brent Ozar: Tami says, “Hi…” Hello Tami, “I have a SQL Server 2014 development instance. When I run CHECKDB on a 25GB database, it uses up most of the memory then crashes the virtual server and requires a reboot.” First, I would step back right there and go, any time something crashes, that’s probably not a SQL Server bug, there’s probably something else going on, or even a SQL Server workload issue. The crash, I might even think about opening a support call with Microsoft, because a crash shouldn’t happen regardless.

She goes on and says, “The server has 16GB of memory and about 13GB of it is allocated to SQL Server. If I reduce the allocated memory a little bit, it doesn’t crash. Does it seem right that DBCC should be able to cause this issue and is it a bug?”

Erik Darling: So the answer is yes, and I actually dealt with this at my last real job where I had a server with 512GB of memory, and when I had max server memory set to 485GB, DBCC CHECKDB on a 4.5TB database would crash the server. When I set it to 475GB, it would run fine.

Brent Ozar: Get out of here. Because it got a different memory grant based off that or?

Erik Darling: I think – honestly I don’t know what it was. I didn’t have the memory grant saved from one to the other, but I’ll just tell you, that 10GB of max memory changed everything. I want to say that Windows hit some breaking point where it was just like, I can’t manage this, and quit and that was it. Because it wasn’t SQL Server crashing, it was Windows.

Richie Rump: So Windows rage-quit, is that what you’re trying to say?

Erik Darling: Basically.

Brent Ozar: She says, “If I set it to 12.2GB it doesn’t crash.” There are two things I would say. Make sure you’re fully patched because there was a known bug around CHECKDB’s memory grants.

Erik Darling: That’s old though…

Brent Ozar: The other thing I would say is, this is a laptop from 2012, circa 2012. It has 16GB of RAM on it. I don’t know that a VM with 16GB of RAM – I don’t know that I would go troubleshooting that much further if you’re having memory issues. I would go add RAM, and if you’re really hard up, I’ll loan you some at a conference, because this isn’t even a recent laptop…

Richie Rump: Yeah, 2013, 32.

Erik Darling: The one on my desk, 32, the one under my desk, 128.

Richie Rump: My tablet, 16.

Brent Ozar: My phone, 128… No that’s [inaudible]. My thumbnail, a microSD card.

Erik Darling: My water bottle…

Brent Ozar: In my pants…
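
For reference, the max server memory change Tami and Erik are describing looks like this – the 12288 MB value is just an example for a 16GB box, not a recommendation:

-- 'max server memory (MB)' is an advanced option, so flip on advanced options first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example value only; size this to your own server.
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;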

 

How can I learn to troubleshoot replication?

Brent Ozar: Marcy says, “SQL Server transactional replication is so freaking hard to understand. Are there any good websites to decode it for troubleshooting?”

Erik Darling: TaraKizer.com…

Brent Ozar: You know, I don’t even know if she owns that.

Richie Rump: Well I do now…

Brent Ozar: If we weren’t on camera I would totally go buy that. I would just call Microsoft in terms of support. There are a couple of good books on Amazon, Hilary Cotter had a book, I want to say Kendal Van Dyke had a book on it as well.

Erik Darling: There’s a free one from like Redgate or something. Like Understanding Transactional Replication, I want to say. I started to read it like two or three years ago because I said, I’m going to put my nose to the grindstone and learn about replication because customers keep asking. And then I got like three pages in and said I’m not learning about transactional replication, you’re going to hear about something else instead.

Brent Ozar: It’s really funny, the people who wrote those books even have all gone on to other things in their career. There was a while there when one of the authors was emailing us and going, can you please take all replication consulting from me going forward? I was like, nope, not interested.

 

Why does my SQL Server use all of its RAM immediately on startup?

Brent Ozar: Matt says, “I have a server that’s got 32GB of RAM…” Like both of the laptops we just saw. “SQL Server is configured to use 24GB of RAM. When it starts, it consumes all 24GB. Is there a better way to configure memory for SQL Server? Is there a way to not have it consume all 24GB when it starts?”

Erik Darling: Why would you not?

Brent Ozar: I don’t know, I’m just the questioner here. Maybe he wants to save it for later. It’s like when I get a big [inaudible] from Wishbone Café down here in Chicago. [crosstalk]

Erik Darling: You could set max memory lower and then slowly increase it as it’s up longer, but I wouldn’t.

Brent Ozar: That’s cool…

Erik Darling: No it’s not.

Richie Rump: Maybe you could set an alarm for every hour that you could actually – every time it’s up, you could increase it 10MB…

Erik Darling: Or get one of those, like, bird things that goes like this and just hits F5. So like, every once in a while it will increase max memory.

Richie Rump: Maybe you should write a PowerShell script to do that, Erik.

Erik Darling: Yeah, that’s – I’ll get right on that, and I’ll see you guys in two years.

Brent Ozar: I would say, you may check to see if you’re using any trace flags or if you’re using lock pages in memory. There was one trace flag that involved enabling large pages. I want to say, for some reason, it was 834, I can’t quite remember, which causes you to allocate all memory on startup, which is by design…

Erik Darling: And could cause slow startup on servers with a lot of RAM. But Matt, you are not in that danger zone.
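
If you want to rule out the trace flag and lock pages in memory that Brent mentions, a quick check looks something like this:

-- Is trace flag 834 (large page allocations) turned on anywhere?
DBCC TRACESTATUS(834, -1);

-- Are locked pages or large pages actually in use?
SELECT locked_page_allocations_kb, large_page_allocations_kb
FROM sys.dm_os_process_memory;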

Brent Ozar: Tami who had the 16GB of memory says, “I’m being memory shamed?” Hey listen, it was much worse on my call this morning when somebody had a 1TB database and had 80GB of RAM. I was like look, here’s the deal, my desktop has 64, Erik is showing pictures of his desktop with 128…

 

 

Brent Ozar: [Tashal] points out that the free book that Erik mentioned is called Fundamentals of SQL Server 2012 Replication.

Erik Darling: There it is, written by [Tashal].

Brent Ozar: By all means, even though it says SQL Server 2012, it’s still good for 2014, 2016 and 2017, because Microsoft hasn’t put one peso of development into improving those features.

Erik Darling: Me either.

Brent Ozar: Me neither.

 

More about my ginormous plan cache

Brent Ozar: Kelly says, following up on her IDERA SQL Diagnostic Manager and plans question, “Most of the memory gets pinned to the optimizer, FREEPROCCACHE has no effect and ad hoc is enabled.” We’re hitting the end of the call, but the thing I would do for follow-up is post this to Stack Exchange; that’s a great place for it. Also include the total amount of memory on the server and the total database size. If you look at sp_Blitz, there’s an output type for Markdown. Markdown is the question format that Stack Exchange uses. Google for sp_Blitz markdown and there are instructions on how you can include that output in your Stack Exchange question; that will save a whole lot of back and forth with people asking for more details.
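
For reference, the Markdown output Brent is talking about looks something like this, assuming your copy of sp_Blitz supports the @OutputType parameter:

-- Output sp_Blitz results in a format you can paste straight into a Stack Exchange question.
EXEC sp_Blitz @OutputType = 'markdown', @CheckServerInfo = 1;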

Erik Darling: You know, I’d probably also want to go to IDERA and figure out exactly what that means, because when you say something like memory gets pinned to the optimizer, I’m sort of baffled by like how memory gets pinned to the optimizer. Because the optimizer is not really something that memory gets pinned to, it will take a little bit of memory to compile a plan, but it’s not a memory [crosstalk], yeah.

 

Why would a proc have multiple cached plans?

Brent Ozar: Alright, and the last one we’ll take, Rowdy says, “I’ve been working on getting intended execution plans published to our development team. One of the things I’ve found when I was exporting execution plans from cache is that some of my procedures have multiple plans in cache. Why would a procedure have multiple plans inside the cache?”

Erik Darling: Lots of reasons…

Brent Ozar: Like…?

Erik Darling: So many reasons. Different white space, different things getting passed in. God, there are so many; I’m trying to think of a good one, though.

Brent Ozar: My favorite good one is – so say you have the same stored procedure and it’s called by two different things, say one call is from SSMS and one is from your app server. If you have different ANSI_NULLS options, ARITHABORT settings, anything different about your session, you’ll end up getting different execution plans cached.
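
A rough sketch for seeing which cache-key attributes (the SET options, database id, and so on that Brent describes) differ across the copies of one proc’s plan – dbo.YourProcName is a placeholder:

-- See which cache-key attributes differ between the cached copies of one proc's plan.
-- Run this in the database that owns the procedure.
SELECT pa.attribute,
       pa.value,
       cp.usecounts,
       cp.plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE st.objectid = OBJECT_ID('dbo.YourProcName')
  AND pa.attribute IN ('set_options', 'dbid', 'user_id')   -- the usual suspects
ORDER BY cp.plan_handle, pa.attribute;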

Erik Darling: If you have the same stored procedure in multiple databases you’ll have different ones.

Brent Ozar: Even if it’s the same procedure in the same database, if you call it from different databases, you’ll get it. If you call it fully qualified from, say, tempdb and master and a user database, you’ll get it. And the epic blog post on this is Slow in the App, Fast in SSMS. This comes up at least once at every Office Hours, and frankly, I would say folks, any time you’re troubleshooting any kind of issue with SQL Server, whether it’s a failing cluster, always on availability groups broken, 2005 no longer under support or a query performance problem, you should start with Erland Sommarskog’s excellent post, Slow in the App, Fast in SSMS. And if you Google for that, it’s about 50 pages long, has a table of contents, but it makes your hair fuller, it brightens your skin…

Erik Darling: It’s a testosterone booster, it’s everything, it’s just amazing.

Brent Ozar: Makes grown men cry, makes young men cry, makes women laugh, yes. Well thanks everybody for hanging out with us this week at Office Hours and we will see you guys next week, adios.

Richie Rump: Can I go back to Postgres now, please?

Brent Ozar: You can go back to Postgres now.

Erik Darling: Get out.

7 Comments

  • Thanks for all you guys do! Still curious about your response about “duplicate indexes”. At the moment, I can’t fathom any reason to have an index that is exactly the same as another index on a table. This has frustrated me since version 6.5 and the fact that SQL Server lets you create a duplicate index on a table… calling it a different name of course. I have to believe that it would take only a few moments to change the Create/Alter index logic to check and make sure a “duplicate” index doesn’t exist. For now, at the very least give the DBA/Developer a warning that such an index already exists. Just think of all the I/O ops / CPU cycles / memory saved if we could collectively remove all of the duplicate indexes from SQL Server.

    • Say you inherit a database with an existing index, but the name is wrong – doesn’t match company standards, is misleading, etc. Go create the new replacement index, and SQL Server will use the existing index to build the new one. Then drop the old one when you’re done.

      If SQL Server prevented you from having duplicate indexes, you wouldn’t be able to do that.

  • Hey Brent, I want to comment on your statement “There’s nothing to compare two servers identically”. Have you looked at the Aireforge Studio tool by AireForge? (https://www.aireforge.com/)
