Five Things That Fix Bad SQL Server Performance

Is your SQL Server slowing you down? Are bad queries giving your application a case of the blues? Before you go looking for help with SQL Server, make sure that you’ve done everything you can to solve the problems on your own. Many of SQL Server’s ills can be solved with some easy preventative maintenance, patching, and TLC.

5. Upgrade to the latest SQL Server version

If you aren’t on a recent version of SQL Server, what are you waiting for? SQL Server 2000 and 2005 are no longer under active development – their query engines are intolerably old, and diagnostic support has improved by leaps and bounds in newer versions of SQL Server. Once you have the licensing figured out, use our SQL Server setup and post-installation checklists to make sure you configure SQL Server the right way the first time through.
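Not sure exactly what you’re running today? Here’s a quick sanity check from any query window – SERVERPROPERTY has been around since SQL Server 2000, so this should work everywhere:

    SELECT SERVERPROPERTY('ProductVersion') AS product_version, -- e.g. 11.0.xxxx = SQL Server 2012
           SERVERPROPERTY('ProductLevel')   AS product_level,   -- RTM, SP1, SP2, ...
           SERVERPROPERTY('Edition')        AS edition;         -- 64-bit editions include it in the name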

Why is a new version of SQL Server going to make things faster? New versions of SQL Server get new versions of the SQL Server query optimizer. While Microsoft sometimes sneaks big performance fixes into a service pack, the biggest improvements come in major version releases. New versions of SQL Server also contain bug fixes, take advantage of new CPU instruction sets, and are filled with the latest and greatest in software development techniques. In short: you wouldn’t expect a 10-year-old sports car to be as fast as a brand-new sports car, so why would you expect the same thing from SQL Server?

Some vendor applications can’t be upgraded to SQL Server 2008/R2/2012. Odds are these applications are also sitting on older physical hardware. When you’re stuck running old versions of SQL Server on old hardware, it’s a no-brainer to virtualize SQL Server and assign enough resources to keep things running quickly.

Even if you can only upgrade from 32-bit to 64-bit SQL Server, take the time to make the change. It’s well worth it to remove the limits of a 32-bit memory space.

4. Add more memory

SQL Server 2008-2014 Standard Edition maxes out at 64GB of memory, and SQL Server 2016 Standard raises that to 128GB. If you don’t have at least 64GB of memory in your SQL Server, what are you waiting for? Head on over to your vendor of choice and expense-account your way to better performance. Since SQL Server uses memory to cache data and avoid additional trips to disk, you might as well drop a bit of coin and max out SQL Server’s memory. You’ll get the added benefit of additional memory to cache query plans, perform bigger joins and sorts in memory, and you may even see a reduction in CPU and disk utilization to boot.

Make sure that you increase SQL Server’s max memory setting after you add more memory, too, or else SQL Server won’t take advantage of the new RAM you just added.
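That setting lives in sp_configure as max server memory (MB). As a rough sketch – treat the 57344 below as an example for a 64GB box, not a recommendation, because how much you leave for the OS depends on what else runs on the server:

    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Example only: cap SQL Server at 56GB (57344 MB) on a 64GB server
    EXEC sys.sp_configure 'max server memory (MB)', 57344;
    RECONFIGURE;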

3. Check Task Manager

You’ve upgraded to the latest and greatest SQL Server and you have 64GB of RAM in the server. Things are still slow – what now?

Open up Task Manager and sort first by CPU and then by memory. If there is anything running that you don’t know about, kill it. If anti-virus is installed, configure exceptions. If other users are RDPing into the server to develop SSIS packages, revoke their access. Get rid of every extra piece of software that is slowing the server down or eating up memory.

If you’re on Windows Server 2008 or newer, you should also make sure that the Windows file system cache isn’t eating up all of your memory. Windows will cache data for file system access when you drag and drop a file, copy it with xcopy, or push a backup across the network. This behavior is by design – it’s meant to make things feel faster for the end user. Unfortunately, it can also steal RAM from server-side processes like SQL Server. You can check this by opening up Task Manager, switching to the Performance tab, and checking the Cached number under “Physical Memory (MB)”.
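You can get a similar read without leaving SSMS – sys.dm_os_sys_memory (SQL Server 2008 and newer, needs VIEW SERVER STATE permission) reports the Windows system cache size alongside total and available physical memory:

    SELECT total_physical_memory_kb / 1024     AS total_physical_memory_mb,
           available_physical_memory_kb / 1024 AS available_physical_memory_mb,
           system_cache_kb / 1024              AS system_cache_mb,
           system_memory_state_desc
    FROM sys.dm_os_sys_memory;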

2. Look in the Event Log

Check both the SQL Server error log and the Windows Event Log. Both locations contain a potential motherlode of information that most teams ignore. If SQL Server or Windows is running into any kind of problem, they’re going to be complaining about it. In both locations you’ll be able to see if SQL Server is dumping core, waiting on disks for a long period of time, or encountering any hardware related issues. You’ll also be able to see what other services might be running into problems on the server – just because anti-virus isn’t running right now doesn’t mean it hasn’t been running in the past. The Event Log is another place to look to get an idea about what’s running on your server, what kind of problems it’s having, and what you can do about it.
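If you’d rather search the SQL Server error log with T-SQL than scroll through it in SSMS, xp_readerrorlog can do it. It’s undocumented, so treat the parameters below (log number, log type, search string) as the commonly used convention rather than a contract:

    -- 0 = current log file, 1 = SQL Server error log (2 = Agent log),
    -- then a string to search for - here, the classic slow I/O warning
    EXEC master.sys.xp_readerrorlog 0, 1, N'longer than 15 seconds';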

Filter down the Event Log to just warnings and errors and you may find out if you have hardware that’s on the verge of failing. Most hardware and drivers should report problems up to Windows and the Event Log makes it easy to track down these issues, diagnose the problem, and potentially find a solution. Eliminating hardware problems can be an easy way to eliminate causes of poor performance – you never know what kind of bugs are lurking in out of date drivers and firmware.

While you’re thinking about SQL Server and the Event Log, take a minute to make sure that you have some SQL Server alerts configured. These alerts will make sure that you’re being kept up to date on all of the goings on inside your SQL Server. As the person responsible, you should know if your storage is failing or if SQL Server is encountering serious errors.
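As a minimal sketch, the high-severity alerts can be created with msdb’s sp_add_alert and sp_add_notification. The ‘DBA Team’ operator below is just a placeholder – create your own operator with sp_add_operator first, and repeat the alert for each severity (and corruption errors like 823/824/825) you care about:

    EXEC msdb.dbo.sp_add_alert
         @name = N'Severity 019 - Fatal Error in Resource',
         @severity = 19,
         @include_event_description_in = 1;   -- 1 = include the error text in the email

    EXEC msdb.dbo.sp_add_notification
         @alert_name = N'Severity 019 - Fatal Error in Resource',
         @operator_name = N'DBA Team',        -- placeholder operator name
         @notification_method = 1;            -- 1 = email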

1. Run sp_Blitz®

The last thing you should do before going to get help is run sp_Blitz®. Adapted from our consulting work, sp_Blitz® is a fast way to diagnose configuration problems with your server. The procedure looks at configuration, database design, statistics, and index status to give you a set of general recommendations for improving server health. Don’t freak out after you run sp_Blitz®; work through the output, check out the reference links, and make the changes that you need to get your server in working order. Download it now.
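Running it is a one-liner once the procedure is installed (parameter names can vary between versions of the script, so stick with the defaults if you’re not sure):

    EXEC dbo.sp_Blitz;   -- default checks; work through the output from the highest priority down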



64 Comments

  • Michael Burman
    June 28, 2012 8:35 am

    Filesystem comment did not go quite right. The filesystem cache is using additional available memory to cache filesystem access and it’s not to “make everything look faster”, but is quite equal to the way SQL Server uses memory – to reduce disk trips.

    There’s no point in disabling filesystem cache, since it won’t steal memory from the SQL Server. But if you happen to have some extra memory available there, it might actually cache something your SQL Server will request from the disk – thus avoiding disk I/O again.

    • Great point, Michael – I wouldn’t want to give anyone the impression that they should start mucking about with trying to turn off the filesystem cache. It’s an important part of Windows, after all. There are some scenarios where the file system cache will grab hold of memory and never release it. In those cases, the cache size will directly affect the performance of SQL Server since the kernel is hanging on to memory pages that it firmly believes are important.

      But, in short – don’t figure out cunning ways to disable caching and remind people who RDP into SQL Servers to stop doing that.

      • Jeremiah,

        Thank you for this good post on issues to address.

        The file cache issue on SQL Server DB instances is one that still puzzles me. The file server components of the server OS need to be present on most / all DB instances, but it does not mean that system should allocate / consume large quantities of memory resources for file cache (intentionally or not) as if it is primarily a file server (resulting in memory pressure on the DB instance(s)). What I would prefer is an OS configuration setting that works like the SQL max memory config parameter for the file cache, as a file cache memory cap that will be observed to avoid DB instance memory pressure issues (and not violated to cause DB instance memory pressure issues).

        Some common scenarios I anticipate are:

        – System is a dedicated SQL Server DB instance with lesser file server needs: file cache should be configured / capped at some smaller maximum level – not turned off, but managed – just like a DBA manages DB instance memory as a part of the overall system memory resource sub-allocation process.

        – System is mixed use (DB instance(s), app services, web services, etc.): Same as above, but must also manage memory caps for DB instances (easy – max memory config) and other non-DB app services also to avoid memory over-subscription and pressure (easy? – not so much! Ever try managing memory with IIS? – not a pretty sight – no memory caps or LRU, just app pool recycle intervals that estimate how often problems will occur and throws the baby out with the bath water – go from hot to cold app cache and must incur overhead to rebuild the app cache – not LRU, but 0 to max to 0 to max …). Or maybe I am missing some other app service memory management config controls?

        – System is primarily a file server and also has DB instance(s): Technically possible but probably a bad idea, as the file cache and DB instance cache(s) (data / procedure / etc.) both have large memory resources requirements to run well.

        Michael’s reply suggests that the file cache serves a similar purpose as the SQL Server memory cache(s) – true on the surface, but assumes that the file cache is equally or more important on the same system than the SQL Server memory cache(s) – I disagree. File cache is very important on a dedicated, heavy use file server (preferably with no competing apps – including DB instances – that require large memory resources), but is of much less importance on a DB instance that happens to only lightly use file sharing services (the more typical case).

        Windows Server 2003 had two registry keys to identify whether the system was primarily a file server or an app server (these registry key values could be set through control panel dialogs or through RegEdit):

        HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache
        HKLM\SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\Size

        http://technet.microsoft.com/en-us/library/cc784562(v=ws.10).aspx
        http://blog.mischel.com/2011/06/10/more-windows-file-caching/
        http://msdn.microsoft.com/en-us/library/ee377058(v=bts.10).aspx
        http://serverfault.com/questions/37609/is-the-maximize-data-throughput-for-network-applications-setting-still-valid-o

        These registry key settings had an indirect impact on whether the file cache was capped at a much lower size (leaving more memory available for app services / DB services). Unfortunately, these registry keys and their supporting Control Panel dialogs were obsoleted starting in Windows Server 2008. I have not yet found another method to manage / cap the file cache on newer server OS versions. I still believe that file cache caps would be a worthwhile safeguard to manage unanticipated file cache memory pressure issues in situations that warrant such management.

        The only potential solutions I found so far (but not yet tested) for Windows Server 2008 and later:

        – Sysinternals CacheSet command (http://technet.microsoft.com/en-us/sysinternals/bb897561.aspx) – not yet sure if it works on Windows Server 2008 and later

        – “fsutil behavior set memoryusage” command (http://technet.microsoft.com/en-us/library/cc785435(v=ws.10))

        Any other thoughts on server OS file cache cap config parameters?

        I agree that avoiding bad behaviors (RDP, large file copies, etc.) is a good practice, but they can’t be blocked if they occur (intentionally or not) without adequate safeguard policies and mechanisms. File cache safeguards would be a good thing, in my opinion.

        Scott R.

        • Unfortunately, I don’t know of any other way to reduce file system cache apart from not incurring the penalty in the first place. The CacheSet and CacheMan options from Sysinternals work very well, but it’s pretty easy to get into trouble with them – you can easily set cache values too low and starve the operating system. In the end, adopting remote management techniques is going to be one of the best ways to solve this problem.

          SQL Server 2012 gives us an easier option, too: installing SQL Server on Windows Server Core. While it’s still going to be possible to bloat the file system cache using a buffered copy, it’s going to be more difficult for people to do.

  • What did you mean under “anti-virus exceptions”?

    • You want to configure your anti-virus software to ignore SQL Server data files as well as the SQL Server program files directory. You’ll find instructions in our SQL Server 2005/2008 Setup Checklist.

      • Also, in my experience it helps to turn off indexing of the data and log disks.

        What is wrong with RDP into the servers and how do you recommend to manage the servers? Thanks!

        • RDPing into a server uses memory and CPU resources that are better left dedicated to SQL Server. Using other applications, like SSMS or IE (if you’re RDPing into SQL Server, I bet you’re also using it to hit the internet), also steals memory from SQL Server.

          Every application that you’re running on that server can be run remotely. Perfmon can be set up to collect counters remotely, SSMS can run remotely, and you can even access server level settings remotely. In addition, you can use tools like PowerShell to give you incredibly rich remote administrative options.

          If it’s a production SQL Server, I do my best to lock it down from all access, even my own.

          • I definitely agree with Jeremiah on this!! I have seen multiple gigabytes of RAM being consumed by multiple RDP’ers into a production SQL Server. I have seen multiple full cores running at 100% because SSMS or IE or whatever decided to hang in an infinite loop on something stupid. RDP should VERY rarely be used, if ever.

  • Hi Jeremiah,
    Couple of questions in my mind. Thought I would clear that up.

    If I carefully set my Max and Min memory thresholds and set LPIM too after evaluations, then the file system cache will utilize only the memory reserved for the OS, right?

    At any point will the file system cache ever request SQL to release memory, even if LPIM is set?

    Please confirm.

    • I’m not going to lie – I have no idea how that would work out. You should set up a test system and determine how SQL Server and Windows will behave.

      SQL Server will not allocate the minimum memory you set up for SQL Server unless it’s actually being used. So you could definitely end up in a situation where SQL Server runs into memory pressure despite min and max memory being configured as well as LPIM being set.

      • Setting min memory will cause SQL to keep that memory and not allow other processes to grab it. I generally see SQL going right up to the max setting and not letting that go either. I like to set SQL so it never uses more than about 75% of the CPU unless it’s a dedicated SQL box then it can be a little higher, it all depends on what has to run on that server.

        As for performance issues, the biggest hit I get is actually setting min and max memory and not leaving it at the default setting. 2005 would allow SQL to eat all the memory on the box, even squeezing out the OS until you cannot log on to the server. I ‘heard’ this is resolved in later versions, but I still see servers with later versions, with the default settings, having unsolvable performance issues, which disappear when I set min and max (no restart required).

        • Out of curiosity, what’s your technique for setting SQL Server so it can’t use more than 75% of available CPU?

          What you’ve seen in SQL Server, when max server memory has not been set, is the correct and advertised behavior. Windows will not request that SQL Server trim the working set of memory until Windows is under memory pressure, which generally happens when around 12MB or less of free memory is available.

          I don’t recommend setting min server memory because, frankly, I don’t recommend having anything else running on the SQL Server.
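           For anyone following along, a quick way to see what both values are currently set to – value_in_use is what the running instance is actually honoring:

               SELECT name, value_in_use
               FROM sys.configurations
               WHERE name IN (N'min server memory (MB)', N'max server memory (MB)');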

          • To set max memory to use 75% of the CPU… it depends! Each server you need to take into account what else is running and the needs of that application. (I wish we had dedicated SQL boxes…Believe me that’s what I recommend.)

            I do a calculation and set max memory of SQL accordingly. I then watch the memory. If it’s going up to 90% and stays there I lower max memory. 85% is as high as I try to let it get. Having said that, newer versions beware because the memory for some in memory operations is not included in the max setting (so I’ve read….) and even if you say use only 7GB, it is likely using more. So basically, keep memory and CPU reports and check it out when you first configure a server or when there are changes.

  • The 64GB memory limitation for SQL Server 2008 R2 Standard Edition – is that per server, or per instance?

  • I like to run the awesome free tool SQLJOBVIS to look for overlapping SQL Agent jobs that can very easily cause an exponential breakover in performance due to resource starvation on a server. SQLJOBVIS shows agent jobs in a gantt chart arrangement. Very slick.

  • Jeremiah, I think you may have missed one of the requirements for 64gb of SQL Server memory on #4. According to MS (http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx), the standard editions of Windows Server 2003/2008/R2 only support up to 32gb of RAM. It’s definitely doable (IIRC Windows Server Enterprise isn’t nearly as pricy as SQL Server Enterprise), but it’s still a requirement.

  • Hi Jeremiah Peschka,
    Thanks for the performance analysis points. But I don’t think it is possible for every developer to have 64GB of RAM. Sometimes it is hard for the smaller hosting companies too.
    So it is hard to accept.

    • Muhammad – 64GB of server memory is around $600. If you can’t justify that, then you don’t really need performance, do you?

      • Man I wish I could sell that. My VM admins give me this: “It’s not about ‘a $600 investment in RAM’; it’s about right-sizing the solution so the VM runs as efficiently as possible in our environment.” I need to work on upping my persuasion game and provide memory pressure stats.

        • Well, flip things around. Are the business users complaining about performance? If so, point them to the VM admins along with the “$600 investment” line. You’d be surprised at how well business users can convince IT people to spend $600 when performance is on the line.

    • As someone already mentioned, 64GB of RAM is dirt cheap. Seriously cheap. Less than 2 days of a DBA’s salary kind of cheap. Man-hours of experienced staff are very expensive, far more expensive than equipment.

  • komplikator
    July 2, 2012 8:43 am

    OMG, this is wrong on so many levels…

    Upgrades are never made to optimize performance in such a manner. An upgrade is dangerous and should always be tested carefully. Also, sometimes it doesn’t work and you must not upgrade. Then what?

    4. You can do plenty even with just 8GB of memory. Money doesn’t solve everything. If your understanding of SQL sucks, additional RAM will do you little good.

    3. and 1. Why diagnose using your tool? Why not simply diagnose using provided tools? Job activity monitor, sql trace, performance monitors, SSMS, and everything combined together with a little brainwork – that’s much more effective. Learn to use them, understand SQL mechanics just a little bit more than nothing, and that’s all you’ll ever need. Plus, it’ll be fun.

    • Upgrades can be dangerous, but upgrades also include huge performance boosts. To name a few: database mirroring performance is vastly improved in SQL Server 2008 thanks to some compression changes, table partitioning gained parallelism within a single partition in SQL Server 2008 as well, and SQL Server 2012’s query optimizer has some new query rewriting rules that eliminate a lot of correlated subquery problems.

      As far as why use the tools that we distribute? They’re free, they build on the existing tools inside of SQL Server, and they work right away to point the user at the exact problems they’re having. The curious can open up the stored procedure and see how we determine that XYZ is a problem, dig into the mechanics, and come out smarter.

      When your data size is less than the amount of RAM, spending money on RAM is the fastest way to performance. $600 for 64GB of RAM is a very small number of man hours. When your data set exceeds the amount of memory you can cheaply put into a server, that’s when it’s time to start investigating using manpower to solve the problem.
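      If you want a rough sense of which side of that line you’re on, comparing total data file size to the memory in the box is a decent start (physical_memory_kb is the SQL Server 2012 column name; 2008 R2 and earlier call it physical_memory_in_bytes):

          SELECT (SELECT SUM(CAST(size AS BIGINT)) * 8 / 1024 / 1024
                  FROM sys.master_files
                  WHERE type_desc = 'ROWS')          AS data_file_gb,
                 (SELECT physical_memory_kb / 1024 / 1024
                  FROM sys.dm_os_sys_info)           AS physical_memory_gb;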

  • Jeremiah’s first and third points in that last post are spot-on. RAM ABSOLUTELY can make up for “sucky SQL understanding” in many cases. I acknowledge (and have seen in my own consulting) that there are indeed cases where you simply cannot buy big enough hardware to overcome design/data flaws.

  • Eugene Krabs
    July 2, 2012 12:45 pm

    Reboot

  • For better MS SQL performance you can use safepeak (www.safepeak.com) to cache queries without touching your SQL Server.

    • John – according to your product’s FAQ, it doesn’t yet support SQL Server 2012. That’s one of the things that really make me nervous about using third party products to “accelerate” database servers – often they do the exact opposite because they hold the company back from using the latest and greatest tools.

  • Chris Adkin
    July 3, 2012 3:05 am

    On the subject of upgrades, I believe that people who upgraded from Windows Server 2003 to 2008 found that this resolved a lot of memory-related issues which manifested in SQL Server.

  • Great post.

    http://ask.sqlservercentral.com/index.html and http://dba.stackexchange.com/ are also great places to ask for SQL Server performance tuning help.

  • Gabriel Federico
    January 24, 2013 10:34 am

    Hello Jeremiah, I am running a query on SQL Server and it is getting very slow. The same query, a few days before, was taking just 8 minutes to complete; now it is taking 43 minutes. I refreshed materialized views and everything stays the same. I activated the SQL Server log to see what is going on, but we are not able to see anything, and SQL Server is still taking 100% CPU during this query.

    Please, do you have any suggestions to solve our problem?

    • Jeremiah Peschka
      January 24, 2013 10:42 am

      Hi Gabriel, that’s a great question. There are a lot of possible reasons that you can see these differences and each one of them is the subject of a full blog post (if not more). Any advice I can give you here would be general and may not apply to your problem. If you’re okay with generalized answers, you should narrow down the question as tightly as possible (even providing sample schema and queries) and post it on http://StackOverflow.com or http://ServerFault.com. If you’d like individual help on this, head over to our contact form and send over additional details; I’d be happy to chat with you about what a consulting engagement to get to the bottom of this problem would look like.

  • Recently we faced one very odd incident where SQL Server (2008 R2) did not release memory to the OS (2008 R2) after reconfiguring max memory from 6.5 GB to 5 GB out of 8 GB total (we had around 1,500 pages/sec). We verified whether sp_configure had released the memory or not after a couple of hours, even from Task Manager, memory usage, etc. However, after a couple of days we ran the same sp_configure again and this time it did. The only difference was the timing – what would have helped the change work later but not the first time?
    Thanks,

    • Chetan – hmmm, troubleshooting this is a little outside of the scope of something we can do after-the-fact in a blog post comment. Have you thought about rerunning an experiment to do the same thing on a development server?

  • Great article indeed
    if somebody is going for a tool to improve SQL SERVER performance
    can you suggest some tools plz

  • Hi team,

    While upgrading from SQL 2008 R2 SP2 to SQL 2012 SP1, all components are upgrading apart from SQL Server Data Tools. We are getting the error message below. Can you please suggest a step-by-step approach?

    Product: SQL Server 2012 BI Development Studio – Update ‘Service Pack 1 for SQL Server Business Intelligence Development Studio 2008 (64-bit) (KB2674319)’ could not be installed. Error code 1603. Additional information is available in the log file C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20140222_130128\_SHARED_\sql_bids_Cpu64_1.log.

  • Wow! What a simplistic view of a very complex problem!

    I am an aforementioned VMware administrator:

    1. Just spend $600 on some RAM – yeah, multiply that by the number of VM cluster nodes we have and that figure goes up substantially, because if you don’t balance out the cluster with resources you end up essentially breaking all that is good about a virtualised environment unless you are running on a single host. Let’s not size anything properly and just throw an infinite amount of resource at it to make it run faster. It makes the SQL admin’s job easier and just pushes the problem elsewhere, i.e. constantly having to refresh hardware due to lack of tuning on the SQL part, or troubleshooting runaway or poorly written queries. Great idea!

    2. Whilst spending all that money on RAM, where exactly will it go when there are no free RAM slots, which funnily enough aren’t infinite yet on today’s motherboards? Constantly adding more and more hardware to VM hosts ultimately means you are then forced into buying new hosts to accommodate growing RAM requirements, again due to not sizing your workloads properly. The figures for buying such massively specced hosts will far outweigh the cost of a SQL admin sat not doing their job properly.

    Your article is very simplistic and doesn’t take into account the rest of the infrastructure you are working with. Chucking resources at a problem is not solving that problem, it’s just temporarily masking the symptoms of that problem. Sooner or later you need to deal with the issues which caused the problem in the first place. Why not do it sooner, rather than first forcing a massive amount of spend on resources that probably aren’t needed in 90% of cases?

    • Jamie – you note that you have a large number of VM cluster nodes and that you want to balance them out. What’s awesome about current versions of VMware is that they automatically balance the load out for you with DRS. It’s a really cool feature, and I’d highly recommend checking it out. Most of our clients don’t keep evenly balanced hosts – it’s just too painful and expensive as you note. Enjoy!

      • Funnily enough I know all about DRS! I look after an environment with 700+ production virtual machines and many more development machines, so yes. I know all about this feature!

        Keeping nodes balanced out and evenly matched resource wise is essential to keeping a healthy environment. A massive over balance will cause you no end of issues if you have a host failure that holds far more resources than the other cluster nodes.

        You have 4 hosts, for example. 3 of them have 64GB RAM and 1 of them has 128GB RAM. The host with 128GB RAM fails. Now, where do all those virtual machines go when HA kicks in? They go to the other nodes. If these nodes are at say 90% RAM capacity already what do you think happens when another 128GB worth of virtual machines hits them? You end up with contended memory, swapping and a very nasty impact on performance. Any form of clustering requires balanced nodes for a healthy environment.

        Also, another issue that has popped up with SQL admins is wanting a vast allocation of CPU resources for virtual machines running SQL, even when performance metrics don’t merit it. Again, they get a little upset when told no, and I have to explain to them the wonders of VMware CPU scheduling and ready times. Over-allocation of CPU resources actually does virtual machines harm if they aren’t touching that allocation, as more cores means a harder time for those cores to get simultaneous scheduling by the vmkernel. The ready time goes up and virtual machines sit waiting to get all of their cores scheduled in free slots on the hosts. Example: we had some servers in the past that had 10%+ CPU ready time over a 1-week period. Do the maths. What is 10%+ of a 1-week period in terms of hours waiting for CPU resource? There are 168 hours in each week, so a figure of 10% ready time means that virtual machines can spend 16.8 hours per week ready to do work but unable to secure time on the scheduler.

        Resources are not the answer to all of life’s problems. Correct management is. It’s this misguided and simplistic assumption that makes environments messy, poorly managed and prone to cluster wide performance issues for the sake of a few machines. Don’t get me wrong, I would love massive amounts of RAM and CPU myself to play around with in my clusters, but a sad fact of life is that it’s better to put more legwork into actually finding the real cause of problems rather than allowing them to bloat out, most of the time unnecessarily. 🙂

        • Jamie – actually, you answered your own question:

          “You have 4 hosts, for example. 3 of them have 64GB RAM and 1 of them has 128GB RAM… If these nodes are at say 90% RAM capacity…”

          Bingo – there’s your problem. It’s unwise in a virtual environment to run with 90% resource allocation unless you’ve got at least 10 hosts (thereby giving you failover capacity for one host). In resource allocation, this concept is called N+1.

          “Resources are not the answer to all of life’s problems. Correct management is.”

          I wholeheartedly agree. Correct management means not running at 90% resource allocation. I think we see eye to eye there. 😉

        • “… more cores means a harder time for those cores to get simultaneous scheduling by the [hypervisor’s] kernel. ”

          Is there still a requirement for co-scheduling for guest vCPUs on the current version of ESX? How about in Hyper-V, Xen, etc? If there is, it would seem to running well-performing highly concurrent workloads in a virtual machine.

          • It would seem to *preclude*, I mean.

          • Newer versions of all hyper-visors have far less strict co-scheduling requirements. It’s possible to run highly concurrent and resource intensive workloads in virtual machines. AWS is a perfect example of how Xen can be used effectively in a highly concurrent environment.

          • Thanks for the reply, Jeremiah.

            I’ve read similar things elsewhere, but thought I should ask because I still see people trying to run large (for some definition of ‘large’) database servers on 1-3 vCPUs.

  • I still feel you’re missing the point here:

    1. The figures I used are purely illustrative.

    2. Getting spend for resources in some companies is not as easy as just asking for it, even with valid justifications. Even if it were, it’s not the best approach. Tightly manage resources and spend when that avenue is exhausted. Correct sizing of machines is important even if you have a massive spare resource overhead. You wouldn’t send a book-sized parcel in a cardboard box suitable for a 50″ TV.

    The one thing that really irks me too is supplier recommendations for spec. They are usually wildly exaggerated. Let’s forget about SQL Server here for a minute – I am talking about other 3rd-party applications. We have requests for machines specced with a crazy number of cores, sometimes 6, 8 or 12. When you roll the service out and 100 users are connected, you see that the load for the server doesn’t even touch the sides of 2 CPU cores. A lot of vendors make these recommendations so they don’t have to deal with resourcing in any support-related queries. Unfortunately, though, this approach usually pushes massive overspend onto anyone attempting to roll out their product or service. This is one of the battles I face on a weekly basis.

    I see your points. SQL server does like memory but properly tuned environments can go a long way to maximising the resources at your disposal with minimum waste and still get great performance out of the systems you run.

    • You wrote:

      The figures I used are purely illustrative.

      Right. They illustrated my point, not yours. You’ve gotta come up with figures to illustrate *your* point.

      Getting spend for resources in some companies is not as easy as just asking for it, even with valid justifications.

      Re-read the first two sentences of this post – it’s about when you’re having bad performance, and you care enough to Google for the fix. In that case, getting money for resources can often become much easier than you think. We’re not arguing for large amounts of vCPUs or memory for the initial setup – just for fixing bad performance.

      I think that’s where you’re getting hung up.

  • Hi all,
    I am facing an issue on one of my servers. Last Sunday, after a DB server reboot, all queries are going parallel, including even simple SELECT queries. Please help me fix this issue.

  • The link “virtualize SQL Server” in Step 3 leads to https://www.brentozar.com/sql-server-training-videos/tuning-sql-server-on-vmware/ which only displays “page not found”. I think this is not the expected information.

  • Comparing performance between our local and Maquet systems, we could see a huge difference in performance.

    Performance Statistics:

    Stored Procedure run time in Maquet Server : 22 Minutes 30 Seconds.

    Stored Procedure run time in local System : 5 Minutes 20 Seconds

    Enclosed system configuration.

    What is the reason ?

  • Hi there,

    I have a question.

    Are there any tips you can give for improving performance of a client desktop app that connects to a remote SQL Server using TCP/IP? The remote TCP connection is about 2-5x slower, depending on the query. Upload speed on the server is not that great, but maybe there are some things we can try to help it?

    Thanks!
    Dave

  • make gantt chart
    September 10, 2018 11:05 am

    Thanks, great site. Some time ago I got interested in this topic and found a lot of useful material, but this article made me review my opinion.

  • Hello,
    I am using Windows Server 2012 R2 and my system memory is 64GB. The problem is that my SQL services use a minimum of 28GB, and during this time my whole network is very slow. In this scenario, what should I do for best performance?
    thanks

