SQL Interview Question: “Talk me through this screenshot.”

After writing about “For Technical Interviews, Don’t Ask Questions, Show Screenshots”, lots of folks asked what kinds of screenshots I’d show. Here’s this week’s example.

I show each screenshot on a projector (or shared desktop) to the candidate and say:

  1. What’s this screen from?
  2. What does the screen mean?
  3. If it was a server you inherited from someone else, would there be any actions you’d take?
  4. What questions might you want to ask before you take those actions?
  5. Would there be any drawbacks to your actions?
  6. What would be the benefits of your actions?
[Screenshot: Rorschach test]

After a few days, I’ll follow up with my own thoughts.


UPDATE 2016/05/20 – Great thoughts, everybody. This one was fun because it stems from real-life scenarios I’ve seen several times. You wouldn’t believe how long it takes folks to recognize this screen in real-life interviews – often it takes DBAs tens of seconds to realize they’re looking at TempDB. (They often start by talking about some other database because the file name tempdev is so deceiving.)

The DBA heard that they were supposed to create a file for each core, but they misunderstood the difference between cores and processors. The server had 2 processors, each with 4 cores – but they created 2 data files originally.

They had a super-fast SSD attached to the SQL Server as E:, and it’s got a relatively limited amount of space – say 128GB – so they started with small file sizes and let them autogrow.

At some point, the SSD ran out of space, so the DBA added another emergency overflow file on a slower drive (M:). Maybe they shrank it back manually, or maybe they have a job to shrink it – in either case, I get a little suspicious when I see small file sizes because there’s probably shrinking going on.
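
If you inherit a server like this and want to confirm the shrinking hunch, the Agent job catalog is one quick place to look. A minimal sketch, assuming the shrinks were scheduled through SQL Server Agent (ad-hoc shrinks won’t show up here):

SELECT j.name AS job_name, s.step_name, s.command
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
WHERE s.command LIKE '%SHRINK%';  -- catches both SHRINKDATABASE and SHRINKFILE steps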

I got a chuckle out of the answer about the server being a dev box because the database file is named tempdev – even though I see a ton of servers, the default “tempdev” makes me pause every time because it was such an odd file name choice by Microsoft. Funny how everybody’s just as bad at naming conventions as I am.

So to answer the questions:

3. Would I take actions? I’d check to see if there are shrink jobs set up on TempDB, and if so, I’d start by disabling those. I might consider adding more TempDB data files, although if it only had one data file, I’d be a little more careful because it can have a funny side effect.
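
For illustration, here’s roughly what evening out the existing files and adding more might look like. The sizes, growth increments, and path are placeholders – you’d pick them based on the E: drive’s actual capacity – and tempdev and temp2 are the logical file names visible in the screenshot:

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp2, SIZE = 4096MB, FILEGROWTH = 512MB);
-- hypothetical path; match the folder the existing data files live in
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'E:\MSSQL\DATA\tempdev3.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);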

4. What questions would I ask? What wait types is this server facing? Is the E drive actually a good spot for TempDB? How are the file stats looking on that drive? Have we had a history of running out of space here? How big are the user databases? Are we sorting indexes in TempDB?
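
A couple of starting points for those questions – both DMVs ship with SQL Server, and the column lists are trimmed for space:

-- What has this server been waiting on since the last restart?
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- How are reads and writes against each TempDB file actually performing?
SELECT mf.name, mf.physical_name,
       vfs.num_of_reads, vfs.io_stall_read_ms,
       vfs.num_of_writes, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;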

5. Any drawbacks? If TempDB is getting regularly hammered, and it runs out of space and needs the overflow file, I might not know it due to the shrinks. I’d start by disabling the shrink jobs so that I can see if this thing grows, and what it ends up growing to. That’ll help me plan for capacity.
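
One low-effort way to do that capacity planning, sketched here: sample TempDB’s space usage on a schedule and keep the results, rather than letting a shrink job destroy the evidence. Counts are in 8KB pages, hence the math:

SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;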

6. Benefits to my actions? Some folks mentioned adding files or pre-growing files can make it faster for end users, but be really careful there. Anytime you say something will be faster, then as an interviewer, I’m going to challenge you to define what you would measure, and how it would change. If you don’t have metrics at the ready, then I’m going to suspect cargo cult programming.


60 Comments

  • greatness 🙂 I’m using this!

    • For me it looks like there is not enough free space on drive E:, so someone has added temp_overflow on another disk. Maybe it’s time to upgrade/add some drives to this server.

      • Michael – is there a situation where you can imagine having this setup as normal?

        • Andrew Hill
          May 17, 2016 12:36 pm

          I have a setup like this on an AWS server: drive E is fixed-size files on local ephemeral (high-performance) storage, and the overflow points at spare space on a large general-purpose drive.
          The downside of this is the cleanup after the overflow is used. (This is a reporting server, not transactional.)
          It has a significant impact on performance, since SQL Server doesn’t take drive performance into account during the file round-robin.

          This means most queries hit the high-performance drive, without my having to say “no big queries”.

  • SQL Server Management Studio, showing the properties of the tempdb database.

    1. Files are not all the same size.
    2. Can’t see what resources the server has (disk, memory, CPUs, etc.), but the initial sizes of the files seem really too small. Need to verify all of this before making changes.
    3. Would look at settings for the model DB as well.
    4. Set autogrowth to a larger size than 256 MB. Maybe 1GB each (again, depending on what I see once I look at available resources).
    5. Changes to tempdb need a reboot of the server to take effect, especially changing the initial size of files.
    6. Benefit would be better performance, since tempdb would not be expanding files before being able to create objects in it.

    • Tracy – great start! Here’s my followups:

      1. What does that mean to me?
      2. What would you base your decisions on?
      3. What would you look for?
      4. How do you pick that number?
      5. Tell me more about that.
      6. How would you measure the performance improvements? What would users expect to see?

      • 1. What does that mean to me? Can cause uneven fill of the files as well as performance issues.

        2. What would you base your decisions on? Disk configuration would dictate where and how the temp files are placed. CPU would influence how many files to have. Starting place of one file for each CPU core in the system, then monitor tempdb file contention on the Page Free Space (PFS), Global Allocation Map (GAM), and Shared Global Allocation Map (SGAM) pages to see if more are needed.

        3. What would you look for? The tempdb is created based on the model database. So make sure the model autogrowth is set to a size that makes more sense, like 1GB for each file. Turn off unlimited growth to prevent filling up the hard drive if a process whacks out.

        4. How do you pick that number? Set the number based on the size of the drive the files are on and what the server is used for (OLAP or OLTP).

        5. Tell me more about that. A reboot would be required if I had to move files (can’t see the drive configuration) to a drive with more space.

        6. How would you measure the performance improvements? What would users expect to see? As above, I would monitor the PFS, GAM, and SGAM pages (see the sketch below). Users could find better performance on sorts and temp object creation (table variables, temp stored procedures, cursors (YIKES!!), etc.)
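
        A minimal sketch of that monitoring, assuming you just want to catch the contention live: TempDB is database_id 2, and in resource_description, page 2:1:1 is a PFS page, 2:1:2 the GAM, and 2:1:3 the SGAM (each repeats at fixed intervals through the file).

        SELECT session_id, wait_type, wait_duration_ms, resource_description
        FROM sys.dm_os_waiting_tasks
        WHERE wait_type LIKE 'PAGELATCH%'
          AND resource_description LIKE '2:%';  -- waits on tempdb pages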

        • Paw Jershauge
          May 17, 2016 7:48 am

          I’m pretty sure that 3. is not correct, as the TempDB settings override the model settings… so if you set the TempDB settings to something other than the default, those will be used.

        • Tracy – uh oh, it’s time to brush up on some of these, especially the one-file-per-core, rebooting Windows in order to see newly added drives, etc.

  • 1) Database Properties from SSMS.
    2) You’re looking at the file layout for tempdb on a particular instance.
    3) Immediate low-impact changes would be to correct the 1MB file growth, size the files identically. I’d ask to see the rest of that file path to see where these files are laying on disk. I’d also look at the rest of the server specs to see if tempdb is pre-sized appropriately.
    4) Why is there an overflow file at all? Was it a one-time thing? If so, can it be removed? If it’s continually necessary, should the disk/files be rearranged to accommodate? Do you have any baseline data or trending info for this server? Is that a local account connecting? Are we using AD here?
    5) If there are other instances on this box, are we sharing nicely?
    6) If tempdb is under-sized and we size it appropriately, we’re preventing auto-growths (especially multiple 1MB growths), sizing each file the same gets better proportional fill. World peace. 🙂

    • Jeff – gotcha. Followup questions:

      3. When you say “correct” it, what does that mean? What are the correct configurations?
      4. Elaborate more on what you mean by an overflow file.
      5. What do you mean by sharing nicely?
      6. Can you be more specific about those benefits? Will users notice, and how?

      • 3) Ah, I said autogrowth when I meant initial size. Matching the file size to the others allows for proportional fill; otherwise (and I’m going to sound unsure here if I were being interviewed) I believe the other files get used before the “overflow” file. The “correct” size and autogrowth would be impossible to determine from a screenshot alone, but clarifying questions can get you closer (like what the largest table on the instance is, and how much space is available on the drive).
        4) The file is specified as “temp_overflow”, which has an implication that tempdb needed some extra data space to breathe at some point. Seeing that it’s on a different drive seems to support that. More questions here would be a good idea.
        5) If multiple instances were on the same box, I’d want to make sure that either the tempdb drives are isolated from each other, or are capped properly. Something I see occasionally, robbing Peter to pay Paul.
        6) Users will most likely not notice these changes, but if I’m in charge of this server I’d want a solid starting point.

  • #3 – Ask what’s our expense budget so we can hire Boba Fett to bring us my predecessor?

  • 1. TempDB’s database properties screen
    2. It shows TempDB’s file configuration (mdf/ndfs/ldf)
    3. As mentioned previously, it’s hard to set the initial size and increments without knowing drive space values. However, they would likely be increased and standardized to match one another. I would also look into the possibility of pushing the log file to its own drive if possible, separate from the data files. The naming convention would also be standardized (I don’t like temp2 as a filename). Possibly push the overflow to its own filegroup (not sure about that one)?
    4. Specifics on drive space allocations, file naming conventions, drive layout configurations in regard to the instance standards.
    5. A reboot would be required to initialize the new tempdb sizes
    6. It would help keep file structures standard; and the new sizes would help improve performance on the instance (assuming the files were adjusted properly in respect to the drive sizes).

    • Thomas Franz
      May 18, 2016 2:08 am

      3. “Push overflow to own filegroup” would not make any sense, since your apps / queries would neither use partitioning nor create temporary tables in another filegroup -> it would never be used.

      5. As mentioned some posts above, you do not need to reboot – just restart the SQL Server service.

  • Andy (@SQLBek)
    May 16, 2016 9:36 am

    1. “Database Properties – TempDB” The title of the window tells us that, duh!
    2. That you’re overly reliant on GUIs.
    3. I’d close the wizard & write T-SQL to adjust TempDB properly.
    4. Why are you using a wizard?!?
    5. I’m not using PowerShell to adjust TempDB?
    6. Fixing all the things!!!

    😀

  • 1. TempDB properties window for server SQL2016A. Also, it is from SQL Server Management Studio (SSMS).
    2. Well, it means you are looking at the properties of tempdb.
    But it tells me that there are three data files for tempdb and a log file, although the temp_overflow file lives on a different drive.
    3. Yes. I would look at how many CPUs this server has available and how the drives are set up. I am looking to see that data, log, system, and tempdb files all have their own drives.
    I would see if it is possible to get those set up like that. Take the total size available to tempdb and divide it by the number of cores; that is the size of each of the files I would need to create. Then turn off autogrow.
    4. Is there a maintenance window in which we can reboot the server in order to make these changes?
    What kind of traffic does this server get? If it doesn’t have too high of a traffic load, then having all these extra tempdb files wouldn’t be that great of an added benefit.
    5. It could create some slowness. If moved incorrectly, SQL Server might not start back up again since it would be missing files.
    6. Should see fewer locks when queries are trying to use tempdb. So, performance.
    7. Question for follow-up: why did you set it up this way in the first place?

    • Brent – a few followup questions:

      2. What does that mean to you?
      3. Why do you want the data, log, system, and tempdb files to be on their own drives?
      4. So if a server only has one query running, there’s no benefit to multiple TempDB files?
      6. Can you elaborate on that?
      7. I’ll turn that around and ask you: the last DBA is gone, and I’m just a lowly manager. I’m bringing you on board to fix this. 😀 Why do you think the DBA might have set it up this way?

      (That last one’s a little tricky: I always try to assume the last DBA was an amazing genius, and she had a really good reason for doing it, and it’s my job to figure out what it was.)

      • 2. Means I will be working with SQL Server in SSMS, and not having to work through PowerShell or Access.
        3. I think that having each on its own drive would allow for easier reads/writes to the data files and logs. It would also allow reads and writes to tempdb at the same time that wouldn’t be competing with the reads and writes to the data and logs. I like this model for monitoring sizes of the DBs. I have found in my current environment that, should a drive fail, it makes troubleshooting issues a little easier.
        4 & 6. Since this is on the internet, I can just post a link, which I couldn’t do in an actual interview: https://www.brentozar.com/blitz/tempdb-data-files/
        “For most of the world, one data file is okay, but as your system starts to grow, it may run into an issue called page contention on GAM, SGAM, or PFS pages. SQL Server has to write to special system pages to allocate new objects. ‘Latches’ — think of them as lightweight locks — protect these pages in memory.

        On a busy SQL Server you can end up waiting a long time to get your ‘latch’ on a system page in tempdb. This can make your queries run very slowly in some cases.”
        7. I too will assume the previous DBA was a genius and ask whether they documented why they did it that way. If not, I would argue that going forward it would be easier to support if it were in a traditional setup.

  • Boo to unlimited maxsize on tempdb files.

  • Besides the previous comments, I would also explain how the proportional fill algorithm creates hot spots and contention when data files have uneven free space. I would also check whether trace flag 1117 had been implemented, to make sure my files grow evenly after I size them correctly. To further limit SGAM contention, I would also consider trace flag 1118 to limit single-page allocations.
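
    For reference, a sketch of how those flags get turned on – DBCC TRACEON only lasts until the next restart, so they are usually added as startup parameters as well:

    DBCC TRACEON (1117, 1118, -1);  -- -1 = enable globally, not just for this session
    -- to survive restarts, add -T1117 and -T1118 to the startup parameters
    -- in SQL Server Configuration Manager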

    • Allan – are there any downsides to implementing those trace flags?

      • Trace flag 1118 would cause an increase in the size of the database, since single-page allocations cannot share extents, but how much would depend on how many small objects you are creating.

        • Joe O'Connor
          May 19, 2016 6:48 am

          If the server name is an indicator of the SQL version level, then those trace flags are not necessary to set on the TempDB. SQL 2016 will autogrow all data files within TempDB at the same time, and TempDB will always have uniform extents.

          If it were a prior version, then the downside is that those settings apply to all databases within the instance, not just TempDB

  • Andre Ranieri
    May 16, 2016 4:50 pm

    Specifically, it’s the Object Explorer properties for the TempDB database; the Files tab contains physical file attributes for the database.

    It’s hard to make concrete determinations without knowing more about the server, especially how the I/O subsystem is laid out, what the predominant wait type is and how big TempDB allocations tend to be. I support one OLTP server that pushes a 60 GB TempDB allocation twice a month during payroll reporting. File this under Brent’s “Performance tuning when I can’t fix the queries” heading.

    In very general terms, the initial database file size seems too small. It’s also concerning that the initial file sizes aren’t all the same and that one of the data files is on a separate volume, but the log file is on the same volume as the other data files. I’d also want to look at sys.master_files and see how big the individual .mdf and .ndf data files are. If they’re different sizes, the TempDB’s round robin allocation won’t work properly because it will skew towards the larger file. In that case we have a risk of SGAM contention or PFS becoming a hotspot. If the file sizes are different I’d consider enabling TF 1117 to force all files in a filegroup to grow at the same rate, but probably not if any of the user databases are spread across multiple file groups. Based on a sidebar discussion with Sean and Jen McCown at last year’s PASS Summit, I tend to set auto grow settings in model, TempDB and other databases to a minimum 1 GB increments to keep file growth locking and VLF count to a minimum for most situations, unless we’re talking about a VLDB.
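
    A quick sketch of that sys.master_files check – size and growth are stored in 8KB pages unless growth is a percentage:

    SELECT name, physical_name,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CONVERT(varchar(12), growth) + ' percent'
                ELSE CONVERT(varchar(12), growth * 8 / 1024) + ' MB'
           END AS growth_setting
    FROM sys.master_files
    WHERE database_id = 2;  -- tempdb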

    • Andre – is there anything you’d recommend checking at the same time when setting a 1GB autogrowth size?

      • Andre Ranieri
        May 16, 2016 5:24 pm

        I’d look at the available space on any volumes containing TempDB files before making changes. I’d also want to circle back and look at the VLF count on databases to see if excessive auto growth events in the past might have pushed VLF counts above 700, the number stuck in my head as a threshold for taking action.
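
        A sketch of that VLF count check – DBCC LOGINFO returns one row per VLF, and the DMV variant needs SQL Server 2016 SP2 or later:

        DBCC LOGINFO;  -- run in the database you're checking; row count = VLF count
        -- on 2016 SP2 and later you can instead use:
        SELECT COUNT(*) AS vlf_count
        FROM sys.dm_db_log_info(DB_ID());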

        On another note, TempDB is split into three data files, which seems like an unusual number. I’ve read that the best practice is generally one TempDB data file per CPU core for the first 8 cores, then adding TempDB files in groups of 4 if there’s still a contention issue. Three data files seems like an odd number (sorry for the pun), because a SQL instance with three CPU cores would be unusual – it’s less than the 4-core minimum licensing requirement for SQL 2012 and above.

  • Like most of my colleagues, I like the answers. I would just add that this configuration seems better than the default (initial temp/temp#/log of 8 megabytes with autogrows of 64 MB). Still, depending on how big the system is, this might be more than enough, and the overflow was probably something in progress or just a test to see how it behaves (that’s something I would have tried).

    In addition to what others have already asked in this post, I would ask about complaints on performance, and ask to see other configurations to see if this tempdb configuration could be an actual issue.

  • Not going to replicate what others wrote, so here’s something new -> why the “Full-Text Enabled” ???
    It’s not supported anymore…
    And I can’t really imagine when the h… I would have to use a full-text index in tempdb anyhow. Just saying… 😉

  • The configuration of TempDB is very important to SQL Server.
    The best practice is to have multiple data files, all of the same size, with NO GROWTH. In your case you have three data files. This should be in multiples of at least two, depending somewhat on the number of cores. I have seen the idea of 8 for 8 cores, then adding in 4s.
    It’s great if you can have the data files on a different LUN than the log, and even better if the different data files can be on different LUNs.

    I haven’t really looked at all the other comments in here as they probably reflect what I am saying.

  • Graeme Martin
    May 17, 2016 11:50 am

    Of course, some of us think this might be a clandestine interview… 🙂
    1. What’s this screen from?
    a. This is the Properties screen for tempdb. It is accessed through Object Explorer in SSMS by right-clicking the (tempdb) database and selecting Properties.
    2. What does the screen mean?
    a. It is showing the layout of files that make up tempdb.
    b. There are three data files and one log file.
    c. Two data files + the log file are located on an “E” drive. One data file is located on an apparently different drive (M), although M could be a mapped alias.
    d. All but one data file are sized to 0.5 GB (on restart), with a growth setting of 256 MB. This means if the file needs to grow, it will grow in 256MB increments.
    3. If it was a server you inherited from someone else, would there be any actions you’d take?
    a. I need to know what the server is used for. I would also want to know if they are experiencing any delays that could be related to tempdb size or file location. For example, if the tempdb files exist on the same spindles as other data or log files (some exceptions with modern SANs), we may want to move them to faster resources.
    b. The data files should be sized equally to allow for better balance when accessing. Depending on the version of SQL Server, I may wish to enable trace flags 1117 and 1118 to provide for more equal use of the files.
    c. I would check the number of data files against the number of cores. For best results, the number of data files should at least equal the number of cores. (NUMA nodes are counted by cores per node.)
    d. We also need to look at the nature of the disks on which tempdb resides. Are these the optimum disks available?
    e. If we can dedicate disk resources to tempdb, we can split the space available among the tempdb data files and restrict file growth to that size as a max.
    f. Although the logical names aren’t critical, I would want to rename the files so they follow a pattern.
    4. What questions might you want to ask before you take those actions?
    a. (As above) – what is this server used for?
    b. Are you experiencing any delays?
    c. What are your plans for the server’s future? How many databases do you plan to add?
    d. How many users simultaneously access the server either directly or through web applications?
    e. Are there any other applications (services, e.g.) using the server?
    f. If I move these tempdb files to drive X, what is drive X used for? Will we have space contention?
    5. Would there be any drawbacks to your actions?
    a. If the server is only used for sales demos, 256MB is probably adequate. Increasing the size or number of tempdb files may only cost more, instead of bringing better performance.
    b. Changes to the size, number and location of tempdb or the startup trace flags would require a restart. This could be disruptive.
    6. What would be the benefits of your actions?
    a. Better performance through reduction of file growth events.
    b. Better performance of tempdb through balance of files vs cores.
    c. Better performance of tempdb by placing files on faster disks.
    d. Better management of files by establishing a standard pattern of naming, location and sizing.

  • What’s this screen from? – As it says on top: the database properties of tempdb. It’s a screenshot from Management Studio.

    What does the screen mean? – Not sure if property overviews have meaning beyond what they show, other than things like ‘the instance is running’.

    If it was a server you inherited from someone else, would there be any actions you’d take?
    Not without context. Looking at the unusual setup, I want to confirm that this is a dev machine. (It has DEV in one of the logical names.)

    What questions might you want to ask before you take those actions?
    I want to confirm that this is a dev machine. I want to know the size of the disk (as there’s no limit to autogrowth). I want to know if they are aware of the 256MB growth and if I may correct this. I want to know how the performance behaviour is perceived, and how the workload is characterised.

    Would there be any drawbacks to your actions?
    I would only change the 256MB growth, and change filenames and locations only if performance figures dictate such actions.
    What would be the benefits of your actions?

    • forgot the “What would be the benefits of your actions?”

      My actions (change the 256MB growth, check for potential disk fill-up) are meant to catch potential space issues.

      Apart from that, the context can dictate the following actions:

      Following conventions (e.g. naming conventions) if needed (e.g. scripts rely on them).

      Proper file distribution, if performance problems exist and the analysis points to the physical IO of tempdb.

    • Willem – tell me more about it being a dev machine. What would you expect this screen to look like otherwise?

      • “What would you expect this screen to look like otherwise?”
        Dev machines are often smaller and under little load, so there’s more room for (setup) errors. Just be careful with the tempdb, as developers may kick off some process that fills it up during their trial-and-error programming. But dev machines aren’t gold-SLA, so it’s not such a big fish, and I wouldn’t tamper with file locations and names just for the sake of it.

        If this is an important production machine, then there is need for a closer look. Sloppy work is not confined to the tempdb file setup, so run sp_Blitz and sp_BlitzIndex for a quick health check. There may be low-hanging fruit in all directions. So for me, it’s a “SLOPPY WORK” sign, and if analysis dictates it, I may change the naming convention and file location.
        But there’s probably much bigger fish swimming around.

        • Willem – sorry, I was trying to be vague without being too specific, so I’ll try to be more direct.

          You wrote that “it has DEV in one of the logical names” – so what might this screen look like on a production server? Would it still have dev in the name?

  • The Screenshot is from a blurry monitor. Clearly the user either needs a new prescription or the monitor is old. Perhaps this is a Powerbook of some sort, using some ancient VNC software? Hard to guess here.

    Definitely I’d say this is some old SQL instance, likely SQL 7, circa 1998. Maybe powering some music collection software, perhaps a VB6 rewrite to use a small database server. I’m sure 1GB of tempdb is overkill here, and I’d suggest shrinking this down to a proper 10MB, removing one file, and certainly lowering autogrow to 10MB as well.

    The log file is clearly too large, I mean face it, who buys albums, CDs, or even music anymore. I’m sure the user has moved on to Tidal, Spotify, or even Apple Music.

    I’m sure this isn’t Brent’s instance, so clearly some Photoshopping was done to project a modern instance, since SQL 7 didn’t allow instances.

  • Thomas Franz
    May 18, 2016 2:50 am

    Files:
    – the two “main” data files for the TempDB would only be OK if this is a dual-core workstation or VM
    – if the server has more than 2 CPUs, create more files with the same size as the other two (see below), up to 8 data files, to prevent latching problems when multiple temporary tables (e.g. from procedures or because of spilling to TempDB) are created
    – the overflow file on a separate drive is a nice idea, but you have to monitor and shrink it regularly (I guess that M: is a bigger but slower drive)
    – without shrinking the overflow (after needing it), SQL Server would keep using it, even if the regular (fast) files are empty -> lower performance
    – you should use the EMPTYFILE parameter to DBCC SHRINKFILE when doing it, so that it moves temporary objects out of the file (particularly if you have no maintenance window and your apps are still creating temporary objects)
    – consider creating an overflow log file too (if E: is full and your log file has to grow, it will result in a high-severity error / session kill / rollback – something you don’t want to see in real life)

    Sizes:
    – depends on what the server is really doing
    – ideally E: would be a local SSD used only for the TempDB
    – in this case, divide the disk space on E: by the number of data files + 1 (for the log file)
    – create up to 8 evenly sized files with NO autogrowth (important – otherwise it would grow your overflow file on M: too when you have trace flag 1117 activated, as most of us have – on the other hand, you could disable TF 1117 when you set up your TempDB this way)
    – use the remaining space for the log file on drive E: (depending on your workload it could be necessary to create a bigger log file)
    – the initial size and autogrowth value for the overflow file will be OK, but ensure that the Windows user running the SQL Server instance has the privilege for instant file initialization (a quick check is sketched below)
    – if E: is not an SSD (but a local RAID 10 or, worse, a single HDD), you should consider moving the log file to another disk
    – if E: is in the SAN, then trust the SAN to distribute the work over the physical drives (and hope E: is not on a RAID 5 LUN)

    PS:
    – if the SQL Server only runs a single, very small app and it does not really need more than the initial 0.5 GB TempDB size, then go away and look for another job – otherwise you risk being underemployed
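
    A sketch of that instant file initialization check from T-SQL. The column only exists on newer builds (around SQL Server 2016 SP1 and the matching 2012/2014 service packs); on older builds, check the “Perform volume maintenance tasks” right in secpol.msc instead:

    SELECT servicename, service_account,
           instant_file_initialization_enabled  -- 'Y' means IFI is on
    FROM sys.dm_server_services;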

  • Tricia Crighton
    May 18, 2016 3:45 am

    I’m sure I read that SQL 2016 removed the need for T1117, since the equi-allocation now works?

    Tricia.

    • Bingo. I came here to say this. The server name indicates it’s a 2016 server.
      Also in 2016, the default TempDB settings have been improved to create 1 TempDB file for each CPU core, so this server may have had 2 or 3 cores to begin with, but likely has been modified from the default install wizard specs.

    • Thomas Franz
      May 19, 2016 4:49 am

      As far as I know, TF 1117 is a database option now, but I’m not sure if I can set it on TempDB or if I have to change it in the model database (I’d guess the TempDB option, because TempDB was the main reason for this flag).
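
      For what it’s worth, in SQL Server 2016 these became per-database / per-filegroup options, and TempDB gets both behaviors by default, so you’d mostly set them on user databases. A sketch, with YourDb as a placeholder name:

      -- TF 1117 replacement: grow all files in a filegroup together
      ALTER DATABASE YourDb MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;  -- YourDb is hypothetical
      -- TF 1118 replacement: allocate uniform extents only
      ALTER DATABASE YourDb SET MIXED_PAGE_ALLOCATION OFF;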

  • I have to say… this concept of Interview Screenshot Questions is really Cool !!!

    Hope Brent can make this a staple, it’s fun and such a good way to learn.

    Many Thanks Brent

  • Great answers, everybody! I’ve added my thoughts/answers at the bottom of the post. The next one’s scheduled for Monday. Enjoy, and have a good weekend!

  • ” The DBA heard that they were supposed to create a file for each core, but they misunderstood the difference between cores and processors. The server had 2 processors, each with 4 cores”

    I guess I’m one of those DBAs who misunderstands that – I thought a core can have CPUs, not the other way round.
    BTW, this format of screenshot talk-throughs is very interesting.

