In a recent Office Hours post, David Reabow suggested letting ChatGPT answer a few of the questions y’all posted at http://pollgab.com/room/brento/. Brilliant! Let’s see how it does. I purposely cherry-picked the questions that I thought ChatGPT would have the easiest time answering.
Rollback is single threaded asks: Hi Brent! An update statement has blocked all other sessions. What should I do? Killing it, restarting SQL server service, waiting, etc. What’s your opinion? Thanks!
If an update statement has blocked all other sessions, there are a few options to consider:
- Monitor the situation: If the update is taking a long time, monitor it to see if it’s making progress. If the update is hung, it may eventually clear up on its own.
- Kill the blocking query: If the update is not making progress, you can use the KILL command to stop the blocking query. This will allow other sessions to proceed, but any data changes made by the blocked update will be rolled back.
- Restart the SQL Server service: Restarting the SQL Server service can clear up any blocked sessions, but this should be done as a last resort because it will cause an interruption to all other sessions.
- Wait: In some cases, waiting is the best option. If the update is expected to run for a long time, and there are no pressing issues with other sessions being blocked, waiting may be the best course of action.
In any case, it is advisable to take a backup of the database before taking any action to avoid data loss.
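For context, the “kill the blocking query” option boils down to a couple of T-SQL statements. This is only a sketch; the session id 52 is a placeholder you’d look up first:

```sql
-- Find the head blocker: sessions that are blocking others.
-- blocking_session_id = 0 means the session isn't blocked by anyone.
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

-- Kill the blocker (52 is a placeholder session id from the query above).
-- The killed update's changes are rolled back, and the rollback itself
-- can take a long time and keep holding locks.
KILL 52;

-- Check rollback progress for the killed session.
KILL 52 WITH STATUSONLY;
```

Note that the rollback keeps running (and keeps blocking) until it finishes, which is why killing a long-running update isn’t the instant fix it sounds like.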
Brent says: bullet points 1 and 4 are basically the same. I abhor bullet point 3 because it doesn’t add anything that bullet point 2 wouldn’t have done. It was on the right track – bullet points 1 and 2 are good starting points.
Peter Riis: Hello Brent. We’ve got a database storing mainly images as BLOBs. Storing BLOBs in the database has several disadvantages, so we thought about implementing FILESTREAM. What’s your opinion on storing files in a database?
Storing large files like images in a database can have the following disadvantages:
- Increased database size: Storing large files in a database can significantly increase its size, making it slower to manage and query.
- Slower performance: Querying and retrieving large files from a database can be slower than accessing them from a file system.
- Limited scalability: Storing large files in a database can limit scalability as the database grows in size and complexity.
On the other hand, FILESTREAM is an option that allows you to store large binary data (BLOBs) in a database, while maintaining the efficient streaming access to that data. FILESTREAM integrates the SQL Server database engine with the NTFS file system to store and manage large BLOB data.
In conclusion, whether to store large files in a database or not depends on the specific requirements and use cases of the application. If efficient streaming access to the large files is required, FILESTREAM can be a good option. However, if the focus is on performance, scalability, and managing large files efficiently, storing the files outside the database in the file system might be a better solution.
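For readers who haven’t seen it, here is a rough sketch of what setting up FILESTREAM looks like; the database, filegroup, path, and table names are made up for illustration, and this assumes FILESTREAM has already been enabled at the instance level in SQL Server Configuration Manager:

```sql
-- Enable FILESTREAM access for T-SQL and Win32 streaming.
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;

-- A FILESTREAM filegroup maps to an NTFS directory (path is illustrative).
ALTER DATABASE ImageDb
ADD FILEGROUP ImagesFG CONTAINS FILESTREAM;

ALTER DATABASE ImageDb
ADD FILE (NAME = 'ImagesFS', FILENAME = 'D:\FileStreamData\Images')
TO FILEGROUP ImagesFG;

-- FILESTREAM columns are varbinary(max), and the table needs a
-- uniqueidentifier ROWGUIDCOL column with a unique constraint.
CREATE TABLE dbo.Images
(
    ImageId   uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    ImageData varbinary(max) FILESTREAM NULL
);
```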
Brent says: nailed it.
Stone Tablet Monk: When creating a new index, what is your opinion of using OPTIMIZE_FOR_SEQUENTIAL_KEY for minimizing last page insert contention? Is this a good solution? Is this the only solution?
The OPTIMIZE_FOR_SEQUENTIAL_KEY option can be used when creating a new index to minimize last page insert contention, by prioritizing index page allocation for new data rows at the end of the index. This can help reduce index fragmentation and improve insert performance.
However, this is not the only solution to minimize last page insert contention. Other solutions include:
- Increasing the fill factor: The fill factor determines the percentage of space that is reserved on each index page for future growth, and increasing it can reduce the frequency of index page splits.
- Using a clustered index: A clustered index physically orders the rows of a table, so new rows are added to the end of the table.
- Monitoring and defragmenting indexes: Regular monitoring and defragmentation of indexes can help maintain their efficiency and reduce the frequency of page splits.
In conclusion, using OPTIMIZE_FOR_SEQUENTIAL_KEY is a valid solution for minimizing last page insert contention, but it should be evaluated along with other options to determine the best approach for a specific scenario.
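For reference, the option in question is a CREATE INDEX setting introduced in SQL Server 2019; the table and index names below are hypothetical:

```sql
-- Syntax sketch only: enable the setting on an ever-increasing key.
CREATE INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON);
```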
Brent says: ChatGPT is simply flat out wrong here. Fill factor is a setting that isn’t honored during inserts. Clustered indexes are irrelevant here, too. This answer illustrates one of the challenges with AI – it can’t synthesize information on brand new terms and topics where there isn’t already conventional wisdom for it to analyze. Now, for the answer: what I’d ask is, how do you know that you have last page insert contention? This typically shows up at over 1,000 inserts per second, sustained – and most people I see asking this question, don’t even have the problem.
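To Brent’s point about checking first: one rough way to look for last page insert contention is to watch for PAGELATCH waits on currently running requests; many sustained PAGELATCH_EX waits on the same page of an ever-increasing index are the telltale sign. A hedged sketch:

```sql
-- Sessions currently waiting on page latches. With last page insert
-- contention, many concurrent inserts wait on the same wait_resource
-- (formatted as db_id:file_id:page_id) in the hot index.
SELECT session_id, wait_type, wait_time, wait_resource, command
FROM sys.dm_exec_requests
WHERE wait_type LIKE N'PAGELATCH_%';
```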
Piotr: When are SSMS client statistics helpful in troubleshooting query performance issues?
SQL Server Management Studio (SSMS) client statistics can be helpful in troubleshooting query performance issues in the following situations:
- Long-running queries: If a query is taking a long time to complete, client statistics can provide information on the execution time, wait time, and other details about the query.
- Slow network performance: If a query is running slow due to network latency, client statistics can show the time spent sending and receiving data between the client and the server.
- High CPU or memory usage: Client statistics can help identify queries that are consuming high amounts of CPU or memory on the server, allowing you to focus your performance optimization efforts on the most resource-intensive queries.
- Blocked queries: If a query is blocked by another query, client statistics can provide information about the blocking and blocked sessions, which can help in identifying the root cause of the problem.
To enable client statistics in SSMS, right-click the query window and select “Include Client Statistics” in the context menu. After the query has completed, the client statistics will be displayed in the results pane.
Brent says: again, ChatGPT is just bullshitting here, and most of this is just flat out wrong. Client stats don’t show wait time or blocking queries. As to the real answer – for me, client statistics haven’t been useful.
Here’s the problem with ChatGPT.
It speaks with the same level of authority no matter how much or little confidence it has in its answers. It never says “I think” or “I suspect” or “Perhaps” or “I don’t know.” It just confidently struts right onstage and speaks loudly.
I stopped after four questions because I think these four really sum up the problem with ChatGPT today.
If you have tech questions, ChatGPT could theoretically be useful if you already know the right answers, and you can weed out the garbage. But … if you already know the right answers, what’s the point?
I do think AI has tons of good uses, like writing real estate listing text, but answering database questions isn’t one of ’em.
Wow. Great tests, Brent!
Yesterday I tested ChatGPT for answers about T-log usage, query tuning (I passed a simple trivial query and it returned the correct index for it), etc.
For simple answers, it didn’t give bad responses. For complex answers, it tends to answer incorrectly.
Overall, I think it has the potential to be a helpful tool in finding solutions for common tasks or providing guidance for problems. If it could be trained with context data (just SQL Server), it might answer better.
Note: This English comment was corrected by ChatGPT.
Wow, I can see stackoverflow / dbastackexchange having their hands full in identifying / filtering out ChatGPT generated answers.
This is my worry too, plus the ability to create a load of junk blog sites that seem plausible and pollute Google search even more than it already is. It might even drive us back to the manuals and my O’Reilly books. Trouble is, stackoverflow is full of real-life problems, and the manuals tend to be too abstract.
To be fair, there is a lot of bad content generated by people who just don’t know what they are doing; ChatGPT allows it to be done at much higher speed.
The other day I was reading an article that concluded that the belief in and spreading of Conspiracy Theories comes from a misplaced confidence in one’s own knowledge more than from ignorance. Given Brent’s criticism of ChatGPT’s seeming certainty in its answers, could ChatGPT be spreading Conspiracy Theories on how to fix SQL Server problems?
I’ve seen ChatGPT described as MaaS – Man-splaining as a Service
I don’t understand the excitement over this tool for tech applications.
Beyond the extra time spent carefully reviewing what it produces, even if you could trust its output 100%, it is still susceptible to attacks that seed bad information at scale, and it may go from offering a good solution one day to a destructive one the next.
On the code generation side, I have seen it produce code that doesn’t even compile, references obscure or deprecated libraries or functions, or uses legacy syntax. Provided you actually know the language, it can certainly be more time-consuming to diagnose and fix its code than to write it correctly in the first place, assuming the logic is even correct.
100% agreed. It is social media on steroids. There is no way to know if the answer is based on sound analysis.
Does ChatGPT ever answer with “it depends”?
Question 1, answer 1 also seems wrong to me; to make it correct you need to add NOT:
“Monitor the situation: If the update is taking a long time, monitor it to see if it’s making progress. If the update is NOT hung, it may eventually clear up on its own.”
If the update is hung, it will never clear up on its own.
ChatGPT seems to be on par with asking your know-it-all co-worker for a second opinion. Now we just need a GUI GPT that can click the SSMS buttons and complain about working late.
Google beats ChatGPT and is handy… so…?
Skynet 0, Brent 1
There is no substitute for the human learning experience: communication, knowledge, experience, and comprehension. Unfortunately, some (many?) “easy buttons” make things worse.
In the first question, for answer 2 it said nothing about how the rollback will continue to block, nor did it say the rollback would continue after the system came back up, which means the blocking would also continue.
For question 2, I’ll leave that alone. It played back what everyone else says. I have a heterodox bit of reasoning on that question, and the answer, of course, starts with “It Depends”. I’ll also say that if the word FILESTREAM hadn’t been in the question, it probably wouldn’t have included FILESTREAM as an option (based on how it answered some of my questions).
On question 3, I agree. Total bullshit. However, that “total bullshit” is a playback of what a whole lot of people say. For example, lowering the Fill Factor will do nothing because, except for one special case, “broom tip” inserts go in until the page is as close to 100% full as the row sizes will allow. That also means it won’t help at all with the massive contention caused by page splits during inserts followed by “ExpAnsive Updates” to the “hot spot”. The OPTIMIZE_FOR_SEQUENTIAL_KEY setting won’t help at all there, either.
I’ll quit there because that’s enough to convince me (along with some questions that I asked it) that it’s not the way to go. My hat is off to it, though. One of the “features” of “AI” is how human it sounds. It’s definitely sounding like a human in its “Confident Incorrectness”. In other words, I (so far) think it’s the most impressive automatic bullshit grinder I’ve ever seen. It should run for office. 😀
It really is an impressive automatic bullshit grinder!
I’ve found it incredibly useful in a couple cases where I know the information I’m looking for is incredibly well documented, but I can’t easily locate it with the normal “2 second” Google searches. For example, if you ask it “How do I deploy a microservice in Visual Studio?”, it’s amazingly helpful.
My problem with a lot of Azure documentation is that there is so much of it out there at the 20k foot level that it’s often a pain to get through that to what I’m actually looking for, so I think scenarios like this are a great use for ChatGPT.
Oh… there is one thing ChatGPT is pretty fantastic at: text manipulation. When the find/replace tool and alt-click-dragging rectangular blocks of text to copy/paste aren’t doing the trick, try telling ChatGPT what you want to do with it.