For this month’s T-SQL Tuesday, Kevin Chant asked us to dream up our fantasy SQL Server feature. I’m sure people are gonna ask for flying cars and gene splicing, and that’s awesome. I’m excited to hear the kinds of things they dream up.
Me? I’ve got a really simple ask.
(When you’re talking to Microsoft people, you have to use “ask” as a noun.)
Today, if you have problems with a single file in your multi-file database, you can do an online file restore like this:
RESTORE DATABASE StackOverflow
  FILE = 'StackOverflow2013_1'
  FROM DISK = N'StackOverflow_Full_20190910.bak'
  WITH NORECOVERY;
That one file is restored from backup, and you can update it by applying subsequent transaction logs, and then eventually bring it online. You can restore individual (or multiple) pages, too. But the thing is – I rarely need to restore a file, or specific pages.
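For reference, page-level restore looks similar. A minimal sketch, assuming page 57 in file 1 is the damaged one:

RESTORE DATABASE StackOverflow
  PAGE = '1:57'
  FROM DISK = N'StackOverflow_Full_20190910.bak'
  WITH NORECOVERY;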
So my ask is:
RESTORE DATABASE StackOverflow
  TABLE = 'Comments'
  FROM DISK = N'StackOverflow_Full_20190910.bak'
  WITH NORECOVERY;
I just need a single object, like a table someone dropped, or a stored procedure from a mucked-up deployment that somebody didn’t really save in source control the way they were supposed to. I know, there are gonna be foreign key issues, and I’m fine with the keys being marked as untrusted at that point.
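For what it’s worth, finding and revalidating those untrusted keys afterward is a known quantity. A quick sketch, with a hypothetical key name:

-- find the keys the restore left untrusted
SELECT name, OBJECT_NAME(parent_object_id) AS table_name
FROM sys.foreign_keys
WHERE is_not_trusted = 1;

-- revalidate one once the data is squared away (key name is hypothetical)
ALTER TABLE dbo.Comments WITH CHECK CHECK CONSTRAINT FK_Comments_PostId;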
Like it too? Vote for Gonzalo Bissio’s feature request.
31 Comments
While it doesn’t help with data, I typically have a nightly DACPAC capture that runs across all the databases on our server and captures the schema of each entire database. That way, when deployments go amok, you can just schema-compare production against the previous night’s DACPAC and redeploy the previous versions. Now, selective table restores? Yeah, that would be nice…
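A minimal sketch of that nightly capture using SqlPackage’s Extract action (server, database, and path are placeholders):

sqlpackage /Action:Extract /SourceServerName:PRODSQL01 /SourceDatabaseName:StackOverflow /TargetFile:"D:\DacpacArchive\StackOverflow_20190910.dacpac"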
Maybe this already exists, but I would like it if the engine told me when it was rewriting stupid code. For example, if I wrote a LEFT JOIN and the optimizer was executing it as if I’d written a (NOT) EXISTS instead.
Having led Oracle and SQL Server teams, in my experience this is less useful than you’d think. Oracle has the ability to restore tables, but the feature isn’t really baked into the database. It’s implemented internally as a hack, where the table restore does a shadow restore of the entire database and then moves the data over for the table.
Real-world experience with this half-baked feature in Oracle was disappointing. My fear is that we’d see a similar hack from Microsoft in SQL Server, since it’s very hard to do internally in the database.
I agree – 90% of the time no one wants to restore an entire database or even filegroups. Usually it’s a table or a subset of tables, and they want it without taking the database offline. This is why for most restores we restore to an alternate server, research the data, sometimes request additional restores for different points in time, and fix data by creating an ad-hoc data load from the alternate copy to the production server.
For the purposes of dropped tables, Oracle has an interesting feature called the recycle bin, used only for dropped tables. I’ve seen it used on occasion, but it covers a fairly narrow use case – dropped tables, not updates, deletes, or truncates (I think). It does trash your constraint names with system-generated names, but it seems to recover everything intact. This could be interesting to see in SQL Server.
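For the curious, pulling a dropped table back out of Oracle’s recycle bin looks roughly like this (table names are hypothetical):

FLASHBACK TABLE comments TO BEFORE DROP RENAME TO comments_restored;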
For deployments where you know you’ll be making destructive changes I’d rather rely on detailed and tested fallback plans. Depending on the situation this may entail:
1. back-out scripts
2. pre/post backups
3. a simple copy of the table: select * into objcopy.dbo.mytable_20190910 from sourcedb.dbo.mytable (I always prefer to store this in an alternate database with a date suffix to keep the main db clean)
4. Snapshots, either at the SAN or native database level, for fast recovery (see the sketch after this list). Backups and snapshots as recovery points, though, are only usable for a small window of time, since we would not want to recover from a snap outside of the deployment window.
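A sketch of item 4 at the native level, assuming a single-data-file database (logical file name and snapshot path are placeholders):

-- take a snapshot right before the deployment
CREATE DATABASE StackOverflow_PreDeploy
ON (NAME = StackOverflow2013_1, FILENAME = 'D:\Snaps\StackOverflow_PreDeploy.ss')
AS SNAPSHOT OF StackOverflow;

-- if the deployment goes sideways, revert inside the window
RESTORE DATABASE StackOverflow
FROM DATABASE_SNAPSHOT = 'StackOverflow_PreDeploy';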
I remember in the old days, using a delayed log shipping setup: if the issue was caught before the next log restore, we had the ability to recover the log-shipped database, then pull data over from it to production. It would be interesting to see something like that baked into ADG, allowing point-in-time recovery – like flashing back a secondary ADG copy to a point in time to allow fast data research and the ability to pull data over from the secondary. This would at least automate and eliminate the restore-to-alternate-server approach described above, and for a multi-TB DB, it would speed recovery.
+1 for the nightly DACPAC snap suggestion. We use nightly Red Gate snaps for DDL and keep 30 days of snaps per database. It also lets devs compare RG snaps for research and builds.
We leverage log shipping right now to satisfy these types of woes. (It ends up being multiple lines of SQL, since we don’t just drop the table and recreate/insert.) Would certainly be nice to just do a simple restore from a .bak in some situations, though!
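The pull itself is usually just a cross-database copy, assuming the standby was restored WITH STANDBY so it’s readable (names are hypothetical):

-- copy the damaged table's rows back out of the log-shipped standby
SELECT *
INTO dbo.Comments_Recovered
FROM StackOverflow_Standby.dbo.Comments;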
That would be handy indeed. They say, if a morning coffee doesn’t wake you up, try dropping a table in production.
Just put this trigger on the database, and never have to worry about tables being dropped again!! 😉
CREATE TRIGGER drop_safe
ON DATABASE
FOR DROP_TABLE
AS
PRINT 'Nice try, developers!!';
ROLLBACK;
Preach it, Brent, preach it.
All jokes aside, I completely agree, and this is one of the better features of many 3rd party products. Why wouldn’t a company like Microsoft look at the add-on products, see what their best-selling features are, and bake them into their own products? It’s almost as if those 3rd party product companies are doing market research for Microsoft for free.
I’d like to see a selective restore to another database or instance, as well as the ability to partition swap across DBs and filegroups.
I am 100% for an Object Level Restore (OLR) feature being added to SQL Server restore. At my current company we used LiteSpeed until it was decided not to renew licensing due to cost-cutting measures. When you have to restore a 10+ TB database to recover one table, it really does become a chore. LiteSpeed made it so simple. It would be great to see that capability added to SQL Server’s native restore.
I would like to see an alter column with a ‘rebuild_indexes’ option.
Currently, for instance, if I have to change a column from varchar(30) to varchar(50), I have to drop all the indexes the column is referenced in, make the change, then recreate the indexes.
This would save the time of figuring out which indexes reference the column and then writing drop/create commands for all of them.
I use a script to do this now, but it would be nice to have it done automatically.
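The discovery half of that script is the fiddly part. A sketch, assuming the column in question is dbo.Comments.Text (hypothetical):

-- list every index that references the column, whether as a key or included column
SELECT i.name AS index_name, ic.is_included_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.Comments')
  AND c.name = 'Text';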
set statistics io, time on – as a button on the toolbar.
Agreed. Would be nice to specify an alternate destination table name as well.
Oracle Export is a nice feature and, whilst far from a backup, is great for saving tables as point-in-time copies and also moving them around databases: either single tables, those in tablespaces/schemas, or entire databases.
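For comparison, a single-table Data Pump export is a one-liner (credentials, connect string, and directory object are placeholders):

expdp scott/tiger@orcl tables=COMMENTS directory=DATA_PUMP_DIR dumpfile=comments.dmp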
Just an amusing anecdote. SQL Server 6.5 actually had this feature and it was removed in subsequent releases.
The justification at the time was key/referential integrity issues, but if it was a case of just restoring data due to misadventure 🙂 , theoretically you could support restoring TableA to TableA_1 with column definitions and data only, and let the DBA/dev sort out the rest. It’s not like that isn’t essentially what you end up doing anyway with a restore of the full DB to a second copy and then digging the data out manually.
You could restore an object back in the 6.5 days, when a backup was a dump. If that ages me: I was a preteen back then ;). I don’t know why it was later removed.
Well, there was a utility by Idera that did exactly that!
It was “Virtual Database,” I believe, and if I recall, the price was around $500 per copy, but that was quite some time ago.
It worked great! You would just mount the backup file, and in the SSMS environment (can’t recall if simulated or actual) you could access the objects just like a normal database.
Get the object you need and done!
Now that I have been reminded of this, I think I will see if it’s still available for the latest versions of SQL Server.
Yeah, all the third party backup utilities (Quest LiteSpeed, Idera SQLsafe, Red Gate SQL Backup, etc.) do it.
Nice, didn’t know that – we actually have Red Gate’s SQL Backup as part of the Developer toolkit, but we don’t use it. Will be checking it out…
Your problem is much Draper than you realize – you are using SQL Server. Switch to PostgreSQL and forget all your troubles.
Oh, almost forgot – it’s free.
Sorry for the typo: deeper, not Draper
Only makes sense though, since you’re here doing advertising. 😉
As a SQL Server and Sybase DBA, I’ll point out that Sybase has offered this feature in the product for nearly 15 years. You mount a DB backup file as a read-only “archive database” (occupying minimal space) which exposes all objects to SELECT statements (you can even run DBCC to verify the backup), allowing SELECT * INTO missing_table FROM archivedb.dbo.lost_table. This was handy to have a number of times in my career, and it was *free*. Maybe we can shame MS into doing this.
I have a whole host of improvements for SSIS and SQL Server Agent that we’d love to see, since we have a dedicated Data Development Team.
Database-level SSIS packages that fail over with the AG:
Essentially, we have a ton of ETL packages that are database-specific. It would be super convenient if we could have an SSIS package that is part of the database and fails over with the AG. Right now there’s not a super clean way to keep SSIS packages in sync between the nodes of an AG.
More granular security for SQL Server Agent:
We would like our Data Developers to be able to publish and schedule their jobs. However, the current security model is set up such that they would need SA on the entire server to do that, and we don’t want that. (See the role sketch below.)
Multi-subnet aware connection strings for SSIS:
Currently, the only connection type that supports multi-subnets is ADO.NET, which our Data Developers don’t care for. It’d be nice for the other connection types to support that flag as well.
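On the Agent security ask above: msdb’s fixed Agent roles get partway there. A sketch granting the least-privileged one (the Windows group is hypothetical):

USE msdb;
-- CONTOSO\DataDevs is a hypothetical Windows group for the Data Developers
CREATE USER [CONTOSO\DataDevs] FOR LOGIN [CONTOSO\DataDevs];
ALTER ROLE SQLAgentUserRole ADD MEMBER [CONTOSO\DataDevs];
-- SQLAgentUserRole members can create and schedule jobs, but only
-- ones they own, which is often still too coarse in practice.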
Go ahead and create requests for those at the site in the ending paragraph of the post. No sense writing about it here, heh.
A lot of third party backup products have this feature, but I’ve never used it. There is an overhead in storage and backup time to run the backups that recognize individual objects. It’s usually easier and cheaper to just have a few hundred GB of storage lying around to restore a large database to if you need a table.
Alen – easier and cheaper? Yes. Faster? No. I know this may come as a surprise to you, but in some shops, when there’s an outage, the users actually care about how quickly you can recover.
I know, crazy, right? Users. Go figure.
I think part of the issue here is that Microsoft is not willing to sour the relationship with the partners that already offer this feature. RedGate, LiteSpeed, etc.
Not shooting the idea down mind you, just pointing out that it isn’t as simple as 1. Implement feature, 2. Profit.
Also, I worry about bull-in-a-china-shop DBAs who would restore, potentially overwrite production data that’s not in the backup, and unknowingly leave FKs untrusted and set to no-check. You know no one RTFD.
Also, wouldn’t the easy-peasy table restore process have to set NOCHECK on both the dependent and depended-on FKs?
Untrusted would be the result, but CHECK would have to be disabled, in other words.
Didn’t Microsoft already do this by building backup compression and encryption into the product itself?
Don’t these third party products already leave no-check on the FKs as a result?