How to minimize/plan ahead the effect of running sp_delete_backuphistory? Measure the gains too!
While running the following query:
-- DATEADD (datepart, number, date)
DECLARE @dt DATETIME
SELECT @dt = DATEADD(MONTH, -6, GETDATE())
SELECT @dt
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @dt
I get lots of locks and blocks, possibly because this command has not been run for a while, if ever.
Is there a way to find out how much there is to delete in each of the involved tables before I actually run, or schedule, this command?
I use DATEADD to calculate the 6-month cutoff.
According to the documentation, sp_delete_backuphistory must be run from the msdb database, and it trims the following tables:
backupfile
backupfilegroup
backupmediafamily
backupmediaset
backupset
restorefile
restorefilegroup
restorehistory
Tags: sql-server, backup, delete, scripting, monitoring
asked Mar 28 at 11:49 by marcello miorelli
The msdb database doesn't have indexes on the system tables, so if you had many backup/restore operations over time, purging records might take a long time. I don't know how to see the records to delete beforehand, but you can try creating indexes as suggested in the following post and doing your purge in batches (very old dates first, then closer to the last 6 months): weblogs.sqlteam.com/geoffh/2008/01/21/msdb-performance-tuning
– EzLo, Mar 28 at 12:12
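For what it's worth, a minimal sketch of that batching idea (my own sketch, not from the linked post; the one-month step is an arbitrary choice): walk the cutoff forward from the oldest backup on record, so each call to the proc only deletes a small slice.
-- Sketch only: purge in monthly slices instead of one large delete.
DECLARE @target datetime = DATEADD(MONTH, -6, GETDATE());
DECLARE @cutoff datetime;
SELECT @cutoff = MIN(backup_finish_date) FROM msdb.dbo.backupset;
WHILE @cutoff IS NOT NULL AND @cutoff < @target
BEGIN
    -- Advance the cutoff one month, capped at the 6-month target.
    SET @cutoff = DATEADD(MONTH, 1, @cutoff);
    IF @cutoff > @target SET @cutoff = @target;
    EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
END;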
@EzLo thank you for the link MSDB Performance Tuning.
– marcello miorelli, Mar 28 at 12:28
@marcellomiorelli, just a thought: get the estimated execution plan; for each insert into the table variables it shows you the estimated number of rows. Not guaranteed, but a good guess.
– Biju jose, Mar 28 at 13:34
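Along the same lines, a minimal sketch (mine, not from the comments) that counts the rows the proc would touch for the same 6-month cutoff. Run it in msdb; the media-set figures are only an upper bound, because the proc skips media rows that newer backup sets still reference.
-- Estimate how many rows each affected table would lose for the cutoff date.
DECLARE @oldest_date datetime = DATEADD(MONTH, -6, GETDATE());

SELECT 'backupset' AS table_name, COUNT(*) AS rows_to_delete
FROM msdb.dbo.backupset
WHERE backup_finish_date < @oldest_date
UNION ALL
SELECT 'backupfile', COUNT(*)
FROM msdb.dbo.backupfile bf
JOIN msdb.dbo.backupset bs ON bs.backup_set_id = bf.backup_set_id
WHERE bs.backup_finish_date < @oldest_date
UNION ALL
SELECT 'backupfilegroup', COUNT(*)
FROM msdb.dbo.backupfilegroup bfg
JOIN msdb.dbo.backupset bs ON bs.backup_set_id = bfg.backup_set_id
WHERE bs.backup_finish_date < @oldest_date
UNION ALL
SELECT 'restorehistory', COUNT(*)
FROM msdb.dbo.restorehistory rh
JOIN msdb.dbo.backupset bs ON bs.backup_set_id = rh.backup_set_id
WHERE bs.backup_finish_date < @oldest_date
UNION ALL
SELECT 'restorefile', COUNT(*)
FROM msdb.dbo.restorefile rf
JOIN msdb.dbo.restorehistory rh ON rh.restore_history_id = rf.restore_history_id
JOIN msdb.dbo.backupset bs ON bs.backup_set_id = rh.backup_set_id
WHERE bs.backup_finish_date < @oldest_date
UNION ALL
SELECT 'restorefilegroup', COUNT(*)
FROM msdb.dbo.restorefilegroup rfg
JOIN msdb.dbo.restorehistory rh ON rh.restore_history_id = rfg.restore_history_id
JOIN msdb.dbo.backupset bs ON bs.backup_set_id = rh.backup_set_id
WHERE bs.backup_finish_date < @oldest_date
UNION ALL
SELECT 'backupmediaset (upper bound)', COUNT(DISTINCT media_set_id)
FROM msdb.dbo.backupset
WHERE backup_finish_date < @oldest_date
UNION ALL
SELECT 'backupmediafamily (upper bound)', COUNT(*)
FROM msdb.dbo.backupmediafamily
WHERE media_set_id IN (SELECT media_set_id FROM msdb.dbo.backupset
                       WHERE backup_finish_date < @oldest_date);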
2 Answers
My gripes with this proc go back a long way:
- The Annals of Hilariously Bad Code, Part 1: Critique the Code
- The Annals of Hilariously Bad Code, Part 2
The problem you run into when deleting large amounts of data is the crappy estimate you get from the table variables.
I've had pretty good luck creating a new version of the proc using temp tables. You could also try just adding recompile hints, but hey, this way we get useful indexes.
As a side note: if you still run into blocking because the purge runs long, you can try either removing the transaction code or changing it so that each individual delete runs in its own transaction (though at that point the benefits are negligible).
CREATE PROCEDURE [dbo].[sp_delete_backuphistory_pro]
@oldest_date datetime
AS
BEGIN
SET NOCOUNT ON
CREATE TABLE #backup_set_id (backup_set_id INT PRIMARY KEY CLUSTERED)
CREATE TABLE #media_set_id (media_set_id INT PRIMARY KEY CLUSTERED)
CREATE TABLE #restore_history_id (restore_history_id INT PRIMARY KEY CLUSTERED)
INSERT INTO #backup_set_id WITH (TABLOCKX) (backup_set_id)
SELECT DISTINCT backup_set_id
FROM msdb.dbo.backupset
WHERE backup_finish_date < @oldest_date
INSERT INTO #media_set_id WITH (TABLOCKX) (media_set_id)
SELECT DISTINCT media_set_id
FROM msdb.dbo.backupset
WHERE backup_finish_date < @oldest_date
INSERT INTO #restore_history_id WITH (TABLOCKX) (restore_history_id)
SELECT DISTINCT restore_history_id
FROM msdb.dbo.restorehistory
WHERE backup_set_id IN (SELECT backup_set_id
FROM #backup_set_id)
BEGIN TRANSACTION
DELETE FROM msdb.dbo.backupfile
WHERE backup_set_id IN (SELECT backup_set_id
FROM #backup_set_id)
IF (@@error > 0)
GOTO Quit
DELETE FROM msdb.dbo.backupfilegroup
WHERE backup_set_id IN (SELECT backup_set_id
FROM #backup_set_id)
IF (@@error > 0)
GOTO Quit
DELETE FROM msdb.dbo.restorefile
WHERE restore_history_id IN (SELECT restore_history_id
FROM #restore_history_id)
IF (@@error > 0)
GOTO Quit
DELETE FROM msdb.dbo.restorefilegroup
WHERE restore_history_id IN (SELECT restore_history_id
FROM #restore_history_id)
IF (@@error > 0)
GOTO Quit
DELETE FROM msdb.dbo.restorehistory
WHERE restore_history_id IN (SELECT restore_history_id
FROM #restore_history_id)
IF (@@error > 0)
GOTO Quit
DELETE FROM msdb.dbo.backupset
WHERE backup_set_id IN (SELECT backup_set_id
FROM #backup_set_id)
IF (@@error > 0)
GOTO Quit
DELETE msdb.dbo.backupmediafamily
FROM msdb.dbo.backupmediafamily bmf
WHERE bmf.media_set_id IN (SELECT media_set_id
FROM #media_set_id)
AND ((SELECT COUNT(*)
FROM msdb.dbo.backupset
WHERE media_set_id = bmf.media_set_id) = 0)
IF (@@error > 0)
GOTO Quit
DELETE msdb.dbo.backupmediaset
FROM msdb.dbo.backupmediaset bms
WHERE bms.media_set_id IN (SELECT media_set_id
FROM #media_set_id)
AND ((SELECT COUNT(*)
FROM msdb.dbo.backupset
WHERE media_set_id = bms.media_set_id) = 0)
IF (@@error > 0)
GOTO Quit
COMMIT TRANSACTION
RETURN
Quit:
ROLLBACK TRANSACTION
END
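To make the side note about the transaction concrete, here is a sketch (my addition, not part of the proc above) of one delete wrapped in its own short transaction; the remaining deletes would follow the same shape, so locks on a table are released as soon as its delete commits.
-- Sketch only: same filter as the proc, but committed independently of the other deletes.
BEGIN TRANSACTION;

DELETE FROM msdb.dbo.backupfile
WHERE backup_set_id IN (SELECT backup_set_id FROM #backup_set_id);

IF (@@error > 0)
    ROLLBACK TRANSACTION;
ELSE
    COMMIT TRANSACTION;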
answered Mar 28 at 12:47, edited Mar 28 at 12:57, by Erik Darling
Here is something you could try.
- Restore a backup of your MSDB database to a test server and call it something like MSDB_TEST.
- Once restored, go into the sp_delete_backuphistory stored procedure in the MSDB_TEST database, search/replace msdb. with msdb_test., and alter it.
- Capture the current row count of the tables you are interested in.
- Now, run the altered version of the sp_delete_backuphistory stored procedure in the MSDB_TEST database.
- Compare the current row counts to the previously captured ones.
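A quick way to capture those before/after row counts in the restored copy (a sketch, not from the answer itself; it assumes the copy really is named MSDB_TEST and reads counts from sys.partitions rather than scanning the tables):
-- Row counts for the tables sp_delete_backuphistory trims, taken from metadata.
SELECT t.name AS table_name, SUM(p.rows) AS row_count
FROM MSDB_TEST.sys.tables AS t
JOIN MSDB_TEST.sys.partitions AS p
  ON p.object_id = t.object_id AND p.index_id IN (0, 1)
WHERE t.name IN ('backupfile', 'backupfilegroup', 'backupmediafamily', 'backupmediaset',
                 'backupset', 'restorefile', 'restorefilegroup', 'restorehistory')
GROUP BY t.name
ORDER BY t.name;
Run it once before and once after executing the altered proc; the per-table difference is the measurement the question asks for.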
answered Mar 28 at 12:11 by Scott Hodgin