
Backup drives are full


Hello people.

I'm currently experiencing issues with Retrospect: my backup drives keep filling up.

For example:

Total size of data to back up: 4.6 TB.

The "onsite media set" consists of two drives with 9 TB of storage in total; it holds up to 6 months of backups.

The "offsite backup" drives A and B are 8 TB each and are rotated every couple of days. Drive A filled up almost immediately, within a week, even though it should hold only one backup; drive B still has 1.2 TB available.

 

 

Grooming is not helping, so I attempted to verify the catalog on the media set.

 

The reason I'm here is that yesterday Retrospect spammed my email (1300+ messages, the last 400 arriving in less than a minute), and another 500+ arrived overnight. Yesterday the Retrospect engine kept restarting itself, but after I turned off the proactive backup script I was able to start verification of the media set.

 

After verification I got these errors:

 

!Generated MD5 digest for file "/Volumes/Boot RAID/Library/Server/Wiki/Database.xpg/Cluster.pg/pg_xlog/archive_status/00000001000000180000003A.done" does not match stored MD5 digest.

Additional error information for Disk Backup Set member "1-New  Home Backup",
Can't read to file /Volumes/New  Home Backup 1/Retrospect/New  Home Backup/1-New  Home Backup/AA007994.rdb, error -1101 (file/directory not found)
Additional error information for Disk Backup Set member "1-New  Home Backup",
Can't read to file /Volumes/New  Home Backup 1/Retrospect/New Home Backup/1-New  Home Backup/AA008000.rdb, error -1101 (file/directory not found)

 

!Generated MD5 digest for file "/Volumes/Boot RAID/Library/Server/Wiki/Database.xpg/Cluster.pg/pg_xlog/archive_status/000000010000001C00000017.done" does not match stored MD5 digest.


!Generated MD5 digest for file "/Volumes/Boot RAID/Library/Server/Wiki/PostgresSocket/.xpg.skt.lock" does not match stored MD5 digest.
Additional error information for Disk Backup Set member "2-New  Home Backup",
Can't read to file /Volumes/New  Home Backup 2/Retrospect/New  Home Backup/2-New  Home Backup/AA012574.rdb - AA014055.rdb , error -1101 (file/directory not found)

 

 

 

Any ideas on what might be happening and how to free up space?

Even if I start the backup from scratch and delete the catalog file, is there any chance to inherit the previous backups so I can continue working with them, or at least keep them for future restores?

 

 

PS: Sorry for my grammar, and thank you for your assistance.

 

Regards

 

 


You wrote that you have about 4.6 TB of files to back up. What kind of files are they? Many small documents? A few very large database files?

In the latter case, have you turned on block-level incremental backups for those files? If not, when as little as a single byte changes, the whole file is backed up again. 
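(For intuition only, and not Retrospect's actual code or on-disk format: the general block-level idea is to split a file into fixed-size blocks, hash each block, and store only the blocks whose hashes changed since the previous backup. A minimal Python sketch of that idea:)

# Rough sketch of the block-level idea only; not Retrospect's actual
# implementation or file format. Hash fixed-size blocks of a file and
# treat only the blocks whose hashes changed as new data to store.
import hashlib

BLOCK_SIZE = 1024 * 1024  # 1 MiB per block; an arbitrary size for illustration

def block_digests(path):
    """Return one MD5 digest per fixed-size block of the file."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digests.append(hashlib.md5(block).hexdigest())
    return digests

def changed_block_indices(previous_digests, current_digests):
    """Blocks to copy again: new blocks, or blocks whose hash changed."""
    return [i for i, d in enumerate(current_digests)
            if i >= len(previous_digests) or previous_digests[i] != d]

Without something like this, a one-byte change to a large database file means the entire file is added to the backup set again.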

 

What kind of grooming options do you use? Retrospect's predefined policy? Or keeping 100 backups? Or just 2?

 

What options do you use for the copy to the offsite backups "A" and "B"? Copy latest snapshots?


You wrote that you have about 4.6 TB of files to back up. What kind of files are they? Many small documents? A few very large database files?

In the latter case, have you turned on block-level incremental backups for those files? If not, when as little as a single byte changes, the whole file is backed up again. 

 

Many small files. Files are added occasionally; it might be 10 in a day, or it might be 500.

 

 

What kind of grooming options do you use? Retrospect's predefined policy? Or keeping 100 backups? Or just 2?

 

The onsite media set is groomed according to Retrospect's policy (keep 12 months).

The offsite drives are set to keep one backup.

 

What options do you use for the copy to the offsite backups "A" and "B"? Copy latest snapshots?

 

I really don't know how to answer this question.


Do you use a "Copy Media Set" script or a "Copy Backup" script to copy from the onsite backups to the offsite backups?

 

I'm really curious as to why grooming doesn't help.

Do you run a groom script, say, once every weekend?


I'm not running it manually or by script; the only grooming that happens is what is defined by the policy. But even that doesn't work, as the drives got full.

 

To make an offsite backup I'm using a script that backs up the same data as the onsite backup script.


I'm not running it manually or by script; the only grooming that happens is what is defined by the policy.

 

 

That is your problem. When you don't run a groom script (say) once every weekend, your destination volume WILL get full. When it does get full, only enough data to complete the current backup will be groomed out.

 

When you run a groom script, more data will be groomed out.
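(To illustrate the difference, here is a minimal Python sketch of the two behaviours described above. The names and data structures are made up for illustration and are not Retrospect's API: a scheduled groom keeps only the most recent backups, while the groom that happens when a disk fills frees just enough space to finish the running backup.)

# Illustration only, with made-up names (not Retrospect's API): the difference
# between a scheduled groom and the minimal groom when a destination disk fills.

def scheduled_groom(backups, keep_last=10):
    """Scheduled groom: keep only the N most recent backups, delete the rest."""
    newest_first = sorted(backups, key=lambda b: b["date"], reverse=True)
    return newest_first[keep_last:]          # backups to delete

def emergency_groom(backups, bytes_needed):
    """Disk-full groom: delete the oldest data, but only enough to finish."""
    to_delete, freed = [], 0
    for b in sorted(backups, key=lambda b: b["date"]):
        if freed >= bytes_needed:
            break
        to_delete.append(b)
        freed += b["size"]
    return to_delete                         # usually far less than a scheduled groom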

 

What else do you have on the destination volume, besides the backup? I hope you are not storing the catalog files there, too. Retrospect needs quite a bit of free space to update the catalog files and when the disk gets full, you are in a lot of trouble. Likewise if anything else but Retrospect is adding data to the volume.

 

 

To make an offsite backup I'm using a script that backs up the same data as the onsite backup script.

 

 

What is the source of that operation?

What kind of script do you run?


 

That is your problem. When you don't run a groom script (say) once every weekend, your destination volume WILL get full. When it does get full, only enough data to complete the current backup will be groomed out.

 

When you run a groom script, more data will be groomed out.

 

What else do you have on the destination volume, besides the backup? I hope you are not storing the catalog files there, too. Retrospect needs quite a bit of free space to update the catalog files and when the disk gets full, you are in a lot of trouble. Likewise if anything else but Retrospect is adding data to the volume.

 

 

What is the source of that operation?

What kind of script do you run?

 

 

 

On the destination volumes there are only backup files; the catalogs are stored on the host.

 

 

Speaking about the onsite backup:

The source is the external drive that holds the main data (4.6 TB); the script type is "backup".

 

Same for the offsite backup.

 

 

Sorry if I'm a bit slow. As I understand it, I have to create a "Groom" script that includes the drives that are full, and it should groom out a large amount of data?

And one more thing: shouldn't Retrospect remove older data on its own, or update the existing backups?

Let's say the main data gets 100 new files totaling 5 GB per week, so the main chunk should grow only by those 5 GB, correct? I mean, if I have 4645 GB + 5 GB = 4650 GB, how is the data overfilling the drive?

 

I appreciate your assistance; sorry for my grammar.

 

Regards


When you run a groom script, Retrospect grooms out backups of files that were deleted a long time ago and/or old versions of files that were updated a long time ago. (So, how long ago is "a long time ago", really? That depends on your groom settings.)

 

If you don't run a groom script, Retrospect will groom just a tiny bit of data to be able to finish the backup that is currently running.

 

Retrospect also backs up changed/modified files, in addition to the new files. If you check under "Activities", you can click on the latest backups and see how much data was added to the Media Set for each backup. Don't forget to add the size of the "snapshot". Is it really only 5 GB that is being backed up?


5 GB was just an example; it's usually a different amount, anywhere from 100 to 300+ GB.

Thanks a lot for your explanation!

 

What would be your recommendation for media sets in terms of grooming policy? Is it better to always keep a fixed number of backups, or not to set a grooming policy at all and rely only on a groom script that I create?

 

 

Regards


Say your backups average 200 GB. With five backups (one week), that's 1 TB of "new" data. In a month, that's 4 TB of new data. No wonder your backup drives get full. :)
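(A quick back-of-the-envelope calculation with those assumed averages, using the 8 TB offsite drive and the 4.6 TB initial backup mentioned earlier; your actual backup sizes will of course vary:)

# Back-of-the-envelope estimate with assumed averages (actual sizes will vary):
# how long until an 8 TB offsite drive fills up without scheduled grooming?
drive_capacity_gb = 8000        # 8 TB offsite drive
initial_backup_gb = 4600        # the first backup copies everything (4.6 TB)
added_per_backup_gb = 200       # assumed average amount added per backup
backups_per_week = 5

free_gb = drive_capacity_gb - initial_backup_gb            # 3400 GB left
backups_until_full = free_gb / added_per_backup_gb         # 17 backups
weeks_until_full = backups_until_full / backups_per_week   # about 3.4 weeks
print(f"Full after about {backups_until_full:.0f} backups, "
      f"roughly {weeks_until_full:.1f} weeks.")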

 

I would schedule a groom script to run every weekend.

Start with Retrospect's predefined policy. If you see that not enough data is groomed out, change to a fixed number of backups.


I was wondering because I thought grooming would clean up on its own, or at least that only new data would be added and the previous backups updated.

 

At this point you have explained a lot! I appreciate it.

 

The last thing is how to prevent Retrospect from crashing, as the engine crashes and keeps restarting. Should I just wipe the backups, uninstall Retrospect, and reinstall from scratch?


You could try to rebuild the catalog file(s) and then run a groom script.

(Click on Media Sets on the left, select the Media set in question and click on the "Rebuild" icon above.)


You could try to rebuild the catalog file(s) and then run a groom script.

(Click on Media Sets on the left, select the Media set in question and click on the "Rebuild" icon above.)

 

I did rebuild it; it started for a second and the task finished with no errors and nothing done. I tried this before starting this topic, and unfortunately it didn't help.


I did rebuild it; it started for a second and the task finished with no errors and nothing done. I tried this before starting this topic, and unfortunately it didn't help.

 

If it was that fast, you did a "Repair", not a "Rebuild". You should do a "Rebuild".

