
Block-mode incremental backup


Ivar


Hi,

 

For example, if I have a very big file and each time only a small portion of it changes, Retrospect currently copies the entire file again instead of copying only that small changed portion in block-mode incremental fashion.

This would be an exclusive feature; right now even Veritas doesn't have anything like it (in Veritas NetBackup, block-mode is allowed only with Oracle databases, not with every file).

Some small third-party backup companies do offer block-mode for all files, but that software is crap for other reasons.


Quote:

Hi

 

The problem with block-based backup is that it complicates restores. Imagine having to pull blocks from 20 different tapes...

 

Thanks

 

nate

I don't see where the problem is: if you back up to tapes, then during a restore all tapes that contain the backup set are needed anyway. But it is better to make incremental backups to HDDs in any case.

There is no real difference between incrementally backing up only changed files and incrementally backing up only changed file blocks. The only difference is that during the backup, files must be compared with the backed-up copies to find out which areas have changed, or checksums of the blocks must be saved. Of course this makes the backup slower, so a good idea would be a selection feature controlling which files are processed normally and which in block-mode.
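The comparison step described above can be sketched as follows. This is a hypothetical illustration, not Retrospect's actual implementation; the 64 KiB block size and the function names are assumptions made for the example:

```python
# Hypothetical sketch: find which fixed-size blocks differ between the
# previous backup of a file and its current version.
BLOCK_SIZE = 64 * 1024  # assumed block size; a real product would tune this

def changed_blocks(old_path, new_path, block_size=BLOCK_SIZE):
    """Return indices of blocks that differ (or exist only in one file)."""
    changed = []
    index = 0
    with open(old_path, "rb") as old, open(new_path, "rb") as new:
        while True:
            a = old.read(block_size)
            b = new.read(block_size)
            if not a and not b:
                break  # both files exhausted
            if a != b:
                changed.append(index)  # only these blocks need copying
            index += 1
    return changed
```

Only the blocks whose indices are returned would need to be written to the incremental backup, instead of the whole file.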

  • 3 months later...

The number of backups participating in a block-based / delta restore can be greatly reduced by storing every nth backup as a full backup, or by using a tree-like dependency structure; both approaches retain most of the space savings. Storing every nth backup completely means at most n file copies participate in the restore. Using a tree structure leads to a logarithmic number of composition operations during restore (for example, if the nth backup of a file is stored as a delta against file copy number floor(n/2)). Perhaps the best way would be to combine both strategies, for example by using the binary-tree approach and storing each odd-numbered backup as a delta against the previous backup. The version control system Subversion uses such strategies to reduce the number of delta compositions required to retrieve a specific file revision.
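The effect of the two strategies on restore-chain length can be illustrated with a small sketch. These helper functions are assumptions made for the example, not taken from any real backup product:

```python
def chain_length_every_nth(backup_no, full_every):
    """Deltas to apply when every `full_every`-th backup is a full backup."""
    return backup_no % full_every

def chain_length_tree(backup_no):
    """Deltas to apply when backup n is stored as a delta against backup
    floor(n/2); backup 0 is the full backup at the root of the tree."""
    steps = 0
    while backup_no > 0:
        backup_no //= 2
        steps += 1
    return steps
```

For backup number 1000, the linear scheme with a full backup every 10th run needs at most 9 deltas during restore, while the binary tree needs 10 compositions; the tree stays logarithmic no matter how many backups exist.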

 

However, to avoid reading previous backups during the backup process, it would be necessary to store hashes of the file blocks inside the catalog file.
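A hypothetical sketch of that catalog-hash idea: per-block hashes stored in the catalog let the next backup find changed blocks without reading any previous backup media. SHA-256, the block size, and the function names are assumptions for illustration:

```python
import hashlib

def block_hashes(path, block_size=64 * 1024):
    """Hash every block of a file; these would live in the catalog file."""
    hashes = []
    with open(path, "rb") as f:
        while block := f.read(block_size):
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_block_indices(catalog_hashes, path, block_size=64 * 1024):
    """Compare fresh hashes against catalog entries; the previous backup
    itself is never read. Blocks past the end of the catalog are new."""
    new_hashes = block_hashes(path, block_size)
    return [i for i, h in enumerate(new_hashes)
            if i >= len(catalog_hashes) or catalog_hashes[i] != h]
```

The trade-off is catalog size: one hash per block per file, in exchange for never touching the old backup media during an incremental run.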

 

Having such a feature would greatly reduce the backup space required for incremental backups of many file types, for example:

 

- Access / Outlook databases and other database-like files

- Encrypted data containers (for example SafeGuard PrivateDisk containers)

- VMware / VirtualPC disk images

- Log files

 

And many other types...

 

Best regards,

 

Andreas Koltes


Archived

This topic is now archived and is closed to further replies.
