Software compression improvement


After a backup session with software data compression enabled, I noticed that the backup set created by Retrospect was much larger than the original data. I ran some tests with Retrospect's built-in software compression and got some strange results.

 

All tests were done on a dual Xeon 2.4 GHz workstation with Hyper-Threading and a RAID 5 array (4×160 GB) controlled by a Promise FastTrak S150 SX4 controller.

 

I backed up 2×4 GB of data from two single files on the RAID to a disk backup set on the same volume. The first test file contained 4 GB of mixed data (a tarball of assorted files, truncated to exactly 4 GB). The second file consisted of 4 GB of random data.
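For reference, a file like the second one can be produced with a small program along these lines (a minimal sketch; the file name and the choice of PRNG are my own, not part of the original test setup):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("random.bin", "wb");   /* hypothetical output name */
    if (!f) { perror("fopen"); return EXIT_FAILURE; }

    uint64_t state = 0x9E3779B97F4A7C15ULL;          /* arbitrary nonzero seed */
    uint64_t buf[8192];                              /* 64 KB write buffer */
    uint64_t remaining = 4ULL * 1024 * 1024 * 1024;  /* exactly 4 GB */

    while (remaining > 0) {
        for (size_t i = 0; i < 8192; i++) {
            /* xorshift64: cheap PRNG whose output is effectively incompressible */
            state ^= state << 13;
            state ^= state >> 7;
            state ^= state << 17;
            buf[i] = state;
        }
        size_t chunk = sizeof buf;
        if (remaining < chunk) chunk = (size_t)remaining;
        if (fwrite(buf, 1, chunk, f) != chunk) { perror("fwrite"); return EXIT_FAILURE; }
        remaining -= chunk;
    }
    fclose(f);
    return EXIT_SUCCESS;
}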

 

After testing the performance of Retrospect's built-in compression, I tested two commonly used compression libraries: zlib (gzip fast mode) and LZO.
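The library comparison can be reproduced with a simple block-by-block harness along these lines (a minimal sketch, not the exact program I used; the input file name and the 256 KB block size are arbitrary choices, though compress2() and lzo1x_1_compress() are the libraries' real entry points):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>          /* zlib: compress2(), compressBound() */
#include <lzo/lzo1x.h>     /* LZO: lzo_init(), lzo1x_1_compress() */

#define BLOCK (256 * 1024)

int main(void)
{
    FILE *f = fopen("test.bin", "rb");   /* hypothetical input file */
    if (!f) { perror("fopen"); return EXIT_FAILURE; }
    if (lzo_init() != LZO_E_OK) { fputs("lzo_init failed\n", stderr); return EXIT_FAILURE; }

    unsigned char *in   = malloc(BLOCK);
    unsigned char *zout = malloc(compressBound(BLOCK));
    unsigned char *lout = malloc(BLOCK + BLOCK / 16 + 64 + 3);  /* LZO worst case */
    unsigned char *wrk  = malloc(LZO1X_1_MEM_COMPRESS);
    if (!in || !zout || !lout || !wrk) { fputs("malloc failed\n", stderr); return EXIT_FAILURE; }

    unsigned long long insum = 0, zsum = 0, lsum = 0;
    double ztime = 0.0, ltime = 0.0;
    size_t n;

    while ((n = fread(in, 1, BLOCK, f)) > 0) {
        insum += n;

        uLongf zlen = compressBound(n);
        clock_t t0 = clock();
        if (compress2(zout, &zlen, in, n, 1) != Z_OK) {   /* level 1 = fast mode */
            fputs("compress2 failed\n", stderr);
            return EXIT_FAILURE;
        }
        ztime += (double)(clock() - t0) / CLOCKS_PER_SEC;
        zsum += zlen;

        lzo_uint llen;
        t0 = clock();
        lzo1x_1_compress(in, n, lout, &llen, wrk);
        ltime += (double)(clock() - t0) / CLOCKS_PER_SEC;
        lsum += llen;
    }
    fclose(f);

    /* clock() measures CPU time, so the two libraries are compared like for like */
    printf("input:        %llu bytes\n", insum);
    printf("zlib level 1: %llu bytes, %.1f s CPU time\n", zsum, ztime);
    printf("lzo1x_1:      %llu bytes, %.1f s CPU time\n", lsum, ltime);
    return EXIT_SUCCESS;
}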

 

The results were a little bit strange:

 

Mixed data:

 

No compression: 4.00 GB --> 4.00 GB, approx. 2.5 min

Compression enabled: 4.00 GB --> 3.27 GB, approx. 9.5 min

gzip fast mode (based on the zlib library): 4.00 GB --> 2.23 GB, approx. 5.5 min

LZO library: 4.00 GB --> 2.41 GB, approx. 2.5 min

 

Incompressible data:

 

No compression: 4.00 GB --> 4.00 GB, approx. 2.5 min

Compression enabled: 4.00 GB --> 5.88 GB, approx. 14 min

gzip fast mode (based on the zlib library): 4.00 GB --> 4.00 GB, approx. 9 min

LZO library: 4.00 GB --> 4.00 GB, approx. 3.5 min

 

In both cases, Retrospect's compression algorithm kept one of the CPUs at full load for the entire backup session.

 

Especially when doing D2D or D2D2T backups, it would be very useful if the compression algorithm used by Retrospect could be improved. At the very least, the 47% growth on incompressible data could be avoided with a standard per-block fallback; see the sketch below.
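The idea is to keep the compressed form of each block only when it is actually smaller, at the cost of a one-byte flag per block. This is a common technique, shown here with zlib purely for illustration; I am not claiming this is how Retrospect works internally:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

/* Writes either a compressed or a raw copy of `in` to `out`, prefixed by a
 * one-byte flag, and returns the total stored size. `out` must hold at
 * least 1 + compressBound(len) bytes. */
unsigned long store_block(const unsigned char *in, unsigned long len,
                          unsigned char *out)
{
    uLongf clen = compressBound(len);
    if (compress2(out + 1, &clen, in, len, 1) == Z_OK && clen < len) {
        out[0] = 1;               /* flag: block is compressed */
        return 1 + clen;
    }
    out[0] = 0;                   /* flag: block stored verbatim */
    memcpy(out + 1, in, len);
    return 1 + len;
}

int main(void)
{
    unsigned char in[1024];       /* toy block: all zeros, highly compressible */
    unsigned char out[1 + 2048];  /* >= 1 + compressBound(1024) */
    memset(in, 0, sizeof in);
    unsigned long stored = store_block(in, sizeof in, out);
    printf("stored %lu bytes (flag=%d)\n", stored, out[0]);
    return 0;
}

With this safeguard, the worst case on random data would be one byte of overhead per block instead of the 47% expansion seen above.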

 

Best regards,

 

Andreas Koltes
