
file backup size limit arggh



can someone here disabuse me of the fresh discovery that a disk drive or array (firewire in this case) is useless for workgroup backup, since network clients MUST be backed up to a file backup set, which has the useless size limit of 2G? have i missed something, or am i unfortunately correct? i have several clients to back up, each no more than, say, 6G, and totalling no more than 25G at this time. if it is not possible to back up network clients larger than child size, then we have outgrown retrospect. i don't want to hear that, but i will if i must. thanks in advance for any tips or abuse.


well, sadly, so far at least no one seems to have any suggestion for me (unless there was one best left unsaid). so i tried something else just now: i created a duplicate script for one client, with the destination a subvolume on the backup disk and the source an appleshare-mounted client volume on the backup server's desktop. (this server will never be mounted on a client desktop, for those who wonder such things.) this effort is proving to be a bad joke. the network connection is standard 10BaseT ethernet, and each pass of a client backup to tape used to run at 40-60 MB per minute. this mounted-volume duplicate took 10 minutes just to scan the client (5-6G, 106,000 files), then started out at 0.1 MB/min, and after 20 minutes had worked up to the feverish pace of 14 MB/min. this will not be a good solution. any other ideas? i should mention that all macs involved are G3-G4 running OS 9.x, that memory is not the problem, and that retrospect is v5.something. again, thanks in advance. commiseration is also welcome.


ok. that effort concluded as a bad joke indeed. although i had formatted the portable part of the disk backup sets, a 40G firewire pocket drive, as HFS+, i then foolishly allowed retrospect to erase the disk because it thought the drive was unrecognizable. that was before i read somewhere that hard drives should not even appear in the devices window in retrospect, and realized that one should never let retrospect erase a hard drive: it creates a plain HFS file system, which on a drive this size means a 512K minimum allocation block, so every file occupies at least 512K on disk. yes, you guessed it; that 6G source volume overflowed the 40G destination at only 4.2G of actual data. so i re-created the file system as HFS+ with a 4K minimum block size and restarted the trial (a most appropriate term here), after repairing the script to point at the 'new' destination volume. the first attempt had managed to creep up to 21 MB/min, and the second try looks like it may be just a tad faster, but it is still quite slow due to the appleshare overhead. that client used to back up at 42-55 MB/min to a scsi tape drive. this is still a weak solution, and it will require a simple script to mount the client volumes on the server desktop before the backup starts. i'm still looking for pity or ideas from anybody. surely i'm not the first person to want to do workgroup backup to hard drives.
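the overflow described above can be sanity-checked with a quick back-of-envelope sketch (figures are the ones quoted in this thread — 106,000 files, a 512K HFS allocation block versus a 4K HFS+ block — not measurements of any particular volume):

```python
# why ~6G of small files overflows a 40G drive under plain HFS:
# every file occupies at least one allocation block, so the minimum
# on-disk footprint is (file count) x (block size). figures below are
# as reported in this thread, not measured here.
FILES = 106_000
HFS_BLOCK = 512 * 1024        # bytes per allocation block under HFS
HFS_PLUS_BLOCK = 4 * 1024     # bytes per allocation block under HFS+
DRIVE = 40 * 1024**3          # nominal 40G destination drive

hfs_floor = FILES * HFS_BLOCK
hfs_plus_floor = FILES * HFS_PLUS_BLOCK

print(f"HFS  minimum footprint: {hfs_floor / 1024**3:5.1f} G")       # ~51.8 G
print(f"HFS+ minimum footprint: {hfs_plus_floor / 1024**3:5.1f} G")  # ~ 0.4 G
print(f"overflows the 40G drive under HFS: {hfs_floor > DRIVE}")     # True
```

with that many small files the allocation block size, not the data size, dominates the footprint — which is why the overflow happened at only 4.2G of actual data.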


I'm not clear from your posts how you are defining "clients." In general computing terms, this would mean a remote volume; in Retrospect terms, a client is a remote volume accessed via our client software. I'm also not clear whether you are backing up these drives only as mounted volumes, or whether you've tried the Retrospect Client in any scenario.

The most efficient way to back up clients is via the Retrospect Client. However, if your goal is to mount the volumes for backup, you can configure Retrospect to automount the hard drives before the backup.

Mount the volume. Then, in Retrospect, go to Configure > Volumes and highlight the volume. From the Volumes drop-down menu, select Configure. You will be prompted for a user name and password. You will need to do this for each volume you wish to automount.

As long as you have formatted your destination drives as HFS+, using File Backup Sets with Retrospect 5.0 should work fine for your backups.


ah! at last, a voice in the wilderness. thanks for replying.

yes, i did switch my usage of 'client' from the traditional retrospect client to that same volume mounted on the server. and as a further note to yesterday's chagrin, the mounted-volume method is not only slow but will not work for windows 'clients'. i had backed these mixed clients up to tape for many years, but then we started exceeding the capacity of the existing tape backup sets and rotation schemes. even if i tightened up the rotation, it would be only a short time until the schemes burst again. switching tapes during unattended backup is not practical, and an autoloader was an expensive alternative. so i looked at tape systems with greater capacity, but they too seemed a little expensive for this environment. firewire popped into mind, especially since a 40G pocket drive can carry a current backup off-premises. and building an on-premises set of archives is relatively easy: just add large-capacity destination drives as required (which could always be moved to another site for redundancy).

there. all that talk was just to fill in some history.

so you say that i should try file backup sets again, this time on an HFS+ file system, and that i won't run into that dreaded 2G size limit. ok then, i will try that. it would be the preferred method, since it would get the speed back to the near-ethernet-limit (10, not 100) of up to 60 MB/minute, using off-hour bandwidth piggery. if i don't come back, i will be in backup bliss...or on to the next hill.
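those throughput figures translate to very different overnight windows; here is a rough sketch (the ~25G total and both rates are as quoted in this thread, not benchmarks of any particular setup):

```python
# rough backup-window estimate for ~25G of client data at the two
# throughputs mentioned in this thread: the appleshare mounted-volume
# duplicate (14 MB/min) vs the near-10BaseT limit (60 MB/min).
TOTAL_MB = 25 * 1024                      # ~25G of client data, in MB

for rate_mb_per_min in (14, 60):
    hours = TOTAL_MB / rate_mb_per_min / 60
    print(f"{rate_mb_per_min:>2} MB/min -> {hours:4.1f} hours")
# prints ~30.5 hours at 14 MB/min vs ~7.1 hours at 60 MB/min
```

at 14 MB/min the full set cannot even finish overnight, while the client-software rate fits comfortably in an off-hours window — which is why getting back to file backup sets over the Retrospect Client matters here.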


Archived

This topic is now archived and is closed to further replies.
