gloyer Posted February 5, 2002

I've been trying to do backups using Retrospect on Windows 2000 to a network drive on a server running RedHat 7.1 and Samba. I'm successful with backup sets up to 1.1 GB in size, but larger ones (5 GB is the next size I've tested) fail every time with a bad file header error. Am I running into a 2 GB file size limitation (like dump has)? If so, does anyone know of a workaround?
fender464 Posted February 5, 2002

Are you backing up to a tape drive on the Win2K machine or to a hard drive? If it's a hard drive formatted as FAT32, it could be FAT32's 4 GB file size limit: because Retrospect writes File Backup Sets to a single file, each backup set cannot exceed 4 GB on that filesystem. If this doesn't solve it, give some more info. Does the backup fail at a certain point or file, or does it fail right away?
gloyer (author) Posted February 12, 2002

(Email replies seem not to be working; sorry for the slow response.) I'm backing up the W2K client to a Linux hard drive server over the network. The backup appears to succeed, in that a file of approximately the correct size (about 6.5 GB) is written to the disk, but it fails during verification with a bad backup file header error. The error occurs at different apparent addresses, but it appears to fail every time at the first verify attempt; nothing is ever verified. It also goes into a loop retrying the header, but I think that's an unrelated problem. I have tested backups up to 1.1 GB successfully, but the next size up I tried was over 4 GB. I suggested the 2 GB limit because "dump" has that limitation on Linux.
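As an aside, the 2 GB and 4 GB figures in this thread aren't arbitrary; they are the boundaries of 32-bit file offsets (signed and unsigned respectively), which is why different layers (dump, FAT32, non-LFS Samba) all trip at one of these two marks. A quick sanity check of the boundary values:

```shell
# 32-bit file-offset limits, which explain the "magic" 2 GB / 4 GB sizes:
# a signed 32-bit off_t tops out just under 2 GiB,
# an unsigned 32-bit size field just under 4 GiB.
echo $(( (1 << 31) - 1 ))   # 2147483647 bytes  (2 GiB - 1)
echo $(( (1 << 32) - 1 ))   # 4294967295 bytes  (4 GiB - 1)
```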
mean_ogre Posted February 14, 2002

That's the filesystem limitation, of course. You might want to replace the ext2 filesystem with a journalling one. I know that reiserfs has a file size limit of 17.6 TB on a 32-bit system; ext3 (standard on RedHat 7.2) should also improve the situation.
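For the ext3 route, the conversion can be done in place with tune2fs, which adds a journal to an existing ext2 volume without reformatting. A minimal sketch, assuming the backup volume is /dev/hdb1 mounted at /backup (both placeholders for your actual device and mount point); unmount first to be safe:

```shell
# In-place ext2 -> ext3 conversion by adding a journal.
# /dev/hdb1 and /backup are placeholder names, not from the thread.
umount /backup
tune2fs -j /dev/hdb1              # add a journal; the fs can now mount as ext3
mount -t ext3 /dev/hdb1 /backup
# Also change "ext2" to "ext3" for this volume's entry in /etc/fstab,
# so it mounts as ext3 on the next boot.
```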
gloyer (author) Posted February 15, 2002

I guess I just needed to hear that(!) Thanks for the tip on the ext3 filesystem. I'd heard about reiserfs but didn't really see the value... sounds like it's time to crack open the HOWTO... George
mean_ogre Posted February 15, 2002

You know, I couldn't find the relevant info on the ext3 filesystem. The upgrade is much simpler than the one for reiserfs (you don't need to reformat the drive). However, if it is not the system drive (and you have temporary storage for your data) I'd recommend using reiserfs; it's a sure thing.
byteback2 Posted March 23, 2002

I think you will find that it's a limitation of Samba and not necessarily of the Linux filesystem (although that could be the case, depending on the FS). This page http://www.suse.de/~aj/linux_lfs.html has some great information on large file support in Linux and mentions the Samba limit.
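One way to narrow down which layer is at fault is to probe the 4 GiB boundary directly with a sparse file: dd can seek past the mark and write a single block, so the test costs almost no disk space but fails immediately (EFBIG) if the filesystem, or the Samba path when run against the SMB mount, lacks large file support. The path below is a placeholder; point it at the Samba-exported directory to test the whole chain:

```shell
# Sparse-file probe for large file support (path is a placeholder).
# Seeking to block 4097 of 1 MiB puts the write just past the 4 GiB mark
# without actually writing 4 GiB of data.
TARGET=${1:-/tmp}
dd if=/dev/zero of="$TARGET/lfs-test" bs=1M seek=4097 count=1 2>/dev/null
wc -c < "$TARGET/lfs-test"        # should report a size above 4294967296
rm -f "$TARGET/lfs-test"
```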
Archived
This topic is now archived and is closed to further replies.