
Backup Set Catalog Size Limit?



Is there a size limit on backup set catalogs? I had a catalog that grew to almost 4GB (backing up a single server with over 3 million objects, with 17K to 25K objects added or changed daily). After reaching that size, the backup set would no longer open during the backup script. Every time I attempted to back up that server, the system would throw an error with the following info:

 

- 10/19/2006 7:06:50 PM: Copying Volume (E:) on MYSERVER
TMemory: heap 904 86.7 M
  virtual 228 428.0 M
  commit 428.0 M
  purgeable 0 zero K
Pool: pools, users 373 483
  max allowed mem 614.0 M
  max block size 8,192 K
  total mem blocks 18 144.0 M
  used mem blocks 18 144.0 M
  file count, size 2 16.0 M
  requested 1132 3.8 G
  purgeable 0 zero K
  avail vm size 7,376,896 B
TMemory::mhalloc: VirtualAlloc(129.5 M, MEM_RESERVE) failed, error 8
TMemory: heap 903 86.1 M
  virtual 229 437.5 M
  commit 437.5 M
  purgeable 0 zero K
Pool: pools, users 373 482
  max allowed mem 614.0 M
  max block size 8,192 K
  total mem blocks 17 136.0 M
  used mem blocks 17 136.0 M
  file count, size 2 16.0 M
  requested 1132 3.8 G
  purgeable 0 zero K
  avail vm size 6,328,320 B
TMemory::mhalloc: VirtualAlloc(129.5 M, MEM_RESERVE) failed, error 8
Not enough application memory


The set had about 30 sessions in 1 member with over 550GB used. The backup server's hardware and network specs are:

 

Dell PowerEdge 2950

Windows 2003 R2

2 - Intel Xeon CPU Dual Core 3.00GHz

2 - 1GB Eth

4GB of RAM

PAE

C:\ - 12GB Total (Internal SATA2 RAID5 w/ System and Program Files)

D:\ - 1.8TB Total (Internal SATA2 RAID5 w/ Backup Sets)

F:\ - 465GB Total (External USB2 w/ Archives of Backup Sets)

G:\ - 465GB Total (External USB2 w/ Archives of Backup Sets & Backup Set Catalog Files)

Z:\ - 688GB Total (Private Network Connection over Secondary GB Eth w/ Archives of the Archives)

GB Eth Switch connecting all Servers and Clients


OK, I think I have found the answer after talking to EMC Tech Support. This error is not caused by insufficient virtual or physical memory. There is an issue in Retrospect whereby backup sets that contain large file counts (anything over 4 million files per session) cause excessive application memory use. In conjunction with a 32-bit Windows OS (Win2K3), which has a 2GB application memory limit by default, this causes the TMemory error. Therefore, to alleviate this issue you have to do one of the following:

 

- Switch to a 64-Bit OS since they have a 4GB application memory limit

- or Run the 32-Bit versions of Win2K3 Ent or Datacenter with the PAE switch enabled in the Boot.INI

 

I am currently running Win2K3 Standard, so the PAE switch is not an option. Therefore I am currently experimenting with the /3GB switch in Boot.INI, which increases the application memory limit to 3GB, to see if that corrects the issue. If anyone else is using a 64-bit OS or is running Win2K3 Ent or Datacenter with PAE, let me know if that corrects this issue for you, and I will update the forum with my experiment results. Thanks.
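For reference, here is roughly what the Boot.INI entry I am testing looks like (the ARC path and description string are just examples from my box, so adjust them for your own system; only the /3GB switch at the end is the relevant part):

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Standard" /fastdetect /3GB

On Ent or Datacenter the same line would carry the /PAE switch instead, which is the other option EMC mentioned.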


The increase in application memory did not fix the error, though it did increase Retrospect's TMemory and performance. Therefore, I am following Russ's (and EMC Tech Support's) suggestion of breaking the backup into 2 or 3 different sets. I set up a primary script to back up the drives excluding the subvolume that contains all of the excessive files. I defined subvolumes for all the folders that contain the mass of files, then set up 2 separate scripts, each with its own backup set, to handle backing those files up. That should leave me with backup sets of no more than 1.75 million files in each session. Hopefully I can upgrade the server to Win2K3 Ent, add another 2 or 4 GB of RAM, and then use PAE to increase the TMemory and do away with all these workarounds.
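If anyone wants to sanity-check a split like this before committing to it, a quick Python sketch along these lines (the paths are just examples, swap in your own sources) will show how many files each planned source actually contains:

import os

# Example sources only - substitute the actual drives/subvolumes being split across the sets
sources = ["C:\\", "E:\\images\\0001\\01", "E:\\images\\0001\\02"]

def count_files(root):
    # Walk the whole tree and count every file, roughly what a full scan would see
    total = 0
    for _dirpath, _dirnames, filenames in os.walk(root):
        total += len(filenames)
    return total

for src in sources:
    print("%-30s %d files" % (src, count_files(src)))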


Quote:

I set up a primary script to back up the drives excluding the subvolume that contains all of the excessive files.

 


That's not quite what I suggested. I don't know about Retrospect Windows, but Retrospect Macintosh still does a file scan of all files excluded by selectors, so the number of files doesn't go down.

 

I could be wrong, but I believe that you might have to do a top-level split into subvolumes, then treat each subvolume as a source. That's why it's so unpalatable: you lose all hierarchy info.

 

We had this problem a while back (because we have lots of old files for old clients), and our eventual solution was to split the user data off onto a separate volume with a few top-level directories, define each and every top-level directory as a subvolume, and back each up separately.

 

Russ


Quote:

I believe that you might have to do a top-level split into subvolumes, then treat each subvolume as a source.

 


 

Yes Russ, that is what I was trying to convey. Let me explain: I have a server running an enterprise document management solution, where the database needs access to images that are housed in a folder on the server. The root folder is structured like this:

 

E:\images\
  \0001\
    \01
    \02
    \03
    \04
    ... etc
    \98
    \99
    \100

 

Instead of losing my hierarchy the way you did, I decided to maintain mine. So first, I defined each of the 100 subfolders under E:\images\0001\ as subvolumes of the E: drive. I then created a backup script that backs up C: and E: and EXCLUDES files and folders in the Windows path E:\images\0001\, so the 100 subfolders are not backed up by this script. This gave me a backup set with 2 snapshots and 2 sessions.

Then I created 2 additional scripts to handle the 100 subvolumes, each using its own backup set, giving me 50 snapshots and 50 sessions in each backup set. Since the database controls where the images are housed, I may have a nightmare on my hands when it comes to incrementals; I am not sure how Retrospect is going to view those subvolumes, so it may never increment them properly. The question remains: is each subvolume a separate entity unto itself, or will Retrospect account for situations where File001a moves from subvolume 01 to 02 due to consolidation and not back it up again, since nothing but its path has changed? Hopefully Retrospect is sophisticated enough to handle this situation (so far its backup logic has not disappointed me). Otherwise I will have to find an alternative product for this one server. Fortunately we are using IBM's enterprise tape backup system (TSM) to back up the Retrospect server and array to tape, so that may be my solution for backing up this document management server as well.
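For what it's worth, splitting the 100 subvolume paths between the 2 scripts is mechanical; a small sketch like this (folder names assumed to be 01 through 99 plus 100, as in the listing above) generates the two lists of 50 sources each:

import os

# Subfolders assumed to be named 01..99 and 100, per the layout above
base = "E:\\images\\0001"
subvolumes = [os.path.join(base, "%02d" % n) for n in range(1, 101)]

# 50 sources for each of the two scripts / backup sets
set_one = subvolumes[:50]
set_two = subvolumes[50:]
print(len(set_one), len(set_two))   # 50 50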


Retrospect considers each "subvolume" an entity unto itself. If a file moves from one subvolume to another, it will disappear from the first subvolume's snapshot and appear in (and be backed up to) the second subvolume's snapshot.

 

I think that the problem you describe (too many files) may still exist with your scenario. Here's what we had to do for our setup. Each of our firm's clients has an alphabetized directory, with subdirectories for the client's different matters. Over the years, that adds up to many, many clients and many, many matters. We had to move all client files to a separate disk with five top-level directories:

A-F

G-M

N-S

T-Z

Miscellaneous

 

Then we defined each as a subvolume and backed each up as a separate source, similar to the situation you have. What we weren't able to get working with a different structure was the part corresponding to your "Exclude E:\images\0001\": you might want to watch that happen, but I seem to recall that Retrospect did the recursion through the entire tree first, before doing the exclusion, which meant it hit the same limits during the scan. Maybe things have changed; it was a while back. That's what I was trying to say.
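The bucketing itself is mechanical; a sketch along these lines (the bucket names are just our convention, and the mapping is an illustration rather than anything Retrospect does) shows how a client directory name lands in one of the five top-level directories:

def bucket_for(client_dir):
    # Map a client directory name to one of our five top-level buckets (illustrative only)
    first = client_dir[:1].upper()
    if "A" <= first <= "F":
        return "A-F"
    if "G" <= first <= "M":
        return "G-M"
    if "N" <= first <= "S":
        return "N-S"
    if "T" <= first <= "Z":
        return "T-Z"
    return "Miscellaneous"   # digits, punctuation, anything else

print(bucket_for("Acme Corp"))   # A-F
print(bucket_for("Smith LLC"))   # N-S
print(bucket_for("3M"))          # Miscellaneous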

 

The inconvenience for us is that we can no longer search a single backup set for, say, a matter-number directory without knowing the client's name (e.g., trying to find all files beginning with matter number xxxxxxx). You have to search each of the multiple backup sets.

 

Russ


OOOHHHH!!! I understand now. It appears the scan issue you described is still present: even though I told Retrospect to exclude copying the files, it still scans them. However, Retrospect does not actually have a problem (at least not in my case) scanning a source server with over 4M files; it has a problem scanning and comparing backup sessions with over 4M files. If I have 20M files in a backup set, Retrospect doesn't miss a beat, but as soon as I break that 4M file barrier in a session (or volume), it craps out. As for the subvolumes being separate, I can't restructure my file system since it is database- and middleware-driven, but I figure it shouldn't be a big deal; I don't foresee tons of folder movement by the files (hopefully I'm right about that). Thanks again.

