
misleb

Members
  • Content count: 30
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About misleb
  • Rank: Occasional Forum Poster
  1. misleb

    Hardware for Multi-Server

    Quote: If you're looking at buying new hardware I'd personally suggest a dual or quad core CPU to help with data crunching for compression and encryption - you can never have too much power!

    As far as I can tell, Retrospect will rarely, if ever, use more than one core. I've heard of people with 8-core servers only getting 13% utilization. Even dual core would be a waste. Your best bet is to get the fastest single-core CPU around. And even then, speed isn't a huge issue unless you're dealing with more than a few hundred thousand files per volume. Then you might need some CPU power to do the file matching and stuff. Well, unless you're writing backups to software RAID. Then you'll want another core to do the parity/mirroring work. And don't be fooled by some of those cheap SATA RAID adapters like the ICH?R series, which are really "fake" RAID. -matthew
  2. Hi, I am trying to restore some files from a snapshot (the latest of 15 or so incrementals) with about 1.2 million files, and it has been "matching..." for about 18 hours now since selecting the source. It is just stuck there using 100% CPU. I know Retrospect isn't hung because jobs have run since. This is a 2.4GHz Athlon 64 (32bit Win2k3, Retrospect 7.5). What can I do? I really need these files. In the future I guess I am going to have to break up the volumes into yet smaller subvolumes, but right now I need to get at the current backup ASAP. Do I just wait it out and hope it finishes by the end of the day? -matthew
  3. Are you connecting to the server console via RDP, or are you doing a regular RDP session? At least with older versions of Retrospect, I found that Retrospect wouldn't even launch at all in an RDP session unless I was connected to the console. In your connection profile you should have a line like this: connect to console:i:1 (a sample profile appears after this list) -matthew
  4. Hi, I've been using Retrospect 7.5 for a while and I just recently started using tapes for some backup jobs. I noticed that, unlike backup to disk, only the most recent snapshot is available for restore from a particular backup set. I'm only doing "Normal" backups (haven't recycled any yet) and I can tell from the logs that only new files are being backed up, but only the most recent snapshot is ever available for restore. I can't go back to previous snapshots to retrieve older versions of files. I have "save source snapshots for restore" checked and all other options are the same as the toDisk scripts. What's up? -matthew
  5. I am just backing up the Users folder. That is where all 2.5+ million files are. Anyway, I ended up writing some scripts to move all user directories into directories called A-J and K-Z (a sketch of this kind of script appears after this list). This meant making mass LDAP updates as well as doing a string substitution on .plist files to make sure user preferences weren't screwed up (yes, OS X apps often use absolute paths for user preferences, grrrrr!) -matthew
  6. It isn't the size of the files copied, it is the number of files that is the problem, AFAICT. Just wait until you have more than 2.5 million files. The worst part is that I have no easy way of breaking it up into smaller chunks. It sucks. The only way I'm going to get this backed up is with a good ol' fashioned manual copy to an external drive or something. So that'll get dated pretty quickly. Even when I COULD get a full backup with slightly fewer files, it would take HOURS just to load the snapshot and select a file to restore. -matthew
  7. misleb

    Windows 2003 SP2

    Try editing your connection profile and tell it to connect to the console of the server instead of a regular TS session. Add this line to your profile with a text editor: connect to console:i:1 (see the sample profile after this list)
  8. Is it just me or does an 8 core server seem like massive overkill for a backup server? I can't imagine you're going to find any backup products that will parallelize file matching. At best, you'll be able to run 8 jobs at a time and have each use a core. Sadly, this is the drawback of going for multi-core systems instead of faster cores. We're going to see this problem more and more in the coming years. I have a dual CPU Athlon with 4GB of RAM (only 3.6GB available because it's 32bit) and Retrospect doesn't seem to utilize both CPUs. But my big problem is that once you get around 2.5 million files, you'll have trouble getting a full backup at all. I've been unable to get a full backup of my user home directories (400GB, 2.6 million files) at all. After spending all night copying files, Retrospect eventually gives up because it is unable to allocate enough memory when building the snapshot. (see my post about this) -matthew
  9. Hi, I'm currently trying to back up about 400GB of data from an OS X server which contains user home directories. As you may know, OS X creates a LOT of files and folders for each user by default. I have about 700 users and nearly 2.5 million total files. Retrospect simply cannot back up that many files without getting "error -625 (not enough memory)". My backup server (Windows 2003 32bit) has 4GB of RAM. I believe the error actually occurs while it is building the snapshot... or whatever it does after it is done copying the files. Anyway, I've given up on trying to get the full backup to work. I've tweaked memory settings and other things. Retrospect (7.5) just won't do that many files. Right now I want to try to split it into two jobs. Maybe users A-M and users N-Z. I wish there were a different structure, but the reality is that all these users are in a single directory, so I can't just make a couple of subvolumes and be done with it. Is there any way to group the user directories in a script (see the directory-bucketing sketch after this list)? I don't want to have to manually maintain a list of directories to back up every time a user is added or removed. As far as I know, the backup "Include" selection selects files for the whole tree, so I can't just say back up all files and folders that start with A through M, because that rule would apply to ALL files and folders, not just the first level. -matthew
  10. Hey all, This isn't really a Retrospect specific question, but I imagine the specific features of Retrospect may be relevant. I'm wondering what the best option is for disk-to-disk backups. I'm set on using disks (preferably removable disks) for storage because 1) big tape drives are really expensive and 2) I have to be able to back up multiple clients at once to make daily backups feasible. Right now I have some beige box PC with 2 internal 400GB SATA drives and one removable (to rotate drives in and out) 250GB SATA drive. We're running out of space, and quite frankly adding more drives to this frankenstein box does not seem to be the way to move forward. I was thinking of going all out and purchasing a nice 2-3U server with maybe 6 bays for hot-swap SATA disks. One option is to simply RAID all of these disks and create one big backup volume, but then I lose the ability to take drives off-site and/or rotate media in and out. The other option is to use it as JBOD and use each disk as a backup set. In practice, how well does the latter option work? I mean, does Windows/Retrospect deal with hot-swap disks being removed? Or does this only really work well for USB/Firewire drives? That is another option, of course. I could go with a physically smaller server and back up to several external drives, although that would look kind of sloppy. What do y'all do for large amounts (say 2TB) of disk backup, some or all of which needs to be moved offsite? How well does Retrospect deal with the arrangement? Thanks, -matthew
  11. misleb

    Error -557

    Heh, good to see this isn't only happening with Netware clients. Ok, maybe it isn't "good" necessarily, but it is nice to feel not so alone. I've found that it happens on many different files. I was not able to narrow it down to just one file/folder. It does so happen that this is one of my fastest clients (fastest backup time), so maybe it has something to do with the rate at which the data gets to the Retrospect server. Overflowing some buffers? I mean, since you mentioned that throttling down one server fixed it for one client. I can't throttle the Netware server, but maybe the Retrospect server.... I've been waiting for a solution to this for like 8 months now. Maybe now that it is affecting Windows clients, there will be more attention given to it. Good luck. -matthew
  12. I am backing up a "Homes" volume on an OS X server. It is just hundreds of home directories. There's 350GB of data and nearly 2 million files. Retrospect is having a difficult time just scanning all the files. I regularly get "out of memory" errors when backing it up. The machine has 4GB of RAM, but still can't manage to scan it all half the time. So what I would like to do is find a way to break it up into at least two backup jobs. Ideally, I would just select user names starting with a to m in one job and then users n to z in another. I can't make subvolumes because these are all in one folder. And as far as I know, I can't do an "include" for all directories that start with a-m (like with a regular expression) because that would still try to scan everything and then apply the filter to that. Basically I need some way to keep Retrospect from scanning all the files/folders, but I am not in a position to restructure the volume. Plus, having fewer files in a backup set will make it easier to scan when I want to do a restore. Currently, it can take hours just to get a list of available files to restore. The restore itself is fine, but the wait is unreasonable. Any ideas? -matthew
  13. *bump* Any update on this? I still can't fully back up one of my Netware servers.
  14. I have the same NLM versions except for SMDR. I have version 6.54.04. And yes, help *would* be appreciated. ;-) I was considering opening up a regular (paid?) tech support incident, but since it is now more than just me with the problem, perhaps it is best resolved here. -matthew
  15. I actually recently came across the 'connTcpConnection: invalid code found: 111' error in the logs on one of our FreeBSD servers (running the Retrospect client through Linux emulation). That server isn't having a problem. I couldn't find any Netware error messages related to the backup. As far as Netware is concerned, everything is AOK. -matthew
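
A note on the "connect to console" tip in posts 3 and 7: below is a minimal sketch of the kind of .rdp connection profile those posts describe. The hostname is a placeholder, and the first two lines are just ordinary profile settings; the last line is the one the posts say to add, and it applies to RDP clients of the Windows 2003 era.

    screen mode id:i:2
    full address:s:backupserver.example.com
    connect to console:i:1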
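
For the directory-splitting problem in posts 5, 9, and 12: a minimal sketch, in Python, of the kind of script post 5 describes - bucketing a flat Users folder into A-J and K-Z subdirectories so each bucket can be defined as its own subvolume. The root path and the split point are assumptions, and the LDAP updates and .plist path rewrites mentioned in post 5 still have to be handled separately.

    #!/usr/bin/env python3
    # Hypothetical sketch: move flat home directories into A-J / K-Z buckets
    # so each bucket can be backed up as a separate subvolume.
    # USERS_ROOT and the two-way split are assumptions, not Retrospect settings.
    import os
    import shutil
    import string

    USERS_ROOT = "/Users"            # assumed location of the home directories
    A_J = set("abcdefghij")
    BUCKETS = {"A-J": A_J, "K-Z": set(string.ascii_lowercase) - A_J}

    def bucket_for(name):
        # Return the bucket for a user directory, or None for non-letter names.
        first = name[:1].lower()
        for bucket, letters in BUCKETS.items():
            if first in letters:
                return bucket
        return None

    for bucket in BUCKETS:
        os.makedirs(os.path.join(USERS_ROOT, bucket), exist_ok=True)

    for entry in sorted(os.listdir(USERS_ROOT)):
        src = os.path.join(USERS_ROOT, entry)
        if entry in BUCKETS or not os.path.isdir(src):
            continue  # skip the bucket folders themselves and plain files
        bucket = bucket_for(entry)
        if bucket:
            shutil.move(src, os.path.join(USERS_ROOT, bucket, entry))

Splitting the volume this way also keeps the file count per snapshot down, which is what posts 2 and 6 suggest drives the long "matching..." waits and the hours-long restore browsing.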