SimonHobson

Forum posts
  1. I have it on a number of Debian machines. I manually pulled the files from the package and created (well, adapted another) init file - a rough sketch of the extraction is below. You also have to install 32-bit libraries on a 64-bit system. It makes a mockery of the claim to support Linux clients when really they don't - providing a couple of out-of-date installers that don't support the majority of installations doesn't really count IMO. There's no excuse (other than "can't be a**ed") for not providing 32- and 64-bit clients in both RPM and DEB formats, which would cover the majority of current Linux systems.
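    Roughly, assuming you're starting from the RPM installer (the .rpm filename here is illustrative - the tar version can just be unpacked instead):

        # rpm2cpio comes in Debian's "rpm" package; ia32-libs supplies the
        # 32-bit libraries the client needs on amd64
        apt-get install rpm ia32-libs
        rpm2cpio retroclient-70.rpm | cpio -idmv
        # then copy the extracted files into /usr/local/dantz/client and
        # adapt an existing /etc/init.d script to start the client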
  2. RetroEngine using 100% CPU while idling

    Not one for "me too" posts, but I arrived at this thread after having to force-quit the server yet again. In my case, it seems to happen any time I do something like browse a backup, or if I've terminated an in-progress backup - I don't recall it happening after a scheduled backup that completed normally.
  3. I vaguely recall a similar thread from some years ago. The only solution offered (apart from persuading the network admin that allowing selected subnet broadcasts wouldn't turn his expensive switch into a hub) was to add each possible remote address as a /32 subnet. The server would then check each address individually - hence creating more traffic (and more server load and delays) than if the network admin allowed selective broadcasts.
  4. Help with Unix Path rules

    Think I've got it now. Here's a list of things I found.
    That didn't work:
    - Using the preview (which must be what I was looking at earlier).
    - Using "folder" "unix path" "is like" "/store*/myth/*"
    That did work:
    - Using "folder" "unix path" "is like" "/store1/myth/*"
    - Using "folder" "unix path" "is" "/store1/myth/"
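    For anyone who thinks in shell terms, the working rule behaves roughly like this find expression (just an analogy for the matching, not how Retrospect evaluates it):

        # everything under /store1 except the myth directory and its contents
        find /store1 -path /store1/myth -prune -o -print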
  5. Help with Unix Path rules

    Sorry for not getting back earlier - for some reason I didn't get notified of a reply. IIRC (thinking back) I did try actually running the backup, not just looking at the preview. I'll have another look at it later when the engine isn't busy. I've since found a couple of other threads on similar problems - at least I'm not alone!
  6. Can someone help me with the format for unix paths in rules? I'm trying to set up a proactive script that will back up a Linux box, but exclude certain directories (loads of large TV recording files for MythTV). So I have a volume mounted on /store1, and within that a directory called myth that I want to exclude - but without excluding other random directories on the system that might be called myth. Also, for this script, I'm backing up /store1 on its own. In rules I've selected Folder -> Unix Path -> is, and then the path I want to exclude. /myth/ (relative to the mountpoint) and /store1/myth/ (relative to filesystem root) don't seem to work, so have I got the format wrong? The manual says absolutely nothing about the format of what goes in the text box.
  7. Retrospect 8.0 and PowerPC

    Can you guarantee that we won't miss out on any initial upgrade offers (if there are any) by waiting? I also need G4 support - I can't afford to buy an Intel-based computer since I'm still using a SCSI-attached tape drive.
  8. ATTO UL5 and 10.4.10

    Hmm, sounds like my problem as well :-( Hardware is an Atto UL4D in a Blue&White G3 with a Tandberg SLR100 drive.
  9. Linux client is "deferred" -- why?

    I can't say WHY, but I think I may have found a workaround for this. After enabling a second NIC and setting the box up as my internet gateway, I found that I could not use the client at all - it just doesn't seem to like being multihomed, even when configured to only respond to one address. Following a hint I picked up from another thread, I added the client by IP address, and not only did it re-appear but it could also be used by the backup server - ONCE. So I added a cron job to stop and restart the client every night (sketched below) and it now works with the backup server. This is version 7.0.110, installed from tarball on Debian AMD64 using 32-bit libraries.
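    The cron entry, roughly - I'm assuming the init script ended up at /etc/init.d/rcl as it did on my install, so adjust the path and time to suit:

        # /etc/cron.d/retroclient - restart the Retrospect client nightly at 01:00
        0 1 * * * root /etc/init.d/rcl stop && /etc/init.d/rcl start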
  10. Linux client on Debian Etch AMD64

    For the benefit of anyone else in the same situation ... I recently replaced my Suse system (6 years old, hard disk died) with an AMD64 running Debian Etch AMD64, and have got the Linux Client 7.0.110 to run (mostly). These are (I think!) the steps I took - unfortunately I tried a few things and can't be certain of the exact steps required (they're pulled together as a single script below):
    1. Download the .tar version of the client and stick it somewhere convenient.
    2. If not already installed, install package ia32-libs (apt-get install ia32-libs), which provides the 32-bit libraries needed to run 32-bit applications.
    3. Untar it (tar xvf retroclient-70_linux.tar). You now have two files, RCL.tar and Install.sh.
    4. I think I just ran the install script (./Install.sh), which mostly installed stuff. If it doesn't work, copy the files contained in RCL.tar (tar xvf RCL.tar) into /usr/local/dantz/client (which you'll have to make).
    5. Make sure the script rcl gets copied to /etc/init.d and that a symlink is created in /etc/rc2.d (cd /etc/rc2.d ; ln -s ../init.d/rcl S99rcl).
    6. Start the client (/etc/init.d/rcl start) and check the logs to see if it started.
    7. Make any configuration changes, eg "retrocpl -exclude on" to turn on handling of the /etc/retroclient.excludes file.
    The only thing left is to find out why it appears as Deferred in Backup Server - as in this thread: Linux client is "deferred" -- why?
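    As one script (same caveat as above - I pieced this together after the event, so check each step on your own system first):

        #!/bin/sh
        # Retrospect Linux Client 7.0.110 on Debian Etch AMD64 - run as root
        apt-get install ia32-libs             # 32-bit runtime for the 32-bit client
        tar xvf retroclient-70_linux.tar      # yields RCL.tar and Install.sh
        ./Install.sh || {                     # fall back to a manual install
            mkdir -p /usr/local/dantz/client
            tar xvf RCL.tar -C /usr/local/dantz/client
            cp /usr/local/dantz/client/rcl /etc/init.d/rcl   # wherever rcl ended up
            ( cd /etc/rc2.d && ln -s ../init.d/rcl S99rcl )
        }
        /etc/init.d/rcl start                 # then check the logs
        retrocpl -exclude on                  # honour /etc/retroclient.excludes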
  11. Linux client is "deferred" -- why?

    Interesting. I've just got client version 7.0.110 running on a Debian Etch AMD64 system and have exactly the same problem. At least I've got it working at all!
  12. Quote: Thanks for the reply. I'd seen numbers in the 1400's in the windows forums, so I was a little curious (they were backing up to hard drives...).
    Doh, hard drives are a LOT faster than tape! I've seen 1800MB/min regularly from a Linux client to FW800 hard disks on a 1.8GHz xServe, but about 400 max to SLR100 (about the rated streaming data rate for the drive, with a little bit of compression taking place). Looking at http://www.exabyte.com/products/products/specifications.cfm?id=400, I see the quoted transfer rate as 21.6GB/hr, which is about 360MB/min (coincidentally the same as the SLR100) - the arithmetic is below. You can multiply that by whatever compression ratio you get and that will be your limit - reached only when copying large files where the overhead of cataloging etc is low. If anything holds up the data (slow client, lots of small files causing a lot of housekeeping, etc) to the extent that you cannot feed the drive at its streaming rate, then the tape will stop, back up, and set off again when more data is available - this drops throughput VERY dramatically.
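    The quick arithmetic (decimal units, as the drive spec sheets use; the 1.5:1 compression ratio is just an assumption for illustration):

        # 21.6 GB/hr native, converted to MB/min
        echo $(( 21600 / 60 ))          # 360 MB/min native ceiling
        # scaled by an assumed 1.5:1 compression ratio
        echo $(( 21600 * 3 / 2 / 60 ))  # 540 MB/min on compressible data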
  13. I sympathise with your problem - I used to have similar problems, but not half as acute as we didn't have too many laptops and they were in the office quite a lot. What you really need is a multi-tasking backup server just like most other backup packages have had for years, but since I don't see that happening with Retrospect for a long time (if ever), here are a few tips that might help.
    - The backup server benefits from processor power and memory; both are under heavy usage during the scan & compare phases. At my last job we upgraded to an xServe when the old (upgraded) G3 just couldn't keep up - in particular, if you have 100 clients and it takes 5 minutes to do a scan & compare, then it's going to take over 8 hours a day to get round all the clients even without spending any time copying data (the arithmetic is sketched at the end of this post).
    - Faster backup media helps. Tape is relatively slow to transfer data, and you waste time seeking. We switched to external Firewire drives and it made a HUGE difference. The way hard disk prices have come down over the last few years, they aren't even that expensive compared with good performance/capacity tapes (but you do need to handle them with some care).
    As a performance indicator, I had a 1.8G xServe with 1.5G RAM and LaCie external disks hooked up with FW800. We had around 80 Windows clients (nearly all XP) and 15-20 Macs (mostly in the design dept with large files). In addition I was backing up two Linux servers, and another xServe with design dept files on it (about 300G when I left). My network switch had two gigabit ports on it, so I had the backup server and one of the others (a Linux box IIRC) on these ports - it could do 1.8GB/min backup rate using gigabit! All the other clients were on 100M ethernet. When starting a fresh backup cycle, it would be about two days before we had a full backup on the majority of clients - with a week bringing us down to just one or two clients 'missing' because they weren't on the network very often. IIRC it took about 1 1/2 minutes/client to do a scan and compare on the Windows clients, up to 1 1/2 hours for some of the Macs (never did work out why; most were only a few minutes).
    To illustrate the difference the backup media makes: before switching to disks we used SLR100 tapes, and it was a minimum of a week to get about 95% of clients with at least one backup - probably two weeks before we had what we could consider a nearly complete backup. The rest of the setup was the same. A lot of the difference of course was down to the limited capacity of the tapes, meaning that the server could only do so much overnight/at weekends before it ran out of tape.
    - There is a slight risk, but if you have sufficient hardware, you could consider copying one backup archive to the newly recycled next archive before backing up the clients. This way you avoid having to copy the entire client disk contents once at the start of each cycle. The risk, of course, is that any bad files in the archive will be propagated to the next backup set.
    - Consider adding an additional week of incremental backups to fresh media. Eg, instead of doing "Aa a Bb b Cc c Aa a Bb b ..." you might do "Aa a a Bb b b Cc c c Aa a a Bb ...", where 'Aa' means a full backup to a recycled set plus incrementals for a week, and 'a' means doing incremental backups for a week. So instead of doing a full backup every two weeks, you are only doing it every three weeks. If you haven't filled the media after (say) a week, tell Retrospect to skip to new media anyway so that you can take the previous week's media off-site. We took this out to 4 weeks - the trade-off is the quantity of media you need to do a full restore (and of course the risk of it getting wiped/damaged while it's back on-site).
    - Consider setting up two schedules - one that copies everything but only runs once a week, another that only copies user data but runs every day. That way, when you reset the backup set, the full backups are spread out a bit and so give a better chance to copy the users' data (which, being smaller, will take less time to copy).
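    The scan & compare arithmetic that pushed us to faster hardware:

        # 100 clients x 5 min scan & compare each, even with no data copied
        echo $(( 100 * 5 / 60 )) hours $(( 100 * 5 % 60 )) minutes   # 8 hours 20 minutes a day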
  14. From memory (it's been a while since I've run Retrospect in a business environment and now I use an old G3 at home), Retrospect is a cycle hog - it's clearly never been updated for the multitasking world - yet another example where it's been left to stagnate while the world has moved on. I'm sure that if we could see the source we'd find loads of loops like:
    1. is there a packet in the input buffer?
    2. if not, goto 1
    3. do something with it
    4. goto 1
    For example, during a backup I suspect it sits there in a loop continually looking to see if there is a packet in the input buffer - rather than invoking a system call that will suspend the process until something is put there by the OS. The result of this is that even when doing very little, all normal indications of CPU usage will indicate that the machine is running flat out - because it actually is. It's running flat out going round in tight loops doing nothing. The nearest analogy I can think of: instead of waiting for the familiar clatter of the letterbox, you get up, walk to the front door to see if there's any post; when there isn't, you go and sit down again; repeat immediately until some post arrives. When you get something, you deal with it, then resume walking to the front door until some more arrives. In a business environment you could substitute going to reception instead of waiting for the receptionist to bring it round to you. All this polling was fine in the days when you weren't multitasking and hardware didn't have power saving modes - if you didn't use the cycles for polling, the system would use them for an idle loop. These days it simply wastes cycles that could either be given to another program or saved by dropping into a power saving mode. Having said all that, I do seem to recall that on our xServe the CPU usage did drop below 100% while polling clients - but not by a huge amount. (The polling-versus-blocking difference is easy to demonstrate - see the sketch below.)
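    A quick shell demonstration of the difference (inotifywait is from the inotify-tools package; the watched file is just an example):

        # busy-wait polling - pegs a CPU core even though nothing is happening
        while [ ! -s /tmp/inbox ]; do :; done

        # blocking wait - the kernel suspends the process until the file changes,
        # so CPU usage sits at zero in the meantime
        inotifywait -e modify /tmp/inbox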
  15. Welcome to the Retrospect backwater where development has passed by without notice ... in other words, in spite of the number of other packages that do this out of the box, Retrospect still can't in any meaningful way! I have tried to simulate this in the past without too much success. Here are a few tips:
    - You can set up a Duplicate operation to update the local disk to be an exact copy of the client. You cannot do this in Backup Server mode, only on a timed schedule basis (so if the client isn't on at the scheduled time, it doesn't get done).
    - You cannot do this with a Windows client to a Mac server, as important file information will be lost. To do Windows clients you need to duplicate to a Windows disk - I suspect you can do this to a Windows client with a Mac server, but I haven't tried. When I was doing it, I had an old Windows server and a spare Retrospect licence, so I set that up separately to my main backup server.
    - The useful feature that eliminates duplicates (ie doesn't copy the contents of a file, just information that it's there, if the same file has already been copied into the archive) doesn't work within a source. So whilst the space used for a lot of application and system files is saved when copying them individually, this doesn't happen when copying your intermediate backup - thus enlarging your archive media requirements. You can get around this by backing up the intermediate store subvolume by subvolume - but this then makes even more work.
    - Where I did find the technique useful, but ultimately too much administration, was to reduce the effect on clients of the first backup to a recycled archive. By having a local disk with a fairly up-to-date copy of a client, I could back that up first - with the result that when the client itself is backed up, most of the files on it are already in the archive and so don't need to be copied.