  1. Yeah, I had checked out those threads.

    * Both the Client and Server are x64 executables.
    * The Retrospect Server is configured with 4 execution units, and currently has over 20 GB free memory.
    * The Retrospect Server's C: drive has over 1 TB of free space.

    I'm going to try the backup one more time, then just split it into two backups. I was hoping there was some way to figure out how a 64-bit system with over 100 GB of unused RAM running a 64-bit application could be "out of memory".
  2. Server: Windows 2008 R2 (x64), 32 GB RAM, Retrospect Server (x64)
    Client: CentOS 6.9 (x64), 192 GB RAM, Retrospect Client (x64)
    /backup/folder: ~3.1 TB, ~4,615,606 files

    Yes, it has over 4 million files, but it had been backing up fine until September 1st (and another backup with over 9 million files has been working). The folder hasn't been touched in months, so nothing in it has changed. The client has plenty of memory free (over 100 GB, actually). The system itself hasn't been giving any errors.

    The log on the Server just says this:

    - 9/8/2017 3:06:18 AM: Copying /backup/folder on server.local: Scanning incomplete, error -625 (not enough memory)

    The log on the Client just says this:

    9/8/2017 3:06 AM: Script "2017-09-Backup" (SERVER): /backup/folder, Execution incomplete

    I noticed that it starts to scan /backup/folder and then gives up around two hours later. What's the best way to figure out WHY it suddenly doesn't have enough memory? What are the limitations for Retrospect 10.5?
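    As a first step, a minimal sketch of what I plan to watch on the client while the scan runs (the retroclient/retropds process names are assumptions; adjust them to whatever ps actually shows):

        # Confirm how many entries Retrospect has to track during the scan:
        find /backup/folder | wc -l

        # Log the client processes' memory use every 5 minutes during the scan:
        while true; do
            date
            ps -eo pid,rss,vsz,comm | grep -E 'retro(client|pds)'
            sleep 300
        done >> /tmp/retro-mem.log

    If the resident size climbs toward a 32-bit-style ceiling (roughly 2-4 GB) right before the -625 appears, that would at least narrow down where the limit actually is.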
  3. My first guess is that this is caused by block-level backups. It didn't work on any of our Mac clients, so we turned it off. I'm guessing we need to just stop using it altogether. A big reason we upgraded from the old 7.7 server to the new 9.5 server was to take advantage of block-level backups! I can understand block-level backups "not working", but it seems to completely crash our Linux clients.

    Server: (Windows x64)

    --- Server1 (32-bit), client (32-bit)

    Nov 20 17:00:17 ServerName1 Retrospect[30253]: fetFileSpec: unexpected EOF on "/share/lab/home/user1/.mozilla/firefox/abc123.default/places.sqlite-wal"
    Nov 20 17:00:17 ServerName1 Retrospect[98651]: TransReadPipe: read loop error. transaction possibly terminated.
    Nov 20 17:00:17 ServerName1 kernel: pid 30253 (retropds.23), uid 0: exited on signal 11 (core dumped)

    The Retrospect server reports this error: Trouble reading files, error -557 ( transaction already complete)

    --- Server1 (32-bit), client (32-bit)

    Nov 25 17:43:41 ServerName1 Retrospect[32794]: fetFileSpec: unexpected EOF on "/share/lab/home/user2/Projects/ImportantFile.xlsx"
    Nov 25 17:43:41 ServerName1 Retrospect[41014]: TransReadPipe: read loop error. transaction possibly terminated.
    Nov 25 17:43:41 ServerName1 kernel: pid 32794 (retropds.23), uid 0: exited on signal 11 (core dumped)

    The Retrospect server reports this error: Trouble reading files, error -557 ( transaction already complete)

    --- Server2 (32-bit), client 7.7.100 (32-bit)

    Nov 26 17:21:57 ServerName2 Retrospect[9767]: fetFileSpec: unexpected EOF on "/etc/webmin/system-status/info"
    Nov 26 17:21:57 ServerName2 kernel: [83564.889072] retropds.23[9767]: segfault at ffffffff ip b76d2e72 sp bfaf3d88 error 5 in libc-2.15.so[b759c000+1a4000]

    The Retrospect server reports this error: Trouble reading files, error -540 ( trouble creating service)

    In each case, the cause seems to be the same: Retrospect is trying to back up a file that has changed size (gotten smaller, in these cases), and the client gets the "unexpected EOF" error. But WHY does the client crash like that? If a file is 200 KB during the initial scan and only 150 KB when it tries to back it up (which causes it to freak out for some reason), shouldn't it just continue to the next file? Did something in exception handling break?

    Searching for these "unexpected EOF" errors under Linux gives me hits from 2009, 2007, 2006, etc.:

    http://forums.retrospect.com/index.php?/topic/25915-unexpected-eof-when-backing-up-a-linux-client/
    http://forums.retrospect.com/index.php?/topic/19454-error-540-trouble-creating-service-and-unexpected-eof/
    http://forums.retrospect.com/index.php?/topic/18634-error-540/
    http://forums.retrospect.com/index.php?/topic/18343-fedora-5-unexpected-eof-waitpid-failed-trouble-creating-service/

    Why would this issue pop up with the latest Retrospect Server?
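    If it really is the size change that triggers the crash, a rough reproduction on a test client would be something like this (the file name and sizes are made up for illustration):

        # Create a ~200 KB file so it gets picked up during the scan phase:
        dd if=/dev/zero of=/tmp/shrinktest bs=1024 count=200

        # After the scan finishes but before the copy phase reaches this file,
        # shrink it to ~150 KB and see whether retropds segfaults again:
        truncate -s 153600 /tmp/shrinktest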
  4. I just found this thread when trying to Google for the weird output I get with the new 9.5 client. Running /usr/local/retrospect/client/rcl status sometimes gives me one of these messages:

    PmcOpenLocal: Handle 3 closed
    PmcClose: Handle 3 closed

    Nothing seems wrong with how it works; I was just wondering what those messages mean. I don't recall seeing them with the 7.7 client. I'm running this on FreeBSD 9.3.

    I have notes for (really basic) scripts that clean up the old /usr/local/dantz install in preparation for the new /usr/local/retrospect install. I also have stuff for converting the Linux install (32-bit only) to work with FreeBSD, and I've made .deb packages (7.7 only, so far) for Debian/Ubuntu. Most of it seems to "just work", other than weird messages in the logs or on stdout.
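    The cleanup part is nothing more than a minimal sketch like this (the "rcl stop" subcommand is an assumption on my part, since only "rcl status" appears above; stop the old client however your version allows before removing anything):

        #!/bin/sh
        # Remove the old 7.7 client install before installing /usr/local/retrospect.
        # "rcl stop" is assumed; kill the client process manually if it doesn't exist.
        /usr/local/dantz/client/rcl stop 2>/dev/null
        rm -rf /usr/local/dantz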
  5. Catalog file size: ~1.1 GB
    Files: ~1,000,000
    Sessions: ~245
    System: old dual-core Xeon @ 3 GHz, 12 GB RAM, Retrospect 7.7

    We've been backing up a server since January. Today, someone needed a file. No biggie, I figured. I go to open the catalog, and it's been sitting at this for an HOUR now:

    I'm guessing we should be doing 3-month rotations on the tapes instead of 6-month. Does anyone know how long this will take? It's missing other backups while it sits, trying to restore from this one.
  6. We had old Exabyte tape libraries. They kept dying. Now we have Tandberg; the Magnum 224 is what we're using now. We've bought at least three of them. Only one is still working. Every library keeps busting. We've spent thousands on these things, and the support contracts are something like $1600/year each. They break more than I'd like, and support hasn't been the greatest: failed power supply, failed drive, failed library. The latest issue is that the robot can't identify tapes any more. It says all labels are invalid, it keeps suffering read errors, it hangs when trying to move tapes, etc. It's been a week, and I'm still waiting to hear back from Tandberg. I just gave them another $1600, and this is the "support" we get. We're a non-profit, so paying some company $1600-$3200 a year to not help us stings a little.

    With the latest scenario, no one has been here to take a look at the library, but they mailed us another tape drive for it. We have hundreds of terabytes of data backed up to tape, so I don't want to move to an HDD-based backup yet. I'm fine with tapes. Does anyone have a recommended tape library?

    Our existing setup:

    * 2U rackmount
    * 24 tapes
    * LTO-4 drive
    * fiber channel connection

    I'd like to stick with something like that.
  7. FreeBSD and Solaris (x86) support?

    I got it working perfectly on FreeBSD 9.1; it backs up like a champ. Better performance than on our Linux, Mac, and Windows systems as well! Some instructions and a startup service I put together for FreeBSD: http://xenomorph.net...are/retrospect/
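    The startup service itself is just a minimal start/stop wrapper, roughly like this (the "rcl start"/"rcl stop" subcommands and the install path are assumptions; adjust for your client version, e.g. /usr/local/dantz/client for 7.7 or /usr/local/retrospect/client for 9.x):

        #!/bin/sh
        # Minimal start/stop wrapper for the Retrospect client running under
        # FreeBSD's Linux binary compatibility. The rcl subcommands are assumed.
        RCL=/usr/local/dantz/client/rcl

        case "$1" in
        start)
            $RCL start
            ;;
        stop)
            $RCL stop
            ;;
        *)
            echo "Usage: $0 {start|stop}"
            exit 1
            ;;
        esac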
  8. I just found out the Solaris client is for the old SPARC systems (1980s-1990s), and not for any current x86-based Solaris system. The Linux client can be run under FreeBSD (with Linux binary compatibility installed), but it cannot see any mounted drive. Are these platforms really something Retrospect doesn't care about?
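    For anyone trying the same thing, getting the Linux compatibility layer in place on FreeBSD 9.x is roughly the following (a sketch only; it assumes the linux_base port is already installed, and mounting linprocfs may or may not help with the missing-mounts problem):

        # Load the Linux binary compatibility module now and enable it at boot:
        kldload linux
        echo 'linux_enable="YES"' >> /etc/rc.conf

        # Give Linux binaries a Linux-style /proc (add to /etc/fstab to make it permanent):
        mount -t linprocfs linproc /compat/linux/proc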
  9. Retrospect Insights And 2011 Roadmap

    Hopefully the 64-bit client fixes the "not enough resources" errors with massive numbers of files.
  10. So you're saying that the issue is with backup size, and not the number of files?
  11. Retrospect keeps giving this error for most files:

    can't read, error -1019 ( not enough resources)

    Windows Server 2003 R2 x64 (up-to-date software, drivers, & BIOS as of July 15th, 2011)
    4 GB RAM
    Volume size: ~2.2 TB, ~1.4 TB in use
    Number of files: ~1.1 million
    Retrospect version: 7.7.562

    On the old Dantz forum, someone from Retrospect said this on June 21st, 2010, regarding the limitations of the software:

    I thought it was a memory issue, and this link mentions how to modify the pool settings: http://support.microsoft.com/default.aspx?scid=kb;en-us;304101

    That only applies to 32-bit Windows, as the PoolUsageMaximum and PagedPoolSize settings are different in x64 Windows (and already default to much higher). Just to be sure, I changed them and rebooted, but it made NO difference.

    All these files were backed up fine with the Linux client, but now on a Windows system the client is having trouble reading all the files. The server version (7.7) is the same. We no longer have the Linux server, so these files must remain on a Windows system. How can we get the Retrospect client for Windows to back up the files? After copying around 523,000 files, it then gives the "not enough resources" error for the remaining ~620,000 files.
  12. Wow. Lots of questions.

    Client version is 6.1.130. Yes, 6.1.130. Provided directly by Dantz/EMC/Roxio/whoever they are now. This download link: http://download.dantz.com/archives/Client_OS_X-6_2_234.dmg provides 6.1.130, regardless of what the file name says it is. Go ahead, download it and check. 6.1.130 is in the DMG.

    I'm using 6.1.130 because it works under 10.2.x. I had installed 6.3.029, but it would not load because it requires a newer version of Mac OS X (good thing it INSTALLS on 10.2 when it doesn't even work under 10.2!). So I went back to 6.1.130.

    Not all Mac versions have an Uninstall option to use; manual uninstall is the only option I've seen. 6.1's installer does give an Uninstall option. The 6.3 installer did not. Going by this thread: http://forums.dantz.com/showtopic.php?tid/29180/ , I deleted these files:

    /Library/Preferences/retroclient.state
    /var/log/retroclient.history
    /var/log/retropds.log
    /Library/StartupItems/Retro*

    I deleted everything that had Retro or Dantz in its name. I made sure the Retrospect client wasn't running, and everything was deleted, moved to the trash, trash emptied, etc. I rebooted, I ran fsck, I checked permissions, and I rebooted again. I made sure there were no Mac OS X updates pending, either.

    We have a lot of installs of the Retrospect client that give errors all the time. "Backup Client Reserved" is one of the most common. Reserved? Reserved for what? So I started updating ALL clients to the latest Retrospect version. Unfortunately, 6.3.029 installed on 10.2.x but will NOT run on 10.2.x. So I removed it (rm command) and installed 6.1.130. Now it is giving -1101 errors. It worked fine before, for YEARS. But now it won't. I've read in other threads that some Mac OS X users have to remain logged in, or they get a -1101 error as well.
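    For reference, the manual removal boils down to something like this (a sketch only, run as root after stopping the client; exact file names can differ between client versions):

        # Remove the client pieces listed above:
        rm -f /Library/Preferences/retroclient.state
        rm -f /var/log/retroclient.history /var/log/retropds.log
        rm -rf /Library/StartupItems/Retro*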
  13. We have an old Mac OS X 10.2.x system that a user wishes to have backed up. It had been working fine, until the 6.3 client was installed. 6.3 installed, but then gave an error that it needed a newer version of OS X. We removed 6.3, and put 6.1.130 back on it. Now, every time we try to back it up, it doesn't even make it to 1 file. It immediately gives the error -1101, file/directory not found. We first tried renaming the hard drive from "Macintosh HD" to just "HDD", thinking maybe the space was causing trouble. Why won't it back up any more? Is there a way we can figure out what file it is looking for that it keeps failing on?
  14. It has both 100 Mb and Gigabit NICs. I'm just using the 100 Mb connection right now (I was troubleshooting an unrelated driver issue with the Gigabit connection). It had worked before on 100 Mbps (for *years*), and it seems to be doing fine on the current client.

    The issue seems more like a per-client problem. On some systems it just zips by, and on others (almost identical computers), it drops back to just a few KB a second. We do support for a lot of almost-remote users - these people are in other buildings next to ours (we have fiber connections between locations). I thought it was just one problem system, but more and more backups are "stalling" in the same way. Three systems will complete like normal, then the backup will hang on the fourth system. I try to skip that one, and it hangs on the fifth system.

    Would I have to do some sort of packet analysis to find out what is using all the bandwidth? We have monitoring systems in place that aren't reporting any unusual traffic.
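    If it does come down to packet analysis, a minimal capture sketch would be something like this (the interface name and CLIENT_IP are placeholders, and it assumes the Retrospect traffic is on the standard port 497):

        # Capture traffic between the backup server and one stalling client on
        # port 497, saving it for later analysis:
        tcpdump -i eth0 -n -w retro-stall.pcap host CLIENT_IP and port 497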
  15. You know, I was thinking about something like that earlier - it is using 100 Mbps right now instead of gigabit, and Network Utilization is at 99%. Would it really need that much bandwidth just to scan other systems to make a list of files to back up?