Xenomorph last won the day on March 12 2014

  1. This is similar to my last thread: http://forums.retrospect.com/topic/154613-optimizations-for-10-gbps-networking-or-is-this-just-the-limit-of-lto-5/

     Some things have changed; some haven't. I have even more systems connected at 10 Gbps, and I've moved to faster and denser LTO-6 tapes and tape drives. I'm also now backing up close to 500 TB of data, and I find myself falling further and further behind because Retrospect itself seems to drag its feet.

     Server: Windows Server 2008 R2 w/ 32 GB RAM, running Retrospect Multi Server, version
     Client: Windows Server 2008 R2 w/ 32 GB RAM, running Retrospect Client, version

     The Server and all Clients are connected with 10 Gbps Ethernet. Direct client-to-server file copies over SMB easily hit 600-900 MB/sec. When backing up the exact same files, Retrospect only copies at 60-90 MB/sec (3704 to 5241 MB/min). I can have Retrospect back up to tape, local disk, or even a RAM drive; it never gets above 60-90 MB/sec. Task Manager shows the Retrospect.exe process using minimal CPU and RAM.

     For the latest test copy and backup, I selected four Windows ISOs, around 3.5 GB each, 14 GB total, and configured a 16 GB RAM disk on both Client and Server. This ensures that I'm NOT dealing with disk I/O or tape drive performance issues: these are direct RAM-to-RAM file copies over a 10 Gbps network.

     * Windows Explorer showed "0.99 GB/sec" when doing a drag-and-drop copy from Client to Server over the network. This is about what I'd expect.
     * Retrospect Backup (compression off) got just ~90 MB/sec backing up from the same source (a RAM disk on the Client) to the same destination (a RAM disk on the Server) over the network. This is no better than when I back up to tape or disk, or when backing up over a 1 Gbps connection.
     * Retrospect Backup (compression off) did get 176-226 MB/sec backing up from one RAM disk to another RAM disk on the same system, which eliminates the network completely.

     Does anyone else do backups over 10 Gbps networks? What is the bottleneck with the backup? Does the Retrospect client application hold things back? Does the Retrospect server application? Are there any tweaks or modifications I could make to the Retrospect server or client application?
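The RAM-disk experiment above isolates Retrospect from storage, but it can also help to measure what a single TCP stream achieves outside any backup software: a ceiling around 90 MB/sec on a 10 Gbps link smells like per-stream application behavior rather than raw link speed. A minimal sketch (loopback for self-containment; pointing the host at the real client would test the actual 10 Gbps path; chunk and transfer sizes are arbitrary choices):

```python
import socket
import threading
import time

CHUNK = 1 << 20          # 1 MiB per send
TOTAL = 256 * CHUNK      # 256 MiB transferred per measurement

def recv_all(conn, total):
    # Drain the socket until the expected byte count has arrived.
    remaining = total
    buf = bytearray(CHUNK)
    while remaining:
        n = conn.recv_into(buf, min(CHUNK, remaining))
        if n == 0:
            break
        remaining -= n

def measure_throughput(host="127.0.0.1"):
    """Return single-stream TCP throughput in MB/sec."""
    srv = socket.socket()
    srv.bind((host, 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def server():
        conn, _ = srv.accept()
        with conn:
            recv_all(conn, TOTAL)

    t = threading.Thread(target=server)
    t.start()

    payload = bytes(CHUNK)
    with socket.create_connection((host, port)) as cli:
        start = time.perf_counter()
        for _ in range(TOTAL // CHUNK):
            cli.sendall(payload)
        # Half-close so the timing covers delivery, not just queuing.
        cli.shutdown(socket.SHUT_WR)
        t.join()
        elapsed = time.perf_counter() - start

    srv.close()
    return (TOTAL / (1 << 20)) / elapsed

if __name__ == "__main__":
    print(f"single-stream TCP: {measure_throughput():.0f} MB/sec")
```

If the raw single-stream number sits near line rate while Retrospect stays at 90 MB/sec, the bottleneck is in the application's transfer protocol rather than the network.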
  2. Engineering came to my rescue! It looks like this is fixed in Server version and newer (still in internal testing, as of May 23rd, 2019). A server upgrade is therefore required if you intend to use Retrospect on FreeBSD 11.2+ or FreeBSD 12.0+.
  3. I had a similar error back in 2014, but it was an unrelated issue.

     The platform is CentOS 7 on a FreeBSD kernel (FreeBSD 11.1 and older versions didn't experience this issue, but the current 11.2 and 12.0 kernels report the lack of xattr information differently). Basically, the FreeBSD kernel doesn't allow the Linux-based Retrospect client to read the extended attributes (xattrs) of files, so Retrospect refuses to back those files up. I don't need extended attributes; I just need the files backed up!

     The Retrospect Client is logging this for every file encountered:

        fetFileSpec: ExtAttrGetData() failed with error 61

     When tracing system calls on retropds.23, I see this:

        linux_llistxattr() ERR#-61 'Attribute not found'

     Basically, as expected, it can't get the extended attributes. The Retrospect Server logs this differently, though:

        can't read, error -1114 (unexpected end of file)

     None of the files are changing in size, so there is no "unexpected end of file". Retrospect has full read/write access on the drive (obviously, since it is what places "retropds.23" there). It looks like the Retrospect Server is interpreting the message from the Retrospect Client incorrectly!

     I've tried reporting the problem to Support on the Retrospect website, but since I'm running an "unsupported configuration" (one that's worked fine for the past 6+ years), they won't even consider fixing the bug. Does anyone know of a way to get Retrospect to ignore xattr information? Or to get the Server NOT to skip the files?
  4. Server: Windows 2008 R2 w/ 32 GB RAM, running Retrospect Multi Server version
     Client: CentOS 7 Linux w/ 64 GB RAM, running Retrospect Client version

     The LTO-5 tape drive is connected to the server via 8 Gbps fiber. Client and Server are connected to each other with 10 Gbps Ethernet. Direct client-to-server file copies over SMB easily hit 450-650 MB/sec, and the client's disks are set up in a RAID60 configuration and able to deliver data quickly.

     Yet when doing backups with Retrospect, I'm getting about 60-80 MB/sec ("4358.7 MB/min" on the last backup job). The CPUs on both the client and server are mostly idle, the memory is mostly free, and the NICs are barely utilized. Isn't LTO-5 supposed to be able to reach 140 MB/sec (or 280 MB/sec compressed)? Is this to be expected? Where should I start troubleshooting something like this?
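For reference, Retrospect's MB/min figures convert to MB/sec by dividing by 60, which makes the shortfall against the drive's rated speed explicit:

```python
def mb_per_sec(mb_per_min):
    """Convert Retrospect's reported MB/min to MB/sec."""
    return mb_per_min / 60

# The last backup job reported 4358.7 MB/min:
rate = mb_per_sec(4358.7)
print(f"{rate:.1f} MB/sec")        # prints "72.6 MB/sec"

# Against the 140 MB/sec LTO-5 native streaming rate quoted above:
print(f"{rate / 140:.0%} of LTO-5 native")   # prints "52% of LTO-5 native"
```

Feeding an LTO-5 drive at roughly half its native rate also risks shoe-shining (the drive repeatedly stopping and repositioning), which can depress throughput further.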
  5. Yeah, I had checked out those threads.

     * Both the Client and Server are x64 executables.
     * The Retrospect Server is configured with 4 execution units, and currently has over 20 GB of free memory.
     * The Retrospect Server's C: drive has over 1 TB of free space.

     I'm going to try the backup one more time, then just split it into two backups. I was hoping there was some way to figure out how a 64-bit system with over 100 GB of unused RAM, running a 64-bit application, could be "out of memory".
  6. Server: Windows 2008 R2 (x64), 32 GB RAM, Retrospect Server (x64)
     Client: CentOS 6.9 (x64), 192 GB RAM, Retrospect Client (x64)
     /backup/folder: ~3.1 TB, ~4,615,606 files

     Yes, it has over 4 million files, but it had been backing up fine until September 1st (and another backup with over 9 million files has been working). The folder hasn't been touched in months, so nothing in it has changed. The client has plenty of free memory (over 100 GB, actually), and the system itself hasn't been giving any errors.

     The log on the Server just says this:

        - 9/8/2017 3:06:18 AM: Copying /backup/folder on server.local: Scanning incomplete, error -625 (not enough memory)

     The log on the Client just says this:

        9/8/2017 3:06 AM: Script "2017-09-Backup" (SERVER): /backup/folder, Execution incomplete

     I noticed that it starts to scan /backup/folder and then gives up around two hours later. What's the best way to figure out WHY it suddenly doesn't have enough memory? What are the limitations for Retrospect 10.5?
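As a sanity check on the -625 error, it's worth estimating the scanner's in-memory footprint. The per-entry cost below is a pure assumption (Retrospect doesn't document one); the point is that millions of entries multiply even a modest per-file record into gigabytes, so an internal allocation limit can be exhausted long before the machine's physical RAM is:

```python
def scan_memory_estimate(num_files, bytes_per_entry=512):
    """Rough scan-metadata footprint in GiB, assuming each scanned
    file costs bytes_per_entry (path, attributes, match state).
    The 512-byte figure is a guess, not a documented Retrospect value."""
    return num_files * bytes_per_entry / (1 << 30)

# The failing volume holds ~4,615,606 files:
print(f"{scan_memory_estimate(4_615_606):.1f} GiB")   # prints "2.2 GiB"
```

At a larger assumed per-entry cost, or with the scanner's own overhead on top, the scan list alone could plausibly hit whatever cap the 10.5 scanner enforces, which would explain a failure that depends on file count rather than free RAM.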
  7. My first guess is that this is caused by block-level backups. It didn't work on any of our Mac clients, so we turned it off. I'm guessing we need to just stop using it altogether. A big reason we upgraded from the old 7.7 server to the new 9.5 server was to take advantage of block-level backups! I can understand block-level backups "not working", but this seems to completely crash our Linux clients.

     Server: (Windows x64)

     --- Server1 (32-bit), client (32-bit)

        Nov 20 17:00:17 ServerName1 Retrospect[30253]: fetFileSpec: unexpected EOF on "/share/lab/home/user1/.mozilla/firefox/abc123.default/places.sqlite-wal"
        Nov 20 17:00:17 ServerName1 Retrospect[98651]: TransReadPipe: read loop error. transaction possibly terminated.
        Nov 20 17:00:17 ServerName1 kernel: pid 30253 (retropds.23), uid 0: exited on signal 11 (core dumped)

     The Retrospect server reports this error: Trouble reading files, error -557 (transaction already complete)

     --- Server1 (32-bit), client (32-bit)

        Nov 25 17:43:41 ServerName1 Retrospect[32794]: fetFileSpec: unexpected EOF on "/share/lab/home/user2/Projects/ImportantFile.xlsx"
        Nov 25 17:43:41 ServerName1 Retrospect[41014]: TransReadPipe: read loop error. transaction possibly terminated.
        Nov 25 17:43:41 ServerName1 kernel: pid 32794 (retropds.23), uid 0: exited on signal 11 (core dumped)

     The Retrospect server reports this error: Trouble reading files, error -557 (transaction already complete)

     --- Server2 (32-bit), client 7.7.100 (32-bit)

        Nov 26 17:21:57 ServerName2 Retrospect[9767]: fetFileSpec: unexpected EOF on "/etc/webmin/system-status/info"
        Nov 26 17:21:57 ServerName2 kernel: [83564.889072] retropds.23[9767]: segfault at ffffffff ip b76d2e72 sp bfaf3d88 error 5 in libc-2.15.so[b759c000+1a4000]

     The Retrospect server reports this error: Trouble reading files, error -540 (trouble creating service)

     In each case, the cause seems to be the same: Retrospect tries to back up a file that has changed size (gotten smaller, in these cases), and the client gets the "unexpected EOF" error. But WHY does the client crash like that? If a file is 200 KB during the initial scan and only 150 KB when the client tries to back it up, shouldn't it just continue to the next file? Did something in the exception handling break?

     Searching for these "unexpected EOF" errors under Linux gives me hits from 2009, 2007, 2006, etc.:

     http://forums.retrospect.com/index.php?/topic/25915-unexpected-eof-when-backing-up-a-linux-client/
     http://forums.retrospect.com/index.php?/topic/19454-error-540-trouble-creating-service-and-unexpected-eof/
     http://forums.retrospect.com/index.php?/topic/18634-error-540/
     http://forums.retrospect.com/index.php?/topic/18343-fedora-5-unexpected-eof-waitpid-failed-trouble-creating-service/

     Why would this issue pop up with the latest Retrospect Server?
  8. I just found this thread while Googling for the weird output I get with the new 9.5 client. Running /usr/local/retrospect/client/rcl status sometimes gives me one of these messages:

        PmcOpenLocal: Handle 3 closed
        PmcClose: Handle 3 closed

     Nothing seems wrong with how it works; I was just wondering what those messages mean. I don't recall seeing them with the 7.7 client. I'm running this on FreeBSD 9.3.

     I have notes for (really basic) scripts that clean up the old /usr/local/dantz install in preparation for the new /usr/local/retrospect install. I also have material for converting the Linux install (32-bit only) to work with FreeBSD, and I've made .deb packages (7.7 only, so far) for Debian/Ubuntu. Most of it seems to "just work", apart from the weird messages in the logs or on stdout.
  9. Catalog file size: ~1.1 GB
     ~1,000,000 files
     ~245 sessions
     System: old dual-core Xeon @ 3 GHz, 12 GB RAM, Retrospect 7.7

     We've been backing up a server since January. Today, someone needed a file. No biggie, I figured. I go to open the catalog, and it's been sitting at this for an HOUR now:

     I'm guessing we should be doing 3-month rotations on the tapes instead of 6-month. Anyone know how long this will take? It's missing other backups while it sits, trying to restore from this one.
  10. We had old Exabyte tape libraries. They kept dying. Now we have Tandberg; the Magnum 224 is what we're using now. We've bought at least three of them, and only one is still working. Every library keeps busting. We've spent thousands on these things, and the support contracts are something like $1600/year each. They break more than I'd like, and support hasn't been the greatest: failed power supply, failed drive, failed library.

     The latest issue is that the robot can't identify tapes any more. It says all labels are invalid, it keeps suffering read errors, it hangs when trying to move tapes, and so on. It's been a week, and I'm still waiting to hear back from Tandberg. I just gave them another $1600, and this is the "support" we get. We're a non-profit, so paying a company $1600-$3200 a year to not help us stings a little. In the latest scenario, no one has been out to look at the library, but they mailed us another tape drive for it.

     We have hundreds of terabytes of data backed up to tape, so I don't want to move to an HDD-based backup yet. I'm fine with tapes. Does anyone have a recommended tape library? Our existing setup:

     2U rackmount
     24 tapes
     LTO-4 drive
     fiber channel connection

     I'd like to stick with something like that.
  11. I got it working perfectly on FreeBSD 9.1; it backs up like a champ, with better performance than our Linux, Mac, and Windows systems as well! Some instructions and a startup service I put together for FreeBSD: http://xenomorph.net...are/retrospect/
  12. I just found out the Solaris client is for old SPARC systems (1980s-1990s), not for any current x86-based Solaris system. The Linux client can be run under FreeBSD (with Linux binary compatibility installed), but it cannot see any mounted drives. Are these platforms really something Retrospect doesn't care about?
  13. Hopefully the 64-bit client fixes the "not enough resources" errors with massive amounts of files.
  14. So you're saying that the issue is with backup size, and not the number of files?
  15. Retrospect keeps giving this error for most files:

        can't read, error -1019 (not enough resources)

     Windows Server 2003 R2 x64 (up-to-date software, drivers, & BIOS as of July 15th, 2011)
     4 GB RAM
     Volume size: ~2.2 TB, ~1.4 TB in use
     Number of files: ~1.1 million
     Retrospect version: 7.7.562

     On the old Dantz forum, someone from Retrospect said this on June 21st, 2010 regarding the limitations of the software:

     I thought it was a memory issue, and this link mentions how to modify the pool settings: http://support.microsoft.com/default.aspx?scid=kb;en-us;304101

     That only applies to 32-bit Windows, as the PoolUsageMaximum and PagedPoolSize settings are different on x64 Windows (and already default to much higher values). Just to be sure, I changed them and rebooted, but it made NO difference.

     All these files were backed up fine with the Linux client, but now that they're on a Windows system, the client is having trouble reading them. The server version (7.7) is the same. We no longer have the Linux server, so these files must remain on a Windows system. How can we get the Retrospect client for Windows to back up the files? After copying around 523,000 files, it gives the "not enough resources" error for the remaining ~620,000 files.