Trouble reading files, error -519 (network communication failed)



We're getting similar problems here, on multiple machines with different OSes (Linux, Windows, OS X).

 

We know for a fact that it isn't the cabling or the switches, and I would have a very hard time believing that the NICs or NIC drivers on all of these machines are the issue (a mix of Broadcom and Intel).

 

In fact, we still get -519 errors when using a crossover cable between our Retrospect server and our Linux file server (and we've verified that both machines can see each other via ping). No timeout or connection errors are logged on the Linux client until the Retrospect timeout limit is reached; then Retrospect flags error -519 and the Linux client logs error 9.
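
For anyone who wants to reproduce the connectivity check from the server side, here is a minimal sketch (the hostname is a placeholder, and it assumes a netcat build that supports -z/-w; 497/tcp is the port the Retrospect client listens on):

ping -c 4 linux-fileserver
nc -z -w 5 linux-fileserver 497 && echo "client port reachable" || echo "client port NOT reachable"

In our case ping succeeds, yet -519 still shows up, so raw reachability doesn't seem to be the problem.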

 

Thoughts?


I am having this same problem with Retrospect Server 6.5.350 on Windows XP. The client is running Fedora Core 6. There are 19 other clients on my network (Mac, Windows, and Linux, including Fedora Core), and they all have been backing up almost daily without this problem. The problem occurs only on this one machine, which was recently added to Retrospect's client list and has not yet (after several days) been able to complete a backup without failing partway through with this error.

 

Because of the trouble I have been having, I made a script to attempt this backup. I found an option, "Stop backing up a client and log an error if execution performance drops below this value," so I set it to 1 MB per minute, the lowest setting possible. When it says "log an error," I don't know which error that might be; perhaps our friend -519? In any case, changing this setting didn't seem to help.

 

Often it will scan and find some 100,000 files to back up, then fail after fewer than 1,000. In other words, at this rate it may take 100 attempts to get one full backup. I can't set the script to run itself more than once a day, but I don't want to wait three months before I get a full backup, nor do I want to kick it off manually every hour.

 

I happened to be watching the server one time when it failed. The filenames were updating very quickly as usual, then suddenly it froze on a single filename. After LESS THAN 30 SECONDS it gave up and produced the error. It did not seem to try again; it just quit without any visible attempt to recover or continue.

 

No other I/O operations to the client are failing like this. As a test (and having seen attempts on this forum to blame the router, network card, etc.), I copied a 180 MB file across the network to the client 50 times, then compared the 50 copies and found them all identical. In other words: no network errors, fatal delays, or timeouts. Everything seems to work flawlessly except Retrospect, whose only reason for existence is to increase the reliability of our enterprise.
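
For what it's worth, a rough sketch of that copy-and-compare test (the hostname and paths are placeholders, and scp/md5sum are just one way to do it):

SRC=/tmp/test-180mb.bin
HOST=fc6-client
ssh "$HOST" mkdir -p /tmp/retro-test
for i in $(seq 1 50); do
    scp -q "$SRC" "$HOST:/tmp/retro-test/copy-$i"
done
REF=$(md5sum "$SRC" | awk '{print $1}')
ssh "$HOST" md5sum '/tmp/retro-test/copy-*' | awk -v ref="$REF" '$1 != ref {print "mismatch:", $2}'

Every copy came back identical to the original, so the network path itself looks clean.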

 

Since we plan to upgrade all our Unix servers to Fedora Core 6 soon (this client is our test), if Retrospect doesn't like FC6 I will have to abandon Retrospect, I think.


I found that if I do this on the Fedora Core 6 Retrospect client immediately before attempting a backup, it works better:

 

# ps -ef | grep -i retro
root     16955      1  3 09:15 ?        00:01:19 /usr/local/dantz/client/retroclient -daemon
root     16968  16955  3 09:16 ?        00:01:06 retropds.23
# kill 16955
(which also kills 16968, since it is a child of 16955)
# /usr/local/dantz/client/retroclient -daemon
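
For anyone who wants to automate this, something along these lines could go into cron shortly before the scheduled backup; the binary path is taken from the ps listing above, but the pkill/sleep wrapper is just a sketch, so adjust it for your own setup:

#!/bin/sh
# Restart the Retrospect client daemon so the next backup starts with a fresh process
pkill -f /usr/local/dantz/client/retroclient
sleep 5
/usr/local/dantz/client/retroclient -daemon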

 

Before using this technique, my last few backups did this much before failing with error -519:

 

FILES      MB
=====   =====
  995    49.6
 2423    47.7
 3863    75.7

 

After adopting this technique:

 

FILES      MB
=====   =====
 8707   259.2
41758   445.4

 

Moreover, the last backup ran to completion! Hooray!


