Everything posted by Tree

  1. I had a root-level folder on another computer that was shared via SMB, to act as a place to keep some media sets. It began to report as offline, so I started to investigate. I finally got to the point of just removing the share, intending to add it back in again... except that I cannot. When I click on "Add Share..." and put in Server "SMB://xxx.yyy.zzz.abc/VolumeName", together with the correct Username and Password, I get "These credentials are not valid". All of this information is entered exactly as I recorded it when I first set up the share; the share had been in use for a while now. The only significant recent change I can think of is that the machine was upgraded to Mac OS 10.10, and during this process SMB sharing was apparently turned off. I detected that right away and tried just turning SMB sharing back on, but that didn't work. That's when I began trying to roll things further back, by removing the share with the intent to re-add it. I know that the credentials are valid: I reset the password on the host machine, turned file sharing off, rebooted, and restarted the RS8 engine to be safe, and I can still enter those credentials and access the shared volume via Finder. They are the same credentials used for a few other shared volumes, so I don't particularly want to change them. The host machine is running Mac OS 10.10, but the RS8 engine is installed and running on a machine with Mac OS 10.5.8. What can I do to get my shared volume back?
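[Editor's aside, not from the original post: one way to test whether the credentials themselves are good, independent of both Finder and Retrospect, is from Terminal on the engine machine. This is only a sketch; substitute your real user name and volume name (the xxx.yyy.zzz.abc placeholder is kept from the post):]

```shell
# Ask the host to list its SMB shares; you will be prompted for the password.
smbutil view //username@xxx.yyy.zzz.abc

# Or attempt a direct mount, which is closer to what Retrospect does:
mkdir -p /tmp/smbtest
mount_smbfs //username@xxx.yyy.zzz.abc/VolumeName /tmp/smbtest
ls /tmp/smbtest
umount /tmp/smbtest
```

If smbutil can list the shares but mount_smbfs fails, the problem is more likely protocol negotiation than the credentials themselves.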
  2. The main limiting factor of upgrading Retrospect is our uncertainty whether we wish to persevere with the product. We have been evaluating other options (including our whole backup paradigm), but as long as I can get Retrospect to behave properly then we can continue using it. I'm not likely to just purchase a more current version in the hopes that it resolves the very particular failure mode that I am seeing. So for the time being, we've been using a work-around, by relying on Carbon Copy Cloner to handle the one particular server volume that isn't getting backed up since its media set cannot be seen by RS8. The work-around is effective enough that it calls further into question our need for Retrospect. We have not had other issues regarding backing up data on machines running Mac OS 10.10; we have Client 6.3.029 running on them, and only certain folders are backed up, not the entire boot drive and operating system. The volume that isn't getting backed up by RS8 right now is actually on an external drive hosted by Mac OS 10.8 Server; the problem isn't in talking to that client, though, but in the fact that the "destination" where its media set has been kept is now on a 10.10 system. In other words, I can talk to clients on 10.10 systems, and read their data for backup purposes, but I cannot "see" an SMB-shared volume from a 10.10 (non-server) computer. Apparently I need to have that media set on such a shared volume, unless perhaps there is a way to define media sets such that they utilize the RS8 Client in lieu of volume-sharing.
  3. Here is a rundown of the various methods I have attempted, thus far without success:
     1. Forced Mac OS 10.10.2 to use the SMB-1 protocol (in lieu of SMB-3)
     2. Tried adding the Share in Retrospect as "cifs://servername/volume" instead of "smb://..."
     3. Tried multiple variations of capitalization, just in case of case-sensitivity
     4. Disabled then re-enabled SMB sharing as well as the Guest account (both of the "sharing-only" accounts on the host machine)
     5. Updated the host machine to Mac OS 10.10.3
     Nothing seems to work. I suppose I haven't yet tried changing the credentials (password), which I have wanted to avoid since the same credentials are used to connect to several other similarly shared volumes, but I guess I can check that next. Anybody have any other ideas?
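[Editor's aside, for anyone wanting to reproduce item 1 above: on 10.10 the SMB-1 override is done with an nsmb.conf file rather than a GUI setting. A sketch of the file in question (create it at ~/Library/Preferences/nsmb.conf for one user, or /etc/nsmb.conf for all users):]

```
[default]
smb_neg=smb1_only
```

Delete the file and reconnect to return to the default protocol negotiation.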
  4. Follow-up: The Backup scripts have been running well; it seems they were only failing (error -519) while trying to do the initial backup of large quantities of data. Now that they are only grabbing a few megabytes at a time, I am not seeing issues. It seems that throughput speed is a factor, as I was able to get one huge initial backup to go through by letting it take its time, at 50 MB per minute. It took over two full days to run, but I just let it, and it completed perfectly. Other scripts that have run successfully have achieved about ten times that speed. It seems that the failures tend to occur on scripts that attempt to run at 1 GB per minute or more. I'm not sure if there is a way to set a speed limit so that scripts don't try to run too fast; if there is, then I think I have a workaround. The other workaround has been to just keep re-running the scripts, as they eventually whittle the amount of data to be backed up down to a manageable level.
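[Editor's aside: to put rough numbers on the "two full days" figure, here is the arithmetic as a quick shell sketch; the 144000 MB size is an assumed figure for what that initial backup roughly contained, not a number from the post:]

```shell
size_mb=144000        # assumed size of the initial backup, roughly 140 GB
rate_mb_per_min=50    # the slow-but-reliable throughput that completed
minutes=$(( size_mb / rate_mb_per_min ))
hours=$(( minutes / 60 ))
echo "${hours} hours"   # 48 hours, i.e. two full days
```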
  5. I checked in on Retrospect (8.2.0.399) to see how backup scripts were rolling along. I haven't changed anything in a long while; things have been humming with the same config for a long time, and I just need to re-run scripts sometimes when a client fails to connect (error -519, for instance). One script in particular was only making it part of the way through: a Copy Script that would get about 20-30 GB into a 150 GB load, then fail:

+ Copy using ccProjects at 8/11/14 (Activity Thread 1)
8/11/14 5:18:21 PM: Connected to C|C Server
8/11/14 5:18:21 PM: Connected to iMac-MRR
To volume ccProjects1 on iMac-MRR...
- 8/11/14 5:18:21 PM: Copying Projects on C|C Server
> !Trouble reading files, error -519 (network communication failed)
8/11/14 6:25:15 PM: Execution incomplete
Remaining: 54724 files, 126.6 GB
Completed: 39900 files, 28.9 GB
Performance: 478.9 MB/minute
Duration: 01:06:53 (00:05:08 idle/loading/preparing)

I tried running this script several times yesterday to see if I could get it to complete; then I left for the evening, as it was scheduled to run that night. Again, it got underway and then lost network communication after 21 GB, 52 minutes in. That's not the ultra-weird part. The weird part is that today, I cannot get it to run the script anymore! Every time I select the script and press the Run button, the button depresses but then nothing happens. I have tried making a duplicate of the script: same non-response. Changed the activity thread for the script: same non-response. Stopped and restarted the RS8 engine: no luck. Rebooted both my machine (console) and the RS8 server machine (engine): still nothing. Tried other scripts; they won't go either. Oddly, though, I did a Restore using Restore Assistant, and that DID work, leaving me with a new RA script (since that's how RS8 likes to work). I have other scripts scheduled to go off tonight, so I will see whether those actually happen on schedule. But I cannot manually get any script to run right now.
The last script I was able to manually run was one that I did this morning; it completed with some errors, which was to be expected since I was running it during the work day while files it would be backing up were in flux (errors were mostly of the "file didn't compare" variety). That script did actually complete, to wit:

*snipped*
8/12/14 11:16:28 AM: 14 execution errors
Total performance: 1,199.4 MB/minute
Total duration: 01:55:21 (00:03:37 idle/loading/preparing)

But since 11:16:28 AM, the only thing I have been able to do is Restore Assistant. When I tried to run a script and discovered the non-response, I went and looked at the log. I found a bunch of repeated lines like this:

*snipped*
TFile::Open: UCreateFile failed, /Library/Application Support/Retrospect/ConfigBackup, oserr 21, error -1011
TFile::Open: UCreateFile failed, /Library/Application Support/Retrospect/ConfigBackup, oserr 21, error -1011
TFile::Read: read failed, /Library/Application Support/Retrospect/ConfigBackup, oserr 21, error -1011
TFile::Read: read failed, /Library/Application Support/Retrospect/ConfigBackup, oserr 21, error -1011
*snipped*

... with the UCreateFile line repeating 10 times, then the Read line repeating 5 times, and so on in that alternating pattern. I went over to the server machine and looked up the file in question; it was actually a folder, containing a very old restore of my backup of RS8's Config.Dat and Config.Bak files. All the files in that folder were date-stamped 2009; their equivalents in Library/Application Support/Retrospect/ bear more current modification dates. I surmised that I don't really need that folder, so I renamed it and moved it to a different location. Stopping and restarting RS8 and checking the log confirmed that folder to be the one in question, as the UCreate and Read messages vanished once that folder was moved away. But there was no effect on the non-running scripts!
So I placed that folder back where I got it and renamed it to what it had been, and sure enough the log now shows the same error messages again. I have no idea whether this is a related issue or not; my guess is that it is totally unrelated, and since I don't need the ancient 2009 restore it would be good to just get rid of it.

System Information:
Retrospect 8.2.0.399 engine running on a 24" iMac with OS 10.5.8, 2.8 GHz Core 2 Duo, 4 GB DDR2 SDRAM
Console running on a 27" iMac with OS 10.6.8, 2.8 GHz Core i5, 4 GB DDR3
Wired Ethernet throughout, all 10/100 routers/hubs/switches, no wireless enabled

Anybody got any ideas?
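[Editor's aside, one clue for anyone hitting the same log lines: "oserr 21" is the POSIX errno EISDIR ("Is a directory"), which fits what was found above; the engine was trying to open ConfigBackup as a file, but it was a folder. A quick way to confirm the mapping, assuming python3 is available:]

```shell
# errno 21 on both BSD/macOS and Linux is EISDIR, "Is a directory"
python3 -c 'import errno, os; print(errno.errorcode[21], "-", os.strerror(21))'
# prints: EISDIR - Is a directory
```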
  6. I have tried sidelining the Copy scripts that have been failing, thinking that perhaps they are not as efficient at dealing with large amounts of data on the source. I replaced those with regular Backup scripts to test my theory. Unfortunately, the new Backup scripts are exhibiting the same symptoms: getting a few gigabytes into execution, then failing with a -519 and leaving the client in the "reserved" state. I have been doing these tests during the work day, and I have not needed to reboot the server computer that I am trying to back up; these -519 network errors are not showing up as any other form of network interruption. We continue to use the files and folders on the shared volumes without issue. I used Carbon Copy Cloner to replicate the files on the affected volumes onto an off-site disk, so I'm not feeling too vulnerable at the moment, but if this cannot be resolved then I may have to genuinely abandon Retrospect.
  7. No luck on letting it run over night according to regular script schedule. It did launch, but then failed with a -519 after only 2 GB. There was another script that tried to run on the same client thereafter, and of course it failed with the -505 client reserved message. I am going to shift focus to a different script, see if I get -519s with any/all scripts to that client, or just this particular one.
  8. I just ran the same troublesome script again twice today. This script is attempting to Copy from one of our shared Server volumes, the one that actually gets the most frequent activity throughout the workday, as most of our project files are on it. Thus, any "real" network/communication errors would be felt by the rest of us, as it would interrupt our active work. That said, maybe we wouldn't notice brief disruptions, since we mostly communicate with the server at the time of opening, saving, and closing files; we don't often need a continuous streaming connection the way the RS8 client does when executing. The first attempt today got as far as 35 GB prior to quitting with Error -519. The second attempt had my hopes up as it made it to 99 GB. I will run it again when I leave tonight; perhaps with everybody gone it might do better, all the bandwidth to itself. This script normally runs at night, anyways; I've just been trying to manually run it while I am here to get back to where I have confidence in the results. As before, each time it gets to that Error -519, it is leaving the Client state as "In use by..." and therefore "Client Reserved" when testing the client by IP address in Console. The Command+Click+Off cycle gets it back to "Ready". As far as any other conditions at the Client machine, I don't know (Client machine is actually a Mac Mini running OS 10.8 Server, and subject volumes are on an external hard drive array hosted by that computer). I am hopeful that I can keep pushing forward, as I've now successfully copied 99 of the 150 GB; I don't think it's actually copying all of that much data, that's just how much is on the volume, and it compares against what has been copied already. I assume that the next time I run it, presumably no data will need to be copied for that first two-thirds (aside from minor file changes that may have occurred during the work day). 
I can say that I have seen other clients with Error -519s that often seem to clear up when I wake up their computer - even though I have set each machine's preferences to prevent sleep (other than display sleep). We have a mix of iMacs of various vintages. I generally just check RS8's activity to see which (if any) scripts failed with a -519, go wake that machine if necessary, and just re-run the script. But it sure would be nice to get to a better understanding of what's causing all these -519s!
  9. Update: The regularly scheduled Backup scripts did go as planned, except that one of them produced a "Client Reserved" error. Clue! As it happens, all of the Copy scripts that were failing to initiate were ones that use the same client. Basically, I just wasn't getting the "Client Reserved" feedback to know what was going on. Sure enough, examining that client showed its status as In Use for the script that I had originally tried to run, about noon yesterday. By Command+Clicking the client "Off" to get to status "Not Running", then turning it back on, I was able to launch the Copy script this morning. So now I may be back to just trying to figure out why it fails after copying 20 GB, which is a different issue.
  10. Well... I guess I need to clarify, because I just tried a script that is an actual Backup script rather than Copy, and the Backup script does run! It's just that all of my Copy scripts won't get going when I press the Run button.
  11. We recently purchased an upgraded office server, and migrating onto it meant installing the Retrospect Client (we're using version 6.3.029, and the Retrospect Engine is 8.2.0.399). Our previous office server was running Mac OS 10.4.11 Server, and appropriately enough the Retro client installed on that machine was listed as Type = "Server" when examined in Console / Sources. We are licensed for up to one "Server" plus up to twenty "Desktop" clients. The client installation on the new physical server, however, is reported in Console as "desktop", and I have removed and re-added the client to no avail. I don't see any means of instructing Retrospect to treat a certain client as a "server". I checked the "Licenses" tab in Preferences to verify that we had 13 of 20 desktop clients and 0 of 1 servers; now, after reinstalling the client and adding it back as a source, I see that I have 14 of 20 desktop clients and 0 of 1 servers. All of this is in pursuit of what might be a separate issue, in that an external drive connected to the physical server is showing up incorrectly as a source. In short, when I try to browse that drive, I can only see a couple of files in a few specific folders, and not the full contents of the drive. I know there is a ton more data on the drive, as it is what I need to be backing up! Getting the right kind of client might help here, if for some reason its "desktop" role prevents it from seeing the drive the same way that I see it when mounted as a network sharepoint on other computers.
  12. Daniels, thank you for the input. That sequence was what I had attempted just prior to posting here, because I thought it would surely rectify things. I didn't get as far as #5, though, because I saw that after doing #4 it was still treating it as a "desktop" rather than a "server" client. And, due to the other issue (the external drive not showing all its contents), I cannot edit the scripts that reference that volume. I think that, as a workaround for the second issue, I can add the external drive as a network share, which means I'm relying not on the client so much as the network file-sharing infrastructure to talk to the drive. But I don't want to leave things like that; I'd like to resolve things properly.
  13. Sorry for failing to mention the server OS - it is running Server 10.8.2, on a Mac Mini that was purchased just this summer. I should also mention that when initially setting up this server, I opted to direct it to store its services files on the external drive in question, rather than on the internal SSD. This was because it would be hosting a wiki that would be in constant flux and likely to grow beyond the SSD's capacity. But last week, somehow things got screwed up, to where the server could no longer connect to its databases on the external drive, and I was forced to "roll back" to a fresh server with all its files on the SSD. I have not, however, deleted or changed any of the data on the external drive, and it shows as a sharepoint with the expected permissions settings managed by the server, as before. But the Retrospect Client, even after removing and reinstalling it (removed using the Uninstall option on the current downloaded client DMG), reports as "desktop", and the only contents visible on the external drive are a couple of folders that were previously part of the server's service files location - not the entire contents of the old service files location, mind you, which does still contain the old databases (as I hope to be able to somehow recover them).
  14. Well, I was getting this Error -557 as well, after a "repurposing" of computers wherein my machine (with the Retrospect Engine installed) was handed down to another employee on the network. I figured I would just leave the engine installed where it was, as I could manage things via the Console on my new machine. However, the process of migrating the other employee's account onto that old machine ended up getting the Retrospect Client installed on that machine as well. I reasoned that it shouldn't make any difference - the Engine could still invoke a client that is on the same physical machine, and for the most part this is true. But in my "Sources" list I had two entries: the HD volume presented by the Client, and the HD presented by the Engine (i.e. not nested under the name of a client). The HD presented by the Engine simply could not be removed as a Source. I tried removing all of the Favorite folders from the HD presented by the Engine, so that only those presented by the Client could be accessed by a script, but I still got Error -557. Then I went the other route, re-creating those Favorite folders under the HD presented by the Engine and setting those to be used in lieu of the same-named Favorites from the Client. At that point, the scripts finally worked, without an Error -557. So it appears that Retrospect doesn't like to talk to a Client installed on the same machine as the Engine. It's as if the Engine behaves like a privileged Client in itself; even if you try to force it to negotiate only with the Client, the Engine still gets to jump in front and throw up a -557 message. Hope this helps!
  15. Here is what I did, in order to effectively rename the Media Set: First, I created a new Media Set with the proper sequential name, using the + button in Retrospect's console. Next, I altered the script that uses the poorly-named Media Set, changing it to use the new Media Set. Then, I manually ran this script (click the "Run" button) and elected as a Media Action to "Recycle Media Set". Doing so forces Retrospect to grab all data and fill up this new Media Set with what had populated the other one, although one loses any older snapshots by doing so. In my case, that was not an issue since I am replacing a freshly-created Media Set that happened to have a bad name. If retention of older snapshots is of concern, then this method is not ideal. After the script has run, I removed the poorly-named Media Set using the - button in the console, and then finally trashed the big .RBF file using Finder.
  16. I have recently become aware of a flaw in which one of my scripts is backing up erroneous data. The script is working fine, it's just that the corrupt data takes up huge amounts of space and consequently I am running out of room for my media sets. Essentially, a media set that until November was around 30 GB has ballooned up to over 100 GB, all due to some files that I am now purging from the client's computer. The script backs up multiple clients, so I don't want to recycle the media set if I can avoid it. Plus, I have it scheduled such that it launches a new media set member at the start of each month. Now that we are in December, the bloated November media set is sitting there, plump full of unwanted backed up data. I've gone into "Past Backups" to see about Removing the November 1 backup of the affected client, but I get a tooltip warning me that clicking "Remove" will not delete any data, it will just remove it from the list. Of course, I had "retrieved" the Nov. 1 backup in order to get it on the list, since that appears to contain all the junk files - later backups of the same client don't show many files, and are of reasonable size. In other words, it sounds like clicking "Remove" will only undo what I just did, which was to populate the Past Backups list with the Nov. 1 instance. My intention, though, is to actually destroy the backed up data. Is there a way for me to do so? I don't think Grooming is the right thing to do, as the corrupt files were just created once then not modified thereafter (in other words, they were only backed up once).
  17. Thank you for pointing out the Utility scripts in the manual; that worked fine. I ended up doing a "Copy Backup" script set to copy only selected backups, and I selected all except the one client's backups. Now I am back to a 30 GB media set; life is good. However, to do this I had to first create a new media set and give it a name. To differentiate, I just appended "fixed" to the end of the filename. As a test, I set the script to go ahead and skip to the next media set member (which it would ordinarily do on the 1st of the month), and what it did was to take my "fixed" filename and append " [001]" to that, thus restarting the sequential numbering. It seems like what I should have done instead was to name the new media set as the next sequential number (" [036]" in this case). So now I have a mis-named media set. Can I change the name of the media set as simply as typing in Finder? I don't see a tool for renaming Media Sets within the Retrospect console. I am guessing that I rename it via Finder, then edit the script to make use of the renamed file (these are File Media Sets, btw). But I don't want to "break" anything - is this really all I need to do?
  18. When that 70 GB gets replicated each month, it starts to add up... So how does one copy snapshots into a new media set? Are you referring to what I am already doing, which is that the script is scheduled to generate a new media set on the first day each month? I believe the steps I've taken (purging the files off of the client's machine) will mean that the next media set will be back to reasonable size. But that would be new, fresh snapshots - it sounds like you're talking about migrating older snapshots to a new media set. I'm not sure how that is done.
  19. Just to bump this a little bit: I recently had trouble with a script that was failing with the "error -556: network interface unavailable" message. I am still not sure what specifically triggered this, but here is how I resolved it. I went to "Sources", clicked "Add", then typed in the IP and password. This is the same IP and password as had already been configured for the client; I haven't really changed any settings in over a year. But it successfully added the client back, and I could then Browse its contents, run the script, etc. I should mention, too, that in trying to resolve this issue I had gone to the client itself and tried the COMMAND+turn off trick, to set the client to "Not Running" rather than "Turned Off". However, doing that alone did not work. It wasn't until after the manual Add that things came back. I noticed from the time of the last successful run of the script in question that it began failing after I did some things to the client computer while chasing an unrelated font corruption issue. I had used a tool called OnyX to wipe out font caches, and then restarted the machine. Error -556 was reported ever after that point. I'm not sure if there is any real causal relationship there, but I do know that OnyX likes to check hard drive SMART status when it runs, and it can do a lot more than just zap a font cache, though I only used it for that purpose.
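[Editor's aside, one more diagnostic that may help others chasing -556 or -519 errors: the Retrospect client listens on port 497, so you can probe reachability from the engine machine before touching Retrospect at all. A sketch; the IP below is a made-up placeholder for your client's address:]

```shell
# Exit status 0 means something answered on the Retrospect client port.
nc -z -w 5 192.168.1.25 497 && echo "client port reachable" || echo "no response"
```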
  20. Sound Method? Or Mad?

    I do believe that you need to define separate scripts, each of them proactive, so that they run when their media set is available. Say you have Media Set A, Media Set B, Script A, and Script B. Scripts A and B are both proactive, so they are polling their clients to see if they are ready to be backed up, plus they are requesting their media set. If you've taken Media Set B offsite, but have Media Set A connected, then Script A will be satisfied and run, while Script B sits idle, waiting for its Media Set. Later, you bring Media Set B in, and then Script B, which has been waiting for its media, will become satisfied and run. Essentially, then, you always have at least one script "stuck" waiting for media, but since they are separate scripts, one of them can run anyway. If you have all Media Sets attached to one script, then this one script will hang when one of those Media Sets goes missing (i.e. off site), and nothing will happen until you resolve the missing media. At least, that is my understanding! I've been using Retrospect a while too, but I'm by no means an expert. Hopefully someone else can confirm my insane ramblings.
  21. I believe that every current user of Retrospect 8 has already cut plenty of slack, just to use the product. I think it fair to vent some of the frustration, just so the new management knows the critical nature of what they are inheriting. There's no more slack to give, at the user end of things - RS8 needs to improve, just to hang onto its remaining market share. I do agree with your assessment of the diplomat that Robin is. Unfortunately, despite his persistent positivity, part of the job of a diplomat is to absorb the enemy's venom, and pass it along to those capable of changing policies.
  22. What Does This Update Alert Mean?

    What if it was the other way around, where you primarily update the Engine, and "push install" to update console(s)? Perhaps the host for the engine can keep track of which console(s) communicate with it. Or, say, when you update the engine, the necessary data for the console's update is kept at the engine host and so, when you try to communicate to that engine with an outdated console, it auto-updates by fetching data from the host, rather than hunting online. Of course, the console still has to notify the user of the engine version and prompt for an engine update when one becomes available... but it could just place the engine update on the host, ready to be installed, and then let you know that you need to run the update from the host machine. Running the update would upgrade the engine, and then the next time the console tries to talk to that engine, the outdated console would complete its auto-update based on the already downloaded update package that is resident at the host. This way, nothing gets updated until the Engine is processed, and the console update happens afterwards and automatically. If you never process the Engine update, the console doesn't change either, so they stay in sync. Can that work?
  23. how to correctly add clients

    In my case, assigning IPs at the router, there is no issue of scope overlap; the router automatically adjusts the dynamic pool as reservations are made. But you're right, that might not be true of every router.
  24. how to correctly add clients

    In my case, I have established static DHCP (reserving IP assignments based on MAC hardware address) using a feature of my router. I log in to my router with admin credentials and build up a list of MAC addresses and the IPs reserved for them. The user interface for doing so will vary based on the brand of the router, assuming that your router offers this feature. Just look for something named "Static DHCP" or "DHCP reservation" or similar; you might even Google-hunt for instructions by searching those terms plus the brand name of your router.
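[Editor's aside, as a concrete illustration only, since every router brand dresses this up differently: on routers that run dnsmasq under the hood, a reservation is a single line in the dnsmasq configuration. The MAC address and IP below are made-up examples:]

```
# dnsmasq.conf: always hand this MAC address this IP
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.50
```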
  25. If one can carefully read around the attitudes and posturing, what might be beneficial to future readers of this thread would be to point out a few things. First, Russ asked that the specific version be provided, both of Engine and Console. This is for posterity, sure, so that others can find solutions pertaining to their particular installation. But it also requires of the original poster that they confirm the version number for both of these elements. Why is that crucial? Because they are two separate animals and it could very well be that a mismatch in version numbers between Engine and Console is the whole reason for the problem. I hate it that we have to basically update "twice" whenever a new release comes out, but that's what it amounts to. The automatic updating built into the Console's operation will get the Console updated... but not the engine. For that, one must download and update manually. Extremely inelegant, frustrating to users, and a built-in source of problems that EMC desperately needs to change. But that's how things work now. Once EMC does release a version that handles updating better, then this post will become irrelevant to users who employ the new version. That's why we need to carefully report that we're talking about version 8.1.626.