
Don Lee



Everything posted by Don Lee

  1. I have now done this with Retro 10 as well, and it behaves the same. Whatever it is, it happens on the second tape, and it fails the MD5 digest check on one of the same pictures. This latest try is to a completely different set of tapes. Is this tape set usable, given this error? I don't care about the pictures, but what about all the other stuff in the set?
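For anyone curious what a digest check like this amounts to, here is a minimal Python sketch (not Retrospect's actual code) of verifying a file against an MD5 recorded at backup time:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 digest of a file, reading in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, recorded_digest):
    """A verify pass conceptually recomputes the digest and compares it
    against the one recorded at backup time; a mismatch flags the file."""
    return md5_of_file(path) == recorded_digest
```

If the same file fails on two different tape sets, the mismatch is more likely to originate on the source side (read errors, or the file changing between copy and compare) than on the media.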
  2. It seems minor, but it's not. The font chosen for the preference panel that displays the licenses makes little visual distinction between a "5" (five) and an "S" (letter ess). This causes needless confusion (as it just did for me). Wherever licenses are displayed, a font should be chosen that clearly distinguishes zero from the letter O, 2 from Z, S from 5, and so forth.
  3. I saw this again this evening. One of my remote engines was complaining of being unable to back up its client. The remote network there is 192.168.0.xx. (I use an ssh tunnel to get there.) I connected to the engine machine, and the engine was unable to connect to the client to browse files, in addition to the backups failing with:

    + Normal backup using bu_DBase at 1/10/13
    To Backup Set DBase...
    Can't access volume Databases on Skinnie, error -556 ( backup client: network interface unavailable)
    1/10/13 9:00:03 PM: Execution incomplete
    + Normal backup using bu_DuoUsers at 1/10/13
    To Backup Set Users...
    - 1/10/13 10:00:00 PM: Copying Users
    1/10/13 10:00:53 PM: Snapshot stored, 16.7 MB
    1/10/13 10:00:56 PM: Comparing Users
    1/10/13 10:01:11 PM: Execution completed successfully
    Completed: 843 files, 210.4 MB
    Performance: 814.2 MB/minute (788.6 copy, 901.6 compare)
    Duration: 00:01:11 (00:00:40 idle/loading/preparing)

    I logged into the client and noticed that the IP of the machine ( was OK, but the "protected by Retrospect" IP was This is really odd, because that's in my DHCP range. I tried turning the client off and restarting it. I'm not sure what ended up doing the trick, but the client finally changed its "protected by" IP to 192.168.0.(whatever). It's working now. Clearly something about connecting to that machine through my tunnel has caused it to think that the retro console (which also connects through a tunnel) picked up a DHCP address from the host of the console. Strange.

    I also noted that the bi-weekly backup operations were failing. When looking at the scripts, there were clearly some missing. Throughout this, I had the 10.0 console open and connected to both engines. (One engine is v9, the other is v10.) I've seen this "confusion" before. Unfortunately, I have been unable to come up with a procedure to reproduce it.
  4. I went back and ran the v10.0 uninstaller script, then the v9 client installer, and rebooted. Now the backups are working again.
  5. I upgraded from 9.0 to 10.0.1 (105) yesterday. Most things seem to be OK, except for a case where a volume said "no files need to be copied" the first time it was run (a subsequent run worked fine)... and this.

    I upgraded this client via an updater from the console, with the engine pushing the update out to the client. It seemed to be transparent and worked fine. A few things seem to work better now in the client. The client is running Mac OS X 10.6.8 (recently upgraded from 10.5). This is the first backup on this client since the upgrade of the engine to 10.0.1 and the client SW to 10.0.0 (174). The following log entries tell the tale.

    + Normal backup using daily_user at 1/9/13 (Activity Thread 1)
    To Backup Set v9_daily_user...
    - 1/9/13 1:33:49 PM: Copying Users on Witsend
    Using Instant Scan
    MacStateRem::msttDoBackup: VolDirGetMeta failed err -516
    !Can't read state information, error -516 ( illegal request)
    1/9/13 1:34:29 PM: Execution incomplete
    Completed: 1976 files, 171.3 MB
    Performance: 570.9 MB/minute
    Duration: 00:00:37 (00:00:18 idle/loading/preparing)

    + Normal backup using daily_user at 1/9/13 (Activity Thread 1)
    To Backup Set v9_daily_user...
    - 1/9/13 2:03:58 PM: Copying Users on Witsend
    Using Instant Scan
    MacStateRem::msttDoBackup: VolDirGetMeta failed err -516
    !Can't read state information, error -516 ( illegal request)
    1/9/13 2:04:22 PM: Execution incomplete
    Completed: 17 files, 30.2 MB
    Performance: 604.1 MB/minute
    Duration: 00:00:21 (00:00:17 idle/loading/preparing)

    Is there a bug here, or user error? If a bug, what can I do to help track it down?
  6. OK, now it's getting strange. I got tired of these errors every 30 minutes due to the retries, so I re-installed the 9.0 client on the machine "witsend". Now I get this...

    + Normal backup using daily_user at 1/9/13 (Activity Thread 1)
    To Backup Set v9_daily_user...
    - 1/9/13 7:18:39 PM: Copying Users on Witsend
    Using Instant Scan
    MacStateRem::msttDoBackup: VolDirGetMeta failed err -516
    !Can't read state information, error -516 ( illegal request)
    1/9/13 7:19:04 PM: Execution incomplete
    Completed: 264 files, 41.2 MB
    Performance: 618 MB/minute
    Duration: 00:00:22 (00:00:18 idle/loading/preparing)

    When I uninstalled the client, I went through the uninstall AppleScript from v10 and ensured that the RetroISA launchd plist was unloaded. Yet I still get "Using Instant Scan", which is clearly the problem. How do I fix this? My machine may be backing up as it is, but those little red X's are pretty unpleasant. I'd like to get my backups working again.
  7. Update: I upgraded my engine to 10.0.1 (105) yesterday (running on a 10.6.8 machine) and am running the 10.0 console with the 9.0 "remote" engines. As far as I can tell, it works fine, with the exception of some nits. Take care with 9.0 console on 10.0 engine. It offers you the chance to "upgrade" the engine from 10.0 to 9.0. Oops. It looks workable with the 10.0 console, and 9.x engines. Keeping my fingers crossed.
  8. I try to do my installs with a "system" user who owns and installs all of the applications. In general, I can set all of the applications to read-only, so that even a user who does something pretty dumb can't do much damage. This is also commonly done in enterprise setups, where "approved" applications are kept on a network server and are strictly read-only because they are shared among many users.

    The Retrospect console is a single application, and when it is installed according to the implied instructions on the installer disk, the single app is placed in a folder in the /Applications folder, and on first run, parts of that application are moved from the bundle to the folder. Two problems with this:

    1. The files so moved are set up with "0777" permissions - that is, world read/write/execute. This means that anyone on the system can scribble on them, remove them, rename them, or otherwise screw them up. If I am trying to keep my machine relatively secure, this is "bad".

    2. If I install the Retrospect console as "admin" and then first launch it as a normal user, these files are not moved from the bundle. I have not yet explored what this means, but it is clear that the difference in behavior will be puzzling to someone in addition to me.

    My suggestion is that the application should definitely be set up so that if I want the folder and all its contents to be read-only, that is possible. Bonus points if it is also easy. If the ease of installation of having the bundle contents in the app is important, the step of moving the components to the enclosing folder should be explicit and should request authorization explicitly rather than simply failing as it does now.
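As an admin-side workaround, the world-writable bits can be stripped after first launch. A minimal Python sketch (the folder path is hypothetical, and this is not part of Retrospect; it just masks off group/other write bits):

```python
import os
import stat

def lock_down(app_folder):
    """Strip group/other write bits so a shared app folder is read-only
    for everyone but the owner.

    0o777 (world read/write/execute) lets any user modify the files;
    masking off the write bits for group and other (0o022) prevents that.
    """
    for root, dirs, files in os.walk(app_folder):
        for name in dirs + files:
            path = os.path.join(root, name)
            mode = stat.S_IMODE(os.lstat(path).st_mode)
            os.chmod(path, mode & ~0o022)  # e.g. 0o777 -> 0o755

# Example (hypothetical path):
# lock_down("/Applications/Retrospect")
```

This has to be re-run after the app rewrites its own support files, which is exactly why an installer-level fix is the better answer.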
  9. Having just installed Retro 10.0.1 (105) engine, and still having remote v9 installs, I tried connecting my still-installed v9 console to the new v10 engine. It seems to connect, and might even work (I have not dared do much with it. ;-> ), but it soon offered to "upgrade" me from 10.0 to 9.0. Screenshot included. This should be fixed. It is likely that users will accept the offer, with undesirable effects.
  10. I would add that this could/should be an important feature of the console: to non-destructively interoperate with the older engines. The new consoles don't even have to support the full feature set. They would simply have to be able to manage the older engines without doing anything "bad". Given that there are more people out there running v8 and v9, and having multiple "clients" that need to be managed, this need will become more urgent as new versions are released.
  11. Given that the pref files are per-user, there is a very crude way to do this. Set up a different user on your management machine for each version of the console you need to run. It's not pretty, but it would work. ;-> (for very small values of "work")
  12. I have another instance of this and enclose a screen shot of the console. The engine is on a remote machine that I access through an ssh tunnel, so the latency is high (about 200 ms) and BW is under 500 KBits/sec, but it's usable. The log in the lower pane is clearly for the wrong script. I even hit refresh a couple of times, and changed to another pane and back, yet this log "stuck".
  13. The three attached screenshots show a running media set copy from two input sets to a single output set. The shots are of the "activity" window, the "scripts" window, and the "media sets" window. Note that the activity window shows that the "...[004]" set is being copied, but the "...[007]" set is locked. When I click on the "...[004]" set in the "Media sets" window, I can "see" the members, backups, etc. When I click on the "...[007]" set, it is "locked", so I can't "see" it. Screen shots:
  14. Question to follow up: If I upgrade my engine (on a different machine) and then run a v9 console and a v10 console on my laptop, will I have two configurations - one for the v9 console and one for the v10 console - or will the console configuration be "shared"? I don't ask for full upward and downward compatibility between the mismatched engines/consoles, but if that is not provided, I am hoping for confirmation that the consoles will not damage each other's configuration files. Bonus points if I can run both consoles with separate configs, so I can "move" the engines from one console to the next as they are upgraded. It sounds like I can probably use the v10 console on a v9 engine, but this may not be safe (not tested). fredturner (above) suggests that the v9 and v10 consoles "step on" each other's configuration files. What is the recommended procedure for those of us with multiple engines to manage?
  15. Retro just made a liar of me. I opened a client on my laptop (Mac OS X 10.5.8, Retro 9) and see one backup in the history window. Screenshot included. Note that this client gets backed up at least every day, so there are about 20 backups "missing" from this list. Two odd things about the Nov 30th backup that appears in the window. First, it only appears when I click the lock icon at the bottom of the client pref pane and unlock the client. Second, v9_daily_user_last is a set that is not used to back up any clients. The sessions in v9_daily_user_last were copied there with a media copy script. Clients are backed up to v9_daily_user. I do a media set copy every month from v9_daily_user to v9_daily_user_last so I don't have to tweak the client scripts. The backup in the window is Nov 30, 11:28 PM, which is the most recent backup in the v9_daily_user_last set.
  16. I hope this is fixed in v10. I have several v9 servers running, and a variety of clients, including several running Mac OS X 10.5, 10.6 and 10.8. (none at 10.7) With the "new" client (prior to v10) there is a "history" pane in the client, and it is always blank. It doesn't matter how many backups have run, when, or what options I select on the server. The "history" pane is blank.
  17. - Are the clients with blank history pane on the same subnet as the Engine?
    Yes.
    - Are the media sets specified for "Back up on demand to" protected by password _and_ locked?
    Back up on demand is not enabled.
    - If the media set is unlocked, and you use the Console to Locate a client Source, is that client's history pane still blank?
    n/a
    - From that client, does Back up on demand work?
    n/a

    Maybe I misunderstand the purpose of the history window. I presumed that it would contain a history of backups on that client. Is it limited to backup on demand, or somehow limited only to clients that do back up on demand?
  18. Spontaneous loss of script parameters

    I would like to see an export/import capability for the configuration. If the export file were in some text-y format, then I could do "batch" changes to large configurations via a text editor. That way Retro's console does not have to get fancy new features for my peculiar needs.
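To make the "text-y export" idea concrete, here is a sketch of the kind of batch edit it would enable. The format and field names are purely hypothetical (Retrospect provides no such export today):

```python
import json

# Hypothetical text export: each script with its destination media set.
# This is NOT a real Retrospect format; it only illustrates the workflow.
exported = json.loads("""
[
  {"script": "daily_user", "destination": "v9_daily_user"},
  {"script": "bu_DBase",   "destination": "v9_DBase"}
]
""")

# Batch change: repoint every script at the new version's media sets,
# something that currently requires clicking through each script by hand.
for script in exported:
    script["destination"] = script["destination"].replace("v9_", "v10_")

print(json.dumps(exported, indent=2))
```

The point is that any structured text format (JSON, XML, even tab-separated) would let a text editor or a ten-line script do the bulk edits, instead of the console needing bulk-edit features of its own.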
  19. Update on this.... I decided to let the transfer complete. The tail end of the log on the media set transfer looked like this:

    !Backup Set format inconsistency (4 at 308693232)
    (many, many of these.....)
    !Backup Set format inconsistency (4 at 308694528)
    !Backup Set format inconsistency (4 at 308695728)
    !Backup Set format inconsistency (4 at 308697408)
    !Backup Set format inconsistency (4 at 308697440)
    !Device trouble: "2-v9_iCompute_2012t", error -212 ( media erased)
    - 12/8/12 9:19:15 AM: Transferring from v9_iCompute_2012
    12/8/12 12:27:41 PM: 309 execution errors
    Completed: 690896 files, 113.6 GB
    Performance: 190.1 MB/minute
    Duration: 12:06:33 (01:54:39 idle/loading/preparing)
    - 12/8/12 12:27:41 PM: Verifying v9_iCompute_2012t
    12/8/12 3:08:14 PM: Execution completed successfully
    Remaining: 1 files, 66.6 MB
    Completed: 575859 files, 74 GB
    Performance: 1.4 MB/minute
    Duration: T03:11:5 (00:01:50 idle/loading/preparing)

    I then did a verify, figuring that any errors in the transfer would show up in the verify as some sort of error. I am not now inclined to trust this media set; it had "309 execution errors". I ran a verify of the tape set. The verify log looks like this:

    + Executing Verify at 12/10/12 (Activity Thread 1)
    To Backup Set v9_iCompute_2012t...
    12/11/12 4:45:37 PM: Execution completed successfully
    Completed: 690896 files, 113.6 GB
    Performance: 207.6 MB/minute
    Duration: 23:08:50 (13:48:31 idle/loading/preparing)

    No errors? How can that be? Why is there no indication of the 309 errors? The summary in the transfer script says that it wrote 113.6 GB, but it only verified 74 GB. How can that be? How much of this set should I trust? Is it completely untrustworthy, or is it likely to be a 90% solution? Is there any way I can "fix" the second member without re-writing all three tapes? (hours and hours) Thanks.
  20. As has been mentioned here in the past, the file selection rules need work. Today, I wanted to omit the files in /Users/*/Library/Caches/Safari and /Users/*/Library/Caches/PubSub. I do not want to omit all folders called "Cache", or risk omitting files from my backups because users have chosen unfortunate names. I want to be very specific. What I found after a little research is that this is not so easy. There is no wildcard or regular expression capability in the rules, and given the changing names of path components (like the volume name and the user name), it is non-obvious how to be both specific and general. It would also be helpful to have a "does NOT contain" rule. With a combination of "contains" and "does not contain", I can build what I need, but without "does not contain" that's much harder. (Impossible?) What I ended up doing is omitting "folder" "mac path" "contains" "/Library/Caches/com.apple.Safari" and "/Library/Caches/PubSub". This is sufficiently specific that I don't think any random user is going to trip on it, but it is still not strictly correct.
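For illustration, the wildcard capability being asked for is easy to express in code. A Python sketch using shell-style patterns (this is not a real Retrospect feature; the paths are the examples from the post):

```python
import fnmatch

# Shell-style patterns with a wildcard for the user-name component --
# the kind of rule the selection UI does not currently offer.
EXCLUDE_PATTERNS = [
    "/Users/*/Library/Caches/com.apple.Safari*",
    "/Users/*/Library/Caches/PubSub*",
]

def excluded(path):
    """True if the path matches any exclusion pattern.

    Note: fnmatch's "*" also matches "/" (it does not stop at path
    separators), which is acceptable for this illustration.
    """
    return any(fnmatch.fnmatch(path, pat) for pat in EXCLUDE_PATTERNS)
```

A rule engine with this one primitive (plus a "does not match" negation) would cover the use case exactly, without the risk of a plain "contains" rule catching a user's unfortunately named folder.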
  21. It is really important to ensure that your backup regimen matches your needs. Grooming is one way to address this problem, but it is better to have 2 or 3 layers of assurance. How you do this depends on how much data you need to keep, and how often it changes. It also depends on how long your laptops are "absent". If your backup cycle only keeps 2 weeks, and a laptop is gone for 3 weeks, you have no backup, as you've found. The balance is to keep enough backup data to cover "worst case" without keeping so much that you waste a lot of effort and money maintaining it.
  22. I see mention of the activity threshold, but not mention of the speed threshold. Is there a reason that could not be used? WiFi is quite a bit slower than GigE. As long as the speed of GigE is better (I presume it is, otherwise, why insist on using it?) you should be able to pick a speed threshold that allows backup over GigE, but not WiFi. Am I missing something?
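To make the threshold idea concrete, a sketch with illustrative throughput numbers (the real figures depend entirely on the hardware and network conditions; these are assumptions, not measurements):

```python
# Illustrative numbers only: realistic sustained throughput in Mbit/s.
GIGE_TYPICAL_MBIT = 900   # wired gigabit Ethernet (assumed)
WIFI_TYPICAL_MBIT = 150   # 2013-era 802.11n WiFi (assumed)

def passes_speed_threshold(measured_mbit, threshold_mbit=400):
    """Any threshold set between the two bands admits the wired link
    but rejects the wireless one, so the backup only runs over GigE."""
    return measured_mbit >= threshold_mbit
```

As long as the two links' speed ranges don't overlap, a single threshold cleanly separates them, which is why the speed threshold seems like a natural fit here.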
  23. Temporary Website issues

    That's a nice problem to have. ;->
  24. I am reporting this early because it may be a serious bug, but I have not yet verified that I have not caused it with pilot error.

    My situation is that I want to continue to manage my backups as I always have, with a backup media set for each month, and then each month start a new set. With Retro 9, this is not possible. There is no way that I can see to do what I did in Retro 6, which is "create new backup set" - effectively creating a new set, with all the scripts that used the old set now using the new one. Instead, what I do is run a media set copy operation to "mediaxx_last" every month, with the "reset on successful copy" option set. I tried this August 1st "by hand", and it worked fine, so for Sept 1st I set up scripts to do this.

    What I see is that my "mediaxx_last" sets have only the most recent snapshots in them. I expect to see the most recent snapshots in the "backups" tab under "media sets", with the "retrieve" button revealing the full set. In the script, I have "copy backups", "media verification", "data compression", and "recycle after success" enabled. The other options are disabled (match source, don't add dups, match only in same location, and eject tapes).

    I am thinking this is not likely to be pilot error because I run a similar script on several media sets, and one of the larger and more important sets seems to have worked fine, while several others are "truncated". I will be digging into this to figure out what went wrong, but would appreciate any feedback from anyone else who has used these features.
  25. No. Only the most recent one is available. The "retrieve" button is grey.