fredturner

Members

  • Posts: 94
  • Joined
  • Last visited
  • Days Won: 4
  • Reputation: 8

fredturner last won the day on August 2 2017

fredturner had the most liked content!


  1. Any timeframe on native Apple Silicon binaries for Engine, Console, and Client?
  2. Hey David— Thanks for the reply and thoughts. Yes, my connection will handle the transfers just fine. It is a Gigabit fiber connection, and the fastest upload of any of these clients is 100Mbps, with some being more like 10-20Mbps.

     The way I've set up my installations is just like your home setup— multiple USB disks being rotated offsite weekly. Unfortunately, I can't be everywhere each week, and that method relies too much on human interaction and on the disk sets behaving themselves, which is why I want to do a sort of "disk-to-disk-to-cloud" arrangement, but really to my own server, not exactly the overused-word "cloud". As for monitoring, I am monitoring and constantly coordinating w/ customer sites, but I'd rather do it better and make it less vulnerable to human error.

     Perhaps the Copy Backup function would work here... How would that compare to a Cloud set, which backs up to local media and uploads to Amazon or another service (which I would like to emulate and be my own "cloud")? Would I just set up a share and make it a File set that the local one gets copied into? I'll do a little experimenting to see if it might be close enough to what I'm wanting to work.

     Does kinda suck that in 4.5 years you haven't gotten a fix for the seemingly basic lock functionality that you bug-reported. Not terribly surprising tho...

     Thx, Fred...in the US 🙂
  3. Hey Everybody— Looking for suggestions on adjusting my usual M.O. for setting up my customers' Retrospect environments. I have several of these configured at different sites, and I usually use 2 or 3 USB3 hard drives as independent, rotating-offsite sets. This works okay, but I've had some difficulty w/ filling and failing disks and with making sure the customers are rotating as they should be. So I want to make this less error-prone and more robust.

     The Cloud functionality is kind of what I want, but I'd rather use my beefy fiber connection and the Mac Pro/EMC disk shelves at my disposal to sort of "roll my own" offsite destination. Is this doable? I've seen reference to such things as MinIO, but I'm not versed in that yet (see the sketch after this list). What about a simple AFP volume that can be reached via Internet connection? Or even something using Carbon Copy Cloner?

     What I'd like to do is implement a larger, faster local backup destination on the existing Retrospect servers that I don't have to make sure is rotated, then mirror those local sets to my server/disk shelf array. Thoughts?

     Thanks, Fred
  4. Yes, I know...user forum...I don't post all the time, but I do refer and post periodically. Also, this subforum _is_ called Bug Reports, so if Robin, et al., aren't checking or perusing it anymore, perhaps it should be renamed or shuttered?

     So yeah, Storage Groups are basically like an envelope holding a bunch of source-specific "subsets". You can therefore have simultaneous operations going to a single media set, like multiple network clients backing up at the same time w/o having to go sequentially and wait on each other. I'm not sure how you're suggesting I prohibit the activities from using "any thread"...I could give it a try, but the heart of the matter is that the functionality just isn't functioning like it's supposed to (and once was).
  5. Right back to 16. After a restart I tried again to set it to 8, then quit Console and launched it again...back to 16.
  6. I have not on these 2 servers. Obviously I shouldn't have to do that, but let me give it a try on one of them and I'll post back. Thx, Fred
  7. So far I've observed two backup servers running Retrospect 18 that ignore the number of threads set in Preferences > General > Allow ___ Activity Threads. I set it to something other than 16, but the Engine still spawns up to 16 executions. Quit and relaunch Console, and the setting is back to 16. Lather, rinse, repeat...same thing happens.

     When I then rebuild or groom a storage group (because they require constant attention), 16 threads fire up and a 6-core Mac mini runs at about 900% CPU for hours w/o any single thread finishing in a reasonable amount of time. Glad for the seemingly massively parallel CPU utilization, but I need a way to limit it a little bit.

     Fred
  8. This probably doesn't help the v16 problem specifically, but there may be a general problem when upgrading to new versions. I just had this happen going from 17.5 to 18. Wondered why hundreds of extra GB started getting backed up: ALL of the checkboxes were cleared for ALL clients, so all of the previously excluded folders on clients were no longer excluded. Fun!

     As a side note, I don't know about you, but I spend inordinate amounts of time handholding this software like this, making sure it doesn't choke and leave my clients w/o backups. This has been a constant since the v8 debacle. I really just wish we could forgo worrying about adding shiny new features in each paid upgrade and instead just make the software not do weird things that cause workarounds, constant handholding, and missed backups.

     Fred
  9. So, we now have these storage groups that function like a collection of mini-sets and mini-catalogs, with each client or source getting its own mini-set/catalog. When I have to repair or rebuild a set on any of my consulting clients' Retrospect installations (WAY more often than I should have to, if this were a truly reliable backup program, IMHO), I see the individual "mini-sets" getting scanned and rebuilt.

     Is there a way to trigger a repair or rebuild for an individual source in a storage group? Right now I've got a client that isn't backing up due to a "chunk checksum (-641)" error. I'm so tired of rebuilding multi-terabyte backup sets that I really don't want to put everything on hold yet again while this 5TB disk is scanned and the catalog rebuilt. Since each source appears to have its own self-contained catalog and folder of band files on the backup disk, couldn't Retrospect just rebuild the individual source whose catalog has failed?

     Thx, Fred
  10. Hey Everyone— I've recently moved most of the stations at one of my larger installations to macOS 10.14 Mojave, and of course I started seeing the alerts in the logs about the Retrospect client not having Full Disk Access. Is there a suggested or recommended way to automate setting this on the stations? I use Apple Remote Desktop to manage the machines, so is there a defaults command that any of you have already used to get this working? (There's a read-only audit sketch after this list.) I really don't want to have to manually hit every single station to configure this. Thanks, Fred
  11. Hey Everybody— I've been very pleased w/ how much faster backups are now w/ Retrospect 16, since Instant Scan is working again. And the ability to back up multiple clients at once via Storage Groups is a very welcome new feature. One problem is that there doesn't seem to be much documentation about the pros/cons and caveats...just one brief KB article. Two things I've encountered:

      1. No way to limit the number of execution units a group can use (at least, not that I can tell). It appears that a Storage Group will simply use however many threads you've specified for the whole Engine in Preferences. If that's the case, I'd suggest v16.1 add a field to the Media Set options that allows limiting based on preference and/or practical disk speed limitations (while not limiting the overall number of threads/units).

      2. More importantly, I'm having trouble w/ a standard, nightly script that I want to back up to the same Storage Group my client stations back up to. Obviously, Storage Groups are targeted at Proactive executions, but the KB article also says, "Scheduled scripts support Storage Groups as destinations, but the backups run on a single execution and not in parallel." I keep having trouble getting these scripts to finish executing. Just now, I had one disk in the script back up 8GB; then, during scanning of the next disk, the whole thing just stopped and is showing that it needs media. The media has plenty of space, and the hang appears to have occurred before the scanning even finished.

     Has anybody else seen this or know why it is balking after a partial backup? There may be more going on than I'm realizing, but again, I just don't see much in the way of specific documentation! Thanks for any suggestions or thoughts. Fred
  12. Hey Everybody— Have I missed something about APFS/High Sierra and Instant Scan? Or am I just having an isolated issue? I'm noticing that it takes an hour to scan the 1 million+ files on my MBP, and that Instant Scan doesn't appear to be doing anything on my machine. Checking the Engine logs does not show the usual "Using Instant Scan..." entry for my machine. I can provide further observations and log info if need be, but first I just wanted to see if this is something already known/common. I'm using Retrospect v14.6 for both server and client. Thx, Fred
  13. Hey Everyone— I want to create a rule for excluding render files and transcoded media from Final Cut Pro X. FCPX keeps these files inside event folders, which are themselves contained in Libraries, in folders called "Render Files" and "Transcoded Media". So the hierarchy looks something like this:

     My FCPX Library.fcpbundle
         My Event 1
             Original Media
             [some other files and folders]
             Render Files
             Transcoded Media
         My Event 2
         ...

     I seem to be able to do 2 ALL conditions, one each to select Render or Transcode:

     – Folder Mac Path Contains ".fcpbundle"
     – Folder Name Contains "[Render Files/Transcoded Media]"

     and I think that will work. But it seems clumsy to me. Is there a way to list the parent folder path w/ ".fcpbundle" just once, with the other 2 below it? IOW, the parent condition is an ALL and the 2 child conditions are ANYs? (See the logic sketch after this list.) I need to match the parent condition for sure, plus one of the children. I don't seem to be getting the Any/All popups to do that right. In this case, it isn't critical, but if I had more subfolders under the parent that I wanted to exclude/include, having to list the parent every time gets a bit tedious. Suggestions? Am I doing it wrong? Thx! Fred
  14. That is an excellent question! ...and it leads to another question/suggestion: Why is there no way (or at least no easy/obvious way that I've found) to find out how many clients you have logged in? I had to count one screen's worth, then page down and multiply. I get around 50 clients.

     I just don't understand why it takes SO LONG to get started on WAY OVERDUE clients. It can list all of the ones that are online in a matter of seconds when you go to Add or Locate. Why can't it do _that_ fast scan once every couple of minutes and say, "Hey, there's John's MacBook Pro...it's been 11 days since I've backed him up...nothing else is running and I have a valid backup disk...I'll start NOW!" Instead, I can have John's MacBook Pro jumping up and down and frantically waving its arms at the Retrospect server, but by the time the server has decided it might want to check John's MBP out, John's MBP has gone to sleep or left the network.

     I can sit and check on it regularly for a couple of hours while I'm onsite, and it picks up hardly any clients when they are there and needing backup. I even try to help it along by clicking Browse on a client disk or doing a Refresh on the client. But it just sits there w/ its fingers in its ears. It'll eventually get to them, but I can't imagine how awful it'd be in a truly large setup. All of this "dead air" is wasteful time-wise and leads to clients getting missed. Why isn't the Engine more aggressive and active??
  15. Rebuilding the set didn't matter. I can click on the set, choose "Performance-optimized grooming" from the menu, click Save, then click on another set and click back, and it has reverted back to Storage-optimized instead. I changed back to Performance-optimized again and started a grooming operation, but the log says "optimizing for storage..."
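
A note on the roll-your-own cloud idea in post 3: Retrospect's cloud sets can point at S3-compatible endpoints, and MinIO is one way to provide that on your own Mac Pro/disk shelf hardware. Below is a minimal sketch, assuming a MinIO server is already running and reachable at backup.example.com:9000 (the hostname, credentials, and bucket name are all placeholders), that confirms the endpoint answers and creates a bucket for a Retrospect cloud set to use:

    import boto3
    from botocore.client import Config

    # Placeholder endpoint and credentials -- use whatever your MinIO
    # server was actually configured with.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://backup.example.com:9000",
        aws_access_key_id="MINIO_ACCESS_KEY",
        aws_secret_access_key="MINIO_SECRET_KEY",
        config=Config(signature_version="s3v4"),
    )

    # List buckets to confirm the endpoint is reachable and the
    # credentials work, then create the target bucket if it's missing.
    existing = {b["Name"] for b in s3.list_buckets()["Buckets"]}
    if "retrospect-offsite" not in existing:
        s3.create_bucket(Bucket="retrospect-offsite")
        existing.add("retrospect-offsite")
    print("Buckets:", ", ".join(sorted(existing)))

Once the bucket exists, the endpoint URL and keys go into Retrospect's cloud media set dialog the same way Amazon's would.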
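
On the Full Disk Access question in post 10: as far as I know there is no defaults command for this; FDA grants live in the TCC database, and the supported way to push them out in bulk is a Privacy Preferences (PPPC) configuration profile from an MDM, not Apple Remote Desktop alone. What can be scripted is a read-only audit of which stations already have the grant. A sketch, assuming Mojave's TCC schema (an "allowed" column; later macOS versions renamed it "auth_value") and that the process running the query has Full Disk Access itself:

    import sqlite3

    # System-wide TCC database; reading it requires the querying
    # process (e.g. ARD's "Send UNIX Command" run as root) to have FDA.
    TCC_DB = "/Library/Application Support/com.apple.TCC/TCC.db"

    conn = sqlite3.connect(f"file:{TCC_DB}?mode=ro", uri=True)
    rows = conn.execute(
        "SELECT client, allowed FROM access "
        "WHERE service = 'kTCCServiceSystemPolicyAllFiles'"
    ).fetchall()
    conn.close()

    for client, allowed in rows:
        print(f"{client}: {'granted' if allowed else 'not granted'}")

This only reports; granting FDA still takes the PPPC profile or a manual visit to each station's Security & Privacy pane.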
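
And on the nested Any/All question in post 13: the rule being described is a single ALL parent condition combined with an ANY group of two children. As plain boolean logic it looks like the sketch below (illustrative only; the paths and function name are made up for the example, and the real configuration happens in Retrospect's rule editor):

    from pathlib import PurePosixPath

    def should_exclude(path: str) -> bool:
        """Parent condition (inside a .fcpbundle library) must match,
        AND at least one child condition (a Render Files or Transcoded
        Media folder) must match."""
        parts = PurePosixPath(path).parts
        inside_library = any(p.endswith(".fcpbundle") for p in parts)
        render_or_transcode = any(
            p in ("Render Files", "Transcoded Media") for p in parts
        )
        return inside_library and render_or_transcode

    # Paths mirroring the hierarchy in the post:
    print(should_exclude(
        "/Movies/My FCPX Library.fcpbundle/My Event 1/Render Files"))    # True
    print(should_exclude(
        "/Movies/My FCPX Library.fcpbundle/My Event 1/Original Media"))  # False

In the rule editor, that corresponds to one enclosing All group containing the ".fcpbundle" path condition plus a nested Any group holding the two folder-name conditions, assuming your version's editor supports nested condition groups.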