j.a.duke

Members
  • Content count: 13
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About j.a.duke
  • Rank: Member

Recent Profile Visitors: 449 profile views
  1. System details:
     • Retrospect 17.0.2.101
     • Mac Mini 2018 (32/256) connected to an Areca Thunderbolt 3 array (140 TB)
     • macOS 10.14.6 on the server; 10.12-10.15 on clients (a mix of local desktops and remote laptops outside the network)
     • Destination is a Storage Group (to allow multiple backups to write to it simultaneously)

     I set up a Proactive script per the instructions in the KB (https://www.retrospect.com/en/support/kb/how_to_set_up_remote_backup), pushed a client out to all of my systems via Munki (https://www.retrospect.com/en/support/kb/deployment_with_munki), and everything was working. It worked well for over a month, until last Thursday when all Proactive backups stopped working. All scheduled backups are still running.

     I rebooted the server Mac multiple times and tried stopping, then starting, the Retrospect Engine on it. When I rebooted today, some of the Proactive backups ran, but nowhere near the number listed as ASAP under Activities > ProactiveAI. I'm currently running a regular backup script manually, but no Proactive backups are running. I've tested several of the clients in the primary Proactive script and they are responding (see the port-reachability sketch after this list).

     I don't see a whole lot of troubleshooting info for Proactive backups in the knowledge base. Any suggestions on what I should look at or how to get this all working again? Thanks.

     Cheers, Jon
  2. Disk-to-Disk-to-Cloud?

     Thank you. My primary concern is laptops that are currently off-site, so it sounds like D2D2C would be better.

     Cheers, Jon
  3. I have three Proactive scripts defined: FT_FD to back up both our laptops and desktops, JAD-Cloud as a test to back up my laptop to Minio running on one of our Synology units, and a regular JAD script to a disk storage set.

     In Schedule, I've set both JAD Proactive scripts to every 3 hours, while the FT_FD script is every 1 day. For the JAD scripts, my laptop's SSD is the only source. For the FT_FD script, there are ~60 sources; my laptop is not in its source list.

     Neither of the JAD Proactive scripts has executed successfully/completely in the last 7 days (the last successful backup was 1 June 2020). How can I ensure that my laptop actually gets backed up in a reasonable period? Thanks.

     Cheers, Jon
  4. Would this be better than a direct-to-cloud backup in terms of performance? I understand that you stage to disk to minimize the delays in getting data to the tape drive and to maximize the capacity of the tape. I was thinking it might be better if the cloud upload occurred off-hours (say 2100-0700) rather than during the normal workday (a rough capacity estimate for that window is sketched after this list). Has anyone done this and, if so, do you have any comments/suggestions? Thanks.

     Cheers, Jon
  5. I upgraded to 17.0.0.149 last week, then used Munki to push Client 17.0.0.149 out to all our desktops and laptops, configured with our public key and the server.txt file to connect to our backup server. Amazingly, everything worked pretty much as described. I used the following to get this working:

     • Retrospect Client Deployment with Munki (just add server.txt to the disk image along with the installer and the public key information to get remote clients connected)
     • Remote Data Protection
     • How to Set Up Remote Backup
     • User Guide - Operations - ProactiveAI Backup

     Anyway, now I'm seeing media requests from a couple of clients. I dug into the disk media set (Smaller Backup:Retrospect Data:FT_FD:Retrospect:FT_FD:), where the individual machine backups are stored (e.g. JAD-Macintosh HD), and I see "1-JAD-Macintosh HD", which holds a large number of .rdb files. For the clients that are requesting media, their folders exist in the structure noted above, but from within Retrospect I can only navigate to Smaller Backup:Retrospect Data:FT_FD:Retrospect; the folders below that "Retrospect" folder are not visible in the console navigation.

     Has anyone seen this, and how does one solve it? Thanks.

     Cheers, Jon
  6. Thanks for the information. I've got the D2D2T setup as Robin shows in the video, with the settings changes that you noted. Backup of the e-mail server via the Retrospect client to the disk media set starts at 2000 hours and finishes within 3 hours (~20 GB of data). The Copy Backup (from the disk media set to the tape media set) starts at 0300 hours and currently runs for ~6 1/2 hours; in "the early days" it would be wrapped up well before I arrived at the office (a quick throughput check on these figures is sketched after this list). Would it help if I ran a test to a new tape to benchmark the throughput for an initial backup?

     There are really two interrelated problems here: speed of backup and efficient use of tape capacity. In my media set window, the first tape shows 4.4 TB used and 9.9 TB free, for a total of 14.3 TB. The second tape member shows 167.2 GB used and 5.6 TB free, for 5.7 TB total. Both tapes are LTO-7 media (in an LTO-8 drive). My goal was to fit an entire year of backups onto a single cartridge for each of our critical systems and send those off-site at year-end. Thanks.

     Cheers, Jon
  7. All the components were new in ~March; I hadn't had a tape drive in this setup previously (I used a VXA-320 a long time ago). I'm typically seeing multi-GB/min performance to the array, but only 60-70 MB/min from the array to the tape for my mail server Copy Backup.

     I run three copies to tape: a small amount of data (<5 GB) from our slowly-being-retired file server at ~2 MB/min, our document management server (~15 GB) at 1-3 GB/min, and the mail server (20-25 GB) at 60-70 MB/min. Would the number of files needing to be copied affect the performance? Each of these has two sources (startup & data volumes), but email has ~50k files per copy and the others typically <7k. Thanks.

     Cheers, Jon
  8. I added a shiny new LTO-8 drive to my backup plan earlier this year. My copy backups are running fine (no errors), but the performance is currently terrible. I can't find the early executions (the log only has about one month of entries), but I seem to recall the throughput was much better (4-6 GB/min vs. the current mid-two-digit MB/min). Retrospect is 15.1.2 (I see that 15.5 arrived this morning, but I don't see anything in the release notes regarding this problem).

     My config: Mac Mini mid-2011, 2 GHz quad-core i7, 16 GB RAM, running 10.12.6. The network connection is via a Promise SANLink2 10GBASE-T adapter. Primary backup storage is an Areca ARC-8050T3-12. The tape drive is an external Quantum LTO-8 connected via an OWC Helios TB3 with an ATTO ExpressSAS H680; this card was chosen over a Thunderbolt/SAS box because the vendor (BackupWorks.com) recommended it (at least the ATTO card) as more reliable. For those of you playing along at home, the Thunderbolt chain is: Mac Mini - SANLink2 - Apple TB/TB3 adapter - ARC-8050T3 - OWC Helios TB3.

     The source is the (ungroomed) disk set of our e-mail server; the rule for that set is "All Files Except for Cache" and its catalog is not set to be compressed. The destination is a tape set with "Fast Catalog rebuild" checked. The set has 12,951,189 files occupying 4.5 TB (an average-file-size calculation is sketched after this list). The script has the "All Files" rule set, with these options: Media Verification, Match Source Media Set to destination Media Set, Don't add duplicates to Media Set, and Match only files in same location/path. This machine is dedicated to running Retrospect.

     Is there something I should set differently that would improve the performance? Does the information in this post have any relevance to what I'm doing? http://forums.retrospect.com/topic/154263-realistic-time-for-copy-media-set/?do=findComment&comment=264611 Thanks.

     Cheers, Jon
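
A minimal sketch for the connectivity question in post 1: Retrospect clients normally listen on TCP port 497, so a quick reachability probe from the backup server can help separate plain network problems from engine-side Proactive problems. The host names below are placeholders, not names from the original posts.

```python
import socket

# Placeholder hosts; substitute the client names or IPs from the Proactive
# script's source list.
CLIENTS = ["laptop-01.example.com", "laptop-02.example.com"]
RETROSPECT_CLIENT_PORT = 497  # TCP port the Retrospect client listens on

def reachable(host: str, port: int = RETROSPECT_CLIENT_PORT, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for client in CLIENTS:
    status = "reachable" if reachable(client) else "NOT reachable"
    print(f"{client}: {status}")
```

If the clients answer on port 497 but still never get picked up as ASAP sources, the problem is more likely on the engine side than on the network.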
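For the off-hours cloud upload idea in post 4, a rough capacity check of the 2100-0700 window. The uplink speed and efficiency factor here are purely assumed for illustration, not figures from the posts.

```python
# Rough estimate of how much data fits in an overnight upload window.
window_hours = 10            # 2100 to 0700
uplink_mbps = 100            # assumed sustained uplink, megabits per second
overhead = 0.8               # assume ~80% efficiency after protocol/API overhead

gb_per_hour = uplink_mbps * overhead * 3600 / 8 / 1000   # megabits -> gigabytes
print(f"~{gb_per_hour:.0f} GB/hour, ~{gb_per_hour * window_hours:.0f} GB per {window_hours}-hour window")
```

With these assumed numbers the window moves roughly 360 GB, which is plenty for the daily change rates described in the posts; a slower uplink scales the estimate down proportionally.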
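A quick consistency check on the figures in posts 6 and 7: at 60-70 MB/min, copying the mail server's 20-25 GB to tape should take on the order of five to six hours, which lines up with the ~6 1/2-hour Copy Backup window described.

```python
# Sanity check: copy time for the mail-server set at the observed tape throughput.
data_gb = 22.5               # midpoint of the 20-25 GB mail-server copy
throughput_mb_per_min = 65   # midpoint of the observed 60-70 MB/min

minutes = data_gb * 1000 / throughput_mb_per_min
print(f"~{minutes / 60:.1f} hours to copy {data_gb} GB at {throughput_mb_per_min} MB/min")
```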
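For post 8, the average file size of the source set (figures taken straight from the post) comes out quite small, which makes per-file overhead, rather than raw drive speed, a plausible contributor to the low tape throughput, since LTO drives stream best with large sequential transfers.

```python
# Average file size of the e-mail disk set described in post 8.
files = 12_951_189
total_tb = 4.5

avg_kb = total_tb * 1e9 / files   # 1 TB = 1e9 KB (decimal units)
print(f"Average file size: ~{avg_kb:.0f} KB across {files:,} files")
```

That works out to roughly 350 KB per file on average, so the mail-server copy is dominated by millions of small files rather than a few large ones.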