Showing results for tags 'backup'.

  1. I am having problems with one backup script. I regularly run a file backup alternately to two different NAS drives. Yesterday, the script for one of the drives failed, saying it could not find the disk. The backup set properties showed the address as \\N7700PRO\Backup\Backup; the correct address is just \\N7700PRO\Backup. I opened the properties, picked Advanced, and entered the correct address, which then showed in the window. I closed the properties dialog, but when I immediately reopened it, the old, wrong address was back. I cannot get it to save the correct address. The index file exists, and the full backup set is still at the correct address on the NAS drive. The other backup script, for a Synology NAS, still works fine; it is identical except for the disk address. A few times in the last few weeks I have also had to enter the password for the N7700PRO script while it was executing. I attached a screenshot of the backup set properties after I made the change and then opened properties again. Please let me know how I can get around this problem; this is the first time in many years I have had such an issue. I know the handling of remote drives changed in the latest update, and I wonder if that is related. Thanks. I am running Retrospect Desktop on Windows 10 1903.
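Before digging into the backup set properties, it can help to confirm that the UNC path Retrospect should be using is actually reachable from the backup machine at script time. A minimal sketch (the function name is mine, and the commented-out path is just the share named in this post; substitute your own):

```python
import os

def volume_reachable(path):
    """Return True if the backup destination directory exists and is listable."""
    try:
        os.listdir(path)
        return True
    except OSError:
        return False

# Hypothetical check against the share named in this post:
# volume_reachable(r"\\N7700PRO\Backup")
```

If this returns False under the account the script runs as, the problem is credentials or network visibility rather than the stored address.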
  2. I found that when I changed the name of a source drive, even after refreshing the sources to recognize the new names, subsequent backups seem to revert the sources to what they were, and the backups then fail because that device no longer exists. My workaround was to refresh the source list and then immediately run a script using those devices; somehow that seems to work, for now. Here are the steps in case you want to reproduce:
1) Change the source name of a device in the Retrospect Multi-Server backup plan (in my case I had reformatted the source drive as APFS and renamed it).
2) Go to Retrospect and refresh the source computer to register the new drive names.
3) Wait 24 hours.
4) Run the backup script that uses those sources.
The source names revert over time (and over the course of other automated backups) to the old names all by themselves: the new source names previously shown in the dropdown list disappear, the old names replace them, and any backup scripts using the old drive source names will keep failing without manual intervention. The workaround: reset the source list by refreshing sources (Sources > select the source machine > Refresh) and immediately perform a backup. N.B., two anomalies reading through the log. The log shows "First Access" for the device "GenieMBP", even though previous backups of that device were completed earlier that same day; I guess it means first access of those source drives? The script also changed the source drive all by itself to something that existed during previous backups (Macintosh HD) before failing on the missing drives. For some reason signatures are no longer available (where I had my user equipment, versions, backup device info, etc.), so I'll include this on every post until that forum feature is restored or found again.
Retrospect Desktop Multi-Server 15.6.1. Host (engine): Mac mini, 2 GHz Core 2 Duo, 16 GB RAM, 500 GB internal SSD (APFS), OS X 10.14.3. Host backup media: 8x external SATA III via Thunderbolt, 2x via FW->USB, USB 3.0 500 GB SSD standalone drive. Router: 1000BT wired and 802.11n/b/g wireless. Retrospect client: 15.6.1 (all), 3 Intel clients (iMac, MBP) running OS X 10.14.3. Attachment: Retrospect log_190219.rtf
  3. To All Retrospect Users, can you please read the following feature suggestion and offer a +1 vote response if you would like to see this feature added in a future version of Retrospect? If you have time to send a +1 vote to the Retrospect support team, that would be even better. Thank you! I contacted Retrospect support and proposed a new feature which would avoid redundant backups of renamed files which are otherwise the same in content, date, size, attributes. Currently, Retrospect performs progressive backups, avoiding duplicates, if a file's name remains the same, even if the folder portion of the name has changed. However, if a file remains in the same folder location and is merely renamed, Retrospect will backup the file as if it's a new file, duplicating the data within the backup set. This costs time and disk space if a massive number of files are renamed but otherwise left unchanged, or if the same file (in content, date, size, attributes) appears in various places throughout a backup source under a different name. If this proposed feature is implemented, it would allow a Retrospect user to rename a file in a backup source which would not subsequently be redundantly backed up if the file's contents, date, size, attributes did not change (i.e., just a file name change doesn't cause a duplicate backup). I made this suggestion in light of renaming a bunch of large files that caused Retrospect to want to re-backup tons of stuff it had already backed up, merely because I changed the files' name. I actually mistakenly thought Retrospect's progressive backup avoided such duplication because I had observed Retrospect avoiding such duplication when changing a file's folder. For a folder name change, Retrospect is progressive and avoids duplicates, but if a file is renamed, Retrospect is not progressive and backs up a duplicate as if it's a completely new file. 
If you +1 vote this suggestion, you will be supporting the possible implementation of a feature that lets you rename files without incurring a duplicate backup of each renamed file. This would allow you to reorganize a large library of files under new names to your liking without having to re-backup the entire library. Thanks for your time in reading this feature suggestion.
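For anyone curious how the proposed matching could work, here is a minimal sketch (not Retrospect's actual matching logic, which is not public): fingerprint each file by content hash plus size and modification time, ignoring its name and folder, so a pure rename produces the same fingerprint and is skipped. Function names are mine.

```python
import hashlib
import os

def file_fingerprint(path):
    """Fingerprint a file by content hash, size, and mtime, ignoring its name."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    st = os.stat(path)
    return (h.hexdigest(), st.st_size, int(st.st_mtime))

def needs_backup(path, seen):
    """True if this file's fingerprint is new; records it in `seen` if so."""
    fp = file_fingerprint(path)
    if fp in seen:
        return False
    seen.add(fp)
    return True
```

With this scheme, renaming a file leaves its fingerprint unchanged, so a progressive backup would not copy it again; the cost is that every candidate file must be read to hash it, which is presumably one reason real products match on name and metadata first.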
  4. Hi, I am currently using Retrospect 9 (Mac mini running Mavericks) to back up a 10 TB RAID 6 NAS (capacity will be doubling, if not trebling, in the near future) and 5 clients, using 2 separate backup cycles (8-week recycle + daily incrementals) to 2 separate external FireWire 800 drives on a weekly rotation. The NAS and clients are connected over 1 Gb Ethernet. Recycle backups start at 6:00 pm on a Sunday and are not finished when the company users come in on Monday morning. Are there any improvements in the hardware/network configuration that can improve performance without breaking the bank? An incremental backup of some 70-80 GB of data took upwards of 5 hours to complete earlier this morning, at a rate of just 580Mb/s. Are there any settings within Retrospect that can improve performance? Any advice on how to achieve optimum network D2D backup performance would be greatly appreciated. tia
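Whatever the quoted rate was meant to be, it is worth working out the effective throughput implied by the post's own numbers: 80 GB in 5 hours is roughly 273 MB/minute, about 4.6 MB/s, a small fraction of the ~125 MB/s gigabit Ethernet can carry in theory, which suggests the bottleneck is the disks, the file count, or verification rather than the link itself. A quick sketch of the arithmetic (figures from the post; the function name is mine):

```python
def mb_per_minute(gigabytes, hours):
    """Effective throughput in MB/minute for `gigabytes` moved over `hours`."""
    return (gigabytes * 1024) / (hours * 60)

rate = mb_per_minute(80, 5)   # about 273 MB/minute
rate_mb_s = rate / 60         # about 4.6 MB/s
```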
  5. Hello! I'm using Retrospect OEM Single Server v8.5.0 (136) on Windows Server 2012 R2. Usually we just leave Retrospect to do its thing every day and have had no real issues. We rotate between two Dell R7X91 320 GB cartridges for our daily backups, named DailyBackup1 and DailyBackup2, starting at 8 pm M-F. I always take one home at the end of the day, then swap it for the previous night's backup when I return in the morning. I just happened to check the history today and noticed there have been many, I mean many, days that show Execution Incomplete. Also, no data shows as having been copied: Completed 0 files @ zero KB, Remaining 298711 files @ 278.3 GB. The errors below are much the same for all of them, except there are a few where some data is copied, yet it still says the media request timed out, which makes no sense to me. How can anything be copied if the error just before stated the media request timed out? So for every work day (M-F) there are two logs. The first is usually a timeout error because the cartridge it expects is not available (it's offsite), so it looks for the other DailyBackup, finds it, and starts backing up. Well, that is what it used to do. Now it just seems terribly inconsistent, and I have no idea why or what has changed. The failed backup errors below are from 8/3/17. It wasn't until 8/18/17 (2 weeks later) that the log showed a full backup.
Nothing backed up:
+ Recycle backup using PP_Backup at 8/3/2017 8:00 PM (Execution unit 1)
To Backup Set DailyBackup1...
- 8/3/2017 8:00:53 PM: Copying DATA (P:)
Using Instant Scan
Media request for "2-DailyBackup1" timed out after waiting 0:01:00
8/3/2017 8:02:42 PM: Execution incomplete
Remaining: 298711 files, 278.3 GB
Completed: 0 files, zero KB
Performance: 0.0 MB/minute
Duration: 00:01:49 (00:01:14 idle/loading/preparing)
+ Recycle backup using PP_Backup at 8/3/2017 8:02 PM (Execution unit 2)
To Backup Set DailyBackup2...
- 8/3/2017 8:02:43 PM: Copying DATA (P:)
Using Instant Scan
Media request for "2-DailyBackup2" timed out after waiting 0:01:00
8/3/2017 8:04:28 PM: Execution incomplete
Remaining: 298711 files, 278.3 GB
Completed: 0 files, zero KB
Performance: 0.0 MB/minute
Duration: 00:01:45 (00:01:11 idle/loading/preparing)
Full backup:
+ Recycle backup using PP_Backup at 8/18/2017 8:00 PM (Execution unit 1)
To Backup Set DailyBackup1...
- 8/18/2017 8:00:54 PM: Copying DATA (P:)
Using Instant Scan
8/18/2017 11:35:12 PM: Snapshot stored, 52.6 MB
8/18/2017 11:35:16 PM: Comparing DATA (P:)
8/19/2017 2:52:33 AM: 65 execution errors
Completed: 302090 files, 280.2 GB
Performance: 1402.3 MB/minute (1348.5 copy, 1460.7 compare)
Duration: 06:51:38 (00:02:34 idle/loading/preparing)
Partial backup:
+ Recycle backup using PP_Backup at 8/22/2017 8:00 PM (Execution unit 1)
To Backup Set DailyBackup1...
- 8/22/2017 8:00:55 PM: Copying DATA (P:)
Using Instant Scan
Media request for "2-DailyBackup1" timed out after waiting 0:01:00
8/22/2017 8:16:45 PM: Execution incomplete
Remaining: 243204 files, 271.1 GB
Completed: 59602 files, 17.3 GB
Performance: 1220.1 MB/minute
Duration: 00:15:50 (00:01:19 idle/loading/preparing)
So what could cause this, and where or what do I need to check? I'm also certain a cartridge is in the drive at all times, as I am the one doing it. I'm still really puzzled why a few of the logs indicate a transfer started but then only a fraction of the files were actually copied, leaving 90% or more behind. Attachment: operations_log.utx
  6. I am trying to understand how using multiple disk media sets with a normal backup script will work. Scenario: I have set up two disk media sets, A and B. I have created a backup script with one schedule that backs up to media set A on M, W, F, and a second schedule on the same script that backs up to media set B on T, Th, Sat. What I am trying to figure out is whether the backup script stripes the data across both media sets, or simply copies all of the same data to both media sets on a rotating basis. The reason I ask is that the source volume is larger than the destination media sets, so I would like to stripe the backups across both media sets to accommodate the size difference. If the above scenario is not appropriate for this, how should I do it instead? Thanks!
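As I understand Retrospect's scheduling (worth confirming in the user's guide), each schedule simply targets a whole media set, so the arrangement described is alternation, not striping: every matching file eventually lands in both sets, and each set must be large enough on its own. A sketch of the day-to-set mapping described above (names and structure are mine, for illustration):

```python
# Each weekday maps to the one media set that night's backup writes to.
# Every file on the source is eventually copied to BOTH sets (alternation),
# so neither set holds only "half" the data.
SCHEDULE = {
    "Mon": "A", "Wed": "A", "Fri": "A",
    "Tue": "B", "Thu": "B", "Sat": "B",
}

def destination_for(day):
    """Media set a normal backup writes to on a given weekday (None = no run)."""
    return SCHEDULE.get(day)
```

To genuinely split a too-large source across two destinations, the usual approach is instead to divide the source (e.g. by favorite folders or rules) so each subset goes to its own set.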
  7. Here's the thing: I have Windows 7 OEM Ultimate 64-bit, which came preinstalled on my HPE-570t when I bought it. HP partitioned my 1.5 TB HDD into 2 parts: "C" (where the OS and all my installed programs are) and "D", where the "System Recovery" is located. So, should I ever have to reinstall Windows 7, I could do it from drive D. Both partitions together, over the course of five years, only come to 307 GB. What I'd like to know is: after I back up the system, would there be a way to put the Windows 7 OS on a 500 GB SSD, seeing as how there's only 307 GB of information on the entire disk? Is that something that Retrospect 7.7 can do? Is there any truth to the information I've heard that you should only put your most frequently used programs on the SSD, and the rest can load from the mechanical HDD? I do have an external drive "M" with 1.52 TB free, if that helps you in putting all the pieces together as to how to get the info from one drive to another. Thanks in advance, Quiet Riot
  8. Let me paint the picture so everyone understands where I am at. I came in this morning to find that the backup sets I have been working on over the past week started to fail overnight, giving me an error code -643 (not a chunk file, or badly damaged). After talking with Retrospect, we found that the RtrSec.dir folder was corrupted and needed to be recreated. I deleted the folder, and that did seem to repair the error. Though this fixed my error with the backup sets, the backups are now running as full new backups rather than incremental backups. I have checked the settings for each backup set and everything checks out, so I do not see why this would happen. These backup sets are rather large, and running new backups would take about another week at best. Right now I have rebuilt the catalog file and am testing one of the backup sets. Has anyone ever had this happen before? Does anyone have any suggestions? Thanks, pardy1as
  9. OK... I have been waiting since I was using Windows XP for the unattended (automatic) backup feature to work again. As far as I can tell it still doesn't, and I can't find anything in the recent forum posts on this topic. I have several backup scripts scheduled, but when I launch Retrospect after using the computer for several nights during which backups should have been running, there are no backups logged. The last time I remember this working was in Windows XP. I am running Windows 10 Pro and Retrospect. To get my scheduled backups to run, I have to log in and manually launch Retrospect. The screen capture of the History window shows no backup activity for 4-6 Feb, even though the PC was used each of those days. In the Preferences > Startup menu I have the following options selected:
Startup: Enable Retrospect Launcher service; Automatically launch Retrospect
Security: Always run Retrospect as the specified user (I have a separate Admin account for Retrospect to use)
I pay good money for this software. Why do I STILL have to launch it manually? Frustrated.
  10. I have been working on new backup sets in a new backup environment and am having major issues; I am hoping someone in the community can help me. My environment: Retrospect is installed on a virtual 2012 R2 server with 230 GB of free space on the C drive. It is backing up the virtual server, a file server with about 4.5 TB of data, one AD/DNS server, and one Mac client. I have split each device into its own backup set and have also split the file server into multiple sets to reduce set size. This is all being backed up to a large 12 TB NAS. When backing up the system we have continually gotten errors on all sets. The common issue is that Retrospect keeps losing the media and requesting new media; this stops the execution, and nothing starts up again once I have given it new media. It was recommended that I rebuild the catalog files, which I did on all but one drive, because that one is the largest and would take days. After rebuilding the catalogs I continue to have errors where the backup freezes on building the snapshot. Everything else on the system runs fine, but the snapshot step seems to hang. I have other backups running now to confirm the issue happens on all of the backups, but I have already seen it on multiple backups. I tested running the backups without building the snapshot and had no issues; everything runs and everything is happy. Then I tried running the backup with building the snapshot, and the backup froze again. Has anyone had this issue before? I am really lost and could use some help.
  11. Hi, I could use some advice on deleting files from Retrospect backups. I am running Retrospect 8.1.0 for Windows, Single Server version, backing up to removable hard disks. I have been tasked with removing specific files from our backups due to a requirement from a customer. Reading through the documentation, it seems the Transfer Backup Sets/Snapshots option may be the way to go, but I am unclear on several things. When excluding files/paths during the transfer process, is it only a reference to the stored files that is removed, or are the items removed from disk too? In other words, does this process reduce the size of the backup/snapshots? Can the source and destination be the same, assuming there is enough room on the backup media? The reason I ask is that I do not want to purchase additional media until I have to. Once the snapshot/backup has been transferred minus any selected files, are the original backups/snapshots deleted or marked as obsolete; what's the process here? Is the transfer process the best and/or only way of achieving what's required, or is there another way of doing this? Can the 'deleted' files be recovered by rebuilding the backup/snapshots? Thank you in advance for any help and suggestions, it's much appreciated. Cheers
  12. Retrospect forcibly dismounts (forcibly ejects) external USB drives when completing a backup. This happens at the conclusion of a backup run by my script, and I see no option that would do this. Other externals (connected by FireWire) are NOT dismounted, only USB drives. The maker of the product says the fault points to Retrospect, since it doesn't happen with other backup tools or in normal Finder use (tried with CCC without error); naturally, they have never heard of this before. To recover, I need to manually reconnect the drives by removing and reconnecting the USB cable. The device is a 4-port USB dock for bare SATA drives from Startech.com (SATDOCK 4 U3E). It happens with Retrospect running on Yosemite and on other OS versions. Is there some setting or other preference controlling this? It has been happening regularly and seems to have followed the last version update (v12). Please note: the "Eject tapes and discs" option in the script is NOT checked.
  13. I have been trying to use a Source Group as a selector inside the Selecting window for a backup script. The main selector is: Include everything, but always exclude files matching "volume name exactly matches sourcegroup". In picking my conditions, I picked a volume name but put in the Source Group name instead. What did I do wrong? Does it matter whether the "source groups container" is a member of the source group or not? I really like source groups as the source for a backup script, and I was hoping I could apply the same approach to Selecting conditions.
  14. - 01/06/2015 4:00:06 AM: Copying Local Disk (C:)
Backing up 0 out of 7 files using block level incremental backup, storing full backups for remaining 7.
...
File "C:\Windows\SysWOW64\log.txt": can't read, error -1020 ( sharing violation)
VssWSetCompResult: res =1,101
Writer "COM+ REGDB Writer" backup failed, error -1101 ( file/directory not found).
Component "COM+ REGDB" backup failed, error -1101 ( file/directory not found).
01/06/2015 4:10:52 AM: Snapshot stored, 252.6 MB
Is there anything I can do to address the COM+ REGDB errors? They occur while backing up files on the backup server itself. This has been happening for several months, since before the current version. I have Single Server
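One way to narrow this down outside Retrospect is to check the state of the COM+ REGDB VSS writer with `vssadmin list writers` from an elevated prompt on the server; a writer stuck in a failed state will make any VSS-based backup report errors like these. Below is a hedged sketch that scans that command's output for writers not in a Stable state. The sample text is illustrative (hand-written to match the general shape of `vssadmin` output), not captured from a real machine.

```python
import re

def failed_writers(vssadmin_output):
    """Return (writer_name, state) pairs for writers whose state is not Stable."""
    results = []
    # Each writer's details follow a "Writer name:" line in the output.
    blocks = vssadmin_output.split("Writer name:")
    for block in blocks[1:]:
        name = block.splitlines()[0].strip().strip("'")
        m = re.search(r"State:\s*\[\d+\]\s*(\w+)", block)
        if m and m.group(1) != "Stable":
            results.append((name, m.group(1)))
    return results

# Illustrative sample, not real captured output:
SAMPLE = """
Writer name: 'COM+ REGDB Writer'
   Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f}
   State: [8] Failed
Writer name: 'System Writer'
   Writer Id: {e8132975-6f93-4464-a53e-1050253ae220}
   State: [1] Stable
"""
```

If the writer shows as failed even before a backup runs, restarting the associated service (or rebooting) and re-checking is a common first step before blaming the backup product.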
  15. I am setting up a test machine as an upgraded version of another, production machine. I wanted to restore part of its filesystem with the current production data from the production machine. I started a restore from the production media set (v9_icompute) to the "favorite" before midnight. While that restore was running, a normal incremental backup ran to the same media set; the restore and the backup were running at the same time. The restore failed with this message:
+ Executing Restore Assistant - 12/18/14 11:04 PM at 12/18/14 (Activity Thread 1)
12/18/14 11:09:29 PM: Connected to Hope-temp
To volume EIMS3 on Hope-temp...
- 12/18/14 11:09:29 PM: Restoring from v9_daily_eims, Snapshot Hope, 12/18/14 12:02:39 AM
State information not restored, can't access Snapshot
12/19/14 4:37:18 AM: 1 execution errors
Completed: 2303 files, 8.4 GB
Performance: 26.1 MB/minute
Duration: 05:27:49 (00:00:16 idle/loading/preparing)
It looks like the backup created a new snapshot and "hid" the one being used by the restore, so when the restore went to set permissions at the end, the snapshot was... gone! This seems to me to be a bug: Retrospect should not allow a restore and a backup to run concurrently if that does not work reliably. Retrospect 10.5 engine on Mac OS X 10.8.5; client on Mac OS X 10.8.5 with the Retrospect client.
  16. I have read many times about the algorithm used with proactive backups, how it puts the clients that have never been backed up at the top, and how it is supposed to optimize the proactive backup procedure. I have to assume this works well for other users, but it does not work well for us. We have a lot of clients in our database, and it takes a long time for the proactive backup to get through the entire list. I like that it seems to sense when a client connects and may start that backup right away, but I would really like the option to just let the process go from never-backed-up to most recently backed up, without trying over and over again to back up machines that are not in the office. This could be offered as an option: use the algorithm, or just go from top to bottom of the list. Also, even when a client is set to be deferred, it is not skipped over immediately; Retrospect seems to poll the client to see if it is there instead of just skipping it. The same happens when it gets to the bottom of the list: it runs through clients that were already backed up that day instead of skipping them. I would be very willing to talk with the Retrospect designers about this. I like Retrospect and I think it is a good product, but I think it can become a great product. Thanks for all your work to make this product even better. Jeff
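The ordering this post asks for is easy to state precisely, which may help the discussion: one pass through the list with never-backed-up clients first, then clients ordered oldest backup first, with deferred clients dropped outright rather than polled. A sketch of that policy (data shape and names are mine, purely illustrative; this is not Retrospect's internal algorithm):

```python
def backup_order(clients):
    """Order clients for a single proactive pass.

    `clients` is a list of (name, last_backup, deferred) tuples, where
    last_backup is a timestamp or None for a client never backed up.
    Deferred clients are skipped outright; the rest are ordered with
    never-backed-up first, then oldest backup first.
    """
    eligible = [c for c in clients if not c[2]]
    ordered = sorted(eligible, key=lambda c: (c[1] is not None, c[1] or 0))
    return [c[0] for c in ordered]
```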
  17. Okay, here's my current dilemma... I've run into this issue and have tried to work around it as many ways as I can, but I somehow fall short each time. When trying to back up an external hard drive to tape, Retrospect has my tape mistaken for another tape. It started off like this:
1) Switched out a tape that was filled up pretty well.
2) Inserted the tape to back up to.
3) Created a new media set for that tape.
4) Made sure the script was set up so that I did not do verification.
5) First job backed up successfully.
6) Tried to back up subsequently, but it said "Needs Member Device."
7) Modified bindings by selecting a specific tape drive and tape, but it only backed up folders without their contents.
8) Resorted to restoring contents to an external hard drive, which worked.
9) Erased the tape COMPLETELY.
10) Shut down the Quantum drive.
11) Detached the Quantum drive.
12) Shut down the iMac.
13) Turned on the Quantum drive.
14) Turned on the iMac.
15) Started over with the Media Set --> Script --> Backup process.
16) Successfully backed up files until I switched out external hard drives --> "Media Set Already In Use" error message.
I have completed steps 8-16 a few times, but have decided that this isn't working very well. Here are some pictures as well.
  18. To that one superhero out there who knows Retrospect inside and out, here's the issue I'm dealing with... I try to back up files from a hard drive to tape, and for whatever reason the only things being saved right now are the folders themselves, without the content in the folders. Anyone out there know why this is happening?
  19. Retrospect 8 for Mac is not completing a backup, giving the following error: "Can't save catalog, error -108 (not enough memory)". It was working fine and nothing changed, but for the last 2 weeks it has been giving me this error on the weekly full backup; the daily backup works. The tape library is just fine and has enough empty tapes in it. The Mac server (10.4) has 8 GB RAM and nothing else running. Total volume to back up is 1.2 TB. The backup script has 5 shared volumes to back up; it finishes the first 3 and then stops at number 4 with this error. The total number of files is about 2 million. As I said, it worked with all of this just fine, but it has now failed for the last 2 weekends. Any idea? Thanks
  20. I have a script to copy the most recent backups to an external hard drive. There is a media set for the drive, with one member: the external drive. The script is supposed to recycle the external drive media set before beginning the copy, my intention being that the external drive holds a complete, up-to-date copy of all backup sources without older versions of files taking up space. When the script runs, it goes straight to "Matching" and then "Copying", with no recycle step. Note: below it indicates the option "Match source Media Set to Destination Media Set" is checked; I have just tried running the script with this option unchecked, and it exhibits the same behavior. Old backups are still on the drive while new files are being created. Essentially, Retrospect doesn't appear to be respecting the "Media action: Recycle Media Set" setting. Is the problem that this is a disk media set rather than a tape drive? The script is set up as follows:
Summary
  Status: Scheduled
  Type: Copy Backup
  Rule: All files
  Options: Data compression off
  Backups (my main media set)
  Media sets (the External Drive Media Set)
  The Schedule detail shows the recycle icon
Sources
  Copy most recent backups for each source
  Main Media Set selected
Destinations
  External Drive Media Set selected
Rules
  All Files
Schedule
  Destination: External Drive selected
  Media Action: Recycle Media Set
  Start: 11 PM
  Repeat: Weekly, every [1] week on [Friday]
  Stop: when done
Options
  Copy Backup
  Media Verification checked
  Match source Media Set to Destination Media Set checked
  Don't add duplicates to the Media Set checked
  21. (Environment: Retrospect 10.2.0 (201) running on Mac OS X 10.8.x on a 4-core Mac mini.) I am following a sort of "manual grooming" path with my backups. I keep a couple of "current" sets, copy specific backups off to a separate media set, and copy that off to tape. For instance, I have "myset" (disk) and "myset_2013" (file). I run the backups weekly to "myset" and then use a "copy backup" to copy just one backup per month to "myset_2013". Sometimes "myset" has several months of backups, so I do a "copy backup" operation with "copy selected backups" to copy just the backups I want to the "2013" set. In the past, this seems to have worked. Yesterday, I tried to do this and ran into a problem selecting more than one backup on a media set for retrieval (you can't copy the sets unless they are "retrieved"). Today, I had a more alarming result. I did the "copy backup" operation and it resulted in a media set that appeared to contain no backups. I verified it: OK. I "repaired" it: OK. I "rebuilt" it: THAT restored the backups in the output media set. This means three things to me:
1. The copy backup operation can sometimes produce corrupted output. Note that the log for this copy operation did not indicate any errors, but the "yellow triangle" DID appear on the activity when it was done.
2. The "verify" operation does not verify an important aspect of media set integrity.
3. A rebuild operation will rescue the media set.
  22. I change my backup data sets either monthly or quarterly. I have seven different backup sets, and for each one I need to manually remove the "old" backup set name and insert the "new" backup set name. Are there any tools, even from third parties, that could automate this process? Ideally such a tool would also create the new backup sets. Thanks.
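I don't know of a tool that drives Retrospect's set creation directly, but the naming half of the chore is easy to script: generate the seven dated names for the new period and work from that list when creating and swapping sets by hand. A sketch (function and prefix names are mine; this only produces names, it does not touch Retrospect):

```python
import datetime

def set_names(prefixes, when=None, quarterly=False):
    """Generate dated backup-set names, e.g. 'Finance-2013-Q1' or 'Finance-2013-03'.

    `prefixes` is the list of base set names; `when` defaults to today.
    """
    when = when or datetime.date.today()
    if quarterly:
        suffix = "%d-Q%d" % (when.year, (when.month - 1) // 3 + 1)
    else:
        suffix = "%d-%02d" % (when.year, when.month)
    return ["%s-%s" % (p, suffix) for p in prefixes]
```

Printing the list at the start of each month or quarter at least removes the error-prone step of typing seven new names by hand.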
  23. Using v10 now. I have seen this behavior several times, on a "v9" client (9.0.2 (102), Mac OS X 10.6) and now on a 6.x client on Mac OS X 10.4. Start with an empty media set and a client (source) that has not been backed up. Use a favorite with some content on the source, like /MacintoshHD/Users/. Do a straight, no-media-action backup of that source/favorite. You can run it repeatedly and it will copy no files, completing "successfully"; each time it only runs a few seconds. It doesn't even try. I just switched to backing up /MacintoshHD instead, and it's scanning. I'll try backing up the favorite and see if it at least scans the directory before declaring victory and quitting. This appears to be new in v10.x, but I don't think I ever tried backing up a favorite of something that had not been backed up before to an empty media set with the v9 engine.
  24. I have been running Retrospect 6.5 for years, successfully executing backups of my work product folder on my system hard drive to different external hard drives. I do this manually, without scheduling; a very simple and straightforward process. In Retrospect the script is described as a duplicate, but in fact it would only update changed or new files, EXACTLY what I wanted to accomplish. It would only take minutes to execute. When the process ran, it showed up in the Retrospect execution window as a "folder for folder" copying process, not one with the special Retrospect backup files or icons, just standard folder icons. In the last week or so, the process Retrospect executes has changed: it now duplicates the entire folder (about 23 GB). This is killing time and resources, and I am at a loss how to edit the script back to its prior behavior. I pasted in below a log excerpt from one of the short-run copy sessions, and below that one from the new long-run process. To me the scripts appear identical, but something has changed and I can't identify it or figure out how to change it. To make matters more complicated, I have a complete history of files on the external hard drive (93 GB or so) and only relevant or current files still on the system hard drive, and the complete backup volume is now too large to copy back to my system hard drive to create a new backup file set from scratch; I simply do not have a big enough hard drive. Can someone help me PLEASE get back to where I was? I would so appreciate it!!
WHAT I WANT
-----------------------------------------------------------------------------------------
+ Duplicate using W3D Passport J at 1/7/2013 5:35 PM
To volume W3D on Passport 2 (J:)...
- 1/7/2013 5:35:03 PM: Copying W3D on DATA (D:)
1/7/2013 5:37:11 PM: Comparing W3D on Passport 2 (J:)
1/7/2013 5:37:18 PM: Execution completed successfully
Completed: 71 files, 329.4 MB
Performance: 323.9 MB/minute (171.8 copy, 3293.2 compare)
Duration: 00:02:14 (00:00:12 idle/loading/preparing)
WHAT I'M GETTING
-----------------------------------------------------------------------------------------
+ Retrospect Express version 6.5.342
Launched at 2/4/2013 8:00 AM
+ Retrospect Driver Update, version 4.7.103
+ Duplicate using W3D Passport J at 2/4/2013 8:19 AM
To volume W3D on Passport 2 (J:)...
- 2/4/2013 8:19:31 AM: Copying w3d on DATA (D:)
2/4/2013 9:03:37 AM: Comparing W3D on Passport 2 (J:)
2/4/2013 9:40:41 AM: Execution completed successfully
Completed: 13085 files, 23.5 GB
Performance: 595.2 MB/minute (550.7 copy, 647.8 compare)
Duration: 01:21:09 (00:00:31 idle/loading/preparing)
  25. I upgraded from 9.0 to 10.0.1 (105) yesterday. Most things seem to be OK, except for a case where a volume said "no files need to be copied" the first time it was run (a subsequent run worked fine)... and this. I upgraded this client via an updater from the console, with the engine pushing the update out to the client. It seemed to be transparent and worked fine, and a few things seem to work better now in the client. The client is running Mac OS X 10.6.8 (recently upgraded from 10.5). This is the first backup on this client since the upgrade of the engine to 10.0.1 and the client software to 10.0.0 (174). The following log entries tell the tale:
+ Normal backup using daily_user at 1/9/13 (Activity Thread 1)
To Backup Set v9_daily_user...
- 1/9/13 1:33:49 PM: Copying Users on Witsend
Using Instant Scan
MacStateRem::msttDoBackup: VolDirGetMeta failed err -516
!Can't read state information, error -516 ( illegal request)
1/9/13 1:34:29 PM: Execution incomplete
Completed: 1976 files, 171.3 MB
Performance: 570.9 MB/minute
Duration: 00:00:37 (00:00:18 idle/loading/preparing)
+ Normal backup using daily_user at 1/9/13 (Activity Thread 1)
To Backup Set v9_daily_user...
- 1/9/13 2:03:58 PM: Copying Users on Witsend
Using Instant Scan
MacStateRem::msttDoBackup: VolDirGetMeta failed err -516
!Can't read state information, error -516 ( illegal request)
1/9/13 2:04:22 PM: Execution incomplete
Completed: 17 files, 30.2 MB
Performance: 604.1 MB/minute
Duration: 00:00:21 (00:00:17 idle/loading/preparing)
Is there a bug here, or user error? If a bug, what can I do to help track it down?