
Joriz

Members
  • Content count

    9
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Joriz

  • Rank
    Newbie

Profile Information

  • Location
    Belgium


  1. Thanks David, Nigel; I currently have the disk to disk backups running with grooming set to 4 backups, and I have also created a grooming script that runs before the disk to tape backup starts. One thing I still need to investigate is the disk to disk backup of the Veeam backup files. The churn rate is higher than expected because Retrospect is copying all the data instead of doing an incremental during the weekday disk to disk backups. Veeam probably modifies the incremental backup files of previous backups as well, so Retrospect sees them as new files and includes them in the incremental backup. I have to find a solution for that. I will also do some tests with Copy Backup scripts and Copy Media Set scripts later. I can't go into the office right now as I'm ill (non-COVID).
  2. What kind of hardware are you actually using? A Mac Pro or a Mac mini (with Thunderbolt to SAS)? To back up the catalog files I use an Amazon S3 bucket (a minimal upload sketch is included after this list).
  3. You are right: if grooming does its job well, the outcome would be similar, and using Copy Media Set would mean fewer scripts. As far as I understand, a Copy Backup script gives you more backup options for the selected media set (copy most recent, copy selected, copy all), while a Copy Media Set script only lets you select the media set.
  4. I will test using tar once the system is available again and post the results. Something I forgot to mention is that I changed the RAID configuration on the enclosure from RAID 6 to RAID 10. There was no improvement, so I set it back to RAID 6; the Mac Pro is probably the cause then. Yesterday I also ran a D2D and a D2T script at the same time. For the P2000 enclosure this means writing and reading simultaneously, something a 7200 RPM mechanical HDD doesn't like, yet it had no impact at all on the D2D/D2T speed: the D2D ran at 4.5 GB/min (the source was an SMB share over a 1 Gbit link, so that speed is fine) and the D2T at 6.6 GB/min. I like the idea of the Copy Backup scripts during the week, so I have to change the Copy Media Set scripts to Copy Backup scripts. For the Copy Backup scripts I have to create a script for each D2D media set, because I can only select one source, while multiple sources are possible with Copy Media Set scripts. When I put this in a schema I get something like this, starting from May 1st. It's fine David, no worries; I must have made a calculation error then. The disk to tape scripts are Copy Media Set scripts whose sources are the disk sets (3). I appreciate your effort, thanks.
  5. I think we can exclude the disks/filesystem on the NAS as the cause. They all run ZFS, which is very reliable; ZFS is very sensitive to read and write errors on the disks, so it would raise alerts if disks were faulty. A data scrub and a SMART test are scheduled twice a month; a data scrub reads back all the data and validates that it still matches the checksums. I'm using Zabbix for monitoring the network and servers, and the Mac Pro running Retrospect and the tape library are included in the monitoring. Zabbix didn't report any major issues with the network, for example switch ports going up/down or high error rates on the switch ports, although I sometimes see spikes in the response time/ICMP loss to the Retrospect server. The Retrospect server is not in the same room/switch as the NAS. ACTION: I'm going to move the Retrospect server to the same room/switch as the NAS.
The files are present in the first backup and can be restored, so it looks fine, but it is weird that Retrospect reports this error. This is the number I read during the write phase of a disk to tape backup. NAS 1 and 2 hold all kinds of data: office files, raw audio, video and picture files. NAS 3 stores VMware backup files made by Veeam (with compression). I'm not using software compression in Retrospect, while the tape library has hardware compression enabled. I'll check the options of the storage enclosure to see if something can be optimised. I know what "tar" is, but I don't exactly understand what you mean by a speed test using tar. Do you want me to create a large tar file on the volume and check the speeds? (A possible approach is sketched after this list.) During weekdays I run the disk to disk backups and during the weekend I run the offsite backup. The offsite backup copies the most recent backup of the week; it is too slow to run every weekday and I only have 3 tape sets available. The option "Use attribute modification date when matching" is enabled, while "Match only files in same location/path" is not.
I understand it might be better to start from scratch and that the details you are asking about (capacity, churn, retention, ...) are important for giving advice. The thing is, I'm very limited in budget; it is actually zero. My boss wasn't using any backup strategy with offsite backups at all in the past. He doesn't care, but luckily I do. The only backup in place was based on file replication between multiple FreeNAS servers, onsite, all in the same rack (yes, I know...). What I'm trying to do is implement a 3-2-1 backup strategy with the limited budget (= 0) I have. So I worked the opposite way and asked myself: what equipment do I currently have (storage, tapes, backup software, backup server, tape library) and how can I use it in the most efficient way to implement a 3-2-1 backup strategy without additional costs? I totally agree with your way of working; every project should start with defining the needs, but sadly that doesn't work here. About 8 months ago I tried to implement a backup strategy that way but I failed: I tried to back up the data directly from the fileservers to tape over the network, which didn't work well. Recently a friend of mine was decommissioning old storage enclosures and gave me two HP P2000 RAID enclosures with disks for free. Because of the COVID outbreak I had more time available, so I decided to mount the enclosures in the server rack and stress test them for a week. One enclosure is now connected to the Retrospect server; the other one I kept for spare parts.
After reading manuals and YouTube tutorials about D2D2T in Retrospect I was able to set up the configuration I described in my first post, which is probably not the best solution... What I currently have is:
- 1x 30 TB storage enclosure connected to Retrospect
- 6x LTO7 tapes
- 3 FreeNAS fileservers to back up
Fileserver 1: total disk capacity 8 TB, used 2.42 TiB. File types: office files, raw audio, video and picture files. Total data change each day is max. 10 GB.
Fileserver 2: total disk capacity 8 TB, used 5.81 TiB. File types: office files, raw audio, video and picture files. Total data change each day is max. 20 GB.
Both fileservers replicate their data and have snapshots available with a 2-week retention.
Fileserver 3: total disk capacity 8 TB, used 0.73 TiB. File types: Veeam VMware backup files (full and incremental, with compression). Total data change each day is max. 50 GB. Veeam uses a 10-restore-point retention.
Because I have 6 tapes available, I created 3 tape sets with 2 tapes each. With 3 tape sets I can span a maximum of 2 weeks retention for the offsite backup. Thanks, I changed the option to storage-optimized and initiated a groom. Given the limited number of available tapes, a restore from a week ago is fine. We had no offsite backup before and my boss still doesn't care. I'm sorry for my English. This was a typo, I'm sorry: my Retrospect version is 16.6.0 (114). I don't have enough tapes and time to run a D2D and a D2T every day.
  6. Thanks Nigel. I understand that what I have explained is complex, so let me try to describe what I want. The goal is to have offsite backups on tape: in case of a disaster, I need to restore the most recent data of the 3 fileservers. I have 3 tape sets, each with 2 LTO7 members, which I want to rotate every week. Sources: 3 fileservers sharing files over SMB. Destinations: disk media sets for the D2D backup, tape media sets for the D2T backup. The disk to disk backup can run every night during the week in case a restore from disk is needed. The disk to tape backup runs in the weekend because it needs more time. On Monday morning I remove the tapes from the library to store them offsite. The tape sets have 2x LTO7 tapes each, which should be enough for at least one backup of the 3 fileservers.
  7. I'm trying to achieve an automated backup setup in Retrospect, but I feel the setup needs some optimisation to run more error-free. Please share your findings/recommendations. I'm running Retrospect version 13. The purpose is a disk to disk to tape backup with 3 fileservers running FreeNAS as sources and 3 different tape sets which I want to take offsite every week. The disk to disk backup scripts can run every weekday during the night; the disk to tape backup scripts can run during the weekend. I have 3 tape sets with 2 members each, which gives me +/- 14 TB uncompressed in total for each tape set. The tape sets are named OFFSITE A, OFFSITE B and OFFSITE C. The idea is to rotate them every week, and each tape set should contain all the data of the week, so in case of a disaster on May 5th I can use tape set OFFSITE A to restore the data of that week. An example for May 2020: (a round-robin sketch of this rotation is included after this list).
Disk media sets: For each fileserver I made an individual disk media set: FILESERVER 1 DISKSET, FILESERVER 2 DISKSET and FILESERVER 3 DISKSET. The capacity of each disk set is 8 TB, stored on an HP SAS RAID enclosure running RAID 6 with a total usable capacity of 30 TB. The disk set grooming option is set to: no grooming.
Sources: Each fileserver has many file shares using the SMB protocol. Each file share is added as an SMB share with the Administrator account as source.
Scripts, disk to disk backup: For each fileserver I made an individual script. The script type is set to "Backup". In each script I selected the SMB shares of the corresponding server as source and the corresponding disk media set as destination: FILESERVER 1 backup to disk, FILESERVER 2 backup to disk, FILESERVER 3 backup to disk. The script schedule options are: Start: 10 PM, Repeat: weekly, Every: 1 week (on Mon, Tue, Wed, Thu, Fri).
Scripts, disk to tape backup: For the disk to tape backup I made an individual script for each tape set. The script type is set to "Copy Media Set": Backup to tape OFFSITE A, Backup to tape OFFSITE B, Backup to tape OFFSITE C. Each script has the disk media sets of all 3 servers as source and the corresponding tape media set as destination. For example, Backup to tape OFFSITE A has FILESERVER 1 DISKSET, FILESERVER 2 DISKSET and FILESERVER 3 DISKSET as sources and OFFSITE A as destination, and so on for each tape set. The script schedule options are: Start: 08:00 AM, Repeat: weekly, Every: 3 weeks (on Sat).
I have been running this configuration for a month now, but I think some additional configuration needs to be done. Some things I have noticed:
Growing disk media sets: The used disk space of each disk media set (disk to disk backup) keeps growing while the data on the source (the fileserver) does not. I have been reading about grooming, so I set the grooming options of each disk media set to keep 1 backup with "Performance-optimized grooming". After setting this option I started grooming, but the used disk space didn't drop. For example, the total used disk space on the source is around 1 TB while the disk media set shows 2 TB used. Because the disk media sets grow, 2 LTO7 tapes are not enough as destination, while the data should fit uncompressed: Fileserver 1: 2.42 TiB used, Fileserver 2: 5.81 TiB used, Fileserver 3: 0.73 TiB used (a quick capacity check is sketched after this list).
Tape recycle: Should I recycle the tape set every week before doing the disk to tape backup, to prevent Retrospect from asking for new media? That would mean a full backup every weekend, which can take a lot of time and cause tape wear. What is recommended?
Errors during disk to disk backup: I often see errors during disk to disk backups for a list of files. The files are not open by a user when this happens, and the Administrator account is used to read the data, so I would not expect a permission issue. Often when I manually run the same script after the error occurred, it runs error-free: can't read, error -1.100 (invalid handle); can't read, error -1.101 (file/directory not found). The script ends with a big red mark, so I'm not sure whether the files were copied correctly, and I don't want to find out when a disaster happens. The script doesn't re-run automatically when an error occurs, for example.
Disk to tape speed: The disk to tape backup speed is around 6-6.5 GB/min. I'm wondering if this is normal and, if not, what kind of optimisations can be implemented (see the throughput sketch after this list). The file sizes are mixed, but even with large files (200-300 GB per file) the speed does not go up. The data sets are stored on an HP P2000 SAS RAID enclosure; with the Blackmagic Disk Speed Test it shows 380 MB/s read and 213 MB/s write. The tape library is a Neo T24 SAS LTO7 with HP LTO7 tapes, and the SAS HBA is an ATTO H680; the tape library and the HP enclosure are each connected to an individual SAS port. Restore speeds are very fast, around 10-11 GB/min. The Mac Pro is old but does not show any high CPU/memory load during backup: Mac Pro Early 2008 / El Capitan / 2 x 2.8 GHz Quad Xeon / 8 GB memory.
  8. Thanks for the replies. It's correct that I'm located in Europe; my English is not perfect but I'll do my best. I currently have a backup job running which I don't want to cancel, so I can't answer the troubleshooting-related replies right now. About the age of the Mac Pro machine: I know it's old, but it should still be able to back up data to an LTO drive; that is not a very demanding thing to do. When a job is running the CPU load is below 20% and the machine is not running out of memory, as it is only used for Retrospect. I would prefer to use the Retrospect client as well, but in my case that is not possible because the backup sources are FreeNAS/FreeBSD or HP based storage machines. I have one Windows 10 machine running that uses the Retrospect client to write the Veeam backup files to tape; this is the machine/source where the backup job asked for a new tape after only 100 GB of copied data. The LTO drive make/model we use is an Overland Neo T24, a 24-slot library with an LTO7 drive. The library is connected to the Mac Pro with a SAS cable. The HBA in the Mac Pro is an ATTO H680 6 Gb/s SAS interface, running the latest firmware/drivers. The library is 12 months old and in Retrospect I have a cleaning slot configured. As far as I can remember, Retrospect initiated 3 cleaning jobs during the last 12 months.
  9. Hello all, I have been working for some time for a company that uses Retrospect for Mac to back up projects. I have worked with other backup software before, like Veeam and Symantec Backup Exec, which worked great, while Retrospect is pulling my hair out... I have so many issues with this software and I don't know why; it all seems very illogical to me compared with software like Veeam and Backup Exec. If I could switch to other software I would do that for sure... Issues: I back up all kinds of servers. They all share their data over the SMB protocol, and the shares are added as sources in Retrospect. Backing up data from one server works fine but it's so slow, around 2.3 GB/min, while a manual file transfer from the same source saturates the 1 Gbit network card... The library is an LTO7 connected over a SAS interface. Why is it so slow? While running a backup of a FreeNAS server over SMB, the backup job often just fails (error: Execution incomplete, !Can't read state information, error -1.101 (file/directory not found)). Why? Retrospect asks for new media while it doesn't have to. For example, I'm backing up a project of around 21 TB and it asked me for a 5th tape today, while an LTO7 tape holds around 14 TB compressed and around 6 TB native. Why? A similar issue occurs when running an offsite backup to tape: Retrospect asked for a new LTO7 tape after copying only 100 GB of data, while the complete backup was around 4 TB. Why? I'm trying to automate my offsite backups during the weekend, so I made scheduled jobs for that, but it doesn't look possible to simply overwrite the tape every time. How do I do this? The Retrospect version I'm running is 16.1.2. The machine specs are: OS X El Capitan version 10.11.6, Mac Pro Early 2008, CPU 2 x 2.8 GHz quad-core Xeon, 8 GB memory, 2 x 2 TB of storage.
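
A quick way to sanity-check the transfer speeds quoted in the posts above (the GB/min figures from the D2D, D2T and slow-server discussions) is to convert them to MB/s and compare them against the rough limits of a 1 Gbit/s link and an LTO7 drive. This is only a sketch: the ~125 MB/s link limit and ~300 MB/s LTO7 native rate are ballpark spec figures, not measurements from this setup.

    # Rough throughput sanity check for the speeds quoted above (Python 3).
    # Assumptions: LTO7 native drive rate ~300 MB/s, 1 Gbit/s link ~125 MB/s,
    # and 1 GB = 1000 MB. Spec-sheet ballparks, not measured values.

    def gb_per_min_to_mb_per_s(gb_per_min: float) -> float:
        """Convert a Retrospect-style GB/min figure to MB/s."""
        return gb_per_min * 1000.0 / 60.0

    quoted = {
        "D2D over 1 Gbit SMB": 4.5,   # GB/min, from post 4
        "D2T to LTO7":         6.5,   # GB/min, from post 7
        "Slow server backup":  2.3,   # GB/min, from post 9
    }

    LINK_LIMIT_MBS = 125.0    # theoretical maximum of a 1 Gbit/s link
    LTO7_NATIVE_MBS = 300.0   # approximate LTO7 native streaming rate

    for name, gbmin in quoted.items():
        mbs = gb_per_min_to_mb_per_s(gbmin)
        print(f"{name}: {gbmin} GB/min = {mbs:.0f} MB/s")

    # 4.5 GB/min = 75 MB/s, a reasonable fraction of the 125 MB/s link limit,
    # while 6.5 GB/min = ~108 MB/s is well below what an LTO7 drive can stream,
    # so the drive is likely waiting on the source rather than being the bottleneck.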
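
For the "speed test using tar" question in post 5, one possible reading is: archive the data to nowhere and time it, which measures pure sequential read throughput of the volume without writing anything. A minimal sketch, assuming a placeholder path (/Volumes/Diskset1 is not the real media set location):

    # Minimal sketch of a tar-based read-speed test (Python 3, macOS).
    # /Volumes/Diskset1 is a hypothetical path, not the actual media set location.
    import subprocess
    import time

    SRC = "/Volumes/Diskset1"

    start = time.time()
    # Streaming the archive to a discarded stdout measures sequential read only.
    subprocess.run(["tar", "-cf", "-", SRC], stdout=subprocess.DEVNULL, check=True)
    elapsed = time.time() - start

    # du -sk reports the size in kilobytes; convert to MB for a MB/s figure.
    size_kb = int(subprocess.check_output(["du", "-sk", SRC]).decode().split()[0])
    print(f"Read {size_kb / 1024:.0f} MB in {elapsed:.0f} s "
          f"= {size_kb / 1024 / elapsed:.1f} MB/s")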
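
For the question in post 7 about whether the three fileservers fit uncompressed on a two-tape set, a quick arithmetic check, assuming ~6 TB native per LTO7 tape and the used sizes quoted in the posts (TiB converted to TB):

    # Quick capacity check: do the three fileservers fit on one 2-tape LTO7 set?
    # Assumes ~6 TB native per LTO7 tape and no compression.
    TIB = 1024**4
    TB = 1000**4

    used_tib = {"Fileserver 1": 2.42, "Fileserver 2": 5.81, "Fileserver 3": 0.73}
    total_tb = sum(used_tib.values()) * TIB / TB   # about 9.85 TB
    tapeset_tb = 2 * 6.0                           # two LTO7 tapes, native

    print(f"Data: {total_tb:.2f} TB, tape set: {tapeset_tb:.0f} TB native")
    # A single copy (~9.9 TB) fits on 12 TB native, but only if each disk media
    # set holds roughly one groomed backup; if the media sets keep growing, it won't.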
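
The weekly OFFSITE A/B/C rotation described in post 7 (three Copy Media Set scripts, each repeating every 3 weeks on Saturday, offset by one week) can be written as a simple round-robin over Saturdays. The sketch below only illustrates the pattern for May 2020; starting with OFFSITE A on the first Saturday is an assumption that matches the "disaster on May 5th, restore from OFFSITE A" example:

    # Sketch of the 3-week tape set rotation over the Saturdays of May 2020.
    # Illustrative only; the real schedules live in the Retrospect scripts.
    from datetime import date, timedelta

    tapesets = ["OFFSITE A", "OFFSITE B", "OFFSITE C"]

    saturday = date(2020, 5, 2)   # first Saturday of May 2020
    for week in range(5):
        print(saturday, "->", tapesets[week % len(tapesets)])
        saturday += timedelta(weeks=1)

    # 2020-05-02 -> OFFSITE A, 2020-05-09 -> OFFSITE B, 2020-05-16 -> OFFSITE C,
    # 2020-05-23 -> OFFSITE A, 2020-05-30 -> OFFSITE B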
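
For the catalog files that go to an Amazon S3 bucket (post 2), a minimal upload sketch using boto3. The bucket name, the catalog directory and the .rbc extension are placeholders and assumptions, not details confirmed in the posts:

    # Minimal sketch: copy Retrospect catalog files to an S3 bucket with boto3.
    # "my-retrospect-catalogs", the directory and the *.rbc pattern are placeholders.
    import pathlib
    import boto3

    CATALOG_DIR = pathlib.Path("/Library/Application Support/Retrospect/Catalogs")
    BUCKET = "my-retrospect-catalogs"

    s3 = boto3.client("s3")
    for cat in CATALOG_DIR.glob("*.rbc"):
        # The key keeps the file name, so each run overwrites the previous copy.
        s3.upload_file(str(cat), BUCKET, cat.name)
        print("uploaded", cat.name)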