
Adam Ainsworth

Members
  • Content count: 8
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Adam Ainsworth
  • Rank: Newbie
  1. It's been a week since the problem, and this backup set has been absolutely fine, so I'm hoping it was just a blip. I'm going to see if I can reduce the size of our backups, as it appears that we're within 2 TB of running out of space on three tapes, and that will be by far the easiest course of immediate action. I need to look at our VM solutions as a whole, and not just the backup side of things, so that will have to wait for the time being. The machines between them only take up around 15% of the backup size (HD video and massive PSDs seem to be the main culprits). Thanks again to everyone for the advice; I certainly know a great deal more than I did this time last week!
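
     For my own reference, a rough sketch of the sort of script I might point at the RAID to see which files dominate the backup (the mount point and the top-N count are placeholders, not our real setup):

        #!/usr/bin/env python3
        # Rough sketch: list the largest files under a tree.
        # The root path and TOP_N are placeholders, not our real setup.
        import heapq
        import os

        ROOT = "/Volumes/RAID"   # placeholder mount point
        TOP_N = 50

        largest = []  # min-heap of (size, path)
        for dirpath, dirnames, filenames in os.walk(ROOT):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    size = os.path.getsize(path)
                except OSError:
                    continue  # unreadable file, skip it
                if len(largest) < TOP_N:
                    heapq.heappush(largest, (size, path))
                elif size > largest[0][0]:
                    heapq.heapreplace(largest, (size, path))

        for size, path in sorted(largest, reverse=True):
            print(f"{size / 1e9:8.2f} GB  {path}")

     Nothing clever - it just walks the tree and keeps the fifty biggest files it finds, which should show whether the video or the PSDs are the better target.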
  2. It's a NEO series of some description - I can't get round the back to see. I'm not sure of its age, but it hasn't even been turned off in nearly three years. The connection is via fibre though, and I'm guessing that some kind of interruption caused the problem. Given that it was never mentioned to me by my predecessor, it had either never happened before or was very rare.

     The hours-of-use counter is at zero, and there is a light on the front, so I guess cleaning has always been done manually. I'm not sure about messing around with the config right now, as I have a few deadlines and Easter approaching, so I don't want to rock the boat too much. The same goes for getting it on the network - I don't know how practical that would be or whether it would be worth it.

     Something I hadn't mentioned is that most of the code is on virtual servers with virtual disks (split into chunks). So, while the number of files isn't important, it does mean that only one file needs to change for the entire chunk to be backed up, and it also means that duplicate files take up space, as do all the node_modules folders etc. It's not an ideal situation, and sorting it out (along with archiving old and ex-customer sites) has been on a to-do list for some time. As with everything, time is the problem, as there tends to be customer work to do before internal housekeeping.

     It has - thank you. I am grateful to everyone who has contributed to this thread, as I now have a better understanding of our system, a good idea of what went wrong, and it's made me think about what we ought to do to make things more efficient. I've also been given a helpful warning that we don't have as much space as I thought.
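
     To put a number on the node_modules problem, a rough sketch like this (run inside each VM against the code root - the path is a placeholder) would total up how much of that space is just dependency folders:

        #!/usr/bin/env python3
        # Rough sketch: total the space taken by node_modules folders.
        # CODE_ROOT is a placeholder; run this inside each VM, not against
        # the virtual disk chunks themselves.
        import os

        CODE_ROOT = "/var/www"  # placeholder code root

        total_bytes = 0
        folder_count = 0
        for dirpath, dirnames, filenames in os.walk(CODE_ROOT):
            if os.path.basename(dirpath) == "node_modules":
                folder_count += 1
                for sub, _dirs, files in os.walk(dirpath):
                    for name in files:
                        try:
                            total_bytes += os.path.getsize(os.path.join(sub, name))
                        except OSError:
                            pass  # broken symlink etc.
                dirnames[:] = []  # don't descend any further from here

        print(f"{folder_count} node_modules folders, {total_bytes / 1e9:.1f} GB")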
  3. I'm not sure what code bases you work on, but the WordPress install alone is 11.5 MB compressed (https://en-gb.wordpress.org/download/), and we have hundreds of them. When you add in themes and plugins, uploaded assets, mature DB dumps, and other paraphernalia, a good proportion of our client sites come to hundreds of MB. They all need to be backed up in their entirety, because if the worst happens, we need them back as quickly as possible.
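
     As a back-of-the-envelope illustration only (round numbers, not our actual counts), the code alone adds up quickly:

        # Back-of-the-envelope only: round numbers, not our real counts.
        sites = 300          # "hundreds" of WordPress installs
        avg_site_mb = 250    # a good proportion run to hundreds of MB each
        print(f"~{sites * avg_site_mb / 1000:.0f} GB of client sites "
              f"before the video and PSD assets")   # ~75 GB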
  4. Apologies - work is getting on top of me this week. Thank you for your replies.

     Correct, it is an LTO-6 drive (an IBM Ultrium HH6, as you noted). The drive and the RAID are both connected to the file server, and the tapes are managed by an eight-tape library, which is also attached directly to the server. I've attached a screenshot from Retrospect, and more info on the drive can be found at https://www.ibm.com/uk-en/marketplace/ibm-lto-ultrium-6-data-cartridge

     It appears that the drive isn't set up for automatic cleaning, but I can find no record of a cleaning cycle in the logs, and I've not been asked by Retrospect to do one in the time I've been here (two months).

     I had been fooled by the spare capacity, and you are indeed correct that we are running out of space. A great deal of the data is video and large image files, although there is also a lot of code and SQL, which I would expect to compress somewhat. However, I will probably need to look into either reducing the size of the backup or finding another solution (which I have been thinking about anyway, as it is a bit of a faff every Monday morning, and this isn't the only backup process we have).
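
     Having read up on the cartridges, my back-of-the-envelope check looks roughly like this (my assumptions: the LTO-6 native capacity from the spec, and the ~1% compression Retrospect reports on our data):

        # Back-of-the-envelope capacity check.  Assumptions: LTO-6 native
        # capacity of 2.5 TB per cartridge (from the spec), and the ~1%
        # compression Retrospect reports on our mostly-video data.
        native_per_tape_tb = 2.5
        tapes_per_set = 3
        compression = 1.01
        backup_tb = 5.6   # what the last scan found on the RAID

        set_capacity_tb = native_per_tape_tb * tapes_per_set * compression
        print(f"set capacity ~{set_capacity_tb:.1f} TB, "
              f"backup ~{backup_tb} TB, "
              f"headroom ~{set_capacity_tb - backup_tb:.1f} TB")
        # -> set capacity ~7.6 TB, backup ~5.6 TB, headroom ~2.0 TB

     Which would leave only a couple of TB of headroom on a three-tape set, so the "running out of space" warning makes sense.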
  5. As in 'all three tapes are the same as each other'.
  6. You're absolutely right, it's LTO, not DLT, and they are all the same. And I agree about the capacity, having thought about it now, in which case it's doing exactly what it should be. I will have a proper read-up on it so that I know exactly what the state of play is. We can't practically go above three tapes per set, so if they are approaching capacity, I'll have to archive some of the data on the RAID. Anyway, it looks like it probably was just an error on the third tape, and an isolated incident. I guess I'll find out in four weeks' time! If it does go wrong again, I'll replace the tapes. Thank you so much for your help.
  7. Hi, thanks very much for the reply, and apologies for the delay in responding. The only error in the log is the one I described, where the tape is deemed full and it requests a fourth:

        -  04/04/2019 19:32:50: Copying RAID
           Using Instant Scan
           04/04/2019 19:33:00: Found: 1683234 files, 187799 folders, 5.6 TB
           04/04/2019 19:33:12: Finished matching
           04/04/2019 19:33:22: Copying: 1361 files (26.9 GB) and 0 hard links
           stucFinished: [IBM|ULTRIUM-HH6|E6R3] incorrect scsiServiceResponse 0x1, scsiStatus 0x2
           stucFinished: [0|0|0] transaction result 0x6
           xopWrite: trouble writing, error -102 (trouble communicating)
           xopWrite: trouble writing, error -102 (trouble communicating)
           xopFlush: flush failed, error -102 (trouble communicating)
           !Trouble writing: "3-WeekThree" (3771793408), error -102 (trouble communicating)
           !Trouble writing media: "3-WeekThree" error -102 (trouble communicating)
           Media request for "4-WeekThree" timed out after waiting .
           04/04/2019 20:42:41: Execution incomplete
           Remaining: 977 files, 19.2 GB
           Completed: 384 files, 7.8 GB, with 1% compression
           Performance: 2,194.6 MB/minute
           Duration: 01:09:51 (01:06:14 idle/loading/preparing)
           04/04/2019 20:43:06: Execution incomplete
           Total performance: 1,579.9 MB/minute with 1% compression
           Total duration: 01:12:41 (01:07:33 idle/loading/preparing)

        +  Normal backup using WeekFour at 05/04/2019, 19:00:00 (Activity Thread 1)
           To Backup Set WeekFour...
           05/04/2019 19:00:00: Recycle backup: The Backup Set was reset
        -  05/04/2019 19:00:00: Copying KerioConfigBackup on mailserve01
           Using Instant Scan
           05/04/2019 19:00:12: Found: 16 files, 1 folders, 24.4 GB
           05/04/2019 19:00:12: Finished matching
           05/04/2019 19:00:12: Copying: 16 files (24.4 GB) and 0 hard links
           05/04/2019 19:08:43: Building Snapshot...
           05/04/2019 19:08:44: Checking 1 folders for ACLs or extended attributes
           05/04/2019 19:08:44: Finished copying 1 folders with ACLs or extended attributes
           05/04/2019 19:08:44: Copying Snapshot: 2 files (202 KB)
           05/04/2019 19:08:49: Snapshot stored, 202 KB
           05/04/2019 19:08:49: Execution completed successfully
           Completed: 16 files, 24.4 GB
           Performance: 3,500.3 MB/minute
           Duration: 00:08:49 (00:01:41 idle/loading/preparing)

     The source of the backup is a RAID drive attached directly to the server, so there shouldn't be any issue with speed. If the space on tapes 1 and 2 is being wasted for some reason, is there a way of recovering it? We've now moved on to the next backup set, so these tapes will be out of rotation for the next couple of weeks.

     I've also just noticed that only 1 TB of data is on the tape, rather than ten, so is it possible that the tape is damaged and Retrospect stopped when it reached this error? Or could the RAID have disappeared, so it just wrote 9 TB of empty blocks?

     Thanks again, Adam
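
     For what it's worth, I'll keep an eye out for a repeat with a rough sketch like this, run against a plain-text export of the Retrospect log (the filename is a placeholder; the patterns are just the ones from the failed run above):

        #!/usr/bin/env python3
        # Rough sketch: flag the error patterns from the failed run in a
        # plain-text export of the Retrospect log.  The filename is a
        # placeholder; the patterns are copied from the log above.
        LOG_PATH = "operations_log.txt"  # placeholder export of the log

        PATTERNS = (
            "error -102",           # "trouble communicating"
            "!Trouble writing",     # media write failures
            "Execution incomplete",
            "Media request",        # waiting on a tape that never arrives
        )

        with open(LOG_PATH, errors="replace") as log:
            for lineno, line in enumerate(log, start=1):
                if any(p in line for p in PATTERNS):
                    print(f"{lineno}: {line.rstrip()}")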
  8. Hi all, I recently started a new job, taking over from someone else, and this is my first time using Retrospect. We are using v14.6.2 on Mac OS 10.11.6 (El Capitan).

     Our backup uses a single DLT drive with a tape loader that holds eight tapes. Each media set is three tapes, and it runs a full backup on a Friday, with daily incrementals up to the following Thursday, after which we switch over to the next set (which is preloaded in the tape loader). The recently finished set is then replaced with the next set. The data is pulled from a RAID that is directly attached to the server. Each tape has a nominal capacity of 10.3 TB, so the media set has about 31 TB of capacity. Our current weekly usage is around 6.5 TB.

     Until this morning, I hadn't looked at how this is distributed across tapes, but last night tape 3 became full and it asked for a new tape. I'm not sure what data changed on the drive to cause the massive incremental change (we are a media company, and it's not unusual for big files to be added), but I was concerned that the software chose to ask for another tape, wait an hour and then give up mid-backup, despite there being plenty of space on the previous two tapes. I have attached a screenshot of the media set.

     Is this just the way it works, or is my backup not configured correctly? Is there something I can do to the sets to 'defragment' them so that the other tapes are used to their full capacity? I have seen this and similar questions asked, with various suggestions proposed, and people asking about compression and tape errors. We haven't played around with the compression, there are no tape errors reported, and backup speeds seem fine. Buying another tape is not practical, because the magazine doesn't have space for four-tape sets and a cleaning tape.

     Any help that can be offered would be greatly appreciated. As I said, I'm new to the software, haven't dealt with tape backups in fifteen years, and am far too busy to spend too much time on something which really should be automated. I am happy to provide any logs or screenshots necessary to get this resolved.

     Many thanks in advance, Adam