Pete

Members
  • Content count

    14
  • Joined

  • Last visited

Community Reputation

1 Neutral

About Pete

  • Rank
    Occasional Forum Poster

Profile Information

  • Gender
    Not Telling
  1. OK, skip TeamViewer and just run a direct screen share via VLC or the like.
  2. Do we think that Retrospect will ever support Google Drive as a cloud destination? I presently have truly unlimited Google Drive storage space with G Suite Business at many of the sites we manage. Arq supports this as a destination, but I would prefer to keep this within Retrospect. Does anyone know the reason we haven't seen this as an option - or whether we may see it in the future?
  3. How exactly do you have the drive physically attached to your Mac mini? What interfaces and/or adapters are you using?
  4. Retrospect 10.5 & FileMaker 13

    Script FileMaker to stop the database, make a duplicate cold copy (in another directory), and then re-open the original database; then set Retrospect to back up that duplicate copy. (A rough shell sketch of this appears after this list.)
  5. Purely for entertainment value, I poked around for a bit, exploring the idea of getting a SCSI-based library attached to a Thunderbolt-equipped system. The "best" solution, of course, would be to upgrade to a modern library and host system, but that may not be reasonable cost-wise. Realistically, I would just buy a last-gen Mac Pro and be done with it. ATTO (for example) has SCSI drivers available for their line of host adapters, tested and supported through OS X 10.8.5. You could easily leave that machine on 10.8.5 for quite a while to run Retrospect, or experiment with 10.9 before placing it into production. Other ideas, albeit almost entirely academic - I wouldn't spend money on them, would only test if I had the items on hand, and even then wouldn't deploy them in a production environment without serious testing. Again, this is almost just "for fun":
    - Try a Sonnet Thunderbolt PCI expansion chassis with your SCSI card. None are listed as supported, but then again, they likely haven't even been tested.
    - Skip Thunderbolt altogether and do USB to SCSI with something like a Ratoc U2SCX-LVD USB 2.0 to Ultra Wide SCSI converter. This is going to be SLOW, as that's a USB 2 adapter.
    Again, this is just for fun. Don't *do* any of those. Buy a new host system, or even a new host and library.
  6. I never really considered stopping the engine with the admin application open. Just doesn't seem like an intuitive thing to do. Is there anything you are trying to do where you can't just quit the app before stopping the engine?
  7. Tangentially related... I'm in the process of implementing a new Fibre Channel-based LTO-6 tape loader, connecting it to the last generation of Xserve using an ATTO PCI card. Planning ahead, like you, for a new machine: I have confirmed that the ATTO ThunderLink Thunderbolt-to-Fibre Channel adapters are indeed supported by Retrospect, and I further spoke with ATTO to confirm they have a Thunderbolt 2 version brewing as well...
  8. OK - I'm chiming in here to report the exact same problem. G5 running 10.3.8 Server, mirrored 250 GB ATA drives. The user interface in Retrospect is SUPER sluggish, hogging machine cycles and slowing down the machine. The only answer we've come up with is to not have the app open during the day, which obviously sucks. So it really does sound related to RAID drives...
  9. Yes - we're seeing the exact same thing on one of our servers: a single-CPU G5 with 1 GB of RAM. It drags the system completely down when just sitting idle. ATTO UL4S SCSI card (latest driver), Overland Loader Express, DLT1. The only workaround we have right now is to not have the application open during the day... Anyone?
  10. Hi all- We have a mixed environment of Mac desktops and laptops, all currently running OS X 10.3.5. All of the laptops have AirPort cards. As you all know, laptop users cannot be nailed down and expected to leave their machines in the office at night for the standard nightly backup, so they will need a backup server. We have the laptops configured to automatically connect to the Ethernet network and the AirPort network, getting IP addresses via DHCP; no user intervention is needed. Ethernet is *higher* up in the list than AirPort and should take priority. We have no issues when the laptops are at the users' desks and plugged into the wired Ethernet network, and we have instructed users to plug into Ethernet whenever they are at their desks.
    HOWEVER... when users are roaming on the AirPort network and the backup server finds them, it completely saturates the AirPort network bandwidth and brings it to its knees in terms of throughput, almost completely blocking traffic for all other users. How are other people dealing with this? Under no circumstances can we have the backup server back up clients while they are connected to the wireless network. We are running (6) standard AirPort Extreme base stations with the latest firmware as of today's date.
    We're looking into blocking the Retrospect port on the AirPort interface of each individual laptop with the ipfirewall (ipfw) command (a rough sketch appears after this list), but it seems like others must have encountered this and may have a more global, easier, and more elegant fix than managing all of this on the client side. Thanks in advance for any insight...
  11. I'm designing a backup system that needs to be much closer to *near-line* than our current one, and thought I'd throw it out there for anyone who has done anything similar. We're currently running Retrospect 5 Workgroup on Mac OS 9.2.2 to back up approximately 30 client machines. This machine is also running AppleShare IP. The current backup is a standard full on Friday nights, with incrementals on all other days of the week. The backup device is an Overland Data DLT loader with 15 tapes per magazine. The amount of data currently being backed up is approximately 650 GB. The finished set of tapes is physically removed from the location every Friday morning. Some machines on the network are running Gigabit Ethernet on copper (maybe 30%), while the rest are running 100Base-T.
    We had a recent situation in which a very important client machine's hard drive bit the dust on a Friday morning at about 10 AM, right after the tapes had been removed from the premises. The tapes needed to be recalled, and they were still on the truck of the off-site data storage company. Long story short: when you need your data immediately, you need your data immediately. You can't wait for trucks and the hours required for full drive restores from DLT tape.
    Now I have been tasked with designing a redundant backup system to operate in parallel with the existing one, which hopefully will keep the data much more accessible than the "building-burns-down" system we have implemented right now. I need my backup data approaching near-line. SO... what's the best solution here? I can set up a separate backup server and back up the clients AGAIN to a simple hard disk, which would aid in speed of restores (that would be one massive drive array). As you can imagine, the backup window to complete a full backup starting on Friday night is quite long; it can run into Saturday afternoon or occasionally even Sunday, so running the two jobs back to back wouldn't work. Potentially, the full backups could run in tandem with a slight time shift - basically executing the same script with a few hours' offset between the two, so as not to step on one another. The idea of creating another backup system which would execute this same backup AGAIN seems silly, though. Intuitively, the data should be backed up once - to hard disk - and then locally from that disk to tape for off-site storage. That, of course, is one additional step in the backup process, and while unlikely, it adds additional potential for problems.
    On top of this, we don't want to risk losing an entire day's work, so the backup needs to occur during the day as well. How are other people managing this? Is there a way to lock out the user during lunch, for instance, while the mid-afternoon backup runs? Or the 10 AM... 1 PM... and 4 PM backups? If running this often, the amount of changed data would probably not be too large, minimizing the length of said lock-out. What about files and applications that are open and hot during this time? These ideas are off the top of my head without too much background thought. Are these even things Retrospect can do? Anyone with a completely different angle? The company this is for is (ahem) a bit demanding and will not stand for anything less than the best solution with the least possible amount of data loss. Thanks for any and all input. -P
  12. I second the last poster's recommendation of loaders by Overland Data. As a consultant I have implemented a variety of autoloaders from different companies and have had the fewest issues with those from Overland. They are very solid and are by far my recommended brand of choice. I do favor the DLT (or LTO, if budget permits) variants over AIT, however. DLT is a more robust, time-proven platform, which continues to be the medium of choice for the enterprise. -Peter
  13. Hi- I have Retrospect Workgroup 5 running on Mac OS X Server 10.2.1. Is there a way to run Retrospect in the background, as a service, so that I can log all users off of the server console? In testing, Retrospect will not launch or execute if my admin user is logged out. All other services on this machine run properly in the background. Thanks! -P
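
Regarding the FileMaker cold-copy approach in post 4 above: here is a minimal shell sketch of that sequence, assuming FileMaker Server 13 (so the fmsadmin command-line tool is available); the database name, admin credentials, and destination folder shown are placeholders to adapt.

    #!/bin/sh
    # Cold-copy a hosted FileMaker database before the Retrospect run.
    # Assumes FileMaker Server 13's fmsadmin CLI; names, credentials, and paths are placeholders.
    DB="MyDatabase.fmp12"                                  # hypothetical database name
    LIVE="/Library/FileMaker Server/Data/Databases/$DB"    # default FMS databases folder
    COLD="/Backups/FileMakerCold"                          # folder Retrospect is pointed at

    fmsadmin close "$DB" -y -u admin -p 'secret'   # close the database so the file is quiescent
    cp "$LIVE" "$COLD/"                            # duplicate the closed file to another directory
    fmsadmin open "$DB" -y -u admin -p 'secret'    # bring the original back online

Point the Retrospect source at the cold-copy folder rather than the live databases folder, and schedule the script to run just before the backup window.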
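And regarding the client-side ipfw idea in post 10 above: a rough sketch, assuming Retrospect's client traffic runs over port 497 and that en1 is the AirPort interface on these laptops (both worth confirming on your machines), could look like this.

    #!/bin/sh
    # Block Retrospect traffic on the wireless interface only (run on each laptop).
    # Assumes port 497 for Retrospect and en1 for AirPort; the rule numbers are arbitrary.
    # ipfw rules do not survive a reboot, so this would need to run from a startup item.
    sudo ipfw add 10400 deny tcp from any to any 497 via en1   # backup sessions over AirPort
    sudo ipfw add 10410 deny udp from any to any 497 via en1   # client discovery over AirPort

Ethernet (typically en0) is left untouched, so backups still run normally whenever the laptop is plugged into the wired network.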