ragge

Members
  • Content count: 17
  • Joined
  • Last visited
  • Days Won: 1

ragge last won the day on April 16 2012

ragge had the most liked content!

Community Reputation: 2 Neutral

About ragge

  • Rank: Occasional Forum Poster
  1. Lock client configuration

     If the user is not an administrator, the user cannot change the settings. (And if the user is an administrator, he/she could change it anyway by just editing the right file.)
  2. The new Mac OS X client 9.0.0 apparently contacts the server every 10 minutes to report its current IP address. This is a very good development! Using this mechanism for finding clients instead of multicast would let us back up clients wherever they are in the world! The sad thing is that the Windows Multi Server 7.7 doesn't seem to use this information. It displays the client's last reported address when you bring up the window with information about the client, but when it is about to find the client, it still uses multicast or a static address, whichever the client is configured for. I'd like to be able to set clients to be found by this mechanism instead! /ragge
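The "phone home" behavior described above could look roughly like the sketch below: the client periodically opens an ordinary TCP connection to the server and reports the address it uses on the route to it. The server name and the wire format are invented for illustration; Retrospect's actual protocol is undocumented, though 497 is the port Retrospect is known to use.

```python
import socket

SERVER = ("backup.example.com", 497)   # hypothetical server name; 497 is Retrospect's port
INTERVAL = 600                          # ten minutes, matching the observed behavior

def heartbeat_message(ip):
    # Invented wire format for illustration; the real protocol is undocumented.
    return f"CLIENT-ADDR {ip}\n".encode()

def report_address():
    # Open a normal TCP connection to the server and report the address this
    # machine uses on the route to it. A scheduler would call this function
    # every INTERVAL seconds; the loop itself is omitted here.
    with socket.create_connection(SERVER, timeout=10) as s:
        my_ip = s.getsockname()[0]
        s.sendall(heartbeat_message(my_ip))
```

Because the client initiates the connection, this also works through NAT, which multicast discovery never can.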
  3. It seems that the client only registers for multicast requests on the first active interface. If another interface comes up, the client will not register (and respond) on that interface. In addition, if the first interface is taken down, the client doesn't re-register on the remaining active interface(s), which means that it now will not respond on any interface. I have verified this with "netstat -g" and my own Retrospect multicast client querier hack. This has become a huge problem for us. At our site we have a WiFi network that won't relay multicast but that everyone is always connected to. When people want a backup they have to connect an ethernet cable too. But even if they do, the client won't register for receiving the queries on the wired interface, and they will not get a backup. Even if they disable the wavelan interface, they still will not get a backup. They have to first turn off wavelan, and then connect the ethernet cable, to get backups. I strongly believe that the client should register and respond on all active interfaces (and that the response of course should contain the IP address of the interface the query was received on). We think that this worked before, and that the problem started around the end of January 2012, somewhere around the updates 10.7.3 and 10.6 Security Update 2012-001. Maybe something changed in the IP stack around that time that causes this new behavior. The problem seems to be exactly the same with both the 9.0.0 (319) and the 6.3.029 clients. /ragge
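The fix argued for above, registering on every active interface rather than only the first, amounts to joining the multicast group once per local interface address. A minimal sketch, assuming the group address Retrospect is commonly reported to use and taking the list of interface addresses as a parameter (real code would enumerate them and redo this on interface changes):

```python
import socket

GROUP = "224.1.0.38"   # multicast group commonly reported for Retrospect (assumption)
PORT = 497             # Retrospect's well-known port

def join_on_all_interfaces(sock, interface_addrs):
    # Join the group once per local interface address, so that discovery
    # queries arriving on ANY interface are received, not just the first.
    joined = []
    for addr in interface_addrs:
        # mreq = group address + local interface address, both packed
        mreq = socket.inet_aton(GROUP) + socket.inet_aton(addr)
        try:
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            joined.append(addr)
        except OSError:
            pass   # interface without multicast support; skip it
    return joined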
  4. That I had almost guessed :-) With "you" I meant EMC/Dantz in general. And I thank you for that. OK, at least we have an indication then. Please also tell them that 90 days is pretty bad in the industry today. Today, we have to choose between a client that is exploitable over the network, or one that can be exploited locally if you don't fix it and that occasionally turns itself off. This means that backup could be heavily impaired, which is not acceptable for many of us. EMC/Dantz has to speed those cycles up. In the general release cycle that could be the case, but it can't be the case for security fixes. You have to be prepared to release, on very short notice, a new version with a small fix relative to the currently released one. If you want to do all the testing in all the languages, fine, but you have to make it very quick. You could probably do a lot less testing on a release with a very small change. If you can't do all the testing you want in the short time frame, you will have to release it untested, clearly stating that, and either release a new version if you find any problems or reclassify it as tested when the tests are done. This is how things work nowadays, and EMC/Dantz can't be any different. /ragge
  5. Many of these systems we have no other access to than that we are backing them up with Retrospect. We could contact the owners and ask them to run a script or something, but that is plenty of work and really a backward solution to a simple problem. Why can't you just release a fixed version? Do you have internal problems? If so, let us know! /ragge
  6. Thanks for your reply. It is good to hear that at least someone listens. But that client is (potentially) vulnerable over the network, which in our case, on most clients, is even worse. You need to get a fixed version out now! There is no excuse for not caring about your customers' data integrity like this. I really don't want to be rude or anything, but I have to tell you where we stand. /ragge
  7. That is good, but I still haven't seen the update. The broken version has been out for months now, hasn't it? This is not acceptable. Not at all. You should immediately release a version that could otherwise be the same, with the same problems with shutting off and all that, but with the security problems fixed. Don't you understand that you need to react to and fix these things immediately? That is, within hours or, in the worst case, a few days. If you don't understand this, it is a big problem for us, your customers. We can't afford to invest in a product that opens up security holes in our computer systems, with the manufacturer not even caring to fix the problem. Going down that path is a sure way to lose your market. Please release a new version with this problem fixed immediately. Please confirm that you will respond to security problems immediately in the future. /ragge
  8. I like to get an email when something fails. I think that is a good feature. Retrospect server 7.5 and 7.6 does something akin to that, but the feature is a tad broken, so that it almost isn't useful. Please fix that!
     1. It also spams if the client disconnects from the network, with a "Trouble reading files, error -519 (network communication failed)" error. This happens a lot at our place, and is quite normal. People tend to move their portable computers around, and should be able to. It should be optional to be spammed about this.
     2. It sends 3 or 4 emails for each error, instead of one! It often looks like this: Email 1: Email 2: (Note: No, I do not want to check the log, I want the error to be shown here!) Email 3: It should of course only send ONE email with the complete error report. In addition, the subject line would be more interesting if it were a short summary of the problem.
     3. In some situations, it sends the email AFTER the problem is fixed! I sadly don't recall exactly in what situation this happens, and I don't know if it is fixed now, but I believe it was when it wanted more tapes and put up a dialog about that. Someone saw that and fixed it, and when the problem was fixed it sent the email.
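The batching behavior requested in point 2 above can be sketched as follows: collect every error from a run and compose exactly one message, with a subject line that summarizes the problem. The class and message format here are illustrative, not Retrospect's actual implementation.

```python
class BackupNotifier:
    # Hypothetical batching notifier: record errors during a run, then send
    # ONE email with the complete report instead of several per error.
    def __init__(self):
        self.errors = []

    def record(self, source, message):
        # Called once per error as the run proceeds.
        self.errors.append((source, message))

    def compose(self):
        # Subject doubles as a short summary, as the post asks for.
        subject = f"Backup finished with {len(self.errors)} error(s)"
        if self.errors:
            subject += f": {self.errors[0][1]}"
        body = "\n".join(f"{src}: {msg}" for src, msg in self.errors)
        return subject, body
```

Filtering out expected conditions like error -519 for roaming laptops would be a simple check in `record`, gated by a user option.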
  9. I upgraded our Windows server from 7.5.xxx to 7.6.yyy, and started to upgrade the clients from the server. On Mac OS X, many of the installed files for the 6.2.229 client are installed writable for everyone! This is a major security problem! I strongly suggest that Dantz/EMC immediately release a new rcu file that does this correctly! This is not acceptable! I'd also suggest that you issue a new security alert with the new rcu. /ragge
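A quick way to audit an install tree for the problem described above is to list every file whose permission bits include "writable by others". This is a generic sketch; point it at the actual Retrospect Client install path on the machine being checked.

```python
import os
import stat

def world_writable(root):
    # Walk the tree and return every regular file whose mode includes the
    # "other write" bit (the o+w condition described in the post).
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue   # file vanished or unreadable; skip
            if mode & stat.S_IWOTH:
                hits.append(path)
    return sorted(hits)
```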
  10. I first want to thank you for being active in the support forum and answering users' questions! This is very positive, I think. This is worrying information number one. Vernam crypto is, as far as I know, a one-time-pad crypto. How that could be converted into a cipher-block-chaining crypto and still be even the slightest bit secure is beyond my (rather limited) understanding. It also says that there are 4*10^9 different keys. That sounds like a ridiculously low number. I really don't want to be impolite or anything, but without further details, Simplecrypt seems quite untrustworthy. There aren't many proprietary or homebrew crypto systems that actually stand up to review. Especially not older ones. And even if the crypto algorithm in itself is quite good, implementors often make other mistakes in how keys are generated or handled, or other similar errors, that make it simple or trivial to crack. A switch to AES or similar for network encryption would be a natural move, IMHO. A good key generation scheme is of course still needed. But it says very little or nothing about how keys are generated or handled and such. This is rather a paper describing it from a user perspective. I am looking forward to that! /ragge
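To see why 4*10^9 keys (roughly 2^32) is indeed a ridiculously low number, a back-of-the-envelope calculation helps: even at a modest, assumed trial rate, the whole key space can be searched in about an hour.

```python
# 4*10^9 keys is roughly 2^32, versus 2^128 for even the smallest AES key.
KEYS = 4 * 10**9
RATE = 1_000_000            # key trials per second; an assumed, modest rate

hours_to_exhaust = KEYS / RATE / 3600
# roughly 1.1 hours to try every key at that rate
```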
  11. Thanks! I haven't found any really detailed explanations, but from what I have found it seems it isn't very secure. The only mention of key handling that I have found seems really worrying, though it is hard to tell without any details. Are there any plans to replace the over-the-network encryption with anything stronger? That sounds very good! Will there be any documentation on how it works? /ragge
  12. 1. The networking security measures, meaning the client key, crypto algorithm, key exchange mechanisms, and whatever else applies, are undocumented, and there are reasons to believe that they haven't changed much in decades. How do I know that they are acceptable for my use? I believe that the mechanisms in use should be well documented and open for review by the user. I am only talking about an overview in textual form, though the actual source code would of course be even better. 2. The setting for network communication encryption should be a global one and not a separate setting for each client. It is very easy to forget to tick that box on a client. Maybe there should be an option for overriding the default setting on a client, but the default should be globally settable. /ragge
  13. We use a disk-to-disk-to-tape strategy. The grooming strategy "Retrospect's defined policy" doesn't really cut it. The disk backup sets over time get filled up with snapshots of old disks that are now scrapped, and they will never be kicked out of the backup set. Manually deleting them takes a _lot_ of time, so that is no solution. I see two possible solutions to this: 1. Enable the administrator to set an upper limit on how long a snapshot should be kept in the backup set. After that time, the snapshots get kicked out. We would probably set that to 3 months or so. 2. Rethink the "save the last two snapshots of a disk, no matter what" strategy. An alternative would be that the oldest snapshot is always groomed, no matter if it is one of the two last ones for a certain source. (Before the oldest snapshot is groomed, the snapshot-sparsening algorithm should of course be tried, as is done now.) This would probably need to be combined with a minimum retention time, so that one can make sure the snapshots make it to tape before they are groomed. In addition, since grooming takes a _lot_ of time and can potentially happen on _each_ backup, if alternative 1 is not implemented, one would like the grooming algorithm to always take quite a chunk at a time, say always trying to free up 10 percent or so. How much should perhaps be user-settable. When we tried to use disk backup sets with only grooming, many small backups took 8 hours or more just because the server very often had to groom to make room for the files. This of course didn't work for us at all. /ragge
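The two suggested policies can be sketched together: snapshots past an upper retention limit are always eligible for grooming, and the oldest snapshot past a minimum retention time is eligible even if it is one of a source's last two. All names and the concrete limits below are assumptions drawn from the suggestions above, not Retrospect's actual policy engine.

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=90)   # suggestion 1: upper retention limit (~3 months)
MIN_AGE = timedelta(days=7)    # assumed minimum so snapshots reach tape first

def pick_groomable(snapshots, now):
    # snapshots: (source, taken_at) pairs. Anything older than MAX_AGE is
    # always eligible; additionally (suggestion 2), the single oldest snapshot
    # older than MIN_AGE is eligible, even if it is one of a source's last two.
    expired = [s for s in snapshots if now - s[1] > MAX_AGE]
    aged = sorted((s for s in snapshots if now - s[1] > MIN_AGE),
                  key=lambda s: s[1])
    if aged and aged[0] not in expired:
        expired.append(aged[0])
    return expired
```

A real implementation would first try snapshot sparsening and keep grooming until a target (say 10 percent of the set) is free, as the post recommends.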
  14. I'd like the server to upgrade clients as soon as it gets hold of one with an older version of the client software. We have clients that are away for weeks or months. Hunting those down and manually initiating an upgrade of the client software when one happens to be on campus and reachable is a real pain. The server should be able to upgrade clients on its own. /ragge
  15. I think the client polling scheme has several problems and should be changed. As much as I enjoy someone actually using IP multicast for something useful (and having done so for a long time, by the way), I think the current polling method is inappropriate. We back up some 250 clients, mainly laptops, using one backup server. We split the clients into 8 different groups, and have 8 different proactive backup scripts, each backing up one group of clients to its own separate disk backup set. The current client poller always starts polling the oldest client first. We have clients that are away for weeks or months, so the first 50 clients or so haven't been there for weeks and probably won't be today either. In addition, the cyclic poller resets and starts from the beginning every now and then, such as when a backup ends, I think. This means that our backup server spends most of its time polling for clients that probably aren't there anyway, and it can take many minutes or tens of minutes for it to find a client to back up, even though there are many available at all times. Requesting an ASAP backup from the client is quite meaningless, because either the client was recently backed up and our server will not get that far in the list until it is due again and has wandered up the list, or the client is due for backup anyway. I have two suggestions for changes to the polling scheme, which could be implemented separately or together. 1. Still send multicast polls, but let ALL clients answer each request (and report their ASAP-backup-request status). To avoid a burst of replies, the clients could randomize a delay of up to, say, a few seconds for the reply. The server could give, in the poll request, the delay window to spread the replies over, based on the expected number of replies (for example, number-of-clients/25 seconds, giving an average of 25 replies a second; this could be a user setting).
The polls should probably be repeated a few times, as is done now, with a sequence number or such so that a client only replies to each poll once. This would let the backup server get an almost immediate picture of the client situation, and the starvation problem explained above would be solved. This should be a minor change to both the server and the client, and the server would have to know which clients are of the older kind that still have to be polled individually. 2. The client should remember the IP address of its server and report home itself. This would allow us to back up machines that are away from campus but still have IP connectivity, such as machines at home or at other campuses, which our machines tend to be for shorter or longer times. To work behind a NAT, as in many people's homes, it would have to either just open a normal connection to the server when it is due for backup, or work with the various NAT-tunneling schemes that are available. I would suggest that the client just connects to the server and asks when it is next due for backup, and when that time comes, or when the user requests an ASAP backup, it connects to the server again and waits for backup. /ragge
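The randomized-reply idea in suggestion 1 above is small enough to sketch directly: the server derives a reply window from the expected number of clients so that replies average out to the example rate of 25 per second, and each client picks a uniform random delay inside that window. Names and structure are illustrative only.

```python
import random

REPLIES_PER_SECOND = 25   # the example target rate from the suggestion above

def reply_window(expected_clients):
    # Window, in seconds, that the server advertises in its poll request so
    # that replies spread out to about REPLIES_PER_SECOND on average.
    return expected_clients / REPLIES_PER_SECOND

def reply_delay(window):
    # Each client picks a uniform random delay inside the advertised window
    # before answering, avoiding a burst of simultaneous replies.
    return random.uniform(0, window)
```

For the 250 clients mentioned above, the window is 10 seconds, so the server gets a near-complete picture of which clients are reachable within seconds instead of tens of minutes.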