
Time frame for next update?


svenlm


The next update will probably be the one that includes PowerPC support. We are working on the final schedules right now.

 

It is possible a bug-fix update will come out before then, but that would delay the PowerPC release. The PPC release will contain bug fixes too.


FWIW, I find the .608 release to be more stable, faster, and generally more satisfactory than the initial 8.0 release. Unfortunately I didn't keep baseline statistics, so I'm going from memory, and I'm sure my network is substantially simpler than other users'...

 

But -- Good job jumping on those bugs! Congratulations for whacking quite a few of them in a short amount of time!


I would add a vote to put fixes ahead of the PPC schedule.

As a PPC user, I agree. There is no point in rushing out something that "breaks quickly". For backup software, my priorities are:

 

(1) reliability

(2) reliability

(3) reliability

(4) retrieval of old backups

(5) features

 

Even if the PPC platform were "supported" to the extent that the Intel platform currently is, the software doesn't yet seem ready for production use. This isn't a gripe, just an observation that the software needs to become reliable. We are entrusting our data to backup software, and it needs to be worthy of that trust.

 

Russ


My primarily "reliability" issue is the fact that (at least on my system), the 609 engine is "respawning" at least once a day.

 

It's not necessarily stopping the backup (though that happens sometimes), but the engine clearly restarts itself.

 

This is obvious when I leave the console up and running (say showing only the Activities view) and I look up and the console has reset itself to the default 3-pane view.

 

I'm deliberately restarting the backup server machine daily (like I had to do with 6.1 under 10.5) to try to get around this -- but that doesn't always help.
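
For anyone who wants to confirm the respawning on their own machine, here is a minimal watchdog sketch of my own (not part of Retrospect) that logs whenever the engine's PID changes. It assumes the engine process is named "RetroEngine"; verify the actual name with ps ax on your install.

```python
#!/usr/bin/env python3
# Hypothetical watchdog, not part of Retrospect: poll for the engine
# process and log a line whenever its PID changes (i.e., it respawned).
# Assumes the process is named "RetroEngine"; verify with `ps ax`.
import subprocess
import time

PROCESS_NAME = "RetroEngine"  # assumption -- adjust for your install
POLL_SECONDS = 60

def engine_pid():
    """Return the engine's PID as a string, or None if it isn't running."""
    try:
        out = subprocess.check_output(["pgrep", "-x", PROCESS_NAME])
        return out.split()[0].decode()
    except subprocess.CalledProcessError:
        return None  # pgrep exits non-zero when nothing matches

last = engine_pid()
while True:
    time.sleep(POLL_SECONDS)
    current = engine_pid()
    if current != last:
        print(f"{time.ctime()}: engine PID changed {last} -> {current}")
    last = current
```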

 

That's my #1 "reliability" issue at this point. The rest of my bug reports I can work around as -- when the engine hasn't *stopped* working -- my proactive daily backup scripts are backing up just fine.

 

And I don't touch my scripts -- or my rules -- now that I have them where I want them...


Out of curiosity, what types of reliability issues have folks seen / are seeing with the current release of 8?

- John

 

1. Data transmitted between the consoles and the engine does not seem to be accurate.

2. Setting an option in a script is not always saved

3. The current activity is not always reported back to the console

4. Logs say "Execution completed successfully" when files were not backed up

5. Schedules become corrupted after trying to defer them and re-enable them

6. Performance and accuracy between the console and the engine diminish the "further" you get from the server - for example, using the console remotely over a WAN via VPN is unbearable.

7. Scripts can lose their settings. For example, I have had scripts set to back up a source to a media set. They work for days. Then one day I go in and see that a script has a red warning x next to it, and it no longer has a source and/or a media set defined. This has happened more than once, to more than one script.

 

 

(See my "Errors that can't be tracked down" post)

(See my "Oven Temperatures Vary" post)

 

I haven't even tried a restore yet (shudder).

 

Those are in the reliability area.

 

Other improvements that are needed:

- Documentation (there is none!)

- Improved performance

- Improved reporting and e-mailing (this could arguably go in the reliability column too: I get emails that say "completed successfully" when in fact the backup did not complete, and no email is sent when something did NOT happen -- for example, no notification was sent when scripts weren't executing because they had no sources defined, per number 7 above; a simple absence check is sketched after this list)

- More flexibility in the scripting and more multithreading intelligence in the engine: currently, if you have complex backup needs (several types of data across several servers, where you periodically move data from local storage to archive and then erase the oldest data on the local storage) and you want the most performance (i.e. multithreading), it takes a balancing act of managing several little scripts and Media Sets that each do one thing, and stringing them together in the proper order to optimize thread execution. Certainly the software CAN do it; it is just clumsy and involves lots of individual scripts, trial and error, and analysis.

 

- Right now, everything is stored in a proprietary Retrospect format, in 600MB chunks (presumably so they could be burned to CD). I would like to see the option of storing the data in the native Finder format, not limited to 600MB in size.
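
To illustrate the "no email when something did NOT happen" gap mentioned above, here is the kind of absence check I have in mind -- a sketch of my own, not a Retrospect feature. Every name in it (the destination path, the addresses, the SMTP host) is a placeholder:

```python
#!/usr/bin/env python3
# "Dead man's switch" sketch -- NOT a Retrospect feature. Alerts when the
# backup destination hasn't been written to in MAX_AGE_HOURS. Every name
# here (path, addresses, SMTP host) is a placeholder for illustration.
import os
import smtplib
import time
from email.mime.text import MIMEText

DEST = "/Volumes/BackupDisk/Retrospect"  # placeholder destination folder
MAX_AGE_HOURS = 24

def newest_mtime(root):
    """Most recent modification time of any file under root."""
    latest = 0.0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            latest = max(latest, os.path.getmtime(os.path.join(dirpath, name)))
    return latest

age_hours = (time.time() - newest_mtime(DEST)) / 3600
if age_hours > MAX_AGE_HOURS:
    msg = MIMEText(f"No new backup data in {age_hours:.1f} hours.")
    msg["Subject"] = "Backup absence alert"
    msg["From"] = "backup@example.com"
    msg["To"] = "admin@example.com"
    smtplib.SMTP("localhost").sendmail(msg["From"], [msg["To"]], msg.as_string())
```

Run something like that from cron and you at least get told when nothing is happening, which is the case Retrospect's own notifications miss.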

 


There are an awful lot of posts on this forum. I asked what problems folks here have had, since they volunteered that they have had issues.

 

My biggest concern out of Joel's list is #4; the rest sound like annoyances, but not responsible for data loss per se. Does anyone have further details on reports of successfully completed backups that did not back up everything?

 

Would folks suggest going back to 6 in the meantime for reliable backups?

 

 


Having read a lot of the forum postings (even for stuff I didn't post), it *seems* that the people with the most significant issues are those with tape drives and/or SCSI cards -- as always, the bane of Retrospect.

 

If you are backing up to an external hard disk, things seem to be better. That's what I'm doing. YMMV.


My biggest concern out of Joel's list is #4; the rest sound like annoyances, but not responsible for data loss per se. Does anyone have further details on reports of successfully completed backups that did not back up everything?

 

You don't consider scripts losing their definitions (sources, destinations, schedules) a concern?

 

It sounds like you haven't used this software much, because it is buggier than the list above implies. It is just buggy all over the place.

 

And that is one thing that backup software just can't be.

 

But even beyond that, it just is not reliable. Period. And it is backup software.

 

If you need further evidence, or perhaps to underscore the overall feeling you get with the application: between the time I posted my last response and this one, I was working on the issue I reported in "Errors that can't be tracked down".

 

I started another copy script.

 

The console crashed about 1.5 hours into it. No email. No indication of a crash, except that the console was no longer running. No activity logged in the "Past Activities" list. If I hadn't known I started the script, I would never have known it ran or failed. Just "poof", never happened.

 

I don't mean to dump on the software. I believe the developers are working hard to resolve the issues. It seems that they have implied that there was overwhelming pressure from corporate and end users to get the product out the door. And I believe that one day, this software will probably be pretty good.

 

But it is COMPLETELY UNACCEPTABLE for a production environment (today). Period.

 

 

 

 


I certainly haven't used it a lot, nor in our production environment yet. I have run into issues already, which definitely doesn't make me feel too good about it.

 

It seems as though the backups do not run any faster than with version 6, although Dantz has touted otherwise. What are your experiences?

 

Also, has anyone found the multithreading to really speed things up at all when backing up multiple machines?

 

I guess my bottom-line question is, why would I use version 8 over the old version 6 that has been working fine for us for years?


Ahh, I forgot about the big feature - grooming backup sets. Has anyone tested this extensively to see if it works reliably? I tried grooming one set here, telling it to keep just one backup, but it still had the two backups in it when it finished.

 

- John


It appears multithreading only works across multiple destinations, i.e.:

source A } drive 1

source B } drive 2

instead of what I thought would happen -- multiple sources to one destination, i.e.:

source A and B simultaneously } drive 1

 

Am I correct? Is this how it works in Retrospect 7 for Windows?

 

Because of this I think network backups still will not run my old DLT at full speed. :(

My hope is that they get source-to-disk-to-tape working.* My testing with source-to-disk-to-disk isn't working so well.

 

* Lacking a PCIe SCSI card, I haven't tested d2d2t. It may be just fine.


Also, has anyone found the multithreading to really speed things up at all when backing up multiple machines?

The short answer is yes. Obviously, anytime you can do multiple tasks in parallel it is going to be an advantage over doing them sequentially, and the gain can be substantial -- especially if you have any remote sites that are limited to T1 speeds. It certainly makes up for the slower overall performance of the product.
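
A back-of-the-envelope example, with made-up numbers (three remote sites, 10 GB each, one T1 per site), shows how big the win can be:

```python
# Illustrative only: sequential vs parallel backup time over T1 links,
# assuming each remote site sits behind its own ~1.5 Mbps T1, so parallel
# runs are bounded by the slowest single site rather than the sum.
SITES = 3
GB_PER_SITE = 10
T1_MBPS = 1.544  # nominal T1 line rate

hours_per_site = GB_PER_SITE * 8 * 1000 / T1_MBPS / 3600
print(f"per site:   {hours_per_site:.1f} h")          # ~14.4 h
print(f"sequential: {SITES * hours_per_site:.1f} h")  # ~43.2 h
print(f"parallel:   {hours_per_site:.1f} h")          # all sites at once
```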

 

The performance improvement from threading is one significant win for the product over version 6.

 

Where it breaks down a bit is in how it is implemented. It isn't very intelligent (I already posted about that in this thread), and it makes you do a lot of unnecessary management work to take advantage of multithreading.

 

For example, if you have Server A and Server B and you want both to back up simultaneously, you have to create two Media Sets. These "Media Sets" can be on the same device (the same hard drive, for example); you essentially end up with a folder on that hard drive for each Media Set you create.

 

Now that is fine for two machines, but if you want to scale this up, you can see that you end up creating a lot of Media Sets and folders to manage. It gets exponentially more complicated if you have two types of data that you want to handle differently -- for example, staff and student data. If you want to give priority to staff backups, you have to create a Media Set for each "type" of data you are backing up. If Server A and Server B both have staff and student data, even though it is all going to the same destination, you have to create four Media Sets.

 

Now let's say the next step is that every week you want to move your daily backups to an attached archive disk. Well, now you have to create a copy script for each Media Set if you want to take advantage of threading.

 

And after a successful copy to the archive disk, you will want to recycle your local drive. Now do that for 24 servers. It gets to be unmanageable.
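
To make the bookkeeping concrete, here is a rough tally (my reading of the scheme above; the per-set script counts are my assumptions, not product requirements):

```python
# Rough tally of what you end up managing under the one-Media-Set-per-
# thread scheme: one Media Set per server per data type, plus (my
# assumption from the description above) one weekly copy-to-archive
# script and one recycle script per set.
def objects_to_manage(servers, data_types):
    media_sets = servers * data_types
    copy_scripts = media_sets
    recycle_scripts = media_sets
    return media_sets + copy_scripts + recycle_scripts

print(objects_to_manage(2, 2))   # 12 -- already fiddly for two servers
print(objects_to_manage(24, 2))  # 144 -- for the 24 servers mentioned above
```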

 

 

Ahh, I forgot about the big feature - grooming backup sets. Has anyone tested this extensively to see if it works reliably? I tried grooming one set here, telling it to keep just one backup, but it still had the two backups in it when it finished.

Yet another flop. I have been trying for three weeks to successfully copy my local backups to an archive disk and have yet to get it to work without error. There is no way I am going to tell Retrospect to automatically delete any data (grooming) at this point. Until basic functions are working correctly, the trust just isn't there.

 

I guess my bottom-line question is, why would I use version 8 over the old version 6 that has been working fine for us for years?

Well, that is a different question. I am not sure either of those products is production ready; it depends on your needs and your environment. Version 6 hasn't kept up with the times, and version 8 needs more work. That said, version 8 is obviously what will be supported moving forward.

 


Ahh, I forgot about the big feature - grooming backup sets. Has anyone tested this extensively to see if it works reliably? I tried grooming one set here, telling it to keep just one backup, but it still had the two backups in it when it finished.

 

- John

 

 

I have used/tested grooming a *lot* (this feature is a major reason I have 8 in production). It has worked very well for me so far.


Everyone, I appreciate the feedback. Each person needs to look at their own individual environment and try not to compare with others too heavily. Everyone will have different experiences depending on the hardware being used and the exact strategy being attempted; what one person sees may not be seen by others. I run a scheduled backup to disk every day at the same time and it has never failed for me, while others have had more than a couple of problems.

 

Some of the items mentioned in this thread are real issues, which will be fixed. In other cases I think more review and analysis is needed before specific problems are confirmed as not being isolated to a specific configuration.

 

Almost every issue reported in this forum has been logged as a bug and in a couple of weeks we will publish another update with a huge number of bug fixes. Today we found a solution for some of the primary causes for VXA backup failures. We have also isolated and fixed a large number of assert errors.

 

I am going to close this thread since it is going off topic for this forum, which is for direct bug reports and troubleshooting of specific problems.


This topic is now closed to further replies.