Everything posted by 2927aa2c-bcad-4982-b785-6b9ccc007482

  1. I've experienced the same weird problems as Ernst: scripts suddenly deciding to use another media set or source (which I could only resolve by deleting the script and creating a new one), and a complete inability to set the start time for a script (e.g. I change it to 9am, the control loses focus, and it changes to 3pm). Automatic grooming never seems to happen for me - the media set fills up and then asks for more media. Manual grooming doesn't reduce the number of backups and hence barely reduces the media set usage. Ernst, I created a cron job which keeps the Retrospect server alive. I had to do this because every time I went on holiday, the server crashed and a nightmare awaited me on my return. If the backups fail, it's my responsibility. All of the bugs also make me look foolish; I can't just keep blaming the buggy software, as true as that may be.
  2. I've also been pointed by another forum user to this tool by Paul Fabris: Retro8LogAnalyser, a free FileMaker 10-based log analysis program for Retrospect 8 log data. It parses the log file much more thoroughly than my script does and stores detailed information in an SQL database. My script could probably be simplified if I changed it to interface with a database created by Retro8LogAnalyser.
  3. I'm in a similar situation. I work for a university, and our group's tech budget comes straight out of funding, which is always a limited resource. When Retrospect was set up (before I started work), it seemed like a really good solution at the right price point; I would have made the same decision myself. Sadly, the previous admin and I have had to deal with all the issues. My worst-case scenario? The backups fail, I get fired, and I subsequently have two weeks to leave the US. This is something I worry about! I have spent hours upon hours dealing with various issues to keep all the backup plates spinning at once. To top it off, it sounds like v9 still has many bugs.
  4. Hi, I asked the same question a few weeks ago. It prompted me to write a log parser in Python, which I've just made public (see http://forums.support.roxio.com/topic/78298-remotely-viewing-logsreports/page__view__findpost__p__394126). The readme file has tips on how to set up automated summary emails. With a little tweaking, you could set up individual emails per (failed) job. Or you could do some funky stuff and send updates to some networked client program! I do not know all the log event classes and types, so my script is likely missing some of them at present. Hopefully what is there is of use to you, though. I find it very useful for monitoring failures from home.
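For anyone who wants the per-failed-job tweak mentioned above, a minimal sketch of the idea (this is not the published script itself; the failure-line format and the find_failures()/build_failure_email() names are illustrative assumptions):

```python
import re
from email.mime.text import MIMEText

# Assumed failure-line shape; real Retrospect log lines may differ.
FAILURE = re.compile(r"^-\s*(?P<job>.+?): (?P<error>error .+)$")

def find_failures(log_text):
    """Return (job, error) pairs for lines that look like failures."""
    return [m.group("job", "error")
            for m in map(FAILURE.match, log_text.splitlines()) if m]

def build_failure_email(job, error, to_addr="admin@example.com"):
    """Build one plain-text message per failed job."""
    msg = MIMEText("Retrospect job %r reported: %s" % (job, error))
    msg["Subject"] = "[Retrospect] FAILED: %s" % job
    msg["To"] = to_addr
    return msg

# Sending is then one smtplib call per message, e.g.:
#   import smtplib
#   with smtplib.SMTP("localhost") as s:
#       s.send_message(msg)
```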
  5. I finally got round to tidying the code up for public view. You can download it from here for the moment (I may move it in the future): https://kortemmelab.ucsf.edu/~oconchus/retrolog.zip There are three Python scripts: a parsing script which does most of the work, an executable script which emails admins a status summary of the log in HTML and plain text, and a script to be added to a Python-driven webpage. An example of the webpage output (in Chrome) can be seen here: https://kortemmelab.ucsf.edu/~oconchus/retrospectlog.png The HTML table in the email looks similar to the screenshot. The method used to check whether a script has passed is simplistic at present: you define a list of scripts and their expected frequencies at the bottom of retrospect.py, as described in the readme file. A value of 1 means daily, 7 means weekly, and so on. Ideally, you could give it a more descriptive schedule, e.g. 'every day from Monday to Friday' or 'on Sunday'. This shouldn't be difficult and is the reason for the Script class, but I didn't have the time or need to implement it. If you have any problems or suggestions, feel free to let me know.
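The frequency check described above boils down to a date comparison. A minimal sketch, assuming the script list maps names to an interval in days (1 = daily, 7 = weekly) as in retrospect.py; the overdue() helper is illustrative, not the actual code:

```python
from datetime import datetime, timedelta

# Illustrative script list: name -> expected frequency in days
SCRIPTS = {
    "Nightly backup": 1,
    "Weekly verification": 7,
}

def overdue(last_success, frequency_days, now=None):
    """True if the last successful run is older than the expected interval."""
    now = now or datetime.now()
    return now - last_success > timedelta(days=frequency_days)
```

A Script class could extend this to richer schedules ('every day from Monday to Friday', 'on Sunday') by checking weekday() rather than a fixed interval.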
  6. I spent a few hours writing a script to help me remotely administer our Retrospect installation (v8.2.0.399), and I'm pretty happy with the results. The script parses the log file, converting UTF-16 to ASCII and throwing away the $[...] comments as above. It then produces both an HTML table and a plaintext list of the most recent and last successful runs for each job I'm interested in; jobs that haven't run successfully in x days are marked as erroneous. My webserver script prints the HTML table along with the log entries in date-descending order, and the table cells hyperlink to the corresponding entries. A separate script runs as a cron job every 12 hours and sends me an HTML/plaintext email containing the table and the list, with hyperlinks to the relevant logs. This is an enormous improvement on the system I had before, which involved being in the office and leaving my chair a lot. If this is something people would be interested in, let me know and I'll check with my boss whether it's okay to make the scripts available.
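A sketch of the email half of this setup, assuming the standard-library email/smtplib route (the function name and addresses are illustrative). The 12-hourly cron entry would be along the lines of `0 */12 * * * /usr/bin/python /path/to/send_status.py`:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_status_email(html_table, plain_list, to_addr="admin@example.com"):
    """Bundle the HTML table and plaintext list into one message;
    mail clients display the richest part they support."""
    msg = MIMEMultipart("alternative")
    msg["Subject"] = "Retrospect backup status"
    msg["To"] = to_addr
    msg.attach(MIMEText(plain_list, "plain"))  # fallback part goes first
    msg.attach(MIMEText(html_table, "html"))
    return msg
```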
  7. Thanks for the reply, Steve, but I missed the name of that app. Thanks for the information, twickland; it was very useful. Maybe Retrospect 9 will add the export-to-tab-delimited-text-file feature back in... but I won't be upgrading until it's well tested. In the meantime, I wrote a Python script to parse the file for terminal printing so I can read error messages from home. Next I'm going to write another script to create XML so I can run XSLT over it on our webserver. In case this is useful to anyone else, here's the script I'm using for terminal printing. It does one particularly bad thing by stripping the high byte off the 16-bit characters, but it works fine for me; our logs don't contain any unusual characters, and string conversion functions aren't great in old versions of Python anyway.

     import os, re

     maxchars = 32000
     logfile = "operations_log.utx"
     internalregex = re.compile("\$\0\[\0[^[]*?\]\0")
     entryregex = re.compile("\+")

     # Read (at most) the last maxchars bytes of the log file
     sz = os.path.getsize(logfile)
     F = open(logfile, "r")
     if maxchars < sz:
         F.seek(sz - maxchars)
     else:
         maxchars = sz
     contents = F.read(maxchars)
     F.close()

     # Remove all nested $[...] sections from the log
     lasts = None
     while contents != lasts:
         lasts = contents
         contents = internalregex.sub("", contents)

     # Remove the high byte from the 16-bit characters
     # (Warning: this corrupts non-ASCII characters)
     contents = re.sub("\0", "", contents)

     # Optional: remove leading whitespace from lines
     contents = re.sub("\n\t+", "\n", contents)

     # Entries begin with a plus sign. Skip to the first full entry,
     # or use the entire string if no full entry exists.
     contents = contents[contents.find("\n+") + 1:]

     # Print the file contents, highlighting entry headers in green
     for line in contents.split("\n"):
         if entryregex.match(line):
             print("\033[92m%s\033[0m" % line)
         else:
             print(line)
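As an aside on the high-byte hack: in current Pythons the log bytes can simply be decoded as UTF-16 instead. A sketch; whether operations_log.utx carries a BOM or is plain little-endian is an assumption here, hence errors="replace":

```python
def decode_log_bytes(raw):
    """Decode raw log bytes as UTF-16, substituting U+FFFD for bad sequences."""
    return raw.decode("utf-16", errors="replace")

# Usage against the real file would be:
#   with open("operations_log.utx", "rb") as f:
#       text = decode_log_bytes(f.read())
```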
  8. Hi everyone, is there any way to access the Retrospect reports without using the Retrospect application to connect to the Retrospect engine? Ideally, I would like to generate the reports on demand from the command line in some parsable format. I would then install a cron job which parses the files and emails me if, for example, a machine has not been backed up in a few days. At present, the only way I can determine whether the Retrospect application/engine is running properly is either to physically sit at the machine it runs on or to rely on receiving emails when something fails. The failure emails are also usually uninformative compared to the reports, so any way to have reports emailed to me as above would be incredibly useful.
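To make the goal concrete, the cron-side check would be roughly the following sketch (the per-machine date mapping is assumed to come from whatever ends up parsing the reports; stale_machines() is illustrative):

```python
from datetime import datetime, timedelta

def stale_machines(last_backup, max_age_days=3, now=None):
    """Return machine names whose last backup is older than max_age_days."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    return sorted(name for name, when in last_backup.items() if when < cutoff)
```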