
OS X Server - mysql - and Retrospect



Hello,

 

I have a client who is running OS X Server 10.3.9 (2 GB RAM, 21 GB of data getting backed up) and Retrospect 6.1.126. I have 3 duplicate backup scripts that back up to an external FireWire 800 hard drive. The client hosts 2 websites on the server, as well as a MySQL database and an application named JRun that ties into the MySQL db.

 

We had numerous problems doing a full duplicate, from server reboots to the website going down and the database needing to be restarted. Eventually I excluded the /Applications/JRun4 directory as well as the /var/mysql directory. This at least kept the website and the database up. However, after the backup, the mysqld process in Activity Monitor kept fluctuating between 5% and 35% CPU, with very slow search times and performance. My client rebooted the server and all was back to normal.

 

Is there something else I should be doing on this server regarding MySQL? This is a non-standard setup, and another person does the maintenance of the database and MySQL. I've never had issues with Retrospect like this. The client thinks it's Retrospect; I think it's MySQL or JRun.

 

Any help would be appreciated.

 

Thanks,

Jason


Is the duplicate backup being done while the database is live? If so, you are guaranteed problems. A discussion of the reason is more than can be done here, but, in short, any backup of a database (mysql, Cyrus mail database, financial transactions, etc.) must be done while the database is quiet/inactive. Otherwise, there's no way to get a consistent copy. There's an extensive discussion in this thread:

Is Duplicate an Exact Duplicate?


hi jason,

 

russ is totally correct about this, and i'd also add that you can have MySQL dump its info to static files and then back those up. you should talk to the database admin about this and have them dump the data to a specific folder that you can back up. if you are interested, here are a couple of articles on 'mysqldump' that might be illuminating. i believe there are other utilities for this as well:

 

http://builder.com.com/5100-6388-5259660.html

 

http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html

 

the fact that MySQL has these tools built in should tell you that you don't want to try backing up the 'live' databases. i've done the same thing with FileMaker, FWIW. you'll then want to define 'Subvolumes' so that you only get the data you need from the server.
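just as a rough sketch of what the admin's dump script might look like (the database name, user, password, and paths here are all placeholders; your admin will know the real ones):

    #!/bin/sh
    # nightly mysql dump -- everything here is a placeholder example
    DUMPDIR=/var/backups/mysqldumps              # the folder Retrospect will back up
    MYSQLDUMP=/usr/local/mysql/bin/mysqldump

    # dump the whole 'bookstore' database to a dated .sql file
    $MYSQLDUMP --opt --user=backup --password=XXXX bookstore \
        > $DUMPDIR/bookstore-`date +%Y%m%d`.sql

you'd then define that dump folder as the Subvolume and leave the live /var/mysql directory out of the backup entirely.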

 

HTH.


Thank you both for your quick replies. I know that the mysql databases get backed up by a script that the database admin runs, so I might be able to exclude the database folders altogether from the backup. Do you foresee any problems with that? Or will I still have issues with the mysqld process?


you can use a selector to exclude the directory, but Retrospect will still scan it. i don't foresee that being a problem, but if it were me i'd try to use a subvolume if i could, to cut down on scan time. i'd still try to get a backup of his 'mysqldump' output, if that's what he's using, just to be safe.


I excluded the following folders from the backup:

 

/Applications/JRun4

/var/mysql

 

The backups run properly, but again the database is slow or unresponsive once the backup is complete. My client has been restarting the server in order to correct this. For now, I have stopped the backups until I can get this issue resolved.

 

I think we will need to stop the JRun and mysqld processes while Retrospect does the backup, then restart the processes after the backup is complete. Would this be something we could do via a script? I have next to no scripting experience.


hi jason,

 

it still sounds like Retrospect is hitting some MySQL files, perhaps the prefs? i'm not certain.

 

however, your question of stopping and starting the MySQL and JRun4 daemons is a good one. if you are running Retrospect locally, you could use the 'Retrospect Event Handler', which is an editable AppleScript. check your manual for usage.

 

my best suggestion would be for you to collaborate with your MySQL admin. ask him/her to write a short shell script that will shut down/start up everything properly, and then read up on AppleScript (especially the 'do shell script' command). here is a good resource for AppleScript:

 

http://bbs.applescript.net/viewforum.php?id=2
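and just so you can see the shape of the thing, here's a sketch of the sort of shell script the Event Handler could call via 'do shell script'. every path, password, and server name below is a guess that you and the admin will need to verify:

    #!/bin/sh
    # dbctl.sh -- stop or start mysql and JRun around the backup window.
    # paths, the password, and the JRun server name are guesses -- verify them!
    case "$1" in
      stop)
        /usr/local/mysql/bin/mysqladmin --user=root --password=XXXX shutdown
        /Applications/JRun4/bin/jrun -stop default     # check the JRun docs for the real command
        ;;
      start)
        /usr/local/mysql/bin/mysqld_safe --user=mysql &    # relaunch the daemon in the background
        /Applications/JRun4/bin/jrun -start default
        ;;
    esac

the Event Handler would call 'dbctl.sh stop' before the duplicate starts and 'dbctl.sh start' when it ends (the handler file's comments show which events fire when).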


Have you tried to analyze the source of the sluggishness? If it's cpu resources that are being hogged, "top" should show you, and you can also get pretty pie charts showing thrashing, etc., with OS X's Activity Monitor.
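For example, running top from a Terminal window sorted by processor usage will show whether mysqld really is the process eating the CPU:

    top -o cpu

(I believe "top -u" gives the same CPU-sorted display on older releases.)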

 

There's no reason that I can think of for Retrospect to slow things down after a backup. You don't indicate whether these are scripted backups that autolaunch at a certain time; I assume that's what you are doing. Just out of curiosity, what do you have set in Retrospect's "Preferences > Unattended" (hopefully Quit) and "Preferences > Run Control" (hopefully not "Stop on Errors" and not "Confirm Before Stopping Executions")? If Retrospect quits after executing, it's hard to see how it could slow things down after finishing.

 

Now, I don't have your configuration (we have an Xserve G5 with 10.4.7 Server), but I've never seen what you are describing.

 

Russ


waltr - I will have the mysql admin look into the scripting side of things. However, I brought this up to him and it might not fly, because this db is for an online bookstore and they are concerned that if the db goes down during the night, they will lose sales from other parts of the world. So I'm not sure what to recommend at this point.

 

rhwalker - the source of the sluggishness is the mysqld process CPU usage; it fluctuates anywhere between 3% and 35% after the backup is run. The system is a 1 GHz Xserve running OS X Server 10.3.9 and Retrospect 6.1.126, which runs a duplicate (unattended) backup of the 60 GB hard drive to a 60 GB partition on a FireWire 800 hard drive connected to the Xserve. The duplicate backup runs at 1:00 AM EST three days a week, per the client's request. I have 2 exclusions: folder names that contain "JRun4" and "mysql". The backup runs and completes successfully each time, but the database CPU utilization renders it useless until the server is rebooted. The client runs an online bookstore and website from the Xserve. The JRun4 application ties into the MySQL db that runs their inventory, both internally and on the website.


Quote:

waltr - I will have the mysql admin look into the scripting side of things. However, I brought this up to him and it might not fly, because this db is for an online bookstore and they are concerned that if the db goes down during the night, they will lose sales from other parts of the world. So I'm not sure what to recommend at this point.

 


Well, your earlier post indicates that your mysql admin is already backing up the mysql database:

Quote:

I know that the mysql databases get backed up by a script that the database admin runs, so I might be able to exclude the database folders altogether from the backup.

 


There are two separate issues here that need to be addressed/analyzed:

 

(1) the cause of the mysql processor usage following a Retrospect backup; and

(2) how to coordinate the mysql backup with the one done by Retrospect.

 

As for (1), let me suggest that you try to troubleshoot this by not doing the existing server backup (adjust the schedule for all present scripts to 1 week in the future) and defining some subvolume on the server (who cares what, perhaps /Applications/Utilities or some such) to see whether Retrospect can back up that subvolume and quit, and whether the mysql CPU load occurs after this simple test. As a comment, a 1 GHz G4 Xserve is a bit underpowered for running Retrospect, which maxes out our 2 GB 1 GHz single-processor Xserve G5. I think there might have been some FireWire issues with 10.3.9 (others were reporting them), so you might also try creating another subvolume on one of the internal drives and doing the duplicate to that subvolume rather than to your FireWire drive, just to see if the slowness problem persists there. That would give you a working data point from which you could go forward, adding folders to the backup and experimenting with the destination.

 

As for (2), I could email you a copy of our heavily-edited Retrospect Event Handler for you to look at. It uses a "trigger script" to stop/start our mail server when Retrospect does a duplicate of the Cyrus databases, and it should be easy to modify to call your mysql admin's backup script, if that's what you want to do. Send me a private message with your email address (you don't have it in your profile). But it's useless to try to get (2) going before you figure out what is happening with the CPU slowdown.

 

Russ


good stuff russ,

 

here's some more. jason, you are going to have to read this:

 

http://developer.apple.com/internet/opensource/osdb.html

 

i note that the MySQL daemon lives in /usr/local. could Retrospect be backing up the actual executables? and could that have anything to do with the problem? if it were me, i'd start by excluding *EVERYTHING* that has to do with MySQL. alternatively, you could define Subvolumes and only back up the data you need.
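to get a list of candidates for the selector, something like this (read-only, so it's harmless) would show everything on the boot disk with mysql in its name. the /Volumes exclusion is just so it skips your firewire drive:

    sudo find / -path '/Volumes' -prune -or -iname '*mysql*' -print 2>/dev/null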


Quote:

The system is a 1 GHz Xserve running OS X Server 10.3.9 and Retrospect 6.1.126, which runs a duplicate (unattended) backup of the 60 GB hard drive to a 60 GB partition on a FireWire 800 hard drive connected to the Xserve. The duplicate backup runs at 1:00 AM EST three days a week, per the client's request.

 


 

Note that "Duplicate" and "Backup" are different terms in Retrospect, that mean different things.

 

- Can you unmount the external FireWire drive when you are observing the high cpu loads?

 

- Do you get the same results if you perform a Backup to a File Backup Set stored on the same external drive?

 

- What happens if you restart the SQL process(es) instead of rebooting the entire server? (See the sketch at the end of this post.)

 

- Why are you doing Duplicates instead of Backups? Do you hope to have a clone system so you can quickly replace a failed drive? If you're looking to minimize your downtime after a failure, note that excluding things from the Duplicate will require reinstalling at least some software anyway. Performing a Restore to a new, empty volume might not take much longer.

 

Duplicating corrupted data will just give you two copies of the problem; perhaps a hardware RAID system could give you the protection against disk failure you want, while Retrospect provides data backups in case of software crashes, lost files, etc. Retrospect Snapshots are one of the best features of the program, and you are not using them in your current configuration.

 

That being said, Retrospect accesses Source files as Read-Only, so it's somewhat mysterious that the server is being affected by the Duplicate this way.
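As a sketch for the third question above (assuming a typical /usr/local MySQL install; your paths may differ), restarting just the database instead of the whole server would be something like:

    # stop the database cleanly, then bring it back up in the background
    sudo /usr/local/mysql/bin/mysqladmin --user=root --password=XXXX shutdown
    sudo /usr/local/mysql/bin/mysqld_safe --user=mysql &

If the CPU load clears after that, you at least know a full reboot isn't necessary.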

 

 

 

Dave


Archived

This topic is now archived and is closed to further replies.
