Backing up Snow Leopard Server with mlbackups

Apple’s Snow Leopard Server is one lovely implementation of UNIX.  I’ve thoroughly enjoyed it for the power and simplicity it offers.  Apple’s operating systems have long combined UNIX power with elegant design, and Snow Leopard Server is no exception.

The barrier to entry with Snow Leopard Server was lowered when Apple reduced the price of the product to $499 USD and began offering only an unlimited-client version.  It became even more palatable when the Mac Mini server model was introduced at $999.  Previously, you could build your own Mac Mini server for about $1300 USD, but this new model allows small developers and workshops to get into the product at a very low price point.

With that in mind, I’m anticipating that there are a number of people checking out Snow Leopard Server for the first time.  You might be enthralled with all of its features and niceties.  One of those niceties is Time Machine.  However, when you look at what Time Machine really backs up, you’ll discover that it doesn’t back up any elements of your server data.

That’s right.  Snow Leopard Server is not backing up your mail data, Open Directory data or anything else of that nature.  It backs up enough to restore the server to an operational state, but you’ll find yourself rebuilding the Open Directory and mail data from scratch.  The Apple documentation states that a number of other command-line utilities are available for backing up server data: powerful little guys like “ditto” and “rsync.”  For the uninitiated, this means you’re now plunging into the world of search to find a backup script that will save your hindquarters.

When I was researching this, I came across SuperDuper! and Carbon Copy Cloner most often.  While these serve the purpose of making a bootable clone of your server drive, the developers recommend that you do not use their products against Snow Leopard server while the services are running.  So, guess what?  Now you’re back to looking into a script to stop all of your services, back up the data, then start them back up again.

One side note here: it’s dreadfully easy to stop services with a terminal command or bash script, but I’m not going to go over that here.  If you’re interested in this information, let me know and I’ll post it in another article.

After more searching, I came upon a little resource – a site where a fellow has created the backup script “mlbackup.”  The site is mainly in German, so it’s not entirely clear what to do with the program.  However, it seemed to fit the bill for what I wanted, so I decided to check it out.

mlbackup is provided as an installation package.  After downloading it, double-click on the pkg to install it.  Installation is pretty straightforward but implementation is a different story.

After installing mlbackup, it’s important to know what was placed on your system:

/usr/local/maclemon/* – the binaries for mlbackup and man pages
/etc/maclemon/backup/* – sample configuration file and globalexclusions

Start by copying the sample configuration file in /etc/maclemon/backup to a new file for modification.  In my case, I named the new configuration file after the volume that contains the data for my server.  My server’s name is “Ramona” and the server data is stored on a volume named “RA Server Data.”  Therefore, from /etc/maclemon/backup, I did:

sudo cp demo.mlbackupconf.sample ramona_serverdata.mlbackupconf

This creates a copy of demo.mlbackupconf.sample and names it “ramona_serverdata.mlbackupconf.”  Next, you’ll want to edit that new file and make the necessary changes for your server.

I use “vim” to edit, so next I type:

sudo vim ramona_serverdata.mlbackupconf

Using “vim” is an article in and of itself, so I’m certainly not going to cover its usage here.  If you’re an innocent newcomer, vim can quickly turn you into the innocent bystander in a violent gunfight.  Be careful.

Once you’re in vim, hit “i” for insert mode and make the necessary changes to the file.  Notably, you’ll want to change the items listed below.  Here are the relevant parts of my file with the changes I made.

# What is the name of this backup set?
MLbackupname="Ramona_ServerData"

# What file or directory to Backup. (No trailing Slash for directories)
# Default: $HOME
MLsourcePath="/Volumes/RA Server Data"


# Where shall the Backups be stored (local path on the destination machine)
# Default: /Volumes/Backup
MLdestPath="/Volumes/Ramona Backup/mlbackups"


# Where to send the eMail notification about the backup to?
# Default: $USER
MLadminEmail="<my email address>"

Hit “esc” to get out of insert mode, then type “:wq” and hit enter.  This saves the file and quits.

In case you haven’t deduced it from examining the file, I have two external drives hooked up to this Mac Mini server.  One is a FireWire 800 drive called “RA Server Data.”  The other is a larger USB drive called “Ramona Backup.”  My intention is for mlbackup to execute against RA Server Data and back it all up to Ramona Backup.  I intend for it to keep 5 sets of backups.  mlbackup is pretty nice in that it will automatically clean up the prior backup sets so it only keeps the number of sets you want.  If you’re running this every day, you’ll have 5 days of backups.
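
To give you a feel for how that rotation works, here’s a minimal sketch of the keep-the-newest-N idea in plain shell.  This is my illustration of the mechanism, not mlbackup’s actual code; mlbackup handles all of this for you.

```shell
#!/bin/sh
# Illustration only: keep the newest KEEP backup sets in a folder,
# deleting anything older. mlbackup does its own version of this.
KEEP=5

prune_old_sets() {
    dir="$1"
    # "ls -1t" lists entries newest-first, so everything from
    # line KEEP+1 onward is an old set we no longer want.
    ls -1t "$dir" | tail -n +"$((KEEP + 1))" | while read -r old; do
        rm -rf "${dir:?}/$old"
    done
}
```

Point it at the backup destination and only the five most recent sets survive.  (This naive version would trip over filenames containing newlines; yet another reason to keep backup set names simple.)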

One other note here.  There’s a bug in the mlbackup script: if the “MLbackupname” parameter contains a space, mlbackup won’t clean up the old backup sets automatically.  Found this one out the hard way.  If you intend to use a backup set name that contains spaces, replace the spaces with underscore characters to prevent this issue from biting you.
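
The workaround is trivial.  Something like this (the name here is just an example) turns spaces into underscores before the value ever lands in MLbackupname:

```shell
#!/bin/sh
# Convert a human-friendly volume name into a space-free backup set name.
name="RA Server Data"                        # example name with spaces
safe_name=$(printf '%s' "$name" | tr ' ' '_')
echo "$safe_name"                            # -> RA_Server_Data
```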

The first half of your mlbackup configuration is complete.  You’ve told mlbackup what to do, but now you need to tell it when to do it.  The old way of setting up scheduled tasks in OS X was a cron job, just like a regular UNIX or Linux implementation.  However, Apple has replaced cron with launchd, so now you need to configure launchd.
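
For comparison, the cron-era version of this job would have been a single crontab line, something like the following (the mlbackup path here is an assumption; check where the installer put it on your system):

```shell
# Old-school crontab entry: run mlbackup at 02:00 every day.
# 0 2 * * * /usr/local/maclemon/bin/mlbackup /etc/maclemon/backup/ramona_serverdata.mlbackupconf
```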

Launchd is a complex beast, so I’ll summarize what I learned about it.  Launchd will fire up tasks on your behalf at any time you define, much like cron used to do.  It uses an XML file to define each job, and depending on the parameters in that XML file and where you put it, launchd will do different things.

Since it’s so complex, I’m only going to talk about what I did for this launchd task.  Your ideas and mileage may vary.  I’m interested in hearing what others do with this, so be sure to let me know what you put together.

In my case, I created a new file in /Library/LaunchDaemons.  I put it there because I want mlbackup to be executed as root (spare me the speeches, I’m trying to back up server data here) and I want it to have access to the system.

Here’s the contents of the file I created there in /Library/LaunchDaemons.  The name of the file is at.maclemon.mlbackup.radata.daily.plist.  (The schedule and the mlbackup path below are examples: set the Hour and Minute you want, and check where the installer put mlbackup on your system.)

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>at.maclemon.mlbackup.radata.daily</string>
    <!-- Back up the disk RA Server Data every day. -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/maclemon/bin/mlbackup</string>
        <string>/etc/maclemon/backup/ramona_serverdata.mlbackupconf</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>2</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
    <key>AbandonProcessGroup</key>
    <true/>
</dict>
</plist>

Again I’m not going to go into a whole lot of detail here about what I did, but I will point out a few items of interest.

Note the “ProgramArguments” directive.  There, you see that I’m calling mlbackup and giving it the full path of the config file I created earlier.  This is vitally important; otherwise mlbackup won’t do a thing.

The “StartCalendarInterval” element tells launchd when to start the task.

The “AbandonProcessGroup” item is required if you intend mlbackup to send you email at the completion of the job.  Without this element, mlbackup won’t send an email and may not clean up the backup sets in a way that you intend.

Finally, tell launchd that you created the file and you want the system to load it up and pay attention:

sudo launchctl load -w at.maclemon.mlbackup.radata.daily.plist

launchctl is the terminal command that forces launchd to load the file.  For some odd reason, it won’t load without the -w.  Why you have to do this is unclear in the Apple documentation for Snow Leopard.  In the past, -w controlled whether the job was enabled or disabled; in Snow Leopard, it seems, without -w, launchctl just looks at you funny.

That should do it.  You should receive an email from the root user at the end of the backup task if all goes well.  If you didn’t receive an email, be sure to check your backup volume and verify that something was written there by mlbackup.
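
If you want something a bit more rigorous than eyeballing the drive, here’s a small sketch of a check I might run by hand (the path is my setup; point it at your own MLdestPath):

```shell
#!/bin/sh
# Sanity check: did anything land in the backup folder in the last day?
# The default path below is my setup; substitute your own MLdestPath.
BACKUP_DIR="${1:-/Volumes/Ramona Backup/mlbackups}"

recent_backup_exists() {
    # Succeeds if the directory contains an entry modified
    # within the last 24 hours.
    [ -n "$(find "$1" -mindepth 1 -maxdepth 1 -mtime -1 2>/dev/null)" ]
}

if recent_backup_exists "$BACKUP_DIR"; then
    echo "OK: recent backup found in $BACKUP_DIR"
else
    echo "WARNING: nothing written to $BACKUP_DIR in the last 24 hours"
fi
```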

One other note that I forgot to include: mlbackup is mainly just a bash script, but it bundles a newer version of rsync.  mlbackup uses this version over the one Apple supplies with OS X because it has some optimizations that are vital for the backup to take place.

Hope this helps get your server backups running well.


Drobo Rebuild Time and Your Sanity

I am one of the aspiring new media yahoos who bought into the fever gripping folks everywhere: the Drobo (its name a play on its maker, Data Robotics).  Leo Laporte, Scott Bourne and all of those folks loudly proclaimed what a fantastic device the Drobo is.

I’m here to tell you it sucks.

Now, first a disclaimer: I’m moaning about the generation 1 Drobo.  I know that a 2nd-generation Drobo and a DroboPro have both been released, and I’m sure they are much better devices, but there are still some serious problems here that you, the prospective buyer, need to be aware of.  If that’s you, maybe you can skip down to the bullet list below.

I purchased a Drobo for use in our studio, which expects to accumulate terabytes of data, and loaded it up with four 1TB Seagate 7200.11 drives.  The Drobo saw them, fired itself up and ran beautifully… or so I thought.

I noticed that the Drobo’s throughput was pretty slow.  Oddly slow.  No, ridiculously slow.  It was so abysmally slow that it was clear from minute one that this thing was only going to be useful for long-term storage of archived data or as a Time Machine disk.  Okie, so it’s so slow that even as a Time Machine disk it’s problematic, but I suffered through it.  I was “feelin’ droovy” like everyone said I would.

Then I lost a hard drive.

At first, the Drobo didn’t indicate there was a drive failure.  It suddenly acted like it was out of drive space – at least that’s what it tried to indicate by flashing all four of the drive bays red and green.  Uhm, okie.  Either I lost all four drives or you’re trying to tell me something.

After a reboot of the Drobo, it told me that one of the drives was just bad.  It flagged the drive with a red light and the software, Drobo Dashboard, informed me as such.  (NOTE: If you use Snow Leopard, you can forget about using Drobo Dashboard under the 64-bit kernel, as they still haven’t updated it yet.  Snow Leopard has been available to developers for almost a year now, guys.)  If you want to do anything in regards to checking error messages or updating firmware, you have to use Drobo Dashboard, kids.  That means you won’t be using the 64-bit Snow Leopard kernel.  Oh well, Drobo’s not the only folks guilty of this oversight.

Anyway, after going back to the 32-bit kernel and checking to see what was going on, I found the Drobo was upset about a drive failure.  I ordered a replacement drive from Seagate, brought it into the office and swapped out the dead one.  Drobo then warned me that it couldn’t protect me from hard drive failures while it was rebuilding the array.

…and it was going to take 1,447 hours to rebuild.

What?  Yes, that’s right.  Better yet, the time to rebuild changed repeatedly.  Sometimes it went to 887 hours, then 2,088 hours, then 48 hours, then back to 1,447 hours.  Drobo couldn’t make up his mind.  The drives were spinning relentlessly.  It was beating on the drives so long and so hard that I became concerned after about a week that another drive might fail in the process.  Fortunately, I could access the data on the drives and copy it off just in case, so I did so.

It’s been two weeks and the array is still rebuilding.  I’m also still copying my data off the drive; that copy has been going on for about three days now.  I’m sure the data copy isn’t helping the throughput at all, but having my array in a compromised state for two weeks without an accurate estimate of time to completion is completely unacceptable.

I started to research what was going on here and noticed that other people around the net were experiencing incredibly bad performance as well, especially when it comes to array rebuild times.  The support KB at Drobo says “it can take some time” (not a direct quote), but two weeks is outrageous.  Oh yeah, and it’s still not done, by the way.

My copy still has about 11 hours left, so hopefully the data will be copied off the Drobo before it dies completely.

I started thinking about the ramifications of this problem and realized that the Drobo wasn’t entirely a good idea.  I thought I’d bullet those out for you here.

  • Drobo uses a proprietary technology that is NOT based on RAID.  There are marketing materials about this proprietary technology, but that’s about all you’re going to get; it’s the company’s secret sauce.  It’s something akin to ZFS, but all in all, you’re just going to have to trust your data to them.
  • A key selling point to the Drobo is that this secret sauce allows you to use drives that are varying in capacity and it will squeeze every byte out of it that it can.  That’s nice, but the performance of the unit is so poor that I no longer give a shit.
  • Drobo is very, very proud of their proprietary technology.  So much so that they’re willing to charge you a premium for the privilege of using it, even if it is slow.
  • Drobo performs adequately for almost nothing (other than long-term, get-it-out-of-my-sight storage) until it has an issue.
  • If it has an issue, you will not know about it under the 64-bit Snow Leopard kernel unless you’re within eyeshot of the unit.  The Drobo Dashboard can send you alerts, but if you’re using the 64-bit kernel, it’s not going to send you jack.  It’ll blink at you from across the table… that’s about it.  Hopefully this changes VERY soon.
  • The company charges a mandatory fee for firmware updates and support.  If you don’t pay them a yearly fee, you will not get any support beyond the knowledge base.  You also will not get software and firmware upgrades.  I realize that charging for support is not an entirely new thing and many companies do it, but paying a fee for firmware updates is insane.  (Garmin, I’m looking at you and those maps you want me to buy for the Nuvi, too).
  • The last bullet sucks so bad that you should stop considering a Drobo purchase.
  • Drobo is proprietary, expensive and forces a regular maintenance fee upon you.  You are handing your data over to an unknown, unproven algorithm.  Don’t do that.  I shouldn’t have.  I need to remember to be skeptical of things like this, stop buying into the hype and stick with a solution that has been proven (also known as RAID).

I ordered a Promise Smartstor DS4600 to replace the Drobo.  It’ll do good ol’ RAID5.  Once the copy finishes, I’ll be pulling the drives out of the Drobo and putting them into the DS4600.  I’ll put the Drobo someplace else… maybe hang it off the server for large archival storage one day when I feed it some more drives.  Until then, forget it.

ONE OTHER NOTE: No, I did not call Drobo Support.  Perhaps I should have, I don’t know.  I’m not sure what I was expecting them to do aside from saying, “Yeah, that will take a while.  Sorry buddy!”  So I didn’t.  Mea Culpa if you want to hold me to that, but I’m sure someone out there understands why I didn’t.

Update: Just in case any of you think I’m off my rocker (which I am, but that’s beside the point), here’s a screen capture of my Drobo Dashboard.  Keep in mind we’re starting on WEEK THREE of the rebuild.  Check out the estimated time to completion after two full weeks…

Estimated time to completion dialog from the Drobo Dashboard.
