Lion Finally Installed (…and Here’s Why It Was Failing)

After a tremendous ordeal of trying to install OS X Lion on my January 2008 Mac Pro, I finally had a breakthrough. I discovered what appears to be a hardware incompatibility.

To properly tell this story we’ll have to go back in time.

In January of 2008, when this model of Mac Pro (MacPro3,1) was new, it was definitely the cat’s meow. I bought a true boss of a system, too: an 8-core 2.8GHz Mac Pro with 4GB of RAM. Later, I bought third-party RAM from Crucial to stuff it to the brim with 32GB.

I also visited Newegg.com to pick up three more hard drives, all 750GB Seagates. That was an easy decision. When I was a PC guy, Seagate or Maxtor were the only drives to buy.

I also picked up a Drobo. If you’ve read this blog for a while, you know what happened with the Drobo. To replace the Drobo I eventually purchased a Promise DS4600 and four Seagate ST31000340AS 1TB drives.

The Promise unit had issues. It was finicky and liked to drop drives out of the array for no discernible reason. Oddly enough, the drive that failed out of the array most often was whatever drive was in bay 4. It didn’t have to be the same drive. You could literally swap the drives in and out of bay 4 and eventually it would fail. It was really bizarre. I opened numerous cases with Promise.

Promise finally came back with the information that this particular model of Seagate drive was not certified to work with the DS4600. Okie, I can handle that. No problem. I reviewed the compatibility list provided by Promise and selected a set of Hitachi 2TB drives.

I then moved on to fight the DS4600 again to make it work over eSATA. Only this weekend did that finally get resolved. But that’s another story for another day. If you want to hear it, let me know and I’ll be happy to tell it.

Anyway, back to the Seagate drives. After all of that drama and switching around, I now had a set of Seagate ST31000340AS 1TB drives… four of them, to be exact. During this time I was also experiencing regular S.M.A.R.T. failures in the 750GB drives in the chassis of the Mac Pro. Those drives were being sent off and replaced on a fairly regular basis.

As these drives were replaced and cycled around, I decided that maybe I should move the 1TB drives into the Mac Pro and gain 250GB of extra space per drive. That’s kind of a no-brainer decision, right? I ended up with the original 750GB drive that shipped from Apple and three ST31000340AS drives in the other bays. I had the bright idea of creating a RAID-0 and installing Lion on it. I just knew it was going to scream.
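For the record, a stripe set like that can also be built from Terminal with diskutil. The disk identifiers below are hypothetical (check yours with `diskutil list` first), and the real command is destructive, so I’ve left it as a dry run:

```shell
# Hypothetical sketch: create a three-disk RAID-0 ("stripe") set named
# LionRAID, formatted Journaled HFS+. The leading echo keeps this a dry
# run; remove it to actually execute (this ERASES the listed disks).
echo sudo diskutil appleRAID create stripe LionRAID JHFS+ disk1 disk2 disk3
```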

…except that Lion wouldn’t install.

The install would always start off just fine. It would write files and then reboot. Then, somewhere in the next stage of the install, it would just die. An error message popped up claiming that there was a problem and Lion couldn’t be installed.

How can this be? I thought. There’s no way Apple would release an operating system with an incompatibility of this nature on a 2008 Mac Pro. This is insane.

I lost countless hours of sleep to install attempts. I would try to install. I would watch it fail. I would research a little more. I spent weeks trying to get through this. Nothing… and I mean NOTHING, would get the install through…

…until one time, it did.

Of course, the whole installation was immediately suspect in my mind. Why would it fail to install so many times and then, out of the blue, just work? It didn’t make sense. I had tried reseating hardware. I had tried pulling out the Blackmagic Intensity card. I had tried pulling out the eSATA cards. I tried putting the stock RAM back in place. I tried everything. Nothing had worked… until this time it did. Weird.

I ran Lion on the RAID-0 for a few days and happily went about installing Carbon Copy Cloner so I could set up clone tasks for the operating system disk.

Cue the music. It started to happen. Everything segfaulted. I could literally open the Console application and watch the crash reports roll in like a riot was going on in the Grid and reporters were on the air. No program was safe. Every one of them blew up. Sync your iPhone? Bam. iTunes died. Sync your iPod? SLAM. VTPCDecoder (or something) explodes. Yeah, this OS is suspect.

I decided to whack the RAID-0 and try the install again on a single ST31000340AS drive. Guess what? The install failed… again and again.

I booked a Genius Bar appointment. Obviously, my logic board was bad.

I’m not sure what made me think to try it, but I did. One of the 1TB drives had died at some point, and I received a replacement that was a completely different model. I also ran across information on the net that a lot of people were having problems with ST31000340AS drives on certain firmware versions, namely SD1A and SD15. I looked over the drives I had. All of them had one of those two firmware revisions.

Interesting.

I took one of the drives Seagate sent back that was a different model number. For the record, the drive was model ST31000528AS. I slapped it in the chassis and formatted it with HFS+. I fired up the Lion installer and hit go. I asked it to give me a full, fresh install of Lion on this disk. It worked the first time.

Not only did it work, it has been rock solid. Nothing is crashing like it was before on the other drives. Lion has become a joy to use the past two days. I stored away the Snow Leopard volume and kept it for emergencies.

I cancelled the Genius Bar appointment.

By now I think you can figure out what my conclusion is. There’s something wrong with ST31000340AS Seagate drives. Don’t try to use them with Lion. Something about the kernel in Lion disagrees strongly with that model of drive. If you read around on the net, you will find many, many horror stories about those drives.
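If you want to check your own drives before finding out the hard way, the firmware revision shows up in System Profiler; a quick Terminal sketch (assuming you’re on a Mac, where `system_profiler` exists) looks like this:

```shell
# Print each SATA drive's model and firmware revision. Guarded so it
# degrades gracefully on systems without system_profiler.
if command -v system_profiler >/dev/null 2>&1; then
  system_profiler SPSerialATADataType | grep -E 'Model:|Revision:' || true
else
  echo "system_profiler not available (not running macOS)"
fi
```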

Beware.

Exchange ActiveSync and Your Mobile Devices

It’s brutally important that you understand this article if you support Exchange 2007 or 2010.

Read it. Now.

http://support.microsoft.com/kb/2563324

Saving iPad Documents to Dropbox

This is a crosspost from The Cat Convention.

If you’re not familiar with Dropbox by now, you should be. Dropbox is what MobileMe‘s iDisk aspires to be one day. For now, it isn’t.

For the uninitiated, Dropbox is a fantastic cross-platform bit of code that synchronizes files across all of your computers. It also provides a look into the folders via a web browser if you should need it. They also offer an iPad app that allows you to browse and download files to local applications such as Pages.

Alas, Pages on the iPad doesn’t speak Dropbox. It will allow you to edit the documents and export them to:

  • An email
  • iWork.com
  • iDisk
  • A WebDAV server

Dropbox is missing from that list. You could save your files back to your iDisk, but then you’d need to go to a regular machine and copy that file from the iDisk to your Dropbox folder. That’s pretty obtuse.

While we wait for Apple to purchase Dropbox and implement it as an iDisk replacement, we can use the magic of Apple Mail and Applescript to create a nifty workaround. Today I spent some time on a script that will do the following:

  • Take the contents of an email message with a particular subject line
  • Extract the attachment
  • Save the attachment in a Dropbox folder depending on the keyword you use in the subject line of the message

Since Dropbox runs all the time on your Mac, it will notice the file change event and automatically sync the file to all of your computers linked to that Dropbox account.

Making an AppleScript that saves an attachment to your file system is quite easy. Linking a Mail rule to that AppleScript is also quite easy. Therefore, the implementation of this is easy. What makes this script a little different is that you can specify keywords in the subject line, and it will decide where to put the file inside your Dropbox folder based on the keyword. Editing those keywords is completely up to you.

To implement, download the “Save Attachment to Dropbox.scpt” file below. Open /Applications/Utilities/AppleScript Editor.app and modify the script’s keywords for the subject lines you plan to use. Save the .scpt file to your favorite location for AppleScripts. (For Mail scripts, I use “~/Library/Scripts/Mail”.)
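If you prefer Terminal, creating that folder and dropping the script in place is a two-liner (this assumes the downloaded .scpt file sits in your current directory; the folder path is just my preference):

```shell
# Create Mail's scripts folder (if missing) and copy the script into it.
mkdir -p "$HOME/Library/Scripts/Mail"
if [ -f "Save Attachment to Dropbox.scpt" ]; then
  cp "Save Attachment to Dropbox.scpt" "$HOME/Library/Scripts/Mail/"
fi
```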

Next, create a rule in your Apple Mail using criteria to judge when to fire off the rule. In my case, I told it to look for messages that meet all of these criteria:

  1. Messages coming from a particular email address
  2. Containing a subject line that starts with “-savedb”

When the rule fires, the script looks at the subject line of your email message. The subject line should start with “-savedb” and contain one of the keywords you defined. You edited the script to define those, right? Note that the rule itself doesn’t define the keywords; its criterion only checks that the subject “starts with” the string “-savedb”. The script decides where to file the attachment based on the keywords you coded.

I also recommend adding an action to move the processed messages over to a folder. In my case, I created a folder called “Processed to Dropbox” and told the rule to move the message there.

An important note: the script will overwrite any existing file that has the same name as the attachment. I felt that this was a safe thing to do, since Dropbox automatically backs up 30 copies of the file on the site and you can retrieve any version you like. Deleted versions of the files are tossed in the Trash; they are not deleted completely until you empty the Trash. If you still do not like this behavior, feel free to modify the script to remove that action.

Now all you have to do is send yourself an email from the proper address with the proper keyword from your favorite app on the iPad and voila, it’s instantly synced to all of your computers and backed up.

Another way to use this is via “DropDAV” at http://dropdav.com. I was close to using that solution until I read more about it. I decided I wasn’t entirely comfortable with giving another third party my Dropbox username and password, so I developed this method instead.

If you want to encourage the developers of Dropbox to add WebDAV support, be sure to give them a +1 vote here.

I hope you enjoy this script and it helps band-aid the interruption in workflow until Apple purchases Dropbox. 🙂 If you have any questions, don’t hesitate to ask in the comments below.

Click here to download “Save Attachment to Dropbox.scpt”.

Attempted Virus Attack on Safari

Tonight I was browsing through my normal websites with Safari on my Mac when suddenly, this window took over my entire browsing experience (click to go full screen on it).

I either got this from macdailynews.com, macnn.com or msnbc.com. I’m not sure which. I did a force-quit on Safari and moved on with my life, but still… beware.

WWDC Sold Out in Six Hours!?

Guess who didn’t get a ticket? There’s absolutely no way I will ever be able to purchase a ticket through my corporation if the purchase window is down to hours. No way. I would have been lucky to pull off a purchase within 48 hours.

Apple needs to expand this conference and offer paid developer accounts a right of first refusal.

Drobo Still Takes Forever to Rebuild?

I’m guessing from the amount of hits on the Drobo article from 2009 that people are still having problems with Drobos rebuilding the array in a decent amount of time.

Ever since I got a DS4600 using standard RAID-5 I’ve been quite happy. Rebuild times on a 6TB volume are about 2.5 hours. Note: the volume is only about 1/3rd full, but it’s still way more data than what was on the Drobo in 2009.

Since that incident, I think hard before trusting anything that replaces a standard with a closed, proprietary implementation.

Just sayin’.

If you have one of the newer Drobo units and still have problems with the array rebuilding in an acceptable amount of time, let me know in the comments. I’d love to hear about it.

Could a Bug be Deliberately Coded into an Open Source Project for Financial Gain?

For some bizarre reason, the thought at the top of my head last night at bedtime was… “I wonder if sometimes… open source developers deliberately code bugs or withhold fixes for financial gain?”

If you don’t follow what I mean, here’s where I was: oftentimes, large corporations or benefactors will offer a code-fix bounty or development funding for an open source project they have come to rely upon. What if an open source developer were to deliberately code a bug into a project, or withhold a fix, in order to extract some financial support this way?

I brought it up in #morphix to Gandalfar, one of my trusted open source advisors.  We debated it shortly and he brought up several good points.  While this may happen, the scheme is likely to fall apart quickly.  The community is the resolver of situations like this.  If the community finds a bug and offers a fix for the problem, then the developer will find themselves in a political combat situation.  They would likely try to stifle the fix with some ridiculous excuses and/or start to censor discussion of the subject over mailing lists or on forums.  Speculation could be raised about the issue and ultimately, people could start to fork the project elsewhere, unless the license of the project disallows that.  In the long run, the community would resolve the situation by simply offering a new solution.

So while it could theoretically be achieved for short-term gain, in the long run the community makes the approach unsustainable.

Why do I bring this up?  Well, I think we all know that closed source entities often engage in this practice.  I could point out several examples where I have absolute knowledge of this happening, but I don’t think I have to.  I’m not completely absolving open source from this either – look at what “official distributions” do in some situations… Red Hat Enterprise Linux or Novell (SUSE), for example.  But in those situations, if you didn’t want to pay to upgrade the operating system and still needed to resolve your situation, we all know that with the right application of effort and skill you could overcome it.

All in all, this whole thought process ends up with a positive note about open source.  If it’s broken, you can fix it yourself or work with others to make it happen.  The community – that incredibly large, global groupthink – keeps it all honest.

Or, you can put all your money and eggs into a closed source basket and find out you’re getting screwed when it’s too late.

It’s all about choice, right?

Backing up Snow Leopard Server with mlbackups

Apple‘s Snow Leopard Server product is one lovely implementation of UNIX.  I’ve thoroughly enjoyed using it for the power and simplicity that it offers.  I’ve loved using Apple’s operating systems thanks to the combination of UNIX power and elegant design.  Snow Leopard server is no exception to this rule.

The barrier to entry with Snow Leopard server was lowered when Apple reduced the price of the product to $499 USD and offered an unlimited client version only.  It was even more palatable when the Mac Mini server version was introduced at $999.  Previously, you could build your own Mac Mini server for about $1300 USD, but this new model allows small developers and workshops to get into the product at a very low price point.

With that in mind, I anticipate that there are a number of people checking out Snow Leopard Server for the first time.  You might be enthralled with all of its features and niceties.  One of those niceties is Time Machine.  However, when you look at what Time Machine really backs up, you’ll discover that it doesn’t back up any elements of your server data.

That’s right.  Snow Leopard server is not backing up your mail data, Open Directory data or anything else of that nature.  It’s backing up enough to restore the server to an operational state, but you’ll find yourself rebuilding the Open Directory and mail data from scratch.  The Apple documentation states that they offer a number of other command-line utilities for backing up server data.  These utilities are a number of powerful little guys like “ditto” and “rsync.”  For the uninitiated, this means you’re now plunging into the world of search to discover a backup script that will save your hindquarters.

When I was researching this, I came across SuperDuper! and Carbon Copy Cloner most often.  While these serve the purpose of making a bootable clone of your server drive, the developers recommend that you do not use their products against Snow Leopard server while the services are running.  So, guess what?  Now you’re back to looking into a script to stop all of your services, back up the data, then start them back up again.

One side note here: it’s dreadfully easy to stop services with a terminal command or bash script, but I’m not going to go over that here.  If interested in this information, let me know and I’ll post it in another article.
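As a quick teaser, the general shape is something like the sketch below. The service names are from memory and are assumptions on my part; verify yours with `serveradmin list` before relying on this:

```shell
# Sketch: quiesce services, run the backup, bring services back up.
# Guarded so it only attempts this where serveradmin actually exists.
if command -v serveradmin >/dev/null 2>&1; then
  sudo serveradmin stop mail
  sudo serveradmin stop dirserv   # dirserv = Open Directory (assumption)
  # ... run your backup or clone here ...
  sudo serveradmin start dirserv
  sudo serveradmin start mail
else
  echo "serveradmin not found; is this OS X Server?"
fi
```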

After more searching, I came upon a little resource at http://maclemon.at – a site where a fellow has created the backup script “mlbackup.”  The site is mainly in German, so it’s not entirely clear what to do with the program.  However, it seemed to fit the bill for what I wanted so I decided to check it out.

mlbackup is provided as an installation package.  After downloading it, double-click on the pkg to install it.  Installation is pretty straightforward but implementation is a different story.

After installing mlbackup, it’s important to know what was placed on your system:

/usr/local/maclemon/* – the mlbackup binaries and man pages
/etc/maclemon/backup/* – the sample configuration file and global exclusions file

Start by copying the sample configuration file (in /etc/maclemon/backup) to a new file for modification.  In my case, I created a backup configuration file named after the volume that contains my server’s data.  My server’s name is “Ramona” and the server data is stored on a volume named “RA Server Data.”  Therefore, from /etc/maclemon/backup, I did:

sudo cp demo.mlbackupconf.sample ramona_serverdata.mlbackupconf

This creates a copy of demo.mlbackupconf.sample and names it “ramona_serverdata.mlbackupconf.”  Next, you’ll want to edit that new file and make the necessary changes for your server.

I use “vim” to edit, so next I type:

sudo vim ramona_serverdata.mlbackupconf

Using “vim” is an article in and of itself, so I’m certainly not going to cover its usage here.  If you’re a newcomer to vim, it can quickly turn you into an innocent bystander in a violent gunfight.  Be careful.

Once you’re in vim, hit “i” for insert mode and make the necessary changes to the file.  Notably, you’ll want to change some of the items listed below.  I’m pasting in the contents of my file and the changes that I made.

# What is the name of this backup set?
MLbackupName="Ramona_Server_Data"
...

# What file or directory to Backup. (No trailing Slash for directories)
# Default: $HOME
MLsourcePath="/Volumes/RA Server Data"

...

# Where shall the Backups be stored (local path on the destination machine)
# Default: /Volumes/Backup
MLdestPath="/Volumes/Ramona Backup/mlbackups"

...

# Where to send the eMail notification about the backup to?
# Default: $USER
MLadminEmail="<my email address>"

Hit “esc” to get out of insert mode, then type “:wq” and hit enter.  This will save the file and quit vim.

In case you haven’t deduced it from examining the file, I have two external drives hooked up to this Mac Mini server.  One is a FireWire 800 drive called “RA Server Data.”  The other is a larger USB drive called “Ramona Backup.”  My intention is for mlbackup to execute against RA Server Data and back it all up to Ramona Backup.  I intend for it to keep 5 sets of backups.  mlbackup is pretty nice in that it will automatically clean up prior backup sets so it only keeps the number of sets you want.  If you’re running this every day, you’ll have 5 days of backups.

One other note here.  There’s a bug in the mlbackup script: if the “MLbackupName” parameter contains a space, mlbackup won’t clean up the old backup sets automatically.  I found this one out the hard way.  If you intend to have a backup set name that contains spaces, replace the spaces with underscores to prevent this issue from biting you.

The first half of your mlbackup configuration is complete.  You’ve told mlbackup what to do, but now you need to tell it when to do it.  The old way of setting up scheduled tasks in OS X was to use a cron job, just like a regular UNIX or Linux implementation.  However, Apple has replaced cron with launchd.  Now you need to configure launchd.

Launchd is a complex beast.  I’ll summarize what I learned about it.  Launchd will fire up any tasks on your behalf at any time you define, much like cron used to do.  It uses an XML file to define the job and depending on the parameters in the XML file and where you put it, launchd will do different things.

Since it’s so complex, I’m only going to talk about what I did for this launchd task.  Your ideas and mileage may vary.  I’m interested in hearing what others do with this, so be sure to let me know what you put together.

In my case, I created a new file in /Library/LaunchDaemons.  I put it there because I want mlbackup to be executed as root (spare me the speeches, I’m trying to back up server data here) and I want it to have access to the system.

Here’s the contents of the file I created there in /Library/LaunchDaemons.  The name of the file is at.maclemon.mlbackup.radata.daily.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>at.maclemon.mlbackup.radata.daily</string>
  <key>ServiceDescription</key>
  <string>Back up the disk RA Server Data every day.</string>
  <key>LowPriorityIO</key>
  <true/>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/mlbackup</string>
    <string>/private/etc/maclemon/backup/ramona_serverdata.mlbackupconf</string>
  </array>
  <key>Debug</key>
  <true/>
  <key>StartCalendarInterval</key>
  <array>
    <dict>
      <key>Hour</key>
      <integer>4</integer>
      <key>Minute</key>
      <integer>0</integer>
    </dict>
  </array>
  <key>AbandonProcessGroup</key>
  <true/>
</dict>
</plist>

Again I’m not going to go into a whole lot of detail here about what I did, but I will point out a few items of interest.

Note the “ProgramArguments” directive.  There, you see that I’m calling mlbackup and giving it the full path of the config file I created earlier.  This is vitally important, otherwise mlbackup won’t do a thing.

The “StartCalendarInterval” element tells launchd when to start the task.

The “AbandonProcessGroup” item is required if you intend mlbackup to send you email at the completion of the job.  Without this element, mlbackup won’t send an email and may not clean up the backup sets in a way that you intend.

Finally, tell launchd that you created the file and you want the system to load it up and pay attention:

sudo launchctl load -w /Library/LaunchDaemons/at.maclemon.mlbackup.radata.daily.plist

Launchctl is the terminal command to force launchd to load up the file.  For some odd reason, without the -w, it won’t load.  Why you have to do this is unclear in the Apple documentation for Snow Leopard.  In the past, -w controlled whether the job was enabled or disabled.  In Snow Leopard it seems, without using -w, launchctl just looks at you funny.

That should do it.  You should receive an email from the root user at the end of the backup task if all goes well.  If you didn’t receive an email, be sure to check your backup volume and verify that something was written there by mlbackup.
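Rather than waiting for 4:00 AM to roll around, you can also kick the job off by hand to verify the wiring. The label passed to launchctl must match the Label key in the plist:

```shell
# Start the mlbackup launchd job immediately for testing (macOS only),
# then peek at the log -- Debug is set to true in the plist.
if command -v launchctl >/dev/null 2>&1; then
  sudo launchctl start at.maclemon.mlbackup.radata.daily
  tail -n 20 /var/log/system.log
else
  echo "launchctl not available (not running macOS)"
fi
```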

One other note that I forgot to include: mlbackup is mainly just a bash script, but it bundles a newer version of rsync.  mlbackup uses this version over the one Apple supplies with OS X because it has some vital optimizations needed for the backup to take place.

That should do it.  Hope this helps get your server backups running well.

A brand new NO CARRIER

For those of you who follow my adventures here, but not necessarily my adventures over there, you should be aware that we’ve posted NO CARRIER Episode #11.  This episode is very special to my heart because it’s the first show we did in our new studio (Whitey is still over Skype though).  I think the audio quality is MUCH better.  Of course, we’ll be tweaking as things move on, but the new studio and the new processes we’re using to lay down the audio sound damn fine if I do say so myself.

Check it out and let us know what you think!

Industry buzzword that needs to die: the “experience”


One of the industry buzzwords that needs to go to the grave is the user “experience.”

Don’t quote me here, but I recall this buzzword being developed by Microsoft as part of the marketing campaign behind Windows XP.  XP was supposed to be “experience” or “expert” or “Xtra stuPid marketing,” I’m not sure.  Don’t get me wrong, I’m not an XP hater.  But I’m definitely a hater of the term “experience.”

The user interface is just that, an interface.  The term “experience” has its roots in the passive voice.  Somewhere in there you’ll find that the user “experience” for computing generally sucks.  It’s just a nicer way of saying “our user interface sucks, but it provides an experience.”

Whatever.  Drop it.  Look, other computing companies are just as guilty on this one (I’m looking at you RIM and Apple), but the fact is – everything great on the Internet is ruined by salespeople and marketers, period.

This is just one example of many.