Brad’s first blog contest – backup horror stories

And we’re done. I’ve received some excellent entries in the Backup Horror Story contest. Give ’em a read below, and feel free to add your comments too. And of course, feel free to share your horror story too, but sadly, this contest is closed.

Welcome to the first of (hopefully) many contests I’d like to run on my blog. I’ve been doing this tech-writing thing for a while now, but I’ve always been looking for ways to more closely engage with you. I think I may have found it with this style of contest — I get you to write for me. It’s ok, I have prizes 🙂

I’ll get to the details in a moment, but first let me frame the scenario: Backup Horror Stories.

We’ve all been there. We’ve all, at one time or another, lost some important piece of data: your digital photo collection or music collection. Perhaps you’re a writer and all your ‘in progress’ manuscripts are now toast. You’ve lost data. No external copy or backup available. Poof! Done!!

That’s the scenario, now the details:

The Prizes and Sponsor
Clickfree – a cool Canadian company that specializes in no-brainer backup solutions – is sponsoring this contest and has given me a few Clickfree Transformer SEs to give away. Since Clickfree is all about simple backups, the theme of the contest kinda suggested itself 🙂

In the past, I’d reviewed Clickfree’s C2 Portable Backup drive – a solid unit. The Transformer SE we’re giving away in this contest uses similar technology, only you provide the USB drive. Here’s the official company line on the Transformer SE:

The Clickfree Transformer SE (Special Edition) turns any USB hard drive, iPod, or iPhone into a simple automatic backup solution for your computer. Just connect the Transformer SE to your computer, then connect the USB hard drive, or iPod/iPhone via USB into the Transformer SE. Backup will start automatically onto the available free space of the connected product, whether it is a 3rd party hard drive, or an iPod/iPhone.

I will be doing a full review of the Clickfree Transformer SE very soon, but don’t let that stop you from entering the contest.

To enter:
Take your worst / best backup horror story and write it up as a comment on this page – a data-loss disaster that was averted, or would have been prevented, if you’d had a trusty recent backup. That simple.

Important: If you’ve not commented here before, your comment may be held in moderation until I can authorize it. No worries, I do this daily.

The Rules:
I’m keeping this fun, so the rules are simple.

1) It’s a blog comment contest – tell me your story in a comment on this page using the form below. Anyone can enter, but only comments entered into the comment form below will be eligible.
2) After you post your comment, let me know through a private email (via this in-blog contact form). That tells me you’ve entered – and be sure to provide a valid email address for follow-up should your entry be selected. No, I won’t sell or spam you… the email address will be used ONLY for this contest, and all email entries will be deleted after the contest.
3) The top 3 comments will be selected for prizes. I’m not sure what criteria I’ll use to judge yet. Maybe the funniest, most dramatic, most potential for loss-of-life, I don’t know. Maybe the most support from other commenters (get your friends to help out!). But there will be three, and I’ll write about them in a follow-up post.
4) Random draw for a few more prizes. It’ll be random.
5) Winners notified within a week, delivery within a month via Canada Post.
6) The contest starts now (March 1, 2010) and runs until midnight, March 31, 2010. The timestamp of the blog comment and the corresponding email to me will determine entry date and time.

Bonus Prize: Everyone Wins
Ok, now this is also very cool. For the month of March, the fine folks at Clickfree have also authorized a discount code for orders on their site. Place any order, use this code (Grier10) and they’ll knock 15% off the price of your order.


17 Replies to “Brad’s first blog contest – backup horror stories”

  1. We do independent game development at work, and much of my team works remotely from one another; we have been both bitten and saved by our backup repositories, both physical and software.

    One story in particular involves a programmer who felt that, once he had his assigned tasks, he did not need to communicate with the team. He would work on his tasks and then check his files in to SVN. Unfortunately, this particular time he was working on a bigger task that took him a week to complete, during a time when we were making massive changes to the game. When this programmer finished his task, he checked his changes into SVN as usual (since they worked fine on his end, on a week-old version of the game) and never notified anyone.

    Unfortunately for the rest of us, we were also working on massive changes to our game and pulling all-nighters to do it. We updated our local SVN repos and tried to work with the new changes that we were all making (plus, unknowingly, the changes this other guy made)… only the game ended up crashing. It worked fine before this latest update and no one was supposed to have made any changes that would cause this problem, and yet, here it was, the game was crashing. Frantically we looked at all the changes “we” had made for the problem (remember, we did not know this guy had checked anything in) and arguments rose over who was at fault for this issue (oddly, no one fingered the particular programmer in question since we didn’t know he had committed anything, plus it was 4am and no one was thinking straight).

    Being an artist, during one of the more heated programmer debates I took a closer look at the SVN logs and discovered that indeed this guy had stealthily checked in something between our changes. Luckily for us, when you use a version repository system (software or hardware) you can roll back your changes to a previously uploaded state that’s stored on the device (since all the data gets saved for each change that is made). I gave this a try locally, quickly discovered what the problem was, and relayed it to the rest of the team. Crisis averted thanks to our backup system! We ended up permanently rolling back the changes this guy had made on the servers and had a pretty healthy discussion with him the next day (it was 4am at the time and no one had the energy for a nasty email).

    That’s my example of how a backup system saved our butts, and I think a pretty valid story that shows why people should invest in a backup solution that is more than just an external HDD – one that can also track the changes you have made.
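
    For anyone curious what that kind of rollback looks like in practice, here’s a rough sketch – the revision numbers are made up, and it assumes the svn command-line client is installed – of reverse-merging a bad commit out of a working copy and committing the fix:

    ```python
    import subprocess

    # Hypothetical revision numbers -- find the real culprit with `svn log` first.
    LAST_GOOD_REV = 1510   # last revision the game was known to run at
    BAD_REV = 1511         # the stealth commit that broke everything

    # Reverse-merge the bad revision into the working copy (history is preserved),
    # then commit so everyone's next `svn update` picks up the fix.
    subprocess.run(["svn", "merge", "-r", f"{BAD_REV}:{LAST_GOOD_REV}", "."], check=True)
    subprocess.run(
        ["svn", "commit", "-m", f"Roll back r{BAD_REV}: broke the game, reverting to r{LAST_GOOD_REV}"],
        check=True,
    )
    ```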

  2. I got a frantic call from a grad student once, saying that someone had broken in and stolen his computer with all his thesis data and his 3/4 finished draft thesis — two years of data collection research and writing gone!
    I said, “Didn’t I tell you to back everything up every week at least?”
    And he said, “I did, but they stole my external drive and my backup CDs as well! They cleaned me out! What am I going to do? I can’t face starting over! I’ll have to drop out!”
    “Didn’t I tell you to send a copy of your thesis to your mother every time you finished a chapter?”
    “You did!”
    “And did you?”
    “I did! I forgot all about that! I did that just Wednesday! It’s at my Mom’s!”
    And all was well.
    Okay, so the moral is, the principle of offsite backup might be a little too abstract for some users, but everyone gets sending a copy to Mom.

  3. My own story is a little different. I backed up everything religiously, but about six months away from completing my thesis, it finally occurred to me that I had a stack of floppies (and my brother and mother had stacks of floppies at their houses) that could only be read by an Osborne computer, and that I was the only person I knew who still used an Osborne computer… so I went out and bought a backup Osborne. And sure enough, my computer died a month or so later. But I thought, no problem, I have my backup, and it survived long enough for me to upload everything to the UofA’s MTS system. There! Backed up on a mainframe, what could be more secure than that?

    So it didn’t bother me when I took my Osborne in for repair and the guy at the counter called everyone in from the backroom to look at an actual Osborne, and they all laughed uproariously at the thought of trying to find parts or having a clue how to fix one… I was covered. So I bought an IBM and worked for a couple of years on it, knowing that my data was backed up at the UofA somewhere and not worrying about it, until years later I went to do a follow-up study, asked for my data, and discovered that not only is the University of Brasilia the last place on earth still using MTS, but the physical computer I’d used all those years had been torn down and replaced with a couple of Mac servers…

    Fortunately, I still have everything in hard copy because I am, you know, anal, but that’s still a lot of retyping if I ever get around to using that data again….

  4. @Runte: yeah, that’s one flaw with backup scenarios. You have to keep the backup on *current* media.

    A few years ago I backed up my photo collection on floppies. Then it grew too big for that, so I used CDs / DVDs — and now I don’t have a floppy drive in the house.

    Some day I expect that I’ll not use CD / DVDs either, so those backups will be worthless.

    Currently, I use a removable 500GB hard drive as my backup device. I have a few of them and swap them between a secure offsite storage place (not Mom’s 🙂 ), work, and home.

    Online backup is also an option these days, though some folk get queasy at storing their data in the Cloud.

  5. I have many backup and data loss stories. Here’s one, to enter the contest, Brad.

    At a previous employer we backed everything up on tape.

    So, one day I find my web logs have become corrupted.

    I check the server folder and find old logs have been purged.

    I put in a help ticket to request the tape. Unfortunately, it’ll take three weeks for the tapes to be retrieved. Ah well, that’s ok. My reports are not a 911 and I’m just happy my data is safe.

    Six (6) weeks later the tapes arrive and reveal my web logs cannot be restored from the backups. Data gone.

    My backup lesson is to always test restore a backup after it’s first been created. Also do test restores periodically, because you can reach a fail point even after an initial test.
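
    For anyone who wants to make that a habit, a test restore doesn’t have to be fancy. Here’s a rough sketch – the paths are placeholders, and it assumes the backup lands in a plain tar archive with the backed-up folder at its top level – that restores into a scratch directory and checks the restored files against the originals:

    ```python
    import filecmp
    import tarfile
    import tempfile
    from pathlib import Path

    BACKUP_ARCHIVE = Path("backups/weblogs-2010-03.tar.gz")  # placeholder path
    ORIGINAL_DIR = Path("/var/log/httpd")                    # placeholder path

    # Restore the backup into a scratch directory -- never over the live data.
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(BACKUP_ARCHIVE) as archive:
            archive.extractall(scratch)

        # Compare the restored folder against the original (top level only --
        # good enough for a quick sanity check that the backup is usable).
        restored = Path(scratch) / ORIGINAL_DIR.name
        result = filecmp.dircmp(ORIGINAL_DIR, restored)
        problems = result.diff_files + result.left_only + result.right_only
        if problems:
            print(f"Test restore FAILED, differences found: {problems}")
        else:
            print("Test restore OK -- the backup matches the originals.")
    ```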

  6. Hey Johnn,
    Painful, yet good lessons to learn. Lucky for me I’ve never had a bad backup situation, but yes, I do test them prior to committing to a system.

  7. Another lesson I learned on my personal machine is to always use custom setup during program installation. This often lets you choose your data directory.

    Anyone who doesn’t mirror their whole drive as part of their backup process will want to ensure their data sits in with the other folders they have flagged for scheduled backups.
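
    A quick way to sanity-check that: a tiny sketch (the folder names below are just examples) that flags any data directory that doesn’t live under one of the roots your scheduled backup covers:

    ```python
    from pathlib import Path

    # Example folders only -- substitute the roots your backup job actually covers
    # and the data directories your programs were installed with.
    BACKED_UP_ROOTS = [Path.home() / "Documents", Path.home() / "Pictures"]
    DATA_DIRS = [
        Path.home() / "Documents" / "manuscripts",
        Path("C:/Program Files/SomeGame/saves"),
    ]

    for data_dir in DATA_DIRS:
        covered = any(root == data_dir or root in data_dir.parents for root in BACKED_UP_ROOTS)
        status = "covered" if covered else "NOT covered by the scheduled backup"
        print(f"{data_dir}: {status}")
    ```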

  8. Agreed! Custom setup is essential for many reasons. In my case I use one physical device to keep my OS on, and another for my program files, installation data, etc.

    Makes reinstalls a little easier too.

  9. I work for a large manufacturing corporation, and for years we’ve been using tape backup. So long that the backup server is running SunOS 5.8 and Backup Exec 3.4, backing up to a Quantum tape robot the size of two fridges. Apparently they can’t be upgraded currently because of the VAX machines still on the network that won’t support anything else. The backup process sort of fell on me when I started because, well… I was the new kid in the dept. Being outdated and all, I just followed the tattered old instructions that had been typed out by some other sucker who couldn’t remember the long list of steps to get everything done.

    Things seemed to be working fine and I never questioned the backup process, because I didn’t know enough about what I was actually doing to be able to question… things. Recently though, a number of things started to pique my curiosity about what I was actually doing. When finance kicked back the latest request for new tapes, I started to ask around. Why had we always thrown new tapes into the machine, pulled old tapes out and shipped them to Calgary to be stored by Iron Mountain? After all these years of doing this, we must’ve had HUNDREDS of tapes in the library. I had never thought about it, but there wasn’t any sort of rotation for the tapes. This was the start of the problems.

    Next, we realized that no one had ever attempted to do a restoration of the data. Upon further investigation we discovered that it wasn’t just a matter of people not having attempted it but that we couldn’t actually do a test restore on the old systems without affecting the production systems. Had we wasted hundreds of thousands of dollars in tapes in a vault that could be worthless?

    And then it happened. One morning one of the manufacturing managers called me frantically because he’d noticed that the piece-of-junk old monitor we had for the server was on… and displaying a message saying it couldn’t boot because of drive corruption. And of course, I was expected to support the system to get it operational again.

    What the crap was I supposed to do? I’d played with Linux a bunch, but I sure as hell didn’t want to test my knowledge on a system that old that had been backing up data for a million-dollar-a-day operation. Naturally, I took my sweet-ass time getting up to the server room, trying to think of what my options were.

    By the time I got to the machine, it had gathered a small audience, anxious to know whether or not the junkyard system would come back up. It’s always funny how people don’t think they need to spend money on a backup system until it’s critical. I mean, we weren’t in need of the backups themselves, but the realization that the system obviously wasn’t adequate started to sink in very quickly.

    My first step was clear: sit down and read the message again. Obviously I had to confirm that the “user” was reading it right. Wait, what’s this? It says “Hit enter to reboot the system”? No one had thought to do it yet.

    Closing my eyes, I took a deep breath as my index finger pressed the Enter key.

    I was pretty sure that no one in the room would ever trust my judgement again. I’d taken the noob helpdesk approach. I hadn’t even come with a notebook or CDs as tools to resolve the issue. Nor had I said anything “techy” about possible causes, resolutions or concerns.

    Slowly, I opened my eyes to the SunOS logo. A sigh of relief filled the air conditioned room. The system was coming up properly.

    It’s been a couple of months since the system had the problem, but it hasn’t mysteriously rebooted in that time. We’ve started to recall old tapes from the very beginning of the backup process.

    Now that we’re no longer ordering new tapes, the savings have added a hefty amount towards the new backup system they’ve apparently been planning to implement and just needed budget for. It isn’t much in the grand scheme of the project, but the lessons learned added invaluable worth to it all.

    Unfortunately, until the new system is in place I still have to go through the motions with the slow old robot…

  10. @foomanizer: Wow, amazing that an org that runs a $1m/day operation would leave backup to antiques like that.

    Very cool that they appear to have seen the light.

Comments are closed.