The Top 5 Business Continuity Excuses I Hear Every Day

Reader beware: this is going to get ugly

Before we start: if you’re a technical person reading this, please forward it to your boss, your boss’ boss and your boss’ boss’ boss. Forward it all the way up the org chart. Print it out and tack it to the bulletin board in the cafeteria. Make sure no one in I.T. management or at the C-level can pretend they didn’t know.


In the past few months, two White Star Software customers experienced one-in-a-million interruptions in availability of their OpenEdge application. I’m not talking about FIRE and BRIMSTONE DISASTERS. It wasn’t Irma or Harvey or any other natural disaster. I’m talking about events like a construction crew 3 kilometres away cutting through a fibre link, or a random operating system bug corrupting files left, right and centre. My customers felt the wrath of events that were totally random and completely unavoidable. One customer was prepared and the other was not. Even for the one that was completely prepared, the costs to the business were significant.

This is where the rant part comes in: Outside of finance and medical, I’ll bet a bottle of Laphroaig 18 that 90% of you are not prepared for a stupid business continuity event that costs pennies to insure against.


The Excuses

If you watch trash TV, you know that cringeworthy feeling you get when bachelor #3 gets busted cheating on bachelorette #7. He sits in front of the camera crying, saying he didn’t know, or it wasn’t like that, he really cares for #7…what’s-her-name, Jill, no Jane. Guess what? You are bachelor #3, this article is reality TV, and below are the top 5 BCP excuses that you better not try to use on me.


Excuse #1: It’s too expensive…we are but a poor one gajillion dollar revenue company

You know what’s expensive? Going out of business is expensive. DR/BCP costs climb exponentially as you push your recovery targets toward zero, but you can get pretty far down the RPO/RTO curve before costs get out of hand. If you don’t know what RPO (Recovery Point Objective: how much data you can afford to lose) and RTO (Recovery Time Objective: how long you can afford to be down) are, you better learn fast. The future of your business depends on it.


Excuse #2: Our business can survive losing 1 day of data

Hahahahahahahahahaha!!! Wait…hold on…hahahahahahahahahahaha!!! I agree, it’s completely acceptable for you to find out what non-existent orders you didn’t ship when the customer calls to ask where his shipment is. This isn’t 1984 and your orders are not coming in by fax. 24 hours of data loss!?! You wouldn’t even know what data you lost as it’s coming in and going out a thousand different ways.


Excuse #3: Our business can survive with the OpenEdge application unavailable for 24 hours

Like “Bridge Over Troubled Water”, this one is an oldie but a goodie. Some of you can still manufacture, but you probably can’t pick, ship or invoice without QAD. That’s ok, your customers won’t mind if you ship tomorrow. Then again, you probably lost a day or two of data, so you won’t be shipping their orders anyway.


Excuse #4: We have NEVER had a DR/BCP event in 21 years, 4 months and 1112 days

Somebody buy them a plaque. Like mutual funds, past performance is not an indication of future returns. In my 24 years of experience supporting OpenEdge environments, the problem is rarely OpenEdge-related. It’s a power supply, an HBA, an electrical panel, a squirrel. One of my favourite stories is about an HVAC unit on a roof. Have you heard it? It busted and started gushing water. Guess what was right under the HVAC unit?


Excuse #5: Don’t worry, Billy will just clone the jimmy-jangle and fire up the whizbangboo

This excuse I put squarely on the shoulders of the technical people. They lie (there is no better word) to themselves and to their superiors, assuring the business that their automagical VMware/vMotion/VMooMoo© solution has them all covered. Don’t worry, they say. We don’t need a written plan. We can do it all by hand. You can trust the whizbangboo. It knows what there is to do. And you can trust Billy, too.

The semi-technical IT manager, who paid for the whizbangboo, reports to his non-technical CTO, who reports to the CEO that, you guessed it: We’re good! We know what we’re doing! We have it covered!

And the magical whizbangboo? One time I saw it not only fail to whiz-bang-boo the jimmy-jangle, but in the process of failing it also destroyed everything. EVERYTHING.

And let’s not forget Billy the DBA/sysadmin/network admin. In the past 24 years, I’ve seen whiz-bang-boo Billy:

1. Quit

2. Go on vacation

3. Die (Yes, die. Before he finished his documentation. How inconsiderate.)

Morbid, I know. And yet, the death of that guy, you know, the guy, is a real business risk. Need I tell you that I have personally witnessed all three of these things happen at the most inopportune time?


Paul…I’m scared…I don’t like you…go away…but first tell me what to do

Calm down, it’s going to be alright. The mean, scary Paul is gone and nice Paul is in the saddle.

First, sit down with each business unit and find out what happens if the system is down or if you lose data. The shipping representative will tell you that he can’t ship and customers can’t track their orders online. The manufacturing rep needs lot and serial numbers and can revert to stickers and magic markers, but lost data means lost traceability, which means recalls and compliance problems you can’t even scope. Customer service can’t do anything but whisper soothing words to angry customers. As for the website: can you say static HTML pages?

Second, take this information to the C-level and say: here’s what happens to the business. How much of this is acceptable? Yes, I’m oversimplifying, but I’m already 950 words into this rant.

Third, figure out what all this is going to cost. Somewhere along the cost-benefit curve is a point that’s acceptable to both I.T. and the C-level.

Last, and most importantly, do it. Write the plan. Deploy your solution. Test and update it every year. Either that or try not to squirm when you see yourself in my next business continuity rant.
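If “test it” sounds abstract, here is the flavour of thing I mean: a small scheduled job that takes a backup and proves the backup is actually usable. Below is a minimal sketch in Python, not a prescription. It assumes an OpenEdge installation with probkup and prorest on the PATH; the database path, backup location and alerting hook are hypothetical placeholders you would replace with your own.

#!/usr/bin/env python3
"""Sketch of a scheduled backup-verification job for an OpenEdge database.

Site-specific details are placeholders: the database path, the backup
directory and how you alert a human when something fails.
"""
import subprocess
import sys
from datetime import date

PROD_DB = "/db/prod/mydb"                      # hypothetical production database
BACKUP = f"/backups/mydb-{date.today()}.bak"   # hypothetical backup target


def run(cmd):
    """Run an OpenEdge utility, echo it, and raise if it returns non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main():
    try:
        # 1. Online backup of the production database.
        run(["probkup", "online", PROD_DB, BACKUP])

        # 2. Partial verification of the backup media (-vp reads the
        #    backup and checks it for bad blocks without restoring it).
        run(["prorest", PROD_DB, BACKUP, "-vp"])

        # 3. The step that matters most (restoring to a scratch copy and
        #    starting it) is left out because it is entirely site-specific.
        #    A backup you have never restored is a hope, not a plan.
    except subprocess.CalledProcessError as exc:
        # Wire this into whatever paging/alerting you already have.
        print(f"BACKUP VERIFICATION FAILED: {exc}", file=sys.stderr)
        return 1

    print("Backup taken and verified.")
    return 0


if __name__ == "__main__":
    sys.exit(main())

Schedule it, point the failure branch at a human who will actually look at it, and extend it with your own restore-to-scratch step. The point isn’t this particular script; the point is that the test runs on a calendar, not on good intentions.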


Paul Koufalis
White Star Software
