Hurricane Sandy created a path of devastation, disrupted countless lives and businesses, and taught us many lessons. Over the last week, here at Eze Castle Integration, we have reflected on what we learned now that the lives of our employees and clients are slowly getting back to “normal.”
Communicate Openly & Often.
With Hurricane Sandy we had the “luxury” of knowing the storm was approaching; however, that isn’t always the case. Companies must have a communication plan that can be initiated quickly should an unforeseen disaster occur. We encourage clients to look into automated messaging systems that allow notifications to be sent to all employees or clients simultaneously across multiple devices (e.g., home phone, work phone, cell phone, email).
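To make the fan-out idea concrete, here is a minimal sketch of how such a broadcast might work: one alert is duplicated across every contact channel registered for every employee. The `Employee` structure and channel names are illustrative assumptions, not the API of any particular messaging product.

```python
# Sketch of an automated-messaging fan-out: a single alert is expanded into
# one delivery record per (employee, channel). A real system would hand each
# record to an SMS, email, or voice gateway instead of collecting them.
from dataclasses import dataclass, field

@dataclass
class Employee:
    name: str
    channels: dict = field(default_factory=dict)  # channel name -> address

def broadcast(employees, message):
    """Return one (name, channel, address, message) record per delivery."""
    deliveries = []
    for emp in employees:
        for channel, address in emp.channels.items():
            deliveries.append((emp.name, channel, address, message))
    return deliveries

staff = [
    Employee("Ana", {"cell": "555-0100", "email": "ana@example.com"}),
    Employee("Raj", {"work": "555-0199", "cell": "555-0142", "email": "raj@example.com"}),
]
alerts = broadcast(staff, "DR site activated; work remotely today.")
# 2 channels for Ana + 3 for Raj = 5 simultaneous notifications
```

The point of the exercise: the notification list is built automatically from a roster maintained in advance, so no one is hunting for phone numbers mid-disaster.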
Prepare, prepare, prepare.
Knowing the storm was approaching, some clients chose to proactively activate their disaster recovery (DR) systems the Sunday before the storm. This ensured an orderly DR activation process and gave users the opportunity to test and validate access to their required applications and files before the markets were set to open on Monday. Other DR clients took a wait-and-see approach, a reasonable option with our DR team on standby, while still having users test remote connectivity and access so that any user issues were resolved before the hurricane impacted operations. The common thread in both approaches: prepared firms have prepared, knowledgeable employees.
Conduct realistic tests.
DR tests should be conducted at least twice per year. Testing lets firms validate that the applications in the DR site match the business needs. It is important to be realistic about what applications require DR because adding a new application, such as an order management system, during a disaster isn’t an option. Beyond validating applications, testing helps users get comfortable with the DR environment and login process. You can read more about what is involved in DR testing HERE.
Load testing is also an important aspect of the testing process, as it helps ensure your DR system can accommodate all employees accessing it concurrently.
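The capacity question a load test answers can be illustrated with a toy simulation: fire a burst of simultaneous "logins" at a service with a fixed session limit and count how many are turned away. The `MAX_SESSIONS` figure and `handle_login` function are assumptions for illustration only, not a real DR platform's behavior.

```python
# Toy concurrency check in the spirit of a DR load test: 80 users log in at
# once against a site sized for 50 concurrent sessions, and we count rejections.
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_SESSIONS = 50          # assumed DR-site session capacity
_active = 0
_lock = threading.Lock()

def handle_login(user_id):
    """Admit the user if capacity remains; otherwise reject."""
    global _active
    with _lock:
        if _active >= MAX_SESSIONS:
            return (user_id, "rejected")
        _active += 1           # session held for the duration of the "disaster"
    return (user_id, "ok")

with ThreadPoolExecutor(max_workers=80) as pool:
    results = list(pool.map(handle_login, range(80)))

ok = sum(1 for _, status in results if status == "ok")
rejected = sum(1 for _, status in results if status == "rejected")
# With 80 concurrent users and 50 seats, 30 logins fail: exactly the kind of
# gap you want a scheduled load test to surface, not a live disaster.
```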
Do your due diligence.
Unforeseen vulnerabilities were exposed as a result of Sandy. We heard about data centers that had their fuel supply stored on the ground floor. When the power (and electric fuel pumps) went out, employees had to carry buckets of fuel up 18 flights of stairs to keep the generators going. Stories such as this highlight the importance of asking detailed questions as part of your technology selection process. Data center questions include:
Where are the generators located?
Where is the fuel supply located?
How much fuel is onsite? What is your emergency delivery plan? What if roads are blocked?
Have you ever experienced an outage?
How often do you communicate with clients during a disaster incident?
Use Sandy as an opportunity to understand the contingencies your service providers have in place to ensure continuous operations and protect your data. Ask them how they fared during the storm, and get a copy of their Disaster Recovery & Business Continuity Plan.
Some key questions to ask include:
Did you experience any business interruption? If so, how long and what was the impact?
Did you activate your disaster recovery plan?
Was any of our information vulnerable or inaccessible?
Were your key service providers impacted?
Our friends over at the Regulatory Fundamentals Group also published their eight lessons from Sandy, which you can read here. Our Disaster Recovery Guidebook is another handy resource for firms looking to reassess their DR and BCP strategies post-Sandy.
As always, Eze Castle Integration is available to discuss best practices as well as how our clients benefit from our Eze Disaster Recovery and Eze Business Continuity Planning services.
Photo Credit: Flickr
Categorized under: Business Continuity Planning