In Part One of Bob Guilbert’s interview with HFMWeek, he discussed the increasing call for sound disaster recovery (DR) solutions, prompted by changing hedge fund regulations and hedge fund operational due diligence. In Part Two, Bob talks about tape back-up as a disaster recovery solution and the growing trend of cloud computing in the hedge fund industry.
BG: It is our opinion that tape back-up is neither efficient nor sufficient for an effective disaster recovery solution [for hedge funds]. Tape back-up gives you a very long RPO and RTO, and there is high risk in using tape alone for disaster recovery. The biggest risks are, firstly, that the data may not restore properly from the tape. We have seen multiple instances where data is written to tape, but recovering it does not always work properly.
Secondly, if you do have some form of disaster and your data is stored on tape but the disaster destroys your primary computer systems, you have nowhere to recover the data. Unless you have idle resources or idle servers available, it is very difficult to recover the data to an operating platform. If you have to borrow or order new servers to restore the tape to, that drives your recovery time up to unacceptable levels. As a result, we do not feel tape alone is sufficient to provide a true disaster recovery solution.
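To make the RPO/RTO point concrete, here is a minimal sketch (the dates, times, and step durations are illustrative assumptions, not figures from the interview): RPO is how much recent data you can afford to lose, and RTO is how long recovery may take. With a nightly tape job, everything written since the last tape is at risk, and recovery time balloons if replacement servers must be sourced first.

```python
from datetime import datetime, timedelta

# Hypothetical scenario: nightly tape backup finishes at 22:00,
# an outage strikes the following afternoon.
last_backup = datetime(2012, 6, 1, 22, 0)
failure = datetime(2012, 6, 2, 16, 0)

# Worst-case data loss (RPO exposure): everything since the last tape.
data_loss = failure - last_backup
print(data_loss)  # 18:00:00 -> up to 18 hours of data gone

# Recovery time with tape alone: locate tapes, source servers, restore.
# Durations below are assumed for illustration only.
restore_steps = {
    "locate and ship tapes": timedelta(hours=4),
    "procure or rebuild servers": timedelta(hours=24),
    "restore and verify data": timedelta(hours=8),
}
rto = sum(restore_steps.values(), timedelta())
print(rto)  # 1 day, 12:00:00
```

By contrast, replicating to a standby environment keeps both numbers down to minutes or hours, which is the gap Bob describes.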
HFM: Why is there a growing trend for firms to explore the use of virtualisation and cloud computing for disaster recovery?
BG: We have seen this trend in both the US and UK. Virtualisation helps drive down the cost of disaster recovery considerably. Traditionally, people procured a second set of equipment and had it set up in a co-location facility. They would then replicate their data from the production environment to the disaster recovery environment, and buying that second set of equipment was fairly costly. What we are seeing in the market today is that many firms can work with outsourced providers to utilise their cloud infrastructure, which is completely virtualised, meaning clients do not have to purchase any equipment at the recovery site.
Many providers offer a virtual infrastructure whereby the data on the production site can be replicated into the virtualised cloud. Lower costs and increased utilisation are the key benefits of this model. Utilisation rates for the server and storage environment are higher, and by using virtualisation providers can offer lower price points to clients because they are getting greater efficiency out of the infrastructure. There is definitely a growing trend for companies to opt for outsourced providers who offer a virtual environment, as opposed to buying and storing their own equipment in a secondary data centre.
HFM: What are some ongoing practices firms can undertake to ensure their disaster recovery solutions remain efficient and up-to-speed?
BG: The best approach funds can take to ensure an effective disaster recovery system is to test it periodically. Whether on a quarterly basis or every six months, it is vital to test that the applications and data come up as planned, and testing also lets funds confirm that key users have access to those environments. In addition, it helps users learn how to get into these systems in the event of a failure. The technology is evolving, and funds must ensure they have effective plans in place. If those plans are not tested, there is no guarantee the systems will work in a high-stress situation. Our recommendation is to test their environments as frequently as deemed necessary, at the very least once a quarter.
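One piece of a periodic DR test can be automated along these lines. This is a hedged sketch (the service names, hostname, and ports are assumptions for illustration, not Eze Castle's tooling): it simply checks that each critical service in the recovery environment is reachable, which would sit alongside the manual data and user-access checks Bob describes.

```python
import socket

# Hypothetical inventory of services expected to come up at the DR site.
DR_CHECKS = [
    ("order-management", "dr.example-fund.com", 8443),
    ("email", "dr.example-fund.com", 993),
    ("file-share", "dr.example-fund.com", 445),
]

def port_is_open(host, port, timeout=5.0):
    """Return True if the host accepts a TCP connection on this port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_dr_test(checks):
    """Run all reachability checks; return the services that failed."""
    return [name for name, host, port in checks
            if not port_is_open(host, port)]
```

A quarterly run of `run_dr_test(DR_CHECKS)` returning an empty list would show the replicated services answering; anything in the list flags a gap to fix before a real disaster exposes it.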
You can learn more about Eze Castle's Eze Disaster Recovery services here.
More Disaster Recovery Resources:
Report: Establishing Disaster Recovery & Business Continuity Plans (20 pages)