Guest Column | June 25, 2009

Taking A New Look At Business Continuity


Written by: Chris McCall, Product Marketing, HP StorageWorks

It’s a given — virtually every organization knows that successful business operations often depend on continuous application availability. Most companies rely on internal applications, ranging from email to enterprise resource planning to payroll systems, to keep their businesses moving. At the same time, they also depend on external-facing applications for everything from selling products to automating the supply chain with suppliers and partners. The failure of any of these business-critical applications can be catastrophic to a company.

The causes of downtime are numerous. Failures range from the most common, such as disk failure, to natural disasters such as floods, tornadoes, and hurricanes. There are also failures caused by unexpected power or cooling outages, and others that don’t count as true disasters, such as a fire that causes smoke and water damage in even the best-protected data center. Human error can range from throwing the wrong circuit breaker to accidentally discharging fire-suppression material. Whatever the situation, when a failure occurs, its impact can range from a single application to multiple business operations or loss of data.

Business Continuity Within The Virtualization Context
Business continuity is not optional. It’s a necessity for companies of any size, including small and midsize businesses. A comprehensive business continuity solution includes not only the capacity to recover operations after a major disaster, but also the capacity to remain online during minor disruptions. So, how is business continuity best achieved in today’s increasingly virtualized environments?

The good news first: Server virtualization software has brought great relief to IT organizations searching for high availability and disaster recovery solutions. By better integrating servers and storage through virtualization, customers are able to protect existing technology investments and improve business continuity.

And now the not-so-good news: the same organizations that choose server virtualization as part of their high availability and disaster recovery strategy often overlook the increased storage demands it creates. Virtualization enables cost-effective high availability and disaster recovery, but it also places higher demands on the storage infrastructure, increasing implementation and operational costs. The high-availability features of server virtualization software such as VMware require continuously available shared storage volumes, and not just any storage array is well suited to supporting virtualized server environments.

Many options exist for implementing shared storage. If improving business continuity is a top priority for the server virtualization project, consider the following:
  • Redundancy for high availability: Most shared storage options deliver high availability through component redundancy and hardware RAID. However, this approach does not protect against failures that occur outside the box, such as power outages or air-conditioning failures, and it doesn’t address the number-one cause of failure: human error. To take data availability to the next level, consider storage that can keep multiple copies of data striped across different systems. You can then distribute those systems across different areas of the data center, or even across different data centers, so volumes stay online during “beyond the box” failure conditions.
  • Replication for disaster recovery: Most shared storage options offer replication as add-on software. Even if disaster recovery isn’t a requirement for the initial project, make sure you understand the impact that add-on software will have on the total cost of ownership (TCO) of your solution. In addition, understand how much capacity you’ll need to reserve for the volumes/logical unit numbers (LUNs) and their snapshots, as it can rapidly increase the amount of raw storage required. Some solutions require setting aside 300% of the original volume size!
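The capacity math behind that last point is worth working through before buying. Here is a minimal sketch of the arithmetic; all of the function names and the numbers in the example are hypothetical, and real planning should use your vendor’s actual overhead figures.

```python
# Rough sketch of raw-capacity planning for replicated volumes.
# All names and numbers here are illustrative, not from any specific product.

def raw_capacity_needed(volume_tb, raid_overhead, snapshot_reserve_pct, replica_copies):
    """Estimate raw storage (TB) needed for a replicated, snapshotted volume.

    volume_tb            -- usable capacity of the source volume
    raid_overhead        -- multiplier for RAID parity/mirroring (e.g. 2.0 for RAID 10)
    snapshot_reserve_pct -- snapshot reserve as a fraction of the original volume
                            size (some products require 3.0, i.e. the 300% above)
    replica_copies       -- total copies kept across sites (primary + remote)
    """
    per_copy = volume_tb * raid_overhead * (1 + snapshot_reserve_pct)
    return per_copy * replica_copies

# A 10 TB volume on RAID 10 with a 300% snapshot reserve, kept at a primary
# site and one disaster recovery site:
print(raw_capacity_needed(10, 2.0, 3.0, 2))  # 160.0 TB of raw disk
```

Sixteen times the usable capacity is an extreme but instructive case: the snapshot reserve and replica count multiply, which is why the TCO question belongs in the initial project, not the follow-on one.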

The reality is that effective business continuity strategies need to have both high availability and disaster recovery components. But some organizations and vendors fail to differentiate between the two, believing that if they have one, they also have the other. In fact, high availability and disaster recovery are two distinct components of a good strategy. Understanding the differences can help drive a more comprehensive business continuity strategy.


Know The Basics For High Availability And Disaster Recovery In Storage Virtualization
Consider business continuity as a spectrum: at one end is high availability, which means no downtime and no data loss due to a disaster or failure. At the other end is disaster recovery, which almost always involves downtime, lost access to data, and some data loss.

High availability keeps applications online and accessible in the event of a failure. It’s fast and automatic, with no data loss. Synchronous replication is the strategy most often used to implement high availability in storage. Because replicating storage in real time requires very low latency, synchronous replication is typically limited to a single data center, building, or campus.
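The distance limit follows directly from physics: a synchronous write is not complete until the remote array acknowledges it, so every write pays at least one round trip. A back-of-the-envelope sketch, assuming the commonly cited figure of roughly 200,000 km/s for light in optical fiber (real links add switch and protocol overhead on top of this floor):

```python
# Minimum added write latency for synchronous replication over fiber.
# Assumes ~200 km of one-way fiber distance per millisecond of propagation.

FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    """Floor on added latency (ms): the write must reach the remote array
    and be acknowledged before the host sees the I/O complete."""
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (1, 50, 100, 500):
    print(f"{km:>4} km -> at least {round_trip_ms(km):.2f} ms per write")
```

At campus distances the penalty is negligible; at hundreds of kilometers it adds milliseconds to every write, which is why long-distance protection falls to asynchronous replication instead.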

Disaster recovery helps to resume operations after a catastrophic failure. Asynchronous replication is the strategy most often used to implement disaster recovery in storage. This approach makes and keeps copies of data at a remote site so that, in the event of a primary site failure, operations can resume at the remote location using a copy of the data. Asynchronous replication does not share synchronous replication’s latency requirement; instead, it creates point-in-time copies of data that are not subject to distance limitations. The historical point in time to which an organization expects its operations to fail over is known as its recovery point objective (RPO), and the time it expects failover and resumption of operations to take is known as its recovery time objective (RTO).
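The RPO relationship can be sketched in a few lines: if point-in-time copies ship every N minutes, a failure just before the next copy loses up to N minutes of data. The names and schedule below are illustrative, not from any specific replication product.

```python
from datetime import timedelta

# Sketch: checking whether an asynchronous replication schedule meets an RPO.
# Hypothetical example values; substitute your own business requirements.

def worst_case_data_loss(replication_interval: timedelta) -> timedelta:
    """With point-in-time copies taken every `replication_interval`, a failure
    just before the next copy loses up to one full interval of data."""
    return replication_interval

rpo = timedelta(minutes=15)       # business tolerates 15 minutes of lost data
interval = timedelta(minutes=30)  # copies are shipped every 30 minutes

meets_rpo = worst_case_data_loss(interval) <= rpo
print(meets_rpo)  # False: the schedule must tighten to meet the objective
```

The same exercise applies to RTO: measure how long failover and resumption actually take, and compare it against what the business says it can tolerate.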

Don’t Take Risks With Traditional Shared Storage
Traditionally, high-availability storage solutions have been expensive to implement and complicated to manage. Achieving true high availability and data protection in virtualized server environments requires highly reliable, virtualized shared storage. In most storage systems, data is not distributed, so a failure that takes a data center offline can easily take its storage area networks (SANs) offline as well. The high-availability solution for traditional SANs relies on redundant components.

While this approach protects against hardware failure, it does not protect against data center failures caused by power and cooling problems or simple human error. To protect against a comprehensive set of failures with most storage systems, you must purchase a second storage system and optional synchronous replication software, then maintain two copies of the data in two SANs at two locations. In that case, if one site fails, a backup copy of the data is ready to support virtual machine failover. This is a complicated and expensive approach. Newer scale-out storage architectures can manage multiple copies of data distributed across different systems within a single virtual volume, eliminating the need for a second system and add-on software.

If some level of downtime and data loss is acceptable, or your sites are more than 100 miles apart, a disaster recovery solution may be the right answer. Just make sure the total cost of ownership is thoroughly investigated: understand the cost of additional software and support, along with how much raw capacity the implementation will require.

There are many potential reasons for failure that can cause business-critical applications to go offline and endanger a company’s viability. While server virtualization has provided solutions that help with business continuance strategies, many traditional SANs do not provide effective support for the storage availability and disaster recovery that server virtualization requires.

The best shared storage solutions for server virtualization provide capabilities for both high availability and disaster recovery, taking a comprehensive approach to business continuity. They deliver “beyond the box” data redundancy for true high availability, and cost-effective asynchronous replication schemes that don’t waste storage by requiring inefficient, up-front capacity reservations.

Chris McCall handles product marketing for HP StorageWorks.