Curt Frierson, Chief Technology Officer

The renowned futurist Ray Kurzweil tells us that the pace of technological change is itself accelerating.  In fact, Kurzweil says, “at today’s rate of change, we will achieve an amount of progress equivalent to that of the whole 20th century in 14 years.”  If you subscribe to this reasoning, it is not surprising that technology always seems to be changing faster than ever before.  Technologies such as cloud computing and virtualization have exploded over the last few years, bringing completely new ways of thinking about how we deliver IT.  Many financial institutions are already realizing the benefits of these technologies, while many more are still struggling for funding to get projects off the ground.  Regardless of where you currently sit on the technology curve, it is clear that cloud computing and virtualization will play an enormous role in the future of IT.  As we adopt these innovations, however, we face a new set of challenges that must be addressed to take full advantage of the benefits these technologies can provide.  Outlined below are several often-overlooked issues to consider as you develop your IT strategy for the future.

Bandwidth/Connectivity
As more and more solutions move to the cloud, communications play an even more critical role in ensuring service availability and performance.  Moving a traditionally in-house solution, such as email or data backup, to a service provider’s data center requires reliable connectivity for the system to function at all.  Most often, the circuit providing access to a cloud service provider is the organization’s Internet circuit.  If that circuit goes down, any cloud services relying on Internet access will be unavailable until connectivity is reestablished.  Organizations that plan to rely heavily on cloud solutions must therefore plan for redundant, highly available Internet access.  If Internet access is centralized at a main office or operations center, fault-tolerant WAN communications are equally critical, so that a single circuit failure at the main office does not cut off access to cloud services at every branch.
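As a concrete illustration, the short Python sketch below probes a pair of cloud service endpoints over TCP and flags anything unreachable.  The host names are hypothetical placeholders rather than real provider addresses; in practice the alert would feed a paging system or trigger failover to a backup circuit.

    # Minimal connectivity probe for cloud service endpoints.
    # Host names below are placeholders; substitute your provider's
    # actual endpoints and your own alerting mechanism.
    import socket

    PROBE_TARGETS = [
        ("mail.example-cloud.com", 443),    # hosted email (hypothetical)
        ("backup.example-cloud.com", 443),  # hosted backup (hypothetical)
    ]

    def reachable(host, port, timeout=5):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host, port in PROBE_TARGETS:
        if not reachable(host, port):
            # In production this would page staff or trigger WAN failover.
            print(f"ALERT: {host}:{port} unreachable - verify circuit and failover path")
        else:
            print(f"OK: {host}:{port}")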

In addition to reliable connectivity, adequate bandwidth must be provided to ensure sufficient performance.  Many organizations are still under the impression that a 1.5 Mbps T1 Internet connection provides ample bandwidth.  This may be true in a small organization where the connection is used exclusively for web browsing and external email.  As institutions consume more and more services via the Internet, however, insufficient bandwidth puts critical service availability at risk, as well as simple web browsing.  Institutions must plan to expand circuit capacity when implementing additional services that will consume more bandwidth.  This may seem obvious, but many institutions fail to plan for it when implementing cloud services.  The failure may stem from the fact that their current Internet bandwidth has been adequate for many years, resulting in a simple oversight in the planning process.  Another likely cause is an aversion to including additional costs in a cloud computing proposal.  Either way, failing to plan for additional bandwidth as part of any cloud project can have serious consequences and result in emergency, unbudgeted expenses down the road.
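To see how quickly a single T1 is exhausted, consider the rough capacity estimate below.  Every per-user figure is an illustrative assumption; substitute measurements from your own environment before drawing conclusions.

    # Back-of-the-envelope bandwidth estimate for a branch adopting
    # cloud services.  All per-user figures are illustrative
    # assumptions, not vendor numbers.

    T1_CAPACITY_KBPS = 1544  # a T1 delivers 1.544 Mbps

    # Assumed average sustained demand per concurrent user, in kbps.
    PER_USER_KBPS = {
        "web_browsing": 50,
        "hosted_email": 30,
        "cloud_app":    80,
    }
    USERS = 40
    BACKUP_REPLICATION_KBPS = 1000  # replication that bleeds into business hours

    demand = USERS * sum(PER_USER_KBPS.values()) + BACKUP_REPLICATION_KBPS
    print(f"Estimated demand: {demand} kbps "
          f"({demand / T1_CAPACITY_KBPS:.0%} of a single T1)")
    # Under these assumptions the branch needs roughly five T1s' worth
    # of capacity - the planning gap described above.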

Storage
One of the primary goals of most virtualization projects is centralization.  Centralizing critical systems on a few physical hosts allows organizations to use hardware resources more efficiently and manage systems more easily.  Many institutions also believe that by purchasing fewer physical servers, they will realize substantial hardware cost savings.  Unfortunately, many of the greatest benefits of virtualization require shared storage, which can quickly consume any potential savings.  Shared storage is an array of disks that multiple machines can access, typically a NAS (network-attached storage) device or a SAN (storage area network).

SANs range from basic, entry-level devices to massively redundant, multi-petabyte systems.  They typically serve as the single pool of storage for the entire virtual environment.  Organizations approaching virtualization for the first time often look to entry-level SANs to more easily justify the cost of the project.  If you are going to put all of your eggs in one basket, however, you must make sure the basket is of sufficient quality to support your critical eggs.  Entry-level SANs, although cheaper, lack many features that are required in a virtual environment.  They often lack redundant components, creating single points of failure in one of the most critical layers of a virtualized infrastructure.  If a hardware failure takes the SAN offline, every virtual server stored on it goes down with it, which could mean a total failure of your entire IT environment.  To address this risk, any virtualization project involving critical systems should plan on at least a midrange SAN with complete hardware redundancy to eliminate any single point of failure.  This will certainly add to the cost of the project, but it will provide a far more reliable platform for your institution.
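A little arithmetic shows why that redundancy is worth paying for.  Assuming, purely for illustration, that each non-redundant component is 99% available:

    # Why redundant SAN components matter: simple availability math.
    # The 99% per-component figure is an illustrative assumption.

    a = 0.99                    # availability of a single component
    single = a                  # one of everything: any failure = outage
    redundant = 1 - (1 - a)**2  # outage only if both duplicates fail

    hours = 24 * 365
    print(f"Single components:    {single:.2%}, "
          f"~{(1 - single) * hours:.0f} hours down per year")
    print(f"Redundant components: {redundant:.2%}, "
          f"~{(1 - redundant) * hours:.1f} hours down per year")
    # Roughly 88 hours of expected downtime per year versus under one
    # hour - and on a shared SAN, every guest is down at once.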

Management/Visibility
Traditional methods of IT management do not provide adequate visibility into either cloud computing or virtualization, so new methods are required to monitor and manage these solutions.  Customers of cloud services must typically rely on the management tools supplied by the service provider, although new methods of monitoring cloud solutions are beginning to emerge.  Virtualization has spawned countless new management tools to help tame the complexities of a virtual environment.  Both technologies share a common challenge: it is much more difficult to gain complete visibility into all of the underlying components that make the solution operate.  This makes these environments harder to troubleshoot and root causes harder to identify when issues arise, and failure to plan for the difficulty can result in prolonged downtime.  To maximize visibility, it is important to evaluate the tools available to provide sufficient manageability.
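Even without provider-side tooling, you can watch a service from the outside.  The sketch below polls health endpoints and records response times; the URLs are hypothetical placeholders, and a production monitor would persist the results and alert on trends rather than print them.

    # Simple availability/latency poller for services you can no
    # longer inspect directly.  URLs are placeholders; a real monitor
    # would also watch the provider's status feed.
    import time
    import urllib.request

    ENDPOINTS = [
        "https://mail.example-cloud.com/health",    # hypothetical
        "https://backup.example-cloud.com/health",  # hypothetical
    ]

    def poll(url, timeout=10):
        """Return (ok, seconds) for one HTTP probe of an endpoint."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                ok = 200 <= resp.status < 300
        except OSError:
            ok = False
        return ok, time.monotonic() - start

    for url in ENDPOINTS:
        ok, elapsed = poll(url)
        print(f"{'OK' if ok else 'FAIL':4} {elapsed * 1000:6.0f} ms  {url}")
        # Trend these numbers over time; creeping latency is often the
        # only early warning an opaque environment gives you.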

Disaster Recovery
Disaster recovery is an area where many organizations have historically struggled.  In fact, this is one of the drivers behind the widespread adoption of cloud computing.  Utilizing cloud services shifts much of the burden of disaster recovery to the service provider, reducing the DR equipment costs, planning, and resources required of the end-user organization.  Virtual environments likewise promise much more robust DR capabilities.  To achieve these enhanced recovery capabilities, however, new methods of data protection must be employed.  These methods protect the entire virtual machine for quick and easy restores, and range from hardware-based snapshots and replication to image-level backup of virtual machines.
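The toy sketch below illustrates the image-level idea: capture the whole virtual disk as a single artifact with an integrity checksum.  It is only an illustration; real products coordinate with hypervisor snapshot APIs so the image is captured in a consistent state, and the paths shown are placeholders.

    # Toy illustration of image-level protection: the whole virtual
    # disk becomes one backup artifact with a checksum.  Real products
    # use hypervisor snapshot APIs for guest consistency; paths here
    # are placeholders.
    import hashlib
    import shutil
    from datetime import datetime
    from pathlib import Path

    def backup_vm_image(disk_path, backup_dir):
        """Copy a VM disk image to backup_dir; return path and SHA-256."""
        disk = Path(disk_path)
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        target = Path(backup_dir) / f"{disk.stem}-{stamp}{disk.suffix}"
        shutil.copy2(disk, target)

        sha = hashlib.sha256()
        with open(target, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                sha.update(chunk)
        return target, sha.hexdigest()

    # Example call (hypothetical path):
    # backup_vm_image("/vmstore/core-banking.vmdk", "/backups/vms")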

Whichever solution you choose, you will most likely find yourself protecting significantly more data than you did in a physical environment.  In a physical environment, most organizations limit backups to critical data only, since a full restore would require reinstalling the operating system and applications anyway.  Many of these solutions now include deduplication and archiving features to offset the additional storage needed and reduce backup costs.  These features can provide great benefit, but they are typically found only in higher-cost data protection products.
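Deduplication is what makes the extra data manageable.  The simplified sketch below shows the core idea: split the stream into chunks, store each unique chunk once, and keep only references for repeats.  Commercial products use more sophisticated variable-size chunking, but the savings on near-identical VM images follow the same principle.

    # Simplified fixed-size-chunk deduplication.  Two "VM images"
    # built from the same template share most of their blocks, so the
    # chunk store holds far less than the raw data.
    import hashlib

    def dedupe(data, chunk_size=4096):
        """Return (unique chunk store, list of chunk references)."""
        store = {}  # sha256 digest -> chunk bytes
        refs = []   # the original stream as a sequence of digests
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            refs.append(digest)
        return store, refs

    base = b"OS" * 8192  # shared OS template blocks
    raw = (base + b"config-A" * 512) + (base + b"config-B" * 512)
    store, refs = dedupe(raw)
    stored = sum(len(c) for c in store.values())
    print(f"Raw: {len(raw)} bytes  Deduplicated: {stored} bytes "
          f"({1 - stored / len(raw):.0%} saved)")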

An additional issue to consider is how you will handle a complete site or service failure.  While cloud computing and virtualization can provide many DR benefits, there remains a very real possibility of failure at some point.  How will you manage an outage of your virtual infrastructure or cloud service?  How long can you afford to be down?  What are your recovery options if faced with an extended outage?  These questions should all be answered before proceeding with these solutions.  Although the questions are not new, answering them for these new technologies often requires new approaches, a fact some organizations unfortunately learn the hard way.

Cloud computing and virtualization will be the platform on which the future of IT is built.  Like all innovative solutions, they promise great new benefits and capabilities.  Many organizations looking to capitalize on those benefits plunge headfirst into implementation projects without sufficiently addressing the rest of their IT environment, and failing to recognize the issues outlined above can significantly limit the effectiveness of these solutions.  By taking a proactive, comprehensive approach to adopting these technologies, you will considerably improve your chances of building a solid foundation for years to come.