
Dynamic Data Center – Getting Your Head into the Game

By B.V. Jagadeesh, President and CEO, and Rob Reiner, Senior Director of Marketing – 3Leaf Systems

IT managers tasked with deploying data center resources to keep one step ahead of rapidly changing business environments need solutions that offer new levels of flexibility, efficiency and cost effectiveness. Mergers and acquisitions, global outsourcing, remote access for telecommuters and a mobile workforce, and desktop virtualization are forcing IT management to consolidate data centers in a few key geographic locations.

This consolidation taxes existing data center resources and contributes to skyrocketing operational and capital expenditures. According to IDC, operational expenditures alone have increased by a factor of five in the last decade and are projected to continue rising. Combating these ever-increasing costs while keeping up with server demand requires increased scalability, agility, flexibility and resource utilization.

The ability to quickly adapt and respond to changes in enterprise workloads requires a new vision of virtualization: the Dynamic Data Center. Three key components are required to build a truly ‘dynamic data center’ that addresses today’s challenges while lowering operating costs and capital expenditures: virtualized pools of server resources, easy repurposing, and fast, flexible provisioning.

Virtualized Pools of Server Resources
Today’s data centers are highly inefficient, with server resources typically running at utilization as low as 10 – 15%. Frequently, a single application consumes only a small fraction of a server’s capacity, and redundant configurations can waste even more resources. As a result, there are millions of servers with underutilized processors, memory, and networking and storage ports. According to IDC, low server utilization wastes $140 billion in unused server resources every year.

What if a commodity server’s processors, memory and I/O bandwidth could be grouped into logical and separate pools of resources, enabling servers to be configured with the amount of processor performance, memory, networking and storage bandwidth needed for the given applications that will run on the server? What if this server is adaptive and can scale to meet the demands of the application load, resulting in a perfect match between the needs of the application and the size of the server?

Virtualization in the ‘dynamic data center’ model introduces a level of abstraction that allows servers to be decomposed into pools of their fundamental resources: compute, memory and I/O. According to IDC, server virtualization hit ‘mainstream’ status in 2006, and the firm projects that by 2010 one third of new server hardware will use virtualization.

Available solutions for virtualizing a server’s compute and memory resources allow multiple Guest Operating Systems (GOS) to run on a single physical server. The processing power and memory can be configured for each guest. This increases compute and memory efficiency, and allows processing power and memory to be allocated as needed within a single physical machine. Multiple servers can be consolidated onto a single physical machine, providing large savings in capital expenditures.
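
To make the idea concrete, the sketch below models per-guest resource allocation on a single physical machine. It is a minimal illustration in Python; the class and field names are hypothetical, not any vendor’s actual hypervisor API.

    from dataclasses import dataclass, field

    @dataclass
    class Guest:
        name: str
        vcpus: int        # virtual CPUs assigned to this guest OS
        memory_gb: int    # memory assigned to this guest OS

    @dataclass
    class PhysicalServer:
        cores: int
        memory_gb: int
        guests: list = field(default_factory=list)

        def allocate(self, guest: Guest) -> None:
            # Refuse the allocation if it would oversubscribe the host.
            used_cpu = sum(g.vcpus for g in self.guests)
            used_mem = sum(g.memory_gb for g in self.guests)
            if used_cpu + guest.vcpus > self.cores or used_mem + guest.memory_gb > self.memory_gb:
                raise RuntimeError(f"insufficient resources for {guest.name}")
            self.guests.append(guest)

    # Consolidating three lightly loaded servers onto one physical machine:
    host = PhysicalServer(cores=16, memory_gb=64)
    for g in (Guest("web", 4, 8), Guest("mail", 2, 8), Guest("crm", 4, 16)):
        host.allocate(g)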

Virtualization technology is evolving to the point where virtual machine monitors, or ‘hypervisors,’ will be able to run across multiple physical servers. Such hypervisors increase the pool of compute and memory resources available to a given application and enable efficient scale-up computing across multiple commodity servers. This will allow complete flexibility, enabling multiple servers to be dynamically grouped under a single guest operating system, or a single server to support multiple guest operating systems, all drawing from the same set of compute and memory resources.
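
The earlier sketch extends naturally to this cross-server model: guests draw from an aggregated pool rather than a single host’s capacity. Again, this is an illustrative model with hypothetical names, not 3Leaf’s implementation.

    from dataclasses import dataclass

    @dataclass
    class Node:
        cores: int
        memory_gb: int

    @dataclass
    class ResourcePool:
        nodes: list  # physical servers contributing to the pool

        def total_cores(self) -> int:
            return sum(n.cores for n in self.nodes)

        def total_memory_gb(self) -> int:
            return sum(n.memory_gb for n in self.nodes)

    pool = ResourcePool([Node(cores=8, memory_gb=32) for _ in range(4)])
    # A guest needing 24 vCPUs and 96 GB exceeds any single node,
    # but fits within the aggregated pool of 32 cores and 128 GB.
    assert pool.total_cores() >= 24 and pool.total_memory_gb() >= 96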

Offering this level of flexibility will be an important milestone in the evolution of the dynamic data center. Complete flexibility in how compute and memory resources are provisioned for servers significantly reduces both capital and operational expenditures.

A traditional ‘hypervisor’ is adequate for multiple applications that run on a single server. For applications that require dynamic allocation and re-allocation of resources, however, a new hypervisor technology is needed, one that allocates computing resources with the utmost flexibility.

For example, a web server could provision additional memory from multiple physical servers to create a large, high-performance web cache that runs much faster than disk. A large web cache could reduce accesses to application servers and database servers by as much as 70%, reducing the cost of application and database servers by as much as 40 – 50%.
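
The arithmetic behind that claim can be sketched as follows; every figure here (request rate, hit ratio, per-server capacity, redundancy) is an assumption for illustration, not a measurement.

    import math

    peak_rps = 10_000        # peak requests per second at the web tier (assumed)
    cache_hit_ratio = 0.70   # fraction absorbed by the distributed web cache
    backend_rps = peak_rps * (1 - cache_hit_ratio)   # 3,000 rps reach the app/db tier

    per_server_rps = 500     # sustainable load per app/db server (assumed)
    redundancy = 4           # spare servers kept for failover (assumed)

    servers_before = math.ceil(peak_rps / per_server_rps) + redundancy      # 24
    servers_after = math.ceil(backend_rps / per_server_rps) + redundancy    # 10

    # Fixed overhead (redundancy, minimum footprints) keeps the realized
    # server savings below the raw 70% reduction in backend accesses.
    savings = 1 - servers_after / servers_before
    print(f"app/db servers: {servers_before} -> {servers_after} ({savings:.0%} fewer)")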

Grid computing is another application that would benefit from the ability to add compute power as needed, regardless of physical server boundaries, allowing applications in the grid to scale up elastically. The new hypervisor technology can reduce the number of physical servers required by as much as 50% while drastically cutting management expenses.

I/O virtualization decouples the networking and storage interfaces from individual servers and applications. The I/O is abstracted into pools of resources that can be allocated or de-allocated as needed for each individual server or application. I/O virtualization removes the need for each physical machine to have its own NIC (Network Interface Card) and HBA (Host Bus Adapter). Instead, each physical server, or compute node, connects to a centralized pool of NICs and HBAs through a switch fabric. This consolidates the I/O from a large pool of underutilized servers onto far fewer NICs and HBAs running at much higher utilization.

‘Quality of service’ parameters are assigned dynamically to the ‘virtual’ networking and storage interfaces for each compute node, guaranteeing that each application gets the required amount of bandwidth. I/O virtualization reduces the number of NICs and HBAs by up to 85%, the number of Ethernet and Fibre Channel switch ports by up to 80%, and the number of cables by up to 70%. The I/O profile for each server can be easily created, instantiated, and moved to another server. Together, these benefits provide huge capital and operational savings for the enterprise data center.
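
One way to picture a virtual I/O profile is as a small, portable piece of data, roughly like the sketch below. The schema and field values are hypothetical; the point is that moving a profile between nodes is a metadata change, not a re-cabling job.

    # An illustrative virtual I/O profile: the vNICs, vHBAs, and QoS
    # guarantees a compute node sees, described as data. Hypothetical schema.
    io_profile = {
        "server": "app-node-07",
        "vnics": [
            {"network": "prod-vlan-110", "min_mbps": 500, "max_mbps": 2000},
            {"network": "backup-vlan-200", "min_mbps": 100, "max_mbps": 1000},
        ],
        "vhbas": [
            {"fabric": "san-a", "wwpn": "50:01:43:80:aa:bb:cc:01", "min_mbps": 400},
        ],
    }

    def move_profile(profile: dict, new_server: str) -> dict:
        # Re-pointing the profile to another compute node.
        return {**profile, "server": new_server}

    io_profile = move_profile(io_profile, "app-node-12")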

Easy Repurposing
With the Dynamic Data Center, the ability to quickly and easily repurpose server workloads is key to managing rapidly changing data center requirements and maximizing resource utilization. Workloads change across geographies and times of day. For example, an investment bank may use its servers for market data during trading hours and repurpose them for business logic processing in the evening. A corporation with offices throughout the world will see its workloads shift as different time zones enter their working day, and each shift could require that servers be repurposed. Easy repurposing enables server utilization to be maximized around the clock, minimizing capital outlay for dedicated server workloads and delivering up to 50% savings in power consumption.

Repurposing needs to include the ability to relocate server workloads from physical machines to virtual machines, as well as from virtual to physical, physical to physical, and virtual to virtual. Doing so enables complete flexibility, minimizes downtime for maintenance, permits seamless upgrades to higher-performing server hardware, and allows workloads to be consolidated off underutilized physical hardware so that idle machines can be powered down to save energy.
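
A time-of-day policy like the investment-bank example above can be expressed as a simple schedule. The sketch below is illustrative; the workload names and hours are assumptions.

    from datetime import datetime

    # Hours are half-open ranges [start, end) on a 24-hour clock.
    SCHEDULE = [
        (6, 18, "market-data"),       # trading hours: serve market data
        (18, 24, "business-logic"),   # evening: business logic / batch runs
        (0, 6, "business-logic"),
    ]

    def workload_for(hour: int) -> str:
        for start, end, workload in SCHEDULE:
            if start <= hour < end:
                return workload
        raise ValueError(f"no workload scheduled for hour {hour}")

    def repurpose(servers: list, hour: int) -> dict:
        # Assign every server in the pool to the scheduled workload; in a
        # real system each reassignment would trigger a P2V/V2V migration.
        target = workload_for(hour)
        return {server: target for server in servers}

    assignments = repurpose(["srv-01", "srv-02", "srv-03"], datetime.now().hour)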

Fast, Flexible Provisioning
Currently, provisioning a server can take weeks or months by the time cables are laid and the networking and storage administrators have coordinated with the server administrator. Being able to quickly and easily provision new servers is a key element of the ‘dynamic data center.’ This includes planning ahead by configuring servers in advance, and mass provisioning across a fully heterogeneous environment.

By decomposing servers into pools of resources, provisioning a server involves only a software configuration that specifies the resources to be applied to each server. Resource provisioning can be done in advance of server deployment: the networking and storage provisioning, the operating system installation, and the server configuration can all be taken care of well ahead of time. Once the servers are ready to be deployed, provisioning is simply a matter of applying the configuration onto the server, which takes a matter of minutes.
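
In this model a server profile is just configuration data prepared ahead of time, and deployment reduces to applying it. The sketch below is hypothetical; the schema and function stand in for whatever provisioning interface a real system exposes.

    # A server profile captured as data, prepared long before deployment.
    web_profile = {
        "os_image": "web-server-gold-image",   # pre-installed OS image (assumed)
        "cpus": 4,
        "memory_gb": 8,
        "vnics": [{"network": "prod-vlan-110", "min_mbps": 500}],
        "vhbas": [{"fabric": "san-a", "boot_lun": "web-boot"}],
    }

    def apply_profile(node: str, profile: dict) -> None:
        # In a real system this would push the network, storage, and OS
        # configuration to the node; here we only record the intent.
        print(f"provisioning {node} with image {profile['os_image']}")

    apply_profile("node-01", web_profile)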

Every year, new generations of servers are purchased with faster processors and higher levels of integration, so enterprise data centers have a broad spectrum of server hardware in use. Reducing the time required to deploy new applications and server resources is critical.

With the Dynamic Data Center, IT management has the flexibility, agility and scalability to automatically mass provision across a fully heterogeneous environment. Memory, CPU and I/O virtualization allows a server administrator to provision a large number of servers with a single command, regardless of what server hardware is being deployed.
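
Continuing the sketch above, mass provisioning then collapses to a single loop (or, equivalently, one command) over heterogeneous nodes, reusing apply_profile and web_profile from the previous example.

    # Provision 32 nodes of mixed hardware with one command; the profile is
    # hardware-independent because resources are virtualized.
    for node in (f"node-{i:02d}" for i in range(1, 33)):
        apply_profile(node, web_profile)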

Conclusion
As global trends fuel data center consolidation and increase demands on data center resources, the need to increase agility, flexibility and scalability while reducing costs is at an all-time high.

The ‘dynamic data center’ addresses today’s changing business needs by providing on-demand resources and flexibility that can revolutionize operational efficiency. Virtualization of compute, memory and I/O resources enables the creation of a pool of server resources that can span multiple physical machines and be allocated or de-allocated as needed. Easy repurposing and migration allows machines to remain fully utilized as workloads change during the course of the day, week or month. Fast and flexible provisioning allows machines to be mass deployed within a heterogeneous environment, drastically reducing the time and cost of provisioning new servers and the applications they support.

These technologies offer the promise of a data center that is truly dynamic, nimbly responding to the changing requirements of the enterprise while drastically reducing both capital and operating expenses.

B.V. Jagadeesh is President and CEO of 3Leaf Systems, a provider of next-generation virtualization solutions for enterprise data centers. He was Co-Founder and CTO of Exodus Communications, which pioneered the concept of Internet Data Centers. Later, Jagadeesh was CEO of NetScaler, which was acquired by Citrix, where he was part of the senior management team. For article feedback, contact Jagadeesh at bvj@3leafsystems.com

Rob Reiner is Senior Director, Marketing at 3Leaf Systems. He has over 20 years of experience in marketing and development in the telecommunications, networking and semiconductor industries, having worked with market leaders such as IBM, Siemens, Nortel Networks, Motorola and PMC-Sierra. For article feedback, contact Rob at rob.reiner@3leafsystems.com

