
Virtual servers still face real-world challenges

Martin Walshaw, senior systems engineer at F5, looks at the strengths, the weaknesses, and the ways to resolve them when implementing cloud computing using VMware.

Replace “football stadium” with “data centre”, and “screaming football fans” with “connection requests”, and you start to see where this is going. There are two aspects to managing large-scale networked information systems - processing the data, and moving it in and out of the data centre quickly and smoothly.

When it comes to processing the data, cloud computing is evidently a giant leap forward - more efficient resource utilisation, processing distributed to where it's needed most, more resilience - we all know the story. And when you talk about cloud, you have to talk about VMware, which went from near-obscurity as an abstract piece of academic computer science to providing the software bedrock of virtualisation.

But as powerful as VMware is, it is the last element in a long communications chain that starts somewhere - anywhere - in the world, finds its way to a data centre, then needs to find the correct server, and finally the correct virtual machine within that server. VMware provides brilliant facilities for the data once it gets into the cluster, but it can't do anything for the data on the way in, when the connections are being set up. Data centre meltdowns happen when connection requests don't get what they want quickly enough, and start pushing, shoving and being dropped.

The biggest draw-cards to cloud computing

Capacity on demand is one of the biggest draw-cards to cloud computing and server virtualisation. There is a powerful tool in the VMware suite called vCloud Director that lets you spin up or drop servers as they are needed. The trouble is, to be genuinely useful it needs to be automated, so that servers come up or down in response to traffic loads and incoming connections without manual intervention. You also need to redirect and manage connection requests on the network before they reach the server clusters: as servers come up, a high-bandwidth, low-latency gatekeeper ensures that traffic ends up where it needs to be as quickly as possible, but without overwhelming the cloud controllers while they re-allocate resources.
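To make the automation concrete, here is a minimal sketch of the kind of scaling decision such a gatekeeper might feed to the cloud layer. The function name, thresholds and capacity figures are illustrative assumptions, not part of any VMware or F5 API; the real decision would be wired to vCloud Director's own interfaces.

```python
# Hypothetical sketch: decide from connection load whether to ask the
# cloud layer to add or remove a server. All names and thresholds are
# illustrative assumptions, not a real vCloud Director interface.

def scale_decision(active_connections, servers, per_server_capacity,
                   high_water=0.8, low_water=0.3):
    """Return +1 to add a server, -1 to remove one, 0 to hold steady."""
    capacity = servers * per_server_capacity
    utilisation = active_connections / capacity
    if utilisation > high_water:
        return +1          # traffic nearing capacity: spin up
    if utilisation < low_water and servers > 1:
        return -1          # sustained idle capacity: spin down
    return 0               # within the comfortable band
```

The point of the low- and high-water band is exactly the "without overwhelming the cloud controllers" caveat above: a gap between the two thresholds stops the system flapping servers up and down on every small traffic swing.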

The gatekeeper does a similar job in the case of what the VMware world calls “long distance vMotion”. Virtualisation has advanced to the point where you can move a server from one location to another while it is still servicing connections. That is tricky to do within a single data centre, and very difficult if the data centres are on opposite sides of the country. At heart, you need a huge amount of bandwidth and extremely low latency between the two sides.
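Some back-of-envelope arithmetic shows why the bandwidth claim matters. The figures below are illustrative, not measured, and ignore the dirty-page re-copies that a live migration also has to send, which only lengthen the transfer.

```python
# Rough arithmetic for the bandwidth requirement above: the time to copy
# a VM's memory image between sites at line rate. Illustrative figures
# only; real vMotion also re-sends pages dirtied during the copy.

def transfer_seconds(vm_memory_gb, link_gbps):
    bits = vm_memory_gb * 8 * 1024**3      # memory image in bits
    return bits / (link_gbps * 1e9)        # seconds at full line rate

# e.g. a 64 GB VM over a 10 Gbit/s inter-site link takes roughly
# 55 seconds at line rate, before any re-transmission or overhead.
```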

This is where an application controller on the network is invaluable - it can manage the connections in to the virtual server, off-loading the system while the server handles the VM transfer. It also ensures that data moves on the right paths - if connections jump from one firewall to another, for instance, sessions will die. The application controller ensures that traffic from customers to the VM, and from the VM to the different storage pools, gets where it needs to be - without adding unacceptable latency.
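The "sessions will die" problem comes down to persistence: every packet from a given client must keep landing on the same device. A minimal sketch of one common approach, hash-based stickiness, is below; the backend names are hypothetical and real application controllers offer several persistence methods beyond this one.

```python
import hashlib

# Minimal sketch of session persistence: map each client deterministically
# to one backend so its session never jumps between devices mid-flight.
# Backend names are hypothetical placeholders.

BACKENDS = ["fw-a", "fw-b", "fw-c"]

def pick_backend(client_ip, backends=BACKENDS):
    """Same client IP always yields the same backend."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]
```

Because the choice depends only on the client address, any node making the decision reaches the same answer, so traffic stays on one path even as connections are redistributed.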

Finally, while VMware is an amazing technology, both the server hardware and VMware licences have capital expenditure implications. By offloading encryption, compression and application acceleration duties to the hardware-based F5 application controller, more can be done with fewer hardware and software resources, and with lower running costs.

More information: Itweb.co