
Orchestrating and Automating Technical Building Blocks
Previous chapters discussed how to classify an IT service and covered how to place that service in the cloud using a number of business dimensions (criticality, roles, and so on). So you know how to choose the application and how to define the elements that make up that application; however, how do you decide where in the cloud to actually place the application or workload for optimal performance? To understand this, you must look at the differences between how a consumer and a provider view cloud resources.
The differences between these two views
Figure 11-1 illustrates the differences between these two views. The consumer sees the cloud as a "limitless" container of compute, storage, and network resources. The provider, however, cannot provide limitless resources, as this is simply neither cost-effective nor possible. Instead, the provider must be able to build out its infrastructure in a linear and consistent manner to meet demand and optimize the use of this infrastructure. This linear growth is achieved through the use of Integrated Compute Stacks (ICS), often referred to as a point of delivery (POD). A POD has been described before; for the purposes of this section, consider a POD as a collection of compute, storage, and network resources that conforms to a standard operating footprint and shares the same failure domain. That is, if something catastrophic happens in a POD, workloads running in that POD are affected, but neighboring workloads in a different POD are not.
At the concrete level, what makes up a POD is determined by the individual provider. Most providers are looking at a POD comprised of an ICS that offers a pre-integrated set of compute, network, and storage equipment that operates as a single solution and is easier to buy and manage, offering Capital Expenditure and Operational Expenditure savings. Cisco, for instance, provides two examples of PODs, the Vblock1 and the FlexPod,2 which provide a scalable, prebuilt unit of infrastructure that can be deployed in a modular manner. The main difference between the Vblock and the FlexPod is the choice of storage in the solution: in a Vblock, storage is provided by EMC, and in a FlexPod, storage is provided by NetApp. Despite the differences, the concept remains the same: each provides an ICS that combines compute, network, and storage resources and enables incremental scaling with predictable performance, capability, and facilities impact. The rest of this chapter assumes that the provider has chosen to use Vblocks as its ICS. Figure 11-2 illustrates the relationship between the conceptual model and the concrete Vblock.
A FlexPod offers a similar configuration but supports NetApp FAS storage arrays instead of EMC storage, typically accessed using Fibre Channel over Ethernet or the Network File System (NFS); in the NFS case, a dedicated SAN is no longer required. Also note that the Vblock definition is owned by the VCE company, so it is aimed at VMware hypervisor-based deployments, whereas the FlexPod can be considered more hypervisor neutral.
The concept of a network container in the tenant model
One important point to note is that we introduced the concept of a network container in the tenant model. A network container represents all the building blocks used to create the logical network, the topology-related building blocks. A network topology can be complex and can potentially contain many different resources, so using a container to group these components simplifies the provisioning process.
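The grouping idea can be made concrete with a short sketch. The following is a hypothetical illustration, not an API from any real provisioning product; the class name, attribute names, and sample values are all invented for the example. It shows how collecting the topology-related building blocks in one container lets the provisioning process treat the whole logical network as a single unit.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a network container grouping the topology-related
# building blocks (VLANs, subnets, firewall contexts, and so on) so the
# provisioning process can handle them as one unit. Names are illustrative.

@dataclass
class NetworkContainer:
    name: str
    vlans: list = field(default_factory=list)
    subnets: list = field(default_factory=list)
    firewall_contexts: list = field(default_factory=list)

    def building_blocks(self):
        """Return every building block in the container, so the
        provisioning system can act on the topology as a whole."""
        return self.vlans + self.subnets + self.firewall_contexts

# A tenant's logical network provisioned as a single container:
tenant_net = NetworkContainer(
    name="tenant-a-net",
    vlans=["vlan-101", "vlan-102"],
    subnets=["10.1.1.0/24", "10.1.2.0/24"],
    firewall_contexts=["tenant-a-ctx"],
)
print(tenant_net.building_blocks())
```

However complex the topology becomes, the provisioning workflow only ever has to reference the one container object.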
Modeling capabilities is an important step when on-boarding resources, as capabilities have a direct impact on how a resource can be used in the provisioning process. If you look at the Vblock definition again, you can see that it supports Layer 2 and Layer 3 network connectivity as well as NAS and SAN storage and the ESX hypervisor, so you can already see that it won't support the majority of the design patterns discussed before, as they require a load balancer. If you were looking for a POD to support the instantiation of a design pattern that required a load balancer, you could query all the PODs that had been on-boarded and look for a Load-Balancing=yes capability. If this capability doesn't exist in any POD in the cloud infrastructure, the cloud provider infrastructure administrator would need to create a new POD with a load balancer, or add a load balancer to an existing POD and update the capabilities supported by that POD.
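The capability query described above can be sketched in a few lines. This is a minimal illustration under assumed names: the POD identifiers, the capability keys, and the `find_pods` helper are all invented for the example and do not correspond to a real provisioning API.

```python
# Hypothetical sketch: each on-boarded POD advertises a capability map,
# and the provisioning system filters for PODs that satisfy a design
# pattern's requirements. POD names and keys are illustrative only.

pods = {
    "pod-1": {"Layer2": "yes", "Layer3": "yes", "SAN": "yes",
              "NAS": "yes", "Hypervisor": "ESX", "Load-Balancing": "no"},
    "pod-2": {"Layer2": "yes", "Layer3": "yes", "SAN": "yes",
              "NAS": "yes", "Hypervisor": "ESX", "Load-Balancing": "yes"},
}

def find_pods(required):
    """Return the PODs whose capabilities satisfy every requirement."""
    return [name for name, caps in pods.items()
            if all(caps.get(key) == value for key, value in required.items())]

# A design pattern that needs a load balancer only matches pod-2:
print(find_pods({"Load-Balancing": "yes"}))  # ['pod-2']
```

If the query returns an empty list, that corresponds to the case in the text where the administrator must create a new POD, or retrofit an existing one, and update its advertised capabilities.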
Taking this concept further, if you configure the EMC storage in the Vblock to support several tiers of storage—Gold, Silver, and Bronze, for instance—you could simply model these capabilities at the POD level and do an initial check when provisioning a service to find the POD that supports a particular tier of storage. Capabilities can also be used to drive behavior during the lifetime of the service. For instance, if you want to allow a tenant the ability to reboot a resource, you can add a reboot=true capability; this could be added to the class in the data model that represents a virtual machine. You probably wouldn't want to add this capability to a storage array or network device, as rebooting one of these resources would affect multiple users and should only be done by the cloud operations team.
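Both uses of capabilities—placement-time checks on storage tiers and lifecycle permissions such as reboot—can be sketched together. As before, this is an invented illustration: the POD names, tier labels, and resource-class keys are assumptions for the example, not part of any real data model.

```python
# Hypothetical sketch: capabilities modeled at the POD level (storage
# tiers, checked at provisioning time) and at the resource-class level
# (reboot=true on virtual machines but not on shared devices, checked
# during the lifetime of the service). All names are illustrative.

pod_capabilities = {
    "pod-1": {"storage_tiers": ["Silver", "Bronze"]},
    "pod-2": {"storage_tiers": ["Gold", "Silver", "Bronze"]},
}

def pods_with_tier(tier):
    """Initial placement check: which PODs offer the requested tier?"""
    return [pod for pod, caps in pod_capabilities.items()
            if tier in caps["storage_tiers"]]

resource_class_capabilities = {
    "virtual_machine": {"reboot": True},   # affects a single tenant
    "storage_array":   {"reboot": False},  # shared: operations team only
}

def tenant_can_reboot(resource_class):
    """Lifecycle check: may a tenant reboot this class of resource?"""
    return resource_class_capabilities[resource_class]["reboot"]

print(pods_with_tier("Gold"))                # ['pod-2']
print(tenant_can_reboot("virtual_machine"))  # True
print(tenant_can_reboot("storage_array"))    # False
```

The same capability mechanism thus serves two purposes: steering initial placement and gating what a tenant may do to a resource once it is running.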
