In this article we will go through an overview of this PoC and introduce some use cases that we will dig deeper into later. The blog series consists of the following parts:
- Part 1: Geo-Location Based Traffic Management with F5 BIG-IP for vRA (PoC) (this article)
- Part 2: Infrastructure Setup
- Part 3: F5 BIG-IP LTM
- Part 4: F5 BIG-IP GTM
- Part 5: Infrastructure Setup (continued)
- Part 6: Use Case 1
- Part 7: Use Case 2
Lab Environment
The logical design of this lab can be seen HERE.
Introduction
The increasingly global nature of content and the migration of multimedia content distribution from typical broadcast channels to the Internet make Geo-Location a requirement for enforcing access restrictions and for providing the basis for traditional performance-enhancing and disaster recovery solutions.
Cloud computing is also of rising importance, introducing new challenges to IT in terms of global load balancing configurations. Hybrid architectures that attempt to seamlessly use public and private clouds for scalability, disaster recovery, and availability purposes can leverage accurate Geo-Location data to enable a broader spectrum of functionality and options.
Geo-Location improves the performance and availability of your applications by intelligently directing users to the closest or best-performing physical, virtual, or cloud environment.
VMware vRealize Automation Center (vRA) will be one of the products in this Proof of Concept (PoC) for which use cases for load balancing and Geo-Location traffic management will be presented. This PoC can also be used as a test environment for any other product that supports F5 BIG-IP Local Traffic Manager (LTM) and F5 BIG-IP Global Traffic Manager (GTM). After completing this PoC you should have the lab environment needed and feel comfortable enough to set up more advanced configurations on your own, according to your business needs and functional requirements.
For additional reading on Geo-Location and related topics, visit: Geolocation: Risk, Issues and Strategies and F5 Global Traffic Manager (GTM)
One of the typical scenarios involving Geo-Location based traffic management is traffic redirection based on the source of the DNS query.
Consider a software development company called Kaloferov.com that is planning to implement vRealize Automation Center (vRA) to provide private cloud access to its employees so they can develop and test their applications. In the rest of the article I will mostly refer to the globally available vRA private cloud application as GeoApp. Our GeoApp must provide access to the company's private cloud infrastructure from multiple cities across the globe.
The Kaloferov.com Company has datacenters in two locations: Los Angeles (LA) and New York (NY). Each datacenter will host instance(s) of the GeoApp (vRealize Automation Center). Development (DEV) and Quality Engineering (QE) teams from both locations will access the GeoApp and use it to develop and test their homegrown software products.
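As an early taste of how this is done on the F5 side, BIG-IP GTM expresses such decisions as topology records, which map the source of a DNS query (the requesting LDNS) to a preferred destination. Here is a minimal sketch in tmsh (the BIG-IP shell); the subnets and datacenter object names are assumptions made up for this illustration, not values from the lab:

    # Assumed: LA clients resolve through an LDNS in 10.10.0.0/16, NY clients
    # through 10.20.0.0/16, and GTM datacenter objects DC_LA and DC_NY exist.
    create gtm topology ldns: subnet 10.10.0.0/16 server: datacenter /Common/DC_LA score 100
    create gtm topology ldns: subnet 10.20.0.0/16 server: datacenter /Common/DC_NY score 100

With records like these in place, a DNS query arriving from an LA subnet is answered with an address in the Los Angeles datacenter, and a query from a NY subnet with one in New York.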
We will introduce two commonly seen load balancing and traffic management use cases.
Use Case 1 (UC1)
The Kaloferov.com Company has made the following design decisions to lay down the foundations for their private cloud infrastructure:
- UC1-D1: Deploy 2 GeoApp instances using VMware vRealize Automation Center (vRA) minimal setup in the Los Angeles datacenter for use by the LA employees.
- UC1-D2: Deploy 2 GeoApp instances using VMware vRealize Automation Center (vRA) minimal setup in the New York datacenter for use by the NY employees.
The company has identified the following requirements for their GeoApp implementation (a configuration sketch follows below):
- UC1-R1: The GeoApp must be accessible to all employees, regardless of whether they are in the Los Angeles or the New York datacenter, under a single common URL: geoapp.f5.vmware.com.
- UC1-R2: To ensure that the employees get a responsive experience from the GeoApp (vRA) private cloud portal website, the company requires that the Los Angeles employees be redirected to the Los Angeles datacenter and the New York employees be redirected to the New York datacenter.
- UC1-R3: The workload of the teams must be distributed across their dedicated local GeoApp (vRA) instances.
This is roughly represented by the diagram below:
- UC1-R4: In case of failure of a GeoApp instance, the traffic should be load balanced between available instances in the local datacenter.
This is roughly represented by the diagram below:
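To make these requirements more concrete, below is a minimal tmsh sketch of the BIG-IP objects that could satisfy them. All object names, IP addresses, and the monitor choice are assumptions for illustration only, not the values used in this lab:

    # On the Los Angeles LTM: pool the two local GeoApp (vRA) instances and
    # load balance between them (UC1-R3); if one fails, its health monitor
    # removes it and traffic goes to the remaining instance (UC1-R4)
    create ltm pool pl_geoapp_la members add { 10.10.1.11:443 10.10.1.12:443 } monitor https load-balancing-mode round-robin
    create ltm virtual vs_geoapp_la destination 10.10.1.100:443 ip-protocol tcp profiles add { tcp } pool pl_geoapp_la

    # On the GTM: a single wide IP answers the common URL (UC1-R1) and uses the
    # Topology load balancing method to keep users local (UC1-R2); ltm_la and
    # ltm_ny are assumed GTM server objects representing the two LTMs
    create gtm pool pl_geoapp members add { ltm_la:vs_geoapp_la ltm_ny:vs_geoapp_ny } load-balancing-mode topology
    create gtm wideip geoapp.f5.vmware.com pools add { pl_geoapp }

The New York datacenter mirrors the Los Angeles side with its own pool and virtual server.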
Use Case 2 (UC2)
The Kaloferov.com Company has made the following design decisions to lay down the foundations for their private cloud infrastructure:
- UC2-D1: Deploy one GeoApp instance using the VMware vRealize Automation Center (vRA) distributed setup in the Los Angeles datacenter for use by the LA employees. In this case the GeoApp can be seen as a 3-tier application, with two GeoApp nodes in each tier.
- UC2-D2: Deploy one GeoApp instance using the VMware vRealize Automation Center (vRA) distributed setup in the New York datacenter for use by the NY employees. In this case the GeoApp can be seen as a 3-tier application, with two GeoApp nodes in each tier.
The company has identified the following requirements for their GeoApp implementation (a configuration sketch follows below):
- UC2-R1: The GeoApp must be accessible to all employees, regardless of whether they are in the Los Angeles or the New York datacenter, under a single common URL: geoapp-uc2.f5.vmware.com.
- UC2-R2: To ensure that the employees get a responsive experience from the GeoApp (vRA) private cloud portal website, the company requires that the Los Angeles employees be redirected to the Los Angeles datacenter and the New York employees be redirected to the New York datacenter.
- UC2-R3: The workload must be distributed across the tier nodes of the local GeoApp (vRA) instance.
This is roughly represented by the diagram below:
- UC2-R4: In case of failure of a single tier node in a given GeoApp tier, the workload should be forwarded to the remaining tier node in the local datacenter.
This is roughly represented by the diagram below:
- UC2-R5: In case of failure of all tier nodes in a given GeoApp tier, the workload of all tiers should be forwarded to the GeoApp instance in the remote datacenter.
This is roughly represented by the diagram below:
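Again, a rough tmsh sketch can illustrate the intended behavior; every name below is hypothetical. Intra-tier failover (UC2-R4) is handled on the LTM exactly as in UC1, by pooling the nodes of each tier, so this sketch focuses on the datacenter failover (UC2-R5):

    # One GTM pool per datacenter; each member is the local LTM virtual server
    # fronting the GeoApp tiers (ltm_la and ltm_ny are assumed GTM server objects)
    create gtm pool pl_geoapp_uc2_la members add { ltm_la:vs_geoapp_uc2_la }
    create gtm pool pl_geoapp_uc2_ny members add { ltm_ny:vs_geoapp_uc2_ny }

    # Topology keeps users on their local pool (UC2-R2); if every member of the
    # local pool is down, the wide IP falls through to the remote pool (UC2-R5)
    create gtm wideip geoapp-uc2.f5.vmware.com pool-lb-mode topology pools add { pl_geoapp_uc2_la pl_geoapp_uc2_ny }

    # A GTM distributed application can tie the tiers together so that one fully
    # failed tier takes the whole local instance out of rotation; the
    # dependency-level value here is an assumption -- verify it against the
    # tmsh reference for your GTM version
    create gtm distributed-app da_geoapp_uc2 wideips add { geoapp-uc2.f5.vmware.com } dependency-level datacenter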
Satisfying the requirements for these use cases involves implementing two traffic management techniques:
- Load balancing
- Geo-Location Based Traffic Management
There are many software and hardware products that provide load balancing and/or Geo-Location capabilities, but we will be focusing on two of them to accomplish our goal:
- For Load balancing: F5 BIG-IP Local Traffic Manager (LTM)
- For Geo-Location: F5 BIG-IP Global Traffic Manager (GTM)
Based on the deployment method you choose and your functional requirements, you will then have to configure the following aspects of the F5 BIG-IP devices, which will affect how your traffic is managed (a brief illustration follows the list):
- F5 BIG-IP LTM Pool
- F5 BIG-IP LTM Pool Load Balancing Method
- F5 BIG-IP LTM Virtual Servers
- F5 BIG-IP GTM Pool
- F5 BIG-IP GTM Pool Load Balancing Method (Preferred, Alternate, Fallback)
- F5 BIG-IP GTM Wide IP Pool
- F5 BIG-IP GTM Wide IP Pool Load Balancing Method
- F5 BIG-IP GTM Distributed Applications Dependency Level
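For example, the GTM pool load balancing method is really a three-stage decision. In tmsh it looks roughly like this (the pool name and method choices are illustrative only):

    # Preferred:  Topology       -- consult the topology records first
    # Alternate:  Round Robin    -- used if the preferred method yields no answer
    # Fallback:   Return to DNS  -- hand the query back to normal DNS resolution
    modify gtm pool pl_geoapp load-balancing-mode topology alternate-mode round-robin fallback-mode return-to-dns

We will walk through each of these objects when we configure the LTM and GTM in Parts 3 and 4.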
Implementing Use Case 1 (UC1) with GTM and LTM is roughly represented by the diagram below:
Implementing Use Case 2 (UC2) with GTM and LTM is roughly represented by the diagram below:
Now that we have identified the use cases, let's see how we can build our solution.
This blog article is split into a few smaller posts:
- Part 1: Geo-Location Based Traffic Management with F5 BIG-IP for VMware Products (PoC)
- Part 2: Infrastructure Setup
- Part 3: F5 BIG-IP LTM
- Part 4: F5 BIG-IP GTM
- Part 5: Infrastructure Setup (continued)
- Part 6: Use Case 1
- Part 7: Use Case 2