Geo-Location Based Traffic Management with F5 BIG-IP for VMware Products (PoC): Use Case 1 (SKKB1018)

In this article we will take a look at our first use case. The use case describes a geo-location based traffic management scenario for vRealize Automation Center (vRA) instances spread across datacenters. The vRA instances use the minimal deployment method.

Part 1: Geo-Location Based Traffic Management with F5 BIG-IP for vRA (PoC)
Part 2: Infrastructure Setup
Part 3: F5 BIG-IP LTM
Part 4: F5 BIG-IP GTM
Part 5: Infrastructure Setup (continued)
Part 6: Use Case 1 (this article)
Part 7: Use Case 2

Lab Environment

The logical design of this lab can be seen HERE.

 

Use Case 1

GeoApp Requirements

The Kaloferov.com Company has made a design decision and is planning to implement the following to lay down the foundations for its private cloud infrastructure:

  • UC1-D1: Deploy 2 GeoApp instances using VMware vRealize Automation Center (vRA) minimal setup in the Los Angeles datacenter for use by the LA employees.
  • UC1-D2: Deploy 2 GeoApp instances using VMware vRealize Automation Center (vRA) minimal setup in the New York datacenter for use by the NY employees.

The company has identified the following requirements for their GeoApp implementation:

  • UC1-R1: The GeoApp must be accessible to all employees, regardless of whether they are in the Los Angeles or the New York datacenter, under a single common URL, geoapp.f5.vmware.com.
  • UC1-R2: To ensure that the employees get a responsive experience from the GeoApp (vRA) private cloud portal website, the company requires that the Los Angeles employees be redirected to the Los Angeles datacenter and the New York employees be redirected to the New York datacenter.
  • UC1-R3: The workload of the teams must be distributed across their dedicated local GeoApp (vRA) instances.

This is roughly represented by the diagram below:

  • UC1-R4: In case of failure of a GeoApp instance, the traffic should be load balanced between available instances in the local datacenter.

This is roughly represented by the diagram below:

Implementing Use Case 1 (UC1) with GTM and LTM is roughly represented by the diagram below:

 

vRA Deployment

To satisfy the GeoApp requirements for this use case we will deploy vRA instances as follows:

  • 2x minimal deployments of vRA in the LA Datacenter: Identity Appliance, vRA Web, and vRA IaaS Server.  
  • 2x minimal deployments of vRA in the NY Datacenter: Identity Appliance, vRA Web, and vRA IaaS Server. 
  • Add a Node to the LA LTM device named geoapp-la-01 that corresponds to a vRA Web server in LA.
  • Add a Node to the LA LTM device named geoapp-la-02 that corresponds to a vRA Web server in LA.
  • Add a Node to the NY LTM device named geoapp-ny-01 that corresponds to the vRA Web server in NY.
  • Add a Node to the NY LTM device named geoapp-ny-02 that corresponds to the vRA Web server in NY.
  • The vRA component servers in the LA datacenter are connected to the F5-internal-A-01 (NSX VLAN) logical network.
  • The vRA component servers in the NY datacenter are connected to the F5-internal-B-01 (NSX VLAN) logical network.

This is roughly represented by the diagram below:

The GeoApp (vRA Web) VMs have the following characteristics:

F5 Name: geoapp-la-01
IP: 172.16.60.50
Netmask: 255.255.255.0
GW: 172.16.60.1
DNS: 192.168.1.1
Network: F5-Internal-A-01 (NSX VLAN)

F5 Name: geoapp-la-02
IP: 172.16.60.51
Netmask: 255.255.255.0
GW: 172.16.60.1
DNS: 192.168.1.1
Network: F5-Internal-A-01 (NSX VLAN)

F5 Name: geoapp-ny-01
IP: 172.16.70.50
Netmask: 255.255.255.0
GW: 172.16.70.1
DNS: 192.168.1.1
Network: F5-Internal-B-01 (NSX VLAN)

F5 Name: geoapp-ny-02
IP: 172.16.70.51
Netmask: 255.255.255.0
GW: 172.16.70.1
DNS: 192.168.1.1
Network: F5-Internal-B-01 (NSX VLAN)

Note that the F5 Name might not correspond to the actual DNS hostname of the server.
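The node additions above can be sketched as data plus the tmsh commands that would register each node on its local LTM device. This is a hypothetical helper, not output from the lab: the names and IPs come from the table above, but verify the `create ltm node` syntax against your TMOS version.

```python
# GeoApp node inventory from the table above: (LTM device, F5 node name, IP).
nodes = [
    ("f5-ltm-a-01", "geoapp-la-01", "172.16.60.50"),
    ("f5-ltm-a-01", "geoapp-la-02", "172.16.60.51"),
    ("f5-ltm-b-01", "geoapp-ny-01", "172.16.70.50"),
    ("f5-ltm-b-01", "geoapp-ny-02", "172.16.70.51"),
]

def node_cmd(name, ip):
    # Hypothetical tmsh form of the GUI node creation.
    return f"create ltm node {name} address {ip}"

for device, name, ip in nodes:
    print(f"[{device}] tmsh {node_cmd(name, ip)}")
```

Each command targets the LTM device local to that node's datacenter, matching the per-site node placement described above.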

In this case we are using minimal vRA deployments in each site; therefore we are not interested in load balancing multiple vRA Web, vRA IaaS Web, or vRA IaaS Manager Service servers, because they are not part of a distributed vRA deployment.

 

 

LTM Configuration

Monitors

We are only adding the vRA VA Servers as Nodes.
In this case we will be using the following LTM Monitor, which we created earlier in this article:

  • vra-https-va-web

If you haven’t created it, now is the time to do so.

 

Nodes

We’ve already added the nodes earlier in this article. If you haven’t done so, now is the time.
In this case the nodes have the following purpose:

  • geoapp-la-01 – vRA VA Server added to the LA LTM device.
  • geoapp-la-02 – vRA VA Server added to the LA LTM device.
  • geoapp-ny-01 – vRA VA Server added to the NY LTM device.
  • geoapp-ny-02 – vRA VA Server added to the NY LTM device.

 

 

Pools

In this case we will create one pool per LTM device, which will contain all our GeoApp nodes in the corresponding datacenter.

This is roughly represented by the diagram below:

Go to the f5-ltm-a-01 LTM device and navigate to [Local Traffic > Pools]
Create a Pool with the following properties:

Name: pool-geoapp-la
Health Monitors: vra-https-va-web
New Members (Node List): geoapp-la-01, geoapp-la-02
(Note that we explained earlier how to create the monitor and what properties it has.)

Leave all other properties to their default values.

Go to the f5-ltm-b-01 LTM device and navigate to [Local Traffic > Pools]
Create a Pool with the following properties:

Name: pool-geoapp-ny
Health Monitors: vra-https-va-web
New Members (Node List): geoapp-ny-01, geoapp-ny-02
(Note that we explained earlier how to create the monitor and what properties it has.)

Leave all other properties to their default values.
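The two pool definitions above can be sketched as tmsh commands generated from Python. The member port 443 is an assumption (the vRA portal is served over HTTPS); the monitor and member names are the ones used in this article, and the exact tmsh member syntax should be checked against your TMOS version.

```python
def pool_cmd(pool, monitor, members, port=443):
    # Hypothetical tmsh equivalent of the GUI pool creation above.
    member_str = " ".join(f"{m}:{port}" for m in members)
    return (f"create ltm pool {pool} monitor {monitor} "
            f"members add {{ {member_str} }}")

# One pool per LTM device, as described above.
print(pool_cmd("pool-geoapp-la", "vra-https-va-web", ["geoapp-la-01", "geoapp-la-02"]))
print(pool_cmd("pool-geoapp-ny", "vra-https-va-web", ["geoapp-ny-01", "geoapp-ny-02"]))
```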

Virtual Servers

In this case we will create one virtual server per LTM device, which will act as an access point to the GeoApp for the corresponding datacenter.

This is roughly represented by the diagram below:

Go to both LTM devices and navigate to [Local Traffic > Virtual Servers].
Create a Virtual Server on each device with the following properties (shown for the LA device; on the NY device use vs-geoapp-ny with destination address 172.16.70.100 and pool pool-geoapp-ny):

Name: vs-geoapp-la
Destination Address: 172.16.60.100
Service Port: 443 (HTTPS)
Type: Standard
Source Address Translation: Auto Map
Default Pool: pool-geoapp-la
Default Persistence Profile: source_addr_carp

Leave all other properties to their default values.
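The virtual server creation can be sketched the same way. The LA values come from the properties above; the NY name (vs-geoapp-ny) and address (172.16.70.100) appear later in this article. The persistence and SNAT syntax shown is a hypothetical tmsh form and may differ between TMOS versions.

```python
def vs_cmd(name, dest, pool):
    # Hypothetical tmsh equivalent of the GUI virtual server creation.
    return (f"create ltm virtual {name} destination {dest}:443 "
            f"ip-protocol tcp pool {pool} "
            f"source-address-translation {{ type automap }} "
            f"persist replace-all-with {{ source_addr_carp }}")

print(vs_cmd("vs-geoapp-la", "172.16.60.100", "pool-geoapp-la"))
print(vs_cmd("vs-geoapp-ny", "172.16.70.100", "pool-geoapp-ny"))
```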

 

GTM Configuration

Pools

In this case we will be configuring two pools. One containing the LA GeoApp Virtual Server as a member and one containing the NY GeoApp Virtual Server as a member.

This is roughly represented by the diagram below:


Go to the f5-gtm-a-01 GTM device and navigate to [DNS > GSLB > Pools].
Create two Pools with the following properties:

Name: gtm-pool-geoapp-la
Member List: vs-geoapp-la

Name: gtm-pool-geoapp-ny
Member List: vs-geoapp-ny

Leave all other properties to their default values.
You should now have created the pools:

If you have previously successfully configured a GTM Synchronization Group, these changes will be synchronized to the other GTM members. If you haven’t, SSH to the f5-gtm-b-01 GTM device and run the following command to synchronize the changes:

run gtm gtm_add -a admin@172.16.61.2

The same configuration changes should appear on the f5-gtm-b-01 GTM device.

 

Wide IPs (WIP)

In this example we will be creating a Wide IP called geoapp.f5.vmware.com containing as members both GTM pools we created earlier.
This is roughly represented by the diagram below:


Go to the f5-gtm-a-01 GTM device and navigate to [DNS > GSLB > Wide IPs].
Create a Wide IP with the following properties:

Name: geoapp.f5.vmware.com
Pool List: gtm-pool-geoapp-ny, gtm-pool-geoapp-la

Leave all other properties to their default values.
You should now have created the Wide IP:

If you have previously successfully configured a GTM Synchronization Group, these changes will be synchronized to the other GTM members. If you haven’t, SSH to the f5-gtm-b-01 GTM device and run the following command to synchronize the changes:

run gtm gtm_add -a admin@172.16.61.2

The same configuration changes should appear on the f5-gtm-b-01 GTM device.
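For reference, a hedged sketch of tmsh-style commands for the GTM objects created above. The `server:virtual-server` member notation and the `gtm pool`/`gtm wideip` object names vary between TMOS versions (newer releases use per-record-type objects such as `gtm pool a`), and the GTM server object names f5-ltm-a-01/f5-ltm-b-01 are assumptions, so treat this as illustrative only.

```python
def gtm_pool_cmd(pool, server, vs):
    # Hypothetical tmsh form: one virtual server member per GTM pool.
    return f"create gtm pool {pool} members add {{ {server}:{vs} }}"

def wideip_cmd(name, pools):
    # Hypothetical tmsh form: Wide IP referencing both GTM pools.
    return f"create gtm wideip {name} pools add {{ {' '.join(pools)} }}"

print(gtm_pool_cmd("gtm-pool-geoapp-la", "f5-ltm-a-01", "vs-geoapp-la"))
print(gtm_pool_cmd("gtm-pool-geoapp-ny", "f5-ltm-b-01", "vs-geoapp-ny"))
print(wideip_cmd("geoapp.f5.vmware.com", ["gtm-pool-geoapp-ny", "gtm-pool-geoapp-la"]))
```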

 

Topology

No additional Topology configuration is needed for this use case other than the one already described in this article.

 

 

Distributed Applications

No additional Distributed Applications configuration is needed for this use case other than the one already described in this article.

 

 

Testing GeoApp Connectivity

Assuming you have followed this article and have used the default load balancing policies, you should be able to access geoapp.f5.vmware.com directly and test your deployment. The default configuration will most likely forward you, via the Global Availability/Round Robin load balancing methods, to one of the available GeoApp instances.

To test the setup go to vm-a-01, open a browser and navigate to https://geoapp.f5.vmware.com/vcac

In this case the F5 BIG-IP forwarded me to vra-la-dev-01-web, which is actually the server behind the geoapp-la-01 F5 BIG-IP Node that I added earlier on the f5-ltm-a-01 LTM device.

If you SSH to either of the Client VMs (vm-a-01 or vm-b-01) and run [traceroute geoapp.f5.vmware.com], you will notice output similar to the following, which returns sometimes the LTM Virtual Server IP (172.16.60.100) of the LA GeoApp and sometimes the LTM Virtual Server IP (172.16.70.100) of the NY GeoApp.
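The alternating answers can be modeled in a few lines of Python. This is a simulation of the default Round Robin Wide IP behavior, not BIG-IP code; real answers also depend on DNS caching, TTLs, and pool-level settings.

```python
# Simulate the GTM handing out each pool's VIP in turn under Round Robin.
from itertools import cycle

vips = ["172.16.60.100", "172.16.70.100"]  # LA and NY LTM virtual servers
resolver = cycle(vips)

# Four successive lookups alternate between the two datacenters.
answers = [next(resolver) for _ in range(4)]
print(answers)  # LA, NY, LA, NY
```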

Here is a quick nslookup output to show you the DNS entries behind these IPs:

Now that we have verified our GeoApp is accessible, let’s see how to fulfill the requirements that the company has identified for Use Case 1 (UC1).

 

 

Meeting the GeoApp Requirements

Let’s take a look at UC1-R1:

  • UC1-R1: The GeoApp must be accessible to all employees, regardless of whether they are in the Los Angeles or the New York datacenter, under a single common URL, geoapp.f5.vmware.com.

We previously tested the connectivity successfully, so we are already meeting this requirement.
As we have seen, the GeoApp is accessible to all employees, regardless of whether they are in the Los Angeles or the New York datacenter, under a single common URL, geoapp.f5.vmware.com.

We have now successfully met UC1-R1.

 

 

GTM Wide IP Level Load Balancing

Let’s take a look at UC1-R2:

  • UC1-R2: To ensure that the employees get a responsive experience from the GeoApp (vRA) private cloud portal website, the company requires that the Los Angeles employees be redirected to the Los Angeles datacenter and the New York employees be redirected to the New York datacenter.

To accomplish this we will use Load Balancing at the GTM Wide IP level.

We will modify the Wide IP Load Balancing Method for the geoapp.f5.vmware.com Wide IP on our GTM devices and select Topology as the preferred method.

We have already done the first step to achieve this, having identified and created the topology regions and records which will:

  • Forward clients on our Los Angeles external network to the GeoApp Pool in LA.
  • Forward clients on our New York external network to the GeoApp Pool in NY.

If you haven’t configured topology regions and records you must do so before continuing.  

To configure the Wide IP Load Balancing Method, go to the f5-gtm-a-01 GTM device and navigate to [DNS > GSLB > Wide IPs > Wide IP List].
Click on the geoapp.f5.vmware.com Wide IP and navigate to [Pools].
Change the Load Balancing Method to Topology and click Update.
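The effect of the Topology method can be simulated with Python's ipaddress module. The client subnets below are placeholders standing in for the topology regions and records created earlier in this series (assumptions, not the lab's actual external networks); only the two VIPs come from this article.

```python
# Simulate topology-based Wide IP resolution: match the client (LDNS)
# address against topology records and answer with that datacenter's VIP.
import ipaddress

topology_records = [
    (ipaddress.ip_network("192.168.10.0/24"), "172.16.60.100"),  # assumed LA external net -> LA VIP
    (ipaddress.ip_network("192.168.20.0/24"), "172.16.70.100"),  # assumed NY external net -> NY VIP
]

def resolve(client_ip):
    addr = ipaddress.ip_address(client_ip)
    for net, vip in topology_records:
        if addr in net:
            return vip
    return topology_records[0][1]  # no match: fall back to the first pool

print(resolve("192.168.10.25"))  # LA client -> 172.16.60.100
print(resolve("192.168.20.25"))  # NY client -> 172.16.70.100
```

With the Topology method in place, every lookup from a given site returns the same local VIP instead of alternating between datacenters.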

We have now successfully met UC1-R2.

There is one more place on the GTM device where the Load Balancing Method can be configured which is worth noting: the GTM Pool level. To configure this, navigate to [DNS > GSLB > Pools > Pool List], select one of the GeoApp pools, and navigate to [Members].

 

 

LTM Pool Level Load Balancing

Let’s take a look at UC1-R3:

  • UC1-R3: The workload of the teams must be distributed across their dedicated local GeoApp (vRA) instances.

To accomplish this we will have to modify the Load Balancing Method for the LTM GeoApp Pools on each LTM device.

To configure the Load Balancing Method, go to each LTM device and navigate to [Local Traffic > Pools > Pool List].
Click on the GeoApp Pool and navigate to [Members].
Change the Load Balancing Method to Least Connections (member) and click Update.

Make sure to repeat the steps on each LTM device.
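The Least Connections (member) choice can be sketched as below, including the monitor-driven member exclusion that satisfies UC1-R4 further down. The connection counts are purely illustrative; this models the selection logic, not the BIG-IP implementation.

```python
# Model of an LTM pool: members marked down by the vra-https-va-web
# monitor are excluded, and the healthy member with the fewest active
# connections is chosen.
members = [
    {"name": "geoapp-la-01", "connections": 12, "up": True},
    {"name": "geoapp-la-02", "connections": 7,  "up": True},
]

def pick_member(pool):
    healthy = [m for m in pool if m["up"]]
    return min(healthy, key=lambda m: m["connections"])

print(pick_member(members)["name"])  # geoapp-la-02 (fewest connections)

members[1]["up"] = False  # monitor marks geoapp-la-02 down
print(pick_member(members)["name"])  # geoapp-la-01: traffic fails over locally
```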

We have now successfully met UC1-R3.

Let’s take a look at UC1-R4:

  • UC1-R4: In case of failure of a GeoApp instance, the traffic should be load balanced between available instances in the local datacenter.

To accomplish this we do not have to make additional configuration changes. If one of our GeoApp instances is not available and the LTM Monitor detects this, it will forward the traffic to the rest of the available healthy pool members.

We have now successfully met UC1-R4.

 

 

Testing the Use Case

Previously we saw that when running [traceroute geoapp.f5.vmware.com] from our Client VMs (vm-a-01 and vm-b-01), we got alternating GeoApp IPs in response due to the default LTM/GTM load balancing methods (Global Availability/Round Robin). Now that we have shaped the traffic management, let’s see if we have achieved the desired use case goal.

SSH to each of the Client VMs (vm-a-01 and vm-b-01), run [traceroute geoapp.f5.vmware.com], and observe the output.

The Client located in LA (vm-a-01) will only receive as a response the Virtual Address (172.16.60.100) of the LA GeoApp Pool.

If you shut down one of the GeoApp VMs in the LA datacenter, this should not affect the routing, and you should still be able to access the GeoApp (vRA) portal.

The Client located in NY (vm-b-01) will only receive as a response the Virtual Address (172.16.70.100) of the NY GeoApp Pool.

If you shut down one of the GeoApp VMs in the NY datacenter, this should not affect the routing, and you should still be able to access the GeoApp (vRA) portal.

 
