After Dane Young wrote an excellent blog post on how to set up a load-balanced, multi-node Citrix Receiver StoreFront server group for use with the Citrix Cloud Gateway, I used his information and screenshots to build a similar environment. The only difference from Dane's setup is that we were using Cisco ACEs for load balancing. We created a setup with two Cisco ACE 4710s in an HA pair (hot standby) in each of the two available data centers. The ACE can be divided into virtual contexts, so we were able to create a separate management configuration and dedicated resources for the SBC/VDI configuration.
To configure this, please follow Dane's blog, as it contains all the screenshots and information needed to complete the setup. However, instead of configuring the NetScaler(s) like this:
Similar to Access Gateways, the actual load balancer setup is rather involved, so I will just cover the pieces that are important. 1) Create an LB VIP that ties to the HTTP(S) (80/443) services and IP addresses of your Receiver StoreFront servers. I simply used TCP for the monitor:
2) On the Method and Persistence tab, I used Method: Least Connection and Persistence: COOKIEINSERT with a backup persistence of SOURCEIP. Note: my storefront.vm.2k8 DNS record points to the load balancer VIP.
As I said, you can configure different virtual contexts, and we did: besides the Admin virtual context, we are also using the ACEs for the new Exchange environment.
The Cisco ACE is configured to listen on virtual IP addresses (VIPs) for traffic that has to be load balanced. These VIPs are virtual 'servers' within a configured virtual context. Layer 3/4 and Layer 7 policy maps are assigned to these VIPs, so you can configure what the ACE has to do with the incoming traffic for each VIP. The VIP is tied to a server farm that contains the real IP addresses (these can be VMs on a hypervisor or physical servers).
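As a sketch of how these pieces hang together in the ACE CLI (all names, addresses, and ports here are placeholder examples, not our production values):

```
! Class map that matches traffic destined for the VIP
class-map match-all STOREFRONT-VIP-CM
  2 match virtual-address 10.10.10.100 tcp eq https

! Layer 7 load-balancing policy: send matched traffic to the server farm
policy-map type loadbalance first-match STOREFRONT-LB-PM
  class class-default
    serverfarm STOREFRONT-FARM

! Multi-match policy that binds the VIP class to the LB policy
policy-map multi-match CLIENT-VIPS-PM
  class STOREFRONT-VIP-CM
    loadbalance vip inservice
    loadbalance policy STOREFRONT-LB-PM
    loadbalance vip icmp-reply active
```

The multi-match policy map is later applied to the client-facing VLAN interface with a service-policy statement, which is what actually activates the VIP.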
Health probes are active on the real servers; these probes check ports or services and make sure the web services are available before traffic is routed to one of the nodes. For this group we configured two probes:
- Ping probe: basic availability
- HTTP probe: availability of the web service
When a probe returns a result other than the one configured, the server is placed in an "Out of Service" state and no additional traffic will be routed to that node.
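The two probes can be defined along these lines (the intervals, fail counts, and request URL are example values; point the HTTP probe at whatever URL your web service actually serves):

```
! Basic reachability check
probe icmp PING-PROBE
  interval 15
  faildetect 3

! Web service check: expect an HTTP 200 on a GET
probe http HTTP-PROBE
  port 80
  interval 15
  request method get url /
  expect status 200 200
```

The `expect status` range is what turns a reachable-but-broken web server into an out-of-service real server, rather than just checking that the port answers.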
So we configured real servers, but there is no mechanism yet to combine them into a clustered resource, so we have to create server farms. Server farms are groups of real servers whose resources have to be load balanced. In this case we added two web servers with the Cloud Gateway Express software installed on them.
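A minimal sketch of the real servers and the server farm that groups them, with the probes attached (hostnames and IP addresses are placeholders):

```
! The two Cloud Gateway Express web servers
rserver host CGE-WEB01
  ip address 10.10.10.11
  inservice
rserver host CGE-WEB02
  ip address 10.10.10.12
  inservice

! Server farm grouping both nodes, with both health probes attached
serverfarm host STOREFRONT-FARM
  probe PING-PROBE
  probe HTTP-PROBE
  rserver CGE-WEB01
    inservice
  rserver CGE-WEB02
    inservice
```

Note that `inservice` is needed twice: once on the rserver object itself and once on its membership in the farm.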
Don't forget to set the VLAN and the NAT pool ID; otherwise your VIP will be reachable and will round-robin ICMP requests based on availability, but it won't load balance the ports you specified. This happened in one of our configurations, and being new to Cisco ACEs, it took me a while to find the 'Advanced View' tab in the top right of the screen. Be sure to set these with the proper values and you're off!
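In CLI terms, this is the part the GUI was hiding from me: the VLAN interface needs the NAT pool and the service policy, and the VIP's class in the multi-match policy map needs to reference that pool (again, VLAN ID, addresses, and names are example values):

```
! Client-facing VLAN: NAT pool for source NAT plus the VIP service policy
interface vlan 100
  ip address 10.10.10.2 255.255.255.0
  nat-pool 1 10.10.10.50 10.10.10.50 netmask 255.255.255.0 pat
  service-policy input CLIENT-VIPS-PM
  no shutdown

! Tie the NAT pool to the VIP class inside the multi-match policy map
policy-map multi-match CLIENT-VIPS-PM
  class STOREFRONT-VIP-CM
    nat dynamic 1 vlan 100
```

Without the `nat dynamic` line the ACE answers on the VIP but never source-NATs the load-balanced connections, which matches the half-working behaviour described above.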
In the end, the base configuration took some time, but adding a VIP, real servers, and server farms was relatively easy; I was able to configure the complete scenario within a couple of hours. If you don't have a NetScaler, or NetScaler knowledge, this could be a valid solution for you too.
Latest posts by Kees Baggerman