XenServer and Nutanix: Insights on the how

With the release of full support for XenServer on the Nutanix Enterprise Cloud Platform, the performance and solutions engineering group has worked hard to create a reference architecture on this topic, and I figured I'd write a blog post on what Nutanix and Citrix did to make this work. It's not just a QA thing; it's been a tightly coupled engineering effort from both sides.


We’ve published the reference architecture on our support portal here: CITRIX XENDESKTOP ON XENSERVER (login required). I wanted to highlight some interesting information from that document, but before I start, let me first go over how Nutanix has implemented its native storage capabilities:


Nutanix Acropolis Architecture

Acropolis does not rely on traditional SAN or NAS storage or expensive storage network interconnects. It combines highly dense storage and server compute (CPU and RAM) into a single platform building block. Each building block is based on industry-standard Intel processor technology and delivers a unified, scale-out, shared-nothing architecture with no single points of failure.

The Nutanix solution has no LUNs to manage, no RAID groups to configure, and no complicated storage multipathing to set up. All storage management is VM-centric, and the Distributed Storage Fabric (DSF) optimizes I/O at the VM virtual disk level. There is one shared pool of storage composed of either all-flash or a combination of flash-based SSDs for high performance and HDDs for affordable capacity. The file system automatically tiers data across different types of storage devices using intelligent data placement algorithms. These algorithms make sure that the most frequently used data is available in memory or in flash for optimal performance. Organizations can also choose flash-only storage for the fastest possible storage performance. The following figure illustrates the data I/O path for a write in a hybrid model (mix of SSD and HDD disks).

[Figure: Data I/O path for a write in a hybrid SSD/HDD model]
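
To make the tiering logic a bit more concrete, here is a minimal conceptual sketch in Python of access-based tier placement: new writes land on flash, and extents that go cold are demoted to HDD. This is purely my own illustration of the concept, not Nutanix code, and the demotion threshold is an arbitrary assumption.

```python
# Conceptual sketch of hybrid-tier data placement (illustration only,
# not Nutanix source code). Hot extents live on SSD; extents that have
# not been accessed recently are migrated down to HDD to free flash.
import time

SSD, HDD = "ssd", "hdd"
COLD_AFTER_SECS = 3600  # assumption: demotion threshold for this sketch

class TieredStore:
    def __init__(self):
        self.tier = {}         # extent_id -> SSD or HDD
        self.last_access = {}  # extent_id -> timestamp of last access

    def write(self, extent_id, data):
        # New writes always land on flash for low latency.
        self.tier[extent_id] = SSD
        self.last_access[extent_id] = time.time()

    def read(self, extent_id):
        # Reads refresh the access time, so hot data stays on SSD.
        self.last_access[extent_id] = time.time()
        return self.tier[extent_id]

    def curate(self):
        # Background job: demote extents that have gone cold to HDD.
        now = time.time()
        for extent_id, seen in self.last_access.items():
            if self.tier[extent_id] == SSD and now - seen > COLD_AFTER_SECS:
                self.tier[extent_id] = HDD
```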

The figure below shows an overview of the Nutanix architecture, including the hypervisor of your choice (AHV, ESXi, Hyper-V, or XenServer), user VMs, the Nutanix storage controller VM (CVM), and its local disk devices. Each CVM connects directly to the local storage controller and its associated disks. Using local storage controllers on each host localizes access to data through the DSF, thereby reducing storage I/O latency. Moreover, having a local storage controller on each node ensures that storage performance as well as storage capacity increase linearly with node addition. The DSF replicates writes synchronously to at least one other Nutanix node in the system, distributing data throughout the cluster for resiliency and availability. Replication factor 2 creates two identical data copies in the cluster, and replication factor 3 creates three identical data copies.

[Figure: Overview of the Nutanix architecture: hypervisor, user VMs, Nutanix CVM, and local disk devices]
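
To illustrate the replication factor behavior, here is a toy sketch (again my own illustration, not actual DSF code) of a synchronous RF write: the acknowledgment only goes back to the VM after the required number of copies has been committed on distinct nodes.

```python
# Conceptual sketch of a synchronous replication-factor (RF) write
# (illustration only, not actual DSF code). The write is acknowledged
# only after `rf` copies are committed on distinct nodes, so the
# cluster tolerates a node failure without data loss at RF2.
class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}  # extent_id -> data

    def commit(self, extent_id, data):
        self.store[extent_id] = data

def replicated_write(local_node, peer_nodes, extent_id, data, rf=2):
    local_node.commit(extent_id, data)  # first copy stays local (data locality)
    copies = 1
    for peer in peer_nodes:             # place remaining copies on other nodes
        if copies == rf:
            break
        peer.commit(extent_id, data)
        copies += 1
    if copies < rf:
        raise IOError("not enough healthy nodes to satisfy RF%d" % rf)
    return "ack"                        # only now does the VM see success

nodes = [Node("A"), Node("B"), Node("C")]
replicated_write(nodes[0], nodes[1:], "extent-1", b"payload", rf=2)
```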

Local storage for each Nutanix node in the architecture appears to the hypervisor as one large pool of shared storage. This allows the DSF to support all key virtualization features. Data localization maintains performance and quality of service (QoS) on each host, minimizing the effect noisy VMs have on their neighbors’ performance. This functionality allows for large, mixed-workload clusters that are more efficient and more resilient to failure than traditional architectures with standalone, shared, and dual-controller storage arrays.
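
Data locality on the read side can be sketched the same way, reusing the toy Node class from the replication sketch above: reads are served from the local replica when one exists, and remotely fetched data can be localized for future reads. Again, this is a conceptual illustration, not DSF code.

```python
# Conceptual sketch of locality-aware reads (illustration only),
# reusing the toy Node class from the replication sketch above.
def read_extent(local_node, peer_nodes, extent_id):
    if extent_id in local_node.store:
        return local_node.store[extent_id]      # local replica: lowest latency
    for peer in peer_nodes:
        if extent_id in peer.store:
            data = peer.store[extent_id]
            local_node.store[extent_id] = data  # localize for future reads
            return data
    raise KeyError(extent_id)
```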

XenServer on the Nutanix Enterprise Cloud Platform

Now for the interesting part: how did we manage to get all that goodness into the XenServer stack?

Nutanix and Citrix XenServer engineers have developed a storage manager plugin, called NutanixSR, to support XenServer. NutanixSR runs on the hosts, providing shared storage to the XenServer pool via storage repositories (SRs) supported by Nutanix CVMs, which draw on physical disks. The plugin seamlessly mounts storage containers as shared SRs on the XenServer hosts immediately upon creation in Prism, without requiring any action by the admin via CLI or XenCenter. As you scale out the cluster, the additional shared storage is immediately available to the entire XenServer pool.
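
Once the containers are mounted, one way to sanity-check the shared SRs across the pool is via the XenAPI Python bindings. The host address and credentials below are placeholders, and printing the SR type is simply to spot the Nutanix-backed repositories; the exact type string NutanixSR reports is an assumption on my part.

```python
# Sketch: list shared storage repositories on a XenServer pool via the
# XenAPI Python bindings, to confirm the Nutanix-backed SRs are present.
# The pool master address and credentials are placeholders.
import XenAPI

session = XenAPI.Session("https://pool-master.example.com")
session.xenapi.login_with_password("root", "password")
try:
    for ref, sr in session.xenapi.SR.get_all_records().items():
        if sr["shared"]:
            print(sr["name_label"], sr["type"], sr["physical_size"])
finally:
    session.xenapi.logout()
```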

Nutanix has added the following paths to the XenServer codebase:

Control path: Nutanix offloads storage repository (SR) and virtual disk image (VDI) operations to Stargate by making API calls. Stargate is responsible for all data management and I/O operations and is the main interface with the hypervisor (via NFS, iSCSI, or SMB). This service runs on every node in the cluster to serve localized I/O.

Data path: Nutanix created a plugin that connects directly to Stargate for I/O requests and bypasses the dom0 kernel storage stack. It provides higher storage throughput and lower latency compared to using an existing XenServer SR type (such as NFS or iSCSI), as well as storage HA in case of CVM upgrade or failure.
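
The storage HA part can be pictured with this conceptual sketch (my own illustration, not the actual plugin code): I/O goes to the local CVM for data locality, and when that CVM is unavailable, say during a rolling upgrade, requests fail over to a healthy CVM elsewhere in the cluster.

```python
# Conceptual sketch of the data path's HA behavior (illustration only,
# not the NutanixSR plugin source). I/O targets the local CVM for the
# lowest latency; if it is unavailable (e.g. during a rolling CVM
# upgrade), I/O is transparently redirected to a healthy remote CVM.
class CVM:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

    def submit_io(self, request):
        if not self.healthy:
            raise ConnectionError(self.name + " is down")
        return "completed by " + self.name

def submit_with_failover(local_cvm, remote_cvms, request):
    try:
        return local_cvm.submit_io(request)   # preferred: data locality
    except ConnectionError:
        for cvm in remote_cvms:               # failover path
            if cvm.healthy:
                return cvm.submit_io(request)
        raise
```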

This means that current XenServer releases ship with the Nutanix native code embedded, so when you install and configure your Nutanix cluster, you’re good to go.

Credits to Steven Poitras @nutanixbible.com for the images used in this post.


Kees Baggerman

Kees Baggerman is a Staff Solutions Architect for End User Computing at Nutanix. Over the years, Kees has driven numerous functional and technical designs, migrations, and implementation engagements for Microsoft, Citrix, and RES infrastructures.
