One of the technologies within the Nutanix stack for desktop virtualisation is Shadow Clones. It leverages data locality to solve a well-known problem: a master image and its updates need to be distributed across hypervisors, which can be time-consuming.
There's an ongoing discussion about MCS vs PVS, and a lot of bright minds have already shared their views on the topic:
- Barry Schiffer: Citrix Provisioning Services vs Machine Creation Services 2014 revision
- Daniel Feller: PVS or MCS – What’s Your Decision Going to Be
- Martijn Bosschaart: Provisioning Services; time to let go
I’m not going to debate whether you should choose PVS or MCS; there’s a use case for both of them. But with the introduction of Shared Hosted Desktops (SHD) on MCS, the use case for MCS got a little bit bigger.
How does MCS work?
For a scenario where desktops are configured as non-persistent, there will be three (possibly four) disks:
- Master image
- Differencing Disk
- Identity Disk
- Personal vDisk (optional)
The Master Image
A master image contains the base image (e.g. the operating system, applications, and settings); all reads will come from the master image.
The Differencing Disk
All writes to the disk are redirected to the Differencing disk. This disk can be either persistent or non-persistent, depending on the type of deployment you’re doing.
When using pooled desktops, the XenDesktop Delivery Controller creates a non-persistent Differencing disk, so all information on the disk is wiped when the VM is rebooted. This process makes sure the VM reverts to its original state and original look and feel.
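To make the read/write split concrete, here is a minimal sketch (my own illustration, not Citrix code) of how a non-persistent Differencing disk layers on top of a read-only master image: reads fall through to the master image for any block that hasn't been written, writes land in the delta, and a reboot of a pooled desktop discards the delta.

```python
class MasterImage:
    """Read-only base disk; all unmodified reads are served from here."""

    def __init__(self, blocks):
        self._blocks = dict(blocks)

    def read(self, block_id):
        return self._blocks[block_id]


class DifferencingDisk:
    """Captures all writes; reads fall through to the master image
    for any block that has not been written yet."""

    def __init__(self, master):
        self._master = master
        self._writes = {}

    def read(self, block_id):
        return self._writes.get(block_id, self._master.read(block_id))

    def write(self, block_id, data):
        self._writes[block_id] = data

    def reboot(self):
        # Non-persistent (pooled) desktop: the delta is discarded,
        # reverting the VM to the state of the master image.
        self._writes.clear()


master = MasterImage({0: "os", 1: "apps"})
vm_disk = DifferencingDisk(master)
vm_disk.write(1, "user change")
print(vm_disk.read(1))   # served from the delta: "user change"
vm_disk.reboot()
print(vm_disk.read(1))   # reverted to the master image: "apps"
```

The Identity disk sits outside this wipe cycle, which is why the VM keeps its computer name and account password across reboots.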
The Identity Disk
An Identity disk is always persistent; it’s fairly small (<20 MB) and is used to store the data containing the identity of the VM (e.g. the computer name and the computer account password).
The Personal vDisk (optional)
A Citrix Personal vDisk (PvD) can be added from the Desktop Delivery Controller, and it can store user-installed applications, user data, and settings.
Ok, we’ve got the disks covered. Now what?
Now that we’ve got the disks covered, we can get down to the issue I’ve been experiencing with MCS: it will copy the master image disk to every datastore configured in the host connection. XenDesktop supports both local and shared datastores, but it lets you choose between two ‘evils’:
- Shared Datastores: Shared storage isn’t a commodity in desktop virtualisation projects, as the IO demand is often higher than the existing NAS/SAN can deliver.
- Local Datastores: You’d have to configure each datastore before you can enable the Desktop Delivery Controller to copy your master image to it. In larger environments this is a painful manual task, and after that the master image still has to be copied, which can take some time. The same goes for updating the master image: a painful manual task (manual task = risk of error) and time-consuming.
Nutanix, a distributed file system
The Nutanix solution is a converged storage + compute solution which leverages local components and creates a distributed platform for virtualization aka virtual computing platform. The solution is a bundled hardware + software appliance which houses 2 (6000/7000 series) or 4 nodes (1000/2000/3000/3050 series) in a 2U footprint. Each node runs an industry standard hypervisor (ESXi, KVM, Hyper-V currently) and the Nutanix Controller VM (CVM). The Nutanix CVM is what runs the Nutanix software and serves all of the I/O operations for the hypervisor and all VMs running on that host. For the Nutanix units running VMware vSphere, the SCSI controller, which manages the SSD and HDD devices, is directly passed to the CVM leveraging VM-Direct Path (Intel VT-d). In the case of Hyper-V the storage devices are passed through to the CVM. Below is an example of what a typical node logically looks like:
Together, a group of Nutanix nodes forms a distributed platform called the Nutanix Distributed Filesystem (NDFS). NDFS appears to the hypervisor like any centralized storage array; however, all of the I/O is handled locally to provide the highest performance. Below is an example of how these Nutanix nodes form NDFS:
Does it solve the issue with MCS and data locality?
By using NDFS we get the advantages of shared datastores: you only have to select your Nutanix container in your host connection, and Nutanix will take care of distributing the master image, while still delivering the best performance available because reads come from a local copy of the image.
Hold on, what is that thing called Shadow Clones?
NDFS has a feature called Shadow Clones, which enables distributed caching of vDisks or VM data in a multi-reader scenario. This is exactly the case with a master image: all reads come from the master image and all writes go to the differencing disk.
Again from the Nutanix bible:
With Shadow Clones, NDFS will monitor vDisk access trends similar to what it does for data locality. However in the case there are requests occurring from more than two remote CVMs (as well as the local CVM), and all of the requests are read I/O, the vDisk will be marked as immutable. Once the disk has been marked as immutable the vDisk can then be cached locally by each CVM making read requests to it (aka Shadow Clones of the base vDisk). This will allow VMs on each node to read the Base VM’s vDisk locally. In the case of VDI, this means the replica disk can be cached by each node and all read requests for the base will be served locally. NOTE: The data will only be migrated on a read as to not flood the network and allow for efficient cache utilization. In the case where the Base VM is modified the Shadow Clones will be dropped and the process will start over.
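The detection logic described in that quote can be sketched as follows. This is purely my own illustration of the behaviour (class and method names are made up, not Nutanix code): once more than two remote CVMs issue only read I/O against a vDisk, it is marked immutable and each reading CVM caches it locally; a write to the base drops the shadow clones and the process starts over.

```python
class VDisk:
    """Toy model of NDFS Shadow Clone detection for one vDisk."""

    def __init__(self, name):
        self.name = name
        self.immutable = False
        self._remote_readers = set()
        self._local_caches = set()   # CVMs holding a shadow clone

    def read(self, cvm):
        self._remote_readers.add(cvm)
        # Requests from more than two remote CVMs, all read I/O:
        # mark the vDisk as immutable.
        if not self.immutable and len(self._remote_readers) > 2:
            self.immutable = True
        if self.immutable:
            # Data migrates only on read; from then on this CVM
            # serves the base vDisk from its local shadow clone.
            self._local_caches.add(cvm)

    def write(self, cvm):
        # Modifying the base drops all shadow clones and restarts
        # the detection process.
        self.immutable = False
        self._local_caches.clear()
        self._remote_readers.clear()


base = VDisk("replica")
for cvm in ("cvm1", "cvm2", "cvm3"):
    base.read(cvm)
print(base.immutable)   # True: three remote readers, read-only I/O
base.write("cvm1")
print(base.immutable)   # False: shadow clones dropped after a write
```

For a VDI deployment this means the MCS master image (the replica disk) ends up cached on every node running desktops from it, so all base reads are served locally.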
With the data locality of NDFS and the Shadow Clones feature (enabled by default on NOS 4.0.2 and up, or toggled manually with the NCLI command ncli cluster edit-params enable-shadow-clones=<true/false>), we can take away the pain of having to configure multiple datastores, eliminate the time-consuming process of copying the master image and its updates to every datastore, and speed up deployments.