Citrix MCS for AHV: Under the hood


Citrix MCS for AHV has been a hot topic since the release of XenDesktop 7.7, so here’s my explanation of what’s going on under the hood when we look at Citrix MCS for AHV. First I’ll start with a short description of the components, and at the end there’s an alpha-demo video of the integration piece.

Looking at the complete solution for the integration of Citrix MCS on AHV, there are three major components:

  • Citrix MCS
  • Acropolis Base Software
  • Citrix Provisioning SDK

Let’s start with an outline of those three components:

XenDesktop 7.7 MCS

Citrix MCS for AHV provides the simplest means of creating a machine catalog. MCS tools and services are baked into a default XenDesktop install and are ready for immediate use.

A Machine Catalog created with MCS has the following characteristics:

  • A master image is prepared and created from a standard VM that has all the software and customizations an admin wants to make available in a user’s virtual desktop, plus the Citrix VDA that registers with the Citrix Desktop Delivery Controller(s).
  • An admin defines the size of the VMs by choosing how many vCPUs and how much vMemory each desktop VM gets.
  • The admin chooses how many VMs are created and added to the machine catalog and defines a naming convention, such as “Win81-MCS-###” (where ### automatically increments as 001, 002, etc.).
  • VMs are all created as “Linked Clones”.

Each VM created has two or three disks assigned to it (see figure for reference):

[Figure: Citrix MCS for AHV VM disk layout]

  • Base disk: a single base disk that resides on a shared NFS datastore, mounted on the hosts in the cluster you configured in the Studio host connection.
  • Identity disk: a very small disk (max 16 MB) that contains identity information; this information provides a unique name for the VM and allows it to join Active Directory (AD).
  • Difference disk: the difference disk, also known as the write cache, is used to separate the writes from the master disk, while the system still acts as if the write has been committed to the master disk.

The identity disk and the difference disk together make the VM unique. A more comprehensive post on the inner workings of MCS is posted on the Nutanix blogs: Citrix MCS and PVS on Nutanix: Enhancing XenDesktop VM 

Acropolis Base Software 4.6

As you can see, MCS needs a way to read from and write to a small (16 MB) identity disk associated with every VM.

With Acropolis Base Software 4.6 we introduced a few new APIs, some of which are directly related to the MCS integration we’ve built. A new set of REST APIs is provided to manage the identity disk. The APIs are as follows:

Create:

This REST API creates the identity disk in the AHV cluster and also writes the data provided onto the identity disk.

[codesyntax lang="powershell" container="none"]

POST https://<Cluster-IP>:9440/api/nutanix/v1.0/identity_disks

[/codesyntax]

Request data JSON (the create payload presumably follows the same shape as the update request shown later):

[codesyntax lang="powershell" container="none"]

{
  "name": "string",
  "annotation": "string",
  "disk_type": "dto.acropolis.DiskInfoDTODiskType",
  "identityDiskImportSpec": {
    "container_name": "string",
    "container_id": "string",
    "container_uuid": "string",
    "checksum_type": "dto.acropolis.DiskInfoDTOChecksumType",
    "checksum": "string",
    "size": "string",
    "data": "string",
    "url": "string"
  }
}

[/codesyntax]
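As a rough sketch, the create call could be driven from PowerShell like this. The cluster IP, credentials, disk name, container name, and payload are all hypothetical placeholder values, not part of the actual integration:

[codesyntax lang="powershell" container="none"]

# Hypothetical sketch: create an identity disk via the Prism v1.0 REST API.
# $clusterIp, the credentials and all payload values are placeholders.
# Note: a self-signed Prism certificate may require a validation bypass first.
$clusterIp = "10.0.0.10"
$cred      = Get-Credential

$body = @{
    name       = "Win81-MCS-001-IdDisk"      # assumed naming, for illustration
    annotation = "Citrix MCS identity disk"
    identityDiskImportSpec = @{
        container_name = "default-container"
        size           = 16MB                 # identity disks are max 16 MB
        data           = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("identity payload"))
    }
} | ConvertTo-Json -Depth 4

Invoke-RestMethod -Method Post `
    -Uri "https://$($clusterIp):9440/api/nutanix/v1.0/identity_disks" `
    -Credential $cred -ContentType "application/json" -Body $body

[/codesyntax]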


Read:

The read REST API is used to read the contents of the identity disk. 

[codesyntax lang="powershell" container="none"]

GET https://<Cluster-IP>:9440/api/nutanix/v1.0/identity_disks

GET https://<Cluster-IP>:9440/api/nutanix/v1.0/identity_disks/?includeData=true

GET https://<Cluster-IP>:9440/api/nutanix/v1.0/identity_disks/<uuid>

GET https://<Cluster-IP>:9440/api/nutanix/v1.0/identity_disks/<uuid>/?includeData=true

[/codesyntax]


Response metadata JSON:

The data returned by Acropolis from the NOS cluster is as follows; the data field will be empty if includeData is not set.

[codesyntax lang="powershell" container="none"]

{
  "name": "string",
  "uuid": "string",
  "annotation": "string",
  "logical_timestamp": "integer",
  "created_time_in_usecs": "integer",
  "checksum_type": "dto.acropolis.DiskInfoDTOChecksumType",
  "checksum": "string",
  "container_name": "string",
  "container_id": "string",
  "container_uuid": "string",
  "disk_type": "dto.acropolis.DiskInfoDTODiskType",
  "size": "integer",
  "data": "RHVkZSwgbXkgY2F0IGhhcyBVzdCBwYWphbWFzLg==aZ…."
}

[/codesyntax]
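To illustrate the read side, the following PowerShell sketch fetches a single identity disk with its payload and decodes the base64 data field; $clusterIp, $cred and the UUID are the same hypothetical placeholders as above:

[codesyntax lang="powershell" container="none"]

# Hypothetical sketch: fetch one identity disk including its data payload.
$uuid = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # placeholder UUID

$disk = Invoke-RestMethod -Method Get `
    -Uri "https://$($clusterIp):9440/api/nutanix/v1.0/identity_disks/$uuid/?includeData=true" `
    -Credential $cred

# The data field is base64 encoded; decode it to inspect the contents.
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($disk.data))

[/codesyntax]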

Update:

The contents of the identity disk can be updated. This REST API writes the new data provided as part of the call onto the disk in the AOS cluster. The UUID (uuid4 format) of the identity disk must be passed to the REST API, which hands it to Acropolis so Acropolis can locate and update the disk. Use the same UUID when updating if the mapping between the VM and its identity disk should not change.

[codesyntax lang="powershell" container="none"]

PUT https://<Cluster-IP>:9440/api/nutanix/v1.0/identity_disks/<uuid>

[/codesyntax]

Request data JSON:

[codesyntax lang="powershell" container="none"]

{
  "name": "string",
  "uuid": "string",
  "annotation": "string",
  "disk_type": "dto.acropolis.DiskInfoDTODiskType",
  "identityDiskImportSpec": {
    "container_name": "string",
    "container_id": "string",
    "container_uuid": "string",
    "checksum_type": "dto.acropolis.DiskInfoDTOChecksumType",
    "checksum": "string",
    "size": "string",
    "data": "string",
    "url": "string"
  }
}

[/codesyntax]
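A matching update sketch in PowerShell could look as follows; again all values are placeholders, and reusing the same UUID keeps the VM-to-disk mapping intact:

[codesyntax lang="powershell" container="none"]

# Hypothetical sketch: rewrite the contents of an existing identity disk.
$newData = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("new identity payload"))

$body = @{
    uuid = $uuid                                  # same UUID keeps the VM mapping
    identityDiskImportSpec = @{ data = $newData }
} | ConvertTo-Json -Depth 4

Invoke-RestMethod -Method Put `
    -Uri "https://$($clusterIp):9440/api/nutanix/v1.0/identity_disks/$uuid" `
    -Credential $cred -ContentType "application/json" -Body $body

[/codesyntax]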

Delete:

The identity disk, along with its content, is removed from the cluster. A logical timestamp can optionally be provided for consistency: if specified, the operation will be rejected when it does not match the logical timestamp of the identity disk in the cluster. The logical timestamp can be obtained by reading the identity disk.

[codesyntax lang="powershell" container="none"]

DELETE https://<Cluster-IP>:9440/api/nutanix/v1.0/identity_disks/<uuid>

[/codesyntax]

Response metadata JSON:

The data returned by Acropolis from the NOS cluster is as follows:

[codesyntax lang="powershell" container="none"]

{
  "task_uuid": "string"
}

[/codesyntax]
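And a minimal delete sketch, capturing the task UUID from the response (placeholder values as before; how the optional logical timestamp is passed is left to the API reference):

[codesyntax lang="powershell" container="none"]

# Hypothetical sketch: remove an identity disk and capture the resulting task.
$result = Invoke-RestMethod -Method Delete `
    -Uri "https://$($clusterIp):9440/api/nutanix/v1.0/identity_disks/$uuid" `
    -Credential $cred

$result.task_uuid   # async task handling the deletion

[/codesyntax]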

Provisioning SDK

From the Provisioning SDK Guide:

The Citrix Provisioning SDK is a new addition to the XenDesktop and XenApp platform for developers and technology partners. It brings the power and flexibility of Citrix Machine Creation Services (MCS), and applies it to any hypervisor or cloud infrastructure service that you choose.

It does so using the same tried-and-tested technologies that have made MCS successful on Citrix XenServer, Microsoft SCVMM, VMware vSphere and, more recently, Amazon Web Services (AWS) and Citrix CloudPlatform. Whether your infrastructure resides in your data center or in the cloud, the Citrix Provisioning SDK is your first step towards an integrated provisioning and management solution for desktops and applications.

And integration is only the beginning. Perhaps your infrastructure or cloud service has unique and differentiating features, setting it apart from other offerings in the market. With this SDK, you can expose that functionality and bring it right to the administrator’s fingertips in the Citrix Studio management console.

The SDK enables you to create your own provisioning plug-in, which can be added alongside the plug-ins that are installed by default with the XenDesktop and XenApp products. Once it is installed, your plug-in will be discovered and loaded automatically by the services on the delivery controller. It will then appear as a new connection type in Citrix Studio, allowing administrators to easily connect, configure, and provision on your chosen infrastructure platform. The SDK comprises two main elements:

  • A set of .NET programming interfaces that are used to call your provisioning plug-in whenever it needs to take action. Your plug-in takes the form of a .NET assembly (DLL), which implements these interfaces. There are several .NET interfaces that a plug-in must implement, but each is designed to be small and easy to understand. Most interfaces are defined with both a synchronous and an asynchronous variant, allowing you to choose the programming pattern that works best. Further information about individual interfaces is provided in the ‘Functional areas’ section later in this guide.
  • The Citrix Common Capability Framework, which allows the rest of the product to understand the specific custom features of your infrastructure, and how those features should be displayed in Citrix Studio. The framework uses an XML-based high-level description language. Your plug-in produces specifications using this language, allowing Citrix Studio to intelligently adapt its task flows and wizard pages.

Unfortunately the current SDK doesn’t allow us to create the actual UI integration that we wanted, as it seems focused on PowerShell execution for now. We’re looking for ways to improve this, so in the meantime we’re working on a separate UI to create the hosting connection and the machine catalog. Of course, after the initial creation of the machine catalog you can manage it as if it’s hosted on one of the other hypervisors.

Citrix MCS for AHV

By utilising the Provisioning SDK we can create the connection from Studio to AHV and access all the APIs AHV offers, resulting in the following architecture:

[Figure: Citrix Provisioning SDK plugin architecture]

By installing the plugin on an existing XenDesktop 7.7 installation you’ll get the additional services to deploy your desktops from the DDC on a Nutanix AHV cluster.

When we look at the benefits of AHV, my colleague Josh Odgers did a great series of posts on the topic:

Part 1 – Introduction
Part 2 – Simplicity
Part 3 – Scalability
Part 4 – Security
Part 5 – Resiliency
Part 6 – Performance
Part 7 – Agility (Time to Value)
Part 8 – Analytics (Performance & Capacity Management)
Part 9 – Functionality (Coming Soon)
Part 10 – Cost

Combining the three components in this architecture results in an additional MSI installer that installs the plugin and provides native MCS capabilities in Citrix Studio when connecting to AHV. This leverages a built-for-purpose hypervisor and a platform that delivers performance and scale-out capabilities for your Citrix XenDesktop and/or XenApp projects.

As a pre-alpha demo, our engineering team recorded the following video. Please be aware that this will not be the final interface, but it will give you an idea of how we’ll offer the first integration of Citrix MCS on AHV. This external interface is a solution for the initial machine catalog creation and will not be the final version:


Kees Baggerman

Kees Baggerman is a Staff Solutions Architect for End User Computing at Nutanix. Over the years, Kees has driven functional and technical designs, migrations, and implementations for numerous Microsoft, Citrix, and RES infrastructures.

