Citrix: XenDesktop 5 on Hyper-V

I’m building a demo site with XenDesktop 5 on Hyper-V (using MCS rather than PVS, since it’s just a demo). While the installation went without any real problems, the configuration had some minor issues. I used the CTX127578 article as a reference.

1) No storage available while configuring Machine Creation

When I tried to configure Machine Creation in XenDesktop 5, I discovered that according to XenDesktop I didn’t have any storage available on my Hyper-V servers. One of my colleagues had configured the Hyper-V servers and used Cluster Shared Volumes (CSV), so I knew there was shared storage available.


So in my case nothing showed up under Storage. I went looking for the answer, and it turned out to be relatively simple: you have to create a share on the Cluster Storage (c:\ClusterStorage) directory on one of your Hyper-V servers. You can do this on multiple Hyper-V servers in your cluster, but one should be enough.

As stated here by Jamal Ahmed:

Select the “ClusterStorage” folder => Right-click => Select Properties => Click the Sharing tab => Click “Advanced Sharing” => Enable “Share this folder” (it is at the top of the screen).
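The same share can also be created from an elevated command prompt on the Hyper-V host. A minimal sketch using `net share` (the share name and the Everyone/FULL grant are assumptions for the demo; in production you would scope the permissions to the relevant service accounts):

```shell
REM Share the ClusterStorage directory so XenDesktop can enumerate the CSV storage
net share ClusterStorage=C:\ClusterStorage /GRANT:Everyone,FULL
```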

After this change the shared storage was displayed and I was able to continue with the wizard.

2) No resources available while creating Virtual Machines

When I tried to create some VMs using the Machine Creation option in Desktop Studio, I got the error message “Host does not have sufficient resources”:

When I looked up this error message I found the following post on the Citrix forum:

The first thing to check is the properties of the cluster in the SCVMM console: ensure that the “Cluster reserve state” is OK. If it shows Over-committed, XenDesktop will not create more virtual machines in the cluster.

So what is the cluster reserve state?

Depending on your needs, you can configure a cluster reserve for each host cluster that specifies the number of node failures a cluster must be able to sustain while still supporting all virtual machines deployed on the host cluster. If the cluster cannot withstand the specified number of node failures and still keep all of the virtual machines running, the cluster is placed in an Over-Committed state, and the clustered hosts receive a zero rating during virtual machine placement. The administrator can, during a manual placement, override the rating and place an HAVM on an over-committed cluster.

For example, if you specify a node failure reserve of 2 for an 8-node cluster, the rule is applied in the following ways:

  • If all 8 nodes of the cluster are functioning, the host cluster is marked Over-committed if any combination of 6 nodes (8-2) in the cluster lacks the capacity to accommodate the existing virtual machines.
  • If only 5 nodes in the cluster are functioning, the cluster is marked Over-committed if any combination of 3 nodes (5-2) in the cluster lacks the capacity to accommodate the existing virtual machines.
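The reserve rule above can be sketched in a few lines of Python. This is only an illustration of the combinatorial check, not how VMM evaluates capacity internally: the capacity numbers are hypothetical, and real placement considers more than summed memory.

```python
from itertools import combinations

def is_overcommitted(node_capacities, total_vm_load, reserve):
    """Cluster is over-committed if any combination of (N - reserve)
    surviving nodes lacks the capacity to host the total VM load
    (simplified here to a sum of per-node capacities)."""
    survivors = len(node_capacities) - reserve
    return any(sum(combo) < total_vm_load
               for combo in combinations(node_capacities, survivors))

# 8-node cluster with a reserve of 2: every 6-node combination must fit the load
caps = [32] * 8  # hypothetical GB of RAM per node
print(is_overcommitted(caps, 180, 2))  # 6 * 32 = 192 >= 180 -> False (state OK)
print(is_overcommitted(caps, 200, 2))  # 192 < 200 -> True (Over-committed)
```

With identical nodes the worst case is simply any (N - reserve)-node subset, but the `combinations` check also covers clusters with unevenly sized hosts.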


VMM’s cluster refresher updates the host cluster’s Over-committed status after each of the following events:

  • A change in the cluster reserve value
  • The failure or removal of nodes from the host cluster
  • The addition of nodes to the host cluster
  • The discovery of new virtual machines on nodes in the host cluster


The cluster reserve is set on the General tab of the host cluster properties.

To view the status of the cluster and adjust the cluster reserve:

  • In the Cluster reserve field, specify the maximum number of node failures the cluster must be able to sustain while still keeping all existing virtual machines running. If this rule is violated, the host cluster is marked Over-committed.

After adjusting the cluster reserve to an appropriate value, I was able to create a couple of VMs and use them in XenDesktop.


Kees Baggerman

Kees Baggerman is a Staff Solutions Architect for End User Computing at Nutanix. Over the years, Kees has driven numerous functional/technical designs, migrations, and implementation engagements for Microsoft, Citrix, and RES infrastructures.
