High CPU usage reported in Prism when using Acropolis Hypervisor
December 19, 2017

If you’ve recently migrated to AHV from another hypervisor, or noticed that the CPU usage Prism reports on your AHV cluster is around 40-50% higher than you’d expect from your sizing exercise, this article will explain why!

There are two reasons for high CPU usage:

Halt Polling System.

The first relates to the hypervisor AHV is based upon: KVM. KVM has a feature called halt polling. In a nutshell, this feature tries to reduce vCPU wake-up latency by continuing to poll on the physical core after the guest VM’s vCPU has halted, just in case something else has to run in quick succession.
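You can check whether halt polling is currently active on a host by reading the kernel module parameter (a standard KVM setting, not anything Nutanix-specific); a non-zero value means polling is enabled:

“cat /sys/module/kvm/parameters/halt_poll_ns”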

As you can imagine, this artificially inflates the CPU usage reported at the host level. For some workloads this works well, and if you have bags of CPU resource to play with, why not leave it on? You can expect to see a 10-20% reduction in reported CPU usage at the cluster level if you turn it off. So, how do you turn it off?

Steps to turn off HPS.

SSH to your first AHV host and run the following command:

“echo 0 > /sys/module/kvm/parameters/halt_poll_ns”

This will stop HPS immediately; however, the setting reverts if your hosts reboot, so we need to add it to the rc.local file, see below:

“sudo vi /etc/rc.local”

Add the following line to the bottom of the file:

“echo 0 > /sys/module/kvm/parameters/halt_poll_ns”

To save your changes do the following:

Press Esc, then type :wq and press Enter.
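One caveat worth checking (this is an assumption about the underlying OS of your particular AHV build, so verify it applies): on systemd-based distributions, rc.local only runs at boot if the file is executable, so it may also be worth running:

“sudo chmod +x /etc/rc.local”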

OK, now you’re done. Keep an eye on Prism; over the next few hours you should see CPU usage drop.
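If you have more than a couple of hosts, you can check the setting across the whole cluster from a CVM, assuming your AOS version includes the hostssh helper (which runs a command against every AHV host in the cluster); a zero back from each host confirms HPS is off:

“hostssh "cat /sys/module/kvm/parameters/halt_poll_ns"”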

Hyper-Threading.

The second reason for high CPU usage reporting from Prism is a bit of a contentious one: AHV doesn’t consider Hyper-Threading for virtual machine placement. This differs from ESX and Hyper-V (where you can of course choose to disable HT).

You can therefore expect an AHV cluster to report roughly 50% greater CPU usage than a cluster running another hypervisor with HT enabled.

This isn’t to say ESX or Hyper-V are correct and AHV is wrong. A virtual thread, after all, isn’t another processor; it’s simply another thread on the same processing core. How much more processing power you get from the core largely depends on the workloads using it.

CPU-intensive workloads directed at the same core using HT are likely to get only 50% of its throughput each, whereas less intensive workloads may appear to get as much as 90% each; it really depends.

We’ve been able to capture CPU usage from 3000 series nodes, both from Prism (via the API) and from top using a host-based agent, and fed the data into our Prevensys® monitoring system. Graphing the results showed we were able to push a host to 92% usage in top (which counts hyper-threads) before feeling any CPU-related performance issues. This was on a cluster running mixed workloads, including Citrix and SQL. If we do the maths against the Prism figure, with HT doubling the thread count, 92% of logical-thread capacity equates to roughly 184% of the physical-core capacity Prism measures, so around the 180% mark!
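If you’d like to capture the Prism-side figures yourself, per-host stats are available over the REST API. A minimal sketch follows; the v2.0 hosts endpoint and the hypervisor_cpu_usage_ppm stat it returns are assumptions to verify against your AOS version, and <prism-ip> is a placeholder for your cluster’s Prism address:

“curl -sk -u admin https://<prism-ip>:9440/PrismGateway/services/rest/v2.0/hosts”

On the host side, “top -bn1 | grep '%Cpu'” gives the equivalent hyper-thread-aware figure in one shot.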

Clearly, results will differ for different workloads and node types, and you’d have to consider this carefully in your environment. Nutanix are being cautious with their VM placement here.

Capacity planning: Runway.

The Nutanix capacity planning system, Runway, which is part of Prism Central, does factor hyper-threading into its capacity calculations. Caution should therefore be exercised when using this tool to plan your future capacity, since the results may or may not be useful depending on your workload profile.
