PVS 7.1 RAM Cache with overflow to disk

UPDATED INFO HERE: http://virtexperience.com/2014/03/10/an-update-about-my-experience-with-pvs/

I was really excited when I discovered that Citrix Provisioning Services 7.1 has an option to store cache in device RAM with overflow to local disk. I've previously seen cache to RAM perform about 10 times faster than an SSD disk cache or a fast disk RAID. The drawback with cache to RAM is that when the cache fills up, your VMs crash instantly with a BSOD. You had to be really careful with sizing and monitoring the cache to use this feature, as described in my previous blog post about this topic.

Now, with overflow (or fallback) to disk, this should solve that issue while still using all that RAM to deliver extreme performance at a low cost. Right? I rushed to my lab to test, and I made some interesting discoveries.


First, a quick description of my lab. I have a XenServer 6.2 host with a PVS server and a Windows Server 2008 R2 PVS target device. The new feature requires Windows 7, Windows Server 2008 R2, or a newer OS.

The device VM has no other components installed than the OS and the PVS drivers. I used a small 2GB disk as the cache/overflow disk, just to see what happens when the disk cache is full. I'm using Iometer to test performance. To establish a baseline I first tested IO with cache on a local SSD and on a standalone 7200RPM disk, then with the existing cache to RAM option without overflow.
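Iometer is what produced all the numbers in this post, but for a quick-and-dirty check without installing it, something like the following Python sketch can issue random 4K reads against a test file on the cache disk. This is only a rough stand-in: the file path, file size, and run length are assumptions from my lab, and the reads are buffered through the OS file cache, so the numbers will not be directly comparable to Iometer's unbuffered runs.

```python
import os
import random
import time

# Assumed test parameters -- adjust to match the disk under test.
TEST_FILE = r"D:\iotest.bin"      # file on the write-cache / overflow disk
FILE_SIZE = 512 * 1024 * 1024     # 512MB test file
BLOCK_SIZE = 4096                 # 4K blocks, similar to a typical Iometer profile
DURATION = 30                     # seconds per pass

def prepare_file():
    """Create the test file once, filled with random data."""
    with open(TEST_FILE, "wb") as f:
        remaining = FILE_SIZE
        chunk = os.urandom(1024 * 1024)
        while remaining > 0:
            f.write(chunk[:min(len(chunk), remaining)])
            remaining -= len(chunk)

def random_read_iops():
    """Issue random 4K reads for DURATION seconds and report IOPS."""
    blocks = FILE_SIZE // BLOCK_SIZE
    ops = 0
    end = time.time() + DURATION
    with open(TEST_FILE, "rb", buffering=0) as f:
        while time.time() < end:
            f.seek(random.randrange(blocks) * BLOCK_SIZE)
            f.read(BLOCK_SIZE)
            ops += 1
    return ops / DURATION

if __name__ == "__main__":
    if not os.path.exists(TEST_FILE):
        prepare_file()
    print(f"Random 4K read: {random_read_iops():.0f} IOPS")
```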


No surprises here: the SSD was 10x faster than the 7200RPM disk, and cache to RAM was 10x faster than the SSD. I'm only showing read IOPS here, but write IOPS is generally about 50% of the read IOPS. I will publish another blog post later testing the different storage options combined with different cache options and hypervisors; I got some interesting results there too.

Back to cache to RAM with overflow to disk. How does it work? The first thing I noticed was the difference in how the RAM cache is assigned to the VM. With the previous RAM cache option, the PVS RAM cache was hidden from the operating system, so when assigning 1GB of RAM as PVS RAM cache on a 4GB VM, the VM sees only 3GB of RAM.
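A quick way to verify what the guest actually sees is to query total physical memory from inside the target device. Below is just a minimal Windows-only sketch using ctypes; the 4GB/1GB figures in the comment are simply the values from my lab, not anything the script detects on its own.

```python
import ctypes

class MEMORYSTATUSEX(ctypes.Structure):
    """Structure used by the Win32 GlobalMemoryStatusEx call."""
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

def visible_ram_gb():
    """Return the physical RAM the OS can see, in GB."""
    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
    return status.ullTotalPhys / (1024 ** 3)

if __name__ == "__main__":
    # With the old "cache in device RAM" mode and a 1GB cache on a 4GB VM,
    # this reports roughly 3GB; with the new overflow mode, roughly 4GB.
    print(f"OS-visible RAM: {visible_ram_gb():.1f} GB")
```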


Using the PVS status tray app or the mcli PVS command line tool, the percentage of RAM cache used is visible when using cache to RAM without overflow.



With overflow enabled, the VM was still able to see all 4GB of RAM assigned. So where was the cache? The PVS status tray app showed that I had about 1.5GB of cache available, even though I had specified 1GB.


The mcli command I've previously used to monitor the RAM cache also did not show anything. Looking at the overflow disk (the local disk assigned to the VM where the cache is usually stored), there is a vdiskdif.vhdx file, and no .vdiskcache file like the one used with cache to device hard disk.
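To keep an eye on this file during testing, a minimal sketch like the one below can simply poll its size from inside the VM. The drive letter is an assumption from my lab; adjust it to wherever your cache/overflow disk is mounted.

```python
import os
import time

# Assumed location of the PVS differencing file on the overflow/cache disk.
CACHE_FILE = r"D:\vdiskdif.vhdx"
POLL_SECONDS = 10

def poll_cache_size():
    """Print the size of vdiskdif.vhdx every POLL_SECONDS seconds."""
    while True:
        try:
            size_mb = os.path.getsize(CACHE_FILE) / (1024 * 1024)
            print(f"{time.strftime('%H:%M:%S')}  vdiskdif.vhdx: {size_mb:.0f} MB")
        except OSError as exc:
            # The file may be locked or missing right after boot.
            print(f"Could not stat {CACHE_FILE}: {exc}")
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    poll_cache_size()
```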


So this is where the cache is stored: a VHDX differencing file, and its size equals the amount of cache used. I assume the idea is to leverage the Windows built-in file system cache to keep this file in RAM. But how much performance do we gain from this, and what happens when vdiskdif.vhdx is full? Let's see. I copied a 2GB file to the PVS device's C: drive. After about 1.5GB, the file copy hangs.
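If you want to reproduce that file-copy test in a controlled way, a small script that writes a large file in chunks and reports progress makes it easy to see roughly where things stall. The target path and sizes below are assumptions matching my lab's 2GB cache disk, not anything prescribed by PVS.

```python
import os

# Assumptions from my lab: write to the streamed C: drive so the data lands
# in the PVS write cache, and use a 2GB target to overrun a 2GB cache disk.
TARGET = r"C:\filltest.bin"
TOTAL_BYTES = 2 * 1024 * 1024 * 1024
CHUNK = 64 * 1024 * 1024  # 64MB chunks

def fill_cache():
    """Write TOTAL_BYTES to TARGET in chunks, reporting progress as it goes."""
    written = 0
    data = os.urandom(CHUNK)
    with open(TARGET, "wb") as f:
        while written < TOTAL_BYTES:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the writes down into the write cache
            written += CHUNK
            print(f"written {written / (1024 ** 3):.2f} GB")

if __name__ == "__main__":
    fill_cache()
```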


The VM is still up and shows a warning that the cache disk is full, but any action on the VM hangs. The only way out is to reboot. So this is still not much better than cache to RAM without overflow, although you may work around it by using a large overflow disk. So what about performance? I ran Iometer again, expecting to see the same result as cache to RAM. But…


The performance is about the same as running with cache to disk.

Here is a complete diagram of my testing:


So the conclusion from my lab results is that this new option brings no benefit over cache to disk or cache to RAM without overflow. UPDATE: I've also tested with Windows Server 2012 R2, with the same result. The only way I've found to achieve better IOPS is by enabling intermediate buffering on the image; more on this in a new blog post. If anyone has more information on this subject, please use the comment field or contact me via Twitter.

I will follow up this blog post with more testing of PVS cache to disk using RAID disks on a SAN vs. local SSD disks, how intermediate buffering gives near RAM-like IOPS, and how Hyper-V vs. XenServer reacts to the different buffering options.

24 thoughts on “PVS 7.1 RAM Cache with overflow to disk”

  1. “Cache in device RAM with overflow on hard disk” is documented in this link:
    http://support.citrix.com/proddocs/topic/provisioning-7/pvs-technology-overview-write-cache-intro.html

    In your test, I suspect the Iometer workload is larger than the specified 1GB RAM size, and the majority of the workload overflows to local disk as first-in-first-out (FIFO). Thus the performance should match local disk.

    To have a fair comparison with RAM cache, the specified RAM size should be large enough to sustain the expected workload; then use the same size for this new cache mode when benchmarking, with the added benefit of overflow to local disk.

    With a RAM size smaller than the workload (the default), it should still help reduce local disk IOPS.

    • Thank you so much for the response Moso Lee. I've now tried with a cache size of 64MB and an Iometer workload of 200MB, same result. Also tested with a cache size of 4GB and an Iometer workload of 100MB, same result. I ran the test both on Windows 2008 R2 and 2012 R2 with the same result. The only way to get RAM-like performance (close to 10000 read IOPS and 5000 write IOPS) is to enable intermediate buffering on the image and put the vdiskdif file on a fast SSD disk or fast SAN storage. But that will also give good performance with cache to device disk. When you are testing, please make a note of whether intermediate buffering is enabled or not. I'm writing another blog post about intermediate buffering, because it seems to improve performance 3 times with XenServer VMs, but with Hyper-V it has a negative impact.

  2. Pingback: How I increased IOPS 200 times with XenServer and PVS | Virtual eXperience

  3. Thank you for this detailed investigation. I tested the ‘RAM cache with overflow to disk’ option in an effort to reduce write IOPS on the SAN. However, I found that there was no noticeable reduction compared to normal cache to disk. The Citrix forum thread above seems to confirm that it doesn’t work as expected.

  4. Great write-up. I do have some info to update everyone on. I have a support case with Citrix escalation, and they have confirmed that this new write cache (WC) type has a bug and is not working. A patch/hotfix will be made available soon. Additionally, a new, not yet released CTX139627 will also go into an issue where Microsoft ASLR causes the VM (server or desktop) to hang. This issue is seen in PVS 6.0 and 6.1, with the only fix being that you must be on PVS 7.1 AND set the WC to RAM cache with overflow to disk (you can also run the WC on the server for 6.x and the issue is not seen, but you'll pay a performance penalty). Specifically, this mode leverages VHDX, which Citrix says is the only current solution to the ASLR issue. This came from two months of troubleshooting with Citrix and Microsoft support. I don't have a definite date for the release of the CTX, but according to the author of the CTX (Peter from Citrix escalation) it is in the final stages of review.

  5. Does it mean that when using cache in device RAM with overflow to disk, the cache is not persistent (I'm thinking about redirected Event Viewer logs or EdgeSight info on the cache disk)?

    Can this vhdx file be placed on a VMware vmdk disk?

  6. Pingback: Best Practices: Atlantis ILIO and Citrix PVS (Provisioning Services) | Atlantis Computing Blog

  7. Pingback: An update about my experience with PVS. | Virtual eXperience

  8. Pingback: IOPS increased in Citrix XenServer and PVS article by Magnar Johnsen | Tannyahmad's Weblog
