Testing the performance of several types of drives in a virtual environment

Virtualization technologies are in demand today not only among large enterprises, but also in the SMB segment and among home users. In particular, a small company can use virtualization to consolidate a number of not especially resource-intensive services on a single server. In that case it is usually a stand-alone server based on a single- or dual-socket platform, with a relatively modest 32-64 GB of RAM and no dedicated high-performance storage. For all the benefits, you need to keep in mind that virtual systems differ from physical ones in performance. In this article we compare the speed of local drives of different types (HDD, SATA SSD and NVMe) across several virtual machine configurations in order to estimate the losses introduced by virtualization. Nobody disputes that a "proper" virtualization deployment is better served by external storage, but in a budget setup local disks can be perfectly adequate.
 
 
The testing was carried out on a server with the following configuration: an Asus Z10PE-D1? motherboard, two Intel Xeon E5-2609 v? processors and 64 GB of RAM. Proxmox VE version 5.2, an open-source virtualization environment based on Debian, was chosen as the hypervisor. It was installed on a separate SATA SSD, while the test drives were connected to other interfaces and ports.
 
First, we test each drive from the host platform itself. The second option is passing the drive into a virtual machine (KVM with Debian 9; 2 cores and 8 GB of RAM are allocated to it) as a physical disk. The third configuration is a virtual disk on LVM. The fourth is a RAW image file on a volume with the ext4 file system. In the last two variants a disk size of 64 GB was chosen, so a side result of the article is a comparison of LVM and RAW for storing virtual disk images.
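As a sketch of how the three guest-side configurations might look in Proxmox VE, the relevant disk lines of a VM configuration file could read as follows. The VMID (100), the storage names (local, local-lvm) and the device path are placeholder assumptions, not details taken from the article:

```ini
# /etc/pve/qemu-server/100.conf (fragment; names and paths are hypothetical)

# Option 2: physical disk passed through to the guest
scsi1: /dev/disk/by-id/ata-HGST_HUH728080ALE640_XXXXXXXX

# Option 3: virtual disk on an LVM storage, 64 GB
scsi2: local-lvm:vm-100-disk-1,size=64G

# Option 4: RAW image file on a directory storage backed by ext4
scsi3: local:100/vm-100-disk-2.raw,size=64G
```

The same result can be achieved from the GUI or with `qm set`; the configuration file simply makes the three attachment schemes easy to compare side by side.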
 
Speed was measured with the fio utility, using sequential read and write patterns with a 256 KB block and random read and write patterns with a 4 KB block. Tests were run with the iodepth parameter ranging from 1 to 256 in order to emulate different load levels. For sequential operations we report throughput in MB/s, for random operations IOPS. In addition, we look at the average latency (clat in the fio report).
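A minimal sketch of how this test matrix translates into fio command lines. The target filename, runtime and exact iodepth steps are illustrative assumptions, not taken from the article; the fio options themselves (`--rw`, `--bs`, `--iodepth`, `--ioengine`, `--direct`) are standard:

```python
# Build the fio invocations for the article's four patterns:
# sequential read/write at 256 KB and random read/write at 4 KB,
# each swept over queue depths from 1 to 256.

PATTERNS = {
    "read": "256k",       # sequential read
    "write": "256k",      # sequential write
    "randread": "4k",     # random read
    "randwrite": "4k",    # random write
}
IODEPTHS = [1, 2, 4, 8, 16, 32, 64, 128, 256]

def fio_cmd(rw: str, bs: str, iodepth: int, target: str = "/dev/sdX") -> str:
    """Build one fio command line for a given pattern and queue depth."""
    return (
        f"fio --name={rw}-qd{iodepth} --filename={target} "
        f"--rw={rw} --bs={bs} --iodepth={iodepth} "
        f"--ioengine=libaio --direct=1 --runtime=60 --time_based"
    )

commands = [fio_cmd(rw, bs, qd) for rw, bs in PATTERNS.items() for qd in IODEPTHS]
print(len(commands))   # 4 patterns x 9 queue depths = 36 runs per drive
print(commands[0])
```

Note `--direct=1`: fio bypasses the page cache inside the system where it runs, which is why host-side caching (discussed below for the raw configuration) can still dominate results measured from inside a guest.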
 
Let's start with a traditional hard drive, represented here by the aging HGST HUH728080ALE640, an 8 TB drive with a SATA interface. Using single hard disks in the inexpensive virtualization scenario described here, especially when capacity requirements are modest and the load is light, is a typical choice when saving as much as possible or building "from what's on hand", so it would be wrong to leave this option out of consideration.
 
 
 

 
 
On reads, all configurations except the last show roughly the same results of about 190 MB/s (only under heavy load, at iodepth=256, do the passthrough and LVM results drop to about 150 MB/s), whereas raw, thanks to caching on the host, takes off into the stratosphere and makes the others invisible by comparison. On the one hand, the test and system settings used here do not let us correctly estimate the speed of this configuration: we are measuring the performance of RAM rather than the disk. On the other hand, caching is one of the most effective and widespread technologies for increasing performance, and if it works, it would be strange to abandon it. Just do not forget about reliable power in such configurations.
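The host-cache effect observed here is governed by QEMU's per-disk cache mode, which Proxmox exposes as the `cache` option on the disk line. A hedged sketch of the two extremes (VMID, storage name and image path are placeholder assumptions):

```ini
# Bypass the host page cache entirely (predictable, no cached reads)
scsi1: local:100/vm-100-disk-2.raw,size=64G,cache=none

# Use the host page cache for reads and writes
# (fast, but demands reliable power, as noted above)
scsi1: local:100/vm-100-disk-2.raw,size=64G,cache=writeback
```

Which mode a given setup uses by default depends on the storage type and the Proxmox version, so when comparing configurations it is worth pinning the cache mode explicitly.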
 
 

 

 
 
There is no such effect on writes, so in sequential operations all configurations are roughly equal, with a maximum speed of about 190 MB/s. Raw still behaves differently from the rest, though: it is slower under light load, but does not lose speed at the maximum like the others. There are no notable differences in latency.
 
 

 

 
 
The use of the host cache is also noticeable in random read operations: here raw is consistently the fastest, delivering up to 950 IOPS. LVM is about half as fast, at up to 450 IOPS. The hard drive itself, including when passed through to the guest system, shows about 200 IOPS. The ordering of the participants on the latency chart is consistent with the speeds.
 
 

 

 
 
In random write operations the LVM configuration performed best, delivering up to 400 IOPS. It is followed by raw (~330 IOPS), and the list is closed by the remaining two participants at 290 IOPS. There are no noticeable differences in latency.
 
Overall, if you do not need the features LVM provides and random write speed is not your key criterion, raw is the faster choice for placing virtual disks on local storage. Passing a physical disk through to a virtual machine gives no performance advantage in this case, but it can be interesting when you want to attach a drive with existing data to the VM.
 
The second participant in the test is a Samsung 850 EVO SSD. Given its age and operation in a system without TRIM, in some tests (sequential writes in particular) it already loses to the hard drive. Nevertheless, thanks to its significant advantage over a traditional hard drive in random operations, it remains very interesting for virtual machines.
 
 

 

 
 
The sequential read result in raw mode can be commented on in the same way as for the hard disk. More interesting here is that the first two configurations sustain a stable 370 MB/s under high load, whereas LVM manages only 190 MB/s. Latencies in this mode are correspondingly higher.
 
 

 

 
 
In write operations, as already mentioned, this solid-state drive in its current state does not look very impressive, showing about 100 MB/s. As for the comparison of configurations, in this test raw loses under light load, both in speed and in latency.
 
 

 

 
 
Random operations are the main trump card of an SSD. Here we see that all the "virtual" options noticeably lose to the bare drive: they deliver only 3?000 IOPS, while the SSD itself can run three times faster. Apparently the hardware and software platform is the limiting factor here. Nevertheless, latencies in this test do not exceed 7 ms, so general-purpose applications are unlikely to notice this difference in IOPS.
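The claim that a large IOPS gap need not be visible to applications can be made concrete with Little's law: the average number of outstanding I/Os equals IOPS multiplied by average latency. A small sketch with illustrative figures in the spirit of this test (the exact numbers are assumptions, not measurements from the article):

```python
# Little's law for a storage queue: in_flight = IOPS * avg_latency.
# With the ~7 ms worst-case latency quoted above, sustaining even a few
# thousand IOPS requires a queue depth typical applications never build.

def required_queue_depth(iops: float, latency_s: float) -> float:
    """Average number of outstanding I/Os needed to sustain `iops`."""
    return iops * latency_s

# Illustrative: 3,000 IOPS at 7 ms average latency
print(round(required_queue_depth(3000, 0.007)))   # 21 outstanding I/Os

# A single-threaded app issuing one I/O at a time (queue depth 1)
# is bounded by 1 / latency, regardless of what the drive could do:
print(round(1 / 0.007))                           # ~143 IOPS at QD=1
```

In other words, unless an application keeps roughly 20+ requests in flight, it is latency, not the platform's IOPS ceiling, that determines what it feels.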
 
 

 

 
 
Random writes show yet another balance of power. The "real" disk now loses, although not by much: it is capable of up to ??? IOPS. LVM and passthrough are one to two hundred higher, and raw reaches ?500 IOPS. On the latency chart, the interesting detail is a clear break at iodepth=32.
 
Testing showed that an SSD behaves differently from an HDD in this scenario. First, sequential reading from LVM lags far behind the other options. Second, virtual disks on an SSD noticeably lose IOPS in random reads.
 
The third participant falls somewhat outside the "inexpensive" category, but the product itself is very interesting as a universal "accelerator" of disk operations, able to compete in speed not only with single drives but also with RAID arrays. We are talking about Intel Optane, in this case the 900P model for the PCIe 3.0 x4 bus, with a capacity of 280 GB.
 
 

 

 
 
In this test Intel Optane can already compete with RAM caching: the gap to the cached raw configuration is no longer an order of magnitude, as with the other participants, but only two to three times, and as the load grows the figures practically level out. As in the tests above, latencies are lowest in the raw configuration.
 
 

 

 
 
In sequential write operations the bare drive actually loses to the other participants: as the load increases they reach a stable ??? MB/s, while it drops to about ?700 MB/s. Latencies in this case show no meaningful differences.
 
 

 

 
 
In random reads, this solid-state drive model can deliver almost 20?000 IOPS when accessed from the host (a throughput of around 760 MB/s). But all the other connection schemes, as we already saw with the SATA SSD, are limited to ??? IOPS, which cannot help but disappoint. Accordingly, their latencies are about five times longer.
 
 

 

 
 
In random writes this uniquely fast drive model shows almost the same results as in random reads: about 19?000 IOPS for direct connection and 3?000 IOPS for the other options. The latencies also mirror the read chart. On the other hand, more than 700 MB/s of random small-block writes is a result you will have to search hard for elsewhere.
 
Using the Intel Optane drive for the task at hand shows that sequential operations in guest operating systems suffer no significant slowdown. But if you need high IOPS in random reads or writes, this platform will cap performance at 3?000 IOPS, even though the drive itself is five times faster.
 
 
Testing showed that when building storage for virtual servers you should keep in mind the losses from virtualization if speed matters for your virtual machines. In most of the tested configurations, virtual disks perform noticeably differently from the physical devices. For traditional hard disks the difference is usually relatively small, since they are not that fast to begin with. For solid-state drives with a SATA interface, the IOPS losses under random access are significant, but even so they remain radically faster than hard disks in these tasks. The Intel Optane drive certainly lost a lot in the virtual environment on random operations, but even then it remains phenomenally fast on writes, and its sequential performance gives no cause for complaint. Another significant advantage of this device is stable performance: it needs no special cleanup operations, so regardless of its state and history, or of the OS and its settings, its speed is constant at any time. But, as usual, nothing comes for free: the Intel Optane 900P is not only uniquely fast but also uniquely expensive.