At work, we use VirtualBox to distribute and run development machines. The primary reasons for this are:
- It is free (gratis), at least the portions we require
- It has import/export
However, it isn’t developed in the open, and it has a worrying tendency to print sanitizer warnings on the console when I shut down my laptop.
Can I replace it with kvm/libvirt/virt-manager? Let’s try!
There’s no import function. There’s not even a vendor-specific way to import/export for other
qemu-system-x86 instances. As far as I know, the best we can do is dump the libvirt XML defining the machine, and the disk image… and those bake in absolute paths.
For instance, an unprivileged guest stores its disk image under
$HOME, so an image produced by user foo will have a path like
/home/foo/.local/share/libvirt/images/vm-1.qcow2 baked into its definition. Anyone who wants to import it, and who is not named “foo”, will have to edit the file.
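A minimal sketch of the manual “export” and the edit it forces. The VM name vm-1 comes from the path above; the importing user “bar” is hypothetical:

```shell
# The dumped libvirt XML contains an absolute disk path like this
# (hypothetical VM "vm-1", exported by user "foo"):
cat > vm-1.xml <<'EOF'
<disk type='file' device='disk'>
  <source file='/home/foo/.local/share/libvirt/images/vm-1.qcow2'/>
</disk>
EOF
# A hypothetical importing user "bar" has to rewrite the path
# before defining the guest on their own machine:
sed -i 's|/home/foo/|/home/bar/|' vm-1.xml
```

In the real workflow, the XML would come from something like `virsh --connect qemu:///session dumpxml vm-1`, with the qcow2 file copied alongside it, and the edited XML fed back through `virsh define`.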
In VirtualBox, on the other hand, I can export an appliance as an OVA file that my colleagues can import without changes. The import takes care of unpacking the disk into VirtualBox’s storage path, so we don’t have to know the details of that, either.
To truly replace VirtualBox, libvirt/kvm/qemu would need that level of frictionless exchange.
This particular VirtualBox guest has a second NIC on a host-only network, where we can access the Web server by a predetermined IP using standard ports, without any NAT rules. We can then forward traffic there with devproxy. (With Let’s Encrypt providing a TLS certificate, and a proxy manager in the browser, we can use the site’s real domain name to access the VM instead. This is much less error-prone than having host-specific configuration settings.)
It looks like I should create an isolated network for the equivalent in kvm, but limited users can’t do that:
virbr1 can’t be created without privileges. I was also unsuccessful at finding a way to control the IP address assignment done by usermode networking, and even if I could, I don’t think I could directly reach listening ports on the guest.
I want to avoid granting my limited user the ability to reconfigure the entire host network just to give one guest a static IP, but I don’t think that’s possible.
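For reference, the privileged route would look something like the network definition below. This is a sketch only (the name, bridge, addresses, and MAC are my own invention), and it needs exactly the qemu:///system rights my limited user lacks:

```xml
<!-- hostonly.xml: hypothetical isolated network. Omitting the
     <forward> element means no NAT and no outside connectivity.
     The DHCP <host> entry pins a predetermined IP to the guest's
     MAC address. -->
<network>
  <name>hostonly</name>
  <bridge name='virbr1'/>
  <ip address='192.168.56.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.56.2' end='192.168.56.254'/>
      <host mac='52:54:00:00:00:10' ip='192.168.56.10'/>
    </dhcp>
  </ip>
</network>
```

This would be loaded with `virsh --connect qemu:///system net-define hostonly.xml` and `net-start hostonly` — which is precisely the step a limited user can’t perform.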
To avoid having to run file sync all the time, and to have access to the files on the host without needing the VM running, this guest mounts the code via NFS. (To have the correct Unix permissions, it does not use VirtualBox shared folders.)
kvm supports virtio-fs as a replacement for NFS, 9p, and the like. It is specifically designed to outperform any networked file system by removing the network-stack traversals.
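For the record, wiring it up in a (privileged) domain definition looks roughly like this; the share tag and source path are hypothetical, and virtio-fs requires the guest’s memory to be shared:

```xml
<!-- domain XML fragment (sketch): virtio-fs needs shared memory backing -->
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>
<devices>
  <filesystem type='mount' accessmode='passthrough'>
    <driver type='virtiofs'/>
    <source dir='/home/foo/code'/>
    <target dir='code'/>
  </filesystem>
</devices>
```

Inside the guest, the share would then be mounted with `mount -t virtiofs code /mnt/code` (or the equivalent fstab entry).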
Unfortunately, for limited users, it’s not usable yet. A patch to allow unprivileged use of virtio-fs landed upstream in mid-December 2023, but it’s unclear whether that is soon enough for it to be included in Ubuntu 24.04 LTS. If not, I may not have this feature available until 2026.
Just be privileged?
I want my account for daily usage to be as isolated as possible from root. There is no ssh daemon running; what
sudo permissions exist allow running bounce scripts in /usr/local/sbin, which carefully limit the real operations.
Meanwhile, the libvirt documentation states, “A read-write connection to daemons in system mode typically implies privileges equivalent to having a root shell.” Using privileged mode clearly means the account is no longer isolated.
A good compromise would be a setup where launching
virt-manager.desktop prompts for my (or a custom) password, to gain access to libvirtd and
qemu:///system only for its process tree. That would at least be better than me and my entire desktop session having the group ambiently available.
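One sketch of that idea, assuming a group password has been set on libvirt with gpasswd: an overriding .desktop file that wraps the launch in sg(1), which grants the group only to that process tree and prompts non-members for the group password. (Only a sketch — sg prompts on a terminal, so a polished version would need a graphical prompt.)

```
# Hypothetical ~/.local/share/applications/virt-manager.desktop override:
[Desktop Entry]
Type=Application
Name=Virtual Machine Manager
Exec=sg libvirt -c virt-manager
Terminal=true
```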
I am aware that libvirtd may support authentication, but I haven’t pursued that angle very hard; it looks like a lot of weeds over there. There are compile-time options for what’s supported, but no trivial way to find out which of those were used to build the binary on the system. The
--version switch that often carries such optional information in other programs is of no use here.
Where does this leave me?
The most critical problem is the static IP. If I can’t give the guest a fixed IP that the host can reach it on, I can’t use the web server for testing. I’ve invested quite a bit into avoiding host-specific configuration or having the web app ‘know’ its address; I am not backing down now.
(The problem with using a port-forward and accessing localhost:8443 is that the app still sees itself running on port 443. When it builds a self-redirect, the port is wrong. Moreover, I’d need a TLS certificate for localhost, and everyone would need to trust it.)
I’m not sure of the ordering of the next two issues. Both of them should be solved before an enthusiastic “Yes, I will use this,” but neither makes libvirt unusable the way the static IP does; that one is a complete blocker.
One, import/export should exist. Machine description and disk image(s) in one file to transfer, and no XML editing required to import such a file. This would reach parity with the VirtualBox feature. I don’t think it must be OVA; I would anticipate only using it for libvirt-to-libvirt transfers.
Two, the libvirtd security model should be revised. Either I would like to understand the path to escalated privileges and why it is necessary, or libvirtd should be audited and improved so that it does not hand out root implicitly. This would be less relevant if everything else (the static IP) worked without privilege. And, as noted above, it would be less relevant if the
.desktop file could elevate itself instead of every process in my session having
libvirt group rights.
The final problem is the state of the documentation, scattered across several projects (kernel, qemu, libvirt) as well as Red Hat and the Stack Overflow network of sites. I feel like I spent a lot of time on research and gained much less knowledge than one might hope.
Even if upstream makes changes to any of this right now, the results may not be available on LTS distros for years.
The ancillary problem
To get started quickly, I converted the VirtualBox disk image, and imported it to a new guest in virt-manager. When I booted, systemd hung forever due to the change in bus path for the primary NIC. The guest (Ubuntu 22.04) uses netplan, which baked the VirtualBox device name into the configuration, and that (DHCP/NAT connection) was mandatory.
So, systemd said it was waiting for the network (no timeout), and things sure hadn’t improved after 90 seconds. I used recovery mode to change the file. (Of course, then NFS failed, since it didn’t have the static IP. But at least that had a 90-second timeout.)
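A more robust guest-side configuration would sidestep this entirely (sketch; the file name and MAC address are hypothetical): match the NIC by its MAC address instead of the bus-dependent device name, and mark it optional so systemd’s wait-online doesn’t block boot on it.

```yaml
# Hypothetical /etc/netplan/01-netcfg.yaml in the guest
network:
  version: 2
  ethernets:
    primary:
      match:
        macaddress: "52:54:00:12:34:56"
      dhcp4: true
      optional: true
```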
This isn’t libvirt’s fault, but it does show the importance of retaining the bus layout through an export/import cycle.
But what is the performance? We must know!
Best-of-three numbers for
npm run build on a React site I had handy:
| Setup | Real/Wall Time | Normalized |
| --- | --- | --- |
| distrobox | 16.468 sec. | 1.000 x |
| KVM virtiofs | 22.810 sec. | 1.385 x |
| VirtualBox NFS | 39.300 sec. | 2.386 x |
I didn’t go to the trouble of installing Node on the bare metal when I already had a distrobox container ready, but I expect
distrobox to be extremely close to native performance. There’s no hypervisor and no other filesystem interposed. (The problem with distrobox is the lack of isolation from the host. Its main purpose is to make the container seamless, with access to the host’s Wayland/X11, etc.)
virtio-fs would be nice to have. Alas!
I also found that the
libvirt group membership doesn’t help assign a static IP. I still didn’t have permission to create the virtual bridge. Alas…