and use it for plugin linked clones
This also enables extended_l2=on, as it is mandatory for backing file
preallocation.
Preallocation was missing previously, so this should increase performance
for linked clones now (around 5x in randwrite 4k).
cluster_size is set to 128k, as it reduces qcow2 overhead (less disk
usage, but also less memory needed to cache metadata).
l2_extended is not yet enabled on the base image, but it could also help
to reduce overhead without impacting performance.
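For illustration, creating such a linked clone could look roughly like
this (paths are examples, not the exact plugin invocation):

    qemu-img create -f qcow2 -F qcow2 -b ../9999/base-9999-disk-0.qcow2 \
        -o preallocation=metadata,extended_l2=on,cluster_size=128k \
        vm-100-disk-0.qcow2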
Benchmark on a 100G qcow2 file:
fio --filename=/dev/sdb --direct=1 --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --name=test
fio --filename=/dev/sdb --direct=1 --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --name=test
base image:
randwrite 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 20215
randread 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 22219
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 20217
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 21742
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 21599
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 22037
clone image with backing file:
randwrite 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 3912
randread 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 21476
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 20563
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 22265
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 18016
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 21611
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
In 7684225 ("ceph/rbd: set 'keyring' in ceph configuration for
externally managed RBD storages"), the ceph config creation was moved
into a new function that checks whether the installation is an external
Ceph cluster or not.
However, this check was forgotten in the RBDPlugin and is now added.
Without the check, a configuration file /etc/pve/priv/ceph/<pool>.conf
is created, and because that file exists, pvestatd complains:
pvestatd[1144]: ignoring custom ceph config for storage 'pool', 'monhost' is not set (assuming pveceph managed cluster)!
Fixes: 7684225 ("ceph/rbd: set 'keyring' in ceph configuration for externally managed RBD storages")
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250716130117.71785-1-h.duerr@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use '/dev/zvol' as a base path for new storages for providers 'iet'
and 'LIO', because that is what modern distributions use.
This is a breaking change regarding the addition of new storages on
older distributions, but it's enough to specify the base path '/dev'
explicitly for setups that require it.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/20250605111109.52712-1-f.ebner@proxmox.com
When discovering a new volume group (VG), for example on boot, LVM
triggers autoactivation. With the default settings, this activates all
logical volumes (LVs) in the VG. Activating an LV creates a
device-mapper device and a block device under /dev/mapper.
Autoactivation is problematic for shared LVM storages, see #4997 [1].
For the inherently local LVM-thin storage it is less problematic, but
it still makes sense to avoid unnecessarily activating LVs and thus
making them visible on the host at boot.
To avoid that, disable autoactivation after creating new LVs. lvcreate
on trixie does not yet accept the --setautoactivation flag for thin
LVs; support was only added with [2]. Hence, setting the flag is done
with an additional lvchange command for now. With this setting,
LVM autoactivation will not activate these LVs, and the storage stack
will take care of activating/deactivating LVs when needed.
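For illustration, the extra command amounts to something like this
(VG/LV names are examples):

    lvchange --setautoactivation n vg/vm-100-disk-0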
The flag is only set for newly created LVs, so LVs created before this
patch can still trigger #4997. To avoid this, users will be advised to
run a script to disable autoactivation for existing LVs.
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] 1fba3b876b
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250709141034.169726-3-f.weber@proxmox.com
When discovering a new volume group (VG), for example on boot, LVM
triggers autoactivation. With the default settings, this activates all
logical volumes (LVs) in the VG. Activating an LV creates a
device-mapper device and a block device under /dev/mapper.
This is not necessarily problematic for local LVM VGs, but it is
problematic for VGs on top of a shared LUN used by multiple cluster
nodes (accessed via e.g. iSCSI/Fibre Channel/direct-attached SAS).
Concretely, in a cluster with a shared LVM VG where an LV is active on
nodes 1 and 2, deleting the LV on node 1 will not clean up the
device-mapper device on node 2. If an LV with the same name is
recreated later, the leftover device-mapper device will cause
activation of that LV on node 2 to fail with:
> device-mapper: create ioctl on [...] failed: Device or resource busy
Hence, certain combinations of guest removal (and thus LV removals)
and node reboots can cause guest creation or VM live migration (which
both entail LV activation) to fail with the above error message for
certain VMIDs, see bug #4997 for more information [1].
To avoid this issue in the future, disable autoactivation when
creating new LVs using the `--setautoactivation` flag. With this
setting, LVM autoactivation will not activate these LVs, and the
storage stack will take care of activating/deactivating the LV (only)
on the correct node when needed.
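For illustration (VG/LV names and size are examples, not the exact
command issued by the storage plugin):

    lvcreate --setautoactivation n --name vm-100-disk-0 --size 32G <vg>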
This additionally fixes an issue with multipath on FC/SAS-attached
LUNs where LVs would be activated too early after boot, when multipath
is not yet available; see [3] for more details and the current workaround.
The `--setautoactivation` flag was introduced with LVM 2.03.12 [2], so
it is available since Bookworm/PVE 8, which ships 2.03.16. Nodes with
older LVM versions ignore the flag and remove it on metadata updates,
which is why PVE 8 could not use the flag reliably, since there may
still be PVE 7 nodes in the cluster that reset it on metadata updates.
The flag is only set for newly created LVs, so LVs created before this
patch can still trigger #4997. To avoid this, users will be advised to
run a script to disable autoactivation for existing LVs.
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] https://gitlab.com/lvmteam/lvm2/-/blob/main/WHATS_NEW
[3] https://pve.proxmox.com/mediawiki/index.php?title=Multipath&oldid=12039#FC/SAS-specific_configuration
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250709141034.169726-2-f.weber@proxmox.com
Introduce qemu_blockdev_options() plugin method.
In terms of the plugin API only, adding the qemu_blockdev_options()
method is a fully backwards-compatible change. However, once qemu-server
switches to '-blockdev', plugins for which the default implementation
is not sufficient will not be usable for virtual machines anymore.
Therefore, this is intended for the next major release, Proxmox VE 9.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
FG: fixed typo, add paragraph break
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
for better backwards compatibility. This also means using path()
rather than filesystem_path() as the latter does not return protocol
paths.
Some protocol paths are not implemented (all protocols listed by
grepping for '\.protocol_name' in QEMU were considered):
- ftp(s)/http(s), which would access web servers via curl. This one
could be added if there is enough interest.
- nvme://XXXX:XX:XX.X/X, which would access a host NVME device.
- null-{aio,co}, which are mainly useful for debugging.
- pbs, because path-based access is not used anymore for PBS;
live-restore in qemu-server already defines a driver-based device.
- nfs and ssh, because the QEMU build script used by Proxmox VE does
not enable them.
- blk{debug,verify}, because they are for debugging.
- the ones used by blkio, i.e. io_uring, nvme-io_uring,
virtio-blk-vfio-pci, virtio-blk-vhost-user and
virtio-blk-vhost-vdpa, because the QEMU build script used by Proxmox
VE does not enable blkio.
- backup-dump and zeroinit, because they should not be used by the
storage layer directly.
- gluster, because support is dropped in Proxmox VE 9.
- host_cdrom, because the storage layer should not access host CD-ROM
devices.
- fat, because it hopefully isn't used by any third-party plugin here.
Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Everything the default plugin method implementation can return is
allowed, so there is no breakage introduced by this patch.
By far the most common drivers will be 'file' and 'host_device', which
the default implementation of the plugin method currently uses. Other
quite common ones will be 'iscsi' and 'nbd'. There might also be
plugins with 'rbd' and it is planned to support QEMU protocol-paths in
the default plugin method implementation, where the 'rbd:' protocol
will also be supported.
Plugin authors are encouraged to request additional drivers and
options based on their needs on the pve-devel mailing list. The list
just starts out more restrictive; anything for which there is no good
reason to disallow it can be allowed in the future upon request.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Plugins can guard based on the machine version to be able to switch
drivers or options in a safe way without the risk of breaking older
versions.
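For illustration, such a guard could look roughly like this inside a
plugin's qemu_blockdev_options() implementation (min_version() is just a
stand-in for whatever version comparison helper a plugin uses, not an
API added here):

    if (min_version($machine_version, 10, 0)) {
        return { driver => 'file', filename => $path, 'some-new-option' => 1 };
    }
    return { driver => 'file', filename => $path };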
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This is mostly in preparation for external qcow2 snapshot support.
For internal qcow2 snapshots, which currently are the only supported
variant, it is not possible to attach the snapshot only. If access to
that is required it will need to be handled differently, e.g. via a
FUSE/NBD export.
Such accesses are currently not done for running VMs via '-drive'
either, so there still is feature parity.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
For '-drive', qemu-server sets special cache options for an EFI disk
using RBD. In preparation to seamlessly switch to the new '-blockdev'
interface, do the same here. Note that the issue from bug #3329, which
is solved by these cache options, still affects current versions.
With -blockdev, the cache options are split up. While cache.direct and
cache.no-flush can be set in the -blockdev options, cache.writeback is
a front-end property and was intentionally removed from the -blockdev
options by QEMU commit aaa436f998 ("block: Remove cache.writeback from
blockdev-add"). It needs to be configured as the 'write-cache'
property for the ide-hd/scsi-hd/virtio-blk device.
The default is already 'writeback' and no cache mode can be set for an
EFI drive configuration in Proxmox VE currently, so there will not be
a clash.
┌─────────────┬─────────────────┬──────────────┬────────────────┐
│ │ cache.writeback │ cache.direct │ cache.no-flush │
├─────────────┼─────────────────┼──────────────┼────────────────┤
│writeback │ on │ off │ off │
├─────────────┼─────────────────┼──────────────┼────────────────┤
│none │ on │ on │ off │
├─────────────┼─────────────────┼──────────────┼────────────────┤
│writethrough │ off │ off │ off │
├─────────────┼─────────────────┼──────────────┼────────────────┤
│directsync │ off │ on │ off │
├─────────────┼─────────────────┼──────────────┼────────────────┤
│unsafe │ on │ off │ on │
└─────────────┴─────────────────┴──────────────┴────────────────┘
Table from 'man kvm'.
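For illustration, the 'writeback' semantics used for the EFI disk then
map onto the new interface roughly like this (simplified, not the exact
command line generated by qemu-server):

    -blockdev 'driver=rbd,pool=<pool>,image=<image>,node-name=<node>,cache.direct=off,cache.no-flush=off'
    -device 'scsi-hd,drive=<node>,write-cache=on'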
Alternatively, the option could only be set once when allocating the
RBD volume. However, then we would need to detect all cases where a
volume could potentially be used as an EFI disk later. Having a custom
disk type would help a lot there. The approach here was chosen as it
is a catch-all and should not be too costly either.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
For QEMU, when using '-blockdev', there is no way to specify the
keyring file like was possible with '-drive', so it has to be set in
the corresponding Ceph configuration file. As it applies to all images
on the storage, it also is the most natural place for the setting.
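For illustration, the corresponding entry in the storage's Ceph
configuration file looks something like this (the keyring path is an
example):

    [global]
        keyring = /etc/pve/priv/ceph/<storeid>.keyring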
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
ZFS does not have a filesystem_path() method, so the default
implementation for qemu_blockdev_options() cannot be re-used. This is
most likely because snapshots are currently not directly accessible
via a filesystem path in the Proxmox VE storage layer.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This is in preparation to switch qemu-server from using '-drive' to
the modern '-blockdev' in the QEMU commandline options as well as for
the qemu-storage-daemon, which only supports '-blockdev'. The plugins
know best what driver and options are needed to access an image, so
a dedicated plugin method returning the necessary parameters for
'-blockdev' is the most straightforward approach.
There intentionally is only handling for absolute paths in the default
plugin implementation. Any plugin requiring more needs to implement
the method itself. With PVE 9 being a major release and most popular
plugins not using special protocols like 'rbd://', this seems
acceptable.
For NBD, etc. qemu-server should construct the blockdev object.
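A minimal sketch of what such a default implementation boils down to
(method signature and error handling are illustrative, not the actual
code):

    sub qemu_blockdev_options {
        my ($class, $scfg, $storeid, $volname) = @_;

        # only plain, absolute filesystem paths are handled by the default
        my ($path) = $class->filesystem_path($scfg, $volname);
        die "cannot determine blockdev options for '$volname'\n" if $path !~ m|^/|;

        # block devices (e.g. LVM) use 'host_device', regular files use 'file'
        my $driver = -b $path ? 'host_device' : 'file';
        return { driver => $driver, filename => $path };
    }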
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The GlusterFS project has been unmaintained for a while, and other
projects like QEMU are also dropping support for using it natively.
One can still use the gluster tools to mount an instance manually and
then use it as a directory storage; the better (long-term) option will
be to replace the storage server with something maintained, though. As
PVE 8 will be supported until the middle of 2026, users have some time
before they need to decide which way they will go.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
using the new top-level `make tidy` target, which calls perltidy via
our wrapper to enforce the desired style as closely as possible.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
See pve-common's commit 5ae1f2e ("buildsys: add tidy make target")
for details about the chosen xargs parameters.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
No other plugin activates the storage inside the path() method either.
The caller needs to ensure that the storage is activated before using
the result of path().
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In the BTRFS plugin, resize_volume() for a subvolume currently fails
with "failed to get btrfs subvolume ID from: ". This is because the
btrfs 'subvol show' command is invoked with '-q', so there is no
output.
As btrfs quotas are currently not implemented, die early with a clean
error instead and comment out the unused code for now.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250303092445.13873-6-f.ebner@proxmox.com
The result from the file_size_info() call is not used by
volume_export_formats() and most failure scenarios of file_size_info()
lead to an undefined return value rather than a failure. This includes
the case for a non-existent file. The default path() implementation
doesn't do any existence check either.
An interesting scenario where file_size_info() does fail is when the
volume is corrupted or not in the queried format. But this is a rare
edge case, so an early check doesn't seem worth it. It will be caught
by volume_export() itself, or in case of VM migration, also when
querying the size during scanning of local volumes.
While checking for the definedness of $size could serve as an early
sanity check, it is not currently done and other plugins don't do such
early checks in their implementation of volume_export_formats()
either. Keep the implementation abstract in Plugin.pm too and avoid
doing IO. Callers that want to do early existence checks or similar
can do so themselves explicitly, covering all plugins.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250303092445.13873-4-f.ebner@proxmox.com
Return the same size as in list context. See also commit "plugin: file
size info: be consistent about size of directory subvol".
Fixes cloning containers with unsized subvolumes on BTRFS. Before the
change, this would fail with "mkfs.ext4: Device size reported to be
zero.". That is because with non-zero size, the allocation of the
volume for the clone will be done with 'raw' format by the
alloc_disk() helper in LXC.pm rather than 'subvol'. This change will
make cloning containers with unsized subvol directories possible.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250303092445.13873-3-f.ebner@proxmox.com
In list context, the file_size_info() function in Plugin.pm would
return 0 for the size of a subvol directory, but in scalar context 1.
As reported in the community forum [0], the change in commit e50dde0
("volume export: rely on storage plugin's format"), changing the
caller in volume_export() to scalar context exposed the inconsistency
in the return value for the size. This led to breakage of migration
with unsized btrfs subvolumes.
Align the returned values to avoid such surprises. Callers that would
treat 0 as an error should use list context and check whether it is a
subvol, if they are able to handle them.
NOTE: it is not possible to attach either a subvol, or a volume with
zero size to a VM.
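As an illustration of the Perl context difference at the call sites
(simplified, not actual caller code):

    my ($size, $format) = file_size_info($path); # list context: 0 for a subvol
    my $size = file_size_info($path);            # scalar context: now also 0, was 1 before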
Existing callers of file_size_info() do not break with this change:
GuestImport/OVF.pm - parse_ovf() +
API/Qemu.pm - create_disks():
    Doesn't support subvol directories, dies if size is 0.
Storage/Plugin.pm - create_base() +
Storage/{BTRFS,}Plugin.pm - list_images():
    Checks for definedness of $size.
Storage/{BTRFS,ESXi,}Plugin.pm - volume_size_info():
    Transitive, see below.
Storage/Plugin.pm - volume_export():
    Regressed by commit e50dde0, this change will restore previous
    behavior.
Storage/Plugin.pm - volume_export_formats() +
GuestImport.pm - extract_disk_from_import_file() +
Storage.pm - assert_iso_content():
    Doesn't use the result.
CLI/qm.pm - importdisk:
    Dies early if source is a directory.
CLI/qm.pm - importovf:
    Calls parse_ovf() earlier.
Existing callers of volume_size_info() do not break with this change:
API2/Storage/Content.pm - info:
    Uses list context, not affected by change.
QemuMigrate.pm - scan_local_volumes() +
API2/Qemu.pm - create_disks(), first call:
    Uses list context, not affected by change, does not support subvol
    directories.
API2/Qemu.pm - create_disks(), second call +
API2/Qemu.pm - import_from_volid():
    Doesn't support subvol directories, dies if size is 0.
API2/Qemu.pm - resize_vm +
VZDump/QemuServer.pm - prepare() +
QemuServer.pm - clone_disk() +
QemuServer.pm - create_efidisk():
    Doesn't support subvol directories (see NOTE above), but would not
    die directly if size is 0. (In case of create_efidisk(), the size of
    the just created disk is queried.)
API2/LXC.pm - resize_vm:
    For directory plugins, the subsequent call to resize_volume() fails
    with "can't resize this image format".
    For BTRFS, quotas are currently not supported and the call to
    resize_volume() fails with "failed to get btrfs subvolume ID from:".
    This is because the btrfs 'subvol show' command is invoked with
    '-q', so there is no output. Even if it would work, it would be more
    correct to use 0 as the current size to add to the new quota rather
    than 1.
LXC/Config.pm - rescan_volume():
    This will happily use size=1 from before this change, but that is
    not correct. A subsequent 'pct rescan' will correct the size to 0.
    It is only used when hotplugging an existing subvol directory.
LXC.pm - copy_volume():
    This will happily use size=1 from before this change, but that is
    not correct and will lead to failure "mkfs.ext4: Device size
    reported to be zero.". That is because with non-zero size, the
    allocation of the volume for the clone will be done with 'raw'
    format by the alloc_disk() helper in LXC.pm rather than 'subvol'.
    This change will make cloning containers with unsized subvol
    directories possible.
QemuServer/Cloudinit.pm - commit_cloudinit_disk():
    Doesn't support subvol directories (see NOTE above), will allocate
    a new volume when size is 0.
[0]: https://forum.proxmox.com/threads/162943/
Fixes: e50dde0 ("volume export: rely on storage plugin's format")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250303092445.13873-2-f.ebner@proxmox.com
Currently, setting up a directory storage with trailing slashes in the
path results in log messages with double slashes if this path gets
expanded by an action like vzdump. While this is just a cosmetic
issue, it looks odd, and some users might think it is a bug if they
see such paths while investigating some other issue.
This patch removes those trailing slashes once the directory storage
class config gets updated.
Signed-off-by: Daniel Herzig <d.herzig@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The subvolume itself cannot be included if there is a base snapshot,
or the command would fail with e.g.
> ERROR: subvolume /mnt/btrfs/images/400/vm-400-disk-0 is not read-only
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
For a 'raw' volume, the path includes the '/disk.raw' suffix, but the
check expects the containing subvolume directory.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The split_list() helper returns a list, and assignment in scalar
context would result in the number of elements instead of the desired
array reference that the BTRFS plugin expects.
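As an illustration of the Perl pitfall (the option name is made up,
not the actual call site):

    # before (bug): scalar context, yields the number of elements
    my $list = PVE::Tools::split_list($scfg->{'some-list-option'});

    # after (fix): array reference, as the BTRFS plugin expects
    my $list = [ PVE::Tools::split_list($scfg->{'some-list-option'}) ];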
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Hard-coding a list of sensitive properties means that custom plugins
cannot define their own sensitive properties for the on_add/on_update
hooks.
Have plugins declare the list of their sensitive properties in the
plugin data. For backwards compatibility, return the previously
hard-coded list if no such declaration is present.
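A hedged sketch of what such a declaration could look like in a plugin
(the key name and surrounding plugindata() structure are simplified
here, not copied from the actual API):

    sub plugindata {
        return {
            content => [ { images => 1 }, { images => 1 } ],
            format => [ { raw => 1 }, 'raw' ],
            'sensitive-properties' => { 'my-api-token' => 1 },
        };
    }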
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-6-f.ebner@proxmox.com
The new_backup_provider() method can be used by storage plugins for
external backup providers. If the method returns a provider, Proxmox
VE will use callbacks to that provider for backups and restore instead
of using its usual backup/restore mechanisms.
The backup provider API is split into two parts, both of which again
need different implementations for VM and LXC guests:
1. Backup API
In Proxmox VE, a backup job consists of backup tasks for individual
guests. There are methods for initialization and cleanup of the job,
i.e. job_init() and job_cleanup() and for each guest backup, i.e.
backup_init() and backup_cleanup().
The backup_get_mechanism() method is used to decide on the backup
mechanism. Currently, 'file-handle' or 'nbd' is possible for VMs, and
'directory' for containers. The method also lets the plugin indicate
whether to use a bitmap for incremental VM backup or not. It is enough
to implement one mechanism for VMs and one mechanism for containers.
Next, there are methods for backing up the guest's configuration and
data: backup_vm() for VM backup and backup_container() for container
backup, with the latter running as the (unprivileged) container root
user (see below).
Finally, there are helpers, e.g. for getting the provider name or the
volume ID for the backup target, as well as for handling the backup log.
The backup transaction looks as follows:
First, job_init() is called, which can be used to check backup server
availability and prepare the connection. Then, for each guest,
backup_init() is called, followed by backup_vm() or backup_container(),
and finally backup_cleanup(). Afterwards, job_cleanup() is called. For
containers, there is an additional backup_container_prepare() call while
still privileged. The actual backup_container() call happens as the
(unprivileged) container root user, so that the file owner and group IDs
match the container's perspective.
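Sketched as illustrative caller code (argument lists are simplified
and this is not the actual vzdump integration):

    $backup_provider->job_init($start_time);
    for my $guest (@guests) {
        $backup_provider->backup_init($guest);
        if ($guest->{type} eq 'qemu') {
            $backup_provider->backup_vm($guest);
        } else {
            $backup_provider->backup_container_prepare($guest); # still privileged
            $backup_provider->backup_container($guest); # as the unprivileged container root user
        }
        $backup_provider->backup_cleanup($guest);
    }
    $backup_provider->job_cleanup();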
1.1 Backup Mechanisms
VM:
Access to the data on the VM's disk from the time the backup started
is made available via a so-called "snapshot access". This is either
the full image, or in case a bitmap is used, the dirty parts of the
image since the last time the bitmap was used for a successful backup.
Reading outside of the dirty parts will result in an error. After
backing up each part of the disk, it should be discarded in the export
to avoid unnecessary space usage on the Proxmox VE side (there is an
associated fleecing image).
VM mechanism 'file-handle':
The snapshot access is exposed via a file descriptor. A subroutine to
read the dirty regions for incremental backup is provided as well.
VM mechanism 'nbd':
The snapshot access and, if used, bitmap are exported via NBD.
Container mechanism 'directory':
A copy or snapshot of the container's filesystem state is made
available as a directory. The method is executed inside the user
namespace associated with the container.
2. Restore API
The restore_get_mechanism() method is used to decide on the restore
mechanism. Currently, 'qemu-img' for VMs, and 'directory' or 'tar' for
containers are possible. It is enough to implement one mechanism for
VMs and one mechanism for containers.
Next, there are methods for extracting the guest and firewall
configuration, and the restore mechanism itself is implemented via a
pair of methods: an init method for making the data available to
Proxmox VE, and a cleanup method that is called after the restore.
2.1 Restore Mechanisms
VM mechanism 'qemu-img':
The backup provider gives a path to the disk image that will be
restored. The path needs to be something 'qemu-img' can deal with,
e.g. can also be an NBD URI or similar.
Container mechanism 'directory':
The backup provider gives the path to a directory with the full
filesystem structure of the container.
Container mechanism 'tar':
The backup provider gives the path to a (potentially compressed) tar
archive with the full filesystem structure of the container.
See the PVE::BackupProvider::Plugin module for the full API
documentation.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[WB: replace backup_vm_available_bitmaps with
backup_vm_query_incremental, which instead of a bitmap name provides
a bitmap mode that is 'new' (create or *recreate* a bitmap) or 'use'
(use an existing bitmap, or create one if none exists)]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-5-f.ebner@proxmox.com