Since 'pvesm export' and 'pvesm import' are connected via a pipe and
SSH, a fatal error in the former can lead to no valid header being
written to the pipe. Handle this more gracefully by printing an
easier-to-understand error message instead of 'uninitialized value'
warnings with no context.
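
A minimal sketch of the idea, assuming a hypothetical helper and header
check (not the actual PVE::Storage code):

    use strict;
    use warnings;

    # read the stream header that 'pvesm export' is expected to write;
    # die with context instead of letting later code warn about undef
    sub read_export_header {
        my ($fh) = @_;
        my $header = <$fh>;
        die "no valid header received - 'pvesm export' probably failed on the source side\n"
            if !defined($header);
        chomp $header;
        return $header;
    }
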
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Modern kernels, like 5.3, support all those features ('fast-diff',
'object-map', 'deep-flatten'), so we do not want to disable them
there. 5.0 already supports exclusive-lock, so there is no need to
disable exclusive locking there.
Further, we also want to profit from newly available features, so
let's enable those which can be enabled "live" (i.e., after image
creation) if they're available.
While we could also parse the kernel information directly from:
/sys/module/libceph/parameters/supported_features
there's not much advantage to that: features cannot be disabled with
KConfig, and they're also very dependent on the booted kernel
version. So for us it's enough to check the kernel version.
This only affects containers and VMs backed by a storage with KRBD
explicitly enabled. But as the enabling and disabling happens
transparently, it has no effect on the running guest.
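
A hedged sketch of the version check; the per-kernel feature sets are
simplified and all variable names are made up:

    use POSIX qw(uname);

    # determine the booted kernel version
    my (undef, undef, $release) = uname();
    my ($major, $minor) = $release =~ m/^(\d+)\.(\d+)/;

    my (@enable, @disable);
    if ($major > 5 || ($major == 5 && $minor >= 3)) {
        # 5.3+ krbd copes with all features; enable those which can
        # be turned on after image creation
        push @enable, qw(exclusive-lock object-map fast-diff);
    } elsif ($major == 5) {
        # 5.0 - 5.2: exclusive-lock works, the newer features do not
        push @enable, 'exclusive-lock';
        push @disable, qw(object-map fast-diff deep-flatten);
    } else {
        push @disable, qw(exclusive-lock object-map fast-diff deep-flatten);
    }

    # then run, e.g., 'rbd feature enable <pool>/<image> <feature>'
    # for each entry in @enable, and 'rbd feature disable' for @disable
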
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The bugfix for #2317 introduced a kind of odd API behavior, where
each volume was returned twice from our API if a storage has both
'rootdir' & 'images' content types enabled. To give the content type
of a volume an actual meaning, it is now inferred from the
associated guest; if there is no guest, or we don't have an owner
for that volume, we default to 'images'.
At the volume level, there is no option to list volumes based on
content types, since the volumes do not know what type they are
actually used for.
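
A minimal sketch of the inference, with a hypothetical helper and
vmlist structure:

    # infer the content type from the owning guest; default to
    # 'images' when there is no guest or no known owner
    sub infer_content_type {
        my ($owner_vmid, $vmlist) = @_;
        my $guest = defined($owner_vmid) ? $vmlist->{$owner_vmid} : undef;
        return 'images' if !$guest;
        # containers use 'rootdir' volumes, VMs use 'images'
        return $guest->{type} eq 'lxc' ? 'rootdir' : 'images';
    }
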
Signed-off-by: Tim Marx <t.marx@proxmox.com>
When adding a zfspool storage with 'pvesm add', the mount point is
now added automatically to the storage configuration if it can be
determined. path() does not assume the default mountpoint anymore,
fixing #2085.
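
A hedged sketch of determining the mount point at add time; the zfs
invocation is standard, the surrounding code is illustrative only:

    # ask ZFS for the dataset's actual mount point instead of assuming
    # the default /<pool> later in path()
    my $mountpoint = `zfs get -H -o value mountpoint $scfg->{pool}`;
    chomp $mountpoint;
    # only persist a usable absolute path
    $scfg->{mountpoint} = $mountpoint if $mountpoint =~ m|^/|;
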
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
When working with several ZFS over iSCSI / LIO storages, we might
switch lookups between them within the 15-second cache interval.
Previously, the cached data of the previously queried storage was
reused, which broke disk move, for example.
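
A minimal sketch of a per-target cache, assuming a hypothetical helper
query_lio_target() and the 15-second lifetime from above:

    my $lio_cache = {};

    sub get_target_info {
        my ($target) = @_;
        my $entry = $lio_cache->{$target};
        # only reuse a fresh entry for *this* target, never another one's
        return $entry->{data} if $entry && (time() - $entry->{ts}) < 15;
        my $data = query_lio_target($target);  # hypothetical helper
        $lio_cache->{$target} = { data => $data, ts => time() };
        return $data;
    }
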
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
The common ZFSPlugin was missing volume name parsing in a few
places. This was not a problem for standard volumes, but broke
functionality (like resize, snapshot, rollback) with linked clones,
as the name of the zvol must be extracted from the entry in the
config (removing the base-X-disk-Y prefix).
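
A hedged sketch of the extraction; the 'base-X-disk-Y/vm-...' layout
follows PVE's linked-clone naming, the helper itself is made up:

    # a linked clone's volname carries its base as a prefix, e.g.
    # 'base-100-disk-0/vm-101-disk-0'; only the part after the slash
    # is the actual zvol name
    sub zvol_name {
        my ($volname) = @_;
        return $volname =~ m!/([^/]+)$! ? $1 : $volname;
    }
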
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
In the default config, emulate_tpu is set to 0, which disables
unmap support. Once enabled, trim can run from the guest to reclaim
free space.
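
A hedged illustration of where the knob sits; the attribute hash is a
sketch, not the plugin's actual data structure:

    # when building the LUN's backstore attribute set, enable
    # thin-provisioning UNMAP so guest TRIM/discard can reclaim space
    my $lun_attribs = {
        emulate_tpu => 1,   # LIO defaults to 0, i.e. no UNMAP support
    };
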
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
It's not needed; LIO sees the new size automatically.
And it was broken anyway. Partially fixes #2335.
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
Using the JSON output, as suggested by Thomas, we now die if the
decoding fails; if it succeeds, all return values are set to the
corresponding decoded values. That should prevent any unforeseen
null size values, except if qemu-img info itself reports one, which
we then consider valid.
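
A minimal sketch of the approach; the key names match qemu-img's JSON
output, the surrounding code is illustrative:

    use JSON;

    # parse 'qemu-img info --output=json' and fail loudly on bad
    # output instead of silently returning undef sizes
    my $json = `qemu-img info --output=json '$path'`;
    my $info = eval { decode_json($json) };
    die "could not parse qemu-img info output - $@\n" if $@;

    my $size   = $info->{'virtual-size'};
    my $format = $info->{format};
    my $used   = $info->{'actual-size'};
    my $parent = $info->{'backing-filename'};  # absent without a backing file
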
Signed-off-by: Tim Marx <t.marx@proxmox.com>
Migration with --targetstorage was broken because of this.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
$1 and $2 get set to undef by the vmid filter regex, so we have to
run the name/format regex afterwards; otherwise we get warnings like
'Use of uninitialized value $1 [...]'
and the listing is empty.
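
A hedged illustration of the pitfall, with simplified regexes (not the
plugin's actual patterns):

    # a successful match resets $1/$2, so captures from an earlier
    # regex are lost - copy them into lexicals right away, or run the
    # regex whose captures you consume last
    for my $name (@volume_names) {           # hypothetical listing loop
        next if $name !~ m/^(vm|base)-(\d+)-/;
        my ($prefix, $vmid) = ($1, $2);      # save captures immediately
        next if defined($vmid_filter) && $vmid != $vmid_filter;
        # ... $prefix and $vmid survive any later matches now ...
    }
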
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The patch uses the value from the field 'stored' if it is
available. In Ceph 14.2.2 the storage calculation changed to a
per-pool basis. This introduced an additional field 'stored' that
holds the amount of data written to the pool, while the field 'used'
now holds the pool's usage after replication.
The new calculation will only be used if all OSDs are running with
the on-disk format introduced by Ceph 14.2.2.
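
A minimal sketch of the fallback, assuming one pool entry from the JSON
output of 'ceph df' (field names hedged):

    # prefer 'stored' (data written, before replication) when the
    # cluster reports it, otherwise fall back to the old 'bytes_used'
    my $stats = $pool->{stats};
    my $used  = $stats->{stored} // $stats->{bytes_used};
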
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>