These errors normally come from virtual devices, like an IPMI disk, if
no media is attached. They spam the log frequently on operations like
migrate, and are quite scare-mongering. So filter them out.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it does not work:
disable RBD image features this kernel RBD drivers is not compatible with: fast-diff,object-map,deep-flatten
clone failed: could not disable krbd-incompatible image features 'fast-diff,object-map,deep-flatten' for rbd image: vm-123123123-disk-0@test: rbd: snapshot name specified for a command that doesn't use it
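A minimal sketch of the core idea (volume name taken from the error
above, helper code hypothetical): strip the '@snap' suffix before
building the feature-disable command, since 'rbd feature disable'
only accepts plain image names.

    # strip the snapshot suffix - features are disabled on the image
    my $volname = 'vm-123123123-disk-0@test';
    my ($image) = $volname =~ m/^([^@]+)/;
    my @cmd = ('rbd', 'feature', 'disable', $image,
        qw(fast-diff object-map deep-flatten));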
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since 'pvesm export' and 'pvesm import' are connected via a pipe and
SSH, a fatal error in the former can lead to no valid header being
written to the pipe. handle this more gracefully by printing an
easier-to-understand error message instead of uninitialized warnings
with no context.
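A hedged sketch of the receiving side (header name and size are
illustrative, not the actual wire format):

    # if the remote 'pvesm export' died, the pipe carries no header and
    # we can give a clear error instead of uninitialized-value warnings
    my $header;
    if (!sysread(STDIN, $header, 64)) {
        die "import failed: no stream header received - "
            . "did 'pvesm export' die on the source side?\n";
    }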
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Modern kernels, like 5.3, support all those features ('fast-diff',
'object-map', 'deep-flatten'), so we do not want to disable them
there. 5.0 already supports exclusive-lock, so there is no need to
disable exclusive locking there.
Further, we also want to profit from newly available features, so
let's enable those which can be enabled "live" (i.e., after image
creation) if they're available.
While we could also parse the supported features directly from:
/sys/module/libceph/parameters/supported_features
there's not much advantage to that: the features cannot be disabled
via KConfig, and they're strictly tied to the booted kernel version.
So for us it's enough to check that one.
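As a sketch, with the thresholds described above (hypothetical
helper, not the actual patch):

    use POSIX ();

    # map the running kernel version to krbd-incompatible features
    my ($maj, $min) = (POSIX::uname())[2] =~ m/^(\d+)\.(\d+)/;
    my @incompatible;
    push @incompatible, 'exclusive-lock'
        if $maj < 5;                                # pre-5.0 kernels
    push @incompatible, qw(fast-diff object-map deep-flatten)
        if $maj < 5 || ($maj == 5 && $min < 3);     # pre-5.3 kernels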
This only affects containers and VMs backed by a storage with KRBD
explicitly enabled. But as the enabling and disabling happen
transparently, it has no effect on the running guest.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The bugfix for #2317 introduced a kind of odd API behavior, where
each volume was returned twice from our API if a storage has both
'rootdir' & 'images' content types enabled. To give the content type
of the volume an actual meaning, it is now inferred from the
associated guest; if there's no guest or we don't have an owner for
that volume, we default to 'images'.
At the volume level, there is no option to list volumes based on
content types, since the volumes do not know what type they are
actually used for.
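A minimal sketch of that inference (function name hypothetical):

    # derive the content type from the owning guest; 'images' is the
    # default when no owner is known
    sub infer_content_type {
        my ($vmid, $vmlist) = @_;
        my $guest = defined($vmid) ? $vmlist->{ids}->{$vmid} : undef;
        return 'images' if !$guest;
        return $guest->{type} eq 'lxc' ? 'rootdir' : 'images';
    }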
Signed-off-by: Tim Marx <t.marx@proxmox.com>
When adding a zfspool storage with 'pvesm add', the mount point is
now added automatically to the storage configuration if it can be
determined. path() does not assume the default mountpoint anymore,
fixing #2085.
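Roughly (pool name hypothetical):

    # ask ZFS for the actual mountpoint instead of assuming /<pool>
    my $pool = 'tank';
    my $scfg = {};    # storage section to be written to storage.cfg
    my $mountpoint = `zfs get -H -o value mountpoint $pool`;
    chomp $mountpoint;
    # only persist it when ZFS reports a real path (not 'legacy'/'none')
    $scfg->{mountpoint} = $mountpoint if $mountpoint =~ m|^/|;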
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
When working with several ZFS over iSCSI / LIO storages, we might
look them up less than 15 seconds apart.
Previously, the cache of the previous storage was reused, which broke
disk move, for example.
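A sketch of per-storage keying (names hypothetical):

    # key the 15-second cache per storage id instead of sharing a
    # single slot across all ZFS over iSCSI storages
    my %cache;
    sub cached_list {
        my ($storeid, $fetch) = @_;
        my $e = $cache{$storeid};
        if (!$e || time() - $e->{stamp} > 15) {
            $e = { stamp => time(), data => $fetch->() };
            $cache{$storeid} = $e;
        }
        return $e->{data};
    }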
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
The common ZFSPlugin was missing volume name parsing
in a few places. This was not a problem for standard
volumes, but broke functionality (like resize,
snapshot, rollback) with linked clones, as the name of
the zvol must be extracted from the entry in the config
(removing the base-X-disk-Y prefix).
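For illustration:

    # a linked clone is stored as 'base-X-disk-Y/vm-A-disk-B' in the
    # config; the zvol name is only the part after the slash
    my $volname = 'base-100-disk-0/vm-101-disk-0';
    my ($name) = $volname =~ m!([^/]+)$!;
    # $name is now 'vm-101-disk-0'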
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
In the default config, emulate_tpu is set to 0, which disables
unmap support. Once enabled, trim can run from the guest to reclaim
free space.
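For an existing backstore, the attribute can be flipped with
targetcli, e.g. (backstore name hypothetical):

    # enable unmap (TRIM) support on an existing LIO block backstore
    system('targetcli', '/backstores/block/vm-100-disk-0',
        'set', 'attribute', 'emulate_tpu=1') == 0
        or die "enabling emulate_tpu failed\n";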
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
It's not needed; LIO sees the new size automatically.
And it was broken anyway. Partially fixes #2335.
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
Using the JSON output, as suggested by Thomas, we now die if the
decoding fails and, if not, all return values are set to the
corresponding decoded values. That should prevent any unforeseen null
size values, except if qemu-img info reports one, which we then
consider valid.
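A minimal sketch of that parsing (image path illustrative):

    use JSON;

    # parse 'qemu-img info --output=json', failing hard on broken output
    my $out = `qemu-img info --output=json /path/to/disk.qcow2`;
    my $info = eval { decode_json($out) };
    die "could not parse qemu-img info output - $@\n" if $@;
    my ($size, $format) = ($info->{'virtual-size'}, $info->{format});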
Signed-off-by: Tim Marx <t.marx@proxmox.com>
$1 and $2 get set to undef by the vmid filter regex, so we have to
run the name/format regex afterwards, else we get errors like:
'use of uninitialized value $1[...]'
and the listing is empty.
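A small reproduction of the pitfall:

    # every *successful* match resets $1, $2, ... - so a capture-less
    # filter match clobbers captures taken beforehand
    my $line = 'vm-100-disk-0.qcow2';
    $line =~ m/^(vm-(\d+)-.+)\.(qcow2|raw|vmdk)$/;    # sets $1..$3
    if ($line =~ m/^vm-100-/) {                       # no capture groups
        # $1..$3 are undef here -> 'use of uninitialized value'
    }
    # fix: run the vmid filter first and the name/format match last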
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The patch uses the value from the field 'stored' if it is available.
In Ceph 14.2.2 the storage calculation changed to a per-pool basis.
This introduced an additional field 'stored' that holds the amount of
data that has been written to the pool, while the field 'used' now
holds the data after replication for the pool.
The new calculation will be used only if all OSDs are running with the
on-disk format introduced by Ceph 14.2.2.
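A minimal sketch of the fallback:

    # prefer 'stored' (bytes written, before replication) and fall back
    # to 'used' on clusters that do not report it yet
    my $stats = { used => 3 * 1024**3 };   # e.g. decoded 'ceph df' data
    my $used = $stats->{stored} // $stats->{used};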
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
To maintain full (backwards) compatibility, leave the type name as
'iso' - this makes this patch work without changing every consumer of
storage APIs.
Note that currently these files can only be attached as a CDROM/DVD
drive, so USB-only images can be uploaded but might not work in VMs.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
and actually do that not just for creating zvols, but also when
activating them. this should fix a range of issues/races that sometimes
occurred on bootup, snapshot rollback, or similar operations.
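A hedged sketch of such a wait (helper name hypothetical):

    use Time::HiRes qw(usleep);

    # poll until udev created the zvol's device node instead of
    # assuming it exists right after creation/activation
    sub wait_for_zvol {
        my ($zvol) = @_;
        my $path = "/dev/zvol/$zvol";
        for (1 .. 10) {
            return $path if -b $path;
            usleep(100_000);    # 100 ms
        }
        die "timeout: device node $path not available\n";
    }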
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
plugins can still override list_volumes if they want separate methods to
list rootdir and images content.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since list_volumes is only supposed to be called with filtered content
types, this should ensure that get_subdir is only called for plugins
that have a defined 'path' property, like the old code in
PVE::Storage::template_list did.
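A sketch of that guard:

    # skip get_subdir for plugins without a 'path' property
    sub subdir_or_undef {
        my ($class, $scfg, $vtype) = @_;
        return undef if !defined($scfg->{path});
        return $class->get_subdir($scfg, $vtype);
    }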
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
checking '$server:$subdir' is too strict to work in all
circumstances, e.g. adding/removing a monitor would mean that it is
not the same anymore; the same applies when adding/removing ports in
the config.
check only if the subdir is the same and if it is a cephfs; this way,
it still returns true if someone changes the config
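a sketch of the relaxed check, assuming $mounts holds
[source, mountpoint, fstype] entries parsed from /proc/mounts:

    # match on fstype and subdir only; monitor hosts/ports in the
    # mount source may legitimately change
    sub cephfs_is_mounted {
        my ($subdir, $mountpoint, $mounts) = @_;
        return grep {
            $_->[2] eq 'ceph'
                && $_->[1] eq $mountpoint
                && $_->[0] =~ m|:\Q$subdir\E$|
        } @$mounts;
    }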
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
don't require any specific file types; if something is there which
can be requested to be deleted over API/CLI, it either comes from an
API/CLI operation and should thus be OK to delete, or a user caused
the creation of the special file, and then it either works and all is
good, or the user gets notified, as we check whether unlink succeeded
anyway
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Symlinks with a non-existing target fail Perl's '-f' test and were
thus not deletable via the API (failing with '$path does not exist').
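The distinction in a nutshell:

    # '-f' stats the symlink *target* and fails for dangling links;
    # '-l' uses lstat and detects the link itself
    my $path = '/var/lib/vz/images/broken-link';    # hypothetical
    die "$path does not exist\n" if !(-f $path || -l $path);
    unlink($path) or die "removing '$path' failed - $!\n";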
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Since ceph-fuse is called directly in the CephFS storage plugin, which
cannot process the _netdev option, mounting the CephFS storage fails
when fuse is set in the storage.cfg.
This patch moves the _netdev option into the else branch of the 'if
fuse is set' statement, so _netdev is only added if the CephFS kernel
client mounts the storage.
It seems _netdev is not needed anyway for the fuse mount, as the
connection is closed once the fuse process gets killed on shutdown.
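A sketch of the branch (user, source, and mount point illustrative;
run_command as in PVE::Tools):

    # pass _netdev only to the kernel client; ceph-fuse rejects it
    if ($scfg->{fuse}) {
        run_command(['ceph-fuse', '-n', "client.$user", $mountpoint]);
    } else {
        run_command(['mount', '-t', 'ceph', $source, $mountpoint,
            '-o', "name=$user,_netdev"]);
    }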
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Adds a fallback to 'Plugin::path' in the default implementation of
'map_volume' to simplify the common case of calling 'map_volume'
followed by a defined-check and a call to 'path' if it is not defined.
The path is now always returned if the plugin in question does not
override 'map_volume'.
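The fallback boils down to something like:

    # default implementation: without a plugin-specific override,
    # mapping a volume is just resolving its path
    sub map_volume {
        my ($class, $storeid, $scfg, $volname, $snapname) = @_;
        my ($path) = $class->path($scfg, $volname, $storeid, $snapname);
        return $path;
    }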
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
The underlying issue is that a zpool can get imported only once, so
we first check if it's in `zpool list`, and thus imported, and only
if it does not show up there do we try to import it.
But, this can race with either:
* parallel running activate_storage call, through CLI/API/daemon
* a zpool import from an admin (a bit unlikely, but hey that's the
thing with race conditions ;))
So refactor the "is pool imported" check into a closure, call it
additionally if the import failed, and silence the error if the pool
is now listed, and thus imported. This makes it a little bit nicer to
read too, IMO.
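A sketch of the refactored flow (pool name hypothetical, run_command
as in PVE::Tools):

    # recheck after a failed import: another activate_storage call or
    # an admin may have imported the pool in the meantime
    my $pool = 'tank';
    my $pool_imported = sub {
        my $res = `zpool list -H -o name $pool 2>/dev/null`;
        return $res =~ m/^\Q$pool\E$/m;
    };
    if (!$pool_imported->()) {
        eval {
            run_command(['zpool', 'import', '-d', '/dev/disk/by-id/', $pool]);
        };
        if (my $err = $@) {
            die $err if !$pool_imported->();    # silence lost races only
        }
    }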
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
during storage activation.
for pools that don't get imported at boot (e.g. because their vdevs
are not available when zfs-import-*.service runs), it is fatal to
include them in the cachefile; for those that do get imported at
boot, this code should never run anyway, as they are already imported.
in any case, a fallback to import without cachefile is the safe variant.
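i.e., roughly (run_command as in PVE::Tools, pool name hypothetical):

    # import during activation without touching the default cachefile
    run_command(['zpool', 'import', '-d', '/dev/disk/by-id/',
        '-o', 'cachefile=none', $pool]);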
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>