There are cases where autoactivation can fail, as reported in the
community forum [0], and a volume could also be deactivated by
something outside of our control.
It doesn't seem strictly necessary to activate the thin pool itself
(creating/removing/activating LVs within the pool still works while it
is not active), but usage information is not reported as long as
neither the pool nor any of its LVs is active. Activate the pool so
that this information is available and the flag can be used in
status(); failing to activate should also serve as a good indicator
that there is a problem with the pool.
Before activating, check the (cached) lv_state from lvm_list_volumes.
It's necessary to update the cache in activate_storage, because the
flag is re-used in status(). Also update it for other (de)activations
to be more future-proof.
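Roughly, this amounts to the following on the CLI (a sketch only, vg
and pool names are placeholders, not the actual plugin code):

    # usage information (data_percent) only shows up once something is active
    lvs --noheadings -o lv_name,lv_active,data_percent <vg>
    # activate the thin pool itself; if this fails, the pool has a problem
    lvchange -ay <vg>/<thinpool>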
[0]: https://forum.proxmox.com/threads/local-lvm-not-available-after-kernel-update-on-pve-7.97406
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Rename functionality has been added for the following storage types:
* directory ones, based on the default implementation:
* directory
* NFS
* CIFS
* gluster
* ZFS
* (thin) LVM
* Ceph
A new feature `rename` has been introduced to mark which storage
plugins support it.
The storage API version and age have been bumped.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
the intention of this feature is to support the following use-cases:
- reassign a volume from one owning guest to another (which usually
entails a rename, since the owning vmid is encoded in the volume name)
- rename a volume (e.g., to use a more meaningful name instead of the
auto-assigned ...-disk-123)
only the former is implemented on the caller side in
qemu-server/pve-container for now, but since the lower-level feature is
basically the same for both, we can take advantage of the storage plugin
API bump now to get the building block for the latter in place already.
adapted ApiChangelog to fix conflicts and added more detail above
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We allow snapshot names that match pve-configid, but so far `qm destroy`
has not removed all snapshots matching pve-configid. For example, the
name x-y was allowed, but the resulting snap_vm-105-disk-0_x-y was not
removed.
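As an illustration (vg, vmid and snapshot name are examples), such a
leftover can be spotted on an LVM-thin storage with:

    # after 'qm destroy 105', the snapshot LV is still there:
    lvs --noheadings -o lv_name <vg> | grep snap_vm-105-
    #   snap_vm-105-disk-0_x-y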
Reported-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
We can use 'list_images' to get the desired volume IDs in
'find_free_diskname' for most plugins. For the two LVM plugins,
'list_images' potentially skips untagged volumes, and for the RBD
plugin it is much more costly than the custom version, so those
plugins keep their custom implementations.
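For illustration only (storage name and vmid are examples, not part of
the patch), the volume IDs such a lookup operates on are essentially
what the following shows:

    # volumes currently owned by VM 105 on storage 'local-lvm'
    pvesm list local-lvm --vmid 105
    # the next free disk name is the lowest vm-105-disk-<N> not in this list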
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
LVMPlugin->volume_import (used by storage_migrate on either offline
migration with local disks, or online migration with storage-only
referenced disks) passed 'conv=sparse' to `dd`. This can lead to data
corruption if the target volume is not zero-initialized.
Dropping the sparse argument completely would fix the problem, but
would break keeping data sparse for LvmThinPlugin.
This patch moves the dd invocation out into an (LVM*) plugin-specific
sub so that each plugin can control the parameters.
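To sketch the intended difference (paths and block size are
placeholders, not the actual plugin code):

    # plain LVM target: write every block so stale data on the LV cannot shine through
    dd if=<exported stream> of=/dev/<vg>/vm-105-disk-0 bs=64k
    # LVM-thin target: zero runs can stay unallocated, so conv=sparse is safe there
    dd if=<exported stream> of=/dev/<vg>/vm-105-disk-0 bs=64k conv=sparse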
Steps for reproducing the issue:
* create a cluster with (at least) 2 nodes A and B, with a free
disk-device (/dev/sdx)
* write a recognizable pattern to /dev/sdx on B:
`dd if=/dev/zero bs=10M | tr '\000' '\255' | dd of=/dev/sdx bs=10M`
(would be grateful for alternatives to the dd | tr | dd)
* on both A and B create an lvm-vg (pvcreate, vgcreate)
* add it as _not_ shared storage, which is available on nodes A and B
* create a small guest on A
* fill a file in the guest with zeros
`dd if=/dev/zero of=/zerofil bs=10M`
* stop the guest, migrate it to B
* start the guest - check that the file `/zerofil` contains `ad`
instead of `00`
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
if no vg is given, return all thinpools from all vgs
if verbose is 1, also return information about the thinpools
(like size and free space)
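conceptually this boils down to an lvs query along these lines (a
rough sketch, not the exact invocation used by the helper):

    # thin pools are the LVs whose lv_attr starts with 't';
    # size and usage are only needed for the verbose case
    lvs --noheadings --units b -o lv_attr,vg_name,lv_name,lv_size,data_percent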
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Takes an operation, an optional requested bandwidth limit override,
and a list of storages involved in the operation, and lowers the
requested bandwidth to the applicable global and storage-specific
limits unless the user has permission to change those.
This means:
* Global limits apply to all users without Sys.Modify on /
  (users with Sys.Modify can change datacenter.cfg options via the
  API anyway).
* Storage-specific limits apply to users without Datastore.Allocate
  access on /storage/X for any involved storage X.
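For context, the limits this gets compared against can be configured
roughly as follows (option names and values in KiB/s are examples,
not part of this patch):

    # global per-operation limits in datacenter.cfg
    pvesh set /cluster/options --bwlimit migration=51200,restore=102400
    # per-storage override in storage.cfg
    pvesm set local-lvm --bwlimit migration=25600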
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
create_base() uses '-ky' to prevent base images from being
activated by default, similar to snapshots. This means we
need to activate them like snapshots with the '-K' option.
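On the LVM level this corresponds to something like the following
(volume name is a placeholder):

    # '-ky' ('--setactivationskip y') marks the LV to be skipped by normal
    # activation, so it has to be activated explicitly with '-K'
    # ('--ignoreactivationskip'):
    lvchange -ay -K <vg>/base-105-disk-0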
this patch adds an lvmthin scan to the api, so that we can get a list
of thinpools for a specific vg via an api call
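for example (node and vg names are placeholders), the new endpoint
could be queried like this:

    # list the thinpools of VG 'pve' on node 'mynode'
    pvesh get /nodes/mynode/scan/lvmthin --vg pve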
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>