Commit Graph

11 Commits

Author SHA1 Message Date
eda88c94ed lvmplugin: add qcow2 snapshot
We format the LVM logical volume with qcow2 to handle the snapshot chain.

Like for a qcow2 file, when a snapshot is taken, the current LVM volume
is renamed to the snapshot volname, and a new current LVM volume is
created with the snapshot volname as its backing file.

The snapshot volname is similar to lvmthin: snap_${volname}_${snapname}.qcow2
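
For illustration, the flow is roughly the following (the helper name,
size handling and device paths below are simplified assumptions, not the
actual plugin code):

    use PVE::Tools qw(run_command);

    # sketch: rotate the qcow2 chain on top of LVM when taking a snapshot
    sub snapshot_qcow2_lv {
        my ($vg, $volname, $snapname, $size_kib) = @_;

        my $snap_lv = "snap_${volname}_${snapname}.qcow2";

        # rename the current LV; it becomes the read-only backing image
        run_command(['lvrename', $vg, $volname, $snap_lv]);

        # allocate a fresh LV as the new writable top of the chain
        run_command(['lvcreate', '-aly', '--size', "${size_kib}k", '--name', $volname, $vg]);

        # format it as qcow2 with the snapshot volume as backing file
        run_command([
            'qemu-img', 'create', '-f', 'qcow2',
            '-b', "/dev/$vg/$snap_lv", '-F', 'qcow2',
            "/dev/$vg/$volname",
        ]);
    }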

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
a8d8bdf9ef storage: add volume_support_qemu_snapshot
Returns whether the volume supports qemu snapshots:
 'internal' : do the snapshot with qemu internal snapshot
 'external' : do the snapshot with qemu external snapshot
  undef     : does not support qemu snapshot
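
A minimal caller-side sketch of how the result could be consumed (the
exact entry point and signature through PVE::Storage are assumptions
here):

    use PVE::Storage;

    sub choose_snapshot_mode {
        my ($storecfg, $volid) = @_;

        my $support = PVE::Storage::volume_support_qemu_snapshot($storecfg, $volid);

        if (!defined($support)) {
            die "volume '$volid' does not support qemu snapshots\n";
        } elsif ($support eq 'internal') {
            # take the snapshot with qemu's internal snapshot mechanism
            return 'internal';
        } else {
            # take the snapshot as an external backing-file chain on the storage
            return 'external';
        }
    }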

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
5f916079ea storage: add rename_snapshot method
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
bb21ba381d storage: volume_snapshot: add $running param
This adds a $running param to volume_snapshot,
which can be used if some extra actions need to be done at the storage
layer when the snapshot has already been taken at the qemu level.
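
For illustration, a caller that already took the snapshot at the qemu
level could then look roughly like this (the position of the new
parameter is an assumption based on the existing signature):

    use PVE::Storage;

    sub snapshot_storage_side {
        my ($storecfg, $volid, $snapname) = @_;

        # qemu already created the snapshot, so pass $running = 1 and let the
        # storage layer only do its remaining bookkeeping (position assumed)
        PVE::Storage::volume_snapshot($storecfg, $volid, $snapname, 1);
    }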

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
f296ffc4e4 fix #4997: lvm: create: disable autoactivation for new logical volumes
When discovering a new volume group (VG), for example on boot, LVM
triggers autoactivation. With the default settings, this activates all
logical volumes (LVs) in the VG. Activating an LV creates a
device-mapper device and a block device under /dev/mapper.

This is not necessarily problematic for local LVM VGs, but it is
problematic for VGs on top of a shared LUN used by multiple cluster
nodes (accessed via e.g. iSCSI/Fibre Channel/direct-attached SAS).

Concretely, in a cluster with a shared LVM VG where an LV is active on
nodes 1 and 2, deleting the LV on node 1 will not clean up the
device-mapper device on node 2. If an LV with the same name is
recreated later, the leftover device-mapper device will cause
activation of that LV on node 2 to fail with:

> device-mapper: create ioctl on [...] failed: Device or resource busy

Hence, certain combinations of guest removal (and thus LV removals)
and node reboots can cause guest creation or VM live migration (which
both entail LV activation) to fail with the above error message for
certain VMIDs, see bug #4997 for more information [1].

To avoid this issue in the future, disable autoactivation when
creating new LVs using the `--setautoactivation` flag. With this
setting, LVM autoactivation will not activate these LVs, and the
storage stack will take care of activating/deactivating the LV (only)
on the correct node when needed.
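
For illustration, the creation call then roughly becomes the following
(simplified sketch; the actual plugin passes further options):

    use PVE::Tools qw(run_command);

    sub create_lv_without_autoactivation {
        my ($vg, $name, $size_kib) = @_;

        # --setautoactivation n needs LVM >= 2.03.12; older versions ignore it
        run_command([
            'lvcreate', '-aly', '--setautoactivation', 'n',
            '--size', "${size_kib}k", '--name', $name, $vg,
        ]);
    }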

This additionally fixes an issue with multipath on FC/SAS-attached
LUNs where LVs would be activated too early after boot when multipath
is not yet available, see [3] for more details and current workaround.

The `--setautoactivation` flag was introduced with LVM 2.03.12 [2], so
it is available since Bookworm/PVE 8, which ships 2.03.16. Nodes with
older LVM versions ignore the flag and remove it on metadata updates,
which is why PVE 8 could not use the flag reliably, since there may
still be PVE 7 nodes in the cluster that reset it on metadata updates.

The flag is only set for newly created LVs, so LVs created before this
patch can still trigger #4997. To avoid this, users will be advised to
run a script to disable autoactivation for existing LVs.

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] https://gitlab.com/lvmteam/lvm2/-/blob/main/WHATS_NEW
[3] https://pve.proxmox.com/mediawiki/index.php?title=Multipath&oldid=12039#FC/SAS-specific_configuration

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250709141034.169726-2-f.weber@proxmox.com
2025-07-09 17:03:14 +02:00
5a66c27cc6 auto-format code using perltidy with Proxmox style guide
using the new top-level `make tidy` target, which calls perltidy via
our wrapper to enforce the desired style as closely as possible.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-06-11 10:03:21 +02:00
db5c50c079 config api/plugins: let plugins define sensitive properties themselves
Hard-coding a list of sensitive properties means that custom plugins
cannot define their own sensitive properties for the on_add/on_update
hooks.

Have plugins declare the list of their sensitive properties in the
plugin data. For backwards compatibility, return the previously
hard-coded list if no such declaration is present.
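
A hypothetical external plugin could then declare its sensitive
properties along these lines (the exact plugindata key name is an
assumption for illustration):

    package PVE::Storage::Custom::ExamplePlugin;

    use strict;
    use warnings;

    use base qw(PVE::Storage::Plugin);

    sub plugindata {
        return {
            # properties the API must treat as sensitive in on_add/on_update
            # (key name assumed for illustration)
            'sensitive-properties' => { password => 1, 'api-token' => 1 },
        };
    }

    1;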

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-6-f.ebner@proxmox.com
2025-04-06 20:57:40 +02:00
a1140d77d0 plugins: volume import: align size up to 1KiB
Previously, the size was rounded down, which, in the case of an image
with a non-1KiB-aligned size (only possible for external plugins or
manually created images), would lead to errors when attempting to write
beyond the end of the too small allocated target image.

For image allocation, the size is already rounded up to the
granularity of the storage. Do the same for import.
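
Conceptually the alignment is just (sketch, not the exact code):

    # round a byte size up to the next KiB boundary, e.g. 4097 bytes -> 5 KiB
    sub align_size_up_1k {
        my ($size_bytes) = @_;
        return int(($size_bytes + 1023) / 1024);
    }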

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-19 12:38:59 +01:00
5808b0bf3b lvmplugin: fix: activate|deactivate volume, add missing storeid param in path sub
The $storeid param is missing and $snapname is used as the third param.

This currently seems to work because $snapname is always empty for lvm.
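
Sketch of the fix, using the base plugin's path() signature
($class, $scfg, $volname, $storeid, $snapname):

    # before (buggy): $snapname lands in the $storeid slot
    my ($path) = $class->path($scfg, $volname, $snapname);

    # after: pass $storeid explicitly, $snapname as the fourth argument
    ($path) = $class->path($scfg, $volname, $storeid, $snapname);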

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2024-10-22 12:43:50 +02:00
aa82ad5c25 fix #3004: show progress of offline migration in task log
dd supports a 'status' flag, which enables it to show the copied bytes,
duration, and the transfer rate, which then get printed to stderr.
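
For illustration, the copy could be invoked roughly like this (sketch;
the actual migration code builds the command pipeline differently):

    use PVE::Tools qw(run_command);

    sub copy_volume_with_progress {
        my ($src, $dst) = @_;

        # 'status=progress' makes dd print copied bytes, elapsed time and
        # transfer rate to stderr, which ends up in the task log
        run_command(['dd', "if=$src", "of=$dst", 'bs=64k', 'status=progress']);
    }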

Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
2023-08-31 15:21:11 +02:00
a2242b41fc separate packaging and source build system
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-05-24 16:20:27 +02:00