LVMPlugin->volume_import (used by storage_migrate on either offline
migration with local disks, or online migration with storage-only
referenced disks) passed 'conv=sparse' to `dd`. This can lead to
data corruption if the target volume is not zero-initialized.
Dropping the sparse argument completely would fix the problem, but
would break keeping data sparse for LvmThinPlugin.
This patch moves the dd invocation out into a plugin-specific sub
for the LVM* plugins, so that each can control the parameters.
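Roughly, the idea looks like this (hook name and dd arguments are
illustrative, not the actual implementation):

    # Each plugin builds its own dd write command. Plain LVM must
    # write every block, since the target LV may contain stale data;
    # LvmThin can keep conv=sparse because thin LVs read back zeros.
    sub volume_import_write_command {    # hypothetical hook name
        my ($class, $path) = @_;
        return ['dd', "of=$path", 'bs=64k'];    # plain LVM: no conv=sparse
    }

    # The LvmThinPlugin override would keep the sparse conversion:
    #   return ['dd', "of=$path", 'conv=sparse', 'bs=64k'];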
Steps for reproducing the issue:
* create a cluster with (at least) 2 nodes A and B, with a free
disk device (/dev/sdx)
* write a recognizable pattern to /dev/sdx on B:
`dd if=/dev/zero bs=10M | tr '\000' '\255' | dd of=/dev/sdx bs=10M`
(would be grateful for alternatives to the dd | tr | dd)
* on both A and B create an LVM VG (pvcreate, vgcreate)
* add it as a _not_ shared storage, which is available on nodes A and B
* create a small guest on A
* fill a file in the guest with zeros:
`dd if=/dev/zero of=/zerofil bs=10M`
* stop the guest, migrate it to B
* start the guest - check that the file `/zerofil` contains `ad`
instead of `00` (the sparse copy skipped the all-zero blocks,
leaving the old pattern on the target LV in place)
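To inspect the pattern inside the guest, assuming `od` is available:
`od -An -tx1 /zerofil | head`
On a corrupted volume this prints rows of `ad` bytes instead of `00`.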
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
From `man 8 lvchange`:
--refresh
If the logical volume is active, reload its metadata. This is not
necessary in normal operation, but may be useful ... if you're doing
clustering manually without a clustered lock manager.
Fixes migration in a shared LVM (iSCSI) setup, where a disk gets resized on one
node A and the guest is afterwards migrated to another node B: B still presents
the old size to the guest, leading to data corruption.
It is necessary to run `lvchange` twice because the options `-ay` and
`--refresh` are mutually exclusive.
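In code, roughly (helper shown standalone; the real plugin code may
differ):

    use PVE::Tools qw(run_command);

    sub activate_and_refresh_lv {
        my ($vg, $lv) = @_;
        my $path = "/dev/$vg/$lv";
        # `-ay` and `--refresh` are mutually exclusive, hence two calls
        run_command(['/sbin/lvchange', '-aly', $path],
            errmsg => "can't activate LV '$path'");
        run_command(['/sbin/lvchange', '--refresh', $path],
            errmsg => "can't refresh LV '$path' for activation");
    }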
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Without volume_size_info a storage plugin falls back to the implementation
in PVE/Storage/Plugin.pm, which relies on `qemu-img info`.
`qemu-img info` returns wrong results on a node in the case of shared volume
groups (e.g. when sharing disks via iSCSI) if a disk was resized on another
node (it lseeks to the end of the block device, and this yields the old size).
Using `lvs` directly fixes the issue, since the LVM metadata gets updated when
invoking `lvs`.
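A sketch of the `lvs`-based size query (plugin integration omitted):

    use PVE::Tools qw(run_command);

    # Ask LVM itself for the size; this also refreshes the metadata
    # from the shared VG, unlike lseek-based probing.
    sub lv_size_bytes {
        my ($vg, $lv) = @_;
        my $size;
        run_command(
            ['/sbin/lvs', '--noheadings', '--units', 'b', '--nosuffix',
             '-o', 'lv_size', "$vg/$lv"],
            outfunc => sub {
                my ($line) = @_;
                $line =~ s/^\s+|\s+$//g;
                $size = int($line);
            },
        );
        return $size;
    }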
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
This is important for shared LVM storages. As with deletes and
creates of images, we may otherwise not have the up-to-date metadata,
and extents may get reused if another node created an image at
the same time, for example.
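One way to picture the kind of refresh involved, assuming the LV list
is re-read from the shared VG right before allocating (helper name
hypothetical):

    use PVE::Tools qw(run_command);

    # Re-read the LV names so a concurrent create on another node
    # is visible before we pick a name or extents.
    sub current_lv_names {
        my ($vg) = @_;
        my @names;
        run_command(['/sbin/lvs', '--noheadings', '-o', 'lv_name', $vg],
            outfunc => sub {
                my ($line) = @_;
                $line =~ s/^\s+|\s+$//g;
                push @names, $line if length($line);
            });
        return \@names;
    }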
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
lvm_find_free_diskname only checked for existing volumes starting with 'vm-',
and not with 'base-'.
Unify the implementation with that of the other plugins.
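A minimal sketch of the unified matching (self-contained; identifiers
are illustrative):

    # Find the lowest free disk number, treating both 'vm-' and
    # 'base-' volumes of the guest as taken.
    sub find_free_diskname {
        my ($vmid, $existing_names) = @_;
        my %used;
        for my $name (@$existing_names) {
            $used{$1} = 1 if $name =~ m/^(?:vm|base)-\Q$vmid\E-disk-(\d+)$/;
        }
        for my $i (1 .. 99) {
            return "vm-$vmid-disk-$i" if !$used{$i};
        }
        die "unable to allocate a new disk name for VM $vmid\n";
    }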
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Takes an operation, an optional requested bandwidth
limit override, and a list of storages involved in the
operation, and lowers the requested bandwidth to the global
and storage-specific limits unless the user has permission
to change those (a sketch of the logic follows the list).
This means:
* Global limits apply to all users without Sys.Modify on /
(as users with Sys.Modify can change datacenter.cfg options
via the API anyway).
* Storage-specific limits apply to users without
Datastore.Allocate access on /storage/X for any involved
storage X.
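A condensed sketch of the clamping logic (config values and permission
flags are passed in here; the real code queries RPCEnvironment and the
storage configuration):

    use List::Util qw(min);

    sub compute_bandwidth_limit {
        my ($requested, $global_limit, $storage_limits,
            $has_sys_modify, $has_allocate) = @_;

        # lower $limit to $configured, treating undef/0 as 'unlimited'
        my $clamp = sub {
            my ($limit, $configured) = @_;
            return $limit if !defined($configured);
            return $configured if !defined($limit) || $limit == 0;
            return min($limit, $configured);
        };

        my $limit = $requested;
        # global limit applies unless the user has Sys.Modify on /
        $limit = $clamp->($limit, $global_limit) if !$has_sys_modify;
        # per-storage limits apply unless the user has
        # Datastore.Allocate on /storage/X
        for my $storeid (sort keys %$storage_limits) {
            next if $has_allocate->{$storeid};
            $limit = $clamp->($limit, $storage_limits->{$storeid});
        }
        return $limit;
    }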
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This caused the web interface to sort alphabetically instead of numerically
when sorting by image size.
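Presumably the size ended up as a JSON string; forcing numeric context
is the usual Perl fix (field name assumed):

    use JSON;

    my $size = "10737418240";                    # as captured from command output
    print to_json({ size => $size }), "\n";      # {"size":"10737418240"}
    print to_json({ size => $size + 0 }), "\n";  # {"size":10737418240}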
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This makes no sense because it should always be exclusive.
Also, RBD checks this itself.
LVM has no possibility to use lvchange for this.
For DRBD this feature is not implemented.
It is better to check whether a VM is running in QemuServer than in
Storage; for the Storage there is no difference whether it is running
or not.
Signed-off-by: Wolfgang Link <w.link@proxmox.com>
Resize the LVM device (online or offline).
Return 1 so that qmp block_resize is used to update the size online in
the guest.
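A minimal sketch of such a resize hook, assuming lvextend is used
(extent rounding and error handling trimmed):

    use PVE::Tools qw(run_command);

    sub volume_resize_sketch {
        my ($vg, $lv, $size) = @_;    # $size in bytes
        run_command(['/sbin/lvextend', '-L', "${size}b", "$vg/$lv"],
            errmsg => "error resizing LV '$vg/$lv'");
        return 1;    # caller then issues qmp block_resize for the online update
    }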
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>