since we allow 'vm-<ID>-<whatever>' names when allocating images,
we should also include those when listing them.
note: '@' is reserved for snapshots in ceph, so it is safe to
skip lines including an '@' in the image name.
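a rough perl sketch of the filter this implies (not the actual
plugin code; @rbd_ls_lines just stands in for the output of a
plain 'rbd ls'):

    my @images;
    for my $line (@rbd_ls_lines) {
        chomp $line;
        next if $line =~ /\@/;           # '@' is reserved for snapshots
        next if $line !~ /^vm-(\d+)-/;   # keep any vm-<ID>-whatever name
        push @images, $line;
    }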
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
with more than a few images, 'rbd ls -l' gets rather slow
compared to a simple 'rbd ls'. since we only need to check
existing image names for finding a free one, the latter is
sufficient.
example with ~400 rbd images:
$ time rbd ls -p ceph-vm > /dev/null
real 0m0.027s
user 0m0.012s
sys 0m0.008s
$ time rbd ls -l -p ceph-vm > /dev/null
real 0m5.250s
user 0m1.632s
sys 0m0.584s
a linked clone of two disks on the same setup accordingly
also shows a massive speedup:
$ time qm clone 1000 10000 -snap test
create linked clone of drive scsi0 (ceph-vm:vm-1000-disk-2)
clone vm-1000-disk-2: vm-1000-disk-2 snapname test to
vm-10000-disk-1
create linked clone of drive scsi1 (ceph-vm:vm-1000-disk-1)
clone vm-1000-disk-1: vm-1000-disk-1 snapname test to
vm-10000-disk-2
real 0m11.157s
user 0m3.752s
sys 0m1.308s
$ time qm clone 1000 10000 -snap test
create linked clone of drive scsi1 (ceph-vm:vm-1000-disk-1)
clone vm-1000-disk-1: vm-1000-disk-1 snapname test to
vm-10000-disk-1
create linked clone of drive scsi0 (ceph-vm:vm-1000-disk-2)
clone vm-1000-disk-2: vm-1000-disk-2 snapname test to
vm-10000-disk-2
real 0m0.872s
user 0m0.652s
sys 0m0.096s
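for reference, a minimal perl sketch of why plain names are enough
here (not the actual plugin code; $vmid and @existing_names are
assumed inputs):

    my %used = map { $_ => 1 } @existing_names;  # names from 'rbd ls'
    my $name;
    for my $i (1..100) {
        my $tn = "vm-$vmid-disk-$i";
        if (!$used{$tn}) {
            $name = $tn;
            last;
        }
    }
    die "unable to find a free disk name for VM $vmid\n" if !defined($name);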
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
With krbd we resize the volume and, by returning undef, tell
QemuServer to notify the running QEMU with a zero $size.
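a minimal sketch of the idea (the signature and the non-krbd return
value are assumptions, not the exact plugin code):

    sub volume_resize {
        my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
        # ... resize the rbd image here ...
        # undef size -> QemuServer notifies the running QEMU with 0
        return undef if $scfg->{krbd};
        return 1;
    }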
Signed-off-by: Dmitry Petuhov <mityapetuhov@gmail.com>
there was still a code path where we got the wrong string:
on createosd we get the devpath (/dev/cciss/c0d0),
but need the info from get_disks, which looks in /sys/block,
where the name needs to be cciss!c0d0
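a minimal sketch of the conversion (the real helper may differ):

    my $sysname = $devpath;   # e.g. /dev/cciss/c0d0
    $sysname =~ s|^/dev/||;
    $sysname =~ s|/|!|g;      # cciss!c0d0, as used under /sys/block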
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
otherwise there are situations where snapshots are left
behind for already sent volumes. also include more warnings.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
this works around a bug where qemu does not align the qcow2 file
when using the filesystem directly, and the gluster block driver
then refuses to read from it
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the old code was way too broad here, this fixes at least the
following issues:
- importing of other/unconfigured zpools by "import -a"
- possible false positives if a pool name is a substring of
another pool name because of "list" without pool name,
potentially skipping activation for such pools
- not noticing failure to activate in activate_storage
because the success of "zpool import -a" does not tell us
anything about the pool we actually wanted to import
checking specifically for the pool to be activated when
calling "zpool list" gets rid of the second issue, and
trying to import only that pool fixes the other two.
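a minimal sketch of the targeted activation (not the actual
ZFSPoolPlugin code; $pool is the pool name to activate):

    if (system("zpool list -H -o name \Q$pool\E >/dev/null 2>&1") != 0) {
        # import only this pool; a failure here really means the pool
        # we care about could not be activated
        system("zpool import \Q$pool\E") == 0
            or die "zpool import of '$pool' failed\n";
    }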
we want this because the model in /sys/block/<device>/device/model
is limited to 16 characters.
since the model is not always in the udevadm output (nvme),
also read the model from the model file as a fallback
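a minimal sketch of the fallback (not the actual Diskmanage code;
$udev_model and $devname are assumed inputs):

    my $model = $udev_model;   # from udevadm, may be missing for nvme
    if (!defined($model) || $model eq '') {
        if (open(my $fh, '<', "/sys/block/$devname/device/model")) {
            $model = <$fh>;
            close($fh);
            chomp $model if defined($model);
        }
    }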
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since we iterate over the entries in /sys/block,
it makes sense to use this path.
this should fix #1099, because udevadm does not take
-n cciss!c0d0 (it only looks in /dev for this), but does take
-p /sys/block/cciss!c0d0
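for illustration, the kind of call this makes possible (the exact
query options used in the code may differ):

$ udevadm info -p /sys/block/cciss!c0d0 --query=all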
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
without this, having an efidisk on a ceph storage
prevents creating another disk on the same ceph storage,
because the efidisk volume will not be detected
and we try to allocate one with the same name
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
refactored the wear level parsing into its
own function, where we can now define a
vendor <-> attribute id
mapping
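a rough sketch of such a mapping; the vendor names and attribute
ids below are illustrative examples, not the exact table used in
the code:

    my $wearout_attributes = {
        'intel'   => 233,   # Media_Wearout_Indicator
        'samsung' => 177,   # Wear_Leveling_Count
        'sandisk' => 233,
    };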
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of parsing the smart output in two places,
give get_smart_data a flag indicating that we only want the
health status.
this fixes a bug (not on the bugtracker) where an ssd with
disabled smart showed an empty string as health in the gui
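the intended call shape, roughly (the flag's actual name may
differ):

    my $health    = get_smart_data($disk, 1);  # health only
    my $smartdata = get_smart_data($disk);     # full attribute data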
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the smart checks are only needed for the API call(s) that
list all disks and their status, but get_disks is also used
in disk usage checks and in the Ceph code, where the smart
status is completely irrelevant.
drop the implicit skipping of smart checks if $disk is set,
since we have an explicit parameter for this now.
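roughly, the intended call sites look like this (parameter names
are illustrative, not necessarily the exact ones in Diskmanage.pm):

    my $all_disks = get_disks(undef, 0);     # API: include smart status
    my $disk_info = get_disks($devpath, 1);  # usage / ceph checks: skip smart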