we will use this to add a partition to a disk when using a device
for the ceph osd db/wal which already has partitions on it
first we search for the highest partition number, then add the partition
and search for the resulting device (we cannot simply append the
number to the device name, e.g. from /dev/nvme0n1 we get /dev/nvme0n1pX)
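as a rough sketch, the logic boils down to something like this (variable
names are made up and the sgdisk usage is simplified, not the actual code):

use strict;
use warnings;

my $dev = '/dev/nvme0n1';
(my $devname = $dev) =~ s|^/dev/||;

# find the highest existing partition number via /sys/block/<dev>/
my $maxnum = 0;
for my $entry (glob("/sys/block/$devname/$devname*")) {
    $maxnum = $1 if $entry =~ m/(\d+)$/ && $1 > $maxnum;
}
my $newnum = $maxnum + 1;

# create the new partition with an explicit number (sgdisk -n partnum:start:end)
system('sgdisk', "-n${newnum}:0:+1G", $dev) == 0
    or die "unable to create partition on $dev\n";

# (in practice one would wait for udev to settle here)
# search for the resulting device instead of guessing its name, since
# e.g. /dev/nvme0n1 yields a 'p' infix while /dev/sda does not
my $newpart;
for my $entry (glob("/sys/block/$devname/$devname*")) {
    if ($entry =~ m/\D$newnum$/) {
        ($newpart = $entry) =~ s|^.*/|/dev/|;
        last;
    }
}
die "unable to find the newly created partition\n" if !defined($newpart);
print "created $newpart\n";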
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we now expect the first parameter to be either a string with a single
disk, or an array ref with a list of disks
this way we can get the info of multiple disks in a single call while
not iterating over all disks
this will be used to get the info for the osd/db/wal disks
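roughly, the normalization happens at the top of the function (the function
name here is just for illustration):

sub get_disk_info {
    my ($disks) = @_;

    # accept a single disk name as well as an array ref with several of them
    $disks = [ $disks ] if ref($disks) ne 'ARRAY';

    my $info = {};
    for my $disk (@$disks) {
        # gather the data for exactly these disks, without walking all of /sys/block
        $info->{$disk} = {};    # placeholder for the collected data
    }
    return $info;
}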
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Less to read, and a dedicated name for the variable helps to grasp
more quickly what it should contain
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
ceph-volume creates osds/journals/etc. on LVM instead of on partitions,
so to detect them we have to parse the lv_tags of the LVs and
match them with the underlying device
also add tests for this detection
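the detection itself boils down to something like this (the exact lvs
invocation and tag handling here are assumptions, not the actual code):

use strict;
use warnings;

my $osd_by_device = {};

open(my $fh, '-|', 'lvs', '--noheadings', '--readonly', '--separator', ';',
    '-o', 'lv_name,vg_name,devices,lv_tags') or die "unable to run lvs: $!\n";

while (my $line = <$fh>) {
    $line =~ s/^\s+|\s+$//g;
    my ($lv, $vg, $devices, $tags) = split(';', $line, 4);

    # ceph-volume stores its metadata as LV tags, e.g. ceph.osd_id=<N>, ceph.type=block
    next if !defined($tags) || $tags !~ m/ceph\.osd_id=(\d+)/;
    my $osdid = $1;

    # the devices field looks like '/dev/sdb(0)', strip the extent offset
    for my $dev (split(',', $devices // '')) {
        $dev =~ s/\(\d+\)$//;
        $osd_by_device->{$dev} = $osdid;
    }
}
close($fh);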
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since zfsutils are not a hard dependency of our stack it is possible to not have
`zpool` available.
Checking for the existence of `zpool` before calling it suppresses spurious warnings
in the logs (e.g. when creating Ceph OSDs or accessing the 'Disk' Tab in the
GUI).
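A minimal sketch of the guard (path and surrounding code are placeholders):

my $ZPOOL = '/sbin/zpool';

sub get_zfs_devices {
    my $devices = {};

    # zfsutils is not a hard dependency, so the binary may simply not be installed
    return $devices if !-x $ZPOOL;

    # ... run zpool and collect the member devices as before ...
    return $devices;
}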
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
the test would read the real device, and if one is an iscsi device
it would fail; move the check code into a sub and mock it in the tests
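in the test the new sub can then be replaced, roughly like this (the sub
name is made up):

use Test::MockModule;

my $diskmanage_module = Test::MockModule->new('PVE::Diskmanage');

# pretend the device check passed without touching any real block device
$diskmanage_module->mock(check_disk_sketch => sub { return 1; });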
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
`nvmeX` device nodes are apparently allocated independently
from their namespace block devices `nvmeXnY` and therefore
they are not strictly related by name. For instance:
$ readlink /sys/block/nvme0n1/device
../../nvme1
$ readlink /sys/block/nvme1n1/device
../../nvme0
Here /dev/nvme0n1 is the first namespace of /dev/nvme1 while
/dev/nvme1n1 is the first namespace of /dev/nvme0.
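so the controller has to be resolved via sysfs instead of being derived
from the block device name, roughly:

my $dev = 'nvme0n1';

# /sys/block/<blockdev>/device points at the owning controller,
# which may have a different index than the namespace block device
my $link = readlink("/sys/block/$dev/device") // '';
my ($controller) = $link =~ m|([^/]+)$|;    # e.g. 'nvme1' in the example above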
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
to have a clear method name for this. check_XYZ also suggests that we
return true if the check was OK, but we don't.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
in get_disks, when called with a parameter like 'cciss/cXdY', we replaced
the '/' with '!' so that we can properly poll the information
about it from /sys/block/
but we have to replace the '!' with '/' again in our result list,
because the caller does not know anything about this mangling and would fail,
since the original dev is not in the list
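sketched with illustrative variable names:

# for the /sys/block lookup: cciss/c0d0 -> cciss!c0d0
(my $sysname = $dev) =~ s|/|!|g;

# ... collect the information from /sys/block/$sysname ...

# for the result list handed back to the caller: cciss!c0d0 -> cciss/c0d0
(my $resultname = $sysname) =~ s|!|/|g;
$disklist->{$resultname} = $data;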
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this patch adds information about bluestore/db/wal to the disk list,
and we only set the journal count when we have at least one journal on
the disk
also adapt the regression tests
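for example, instead of unconditionally setting the count (the field names
here are only illustrative):

$disklist->{$dev}->{journals} = $journal_count if $journal_count;
$disklist->{$dev}->{bluestore} = 1 if $found_bluestore;
$disklist->{$dev}->{db} = $db_count if $db_count;
$disklist->{$dev}->{wal} = $wal_count if $wal_count;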
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
there was still a point where we got the wrong string:
on createosd we get the devpath (/dev/cciss/c0d0),
but need the info from get_disks, which looks in /sys/block,
where the name needs to be cciss!c0d0
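so the devpath has to be mangled into the /sys/block name before the lookup, e.g.:

my $devpath = '/dev/cciss/c0d0';

# strip /dev/ and replace '/' with '!', as /sys/block does
(my $sysname = $devpath) =~ s|^/dev/||;
$sysname =~ s|/|!|g;    # -> cciss!c0d0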
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we want this because the model in /sys/block/<device>/device/model
is limited to 16 characters
and since the model is not always in the udevadm output (nvme),
also read the model from the model file as a fallback
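roughly (assuming the udev properties were already parsed into a hash and
ID_MODEL is the relevant one):

# prefer the model reported by udev, which is not truncated
my $model = $udev_properties->{ID_MODEL};

if (!defined($model) || $model eq '') {
    # nvme devices often lack ID_MODEL in the udevadm output, so fall
    # back to the (16-character-limited) sysfs model file
    if (open(my $fh, '<', "/sys/block/$dev/device/model")) {
        $model = <$fh>;
        close($fh);
        if (defined($model)) {
            chomp $model;
            $model =~ s/^\s+|\s+$//g;
        }
    }
}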
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since we iterate over the entries in /sys/block,
it makes sense to use this path
this should fix #1099,
because udevadm does not accept
-n cciss!c0d0 (it only looks in /dev for this)
but does accept
-p /sys/block/cciss!c0d0
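i.e. query udev by syspath instead of by device node:

# old: -n cciss!c0d0        -> fails, udevadm only looks below /dev for -n
# new: use the syspath we iterate over anyway
my $cmd = ['udevadm', 'info', '-p', "/sys/block/$sysname", '--query', 'all'];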
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
refactored the wear level parsing into its own function,
where we can now define a vendor <-> attribute id mapping
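the mapping can then look like this (the vendors and attribute ids listed
here are only examples):

# vendor (matched against the model string) -> SMART attribute id holding the wearout value
my $wearout_attributes = {
    'intel'    => 233,    # Media_Wearout_Indicator
    'samsung'  => 177,    # Wear_Leveling_Count
    'kingston' => 231,    # SSD_Life_Left
};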
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of parsing the output of smart in two places,
give get_smart_data a flag for when we only want the health
this fixes a bug (not on the bugtracker) where
an ssd with disabled smart had an empty string as health
in the gui
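a sketch of the flag (not the full parser):

sub get_smart_data {
    my ($devpath, $healthonly) = @_;

    my $smartdata = {};
    # ... run smartctl and parse health, type and attributes into $smartdata ...

    # never hand back an empty health string, e.g. when smart is disabled on the device
    $smartdata->{health} = 'UNKNOWN' if !$smartdata->{health};

    return { health => $smartdata->{health} } if $healthonly;
    return $smartdata;
}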
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the smart checks are only needed for the API call(s) that
list all disks and their status, but get_disks is also used
in disk usage checks and in the Ceph code, where the smart
status is completely irrelevant.
drop the implicit skipping of smart checks if $disk is set,
since we have an explicit parameter for this now.
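call sites that do not care about smart can then request skipping it
explicitly (the parameter order is an assumption):

# e.g. in the disk usage / ceph related code paths
my $disklist = PVE::Diskmanage::get_disks($disk, 1);    # 1 = skip the smart checks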
we never ever want to die in get_disks because of a
single disk, but the nodes/xyz/disks/smart API path is
allowed to fail if a disk device is unsupported by smartctl
or something else goes wrong.
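so inside get_disks the smart query gets wrapped, roughly:

# a failing smartctl call for a single disk must not break the whole listing
my $health = 'UNKNOWN';
eval {
    my $smartdata = get_smart_data($devpath, 1);
    $health = $smartdata->{health} if $smartdata->{health};
};
warn $@ if $@;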
since smartctl uses its return value to encode the
disk health status (such as a failure in the past),
we cannot die there, but have to parse the return code
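smartctl documents its exit status as a bit mask, so only the low bits are
treated as hard errors; sketched:

my $returncode = system('smartctl', '-H', $devpath);
die "failed to run smartctl\n" if $returncode == -1;
$returncode >>= 8;

# bit 0: command line parse error, bit 1: device open failed, bit 2: smart command failed;
# bits 3-7 only encode health findings (e.g. failure in the past) and must not be fatal
die "smartctl failed with return code $returncode\n" if $returncode & 0b111;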
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this adds the functions for listing the disks (mostly copied from
the ceph code), checking if a disk is a valid blockdevice, checking if it
is used (in a zfs pool / as an lvm pv), and an init function (just to add a gpt header;
this is important if one wants to use a fresh disk for ceph journals)
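the init part boils down to writing a gpt header onto the blank disk, for
example with sgdisk (the exact invocation is an assumption):

sub init_disk_sketch {
    my ($devpath) = @_;

    # setting a (R)andom disk GUID makes sgdisk write a fresh gpt header to the device
    system('sgdisk', $devpath, '-U', 'R') == 0
        or die "writing gpt header to $devpath failed\n";
}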
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>