based on the wipe_disks method from pve-manager's Ceph/Tools.pm with the
following main differences:
* use wipefs to wipe labels first (to avoid sgdisk complaining about the
backed up GPT structure on a subsequent GPT initialization)
* only take one device as an argument
* do not use an absolute path for 'dd'
* die if one of the commands fails
The wipefs command performs checks and complains about e.g. mounted or active
devices. One could supply --force to wipefs, but in many such situations it
does not work as expected, because the device would still be detected as in-use
afterwards, and further manual steps would be needed.
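Roughly, per device (a sketch only; run_command dies on failure by default,
and the dd block count is illustrative):

    # wipe labels first, then zero out the start of the device
    run_command(['wipefs', '--all', $devpath]);
    # 'dd' without an absolute path, relying on a properly set PATH
    run_command([
        'dd', 'if=/dev/zero', "of=${devpath}",
        'bs=1M', 'count=200', 'conv=fdatasync',
    ]);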
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Not replacing it with return, because the current behavior is dying:
Can't "next" outside a loop block
and the single existing caller in pve-manager's API2/Ceph/OSD.pm does not check
the return value.
Also check for $st, which can be undefined in case a non-existing path was
provided. This also led to dying previously:
Can't call method "mode" on an undefined value
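A sketch of the added guard (assuming the code already uses File::stat):

    my $st = File::stat::stat($path);
    die "unable to stat '$path'\n" if !$st; # undef for non-existing paths
    # only now is it safe to call methods like ->mode on $st
    my $mode = $st->mode;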
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
in case a disk with partitions also has an fstype set, which happens for our ZFS
boot disks. Do not change the behavior without include-partitions, as we
prefer(red) to be more specific than simply 'partitions' then.
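Roughly (a sketch only; the flag and variable names are assumptions):

    # with include-partitions, report the generic 'partitions' usage even
    # if the disk itself has an fstype set (as our ZFS boot disks do);
    # without the flag, keep preferring the more specific fstype
    if ($include_partitions && scalar(@partitions)) {
        $usage = 'partitions';
    } elsif ($fstype) {
        $usage = $fstype;
    }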
Reported in the enterprise support channel.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and have a parent key for partitions, to be able to see the associated disk in
the result without having to rely on naming heuristics (just adding a number at
the end doesn't work for NVMes).
The disk's usage will not be based on the partitions' usage if the flag is set,
but will simply be 'partitions'.
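Hypothetical shape of a result entry with the flag set (fields shortened):

    '/dev/nvme0n1' => {
        usage => 'partitions', # not derived from the partitions' usage
    },
    '/dev/nvme0n1p1' => {
        parent => '/dev/nvme0n1', # no naming heuristics needed
        used   => 'ZFS',
    },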
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
so it can be re-used for partitions.
Also changes the regular expression in get_ceph_volume_info to match the full
device/partition name the LV is on. Not only is this needed for partitions,
especially if there are multiple partitions with an OSD, but it also fixes
handling NVMe devices with an OSD as a side effect. Previously those were not
detected here, because of the digits in the name, e.g. /dev/nvme0n1.
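Illustrative only (not the exact pattern from the patch):

    # before: something like m|^/dev/([a-z]+)| stops at the first digit
    # and never matches /dev/nvme0n1 or a partition like /dev/sda2
    # after: capture the full device/partition name the LV sits on
    my ($dev) = $line =~ m|^/dev/(\S+)|;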
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Note that this is a slight behavior change, because now the first
partition's usage which is not simply 'partition' will become the disk's
usage. Previously, if any partition was 'mounted', it would become the disk's
usage, then 'LVM', 'ZFS', etc.
A partition's usage defaults to 'partition' if nothing more specific can be
found, and is never treated as unused for now.
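Schematically (sketch; the field names are assumptions):

    # the first partition with a usage more specific than plain
    # 'partition' now determines the disk's usage
    my $usage = 'partitions';
    for my $part (@partitions) {
        next if $part->{used} eq 'partition';
        $usage = $part->{used};
        last;
    }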
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
in preparation to also query the file system type from lsblk. Note that the
result now also includes devices without a parttype, so a definedness check in
get_devices_by_partuuid is needed. This will be useful when the whole device
contains a filesystem.
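The extended call could look like this (sketch; decode_json from the JSON
module):

    my $output = '';
    run_command(
        ['lsblk', '--json', '-o', 'path,parttype,fstype'],
        outfunc => sub { $output .= "$_[0]\n" },
    );
    my $devices = decode_json($output)->{blockdevices};
    # devices without a parttype are part of the result now, so
    # get_devices_by_partuuid has to check definedness first
    my @with_parttype = grep { defined($_->{parttype}) } @$devices;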
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
the compat symlink from bin to sbin has been dropped with bullseye, and
we rely on PATH being set properly in our daemons/CLI tools anyway.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
for some controllers/disks the line is
Percentage used endurance indicator: x%
so extend the regex for that possibility.
We even had a test-case for SAS but did not notice we could extract
that info from there...
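A pattern along these lines covers both variants (sketch, not the literal
patch):

    # matches 'Percentage used: x%' as well as
    # 'Percentage used endurance indicator: x%'
    if ($line =~ m/Percentage used(?: endurance indicator)?:\s*(\d+)%/i) {
        my $used_pct = $1;
    }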
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This replaces a locally maintained hardware map in
get_wear_leveling_info() with the commonly used register names from
smartmontools. Smartmontools maintains a labeled register database that
covers a majority of drives (including versions). The current lookup
produces false estimates; this approach hopefully provides more reliable
data.
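A sketch of a label-based lookup (the names below are real smartmontools
attribute labels, but the list and surrounding variables are illustrative):

    my @wearout_attributes = (
        'Media_Wearout_Indicator',
        'Wear_Leveling_Count',
        'Percent_Lifetime_Remain',
        'SSD_Life_Left',
    );
    for my $attr (@$attributes) { # parsed smartctl attribute table
        next if !grep { $_ eq $attr->{name} } @wearout_attributes;
        $wearout = $attr->{value};
        last;
    }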
Signed-off-by: Jan-Jonas Sämann <sprinterfreak@binary-kitchen.de>
when compiling the disk list, add a property with a stable
/dev/disk/by-id/ path for a block device, where available.
This is needed to create zpools with the stable by-id links
The /dev/disk/by-id/ directory can contain multiple links to the same device
(e.g. when it's used as a LVM PV, or one for the wwn/nvme-eui in addition
to the one with vendor and serial). We take the first one which matches
the bus where the disk is attached. For nvme disks we exclude the one
containing the nvme-eui.
The patch assumes that not all disks need to have such a link (e.g.
virtio-block devices as we pass them to guests).
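A sketch of the selection ($bus, $devpath and $by_id are assumptions;
Cwd::abs_path resolves the symlink):

    # $bus is e.g. 'ata', 'scsi' or 'nvme', matching the link prefix
    for my $link (sort glob("/dev/disk/by-id/${bus}-*")) {
        next if $link =~ m/nvme-eui\./; # prefer the vendor+serial link
        next if Cwd::abs_path($link) ne $devpath;
        $by_id = $link;
        last;
    }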
Additionally the tests were adapted to run successfully.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
with File::stat::stat to minimize variable declarations, and to allow
mocking this method in tests instead of the Perl built-in stat.
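In a test this allows, for example (sketch; $mocked_stat is hypothetical):

    my $module = Test::MockModule->new('File::stat');
    # return a canned stat result instead of hitting the filesystem
    $module->mock('stat', sub { return $mocked_stat; });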
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
the '.*' was greedy, also consuming all but one digit of the real percentage
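A worked example of the failure mode:

    # greedy '.*' backtracks only a single digit, capturing '3' of '93'
    'Percentage used: 93%' =~ m/.*(\d+)%/;  # $1 eq '3'
    'Percentage used: 93%' =~ m/.*?(\d+)%/; # $1 eq '93'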
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
switch to \s* instead of .*?, to prevent mis-interpreting potential
strings like '< 50%' or '0-50%'
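For example (label prefix shortened for the sketch):

    # '.*?' skips over the '<' and wrongly reports 50;
    # '\s*' simply does not match such lines
    'Percentage used: < 50%' =~ m/used:.*?(\d+)%/; # $1 eq '50'
    'Percentage used: < 50%' =~ m/used:\s*(\d+)%/; # no match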
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
we can only do this here, since the ceph cluster is not aware of
osd encryption, only the local node is (via ceph-volume and lv tags)
this way, we are able to show an 'encrypted' flag in the disk GUI at least
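Schematically (sketch; ceph-volume stores the flag as an LV tag):

    # ceph-volume records 'ceph.encrypted=1' in the LV tags of
    # encrypted OSDs; surface that as a flag for the GUI
    $osd->{encrypted} = $lv_tags =~ m/ceph\.encrypted=1/ ? 1 : 0;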
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
previously ceph included a udev rule to populate
/dev/disk/by-parttypeuuid/
but not anymore, so we now use 'lsblk --json -o path,parttype' to
get a mapping between parttype uuid and partition
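The mapping can be built roughly like this (sketch; run_command and
decode_json as used elsewhere in the module):

    my $output = '';
    run_command(
        ['lsblk', '--json', '-o', 'path,parttype'],
        outfunc => sub { $output .= "$_[0]\n" },
    );
    my $map = {};
    for my $dev (@{decode_json($output)->{blockdevices}}) {
        next if !defined($dev->{parttype});
        # stand-in for the dropped /dev/disk/by-parttypeuuid/ links
        push @{$map->{$dev->{parttype}}}, $dev->{path};
    }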
fix the test by simulating empty lsblk output
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we will use this for adding a partition to a disk when using a device
for ceph osd db/wal which already has partitions on it
first we search for the highest partition number, then add the partition
and search for the resulting device (we cannot simply append the
number; e.g. for /dev/nvme0n1 we get /dev/nvme0n1pX)
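A sketch of the flow ($devname, $devpath and $size are assumptions; the
sgdisk invocation is illustrative):

    # find the highest existing partition number via sysfs
    my $max = 0;
    for my $entry (glob("/sys/block/${devname}/${devname}*")) {
        $max = $1 if $entry =~ m/(\d+)$/ && $1 > $max;
    }
    my $newnum = $max + 1;
    run_command(['sgdisk', '-n', "${newnum}:0:+${size}M", $devpath]);
    # then look the new partition up by its number, since the node can
    # be /dev/nvme0n1p2 rather than /dev/nvme0n12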
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we now expect the first parameter to be either a string with a single
disk, or an array ref with a list of disks
this way we can get the info for multiple disks simultaneously, without
iterating over all disks
this will be used to get the info for osd/db/wal disk
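A sketch of the normalization (gather_disk_info is a hypothetical helper):

    my ($disks) = @_;
    # accept both a single disk (string) and a list of disks (array ref)
    $disks = [ $disks ] if ref($disks) ne 'ARRAY';
    my $result = {};
    # hypothetical helper standing in for the actual info gathering
    $result->{$_} = gather_disk_info($_) for @$disks;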
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Less reading, and a dedicated name for the variable helps to grasp
more quickly what it should contain
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
ceph-volume creates osds/journal/etc. on LVM instead of partitions,
so to detect them, we have to parse the lv_tags of the LVs and
match them with the underlying device
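Schematically ('lvs' and its column names are real; the parsing is
simplified):

    my $osds = {};
    run_command(
        ['lvs', '--noheadings', '-o', 'lv_tags,devices'],
        outfunc => sub {
            my ($line) = @_;
            return if $line !~ m/ceph\.osd_id=(\d+)/;
            my $osd_id = $1;
            # the devices column looks like '/dev/sda(0)'
            my ($dev) = $line =~ m|(/dev/[^(\s]+)\(|;
            $osds->{$dev} = $osd_id if defined($dev);
        },
    );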
also add tests for this detection
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since zfsutils is not a hard dependency of our stack, it is possible to not
have `zpool` available.
Checking for the existence of `zpool` before calling it suppresses spurious warnings
in the logs (e.g. when creating Ceph OSDs or accessing the 'Disk' Tab in the
GUI).
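The guard itself is short (path illustrative):

    my $zpool = '/sbin/zpool';
    # zfsutils might not be installed at all, so skip quietly then
    return {} if !-x $zpool;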
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
the test would read the real device, and if one is an iSCSI device it would
fail; move the test code to a sub and mock it in the tests
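With the code in its own sub, a test can then replace it (the sub name is
purely hypothetical):

    my $module = Test::MockModule->new('PVE::Diskmanage');
    # pretend the device is readable without touching real hardware
    $module->mock('check_device_readable', sub { return 1 }); # hypothetical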
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
`nvmeX` device nodes are apparently allocated independently
from their namespace block devices `nvmeXnY` and therefore
they are not strictly related by name. For instance:
$ readlink /sys/block/nvme0n1/device
../../nvme1
$ readlink /sys/block/nvme1n1/device
../../nvme0
Here /dev/nvme0n1 is the first namespace of /dev/nvme1 while
/dev/nvme1n1 is the first namespace of /dev/nvme0.
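So the controller has to be resolved via sysfs instead of being derived
from the name (sketch):

    # map a namespace block device to its actual controller
    my $ctrl = readlink("/sys/block/$dev/device"); # e.g. '../../nvme1'
    $ctrl =~ s|^.*/||; # 'nvme1', not necessarily $dev's name prefix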
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>