so that there is better code locality and we also avoid forgetting to
adapt the check for each specific draid-config parameter if a new one
gets added or an existing one is changed.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It is possible to set the number of spares and the size of
data stripes via the draidspares & draiddata parameters.
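For illustration only (function and argument names below are made up,
not the actual API fields), the requested values roughly translate into
the dRAID vdev specifier passed to zpool create:

    # sketch: build an OpenZFS dRAID vdev specifier, e.g. "draid2:4d:1s:8c";
    # the d/s/c suffixes mark data disks per redundancy group, distributed
    # spares and total children (see the zpool documentation for details)
    def draid_spec(parity: int, data: int, spares: int, children: int) -> str:
        return f"draid{parity}:{data}d:{spares}s:{children}c"

    # usage idea: zpool create tank draid2:4d:1s:8c /dev/disk/by-id/...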
Signed-off-by: Stefan Hrdlicka <s.hrdlicka@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
One of the smaller annoyances, especially for less experienced users, is
the fact that when creating a local storage (ZFS, LVM (thin), dir) in a
cluster, one can only leave the "Add Storage" option enabled the first
time.
On any following node, this option had to be disabled and the new
node manually added to the list of nodes for that storage.
This patch changes the behavior. If a storage of the same name already
exists, it will verify that necessary parameters match the already
existing one.
Then, if the 'nodes' parameter is set, it adds the current node and
updates the storage config.
If there is no nodes list, nothing else needs to be done, and the GUI
will stop showing the question mark for the local storage that was
configured but, until then, did not exist on that node.
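For example, with a hypothetical storage restricted to node 'pve1',
enabling "Add Storage" again while creating it on 'pve2' now simply
extends the nodes list in /etc/pve/storage.cfg (illustrative entry,
names made up):

    zfspool: local-zfs-data
            pool tank
            content images,rootdir
            nodes pve1,pve2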
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
If a storage of that type and name already exists (LVM, zpool, ...) but
we do not have a Proxmox VE Storage config for it, it is possible that
the creation will fail midway due to checks done by the underlying
storage layer itself. This in turn can lead to disks that are already
partitioned. Users would need to clean this up themselves.
By adding checks early on, not only checking against the PVE storage
config, but against the actual storage type itself, we can die early
enough, before we touch any disk.
For ZFS, the logic to gather pool data is moved into its own function so
it can be called from both the index API endpoint and the check in the
create endpoint.
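A minimal sketch of the kind of early check (commands and names are
illustrative, not the actual PVE code):

    import subprocess

    def zpool_exists(name: str) -> bool:
        # list existing pool names, one per line, without headers
        out = subprocess.run(['zpool', 'list', '-H', '-o', 'name'],
                             capture_output=True, text=True, check=True).stdout
        return name in out.splitlines()

    def vg_exists(name: str) -> bool:
        # list existing LVM volume group names
        out = subprocess.run(['vgs', '--noheadings', '-o', 'vg_name'],
                             capture_output=True, text=True, check=True).stdout
        return name in (line.strip() for line in out.splitlines())

    # in the create endpoint, die before any disk is touched, e.g.:
    # if zpool_exists(storeid): raise RuntimeError(f"pool '{storeid}' exists")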
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Update node restrictions to reflect that the storage is not available
anymore on the particular node. If the storage was only configured for
that node, remove it altogether.
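Sketched (not the actual section config handling code, just the idea):

    def update_on_node_removal(storage_cfg: dict, storeid: str, node: str) -> None:
        # sketch: storage_cfg maps storage IDs to their properties; 'nodes'
        # is a set of node names, or missing if the storage is unrestricted
        nodes = storage_cfg[storeid].get('nodes')
        if not nodes:
            return                                  # no node restriction to update
        nodes.discard(node)
        if nodes:
            storage_cfg[storeid]['nodes'] = nodes   # still used on other nodes
        else:
            del storage_cfg[storeid]                # last node gone, drop the storage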
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
slight style fixup
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
For ZFS and directory storages, clean up the whole disk when the
layout is the usual one, to avoid leftovers.
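Roughly, as a sketch (helper names made up; wipefs stands in for
whatever the actual cleanup uses):

    import subprocess

    def wipe_blockdevs(devs: list[str]) -> None:
        # remove filesystem/partition-table signatures (illustrative helper)
        for dev in devs:
            subprocess.run(['wipefs', '--all', dev], check=True)

    def cleanup(disk: str, partitions: list[str], expected: set[str]) -> None:
        # if the disk only contains the partitions we created ourselves,
        # clean up the whole disk instead of just the single partitions
        if set(partitions) == expected:
            wipe_blockdevs([disk])
        else:
            wipe_blockdevs(partitions)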
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
to avoid duplication. Current callers pass along at least one device,
but anticipate future callers that might call with an empty list. Do
nothing in that case, rather than triggering everything.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The calls for directory and ZFS need slight adaptations. Except for
those, the only thing that needs to be done is to support partitions in
the disk_is_used helper.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
There is a udev bug [0] which can ultimately lead to the udev database
for certain devices not being actively updated. Determining whether a
disk is used or not in get_disks() (in part) relies upon lsblk, which
queries the udev database. Ensure the information is updated by
manually calling 'udevadm trigger' for the changed devices.
It's most important for the 'directory' API path, as mounting depends
on the '/dev/disk/by-uuid' symlink being generated.
[0]: https://github.com/systemd/systemd/issues/18525
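As a sketch (helper name made up; the settle call is only added here for
illustration):

    import subprocess

    def udev_refresh(devs: list[str]) -> None:
        # nothing to do for an empty list ('udevadm trigger' without
        # arguments would trigger events for all devices)
        if not devs:
            return
        # re-run the udev rules for the changed devices so that lsblk and
        # the /dev/disk/by-* symlinks reflect the current state
        subprocess.run(['udevadm', 'trigger', *devs], check=True)
        # wait for the resulting events to be processed
        subprocess.run(['udevadm', 'settle'], check=True)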
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Because then it might not be unused anymore. If there really is a
race, this prevents e.g. sgdisk from creating a partition on a device
already in use by LVM, or LVM from destroying a partitioned device.
For ZFS, also get the latest udev info once inside the worker.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Previously, top-level vdevs like log or special were wrongly added as
children of the previous outer vdev instead of the root.
Fix it by also showing the vdev with the same name as the pool and by
starting the count at level 1 (the pool itself serves as the root and
should be the only one with level 0). This results in the same kind of
structure as in PBS and (except for the root) zpool status itself.
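The reported tree then looks roughly like this (illustrative pool and
device names):

    tank            level 0  (the pool itself, root of the tree)
      mirror-0      level 1
        sda         level 2
        sdb         level 2
      logs          level 1
        sdc         level 2
      special       level 1
        sdd         level 2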
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
When creating a new ZFS storage, also instantiate an import-unit for the pool.
This should help mitigate the case where some pools don't get imported during
boot, because they are not listed in an existing zpool.cache file.
This patch needs the corresponding addition of 'zfs-import@.service' in
the zfsonlinux repository.
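In essence, storage creation now also does something like the following
('tank' is just an example pool name; sketch only):

    import subprocess

    def enable_import_unit(pool: str) -> None:
        # instantiate zfs-import@.service for this pool, so it is imported
        # at boot even if it is missing from an existing zpool.cache file
        subprocess.run(['systemctl', 'enable', f'zfs-import@{pool}.service'],
                       check=True)

    # enable_import_unit('tank')  ->  zfs-import@tank.service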
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
when compiling the disk list, add a property with a stable
/dev/disk/by-id/ path for a block device when available.
This is needed to create zpools with stable by-id links.
The /dev/disk/by-id/ directory can contain multiple links to the same device
(e.g. when it's used as an LVM PV, or one for the wwn/nvme-eui in addition
to the one with vendor and serial). We take the first one that matches
the bus where the disk is attached. For NVMe disks we exclude the links
containing the nvme-eui.
The patch assumes that not all disks need to have such a link (e.g.
virtio-block devices as we pass them to guests).
Additionally, the tests were adapted to run successfully.
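A rough sketch of the selection logic (Python only for illustration; the
bus-prefix matching and names are assumptions of the sketch, not the
exact implementation):

    import os

    def stable_by_id_path(devpath: str, bus: str) -> str | None:
        # pick the first /dev/disk/by-id link that points to this device
        # and matches its bus; for NVMe, skip the nvme-eui.* links
        base = '/dev/disk/by-id'
        if not os.path.isdir(base):
            return None
        target = os.path.realpath(devpath)
        for name in sorted(os.listdir(base)):
            if bus == 'nvme' and name.startswith('nvme-eui.'):
                continue
            if not name.startswith(f'{bus}-'):
                continue
            link = os.path.join(base, name)
            if os.path.realpath(link) == target:
                return link
        return None     # some disks legitimately have no such link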
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
to have a clearer method name for this. check_XYZ also suggests that we
return true if the check was OK, but we don't.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the syntax for creating a pool with a single disk is
not the same as for a mirror, so let the user select it
explicitly
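Simplified sketch of the idea ('single' as a selectable level and the
helper name are made up for illustration):

    def zpool_vdev_args(raidlevel: str, devs: list[str]) -> list[str]:
        # a single-disk pool has no vdev keyword at all, while
        # mirror/raidz/... prepend the vdev type
        if raidlevel == 'single':
            return list(devs)           # zpool create tank /dev/sda
        return [raidlevel, *devs]       # zpool create tank mirror /dev/sda /dev/sdb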
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>