Commit Graph

17 Commits

bd485fd4aa disks: allow add_storage for already configured local storage
One of the smaller annoyances, especially for less experienced users, is
the fact that when creating a local storage (ZFS, LVM (thin), dir) in a
cluster, the "Add Storage" option can only be left enabled the first
time.

On any following node, this option had to be disabled and the new node
manually added to the list of nodes for that storage.

This patch changes the behavior. If a storage of the same name already
exists, it verifies that the necessary parameters match the already
existing one.
Then, if the 'nodes' parameter is set, it adds the current node and
updates the storage config.
If there is no nodes list, nothing else needs to be done, and the GUI
will stop showing the question mark for the local storage that was
configured but, until now, did not exist on that node.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2022-09-13 10:05:20 +02:00
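
A minimal Perl sketch of the reuse logic described in the commit above; the
config layout (an 'ids' hash, a comma-separated 'nodes' string) and the helper
name are assumptions for illustration, not the actual pve-storage code.

    use strict;
    use warnings;

    sub add_node_to_existing_storage {
        my ($storage_cfg, $storeid, $node, $expected) = @_;

        my $scfg = $storage_cfg->{ids}->{$storeid};
        return 0 if !$scfg;    # no existing entry, caller creates the storage as before

        # verify that the essential parameters match the existing definition
        for my $key (keys %$expected) {
            die "$storeid: parameter '$key' does not match existing storage config\n"
                if ($scfg->{$key} // '') ne $expected->{$key};
        }

        # if the storage is restricted to specific nodes, add the current one
        if (defined($scfg->{nodes})) {
            my %nodes = map { $_ => 1 } split(/,/, $scfg->{nodes});
            $nodes{$node} = 1;
            $scfg->{nodes} = join(',', sort keys %nodes);
        }
        # without a nodes list the storage already applies to all nodes

        return 1;    # existing entry was reused/updated
    }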
55553bd432 disks: die if storage name is already in use
If a storage of that type and name already exists (LVM, zpool, ...) but
we do not have a Proxmox VE Storage config for it, it is possible that
the creation will fail midway due to checks done by the underlying
storage layer itself. This in turn can lead to disks that are already
partitioned. Users would need to clean this up themselves.

By adding checks early on, not only checking against the PVE storage
config, but against the actual storage type itself, we can die early
enough, before we touch any disk.

For ZFS, the logic to gather pool data is moved into its own function,
so it can be called from both the index API endpoint and the check in
the create endpoint.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2022-09-13 10:05:16 +02:00
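
A hedged sketch of such an early check for ZFS, asking the storage layer
itself (via 'zpool list') before any disk is touched; the function name is
made up for illustration.

    use strict;
    use warnings;

    sub assert_zpool_name_free {
        my ($name) = @_;

        # 'zpool list -H -o name' prints one existing pool name per line
        open(my $fh, '-|', 'zpool', 'list', '-H', '-o', 'name')
            or die "unable to run zpool: $!\n";
        my %pools = map { chomp; ($_ => 1) } <$fh>;
        close($fh);

        die "ZFS pool '$name' already exists on this node\n" if $pools{$name};
    }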
cde43c4880 api: disks: delete: add flag for cleaning up storage config
Update node restrictions to reflect that the storage is not available
anymore on the particular node. If the storage was only configured for
that node, remove it altogether.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>

slight style fixup

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-10 12:35:25 +01:00
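
A rough sketch of the node-restriction cleanup described above; the config
layout (an 'ids' hash, a comma-separated 'nodes' string) is assumed purely
for illustration.

    use strict;
    use warnings;

    sub cleanup_storage_config {
        my ($storage_cfg, $storeid, $node) = @_;

        my $scfg = $storage_cfg->{ids}->{$storeid} or return;
        return if !defined($scfg->{nodes});    # unrestricted storage, nothing to update

        my %nodes = map { $_ => 1 } split(/,/, $scfg->{nodes});
        delete $nodes{$node};

        if (%nodes) {
            # other nodes still use it: only drop this node from the restriction
            $scfg->{nodes} = join(',', sort keys %nodes);
        } else {
            # it was only configured for this node: remove the storage altogether
            delete $storage_cfg->{ids}->{$storeid};
        }
    }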
f81908eb58 api: disks: delete: add flag for wiping disks
For ZFS and directory storages, clean up the whole disk when it has the
usual layout, to avoid leftovers.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
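
An illustrative sketch of a disk-wipe step using the standard sgdisk and
wipefs tools; this is not the exact command sequence used by the patch.

    use strict;
    use warnings;

    sub wipe_disk {
        my ($dev) = @_;
        die "refusing to wipe '$dev': not a block device\n" if !-b $dev;

        # remove GPT/MBR structures and any remaining filesystem signatures
        system('sgdisk', '--zap-all', $dev) == 0 or die "sgdisk failed on $dev\n";
        system('wipefs', '--all', $dev) == 0     or die "wipefs failed on $dev\n";
    }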
26082b7daf diskmanage: add helper for udev workaround
to avoid duplication. Current callers pass along at least one device,
but future callers might call with an empty list; do nothing in that
case, rather than triggering everything.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
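
A minimal sketch of such a helper: bail out early on an empty device list so
a plain 'udevadm trigger' (which would re-trigger all devices) is never run.
The function name is illustrative.

    use strict;
    use warnings;

    sub trigger_udev_update {
        my (@devices) = @_;
        return if !@devices;    # do nothing rather than re-triggering every device

        system('udevadm', 'trigger', @devices) == 0
            or warn "udevadm trigger failed\n";
        system('udevadm', 'settle') == 0
            or warn "udevadm settle failed\n";
    }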
a83d8eb178 api: disks: add DELETE endpoint for directory, lvm, lvmthin, zfs
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
05d9171278 api: disks: create: set correct partition type
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-10-07 14:39:26 +02:00
21a75847a8 api: disk: work around udev bug to ensure its database is updated
There is a udev bug [0] which can ultimately lead to the udev database
for certain devices not being actively updated. Determining whether a
disk is used or not in get_disks() (in part) relies upon lsblk, which
queries the udev database. Ensure the information is updated by
manually calling 'udevadm trigger' for the changed devices.

It's most important for the 'directory' API path, as mounting depends
on the '/dev/disk/by-uuid' symlink being generated.

[0]: https://github.com/systemd/systemd/issues/18525

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-30 18:04:25 +02:00
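
A hedged illustration of why the updated udev database matters for the
'directory' path: after triggering udev for a freshly formatted partition,
the by-uuid symlink has to show up before it can be used for mounting. The
helper below is purely illustrative.

    use strict;
    use warnings;

    sub wait_for_uuid_symlink {
        my ($uuid, $timeout) = @_;
        $timeout //= 10;

        my $link = "/dev/disk/by-uuid/$uuid";
        for (1 .. $timeout) {
            return $link if -e $link;
            sleep(1);
        }
        die "timeout waiting for $link - udev database may be stale\n";
    }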
e99bc248d4 api: disks: create: re-check disk after fork/lock
Because then it might not be unused anymore. If there really is a
race, this prevents e.g. sgdisk from creating a partition on a device
already in use by LVM, or LVM from destroying a partitioned device.

For ZFS, also get the latest udev info once inside the worker.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-30 18:04:22 +02:00
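
A sketch of the check/re-check pattern, assuming PVE::Tools::lock_file from
pve-common as the locking primitive; the lock path and the is_disk_in_use()
stub are illustrative stand-ins, not the real diskmanage code.

    use strict;
    use warnings;
    use PVE::Tools;

    sub is_disk_in_use {    # stub; the real check inspects partitions, LVM, ZFS, mounts, ...
        my ($dev) = @_;
        open(my $fh, '<', '/proc/mounts') or die "cannot read /proc/mounts: $!\n";
        return scalar grep { (split())[0] eq $dev } <$fh>;
    }

    sub create_storage_on {
        my ($dev) = @_;

        die "device '$dev' is in use\n" if is_disk_in_use($dev);    # early, cheap check

        PVE::Tools::lock_file('/run/lock/pve-diskmanage-example.lck', 10, sub {
            # the device may have been grabbed between the early check and taking
            # the lock (e.g. by LVM), so check again before touching it
            die "device '$dev' is in use\n" if is_disk_in_use($dev);
            # ... sgdisk / vgcreate / zpool create would follow here ...
        });
        die $@ if $@;
    }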
0370861cfd diskmanage: rename check_unused to assert_disk_unused
to have a clearer method name. check_XYZ also suggests that we return
true if the check was OK, but we do not.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-10-03 14:51:38 +02:00
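
The naming convention in a nutshell: an assert_* helper dies on failure and
returns nothing meaningful, so callers never branch on its return value;
check_* would wrongly suggest a boolean result. A purely illustrative example:

    sub assert_sid_unused {
        my ($sid, $existing) = @_;    # $existing: hashref of already configured storage IDs
        die "storage ID '$sid' is already defined\n" if $existing->{$sid};
        return undef;    # nothing useful to return, on purpose
    }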
9280153e10 rename check_available to assert_sid_unused
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-10-03 14:49:14 +02:00
4dcb16c0dc fix #1929: only check storage if user wants to create one
this is useful if a user wants to create similar storage on each host

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-10-03 14:30:22 +02:00
820bab50b9 add missing storage check in LVM Disk API
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-10-03 14:30:22 +02:00
76c1e57be7 refactor disk/storage checks for Disk API
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-10-03 14:30:22 +02:00
e39e8ee213 refactor diskmanagement lock_file calls
so that we only have one place where we reference the lockfile
and the timeout

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-08-08 12:01:02 +02:00
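
A sketch of such a single locking entry point, assuming PVE::Tools::lock_file;
the lock file path and timeout are placeholder values, kept in one spot so
every caller agrees on them.

    use strict;
    use warnings;
    use PVE::Tools;

    # placeholder path and timeout, defined in exactly one place
    my $LOCKFILE = '/run/lock/pve-diskmanage-example.lck';
    my $TIMEOUT  = 60;

    sub locked_disk_action {
        my ($code) = @_;
        my $res = PVE::Tools::lock_file($LOCKFILE, $TIMEOUT, $code);
        die $@ if $@;
        return $res;
    }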
fc1880566c cleanup: do not convert exceptions to strings. 2018-08-02 11:39:35 +02:00
8b6842caa2 add API for LVM management
currently only list and create;
the list is in a format that we can use in an ExtJS tree

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-08-02 10:38:27 +02:00
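
A guess at the tree-shaped list format mentioned above: each volume group
carries its physical volumes as 'children', which an ExtJS tree panel can
consume directly. The field names and values are illustrative, not the exact
API schema.

    my $lvm_list = {
        leaf     => 0,
        children => [
            {
                name     => 'pve',            # volume group
                size     => 256060514304,     # bytes
                free     => 16106127360,
                leaf     => 0,
                children => [
                    { name => '/dev/sda3', size => 256060514304, free => 16106127360, leaf => 1 },
                ],
            },
        ],
    };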