Commit Graph

40 Commits

06deafa43e disk manage: fix dereferencing draid config
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-11-17 19:10:58 +01:00
e698cbb9af disk manage: draid: style clean ups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-11-11 09:36:24 +01:00
8a5ffcd991 disk manage: move "draid-config set only on draid level" assertion
for better code locality, and to avoid forgetting to adapt the check
whenever a draid-config parameter is added or an existing one is
changed.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-11-11 09:36:03 +01:00
59db1208c3 fix #3967: enable ZFS dRAID creation via API
It is possible to set the number of spares and the size of
data stripes via the draidspares & draiddata parameters.

Signed-off-by: Stefan Hrdlicka <s.hrdlicka@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
2022-11-11 09:35:59 +01:00
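
(Illustration: the zpool-level dRAID syntax such an API request maps to;
pool name, geometry and disks below are made up.)

    # draid<parity>[:<data>d][:<children>c][:<spares>s]
    zpool create tank draid1:4d:6c:1s \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
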
8b06da647a zfs diskmanage: code/indentation cleanup in get_pool_data
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-11-11 09:35:59 +01:00
88f272b204 api: remove duplicate variable
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-20 10:50:12 +02:00
bd485fd4aa disks: allow add_storage for already configured local storage
One of the smaller annoyances, especially for less experienced users, is
the fact that when creating a local storage (ZFS, LVM (thin), dir) in a
cluster, one can only leave the "Add Storage" option enabled the first
time.

On any following node, this option needed to be disabled and the new
node manually added to the list of nodes for that storage.

This patch changes the behavior. If a storage of the same name already
exists, it will verify that the necessary parameters match the already
existing one.
Then, if the 'nodes' parameter is set, it adds the current node and
updates the storage config.
If there is no nodes list, nothing else needs to be done, and the GUI
will stop showing the question mark for the storage that was configured
but, until then, did not exist on the node.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2022-09-13 10:05:20 +02:00
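
(For reference, the manual equivalent of what this automates, with a
made-up storage id: extending the node list of an existing storage.)

    pvesm set local-zfs-data --nodes node1,node2
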
55553bd432 disks: die if storage name is already in use
If a storage of that type and name already exists (LVM, zpool, ...) but
we do not have a Proxmox VE Storage config for it, it is possible that
the creation will fail midway due to checks done by the underlying
storage layer itself. This in turn can lead to disks that are already
partitioned. Users would need to clean this up themselves.

By adding checks early on, not only checking against the PVE storage
config, but against the actual storage type itself, we can die early
enough, before we touch any disk.

For ZFS, the logic to gather pool data is moved into its own function to
be called from the index API endpoint and the check in the create
endpoint.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2022-09-13 10:05:16 +02:00
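
(Illustration: the kind of early existence probe this adds, expressed as
CLI checks; the name 'tank' is made up.)

    zpool list -H -o name | grep -qx tank && echo "zpool 'tank' already exists"
    vgs --noheadings -o vg_name | grep -qw tank && echo "VG 'tank' already exists"
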
107208bdbf disks: zfs: code indentation/style improvements
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-06 12:56:47 +02:00
cde43c4880 api: disks: delete: add flag for cleaning up storage config
Update node restrictions to reflect that the storage is not available
anymore on the particular node. If the storage was only configured for
that node, remove it altogether.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>

slight style fixup

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-10 12:35:25 +01:00
f81908eb58 api: disks: delete: add flag for wiping disks
For ZFS and directory storages, clean up the whole disk when the
layout is as usual to avoid left-overs.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
26082b7daf diskmanage: add helper for udev workaround
to avoid duplication. Current callers pass along at least one device,
but future callers might call with an empty list; do nothing in that
case, rather than triggering everything.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
a83d8eb178 api: disks: add DELETE endpoint for directory, lvm, lvmthin, zfs
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
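
(Hedged example combining this endpoint with the cleanup flags from the
two commits above; node and storage names are made up, and the flag
names 'cleanup-config'/'cleanup-disks' are assumed.)

    pvesh delete /nodes/mynode/disks/directory/mydir \
        --cleanup-config 1 --cleanup-disks 1
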
05d9171278 api: disks: create: set correct partition type
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-10-07 14:39:26 +02:00
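
(Illustration: setting a partition type with sgdisk; the GUID shown is
the well-known ZFS partition type, the device is made up.)

    sgdisk --typecode=1:6a898cc3-1dd2-11b2-99a6-080020736631 /dev/sdX
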
a2c34371e6 partially fix #2285: api: disks: allow partitions for creation paths
The calls for directory and ZFS need slight adaptations. Except for
those, the only thing that needs to be done is support partitions in
the disk_is_used helper.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-10-07 14:39:26 +02:00
21a75847a8 api: disk: work around udev bug to ensure its database is updated
There is a udev bug [0] which can ultimately lead to the udev database
for certain devices not being actively updated. Determining whether a
disk is used or not in get_disks() (in part) relies upon lsblk, which
queries the udev database. Ensure the information is updated by
manually calling 'udevadm trigger' for the changed devices.

It's most important for the 'directory' API path, as mounting depends
on the '/dev/disk/by-uuid'-symlink to be generated.

[0]: https://github.com/systemd/systemd/issues/18525

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-30 18:04:25 +02:00
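
(Illustration: the manual form of the workaround; device made up.)

    udevadm trigger /dev/sdX   # request a uevent so the udev db catches up
    udevadm settle             # wait until it has been processed
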
e99bc248d4 api: disks: create: re-check disk after fork/lock
Because by then it might not be unused anymore. If there really is a
race, this prevents e.g. sgdisk creating a partition on a device
already in use by LVM or LVM destroying a partitioned device.

For ZFS, also get the latest udev info once inside the worker.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-30 18:04:22 +02:00
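
(A minimal sketch of the check-lock-recheck pattern in shell, with a
made-up lock path and device; the actual code does this in Perl inside
the API worker.)

    (
        flock -x 9
        # re-check inside the lock: the device may have been taken meanwhile
        lsblk -no FSTYPE /dev/sdX | grep -q . && { echo "in use" >&2; exit 1; }
        sgdisk -N 1 /dev/sdX    # e.g. create the partition
    ) 9>/run/lock/pve-disk-create.lock
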
576e143ac1 fix #3610: properly build ZFS detail tree
Previously, top-level vdevs like log or special were wrongly added as
children of the previous outer vdev instead of the root.

Fix it by also showing the vdev with the same name as the pool and
start counting from level 1 (the pool itself serves as the root and
should be the only one with level 0). This results in the same kind
of structure as in PBS and (except for the root) zpool status itself.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-10 14:19:39 +02:00
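
(Illustration: the 'zpool status' layout the detail tree now mirrors;
pool and disks are made up.)

    zpool status tank
    #   NAME        STATE
    #   tank        ONLINE    <- the pool itself is the root, level 0
    #     raidz1-0  ONLINE
    #       sda     ONLINE
    #       sdb     ONLINE
    #       sdc     ONLINE
    #   logs                  <- top-level vdev: child of the root,
    #     sdd       ONLINE       not of raidz1-0
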
ae098a191c api: disks: allow zstd compression for zfs pools
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-30 15:21:01 +02:00
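
(Illustration of the new option in zfs terms; pool and disks made up.)

    zpool create -O compression=zstd tank mirror /dev/sda /dev/sdb
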
977b80c8ab disks: zfs: scan is only returned optionally
the line is not present if a zpool has never been scrubbed before
(e.g. if it's freshly set up)

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-02-26 09:10:08 +01:00
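
(Illustration; pool name made up.)

    zpool status tank | grep 'scan:'   # no output until the first scrub/resilver
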
c9c90349c3 check for service existence before enabling zfs-import service
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-29 18:52:32 +02:00
f720f6c440 Disks: instantiate import unit for created zpool
When creating a new ZFS storage, also instantiate an import-unit for the pool.
This should help mitigate the case where some pools don't get imported during
boot, because they are not listed in an existing zpool.cache file.

This patch needs the corresponding addition of 'zfs-import@.service' in
the zfsonlinux repository.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-09-29 18:52:32 +02:00
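
(Illustration: the instantiated unit for a made-up pool name.)

    systemctl enable zfs-import@tank.service
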
0f0d99a3e5 fix #2777: create zpools with stable dev paths
when compiling the disk list, add a property with a stable
/dev/disk/by-id/ path for a block device when available.

This is needed to create zpools with stable by-id links.

The /dev/disk/by-id/ directory can contain multiple links to the same device
(e.g. when it's used as a LVM PV, or one for the wwn/nvme-eui in addition
to the one with vendor and serial). We take the first one which matches
the bus where the disk is attached. For nvme disks we exclude the one
containing the nvme-eui.

The patch does not assume that every disk has such a link (e.g.
virtio-block devices, as we pass them to guests, have none).

Additionally the tests were adapted to run successfully.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-06-06 19:32:33 +02:00
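
(Illustration: several by-id links can point at the same disk; the names
below are made up.)

    ls -l /dev/disk/by-id/
    # ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567  -> ../../sda
    # wwn-0x50014ee2b9b0c1d2                    -> ../../sda
    # nvme-Samsung_SSD_970_EVO_1TB_S467NB0K1234 -> ../../nvme0n1
    # nvme-eui.0025385971b0b2a1                 -> ../../nvme0n1  (excluded)
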
8b6b710265 followup: fix whitespace errors and s/and/&&/ for consistency
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-09 15:27:08 +01:00
5b4b715771 storage zfs: removed unused variable
Signed-off-by: Tim Marx <t.marx@proxmox.com>
2018-11-09 14:49:43 +01:00
32f749b840 storage zfs: changed return value description & optionals
Signed-off-by: Tim Marx <t.marx@proxmox.com>
2018-11-09 14:49:43 +01:00
a49fc735e5 close #1949: storage zfs: changed zpool command parser
Signed-off-by: Tim Marx <t.marx@proxmox.com>
2018-11-09 14:49:43 +01:00
b005f2f483 Fix: api zfs: changed return value name to errors
Signed-off-by: Tim Marx <t.marx@proxmox.com>
2018-10-29 10:33:39 +01:00
0370861cfd diskmanage: rename check_unused to assert_disk_unused
to have a clear method name for this. check_XYZ also suggests that we
return true if the check was OK, but we don't.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-10-03 14:51:38 +02:00
9280153e10 rename check_available to assert_sid_unused
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-10-03 14:49:14 +02:00
4dcb16c0dc fix #1929: only check storage if user wants to create one
this is useful if a user wants to create similar storage on each host

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-10-03 14:30:22 +02:00
76c1e57be7 refactor disk/storage checks for Disk API
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-10-03 14:30:22 +02:00
4d12dbffc4 add return description for zfs detail api call
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-08-08 12:01:54 +02:00
e39e8ee213 refactor diskmanagement lock_file calls
so that we only have one place where we reference the lockfile
and the timeout

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-08-08 12:01:02 +02:00
7058abe29e add 'single' raidlevel for zfs
the syntax for creating a pool with a single disk is
not the same as for a mirror, so let the user select it
explicitly

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-08-08 12:00:18 +02:00
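
(Illustration: the differing zpool syntax the commit refers to; disks
made up.)

    zpool create tank /dev/sda                    # 'single': bare vdev, no keyword
    zpool create tank mirror /dev/sda /dev/sdb    # 'mirror' needs the keyword
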
38572a8f56 rename raidlvl to raidlevel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-08-08 11:56:35 +02:00
fdc863c705 fix descriptions of api calls
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-08-08 11:56:17 +02:00
7d597888a4 cleanup descriptions
2018-08-08 08:22:27 +02:00
5be1a092d6 fix schema - 'string-list' is a format, not a type
2018-08-08 08:21:06 +02:00
c84106edc9 add API for ZFS management
a list, a detail and a create api call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-08-08 08:02:14 +02:00
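
(Hedged sketch of the three calls via pvesh; node, pool and disk names
are made up.)

    pvesh get    /nodes/mynode/disks/zfs          # list zpools
    pvesh get    /nodes/mynode/disks/zfs/tank     # zpool detail
    pvesh create /nodes/mynode/disks/zfs --name tank \
        --raidlevel mirror --devices /dev/sda,/dev/sdb
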