creates/lists systemd mount units for /mnt/pve/.*
the only filesystem types allowed are ext4 and xfs for now
mount via /dev/disk/by-uuid, so the units stay valid when the kernel
renames the underlying block device
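
for illustration, a generated unit could look like the output of this
sketch (the helper name and the exact unit fields are assumptions, not
the actual implementation):

use strict;
use warnings;
use File::Path qw(make_path);

# hypothetical helper: write a systemd .mount unit for /mnt/pve/<name>,
# addressing the device by uuid so the unit survives /dev/sdX renumbering
sub write_mount_unit {
    my ($name, $uuid, $fstype) = @_;

    die "unsupported filesystem type '$fstype'\n"
        if $fstype ne 'ext4' && $fstype ne 'xfs';

    my $mountpoint = "/mnt/pve/$name";
    # systemd derives the unit file name from the mountpoint path
    # (a simple storage name is assumed here, no escaping is done)
    my $unitfile = "/etc/systemd/system/mnt-pve-$name.mount";

    my $unit = <<"EOF";
[Unit]
Description=Mount storage '$name' under /mnt/pve

[Mount]
What=/dev/disk/by-uuid/$uuid
Where=$mountpoint
Type=$fstype

[Install]
WantedBy=multi-user.target
EOF

    make_path($mountpoint);
    open(my $fh, '>', $unitfile) or die "unable to create '$unitfile' - $!\n";
    print $fh $unit;
    close($fh);
}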
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
like the LVM API, but return an array for the list,
because we do not have nested data here.
on create, set up a vg and a thin lv with the given name, using the
full size of the disk
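
roughly, the create path then amounts to the following (a sketch using
plain system() calls; the real code and its error handling are more
involved):

use strict;
use warnings;

# create one vg and one thin pool spanning it, both named after
# the storage, using the full size of the given disk
sub create_thinpool {
    my ($name, $device) = @_;

    system('vgcreate', $name, $device) == 0
        or die "vgcreate '$name' failed\n";

    # allocate all free extents of the vg for the thin pool
    system('lvcreate', '--type', 'thin-pool', '-l', '100%FREE',
        '-n', $name, $name) == 0
        or die "lvcreate '$name' failed\n";
}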
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that we can use it for a generic disk selector.
this mirrors the functionality of the
/nodes/NODE/ceph/disks api call (which we can then deprecate)
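
an entry of the returned array could look like this (the field names
are illustrative, not the exact schema):

# one disk as the api could report it
my $entry = {
    devpath => '/dev/sda',
    size    => 4000787030016,     # bytes
    model   => 'WDC WD4000F9YZ',
    serial  => 'WD-WCC131234567',
    gpt     => 1,                 # has a gpt partition table
    used    => 'LVM',             # or undef if unused
    health  => 'PASSED',          # from smart
};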
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
on_add_hook allows a plugin to encapsulate storage specific add steps,
like copying a keyring (RBD) or creating a volume group (LVM), in a
clean manner.
The same goes for deletion with on_delete_hook, where everything should
be cleaned up, as much as possible.
Until now, this was done directly in the api config CREATE and DELETE
code, respectively, with a series of
if ($storage_type eq 'foo') {
    ...
} elsif ($storage_type eq 'bar') {
    ...
}
which isn't really that nice...
Another nice result of this approach is that external plugins can also
use those hooks and do their setup/cleanup steps sanely.
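
A plugin, external or in-tree, can then simply override the hooks, for
example (the exact parameters passed to the hooks are sketched from
context here, treat them as an assumption):

package PVE::Storage::Custom::FooPlugin;

use strict;
use warnings;

use base qw(PVE::Storage::Plugin);

# called from the api CREATE code instead of a type-specific
# if/elsif branch there
sub on_add_hook {
    my ($class, $storeid, $scfg, %param) = @_;

    # storage specific setup, e.g. copy a keyring or create a vg
    ...;
}

# called from the api DELETE code; undo the setup as far as possible
sub on_delete_hook {
    my ($class, $storeid, $scfg) = @_;

    ...;
}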
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
as otherwise we write it to /etc/pve/storage.cfg, which is readable by
www-data, not a really private group...
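
For example, a secret could instead live below /etc/pve/priv, which
only root can read (the exact file name here is an assumption):

use strict;
use warnings;

# sketch: keep the secret out of storage.cfg and store it in the
# root-only /etc/pve/priv hierarchy instead
sub write_storage_secret {
    my ($storeid, $secret) = @_;

    my $fn = "/etc/pve/priv/storage/$storeid.pw";
    open(my $fh, '>', $fn) or die "unable to create '$fn' - $!\n";
    print $fh "$secret\n";
    close($fh);
}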
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In this patch, the nodes list will be deleted if the nodes parameter
comes in as an empty string.
We need this in the GUI when updating the nodes in the config, to be
able to reset the nodes restriction again.
If we do not erase the empty hash, the storage online check would be
skipped, and the password and user would not be verified.
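
Roughly the intended update behaviour (a sketch; $scfg is the parsed
storage section, $param the api parameters):

# an explicitly empty 'nodes' parameter removes the restriction
# instead of leaving an empty hash behind
if (defined($param->{nodes})) {
    if ($param->{nodes} eq '') {
        delete $scfg->{nodes}; # no restriction, storage is on all nodes
    } else {
        $scfg->{nodes} = $param->{nodes};
    }
}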
we will use this in the gui to figure out if we have to show
a size selector, a file selector, which formats are available, etc.
we have to include this data even for inactive storages, else
we cannot show the correct fields
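
for illustration, the per-storage data could look like this (the field
names are illustrative, not the exact schema):

# capability data for one storage, returned even when inactive
my $info = {
    storage => 'local',
    type    => 'dir',
    active  => 0,
    content => 'images,iso,vztmpl',
    format  => {
        default => 'qcow2',
        valid   => [ 'raw', 'qcow2', 'vmdk' ],
    },
};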
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
It is possible to synchronise a volume to another node at a defined
interval.
So if a node fails, there will be a copy of the VM's volumes
on another node.
With this copy it is possible to start the VM on that node.
instead of parsing the smart output in two places,
give get_smart_data a flag indicating that we only want the health status
this fixes a bug (not on the bugtracker), where
an ssd with disabled smart had an empty string as health
in the gui
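
a sketch of a call site with the new flag (the parameter name and the
fallback value are assumptions):

# with the flag set, only the health field is filled in and the
# attribute parsing is skipped entirely
my $smart = get_smart_data($disk, $healthonly);

# make sure disks with smart disabled do not end up with an empty
# string as health in the gui
$smart->{health} = 'UNKNOWN'
    if !defined($smart->{health}) || $smart->{health} eq '';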
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>