Before, 'undef' was equivalent to unlimited, but '0' is the
"explicitly unlimited" value. So if the user does not request
an override, apply the limits as if the user were unprivileged
(otherwise there would be no way for privileged users to
explicitly ask not to override the configured limits).
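A minimal sketch of these semantics, assuming a standalone
helper (name, signature, and the merging rule are illustrative,
not the actual implementation):

    # sketch only: resolve the effective limit from a requested
    # override, the configured limit, and whether the user may
    # override limits at all
    sub effective_limit {
        my ($requested, $configured, $may_override) = @_;
        # privileged users get exactly what they asked for,
        # including '0' as "explicitly unlimited"
        return $requested if $may_override && defined($requested);
        # no override requested (undef), or '0' from an
        # unprivileged user: fall back to the configured limit
        return $configured if !defined($requested) || $requested == 0;
        # unprivileged users may still lower the limit below the
        # configured one
        return $requested if !defined($configured);
        return $requested < $configured ? $requested : $configured;
    }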
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Takes an operation, an optional requested bandwidth
limit override, and a list of storages involved in the
operation, and lowers the requested bandwidth to the global
and storage-specific limits unless the user has permission
to change those (see the sketch after the list below).
This means:
* Global limits apply to all users without Sys.Modify on /
  (since users with Sys.Modify can change datacenter.cfg
  options via the API anyway).
* Storage specific limits apply to users without
Datastore.Allocate access on /storage/X for any involved
storage X.
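A rough sketch of what such a helper could look like; the
function name and the elided limit-merging steps are
illustrative, while the RPCEnvironment permission checks mirror
the rules above:

    use PVE::RPCEnvironment;

    sub get_bandwidth_limit {
        my ($operation, $storage_list, $override) = @_;

        my $rpcenv = PVE::RPCEnvironment::get();
        my $authuser = $rpcenv->get_user();

        # Sys.Modify on / allows overriding the global limit
        my $override_global =
            $rpcenv->check($authuser, '/', ['Sys.Modify'], 1);

        my $limit = $override;
        if (!$override_global) {
            # ... lower $limit to the global limit from datacenter.cfg
        }

        for my $storeid (@$storage_list) {
            # Datastore.Allocate on /storage/X allows overriding
            # the limit of storage X
            next if $rpcenv->check(
                $authuser, "/storage/$storeid", ['Datastore.Allocate'], 1);
            # ... lower $limit to the storage-specific limit
        }
        return $limit;
    }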
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
we will use this in the gui to figure out whether we have to show
a size selector, a file selector, which formats are available, etc.
we have to include this data even for inactive storages, otherwise
we cannot show the correct fields
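as a hypothetical illustration (the keys are made up for this
example, not the actual API schema), the gui needs per-storage
data along these lines even when 'active' is 0:

    # hypothetical per-storage entry; field names illustrative only
    my $entry = {
        storage => 'local-lvm',
        enabled => 1,
        active  => 0,                # inactive, but data still included
        content => 'images,rootdir',
        format  => {
            default => 'raw',        # preselected in the format selector
            valid   => ['raw'],      # drives which fields the gui shows
        },
    };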
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Accommodates the changes in 44ae567 and d40e27d by
reordering checks to allow proper filtering of
disabled storages. Also reorders two checks to
prevent autovivification from causing disabled
storages to always show up in the output.
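For readers unfamiliar with the Perl pitfall, a self-contained
illustration of how a mere read of a nested key can make a
disabled storage appear in the result:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $info    = {};
    my $storeid = 'disabled-storage';

    # reading a key of a nested hash autovivifies the outer entry ...
    if ($info->{$storeid}->{active}) {
        # never reached
    }

    # ... so the storage now shows up when iterating the result
    print "$_\n" for keys %$info;    # prints 'disabled-storage'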
in the Storage/Status API call we have an 'enabled' param which had
no effect, because storage_info only returned enabled storages either
way.
This also affected `pvesm status`, which uses the Storage/Status API
call.
So push disabled storages to the info array as well, but only activate
them and get their status when they are enabled.
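A condensed sketch of the fixed loop (not verbatim from
storage_info; variable names and the status-gathering step are
abridged):

    for my $storeid (keys %$storage_config) {
        my $scfg = $storage_config->{$storeid};
        my $enabled = $scfg->{disable} ? 0 : 1;

        # always record the storage so disabled ones can be listed too
        $info->{$storeid} = { enabled => $enabled };

        # but only activate and query status for enabled storages
        next if !$enabled;
        # activate_storage($cfg, $storeid);
        # ... fill in total/used/avail ...
    }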
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This replaces the path-based and lvm/thin special cases in
storage_migrate with the zfspool case, which is already
generic enough: it uses import/export and no longer depends
directly on zfs.
The volume_snapshot call was missing the condition for when
to create a snapshot. Make the whole logic easier to follow
with a $migration_snapshot boolean.
Also get rid of the remote `pvesm free -snapshot` call by
using import's new -delete-snapshot parameter.
Otherwise there are situations where snapshots are left
behind for already-sent volumes. Also include more warnings.
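A condensed sketch of the resulting flow; the snapshot name,
the exact condition, and the command lines are illustrative,
but '-snapshot' on export and the new '-delete-snapshot' on
import are the parameters the message refers to:

    my $migration_snapshot;
    if (!defined($snapshot)) {
        # no snapshot requested by the caller: create a temporary
        # one for a consistent transfer, removed again on both ends
        $migration_snapshot = 1;
        $snapshot = '__migration__';
    }

    volume_snapshot($cfg, $volid, $snapshot) if $migration_snapshot;

    # stream the volume; the target deletes the temporary snapshot
    # itself, replacing the extra remote 'pvesm free -snapshot' call
    run_command([
        ['pvesm', 'export', $volid, $format, '-', '-snapshot', $snapshot],
        ['ssh', $target_host, '--', 'pvesm', 'import', $volid, $format,
         '-', '-delete-snapshot', $snapshot],
    ]);

    # clean up the temporary snapshot on the source
    volume_snapshot_delete($cfg, $volid, $snapshot) if $migration_snapshot;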
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The PVE team cannot support specialized vendor-specific storage
plugins because of lack of hardware. But we can allow users to
add their own plugins for their storage without the need to
rewrite any PVE code, and thus ease PVE updates for them.
The idea of this patch is to add a folder
/usr/share/perl5/PVE/Storage/Custom where users can place their
plugins; PVE will automatically load them on start, or warn and
continue if it cannot (see the loader sketch below). Maybe we
could even load all plugins (except PVE::Storage::Plugin itself)
this way, because the current storage plugins are not really
plugins if they need to be explicitly loaded in PVE code :-).
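A minimal sketch of such a loader, assuming the directory from
the message (error handling simplified):

    my $custom_dir = '/usr/share/perl5/PVE/Storage/Custom';
    if (-d $custom_dir) {
        for my $file (glob("$custom_dir/*.pm")) {
            eval { require $file };
            # warn and continue instead of aborting startup
            warn "error loading custom storage plugin '$file': $@" if $@;
        }
    }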
Custom plugins MUST have an api() method returning the API
version they were designed for. If the API changes on the PVE
side, the module is simply not registered and a warning message
is printed to the log, so the user has to update the module.
Until the module is updated, the corresponding storage will just
disappear from PVE, so an API change should not cause any data
damage.
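A skeleton of what such a plugin could look like; the package
name, the type, and the version value are illustrative, and the
usual plugin methods are elided:

    package PVE::Storage::Custom::ExamplePlugin;

    use strict;
    use warnings;
    use base qw(PVE::Storage::Plugin);

    # the API version this plugin was written against; if it does
    # not match what the installed pve-storage expects, the plugin
    # is not registered and a warning is logged
    sub api {
        return 1;
    }

    sub type {
        return 'example';
    }

    # ... plus the usual plugin methods: properties(), options(),
    # status(), path(), alloc_image(), free_image(), ...

    1;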
This approach works (with some limitations) if the plugin works
in the generic PVE way: full control of the volume lifecycle. It
will not currently work for custom plugins like iSCSI, which need
to select pre-existing volumes. Maybe someone will add a more
flexible way for pve-manager to select input elements for storage
plugins to address this.
Currently tested with my NetApp plugin.
Signed-off-by: Dmitry Petuhov <mityapetuhov@gmail.com>
ssh(1) mentions that compression is only desirable on slow
connections.
since migration from cluster node to cluster node needs a
fast network anyway, we can drop the compression for
a speed improvement
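illustrative only (the real option list differs): the change
amounts to building the migration ssh command without the '-C'
flag:

    # before (illustrative): '-C' enabled compression on every
    # migration transfer
    # my @ssh_cmd = ('/usr/bin/ssh', '-C', '-o', 'BatchMode=yes', $target);

    # after: rely on the fast cluster network instead of spending
    # CPU time on compression
    my @ssh_cmd = ('/usr/bin/ssh', '-o', 'BatchMode=yes', $target);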
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>