The ability to mark backups as protected broke the implicit assumption
in vzdump that remove=1, with the current number of backups already at
the limit (i.e. the sum of all keep options), will result in a backup
being removed.
Introduce a new storage property 'max-protected-backups' to limit the
number of protected backups per guest. Use 5 as the default value, as
it should cover most use cases while not incurring too much potential
overhead in many scenarios.
For external plugins that do not return the backup subtype in
list_volumes, all protected backups with the same ID will count
towards the limit.
An alternative would be to count the protected backups when pruning.
While that would avoid the need for a new property, it would break the
current semantics of protected backups being ignored for pruning. It
also would be less flexible, e.g. for PBS, it can make sense to have
both keep-all=1 and a limit for the number of protected snapshots on
the PVE side.
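For illustration, the limit could be set per storage in
/etc/pve/storage.cfg roughly like this (storage name and values are
made up):

    dir: backup-dir
        path /mnt/backup
        content backup
        max-protected-backups 10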
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The plugins for file-based storages
* BTRFS
* CIFS
* Dir
* Glusterfs
* NFS
now allow the option 'preallocation'.
'preallocation' can have one of five values:
* default
* off
* metadata
* falloc
* full
See the man page for `qemu-img` for what these mean exactly. [0]
The default value was chosen to be
* qcow2: metadata (as previously)
* raw: off
When 'metadata' is used as the preallocation mode, 'off' is used for
raw images (a raw image has no format metadata to preallocate).
[0] https://qemu.readthedocs.io/en/latest/system/images.html#disk-image-file-formats
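As an illustration, the option would be set on the storage and then
mapped to the corresponding qemu-img flag when allocating an image
(storage name and image name are made up):

    dir: local
        path /var/lib/vz
        content images
        preallocation falloc

    # resulting allocation, roughly:
    qemu-img create -f qcow2 -o preallocation=falloc vm-100-disk-0.qcow2 32G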
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
Also avoid a warning: auto-detecting the format of the base volume is
deprecated. See commit d9f059aa6cfccefaffa3532556e966df4a99ece2 in
QEMU for more information.
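A sketch of the difference (image names made up); passing the backing
format explicitly avoids the deprecation warning:

    # deprecated: format of the base volume is auto-detected
    qemu-img create -f qcow2 -b base.qcow2 overlay.qcow2

    # explicit backing format, no warning
    qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2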
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
We can use 'list_images' to get the desired volume IDs in
'find_free_diskname' for most plugins. For the two LVM plugins, 'list_images'
potentially skips untagged volumes, and for the RBD plugin it is much more
costly than the custom version, so both keep their custom implementations.
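A minimal sketch of the shared logic (assuming a helper along the
lines of get_next_vm_diskname; names are illustrative, not the exact
code):

    sub find_free_diskname {
        my ($class, $storeid, $scfg, $vmid, $fmt, $add_fmt_suffix) = @_;

        # collect the volume IDs already in use for this guest
        my $disk_list = [ map { $_->{volid} } @{$class->list_images($storeid, $scfg, $vmid)} ];

        return PVE::Storage::Plugin::get_next_vm_diskname(
            $disk_list, $storeid, $vmid, $fmt, $scfg, $add_fmt_suffix);
    }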
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Trying to create a qcow2 disk image with a size larger than the space
available on the storage will fail.
As qemu-img does not clean up the disk afterwards, it needs to be deleted
explicitly. Further, the vmid folder is cleaned up once it is empty.
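A rough sketch of the cleanup (error handling simplified; run_command
is PVE::Tools::run_command, $path and $imagedir are placeholders):

    eval {
        run_command(['/usr/bin/qemu-img', 'create', '-f', 'qcow2', $path, "${size}K"]);
    };
    if (my $err = $@) {
        unlink $path;     # remove the leftover image file
        rmdir $imagedir;  # only succeeds once the vmid folder is empty
        die $err;
    }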
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Use the port the main glusterfs daemon listens on as the ping port;
it is also the one QEMU uses by default.
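Sketch of the check, assuming PVE::Network::tcp_ping and the standard
glusterd port:

    # probe the management port that QEMU also connects to by default
    my $status = PVE::Network::tcp_ping($server, 24007, 2);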
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Takes an operation, an optional requested bandwidth limit override,
and a list of storages involved in the operation, and clamps the
requested bandwidth to the global and storage-specific limits unless
the user has permission to override those.
This means:
* Global limits apply to all users without Sys.Modify on /
  (users with it can change datacenter.cfg options via the API
  anyway).
* Storage specific limits apply to users without
Datastore.Allocate access on /storage/X for any involved
storage X.
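A hypothetical call site (function and parameter names are
illustrative):

    # returns the effective bandwidth limit to use for the operation
    my $bwlimit = PVE::Storage::get_bandwidth_limit(
        'migration', [$source_storeid, $target_storeid], $requested_limit);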
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This works around a bug where QEMU does not align the qcow2 file when
using the filesystem directly, which causes the gluster block driver
to refuse to read from it.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
So far this only prevented the creation of the toplevel
directory. This does not cover all problem cases,
particularly when said directory is supposed to be a mount
point, including NFS and glusterfs besides ZFS.
The directory based storages we have already use mkpath
whenever they need to create files, and for actions on files
which are supposed to exist it's fine if it errors out.
So it should also be safe to skip the creation of standard
subdirectories in activate_storage().
Additionally, NFS and glusterfs storages should also accept the mkdir
option, as they may otherwise exhibit similar issues, e.g. when an NFS
storage is mounted onto a directory inside a ZFS subvolume.
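For illustration, directory creation could then be disabled on such a
storage in /etc/pve/storage.cfg (storage name and paths made up):

    nfs: backup-nfs
        server 192.0.2.10
        export /export/backup
        path /mnt/pve/backup-nfs
        content backup
        mkdir 0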
The parse_proc_mounts change made the glusterfs is_mounted
check fail (causing it to be shown as inactive on the GUI).
The NFS check became stricter (no longer allowing a trailing / in the
source).
This makes no sense because it should always be exclusive.
Also, RBD checks this itself.
LVM has no possibility to use lvchange for this.
For DRBD, this feature is not implemented.