A protected backup is not removed by free_image and is ignored when
pruning.
The protection_file_path function is introduced in Storage.pm, so that
it can also be used by vzdump itself and in archive_remove.
For pruning, renamed backups already behaved similarly to how protected
backups will, but there are a few reasons not to just use that for
implementing the new feature:
1. It wouldn't protect against removal.
2. It would make it necessary to rename notes and log files too.
3. It wouldn't naturally extend to other volumes if that's needed.
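For illustration, such a helper can be as simple as deriving a marker-file
path from the archive path (a sketch; the '.protected' suffix here is an
assumption, not necessarily the actual one):

    sub protection_file_path {
        my ($path) = @_;
        # marker file placed next to the backup archive
        return "${path}.protected";
    }

    # free_image and pruning can then skip the backup whenever
    # -e protection_file_path($path) is true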
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
replacing the ones for handling notes. To ensure backwards
compatibility with external plugins, all plugins that do not just call
another implementation need to call $class->{get, update}_volume_notes
when the attribute is 'notes' to catch any derived implementations.
This is mainly done to avoid the need to add new methods every time a
new attribute is added.
Not adding a timeout parameter like the notes functions have, because
it was not used and can still be added if it is ever needed in the
future.
For get_volume_attribute, undef will indicate that the attribute is
not supported. This makes it possible to distinguish "not supported"
from "error getting the attribute", which is useful when the attribute
is important for an operation. For example, free_image checking for
protection (introduced in a later patch) can abort if getting the
'protected' attribute fails.
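For illustration, a plugin implementation following the rules above might
look like this (a sketch; error handling omitted):

    sub get_volume_attribute {
        my ($class, $scfg, $storeid, $volname, $attribute) = @_;

        if ($attribute eq 'notes') {
            # forward to the existing method so that derived plugins
            # overriding get_volume_notes keep working
            return $class->get_volume_notes($scfg, $storeid, $volname);
        }

        return; # undef means: attribute not supported
    }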
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
useful for rollback, so that only the required replication snapshots
can be removed, and it's possible to abort early without deleting any
replication snapshots if there are other non-replication snapshots
blocking rollback.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
While the current way to detect settings like { 'keep-last' => 0 } is
concise, it's also wrong, because the delete operation is visible
to the caller. This resulted in e.g.
# $hash is { 'keep-all' => 1 }
my $s = print_property_string($hash, 'prune-backups');
# $hash is now {}, $s is 'keep-all=1'
because validation is called in print_property_string. The same issue
is present when calling prune_mark_backup_group.
Because validation complains when keep-all and something else are set,
this shouldn't have caused any real issues, besides vzdump with
keep-all=1 wrongly taking the removal path, but without any settings,
so not removing anything:
INFO: prune older backups with retention:
INFO: pruned 0 backup(s)
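A non-mutating way to perform the same check could look roughly like this
($res is the hash passed in by the caller; a sketch, not the actual fix):

    # detect effective retention options without deleting zero-valued
    # keys from the caller's hash
    my $keep_all = $res->{'keep-all'};
    my $other_keeps = grep { $_ ne 'keep-all' && $res->{$_} > 0 } keys %$res;
    die "keep-all cannot be combined with other options\n"
        if $keep_all && $other_keeps;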
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Extracting the config for zstd-compressed vma files was broken:
Failed to extract config from VMA archive: zstd: error 70 : Write
error : cannot write decoded block : Broken pipe (500)
since the error message changed and wouldn't match anymore.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Bumps APIVER to 9 and resets APIAGE to zero.
The import methods (volume_import, volume_import_formats):
These additionally get the '$snapshot' parameter which is
already present on the export side as an informational piece
to know which of the snapshots is the *current* one.
This parameter is inserted *in the middle* of the current
parameters, so the import & export format methods now have
the same signatures.
The current "disk" state will be set to this snapshot.
This, too, is required for our btrfs implementation.
`volume_import_formats` can obviously not make much
*use* of this parameter, but it'll still be useful to know
that the information is actually available in the import
call, so its presence will be checked in the btrfs
implementation.
Currently this is intended to be used for btrfs send/recv
support, which in theory could also get additional metadata
similar to how we do the "tar+size" format. However, we
currently only really use this within this repository in
storage_migrate(), which has this information readily
available anyway.
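With the new parameter, the two format methods might line up like this (a
sketch based on the description above; Plugin.pm has the authoritative
signatures):

    # export side (unchanged):
    sub volume_export_formats {
        my ($class, $scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots) = @_;
        ...
    }

    # import side, with $snapshot inserted so both signatures now match:
    sub volume_import_formats {
        my ($class, $scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots) = @_;
        ...
    }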
On the export side (volume_export, volume_export_formats):
The `$with_snapshots` option is now "defined" to be an
ordered array of snapshots to include, as a hint for
storages which need this. (As of the next commit this is
only btrfs, and only when also specifying a base snapshot,
which is a case we can currently not run into except on the
command line interface.)
The current providers of the `with_snapshots` option will
still treat it as a boolean (since e.g. for ZFS you cannot
really "skip" snapshots AFAIK).
This is mainly intended for storages which do not have a
strong association between snapshots and the originals, or
an ordering (e.g. btrfs and lvm-thin allow creating
arbitrary snapshot trees, with btrfs you can even create a
"circular" connection between subvolumes, and we could
perhaps consider reflink-based copies on XFS to be
snapshots in the future).
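In other words, existing callers may keep passing a boolean, while storages
that care about ordering can be given a hint (the values here are
illustrative):

    # most plugins keep treating it as a truthy flag:
    $with_snapshots = 1;
    # storages like btrfs can receive an ordered list instead:
    $with_snapshots = ['snap-a', 'snap-b', 'snap-c'];   # oldest to newest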
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This is mostly the same as a directory storage, with 2 major
differences:
* 'subvol' volumes are actual btrfs subvolumes and therefore
allow snapshots
* 'raw' files are placed *into* a subvolume and therefore
also allow snapshots; the raw file for volume
`btrstore:100/vm-100-disk-1.raw` can be found under
`$path/images/100/vm-100-disk-1/disk.raw`
* in both cases, snapshots add an '@name' suffix to the
subvolume's directory name, so snapshot 'foo' of the above
would be found under
`$path/images/100/vm-100-disk-1@foo/disk.raw`
or for format "subvol":
`$path/images/100/subvol-100-disk-1.subvol@foo`
Note that qgroups aren't included in btrfs-send streams,
therefore for now we will only be using *unsized* subvolumes
for containers and place a regular raw+ext4 file for sized
containers.
We could extend the import/export stream format to include
the information at the front (similar to how we do the
"tar+size" format, but we need to include the size of all
the contained snapshots as well, since they can technically
change). (But before enabling quotas we should do some
performance testing on bigger file systems with multiple
snapshots as there are quite a few reports of the fs slowing
down considerably in such scenarios).
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
stores the regex definition in PVE::Storage.
One test had to be adapted because it tested obsolete behavior: it
expected vztmpl to only end with .tar.gz, but the new regex also
includes .tar.xz, and there is nothing against allowing .tar.xz files
as vztmpl files.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
The enabled check in the lower loop is now redundant and can be removed.
If storeid is provided, initialize the result hash accordingly, mainly for
backwards compatibility (needed by a caller in pve-manager's Ceph/Pools.pm and
the migration code in pve-container and qemu-server), but it also is less
surprising in general.
Remaining vdisk_list users that do not specify a content type are:
1. pve-manager's Pool/Ceph.pm, but the content type for RBD can only be
rootdir and images, so the storage is scanned (if enabled, same as
before).
2. pve-container migration
3. qemu-server migration
For the latter two, it's planned to enforce content type, so the change is fine
too.
This also means that iscsi(direct) plugins with content type 'none', i.e.
"use LUNs directly", do not return the list of images anymore, but that
was rather a bug anyway, as those are not virtual disks:
0.0.0.scsi-36001405b8f2772e13a04b8e9390db13d
All of the remaining callers not using content types (see above) are fine with
that change too.
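For callers, passing the content type might then look roughly like this
(the exact parameter order is an assumption for illustration):

    # only scan storages that have the 'images' content type enabled
    my $images = PVE::Storage::vdisk_list($cfg, $storeid, $vmid, undef, 'images');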
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Don't add an explicit deprecation warning on parsing (yet); this is already
done in the pve6to7 script. Also, automatic conversion to 'prune-backups'
happens when the section config is read, so over time fewer users should be
affected. Postpone the explicit warning/dropping of the parameter to a
future major release.
Also switch the setting for the default 'local' storage to 'prune-backups'.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which also checks whether the storage is even enabled. VZDump jobs already
activate the storage, but more direct calls via API/CLI didn't do so yet.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This reverts commit a44c18925d and adds a reminder
comment.
The mentioned commit is actually a backwards-incompatible change that leads to
slightly different behavior when migrating a VM with volumes on a misconfigured
storage. For example, unreferenced volumes on a misconfigured storage won't be
picked up, even though they were before. And for referenced volumes on a
misconfigured storage, the disk size would not be updated on migration anymore.
We should wait until the next major release for this change and then also
re-evaluate the migration behavior with misconfigured disks.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Only these storages are activated in the first place, and it's bad behavior to
list images when no appropriate content type is set.
For example, on VM destruction, this avoids unreferenced images being deleted
from a storage with only 'backup' content type set, which is supposedly what
happened in this[0] forum thread.
(Some) callers expect all keys to be present and valid array references in the
result, so initialization is needed.
Now, the enabled check is already done by the preceding code for every element
that is iterated over, and thus isn't needed in the main loop anymore.
[0]: https://forum.proxmox.com/threads/erasing-all-vm-disks-after-a-failed-vm-migration-task.85068
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and squash the __no_lock-variant into it.
This lock is not broad enough, because for a caller that plans to do or not do
some storage operation based on the result of the check, the following could
happen:
1. volume_is_base_and_used is called and the result is used to enter a branch
2. situation on the storage changes in the meantime
3. the branch chosen in 1. might not be the one that should be taken anymore
This means that callers are responsible for locking, and luckily the existing
callers do use their own locks already:
1. vdisk_free used the __no_lock-variant with a broader lock also covering
the free operation.
2. vdisk_clone is not a caller, but is relevant and it does lock the storage
3. the calls during VM migration and VM destruction happen in the context of a
locked VM config. Because the clone operation also locks the VM config, it
cannot happen that a linked clone is created while the template VM is
migrated away or destroyed or vice versa. And even if that were the case,
the base disk would not be freed, because of what vdisk_free/vdisk_clone do.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
the compat symlink from bin to sbin has been dropped with bullseye, and
we rely on PATH being set properly in our daemons/CLI tools anyway.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
In a very early version I wanted to parse the date from the backup
name, and when switching to using the ctime and localtime() instead,
I forgot to update the usage of strftime.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This needs to happen in a separate loop, because some time intervals are not
subsets of others, e.g. weeks and months. Previously, with a daily backup
schedule, having:
* a backup on Sun, 06 Dec 2020 kept by keep-daily
* a backup on Sun, 29 Nov 2020 kept by keep-weekly
would lead to the backup on Mon, 30 Nov 2020 being selected for keep-monthly,
because the iteration did not yet reach the backup on Sun, 29 Nov 2020 that
would mark November as being covered.
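Conceptually, the fix amounts to something like the following sketch (the
helper period_of and the variables $opt, $limit and @sorted_backups are
illustrative, not the actual code): first record, in its own pass, the
periods already covered by backups kept so far, and only then pick
additional backups for the current option.

    # pass 1: mark periods (e.g. "2020-48" for weeks, "2020-11" for months)
    # already covered by backups kept by previously handled options
    my %covered;
    for my $backup (@sorted_backups) {            # newest to oldest
        next if ($backup->{mark} // '') ne 'keep';
        $covered{ period_of($backup->{ctime}, $opt) } = 1;
    }
    # pass 2: select additional backups for the current keep-* option
    my $kept = 0;
    for my $backup (@sorted_backups) {
        last if $kept >= $limit;
        next if $backup->{mark};                  # already decided
        my $period = period_of($backup->{ctime}, $opt);
        next if $covered{$period};
        $covered{$period} = 1;
        $backup->{mark} = 'keep';
        $kept++;
    }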
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and add the appropriate API call to set and get the comment.
We need to bump APIVER for this and can bump APIAGE, since we only
use it in this new call, which can work with the default
implementation.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If there are already prune options configured, simply delete the maxfiles
setting. Having set both is invalid from vzdump's perspective anyway, and any
backup job on such a storage failed, meaning a user would've noticed.
If there are no prune options, translate the maxfiles value to keep-last,
except for maxfiles being zero (=unlimited), in which case we use keep-all.
If both are not set, don't set anything, so:
1. Storages don't suddenly have retention options set.
2. People relying on vzdump defaults can still use those.
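The resulting decision tree could be sketched as follows (property-string
encoding and the actual helper code are omitted; this only illustrates the
three cases above):

    my $maxfiles = delete $scfg->{maxfiles};
    if (defined($scfg->{'prune-backups'})) {
        # prune options already configured: just drop maxfiles, the
        # combination was invalid for vzdump anyway
    } elsif (defined($maxfiles)) {
        $scfg->{'prune-backups'} = $maxfiles
            ? { 'keep-last' => $maxfiles }   # translate the old limit
            : { 'keep-all' => 1 };           # maxfiles=0 meant unlimited
    }
    # neither set: leave the storage untouched, vzdump defaults still apply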
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
useful to have an alternative to the old maxfiles = 0. There has to
be a way for vzdump to distinguish between:
1. use the /etc/vzdump.conf default (when no options are configured for the storage)
2. use no limit (when keep-all=1)
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This is basically necessary for the GUI's prune widget, because we want to
pass along all options equal to zero when all the number fields are cleared.
And it's more similar to how it's done in PBS now.
Bumped the APIAGE and APIVER, in case some external plugin needs to adapt to
the now less restrictive schema for 'prune-backups'.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
AFAICT the snapshot activation is not necessary for our plugins at the moment,
but it doesn't really hurt and might be relevant in the future or for external
plugins.
Deactivating volumes is up to the caller, because for example, for replication
on a running guest, we obviously don't want to deactivate volumes.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
In order to take a snapshot of a container volume, which can be mounted
read-only with RBD, the volume needs to be frozen (fsfreeze(8)) before taking
the snapshot.
This commit adds helpers to determine if the FIFREEZE ioctl needs to be called
for the volume.
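A sketch of what such helpers could look like on the plugin side (the method
name used here is an assumption):

    # in the base plugin: no freeze required before taking a snapshot
    sub volume_snapshot_needs_fsfreeze { return 0; }

    # overridden in a plugin like RBD, so callers know to run fsfreeze
    # on the volume's mountpoint before creating the snapshot
    sub volume_snapshot_needs_fsfreeze { return 1; }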
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Commit 8fe00d9944 already
introduced the necessary logging for the secure code path,
so presumably the bug was already fixed for most people.
Delay the potential die for the send command to be able to log
the output+error from the receive command. Like this we also see e.g.
'volume ... already exists' instead of just 'broken pipe'.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Implement it for generic storages supporting backups
(i.e. directory-based storages) and add a wrapper for PBS.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Do not try to scan (and thus activate) storages which aren't
configured to support (or cannot support) "vdisks" anyway.
Avoids seemingly strange failures of VM migrations due to a backup storage
not currently being online, even if that storage isn't referenced in
the VM config anywhere.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we already differentiate between standard and non-standard names and
don't detect and return the VMID in the latter case anyway. Drop it
from the RE as well to allow names like 'vzdump-qemu-template.vma.lzo'
without the need for a fake VMID.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>