Commit Graph

293 Commits

0227e28e8e get bandwidth limit: improve detecting if storages are involved
Previously, calling with e.g. $storage_list = [undef] would lead to an
early return of $override, without considering the limit from
datacenter.cfg.

Refactoring the bandwidth limit handling for migration introduced
calls such as described above, which broke applying the limit from
datacenter.cfg for VM RAM/state migration.

Reported in the community forum:
https://forum.proxmox.com/threads/37920/post-513005

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2022-11-24 08:25:12 +01:00
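
To illustrate the fix: a minimal Perl sketch of the improved detection,
with variable names made up for the example (the actual code in
pve-storage differs):

    # minimal sketch (not the actual code): only count defined entries
    # when deciding whether any storage is involved
    my @storage_ids = grep { defined($_) } @{ $storage_list // [] };
    if (!@storage_ids) {
        # no storage involved: fall through so the datacenter.cfg limit
        # is still considered instead of returning $override early
    }
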
71460c8ace (remote) export: check and untaint format
this format comes from the remote cluster, so it might not be supported
on the source side - checking whether it's known (as an additional
safeguard) and untainting (to avoid open3 failure) is required.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
 [ T: squashed in canonical perl array ref access ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-09-29 14:21:08 +02:00
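
As a sketch of the check-and-untaint pattern described above (the
format list is an assumption, not taken from the commit):

    # reject formats the source side does not know about
    my %known_format = map { $_ => 1 } qw(raw qcow2 vmdk);  # assumed list
    die "unsupported format '$format'\n" if !$known_format{$format};
    # untaint via regex capture, otherwise open3() refuses tainted input
    ($format) = ($format =~ /^(\S+)$/);
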
90778f7cdd Added a LOG_EXT constant as a counterpart to NOTES_EXT
and refactored usages of .log and .notes to use them.
In some parts of the test case code I had to introduce new variables
to keep the line length within the 100-column limit.

Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-15 10:49:22 +02:00
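
A minimal sketch of what such constants can look like (values inferred
from the extensions named above):

    use constant {
        NOTES_EXT => '.notes',
        LOG_EXT   => '.log',
    };

    # usage: derive the auxiliary file name from the archive path
    my $notes_file = $archive . NOTES_EXT;
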
3048641bb4 Switched to using log_warn of PVE::RESTEnvironment
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-15 10:49:22 +02:00
e573445e93 Adapted unlink calls for archive files in case of ENOENT
This improves handling when two concurrent archive-remove calls race:
formerly, one of them would encounter an error; now both finish
successfully.

Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-15 10:49:22 +02:00
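
The underlying pattern, as a hedged sketch rather than the exact code:
treat ENOENT as success, so the loser of the race also finishes cleanly:

    use POSIX qw(ENOENT);

    # a concurrent call may have removed the file already; only a
    # still-existing file that cannot be removed is a real error
    if (!unlink($archive) && $! != ENOENT) {
        die "removing archive $archive failed: $!\n";
    }
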
c3e2ff806f fix #3972: Remove the .notes file when a backup is deleted
When a VM or Container backup was deleted, the .notes file was not
removed, so over time the dump folder would get polluted with notes
for backups that no longer existed. As backup names contain a
timestamp, the notes cannot be reused anyway, so I think it is safe to
delete them just like we do with the .log file.

Furthermore, I moved the deletion of the log and notes files into a
new function called "archive_auxiliaries_remove". Additionally, the
archive_info object now returns one more field containing the name of
the notes file. The test cases have to be adapted to expect this new
value, as the package will not compile otherwise.

Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-15 10:49:22 +02:00
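
A sketch of what such a helper could look like (only the function name
is from the commit; the body is assumed):

    use POSIX qw(ENOENT);

    sub archive_auxiliaries_remove {
        my ($path) = @_;
        # remove the .log and .notes files alongside the backup archive
        for my $ext (qw(.log .notes)) {
            my $file = "$path$ext";
            if (!unlink($file) && $! != ENOENT) {
                warn "could not remove $file: $!\n";
            }
        }
    }
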
8009417d0d plugins: allow limiting the number of protected backups per guest
The ability to mark backups as protected broke the implicit assumption
in vzdump that remove=1, together with the current number of backups
being at the limit (i.e. the sum of all keep options), will result in a
backup being removed.

Introduce a new storage property 'max-protected-backups' to limit the
number of protected backups per guest. Use 5 as a default value, as it
should cover most use cases, while still not having too big of a
potential overhead in many scenarios.

For external plugins that do not return the backup subtype in
list_volumes, all protected backups with the same ID will count
towards the limit.

An alternative would be to count the protected backups when pruning.
While that would avoid the need for a new property, it would break the
current semantics of protected backups being ignored for pruning. It
also would be less flexible, e.g. for PBS, it can make sense to have
both keep-all=1 and a limit for the number of protected snapshots on
the PVE side.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-06 09:47:12 +02:00
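
For illustration, the new property as it might appear in
/etc/pve/storage.cfg (storage name and values are examples; 5 is the
stated default):

    dir: local
        path /var/lib/vz
        content backup
        max-protected-backups 5
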
21c7b54622 check volume access: add content type parameter
Adding such a check here avoids the need to parse at the call sites in
some cases.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
42352a4988 check volume access: allow for images/rootdir if user has VM.Config.Disk
Listing guest images should not require Datastore.Allocate in this
case. In preparation for adding disk import to the GUI.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
3e1a618e34 check volume access: always allow with Datastore.Allocate privilege
Such users are supposed to be administrators of the storage, but
previously, access to backups was not allowed unless they also had
VM.Backup.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
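
A hedged sketch of the described short-circuit, assuming the usual
PVE::RPCEnvironment check API (function and variable names are
illustrative, not the actual code):

    use PVE::RPCEnvironment;

    sub check_volume_access_sketch {
        my ($user, $storeid) = @_;
        my $rpcenv = PVE::RPCEnvironment::get();
        # storage administrators may access any volume on the storage,
        # including backups, without additionally needing VM.Backup
        return if $rpcenv->check($user, "/storage/$storeid", ['Datastore.Allocate'], 1);
        # ... otherwise fall through to the per-content-type checks ...
    }
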
38431808b4 activate storage: improve error when check_connection dies
by making sure the storage ID is part of the error. This can happen
for (at least) CIFS, and GlusterFS with a local server.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-03-16 17:36:38 +01:00
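
The pattern, as a minimal sketch (not the exact code):

    # make sure the storage ID ends up in the error message when
    # check_connection dies (e.g. unreachable CIFS server)
    eval { $plugin->check_connection($storeid, $scfg) };
    die "activating storage '$storeid' failed: $@" if $@;
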
18bf2e59da storage/plugin: factoring out regex for backup extension RE
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2022-03-16 17:13:59 +01:00
cd461a5012 storage: rename REs for iso and vztmpl extensions
these changes make it clearer how many capture groups each
RE includes.

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2022-03-16 17:13:59 +01:00
3363fcf65e add volume_import/export_start helpers
exposing the two halves of a storage migration for usage across
cluster boundaries.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-09 18:52:59 +01:00
7a158d0bc9 storage_migrate: pull out import/export_prepare
for re-use with remote migration, where import and export happen on
different clusters connected via a websocket instead of SSH tunnel.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-09 18:52:59 +01:00
686b07376f storage_migrate_snapshot: skip for btrfs without snapshots
this allows migrating from btrfs to other raw+size accepting storages,
provided no snapshots exist.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-09 18:52:59 +01:00
05b07a67f5 storage_migrate: pull out snapshot decision
into new top-level helper for re-use with remote migration.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-04 17:30:42 +01:00
f78f26ae60 volname_for_storage: parse volname before calling
to allow reusing this with remote migration, where parsing of the source
volid has to happen on the source node, but this call has to happen on
the target node.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-04 17:30:42 +01:00
95dfa44ca1 add disk rename feature
Functionality has been added for the following storage types:

* directory ones, based on the default implementation:
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph

A new feature `rename` has been introduced to mark which storage
plugin supports the feature.

Version API and AGE have been bumped.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>

the intention of this feature is to support the following use-cases:
- reassign a volume from one owning guest to another (which usually
  entails a rename, since the owning vmid is encoded in the volume name)
- rename a volume (e.g., to use a more meaningful name instead of the
  auto-assigned ...-disk-123)

only the former is implemented at the caller side in
qemu-server/pve-container for now, but since the lower-level feature is
basically the same for both, we can take advantage of the storage plugin
API bump now to get the building block for this future feature in place
already.

adapted the ApiChangelog change to fix conflicts and added more detail above

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-09 17:02:29 +01:00
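
A hedged sketch of how a caller might use the new feature (the 'rename'
feature name is from the commit; the rename_volume call shape is an
assumption):

    use PVE::Storage;

    # only attempt a rename if the storage plugin advertises the feature
    if (PVE::Storage::volume_has_feature($cfg, 'rename', $volid)) {
        # reassign the volume to another owning guest (call shape assumed)
        my $new_volid = PVE::Storage::rename_volume($cfg, $volid, $target_vmid);
    }
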
56897a9203 fix #3307: make it possible to set protection for backups
A protected backup is not removed by free_image and ignored when
pruning.

The protection_file_path function is introduced in Storage.pm, so that
it can also be used by vzdump itself and in archive_remove.

For pruning, renamed backups already behaved similarly to how protected
backups will, but there are a few reasons to not just use that for
implementing the new feature:
1. It wouldn't protect against removal.
2. It would make it necessary to rename notes and log files too.
3. It wouldn't naturally extend to other volumes if that's needed.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00
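
A minimal sketch of what such a helper could look like (only the
function name is from the commit; the '.protected' suffix is an
assumption):

    # marker file next to the backup archive; its presence means the
    # backup is protected ('.protected' suffix is an assumption)
    sub protection_file_path {
        my ($path) = @_;
        return "${path}.protected";
    }

    # free_image and archive_remove can then refuse removal:
    die "backup is protected\n" if -e protection_file_path($archive);
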
9a4c0e8471 prune mark: preserve additional information for the keep-all case
Currently, the only such information is whether an entry is already
marked as 'protected'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00
f1de828166 add generalized functions to manage volume attributes
replacing the ones for handling notes. To ensure backwards
compatibility with external plugins, all plugins that do not just call
another implementation need to call $class->{get, update}_volume_notes
when the attribute is 'notes' to catch any derived implementations.

This is mainly done to avoid the need to add new methods every time a
new attribute is added.

Not adding a timeout parameter like the notes functions have, because
it was not used and can still be added if it ever is needed in the
future.

For get_volume_attribute, undef will indicate that the attribute is
not supported. This makes it possible to distinguish "not supported"
from "error getting the attribute", which is useful when the attribute
is important for an operation. For example, free_image checking for
protection (introduced in a later patch) can abort if getting the
'protected' attribute fails.

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00
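
A sketch of the compatibility dispatch described above (method names
are from the commit message; the body is assumed):

    sub get_volume_attribute {
        my ($class, $scfg, $storeid, $volname, $attribute) = @_;
        if ($attribute eq 'notes') {
            # route through the old method so external plugins that only
            # override get_volume_notes keep working
            return $class->get_volume_notes($scfg, $storeid, $volname);
        }
        return undef; # undef signals "attribute not supported"
    }
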
a799f7529b bump APIVER and APIAGE
Added blockers parameter to volume_rollback_is_possible.
Replaced volume_snapshot_list with volume_snapshot_info.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:35:53 +01:00
dc992e7b89 plugin: remove volume_snapshot_list
which was only used by replication, but now replication uses
volume_snapshot_info instead.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:35:53 +01:00
8c20d8afa3 plugin: add volume_snapshot_info function
which allows for better choices of common replication snapshots.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:35:53 +01:00
9a5d50950c zfspool: add blockers parameter to volume_snapshot_is_possible
useful for rollback, so that only the required replication snapshots
can be removed, and it's possible to abort early without deleting any
replication snapshots if there are other non-replication snapshots
blocking rollback.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:34:00 +01:00
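
A hedged sketch of the blockers out-parameter in use, matching the
volume_rollback_is_possible name from the API bump above (call shape
and snapshot naming are assumptions):

    # collect which snapshots block the rollback instead of just failing
    my $blockers = [];
    my $ok = eval {
        $class->volume_rollback_is_possible($scfg, $storeid, $volname, $snap, $blockers);
    };
    # abort early unless every blocker is a replication snapshot
    # (replication snapshots are prefixed with __replicate_)
    my @other = grep { $_ !~ /^__replicate_/ } @$blockers;
    die "rollback blocked by snapshot(s): @other\n" if !$ok && @other;
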
2caa1194e9 ct templates: support zstd compressed archives
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 09:06:51 +02:00
a000e26ce7 prune {validate, mark}: preserve input parameter
While the current way to detect settings like { 'keep-last' => 0 } is
concise, it's also wrong, because the delete operation is visible
to the caller. This resulted in e.g.
    # $hash is { 'keep-all' => 1 }
    my $s = print_property_string($hash, 'prune-backups');
    # $hash is now {}, $s is 'keep-all=1'
because validation is called in print_property_string. The same issue
is present when calling prune_mark_backup_group.

Because validation complains when keep-all and something else are set,
this shouldn't have caused any real issues, besides vzdump with
keep-all=1 wrongly taking the removal path, but without any settings,
so not removing anything:
    INFO: prune older backups with retention:
    INFO: pruned 0 backup(s)

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-09 18:07:13 +02:00
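
One way to avoid the visible delete, as a sketch (the actual patch may
differ): validate a shallow copy so the caller's hash stays intact:

    # validate a shallow copy so the caller's hash is left untouched
    my $opts = { %$prune_backups };  # values are scalars, shallow copy suffices
    # normalizing entries like { 'keep-last' => 0 } may now delete keys
    # without the caller observing the change
    delete $opts->{'keep-last'} if defined($opts->{'keep-last'}) && !$opts->{'keep-last'};
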
576e560d17 extract backup config: less precise matching for broken pipe detection
Extracting the config for zstd compressed vma files was broken:
    Failed to extract config from VMA archive: zstd: error 70 : Write
    error : cannot write decoded block : Broken pipe (500)
since the error message changed and wouldn't match anymore.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-06 16:24:22 +02:00
edda43ed4f status: factoring out normalize_content_filename
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2021-06-23 22:28:44 +02:00
a0e3e224ea btrfs: add 'btrfs' import/export format
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
3cc29a0487 bump storage API: update import/export methods
Bumps APIVER to 9 and resets APIAGE to zero.

The import methods (volume_import, volume_import_formats):

These additionally get the '$snapshot' parameter which is
already present on the export side as an informational piece
to know which of the snapshots is the *current* one.
This parameter is inserted *in the middle* of the current
parameters, so the import & export format methods now have
the same signatures.
The current "disk" state will be set to this snapshot.
This, too, is required for our btrfs implementation.
  `volume_import_formats` can obviously not make much
*use* of this parameter, but it'll still be useful to know
that the information is actually available in the import
call, so its presence will be checked in the btrfs
implementation.

Currently this is intended to be used for btrfs send/recv
support, which in theory could also get additional metadata
similar to how we do the "tar+size" format, however, we
currently only really use this within this repository in
storage_migrate() which has this information readily
available anyway.

On the export side (volume_export, volume_export_formats):

The `$with_snapshots` option is now "defined" to be an
ordered array of snapshots to include, as a hint for
storages which need this. (As of the next commit this is
only btrfs, and only when also specifying a base snapshot,
which is a case we can currently not run into except on the
command line interface.)
  The current providers of the `with_snapshot` option will
still treat it as a boolean (since eg. for ZFS you cannot
really "skip" snapshots AFAIK).
  This is mainly intended for storages which do not have a
strong association between snapshots and the originals, or
an ordering (eg. btrfs and lvm-thin allow creating
arbitrary snapshot trees, and with btrfs you can even
create a "circular" connection between subvolumes, also we
could consider reflink based copies snapshots on xfs in
the future maybe?)

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
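
For reference, a sketch of the aligned signatures after the bump
(parameter order inferred from the description above; consult
PVE/Storage/Plugin.pm for the authoritative version):

    sub volume_export {
        my ($class, $scfg, $storeid, $fh, $volname, $format,
            $snapshot, $base_snapshot, $with_snapshots) = @_;
        ...
    }

    sub volume_import {
        my ($class, $scfg, $storeid, $fh, $volname, $format,
            $snapshot, $base_snapshot, $with_snapshots, $allow_rename) = @_;
        ...
    }
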
af50c2e671 add BTRFS storage plugin
This is mostly the same as a directory storage, with a few major
differences:

* 'subvol' volumes are actual btrfs subvolumes and therefore
  allow snapshots
* 'raw' files are placed *into* a subvolume and therefore
  also allow snapshots, the raw file for volume
  `btrstore:100/vm-100-disk-1.raw` can be found under
  `$path/images/100/vm-100-disk-1/disk.raw`
* in both cases, snapshots add an '@name' suffix to the
  subvolume's directory name, so snapshot 'foo' of the above
  would be found under
  `$path/images/100/vm-100-disk-1@foo/disk.raw`
  or for format "subvol":
  `$path/images/100/subvol-100-disk-1.subvol@foo`

Note that qgroups aren't included in btrfs-send streams,
therefore for now we will only be using *unsized* subvolumes
for containers and place a regular raw+ext4 file for sized
containers.
We could extend the import/export stream format to include
the information at the front (similar to how we do the
"tar+size" format, but we need to include the size of all
the contained snapshots as well, since they can technically
change). (But before enabling quotas we should do some
performance testing on bigger file systems with multiple
snapshots as there are quite a few reports of the fs slowing
down considerably in such scenarios).

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
bba10cf4af factoring out regex for vztmpl
stores the regex definition in PVE::Storage.

One test had to be adapted because it tested obsolete code. Namely,
it expected vztmpl to only end with .tar.gz, but the new regex also
includes .tar.xz; there is nothing against allowing .tar.xz files as
vztmpl files.

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2021-06-23 20:19:09 +02:00
ffc31266da tree-wide: fix typos with codespell
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:28:48 +02:00
d96b789aed vdisk_list: only scan storages with the correct content type(s)
The enabled check in the lower loop is now redundant and can be removed.

If storeid is provided, initialize the result hash accordingly, mainly for
backwards compatibility (needed by a caller in pve-manager's Ceph/Pools.pm and
the migration code in pve-container and qemu-server), but it also is less
surprising in general.

Remaining vdisk_list users that do not specify a content type are:
    1. pve-manager's Pool/Ceph.pm, but the content type for RBD can only be
       rootdir and images, so the storage is scanned (if enabled, same as
       before).
    2. pve-container migration
    3. qemu-server migration
For the latter two, it's planned to enforce content type, so the change is fine
too.

This also means that for iscsi(direct) plugins with content type 'none', i.e.
"use LUNs directly", the list of images is not returned anymore, but that was
rather a bug anyway, as they're not virtual disks then:
    0.0.0.scsi-36001405b8f2772e13a04b8e9390db13d
All of the remaining callers not using content types (see above) are fine with
that change too.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 11:21:45 +02:00
823e8afe72 plugin loader: text-width cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-18 18:33:20 +02:00
bbadd1659d config: mention that maxfiles is deprecated
Don't add an explicit deprecation warning on parsing (yet); this is already done in
the pve6to7 script. Also, automatic conversion to 'prune-backups' happens when
the section config is read, so over time fewer users should be affected.
Postpone explicit warning/dropping the parameter to a future major release.

Also switch the setting for the default 'local' storage to 'prune-backups'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
883c811f7f prune backups: activate storage
which also checks whether the storage is even enabled. VZDump jobs already
activate the storage, but more direct calls via API/CLI didn't do so yet.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-15 10:11:17 +02:00
522cd32738 remove some more DRBD references
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-12 13:14:17 +02:00
4b84ad5e25 Optionally allow blockdev in abs_filesystem_path
This is required to import from LVM storages, for example

Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
2021-03-31 11:00:25 +02:00
2c5246e1ea vdisk list: allow specifying content type
and only scan storages that support it if specified.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-31 10:22:52 +02:00
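
A hedged usage sketch (parameter order assumed):

    use PVE::Storage;

    # with the new parameter, only storages that have content type
    # 'images' configured are scanned
    my $res = PVE::Storage::vdisk_list($cfg, $storeid, $vmid, undef, 'images');
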
26d7314be0 Revert "vdisk list: only collect images from storages with an appropriate content type"
This reverts commit a44c18925d and adds a reminder
comment.

The mentioned commit is actually a backwards-incompatible change that leads to
slightly different behavior when migrating a VM with volumes on a misconfigured
storage. For example, unreferenced volumes on a misconfigured storage won't be
picked up, even though they were before. And for referenced volumes on a
misconfigured storage, the disk size would not be updated on migration anymore.

We should wait until the next major release for this change and then also
re-evaluate the migration behavior with misconfigured disks.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-31 10:22:52 +02:00
a44c18925d vdisk list: only collect images from storages with an appropriate content type
Only these storages are activated in the first place, and it's bad behavior to
list images when no appropriate content type is set.

For example, on VM destruction, this avoids unreferenced images being deleted
from a storage with only 'backup' content type set, which is supposedly what
happened in this[0] forum thread.

(Some) callers expect all keys to be present and valid array references in the
result, so initialization is needed.

Now, the enabled check is already done by the preceding code for every element
that is iterated over, and thus isn't needed in the main loop anymore.

[0]: https://forum.proxmox.com/threads/erasing-all-vm-disks-after-a-failed-vm-migration-task.85068

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-15 13:36:53 +01:00
7cdc75a290 storage migration: insecure: improve logging
by including the message/error from the remote side. Some people on the forum[0]
ran into 'no tunnel IP received', but without information from the remote side
it's hard to tell why.

[0]: https://forum.proxmox.com/threads/failed-no-tunnel-ip-received.80172

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-02-19 15:29:04 +01:00
50853be2c5 remove lock from is_base_and_used check
and squash the __no_lock-variant into it.

This lock is not broad enough, because for a caller that plans to do or not do
some storage operation based on the result of the check, the following could
happen:
1. volume_is_base_and_used is called and the result is used to enter a branch
2. situation on the storage changes in the meantime
3. the branch chosen in 1. might not be the one that should be taken anymore

This means that callers are responsible for locking, and luckily the existing
callers do use their own locks already:
1. vdisk_free used the __no_lock-variant with a broader lock also covering
   the free operation.
2. vdisk_clone is not a caller, but is relevant, and it does lock the storage.
3. the calls during VM migration and VM destruction happen in the context of a
   locked VM config. Because the clone operation also locks the VM config, it
   cannot happen that a linked clone is created while the template VM is
   migrated away or destroyed or vice versa. And even if that were the case,
   the base disk would not be freed, because of what vdisk_free/vdisk_clone do.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-02-06 14:47:40 +01:00
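
As a sketch of the caller-side locking now expected (the
cluster_lock_storage helper from PVE::Storage::Plugin is assumed here;
the actual callers use their own locks, as listed above):

    use PVE::Storage;
    use PVE::Storage::Plugin;

    # take the storage lock in the caller, so it covers both the check
    # and the operation that depends on its result
    PVE::Storage::Plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
        my $is_used = PVE::Storage::volume_is_base_and_used($cfg, $volid);
        # ... free, clone or migrate based on $is_used, still under the lock ...
    });
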
d3a5e30963 drop absolute udevadm path
the compat symlink from bin to sbin has been dropped with bullseye, and
we rely on PATH being set properly in our daemons/CLI tools anyway.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 18:27:14 +01:00
189e67ffa6 fix #3199: by fixing usage of strftime
In a very early version I wanted to parse the date from the backup
name, and when switching to using the ctime and localtime() instead,
I forgot to update the usage of strftime.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-15 14:40:45 +01:00
10dfeb9ef0 prune mark: correctly keep track of already included backups
This needs to happen in a separate loop, because some time intervals are not
subsets of others, i.e. weeks and months. Previously, with a daily backup
schedule, having:
* a backup on Sun, 06 Dec 2020 kept by keep-daily
* a backup on Sun, 29 Nov 2020 kept by keep-weekly
would lead to the backup on Mon, 30 Nov 2020 being selected for keep-monthly,
because the iteration did not yet reach the backup on Sun, 29 Nov 2020 that
would mark November as being covered.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-12-14 16:11:29 +01:00
e9991d2694 Storage/Plugin: add get/update_volume_comment and implement for dir
and add the appropriate API call to set and get the comment.
We need to bump APIVER for this and can bump APIAGE, since
we only use it at this new call, which can work with the default
implementation.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-24 10:23:25 +01:00