Commit Graph

274 Commits

56897a9203 fix #3307: make it possible to set protection for backups
A protected backup is not removed by free_image and is ignored when
pruning.

The protection_file_path function is introduced in Storage.pm, so that
it can also be used by vzdump itself and in archive_remove.

For pruning, renamed backups already behaved similarly to how protected
backups will, but there are a few reasons not to just use that for
implementing the new feature:
1. It wouldn't protect against removal.
2. It would make it necessary to rename notes and log files too.
3. It wouldn't naturally extend to other volumes if that's needed.
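
For illustration, a minimal sketch of what such a helper could look
like (the actual suffix and implementation may differ):

    sub protection_file_path {
        my ($path) = @_;
        # the marker file lives right next to the backup archive
        return "${path}.protected";
    }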

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00
9a4c0e8471 prune mark: preserve additional information for the keep-all case
Currently, if an entry is already marked as 'protected', that mark
would be overwritten with 'keep'. Preserve the existing information
instead.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00
f1de828166 add generalized functions to manage volume attributes
replacing the ones for handling notes. To ensure backwards
compatibility with external plugins, all plugins that do not just call
another implementation need to call $class->{get, update}_volume_notes
when the attribute is 'notes' to catch any derived implementations.

This is mainly done to avoid the need to add new methods every time a
new attribute is added.

Not adding a timeout parameter like the notes functions have, because
it was not used and can still be added if it ever is needed in the
future.

For get_volume_attribute, undef will indicate that the attribute is
not supported. This makes it possible to distinguish "not supported"
from "error getting the attribute", which is useful when the attribute
is important for an operation. For example, free_image checking for
protection (introduced in a later patch) can abort if getting the
'protected' attribute fails.
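
For illustration, a minimal sketch of how a plugin's getter could stay
backwards compatible (the actual implementation may differ):

    sub get_volume_attribute {
        my ($class, $scfg, $storeid, $volname, $attribute) = @_;

        if ($attribute eq 'notes') {
            # go through $class so derived implementations are caught
            return $class->get_volume_notes($scfg, $storeid, $volname);
        }
        return undef; # undef signals "attribute not supported"
    }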

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00
a799f7529b bump APIVER and APIAGE
Added blockers parameter to volume_rollback_is_possible.
Replaced volume_snapshot_list with volume_snapshot_info.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:35:53 +01:00
dc992e7b89 plugin: remove volume_snapshot_list
which was only used by replication, but now replication uses
volume_snapshot_info instead.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:35:53 +01:00
8c20d8afa3 plugin: add volume_snapshot_info function
which allows for better choices of common replication snapshots.
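
For illustration, the result might look like this (key names are
assumptions, treat as a sketch):

    my $info = $plugin->volume_snapshot_info($scfg, $storeid, $volname);
    # e.g. { 'my-snap' => { id => $guid, timestamp => 1636000000 } }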

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:35:53 +01:00
9a5d50950c zfspool: add blockers parameter to volume_rollback_is_possible
useful for rollback, so that only the required replication snapshots
can be removed, and it's possible to abort early without deleting any
replication snapshots if there are other non-replication snapshots
blocking rollback.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:34:00 +01:00
2caa1194e9 ct templates: support zstd compressed archives
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 09:06:51 +02:00
a000e26ce7 prune {validate, mark}: preserve input parameter
While the current way to detect settings like { 'keep-last' => 0 } is
concise, it's also wrong, because the delete operation is visible
to the caller. This resulted in e.g.
    # $hash is { 'keep-all' => 1 }
    my $s = print_property_string($hash, 'prune-backups');
    # $hash is now {}, $s is 'keep-all=1'
because validation is called in print_property_string. The same issue
is present when calling prune_mark_backup_group.

Because validation complains when keep-all and something else is set,
this shouldn't have caused any real issues, besides vzdump with
keep-all=1 wrongly taking the removal path, though with no settings,
so nothing was actually removed:
    INFO: prune older backups with retention:
    INFO: pruned 0 backup(s)
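
For illustration, working on a shallow copy keeps the delete internal
(a sketch, not necessarily the actual fix):

    sub validate_prune_backups {
        my ($prune_backups) = @_;
        my $res = { %$prune_backups }; # copy, so the caller's hash survives
        delete $res->{$_} for grep { !$res->{$_} } keys %$res;
        return $res;
    }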

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-09 18:07:13 +02:00
576e560d17 extract backup config: less precise matching for broken pipe detection
Extracting the config for zstd compressed vma files was broken:
    Failed to extract config from VMA archive: zstd: error 70 : Write
    error : cannot write decoded block : Broken pipe (500)
since the error message changed and wouldn't match anymore.
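
For illustration, matching only the stable part of the message (sketch):

    # the compressor-specific prefix can change between versions, so
    # only check for the essential part when deciding to ignore it:
    die $err if $err && $err !~ m/broken pipe/i;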

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-06 16:24:22 +02:00
edda43ed4f status: factoring out normalize_content_filename
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2021-06-23 22:28:44 +02:00
a0e3e224ea btrfs: add 'btrfs' import/export format
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
3cc29a0487 bump storage API: update import/export methods
Bumps APIVER to 9 and resets APIAGE to zero.

The import methods (volume_import, volume_import_formats):

These additionally get the '$snapshot' parameter which is
already present on the export side as an informational piece
to know which of the snapshots is the *current* one.
This parameter is inserted *in the middle* of the current
parameters, so the import & export format methods now have
the same signatures.
The current "disk" state will be set to this snapshot.
This, too, is required for our btrfs implementation.
  `volume_import_formats` can obviously not make much
*use* of this parameter, but it'll still be useful to know
that the information is actually available in the import
call, so its presence will be checked in the btrfs
implementation.

Currently this is intended to be used for btrfs send/recv
support, which in theory could also get additional metadata
similar to how we do the "tar+size" format. However, we
currently only really use this within this repository, in
storage_migrate(), which has this information readily
available anyway.

On the export side (volume_export, volume_export_formats):

The `$with_snapshots` option is now "defined" to be an
ordered array of snapshots to include, as a hint for
storages which need this. (As of the next commit this is
only btrfs, and only when also specifying a base snapshot,
which is a case we can currently not run into except on the
command line interface.)
  The current providers of the `$with_snapshots` option will
still treat it as a boolean (since e.g. for ZFS you cannot
really "skip" snapshots AFAIK).
  This is mainly intended for storages which do not have a
strong association between snapshots and the originals, or
an ordering (e.g. btrfs and lvm-thin allow creating
arbitrary snapshot trees, and with btrfs you can even
create a "circular" connection between subvolumes; we
could also consider treating reflink-based copies on XFS
as snapshots in the future, maybe).
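
For illustration, the aligned signatures could look like this
(parameter order inferred from the description above, treat as a
sketch):

    sub volume_export_formats {
        my ($class, $scfg, $storeid, $volname,
            $snapshot, $base_snapshot, $with_snapshots) = @_;
        ...;
    }
    sub volume_import_formats {
        my ($class, $scfg, $storeid, $volname,
            $snapshot, $base_snapshot, $with_snapshots) = @_;
        ...;
    }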

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
af50c2e671 add BTRFS storage plugin
This is mostly the same as a directory storage, with a few major
differences:

* 'subvol' volumes are actual btrfs subvolumes and therefore
  allow snapshots
* 'raw' files are placed *into* a subvolume and therefore
  also allow snapshots, the raw file for volume
  `btrstore:100/vm-100-disk-1.raw` can be found under
  `$path/images/100/vm-100-disk-1/disk.raw`
* in both cases, snapshots add an '@name' suffix to the
  subvolume's directory name, so snapshot 'foo' of the above
  would be found under
  `$path/images/100/vm-100-disk-1@foo/disk.raw`
  or for format "subvol":
  `$path/images/100/subvol-100-disk-1.subvol@foo`

Note that qgroups aren't included in btrfs-send streams,
therefore for now we will only be using *unsized* subvolumes
for containers and place a regular raw+ext4 file for sized
containers.
We could extend the import/export stream format to include
the information at the front (similar to how we do the
"tar+size" format, but we need to include the size of all
the contained snapshots as well, since they can technically
change). (But before enabling quotas we should do some
performance testing on bigger file systems with multiple
snapshots as there are quite a few reports of the fs slowing
down considerably in such scenarios).

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
bba10cf4af factoring out regex for vztmpl
stores the regex definition in PVE::Storage.

One test had to be adapted because it tested obsolete behavior.
Namely, it expected vztmpl files to only end with .tar.gz, but the new
regex also includes .tar.xz; there is nothing against allowing .tar.xz
files as vztmpl files.
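
For illustration, the shared definition could look like this (name and
exact pattern are assumptions):

    # in PVE::Storage, used by everything matching vztmpl files:
    our $vztmpl_extension_re = qr/\.tar\.([gx]z)/;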

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2021-06-23 20:19:09 +02:00
ffc31266da tree-wide: fix typos with codespell
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:28:48 +02:00
d96b789aed vdisk_list: only scan storages with the correct content type(s)
The enabled check in the lower loop is now redundant and can be removed.

If storeid is provided, initialize the result hash accordingly, mainly for
backwards compatibility (needed by a caller in pve-manager's Ceph/Pools.pm and
the migration code in pve-container and qemu-server), but it also is less
surprising in general.

Remaining vdisk_list users that do not specify a content type are:
    1. pve-manager's Pool/Ceph.pm, but the content type for RBD can only be
       rootdir and images, so the storage is scanned (if enabled, same as
       before).
    2. pve-container migration
    3. qemu-server migration
For the latter two, it's planned to enforce content type, so the change is fine
too.

This also means that iscsi(direct) storages with content type 'none', i.e.
"use LUNs directly", do not return the list of images anymore, but that was
rather a bug anyways, as the LUNs are not virtual disks then:
    0.0.0.scsi-36001405b8f2772e13a04b8e9390db13d
All of the remaining callers not using content types (see above) are fine with
that change too.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 11:21:45 +02:00
823e8afe72 plugin loader: text-width cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-18 18:33:20 +02:00
bbadd1659d config: mention that maxfiles is deprecated
Don't add an explicit deprecation warning on parsing (yet); this is already
done in the pve6to7 script. Also, automatic conversion to 'prune-backups'
happens when the section config is read, so over time fewer users should be
affected. Postpone the explicit warning/dropping of the parameter to a future
major release.

Also switch the setting for the default 'local' storage to 'prune-backups'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
883c811f7f prune backups: activate storage
which also checks whether the storage is even enabled. VZDump jobs already
activate the storage, but more direct calls via API/CLI didn't do so yet.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-15 10:11:17 +02:00
522cd32738 remove some more DRBD references
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-12 13:14:17 +02:00
4b84ad5e25 Optionally allow blockdev in abs_filesystem_path
This is required to import from LVM storages, for example.

Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
2021-03-31 11:00:25 +02:00
2c5246e1ea vdisk list: allow specifying content type
and only scan storages that support it if specified.
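
For illustration, a call restricted to one content type might look like
this (parameter order is an assumption):

    # only scan storages that have 'images' among their content types:
    my $res = PVE::Storage::vdisk_list($cfg, $storeid, $vmid, undef, 'images');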

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-31 10:22:52 +02:00
26d7314be0 Revert "vdisk list: only collect images from storages with an appropriate content type"
This reverts commit a44c18925d and adds a reminder
comment.

The mentioned commit is actually a backwards-incompatible change that leads to
slightly different behavior when migrating a VM with volumes on a misconfigured
storage. For example, unreferenced volumes on a misconfigured storage won't be
picked up, even though they were before. And for referenced volumes on a
misconfigured storage, the disk size would not be updated on migration anymore.

We should wait until the next major release for this change and then also
re-evaluate the migration behavior with misconfigured disks.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-31 10:22:52 +02:00
a44c18925d vdisk list: only collect images from storages with an appropriate content type
Only these storages are activated in the first place, and it's bad behavior to
list images when no appropriate content type is set.

For example, on VM destruction, this avoids unreferenced images being deleted
from a storage with only the 'backup' content type set, which is supposedly
what happened in this[0] forum thread.

(Some) callers expect all keys to be present and valid array references in the
result, so initialization is needed.

Now, the enabled check is already done by the preceding code for every element
that is iterated over, and thus isn't needed in the main loop anymore.

[0]: https://forum.proxmox.com/threads/erasing-all-vm-disks-after-a-failed-vm-migration-task.85068

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-15 13:36:53 +01:00
7cdc75a290 storage migration: insecure: improve logging
by including the message/error from the remote side. Some people on the forum[0]
ran into 'no tunnel IP received', but without information from the remote side
it's hard to tell why.

[0]: https://forum.proxmox.com/threads/failed-no-tunnel-ip-received.80172

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-02-19 15:29:04 +01:00
50853be2c5 remove lock from is_base_and_used check
and squash the __no_lock-variant into it.

This lock is not broad enough, because for a caller that plans to do or not do
some storage operation based on the result of the check, the following could
happen:
1. volume_is_base_and_used is called and the result is used to enter a branch
2. situation on the storage changes in the meantime
3. the branch chosen in 1. might not be the one that should be taken anymore

This means that callers are responsible for locking, and luckily the existing
callers do use their own locks already:
1. vdisk_free used the __no_lock-variant with a broader lock also covering
   the free operation.
2. vdisk_clone is not a caller, but is relevant, and it does lock the storage.
3. the calls during VM migration and VM destruction happen in the context of a
   locked VM config. Because the clone operation also locks the VM config, it
   cannot happen that a linked clone is created while the template VM is
   migrated away or destroyed or vice versa. And even if that were the case,
   the base disk would not be freed, because of what vdisk_free/vdisk_clone do.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-02-06 14:47:40 +01:00
d3a5e30963 drop absolute udevadm path
the compat symlink from bin to sbin has been dropped with bullseye, and
we rely on PATH being set properly in our daemons/CLI tools anyway..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 18:27:14 +01:00
189e67ffa6 fix #3199: by fixing usage of strftime
In a very early version I wanted to parse the date from the backup
name, and when switching to using the ctime and localtime() instead,
I forgot to update the usage of strftime.
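
For illustration, strftime expects localtime-style fields (0-based
month, year minus 1900), so the list from localtime() should be passed
through unchanged (sketch):

    use POSIX qw(strftime);
    # $ctime: epoch timestamp of the backup; localtime() already
    # returns the fields in the form strftime expects
    my $timestring = strftime("%Y_%m_%d-%H_%M_%S", localtime($ctime));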

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-15 14:40:45 +01:00
10dfeb9ef0 prune mark: correctly keep track of already included backups
This needs to happen in a separate loop, because some time intervals are not
subsets of others, i.e. weeks and months. Previously, with a daily backup
schedule, having:
* a backup on Sun, 06 Dec 2020 kept by keep-daily
* a backup on Sun, 29 Nov 2020 kept by keep-weekly
would lead to the backup on Mon, 30 Nov 2020 being selected for keep-monthly,
because the iteration had not yet reached the backup on Sun, 29 Nov 2020 that
would mark November as being covered.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-12-14 16:11:29 +01:00
e9991d2694 Storage/Plugin: add get/update_volume_comment and implement for dir
and add the appropriate API call to set and get the comment.
We need to bump APIVER for this and can bump APIAGE, since
we only use it at this new call, which can work with the default
implementation.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-24 10:23:25 +01:00
d4e8a1f27c convert maxfiles to prune_backups when reading the storage configuration
If there are already prune options configured, simply delete the maxfiles
setting. Having set both is invalid from vzdump's perspective anyways, and any
backup job on such a storage failed, meaning a user would've noticed.

If there are no prune options, translate the maxfiles value to keep-last,
except for maxfiles being zero (=unlimited), in which case we use keep-all.

If both are not set, don't set anything, so:
1. Storages don't suddenly have retention options set.
2. People relying on vzdump defaults can still use those.
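
For illustration, a sketch of that mapping (not the actual code):

    if (defined(my $maxfiles = delete $scfg->{maxfiles})) {
        if (!defined($scfg->{'prune-backups'})) {
            $scfg->{'prune-backups'} = $maxfiles > 0
                ? { 'keep-last' => $maxfiles }
                : { 'keep-all' => 1 };
        }
    }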

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-23 15:57:30 +01:00
1b87f01388 prune: introduce keep-all option
useful to have an alternative to the old maxfiles = 0. There has to
be a way for vzdump to distinguish between:
1. use the /etc/vzdump.conf default (when no options are configured for the storage)
2. use no limit (when keep-all=1)
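
For illustration, a storage opting out of any limit could then look
like this in storage.cfg (illustrative snippet):

    dir: local
        path /var/lib/vz
        content backup
        prune-backups keep-all=1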

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-23 15:27:17 +01:00
1207620cf0 perlcritic: avoid conditional variable declaration
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-23 15:24:02 +01:00
343ca2570c prune: allow having all prune options zero/missing
This is basically necessary for the GUI's prune widget, because we want to
pass along all options equal to zero when all the number fields are cleared.
And it's more similar to how it's done in PBS now.

Bumped the APIAGE and APIVER, in case some external plugin needs to adapt to
the now less restrictive schema for 'prune-backups'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-16 10:14:11 +01:00
f514181d28 prune mark: keep all if all prune options are zero/missing
as an additional safety measure. And add some tests.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-16 10:14:11 +01:00
5324ceffef fix #3030: always activate volumes in storage_migrate
AFAICT the snapshot activation is not necessary for our plugins at the moment,
but it doesn't really hurt and might be relevant in the future or for external
plugins.

Deactivating volumes is up to the caller, because for example, for replication
on a running guest, we obviously don't want to deactivate volumes.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 19:04:58 +01:00
2c036838ed add check for fsfreeze before snapshot
In order to take a snapshot of a container volume, which can be mounted
read-only with RBD, the volume needs to be frozen (fsfreeze (8)) before taking
the snapshot.

This commit adds helpers to determine if the FIFREEZE ioctl needs to be called
for the volume.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-10 18:58:45 +01:00
af2dd59eaf fix typo in comment
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-10 18:58:45 +01:00
57acd6a124 fix #1452: also log stderr of remote command with insecure storage migration
Commit 8fe00d9944 already
introduced the necessary logging for the secure code path,
so presumably the bug was already fixed for most people.

Delay the potential die for the send command to be able to log
the output+error from the receive command. Like this we also see e.g.
'volume ... already exists' instead of just 'broken pipe'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-10-28 14:05:49 +01:00
93d1812e5a storage_migrate: log bandwidth limit
and avoid undefined post-if declaration of @cstream.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-08-04 13:57:24 +02:00
8f26b3910d Add prune_backups to storage API
Implement it for generic storages supporting backups
(i.e. directory-based storages) and add a wrapper for PBS.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-07-24 15:44:53 +02:00
c43655d2ed vdisk_list: skip scanning storages which cannot have images/rootdisks
Do not try to scan (and thus activate) storages which aren't
configured to support (or cannot support) "vdisks" anyway.

Avoids seemingly strange failures of VM migrations due to a backup storage
not being currently online - even if that storage isn't referenced in
the VM config anywhere..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 16:16:23 +02:00
28be2a431b archive_info: relax custom name handling
we already differentiate between standard and non-standard names and
don't detect and return the VMID in the latter case anyway. drop it
from the RE as well to allow names like 'vzdump-qemu-template.vma.lzo'
without the need for a fake VMID.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-08 15:26:19 +02:00
b1ddc54a93 archive_info: use timelocal correctly
Because we always have 4-digit years, we can simply pass
the year itself to timelocal instead of subtracting 1900.
Like this it will also work for years not in the range 2000-2999.

See also:
https://perldoc.perl.org/Time/Local.html#Year-Value-Interpretation
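
For illustration (per the linked documentation, 4-digit years are taken
as-is, while the month remains 0-based):

    use Time::Local qw(timelocal);
    my $epoch = timelocal(0, 30, 12, 8, 6, 2020); # 2020-07-08 12:30:00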

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-07-08 10:45:49 +02:00
d37729b95b scan_cifs: fix scanning server with no SMB1 fallback
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-01 15:46:30 +02:00
8594155a66 scan cifs: fix passing "no pass" parameter
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-01 12:17:02 +02:00
c42d86559c scan_cifs: do not add NT_STATUS lines to result
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-01 12:11:59 +02:00
67a2371180 scan_cifs: do not enforce password for users
there can be accounts with an explicit null password other than the
mapped guest account.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-01 12:11:59 +02:00
afaa98f10f scan_cifs: pass user/pass over environment
As command line arguments they are readable by every user in the same
PID namespace.
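
For illustration, a sketch of the approach, assuming smbclient's
documented USER/PASSWD environment variables ($username, $password,
$server and $parser are placeholders):

    # credentials no longer show up in /proc/<pid>/cmdline
    local $ENV{USER} = $username;
    local $ENV{PASSWD} = $password;
    PVE::Tools::run_command(['smbclient', '-L', "//$server"], outfunc => $parser);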

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-01 12:06:53 +02:00