Commit Graph

747 Commits

ecfe25058b prune: mark renamed and protected backups differently
While it makes no difference for pruning itself, protected backups are
additionally protected against removal. Avoid the potential to confuse
the two. Also update the description for the API return value and add
an enum constraint.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00
56897a9203 fix #3307: make it possible to set protection for backups
A protected backup is not removed by free_image and ignored when
pruning.

The protection_file_path function is introduced in Storage.pm, so that
it can also be used by vzdump itself and in archive_remove.
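
A minimal sketch of how such a helper could look; the exact naming scheme of the marker file is an assumption for illustration:

```
# Storage.pm (sketch): derive the path of the protection marker file
# from the path of the backup archive itself
sub protection_file_path {
    my ($path) = @_;
    return "${path}.protected"; # suffix assumed
}
```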

For pruning, renamed backups already behaved similarly to how protected
backups will, but there are a few reasons to not just use that for
implementing the new feature:
1. It wouldn't protect against removal.
2. It would make it necessary to rename notes and log files too.
3. It wouldn't naturally extend to other volumes if that's needed.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00
f1de828166 add generalized functions to manage volume attributes
replacing the ones for handling notes. To ensure backwards
compatibility with external plugins, all plugins that do not just call
another implementation need to call $class->{get,update}_volume_notes
when the attribute is 'notes', to catch any derived implementations.

This is mainly done to avoid the need to add new methods every time a
new attribute is added.

Not adding a timeout parameter like the notes functions have, because
it was not used and can still be added if it ever is needed in the
future.

For get_volume_attribute, undef will indicate that the attribute is
not supported. This makes it possible to distinguish "not supported"
from "error getting the attribute", which is useful when the attribute
is important for an operation. For example, free_image checking for
protection (introduced in a later patch) can abort if getting the
'protected' attribute fails.
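
A sketch of what such a generalized getter with the notes fallback could
look like in a plugin; the signature is assumed from the description above:

```
# sketch, not the authoritative implementation
sub get_volume_attribute {
    my ($class, $scfg, $storeid, $volname, $attribute) = @_;

    if ($attribute eq 'notes') {
        # delegate to the old method, so external plugins deriving from
        # this one and overriding get_volume_notes keep working
        return $class->get_volume_notes($scfg, $storeid, $volname);
    }

    return undef; # attribute not supported
}
```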

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00
e0aa2070f6 dir plugin: get notes: return undef if notes are not supported
This avoids showing empty notes in the result of the content/{volid}
API call for volumes that do not even support notes. It's also in
preparation for the proposed get_volume_attribute generalization,
which expects undef to be returned when an attribute is not supported.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00
ddb3263031 dir plugin: update notes: don't fail if file is already removed
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00
dc992e7b89 plugin: remove volume_snapshot_list
which was only used by replication, but now replication uses
volume_snapshot_info instead.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:35:53 +01:00
8c20d8afa3 plugin: add volume_snapshot_info function
which allows for better choices of common replication snapshots.
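
The shape of the returned data might look roughly like this; the exact
field names are assumptions for illustration:

```
# sketch: map each snapshot name to identifying information
my $info = $class->volume_snapshot_info($scfg, $storeid, $volname);
# e.g. { 'replica_snap' => { id => '15978983...', timestamp => 1636358399 }, ... }
```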

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:35:53 +01:00
9a5d50950c zfspool: add blockers parameter to volume_snapshot_is_possible
useful for rollback, so that only the required replication snapshots
can be removed, and it's possible to abort early without deleting any
replication snapshots if there are other non-replication snapshots
blocking rollback.
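
Usage could look roughly like this; the parameter order and the
replication snapshot prefix are assumptions for illustration:

```
my $blockers = [];
my $ok = $class->volume_snapshot_is_possible($scfg, $storeid, $volname, $snap, $blockers);
if (!$ok && grep { $_ !~ m/^__replicate_/ } @$blockers) {
    # abort early without deleting any replication snapshots
    die "rollback is blocked by non-replication snapshots\n";
}
```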

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:34:00 +01:00
ac5c1af57c zfspool: add zfs_get_sorted_snapshot_list helper
replacing the current zfs_get_latest_snapshot. For
volume_snapshot_list, ignore errors as before.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:34:00 +01:00
bc7fecb082 cephfs: add support for multiple ceph filesystems
by optionally saving the name of the cephfs
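
A storage.cfg entry could then look roughly like this; the option name
`fs-name` and all values are assumptions for illustration:

```
cephfs: cephfs-two
    path /mnt/pve/cephfs-two
    content backup
    fs-name secondfs
```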

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-11-05 13:53:56 +01:00
85043c0193 rbd plugin: free image: use actual command in error message
For linked clones, the base name was included, which is confusing.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-10-27 17:03:14 +02:00
95ff5dbd64 fix #3580: plugins: make preallocation mode selectable for qcow2 and raw images
the plugins for file-based storages
 * BTRFS
 * CIFS
 * Dir
 * Glusterfs
 * NFS
now allow the option 'preallocation'.

'preallocation' can have one of five values:
 * default
 * off
 * metadata
 * falloc
 * full
see the `qemu-img` man page for what these mean exactly. [0]

the default value was chosen to be
 * qcow2: metadata (as previously)
 * raw: off

when using 'metadata' as the preallocation mode, 'off' is used for raw
images instead.

[0] https://qemu.readthedocs.io/en/latest/system/images.html#disk-image-file-formats
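
For example, a directory storage in /etc/pve/storage.cfg could then be
configured like this (storage name and path made up for illustration):

```
dir: images-dir
    path /mnt/images
    content images
    preallocation falloc
```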

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
2021-10-14 11:00:23 +02:00
9524b31ee3 btrfs: free image: only remove snapshots for current subvol
instead of all in the same directory.

Reported in the community forum:
https://forum.proxmox.com/threads/error-could-not-statfs-no-such-file-or-directory.96057/

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-10-06 13:55:14 +02:00
b19ae5b47e import: don't check for 1K aligned size
TPM state disks on directory storages may have completely unaligned
sizes; this check doesn't make sense for them.

This appears to just be a (weak) safeguard and not serve an actual
functional purpose, so simply get rid of it to allow migration of TPM
state.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-10-05 06:19:39 +02:00
dcd8f3a3dd btrfs: call free_image correctly
Currently, 'PVE::Storage::DirPlugin' is implicitly passed along as
$class, which means that if the base class's free_image calls another
method (e.g.  filesystem_path) then the DirPlugin's method will be
used, rather than the one from BTRFSPlugin. Change it so that $class
itself is passed along.

See also commit 279d9de510 for context,
where the approach in this patch was suggested.
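
Illustrative before/after; the exact parameter list of free_image is
assumed here:

```
# before: 'PVE::Storage::DirPlugin' implicitly becomes $class inside
# free_image, so overrides from BTRFSPlugin (e.g. filesystem_path) are lost
PVE::Storage::DirPlugin->free_image($storeid, $scfg, $volname, $isBase, $format);

# after: pass the caller's $class along explicitly, so method resolution
# keeps starting at BTRFSPlugin
PVE::Storage::DirPlugin::free_image($class, $storeid, $scfg, $volname, $isBase, $format);
```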

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-22 08:52:53 +02:00
e1667a2253 cifs: negotiate the highest SMB2+ version supported by default
instead of hardcoding it to a potentially outdated value.

For `smbclient` we only set the max-protocol version, and that could
only be smb2 or smb3 (no finer granularity) anyhow, so this was not
really correct.

Nowadays the kernel has dropped SMB1 and tries to go for SMB2.1 or
higher by default, depending on what client and server support. SMB2.1
means Windows 7/2008R2, both EOL for quite a while now, so that is fine
as the default lower boundary.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-15 10:59:00 +02:00
9fff8c7aca cifs: allow "3" and "default" for version
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-14 14:28:15 +02:00
396ea58b65 fix #3609: cifs: add support to SMB 3.11
Added support for the SMB version SMB3_11. When `min protocol =
SMB3_11` is set in smb.conf, the CIFS mount fails with the
following error:
```
CIFS VFS: cifs_mount failed w/return code = -95
```
Added an optional option to use `vers=3.11`.
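
The resulting mount roughly corresponds to this manual invocation
(server, share and credentials made up for illustration):

```
mount -t cifs //server/share /mnt/pve/mystore -o username=backup,vers=3.11
```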

Signed-off-by: Moayad Almalat <m.almalat@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
[ Thomas: move text from cover letter to commit message &
  add S-o-b ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-14 11:50:14 +02:00
d5fc368503 fix prune-backups validation (again)
Commit a000e26ce7 caused a test failure
in pve-manager, because now 'keep-all=0' is not thrown out upon
validation anymore. Fix the issue the commit addressed differently,
by simply creating a copy of the (shallow) hash first, and using
the logic from before the commit.

Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-10 14:16:26 +02:00
a000e26ce7 prune {validate, mark}: preserve input parameter
While the current way to detect settings like { 'keep-last' => 0 } is
concise, it's also wrong, because the delete operation is visible
to the caller. This resulted in e.g.
    # $hash is { 'keep-all' => 1 }
    my $s = print_property_string($hash, 'prune-backups');
    # $hash is now {}, $s is 'keep-all=1'
because validation is called in print_property_string. The same issue
is present when calling prune_mark_backup_group.

Because validation complains when keep-all and something else are set,
this shouldn't have caused any real issues, besides vzdump with
keep-all=1 wrongly taking the removal path, though without any
settings, so not actually removing anything:
    INFO: prune older backups with retention:
    INFO: pruned 0 backup(s)

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-09 18:07:13 +02:00
08ca395503 btrfs: style: add missing semicolon
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-08 14:26:12 +02:00
6c315e4587 btrfs: avoid undef warnings with format
which is only set by parse_volname when the volume is a VM or
container image, but not for other content types.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-08 14:26:12 +02:00
e2f8e86c83 btrfs: fix calling alloc_image from DirPlugin
similar to commit 279d9de510

This calling style is pretty dangerous in general for such plugin
systems...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-06 08:25:58 +02:00
22b68016f7 Ceph: add keyring parameter for external clusters
By adding the keyring for RBD storage or the secret for CephFS ones, it
is possible to add an external Ceph cluster with only one API call.

Previously the keyring / secret file needed to be placed in
/etc/pve/priv/ceph/$storeID.{keyring,secret} manually.
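
Adding an external RBD cluster could then be a single call; storage ID,
monitors and paths are made up for illustration:

```
pvesm add rbd ext-ceph --pool rbd --monhost "192.0.2.1 192.0.2.2" \
    --username admin --keyring /root/ext-ceph.keyring
```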

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2021-08-26 18:15:30 +02:00
ab3516a6d7 zfs: fix unmount request
by not dying when the dataset is already unmounted. Can be triggered
for a container by doing two rollbacks in a row.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-08-12 11:48:42 +02:00
279d9de510 fix #3555: BTRFS: call DirPlugin's free_image correctly
The method is only derived in the DirPlugin module from the base
Plugin, so we do not have it available there through a static module
method call using ::, but only when using a class dereference.

Other fix options would have been:

  PVE::Storage::Plugin::free_image(@_);

or:
  $class->SUPER::free_image($storeid, ...);

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
[ Thomas: add some background to the commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-30 14:52:51 +02:00
5a16629577 lvm: wipe signatures on lvcreate
With PVE 7.0 we use upstream's lvm2 packages, which seem to detect
'more' signatures (and refuse to create LVs when they are present).

This prevents creating new disks on LVM (thick) storages as reported
on pve-user [0].

Adding -Wy to wipe signatures, and --yes (to actually wipe them
instead of prompting) fixes the aborted lvcreate.
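
The resulting lvcreate invocation then looks roughly like this (volume
group, name and size made up for illustration):

```
lvcreate -aly -Wy --yes --size 32g --name vm-100-disk-0 myvg
```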

Adding only to LVMPlugin and not to the lvcreate calls in
LvmThinPlugin, since I assume (and my quick tests confirm) that thin
pools are not affected by this issue.

Tested on a virtual test setup with an LVM storage on a (virtual) iSCSI
target and a local lvmthin storage.

[0] https://lists.proxmox.com/pipermail/pve-user/2021-July/172660.html

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-07-06 12:39:50 +02:00
b4e88b7fd3 cifs: improve warning for password but no username set
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-06 07:50:29 +02:00
02f43ab4a8 cifs: fix sensitive parameter name for on-update/add
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-06 07:50:06 +02:00
38f0f4698e btrfs: fix path_is_mounted invocation
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-28 08:40:03 +02:00
1c1589e60d btrfs: support newer prune-backups for backup retention
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-24 16:11:48 +02:00
a1234a04df btrfs: add mkdir as option for now
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-24 11:45:45 +02:00
f449cddc79 btrfs: do not reuse DirPlugin's activate_storage directly
as then the btrfs assertion would happen after we already created
subdirectories on some path, leaving those left over.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-24 11:18:40 +02:00
f6abd82a6d btrfs: check for btrfs in on_add_hook and activate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-24 11:06:02 +02:00
347e677b78 btrfs: drop qcow2 and vmdk for now
the web-interface always prefers qcow2 once that is in the list, which
is a bug on its own, as the format preferred by the backend should be
preferred there too. Still, vmdk support should not be extended, since
we can only cope with it in a limited way anyway. Both formats can
easily be enabled later if there is actual user request for it;
disabling is never that easy, at least if one cares about backward
compatibility.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 20:22:52 +02:00
d3c5cf2487 btrfs: make NOCOW optional
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
a0e3e224ea btrfs: add 'btrfs' import/export format
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
3cc29a0487 bump storage API: update import/export methods
Bumps APIVER to 9 and resets APIAGE to zero.

The import methods (volume_import, volume_import_formats):

These additionally get the '$snapshot' parameter which is
already present on the export side as an informational piece
to know which of the snapshots is the *current* one.
This parameter is inserted *in the middle* of the current
parameters, so the import & export format methods now have
the same signatures.
The current "disk" state will be set to this snapshot.
This, too, is required for our btrfs implementation.
  `volume_import_formats` can obviously not make much
*use* of this parameter, but it'll still be useful to know
that the information is actually available in the import
call, so its presence will be checked in the btrfs
implementation.
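
The resulting signatures, as far as they can be inferred from the
description above (a sketch, not the authoritative definition):

```
sub volume_import {
    my ($class, $scfg, $storeid, $fh, $volname, $format,
        $snapshot, $base_snapshot, $with_snapshots, $allow_rename) = @_;
    ...
}

sub volume_import_formats {
    my ($class, $scfg, $storeid, $volname,
        $snapshot, $base_snapshot, $with_snapshots) = @_;
    ...
}
```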

Currently this is intended to be used for btrfs send/recv
support, which in theory could also get additional metadata
similar to how we do the "tar+size" format, however, we
currently only really use this within this repository in
storage_migrate() which has this information readily
available anyway.

On the export side (volume_export, volume_export_formats):

The `$with_snapshots` option is now "defined" to be an
ordered array of snapshots to include, as a hint for
storages which need this. (As of the next commit this is
only btrfs, and only when also specifying a base snapshot,
which is a case we can currently not run into except on the
command line interface.)
  The current providers of the `with_snapshot` option will
still treat it as a boolean (since eg. for ZFS you cannot
really "skip" snapshots AFAIK).
  This is mainly intended for storages which do not have a
strong association between snapshots and the originals, or
an ordering (eg. btrfs and lvm-thin allow creating
arbitrary snapshot trees, and with btrfs you can even
create a "circular" connection between subvolumes; also, we
could consider reflink-based copies as snapshots on xfs in
the future, maybe?)

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
af50c2e671 add BTRFS storage plugin
This is mostly the same as a directory storage, with a few major
differences:

* 'subvol' volumes are actual btrfs subvolumes and therefore
  allow snapshots
* 'raw' files are placed *into* a subvolume and therefore
  also allow snapshots, the raw file for volume
  `btrstore:100/vm-100-disk-1.raw` can be found under
  `$path/images/100/vm-100-disk-1/disk.raw`
* in both cases, snapshots add an '@name' suffix to the
  subvolume's directory name, so snapshot 'foo' of the above
  would be found under
  `$path/images/100/vm-100-disk-1@foo/disk.raw`
  or for format "subvol":
  `$path/images/100/subvol-100-disk-1.subvol@foo`

Note that qgroups aren't included in btrfs-send streams,
therefore for now we will only be using *unsized* subvolumes
for containers and place a regular raw+ext4 file for sized
containers.
We could extend the import/export stream format to include
the information at the front (similar to how we do the
"tar+size" format, but we need to include the size of all
the contained snapshots as well, since they can technically
change). (But before enabling quotas we should do some
performance testing on bigger file systems with multiple
snapshots as there are quite a few reports of the fs slowing
down considerably in such scenarios).

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 20:20:31 +02:00
bba10cf4af factoring out regex for vztmpl
stores the regex definition in PVE::Storage.

One test had to be adapted because it tested obsolete behavior:
it expected vztmpl to only end with .tar.gz, but the new regex also
includes .tar.xz; there is nothing against allowing .tar.xz files as
vztmpl files.
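
The shared definition could look roughly like this; the variable name
and the exact pattern are assumptions:

```
# PVE/Storage.pm (sketch): one shared definition for the vztmpl extension
our $vztmpl_extension_re = qr/\.tar\.(gz|xz)/;
```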

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2021-06-23 20:19:09 +02:00
339a4eb3c0 file size info: return early if we cannot parse json
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:28:48 +02:00
d4e00f2bd5 file/volume size info: add actual errors to untaint messages
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:28:48 +02:00
ac598d851e plugins: untaint volume_size_info returns
the size returned by volume_size_info is used for creating the new
destination image in PVE::QemuServer::clone_disk (and probably
elsewhere). In certain cases the return values are tainted: they are
obtained by a run_command call and, depending on the format and length
of the parsed output, can still carry the taint attribute.

One example of a tainted return has been reported in our
community-forum:
https://forum.proxmox.com/threads/cannot-clone-vm-or-move-disk-with-more-than-13-snapshots.89628/

A qcow2 image with 13 snapshots generates an output > 4k in length from
`qemu-img info --output=json`, which in turn causes the output to be
considered tainted.

This patch untaints the returns where applicable. The other
storage-plugins are not affected:
* LVMPlugin returns a single number and a newline (thus gets untainted
  by run_command)
* RBDPlugin untaints the complete json before decoding
* ZFSPoolplugin and ISCSIDirectPlugin explicitly untaint their
  returns.
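
Untainting in Perl means extracting the value through a regex capture;
a minimal sketch (hash key assumed):

```
# a successful capture clears the taint flag on the extracted value
my ($size) = $info->{'virtual-size'} =~ /^(\d+)$/;
die "unexpected size format\n" if !defined($size);
```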

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-06-23 08:28:48 +02:00
ffc31266da tree-wide: fix typos with codespell
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:28:48 +02:00
5b955999b9 pbs: fix typo
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-22 13:44:06 +02:00
03c487e553 config: prevent empty content list when content type 'none' is not supported
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 11:21:45 +02:00
6a4545601b lvm: volume import: handle worker returned by free_image
This only affects LVM storages with 'saferemove 1' where the import fails at a
rather advanced stage. Previously, in such cases, the volume renamed by
free_image (del-vm-XYZ-disk-N) would be left over.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 09:38:03 +02:00
7ae13a34d2 pbs: free image: explicitly return undef
Storage.pm's vdisk_free interprets truthy return values as worker subs, so be
explicit about returning undef here. Not an issue at the moment, because
run_client_command already returns undef, but better be safe than sorry.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 09:38:03 +02:00
cda32b2361 cephfs: update reminder for systemd_netmount removal
Commit d9ece228fb introduced the workaround of
using systemd units and 25e222ca0d re-used the
functionality for fuse-mounts too.

The latter commit suggests to switch to using mount.fuse.ceph for the '_netdev'
option, but it doesn't seem to work:

 root@pve701 / # mount -t fuse.ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/fuse -o 'ceph.id=admin,ceph.keyfile=/etc/pve/priv/ceph/cephfs.secret,ceph.conf=/etc/pve/ceph.conf,_netdev'
 ceph-fuse[20729]: starting ceph client
 2021-06-15T14:22:00.631+0200 7f995f878080 -1 init, newargv = 0x55e09fc11a40 newargc=11
 ceph-fuse[20729]: starting fuse
 root@pve701 / # mount -t ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/normal -o 'name=admin,secretfile=/etc/pve/priv/ceph/cephfs.secret,conf=/etc/pve/ceph.conf,_netdev'
 root@pve701 / # mount | grep mnttest
 ceph-fuse on /mnttest/fuse type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
 10.10.10.11,10.10.10.12,10.10.10.13:/ on /mnttest/normal type ceph (rw,relatime,name=admin,secret=<hidden>,acl,_netdev)

Also, the return value is not propagated by mount.fuse.ceph, meaning the output
would need to be parsed...

 root@pve701 ~ # mount -t fuse.ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/fuse -o 'ceph.id=admin,ceph.keyfile=/etc/pve/priv/ceph/cephfs.secret,ceph.conf=/etc/pve/ceph.conf,_netdev'
 2021-06-15T14:42:56.326+0200 7f634edae080 -1 init, newargv = 0x560cdb5e0a40 newargc=11
 ceph-fuse[34480]: starting ceph client
 fuse: mountpoint is not empty
 fuse: if you are sure this is safe, use the 'nonempty' mount option
 ceph-fuse[34480]: fuse failed to start
 2021-06-15T14:42:56.338+0200 7f634edae080 -1
 fuse_mount(mountpoint=/mnttest/fuse) failed.
 Mount failed with status code: 5
 root@pve701 ~ # echo $?
 0

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
9531988d5e cephfs: revert safe-guard check for Luminous
It's necessary to be on Nautilus before upgrading to 7.x, so the check is no
longer needed. See commit e54c3e3347. It didn't
revert cleanly, because cleanups were made afterwards.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00