Currently, 'PVE::Storage::DirPlugin' is implicitly passed along as
$class, which means that if the base class's free_image calls another
method (e.g. filesystem_path) then the DirPlugin's method will be
used, rather than the one from BTRFSPlugin. Change it so that $class
itself is passed along.
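A minimal sketch of the dispatch problem, using hypothetical Base/Dir/Btrfs packages instead of the real plugin modules:
```
# Hypothetical packages, only to illustrate why the value of $class matters
# for nested method calls inside an inherited method.
use strict;
use warnings;

package Base;
sub filesystem_path { return 'base path' }
sub free_image {
    my ($class, $volname) = @_;
    return $class->filesystem_path($volname);   # dispatches on $class
}

package Dir;
our @ISA = ('Base');

package Btrfs;
our @ISA = ('Dir');
sub filesystem_path { return 'btrfs path' }
sub free_image {
    my ($class, $volname) = @_;
    # wrong: $class inside Base::free_image becomes 'Dir', so the
    # Btrfs override of filesystem_path is never reached
    my $bad = Dir->free_image($volname);
    # right: keep dispatching on the original $class
    my $good = $class->SUPER::free_image($volname);
    return ($bad, $good);
}

package main;
my ($bad, $good) = Btrfs->free_image('vm-100-disk-0');
print "$bad / $good\n";   # base path / btrfs path
```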
See also commit 279d9de510 for context,
where the approach in this patch was suggested.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
instead of hardcoding it to a potentially outdated value.
For `smbclient` we only set the max-protocol version, and that could only
be smb2 or smb3 (no finer granularity) anyhow, so this was not really
correct.
Nowadays the kernel has dropped SMB1 and tries to go for SMB2.1 or higher
by default, depending on what client and server support. SMB2.1 corresponds
to Windows 7/2008R2 - both EOL for quite a while now, so that is fine as the
default lower boundary.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add support for the SMB version SMB3_11. When `min protocol = SMB3_11` is
set in smb.conf, the CIFS mount fails with the following error:
```
CIFS VFS: cifs_mount failed w/return code = -95
```
added an option to use `vers=3.11`.
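A sketch (placeholder values, and assuming the storage option is called `smbversion`) of how the mount options end up looking: no `vers=` by default, so the kernel negotiates on its own, and a pinned dialect only when explicitly configured:
```
# Sketch only, not the actual CIFSPlugin code; prints the command instead of
# executing it.
use strict;
use warnings;

my $scfg = { server => '127.0.0.1', share => 'backup', smbversion => '3.11' };

my @options = ('soft');
# only pin an SMB dialect when one was configured explicitly
push @options, "vers=$scfg->{smbversion}" if $scfg->{smbversion};

my @cmd = (
    '/bin/mount', '-t', 'cifs',
    "//$scfg->{server}/$scfg->{share}", '/mnt/pve/example',
    '-o', join(',', @options),
);
print join(' ', @cmd), "\n";
```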
Signed-off-by: Moayad Almalat <m.almalat@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
[ Thomas: move text from cover letter to commit message &
add S-o-b ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Previously, top-level vdevs like log or special were wrongly added as
children of the previous outer vdev instead of the root.
Fix it by also showing the vdev with the same name as the pool and starting
the count at level 1 (the pool itself serves as the root and should be the
only one with level 0). This results in the same kind of structure as in PBS
and (except for the root) in `zpool status` itself.
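A simplified sketch of the intended tree building (hypothetical parser, with the level already derived from the indentation): each vdev is attached to the most recent entry with a smaller level, and the pool itself is the only level-0 node:
```
# Sketch only: build a tree from (name, level) pairs; top-level vdevs like
# 'logs' end up as children of the root instead of the previous outer vdev.
use strict;
use warnings;

my @entries = (
    ['tank',     0],   # the pool itself serves as the root
    ['mirror-0', 1],
    ['sda',      2],
    ['sdb',      2],
    ['logs',     1],   # top-level vdev: child of the root
    ['sdc',      2],
);

my ($root, @last_at_level);
for my $entry (@entries) {
    my ($name, $level) = @$entry;
    my $node = { name => $name, children => [] };
    if ($level == 0) {
        $root = $node;
    } else {
        push @{$last_at_level[$level - 1]->{children}}, $node;
    }
    $last_at_level[$level] = $node;
}

sub print_tree {
    my ($node, $indent) = @_;
    print(('  ' x $indent) . $node->{name} . "\n");
    print_tree($_, $indent + 1) for @{$node->{children}};
}
print_tree($root, 0);
```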
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Commit a000e26ce7 caused a test failure
in pve-manager, because now 'keep-all=0' is not thrown out upon
validation anymore. Fix the issue the commit addressed differently,
by simply creating a (shallow) copy of the hash first and using
the logic from before the commit.
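A minimal sketch of the idea, with a hypothetical validator instead of the real one:
```
# Sketch only: validate on a shallow copy so that dropping zero-valued
# keep-* settings is not visible to the caller.
use strict;
use warnings;

sub validate_prune_backups {
    my ($prune_backups) = @_;
    my $res = { %$prune_backups };   # shallow copy, caller's hash stays intact

    # drop zero-valued settings like 'keep-last' => 0, as before the regression
    delete $res->{$_} for grep { !$res->{$_} } keys %$res;

    die "keep-all cannot be set together with other options\n"
        if $res->{'keep-all'} && scalar(keys %$res) > 1;

    return $res;
}

my $hash = { 'keep-all' => 1, 'keep-last' => 0 };
my $validated = validate_prune_backups($hash);
print join(',', sort keys %$hash), "\n";        # keep-all,keep-last (unchanged)
print join(',', sort keys %$validated), "\n";   # keep-all
```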
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
While the current way to detect settings like { 'keep-last' => 0 } is
concise, it's also wrong, because the delete operation is visible to the
caller. This resulted in e.g.
# $hash is { 'keep-all' => 1 }
my $s = print_property_string($hash, 'prune-backups');
# $hash is now {}, $s is 'keep-all=1'
because validation is called in print_property_string. The same issue
is present when calling prune_mark_backup_group.
Because validation complains when keep-all and another option are set at
the same time, this shouldn't have caused any real issues, besides vzdump
with keep-all=1 wrongly taking the removal path, but without any settings,
so not removing anything:
INFO: prune older backups with retention:
INFO: pruned 0 backup(s)
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which is only set by parse_volname when the volume is a VM or
container image, but not for other content types.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
try to comment the why, not the what - the what is already described well
enough by the code here.
Also, we want to go up to a 100-character text width if it improves
readability, which for post-ifs it most often does.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this is the first step towards having the worker itself, rather than the
HTTP server, remove the temporary file.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
similar to commit 279d9de510
This calling style is pretty dangerous in general for such plugin
systems...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
By adding the keyring for RBD storages or the secret for CephFS ones, it
is possible to add an external Ceph cluster with only one API call.
Previously the keyring / secret file needed to be placed in
/etc/pve/priv/ceph/$storeID.{keyring,secret} manually.
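A sketch (hypothetical helper name, plain file I/O instead of the usual PVE helpers) of persisting a keyring passed with the create-storage call to the expected location:
```
# Sketch only, not the actual API code.
use strict;
use warnings;
use File::Path qw(make_path);

sub write_ceph_keyring {
    my ($storeid, $keyring) = @_;
    my $dir = '/etc/pve/priv/ceph';
    make_path($dir);   # no-op if it already exists
    my $path = "$dir/$storeid.keyring";
    open(my $fh, '>', $path) or die "unable to open '$path' - $!\n";
    print $fh $keyring;
    close($fh) or die "unable to write '$path' - $!\n";
}

# usage while handling the storage-add API call (names are assumptions):
# write_ceph_keyring($storeid, $param->{keyring}) if defined($param->{keyring});
```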
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
This allows us to manually pass the used RBD keyring or CephFS secret.
Useful mostly when adding external Ceph clusters where we have no other
means to fetch them.
I renamed the previous $secret to $cephfs_secret to be able to use
$secret as a parameter.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
by not dying when the dataset is already unmounted. Can be triggered
for a container by doing two rollbacks in a row.
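A sketch of the tolerant unmount (hypothetical helper; the error-message check assumes the usual "not currently mounted" wording from zfs):
```
# Sketch only, simplified error handling via backticks instead of the
# usual run_command wrapper.
use strict;
use warnings;

sub zfs_unmount_tolerant {
    my ($dataset) = @_;
    my $out = `zfs unmount '$dataset' 2>&1`;
    if ($? != 0) {
        # a second rollback in a row finds the dataset already unmounted
        return if $out =~ m/not currently mounted/;
        die "zfs unmount '$dataset' failed: $out";
    }
    return;
}
```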
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The method is only inherited by the DirPlugin module from the base
Plugin, so we do not have it available there through a static module
method call using ::, but only when using a class dereference.
Other fix options would have been:
PVE::Storage::Plugin::free_image(@_);
or:
$class->SUPER::free_image($storeid, ...);
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
[ Thomas: add some background to the commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
after an error while copying the file to its destination, the local
path of the destination was unlinked in every case, even when the
destination was copied to via scp.
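A sketch of the intended cleanup logic (hypothetical helper and flag names):
```
# Sketch only: after a failed copy, only try to unlink the destination when
# it actually is a local path - an scp target is not reachable that way.
use strict;
use warnings;

sub copy_with_cleanup {
    my ($do_copy, $dest_path, $is_remote) = @_;
    eval { $do_copy->() };
    if (my $err = $@) {
        unlink($dest_path) if !$is_remote;
        die $err;
    }
    return;
}
```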
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
the addition of this enum does not change API behaviour, because the
value is checked for 'iso' or 'vztmpl' afterwards anyway.
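For illustration, a sketch of what such a parameter schema with an enum can look like (abridged, the property layout here is an assumption, not the real definition):
```
# Sketch only, not the actual API definition.
use strict;
use warnings;

my $parameters = {
    additionalProperties => 0,
    properties => {
        content => {
            description => "Content type.",
            type => 'string',
            enum => ['iso', 'vztmpl'],   # reject other content types up front
        },
    },
};
```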
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Extracting the config from zstd-compressed VMA files was broken:
Failed to extract config from VMA archive: zstd: error 70 : Write
error : cannot write decoded block : Broken pipe (500)
since the error message changed and wouldn't match anymore.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
With PVE 7.0 we use upstream's lvm2 packages, which seem to detect
'more' signatures (and refuse to create LVs when they are present).
This prevents creating new disks on LVM (thick) storages as reported
on pve-user [0].
Adding -Wy to wipe signatures, and --yes (to actually wipe them
instead of prompting) fixes the aborted lvcreate.
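A sketch (placeholder values, printed rather than executed) of the resulting lvcreate invocation:
```
# Sketch only, not the actual LVMPlugin code.
use strict;
use warnings;

my ($vg, $name, $size_kib) = ('pve', 'vm-100-disk-0', 4 * 1024 * 1024);

my @cmd = (
    '/sbin/lvcreate',
    '-aly',                 # activate the new LV locally
    '-Wy', '--yes',         # wipe detected signatures without prompting
    '--size', "${size_kib}k",
    '--name', $name,
    $vg,
);
print join(' ', @cmd), "\n";
```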
Adding it only to LVMPlugin and not to the lvcreate calls in
LvmThinPlugin, since I assume (and my quick tests confirm) that thin
pools are not affected by this issue.
Tested on a virtual test setup with an LVM storage on a (virtual) iSCSI
target and a local lvmthin storage.
[0] https://lists.proxmox.com/pipermail/pve-user/2021-July/172660.html
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
as then the btrfs assertion would happen after we already created
subdirectories on some path, leaving those left over.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
uses the common function PVE::Tools::download_file_from_url to download
ISO files.
Only users with the permissions `Sys.Audit` and `Sys.Modify` on `/` are
permitted to perform this action. This restriction is due to the fact
that the download function is able to download files from internal
networks (which are not visible/accessible from outside).
Users with these permissions have the means to alter the node (network)
config anyway, so this does not create any further security risk.
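A sketch of the permission check as it could look in the API handler (assuming the usual RPCEnvironment helpers; `check` requires all listed privileges):
```
# Sketch only, excerpt from a handler running inside the API environment.
use PVE::RPCEnvironment;

my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
# dies unless the user has both privileges on '/'
$rpcenv->check($authuser, '/', ['Sys.Audit', 'Sys.Modify']);
```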
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
the web interface always prefers qcow2 once that is in the list, which is
a bug on its own, as the format preferred by the backend should be
preferred there too. But still, vmdk support should not be extended; we
can only cope with it in a limited way anyway, and both formats can always
get enabled later easily if there's actual user request for it.
Disabling is never that easy, at least if one cares about backward
compatibility.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Bumps APIVER to 9 and resets APIAGE to zero.
The import methods (volume_import, volume_import_formats):
These additionally get the '$snapshot' parameter, which is
already present on the export side, as an informational piece
to know which of the snapshots is the *current* one.
This parameter is inserted *in the middle* of the current
parameters, so the import & export format methods now have
the same signatures.
The current "disk" state will be set to this snapshot.
This, too, is required for our btrfs implementation.
`volume_import_formats` can obviously not make much
*use* of this parameter, but it'll still be useful to know
that the information is actually available in the import
call, so its presence will be checked in the btrfs
implementation.
Currently this is intended to be used for btrfs send/recv
support, which in theory could also get additional metadata
similar to how we do the "tar+size" format. However, we
currently only really use this within this repository in
storage_migrate(), which has this information readily
available anyway.
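For illustration, this is roughly how the now-aligned format-method prototypes look (parameter names taken from the export side; the prototypes in Plugin.pm remain the authoritative reference):
```
# Sketch for illustration only.
use strict;
use warnings;

sub volume_export_formats {
    my ($class, $scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots) = @_;
    ...
}

sub volume_import_formats {
    my ($class, $scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots) = @_;
    ...
}
```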
On the export side (volume_export, volume_export_formats):
The `$with_snapshots` option is now "defined" to be an
ordered array of snapshots to include, as a hint for
storages which need this. (As of the next commit this is
only btrfs, and only when also specifying a base snapshot,
which is a case we can currently not run into except on the
command line interface.)
The current providers of the `$with_snapshots` option will
still treat it as a boolean (since e.g. for ZFS you cannot
really "skip" snapshots AFAIK).
This is mainly intended for storages which do not have a
strong association between snapshots and the originals, or
an ordering (e.g. btrfs and lvm-thin allow creating
arbitrary snapshot trees, and with btrfs you can even
create a "circular" connection between subvolumes; also,
we could maybe consider reflink-based copies on XFS as
snapshots in the future?)
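A sketch (hypothetical helper, not from the actual implementation) of how a consumer can keep accepting both the old boolean form and the new ordered-array form:
```
# Sketch only.
use strict;
use warnings;

sub snapshot_list_hint {
    my ($with_snapshots) = @_;
    return [] if !$with_snapshots;               # no snapshots requested
    return $with_snapshots if ref($with_snapshots) eq 'ARRAY';   # ordered hint
    return undef;   # plain true value: include snapshots, order unspecified
}
```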
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>