The method is only inherited by the DirPlugin module from the base
Plugin, so it is not available there through a static module function
call using ::, but only when dispatching through the class (a class
dereference).
Other fix options would have been:
PVE::Storage::Plugin::free_image(@_);
or:
$class->SUPER::free_image($storeid, ...);
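For illustration, a minimal self-contained Perl sketch (not the actual
plugin code) of why the fully qualified call fails while method
dispatch works:

    package Plugin;
    sub free_image { return "freed by " . shift; }

    package DirPlugin;
    our @ISA = ('Plugin');    # free_image is only inherited, not defined here

    package main;
    print DirPlugin->free_image(), "\n";         # works: resolved via @ISA
    print Plugin::free_image('DirPlugin'), "\n"; # works: fully qualified into the base
    # DirPlugin::free_image('DirPlugin');        # would die: no such sub in DirPlugin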
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
[ Thomas: add some background to the commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With PVE 7.0 we use upstream's lvm2 packages, which seem to detect
more signatures (and refuse to create LVs when such signatures are
present).
This prevents creating new disks on LVM (thick) storages as reported
on pve-user [0].
Adding -Wy to wipe signatures, and --yes (to actually wipe them
instead of prompting) fixes the aborted lvcreate.
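A rough sketch of the resulting lvcreate invocation (the surrounding
variables and exact argument list are assumptions for illustration,
not the literal LVMPlugin code):

    use PVE::Tools qw(run_command);

    my ($vg, $name, $size) = ('pve', 'vm-100-disk-0', 4 * 1024 * 1024); # size in KiB
    my $cmd = [
        '/sbin/lvcreate',
        '-aly',                 # activate locally
        '-Wy', '--yes',         # wipe detected signatures instead of aborting
        '--size', "${size}k",
        '--name', $name,
        $vg,
    ];
    run_command($cmd, errmsg => "lvcreate '$vg/$name' error");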
These flags are added only to LVMPlugin and not to the lvcreate calls
in LvmThinPlugin, since I assume (and my quick tests confirm) that
thin pools are not affected by this issue.
Tested on a virtual test setup with an LVM storage on a (virtual)
iSCSI target and a local lvmthin storage.
[0] https://lists.proxmox.com/pipermail/pve-user/2021-July/172660.html
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
as then the btrfs assertion would happen after we had already created
subdirectories on some path, leaving those behind as leftovers.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The web interface always prefers qcow2 once that is in the list,
which is itself a bug of its own, as the format preferred by the
backend should be preferred too. Still, vmdk support should not be
extended; we can only cope with it in a limited way anyway. Both
formats can easily be enabled later if there's actual user demand for
them, while disabling is never that easy, at least if one cares about
backward compatibility.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Bumps APIVER to 9 and resets APIAGE to zero.
The import methods (volume_import, volume_import_formats):
These additionally get the '$snapshot' parameter which is
already present on the export side as an informational piece
to know which of the snapshots is the *current* one.
This parameter is inserted *in the middle* of the current
parameters, so the import & export format methods now have
the same signatures.
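For illustration, a sketch of the two format methods after the change
(parameter names are approximated from this description, not copied
verbatim from Plugin.pm):

    sub volume_export_formats {
        my ($class, $scfg, $storeid, $volname,
            $snapshot, $base_snapshot, $with_snapshots) = @_;
        ...
    }

    sub volume_import_formats {
        my ($class, $scfg, $storeid, $volname,
            $snapshot,                  # newly added, mirroring the export side
            $base_snapshot, $with_snapshots) = @_;
        ...
    }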
The current "disk" state will be set to this snapshot.
This, too, is required for our btrfs implementation.
`volume_import_formats` can obviously not make much
*use* of this parameter, but it'll still be useful to know
that the information is actually available in the import
call, so its presence will be checked in the btrfs
implementation.
Currently this is intended to be used for btrfs send/recv support,
which in theory could also receive additional metadata, similar to
how we do the "tar+size" format. However, we currently only really
use this within this repository in storage_migrate(), which has this
information readily available anyway.
On the export side (volume_export, volume_export_formats):
The `$with_snapshots` option is now "defined" to be an
ordered array of snapshots to include, as a hint for
storages which need this. (As of the next commit this is
only btrfs, and only when also specifying a base snapshot,
which is a case we can currently not run into except on the
command line interface.)
The current providers of the `$with_snapshots` option will
still treat it as a boolean (since e.g. for ZFS you cannot
really "skip" snapshots AFAIK).
This is mainly intended for storages which do not have a
strong association between snapshots and their originals, or
no ordering between them (e.g. btrfs and lvm-thin allow
creating arbitrary snapshot trees, and with btrfs you can
even create a "circular" connection between subvolumes; we
could perhaps also consider reflink-based copies on XFS as
snapshots in the future).
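A short illustration of the two shapes a plugin may now see for
`$with_snapshots` (names are illustrative, not from the patch):

    # existing callers keep passing a plain boolean
    my $with_snapshots = 1;

    # callers that care about ordering pass an array reference,
    # oldest snapshot first
    $with_snapshots = ['base', 'daily', 'current'];

    # a plugin that needs the ordering can normalize like this
    my @ordered = ref($with_snapshots) eq 'ARRAY' ? @$with_snapshots : ();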
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This is mostly the same as a directory storage, with a few major
differences:
* 'subvol' volumes are actual btrfs subvolumes and therefore
allow snapshots
* 'raw' files are placed *into* a subvolume and therefore
also allow snapshots, the raw file for volume
`btrstore:100/vm-100-disk-1.raw` can be found under
`$path/images/100/vm-100-disk-1/disk.raw`
* in both cases, snapshots add an '@name' suffix to the
subvolume's directory name, so snapshot 'foo' of the above
would be found under
`$path/images/100/vm-100-disk-1@foo/disk.raw`
or for format "subvol":
`$path/images/100/subvol-100-disk-1.subvol@foo`
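The path rules above can be summarized in a small sketch (helper name
and exact rules are assumptions for illustration, not the plugin's
code):

    sub btrfs_snapshot_path {
        my ($imagedir, $vmid, $name, $format, $snap) = @_;
        my $suffix = defined($snap) ? "\@$snap" : '';
        return "$imagedir/$vmid/$name$suffix" if $format eq 'subvol';
        return "$imagedir/$vmid/$name$suffix/disk.raw"; # 'raw' lives inside its own subvolume
    }

    # btrfs_snapshot_path("$path/images", 100, 'vm-100-disk-1', 'raw', 'foo')
    #   => "$path/images/100/vm-100-disk-1@foo/disk.raw"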
Note that qgroups aren't included in btrfs-send streams,
therefore for now we will only be using *unsized* subvolumes
for containers and place a regular raw+ext4 file for sized
containers.
We could extend the import/export stream format to include this
information at the front (similar to how we do the "tar+size"
format), but we would need to include the size of all the contained
snapshots as well, since they can technically change. (And before
enabling quotas we should do some performance testing on bigger file
systems with multiple snapshots, as there are quite a few reports of
the fs slowing down considerably in such scenarios.)
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
stores the regex definition in PVE::Storage.
One test had to be adapted because it tested obsolete behavior:
it expected vztmpl files to only end with .tar.gz, but the new regex
also includes .tar.xz, and there is nothing against allowing .tar.xz
files as vztmpl files.
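Roughly the kind of pattern involved (illustrative; the exact
definition now lives in PVE::Storage):

    my $vztmpl_re = qr/\.tar\.([gx]z)$/;

    for my $file ('debian-11-standard.tar.gz', 'alpine-3.14-default.tar.xz') {
        print "$file is accepted as vztmpl\n" if $file =~ $vztmpl_re;
    }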
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
the size returned by volume_size_info is used for creating the new
destination image in PVE::QemuServer::clone_disk (and probably
elsewhere). In certain cases the return values are tainted: they are
obtained by a run_command call and, depending on the format and
length of the parsed output, can still carry their taint attribute.
One example of a tainted return has been reported in our
community-forum:
https://forum.proxmox.com/threads/cannot-clone-vm-or-move-disk-with-more-than-13-snapshots.89628/
A qcow2 image with 13 snapshots generates an output > 4k in length
from `qemu-img info --output=json`, which in turn causes the output
to be considered tainted.
This patch untaints the returns where applicable (see the sketch
after the list below). The other storage plugins are not affected:
* LVMPlugin returns a single number and a newline (thus gets untainted
by run_command)
* RBDPlugin untaints the complete json before decoding
* ZFSPoolPlugin and ISCSIDirectPlugin explicitly untaint their
returns.
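A generic sketch of the untaint idiom (helper name is mine, not from
the patch): data parsed from external command output carries Perl's
taint flag, and capturing it through a regex clears it.

    sub untaint_int {
        my ($val) = @_;
        die "unexpected value '$val'\n" if $val !~ /^(\d+)$/;
        return $1;    # the regex capture is an untainted copy
    }

    # e.g. for the size field parsed from `qemu-img info --output=json`:
    # my $size = untaint_int($info->{'virtual-size'});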
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
only affects LVM storages with 'saferemove 1' where the import fails at a rather
advanced stage. Previously in such cases, the renamed (by free_image) volume
del-vm-XYZ-disk-N would be left over.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Storage.pm's vdisk_free interprets truthy return values as worker subs, so be
explicit about returning undef here. Not an issue at the moment, because
run_client_command already returns undef, but better be safe than sorry.
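A minimal sketch of the concern (function body and parameter names
are illustrative, not the actual plugin code):

    sub free_image {
        my ($class, $storeid, $scfg, $volname, $isBase) = @_;
        # ... delete the volume via the storage's client command ...
        return undef;   # be explicit: a truthy return would be treated as a worker sub
    }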
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Commit d9ece228fb introduced the workaround of using systemd units,
and 25e222ca0d re-used the functionality for fuse mounts too.
The latter commit suggests switching to mount.fuse.ceph for the
'_netdev' option, but that doesn't seem to work:
root@pve701 / # mount -t fuse.ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/fuse -o 'ceph.id=admin,ceph.keyfile=/etc/pve/priv/ceph/cephfs.secret,ceph.conf=/etc/pve/ceph.conf,_netdev'
ceph-fuse[20729]: starting ceph client
2021-06-15T14:22:00.631+0200 7f995f878080 -1 init, newargv = 0x55e09fc11a40 newargc=11
ceph-fuse[20729]: starting fuse
root@pve701 / # mount -t ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/normal -o 'name=admin,secretfile=/etc/pve/priv/ceph/cephfs.secret,conf=/etc/pve/ceph.conf,_netdev'
root@pve701 / # mount | grep mnttest
ceph-fuse on /mnttest/fuse type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
10.10.10.11,10.10.10.12,10.10.10.13:/ on /mnttest/normal type ceph (rw,relatime,name=admin,secret=<hidden>,acl,_netdev)
Also, the return value is not propagated by mount.fuse.ceph, meaning the output
would need to be parsed...
root@pve701 ~ # mount -t fuse.ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/fuse -o 'ceph.id=admin,ceph.keyfile=/etc/pve/priv/ceph/cephfs.secret,ceph.conf=/etc/pve/ceph.conf,_netdev'
2021-06-15T14:42:56.326+0200 7f634edae080 -1 init, newargv = 0x560cdb5e0a40 newargc=11
ceph-fuse[34480]: starting ceph client
fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option
ceph-fuse[34480]: fuse failed to start
2021-06-15T14:42:56.338+0200 7f634edae080 -1
fuse_mount(mountpoint=/mnttest/fuse) failed.
Mount failed with status code: 5
root@pve701 ~ # echo $?
0
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
It's necessary to be on Nautilus before upgrading to 7.x, so the check is no
longer needed. See commit e54c3e3347. It didn't
cleanly revert, because there were cleanups made afterwards.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which is used if there is no ('dir'-type) 'local' entry. Storage configurations
made by the installer also support backups for the 'local' storage, and the
'prune-backups' parameter is not really useful otherwise.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Don't add an explicit deprecation warning on parsing (yet); this is
already done in the pve6to7 script. Also, automatic conversion to
'prune-backups' happens when the section config is read, so over time
fewer users should be affected.
Postpone the explicit warning/dropping of the parameter to a future
major release.
Also switch the setting for the default 'local' storage to 'prune-backups'.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This was never marked stable and the recommended one is the external
version, which is maintained by linbit themselves.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and avoid a warning. Auto-detecting the format of the base volume is
deprecated. See commit d9f059aa6cfccefaffa3532556e966df4a99ece2 in
qemu for more information.
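A hedged sketch of the kind of call affected (paths and variables are
placeholders, not the plugin's actual clone code): the backing format
is passed explicitly with -F instead of letting qemu-img probe it.

    use PVE::Tools qw(run_command);

    my ($base_path, $base_format, $new_path) =
        ('/tmp/base.qcow2', 'qcow2', '/tmp/clone.qcow2');
    run_command([
        '/usr/bin/qemu-img', 'create',
        '-b', $base_path, '-F', $base_format,   # explicit backing format
        '-f', 'qcow2', $new_path,
    ]);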
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
similar to the existing encryption key handling, but without
auto-generation since we only have the public part here.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
A restore to ZFS for a container which has a volume (rootfs / mount
point) of size 0 failed because the refquota property does not accept
'0k' but wants 'none' in that situation.
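A sketch of the fix's logic (helper and names are mine, not the
literal ZFSPoolPlugin code):

    sub refquota_value {
        my ($size_kib) = @_;
        return $size_kib ? "${size_kib}k" : 'none';   # refquota rejects '0k'
    }

    # applied as, roughly: zfs set refquota=<value> <pool>/subvol-100-disk-0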
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This patch introduces support for Ceph's RBD namespaces.
A new storage config parameter 'namespace' defines the namespace to be
used for the RBD storage.
The namespace must already exist in the Ceph cluster as it is not
automatically created.
The main intention is to use this for external Ceph clusters. With
namespaces, each PVE cluster can get its own namespace and will not
conflict with other PVE clusters.
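For example, a storage.cfg entry might look like this (values are
illustrative; the 'pve-ns' namespace must already exist on the Ceph
side):

    rbd: ext-ceph
        monhost 10.10.10.11 10.10.10.12 10.10.10.13
        pool rbd
        namespace pve-ns
        username admin
        content images,rootdir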
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
The <pool>/<image> paths are needed in quite a lot of places. Having one
single place where they are created helps to reduce duplicate code and
makes it easier to introduce new features.
The 'add_pool_to_disk' sub was already doing that but the name was not
really fitting. This commit renames it to the more general
'get_rbd_path' and changes the second parameter to the more widely used
$volume instead of $disk.
Furthermore, all occurrences where "$pool/$volume" was concatenated
have been replaced with a call to get_rbd_path.
Plus some minor code style cleanups for long function calls that were
touched.
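The rough shape of the helper (reconstructed from the description
above, not necessarily the exact RBDPlugin code):

    sub get_rbd_path {
        my ($scfg, $volume) = @_;
        my $path = $scfg->{pool} ? $scfg->{pool} : 'rbd';
        $path .= "/$scfg->{namespace}" if defined($scfg->{namespace});
        $path .= "/$volume" if defined($volume);
        return $path;
    }

    # get_rbd_path({ pool => 'rbd', namespace => 'pve-ns' }, 'vm-100-disk-0')
    #   => 'rbd/pve-ns/vm-100-disk-0'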
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
by relying on archive_info's vmid first. archive_info is already used to
determine if it's a standard name, and in that case the vmid is certainly set.
Also add asserts to make sure we got what we expected.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>