Previously, the transport format (which currently is always 'zfs') was
passed in, resulting in subvol disks not being renamed correctly.
Fixes: a97d3ee ("Introduce allow_rename parameter for pvesm import and storage_migrate")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
`qemu-img info --output=json` returns the size and used values as integers in
the JSON format, but the regex match converts them to strings.
As we know they only contain digits, we can simply cast them back to integers
after the regex.
The API requires them to be integers.
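A minimal sketch of the idea (the path is a placeholder and this is not the
actual plugin code; the JSON field names are those of
`qemu-img info --output=json`):
```
use strict;
use warnings;

# The JSON values are integers, but matching them with a regex yields
# strings, so cast the captures back before handing them to the API.
my $json = `qemu-img info --output=json /var/lib/vz/images/100/vm-100-disk-0.qcow2`;

my ($size, $used);
if ($json =~ m/"virtual-size":\s*(\d+)/) {
    $size = int($1);    # back to an integer, as the API schema expects
}
if ($json =~ m/"actual-size":\s*(\d+)/) {
    $used = int($1);
}
```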
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
The first step is to allocate rbd images correctly.
The metadata objects still need to be stored in a replicated pool, but
by providing the --data-pool parameter on image creation, we can place
the data objects on the erasure coded (EC) pool.
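Roughly, the allocation then boils down to something like this (pool and
image names are made up; not the exact plugin invocation):
```
use strict;
use warnings;

# Keep the image metadata in the replicated pool, but place the data
# objects on the erasure coded pool via --data-pool.
my @cmd = (
    'rbd', 'create', 'vm-100-disk-0',
    '--size', 4 * 1024,            # size in MiB
    '--pool', 'rbd_replicated',    # replicated pool for the metadata objects
    '--data-pool', 'rbd_ec',       # EC pool for the data objects
);
system(@cmd) == 0 or die "rbd create failed\n";
```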
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Some versions of ZFS do not automatically display the child snapshots
when '-t snapshot' is used, but require '-r' to be present
additionally[1]. And in general, it's cleaner to specify the flag
explicitly.
Because of that, commit ac5c1af led to a regression[0] in the context
of ZFS over iSCSI with zfs_get_sorted_snapshot_list. Fix it by adding
the -r flag again.
The volume_snapshot_info function is currently only used in the
context of replication and that requires a local ZFS pool, but it
would be affected by the same issue if it is ever used in the context
of ZFS over iSCSI, so also add -r there.
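For illustration, an explicit recursive listing could look like this (the
dataset name is a placeholder; not the exact plugin invocation):
```
use strict;
use warnings;

# Without -r, some ZFS versions omit descendant snapshots even with
# '-t snapshot', so request recursion explicitly.
my @cmd = (
    'zfs', 'list', '-t', 'snapshot', '-r',
    '-H', '-o', 'name',    # script-friendly output, names only
    '-s', 'creation',      # sorted by creation time
    'tank/vm-100-disk-0',
);
open(my $fh, '-|', @cmd) or die "failed to run zfs list\n";
my @snapshots = map { chomp; $_ } <$fh>;
close($fh);
```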
[0]: https://forum.proxmox.com/threads/102683/
[1]: https://forum.proxmox.com/threads/102683/post-442577
Fixes: 8c20d8a ("plugin: add volume_snapshot_info function")
Fixes: ac5c1af ("zfspool: add zfs_get_sorted_snapshot_list helper")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
There are cases where autoactivation can fail, as reported in the
community forum [0]. And it could also be that a volume was
deactivated by something outside of our control.
It doesn't seem strictly necessary to activate the thin pool itself
(creating/removing/activating LVs within the pool still works if it's
not active), but it does not report usage information as long as
neither the pool nor any of its LVs are active. Activate the pool so
that usage information is reported and the flag can be used in
status(); a failed activation also serves as a good indicator that
there's a problem with the pool.
Before activating, check the (cached) lv_state from lvm_list_volumes.
It's necessary to update the cache in activate_storage, because the
flag is re-used in status(). Also update it for other (de)activations
to be more future-proof.
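As a rough sketch (VG and pool names are placeholders, and the cached state
is simplified to a boolean):
```
use strict;
use warnings;

my ($vg, $thinpool) = ('pve', 'data');

# $active would come from the lv_state cached by lvm_list_volumes;
# only call lvchange if the thin pool is not already active.
my $active = 0;
if (!$active) {
    system('lvchange', '-ay', "$vg/$thinpool") == 0
        or die "activating thin pool $vg/$thinpool failed\n";
}
```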
[0]: https://forum.proxmox.com/threads/local-lvm-not-available-after-kernel-update-on-pve-7.97406
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Functionality has been added for the following storage types:
* directory ones, based on the default implementation:
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph
A new feature `rename` has been introduced to mark which storage
plugins support it.
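A caller can then probe for the feature roughly like this (a sketch only;
the helper signatures are reproduced from memory and should be treated as
assumptions):
```
use strict;
use warnings;
use PVE::Storage;

my $volid = 'local-lvm:vm-100-disk-0';    # hypothetical volume ID

my $cfg = PVE::Storage::config();
my ($storeid) = PVE::Storage::parse_volume_id($volid);

if (PVE::Storage::volume_has_feature($cfg, 'rename', $volid)) {
    print "storage '$storeid' can rename $volid\n";
} else {
    die "storage '$storeid' does not support renaming volumes\n";
}
```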
The storage plugin API version and age have been bumped.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
the intention of this feature is to support the following use-cases:
- reassign a volume from one owning guest to another (which usually
entails a rename, since the owning vmid is encoded in the volume name)
- rename a volume (e.g., to use a more meaningful name instead of the
auto-assigned ...-disk-123)
only the former is implemented at the caller side in
qemu-server/pve-container for now, but since the lower-level feature is
basically the same for both, we can take advantage of the storage plugin
API bump now to get the building block for this future feature in place
already.
adapted ApiChangelog to fix conflicts and added more detail above
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
free_image doesn't need to check for protection, because that will
happen on the server.
Getting/updating notes has also been refactored to re-use the code
for the PBS api calls.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
add missing b-d and depend on libposix-strptime-perl
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
While it makes no difference for pruning itself, protected backups are
additionally protected against removal. Avoid the potential to confuse
the two. Also update the description for the API return value and add
an enum constraint.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
A protected backup is not removed by free_image and ignored when
pruning.
The protection_file_path function is introduced in Storage.pm, so that
it can also be used by vzdump itself and in archive_remove.
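A minimal sketch of the idea, assuming the protection marker is a sidecar
file placed next to the backup archive (path and naming are illustrative):
```
# Derive the path of the protection marker from the archive path.
sub protection_file_path {
    my ($path) = @_;
    return "${path}.protected";
}

# archive_remove/vzdump can then refuse to touch protected backups:
my $archive = '/var/lib/vz/dump/vzdump-qemu-100-2021_11_22-12_00_00.vma.zst';
die "backup is protected\n" if -e protection_file_path($archive);
```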
For pruning, renamed backups already behaved similarly to how protected
backups will, but there are a few reasons to not just use that for
implementing the new feature:
1. It wouldn't protect against removal.
2. It would make it necessary to rename notes and log files too.
3. It wouldn't naturally extend to other volumes if that's needed.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
replacing the ones for handling notes. To ensure backwards
compatibility with external plugins, all plugins that do not just call
another implementation need to call $class->{get, update}_volume_notes
when the attribute is 'notes' to catch any derived implementations.
This is mainly done to avoid the need to add new methods every time a
new attribute is added.
Not adding a timeout parameter like the notes functions have, because
it was not used and can still be added in the future if it is ever
needed.
For get_volume_attribute, undef will indicate that the attribute is
not supported. This makes it possible to distinguish "not supported"
from "error getting the attribute", which is useful when the attribute
is important for an operation. For example, free_image checking for
protection (introduced in a later patch) can abort if getting the
'protected' attribute fails.
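For an external plugin, the backwards-compatible dispatch described above
might look roughly like this (the method signature is an assumption here,
not copied from the actual API):
```
sub get_volume_attribute {
    my ($class, $scfg, $storeid, $volname, $attribute) = @_;

    if ($attribute eq 'notes') {
        # go through $class so derived notes implementations are still used
        return $class->get_volume_notes($scfg, $storeid, $volname);
    }

    return undef;    # attribute not supported
}
```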
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This avoids showing empty notes in the result of the content/{volid}
API call for volumes that do not even support notes. It's also in
preparation for the proposed get_volume_attribute generalization,
which expects undef to be returned when an attribute is not supported.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
useful for rollback, so that only the required replication snapshots
can be removed, and it's possible to abort early without deleting any
replication snapshots if there are other non-replication snapshots
blocking rollback.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
the plugins for file-based storages
* BTRFS
* CIFS
* Dir
* Glusterfs
* NFS
now allow the option 'preallocation'.
'preallocation' can have the following values:
* default
* off
* metadata
* falloc
* full
see the `qemu-img` man page for what these mean exactly. [0]
the default value was chosen to be
* qcow2: metadata (as previously)
* raw: off
when 'metadata' is used as the preallocation mode, 'off' is used for
raw images instead.
[0] https://qemu.readthedocs.io/en/latest/system/images.html#disk-image-file-formats
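For example, allocating a qcow2 image with the configured mode ends up as
something like the following (path and size are placeholders; not the exact
plugin code):
```
use strict;
use warnings;

# qcow2 with the configured preallocation mode; for raw images,
# 'metadata' is mapped to 'off', since raw has no metadata to preallocate.
my @cmd = (
    'qemu-img', 'create', '-f', 'qcow2',
    '-o', 'preallocation=metadata',
    '/var/lib/vz/images/100/vm-100-disk-0.qcow2',
    '32G',
);
system(@cmd) == 0 or die "qemu-img create failed\n";
```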
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
TPM state disks on directory storages may have completely unaligned
sizes, so this check doesn't make sense for them.
This appears to just be a (weak) safeguard and not serve an actual
functional purpose, so simply get rid of it to allow migration of TPM
state.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Currently, 'PVE::Storage::DirPlugin' is implicitly passed along as
$class, which means that if the base class's free_image calls another
method (e.g. filesystem_path) then the DirPlugin's method will be
used, rather than the one from BTRFSPlugin. Change it so that $class
itself is passed along.
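A toy illustration of the difference (generic package names, not the actual
plugin code):
```
package Base;
sub free_image {
    my ($class) = @_;
    # resolves against $class at runtime, so overrides in child classes apply
    $class->filesystem_path();
}
sub filesystem_path { print "Base path\n" }

package Child;
our @ISA = ('Base');
sub filesystem_path { print "Child path\n" }

package main;
Base::free_image('Base');     # hardcoded class: prints "Base path"
Base::free_image('Child');    # passing $class along: prints "Child path"
```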
See also commit 279d9de510 for context,
where the approach in this patch was suggested.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
instead of hardcoding it to a potentially outdated value.
For `smbclient` we only set the max-protocol version, and that could
only be smb2 or smb3 (no finer granularity) anyhow, so this was not
really correct.
Nowadays the kernel has dropped SMB1 and tries to go for SMB2.1 or
higher by default, depending on what client and server support. SMB2.1
is Windows 7/2008R2 - both EOL for quite a while now, so OK as the
default lower bound.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Added support for the SMB version SMB3_11. When `min protocol =
SMB3_11` is set in smb.conf, the CIFS mount fails with the following
error:
```
CIFS VFS: cifs_mount failed w/return code = -95
```
Added an optional setting to use `vers=3.11`.
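The resulting mount then roughly corresponds to (share, mount point and
credentials are placeholders):
```
my @cmd = (
    '/bin/mount', '-t', 'cifs',
    '//fileserver/backup', '/mnt/pve/cifs-backup',
    '-o', 'vers=3.11,username=backupuser',
);
system(@cmd) == 0 or die "mount.cifs failed\n";
```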
Signed-off-by: Moayad Almalat <m.almalat@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
[ Thomas: move text from cover letter to commit message &
add S-o-b ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Commit a000e26ce7 caused a test failure
in pve-manager, because now 'keep-all=0' is not thrown out upon
validation anymore. Fix the issue the commit addressed differently,
by simply creating a (shallow) copy of the hash first, and using
the logic from before the commit.
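The fix boils down to working on a copy, e.g. (a sketch, not the exact code):
```
my $prune_backups = { 'keep-all' => 1, 'keep-last' => 0 };

# Shallow copy, so deleting zero-valued keep-* options does not leak
# back into the caller's hash.
my $opts = { %$prune_backups };
delete $opts->{$_} for grep { !$opts->{$_} } keys %$opts;

# $opts is { 'keep-all' => 1 }, $prune_backups is unchanged.
```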
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
While the current way to detect settings like { 'keep-last' => 0 } is
concise, it's also wrong, because the delete operation is visible to
the caller. This resulted in e.g.
# $hash is { 'keep-all' => 1 }
my $s = print_property_string($hash, 'prune-backups');
# $hash is now {}, $s is 'keep-all=1'
because validation is called in print_property_string. The same issue
is present when calling prune_mark_backup_group.
Because validation complains when keep-all and something else are set,
this shouldn't have caused any real issues, apart from vzdump with
keep-all=1 wrongly taking the removal path, but without any retention
settings, and thus not removing anything:
INFO: prune older backups with retention:
INFO: pruned 0 backup(s)
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which is only set by parse_volname when the volume is a VM or
container image, but not for other content types.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
similar to commit 279d9de510
This calling style is pretty dangerous in general for such plugin
systems...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
By adding the keyring for RBD storage or the secret for CephFS ones, it
is possible to add an external Ceph cluster with only one API call.
Previously the keyring / secret file needed to be placed in
/etc/pve/priv/ceph/$storeID.{keyring,secret} manually.
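For example, adding an external RBD storage could then be a single call
along these lines (the exact parameter name and format for passing the
keyring are assumptions here):
```
# Hypothetical one-shot setup of an external Ceph RBD storage; previously
# the keyring had to be copied to /etc/pve/priv/ceph/<storeid>.keyring
# by hand before the storage became usable.
my @cmd = (
    'pvesm', 'add', 'rbd', 'ext-ceph',
    '--monhost', '192.0.2.1,192.0.2.2,192.0.2.3',
    '--pool', 'rbd',
    '--username', 'admin',
    '--keyring', '/root/ext-ceph.keyring',    # assumed parameter/format
);
system(@cmd) == 0 or die "pvesm add failed\n";
```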
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
by not dying when the dataset is already unmounted. Can be triggered
for a container by doing two rollbacks in a row.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The method is only derived in the DirPlugin module from the base
Plugin, so we do not have it available there through a static module
method call using ::, but only when using a class dereference.
Other fix options would have been:
PVE::Storage::Plugin::free_image(@_);
or:
$class->SUPER::free_image($storeid, ...);
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
[ Thomas: add some background to the commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With PVE 7.0 we use upstream's lvm2 packages, which seem to detect
'more' signatures (and refuse to create LVs when they are present).
This prevents creating new disks on LVM (thick) storages as reported
on pve-user [0].
Adding -Wy to wipe signatures, and --yes (to actually wipe them
instead of prompting) fixes the aborted lvcreate.
Adding only to LVMPlugin and not to the lvcreate calls in
LvmThinPlugin, since I assume (and my quick tests confirm) that thin
pools are not affected by this issue.
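The resulting call then looks roughly like this (VG, LV name and size are
placeholders):
```
# -Wy asks lvcreate to wipe detected signatures; --yes answers the
# confirmation prompt so the wipe happens non-interactively.
my @cmd = (
    'lvcreate', '-aly', '-Wy', '--yes',
    '--size', '32G',
    '--name', 'vm-100-disk-0',
    'vg-iscsi',
);
system(@cmd) == 0 or die "lvcreate failed\n";
```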
Tested on a virtual test setup with an LVM storage on a (virtual) iSCSI
target and a local lvmthin storage.
[0] https://lists.proxmox.com/pipermail/pve-user/2021-July/172660.html
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
as then the btrfs assertion would happen after we already created
subdirectories on some path, leaving those left over.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the web interface always prefers qcow2 once it is in the list, which
is a bug on its own, as the format preferred by the backend should be
preferred too. Still, vmdk support should not be extended; we can only
cope with that format in a limited way anyway, and both formats can
easily be enabled later if there's actual user demand for them.
Disabling is never that easy, at least if one cares about backward
compatibility.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>