The PBS storage plugin used PVE code to detect whether an API token
was entered in the username field. This led to bad requests for some
valid PBS tokens that are not valid PVE tokens, for example
"root@pam!1234" and "root@pam!_-".
Relax the token pattern to allow token names and realms that start
with numbers or underscores. Also allow single character token names,
which are allowed on the backend even though they can't be created
through the PBS Web UI.
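The relaxed rule can be sketched with a hypothetical regex (this is an illustrative Python sketch, not the actual pattern in the plugin): the realm and the token name may start with a digit or underscore, and a token name may be a single character.

```python
import re

# Hedged sketch of the relaxed userid pattern: user@realm!tokenname.
# Realm and token name may begin with a digit or underscore, and a
# single-character token name is accepted.
PBS_TOKEN_RE = re.compile(
    r'^[^\s@!]+'                      # user part
    r'@[A-Za-z0-9_][A-Za-z0-9._-]*'   # realm, may start with digit/underscore
    r'!'
    r'[A-Za-z0-9_][A-Za-z0-9._-]*$'   # token name, single char allowed
)

for userid in ('root@pam!1234', 'root@pam!_-', 'root@pam!a'):
    assert PBS_TOKEN_RE.match(userid)
```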
Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
Link: https://lore.proxmox.com/20251120131149.147981-1-r.obkircher@proxmox.com
Doing a rollback via CLI on an LVM storage with 'saferemove' and
'snapshot-as-volume-chain' would run into a locking issue, because
the forked zero-out worker would try to acquire the lock while the
main CLI task is still inside the locked section for
volume_snapshot_rollback_locked(). The same issue does not happen when
the rollback is done via UI. The reason for this can be found in the
note regarding fork_worker():
> we simulate running in foreground if ($self->{type} eq 'cli')
So the worker will be awaited synchronously in CLI context, resulting
in the deadlock, while via API/UI, the main task would move on and
release the lock allowing the zero-out worker to acquire it.
To avoid the issue, do not call fork_cleanup_worker() inside the
locked section.
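The deadlock shape can be illustrated with a minimal Python sketch (names are illustrative, not the actual PVE code): a worker that needs the same non-reentrant lock can never proceed while it is awaited synchronously inside the locked section, but succeeds once it runs after the section.

```python
import threading

lock = threading.Lock()  # stands in for the per-storage lock

def zero_out_worker():
    # The cleanup worker needs the same storage lock.
    acquired = lock.acquire(blocking=False)
    if acquired:
        lock.release()
    return acquired

# Buggy shape (CLI context): the worker is awaited synchronously while
# the rollback still holds the lock -> it can never acquire it.
with lock:
    assert zero_out_worker() is False  # would deadlock if blocking

# Fixed shape: start the cleanup worker after the locked section.
assert zero_out_worker() is True
```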
Fixes: 8eabcc7 ("lvm plugin: snapshot-as-volume-chain: use locking for snapshot operations")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Link: https://lore.proxmox.com/20251120101742.24843-1-f.ebner@proxmox.com
It's more robust and cheaper to always unlink (exactly one syscall)
and ignore ENOENT than to stat and optionally unlink (two syscalls in
the worst case).
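The same pattern in a self-contained Python sketch (illustrative, not the actual Perl code):

```python
import os
import tempfile

def unlink_ignore_enoent(path):
    """Always unlink; a missing file is not an error (one syscall)."""
    try:
        os.unlink(path)
    except FileNotFoundError:  # ENOENT
        pass

# Works whether or not the file exists, without a prior stat().
fd, path = tempfile.mkstemp()
os.close(fd)
unlink_ignore_enoent(path)   # file existed, removed
unlink_ignore_enoent(path)   # already gone, no error either
assert not os.path.exists(path)
```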
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
There are multiple reasons to disallow disabling
'snapshot-as-volume-chain' while a qcow2 image exists:
1. The list of allowed formats depends on 'snapshot-as-volume-chain'.
2. Snapshot functionality is broken. This includes creating snapshots,
but also rollback, which removes the current volume and then fails.
3. There already is coupling between having qcow2 on LVM and having
'snapshot-as-volume-chain' enabled. For example, the
'discard-no-unref' option is required for qcow2 on LVM, but
qemu-server only checks for 'snapshot-as-volume-chain' to avoid
hard-coding LVM. Another one is that volume_qemu_snapshot_method()
returns 'mixed' when the format is qcow2 even when
'snapshot-as-volume-chain' is disabled. Hunting down these corner
cases just to make disabling easier does not seem worth it,
considering reasons 1. and 2. already apply anyway.
4. There might be other similar issues that have not surfaced yet,
because disabling the feature while qcow2 is present is essentially
untested and very uncommon.
For file-based storages, the 'snapshot-as-volume-chain' property is
already fixed, i.e. is determined upon storage creation.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The original on_update_hook() method is limited, because only the
updated properties and values are passed in. Introduce a new
on_update_hook_full() method which also receives the current storage
configuration and the list of which properties are to be deleted. This
allows detecting and reacting to all changes and knowing how values
changed.
Deletion of properties is deferred to after the on_update_hook(_full)
call. This makes it possible to pass the unmodified current storage
configuration to the method.
The default implementation of on_update_hook_full() just falls back to
the original on_update_hook().
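A hypothetical Python sketch of the hook semantics (the real methods are Perl plugin methods in pve-storage; names mirror them for illustration): the new hook sees the unmodified current configuration because deletions are applied only after it returns, and its default implementation falls back to the original hook.

```python
# Illustrative sketch only; not the actual pve-storage implementation.
class BasePlugin:
    def on_update_hook(self, storeid, changed):
        pass  # original hook: only sees the updated properties

    def on_update_hook_full(self, storeid, changed, current_config, delete):
        # Default: fall back to the original hook, ignoring the extra
        # context (current config and to-be-deleted properties).
        return self.on_update_hook(storeid, changed)

def update_storage(plugin, storeid, config, changed, delete):
    # The hook runs before deletions are applied, so it sees the
    # unmodified current configuration.
    plugin.on_update_hook_full(storeid, changed, dict(config), delete)
    for prop in delete:          # deletion deferred until after the hook
        config.pop(prop, None)
    config.update(changed)
    return config

cfg = update_storage(BasePlugin(), 'store1',
                     {'a': 1, 'b': 2}, {'a': 3}, ['b'])
assert cfg == {'a': 3}
```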
Bump APIVER and APIAGE.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When mapping a volume (e.g., because KRBD is enabled) and the hint
'guest-is-windows' is given and true, pass the rxbounce option. This
is to avoid "bad crc/signature" warnings in the journal, retransmits
and degraded performance, see [1]. If the volume is already mapped
without rxbounce (this can be determined from the map options exposed
in sysfs), and it should be mapped with rxbounce, and the
'plugin-may-deactivate-volume' hint denotes it is currently safe to
deactivate the volume, unmap the volume and re-map it with rxbounce.
If 'guest-is-windows' is not given or not true, and the volume is
already mapped, take no action. This also means that guest volumes
that are mapped with rxbounce, but do not have to be (because they do
not belong to a Windows guest), are not deactivated. This can be the
case if a user applied the workaround of adding rxbounce to
'rbd_default_map_options', since this applies to all volumes.
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=5779
[2] https://forum.proxmox.com/threads/155741/post-710845
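The remap decision can be summarized in a short hedged sketch (illustrative Python, not the actual RBD plugin code):

```python
# Illustrative decision logic; names are assumptions, not plugin API.
def needs_remap(hints, current_map_options):
    """Return True if a mapped volume should be unmapped and
    re-mapped with the rxbounce option."""
    if not hints.get('guest-is-windows'):
        return False                 # never force-remove rxbounce
    if 'rxbounce' in current_map_options:
        return False                 # already mapped as desired
    # Only remap when the caller says deactivation is currently safe.
    return bool(hints.get('plugin-may-deactivate-volume'))

assert needs_remap({'guest-is-windows': True,
                    'plugin-may-deactivate-volume': True}, set()) is True
assert needs_remap({'guest-is-windows': True}, set()) is False
assert needs_remap({}, {'rxbounce'}) is False
```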
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-6-f.weber@proxmox.com
Plugin methods {activate,map}_volume accept an optional hints
parameter. Make PVE::Storage::{activate,map}_volumes also accept
hints, verify they are consistent with the schema, and pass them down
to the plugin when activating the volumes.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-4-f.weber@proxmox.com
Currently, when a storage plugin activates or maps a guest volume, it
has no information about the respective guest. This is by design to
reduce coupling between the storage layer and the upper layers.
However, in some cases, storage plugins may need to activate volumes
differently based on certain features of the guest. An example is the
RBD plugin with KRBD enabled, where guest volumes of Windows VMs have
to be mapped with the rxbounce option.
Introduce "hints" as a mechanism that allows the upper layers to pass
down well-defined information to the storage plugins on volume
activation/mapping. The storage plugin can make adjustments to its
volume activation/mapping based on the hints. The supported hints are
specified by a JSON schema and may be extended in the future.
Add a subroutine that checks whether a particular hint is supported
(may be used by the storage plugin as well as upper layers). This
allows adding further hints without having to bump pve-storage, since
upper layers can just check whether the current pve-storage supports a
particular hint.
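A minimal sketch of such a support check (illustrative Python; the real implementation is driven by a JSON schema in pve-storage, and the schema contents here are assumptions):

```python
# Illustrative hint schema; the real one is a JSON schema in pve-storage.
HINTS_SCHEMA = {
    'guest-is-windows': {'type': 'boolean'},
    'plugin-may-deactivate-volume': {'type': 'boolean'},
}

def hint_supported(name):
    """Upper layers and plugins can probe for a hint at runtime
    instead of relying on a pve-storage version bump."""
    return name in HINTS_SCHEMA

assert hint_supported('guest-is-windows')
assert not hint_supported('guest-cpu-count')  # hypothetical future hint
```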
The Boolean 'guest-is-windows' hint denotes that the
to-be-activated/mapped volume belongs to a Windows VM.
It is not guaranteed that the volume is inactive when
{activate,map}_volume are called, and it is not guaranteed that hints
are passed on every storage activation. Hence, it can happen that a
volume is already active but applying the hint would require unmapping
the volume and mapping it again with the hint applied (this is the
case for rxbounce). To cover such cases, the Boolean hint
'plugin-may-deactivate-volume' denotes whether unmapping the volume is
currently safe. Only if this hint is true, the plugin may deactivate
the volume and map it again with the hint applied.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-2-f.weber@proxmox.com
As reported in the community forum [0], there currently is a warning
from LVM when converting to a base image on an LVM-thin storage:
> WARNING: Combining activation change with other commands is not advised.
From a comment in the LVM source code:
> Unfortunately, lvchange has previously allowed changing an LV
> property and changing LV activation in a single command. This was
> not a good idea because the behavior/results are hard to predict and
> not possible to sensibly describe. It's also unnecessary. So, this
> is here for the sake of compatibility.
>
> This is extremely ugly; activation should always be done separately.
> This is not the full-featured lvchange capability, just the basic
> (the advanced activate options are not provided.)
>
> FIXME: wrap this in a config setting that we can disable by default
> to phase this out?
While it's not clear there's an actual issue in the specific use case
here, just follow what LVM recommends for future-proofing.
[0]: https://forum.proxmox.com/threads/165279/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250422133314.60806-1-f.ebner@proxmox.com
Extend volume import functionality to support 'iso', 'snippets',
'vztmpl', and 'import' types, in addition to the existing support for
'images' and 'rootdir'. This is a prerequisite for the ability to move
ISOs, snippets and container templates between nodes.
Existing behavior for importing VM disks and container volumes remains
unchanged.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20250916123257.107491-4-f.schauer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Returning the formats in the following way:
```
"format": [
{
"format1" => 1,
"format2" => 1,
...
},
"defaultFormat"
]
```
is not a very good return format: it abuses an array as a tuple and
unnecessarily encodes a list of formats as an object. Also, we can't
describe it properly in JSONSchema in Perl, nor is our perl->rust
generator able to handle it.
Instead, return it like this:
```
"formats": {
"default": "defaultFormat",
"supported": ["format1", "format2", ...]
}
```
which makes it much more sensible as an API return schema and allows
annotating it in the JSONSchema.
For compatibility reasons, keep the old property around, and add a
comment to remove it with 10.0.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
no problem for 'select_existing', but we cannot actually describe
'format' with our JSONSchema, since it uses an array as a form of tuple,
and even with oneOf this cannot currently be described.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
As reported by a user in the enterprise support in a ticket handled by
Friedrich, concurrent snapshot operations could lead to metadata
corruption of the volume group with unlucky timing. Add the missing
locking for operations modifying the metadata, i.e. allocation, rename
and removal. Since volume_snapshot() and volume_snapshot_rollback()
only do those, use a wrapper for the whole function. Since
volume_snapshot_delete() can do longer-running commit or rebase
operations, only lock the necessary sections there.
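The locking pattern can be sketched with a small flock-based helper (illustrative Python; the lock file path and scope are assumptions, not the actual pve-storage lock):

```python
import fcntl
import os
from contextlib import contextmanager

# Illustrative per-storage lock; serializes metadata-modifying
# sections (allocation, rename, removal).
@contextmanager
def storage_lock(lockfile):
    fd = os.open(lockfile, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # exclusive lock
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)

# Short operations (snapshot, rollback) wrap the whole function; a
# long-running delete locks only the metadata-touching sections and
# performs commit/rebase outside the lock.
with storage_lock('/tmp/demo-storage.lck'):
    pass  # allocate/rename/remove volume metadata here
```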
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251103162330.112603-5-f.ebner@proxmox.com
With a shared LVM storage, parallel imports, which might be done in
the context of remote migration, could lead to metadata corruption
with unlucky timing, because of missing locking. Add locking around
allocation and removal, which are the sections that modify LVM
metadata. Note that other plugins suffer from missing locking here as
well, but only regarding naming conflicts. Adding locking around the
full call to volume_import() would mean locking for much too long.
Other plugins could follow the approach here, or there could be a
reservation approach like proposed in [0].
[0]: https://lore.proxmox.com/pve-devel/20240403150712.262773-1-h.duerr@proxmox.com/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251103162330.112603-4-f.ebner@proxmox.com