Commit Graph

2139 Commits

b6fefb03ba lvm plugin: disallow disabling 'snapshot-as-volume-chain' while a qcow2 image exists
There are multiple reasons to disallow disabling
'snapshot-as-volume-chain' while a qcow2 image exists:

1. The list of allowed formats depends on 'snapshot-as-volume-chain'.
2. Snapshot functionality is broken. This includes creating snapshots,
   but also rollback, which removes the current volume and then fails.
3. There already is coupling between having qcow2 on LVM and having
   'snapshot-as-volume-chain' enabled. For example, the
   'discard-no-unref' option is required for qcow2 on LVM, but
   qemu-server only checks for 'snapshot-as-volume-chain' to avoid
   hard-coding LVM. Another one is that volume_qemu_snapshot_method()
   returns 'mixed' when the format is qcow2 even when
   'snapshot-as-volume-chain' is disabled. Hunting down these corner
   cases just to make disabling easier does not seem worth it,
   considering reasons 1. and 2. already apply anyway.
4. There might be other similar issues that have not surfaced yet,
   because disabling the feature while qcow2 is present is essentially
   untested and very uncommon.

For file-based storages, the 'snapshot-as-volume-chain' property is
already immutable, i.e. it is fixed upon storage creation.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-11-14 00:34:58 +01:00
0b1331ccda close #6669: plugin api: introduce on_update_hook_full() method
The original on_update_hook() method is limited, because only the
updated properties and values are passed in. Introduce a new
on_update_hook_full() method which also receives the current storage
configuration and the list of which properties are to be deleted. This
allows detecting and reacting to all changes and knowing how values
changed.

Deletion of properties is deferred to after the on_update_hook(_full)
call. This makes it possible to pass the unmodified current storage
configuration to the method.

The default implementation of on_update_hook_full() just falls back to
the original on_update_hook().
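
A minimal sketch of such a fallback default (parameter names here are
assumptions, not the exact pve-storage signature):

```
sub on_update_hook_full {
    my ($class, $storeid, $scfg, $param, $delete_list) = @_;
    # Default: ignore the extra context and delegate to the old hook,
    # so existing plugins keep working unchanged.
    return $class->on_update_hook($storeid, $param);
}
```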

Bump APIVER and APIAGE.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-11-14 00:34:58 +01:00
24367c07d3 plugin: rbd: pass rxbounce when mapping Windows VM guest volumes
When mapping a volume (e.g., because KRBD is enabled) and the hint
'guest-is-windows' is given and true, pass the rxbounce option. This
is to avoid "bad crc/signature" warnings in the journal, retransmits
and degraded performance, see [1]. If the volume is already mapped
without rxbounce (this can be determined from the map options exposed
in sysfs), and it should be mapped with rxbounce, and the
'plugin-may-deactivate-volume' hint denotes it is currently safe to
deactivate the volume, unmap the volume and re-map it with rxbounce.

If 'guest-is-windows' is not given or not true, and the volume is
already mapped, take no action. This also means that guest volumes
that are mapped with rxbounce, but do not have to be (because they do
not belong to a Windows guest), are not deactivated. This can be the
case if a user applied the workaround of adding rxbounce to
'rbd_default_map_options', since this applies to all volumes.
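
Condensed, the decision logic reads roughly like this (helper names
are illustrative, not the actual plugin code):

```
sub handle_rxbounce_hint {
    my ($volname, $hints) = @_;
    return if !$hints->{'guest-is-windows'};
    if (!volume_is_mapped($volname)) {
        map_volume_with_option($volname, 'rxbounce');
    } elsif (!mapped_with_rxbounce($volname)    # map options from sysfs
        && $hints->{'plugin-may-deactivate-volume'})
    {
        unmap_volume($volname);    # hint says this is currently safe
        map_volume_with_option($volname, 'rxbounce');
    }
}
```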

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=5779
[2] https://forum.proxmox.com/threads/155741/post-710845

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-6-f.weber@proxmox.com
2025-11-14 00:32:59 +01:00
738897852c plugin: rbd: factor out subroutine to obtain RBD ID
This allows the subroutine to be reused.

No functional change intended.

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-5-f.weber@proxmox.com
2025-11-14 00:32:55 +01:00
7c2a554b97 storage: activate/map volumes: verify and pass hints to plugin
Plugin methods {activate,map}_volume accept an optional hints
parameter. Make PVE::Storage::{activate,map}_volumes also accept
hints, verify they are consistent with the schema, and pass them down
to the plugin when activating the volumes.

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-4-f.weber@proxmox.com
2025-11-14 00:32:53 +01:00
8818ff0d1d plugin api: bump api version and age
Introduce $hints parameter to activate_volume() and map_volume().

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-3-f.weber@proxmox.com
2025-11-14 00:32:51 +01:00
e9573e1db5 plugin: introduce hints for activating and mapping volumes
Currently, when a storage plugin activates or maps a guest volume, it
has no information about the respective guest. This is by design to
reduce coupling between the storage layer and the upper layers.

However, in some cases, storage plugins may need to activate volumes
differently based on certain features of the guest. An example is the
RBD plugin with KRBD enabled, where guest volumes of Windows VMs have
to be mapped with the rxbounce option.

Introduce "hints" as a mechanism that allows the upper layers to pass
down well-defined information to the storage plugins on volume
activation/mapping. The storage plugin can make adjustments to its
volume activation/mapping based on the hints. The supported hints are
specified by a JSON schema and may be extended in the future.

Add a subroutine that checks whether a particular hint is supported
(it may be used by storage plugins as well as by upper layers). This
allows adding further hints without having to bump pve-storage, since
upper layers can simply check whether the current pve-storage
supports a particular hint.

The Boolean 'guest-is-windows' hint denotes that the
to-be-activated/mapped volume belongs to a Windows VM.

It is not guaranteed that the volume is inactive when
{activate,map}_volume are called, and it is not guaranteed that hints
are passed on every storage activation. Hence, it can happen that a
volume is already active but applying the hint would require unmapping
the volume and mapping it again with the hint applied (this is the
case for rxbounce). To cover such cases, the Boolean hint
'plugin-may-deactivate-volume' denotes whether unmapping the volume is
currently safe. Only if this hint is true may the plugin deactivate
the volume and map it again with the hint applied.
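
A minimal sketch of the schema plus support check (names are
illustrative, not the actual pve-storage API):

```
my $HINTS_SCHEMA = {
    type => 'object',
    properties => {
        'guest-is-windows' => { type => 'boolean', optional => 1 },
        'plugin-may-deactivate-volume' => { type => 'boolean', optional => 1 },
    },
};

# lets plugins and upper layers probe for hint support without
# needing a versioned dependency bump
sub hint_supported {
    my ($name) = @_;
    return exists($HINTS_SCHEMA->{properties}->{$name});
}
```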

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-2-f.weber@proxmox.com
2025-11-14 00:32:48 +01:00
ebec84ff87 fix #6224: disks: get: set timeout for retrieval of SMART stat data
In rare scenarios, `smartctl` takes up to 60 seconds to time out
waiting for SCSI commands to complete, as reported in our user forum
[0] and bugzilla [1]. USB drives handled by the USB Attached SCSI
(UAS) kernel module seem more likely to be affected by this [2], but
it appears to be more of a case-by-case situation.

Therefore, set a more reasonable timeout of 10 seconds, so that callers
don't have to wait too long or seem unresponsive (e.g. Node Disks view
in the WebGUI).

[0] https://forum.proxmox.com/threads/164799/
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=6224
[2] https://www.smartmontools.org/wiki/SAT-with-UAS-Linux
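
A rough sketch of capping the runtime, assuming PVE::Tools::run_command's
timeout option (the smartctl arguments are illustrative):

```
use PVE::Tools qw(run_command);

my $device = '/dev/sda';
my $output = '';
run_command(
    ['smartctl', '-H', '-A', '-f', 'brief', $device],
    timeout => 10,    # seconds; keeps callers like the Disks view responsive
    outfunc => sub { $output .= "$_[0]\n" },
);
```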

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Link: https://lore.proxmox.com/20250415071123.36921-3-d.kral@proxmox.com
2025-11-14 00:29:06 +01:00
b6c18e9116 disks: get: separate error path for retrieving SMART data
Make the subroutine get_smart_data() die with the error message from
the preceding `smartctl` command invocation. This is in preparation
for the next patch, which makes that command fail in certain
scenarios.

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250415071123.36921-2-d.kral@proxmox.com
2025-11-14 00:28:27 +01:00
0d2df3048a api: smart: return unknown health instead of error message
The WebGUI expects the SMART data API endpoint to return a health
value even in case of an error, but currently the endpoint returns an
error message directly. To make this more user-friendly, mask the
error in the API handler.
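
Masking could look roughly like this in the handler (the 'UNKNOWN'
health value is an assumption):

```
my $smartdata = eval { PVE::Diskmanage::get_smart_data($disk) };
if (my $err = $@) {
    warn $err;    # keep the underlying smartctl error in the log
    $smartdata = { health => 'UNKNOWN' };
}
return $smartdata;
```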

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250415071123.36921-1-d.kral@proxmox.com
2025-11-14 00:27:12 +01:00
7b41368fc3 lvm thin plugin: do not combine activation change and property change
As reported in the community forum [0], there currently is a warning
from LVM when converting to a base image on an LVM-thin storage:

> WARNING: Combining activation change with other commands is not advised.

From a comment in the LVM source code:

> Unfortunately, lvchange has previously allowed changing an LV
> property and changing LV activation in a single command.  This was
> not a good idea because the behavior/results are hard to predict and
> not possible to sensibly describe.  It's also unnecessary.  So, this
> is here for the sake of compatibility.
>
> This is extremely ugly; activation should always be done separately.
> This is not the full-featured lvchange capability, just the basic
> (the advanced activate options are not provided.)
>
> FIXME: wrap this in a config setting that we can disable by default
> to phase this out?

While it's not clear there's an actual issue in the specific use case
here, just follow what LVM recommends for future-proofing.

[0]: https://forum.proxmox.com/threads/165279/
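
Following that advice means two separate lvchange invocations, e.g.
(flags shown are illustrative for the convert-to-base case):

```
# property change first, activation change separately
run_command(['lvchange', '--permission', 'r', $lvpath]);
run_command(['lvchange', '--activate', 'y', $lvpath]);
```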

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250422133314.60806-1-f.ebner@proxmox.com
2025-11-14 00:22:48 +01:00
0ba0739f69 plugin: allow volume import of iso, snippets, vztmpl and import
Extend volume import functionality to support 'iso', 'snippets',
'vztmpl', and 'import' types, in addition to the existing support for
'images' and 'rootdir'. This is a prerequisite for the ability to move
ISOs, snippets and container templates between nodes.

Existing behavior for importing VM disks and container volumes remains
unchanged.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20250916123257.107491-4-f.schauer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-14 00:16:36 +01:00
829b0cd728 storage migrate: avoid ssh when moving a volume locally
Avoid the overhead of SSH when $target_sshinfo is undefined and
instead move the volume between storages on the same node directly.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20250916123257.107491-3-f.schauer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-14 00:16:36 +01:00
e556534459 storage migrate: remove remnant from rsync-based migration
rsync-based migration was replaced by import/export in commit
da72898cc6 ("migrate: only use import/export")

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20250916123257.107491-2-f.schauer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-14 00:16:36 +01:00
5b949979f7 disks: add a guard for a possibly nonexistent field
When running

    pveceph osd create <device>

one would get one or two warnings:

    Use of uninitialized value in pattern match (m//) at /usr/share/perl5/PVE/Diskmanage.pm line 317.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20251024121309.1253604-1-m.sandoval@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-14 00:16:36 +01:00
c9a2ce281b common: add pve-vm-image-format standard option for VM image formats
The image formats defined for the pve-vm-image-format standard option
are the formats that are allowed on Proxmox VE storages for VM images.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20251113144131.560130-4-f.ebner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-14 00:16:36 +01:00
beacf9735d bump version to 9.0.15
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-13 07:39:09 +01:00
aa8cd93ca4 status: rrddata: use fixed pve-storage-9.0 path
Because we now always create it should it not exist, and old data
from the old files is used.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Link: https://lore.proxmox.com/20251103220024.2488005-8-a.lauterer@proxmox.com
2025-11-13 07:37:18 +01:00
0d392b295c allow .tar container templates
This is needed for OCI container images bundled as tar files, as
generated by `docker save`. OCI images do not need additional
compression, since the content is usually compressed already.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20251008171028.196998-13-f.schauer@proxmox.com
2025-11-12 20:08:51 +01:00
a85ebe36af bump version to 9.0.14
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-11-05 09:13:20 +01:00
633392285c api: list: return 'formats' info in a better structured way
returning the formats like this:
```
"format": [
    {
        "format1" => 1,
        "format2" => 1,
        ...
    },
    "defaultFormat"
]
```

is not a very good return format, since it abuses an array as a
tuple and unnecessarily encodes a list of formats as an object. Also,
we can't describe it properly in JSONSchema in perl, nor is our
perl->rust generator able to handle it.

Instead, return it like this:
```
"formats": {
    "default": "defaultFormat",
    "supported": ["format1", "format2", ...]
}
```

which makes it much more sensible for an api return schema, and it's
possible to annotate it in the JSONSchema.

For compatibility reasons, keep the old property around, and add a
comment to remove it with 10.0.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2025-11-05 09:07:43 +01:00
ede776abef api: try to add more return schema information
no problem for 'select_existing', but we cannot actually describe
'format' with our JSONSchema, since it uses an array as a form of tuple,
and even with oneOf this cannot be described currently.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2025-11-05 09:07:35 +01:00
8eabcc7011 lvm plugin: snapshot-as-volume-chain: use locking for snapshot operations
As reported by a user in the enterprise support in a ticket handled by
Friedrich, concurrent snapshot operations could lead to metadata
corruption of the volume group with unlucky timing. Add the missing
locking for operations modifying the metadata, i.e. allocation, rename
and removal. Since volume_snapshot() and volume_snapshot_rollback()
only do those, use a wrapper for the whole function. Since
volume_snapshot_delete() can do longer-running commit or rebase
operations, only lock the necessary sections there.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251103162330.112603-5-f.ebner@proxmox.com
2025-11-04 19:57:52 +01:00
5988ac0250 lvm plugin: volume import: lock allocation and removal sections
With a shared LVM storage, parallel imports, which might be done in
the context of remote migration, could lead to metadata corruption
with unlucky timing, because of missing locking. Add locking around
allocation and removal, which are the sections that modify LVM
metadata. Note that other plugins suffer from missing locking here as
well, but only regarding naming conflicts. Adding locking around the
full call to volume_import() would mean locking for much too long.
Other plugins could follow the approach here, or there could be a
reservation approach like proposed in [0].

[0]: https://lore.proxmox.com/pve-devel/20240403150712.262773-1-h.duerr@proxmox.com/
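
A sketch of the pattern, assuming the cluster-wide storage lock
helper from the plugin base class (exact arguments assumed):

```
$class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
    # only the metadata-modifying section runs under the lock
    allocate_lvm_volume();    # hypothetical allocation step
});
# the long-running data copy happens outside the lock
```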

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251103162330.112603-4-f.ebner@proxmox.com
2025-11-04 19:57:52 +01:00
0864fda2fd lvm plugin: fix error handling in volume_snapshot_rollback()
In case a cleanup worker is spawned, the error from the eval block
for allocation was lost. Save it in a variable for checking later.
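
This is the classic Perl pitfall of $@ being clobbered before it is
checked; sketched:

```
eval { allocate_volume(); };    # hypothetical allocation step
my $alloc_err = $@;    # save immediately, before anything else runs eval
spawn_cleanup_worker() if $needs_cleanup;    # may reset $@ internally
die $alloc_err if $alloc_err;    # the error is no longer lost
```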

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251103162330.112603-3-f.ebner@proxmox.com
2025-11-04 19:57:52 +01:00
f0f9054926 lvm plugin: snapshot delete: propagate previously captured error
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251103162330.112603-2-f.ebner@proxmox.com
2025-11-04 19:57:52 +01:00
d5995ffbf7 fix #6941: lvmplugin: fix volume activation of raw disk before secure delete
The volume activation before secure delete was lost in the qcow2
snapshot implementation in commit eda88c94ed.

This re-adds the activation just before the delete, to make sure
zeroes are not written to a non-existent /dev/.. path (which would
write to memory instead of the device).

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Link: https://lore.proxmox.com/mailman.251.1761222222.362.pve-devel@lists.proxmox.com
[FE: Remove extra space before colons in commit title
     Slightly improve code comment]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-27 15:28:41 +01:00
728d8f3992 lvmplugin: use blkdiscard instead of cstream when supported to saferemove a drive
The current cstream implementation is pretty slow, even without
throttling.

Use blkdiscard --zeroout instead when the storage supports it, which
is a few orders of magnitude faster.

Another benefit is that blkdiscard skips already-zeroed blocks, so it
is pretty fast for empty temporary images like snapshots.

blkdiscard does not have throttling like cstream, but the step size
of the zeroes pushed to the storage can be tuned. A 32 MiB step size
is used by default, like in oVirt, where it seems to be the best
balance between speed and load:
79f1d79058

The step size can be reduced with the "saferemove_stepsize" option,
and it is also automatically reduced to the sysfs
write_zeroes_max_bytes value, which is the maximum zeroing batch size
supported by the storage.
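
Sketched, the step-size clamping could look like this (the sysfs
attribute is named above; the surrounding code is illustrative):

```
my $step = $scfg->{saferemove_stepsize} // 32 * 1024 * 1024;    # 32 MiB default
my $sysfs = "/sys/block/$devname/queue/write_zeroes_max_bytes";
if (-r $sysfs && (my $max = PVE::Tools::file_read_firstline($sysfs))) {
    $step = $max if $max > 0 && $step > $max;
}
run_command(['blkdiscard', '--zeroout', '--step', $step, $device]);
```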

test with a 100G volume (empty):

time /usr/bin/cstream -i /dev/zero -o /dev/test/vm-100-disk-0.qcow2 -T 10 -v 1 -b 1048576

13561233408 B 12.6 GB 10.00 s 1356062979 B/s 1.26 GB/s
26021462016 B 24.2 GB 20.00 s 1301029969 B/s 1.21 GB/s
38585499648 B 35.9 GB 30.00 s 1286135343 B/s 1.20 GB/s
50998542336 B 47.5 GB 40.00 s 1274925312 B/s 1.19 GB/s
63702765568 B 59.3 GB 50.00 s 1274009877 B/s 1.19 GB/s
76721885184 B 71.5 GB 60.00 s 1278640698 B/s 1.19 GB/s
89126539264 B 83.0 GB 70.00 s 1273178488 B/s 1.19 GB/s
101666459648 B 94.7 GB 80.00 s 1270779024 B/s 1.18 GB/s
107390959616 B 100.0 GB 84.39 s 1272531142 B/s 1.19 GB/s
write: No space left on device

real    1m24.394s
user    0m0.171s
sys     1m24.052s

time blkdiscard --zeroout /dev/test/vm-100-disk-0.qcow2 -v
/dev/test/vm-100-disk-0.qcow2: Zero-filled 107390959616 bytes from the offset 0

real    0m3.641s
user    0m0.001s
sys     0m3.433s

test with a 100G volume with random data:

time blkdiscard --zeroout /dev/test/vm-100-disk-0.qcow2 -v

/dev/test/vm-112-disk-1: Zero-filled 4764729344 bytes from the offset 0
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 4764729344
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 9428795392
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 14260633600
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 19092471808
/dev/test/vm-112-disk-1: Zero-filled 4865392640 bytes from the offset 23924310016
/dev/test/vm-112-disk-1: Zero-filled 4596957184 bytes from the offset 28789702656
/dev/test/vm-112-disk-1: Zero-filled 4731174912 bytes from the offset 33386659840
/dev/test/vm-112-disk-1: Zero-filled 4294967296 bytes from the offset 38117834752
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 42412802048
/dev/test/vm-112-disk-1: Zero-filled 4697620480 bytes from the offset 47076868096
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 51774488576
/dev/test/vm-112-disk-1: Zero-filled 4261412864 bytes from the offset 56438554624
/dev/test/vm-112-disk-1: Zero-filled 4362076160 bytes from the offset 60699967488
/dev/test/vm-112-disk-1: Zero-filled 4127195136 bytes from the offset 65062043648
/dev/test/vm-112-disk-1: Zero-filled 4328521728 bytes from the offset 69189238784
/dev/test/vm-112-disk-1: Zero-filled 4731174912 bytes from the offset 73517760512
/dev/test/vm-112-disk-1: Zero-filled 4026531840 bytes from the offset 78248935424
/dev/test/vm-112-disk-1: Zero-filled 4194304000 bytes from the offset 82275467264
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 86469771264
/dev/test/vm-112-disk-1: Zero-filled 4395630592 bytes from the offset 91133837312
/dev/test/vm-112-disk-1: Zero-filled 3623878656 bytes from the offset 95529467904
/dev/test/vm-112-disk-1: Zero-filled 4462739456 bytes from the offset 99153346560
/dev/test/vm-112-disk-1: Zero-filled 3758096384 bytes from the offset 103616086016

real    0m23.969s
user    0m0.030s
sys     0m0.144s

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Link: https://lore.proxmox.com/mailman.253.1761222252.362.pve-devel@lists.proxmox.com
[FE: Minor language improvements
     Use more common style for importing with qw()
     Don't specify full path to blkdiscard binary for run_command()]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-27 15:23:17 +01:00
10e47fc8bb pvesm: print units in 'status' subcommand table headings
The units used are not documented in the man page.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250829124738.412902-1-m.sandoval@proxmox.com
[FE: improve commit title]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-22 14:39:39 +02:00
68c3142605 api schema: storage: config: fix typos in return schema description
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-08 15:09:16 +02:00
c10e73d93b plugin: pod: fix variable name for volume_qemu_snapshot_method() example code
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-08 14:27:25 +02:00
6e5a42052c fix #6845: make regexes in zvol deletion retry logic less restrictive
As reported by a storage plugin developer in our community [0], some
plugins might not throw an exception in the exact format we expect. In
particular, this also applies to the built-in ZFS over iSCSI plugin.

In that plugin, if `$method` is not a "LUN command" [2], `zfs`
subcommands (or `zpool list`) [1] are executed over SSH. In the case
of image deletion, the command executed on the remote is always `zfs
destroy -r [...]`.

Therefore, match against "dataset is busy" / "dataset does not exist"
directly.
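
Sketched, with the error patterns quoted above (the surrounding retry
logic is illustrative):

```
if ($err =~ m/dataset is busy/) {
    sleep(1);    # busy: retry the `zfs destroy` a few more times
} elsif ($err =~ m/dataset does not exist/) {
    return;    # already gone, nothing left to do
} else {
    die $err;
}
```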

Tested this with an LIO iSCSI provider set up in a Debian Trixie VM,
as well as with the "legacy" proxmox-truenas plugin of the
community [3] (the one that patches our existing sources), by
migrating a VM's disk back and forth between the two ZFS-over-iSCSI
storages, and also to others and back again.

[0]: https://lore.proxmox.com/pve-devel/mailman.271.1758597756.390.pve-devel@lists.proxmox.com/
[1]: https://git.proxmox.com/?p=pve-storage.git;a=blob;f=src/PVE/Storage/ZFSPlugin.pm;h=99d8c8f43a27ae911ffd09c3aa9f25f1a8857015;hb=refs/heads/master#l84
[2]: https://git.proxmox.com/?p=pve-storage.git;a=blob;f=src/PVE/Storage/ZFSPlugin.pm;h=99d8c8f43a27ae911ffd09c3aa9f25f1a8857015;hb=refs/heads/master#l22
[3]: https://github.com/boomshankerx/proxmox-truenas

Fixes: #6845
Signed-off-by: Max R. Carrara <m.carrara@proxmox.com>
Link: https://lore.proxmox.com/20250925160721.445256-1-m.carrara@proxmox.com
[FE: explicitly mention ZFS over iSCSI plugin in commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-09-26 09:49:09 +02:00
9eb914de16 api: status: document return types
this is useful, e.g. when we want to generate bindings for this api call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2025-09-08 16:38:52 +02:00
02acde02b6 make zfs tests declarative
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-08-07 16:49:04 +02:00
0f7a4d2d84 make tidy
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-08-07 16:24:08 +02:00
6bf171ec54 iscsi: add hostname support in portal addresses
Currently, the iSCSI plugin regex patterns only match IPv4 and IPv6
addresses, causing session parsing to fail when portals use hostnames
(like nas.example.com:3260).

This patch updates ISCSI_TARGET_RE and session parsing regex to accept
any non-whitespace characters before the port, allowing hostname-based
portals to work correctly.
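
A sketch of the relaxed pattern (illustrative, not the exact
ISCSI_TARGET_RE):

```
# the host part may be an IPv4 address, a bracketed IPv6 address,
# or a hostname like nas.example.com
my $portal_re = qr/^(\S+):(\d+)$/;
if ('nas.example.com:3260' =~ $portal_re) {
    my ($host, $port) = ($1, $2);
}
```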

Tested with IP and hostname-based portals on Proxmox VE 8.2, 8.3, and 8.4.1

Signed-off-by: Stelios Vailakakis <stelios@libvirt.dev>
Link: https://lore.proxmox.com/20250626022920.1323623-1-stelios@libvirt.dev
2025-08-04 20:41:09 +02:00
c33abdf062 fix #6073: esxi: fix zombie process after storage removal
After removing an ESXi storage, a zombie process is generated because
the forked FUSE process (esxi-folder-fuse) is not properly reaped.

This patch implements a double-fork mechanism to ensure the FUSE process
is reparented to init (PID 1), which will properly reap it when it
exits. Additionally, it adds the missing waitpid() call to reap the
intermediate child process.
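
The generic double-fork pattern looks roughly like this (the command
name is taken from above, the rest is a sketch):

```
use POSIX ();

my @fuse_args = ();    # mount arguments, illustrative
my $pid = fork() // die "fork failed: $!\n";
if ($pid == 0) {
    # intermediate child: fork again and exit right away, so the
    # grandchild (the FUSE process) is reparented to init (PID 1)
    my $pid2 = fork() // POSIX::_exit(1);
    if ($pid2 == 0) {
        exec('esxi-folder-fuse', @fuse_args);
        POSIX::_exit(1);    # only reached if exec failed
    }
    POSIX::_exit(0);
}
waitpid($pid, 0);    # reap the intermediate child, leaving no zombie
```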

Tested on Proxmox VE 8.4.1 with ESXi 8.0U3e storage.

Signed-off-by: Stelios Vailakakis <stelios@libvirt.dev>
Link: https://lore.proxmox.com/20250701154135.2387872-1-stelios@libvirt.dev
2025-08-04 20:36:38 +02:00
609752f3ae bump version to 9.0.13
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-08-01 18:36:56 +02:00
5750596f5b deactivate volumes: terminate error message with newline
Avoid Perl automatically appending the line number and file name.
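
Perl only appends " at <file> line <n>." when the message lacks a
trailing newline:

```
die "volume deactivation failed";      # ... at Storage.pm line 123.
die "volume deactivation failed\n";    # printed exactly as written
```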

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250801081649.13882-1-f.ebner@proxmox.com
2025-08-01 13:22:45 +02:00
153f7d8f85 bump version to 9.0.12
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-31 14:22:16 +02:00
3c209eaeb7 plugin: nfs, cifs: use volume qemu snapshot methods from dir plugin
Taking an offline snapshot of a VM on an NFS/CIFS storage with
snapshot-as-volume-chain currently creates a volume-chain snapshot as
expected, but taking an online snapshot unexpectedly creates a qcow2
snapshot. This was also reported in the forum [1].

The reason is that the NFS/CIFS plugins inherit the method
volume_qemu_snapshot_method from the Plugin base class, whereas they
actually behave similarly to the Directory plugin. To fix this,
implement the method for the NFS/CIFS plugins and let it call the
Directory plugin's implementation.

[1] https://forum.proxmox.com/threads/168619/post-787374
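
Sketched (the exact method signature is assumed):

```
sub volume_qemu_snapshot_method {
    my ($class, $storeid, $scfg, $volname) = @_;
    # behave like a directory storage, not like the generic base class
    return PVE::Storage::DirPlugin->volume_qemu_snapshot_method($storeid, $scfg, $volname);
}
```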

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250731082538.31891-1-f.weber@proxmox.com
2025-07-31 14:19:13 +02:00
81261f9ca1 re-tidy perl code
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-31 14:16:25 +02:00
7513e21d74 plugin: parse_name_dir: drop deprecation warning
this gets printed very often if such a volume exists - e.g. adding such a
volume to a config with `qm set` prints it 10 times.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-5-f.gruenbichler@proxmox.com
2025-07-31 14:15:54 +02:00
6dbeba59da plugin: extend snapshot name parsing to legacy volnames
otherwise a volume like `100/oldstyle-100-disk-0.qcow2` can be snapshotted, but
the snapshot file is treated as a volume instead of a snapshot afterwards.

this also avoids issues with volnames with `vm-` in their names, similar to the
LVM fix for underscores.

Co-authored-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-4-f.gruenbichler@proxmox.com
2025-07-31 14:15:54 +02:00
59a54b3d5f fix #6584: plugin: list_images: only include parseable filenames
by only including filenames that are also valid when actually parsing them,
things like snapshot files or files not following our naming scheme are no
longer candidates for rescanning or included in other output.
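
sketched: only keep entries that survive an actual parse (function
name from the related commits, surrounding code illustrative):

```
my @listable = grep {
    eval { PVE::Storage::Plugin::parse_name_dir($_); 1 }
} @found_files;    # drops snapshot files and foreign naming schemes
```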

Co-authored-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-3-f.gruenbichler@proxmox.com
2025-07-31 14:15:54 +02:00
a477189575 plugin: fix parse_name_dir regression for custom volume names
prior to the introduction of snapshot as volume chains, volume names of
almost arbitrary form were accepted. only forbid filenames which are
part of the newly introduced namespace for snapshot files, while
deprecating other names not following our usual naming scheme, instead
of forbidding them outright.

Fixes: b63147f5df "plugin: fix volname parsing"

Co-authored-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-2-f.gruenbichler@proxmox.com
2025-07-31 14:15:54 +02:00
94a54793cd bump version to 9.0.11
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-31 09:19:03 +02:00
92efe5c6cb plugin: lvm: volume snapshot info: untaint snapshot filename
Without untainting, offline-deleting a volume-chain snapshot on an
LVM storage via the GUI can fail with an "Insecure dependency in exec
[...]" error, because volume_snapshot_delete uses the filename in its
qemu-img invocation.

Commit 93f0dfb ("plugin: volume snapshot info: untaint snapshot
filename") fixed this already for the volume_snapshot_info
implementation of the Plugin base class, but missed that the LVM
plugin overrides the method and was still missing the untaint.
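
The untaint itself is the standard Perl idiom of extracting the value
through a regex capture (pattern illustrative):

```
my ($untainted) = $snap_path =~ m|^(/\S+)$|
    or die "unexpected snapshot path '$snap_path'\n";
```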

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250731071306.11777-1-f.weber@proxmox.com
2025-07-31 09:18:33 +02:00
74b5031c9a bump version to 9.0.10
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-31 04:14:23 +02:00
0dc6c9d39c status: rrddata: use new pve-storage-9.0 rrd location if file is present
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Link: https://lore.proxmox.com/20250726010626.1496866-26-a.lauterer@proxmox.com
2025-07-31 04:13:27 +02:00