Commit Graph

2113 Commits

d5995ffbf7 fix #6941: lvmplugin: fix volume activation of raw disk before secure delete
The volume activation before secure delete was lost in the qcow2 snapshot
implementation in commit eda88c94ed.

This re-adds the activation just before the delete, to make sure zeroes are
not written to a non-existing /dev/.. path (i.e. into memory instead of the
device).
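
A minimal sketch of the intended ordering, with plain lvchange/blkdiscard calls
standing in for the plugin's own helpers (device names are made up):

```
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the ordering only - the real code goes through the plugin's own
# activation helper rather than calling lvchange directly.
my ($vg, $lv) = ('pve', 'vm-100-disk-0');
my $dev = "/dev/$vg/$lv";

system('lvchange', '-ay', "$vg/$lv") == 0
    or die "failed to activate $vg/$lv\n";
die "device node $dev does not exist\n" if !-b $dev;

# Only now is it safe to overwrite the logical volume with zeroes.
system('blkdiscard', '--zeroout', $dev) == 0
    or die "zeroing $dev failed\n";
```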

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Link: https://lore.proxmox.com/mailman.251.1761222222.362.pve-devel@lists.proxmox.com
[FE: Remove extra space before colons in commit title
     Slightly improve code comment]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-27 15:28:41 +01:00
728d8f3992 lvmplugin: use blkdiscard when supported instead of cstream to saferemove drive
The current cstream implementation is pretty slow, even without throttling.

Use blkdiscard --zeroout instead when the storage supports it, which is a few
orders of magnitude faster.

Another benefit is that blkdiscard skips already zeroed blocks, so for empty
temp images like snapshots it is pretty fast.

blkdiscard does not have throttling like cstream, but we can tune the step size
of the zeroes pushed to the storage. I'm using a 32 MB step size by default,
like oVirt, where it seems to be the best balance between speed and load:
79f1d79058

It can be reduced with the "saferemove_stepsize" option.

The step size is also automatically reduced to the sysfs write_zeroes_max_bytes
value, which is the maximum zeroing batch size supported by the storage.
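
A rough sketch of that step-size handling (the sysfs path and device names are
placeholders; resolving them for an LV and the plugin's actual option handling
are not shown):

```
#!/usr/bin/perl
use strict;
use warnings;

# Default to 32 MiB, clamp to the device's advertised write-zeroes limit if
# sysfs exposes one.
my $dev  = '/dev/test/vm-100-disk-0.qcow2';
my $step = 32 * 1024 * 1024;

my $sysfs = '/sys/block/dm-0/queue/write_zeroes_max_bytes';
if (open(my $fh, '<', $sysfs)) {
    my $max = <$fh>;
    chomp $max;
    $step = $max if $max > 0 && $max < $step;
    close($fh);
}

# blkdiscard's --step controls how many bytes are zeroed per iteration.
system('blkdiscard', '--zeroout', '--step', $step, $dev) == 0
    or die "blkdiscard failed\n";
```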

test with a 100G volume (empty):

time /usr/bin/cstream -i /dev/zero -o /dev/test/vm-100-disk-0.qcow2 -T 10 -v 1 -b 1048576

13561233408 B 12.6 GB 10.00 s 1356062979 B/s 1.26 GB/s
26021462016 B 24.2 GB 20.00 s 1301029969 B/s 1.21 GB/s
38585499648 B 35.9 GB 30.00 s 1286135343 B/s 1.20 GB/s
50998542336 B 47.5 GB 40.00 s 1274925312 B/s 1.19 GB/s
63702765568 B 59.3 GB 50.00 s 1274009877 B/s 1.19 GB/s
76721885184 B 71.5 GB 60.00 s 1278640698 B/s 1.19 GB/s
89126539264 B 83.0 GB 70.00 s 1273178488 B/s 1.19 GB/s
101666459648 B 94.7 GB 80.00 s 1270779024 B/s 1.18 GB/s
107390959616 B 100.0 GB 84.39 s 1272531142 B/s 1.19 GB/s
write: No space left on device

real    1m24.394s
user    0m0.171s
sys     1m24.052s

time blkdiscard --zeroout /dev/test/vm-100-disk-0.qcow2 -v
/dev/test/vm-100-disk-0.qcow2: Zero-filled 107390959616 bytes from the offset 0

real    0m3.641s
user    0m0.001s
sys     0m3.433s

test with a 100G volume with random data:

time blkdiscard --zeroout /dev/test/vm-100-disk-0.qcow2 -v

/dev/test/vm-112-disk-1: Zero-filled 4764729344 bytes from the offset 0
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 4764729344
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 9428795392
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 14260633600
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 19092471808
/dev/test/vm-112-disk-1: Zero-filled 4865392640 bytes from the offset 23924310016
/dev/test/vm-112-disk-1: Zero-filled 4596957184 bytes from the offset 28789702656
/dev/test/vm-112-disk-1: Zero-filled 4731174912 bytes from the offset 33386659840
/dev/test/vm-112-disk-1: Zero-filled 4294967296 bytes from the offset 38117834752
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 42412802048
/dev/test/vm-112-disk-1: Zero-filled 4697620480 bytes from the offset 47076868096
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 51774488576
/dev/test/vm-112-disk-1: Zero-filled 4261412864 bytes from the offset 56438554624
/dev/test/vm-112-disk-1: Zero-filled 4362076160 bytes from the offset 60699967488
/dev/test/vm-112-disk-1: Zero-filled 4127195136 bytes from the offset 65062043648
/dev/test/vm-112-disk-1: Zero-filled 4328521728 bytes from the offset 69189238784
/dev/test/vm-112-disk-1: Zero-filled 4731174912 bytes from the offset 73517760512
/dev/test/vm-112-disk-1: Zero-filled 4026531840 bytes from the offset 78248935424
/dev/test/vm-112-disk-1: Zero-filled 4194304000 bytes from the offset 82275467264
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 86469771264
/dev/test/vm-112-disk-1: Zero-filled 4395630592 bytes from the offset 91133837312
/dev/test/vm-112-disk-1: Zero-filled 3623878656 bytes from the offset 95529467904
/dev/test/vm-112-disk-1: Zero-filled 4462739456 bytes from the offset 99153346560
/dev/test/vm-112-disk-1: Zero-filled 3758096384 bytes from the offset 103616086016

real    0m23.969s
user    0m0.030s
sys     0m0.144s

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Link: https://lore.proxmox.com/mailman.253.1761222252.362.pve-devel@lists.proxmox.com
[FE: Minor language improvements
     Use more common style for importing with qw()
     Don't specify full path to blkdiscard binary for run_command()]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-27 15:23:17 +01:00
10e47fc8bb pvesm: print units in 'status' subcommand table headings
The units used are not documented in the man page.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250829124738.412902-1-m.sandoval@proxmox.com
[FE: improve commit title]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-22 14:39:39 +02:00
68c3142605 api schema: storage: config: fix typos in return schema description
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-08 15:09:16 +02:00
c10e73d93b plugin: pod: fix variable name for volume_qemu_snapshot_method() example code
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-08 14:27:25 +02:00
6e5a42052c fix #6845: make regexes in zvol deletion retry logic less restrictive
As reported by a storage plugin developer in our community [0], some
plugins might not throw an exception in the exact format we expect. In
particular, this also applies to the built-in ZFS over iSCSI plugin.

In that plugin, if `$method` is not a "LUN command" [2], `zfs`
subcommands (or `zpool list`) [1] are executed over SSH. In the case
of image deletion, the command executed on the remote is always `zfs
destroy -r [...]`.

Therefore, match against "dataset is busy" / "dataset does not exist"
directly.
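
In condensed form, a sketch of what matching directly against those strings
could look like (helper name and return values invented; retry bookkeeping
omitted):

```
#!/usr/bin/perl
use strict;
use warnings;

# Classify the stderr of a failed `zfs destroy -r ...` based only on the
# error text, instead of expecting a plugin-specific message format.
sub classify_zvol_delete_error {
    my ($err) = @_;
    return 'retry' if $err =~ m/dataset is busy/;        # still in use, try again
    return 'gone'  if $err =~ m/dataset does not exist/; # nothing left to delete
    return 'fatal';                                      # anything else is reported as-is
}

print classify_zvol_delete_error("cannot destroy 'tank/vm-100-disk-0': dataset is busy"), "\n";
print classify_zvol_delete_error("cannot open 'tank/vm-100-disk-0': dataset does not exist"), "\n";
```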

Tested this with an LIO iSCSI provider set up in a Debian Trixie VM,
as well as with the "legacy" proxmox-truenas plugin of the
community [3] (the one that patches our existing sources), by
migrating a VM's disk back and forth between the two ZFS-over-iSCSI
storages, and also to others and back again.

[0]: https://lore.proxmox.com/pve-devel/mailman.271.1758597756.390.pve-devel@lists.proxmox.com/
[1]: https://git.proxmox.com/?p=pve-storage.git;a=blob;f=src/PVE/Storage/ZFSPlugin.pm;h=99d8c8f43a27ae911ffd09c3aa9f25f1a8857015;hb=refs/heads/master#l84
[2]: https://git.proxmox.com/?p=pve-storage.git;a=blob;f=src/PVE/Storage/ZFSPlugin.pm;h=99d8c8f43a27ae911ffd09c3aa9f25f1a8857015;hb=refs/heads/master#l22
[3]: https://github.com/boomshankerx/proxmox-truenas

Fixes: #6845
Signed-off-by: Max R. Carrara <m.carrara@proxmox.com>
Link: https://lore.proxmox.com/20250925160721.445256-1-m.carrara@proxmox.com
[FE: explicitly mention ZFS over iSCSI plugin in commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-09-26 09:49:09 +02:00
9eb914de16 api: status: document return types
this is useful, e.g. when we want to generate bindings for this api call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2025-09-08 16:38:52 +02:00
02acde02b6 make zfs tests declarative
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-08-07 16:49:04 +02:00
0f7a4d2d84 make tidy
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-08-07 16:24:08 +02:00
6bf171ec54 iscsi: add hostname support in portal addresses
Currently, the iSCSI plugin regex patterns only match IPv4 and IPv6
addresses, causing session parsing to fail when portals use hostnames
(like nas.example.com:3260).

This patch updates ISCSI_TARGET_RE and session parsing regex to accept
any non-whitespace characters before the port, allowing hostname-based
portals to work correctly.
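
For illustration only (this is not the literal ISCSI_TARGET_RE), a portal match
that accepts any non-whitespace host part before the port:

```
#!/usr/bin/perl
use strict;
use warnings;

# Accept "host:port" where the host part may be an IPv4/IPv6 address or a
# hostname - anything without whitespace - instead of an address-only pattern.
my $portal_re = qr/^(\S+):(\d+)$/;

for my $portal ('192.168.1.10:3260', 'nas.example.com:3260') {
    if ($portal =~ $portal_re) {
        print "host=$1 port=$2\n";
    }
}
```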

Tested with IP and hostname-based portals on Proxmox VE 8.2, 8.3, and 8.4.1

Signed-off-by: Stelios Vailakakis <stelios@libvirt.dev>
Link: https://lore.proxmox.com/20250626022920.1323623-1-stelios@libvirt.dev
2025-08-04 20:41:09 +02:00
c33abdf062 fix #6073: esxi: fix zombie process after storage removal
After removing an ESXi storage, a zombie process is generated because
the forked FUSE process (esxi-folder-fuse) is not properly reaped.

This patch implements a double-fork mechanism to ensure the FUSE process
is reparented to init (PID 1), which will properly reap it when it
exits. It additionally adds the missing waitpid() call to reap the
intermediate child process.
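
The double-fork pattern in a self-contained form (the `sleep` command is only a
stand-in for the actual esxi-folder-fuse process):

```
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid);

# Double fork: the grandchild is reparented to init, and the intermediate
# child is reaped right away via waitpid(), so no zombie is left behind.
my $pid = fork() // die "fork failed: $!\n";
if ($pid == 0) {
    setsid();
    my $pid2 = fork() // die "inner fork failed: $!\n";
    if ($pid2 == 0) {
        exec('sleep', '1') or die "exec failed: $!\n";    # stand-in for the FUSE process
    }
    POSIX::_exit(0);    # intermediate child exits immediately
}

waitpid($pid, 0);    # parent reaps the intermediate child
```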

Tested on Proxmox VE 8.4.1 with ESXi 8.0U3e storage.

Signed-off-by: Stelios Vailakakis <stelios@libvirt.dev>
Link: https://lore.proxmox.com/20250701154135.2387872-1-stelios@libvirt.dev
2025-08-04 20:36:38 +02:00
609752f3ae bump version to 9.0.13
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-08-01 18:36:56 +02:00
5750596f5b deactivate volumes: terminate error message with newline
Avoid that Perl auto-attaches the line number and file name.
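
For reference, the standard die behavior being relied on here:

```
#!/usr/bin/perl
use strict;
use warnings;

eval { die "volume deactivation failed" };
print $@;    # "volume deactivation failed at <script> line <n>." - location appended

eval { die "volume deactivation failed\n" };
print $@;    # "volume deactivation failed" - message used verbatim
```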

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250801081649.13882-1-f.ebner@proxmox.com
2025-08-01 13:22:45 +02:00
153f7d8f85 bump version to 9.0.12
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-31 14:22:16 +02:00
3c209eaeb7 plugin: nfs, cifs: use volume qemu snapshot methods from dir plugin
Taking an offline snapshot of a VM on an NFS/CIFS storage with
snapshot-as-volume-chain currently creates a volume-chain snapshot as
expected, but taking an online snapshot unexpectedly creates a qcow2
snapshot. This was also reported in the forum [1].

The reason is that the NFS/CIFS plugins inherit the method
volume_qemu_snapshot_method from the Plugin base class, whereas they
actually behave similarly to the Directory plugin. To fix this,
implement the method for the NFS/CIFS plugins and let it call the
Directory plugin's implementation.

[1] https://forum.proxmox.com/threads/168619/post-787374
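
In outline, the fix amounts to forwarding the call to the Directory plugin; the
argument list below is assumed for illustration, not taken from the actual API:

```
# Sketch for the NFS/CIFS plugin packages: answer the snapshot-method query
# the way the directory plugin does, instead of inheriting the generic
# base-class behavior. Argument list is assumed.
sub volume_qemu_snapshot_method {
    my ($class, $storeid, $scfg, $volname) = @_;
    return PVE::Storage::DirPlugin->volume_qemu_snapshot_method($storeid, $scfg, $volname);
}
```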

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250731082538.31891-1-f.weber@proxmox.com
2025-07-31 14:19:13 +02:00
81261f9ca1 re-tidy perl code
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-31 14:16:25 +02:00
7513e21d74 plugin: parse_name_dir: drop deprecation warning
this gets printed very often if such a volume exists - e.g., adding such a
volume to a config with `qm set` prints it 10 times.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-5-f.gruenbichler@proxmox.com
2025-07-31 14:15:54 +02:00
6dbeba59da plugin: extend snapshot name parsing to legacy volnames
otherwise a volume like `100/oldstyle-100-disk-0.qcow2` can be snapshotted, but
the snapshot file is treated as a volume instead of a snapshot afterwards.

this also avoids issues with volnames with `vm-` in their names, similar to the
LVM fix for underscores.

Co-authored-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-4-f.gruenbichler@proxmox.com
2025-07-31 14:15:54 +02:00
59a54b3d5f fix #6584: plugin: list_images: only include parseable filenames
by only including filenames that are also valid when actually parsing them,
things like snapshot files or files not following our naming scheme are no
longer candidates for rescanning or included in other output.

Co-authored-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-3-f.gruenbichler@proxmox.com
2025-07-31 14:15:54 +02:00
a477189575 plugin: fix parse_name_dir regression for custom volume names
prior to the introduction of snapshot as volume chains, volume names of
almost arbitrary form were accepted. only forbid filenames which are
part of the newly introduced namespace for snapshot files, while
deprecating other names not following our usual naming scheme, instead
of forbidding them outright.

Fixes: b63147f5df "plugin: fix volname parsing"

Co-authored-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-2-f.gruenbichler@proxmox.com
2025-07-31 14:15:54 +02:00
94a54793cd bump version to 9.0.11
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-31 09:19:03 +02:00
92efe5c6cb plugin: lvm: volume snapshot info: untaint snapshot filename
Without untainting, offline-deleting a volume-chain snapshot on an LVM
storage via the GUI can fail with an "Insecure dependency in exec
[...]" error, because volume_snapshot_delete uses the filename in its
qemu-img invocation.

Commit 93f0dfb ("plugin: volume snapshot info: untaint snapshot
filename") fixed this already for the volume_snapshot_info
implementation of the Plugin base class, but missed that the LVM
plugin overrides the method and was still missing the untaint.
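
A generic example of such an untaint via regex capture (the pattern the plugin
actually uses may be stricter):

```
#!/usr/bin/perl -T
use strict;
use warnings;

# Under taint mode, exec/system refuse tainted arguments with an
# "Insecure dependency in exec ..." error until the value has been laundered
# through a regex capture.
$ENV{PATH} = '/usr/bin:/bin';

my $snap_path = $ARGV[0] // '/dev/vg0/snap_vm-100-disk-0_mysnap.qcow2';

if ($snap_path =~ m!^(/dev/[A-Za-z0-9._/-]+)$!) {
    $snap_path = $1;    # untainted copy of the snapshot filename
} else {
    die "unexpected snapshot path '$snap_path'\n";
}

system('qemu-img', 'info', $snap_path);
```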

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250731071306.11777-1-f.weber@proxmox.com
2025-07-31 09:18:33 +02:00
74b5031c9a bump version to 9.0.10
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-31 04:14:23 +02:00
0dc6c9d39c status: rrddata: use new pve-storage-9.0 rrd location if file is present
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Link: https://lore.proxmox.com/20250726010626.1496866-26-a.lauterer@proxmox.com
2025-07-31 04:13:27 +02:00
868de9b1a8 bump version to 9.0.9
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-30 19:51:11 +02:00
e502404fa2 config: drop 'maxfiles' parameter
The 'maxfiles' parameter has been deprecated since the addition of
'prune-backups' in the Proxmox VE 7 beta.

The setting was auto-converted when reading the storage
configuration.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250718125408.133376-2-f.ebner@proxmox.com
2025-07-30 19:35:50 +02:00
fc633887dc lvm plugin: volume snapshot: actually print error when renaming
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Max R. Carrara <m.carrara@proxmox.com>
Reviewed-by: Max R. Carrara <m.carrara@proxmox.com>
Link: https://lore.proxmox.com/20250730162117.160498-4-f.ebner@proxmox.com
2025-07-30 19:32:40 +02:00
db2025f5ba fix #6587: lvm plugin: snapshot info: fix parsing snapshot name
Volume names are allowed to contain underscores, so it is impossible
to determine the snapshot name from just the volume name, e.g.:
snap_vm-100-disk_with_underscore_here_s_some_more.qcow2

Therefore, pass along the short volume name too and match against it.

Note that none of the variables from the result of parse_volname()
were actually used previously.
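
The ambiguity and its resolution in miniature, with the
snap_<volname>_<snapname>.qcow2 scheme inferred from the example above:

```
#!/usr/bin/perl
use strict;
use warnings;

# The file name alone cannot tell where the volume name ends and the snapshot
# name begins once underscores are allowed; matching against the known short
# volume name removes the ambiguity.
my $short_volname = 'vm-100-disk_with_underscore';
my $file = "snap_${short_volname}_mysnap.qcow2";

if ($file =~ m/^snap_\Q$short_volname\E_(.+)\.qcow2$/) {
    print "snapshot name: $1\n";    # prints "mysnap"
}
```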

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Max R. Carrara <m.carrara@proxmox.com>
Reviewed-by: Max R. Carrara <m.carrara@proxmox.com>
Link: https://lore.proxmox.com/20250730162117.160498-3-f.ebner@proxmox.com
2025-07-30 19:32:40 +02:00
819dafe516 lvm plugin: snapshot info: avoid superfluous argument for closure
The $volname variable is never modified in the function, so it doesn't
need to be passed into the $get_snapname_from_path closure.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Max R. Carrara <m.carrara@proxmox.com>
Reviewed-by: Max R. Carrara <m.carrara@proxmox.com>
Link: https://lore.proxmox.com/20250730162117.160498-2-f.ebner@proxmox.com
2025-07-30 19:32:40 +02:00
169f8091dd test: add tests for volume access checks
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250730130506.96278-1-f.ebner@proxmox.com
2025-07-30 18:42:52 +02:00
5245e044ad fix #5181: pbs: store and read passwords as unicode
At the moment calling
```
pvesm add pbs test --password="bär12345" --datastore='test' # ..other params
```

will result in the API handler getting param->{password} as a utf-8
encoded string. When dumped with Devel::Peek's Dump() one can see:

```
SV = PV(0x5a02c1a3ff10) at 0x5a02bd713670
  REFCNT = 1
  FLAGS = (POK,IsCOW,pPOK,UTF8)
  PV = 0x5a02c1a409b0 "b\xC3\xA4r12345"\0 [UTF8 "b\x{e4}r12345"]
  CUR = 9
  LEN = 11
  COW_REFCNT = 0
```

Then, writing the file via file_set_contents (which uses syswrite
internally) results in Perl encoding the password as latin1 and a
file with the contents:

```
$ hexdump -C /etc/pve/priv/storage/test.pw
00000000  62 e4 72 31 32 33 34 35                           |b.r12345|
00000008
```

whereas the correct contents should have been:
```
00000000  62 c3 a4 72 31 32 33 34  35                       |b..r12345|
00000009
```

Later, when the file is read via file_read_firstline, the result is:

```
SV = PV(0x5e8baa411090) at 0x5e8baa5a96b8
  REFCNT = 1
  FLAGS = (POK,pPOK)
  PV = 0x5e8baa43ee20 "b\xE4r12345"\0
  CUR = 8
  LEN = 81
```

which is a different string than the original.

At the moment, adding the storage will work as the utf-8 password is
still in memory; however, subsequent uses (e.g. by pvestatd) will
fail.

This patch fixes the issue by encoding the string as utf-8 both when
reading it and when storing it to disk. In the past, users could work
around the issue by writing the correct password to
/etc/pve/priv/{storage}.pw directly, and this fix is compatible with that.

It is documented at
https://pbs.proxmox.com/docs/backup-client.html#environment-variables
that the Backup Server password must be valid utf-8.
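
The essence of the fix using only the core Encode module; plain open/print
stand in here for the file_set_contents/file_read_firstline helpers mentioned
above:

```
#!/usr/bin/perl
use strict;
use warnings;
use utf8;                       # the literal below is a character string
use Encode qw(encode decode);

my $password = "bär12345";
my $file     = '/tmp/test.pw';

# Store: encode the character string to UTF-8 bytes before writing, so Perl
# cannot silently fall back to its latin1 representation.
open(my $wfh, '>', $file) or die "open $file: $!\n";
print {$wfh} encode('UTF-8', $password);
close($wfh);

# Load: decode the raw bytes back into the original character string.
open(my $rfh, '<', $file) or die "open $file: $!\n";
my $stored = decode('UTF-8', <$rfh>);
close($rfh);

print $stored eq $password ? "round-trip ok\n" : "round-trip mismatch\n";
```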

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250730072239.24928-1-m.sandoval@proxmox.com
2025-07-30 11:55:18 +02:00
cafbdb8c52 bump version to 9.0.8
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-29 17:28:23 +02:00
172c71a64d common: use v5.36
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-29 16:42:49 +02:00
1afe55b35b escape dirs in path_to_volume_id regexes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-29 16:42:49 +02:00
dfad07158d drop rootdir case in path_to_volume_id
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-29 16:42:49 +02:00
715ec4f95b parse_volname: remove openvz 'rootdir' case
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-29 16:42:49 +02:00
f62fc773ad tests: drop rootdir/ tests
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[FE: use 'images' rather than not-yet-existing 'ct-vol' for now
     disable seen vtype tests for now]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-29 16:42:18 +02:00
9b7fa1e758 btrfs: remove unnecessary mkpath call
The existence of the original volume should imply the existence of its
parent directory, after all... And with the new typed subdirectories
this was wrong.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-29 15:52:00 +02:00
a9315a0ed3 fix #6561: zfspool: track refquota for subvolumes via user properties
ZFS itself does not track the refquota per snapshot, so this needs to
be handled by Proxmox VE. Otherwise, rolling back a volume that has
been resized since the snapshot was taken will retain the new size.
This is problematic, as it means the value in the guest config does
not match the size of the disk on the storage anymore.

This implementation does so by leveraging a user property per
snapshot.
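
What a user property per snapshot looks like at the ZFS command-line level
(dataset and property names are made up; the property the plugin actually uses
may differ):

```
#!/usr/bin/perl
use strict;
use warnings;

# Record the refquota in effect when the snapshot is taken and read it back on
# rollback. ZFS user properties need a "namespace:name" form.
my $snap = 'rpool/data/subvol-100-disk-0@before-resize';
my $prop = 'example.com:refquota';

system('zfs', 'set', "$prop=8G", $snap) == 0
    or die "setting user property on $snap failed\n";

my $saved = qx(zfs get -H -o value $prop $snap);
chomp $saved;
print "refquota recorded for snapshot: $saved\n";
```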

Reported-by: Lukas Wagner <l.wagner@proxmox.com>
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250729121151.159797-1-s.sterz@proxmox.com
[FE: improve capitalization and wording in commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-29 15:16:03 +02:00
d0239ba9c0 lvm plugin: use relative path for qcow2 rebase command
otherwise the resulting qcow2 file will contain an absolute path, which makes
renaming the backing VG of the storage impossible.
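
The difference in plain qemu-img terms, with invented file names; running the
rebase from the volume directory lets only the bare file name be recorded as
the backing reference:

```
#!/usr/bin/perl
use strict;
use warnings;

# Both files live in the same directory; rebase from there so the overlay
# records only the relative backing file name, not an absolute /dev/<vg> path.
my $dir     = '/dev/vg0';                        # hypothetical volume directory
my $overlay = 'snap_vm-100-disk-0_mysnap.qcow2'; # hypothetical overlay name
chdir($dir) or die "chdir $dir: $!\n";

# '-u' only rewrites the backing reference, '-F qcow2' pins the backing format.
system('qemu-img', 'rebase', '-u', '-F', 'qcow2',
       '-b', 'vm-100-disk-0.qcow2', $overlay) == 0
    or die "qemu-img rebase failed\n";
```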

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250729115320.579286-5-f.gruenbichler@proxmox.com
2025-07-29 14:43:07 +02:00
7da44f56e4 plugin: use relative path for qcow2 rebase command
otherwise the resulting qcow2 file will contain an absolute path, which makes
changing the backing path of the directory storage impossible.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250729115320.579286-4-f.gruenbichler@proxmox.com
2025-07-29 14:43:07 +02:00
191cddac30 lvm plugin: fix typo in rebase log message
this was copied over from Plugin.pm

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250729115320.579286-3-f.gruenbichler@proxmox.com
[FE: use string concatenation rather than multi-argument print]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-29 14:43:01 +02:00
a7afad969d plugin: fix typo in rebase log message
by directly printing the to-be-executed command, instead of copying it,
which is error-prone.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250729115320.579286-2-f.gruenbichler@proxmox.com
[FE: use string concatenation rather than multi-argument print]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-29 14:41:48 +02:00
93f0dfbc75 plugin: volume snapshot info: untaint snapshot filename
Without untainting, offline-deleting a volume-chain snapshot on a
directory storage via the GUI fails with an "Insecure dependency in
exec [...]" error, because volume_snapshot_delete uses the filename in
its qemu-img invocation.

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
2025-07-28 15:10:49 +02:00
43ec7bdfe6 plugin: move 'parse_snap_name' up to before its use
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-23 08:52:17 +02:00
3cb0c3398c bump version to 9.0.7
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-22 15:01:58 +02:00
42bc721b41 make tidy
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-22 14:57:22 +02:00
cfe7d7ebe7 default format helper: only return default format
Callers that required the valid formats are now using the
resolve_format_hint() helper instead.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-22 14:57:22 +02:00
c86d8f6d80 introduce resolve_format_hint() helper
Callers interested in the list of valid formats from
storage_default_format() actually want this functionality.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-22 14:57:22 +02:00
ad20e4faef api: status: rely on get_formats() method for determining format-related info
Rely on get_formats() rather than just the static plugin data in the
'status' API call. This removes the need for the special casing for
LVM storages without the 'snapshot-as-volume-chain' option. It also
fixes the issue that the 'format' storage configuration option to
override the default format was previously ignored there.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-22 14:57:22 +02:00