Returning the formats in the following way:
```
"format": [
{
"format1" => 1,
"format2" => 1,
...
},
"defaultFormat"
]
```
is not a very good return format: it abuses an array as a tuple and
unnecessarily encodes a list of formats as an object. Also, we can't
describe it properly in JSONSchema in perl, nor is our perl->rust
generator able to handle it.
Instead, return it like this:
```
"formats": {
"default": "defaultFormat",
"supported": ["format1", "format2", ...]
}
```
which makes it much more sensible as an API return schema, and makes it
possible to annotate in the JSONSchema.
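A minimal sketch of how the new property might be annotated in the perl
JSONSchema (the field names come from the commit, the descriptions and
surrounding structure are assumptions):
```
formats => {
    type => 'object',
    description => "Default and supported formats of the storage type.",
    properties => {
        default => {
            type => 'string',
            description => "The default format.",
        },
        supported => {
            type => 'array',
            description => "The list of supported formats.",
            items => { type => 'string' },
        },
    },
},
```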
For compatibility reasons, keep the old property around, and add a
comment to remove it with 10.0.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
No problem for 'select_existing', but we cannot actually describe
'format' with our JSONSchema, since it uses an array as a form of tuple,
and even with oneOf this cannot currently be described.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
As reported by a user in the enterprise support in a ticket handled by
Friedrich, concurrent snapshot operations could lead to metadata
corruption of the volume group with unlucky timing. Add the missing
locking for operations modifying the metadata, i.e. allocation, rename
and removal. Since volume_snapshot() and volume_snapshot_rollback()
only do those, use a wrapper for the whole function. Since
volume_snapshot_delete() can do longer-running commit or rebase
operations, only lock the necessary sections there.
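A rough sketch of the whole-function wrapper, assuming the plugin's
cluster_lock_storage() helper is what serializes access to the volume
group (the body is elided):
```
sub volume_snapshot {
    my ($class, $scfg, $storeid, $volname, $snap) = @_;

    # serialize with other operations that modify the VG metadata
    return $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
        # ... existing snapshot logic (lvcreate/lvrename/lvremove) ...
    });
}
```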
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251103162330.112603-5-f.ebner@proxmox.com
With a shared LVM storage, parallel imports, which might be done in
the context of remote migration, could lead to metadata corruption
with unlucky timing, because of missing locking. Add locking around
allocation and removal, which are the sections that modify LVM
metadata. Note that other plugins suffer from missing locking here as
well, but only regarding naming conflicts. Adding locking around the
full call to volume_import() would mean locking for much too long.
Other plugins could follow the approach here, or there could be a
reservation approach like proposed in [0].
[0]: https://lore.proxmox.com/pve-devel/20240403150712.262773-1-h.duerr@proxmox.com/
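A sketch of locking only the metadata-modifying section, keeping the
long-running data transfer outside the lock (helper names follow the
plugin API, the variables are assumptions):
```
# allocate the target volume under the lock; lvcreate modifies the
# VG metadata and must be serialized across the cluster
$class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
    $class->alloc_image($storeid, $scfg, $vmid, $format, $name, $size);
});

# ... then stream the actual volume data without holding the lock ...
```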
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251103162330.112603-4-f.ebner@proxmox.com
The current cstream implementation is pretty slow, even without throttling.
Use blkdiscard --zeroout instead when the storage supports it, which is a
few orders of magnitude faster.
Another benefit is that blkdiscard skips already-zeroed blocks, so it is
very fast for empty temporary images such as snapshots.
blkdiscard does not support throttling like cstream does, but we can tune
the step size of the zeroes pushed to the storage.
Use a 32 MiB step size by default, like oVirt, where it seems to be the
best balance between speed and load:
79f1d79058
It can be reduced with the "saferemove_stepsize" option.
The step size is also automatically reduced to the sysfs
write_zeroes_max_bytes value, which is the maximum zeroing batch size
supported by the storage.
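A rough sketch of the resulting call; the blkdiscard options are from
blkdiscard(8), while the clamping logic and variable names here are
assumptions:
```
use PVE::Tools qw(run_command file_read_firstline);

my $stepsize = $scfg->{saferemove_stepsize} // 32 * 1024 * 1024; # 32 MiB default

# clamp to the largest zeroing batch the device supports, if exposed via sysfs
my $sysfs = "/sys/block/$devname/queue/write_zeroes_max_bytes";
if (-r $sysfs) {
    my $max = file_read_firstline($sysfs);
    $stepsize = $max if $max && $max < $stepsize;
}

run_command(['blkdiscard', '--zeroout', '--step', $stepsize, $path]);
```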
Test with an empty 100G volume:
time /usr/bin/cstream -i /dev/zero -o /dev/test/vm-100-disk-0.qcow2 -T 10 -v 1 -b 1048576
13561233408 B 12.6 GB 10.00 s 1356062979 B/s 1.26 GB/s
26021462016 B 24.2 GB 20.00 s 1301029969 B/s 1.21 GB/s
38585499648 B 35.9 GB 30.00 s 1286135343 B/s 1.20 GB/s
50998542336 B 47.5 GB 40.00 s 1274925312 B/s 1.19 GB/s
63702765568 B 59.3 GB 50.00 s 1274009877 B/s 1.19 GB/s
76721885184 B 71.5 GB 60.00 s 1278640698 B/s 1.19 GB/s
89126539264 B 83.0 GB 70.00 s 1273178488 B/s 1.19 GB/s
101666459648 B 94.7 GB 80.00 s 1270779024 B/s 1.18 GB/s
107390959616 B 100.0 GB 84.39 s 1272531142 B/s 1.19 GB/s
write: No space left on device
real 1m24.394s
user 0m0.171s
sys 1m24.052s
time blkdiscard --zeroout /dev/test/vm-100-disk-0.qcow2 -v
/dev/test/vm-100-disk-0.qcow2: Zero-filled 107390959616 bytes from the offset 0
real 0m3.641s
user 0m0.001s
sys 0m3.433s
Test with a 100G volume filled with random data:
time blkdiscard --zeroout /dev/test/vm-100-disk-0.qcow2 -v
/dev/test/vm-112-disk-1: Zero-filled 4764729344 bytes from the offset 0
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 4764729344
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 9428795392
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 14260633600
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 19092471808
/dev/test/vm-112-disk-1: Zero-filled 4865392640 bytes from the offset 23924310016
/dev/test/vm-112-disk-1: Zero-filled 4596957184 bytes from the offset 28789702656
/dev/test/vm-112-disk-1: Zero-filled 4731174912 bytes from the offset 33386659840
/dev/test/vm-112-disk-1: Zero-filled 4294967296 bytes from the offset 38117834752
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 42412802048
/dev/test/vm-112-disk-1: Zero-filled 4697620480 bytes from the offset 47076868096
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 51774488576
/dev/test/vm-112-disk-1: Zero-filled 4261412864 bytes from the offset 56438554624
/dev/test/vm-112-disk-1: Zero-filled 4362076160 bytes from the offset 60699967488
/dev/test/vm-112-disk-1: Zero-filled 4127195136 bytes from the offset 65062043648
/dev/test/vm-112-disk-1: Zero-filled 4328521728 bytes from the offset 69189238784
/dev/test/vm-112-disk-1: Zero-filled 4731174912 bytes from the offset 73517760512
/dev/test/vm-112-disk-1: Zero-filled 4026531840 bytes from the offset 78248935424
/dev/test/vm-112-disk-1: Zero-filled 4194304000 bytes from the offset 82275467264
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 86469771264
/dev/test/vm-112-disk-1: Zero-filled 4395630592 bytes from the offset 91133837312
/dev/test/vm-112-disk-1: Zero-filled 3623878656 bytes from the offset 95529467904
/dev/test/vm-112-disk-1: Zero-filled 4462739456 bytes from the offset 99153346560
/dev/test/vm-112-disk-1: Zero-filled 3758096384 bytes from the offset 103616086016
real 0m23.969s
user 0m0.030s
sys 0m0.144s
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Link: https://lore.proxmox.com/mailman.253.1761222252.362.pve-devel@lists.proxmox.com
[FE: Minor language improvements
Use more common style for importing with qw()
Don't specify full path to blkdiscard binary for run_command()]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Currently, the iSCSI plugin regex patterns only match IPv4 and IPv6
addresses, causing session parsing to fail when portals use hostnames
(like nas.example.com:3260).
This patch updates ISCSI_TARGET_RE and the session parsing regex to
accept any non-whitespace characters before the port, allowing
hostname-based portals to work correctly.
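A sketch of the relaxed pattern; the actual regex in the plugin likely
differs in its details:
```
# before (sketch): the host part had to be a literal IPv4/IPv6 address;
# after: accept any non-whitespace host before the port, so hostname
# portals like nas.example.com:3260 match as well
my $ISCSI_TARGET_RE = qr/^(\S+:\d+)\,\S+\s+(\S+)\s*$/;
```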
Tested with IP- and hostname-based portals on Proxmox VE 8.2, 8.3, and 8.4.1.
Signed-off-by: Stelios Vailakakis <stelios@libvirt.dev>
Link: https://lore.proxmox.com/20250626022920.1323623-1-stelios@libvirt.dev
After removing an ESXi storage, a zombie process is generated because
the forked FUSE process (esxi-folder-fuse) is not properly reaped.
This patch implements a double-fork mechanism to ensure the FUSE process
is reparented to init (PID 1), which will properly reap it when it
exits. It additionally adds the missing waitpid() call to reap the
intermediate child process.
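A condensed sketch of the double-fork pattern; the FUSE arguments are a
placeholder:
```
use POSIX ();

my @fuse_args = ();    # placeholder for the actual FUSE arguments

my $pid = fork() // die "fork failed: $!\n";
if ($pid == 0) {
    # intermediate child: fork again and exit right away, so the FUSE
    # process gets reparented to init (PID 1), which will reap it
    my $pid2 = fork() // POSIX::_exit(1);
    if ($pid2 == 0) {
        exec('esxi-folder-fuse', @fuse_args) or POSIX::_exit(1);
    }
    POSIX::_exit(0);
}

# parent: reap the intermediate child so it does not become a zombie
waitpid($pid, 0);
```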
Tested on Proxmox VE 8.4.1 with ESXi 8.0U3e storage.
Signed-off-by: Stelios Vailakakis <stelios@libvirt.dev>
Link: https://lore.proxmox.com/20250701154135.2387872-1-stelios@libvirt.dev
Taking an offline snapshot of a VM on an NFS/CIFS storage with
snapshot-as-volume-chain currently creates a volume-chain snapshot as
expected, but taking an online snapshot unexpectedly creates a qcow2
snapshot. This was also reported in the forum [1].
The reason is that the NFS/CIFS plugins inherit the method
volume_qemu_snapshot_method from the Plugin base class, whereas they
actually behave similarly to the Directory plugin. To fix this,
implement the method for the NFS/CIFS plugins and let it call the
Directory plugin's implementation.
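A sketch of the delegation; the exact method signature is an assumption:
```
sub volume_qemu_snapshot_method {
    my ($class, $storeid, $scfg, $volname) = @_;
    # behave like a directory storage for snapshot-as-volume-chain
    return PVE::Storage::DirPlugin::volume_qemu_snapshot_method(
        $class, $storeid, $scfg, $volname);
}
```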
[1] https://forum.proxmox.com/threads/168619/post-787374
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250731082538.31891-1-f.weber@proxmox.com
Prior to the introduction of snapshots as volume chains, volume names of
almost arbitrary form were accepted. Only forbid filenames that are part
of the newly introduced namespace for snapshot files, while deprecating
other names not following our usual naming scheme, instead of forbidding
them outright.
Fixes: b63147f5df ("plugin: fix volname parsing")
Co-authored-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-2-f.gruenbichler@proxmox.com
Without untainting, offline-deleting a volume-chain snapshot on an LVM
storage via the GUI can fail with an "Insecure dependency in exec
[...]" error, because volume_snapshot_delete uses the filename in its
qemu-img invocation.
Commit 93f0dfb ("plugin: volume snapshot info: untaint snapshot
filename") fixed this already for the volume_snapshot_info
implementation of the Plugin base class, but missed that the LVM
plugin overrides the method and was still missing the untaint.
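The fix boils down to the usual perl untainting idiom; a sketch (the
exact pattern used may be stricter):
```
# untaint the snapshot filename before handing it to run_command/exec
my ($file) = $snapfile =~ m/^([^\s;]+)$/
    or die "snapshot filename '$snapfile' contains invalid characters\n";
```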
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250731071306.11777-1-f.weber@proxmox.com
Volume names are allowed to contain underscores, so it is impossible
to determine the snapshot name from just the volume name, e.g.:
snap_vm-100-disk_with_underscore_here_s_some_more.qcow2
Therefore, pass along the short volume name too and match against it.
Note that none of the variables from the result of parse_volname()
were actually used previously.
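A sketch of the match using the short volume name; the file-naming
pattern and variable names here are assumptions:
```
# anchored against the known short volume name, the snapshot name
# can be extracted unambiguously despite underscores
if ($filename =~ m/^snap_\Q$short_volname\E_(.+)\.qcow2$/) {
    my $snapname = $1;
    # ...
}
```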
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Max R. Carrara <m.carrara@proxmox.com>
Reviewed-by: Max R. Carrara <m.carrara@proxmox.com>
Link: https://lore.proxmox.com/20250730162117.160498-3-f.ebner@proxmox.com
At the moment, calling
```
pvesm add pbs test --password="bär12345" --datastore='test' # ..other params
```
will result in the API handler getting $param->{password} as a UTF-8
encoded string. When dumped with Devel::Peek's Dump(), one can see:
```
SV = PV(0x5a02c1a3ff10) at 0x5a02bd713670
REFCNT = 1
FLAGS = (POK,IsCOW,pPOK,UTF8)
PV = 0x5a02c1a409b0 "b\xC3\xA4r12345"\0 [UTF8 "b\x{e4}r12345"]
CUR = 9
LEN = 11
COW_REFCNT = 0
```
Then, writing the file via file_set_contents (which uses syswrite
internally) results in perl encoding the password as latin1, producing a
file with contents:
```
$ hexdump -C /etc/pve/priv/storage/test.pw
00000000 62 e4 72 31 32 33 34 35 |b.r12345|
00000008
```
when the correct contents should have been:
```
00000000 62 c3 a4 72 31 32 33 34 35 |b..r12345|
00000009
```
Later, when the file is read via file_read_firstline, the result is
```
SV = PV(0x5e8baa411090) at 0x5e8baa5a96b8
REFCNT = 1
FLAGS = (POK,pPOK)
PV = 0x5e8baa43ee20 "b\xE4r12345"\0
CUR = 8
LEN = 81
```
which is a different string than the original.
At the moment, adding the storage works, as the UTF-8 password is still
in memory; however, subsequent uses (e.g. by pvestatd) will fail.
This patch fixes the issue by encoding the string as UTF-8 both when
reading it and when storing it to disk. In the past, the user could work
around the issue by writing the correct password to
/etc/pve/priv/{storage}.pw directly, and this fix is compatible with that.
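A sketch of the fix with Encode; the helper calls are the ones named in
the commit message, the variable names are assumptions:
```
use Encode qw(encode decode);
use PVE::Tools;

# store: write the password as raw UTF-8 bytes instead of letting perl
# pick latin1 when syswrite sees a downgradeable character string
PVE::Tools::file_set_contents($pwfile, encode('utf8', $password));

# load: decode the raw bytes back into a perl character string
my $password_read = decode('utf8', PVE::Tools::file_read_firstline($pwfile));
```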
It is documented at
https://pbs.proxmox.com/docs/backup-client.html#environment-variables
that the Backup Server password must be valid UTF-8.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250730072239.24928-1-m.sandoval@proxmox.com
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[FE: use 'images' rather than not-yet-existing 'ct-vol' for now
disable seen vtype tests for now]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The existence of the original volume should imply the existence of its
parent directory, after all... And with the new typed subdirectories,
this assumption was wrong.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
ZFS itself does not track the refquota per snapshot, so this needs to
be handled by Proxmox VE. Otherwise, rolling back a volume that has
been resized since the snapshot was taken will retain the new size.
This is problematic, as it means the value in the guest config does
not match the size of the disk on the storage anymore.
This implementation does so by leveraging a user property per
snapshot.
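A sketch of the user-property approach; the property name
"org.proxmox:refquota" and the surrounding code are assumptions:
```
use PVE::Tools qw(run_command);

# at snapshot time: record the current refquota on the snapshot itself
run_command(['zfs', 'set', "org.proxmox:refquota=$refquota", "$dataset\@$snapname"]);

# at rollback time: read the saved value back and re-apply it
my $saved;
run_command(
    ['zfs', 'get', '-H', '-o', 'value', 'org.proxmox:refquota', "$dataset\@$snapname"],
    outfunc => sub { $saved = shift },
);
run_command(['zfs', 'set', "refquota=$saved", $dataset])
    if defined($saved) && $saved ne '-';
```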
Reported-by: Lukas Wagner <l.wagner@proxmox.com>
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250729121151.159797-1-s.sterz@proxmox.com
[FE: improve capitalization and wording in commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>