Commit Graph

2158 Commits

Author SHA1 Message Date
7f5aaac51c Add README.md 2025-12-14 16:27:12 +00:00
7fcf6e29bf add pmem support 2025-12-14 18:13:32 +02:00
6f49432acc bump version to 9.1.0
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-20 14:59:46 +01:00
28268eabaa pbs: move api-token detection into its own local sub method
for better encapsulation.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-20 14:55:54 +01:00
d11122d7f9 pbs: reduce line-bloat, improve variable names
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-20 14:48:28 +01:00
80770ef49e fix #6900: correctly detect PBS API tokens in storage plugin
The PBS storage plugin used PVE code to detect if an API token was
entered in the username field. This led to bad requests for some
valid PBS tokens which are not valid PVE tokens. Examples are
"root@pam!1234" and "root@pam!_-".

Relax the token pattern to allow token names and realms that start
with numbers or underscores. Also allow single character token names,
which are allowed on the backend even though they can't be created
through the PBS Web UI.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
Link: https://lore.proxmox.com/20251120131149.147981-1-r.obkircher@proxmox.com
2025-11-20 14:42:24 +01:00
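A minimal sketch of such a relaxed token-ID check (illustrative only; the helper name and the exact pattern are assumptions, not the actual plugin code):
```
use strict;
use warnings;

# Hypothetical helper: accept PBS-style token IDs of the form
# <user>@<realm>!<tokenname>, where realm and token name may start with a
# digit or underscore and the token name may be a single character.
sub looks_like_pbs_api_token {
    my ($id) = @_;
    return $id =~ m/^[^\s:!\@\/]+\@[A-Za-z0-9_][A-Za-z0-9._\-]*![A-Za-z0-9_][A-Za-z0-9._\-]*$/;
}

for my $id ('root@pam!1234', 'root@pam!_-', 'root@pam!a', 'root@pam') {
    printf "%-15s => %s\n", $id, looks_like_pbs_api_token($id) ? 'API token' : 'plain user';
}
```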
5001f03269 lvm plugin: fix locking for rollback when using CLI
Doing a rollback via CLI on an LVM storage with 'saferemove' and
'snapshot-as-volume-chain' would run into a locking issue, because
the forked zero-out worker would try to acquire the lock while the
main CLI task is still inside the locked section for
volume_snapshot_rollback_locked(). The same issue does not happen when
the rollback is done via UI. The reason for this can be found in the
note regarding fork_worker():

> we simulate running in foreground if ($self->{type} eq 'cli')

So the worker will be awaited synchronously in CLI context, resulting
in the deadlock, while via API/UI, the main task would move on and
release the lock allowing the zero-out worker to acquire it.

Fix the issue by not calling fork_cleanup_worker() inside the locked
section.

Fixes: 8eabcc7 ("lvm plugin: snapshot-as-volume-chain: use locking for snapshot operations")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Link: https://lore.proxmox.com/20251120101742.24843-1-f.ebner@proxmox.com
2025-11-20 14:41:57 +01:00
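The deadlock pattern can be illustrated with a small stand-alone sketch; flock-based locking and a plain fork stand in for the actual PVE locking and fork_worker() code:
```
use strict;
use warnings;
use Fcntl qw(:flock);

my $lockfile = '/tmp/lock-demo';

sub locked {
    my ($code) = @_;
    open(my $fh, '>', $lockfile) or die "open: $!";
    flock($fh, LOCK_EX) or die "flock: $!";
    $code->();
    close($fh); # releases the lock
}

sub run_worker_and_wait { # stands in for fork_worker() in CLI context
    my ($code) = @_;
    my $pid = fork() // die "fork: $!";
    if ($pid == 0) { $code->(); exit 0; }
    waitpid($pid, 0); # CLI: the worker is awaited synchronously
}

# Deadlocks: the child blocks on the lock the parent still holds, while the
# parent in turn waits for the child.
# locked(sub { run_worker_and_wait(sub { locked(sub { print "zero-out\n" }) }) });

# No deadlock: spawn the zero-out worker only after leaving the locked section.
locked(sub { print "rollback\n" });
run_worker_and_wait(sub { locked(sub { print "zero-out\n" }) });
```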
bb958344ec plugin: import/export formats: fix regression to unbreak import/export of 'images' content
In particular, this affects offline migration of guest volumes.

Reported in the community forum:
https://forum.proxmox.com/threads/176352/

Fixes: 0ba0739 ("plugin: allow volume import of iso, snippets, vztmpl and import")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/all/20251120121304.67944-1-f.ebner@proxmox.com
2025-11-20 14:03:50 +01:00
a52a1ef526 test: add missing test case for .tar container template
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20251118114900.62955-1-f.schauer@proxmox.com
2025-11-18 13:00:39 +01:00
035a734463 bump version to 9.0.18
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-17 21:54:37 +01:00
c45b9b430a api: oci image pull: unconditionally unlink temporary file on interrupt
It's more robust and cheaper to just always unlink (always one
syscall) and ignore ENOENT compared to stat and optional unlink (two
syscalls worst case).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-17 21:54:37 +01:00
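A small sketch of that pattern (illustrative; the file path is hypothetical):
```
use strict;
use warnings;
use POSIX qw(ENOENT);

my $tmpfile = '/tmp/oci-pull-example.tmp'; # hypothetical temporary file
# Always attempt the unlink and only complain if the failure is something
# other than "file does not exist".
if (!unlink($tmpfile) && $! != ENOENT) {
    warn "unable to remove temporary file '$tmpfile' - $!\n";
}
```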
8ce79d86bb re-tidy perl code
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-17 21:54:37 +01:00
007c700af1 api: oci image pull: oci-registry-pull: fix incomplete reference regex
Previously trying to pull an OCI image with a '-' character in its name
failed because the regex did not match on it. This commit fixes this by
basing the regex on the one used in 'query-oci-repo-tags'.

Reported-by: Nicolas Frey <n.frey@proxmox.com>
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20251117171528.262443-5-f.schauer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-17 21:54:32 +01:00
21584f8350 api: oci image pull: add optional OCI image filename parameter
Give the user the ability to choose a custom destination file name for
the OCI image.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20251117171528.262443-4-f.schauer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-17 21:54:21 +01:00
0ed80aaf31 api: oci image pull: do not pull OCI image if file already exists
This ensures that the API call fails early, before pulling the OCI image
from the registry and then failing to rename the temporary file.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20251117171528.262443-3-f.schauer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-17 21:54:17 +01:00
fe3b2915f2 api: oci image pull: pull OCI image to temporary file
Pull the OCI image to a temporary file first. Once it has finished,
rename it, which is an atomic operation. This prevents accidental usage
of an incomplete OCI image file for container creation.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20251117171528.262443-2-f.schauer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-17 21:54:11 +01:00
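A sketch of the temp-file-plus-rename pattern used for the pull (illustrative only; paths, file names and the fail-early check are assumptions, not the actual API code):
```
use strict;
use warnings;
use File::Temp;

my $dir    = '/var/lib/vz/template/cache';            # hypothetical target directory
my $target = "$dir/example-oci-image.tar";            # hypothetical final file name
die "file '$target' already exists\n" if -e $target;  # fail early

# Download to a temporary file in the same directory first ...
my $tmp = File::Temp->new(TEMPLATE => 'oci-pull.XXXXXX', DIR => $dir, UNLINK => 0);
print {$tmp} "... image data ...\n"; # stands in for the actual registry pull
close($tmp) or die "close failed: $!\n";

# ... then atomically rename it into place, so readers never see a partial file.
rename($tmp->filename, $target) or die "rename to '$target' failed: $!\n";
```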
60a80163e0 bump version to 9.0.17
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-17 00:00:21 +01:00
7744cf2bbd api: add storage/{storage}/oci-registry-pull method
Add a storage API method to pull an OCI image from a registry using
skopeo.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20251008171028.196998-14-f.schauer@proxmox.com
2025-11-15 09:29:50 +01:00
e49b2222d6 bump version to 9.0.16
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-14 00:39:45 +01:00
b6fefb03ba lvm plugin: disallow disabling 'snapshot-as-volume-chain' while a qcow2 image exists
There are multiple reasons to disallow disabling
'snapshot-as-volume-chain' while a qcow2 image exists:

1. The list of allowed formats depends on 'snapshot-as-volume-chain'.
2. Snapshot functionality is broken. This includes creating snapshots,
   but also rollback, which removes the current volume and then fails.
3. There already is coupling between having qcow2 on LVM and having
   'snapshot-as-volume-chain' enabled. For example, the
   'discard-no-unref' option is required for qcow2 on LVM, but
   qemu-server only checks for 'snapshot-as-volume-chain' to avoid
   hard-coding LVM. Another one is that volume_qemu_snapshot_method()
   returns 'mixed' when the format is qcow2 even when
   'snapshot-as-volume-chain' is disabled. Hunting down these corner
   cases just to make it easier to disable does not seem to be worth
   it, considering there's already 1. and 2. as reasons too.
4. There might be other similar issues that have not surfaced yet,
   because disabling the feature while qcow2 is present is essentially
   untested and very uncommon.

For file-based storages, the 'snapshot-as-volume-chain' property is
already fixed, i.e., it is determined upon storage creation.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-11-14 00:34:58 +01:00
0b1331ccda close #6669: plugin api: introduce on_update_hook_full() method
The original on_update_hook() method is limited, because only the
updated properties and values are passed in. Introduce a new
on_update_hook_full() method which also receives the current storage
configuration and the list of which properties are to be deleted. This
allows detecting and reacting to all changes and knowing how values
changed.

Deletion of properties is deferred to after the on_update_hook(_full)
call. This makes it possible to pass the unmodified current storage
configuration to the method.

The default implementation of on_update_hook_full() just falls back to
the original on_update_hook().

Bump APIVER and APIAGE.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-11-14 00:34:58 +01:00
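A sketch of the described default behaviour (hypothetical module and parameter names; the actual method signatures in pve-storage may differ):
```
package My::Hypothetical::Plugin;

use strict;
use warnings;

sub on_update_hook {
    my ($class, $storeid, $scfg, %param) = @_;
    # original hook: only sees the updated properties and values
    return;
}

sub on_update_hook_full {
    my ($class, $storeid, $current_scfg, $updated_scfg, $delete_list, %param) = @_;
    # full hook: can compare $current_scfg with $updated_scfg and see which
    # properties are scheduled for deletion in $delete_list
    return $class->on_update_hook($storeid, $updated_scfg, %param);
}

1;
```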
24367c07d3 plugin: rbd: pass rxbounce when mapping Windows VM guest volumes
When mapping a volume (e.g., because KRBD is enabled) and the hint
'guest-is-windows' is given and true, pass the rxbounce option. This
is to avoid "bad crc/signature" warnings in the journal, retransmits
and degraded performance, see [1]. If the volume is already mapped
without rxbounce (this can be determined from the map options exposed
in sysfs), and it should be mapped with rxbounce, and the
'plugin-may-deactivate-volume' hint denotes it is currently safe to
deactivate the volume, unmap the volume and re-map it with rxbounce.

If 'guest-is-windows' is not given or not true, and the volume is
already mapped, take no action. This also means that guest volumes
that are mapped with rxbounce, but do not have to be (because they do
not belong to a Windows guest), are not deactivated. This can be the
case if a user applied the workaround of adding rxbounce to
'rbd_default_map_options', since this applies to all volumes.

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=5779
[2] https://forum.proxmox.com/threads/155741/post-710845

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-6-f.weber@proxmox.com
2025-11-14 00:32:59 +01:00
738897852c plugin: rbd: factor out subroutine to obtain RBD ID
This allows the subroutine to be reused.

No functional change intended.

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-5-f.weber@proxmox.com
2025-11-14 00:32:55 +01:00
7c2a554b97 storage: activate/map volumes: verify and pass hints to plugin
Plugin methods {activate,map}_volume accept an optional hints
parameter. Make PVE::Storage::{activate,map}_volumes also accept
hints, verify they are consistent with the schema, and pass them down
to the plugin when activating the volumes.

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-4-f.weber@proxmox.com
2025-11-14 00:32:53 +01:00
8818ff0d1d plugin api: bump api version and age
Introduce $hints parameter to activate_volume() and map_volume().

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-3-f.weber@proxmox.com
2025-11-14 00:32:51 +01:00
e9573e1db5 plugin: introduce hints for activating and mapping volumes
Currently, when a storage plugin activates or maps a guest volume, it
has no information about the respective guest. This is by design to
reduce coupling between the storage layer and the upper layers.

However, in some cases, storage plugins may need to activate volumes
differently based on certain features of the guest. An example is the
RBD plugin with KRBD enabled, where guest volumes of Windows VMs have
to be mapped with the rxbounce option.

Introduce "hints" as a mechanism that allows the upper layers to pass
down well-defined information to the storage plugins on volume
activation/mapping. The storage plugin can make adjustments to its
volume activation/mapping based on the hints. The supported hints are
specified by a JSON schema and may be extended in the future.

Add a subroutine that checks whether a particular hint is supported
(may be used by the storage plugin as well as upper layers). This
allows adding further hints without having to bump pve-storage, since
upper layers can just check whether the current pve-storage supports a
particular hint.

The Boolean 'guest-is-windows' hint denotes that the
to-be-activated/mapped volume belongs to a Windows VM.

It is not guaranteed that the volume is inactive when
{activate,map}_volume are called, and it is not guaranteed that hints
are passed on every storage activation. Hence, it can happen that a
volume is already active but applying the hint would require unmapping
the volume and mapping it again with the hint applied (this is the
case for rxbounce). To cover such cases, the Boolean hint
'plugin-may-deactivate-volume' denotes whether unmapping the volume is
currently safe. Only if this hint is true may the plugin deactivate
the volume and map it again with the hint applied.

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251031103709.60233-2-f.weber@proxmox.com
2025-11-14 00:32:48 +01:00
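A sketch of how such a hint mechanism can look (illustrative; the hash layout and helper name are assumptions, only the two hint names are taken from the commit message):
```
use strict;
use warnings;

my $hints_schema = {
    'guest-is-windows' => {
        type => 'boolean',
        description => 'The to-be-activated/mapped volume belongs to a Windows VM.',
        optional => 1,
    },
    'plugin-may-deactivate-volume' => {
        type => 'boolean',
        description => 'It is currently safe for the plugin to deactivate and re-map the volume.',
        optional => 1,
    },
};

# Upper layers and plugins can probe whether the installed pve-storage knows a
# particular hint before relying on it.
sub hint_is_supported {
    my ($hint) = @_;
    return exists($hints_schema->{$hint}) ? 1 : 0;
}

# A plugin could then act on the hints it was handed, e.g.:
my $hints = { 'guest-is-windows' => 1, 'plugin-may-deactivate-volume' => 0 };
if ($hints->{'guest-is-windows'} && hint_is_supported('guest-is-windows')) {
    print "would map the RBD volume with the rxbounce option\n";
}
```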
ebec84ff87 fix #6224: disks: get: set timeout for retrieval of SMART stat data
In rare scenarios, `smartctl` takes up to 60 seconds to timeout for SCSI
commands to be completed, as reported in our user forum [0] and bugzilla
[1]. It seems that USB drives handled by the USB Attached SCSI (UAS)
kernel module are more likely to be affected by this [2], but it is
more of a case-by-case situation.

Therefore, set a more reasonable timeout of 10 seconds, so that callers
don't have to wait too long or seem unresponsive (e.g. Node Disks view
in the WebGUI).

[0] https://forum.proxmox.com/threads/164799/
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=6224
[2] https://www.smartmontools.org/wiki/SAT-with-UAS-Linux

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Link: https://lore.proxmox.com/20250415071123.36921-3-d.kral@proxmox.com
2025-11-14 00:29:06 +01:00
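A stand-alone sketch of bounding a smartctl call with a timeout, using a plain alarm-based helper (the actual implementation and smartctl arguments may differ; the device path is hypothetical):
```
use strict;
use warnings;

sub run_with_timeout {
    my ($timeout, @cmd) = @_;
    my $out = '';
    eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm($timeout);
        open(my $fh, '-|', @cmd) or die "failed to run @cmd: $!\n";
        local $/; # slurp the whole output
        $out = <$fh>;
        close($fh);
        alarm(0);
    };
    alarm(0);
    die "command '@cmd' timed out after ${timeout}s\n" if $@ && $@ eq "timeout\n";
    die $@ if $@;
    return $out;
}

# Bound the SMART health query to 10 seconds instead of the 60 second worst case.
print run_with_timeout(10, 'smartctl', '-H', '/dev/sda');
```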
b6c18e9116 disks: get: separate error path for retrieving SMART data
Make the subroutine get_smart_data() die with the error message from
the earlier `smartctl` invocation. This is in preparation for the
next patch, which makes that command fail in certain scenarios.

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250415071123.36921-2-d.kral@proxmox.com
2025-11-14 00:28:27 +01:00
0d2df3048a api: smart: return unknown health instead of error message
In case of an error, the WebGUI expects the SMART data API endpoint to
return a health value, but it currently returns an error message
directly. To make this more user-friendly, mask the error in the API
handler.

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250415071123.36921-1-d.kral@proxmox.com
2025-11-14 00:27:12 +01:00
7b41368fc3 lvm thin plugin: do not combine activation change and property change
As reported in the community forum [0], there currently is a warning
from LVM when converting to a base image on an LVM-thin storage:

> WARNING: Combining activation change with other commands is not advised.

From a comment in the LVM source code:

> Unfortunately, lvchange has previously allowed changing an LV
> property and changing LV activation in a single command.  This was
> not a good idea because the behavior/results are hard to predict and
> not possible to sensibly describe.  It's also unnecessary.  So, this
> is here for the sake of compatibility.
>
> This is extremely ugly; activation should always be done separately.
> This is not the full-featured lvchange capability, just the basic
> (the advanced activate options are not provided.)
>
> FIXME: wrap this in a config setting that we can disable by default
> to phase this out?

While it's not clear there's an actual issue in the specific use case
here, just follow what LVM recommends for future-proofing.

[0]: https://forum.proxmox.com/threads/165279/

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250422133314.60806-1-f.ebner@proxmox.com
2025-11-14 00:22:48 +01:00
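Conceptually, this means issuing the property change and the activation change as two separate lvchange calls, e.g. (illustrative commands only; the LV name and the exact flags used by the plugin are assumptions):
```
use strict;
use warnings;

my $lv = 'vg0/base-100-disk-0'; # hypothetical logical volume

# first change the LV property (here: mark it read-only) ...
system('lvchange', '-p', 'r', $lv) == 0 or die "setting LV read-only failed\n";

# ... then change the activation state in a separate call
system('lvchange', '-an', $lv) == 0 or die "deactivating LV failed\n";
```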
0ba0739f69 plugin: allow volume import of iso, snippets, vztmpl and import
Extend volume import functionality to support 'iso', 'snippets',
'vztmpl', and 'import' types, in addition to the existing support for
'images' and 'rootdir'. This is a prerequisite for the ability to move
ISOs, snippets and container templates between nodes.

Existing behavior for importing VM disks and container volumes remains
unchanged.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20250916123257.107491-4-f.schauer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-14 00:16:36 +01:00
829b0cd728 storage migrate: avoid ssh when moving a volume locally
Avoid the overhead of SSH when $target_sshinfo is undefined, and
instead move the volume between storages on the same node directly.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20250916123257.107491-3-f.schauer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-14 00:16:36 +01:00
e556534459 storage migrate: remove remnant from rsync-based migration
rsync-based migration was replaced by import/export in commit
da72898cc6 ("migrate: only use import/export")

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20250916123257.107491-2-f.schauer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-14 00:16:36 +01:00
5b949979f7 disks: add a guard for a possibly nonexistent field
When running

    pveceph osd create <device>

one would get one or two warnings:

    Use of uninitialized value in pattern match (m//) at /usr/share/perl5/PVE/Diskmanage.pm line 317.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20251024121309.1253604-1-m.sandoval@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-14 00:16:36 +01:00
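The guard itself is a one-line defined() check, sketched here with a hypothetical field name:
```
my $info = { holders => undef }; # hypothetical structure and field name
# Guard the pattern match so an undefined value does not trigger
# "Use of uninitialized value in pattern match" warnings.
if (defined($info->{holders}) && $info->{holders} =~ m/^dm-\d+$/) {
    print "device is in use by a device-mapper target\n";
}
```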
c9a2ce281b common: add pve-vm-image-format standard option for VM image formats
The image formats defined for the pve-vm-image-format standard option
are the formats that are allowed on Proxmox VE storages for VM images.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20251113144131.560130-4-f.ebner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-14 00:16:36 +01:00
beacf9735d bump version to 9.0.15
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-11-13 07:39:09 +01:00
aa8cd93ca4 status: rrddata: use fixed pve-storage-9.0 path
Because we now always create it should it not exist, and old data from
the old files is used.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Link: https://lore.proxmox.com/20251103220024.2488005-8-a.lauterer@proxmox.com
2025-11-13 07:37:18 +01:00
0d392b295c allow .tar container templates
This is needed for OCI container images bundled as tar files, as
generated by `docker save`. OCI images do not need additional
compression, since the content is usually compressed already.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Link: https://lore.proxmox.com/20251008171028.196998-13-f.schauer@proxmox.com
2025-11-12 20:08:51 +01:00
a85ebe36af bump version to 9.0.14
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-11-05 09:13:20 +01:00
633392285c api: list: return 'formats' info in a better structured way
Returning the formats in the following way:
```
"format": [
    {
        "format1" => 1,
        "format2" => 1,
        ...
    },
    "defaultFormat"
]
```

is not a very good return format, since it abuses an array as a
tuple, and unnecessarily encodes a list of formats as an object.
Also, we can't describe it properly in JSONSchema in perl, nor is our
perl->rust generator able to handle it.

Instead, return it like this:
```
"formats": {
    "default": "defaultFormat",
    "supported": ["format1", "format2", ...]
}
```

which makes it much more sensible for an api return schema, and it's
possible to annotate it in the JSONSchema.

For compatibility reasons, keep the old property around, and add a
comment to remove it with 10.0.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2025-11-05 09:07:43 +01:00
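A sketch that builds both variants side by side (illustrative; format names are placeholders):
```
use strict;
use warnings;
use JSON;

my @supported = ('raw', 'qcow2', 'vmdk');
my $default   = 'raw';

# old style: an array (ab)used as a tuple of (hash-of-formats, default)
my $old = [ { map { $_ => 1 } @supported }, $default ];

# new style: a plain object that can be described in JSONSchema
my $new = { default => $default, supported => \@supported };

print to_json({ format => $old, formats => $new }, { pretty => 1, canonical => 1 });
```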
ede776abef api: try to add more return schema information
This is no problem for 'select_existing', but we cannot actually describe
'format' with our JSONSchema, since it uses an array as a form of tuple,
and even with oneOf this cannot currently be described.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2025-11-05 09:07:35 +01:00
8eabcc7011 lvm plugin: snapshot-as-volume-chain: use locking for snapshot operations
As reported by a user in the enterprise support in a ticket handled by
Friedrich, concurrent snapshot operations could lead to metadata
corruption of the volume group with unlucky timing. Add the missing
locking for operations modifying the metadata, i.e. allocation, rename
and removal. Since volume_snapshot() and volume_snapshot_rollback()
only do those, use a wrapper for the whole function. Since
volume_snapshot_delete() can do longer-running commit or rebase
operations, only lock the necessary sections there.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251103162330.112603-5-f.ebner@proxmox.com
2025-11-04 19:57:52 +01:00
5988ac0250 lvm plugin: volume import: lock allocation and removal sections
With a shared LVM storage, parallel imports, which might be done in
the context of remote migration, could lead to metadata corruption
with unlucky timing, because of missing locking. Add locking around
allocation and removal, which are the sections that modify LVM
metadata. Note that other plugins suffer from missing locking here as
well, but only regarding naming conflicts. Adding locking around the
full call to volume_import() would mean locking for much too long.
Other plugins could follow the approach here, or there could be a
reservation approach like proposed in [0].

[0]: https://lore.proxmox.com/pve-devel/20240403150712.262773-1-h.duerr@proxmox.com/

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251103162330.112603-4-f.ebner@proxmox.com
2025-11-04 19:57:52 +01:00
0864fda2fd lvm plugin: fix error handling in volume_snapshot_rollback()
In case a cleanup worker is spawned, the error from the eval block
for allocation was lost. Save it in a variable for checking later.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251103162330.112603-3-f.ebner@proxmox.com
2025-11-04 19:57:52 +01:00
f0f9054926 lvm plugin: snapshot delete: propagate previously captured error
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20251103162330.112603-2-f.ebner@proxmox.com
2025-11-04 19:57:52 +01:00
d5995ffbf7 fix #6941: lvmplugin: fix volume activation of raw disk before secure delete
The volume activation before secure delete was lost in the qcow2 snapshot
implementation in commit eda88c94ed.

This re-adds the activation just before the delete, to make sure we do not
write zeros to a non-existing /dev/... path (and thus into memory instead
of to the device).

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Link: https://lore.proxmox.com/mailman.251.1761222222.362.pve-devel@lists.proxmox.com
[FE: Remove extra space before colons in commit title
     Slightly improve code comment]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-27 15:28:41 +01:00
728d8f3992 lvmplugin: use blkdiscard when supported instead of cstream to saferemove drive
The current cstream implementation is pretty slow, even without throttling.

Use blkdiscard --zeroout instead when the storage supports it, which is a
few orders of magnitude faster.

Another benefit is that blkdiscard skips already zeroed blocks, so for empty
temp images like snapshots it is pretty fast.

blkdiscard doesn't have throttling like cstream, but we can tune the step
size of the zeroes pushed to the storage.
A 32MB step size is used by default, like in oVirt, where it seems to be the
best balance between speed and load.
79f1d79058

It can be reduced with the "saferemove_stepsize" option.

The step size is also auto-reduced to the sysfs write_zeroes_max_bytes value,
which is the maximum zeroing batch supported by the storage.
test with a 100G volume (empty):

time /usr/bin/cstream -i /dev/zero -o /dev/test/vm-100-disk-0.qcow2 -T 10 -v 1 -b 1048576

13561233408 B 12.6 GB 10.00 s 1356062979 B/s 1.26 GB/s
26021462016 B 24.2 GB 20.00 s 1301029969 B/s 1.21 GB/s
38585499648 B 35.9 GB 30.00 s 1286135343 B/s 1.20 GB/s
50998542336 B 47.5 GB 40.00 s 1274925312 B/s 1.19 GB/s
63702765568 B 59.3 GB 50.00 s 1274009877 B/s 1.19 GB/s
76721885184 B 71.5 GB 60.00 s 1278640698 B/s 1.19 GB/s
89126539264 B 83.0 GB 70.00 s 1273178488 B/s 1.19 GB/s
101666459648 B 94.7 GB 80.00 s 1270779024 B/s 1.18 GB/s
107390959616 B 100.0 GB 84.39 s 1272531142 B/s 1.19 GB/s
write: No space left on device

real    1m24.394s
user    0m0.171s
sys     1m24.052s

time blkdiscard --zeroout /dev/test/vm-100-disk-0.qcow2 -v
/dev/test/vm-100-disk-0.qcow2: Zero-filled 107390959616 bytes from the offset 0

real    0m3.641s
user    0m0.001s
sys     0m3.433s

test with a 100G volume with random data:

time blkdiscard --zeroout /dev/test/vm-100-disk-0.qcow2 -v

/dev/test/vm-112-disk-1: Zero-filled 4764729344 bytes from the offset 0
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 4764729344
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 9428795392
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 14260633600
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 19092471808
/dev/test/vm-112-disk-1: Zero-filled 4865392640 bytes from the offset 23924310016
/dev/test/vm-112-disk-1: Zero-filled 4596957184 bytes from the offset 28789702656
/dev/test/vm-112-disk-1: Zero-filled 4731174912 bytes from the offset 33386659840
/dev/test/vm-112-disk-1: Zero-filled 4294967296 bytes from the offset 38117834752
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 42412802048
/dev/test/vm-112-disk-1: Zero-filled 4697620480 bytes from the offset 47076868096
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 51774488576
/dev/test/vm-112-disk-1: Zero-filled 4261412864 bytes from the offset 56438554624
/dev/test/vm-112-disk-1: Zero-filled 4362076160 bytes from the offset 60699967488
/dev/test/vm-112-disk-1: Zero-filled 4127195136 bytes from the offset 65062043648
/dev/test/vm-112-disk-1: Zero-filled 4328521728 bytes from the offset 69189238784
/dev/test/vm-112-disk-1: Zero-filled 4731174912 bytes from the offset 73517760512
/dev/test/vm-112-disk-1: Zero-filled 4026531840 bytes from the offset 78248935424
/dev/test/vm-112-disk-1: Zero-filled 4194304000 bytes from the offset 82275467264
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 86469771264
/dev/test/vm-112-disk-1: Zero-filled 4395630592 bytes from the offset 91133837312
/dev/test/vm-112-disk-1: Zero-filled 3623878656 bytes from the offset 95529467904
/dev/test/vm-112-disk-1: Zero-filled 4462739456 bytes from the offset 99153346560
/dev/test/vm-112-disk-1: Zero-filled 3758096384 bytes from the offset 103616086016

real    0m23.969s
user    0m0.030s
sys     0m0.144s

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Link: https://lore.proxmox.com/mailman.253.1761222252.362.pve-devel@lists.proxmox.com
[FE: Minor language improvements
     Use more common style for importing with qw()
     Don't specify full path to blkdiscard binary for run_command()]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-27 15:23:17 +01:00
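A minimal sketch of the zero-out call described above (illustrative; the real plugin code, option handling and sysfs clamping differ in detail, and the device path is hypothetical):
```
use strict;
use warnings;

my $device = '/dev/test/vm-100-disk-0'; # hypothetical LV device path
my $step   = 32 * 1024 * 1024;          # 32 MiB default step size

# clamp the step size to what the device supports, if the sysfs value is known
my $max_bytes = 16 * 1024 * 1024;       # e.g. read from write_zeroes_max_bytes
$step = $max_bytes if $max_bytes > 0 && $step > $max_bytes;

system('blkdiscard', '--zeroout', '--step', $step, '-v', $device) == 0
    or die "blkdiscard failed: $?\n";
```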
10e47fc8bb pvesm: print units in 'status' subcommand table headings
The units used are not documented in the man page.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250829124738.412902-1-m.sandoval@proxmox.com
[FE: improve commit title]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-22 14:39:39 +02:00
68c3142605 api schema: storage: config: fix typos in return schema description
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-08 15:09:16 +02:00
c10e73d93b plugin: pod: fix variable name for volume_qemu_snapshot_method() example code
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-10-08 14:27:25 +02:00