This way, the property gets added when parsing the storage configuration,
and PBS storages correctly show up as shared storages in API results.
AFAICT the only affected PBS operation is free_image via vdisk_free, which will
now be protected by a cluster-wide lock, and that shouldn't hurt.
Another issue this fixes, and the reason this patch exists, was reported
in the forum [0]: the free space of PBS storages was counted once for each
node that had access to the storage.
[0]: https://forum.proxmox.com/threads/pve-6-3-the-storage-size-was-displayed-incorrectly.83136/
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
LVM RAID logical volumes (including mirrors) can be valid disk images, so they
should show up in storage content listings (for example pvesm list).
Explicitly including known-good LV types is safer than excluding bad ones,
especially with possible additional types being added in the future.
Co-developed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
check_connection is done by querying the exports of the NFS server
in question. With NFS v4 those exports aren't listed anymore, since NFS
v4 employs a pseudo-filesystem starting from root (/).
rpcinfo allows querying for the existence of an NFS v4 service.
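A rough sketch of such a probe (transport and timeout values are illustrative,
not necessarily what the plugin uses):

    use PVE::Tools qw(run_command);

    my $server = 'nfs.example.com';  # hypothetical server name
    # ask the portmapper whether an NFS service with version 4 is registered
    my $cmd = ['rpcinfo', '-T', 'tcp', $server, 'nfs', '4'];
    eval { run_command($cmd, timeout => 2, outfunc => sub {}, errfunc => sub {}) };
    my $nfs_v4_available = $@ ? 0 : 1;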
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
as described in the ZFS bug https://github.com/openzfs/zfs/issues/10931
the kernel keeps cached data from mmaps around after a rollback, leaving
invalid data in files that were supposedly rolled back.
To work around this (until a real fix comes along), we unmount the subvol,
which invalidates the kernel cache.
The dataset gets mounted again on the next 'activate_volume'.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
reuse the one from DirPlugin by directing the call to it, but with
the actual $class. This should stay stable, as we provide an ABI and
try to always use $class->helpers.
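A minimal sketch of the pattern (shown here for one of the notes methods; the
same shape applies to the others):

    use PVE::Storage::DirPlugin;

    sub get_volume_notes {
        my $class = shift;
        # dispatch to DirPlugin's implementation, but keep the actual $class,
        # so any $class->helper calls resolve to the real plugin
        return PVE::Storage::DirPlugin::get_volume_notes($class, @_);
    }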
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Previously, we did not call the plugin's update_volume_notes at all
in the case where a user deleted the contents of the textarea, which
results in passing a falsy value ('').
Also adapt the currently sole implementation to delete the notes field
in the undef or '' value case. This can be done safely, as we default
to returning an empty string if no notes file exists.
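A rough sketch of the adapted behavior (path handling simplified, names
illustrative):

    use POSIX qw(ENOENT);
    use PVE::Tools;

    sub update_volume_notes {
        my ($class, $scfg, $storeid, $volname, $notes) = @_;

        my $path = $class->filesystem_path($scfg, $volname) . '.notes';

        if (defined($notes) && $notes ne '') {
            PVE::Tools::file_set_contents($path, $notes);
        } else {
            # undef or '' means: drop the notes; a missing file is fine
            unlink($path) or $! == ENOENT or die "could not delete notes - $!\n";
        }
        return;
    }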
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
mostly re-ordering to improve statement grouping and to avoid the
need for an intermediate variable
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
improves UX of on_update and on_add hooks *a lot*.
This is a bit more expensive than the TCP ping, or even just an
unauthenticated ping, but not as bad as a full datastore status - as
this only reads the datastore config file (which is normally in page
cache anyway).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it is flexible enough to easily do so, and should do well until we
actually have cheap native bindings (e.g., through Wolfgang's Rust
perlmod magic).
Make it a private helper; we do *not* want to expose it directly for
now.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It could be debated that this has some security implications and that
deletion would be safer, but key deletion is a pretty hairy thing.
It should be documented, and people should just use delete instead of
autogen if they want to "destroy" a key.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and add the appropriate API call to set and get the comment
we need to bump APIVER for this and can also bump APIAGE, since
we only use it for this new call, which can work with the default
implementation
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
useful to have an alternative to the old maxfiles = 0. There has to
be a way for vzdump to distinguish between:
1. use the /etc/vzdump.conf default (when no options are configured for the storage)
2. use no limit (when keep-all=1)
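For illustration, a storage definition in /etc/pve/storage.cfg could then
express "no limit" explicitly (storage name and path are made up):

    dir: backup-nolimit
        path /mnt/backup
        content backup
        prune-backups keep-all=1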
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
When using the path to request properties, and no ZFS file system is mounted
at that path, ZFS will fall back to the parent filesystem:
> # zfs unmount myzpool/subvol-172-disk-0
> # zfs get mounted /myzpool/subvol-172-disk-0
> NAME                       PROPERTY  VALUE  SOURCE
> myzpool                    mounted   yes    -
> # zfs get mounted myzpool/subvol-172-disk-0
> NAME                       PROPERTY  VALUE  SOURCE
> myzpool/subvol-172-disk-0  mounted   no     -
Thus, we cannot use the path and need to use the dataset directly.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
We allow snapshot names that match pve-configid, but during 'qm destroy' we did
not remove all snapshots matching pve-configid so far. For example, the name
x-y was allowed, but the resulting snap_vm-105-disk-0_x-y was not removed.
Reported-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
This is basically necessary for the GUI's prune widget, because we want to
pass along all options equal to zero when all the number fields are cleared.
And it's more similar to how it's done in PBS now.
Bumped the APIAGE and APIVER, in case some external plugin needs to adapt to
the now less restrictive schema for 'prune-backups'.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
That was a lot of code and hash map touching just to avoid an extra stat
(whose result was probably in the page cache anyway) for the case that a
backup has a comment.
That case is rather unlikely - comments are normally only added for the
occasional explicit backup (e.g., before a major upgrade, before a
configuration change in that guest, ...) - and certainly not worth the
relatively complicated effort of making that sub harder to read and
maintain.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In order to take a snapshot of a container volume, which can be mounted
read-only with RBD, the volume needs to be frozen (fsfreeze (8)) before taking
the snapshot.
This commit adds helpers to determine if the FIFREEZE ioctl needs to be called
for the volume.
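A rough sketch of the helper shape (sub and file names are illustrative, the
actual naming may differ):

    # Plugin.pm - base implementation: no freeze required
    sub volume_snapshot_needs_fsfreeze {
        return 0;
    }

    # RBDPlugin.pm - RBD-backed container volumes must be frozen via the
    # FIFREEZE ioctl before the snapshot is taken
    sub volume_snapshot_needs_fsfreeze {
        return 1;
    }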
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
quiet takes care of both the error and success case.
Without this, there are lines like:
myzpool/vm-4352-disk-0@__replicate_4352-7_1601538554__ name myzpool/vm-4352-disk-0@__replicate_4352-7_1601538554__ -
in the log if the dataset exists, and this information is
already present in more readable form.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
we already have the ZFS pool plugin as precedent for using 10s, and for
networks with remote off-site storage one can see 200 - 300 ms of RTT
latency, which means that with a protocol needing multiple rounds of
communication one can easily get over 2s (seven round trips at 300 ms
already exceed 2s) without the network being broken.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The LIO backend for ZFS over iSCSI fetches the json-config periodically from
the target.
This patch reduces the stored config values to those which are actually used
and additionally untaints the values read from the remote host's config file.
Since the LUN index is used in calls to targetcli on the remote host (via
run_command), untainting prevents the call from crashing when run with '-T'.
Tested by creating a ZFS over iSCSI backed VM, starting it, adding disks,
resizing disks, removing disks, creating snapshots, and rolling back to a snapshot.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
ZFS over iSCSI fetches information about the disk images via ssh, thus
the obtained data is tainted (perlsec (1)).
Since pvedaemon runs with '-T' enabled, trying to start a VM via GUI/API
failed, while it still worked via `qm` or `pvesh`.
The issue surfaced after commit cb9db10c1a9855cf40ff13e81f9dd97d6a9b2698 in
pve-common ('run_command: improve performance for logging and long lines'),
and results from concatenating the original (tainted) buffer to a variable,
instead of a captured subgroup.
Untainting the value in ZFSPlugin should not cause any regressions, since the
other 3 target providers already have a match on '\d+' for retrieving the
LUN number.
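A minimal sketch of that untainting pattern (variable names illustrative):

    # $line holds a value read back from the remote host via ssh, so perl
    # considers it tainted; only a regex capture group yields untainted data
    my ($lun) = $line =~ /^(\d+)$/
        or die "unable to parse lun number from '$line'\n";
    # $lun is now untainted and safe to pass to run_command() under -T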
reported via pve-user [0].
reproduced and tested by setting up a LIO-target (on top of a virtual PVE),
adding it as storage and trying to start a guest (with a disk on the
ZFS over iSCSI storage) with `perl -T /usr/sbin/qm start $vmid`
[0] https://lists.proxmox.com/pipermail/pve-user/2020-October/172055.html
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
and other stat failure modes.
this method returns undef if 'qemu-img info ...' fails to return
information, so callers must handle this already.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
commit 815df2dd08 introduced a small issue
when activating linked clone volumes - the volname passed contains
basevol/subvol, which needs to be translated to subvol.
using the path method should be a robust way to get the actual path for
activation.
Found and tested by building the package as root (otherwise the ZFS
regression tests are skipped).
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Makes it possible to clone and start a container whose
ZFS subvols are not yet mounted for some reason. If a
subvol cannot be mounted, there's a better error now:
zfs error: cannot mount '/myzpool/subvol-103-disk-0': directory is not empty
Previously, cloning would quietly do an "empty" clone,
and startup would fail with:
mount_autodev: 1074 Permission denied - Failed to create "/dev" directory
lxc_setup: 3238 Failed to mount "/dev"
do_start: 1224 Failed to setup container "103"
__sync_wait: 41 An error occurred in another process (expected sequence number 5)
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
we don't even know whether $snap exists at all, so the old variant could
be rather misleading..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>