Commit Graph

1642 Commits

cfe46e2d4a rbd: fix #3969: add rbd dev paths with cluster info
By adding our own customized rbd udev rules and ceph-rbdnamer we can
create device paths that include the cluster fsid and avoid any
ambiguity if the same pool and namespace combination is used in
different clusters we connect to.

In addition to the '/dev/rbd/<pool>/...' paths, we now have
'/dev/rbd-pve/<cluster fsid>/<pool>/...' paths.

The other half of the patch makes use of the new device paths in the RBD
plugin.

The new 'get_rbd_dev_path' method returns the full device path. In case
the image was mapped before the rbd-pve udev rule was installed, it
returns the old path.

The cluster fsid is read from the 'ceph.conf' file in the case of a
hyperconverged setup. In the case of an external Ceph cluster we need to
fetch it via a rados api call.

Co-authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2022-04-27 12:57:22 +02:00
43f8112f0b storage plugins: en/decode volume notes as UTF-8
When writing the notes into the file, explicitly utf8 encode them, and
then try to utf8 decode them on read.

If the notes are not valid utf8, we assume they were iso-8859 encoded
and return them as is.

Technically this is a breaking change, since there are iso-8859
comments that would successfully decode as utf8, for example: the
byte sequence "C2 A9" would be "Â©" in iso, but would decode to "©".

From what I can tell though, this is rather unlikely to happen for
"real world" notes, because the first byte would be in the range of
C0-F7 (which are mostly language dependent characters like "Â") and
the following bytes would have to be in the range of 80-BF, which are
only special characters like "£" (or undefined).
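
A minimal sketch of the read path, assuming the Encode module is used (not
the exact plugin code):

    use Encode qw(decode);

    # Try strict UTF-8 first; if that fails, assume legacy iso-8859 content
    # and return the raw bytes unchanged.
    sub decode_notes {
        my ($raw) = @_;
        my $copy = $raw;
        my $notes = eval { decode('UTF-8', $copy, Encode::FB_CROAK) };
        return defined($notes) ? $notes : $raw;
    }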

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-04-26 15:33:13 +02:00
7a8751a2cd zfs pool: bump non-worker timeout default to 10s
With the 30s we get for sync API calls, 10s still leaves enough room
for answering and other work.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-26 15:25:40 +02:00
4cd9b85d15 fix #3803: ZFSPoolPlugin: zfs_request: increase minimum timeout in worker
Since most zfs operations can take a while (under certain conditions),
increase the minimum timeout for zfs_request in workers to 5 minutes.

We cannot increase the timeouts in synchronous API calls, since those
are hard limited to 30 seconds, but in workers we do not have such limits.

The existing default timeout does not change (60 minutes in workers,
5 seconds otherwise), but all zfs_requests with a timeout set below
5 minutes will use the increased 5 minute minimum when run in a worker.
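
Sketched as Perl pseudologic (names and exact structure are assumptions,
not the actual patch):

    my $default_worker_timeout = 60 * 60; # 60 minutes
    my $default_sync_timeout   = 5;       # 5 seconds
    my $min_worker_timeout     = 5 * 60;  # new 5 minute minimum

    sub effective_zfs_timeout {
        my ($timeout, $in_worker) = @_;

        if ($in_worker) {
            $timeout //= $default_worker_timeout;
            $timeout = $min_worker_timeout if $timeout < $min_worker_timeout;
        } else {
            $timeout //= $default_sync_timeout;
        }
        return $timeout;
    }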

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-04-26 15:19:50 +02:00
428872eb71 ceph config: minor code cleanup & comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-26 12:47:58 +02:00
34fd49a69b some code style refactoring/cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-22 14:30:01 +02:00
8c1a476be9 bump version to 7.1-2
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-04-06 13:30:21 +02:00
107208bdbf disks: zfs: code indentation/style improvements
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-06 12:56:47 +02:00
8009417d0d plugins: allow limiting the number of protected backups per guest
The ability to mark backups as protected broke the implicit assumption
in vzdump that remove=1 and the current number of backups being at the
limit (i.e. the sum of all keep options) will result in a backup being
removed.

Introduce a new storage property 'max-protected-backups' to limit the
number of protected backups per guest. Use 5 as a default value, as it
should cover most use cases, while still not having too big of a
potential overhead in many scenarios.

For external plugins that do not return the backup subtype in
list_volumes, all protected backups with the same ID will count
towards the limit.

An alternative would be to count the protected backups when pruning.
While that would avoid the need for a new property, it would break the
current semantics of protected backups being ignored for pruning. It
also would be less flexible, e.g. for PBS, it can make sense to have
both keep-all=1 and a limit for the number of protected snapshots on
the PVE side.
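
A hedged sketch of the kind of check this property enables (the property
name is from the commit; $scfg, $backups_of_guest and $vmid are assumed
surrounding variables):

    # Count protected backups of this guest and refuse to exceed the limit.
    my $limit = $scfg->{'max-protected-backups'} // 5;
    my $protected_count = grep { $_->{protected} } @$backups_of_guest;
    die "too many protected backups for guest $vmid (limit: $limit)\n"
        if $protected_count >= $limit;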

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-06 09:47:12 +02:00
89a7507c7a api: file restore: use check_volume_access to restrict content type
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
1f37ad56f9 pvesm: extract config: add content type check
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
21c7b54622 check volume access: add content type parameter
Adding such a check here avoids the need to parse at the call sites in
some cases.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
42352a4988 check volume access: allow for images/rootdir if user has VM.Config.Disk
Listing guest images should not require Datastore.Allocate in this
case. In preparation for adding disk import to the GUI.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
3e1a618e34 check volume access: always allow with Datastore.Allocate privilege
Such users are supposed to be administrators of the storage, but
previously, access to backups was not allowed when not also having
VM.Backup.
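
Together with the VM.Config.Disk change above, the resulting access rules
sketch roughly to the following (illustrative only; the real check covers
more content types):

    sub check_volume_access {
        my ($rpcenv, $user, $storeid, $vtype, $ownervm) = @_;

        # Storage administrators may access everything on the storage.
        return if $rpcenv->check($user, "/storage/$storeid", ['Datastore.Allocate'], 1);

        if ($vtype eq 'images' || $vtype eq 'rootdir') {
            # Guest disks are also accessible with VM.Config.Disk on the owning guest.
            return if defined($ownervm)
                && $rpcenv->check($user, "/vms/$ownervm", ['VM.Config.Disk'], 1);
        }
        die "permission check failed\n";
    }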

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
f303dec6e4 pvesm: extract config: check for VM.Backup privilege
In preparation to have check_volume_access() always allow access for
users with Datastore.Allocate privilege. As to not automatically give
all such users permission to extract the config too.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-01 09:24:16 +02:00
c66e0b8a0a list volumes: also return backup type for backups
Otherwise, there is no storage-agnostic way to filter by backup group.

Call it subtype, to not confuse it with content type, and to be able
to re-use it for content types other than backup, if the need ever
arises.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-03-16 17:42:39 +01:00
5ef50a8262 cifs: check connection: bubble up NT_STATUS_LOGON_FAILURE
in the same manner as NT_STATUS_ACCESS_DENIED. It can be assumed to be
a configuration error, so avoid showing the generic "storage <storeid>
is not online". Reported in the community forum:
https://forum.proxmox.com/threads/storage-is-not-online-cifs.99201/post-428858

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-03-16 17:36:38 +01:00
38431808b4 activate storage: improve error when check_connection dies
by making sure the storage ID is part of the error. This can happen
for (at least) CIFS, and GlusterFS with local server.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-03-16 17:36:38 +01:00
18bf2e59da storage/plugin: factoring out regex for backup extension re
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2022-03-16 17:13:59 +01:00
cd461a5012 storage: rename REs for iso and vztmpl extensions
these changes make it clearer how many capture groups each RE
includes.

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2022-03-16 17:13:59 +01:00
726c432964 zfs: volume import: use correct format for renaming
Previously, the transport format (which currently is always 'zfs') was
passed in, resulting in subvol disks not being renamed correctly.

Fixes: a97d3ee ("Introduce allow_rename parameter for pvesm import and storage_migrate")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-03-03 14:33:33 +01:00
eba7935f83 file_size_info: cast 'size' and 'used' to integer
`qemu-img info --output=json` returns the size and used values as integers in
the JSON format, but the regex match converts them to strings.
As we know they only contain digits, we can simply cast them back to integers
after the regex.

The API requires them to be integers.
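
The fix itself is a plain cast after the regex capture, roughly:

    # Regex captures are strings; force them back to integers for the API.
    ($size, $used) = (int($size), int($used));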

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2022-02-21 16:07:30 +01:00
7f30857519 fix #3894: cast 'size' and 'used' to integer
Perl's automatic conversion can lead to integers being converted to
strings, for example by matching them in a regex.

To make sure we always return an integer in the API call, add an
explicit cast to integer.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2022-02-21 16:07:27 +01:00
3363fcf65e add volume_import/export_start helpers
exposing the two halves of a storage migration for usage across
cluster boundaries.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-09 18:52:59 +01:00
7a158d0bc9 storage_migrate: pull out import/export_prepare
for re-use with remote migration, where import and export happen on
different clusters connected via a websocket instead of SSH tunnel.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-09 18:52:59 +01:00
686b07376f storage_migrate_snapshot: skip for btrfs without snapshots
this allows migrating from btrfs to other raw+size accepting storages,
provided no snapshots exist.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-09 18:52:59 +01:00
db3d1ef163 bump version to 7.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-04 18:08:09 +01:00
c915afca7e rbd: followup code style cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-04 18:04:31 +01:00
ef2afce74a fix #1816: rbd: add support for erasure coded ec pools
The first step is to allocate rbd images correctly.

The metadata objects still need to be stored in a replicated pool, but
by providing the --data-pool parameter on image creation, we can place
the data objects on the erasure coded (EC) pool.
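
A sketch of the allocation call with the extra parameter (command assembled
in Perl; the pool and image variables are placeholders):

    # Metadata stays in the replicated pool, data objects go to the EC pool.
    my $cmd = [
        'rbd', 'create', '--size', "${size_mb}M",
        '--pool', $replicated_pool, '--data-pool', $ec_data_pool,
        $image_name,
    ];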

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2022-02-04 17:51:48 +01:00
05b07a67f5 storage_migrate: pull out snapshot decision
into new top-level helper for re-use with remote migration.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-04 17:30:42 +01:00
f78f26ae60 volname_for_storage: parse volname before calling
to allow reusing this with remote migration, where parsing of the source
volid has to happen on the source node, but this call has to happen on
the target node.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-04 17:30:42 +01:00
e7bc1f03b7 CephConfig: ensure newline in $secret and $cephfs_secret parameter
Ensure that the user-provided $secret ends in a newline. Otherwise we
will get Input/output errors from rados_connect.

For consistency and possible future proofing, also add a newline to
CephFS secrets.
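
The fix boils down to something like:

    # rados_connect yields Input/output errors if the keyring data does not
    # end in a newline, so append one if it is missing.
    $secret .= "\n" if $secret !~ /\n$/;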

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
2022-01-25 10:57:58 +01:00
73dfe360dd zfs: use -r parameter when listing snapshots
Some versions of ZFS do not automatically display the child snapshots
when '-t snapshot' is used, but require '-r' to be present
additionally[1]. And in general, it's cleaner to specify the flag
explicitly.

Because of that, commit ac5c1af led to a regression[0] in the context
of ZFS over iSCSI with zfs_get_sorted_snapshot_list. Fix it by adding
the -r flag again.

The volume_snapshot_info function is currently only used in the
context of replication and that requires a local ZFS pool, but it
would be affected by the same issue if it is ever used in the context
of ZFS over iSCSI, so also add -r there.

[0]: https://forum.proxmox.com/threads/102683/
[1]: https://forum.proxmox.com/threads/102683/post-442577
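
A sketch of the affected listing call with the flag added (the exact option
set is illustrative):

    # -r is needed so child snapshots are listed on all ZFS versions,
    # including when going through ZFS over iSCSI.
    my $cmd = ['zfs', 'list', '-H', '-r', '-t', 'snapshot', '-o', 'name,creation', $dataset];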

Fixes: 8c20d8a ("plugin: add volume_snapshot_info function")
Fixes: ac5c1af ("zfspool: add zfs_get_sorted_snapshot_list helper")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-01-10 13:33:31 +01:00
b4616e5c4a lvm thin: add missing newline to error message
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-18 11:18:03 +01:00
904f06a82e pbs: update attribute: cleaner error message if not supported
Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-12 16:53:27 +01:00
c7aa1e28dd bump version to 7.0-15
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-10 14:25:26 +01:00
6fbffac0a3 lvm thin: don't assume that a thin pool and its volumes are active
There are cases where autoactivation can fail, as reported in the
community forum [0]. And it could also be that a volume was
deactivated by something outside of our control.

It doesn't seem strictly necessary to activate the thin pool itself
(creating/removing/activating LVs within the pool still works if it's
not active), but it does not report usage information as long as
neither the pool nor any of its LVs are active. Activate the pool for
that and to be able to use the flag in status(); it should also serve
as a good indicator that there's a problem with the pool if it can't
be activated.

Before activating, check the (cached) lv_state from lvm_list_volumes.
It's necessary to update the cache in activate_storage, because the
flag is re-used in status(). Also update it for other (de)activations
to be more future-proof.

[0]: https://forum.proxmox.com/threads/local-lvm-not-available-after-kernel-update-on-pve-7.97406
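
Sketched in Perl (the cache structure and the exact state value are
assumptions, not the actual patch):

    use PVE::Tools qw(run_command);

    # Activate the thin pool only if the cached state from lvm_list_volumes
    # does not already report it as active.
    my $lv_state = $lvs->{$vg}->{$thinpool}->{lv_state} // '';
    if ($lv_state ne 'active') {
        run_command(['lvchange', '-ay', "$vg/$thinpool"]);
    }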

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 14:23:54 +01:00
2668a86719 lvm thin: status: code cleanup
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 14:18:28 +01:00
cde43c4880 api: disks: delete: add flag for cleaning up storage config
Update node restrictions to reflect that the storage is not available
anymore on the particular node. If the storage was only configured for
that node, remove it altogether.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>

slight style fixup

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-10 12:35:25 +01:00
f81908eb58 api: disks: delete: add flag for wiping disks
For ZFS and directory storages, clean up the whole disk when the
layout is as usual to avoid left-overs.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
26082b7daf diskmanage: add helper for udev workaround
to avoid duplication. Current callers pass along at least one device,
but future callers might pass an empty list; do nothing in that case,
rather than triggering everything.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
a83d8eb178 api: disks: add DELETE endpoint for directory, lvm, lvmthin, zfs
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
a510449e2b api: list thin pools: add volume group to properties
So that DELETE can be called using only information from GET.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:35:25 +01:00
b02db5fcea LVM: add lvm_destroy_volume_group
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-10 12:14:32 +01:00
40a079bd87 bump version to 7.0-14
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-09 17:02:48 +01:00
95dfa44ca1 add disk rename feature
Functionality has been added for the following storage types:

* directory ones, based on the default implementation:
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph

A new feature `rename` has been introduced to mark which storage
plugin supports the feature.

Version API and AGE have been bumped.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>

the intention of this feature is to support the following use-cases:
- reassign a volume from one owning guest to another (which usually
  entails a rename, since the owning vmid is encoded in the volume name)
- rename a volume (e.g., to use a more meaningful name instead of the
  auto-assigned ...-disk-123)

only the former is implemented at the caller side in
qemu-server/pve-container for now, but since the lower-level feature is
basically the same for both, we can take advantage of the storage plugin
API bump now to get the building block for this future feature in place
already.

adapted ApiChangelog change to fix conflicts and added more detail above

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-09 17:02:29 +01:00
93fbc01963 api changelog: add volume attributes change
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-09 16:54:33 +01:00
bfba962647 pbs: integrate support for protected
free_image doesn't need to check for protection, because that will
happen on the server.

Getting/updating notes has also been refactored to re-use the code
for the PBS api calls.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>

add missing b-d and depend on libposix-strptime-perl

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-09 16:54:13 +01:00
ecfe25058b prune: mark renamed and protected backups differently
While it makes no difference for pruning itself, protected backups are
additionally protected against removal. Avoid the potential to confuse
the two. Also update the description for the API return value and add
an enum constraint.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00
56897a9203 fix #3307: make it possible to set protection for backups
A protected backup is not removed by free_image and ignored when
pruning.

The protection_file_path function is introduced in Storage.pm, so that
it can also be used by vzdump itself and in archive_remove.
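
A minimal sketch of such a helper (the actual path layout may differ):

    # The protection marker lives next to the backup archive, so vzdump and
    # archive_remove can check it without storage-specific logic.
    sub protection_file_path {
        my ($archive_path) = @_;
        return "${archive_path}.protected";
    }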

For pruning, renamed backups already behaved similarly to how protected
backups will, but there are a few reasons to not just use that for
implementing the new feature:
1. It wouldn't protect against removal.
2. It would make it necessary to rename notes and log files too.
3. It wouldn't naturally extend to other volumes if that's needed.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 14:56:15 +01:00