Commit Graph

1685 Commits

f553e6e639 bump version to 7.2-9
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-20 09:20:14 +02:00
bd485fd4aa disks: allow add_storage for already configured local storage
One of the smaller annoyances, especially for less experienced users, is
the fact that when creating a local storage (ZFS, LVM (thin), dir) in a
cluster, one can only leave the "Add Storage" option enabled the first
time.

On any following node, this option needed to be disabled and the new
node manually added to the list of nodes for that storage.

This patch changes the behavior. If a storage of the same name already
exists, it verifies that the necessary parameters match the existing
one. Then, if the 'nodes' parameter is set, it adds the current node and
updates the storage config.
If there is no nodes list, nothing else needs to be done, and the GUI
will stop showing the question mark for the local storage that was
configured, but did not exist on that node until now.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2022-09-13 10:05:20 +02:00
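
A minimal sketch of the described flow; the function name, the
$verify_params callback, and the config layout are assumptions for
illustration, not the actual patch code:

    # hedged sketch: helper and field names are assumptions
    sub add_node_to_existing_storage {
        my ($scfg, $node, $verify_params) = @_;

        # verify that the necessary parameters match the existing storage
        $verify_params->($scfg);

        # if a node list is set, add the current node; without one the
        # storage already applies to all nodes and nothing needs to change
        if (defined(my $nodes = $scfg->{nodes})) {
            $nodes->{$node} = 1;
            return 1; # caller writes out the updated storage config
        }
        return 0;
    }
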
55553bd432 disks: die if storage name is already in use
If a storage of that type and name already exists (LVM, zpool, ...) but
we do not have a Proxmox VE Storage config for it, it is possible that
the creation will fail midway due to checks done by the underlying
storage layer itself. This in turn can lead to disks that are already
partitioned. Users would need to clean this up themselves.

By adding checks early on, not only against the PVE storage config but
also against the actual storage type itself, we can die early enough,
before we touch any disk.

For ZFS, the logic to gather pool data is moved into its own function to
be called from the index API endpoint and the check in the create
endpoint.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2022-09-13 10:05:16 +02:00
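
A hedged sketch of such an early check; the helper name and the shape of
$existing are illustrative assumptions:

    # die before touching any disk if the name is taken at the storage layer
    sub assert_storage_name_unused {
        my ($type, $name, $existing) = @_;
        # $existing: set of names gathered from the storage layer itself,
        # e.g. from listing zpools or LVM volume groups
        die "a $type with the name '$name' already exists on this node\n"
            if $existing->{$name};
    }
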
4de6002558 diskmanage: add mounted_paths
returns a list of mounted paths with their backing devices

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2022-09-13 10:04:48 +02:00
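
A sketch of what such a helper could look like, assuming it reads
/proc/mounts; the real implementation may gather the data differently:

    # map mounted path => backing device, for device-backed mounts only
    sub mounted_paths {
        my $res = {};
        open(my $fh, '<', '/proc/mounts')
            or die "unable to open /proc/mounts - $!\n";
        while (defined(my $line = <$fh>)) {
            my ($dev, $path) = split(/\s+/, $line);
            $res->{$path} = $dev if $dev =~ m|^/dev/|;
        }
        close($fh);
        return $res;
    }
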
6a44cc417d RBD plugin: librados connect: increase timeout when in worker
The default timeout in PVE/RADOS.pm is 5 seconds, but this is not
always enough for external clusters under load. Workers can and should
take their time to not fail here too quickly.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2022-09-13 09:57:19 +02:00
2be327abf6 RBD plugin: librados connect: pass along options
In preparation for increasing the timeout for workers. Neither of the
existing callers of librados_connect() currently uses the parameter.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2022-09-13 09:57:19 +02:00
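
Taken together, the two commits above amount to something like the
following sketch; the option name, the worker flag, and the concrete
60-second value are assumptions, not taken from the actual patches:

    # pass options along and raise the 5s default when running in a worker
    sub librados_connect_opts {
        my ($options, $in_worker) = @_;
        $options //= {};
        # workers can and should take their time to not fail too quickly
        $options->{timeout} //= 60 if $in_worker; # value assumed
        return $options; # handed to the actual RADOS connection setup
    }
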
e8e477112f RBD plugin: path: conditionalize get_rbd_dev_path() call
The return value of get_rbd_dev_path() is only used when $scfg->{krbd}
evaluates to true and the function shouldn't have any side effects
that are needed later, so the call can be avoided otherwise.

This also saves a RADOS connection and command for configurations with
external clusters and krbd disabled.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2022-09-13 09:55:56 +02:00
c560cb58a5 fix #4189: pbs: bump list_volumes timeout to 2mins
When this was switched from calling the external binary to using the
Perl API client, the timeout got reduced to 7 seconds, which is
definitely insufficient for larger stores.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-08-17 13:12:49 +02:00
78e3ebb040 bump version to 7.2-8
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-08-16 13:56:56 +02:00
de0f92442a pbs: die if master key is missing
while the resulting backups are encrypted, they would not be restorable
using (only) the master key if the original PVE system is lost.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-08-16 13:55:43 +02:00
de635c2668 pbs: warn about missing, but configured master key
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-08-16 13:52:24 +02:00
2bc4cfb866 pbs: detect mismatch of encryption settings and key
if the key file doesn't exist (anymore), but the storage.cfg references
one, die on commands that should use encryption instead of falling back
to plain-text operations.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2022-08-16 13:51:38 +02:00
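
A minimal sketch of such a consistency check, with assumed option and
path names:

    # die instead of silently falling back to plain-text operations
    sub assert_encryption_key_usable {
        my ($scfg, $keyfile) = @_;
        if ($scfg->{'encryption-key'} && !-f $keyfile) {
            die "encryption configured, but key file '$keyfile' is missing\n";
        }
    }
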
0eb1803679 bump version to 7.2-7
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-07-15 13:36:39 +02:00
ad3666d716 pbs: fix namespace handling in list_volumes
Before af07f67 ("pbs: use vmid parameter in list_snapshots") the
namespace was set via do_raw_client_command, but now it needs to be
set explicitly here.

Fixes: af07f67 ("pbs: use vmid parameter in list_snapshots")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-07-15 13:35:37 +02:00
752a0bcabb bump version to 7.2-6
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-07-14 13:47:14 +02:00
af07f67a89 pbs: use vmid parameter in list_snapshots
Particularly for operations such as pruning backups after a scheduled
backup, we do not want to list the entire store.

(pbs_api_connect is moved up unmodified)

Note that the 'snapshots' CLI command only takes a full
group, but the API does allow specifying a backup-id without
a backup-type!

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-14 13:30:16 +02:00
0d4c46bf35 BTRFSPlugin: reuse DirPlugin update/get_volume_attribute
this allows setting notes+protected for backups on btrfs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-05 11:24:58 +02:00
91d49d1d52 DirPlugin: update_volume_attribute: don't use update_volume_notes
by refactoring it into a helper and using that. With this, we can omit
'update_volume_notes' in subclasses.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-07-05 11:24:53 +02:00
22bef1f99a bump version to 7.2-5
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-15 10:54:55 +02:00
15051d3d6c fixup tests
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-15 10:49:22 +02:00
2949acd638 diskmanage: only set mounted property for mounted devices
instead of setting an empty string for devices that are not mounted

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-15 10:49:22 +02:00
90778f7cdd Added a LOG_EXT constant as a counterpart to NOTES_EXT
and refactored usages of .log and .notes to use them. In some parts of
the test case code, new variables were introduced to keep the line
length within the 100-column limit.

Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-15 10:49:22 +02:00
3048641bb4 Switched to using log_warn of PVE::RESTEnvironment
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-15 10:49:22 +02:00
e573445e93 Adapted unlink calls for archive files in case of ENOENT
This improves handling when two archive remove calls race; formerly,
one of them would encounter an error. Now both finish successfully.

Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-15 10:49:22 +02:00
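
An ENOENT-tolerant unlink along the lines described above could look
like this sketch (the helper name is assumed):

    use Errno qw(ENOENT);

    # succeed if the file is already gone, e.g. removed by a racing call
    sub unlink_ignore_enoent {
        my ($path) = @_;
        return if unlink($path);
        die "removing '$path' failed - $!\n" if $! != ENOENT;
    }
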
c3e2ff806f fix #3972: Remove the .notes file when a backup is deleted
When a VM or Container backup was deleted, the .notes file was not
removed, therefore, over time the dump folder would get polluted with
notes for backups that no longer existed. As backup names contain a
timestamp and the notes thus cannot be reused, I think it is safe to
delete them, just like we do with the .log file.

Furthermore, I moved the deletion of the log and notes files into a
new function called "archive_auxiliaries_remove". Additionally, the
archive_info object now returns one more field containing the name of
the notes file. The test cases have to be adapted to expect this new
value as the package will not compile otherwise.

Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-15 10:49:22 +02:00
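
Reusing the ENOENT-tolerant unlink sketched above, the new helper might
look roughly like this; the function name follows the commit message,
the field names are assumed:

    # remove the .log and .notes files belonging to a deleted archive
    sub archive_auxiliaries_remove {
        my ($dump_dir, $info) = @_;
        my @files = grep { defined } $info->{logfilename}, $info->{notesfilename};
        unlink_ignore_enoent("$dump_dir/$_") for @files;
    }
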
76c98d2353 api2: disks: add mounted boolean field
... and remove '(mounted)' from usage string

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2022-06-15 10:17:10 +02:00
a1528ffe8f rbd: get_rbd_dev_path: return /dev/rbd path only if cluster matches
The changes in cfe46e2d4a did not catch
all situations.
In the case of a guest having two disk images with the same name on a
pool with the same name, but in two different Ceph clusters, we still had
issues when starting it. The first disk got mapped as expected. The
second disk did not get mapped because we returned the old $path to
"/dev/rbd/<pool>/<image>" because it already existed from the first
disk.

In the case that only the "old" /dev/rbd path exists and we do not have
the /dev/rbd-pve/<cluster>/... path available, we now check if the
cluster fsid used by that rbd device matches the one we expect. If it
does, then we are in the situation that the image has been mapped before
the new rbd-pve udev rule was introduced. If it does not, then we have
the situation of an ambiguous mapping in /dev/rbd and return the
$pve_path.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2022-06-14 10:50:43 +02:00
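
A sketch of the described fsid comparison; the sysfs attribute path is
based on the commit description and may not match the actual code:

    # check whether an existing /dev/rbdX mapping belongs to the expected cluster
    sub rbd_dev_matches_cluster {
        my ($rbd_num, $expected_fsid) = @_;
        my $attr = "/sys/bus/rbd/devices/$rbd_num/cluster_fsid";
        open(my $fh, '<', $attr) or return 0;
        chomp(my $fsid = <$fh> // '');
        close($fh);
        return $fsid eq $expected_fsid;
    }
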
e4671f734b rbd: fix #4060 show data-pool usage when configured
When a data-pool is configured, use it for status info. The 'data-pool'
config option is used to mark the erasure coded pool, while the 'pool'
will be the replicated pool holding metadata such as the omap.

This means the 'pool' will only use a small amount of space, and people
are interested in how much they can store in the erasure coded pool anyway.

Therefore this patch reorders the assignment of the used pool name by
availability of the scfg parameters: data-pool -> pool -> fallback 'rbd'

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2022-05-20 09:43:20 +02:00
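
The described precedence boils down to something like this one-liner
($scfg being the parsed storage config):

    # data-pool -> pool -> fallback 'rbd'
    my $pool = $scfg->{'data-pool'} // $scfg->{pool} // 'rbd';
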
ab234cbb92 bump version to 7.2-4
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-05-13 14:27:28 +02:00
ae93163343 rbd: warn if no stats for a pool could be gathered
happens in case of a mistyped pool name, and the new message should be
more helpful than:
`Use of uninitialized value $free in addition (+) at \
/usr/share/perl5/PVE/Storage/RBDPlugin.pm line 64`

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2022-05-13 14:08:13 +02:00
0c317c6c0f rbd: add fallback default poolname 'rbd' to status
the fallback to a default pool name of 'rbd' was introduced in:
1440604a4b
and worked for the status command, because it used the `rados_cmd`
sub.

This fallback was lost with the changes in:
41aacc6cde

leading to confusing errors:
`Use of uninitialized value in string eq at \
/usr/share/perl5/PVE/Storage/RBDPlugin.pm line 633`
(e.g. in the journal from pvestatd)

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2022-05-13 14:08:13 +02:00
f49c6cde16 d/control: bump versioned dependency for proxmox-backup-client
for the s/backup-ns/ns/ change

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-05-13 14:06:45 +02:00
4c9c5f3782 pbs: backup-ns parameter was renamed to ns
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-05-13 14:05:38 +02:00
120b0ff20c bump version to 7.2-3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-05-12 14:57:10 +02:00
4be3f811b9 d/control: bump versioned dependency for proxmox-backup-client
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-05-12 14:57:10 +02:00
39887721cb d/control: bump versioned dependency for pve-common
for namespace support

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-05-12 14:45:22 +02:00
78eac5baf2 pbs: namespace support
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-05-12 11:56:48 +02:00
ba41f78f32 bump version to 7.2-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-28 18:20:02 +02:00
78638b3dea rbd: get path: allow fake override of fsid in scfg for some regression tests
to avoid calls into RADOS connect, which trigger 'RPCEnv not
initialized' breakage in regression tests, but wouldn't really work
otherwise either

in the future the RBD $scfg could actually support this (or a similarly
named) property, to save it on storage addition and then avoid frequent
mon commands

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-28 18:17:58 +02:00
cd58a69125 bump version to 7.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-28 17:38:22 +02:00
9594717848 rbd: unmap volume after rename
When krbd is used, subsequent removal after an operation
involving a rename could fail with
> librbd::image::PreRemoveRequest: 0x559b7506a470 \
> check_image_watchers: image has watchers - not removing
because the old mapping was still present.

For both operations with a rename, the owning guest should be offline,
but even if it weren't, unmap simply fails when the volume is in use.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-04-28 13:44:21 +02:00
cc682faafc rbd: drop get_kernel_device_path
it only redirected to get_rbd_dev_path with the same signature, and both
are private subs.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-04-27 13:03:16 +02:00
647a667e10 rbd: reduce number of stats in likely path
the new udev rule is expected to be in place and active; switching the
checks around means one stat() instead of two in this rather hot code path.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-04-27 13:01:42 +02:00
cfe46e2d4a rbd: fix #3969: add rbd dev paths with cluster info
By adding our own customized rbd udev rules and ceph-rbdnamer we can
create device paths that include the cluster fsid and avoid any
ambiguity if the same pool and namespace combination is used in
different clusters we connect to.

In addition to the '/dev/rbd/<pool>/...' paths, we now have
'/dev/rbd-pve/<cluster fsid>/<pool>/...' paths.

The other half of the patch makes use of the new device paths in the RBD
plugin.

The new 'get_rbd_dev_path' method returns the full device path. In case
the image has been mapped before the rbd-pve udev rule was installed, it
returns the old path.

The cluster fsid is read from the 'ceph.conf' file in the case of a
hyperconverged setup. In the case of an external Ceph cluster, we need to
fetch it via a RADOS API call.

Co-authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2022-04-27 12:57:22 +02:00
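
A sketch of the path selection this describes; parameter handling and
namespace support are omitted, and the names are assumed:

    # prefer the unambiguous per-cluster path; fall back to the old path
    # for images mapped before the new udev rule was installed
    sub get_rbd_dev_path_sketch {
        my ($fsid, $pool, $image) = @_;
        my $pve_path = "/dev/rbd-pve/$fsid/$pool/$image";
        my $old_path = "/dev/rbd/$pool/$image";
        return -e $pve_path ? $pve_path : $old_path;
    }
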
43f8112f0b storage plugins: en/decode volume notes as UTF-8
When writing into the file, explicitly utf8 encode it, and then try
to utf8 decode it on read.

If the notes are not valid utf8, we assume they were iso-8859 encoded
and return as is.

Technically this is a breaking change, since there are iso-8859
comments that would successfully decode as utf8, for example: the
byte sequence "C2 A9" would be "Â©" in iso, but would decode to "©".

From what I can tell though, this is rather unlikely to happen for
"real world" notes, because the first byte would be in the range of
C0-F7 (which are mostly language-dependent characters like "Â") and
the following bytes would have to be in the range of 80-BF, which are
only special characters like "©" (or undefined).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-04-26 15:33:13 +02:00
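
The described round-trip could be sketched like this (helper names
assumed):

    use Encode qw(encode decode);

    sub encode_notes { # explicitly encode as UTF-8 when writing the file
        my ($notes) = @_;
        return encode('UTF-8', $notes);
    }

    sub decode_notes { # try UTF-8 first, else assume legacy iso-8859 content
        my ($raw) = @_;
        my $decoded = eval { decode('UTF-8', $raw, Encode::FB_CROAK) };
        return defined($decoded) ? $decoded : $raw;
    }
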
7a8751a2cd zfs pool: bump non-worker timeout default to 10s
With the 30s we got for sync API calls, 10s still leaves enough room for
answering and other stuff.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-26 15:25:40 +02:00
4cd9b85d15 fix #3803: ZFSPoolPlugin: zfs_request: increase minimum timeout in worker
Since most zfs operations can take a while (under certain conditions),
increase the minimum timeout for zfs_request in workers to 5 minutes.

We cannot increase the timeouts in synchronous API calls, since they are
hard-limited to 30 seconds, but in workers we do not have such limits.

The existing default timeout does not change (60 minutes in a worker,
5 seconds otherwise), but all zfs_requests with a set timeout (< 5 minutes)
will use the increased 5 minutes in a worker.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-04-26 15:19:50 +02:00
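
The resulting timeout selection can be sketched as follows; the worker
flag is passed in explicitly for illustration:

    # defaults stay (60 min in a worker, 5 s otherwise); workers get a
    # 5 minute floor for any explicitly set, smaller timeout
    sub zfs_request_timeout {
        my ($timeout, $in_worker) = @_;
        $timeout //= $in_worker ? 60 * 60 : 5;
        $timeout = 5 * 60 if $in_worker && $timeout < 5 * 60;
        return $timeout;
    }
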
428872eb71 ceph config: minor code cleanup & comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-26 12:47:58 +02:00
34fd49a69b some code style refactoring/cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-22 14:30:01 +02:00
8c1a476be9 bump version to 7.1-2
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-04-06 13:30:21 +02:00