Commit Graph

1345 Commits

d4e8a1f27c convert maxfiles to prune_backups when reading the storage configuration
If there are already prune options configured, simply delete the maxfiles
setting. Having both set is invalid from vzdump's perspective anyway, and any
backup job on such a storage would have failed, so a user would have noticed.

If there are no prune options, translate the maxfiles value to keep-last,
except for maxfiles being zero (=unlimited), in which case we use keep-all.

If both are not set, don't set anything, so:
1. Storages don't suddenly have retention options set.
2. People relying on vzdump defaults can still use those.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-23 15:57:30 +01:00
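
A minimal sketch of the translation described above, modeled on the message (illustrative, not the actual pve-storage code):

    # read-time conversion of the legacy maxfiles setting
    my $maxfiles = delete $scfg->{maxfiles};
    if (defined($maxfiles) && !defined($scfg->{'prune-backups'})) {
        if ($maxfiles) {
            $scfg->{'prune-backups'} = { 'keep-last' => $maxfiles };
        } else {
            # maxfiles 0 meant unlimited, which is now keep-all
            $scfg->{'prune-backups'} = { 'keep-all' => 1 };
        }
    }

If neither option is set, nothing is written, so vzdump.conf defaults still apply.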
1b87f01388 prune: introduce keep-all option
useful to have an alternative to the old maxfiles = 0. There has to
be a way for vzdump to distinguish between:
1. use the /etc/vzdump.conf default (when no options are configured for the storage)
2. use no limit (when keep-all=1)

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-23 15:27:17 +01:00
5045e0b77a perlcritic: don't modify $_ in list functions, use for
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-23 15:27:14 +01:00
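
The class of change behind this (Perl::Critic's ProhibitMutatingListFunctions policy), as a generic before/after sketch:

    # flagged: the substitution inside map also mutates @paths via $_ aliasing
    my @clean = map { s|/+$||; $_ } @paths;

    # preferred: a for loop that leaves the source list untouched
    my @clean2;
    for my $path (@paths) {
        (my $c = $path) =~ s|/+$||;
        push @clean2, $c;
    }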
1207620cf0 perlcritic: avoid conditional variable declaration
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-23 15:24:02 +01:00
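
The targeted pattern, sketched generically: `my` combined with a statement modifier has undefined behaviour per perlsyn (compute_limit and $use_limit are hypothetical names):

    # flagged: if the condition is false, $limit's state is undefined
    # and may even retain a value from a previous loop iteration
    my $limit = compute_limit() if $use_limit;

    # preferred: unconditional declaration, conditional assignment
    my $limit;
    $limit = compute_limit() if $use_limit;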
afeda18256 plugin: hooks: avoid that method param count gets returned
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-23 15:12:58 +01:00
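
The Perl mechanics behind this fix, as a hedged sketch (the hook name is hypothetical): a sub without an explicit return yields its last evaluated expression, and a list assignment from @_ in scalar context yields the argument count.

    sub on_phase_hook {                  # hypothetical hook method
        my ($class, $phase, $vmid) = @_;
        # ... hook body ...
        return; # explicit bare return; without it the implicit return
                # value would be the last expression - here the list
                # assignment, which in scalar context is the param count
    }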
3893d2755e fix volume activation for ZFS subvols
When using the path to request properties, and no ZFS file system is mounted
at that path, ZFS will fall back to the parent filesystem:

> # zfs unmount myzpool/subvol-172-disk-0
> # zfs get mounted /myzpool/subvol-172-disk-0
> NAME     PROPERTY  VALUE    SOURCE
> myzpool  mounted   yes      -
> # zfs get mounted myzpool/subvol-172-disk-0
> NAME                       PROPERTY  VALUE    SOURCE
> myzpool/subvol-172-disk-0  mounted   no       -

Thus, we cannot use the path and need to use the dataset directly.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-22 18:36:23 +01:00
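
So activation has to pass the dataset name, not the mountpoint; a hedged sketch of such a check (not the actual plugin code):

    use PVE::Tools qw(run_command);

    # query the dataset by name - asking via the mountpoint path would
    # silently report the parent filesystem if the subvol is not mounted
    my $mounted = '';
    run_command(['zfs', 'get', '-H', '-o', 'value', 'mounted', $dataset],
        outfunc => sub { $mounted = shift });
    run_command(['zfs', 'mount', $dataset]) if $mounted ne 'yes';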
68c4bca39e d/control: bump versioned dependency on libpve-common-perl
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-16 18:25:26 +01:00
9b6ff01b79 lvmthin: Match snapshot remove regex to allowed names
We allow snapshot names that match pve-configid, but so far 'qm destroy' did
not remove all snapshots matching pve-configid. For example, the name
x-y was allowed, but the resulting snap_vm-105-disk-0_x-y was not removed.

Reported-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
2020-11-16 18:24:22 +01:00
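
The gist of the fix as a hypothetical before/after (the exact patterns are illustrative, not the real diff): the removal loop matched only \w+ name suffixes, while pve-configid also permits hyphens.

    my $vol = 'vm-105-disk-0';
    # old-style pattern: misses snap_vm-105-disk-0_x-y, as \w excludes '-'
    my @stale = grep { /^snap_\Q$vol\E_\w+$/ } @lv_names;
    # widened pattern: also catches hyphenated snapshot names
    @stale = grep { /^snap_\Q$vol\E_[a-zA-Z0-9_-]+$/ } @lv_names;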
343ca2570c prune: allow having all prune options zero/missing
This is basically necessary for the GUI's prune widget, because we want to
pass along all options equal to zero when all the number fields are cleared.
And it's more similar to how it's done in PBS now.

Bumped the APIAGE and APIVER, in case some external plugin needs to adapt to
the now less restrictive schema for 'prune-backups'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-16 10:14:11 +01:00
f514181d28 prune mark: keep all if all prune options are zero/missing
as an additional safety measure. And add some tests.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-16 10:14:11 +01:00
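
A sketch of that safety net, with hypothetical names for the options hash and the marked list:

    # safety net: if no keep-* option is set to a positive value,
    # mark every backup as kept instead of pruning anything
    my $keep_all = !grep { $prune_opts->{$_} } keys %$prune_opts;
    if ($keep_all) {
        $_->{mark} = 'keep' for @$prune_list;
    }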
14c922b7da don't pass along keep-options equal to zero to PBS
In PBS, zero is not allowed for these options.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-16 10:14:11 +01:00
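
The filtering could look like this (hypothetical variable names):

    # PBS rejects keep-options with value 0, so drop those before the call
    my $pbs_opts = {
        map { $_ => $prune_opts->{$_} }
        grep { $prune_opts->{$_} } keys %$prune_opts
    };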
0b6b98d189 pbs: add/update: return enc. key, if newly set or auto-generated
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 18:05:28 +01:00
cd69cedf3e api: storage create/update: return parts of the configuration
First, doing this can make client work slightly easier, as the
submitted values do not need to be made available in any callback
handling the response.

But the actual reason for doing this now is that it is a
preparatory step for allowing the user to download/print/.. an
autogenerated PBS client encryption key.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 18:01:43 +01:00
1d95f21615 avoid unnecessary try at over-optimization
That was lots of code and hash-map touching just to avoid one extra
stat, whose result was probably in the page cache anyway, for the case
that a backup has a comment.

A case which is rather unlikely - comments are normally added for the
occasional explicit backup (e.g., before a major upgrade, before a
configuration change in that guest, ...), so it's not worth the
relatively complicated effort that made this sub harder to read and
maintain.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 17:31:26 +01:00
d260b9f357 storage: get subdirectory files: read .comment files for comments
we have no way of setting them yet via the API, but we can read them
now

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-12 17:17:44 +01:00
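
A sketch of reading such a sidecar file during listing, assuming a '<archive>.comment' naming scheme and PVE::Tools::file_read_firstline:

    use PVE::Tools;

    # a backup's comment lives next to it in a .comment sidecar file
    my $comment_file = "$archive_path.comment";
    my $comment = -f $comment_file
        ? PVE::Tools::file_read_firstline($comment_file)
        : undef;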
9778e5c216 api: content listing: add comment and verification fields
for now only for PBS, since we do not have such info elsewhere

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-12 17:16:55 +01:00
4558cb6eb6 pbs: autogen encryption key: bubble up error message
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 11:49:01 +01:00
8ff8e27713 api/config: fix indentation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 09:35:53 +01:00
790ca37e69 bump version to 6.2-10
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 19:05:12 +01:00
5324ceffef fix #3030: always activate volumes in storage_migrate
AFAICT the snapshot activation is not necessary for our plugins at the moment,
but it doesn't really hurt and might be relevant in the future or for external
plugins.

Deactivating volumes is up to the caller, because for example, for replication
on a running guest, we obviously don't want to deactivate volumes.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 19:04:58 +01:00
2c036838ed add check for fsfreeze before snapshot
In order to take a snapshot of a container volume that can then be mounted
read-only with RBD, the volume needs to be frozen (fsfreeze (8)) before
taking the snapshot.

This commit adds helpers to determine if the FIFREEZE ioctl needs to be called
for the volume.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-10 18:58:45 +01:00
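
For reference, a freeze helper in Perl boils down to the FIFREEZE/FITHAW ioctls on the mountpoint; a minimal sketch with the constants from linux/fs.h (not the actual helper added here):

    use Fcntl qw(O_RDONLY);

    # FIFREEZE = _IOWR('X', 119, int), FITHAW = _IOWR('X', 120, int)
    my ($FIFREEZE, $FITHAW) = (0xc0045877, 0xc0045878);

    sub fsfreeze_mountpoint {
        my ($path, $thaw) = @_;
        my $req = $thaw ? $FITHAW : $FIFREEZE;
        sysopen(my $fh, $path, O_RDONLY) or die "open $path failed - $!\n";
        ioctl($fh, $req, 0) or die "fsfreeze ioctl on $path failed - $!\n";
        close($fh);
    }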
af2dd59eaf fix typo in comment
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-10 18:58:45 +01:00
dbad606d57 Diskmanage: Use S.M.A.R.T. attributes for SSDs wearout lookup
This replaces the locally maintained hardware map in
get_wear_leveling_info() with the commonly used attribute names from
smartmontools. Smartmontools maintains a labeled attribute database
that covers the majority of drives (including their versions). The
current lookup produces false estimates; this approach hopefully
provides more reliable data.

Signed-off-by: Jan-Jonas Sämann <sprinterfreak@binary-kitchen.de>
2020-10-30 15:25:11 +01:00
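
The idea, sketched with hypothetical parsing code: look up the attribute by the label smartmontools assigns instead of by a vendor-specific mapping (the labels below are common examples, not an exhaustive list):

    # wear-related attribute labels commonly used by smartmontools
    my @wear_labels = qw(
        Media_Wearout_Indicator Wear_Leveling_Count
        Percent_Lifetime_Remain SSD_Life_Left
    );

    my $wearout;
    for my $line (split /\n/, $smart_output) {   # output of smartctl -A
        $line =~ s/^\s+//;
        my ($id, $name, $flag, $value) = split /\s+/, $line;
        next if !defined($value) || !grep { $_ eq $name } @wear_labels;
        $wearout = $value;   # normalized value, counting down from 100
        last;
    }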
641146add8 Update disk_tests/ssd_smart/sde data
Provides recent test data for disk_tests/ssd_smart/sde_smart. The
previous data was created using an older smartmontools version, which
did not yet support the drive and therefore had bogus attribute mapping.

Signed-off-by: Jan-Jonas Sämann <sprinterfreak@binary-kitchen.de>
2020-10-30 15:25:11 +01:00
57acd6a124 fix #1452: also log stderr of remote command with insecure storage migration
Commit 8fe00d9944 already
introduced the necessary logging for the secure code path,
so presumably the bug was already fixed for most people.

Delay the potential die for the send command, so that we are able to log
the output+error from the receive command. This way we also see e.g.
'volume ... already exists' instead of just 'broken pipe'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-10-28 14:05:49 +01:00
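
The delayed-die shape described above, as a sketch with hypothetical variables ($send_cmd, @recv_lines, $logfunc):

    # don't die on a send failure right away - first log everything the
    # receiving side printed, then propagate the original error
    eval { run_command($send_cmd, %send_opts) };
    my $send_error = $@;

    $logfunc->($_) for @recv_lines;   # includes the remote stderr now

    die $send_error if $send_error;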
0fd0a6270b avoid output of zfs get command on volume import
The quiet flag takes care of both the error and the success case.
Without this, there are lines like:
myzpool/vm-4352-disk-0@__replicate_4352-7_1601538554__	name	myzpool/vm-4352-disk-0@__replicate_4352-7_1601538554__	-
in the log if the dataset exists, and this information is
already present in more readable form.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-10-28 14:05:49 +01:00
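
A sketch of the call shape, assuming run_command's quiet and noerr options:

    use PVE::Tools qw(run_command);

    # probe for the target dataset; quiet => 1 keeps both the success
    # output and a potential error message out of the log, noerr => 1
    # turns a failing exit code into a return value instead of a die
    my $rc = run_command(['zfs', 'get', '-H', 'name', $dataset],
        quiet => 1, noerr => 1);
    my $exists = ($rc == 0);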
70232472bc fix #3097: cifs, nfs: increase connection check timeout to 10s
we already have the ZFS pool plugin as precedent for using 10s, and for
networks with remote off-site storage one can see 200 - 300ms RTT
latency, which means that for a protocol needing multiple rounds of
communication one can easily get over 2s even though the network is not
broken.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 07:03:19 +01:00
a9078a7922 bump version to 6.2-9
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-13 11:14:10 +02:00
6726a2e037 LIO: drop unused statements
minor cleanup of left-over/unused statements.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-10-13 11:11:05 +02:00
d4abdf4e22 LIO: untaint values read from remote config
The LIO backend for ZFS over iSCSI fetches the json-config periodically from
the target.
This patch reduces the stored config values to those which are actually used
and additionally untaints the values read from the remote host's config file.

Since the LUN index is used in calls to targetcli on the remote host (via
run_command), untainting prevents the call to crash when run with '-T'.

Tested by creating a zfs over iscsi backed VM, starting it, adding disks,
resizing disks, removing disks, creating snapshots, rolling back to a snapshot.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-10-13 11:11:05 +02:00
609f117ff2 ZFSPlugin: untaint lun number
ZFS over iSCSI fetches information about the disk-images via ssh, thus
the obtained data is tainted (perlsec (1)).

Since pvedaemon runs with '-T' enabled trying to start a VM via GUI/API failed,
while it still worked via `qm` or `pvesh`.

The issue surfaced after commit cb9db10c1a9855cf40ff13e81f9dd97d6a9b2698 in
pve-common ('run_command: improve performance for logging and long lines'),
and results from concatenating the original (tainted) buffer to a variable,
instead of a captured subgroup.

Untainting the value in ZFSPlugin should not cause any regressions, since
the other 3 target providers already have a match on '\d+' for retrieving
the LUN number.

reported via pve-user [0].

reproduced and tested by setting up a LIO-target (on top of a virtual PVE),
adding it as storage and trying to start a guest (with a disk on the
ZFS over iSCSI storage) with `perl -T /usr/sbin/qm start $vmid`

[0] https://lists.proxmox.com/pipermail/pve-user/2020-October/172055.html

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-10-09 18:07:37 +02:00
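
Untainting in Perl always goes through a regex capture group; the LUN-number case would look roughly like this:

    # under -T, data read from ssh output is tainted; only a regex
    # capture produces a clean value that may reach run_command again
    if ($lun =~ /^(\d+)$/) {
        $lun = $1;    # untainted copy
    } else {
        die "got invalid LUN number '$lun' from remote config\n";
    }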
4731eb1118 disk management: set more specific type for nvme
some users are confused, and it's nicer to have the more specific
type presented here.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-09 18:06:38 +02:00
d5c80a5bd5 code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-09 18:06:07 +02:00
d1f4700063 file_size_info: handle dangling symlinks
and other stat failure modes.

this method returns undef if 'qemu-img info ...' fails to return
information, so callers must handle this already.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-05 12:51:44 +02:00
c018887fd3 bump version to 6.2-8
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-02 16:09:15 +02:00
3de423680a PBS: use simple TCP ping for online check for now
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-02 15:55:03 +02:00
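
Such a check could look roughly like this, assuming PVE::Network::tcp_ping(host, port, timeout) and the default PBS API port 8007:

    use PVE::Network;

    sub check_connection {
        my ($class, $storeid, $scfg) = @_;
        # plain TCP connect to the PBS API port instead of a full request
        my $port = $scfg->{port} // 8007;
        return PVE::Network::tcp_ping($scfg->{server}, $port, 2);
    }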
4133e6e216 PBS: add support to specify port
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-02 15:49:48 +02:00
00f1de310e bump version to 6.2-7
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-29 18:56:04 +02:00
c9c90349c3 check for service existence before enabling zfs-import service
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-29 18:52:32 +02:00
f720f6c440 Disks: instantiate import unit for created zpool
When creating a new ZFS storage, also instantiate an import-unit for the pool.
This should help mitigate the case where some pools don't get imported during
boot, because they are not listed in an existing zpool.cache file.

This patch needs the corresponding addition of 'zfs-import@.service' in
the zfsonlinux repository.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-09-29 18:52:32 +02:00
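
Instantiating the template unit after pool creation could look like this (sketch; the real code may additionally escape the pool name for systemd):

    use PVE::Tools qw(run_command);

    # enable a per-pool instance of the template unit so the pool is
    # imported at boot even without a zpool.cache entry
    run_command(['systemctl', 'enable', "zfs-import\@${pool}.service"]);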
59fdc2b71e fix regression in zfs volume activation
commit 815df2dd08 introduced a small issue
when activating linked clone volumes - the volname passed contains
basevol/subvol, which needs to be translated to subvol.

using the path method should be a robust way to get the actual path for
activation.

Found and tested by building the package as root (otherwise the zfs
regressiontests are skipped).

Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-09-29 18:52:32 +02:00
e3eb131ec5 zfs pool: clean up use statements
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-29 05:08:57 +02:00
815df2dd08 ZFS: mount subvols in activate_volume
Makes it possible to clone and start a container whose
ZFS subvols are not yet mounted for some reason. If a
subvol cannot be mounted, there's a better error now:
zfs error: cannot mount '/myzpool/subvol-103-disk-0': directory is not empty

Previously, cloning would quietly do an "empty" clone,
and startup would fail with:
mount_autodev: 1074 Permission denied - Failed to create "/dev" directory
lxc_setup: 3238 Failed to mount "/dev"
do_start: 1224 Failed to setup container "103"
__sync_wait: 41 An error occurred in another process (expected sequence number 5)

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-09-29 05:07:15 +02:00
d0eaf18571 zfs: rollback: improve error message
we don't even know whether $snap exists at all, so the old variant could
be rather misleading.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-23 15:11:17 +02:00
c8eb017867 zfs: handle unexpectedly missing snapshots better
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-23 15:11:17 +02:00
48d0cd02c1 fix indentation of $prune_backups_format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-03 18:21:09 +02:00
8ca00a63f7 prune-backups: improve command description
This is shown in the man page, so it's not important to mention
that this is a wrapper. Also mention the fact that the keep options
from the storage configuration serve as a fallback, which was previously
mentioned in the description of the (now removed) prune-backups parameter.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-09-03 18:20:06 +02:00
a0933d7e16 prune-backups CLI: use keep-options directly
Makes the interface cleaner; e.g. --keep-daily=2 instead of
--prune-backups=keep-daily=2

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-09-03 18:20:06 +02:00
c3e87d0f6e prune_backups CLI: print different message when there are no backups at all
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-08-20 17:28:07 +02:00
7b73d327b5 prune_backups: fix message
For prune selections, it doesn't matter what the current time is,
only the timestamps of the backups matter.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-08-20 17:28:07 +02:00