Commit Graph

1173 Commits

Author SHA1 Message Date
d4e00f2bd5 file/volume size info: add actual errors to untaint messages
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:28:48 +02:00
ac598d851e plugins: untaint volume_size_info returns
the size returned by volume_size_info is used for creating the new
destination image in PVE::QemuServer::clone_disk (and probably
elsewhere). In certain cases the return values are tainted: they are
obtained via a run_command call and, depending on the format and length
of the parsed output, can still carry Perl's taint flag.

One example of a tainted return has been reported in our
community-forum:
https://forum.proxmox.com/threads/cannot-clone-vm-or-move-disk-with-more-than-13-snapshots.89628/

A qcow2 image with 13 snapshots generates output > 4k in length from
`qemu-img info --output=json`, which in turn causes the output to be
considered tainted.

This patch untaints the returns where applicable. The other
storage-plugins are not affected:
* LVMPlugin returns a single number and a newline (thus gets untainted
  by run_command)
* RBDPlugin untaints the complete json before decoding
* ZFSPoolPlugin and ISCSIDirectPlugin explicitly untaint their
  returns.
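
For illustration, a minimal sketch of the usual Perl untaint idiom (the
variable names are placeholders, not the exact code):

    # launder the value through a regex capture; only the captured
    # group is considered untainted by Perl
    my ($size) = $tainted_size =~ m/^(\d+)$/;
    die "volume_size_info returned an invalid size\n" if !defined($size);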

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-06-23 08:28:48 +02:00
ffc31266da tree-wide: fix typos with codespell
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:28:48 +02:00
5b955999b9 pbs: fix typo
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-22 13:44:06 +02:00
03c487e553 config: prevent empty content list when content type 'none' is not supported
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 11:21:45 +02:00
d96b789aed vdisk_list: only scan storages with the correct content type(s)
The enabled check in the lower loop is now redundant and can be removed.

If storeid is provided, initialize the result hash accordingly, mainly for
backwards compatibility (needed by a caller in pve-manager's Ceph/Pools.pm and
the migration code in pve-container and qemu-server), but it is also less
surprising in general.

Remaining vdisk_list users that do not specify a content type are:
    1. pve-manager's Pool/Ceph.pm, but the content type for RBD can only be
       rootdir and images, so the storage is scanned (if enabled, same as
       before).
    2. pve-container migration
    3. qemu-server migration
For the latter two, it's planned to enforce content type, so the change is fine
too.

This also means that iscsi(direct) storages with content type 'none', i.e.
"use LUNs directly", no longer return the list of images, but that was
rather a bug anyway, as the LUNs are not virtual disks then:
    0.0.0.scsi-36001405b8f2772e13a04b8e9390db13d
All of the remaining callers not using content types (see above) are fine with
that change too.
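
A hedged usage sketch (the parameter order shown is an assumption based on
this description, not a verbatim excerpt):

    # only scan storages that support the 'images' content type
    my $res = PVE::Storage::vdisk_list($cfg, $storeid, $vmid, undef, 'images');
    for my $info (@{$res->{$storeid}}) {
        print "$info->{volid}\n";
    }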

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 11:21:45 +02:00
6a4545601b lvm: volume import: handle worker returned by free_image
only affects LVM storages with 'saferemove 1' where the import fails at a rather
advanced stage. Previously in such cases, the renamed (by free_image) volume
del-vm-XYZ-disk-N would be left over.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 09:38:03 +02:00
7ae13a34d2 pbs: free image: explicitly return undef
Storage.pm's vdisk_free interprets truthy return values as worker subs, so be
explicit about returning undef here. Not an issue at the moment, because
run_client_command already returns undef, but better be safe than sorry.
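
A minimal sketch of the pattern (hypothetical body; only the method name and
the return convention are taken from this description):

    sub free_image {
        my ($class, $storeid, $scfg, $volname, $isBase) = @_;
        # ... forget the snapshot via the PBS client ...
        return undef; # truthy returns are treated as worker subs by vdisk_free
    }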

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 09:38:03 +02:00
ead6be934d api: status: sort index and add missing "file-restore"
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-21 09:32:55 +02:00
823e8afe72 plugin loader: text-width cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-18 18:33:20 +02:00
f985f33afd api: content/delete: die with newline to avoid adding file-context
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-16 19:24:38 +02:00
cda32b2361 cephfs: update reminder for systemd_netmount removal
Commit d9ece228fb introduced the workaround of
using systemd units and 25e222ca0d re-used the
functionality for fuse-mounts too.

The latter commit suggests switching to mount.fuse.ceph for the '_netdev'
option, but it doesn't seem to work:

 root@pve701 / # mount -t fuse.ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/fuse -o 'ceph.id=admin,ceph.keyfile=/etc/pve/priv/ceph/cephfs.secret,ceph.conf=/etc/pve/ceph.conf,_netdev'
 ceph-fuse[20729]: starting ceph client
 2021-06-15T14:22:00.631+0200 7f995f878080 -1 init, newargv = 0x55e09fc11a40 newargc=11
 ceph-fuse[20729]: starting fuse
 root@pve701 / # mount -t ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/normal -o 'name=admin,secretfile=/etc/pve/priv/ceph/cephfs.secret,conf=/etc/pve/ceph.conf,_netdev'
 root@pve701 / # mount | grep mnttest
 ceph-fuse on /mnttest/fuse type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
 10.10.10.11,10.10.10.12,10.10.10.13:/ on /mnttest/normal type ceph (rw,relatime,name=admin,secret=<hidden>,acl,_netdev)

Also, the return value is not propagated by mount.fuse.ceph, meaning the output
would need to be parsed...

 root@pve701 ~ # mount -t fuse.ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/fuse -o 'ceph.id=admin,ceph.keyfile=/etc/pve/priv/ceph/cephfs.secret,ceph.conf=/etc/pve/ceph.conf,_netdev'
 2021-06-15T14:42:56.326+0200 7f634edae080 -1 init, newargv = 0x560cdb5e0a40 newargc=11
 ceph-fuse[34480]: starting ceph client
 fuse: mountpoint is not empty
 fuse: if you are sure this is safe, use the 'nonempty' mount option
 ceph-fuse[34480]: fuse failed to start
 2021-06-15T14:42:56.338+0200 7f634edae080 -1
 fuse_mount(mountpoint=/mnttest/fuse) failed.
 Mount failed with status code: 5
 root@pve701 ~ # echo $?
 0

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
9531988d5e cephfs: revert safe-guard check for Luminous
It's necessary to be on Nautilus before upgrading to 7.x, so the check is no
longer needed. See commit e54c3e3347. It didn't
revert cleanly, because cleanups were made afterwards.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
3a3ff9d52b config: add backup content type to default local storage
which is used if there is no ('dir'-type) 'local' entry. Storage configurations
made by the installer also support backups for the 'local' storage, and the
'prune-backups' parameter is not really useful otherwise.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
bbadd1659d config: mention that maxfiles is deprecated
Don't add an explicit deprecation warning on parsing (yet); this is already done
by the pve6to7 script. Also, automatic conversion to 'prune-backups' happens when
the section config is read, so over time fewer users should be affected.
Postpone the explicit warning/dropping of the parameter to a future major release.

Also switch the setting for the default 'local' storage to 'prune-backups'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
1a4ab884e8 postinst: move cifs credential files into subdirectory upon update
and drop the compat code.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-16 13:20:35 +02:00
d7f6f85ea0 fix find_free_disk_name invocations
The interface takes the storeid now, not the image dir.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-15 14:36:12 +02:00
883c811f7f prune backups: activate storage
which also checks whether the storage is even enabled. VZDump jobs already
activate the storage, but more direct calls via API/CLI didn't do so yet.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-15 10:11:17 +02:00
f7a95153d6 diskmanage: fix determining array length
$#* is the last index, not the length.
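
In short (illustrative snippet):

    my $aref = [ 'a', 'b', 'c' ];
    my $last  = $aref->$#*;        # 2, the last index (postfix-deref form)
    my $count = scalar($aref->@*); # 3, the actual length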

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-15 10:10:33 +02:00
0e30b3121d api: get rid of moved 'usb' call
pve-manager commit bd328734deb1dcea296858bb38d085e392adb99e changed the frontend
to use the new call.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-08 15:19:36 +02:00
d938178298 disks: fixup join usage
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-02 14:19:53 +02:00
839afff896 disks: wipe blockdev: pass all child partitions to wipefs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-02 13:13:26 +02:00
fa6d05ab24 disks: wipe blockdev: improve variable locality/readability
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-02 13:12:57 +02:00
70dc70984a disks: factor out stripping of /dev and cleanup vicinity
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-02 13:10:10 +02:00
2829e6a853 api: add wipedisk call
Try to detect active mounts and holders early, because it's cheap. The wipefs
command in the worker will detect even more situations where wiping alone is
not enough for the device to show up as unused, or could otherwise be
problematic.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-02 11:56:51 +02:00
cb057e21c5 diskmanage: add has_holder method
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-02 11:56:51 +02:00
3bf7f8891b diskmanage: add is_mounted method
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-02 11:56:51 +02:00
7e14102a4b diskmanage: factor out mounted_blockdevs helper
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-02 11:56:51 +02:00
262ad7a92e diskmanage: add wipe_blockdev method
based on the wipe_disks method from pve-manager's Ceph/Tools.pm with the
following main differences:
    * use wipefs to wipe labels first (to avoid sgdisk complaining about the
      backed up GPT structure on a subsequent GPT initialization)
    * only take one device as an argument
    * do not use an absolute path for 'dd'
    * die if one of the commands fails

The wipefs command performs checks and complains about e.g. mounted or active
devices. One could supply --force to wipefs, but in many such situations it
does not work as expected, because the device would still be detected as in-use
afterwards, and further manual steps would be needed.
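
Roughly, the sequence looks like this (simplified sketch assuming
PVE::Tools::run_command; the real method also handles child partitions):

    my $dev = '/dev/sdX'; # hypothetical target device
    # clear filesystem/partition-table signatures first
    run_command(['wipefs', '--all', $dev]);
    # then zero out the start of the device to drop leftover metadata
    run_command(['dd', 'if=/dev/zero', "of=$dev", 'bs=1M', 'conv=fdatasync', 'count=200']);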

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-02 11:56:51 +02:00
522cd32738 remove some more DRBD references
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-12 13:14:17 +02:00
dbf11c2f05 remove internal, unmaintained, DRBD plugin
This was never marked stable and the recommended one is the external
version, which is maintained by linbit themselves.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 09:08:22 +02:00
a1e09e496e iscsi: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-04 12:02:47 +02:00
9177cc2eda clone image: specify base format option with qemu-img
and avoid a warning, since auto-detecting the format of the base volume is
deprecated. See commit d9f059aa6cfccefaffa3532556e966df4a99ece2 in qemu for more
information.
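
Illustrative invocation (not the exact diff), with the base format passed
explicitly via -F instead of being auto-detected:

    # $base_path and $clone_path are hypothetical placeholders
    run_command(['qemu-img', 'create', '-b', $base_path,
        '-F', 'qcow2', '-f', 'qcow2', $clone_path]);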

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-05-03 13:07:02 +02:00
c1ec1acbde file-restore: pass in volume ID or name
instead of just the snapshot for consistency with other API endpoints,
and possible future extension to VMA backups (where 'snapshot' would be
a rather strange terminology).

add some additional checks (pbs storage type, backup volume type),
completion and magic (allow passing in either a full volume ID with
correct storage, or just the volume name, or just the snapshot for
easier API/CLI usage/convenience).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-04-23 14:09:54 +02:00
82f764e119 file-restore: return perl-y booleans
like we do in most of our API.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-04-23 14:09:54 +02:00
f1a3ce3b17 add FileRestore API for PBS
Includes list and restore calls.

Requires VM.Backup and Datastore.Audit permissions, for the accessed
VM/CT and containing datastore respectively.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-23 14:09:48 +02:00
6035a5dfb1 api: fix typo in error message
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-04-23 12:29:36 +02:00
c56f7a71af pbs: allow setting up a master key
similar to the existing encryption key handling, but without
auto-generation since we only have the public part here.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-04-22 21:56:31 +02:00
3c93115570 rbd: fix typo in error message
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-20 18:21:31 +02:00
ceb7b1ed09 diskmanage: get_partnum: fix check
Not replacing it with return, because the current behavior is dying:
    Can't "next" outside a loop block
and the single existing caller in pve-manager's API2/Ceph/OSD.pm does not check
the return value.

Also check for $st, which can be undefined in case a non-existent path was
provided. This also led to dying previously:
    Can't call method "mode" on an undefined value
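
A minimal sketch of the added guard (only $st is from the description; the
rest is hypothetical):

    use File::stat;
    use Fcntl ':mode';

    my $st = stat($partpath); # undef for a non-existent path
    die "unable to stat '$partpath'\n" if !$st;
    die "'$partpath' is not a block device\n" if !S_ISBLK($st->mode);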

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-20 18:13:18 +02:00
415dc3985d diskmanage: improve setting usage for whole disk with include-partitions
in case a disk with partitions also has an fstype set, which happens for our ZFS
boot disks. Do not change the behavior without include-partitions, as we
prefer(red) to be more specific than simply 'partitions' then.

Reported in the enterprise support channel.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-16 12:52:08 +02:00
1ebd925dcf import: allow import from UNIX socket
this allows forwarding over websockets without requiring a (free) port.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-04-16 12:23:43 +02:00
bef7920d1e volume export/import: allow uppercase letters
Bug reported in the community forum[0].

Currently, it's possible to break replication by:
1. have an existing snapshot whose name contains an uppercase letter
2. set up a replication job and run it
3. rollback to the existing snapshot
4. replicate again -> fails

The failure occurs because, after step 3, the most recent common snapshot is the
previously existing one and currently no uppercase letters are allowed for
export/import.

The pve-snapshot-name option uses the CONFIGID_RE
    qr/[a-z][a-z0-9_-]+/i
so it cannot be used here, because it would not allow for e.g. '__migrate__'.
Simply allow uppercase letters, to be backwards compatible and allow all
possible pve-snapshot-name values.
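
Conceptually (illustrative patterns, not the verbatim diff):

    my $old_re = qr/^[a-z0-9_\-]+$/;    # rejected names like 'Snapshot1'
    my $new_re = qr/^[a-zA-Z0-9_\-]+$/; # covers all pve-snapshot-name values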

There is still an issue if there also was a state volume, but that's a different
bug[1].

[0]: https://forum.proxmox.com/threads/solved-migration-error-base-value-does-not-match-the-regex-pattern.85946/
[1]: https://bugzilla.proxmox.com/show_bug.cgi?id=3111

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-12 14:52:29 +02:00
8c858f7eeb fix #3345: zfs: restore container volume to ZFS with size 0
A restore to ZFS for a container which has a volume (rootfs / mount
point) of size 0 failed because the refquota property does not accept
'0k' but wants 'none' in that situation.
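
The fix boils down to the following (simplified sketch; $size in KiB and
$dataset are placeholders):

    # ZFS rejects 'refquota=0k'; use 'none' for zero-sized volumes
    my $value = $size ? "${size}k" : 'none';
    run_command(['zfs', 'set', "refquota=$value", $dataset]);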

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-12 14:37:50 +02:00
c27fe64810 rbd: make volume param for get_rbd_path optional to allow further use
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 14:19:48 +02:00
ed7ea5a352 rbd: list images: early return to avoid indentation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 13:51:15 +02:00
a3cad0b50d rbd: list images: sort by keys when pushing on result array
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 13:49:56 +02:00
6d0d0a977d rbd: indentation and whitespace cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 13:48:27 +02:00
22265bd990 rbd: get kernel device sub returns a path, not a name
also transform it into a private sub instead of a local variable closure.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 13:45:21 +02:00
72bbd8a6f7 rbd: consistent closure call style
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 13:43:33 +02:00