Commit Graph

492 Commits

d560ec2860 map_volume: fall back to 'path'
Adds a fallback to 'Plugin::path' in the default implementation of
'map_volume', to simplify the common pattern of calling 'map_volume',
checking whether the result is defined, and calling 'path' if it is
not. The path is now always returned if the plugin in question does
not override 'map_volume'.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2019-04-29 13:44:40 +00:00
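
A minimal sketch of the described fallback, assuming a simplified
method signature (the real 'map_volume' takes more arguments, such as
a snapshot name):

    sub map_volume {
        my ($class, $storeid, $scfg, $volname) = @_;

        # plugins with an actual mapping step (e.g. krbd) override this;
        # everyone else now gets the plain filesystem path back
        my ($path) = $class->path($scfg, $volname, $storeid);
        return $path;
    }
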
b5c8278a3e zpool: handle race with other zpool imports
The underlying issue is that a zpool can get imported only once, so
we first check if it's in `zpool list`, and thus imported, and only
if it does not show up there do we try to import it.

But, this can race with either:
* parallel running activate_storage call, through CLI/API/daemon
* a zpool import from an admin (a bit unlikely, but hey that's the
  thing with race conditions ;))

So refactor the "is pool imported" check into a closure, call it
additionally if the import failed, and silence the error if the pool
is now listed, and thus imported. This makes it a little bit nicer to
read too, IMO.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-18 10:01:39 +02:00
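
A sketch of the refactoring, with the "is pool imported" check as a
closure that is consulted again when the import fails (assuming $pool
holds the configured pool name; the zfs_request details are simplified):

    my $pool_imported = sub {
        my $res = eval {
            $class->zfs_request($scfg, undef, 'zpool_list', '-o', 'name', '-H', $pool)
        };
        return defined($res) && $res =~ m/^\Q$pool\E$/m;
    };

    if (!$pool_imported->()) {
        eval { $class->zfs_request($scfg, undef, 'zpool_import', '-d', '/dev/disk/by-id/', $pool) };
        if (my $err = $@) {
            # a parallel activate_storage or a manual import may have won
            # the race; only die if the pool still is not imported
            die $err if !$pool_imported->();
        }
    }
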
e9ab8ea313 zPool: fixup timeout setting for import
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-17 14:54:42 +00:00
a10695b4e8 zpool: cleanup zfs_request command a bit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-17 14:31:05 +00:00
dc18abe07b ZFS Pool: improve error output from activate_storage
related to #2154 ("Buggy 'pvesm status' output"), which has some
issues with zpool list, but where we currently do not see the error
messages from that step
2019-04-17 08:05:04 +00:00
c6f1315524 zfs: don't generate/update cachefile on pool import
during storage activation.

for pools that don't get imported at boot (e.g. because their vdevs are
not available when zfs-import-*.service runs) it is fatal to include
them in the cachefile; for those that do get imported at boot, this code
should never run anyway, as they are already imported.

in any case, a fallback to import without cachefile is the safe variant.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2019-04-03 12:18:37 +02:00
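
The safe variant can be expressed by importing with the cachefile
property disabled; a sketch (flag usage per zpool(8), surrounding code
simplified):

    # import without generating/updating /etc/zfs/zpool.cache, so pools
    # that are only activated on demand never end up in the cachefile
    $class->zfs_request($scfg, undef, 'zpool_import',
        '-d', '/dev/disk/by-id/', '-o', 'cachefile=none', $pool);
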
cdef3abb25 workaround zfs create -V error for unaligned sizes
fixes the 'cannot create 'nvme/foo': volume size must be a multiple of
volume block size' error by always rounding the size up to the next 1M
boundary. this is a workaround until
https://github.com/zfsonlinux/zfs/issues/8541 is solved.
the current manpage says 128k is the maximum blocksize, but a local test
showed that values up to 1M are allowed. it might be possible to
increase it even further (see f1512ee61).

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2019-03-30 15:38:35 +01:00
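
The rounding itself is a one-liner; assuming $size is in KiB (as in the
plugin's allocation path), aligning up to the next 1 MiB boundary looks
like this:

    # round up to the next multiple of 1 MiB (1024 KiB), since zvol sizes
    # must be a multiple of the volume block size (values up to 1M observed)
    $size = $size + ((1024 - $size % 1024) % 1024);
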
3add8714a9 fix content listing for user mode iscsi plugin
the format is a required field in the result schema

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-03-07 11:08:43 +01:00
a8ec2f0227 fix #585: remove leftover disks/directory after VM creation failed
Creating a qcow2 disk image with a size larger than the space available
on the storage will fail.
As qemu-img does not clean up the disk image afterwards, it needs to be
deleted explicitly. Further, the vmid folder is cleaned up once it is
empty.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2019-03-05 10:36:51 +01:00
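
A sketch of the cleanup, with hypothetical variable names; the point is
catching the qemu-img failure, deleting the partial image, and pruning
the VMID directory, which rmdir only removes if it is empty:

    use PVE::Tools qw(run_command);

    eval {
        run_command(['/usr/bin/qemu-img', 'create', '-f', 'qcow2', $path, "${size}K"]);
    };
    if (my $err = $@) {
        unlink $path;     # qemu-img leaves a partial image behind on failure
        rmdir $vmid_dir;  # silently fails unless the directory is empty
        die $err;
    }
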
d1eb35ea74 enable snippets content type for all directory based storages
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-01-31 11:04:29 +01:00
7c7ae12f43 add new content type 'snippets'
will be used for files which can be executed as hookscripts or which
contain custom cloud-init configs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-01-31 11:04:29 +01:00
931c35cfa0 fix #1598: use glusterfs daemon default port for online check
use the port the main glusterfs daemon listens on as the ping port;
this is also the default port used by QEMU.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-01-29 08:27:27 +01:00
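
For illustration, a reachability check in plain core Perl against
glusterd's default port 24007 (an equivalent stand-in, not the plugin's
actual code, which goes through PVE helper functions; $server assumed):

    use IO::Socket::INET;

    # glusterd listens on 24007 by default; QEMU's gluster block driver
    # connects to the same port, so this is a good liveness proxy
    my $sock = IO::Socket::INET->new(
        PeerAddr => $server,
        PeerPort => 24007,
        Proto    => 'tcp',
        Timeout  => 2,
    );
    my $online = defined($sock) ? 1 : 0;
    close($sock) if $sock;
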
712e27f178 Fix #1941: remove empty directories when freeing image on FS based storages
Remove directories if they are empty, which can happen if all images
from a VM got deleted, e.g., after destroying said VM.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2019-01-24 15:09:20 +01:00
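
The pruning can rely on rmdir semantics, which refuse to remove
non-empty directories; a sketch assuming a $path layout of
.../images/<vmid>/<image>:

    use File::Basename qw(dirname);

    unlink $path or die "unlink '$path' failed - $!\n";

    # rmdir removes the directory only if it is empty; the error for a
    # still-populated directory is deliberately ignored
    rmdir dirname($path);
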
4e8de9ad56 Fix #2050: only provide 'conv=sparse' for LvmThin
LVMPlugin->volume_import (used by storage_migrate on either offline
migration with local disks, or online migration with storage-only
referenced disks) passed 'conv=sparse' to `dd`. This can lead to
data corruption if the target volume is not zero-initialized.

Dropping the sparse argument completely would fix the problem, but
would break keeping data sparse for LvmThinPlugin.

This patch moves the dd call out into a plugin-specific (LVM*) sub so
that each plugin can control the parameters.

Steps for reproducing the issue:
* create a cluster with (at least) 2 nodes A and B, with a free
  disk-device (/dev/sdx)
* write a recognizable pattern to /dev/sdx on B:
  `dd if=/dev/zero bs=10M | tr '\000' '\255' | dd of=/dev/sdx bs=10M`
  (would be grateful for alternatives to the dd | tr | dd pipeline)
* on both A and B create a lvm-vg (pvcreate, vgcreate)
* add it as _not_ shared storage, which is available on nodes A and B
* create a small guest on A
* fill a file in the guest with zeros
  `dd if=/dev/zero of=/zerofil bs=10M`
* stop the guest, migrate it to B
* start the guest - check that the file `/zerofil` contains `ad`
  instead of `00`

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-01-18 10:46:33 +01:00
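
A sketch of the plugin-specific split (sub name and signature are
assumptions): the plain LVM plugin writes every block, while
LvmThinPlugin keeps 'conv=sparse':

    # LVMPlugin: a thick LV is not guaranteed to be zero-initialized, so
    # every block of the input must actually be written
    sub volume_import_write {
        my ($class, $input_fh, $output_file) = @_;
        run_command(['dd', "of=$output_file", 'bs=64k'],
            input => '<&' . fileno($input_fh));
    }

    # LvmThinPlugin: unwritten thin extents read back as zeros, so zero
    # blocks can be skipped safely, keeping the volume sparse
    sub volume_import_write {
        my ($class, $input_fh, $output_file) = @_;
        run_command(['dd', "of=$output_file", 'conv=sparse', 'bs=64k'],
            input => '<&' . fileno($input_fh));
    }
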
628a921a94 LVM: Add '--refresh' when activating volumes
From `man 8 lvchange`:
  --refresh
      If the logical volume is active, reload its metadata. This is not
      necessary in normal operation, but may be useful ... if you're doing
      clustering manually without a clustered lock manager.

Fixes migration in a shared LVM (iscsi) setup, where a disk gets resized on one
node A and the guest is afterwards migrated to another node B: B still presents
the old size to the guest, leading to data corruption.

It is necessary to run `lvchange` twice because the options `-ay` and
`--refresh` are mutually exclusive.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2019-01-15 09:43:07 +01:00
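
The resulting activation sequence, sketched (paths simplified):

    use PVE::Tools qw(run_command);

    my $lvpath = "/dev/$vg/$volname";
    # '-ay' and '--refresh' are mutually exclusive, hence two calls:
    run_command(['/sbin/lvchange', '-aly', $lvpath]);      # activate locally
    run_command(['/sbin/lvchange', '--refresh', $lvpath]); # reload metadata,
        # picking up e.g. a resize done on another cluster node
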
955c1f2cf7 fix #2046 add volume_size_info to LVMPlugin
Without volume_size_info a storage plugin falls back to the implementation
in PVE/Storage/Plugin.pm, which relies on `qemu-img info`.

`qemu-img info` returns wrong results on a node in the case of shared volume
groups (e.g. when sharing disks via iSCSI), if a disk was resized on another
node (it lseeks to the end of the block-device, and this yields the old size).

Using lvs directly fixes the issue, since the LVM metadata gets updated when
invoking lvs.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2019-01-15 09:43:07 +01:00
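
A sketch of an lvs-backed volume_size_info with a simplified signature;
'--units b --nosuffix' yields a plain byte count:

    use PVE::Tools qw(run_command);

    sub volume_size_info {
        my ($class, $scfg, $storeid, $volname) = @_;

        my $size;
        run_command(['/sbin/lvs', '--noheadings', '--units', 'b', '--nosuffix',
                     '-o', 'lv_size', "$scfg->{vgname}/$volname"],
            outfunc => sub {
                my ($line) = @_;
                $line =~ s/^\s+|\s+$//g;
                $size = $line if $line =~ /^\d+$/;
            });
        # lvs triggers an LVM metadata refresh, so shared VGs report the
        # current size even after a resize on another node
        return $size;
    }
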
4050fcc16b move Storage/CephTools to CephConfig
it is not a storage plugin, and it makes more sense to have it
top-level, but there we cannot name it CephTools because of the
existing module of that name in pve-manager

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-12-20 09:26:11 +01:00
d35a0b4b62 Fix #2019: CephFS storage misses maxfiles
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2018-12-07 13:47:05 +01:00
54e0b0034b cephfs: tell systemd that the mount requires network
As we mount this manually, and thus systemd doesn't know about any
dependency for cephFS mounts, this got unmounted only at the last
stage of shutdown, where the network wasn't active anymore.

But CephFS needs to be connected to an active MDS for a clean
unmount, so without network this mount would delay shutdown for quite
a bit, until systemd gave up after some minutes and forced the unmount.

So tell systemd that this mount requires network, which can be done
with the '_netdev'[0] mount option, which, luckily for us, can also be
passed to a mount call and isn't only available for fstab.

with this a mount gets, among others:
> Wants=network-online.target
> Before=umount.target remote-fs.target
> After=remote-fs-pre.target system.slice network.target network-online.target -.mount

Which does the trick for us.

[0]: https://www.freedesktop.org/software/systemd/man/systemd.mount.html#_netdev

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-27 12:10:36 +01:00
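
Sketched as a mount invocation (argument details are assumptions; the
essential part is passing '_netdev' as a mount option):

    use PVE::Tools qw(run_command);

    my $cmd = ['/bin/mount', '-t', 'ceph', "$monhost:$subdir", $mountpoint,
               '-o', "name=$username", '-o', '_netdev'];
    run_command($cmd, errmsg => "mount error");
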
49cc7802f7 LVM: lock on volume_resize
This is important for shared LVM storages. As with deletion and
creation of images, we may otherwise not have up-to-date metadata,
and extents may get reused if, for example, another node created an
image at the same time.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-15 10:13:57 +01:00
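
A sketch of the locked resize, using the base plugin's cluster-wide
storage lock (body abridged; $size is assumed to already carry a unit
suffix):

    use PVE::Tools qw(run_command);

    sub volume_resize {
        my ($class, $scfg, $storeid, $volname, $size, $running) = @_;

        my $path = $class->path($scfg, $volname);
        # same cluster lock as used for alloc/free, so no other node can
        # change the shared VG metadata (and reuse extents) concurrently
        $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
            run_command(['/sbin/lvextend', '-L', $size, $path]);
        });
        return 1;
    }
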
40d698932e implement map_volume and unmap_volume
This allows requesting a mapped device/path explicitly, regardless of
the storage options, e.g. the krbd option in the RBD plugin.

Bump of the storage ABI => 2

Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2018-11-09 17:25:51 +01:00
dd9e97ed14 find_free_diskname: fixup regex match operator
Co-developed-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-09-19 11:21:37 +02:00
0057171085 Fix #1925: untaint rbd JSON output
Reviewed-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Reviewed-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2018-09-19 11:21:37 +02:00
dd1fa860d0 get_vm_disk_number: follow up cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-09-14 09:19:08 +02:00
f4cc2c4afe Fix #1913: get_vm_disk_number: clone plugindata to avoid side effects
Accessing a non-existing 'format' key in plugindata (e.g., in LvmThinPlugin)
created it by autovivification, thus breaking the fallback to the default
value 'raw' upon any following access.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2018-09-14 09:17:40 +02:00
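
The pitfall is reproducible in a few lines of plain Perl; a read of a
nested element is enough to create the intermediate key, which is why
the fix works on a clone:

    use Storable qw(dclone);

    my $plugindata = {};    # e.g. LvmThinPlugin defines no 'format' entry

    # reading a nested element autovivifies the 'format' key as a side effect
    my $default = $plugindata->{format}->[1];
    print exists($plugindata->{format}) ? "vivified\n" : "absent\n";  # vivified

    # operating on a clone leaves the shared plugindata untouched
    my $copy = dclone($plugindata);
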
e5b2206f8a rbd: krbd_feature_disable was not disabling features
$features is actually an array reference, so use it as one.
This broke creation and migration of disks on RBD storages.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-09-12 14:56:36 +02:00
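
In Perl terms, a minimal illustration of the difference (not the actual
call site):

    my $features = ['exclusive-lock', 'object-map', 'fast-diff'];

    # wrong: interpolating the reference itself yields 'ARRAY(0x...)'
    my $bad_arg = "$features";

    # right: dereference, e.g. to expand into a command's argument list
    my @cmd = ('rbd', 'feature', 'disable', 'pool/vm-100-disk-1', @$features);
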
83a40b8d3b refactor is_valid_vm_diskname to private simpler sub
This was newly introduced and is only used once, so having a
wantarray return mechanism, without ever using it or knowing for sure
whether it may help with reuse of the method, is not ideal.

Make the sub a module-private one just returning the VM disk number
or an explicit undef. Pass it the $suffix variable to avoid recomputing
it on every call in our caller's loop.

If there's re-use potential in the future we can actually decide what
makes sense to return.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-09-11 07:52:43 +02:00
c4a29df483 refactor finding next diskname for all plugins
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2018-09-10 12:21:10 +02:00
59fa9fd6a3 next diskname: start ids with 0 to honor MAX_VOLUMES_PER_GUEST
else we can only have MAX_VOLUMES_PER_GUEST-1 disks per VMID;
not tragic, but possibly confusing

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-09-10 11:21:12 +02:00
c0535aa72f make max number of disks a constant
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2018-09-07 15:28:15 +02:00
345f898108 add vm_diskname helpers (get_next, is_valid)
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2018-09-07 15:28:15 +02:00
aa14def420 Addition to fix #1895, skip image if no owner
Non-conforming image names are no longer ignored by the new rbd_ls
implementation; this patch restores the old behaviour.

This fix is a temporary workaround and should be removed, once the new
image name parser is ready.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2018-09-07 13:51:18 +02:00
1be93fe2fd rbd: followup cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-09-06 15:19:16 +02:00
c093e93b28 rbd: remove unused size conversion function
since the json output gives the sizes in bytes, we do not
need to convert anymore

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-09-06 15:18:23 +02:00
89a8800bc3 fix #1895: use json for 'rbd ls -l' and 'rbd info'
since ceph changed the plain output format for 12.2.8
we have to change the code anyway, and while we're at it,
we can change it to the (hopefully) more robust json output

Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-09-06 15:11:26 +02:00
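
A sketch of consuming the JSON variant (field names per the
'rbd ls -l --format json' output of that Ceph generation, so treat
them as assumptions):

    use JSON;
    use PVE::Tools qw(run_command);

    my $json = '';
    run_command(['rbd', 'ls', '-l', '-p', $pool, '--format', 'json'],
        outfunc => sub { $json .= "$_[0]\n"; });

    my $list = eval { decode_json($json) };
    die "unable to parse rbd output: $@\n" if $@;

    for my $img (@$list) {
        # sizes are already in bytes, no unit-string parsing needed
        print "$img->{image}: $img->{size} bytes\n";
    }
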
ca552c7639 Fix #1858: lvm_find_free_diskname check for base
lvm_find_free_diskname only checked for existing volumes starting with 'vm-',
and not with 'base-'.

Unify implementation with other Plugins.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2018-08-07 12:14:11 +02:00
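
The unified check has to accept both name prefixes; the relevant
matching part, sketched:

    my $vmid = 100;
    my $disk_ids = {};

    for my $lv ('vm-100-disk-1', 'base-100-disk-0') {
        # count regular volumes *and* (linked-clone) base volumes as taken
        if ($lv =~ m/^(?:vm|base)-\Q$vmid\E-disk-(\d+)$/) {
            $disk_ids->{$1} = 1;
        }
    }
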
f15ac9b5a8 LIO: followup: various small cleanups
move two loop bodies from

if (condition) {
    ...
}

to
next if !condition;

...

to save an indentation level

rename variables to a bit shorter version, i.e.:
s/oneTarget/target/
s/oneTpg/tpg/

and a comment rewording

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-08-02 14:57:07 +02:00
ff69c66022 LIO: followup: shorter stderr/out logging
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-08-02 14:56:20 +02:00
ccdf8ddbda LIO: followup: fix indentation
2018-08-02 14:56:20 +02:00
d9254744a8 LIO: followup: remove trailing whitespaces
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-08-02 14:56:20 +02:00
46c6107eb1 Linux LIO/targetcli support
Introducing LIO/targetcli support, allowing the use of recent Linux
distributions as iSCSI targets for ZFS volumes.

In order for this to work, two preconditions have to be met:

1. the portal has to be set up correctly using targetcli
2. the initiator has to be authorized to connect to the target
   based on the initiator's InitiatorName

When adding a LIO iSCSI target, the new "LIO target portal group" field
in the "Add: ZFS over iSCSI" dialog needs to be populated with the
fitting portal group name (typically something like 'tpg1').

Signed-Off-By: Udo Rader <udo.rader@bestsolution.at>
Tested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2018-08-02 14:56:20 +02:00
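
For reference, a matching /etc/pve/storage.cfg entry might look as
follows (property names, in particular 'lio_tpg', are assumptions based
on the description above):

    zfs: lio-example
            pool tank
            portal 192.0.2.10
            target iqn.2003-01.org.linux-iscsi.example:target1
            iscsiprovider LIO
            lio_tpg tpg1
            content images
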
2c2fd98b87 add metadata_size and _used to lv list
so that we can show it in the webinterface and the user can check
how full it is

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-08-02 11:16:05 +02:00
49d149e638 extend list_thinpools for multiple vgs and more information
if no vg is given, return the thinpools of all vgs;
if verbose is 1, also return information about the thinpools
(like size and free)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-08-02 11:10:33 +02:00
8cccb3447b add an option to include pvs in lvm_vgs
this will be used for the lvm part of the disk management

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-08-02 09:38:27 +02:00
074b2cb4fa remove unused Data::Dumper usages
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-07-11 12:23:44 +02:00
5527c824dd cephtools: simplify ceph_check_keyfile
2018-07-04 16:56:24 +02:00
5402cea50d cephfs plugin: followup with some code cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-07-04 15:07:33 +02:00
fe0d606f5e Use keyfile create/remove from CephTools
in the RBDPlugin, which is also shared by the CephFSPlugin

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2018-07-04 13:18:32 +02:00
e34ce14443 Cephfs storage plugin
- ability to mount through kernel and fuse client
- allow mount options
- get MONs from ceph config if not in storage.cfg
- allow the use of ceph config with fuse client
- delete secret on cephfs storage creation

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2018-07-04 13:18:19 +02:00
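
Correspondingly, a minimal storage.cfg entry for the new plugin might
look like this (option names mirror the feature list above and are
assumptions):

    cephfs: cephfs-example
            path /mnt/pve/cephfs-example
            monhost 192.0.2.21 192.0.2.22 192.0.2.23
            content backup,iso,vztmpl
            fuse 0
            username admin
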
3e47917203 Add simple keyring check for cephfs/rbd
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2018-07-04 13:13:21 +02:00