Commit Graph

800 Commits

37ab64f388 api/config update: indentation and whitespace fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-15 10:36:01 +02:00
91f42b33a0 api/config update: only iterate over hash keys, not values
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-15 10:34:12 +02:00
4273e3ace9 pvesm set: handle deletion of properties
The delete parameter gets injected by the SectionConfig's
updateSchema, but we need to handle it ourselves in the code.
This makes the following possible:

pvesm set STORAGEID --delete property

The API equivalent is now possible as well. Adapted from the HA
manager's Resource update API call.
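
A minimal sketch of the handling, with a simplified config hash and the
'delete' parameter as a comma-separated string (names illustrative):

  use strict;
  use warnings;

  sub apply_update {
      my ($scfg, $param) = @_;
      # 'delete' is injected by SectionConfig's updateSchema; strip it
      # from the update and drop the listed keys from the section config
      if (my $delete = delete $param->{delete}) {
          delete $scfg->{$_} for split /[,;\s]+/, $delete;
      }
      $scfg->{$_} = $param->{$_} for keys %$param;
      return $scfg;
  }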

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-14 16:24:34 +02:00
d560ec2860 map_volume: fall back to 'path'
Adds a fallback to 'Plugin::path' in the default implementation of
'map_volume' to simplify the common pattern of calling 'map_volume',
checking whether the result is defined, and calling 'path' if it is
not. The path is now always returned if the plugin in question does
not override 'map_volume'.
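
The default then boils down to delegating to 'path'; a minimal sketch
(argument order simplified, not verified against the module):

  sub map_volume {
      my ($class, $storeid, $scfg, $volname, $snapname) = @_;
      # default: nothing to map, hand back whatever path() returns
      my ($path) = $class->path($scfg, $volname, $storeid, $snapname);
      return $path;
  }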

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2019-04-29 13:44:40 +00:00
b5c8278a3e zpool: handle race with other zpool imports
The underlying issue is that a zpool can get imported only once, so
we first check if it's in `zpool list`, and thus imported, and only
if it does not show up there do we try to import it.

But, this can race with either:
* a parallel running activate_storage call, through CLI/API/daemon
* a zpool import from an admin (a bit unlikely, but hey that's the
  thing with race conditions ;))

So refactor the "is pool imported" check into a closure and call it
again if the import failed, silencing the error if the pool is now
listed, and thus imported. This makes it a little bit nicer to read
too, IMO.
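
A minimal sketch of the pattern, using the zpool CLI directly instead
of the plugin's request helper:

  use strict;
  use warnings;

  my $pool = 'tank';  # example pool name

  my $pool_imported = sub {
      my @pools = split /\n/, `zpool list -H -o name 2>/dev/null`;
      return grep { $_ eq $pool } @pools;
  };

  if (!$pool_imported->()) {
      system('zpool', 'import', '-d', '/dev/disk/by-id/', $pool);
      # a parallel activate_storage or a manual import may have won the
      # race; only die if the pool is still not imported
      die "could not import zpool '$pool'\n"
          if $? != 0 && !$pool_imported->();
  }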

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-18 10:01:39 +02:00
e9ab8ea313 zPool: fixup timeout setting for import
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-17 14:54:42 +00:00
a10695b4e8 zpool: cleanup zfs_request command a bit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-17 14:31:05 +00:00
dc18abe07b ZFS Pool: improve error output from activate_storage
Related to #2154 ("Buggy 'pvesm status' output"), which has some
issues with zpool list, but we do not see the error messages from
that step.
2019-04-17 08:05:04 +00:00
4af7713268 Status: Include command error in error message when storage activation fails
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2019-04-11 08:04:32 +02:00
a2a04139da followup: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-10 10:15:42 +02:00
b1f9d99017 Fix #318: Delete vzdump log when deleting a backup
Vzdump log files were not removed when a backup was deleted.
Consequently, the folder continuously filled up with .log files.
Now they get deleted after the backup is removed.

Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
2019-04-10 10:10:44 +02:00
501562d4b7 code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-08 17:48:59 +02:00
4526dffa53 Diskmanage: don't run zpool if not present
Since zfsutils are not a hard dependency of our stack, it is possible to not
have `zpool` available.

Checking for the existence of `zpool` before calling it suppresses spurious
warnings in the logs (e.g. when creating Ceph OSDs or accessing the 'Disk' tab
in the GUI).
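
A minimal sketch of the guard (the binary path is an assumption):

  use strict;
  use warnings;

  my $zpool_bin = '/sbin/zpool';  # assumed location of the binary

  my @zpools;
  if (-x $zpool_bin) {
      # only invoke zpool if zfsutils are actually installed
      @zpools = split /\n/, `$zpool_bin list -H -o name 2>/dev/null`;
  }
  # without the binary we silently treat the pool list as empty
  print "$_\n" for @zpools;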

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2019-04-08 17:47:07 +02:00
396aedff95 get_bandwidth_limits: ignore 'undef' as storage
If one of the storages passed in $storage_list was not defined,
get_bandwidth_limit died (see [0] for an occurrence of this).
This patch changes the behavior to ignore undef storages instead.

[0] https://pve.proxmox.com/pipermail/pve-devel/2019-April/036515.html
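
In essence, the storage loop now filters out undefined entries;
illustratively:

  my @storages = grep { defined($_) } @$storage_list;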

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2019-04-05 18:15:01 +02:00
c6f1315524 zfs: don't generate/update cachefile on pool import
during storage activation.

For pools that don't get imported at boot (e.g. because their vdevs are
not available when zfs-import-*.service runs) it is fatal to include
them in the cachefile; for those that do get imported at boot, this code
should never run anyway, as they are already imported.

In any case, a fallback to importing without a cachefile is the safe variant.
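
A minimal sketch of the import call with the cachefile left untouched
(pool name illustrative):

  use strict;
  use warnings;

  my $pool = 'tank';  # example pool name
  # 'cachefile=none' keeps the import from writing /etc/zfs/zpool.cache,
  # so on-demand pools never end up in the boot-time import set
  my @cmd = ('zpool', 'import', '-d', '/dev/disk/by-id/',
             '-o', 'cachefile=none', $pool);
  system(@cmd) == 0 or die "zpool import of '$pool' failed\n";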

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2019-04-03 12:18:37 +02:00
cdef3abb25 workaround zfs create -V error for unaligned sizes
Fixes the 'cannot create 'nvme/foo': volume size must be a multiple of
volume block size' error by always rounding the size up to the next 1M
boundary. This is a workaround until
https://github.com/zfsonlinux/zfs/issues/8541 is solved.

The current manpage says 128k is the maximum blocksize, but a local test
showed that values up to 1M are allowed. It might be possible to
increase it even further (see f1512ee61).
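
The rounding itself is simple integer arithmetic; a minimal sketch with
sizes in KiB:

  use strict;
  use warnings;
  use POSIX qw(ceil);

  # round a size in KiB up to the next 1 MiB (1024 KiB) boundary
  sub round_up_1m {
      my ($size_kib) = @_;
      return ceil($size_kib / 1024) * 1024;
  }

  print round_up_1m(4097), "\n";  # prints 5120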

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2019-03-30 15:38:35 +01:00
074bdd354f Storage::get_bandwidth_limit: fix if condition
Passing 'undef' as '$storage_list' led to a warning about using an
uninitialized value as an array reference.
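
The fix amounts to defaulting to an empty array reference before
dereferencing; illustratively:

  # avoid 'use of uninitialized value' warnings when dereferencing
  $storage_list = [] if !defined($storage_list);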

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2019-03-29 09:04:48 +01:00
eebcdb1119 fix tests when one has iscsi devices
The test would read the real devices, and if one was an iSCSI device
it would fail. Move the device-reading code into a sub and mock it in
the tests.
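
A minimal sketch of the mocking with Test::MockModule (package and sub
names are illustrative, not the actual ones):

  use strict;
  use warnings;
  use Test::MockModule;
  use PVE::Diskmanage;  # module under test

  # replace the device-reading sub so tests never touch real devices
  my $mock = Test::MockModule->new('PVE::Diskmanage');
  $mock->mock(read_device_info => sub {  # illustrative sub name
      my ($dev) = @_;
      return { model => 'dummy', size => 0 };
  });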

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-03-07 11:08:43 +01:00
3add8714a9 fix content listing for user mode iscsi plugin
The 'format' field is required in the result schema.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-03-07 11:08:43 +01:00
a8ec2f0227 fix #585: remove leftover disks/directory after VM creation failed
Creating a qcow2 disk image with a size larger than the space available
on the storage fails.
As qemu-img does not clean up the disk afterwards, it needs to be deleted
explicitly. Further, the vmid folder is removed once it is empty.
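
A minimal sketch of the cleanup (paths and size illustrative):

  use strict;
  use warnings;

  my $path = '/var/lib/vz/images/9999/vm-9999-disk-0.qcow2';  # illustrative
  my ($vmid_dir) = $path =~ m!^(.*)/[^/]+$!;

  if (system('qemu-img', 'create', '-f', 'qcow2', $path, '32G') != 0) {
      unlink $path;     # qemu-img may leave a partial image behind
      rmdir $vmid_dir;  # only succeeds if the folder is now empty
      die "unable to create image\n";
  }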

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2019-03-05 10:36:51 +01:00
4b5b01192e followup: try to be a bit more like systemd-escape
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-02-20 16:31:25 +01:00
fc31916384 followup: comment that we do not escape completely like systemd
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-02-20 16:30:46 +01:00
43e04c681e fix #2099: escape systemd path names in mount unit
We only allow '-', '_' and '.' in storage IDs and names,
and we do not need to escape '_' and '.' (see man 5 systemd.unit).
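
Given that, the escaping reduces to one substitution; a minimal sketch:

  use strict;
  use warnings;

  # '-' separates path elements in systemd unit names, so it is the
  # only character in our IDs that needs the \xXX escape
  sub escape_id {
      my ($id) = @_;
      $id =~ s/-/\\x2d/g;
      return $id;
  }

  print escape_id('my-storage'), "\n";  # my\x2dstorage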

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-02-20 16:06:39 +01:00
775fdc697d followup: improve comment
While the commit message explains it nicely, a comment should add
additional info for people just giving the code a quick look.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-02-05 17:42:42 +01:00
061b9ca666 check_volume_access: tighten checks for iso/tmpl
(Custom) templates might contain sensitive data, so require at least
read access on the underlying storage to access ISO and template files.

The same permissions are already needed for listing them, so this is
unlikely to cause fallout.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2019-02-05 17:14:50 +01:00
f2e5018e70 diskmanage: fix device encoding handling
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-02-04 11:39:02 +01:00
4ec588fe92 allow snippets by default for new dir storages
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-01-31 11:04:29 +01:00
d1eb35ea74 enable snippets content type for all directory based storages
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-01-31 11:04:29 +01:00
7c7ae12f43 add new content type 'snippets'
This content type will hold files that can be executed as hookscripts
or contain custom cloud-init configs.
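
Enabled on a directory storage, this could look like the following in
storage.cfg (values illustrative):

  dir: local
      path /var/lib/vz
      content iso,vztmpl,backup,snippets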

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-01-31 11:04:29 +01:00
931c35cfa0 fix #1598: use glusterfs daemon default port for online check
Use the port the main glusterfs daemon listens on as the ping port;
this is also the default used by QEMU.
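
A minimal sketch of such an online check in Perl (host illustrative;
24007 is the standard glusterd port):

  use strict;
  use warnings;
  use IO::Socket::IP;

  # a TCP connect to the glusterd port serves as the online check
  my $sock = IO::Socket::IP->new(
      PeerHost => 'gluster.example.com',  # illustrative
      PeerPort => 24007,
      Timeout  => 2,
  );
  print $sock ? "online\n" : "offline\n";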

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-01-29 08:27:27 +01:00
712e27f178 Fix #1941: remove empty directories when freeing image on FS based storages
Remove directories if they are empty, which can happen if all images
from a VM got deleted, e.g., after destroying said VM.
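
A minimal sketch of the pattern (path illustrative):

  use strict;
  use warnings;

  my $image_path = '/var/lib/vz/images/100/vm-100-disk-0.raw';  # illustrative
  my ($image_dir) = $image_path =~ m!^(.*)/[^/]+$!;

  unlink $image_path or die "unlink '$image_path' failed: $!\n";
  # rmdir only succeeds on an empty directory, so it is safe to
  # attempt unconditionally after freeing the last image
  rmdir $image_dir;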

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2019-01-24 15:09:20 +01:00
4e8de9ad56 Fix #2050: only provide 'conv=sparse' for LvmThin
LVMPlugin->volume_import (used by storage_migrate on either offline
migration with local disks, or online migration with storage-only
referenced disks) passed 'conv=sparse' to `dd`. This can lead to
data corruption if the target volume is not zero-initialized.

Dropping the sparse argument completely would fix the problem, but
would break keeping data sparse for LvmThinPlugin.

This patch moves the dd call out into an (LVM*) plugin-specific sub so
that each plugin can control the parameters.
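
A minimal sketch of the split (package and method names illustrative):

  use strict;
  use warnings;

  package MyLVMPlugin;
  # plain dd: every block gets written, safe for non-zeroed targets
  sub volume_import_dd_args { return (); }

  package MyLvmThinPlugin;
  # thin volumes read back zeros anyway, so sparse writing is safe
  # and keeps the volume thin
  sub volume_import_dd_args { return ('conv=sparse'); }

  package main;
  for my $plugin (qw(MyLVMPlugin MyLvmThinPlugin)) {
      my @cmd = ('dd', 'if=in.raw', 'of=/dev/vg0/target', 'bs=64k',
                 $plugin->volume_import_dd_args());
      print "$plugin: @cmd\n";
  }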

Steps for reproducing the issue:
* create a cluster with (at least) 2 nodes A and B, with a free
  disk-device (/dev/sdx)
* write a recognizable pattern to /dev/sdx on B:
  `dd if=/dev/zero bs=10M | tr '\000' '\255' | dd of=/dev/sdx bs=10M`
  (would be grateful for alternatives to the dd | tr | dd pipeline)
* on both A and B create a lvm-vg (pvcreate, vgcreate)
* add it as a _not_ shared storage, available on both nodes A and B
* create a small guest on A
* fill a file in the guest with zeros
  `dd if=/dev/zero of=/zerofil bs=10M`
* stop the guest, migrate it to B
* start the guest - check that the file `/zerofil` contains `ad`
  instead of `00`

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-01-18 10:46:33 +01:00
628a921a94 LVM: Add '--refresh' when activating volumes
From `man 8 lvchange`:
  --refresh
      If the logical volume is active, reload its metadata. This is not
      necessary in normal operation, but may be useful ... if you're doing
      clustering manually without a clustered lock manager.

Fixes migration in a shared LVM (iscsi) setup, where a disk gets resized on one
node A and the guest is afterwards migrated to another node B: B still presents
the old size to the guest, leading to data corruption.

It is necessary to run `lvchange` twice because the options `-ay` and
`--refresh` are mutually exclusive.
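
A minimal sketch of the activation sequence via the CLI (LV path
illustrative):

  use strict;
  use warnings;

  my $lv_path = '/dev/vg0/vm-100-disk-0';  # illustrative
  # '-ay' and '--refresh' are mutually exclusive, hence two calls
  system('lvchange', '-ay', $lv_path) == 0
      or die "activating '$lv_path' failed\n";
  system('lvchange', '--refresh', $lv_path) == 0
      or die "refreshing '$lv_path' failed\n";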

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2019-01-15 09:43:07 +01:00
955c1f2cf7 fix #2046: add volume_size_info to LVMPlugin
Without volume_size_info, a storage plugin falls back to the implementation
in PVE/Storage/Plugin.pm, which relies on `qemu-img info`.

`qemu-img info` returns wrong results on a node in the case of shared volume
groups (e.g. when sharing disks via iSCSI), if a disk was resized on another
node (it lseeks to the end of the block-device, and this yields the old size).

Using lvs directly fixes the issue, since the LVM metadata gets updated when
invoking lvs.
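
A minimal sketch of reading the size via lvs (parsing simplified):

  use strict;
  use warnings;

  sub lvm_volume_size {
      my ($vg, $lv) = @_;
      # '--units b --nosuffix' yields plain bytes; querying lvs also
      # refreshes the LVM metadata, unlike qemu-img's lseek probe
      my $out = `lvs --noheadings --units b --nosuffix -o lv_size $vg/$lv`;
      die "lvs failed\n" if $? != 0;
      $out =~ s/^\s+|\s+$//g;
      return int($out);
  }

  print lvm_volume_size('vg0', 'vm-100-disk-0'), "\n";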

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2019-01-15 09:43:07 +01:00
4b3088a0a8 get_monaddr_list: also ensure that returned 'mon addr' are defined
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-01-09 15:23:44 +01:00
187df8553e ceph: get_monaddr_list: exclude general monitor section
Otherwise, if a general MON section existed in the ceph.conf, we added an
undefined entry and a CephFS storage could not be mounted anymore.
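
A minimal sketch of the tightened selection (config hash illustrative);
this also covers the defined-check from the follow-up commit:

  use strict;
  use warnings;

  my $cfg = {
      'global' => {},
      'mon'    => {},  # general section: carries no per-monitor address
      'mon.a'  => { 'mon addr' => '10.0.0.1:6789' },
  };

  my @monaddrs;
  for my $section (grep { /^mon\./ } sort keys %$cfg) {
      my $addr = $cfg->{$section}->{'mon addr'};
      push @monaddrs, $addr if defined($addr);  # skip undefined entries
  }
  print "$_\n" for @monaddrs;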

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-01-09 13:40:21 +01:00
5b5534a9d7 fix use of uninitialized value in parse_ceph_config
Signed-off-by: David Limbeck <d.limbeck@proxmox.com>
2019-01-03 10:26:57 +01:00
71be011328 register ceph.conf parser/writer
With this we can use cfs_read_file/cfs_write_file and cfs_lock_file.

Code for writing is mostly copied from pve-manager's CephTools.pm,
with the addition of mgr sections.
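
The registration itself is one call against pve-cluster; a minimal
sketch with stubbed parser/writer (argument lists assumed):

  use strict;
  use warnings;
  use PVE::Cluster;

  sub parse_ceph_config { my ($filename, $raw) = @_; return {}; }  # stub
  sub write_ceph_config { my ($filename, $cfg) = @_; return ''; }  # stub

  # once registered, cfs_read_file/cfs_write_file/cfs_lock_file
  # handle ceph.conf like any other cluster-synced file
  PVE::Cluster::cfs_register_file('ceph.conf',
      \&parse_ceph_config, \&write_ceph_config);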

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-12-20 09:26:11 +01:00
4050fcc16b move Storage/CephTools to CephConfig
It is not a storage plugin, and it makes more sense to have it
top-level, but there we cannot name it CephTools because of the
existing module in pve-manager.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-12-20 09:26:11 +01:00
c3442aa554 Fix #2020: use /sys to map nvmeXnY to nvmeX
`nvmeX` device nodes are apparently allocated independently
of their namespace block devices `nvmeXnY`, and therefore
they are not strictly related by name. For instance:
  $ readlink /sys/block/nvme0n1/device
  ../../nvme1
  $ readlink /sys/block/nvme1n1/device
  ../../nvme0

Here /dev/nvme0n1 is the first namespace of /dev/nvme1 while
/dev/nvme1n1 is the first namespace of /dev/nvme0.
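
In Perl the mapping boils down to a single readlink; a minimal sketch:

  use strict;
  use warnings;

  # map a namespace block device (nvmeXnY) to its controller (nvmeX)
  sub nvme_controller {
      my ($blockdev) = @_;  # e.g. 'nvme0n1'
      my $link = readlink("/sys/block/$blockdev/device")
          or die "cannot resolve device link for '$blockdev'\n";
      $link =~ m!(nvme\d+)$! or die "unexpected link target '$link'\n";
      return $1;
  }

  print nvme_controller('nvme0n1'), "\n";  # e.g. 'nvme1'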

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2018-12-10 14:54:11 +01:00
d35a0b4b62 Fix #2019: CephFS storage misses maxfiles
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2018-12-07 13:47:05 +01:00
54e0b0034b cephfs: tell systemd that the mount requires network
As we mount this manually, systemd doesn't know about any dependency
for CephFS mounts, so they got unmounted only at the last stage of
shutdown, when the network wasn't active anymore.

But CephFS needs to be connected to an active MDS for a clean unmount,
so without network this mount would delay shutdown for quite a bit,
until systemd gave up after some minutes and forced the unmount.

So tell systemd that this mount requires network, which can be done
with the '_netdev'[0] mount option; luckily for us, it can also be
passed to a mount call and isn't only available in fstab.

With this, a mount gets, among others:
> Wants=network-online.target
> Before=umount.target remote-fs.target
> After=remote-fs-pre.target system.slice network.target network-online.target -.mount

Which does the trick for us.

[0]: https://www.freedesktop.org/software/systemd/man/systemd.mount.html#_netdev
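
A minimal sketch of the manual mount with the option passed along
(addresses and credentials illustrative):

  use strict;
  use warnings;

  my ($mon, $mountpoint) = ('10.0.0.1:6789', '/mnt/pve/cephfs');  # illustrative
  # '_netdev' makes systemd order the generated .mount unit after the
  # network, even though we bypass fstab and mount manually
  my @cmd = ('mount', '-t', 'ceph', "$mon:/", $mountpoint,
             '-o', 'name=admin,_netdev');
  system(@cmd) == 0 or die "mounting CephFS failed\n";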

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-27 12:10:36 +01:00
957321a86e pvesm: add scan subcommands
Change to a cleaner subcommand interface grouping all scan commands.

The old command names are kept as aliases for backward compatibility.
Best viewed with git's '-w' flag, which ignores whitespace/indentation
changes.
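
Illustratively, on the CLI (server address made up):

  # new grouped form
  pvesm scan nfs 192.168.1.10
  # old top-level form, kept as an alias
  pvesm nfsscan 192.168.1.10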

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-22 14:11:38 +01:00
98bf79f78b remove usb scan code
this is now in PVE::SysFSTools

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-11-19 13:20:04 +01:00
a0965edde7 remove PVE/API2/Storage/Scan.pm
since those are now defined in pve-manager

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-11-19 13:20:04 +01:00
7963ba74bb move Scan API calls from PVE/API2/Storage/Scan.pm to pvesm.pm
since the calls for the real API are now defined in pve-manager's Scan.pm

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-11-19 13:20:04 +01:00
a0908caa99 APIAGE followup: fix typo and print versions in error message
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-19 12:18:03 +01:00
042dd4be1f plugin loader: add an APIAGE
With the addition of the map/unmap_volume() methods we made
an (actually unnecessary) API version bump.
All current users of these methods fall back to path() when
they return undef, so plugins implementing version 1 are
in fact still compatible. (The default Plugin::map_volume()
could fall back to path() on its own, but doesn't
currently.)

For now let's just allow older plugins to also be loaded
by introducing an API age variable. With it, if we have a
reason to break older plugins, we can have a deprecation
period during which older plugins cause a warning instead
of refusing to load altogether.
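
A minimal sketch of the version/age check at plugin load time (variable
names and messages illustrative):

  use strict;
  use warnings;

  my $KNOWN_API_VERSION = 2;
  my $API_AGE = 1;  # versions down to KNOWN - AGE still load

  sub check_plugin_api_version {
      my ($plugin, $version) = @_;
      my $oldest = $KNOWN_API_VERSION - $API_AGE;
      die "plugin '$plugin' implements API version $version, "
        . "expected $oldest..$KNOWN_API_VERSION\n"
          if $version < $oldest || $version > $KNOWN_API_VERSION;
      # still loadable, but nudge the author to update
      warn "plugin '$plugin' uses outdated API version $version\n"
          if $version < $KNOWN_API_VERSION;
  }

  check_plugin_api_version('MyPlugin', 1);  # warns, but loads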

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2018-11-19 10:47:46 +01:00
49cc7802f7 LVM: lock on volume_resize
This is important for shared LVM storages. As with deletes and
creates of images, without the lock we may not have up-to-date
metadata, and extents may get reused if another node created an
image at the same time, for example.
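
A minimal sketch of the locking pattern; PVE uses a cluster-wide lock
helper, a plain flock stands in here (lock path illustrative):

  use strict;
  use warnings;
  use Fcntl qw(LOCK_EX);

  sub with_storage_lock {
      my ($lockfile, $code) = @_;
      open(my $fh, '>>', $lockfile) or die "cannot open '$lockfile': $!\n";
      flock($fh, LOCK_EX) or die "cannot lock '$lockfile': $!\n";
      my $res = eval { $code->() };
      my $err = $@;
      close($fh);
      die $err if $err;
      return $res;
  }

  # resize only while holding the lock, so LVM metadata stays current
  with_storage_lock('/var/lock/pve-lvm-vg0.lck', sub {
      system('lvextend', '-L', '+1G', '/dev/vg0/vm-100-disk-0');
  });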

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-15 10:13:57 +01:00
a4bdab17bb fix #862: do not resolve portal address on storage add
as described in #862:

> I experienced a problem with ISCSI portal when using a hostname and
> not IP.
> The GUI resolves the hostname to an IP and writes it to storage.cfg.
> As my setup requires hostnames, i needed to change the config
> manually back to the hostname which is working fine.
>
> Why is this conversion done? If I enter a hostname, i want to have a
> hostname. If i enter an IP address i want to have an IP address.

This makes sense to me; a feature of using domain names is that they
are/should be resolved when actually used (i.e., when connecting to
them), so resolving them once on add does not seem like a good idea
(if I'm not missing something; as this is a classic "imported from
SVN" commit, I have no rationale to look at).

So save the work and pass the value through as is.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-15 10:06:19 +01:00