Commit Graph

371 Commits

Author SHA1 Message Date
a0028cf97e allow rbd images < 1M to be detected
without this, having an efidisk on a Ceph storage
prevents creating another disk on the same storage:
the small (< 1M) efidisk image is not detected, so we
try to allocate a new image under the same name

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2016-10-07 08:17:40 +02:00
1993540bf8 fix #1122: correctly create LUNs for linked clones 2016-09-29 08:42:26 +02:00
d547f26c7d Fix #1012: dir storage: add is_mountpoint option
While the mkdir option deals with the case where we don't
want to clobber a mount point with directories (like ZFS,
gluster or NFS), putting a directory storage directly onto a
mount point is still risky:
If the path exists - which it usually does even if not
mounted - the storage will be considered successfully
activated, but empty (or with unexpected content). Some
operations will then lead to unexpected problems: the
free_disk operation for instance only warns if the disk does
not exist, but does not throw an error. In this case the
configuration might be updated without the real disk being
deleted. Once it's mounted back in, later operations which
check existing disks which are not part of the current VM
configuration (like migration) might error unexpectedly.

This adds an 'is_mountpoint' option to directory storages
which assumes the directory is an externally managed mount
point (eg. fstab or zfs) and changes activate_storage() to
throw an error if the path is not mounted.
2016-09-27 09:56:55 +02:00
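Illustrative usage on an existing directory storage (the storage name 'zfsdir' is made up; 'is_mountpoint' is the option added here):
$ pvesm set zfsdir --is_mountpoint yes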
c7616abcb2 path based storages: improve the mkdir option
So far this only prevented the creation of the toplevel
directory. This does not cover all problem cases,
particularly when said directory is supposed to be a mount
point, including NFS and glusterfs besides ZFS.

The directory based storages we have already use mkpath
whenever they need to create files, and for actions on files
which are supposed to exist it's fine if it errors out.
So it should also be safe to skip the creation of standard
subdirectories in activate_storage().

Additionally NFS and glusterfs storages should also accept
the mkdir option as they otherwise may exhibit similar
issues, eg. when an NFS storage is mounted onto a directory
inside a ZFS subvolume.
2016-09-27 09:54:53 +02:00
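A sketch of the intended setup, e.g. an NFS storage mounted onto a directory inside a ZFS dataset (all names and the address are made up):
nfs: nfs-on-zfs
    server 192.168.1.10
    export /tank/export
    path /tank/mnt/nfs-on-zfs
    content images
    mkdir 0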
787624dfc0 add comments about LVM thin clones 2016-09-15 14:01:07 +02:00
5510f5c9f9 fix typo 2016-09-15 13:56:17 +02:00
1b83c3d9c7 harmonize list_images code 2016-09-15 13:54:47 +02:00
cfd58f1fcc code cleanup 2016-09-14 11:31:10 +02:00
883d9b81f0 fix indentation 2016-09-14 11:23:52 +02:00
9690e55e9b rbd: detect linked clones/base volumes correctly
since the rbd images themselves are named differently than
the volumes in our config files, we need to recreate this
information from the parent relation in the ceph metadata,
otherwise list_images() might return wrong volume names/IDs

list_images() is used by PVE::Storage::vdisk_free() to
check for children still referencing a base image;
because of the wrong volume ID, RBDPlugin->parse_volname()
does not detect the base image of linked clones and the
check fails.
this is thankfully mitigated by the protected status of the
base snapshot, but creates a rather confusing error message.

scenario (VM 701 is a linked clone of template VM 700):

$ qm config 700 | grep virtio0:
virtio0: ceph_qemu:base-700-disk-1,size=2G
$ qm config 701 | grep virtio0:
virtio0: ceph_qemu:base-700-disk-1/vm-701-disk-1,size=2G

before (pvesm list reports wrong volume ID, check fails):

$ pvesm list ceph_qemu
ceph_qemu:base-700-disk-1   raw 2147483648 700
ceph_qemu:vm-701-disk-1     raw 2147483648 701
$ pvesm free ceph_qemu:base-700-disk-1
snap_unprotect: can't unprotect; at least 1 child(ren) in pool rbd
rbd unprotect base-700-disk-1 snap '__base__' error: snap_unprotect: can't unprotect; at least 1 child(ren) in pool rbd

after (correct volume ID, check works as intended):

$ pvesm list ceph_qemu
ceph_qemu:base-700-disk-1                   raw 2147483648 700
ceph_qemu:base-700-disk-1/vm-701-disk-1     raw 2147483648 701
$ pvesm free ceph_qemu:base-700-disk-1
base volume 'base-700-disk-1' is still in use (use by 'base-700-disk-1/vm-701-disk-1')
2016-09-14 11:23:39 +02:00
5e6aa346c7 rbd: use correct key to access hash elements 2016-09-14 11:17:16 +02:00
2622a5ca2d sheepdog 1.0 changed the path from /usr/sbin/dog to /usr/bin/dog 2016-09-07 09:11:43 +02:00
6d2b278c51 white space cleanups 2016-09-01 06:28:54 +02:00
e7ac2d5cf6 rbd_unittobytes: use a local var instead of a sub 2016-09-01 06:24:51 +02:00
134172255f rbd: allow a custom client ceph conf for each storeid
Different Ceph storages sometimes require different client
settings, e.g. ms_nocrc = true (among others), and the
client needs to specify these special options to be able
to connect.

This patch allows creating a Ceph config file for each storeid in

/etc/pve/priv/ceph/$storeid.conf

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2016-09-01 06:21:40 +02:00
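For example, a storage with ID 'ceph_ext' (the ID and the option value are illustrative; the path is the one introduced by this commit):
$ cat /etc/pve/priv/ceph/ceph_ext.conf
[global]
    ms_nocrc = true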
e6ccfdeb21 Remove unused pve-storage-monhost format 2016-07-11 13:58:56 +02:00
e858048fc5 rbd: use pve-storage-portal-dns-list for monhost
This way we get parameter verification on monitor addresses
as well as the ability to pass multiple `--monhost`
arguments to `pvesm add`.

Since our '-list' schemas default to using commas we now
need to properly support these, so all uses of the monhost
property now replace any of semicolon, space or comma with
the currently required separator character.
This should fix the issues reported by Alwin Antreich on the
pve-user list.

Since this schema supports both IPv6+port notations, we
need to make sure we convert to the bracket-enclosed variant.
Added a helper for this.
2016-07-11 13:58:56 +02:00
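Illustrative call (addresses are made up; note the comma-separated list and the bracketed IPv6+port form):
$ pvesm add rbd ceph_qemu --pool rbd --content images \
      --monhost "10.1.1.20,10.1.1.21,[fc00::21]:6789"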
33cef4c84e rbd: path: don't build the entire path if we don't use it 2016-07-11 13:58:56 +02:00
0423e8c686 fix indentation 2016-06-29 11:42:03 +02:00
7a9dd1195d add tagged_only option to LVM storage
to filter volumes by the 'pve-vm-ID' tag, which is set on
all volumes created via the PVE storage layer.
2016-06-29 11:42:03 +02:00
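Sketch of usage (storage and volume group names are made up):
$ pvesm add lvm lvm-tagged --vgname pve-data --content images --tagged_only 1
$ lvs -o lv_name,lv_tags pve-data   # PVE-created volumes carry a 'pve-vm-<VMID>' tag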
4a7d222204 add check if format is defined to avoid warning 2016-06-28 11:40:26 +02:00
e76dbd9204 use correct ceph version command
"ceph version" retrieves the version from the cluster (i.e.,
from the queried monitor), but what is needed here is the
local ceph version, which is returned by "ceph --version".
2016-06-14 11:46:32 +02:00
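For reference, both are standard ceph CLI invocations:
$ ceph --version   # version of the locally installed ceph binaries
$ ceph version     # version reported by the queried monitor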
5c95e48479 Dir storage creation: check for a sane path
Ideally we wouldn't need this, but for the directory
storage the path is a user-input field which gets returned
by the storage's path() method and is then used in various
external command calls.
2016-06-09 18:15:28 +02:00
602eacfe6a split udevadm command call 2016-06-09 18:15:09 +02:00
b521247bf4 fix #1012: dir: add mkdir option
By default a directory storage creates its path. In some
cases this can be undesired, mostly when storages have
nested paths (eg. a dir storage on a ZFS path or in an NFS
share, or inside custom mount points).
As a simple fix to this the 'mkdir' option (default ON)
can now be used to disable this behavior.
2016-06-07 10:47:16 +02:00
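Minimal example of opting out of path creation for a directory storage (names are made up):
$ pvesm add dir extdisk --path /mnt/extdisk --content images --mkdir 0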
0be02e0f21 s/version_parser/ceph_version_parser/ 2016-06-07 10:32:42 +02:00
d86fd0a49b disable jewel image features when using krdb
otherwise mapping those images will fail. disabling the
features only needs to be done once per image, so it makes
sense to do this when creating the images.

unfortunately, the command does not work in hammer, so
it needs a version check for jewel or higher.
2016-06-07 10:25:36 +02:00
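Roughly the equivalent manual step (pool/image name is made up; the exact feature list depends on the ceph and kernel versions):
$ rbd feature disable rbd/vm-100-disk-1 deep-flatten fast-diff object-map exclusive-lock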
7aeda03306 add ceph version helpers 2016-06-07 10:25:25 +02:00
daccf21ef7 docs: typo, newlines, cleanup 2016-04-15 16:37:01 +02:00
9e4632c2fe DRBDPlugin: check_drbd_res() ignore info codes
Messages for return codes 1 to 99 are not considered an error.
2016-04-15 08:07:01 +02:00
72e743bd65 rbd: fix error message 2016-04-11 12:57:51 +02:00
74b724a699 In path use parsed volname not the volid and add 'basevol' 2016-04-05 15:43:02 +02:00
8e5b96cac3 zfs_parse_zvol_list: simplify regex 2016-04-04 08:51:13 +02:00
851658c3b0 Change ZFS path when linked clones are used
The new naming is consistent with the Dir plugin.

So if we make a linked clone, the parent is encoded in the volume's path on the storage.
2016-04-04 06:38:15 +02:00
703de49ea9 Skip invalid properties in storage parser
There is no need to remove the whole storage entry if one property is not valid.
Just ignore the property.
2016-04-04 06:30:53 +02:00
21430e5088 Use is_worker to decide default timeout for ZFS
Bump timeout to 1 hour if running in a worker and no timeout
specified.
2016-03-15 16:50:00 +01:00
f44e50fed1 lvmthin: activate base volumes
create_base() uses '-ky' to prevent base images from being
activated by default, similar to snapshots. This means we
need to activate them like snapshots with the '-K' option.
2016-03-15 06:48:59 +01:00
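For reference, the two LVM flags involved (VG/LV names are made up):
$ lvchange -ky pve/base-100-disk-1      # set the activation-skip flag, as create_base() does
$ lvchange -ay -K pve/base-100-disk-1   # activate anyway, ignoring the skip flag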
7a047fce4a Remove content type container from GlusterFS. 2016-03-02 17:14:07 +01:00
baafddbd02 add sparseinit to has_feature
we will use this for determining
if we need to write zeros to a volume

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2016-02-24 17:16:25 +01:00
d0ea89e564 prepare storage for lvmthin gui
this patch adds an lvmthin scan to the api, so that we can get a list
of thinpools for a specific vg via an api call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2016-02-20 09:51:12 +01:00
19e5596a74 increase timeout for ZFSPlugin
This is useful on large ZFS pools because they take longer to respond.
2016-01-25 10:48:16 +01:00
030bc5c803 lvmthin: allow to clone from snapshot 2016-01-20 11:32:18 +01:00
920ecf563a remove option maxfiles from zfspool plugin
It makes no sense, because this plugin cannot store backup files.
2015-12-30 17:16:37 +01:00
1773e785c2 nfs: is_mounted: match /^nfs.*/ type
This is consistent with the old behavior.
2015-12-09 16:15:31 +01:00
aed6c85d28 nfs/glusterfs: is_mounted fixes
The parse_proc_mounts change made the glusterfs is_mounted
check fail (causing it to be shown as inactive on the GUI).
The NFS check was stricter (not allowing a trailing / in the
source anymore).
2015-12-09 09:22:15 +01:00
f482231e48 Revert "Change zfspoolplugin path when snapshot is given."
This reverts commit fdd31ce759.

The assumption was wrong. Turned out that we cannot assume
snapshots are always mounted there.
2015-12-09 07:38:36 +01:00
1f5734bb8d allow rx permissions for group/world on .subvol dirs
vdisk_alloc comes in with an umask of 0037, so the .subvol
dir ends up with permissions 0740; since that directory
becomes the root directory of the container, users inside
the container are essentially prevented from accessing
anything.
2015-11-26 12:04:59 +01:00
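Quick shell illustration of where the 0740 mode comes from:
$ (umask 0037; mkdir /tmp/subvol-test; stat -c '%a' /tmp/subvol-test)
740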
fdd31ce759 Change zfspoolplugin path when snapshot is given.
Zfs subvol snapshots are always mounted under $path_of_subvol/.zfs/$snapname
2015-11-19 12:36:19 +01:00
281f958706 Fixed ZFS over iSCSI snapshot rollback
I converted several zfs_request($class, ...) calls to $class->zfs_request(...) calls in ZFSPoolPlugin.pm and removed a superfluous $class parameter in ZFSPlugin.pm.

Fixes #816

Signed-off-by: Phillip Schichtel <phillip.public@schich.tel>
2015-11-18 11:00:40 +01:00
80b647882e make use of the new ProcFSTools::parse_proc_mounts 2015-11-14 10:37:06 +01:00