Commit Graph

629 Commits

SHA1 Message Date
1993540bf8 fix #1122: correctly create LUNs for linked clones 2016-09-29 08:42:26 +02:00
e3b02ffe6e disks: fix typo 2016-09-29 08:42:15 +02:00
1c99955364 disks: parse smart attributes using RE 2016-09-29 08:42:06 +02:00
0c486b09df disks: use smartctl -H -A
to only list SMART health and attributes, instead of
"smartctl -a", which prints "all SMART information"
2016-09-29 08:41:31 +02:00
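For illustration, the difference in practice (device path is just
an example):

$ smartctl -a /dev/sda     # full report: info, capabilities, health, attributes, logs
$ smartctl -H -A /dev/sda  # only the health assessment and the attribute table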
acd3d91649 move SMART error handling into get_disks
because we never ever want to die in get_disks because of a
single disk, but the nodes/xyz/disks/smart API path is
allowed to fail if a disk device is unsupported by smartctl
or something else goes wrong.
2016-09-29 08:40:19 +02:00
d547f26c7d Fix #1012: dir storage: add is_mountpoint option
While the mkdir option deals with the case where we don't
want to clobber a mount point with directories (like ZFS,
gluster or NFS), putting a directory storage directly onto a
mount point is still risky:
If the path exists - which it usually does even if not
mounted - the storage will be considered successfully
activated, but empty (or with unexpected content). Some
operations will then lead to unexpected problems: the
free_disk operation for instance only warns if the disk does
not exist, but does not throw an error. In this case the
configuration might be updated without the real disk being
deleted. Once it's mounted back in, later operations which
check existing disks which are not part of the current VM
configuration (like migration) might error unexpectedly.

This adds an 'is_mountpoint' option to directory storages
which assumes the directory is an externally managed mount
point (e.g. fstab or zfs) and changes activate_storage() to
throw an error if the path is not mounted.
2016-09-27 09:56:55 +02:00
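An illustrative storage.cfg entry combining both safeguards
(storage name and path are examples):

dir: ext-disk
        path /mnt/ext-disk
        content images,backup
        mkdir 0
        is_mountpoint 1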
c7616abcb2 path based storages: improve the mkdir option
So far this only prevented the creation of the top-level
directory. This does not cover all problem cases,
particularly when said directory is supposed to be a mount
point, including NFS and glusterfs besides ZFS.

The directory based storages we have already use mkpath
whenever they need to create files, and for actions on files
which are supposed to exist it's fine if it errors out.
So it should also be safe to skip the creation of standard
subdirectories in activate_storage().

Additionally NFS and glusterfs storages should also accept
the mkdir option as they otherwise may exhibit similar
issues, e.g. when an NFS storage is mounted onto a directory
inside a ZFS subvolume.
2016-09-27 09:54:53 +02:00
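For example, an existing NFS storage mounted inside a ZFS dataset
could then be told not to create its directories (storage name is
an example):

$ pvesm set nfs-on-zfs --mkdir 0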
ff3badd83f white space cleanups 2016-09-26 13:40:43 +02:00
a9ef8ffb16 Avoid JavaScript getting a string "0".
If JavaScript gets a "0" it converts it to the boolean false.
So to ensure the GUI always gets a valid int we cast the values.
2016-09-26 13:38:56 +02:00
e968c5da43 bump version to 4.0-61 2016-09-16 07:59:14 +02:00
787624dfc0 add comments about LVM thin clones 2016-09-15 14:01:07 +02:00
5510f5c9f9 fix typo 2016-09-15 13:56:17 +02:00
1b83c3d9c7 harmonize list_images code 2016-09-15 13:54:47 +02:00
17fb7e4215 move check for existing clones into its own method
and change its return type to boolean
2016-09-15 13:52:57 +02:00
9924228be1 remove unused method
it was only used by a test case, which should use what the
rest of the codebase uses as well
2016-09-15 13:42:55 +02:00
cfd58f1fcc code cleanup 2016-09-14 11:31:10 +02:00
3718e83ab5 fix error message 2016-09-14 11:24:06 +02:00
883d9b81f0 fix indentation 2016-09-14 11:23:52 +02:00
9690e55e9b rbd: detect linked clones/base volumes correctly
Since the rbd images themselves are named differently from
the volumes in our config files, we need to recreate this
information from the parent relation in the ceph metadata,
otherwise list_images() might return wrong volume names/IDs.

Since list_images is used by PVE::Storage::vdisk_free() to
check for children still referencing a base image, the wrong
volume ID means RBDPlugin->parse_volname() does not detect
the base image of linked clones and the check fails.
This is thankfully mitigated by the protected status of the
base snapshot, but it creates a rather confusing error message.

scenario (VM 701 is a linked clone of template VM 700):

$ qm config 700 | grep virtio0:
virtio0: ceph_qemu:base-700-disk-1,size=2G
$ qm config 701 | grep virtio0:
virtio0: ceph_qemu:base-700-disk-1/vm-701-disk-1,size=2G

before (pvesm list reports wrong volume ID, check fails):

$ pvesm list ceph_qemu
ceph_qemu:base-700-disk-1   raw 2147483648 700
ceph_qemu:vm-701-disk-1     raw 2147483648 701
$ pvesm free ceph_qemu:base-700-disk-1
snap_unprotect: can't unprotect; at least 1 child(ren) in pool rbd
rbd unprotect base-700-disk-1 snap '__base__' error: snap_unprotect: can't unprotect; at least 1 child(ren) in pool rbd

after (correct volume ID, check works as intended):

$ pvesm list ceph_qemu
ceph_qemu:base-700-disk-1                   raw 2147483648 700
ceph_qemu:base-700-disk-1/vm-701-disk-1     raw 2147483648 701
$ pvesm free ceph_qemu:base-700-disk-1
base volume 'base-700-disk-1' is still in use (use by 'base-700-disk-1/vm-701-disk-1')
2016-09-14 11:23:39 +02:00
5e6aa346c7 rbd: use correct key to access hash elements 2016-09-14 11:17:16 +02:00
fa1ed7a341 bump version to 4.0-60 2016-09-09 06:40:50 +02:00
9018a4e639 do not automatically die on smartctl exit code > 0
Since smartctl uses its exit code to encode the disk health
status (such as failures in the past), we cannot die there,
but have to parse the return code instead

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2016-09-08 16:52:33 +02:00
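To illustrate: smartctl(8) defines its exit status as a bit mask
(bit 3, for example, means the disk is failing), so a call like
the following can exit non-zero while still printing usable output
(device path is an example):

$ smartctl -H -A /dev/sda; echo "exit status: $?"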
16171cedec bump version to 4.0-59 2016-09-07 11:14:29 +02:00
07ccd0f05d add smartmontools as dependency
since we need it in the diskmanager module, add it as a
dependency

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2016-09-07 11:13:36 +02:00
643e8fd163 use new repoman for upload target 2016-09-07 09:35:58 +02:00
a121b2ec7f bump version to 4.0-58 2016-09-07 09:14:53 +02:00
2622a5ca2d sheepdog 1.0 changed the path from /usr/sbin/dog to /usr/bin/dog 2016-09-07 09:11:43 +02:00
409f8203e0 add api entries for disk management
adds a new class (intended to be used under nodes in pve-manager)
which adds the three API calls: list, smart and init

list being a general list of the available disks with info
smart being a call to get the SMART data for a given device
init being a call to write a GPT header to an unused disk

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2016-09-05 13:49:28 +02:00
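A sketch of how these calls might be exercised once wired into
pve-manager (paths and parameter names are illustrative, not taken
from this commit):

$ pvesh get /nodes/<node>/disks/list
$ pvesh get /nodes/<node>/disks/smart --disk /dev/sda
$ pvesh create /nodes/<node>/disks/init --disk /dev/sdb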
cbba9b5b9c add Diskmanage Utilities
This adds the functions for listing the disks (mostly copied from
the ceph code), checking whether a disk is a valid block device
and whether it is used in a zfs pool or as an lvm pv, and an init
function (just to add a GPT header; this is important if one wants
to use a fresh disk for ceph journals)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2016-09-05 11:31:19 +02:00
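A minimal sketch of how a caller might use the listing helper
(module name assumed from the commit subject; the exact return
structure is not shown here):

use PVE::Diskmanage;

# hash of detected disks with usage and SMART information (assumed)
my $disks = PVE::Diskmanage::get_disks();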
6d2b278c51 white space cleanups 2016-09-01 06:28:54 +02:00
e7ac2d5cf6 rbd_unittobytes: use a local var instead of a sub 2016-09-01 06:24:51 +02:00
134172255f rbd: allow using a custom client ceph conf for each storeid
When using different ceph storages, some of them may need
different settings, like ms_nocrc = true (and there are others).

The client needs to specify these special options to be able
to connect.

This patch allows creating a ceph config file for each storeid in
/etc/pve/priv/ceph/$storeid.conf

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2016-09-01 06:21:40 +02:00
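As an illustration, such a file might contain nothing more than
the client-side overrides (storeid is an example, ms_nocrc is the
option mentioned above):

# /etc/pve/priv/ceph/ceph_qemu.conf
[global]
ms_nocrc = true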
4dee23d305 Add support for custom storage plugins
The PVE team cannot support specialized vendor-specific storage
plugins because of a lack of hardware. But we can allow users to
add their own plugins for their storage without any need to
rewrite PVE code, which also eases PVE updates for them.

The idea of this patch is to add the folder
/usr/share/perl5/PVE/Storage/Custom where users can place their
plugins; PVE will automatically load them on start, or warn if it
could not and continue. Maybe we could even load all plugins
(except PVE::Storage::Plugin itself) this way, because the current
storage plugins are not really plugins if they need to be
explicitly loaded in PVE code :-).

Custom plugins MUST have an api() method returning the version for
which they were designed. If the API changes on the PVE side, the
module is simply not registered and a warning message is printed
to the log, so the user has to update the module. Until the module
is updated, the corresponding storage will just disappear from
PVE, so an API change should not cause any data damage.

This approach works (with some limitations) if the plugin works in
the generic PVE way: full control of the volume lifecycle. It will
not currently work for custom plugins like iSCSI, which need to
select pre-existing volumes. Maybe someone will add a more
flexible way for pve-manager to select input elements for storage
plugins to address this.

Currently tested with my NetApp plugin.

Signed-off-by: Dmitry Petuhov <mityapetuhov@gmail.com>
2016-08-26 15:44:49 +02:00
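A minimal, hypothetical skeleton of such a custom plugin (class
name, storage type and the returned version number are examples;
only the api() contract is the one described above):

package PVE::Storage::Custom::ExamplePlugin;

use strict;
use warnings;

use base qw(PVE::Storage::Plugin);

# Version of the storage plugin API this module was written against.
# If it no longer matches what PVE::Storage expects, the plugin is
# skipped with a warning instead of being registered.
sub api {
    return 1;
}

# unique storage type name used in storage.cfg
sub type {
    return 'example';
}

# ... the remaining PVE::Storage::Plugin methods (plugindata,
# options, path, alloc_image, free_image, ...) would be implemented
# or inherited here ...

1;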
c2c8789dc8 bump version to 4.0-57 2016-08-19 14:59:41 +02:00
f3b3b2a3b7 remove compression option from lvm migration
ssh(1) mentions that compression is only desirable on slow
connections.

Since migration from cluster node to cluster node needs a
fast network anyway, we can drop the compression for
a speed improvement

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2016-08-05 08:36:13 +02:00
82fc923fd4 fix spelling / grammar 2016-07-13 13:59:26 +02:00
dd6c784ca8 bump version to 4.0-56 2016-07-11 14:24:32 +02:00
e6ccfdeb21 Remove unused pve-storage-monhost format 2016-07-11 13:58:56 +02:00
e858048fc5 rbd: use pve-storage-portal-dns-list for monhost
This way we get parameter verification on monitor addresses
as well as the ability to pass multiple `--monhost`
arguments to `pvesm add`.

Since our '-list' schemas default to using commas we now
need to properly support these, so all uses of the monhost
property now replace any of semicolon, space or comma with
the currently required character.
This should fix the issues reported by Alwin Antreich on the
pve-user list.

Since this schema supports both ipv6+port notations we need
to make sure we convert to the bracket enclosed variant.
Added a helper for this.
2016-07-11 13:58:56 +02:00
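For illustration, a storage added this way (storage id, pool and
monitor addresses are examples; note the bracket notation for the
IPv6 monitor):

$ pvesm add rbd ceph_qemu --pool rbd \
      --monhost "10.0.0.1,10.0.0.2,[fc00::1]:6789"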
33cef4c84e rbd: path: don't build the entire path if we don't use it 2016-07-11 13:58:56 +02:00
0423e8c686 fix indentation 2016-06-29 11:42:03 +02:00
7a9dd1195d add tagged_only option to LVM storage
to filter volumes by the 'pve-vm-ID' tag, which is set on
all volumes created via the PVE storage layer.
2016-06-29 11:42:03 +02:00
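For illustration, enabling the filter on an existing LVM storage
and inspecting the tags it relies on (storage and volume group
names are examples):

$ pvesm set san-lvm --tagged_only 1
$ lvs -o lv_name,lv_tags san_vg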
4a7d222204 add check if format is defined to avoid warning 2016-06-28 11:40:26 +02:00
d2c0bb589b bump version to 4.0-55 2016-06-17 14:57:10 +02:00
966ecef2e8 fix #1033 storage_migrate on LVMThin - add die.
This is necessary to ensure the process finishes properly.
2016-06-17 14:55:38 +02:00
e967e0ef27 fix #1022 correct typo 2016-06-14 12:32:26 +02:00
8a6f69c9b0 bump version to 4.0-54 2016-06-14 11:47:41 +02:00
e76dbd9204 use correct ceph version command
"ceph version" retrieves the version from the cluster (i.e.,
from the queried monitor), but what is needed here is the
local ceph version, which is returned by "ceph --version".
2016-06-14 11:46:32 +02:00
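The two commands side by side:

$ ceph version      # version reported by the queried monitor, i.e. the cluster
$ ceph --version    # version of the locally installed ceph binaries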
ed1f84c26f bump version to 4.0-53 2016-06-09 18:16:57 +02:00
5c95e48479 Dir storage creation: check for a sane path
Ideally we wouldn't need this, but with the directory
storage the path is a user-input field which gets returned
by the storage's path() method, which in turn is used in
various external command calls.
2016-06-09 18:15:28 +02:00