Commit Graph

402 Commits

SHA1 Message Date
b43d0f3043 ZFSPoolPlugin.pm: remove unused code 2017-06-08 08:45:22 +02:00
636ac5b82f PVE::Storage::volume_snapshot_list - remove comment about ordering
Some storage types support arbitrary snapshot trees, so there is
no strict ordering relation.
2017-06-07 06:36:55 +02:00
8b622c2dff PVE::Storage::volume_snapshot_list - remove $prefix parameter
Always return the full list of snapshots. Users of this library can easily
filter with a simple 'grep' instead.
2017-06-07 06:20:07 +02:00
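
A minimal Perl sketch of the caller-side filtering this commit suggests; the exact post-change signature of volume_snapshot_list() and the '__replicate_' prefix are assumptions:

    my $snaps = PVE::Storage::volume_snapshot_list($cfg, $volid);
    # keep only the snapshots we care about, e.g. replication snapshots,
    # instead of passing a $prefix into the storage layer
    my @filtered = grep { /^__replicate_/ } @$snaps;
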
a3f38a644c fix #1379: return size as number instead of string
this caused the web interface to sort alphabetically instead of numerically
when sorting by image size

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2017-06-02 10:24:30 +02:00
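
The usual Perl fix for this class of bug is to force numeric context before the value is JSON-encoded; a sketch (the hash layout is an assumption):

    # '+ 0' puts the scalar into numeric context, so the JSON encoder
    # emits 1234 rather than "1234" and the GUI can sort numerically
    $info->{size} = $info->{size} + 0;
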
d390328bfd api: add import/export format querying 2017-05-12 14:42:17 +02:00
47f37b5362 pvesm: import/export commands 2017-05-12 14:42:16 +02:00
17be2e9a0c volume_snapshot_list: remove $ip parameter
We want to handle ssh connections somewhere else (not inside the
storage plugins).
2017-05-10 07:02:42 +02:00
3d4949692a Revert "Include new storage function volume_send."
This reverts commit b76774e57f.
2017-05-10 06:58:44 +02:00
44257d2e38 Revert "Add ip parameter in zfs_request to execute on remote host."
This reverts commit c4bb4a3d19.
2017-05-10 06:55:42 +02:00
889d7485cb Revert "Add function volume_snapshot_delete_remote."
This reverts commit 4bd0b38f53.
2017-05-10 06:55:00 +02:00
f189504ccb Add replicate as new storage feature.
This feature indicates that the storage can send and receive images.
2017-04-28 10:05:27 +02:00
4bd0b38f53 Add function volume_snapshot_delete_remote.
We need this function for replication to handle snapshots on remote nodes.
2017-04-28 10:05:27 +02:00
c4bb4a3d19 Add ip parameter in zfs_request to execute on remote host.
We need this function to delete remote snapshots.
2017-04-28 10:05:27 +02:00
aefe82ea03 Include new storage function volume_snapshot_list.
Returns a list of snapshots (youngest snap first) from a given volid.
It is possible to use a prefix to filter the list.
2017-04-28 10:05:27 +02:00
b76774e57f Include new storage function volume_send.
If the storage backend supports import and export,
we can send the content to a remote host.
2017-04-28 10:05:27 +02:00
c59a0f2452 rbd: fix rados df parser for luminous 2017-03-21 10:13:02 +01:00
f5451f288d remove immutable flag also for subvols on directory storage
or else the removal of such templates (with rootfs size 0) fails

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2017-03-10 09:21:53 +01:00
d746da86d8 rbd: fix exit from 'rbd ls -l' parser 2017-02-23 09:22:57 +01:00
b2e430d30f rbd: minor regex fixup 2017-02-20 10:00:57 +01:00
292a33fdd4 rbd: fix typo
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2017-02-13 12:13:06 +01:00
4db25fa2a6 rbd: use consistent image name schemes
since we allow vm-ID-whatever when allocating images, we
should also include those when listing them.

note: '@' is reserved for snapshots in ceph, so it is safe to
skip lines including an '@' in the image name.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2017-02-13 12:13:06 +01:00
53a236f2c5 rbd: use 'rbd ls' without '-l' to find free names
with more than a few images, 'rbd ls -l' gets rather slow
compared to a simple 'rbd ls'. since we only need to check
existing image names for finding a free one, the latter is
sufficient.

example with ~400 rbd images:
$ time rbd ls -p ceph-vm > /dev/null
real    0m0.027s
user    0m0.012s
sys     0m0.008s

$ time rbd ls -l -p ceph-vm > /dev/null
real    0m5.250s
user    0m1.632s
sys     0m0.584s

a linked clone of two disks on the same setup accordingly
also shows a massive speedup:

$ time qm clone 1000 10000 -snap test
create linked clone of drive scsi0 (ceph-vm:vm-1000-disk-2)
clone vm-1000-disk-2: vm-1000-disk-2 snapname test to
vm-10000-disk-1
create linked clone of drive scsi1 (ceph-vm:vm-1000-disk-1)
clone vm-1000-disk-1: vm-1000-disk-1 snapname test to
vm-10000-disk-2

real    0m11.157s
user    0m3.752s
sys     0m1.308s

$ time qm clone 1000 10000 -snap test
create linked clone of drive scsi1 (ceph-vm:vm-1000-disk-1)
clone vm-1000-disk-1: vm-1000-disk-1 snapname test to
vm-10000-disk-1
create linked clone of drive scsi0 (ceph-vm:vm-1000-disk-2)
clone vm-1000-disk-2: vm-1000-disk-2 snapname test to
vm-10000-disk-2

real    0m0.872s
user    0m0.652s
sys     0m0.096s

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2017-02-13 12:13:06 +01:00
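
Plain 'rbd ls' suffices here because finding a free name only needs the set of existing names, not sizes or parents; a sketch of that search (variable names assumed):

    # @$names: existing image names as printed by 'rbd ls'
    my %used = map { $_ => 1 } @$names;
    my $i = 1;
    $i++ while $used{"vm-$vmid-disk-$i"};
    my $free_name = "vm-$vmid-disk-$i";
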
ef881e10eb fixes for new PVE::RPCEnvironment implementation
Use PVE::RPCEnvironment->is_worker() instead of
PVE::RPCEnvironment::is_worker().
2017-01-19 09:14:41 +01:00
e438d0940f fix #1252: rbd: delete snapshots when using krbd 2017-01-17 10:32:23 +01:00
a4aee43380 Fix RBD resize with krbd option enabled.
With krbd we resize the volume and tell QemuServer to notify the running QEMU
with zero $size by returning undef.

Signed-off-by: Dmitry Petuhov <mityapetuhov@gmail.com>
2017-01-16 09:14:00 +01:00
e2e74320b7 sheepdog : volume_resize return if running
fix : https://forum.proxmox.com/threads/sheepdog-disk-resize-not-working.31760/

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2017-01-09 13:27:11 +01:00
2ff3b506a1 use qemu gluster blockdriver for linked clone creation
this works around a bug where qemu does not align the qcow2 file
when using the filesystem directly, so the gluster blockdriver
refuses to read from it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2016-11-29 09:40:10 +01:00
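
A sketch of the workaround: hand qemu-img a gluster:// URL for the backing file instead of the FUSE-mounted path (the server, volume and path variables here are hypothetical):

    use PVE::Tools qw(run_command);

    # let qemu's gluster blockdriver create the aligned qcow2 itself
    my $backing = "gluster://$server/$glustervolume/images/$vmid/$basename";
    run_command(['/usr/bin/qemu-img', 'create', '-b', $backing,
                 '-f', 'qcow2', "$imagedir/$name"]);
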
72bdeea1bf increase default timeout for zpool import
as zpool import can easily take longer than 5 seconds on
systems with lots of disks
2016-11-29 09:30:26 +01:00
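
A sketch of such a call via PVE::Tools::run_command, which accepts a timeout option; the exact flags and the new value are assumptions:

    use PVE::Tools qw(run_command);

    # allow more than the old 5 seconds for pools with many disks
    run_command(['zpool', 'import', '-d', '/dev/disk/by-id/', $pool],
                timeout => 15);
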
e2e6380112 improve zpool activate_storage
the old code was way too broad here, this fixes at least the
following issues:
- importing of other/unconfigured zpools by "import -a"
- possible false positives if a pool name is a substring of
  another pool name because of "list" without pool name,
  potentially skipping activation for such pools
- not noticing failure to activate in activate_storage
  because the success of "zpool import -a" does not tell us
  anything about the pool we actually wanted to import

checking specifically for the pool to be activated when
calling "zpool list" gets rid of the second issue, and
trying to import only that pool fixes the other two.
2016-11-29 09:29:33 +01:00
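
A sketch of the per-pool check and import described above (error handling trimmed, flags assumed):

    # probe for the specific pool instead of parsing the full list,
    # so a pool name being a substring of another cannot fool us
    eval {
        run_command(['zpool', 'list', '-H', '-o', 'name', $pool],
                    timeout => 5);
    };
    if ($@) {
        # not imported yet: import exactly this pool, never '-a'
        run_command(['zpool', 'import', '-d', '/dev/disk/by-id/', $pool]);
    }
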
4b7dd9d743 allow --allow-shrink on RBD resize
Signed-off-by: Stefan Priebe <s.priebe@profihost.ag>
2016-11-17 09:06:55 +01:00
8b5ccc06b7 fix #1165: only check mount status when is_mountpoint is set
otherwise the status() method returns garbage for non-mount
point directory storages.
2016-10-13 08:29:30 +02:00
a0028cf97e allow rbd images < 1M to be detected
without this, having an efidisk on a ceph storage
prevents creating another disk on the same
ceph storage, because it will not be detected
and we try to allocate one with the same name

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2016-10-07 08:17:40 +02:00
1993540bf8 fix #1122: correctly create LUNs for linked clones 2016-09-29 08:42:26 +02:00
d547f26c7d Fix #1012: dir storage: add is_mountpoint option
While the mkdir option deals with the case where we don't
want to clobber a mount point with directories (like ZFS,
gluster or NFS), putting a directory storage directly onto a
mount point is still risky:
If the path exists - which it usually does even if not
mounted - the storage will be considered successfully
activated, but empty (or with unexpected content). Some
operations will then lead to unexpected problems: the
free_disk operation for instance only warns if the disk does
not exist, but does not throw an error. In this case the
configuration might be updated without the real disk being
deleted. Once it's mounted back in, later operations which
check existing disks which are not part of the current VM
configuration (like migration) might error unexpectedly.

This adds an 'is_mountpoint' option to directory storages
which assumes the directory is an externally managed mount
point (e.g. fstab or zfs) and changes activate_storage() to
throw an error if the path is not mounted.
2016-09-27 09:56:55 +02:00
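
A sketch of the activation check; parse_proc_mounts() from PVE::ProcFSTools is what comparable code uses, but its use here and the entry layout are assumptions:

    if ($scfg->{is_mountpoint}) {
        # each entry is roughly [$device, $mountpoint, $fstype, $opts]
        my $mounts = PVE::ProcFSTools::parse_proc_mounts();
        my $mounted = grep { $_->[1] eq $path } @$mounts;
        die "unable to activate storage '$storeid' - " .
            "directory '$path' is not mounted\n" if !$mounted;
    }
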
c7616abcb2 path based storages: improve the mkdir option
So far this only prevented the creation of the toplevel
directory. This does not cover all problem cases,
particularly when said directory is supposed to be a mount
point, including NFS and glusterfs besides ZFS.

The directory based storages we have already use mkpath
whenever they need to create files, and for actions on files
which are supposed to exist it's fine if it errors out.
So it should also be safe to skip the creation of standard
subdirectories in activate_storage().

Additionally NFS and glusterfs storages should also accept
the mkdir option as they otherwise may exhibit similar
issues, e.g. when an NFS storage is mounted onto a directory
inside a ZFS subvolume.
2016-09-27 09:54:53 +02:00
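
A sketch of the resulting behaviour in activate_storage(); the subdirectory names are illustrative:

    use File::Path qw(mkpath);

    # skip creating the standard subdirectories when 'mkdir' is disabled,
    # so an unmounted mount point is not clobbered with directories
    if (!defined($scfg->{mkdir}) || $scfg->{mkdir}) {
        mkpath "$path/$_" for qw(images iso vztmpl);
    }
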
787624dfc0 add comments about LVM thin clones 2016-09-15 14:01:07 +02:00
5510f5c9f9 fix typo 2016-09-15 13:56:17 +02:00
1b83c3d9c7 harmonize list_images code 2016-09-15 13:54:47 +02:00
cfd58f1fcc code cleanup 2016-09-14 11:31:10 +02:00
883d9b81f0 fix indentation 2016-09-14 11:23:52 +02:00
9690e55e9b rbd: detect linked clones/base volumes correctly
since the rbd images themselves are named differently than
the volumes in our config files, we need to recreate this
information from the parent relation in the ceph metadata,
otherwise list_images() might return wrong volume names/IDs

list_images() is used by PVE::Storage::vdisk_free() to
check for children still referencing a base image; because
of the wrong volume ID, RBDPlugin->parse_volname() does not
detect the base image of linked clones and the check fails.
this is thankfully mitigated by the protected status of the
base snapshot, but creates a rather confusing error message.

scenario (VM 701 is a linked clone of template VM 700):

$ qm config 700 | grep virtio0:
virtio0: ceph_qemu:base-700-disk-1,size=2G
$ qm config 701 | grep virtio0:
virtio0: ceph_qemu:base-700-disk-1/vm-701-disk-1,size=2G

before (pvesm list reports wrong volume ID, check fails):

$ pvesm list ceph_qemu
ceph_qemu:base-700-disk-1   raw 2147483648 700
ceph_qemu:vm-701-disk-1     raw 2147483648 701
$ pvesm free ceph_qemu:base-700-disk-1
snap_unprotect: can't unprotect; at least 1 child(ren) in pool rbd
rbd unprotect base-700-disk-1 snap '__base__' error: snap_unprotect: can't unprotect; at least 1 child(ren) in pool rbd

after (correct volume ID, check works as intended):

$ pvesm list ceph_qemu
ceph_qemu:base-700-disk-1                   raw 2147483648 700
ceph_qemu:base-700-disk-1/vm-701-disk-1     raw 2147483648 701
$ pvesm free ceph_qemu:base-700-disk-1
base volume 'base-700-disk-1' is still in use (use by 'base-700-disk-1/vm-701-disk-1')
2016-09-14 11:23:39 +02:00
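
A sketch of reconstructing the config-style volume name from ceph's parent metadata; the exact parent string format is an assumption:

    # ceph reports a clone's parent as e.g. 'rbd/base-700-disk-1@__base__';
    # the volume name must then become 'base-700-disk-1/vm-701-disk-1'
    my ($basename) = $parent =~ m|/(base-\d+-disk-\d+)\@__base__$|;
    my $volname = defined($basename) ? "$basename/$image" : $image;
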
5e6aa346c7 rbd: use correct key to access hash elements 2016-09-14 11:17:16 +02:00
2622a5ca2d sheepdog 1.0 changed the path from /usr/sbin/dog to /usr/bin/dog 2016-09-07 09:11:43 +02:00
6d2b278c51 white space cleanups 2016-09-01 06:28:54 +02:00
e7ac2d5cf6 rbd_unittobytes: use a local var instead of a sub 2016-09-01 06:24:51 +02:00
134172255f rbd: allow to use client custom ceph conf for each storeid
If you want to use different ceph storages, sometimes they have
different values, like ms_nocrc = true (there are others as well).

The client needs to specify these special options to be able to connect.

This patch allows creating a ceph config file for each storeid in

/etc/pve/priv/ceph/$storeid.conf

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2016-09-01 06:21:40 +02:00
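
A sketch of wiring the per-storage file into the rbd command line via its -c/--conf option; only the path is given by the commit, the rest is assumed:

    my $cephconf = "/etc/pve/priv/ceph/${storeid}.conf";
    # fall back to the default config when no per-storage file exists
    unshift @$cmd, '-c', $cephconf if -e $cephconf;
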
e6ccfdeb21 Remove unused pve-storage-monhost format 2016-07-11 13:58:56 +02:00
e858048fc5 rbd: use pve-storage-portal-dns-list for monhost
This way we get parameter verification on monitor addresses
as well as the ability to pass multiple `--monhost`
arguments to `pvesm add`.

Since our '-list' schemas default to using commas, we now
need to support these properly, so all uses of the monhost
property now normalize semicolons, spaces and commas into
the currently required separator character.
This should fix the issues reported by Alwin Antreich on the
pve-user list.

Since this schema supports both ipv6+port notations, we need
to make sure we convert to the bracket-enclosed variant.
Added a helper for this.
2016-07-11 13:58:56 +02:00
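
A sketch of the normalization and the ipv6 bracket helper mentioned above; split_list() from PVE::Tools splits on commas, semicolons and whitespace, while the helper shown here is hypothetical:

    use PVE::Tools;

    sub bracket_ipv6 {
        my ($addr) = @_;
        # a bare ipv6 address has at least two colons and no brackets yet
        return "[$addr]" if $addr !~ /^\[/ && $addr =~ /:.*:/;
        return $addr;
    }

    my @mons = PVE::Tools::split_list($scfg->{monhost});
    # $sep: whatever separator the consuming command/config requires
    my $hosts = join($sep, map { bracket_ipv6($_) } @mons);
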
33cef4c84e rbd: path: don't build the entire path if we don't use it 2016-07-11 13:58:56 +02:00
0423e8c686 fix indentation 2016-06-29 11:42:03 +02:00