without this, having an efidisk on a ceph storage prevents
creating another disk on the same ceph storage, because the
efidisk image is not detected and we try to allocate a new
disk with the same name
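as an illustration (a sketch with hypothetical names, not the actual plugin code): a free-name search that only consults the names returned by list_images() will hand out a name that is already taken whenever the efidisk image is missing from that list:
use strict;
use warnings;
# hypothetical sketch of a free-name search based on list_images()
sub find_free_diskname_sketch {
    my ($vmid, $listed_names) = @_;           # $listed_names: what list_images() reported
    my %used = map { $_ => 1 } @$listed_names;
    my $i = 1;
    $i++ while $used{"vm-$vmid-disk-$i"};
    return "vm-$vmid-disk-$i";                # if the efidisk was not listed, its name comes back again
}
print find_free_diskname_sketch(700, ['vm-700-disk-1']), "\n";   # -> vm-700-disk-2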
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since the rbd images themselves are named differently from
the volumes in our config files, we need to reconstruct this
information from the parent relation in the ceph metadata,
otherwise list_images() might return wrong volume names/IDs.
list_images() is used by PVE::Storage::vdisk_free() to check
for children still referencing a base image. because of the
wrong volume ID, RBDPlugin->parse_volname() does not detect
the base image of linked clones and the check fails.
this is thankfully mitigated by the protected status of the
base snapshot, but produces a rather confusing error message.
scenario (VM 701 is a linked clone of template VM 700):
$ qm config 700 | grep virtio0:
virtio0: ceph_qemu:base-700-disk-1,size=2G
$ qm config 701 | grep virtio0:
virtio0: ceph_qemu:base-700-disk-1/vm-701-disk-1,size=2G
before (pvesm list reports wrong volume ID, check fails):
$ pvesm list ceph_qemu
ceph_qemu:base-700-disk-1 raw 2147483648 700
ceph_qemu:vm-701-disk-1 raw 2147483648 701
$ pvesm free ceph_qemu:base-700-disk-1
snap_unprotect: can't unprotect; at least 1 child(ren) in pool rbd
rbd unprotect base-700-disk-1 snap '__base__' error: snap_unprotect: can't unprotect; at least 1 child(ren) in pool rbd
after (correct volume ID, check works as intended):
$ pvesm list ceph_qemu
ceph_qemu:base-700-disk-1 raw 2147483648 700
ceph_qemu:base-700-disk-1/vm-701-disk-1 raw 2147483648 701
$ pvesm free ceph_qemu:base-700-disk-1
base volume 'base-700-disk-1' is still in use (use by 'base-700-disk-1/vm-701-disk-1')
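a minimal sketch of reconstructing the volume ID from the parent relation described above (assuming a parent string of the form 'pool/base-image@snap' as shown by 'rbd ls -l'; not the actual RBDPlugin code):
use strict;
use warnings;
# derive the config-style volume ID from an rbd image name and its parent, if any
sub volname_from_parent_sketch {
    my ($image, $parent) = @_;
    return $image if !defined($parent) || $parent eq '';
    my ($basename) = $parent =~ m|^[^/]+/([^@]+)|;   # strip pool prefix and snapshot suffix
    return defined($basename) ? "$basename/$image" : $image;
}
print volname_from_parent_sketch('vm-701-disk-1', 'rbd/base-700-disk-1@__base__'), "\n";
# -> base-700-disk-1/vm-701-disk-1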
If you want to use different ceph storages (clusters), they
sometimes have different option values, like ms_nocrc = true
(and there are others). The client needs to specify these
special options to be able to connect.
This patch allows creating a ceph config file for each storeid in
/etc/pve/priv/ceph/$storeid.conf
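a sketch of how such a per-storage config can be handed to the ceph tools via their standard '-c' option (the exact wiring in the plugin is an assumption here, names are hypothetical):
use strict;
use warnings;
# prepend '-c /etc/pve/priv/ceph/$storeid.conf' to an rbd/ceph command if the file exists
sub ceph_conf_opts_sketch {
    my ($storeid) = @_;
    my $conf = "/etc/pve/priv/ceph/$storeid.conf";
    return -e $conf ? ('-c', $conf) : ();
}
my @cmd = ('rbd', ceph_conf_opts_sketch('ceph_qemu'), 'ls');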
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This way we get parameter verification on monitor addresses
as well as the ability to pass multiple `--monhost`
arguments to `pvesm add`.
Since our '-list' schemas default to using commas we now
need to properly support them, so all uses of the monhost
property now replace any of semicolon, space or comma with
the separator currently required in that context.
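for illustration, the normalization boils down to a substitution along these lines (a sketch, not the exact code):
# e.g. towards the comma-separated '-list' form:
my $monhost = "10.5.0.11; 10.5.0.12, 10.5.0.13";
(my $list = $monhost) =~ s/[;,\s]+/,/g;   # -> "10.5.0.11,10.5.0.12,10.5.0.13"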
This should fix the issues reported by Alwin Antreich on the
pve-user list.
Since this schema supports both ipv6+port notations, we need
to make sure we convert to the bracket-enclosed variant.
Added a helper for this.
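a sketch of such a helper (the name is hypothetical; shown here only for bare addresses without a port): plain IPv6 addresses get wrapped in brackets so that an appended ':port' stays unambiguous:
use strict;
use warnings;
sub bracket_ipv6_sketch {
    my ($host) = @_;
    return "[$host]" if $host =~ /:.*:/ && $host !~ /^\[/;   # looks like a bare IPv6 address
    return $host;
}
print bracket_ipv6_sketch('fc00::1'), "\n";     # -> [fc00::1]
print bracket_ipv6_sketch('10.5.0.11'), "\n";   # -> 10.5.0.11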
"ceph version" retrieves the version from the cluster (i.e.,
from the queried monitor), but what is needed here is the
local ceph version, which is returned by "ceph --version".
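for example (output format assumed to be "ceph version X.Y.Z (...)"; the parsing below is a sketch):
use strict;
use warnings;
# query the locally installed ceph version, not the cluster's
my $out = `ceph --version` // '';                # e.g. "ceph version 10.2.11 (...)"
my ($major, $minor, $patch) = $out =~ /^ceph version (\d+)\.(\d+)\.(\d+)/;
die "could not parse ceph version\n" if !defined($major);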
otherwise mapping those images will fail. disabling the
features only needs to be done once per image, so it makes
sense to do this when creating the images.
unfortunately, the command does not work in hammer, so
it needs a version check for jewel or higher.
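roughly like this (the exact feature list disabled here is an assumption for illustration, not taken from the committed code):
use strict;
use warnings;
my ($pool, $image) = ('rbd', 'vm-700-disk-1');                        # example values
my ($major) = (`ceph --version` // '') =~ /^ceph version (\d+)\./;    # local version, see above
# 'rbd feature disable' only exists from jewel (>= 10) on
if (defined($major) && $major >= 10) {
    system('rbd', 'feature', 'disable', "$pool/$image",
           'deep-flatten', 'fast-diff', 'object-map', 'exclusive-lock') == 0
        or warn "rbd feature disable failed\n";
}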
This makes no sense because it should always be exclusive.
Also, RBD checks this itself.
LVM has no possibility to use lvchange.
In DRBD this feature is not implemented.
It is better to check whether a VM is running in QemuServer than in Storage;
for the Storage there is no difference whether it is running or not.
Signed-off-by: Wolfgang Link <w.link@proxmox.com>
we need to escape the ":" used to define mon ports
"10.5.0.11:6789; 10.5.0.12:6789; 10.5.0.13:6789"
->
"10.5.0.11\:6789; 10.5.0.12\:6789; 10.5.0.13\:6789"
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Currently vmstate snapshots with rbd get the wrong name,
because rbd alloc_image does not care whether $name is provided
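the fix amounts to only generating a name when none was passed in, roughly (the helper below is a stand-in, names are hypothetical):
use strict;
use warnings;
sub find_free_diskname_stub { return "vm-$_[0]-disk-1" }   # stands in for the real helper
# sketch of the relevant part of alloc_image(): respect a caller-provided $name
# (as used for vmstate volumes) and only generate one when none was given
sub alloc_image_name_sketch {
    my ($vmid, $name) = @_;
    return defined($name) ? $name : find_free_diskname_stub($vmid);
}
print alloc_image_name_sketch(701, 'vm-701-state-snap1'), "\n";   # -> vm-701-state-snap1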
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Always use a custom error sub to get the real errors out of the rbd command instead of the typical:
2014-02-06 11:20:20.187190 7f3b6c37c760 -1 librbd: removing snapshot from header failed: (16) Device or resource busy
before:
rbd: snapshot 'abc' is protected from removal.
TASK ERROR: rbd snapshot vm-173-disk-1' error: 2014-02-06 11:06:02.438336 7f6f4ac92760 -1 librbd: removing snapshot from header failed: (16) Device or resource busy
now:
TASK ERROR: rbd: snapshot 'abc' is protected from removal.
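a sketch of such an error handler, based on the log lines above (the wiring into the command runner is omitted):
use strict;
use warnings;
# keep only the meaningful 'rbd: ...' line and drop the librbd log noise
my $last_err = '';
my $errfunc = sub {
    my ($line) = @_;
    $last_err = $line if $line =~ /^rbd:/;   # e.g. "rbd: snapshot 'abc' is protected from removal."
};
# ... run the rbd command with $errfunc attached to stderr, then:
# die "rbd snapshot delete error: $last_err\n" if $last_err;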
Signed-off-by: Stefan Priebe <s.priebe@profihost.ag>
pool is now optional, default value is 'rbd';
username is now optional, default value is 'admin';
the auth_supported option is removed and auth is autodetected:
auth = cephx if the private key exists
auth = none if the private key does not exist
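in effect (a sketch; the keyring path is an assumption for illustration):
use strict;
use warnings;
my ($scfg, $storeid) = ({}, 'ceph_qemu');          # example storage config and id
my $pool     = $scfg->{pool}     // 'rbd';         # new default
my $username = $scfg->{username} // 'admin';       # new default
# auth_supported is gone as a storage option; detect it from the key file instead
my $keyring = "/etc/pve/priv/ceph/$storeid.keyring";
my $auth = -e $keyring ? 'cephx' : 'none';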
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>