it is not a storage plugin, and it makes more sense to have it
top-level, but there we cannot name it CephTools because of the
existing one in pve-manager
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
As we mount this manually, systemd doesn't know about any
dependency for CephFS mounts, so this got unmounted only at the
last stage of shutdown, where the network wasn't active anymore.
But CephFS needs to be connected to an active MDS for a clean
unmount, so without network this mount would delay shutdown for
quite a bit, until systemd gave up after some minutes and forced
the unmount.
So tell systemd that this mount requires the network, which can be
done with the '_netdev'[0] mount option, which, luckily for us, can
also be passed to a mount call and isn't only available for fstab.
With this, a mount gets, among others:
> Wants=network-online.target
> Before=umount.target remote-fs.target
> After=remote-fs-pre.target system.slice network.target network-online.target -.mount
Which does the trick for us.
[0]: https://www.freedesktop.org/software/systemd/man/systemd.mount.html#_netdev
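A minimal sketch of how that can look on the mount call (variable
names illustrative; PVE::Tools::run_command is from pve-common):

    # pass _netdev so systemd orders the generated mount unit after
    # network-online.target and unmounts before the network goes down
    my $cmd = ['/bin/mount', '-t', 'ceph', "$monaddr:$subdir", $mountpoint,
        '-o', "name=$username,secretfile=$secretfile,_netdev"];
    PVE::Tools::run_command($cmd, errmsg => "mount error");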
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is important for shared LVM storages: with deletes and creates
of images, we may otherwise not have up-to-date metadata, and
extents may get reused if another node created an image at the same
time, for example.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This allows requesting a mapped device/path explicitly, regardless
of the storage option, e.g., the krbd option in the RBDPlugin.
Bump of the storage ABI => 2
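A hedged sketch of how a caller can use this, assuming the new
entry points are named map_volume/unmap_volume:

    # explicitly request a mapped block device, regardless of the
    # storage's krbd setting (names and signature assumed)
    my $device = PVE::Storage::map_volume($cfg, $volid, $snapname);
    # ... use $device ...
    PVE::Storage::unmap_volume($cfg, $volid, $snapname);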
Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Accessing a non-existing 'format' key in plugindata (e.g., in
LvmThinPlugin) created it by autovivification, thus breaking the
fallback to the default value 'raw' upon any subsequent access.
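A generic illustration of the pitfall (not the plugin code itself):

    my $plugindata = {};
    my $probe = $plugindata->{format}->[0]; # read autovivifies {format} as []
    # the fallback no longer triggers, since {format} is now defined:
    my $format = $plugindata->{format} // [ { raw => 1 }, 'raw' ];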
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
$features is actually an array reference, so use it as one.
This broke creation and migration of disks on RBD storages.
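A hedged sketch of the corrected access (the feature list comes
from rbd's JSON output):

    # $features is an array reference, so dereference it
    my $has_layering = grep { $_ eq 'layering' } @$features;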
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This was newly introduced and is only used once, so having a
wantarray return mechanism, without ever using it or knowing for
sure whether it may help with reuse of the method, is not ideal.
Make the sub a module-private one that just returns the VM disk
number or an explicit undef. Pass it the $suffix variable, to avoid
recomputing it on every call from our caller's loop.
If there's re-use potential in the future, we can decide then what
makes sense to return.
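A sketch of the resulting shape (regex and names illustrative, not
the exact code):

    my $get_vm_disk_number = sub {
        my ($disk_name, $scfg, $vmid, $suffix) = @_;
        return $1 if $disk_name =~ m/^(?:vm|base)-$vmid-disk-(\d+)$suffix$/;
        return undef; # explicit undef instead of a wantarray return
    };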
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
else we can only have MAX_VOLUMES_PER_GUEST-1 disks per VMID;
not tragic, but possibly confusing
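The off-by-one in a nutshell (loop shape and try_name are
illustrative):

    # before: only MAX_VOLUMES_PER_GUEST-1 candidates were tried
    for (my $i = 1; $i < $MAX_VOLUMES_PER_GUEST; $i++) { try_name($i) }
    # after: the full range is considered
    for (my $i = 1; $i <= $MAX_VOLUMES_PER_GUEST; $i++) { try_name($i) }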
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Non-conforming image names are no longer ignored by the new rbd_ls
implementation; this patch restores the old behaviour.
This fix is a temporary workaround and should be removed once the
new image name parser is ready.
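The workaround boils down to skipping such names while listing
(pattern illustrative):

    # ignore image names not matching the expected volume naming
    next if $image !~ m/^(?:vm|base)-\d+-/;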
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
since ceph changed the plain output format for 12.2.8,
we have to change the code anyway, and while we're at it,
we can change it to the (hopefully) more robust JSON output
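A minimal sketch of the JSON-based parsing (core JSON module; the
exact rbd invocation is assumed):

    use JSON;

    # 'rbd ls -l --format json' yields one object per image
    my $list = eval { decode_json($raw_output) };
    die "unable to parse rbd output: $@\n" if $@;
    for my $img (@$list) {
        print "$img->{image}: $img->{size}\n";
    }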
Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
lvm_find_free_diskname only checked for existing volumes starting
with 'vm-', but not with 'base-'.
Unify the implementation with the other plugins.
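The unified check then matches both prefixes, along the lines of:

    # consider both 'vm-' and 'base-' volumes when picking a free name
    my $disk_regex = qr/^(?:vm|base)-$vmid-disk-(\d+)$/;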
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
move two loop bodies from

    if (condition) {
        ...
    }

to

    next if !condition;
    ...

to save an indentation level.

rename variables to a bit shorter version, i.e.:

    s/oneTarget/target/
    s/oneTpg/tpg/

and reword a comment.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Introducing LIO/targetcli support, allowing the use of recent Linux
distributions as iSCSI targets for ZFS volumes.
For this to work, two preconditions have to be met:
1. the portal has to be set up correctly using targetcli
2. the initiator has to be authorized to connect to the target,
   based on the initiator's InitiatorName
When adding a LIO iSCSI target, the new "LIO target portal group"
field in the "Add: ZFS over iSCSI" popup needs to be populated with
the fitting portal group name (typically something like 'tpg1').
Signed-off-by: Udo Rader <udo.rader@bestsolution.at>
Tested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
if no VG is given, return all thin pools from all VGs;
if verbose is 1, also return information about the thin pools
(like size and free space)
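The resulting call shapes, as a sketch (exact signature assumed):

    my $all   = list_thinpools(undef);   # thin pools from all VGs
    my $infos = list_thinpools($vg, 1);  # verbose: include size and free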
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
- ability to mount through kernel and fuse client
- allow mount options
- get MONs from ceph config if not in storage.cfg
- allow the use of ceph config with fuse client
- delete the secret on CephFS storage creation
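A hedged storage.cfg example of what the plugin then supports
(values are placeholders):

    cephfs: cephfs-example
        path /mnt/pve/cephfs-example
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        content backup,iso,vztmpl
        username admin
        fuse 0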
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Some methods for connecting to a ceph cluster are the same for RBD
and CephFS, so these are merged into the helper modules.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
on_add_hook allows encapsulating storage-specific add steps, like
copying a keyring (RBD) or creating a volume group (LVM), in a
clean manner.
The same goes for deletion with on_delete_hook; here, everything
should be cleaned up, as much as possible.
Until now, this was done directly in the API config CREATE and
DELETE code, respectively, with a series of

    if ($storage_type eq 'foo') {
        ...
    } elsif ($storage_type eq 'bar') {
        ...
    }

which isn't really that nice...
Another nice result of this approach is that external plugins can
also use those hooks and do their setup/cleanup steps sanely.
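A minimal sketch of the hook shape a plugin can implement (the
bodies are illustrative):

    sub on_add_hook {
        my ($class, $storeid, $scfg, %param) = @_;
        # storage-specific setup, e.g. copy a keyring or create a VG
    }

    sub on_delete_hook {
        my ($class, $storeid, $scfg) = @_;
        # clean up what on_add_hook set up, as much as possible
    }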
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This command will only check the needed share
and not query the whole server's shares.
This reduces the response time and also has the benefit that we
check the credentials on this share and not on the server.
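A hedged sketch of such a probe (flags assumed, password handling
omitted):

    # talk to the single share instead of listing all server shares,
    # so the credentials are checked against the share itself
    my $cmd = ['/usr/bin/smbclient', "//$server/$share",
        '-U', $username, '-d', '0', '-c', 'echo 1 0'];
    PVE::Tools::run_command($cmd, errmsg => "share unavailable");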
with the recent refactoring, external clusters were not handled
correctly with librbd if a pveceph or storage-specific ceph config
exists.
change the behaviour to include the pveceph config file only for
pveceph-managed clusters, and a storage-specific one only for
external ones.
set mon_host correctly using the values from storage.cfg for
external librbd clusters.
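For an external cluster, mon_host then comes straight from the
storage.cfg entry, e.g. (placeholders):

    rbd: external-example
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool rbd
        username admin
        content images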
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
`zfs create` adds the creation job to a worker queue, which should
normally execute instantly. But there are circumstances where the
job takes a while to get processed.
If that is the case, udev settle will see no dev in the queue and
the program will continue without an allocated dev.
The busy waiting is not best practice, but the only way to be sure
that the block device exists.
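A sketch of the busy wait (timeout and poll interval illustrative):

    use Time::HiRes qw(usleep);

    # after udev settle, poll until the zvol device node shows up
    my $path = "/dev/zvol/$zfspath";
    for (my $i = 0; $i < 100; $i++) {
        last if -b $path;
        usleep(100_000); # 100 ms
    }
    die "timeout waiting for device '$path'\n" if !-b $path;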
The path method of the RBDPlugin got a list of comma-separated
monhosts, but qemu needs the list separated by semicolons.
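The fix amounts to handing qemu a semicolon-separated list, e.g.
(sketch):

    # storage.cfg may carry commas, qemu wants semicolons
    my $monhost = $scfg->{monhost};
    $monhost =~ s/,/;/g;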
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>