This is mostly the same as a directory storage, with 2 major
differences:
* 'subvol' volumes are actual btrfs subvolumes and therefore
  allow snapshots
* 'raw' files are placed *into* a subvolume and therefore
  also allow snapshots; the raw file for volume
  `btrstore:100/vm-100-disk-1.raw` can be found under
  `$path/images/100/vm-100-disk-1/disk.raw`
* in both cases, snapshots add an '@name' suffix to the
  subvolume's directory name, so snapshot 'foo' of the above
  would be found under
  `$path/images/100/vm-100-disk-1@foo/disk.raw`
  or, for format "subvol", under
  `$path/images/100/subvol-100-disk-1.subvol@foo`
  (see the sketch after this list)
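For illustration, this is roughly how a raw volume and its
'foo' snapshot map to btrfs commands (a sketch, not the
actual plugin code; the 32G size is made up):
    # create the parent subvolume and the raw image inside it
    btrfs subvolume create $path/images/100/vm-100-disk-1
    truncate -s 32G $path/images/100/vm-100-disk-1/disk.raw
    # snapshot 'foo' becomes a read-only snapshot with an '@foo' suffix
    btrfs subvolume snapshot -r \
        $path/images/100/vm-100-disk-1 \
        $path/images/100/vm-100-disk-1@foo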
Note that qgroups aren't included in btrfs-send streams,
therefore, for now, we only use *unsized* subvolumes for
containers and place a regular raw+ext4 file for sized
containers.
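For a sized container the fallback thus amounts to something
like this (a minimal sketch, not the plugin's actual code;
path, name and size are made up):
    # fixed-size raw file formatted with ext4 instead of a subvolume
    truncate -s 8G $path/images/101/vm-101-disk-0.raw
    mkfs.ext4 -F $path/images/101/vm-101-disk-0.raw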
We could extend the import/export stream format to include
the information at the front (similar to how we do the
"tar+size" format), but we would need to include the sizes
of all the contained snapshots as well, since they can
technically change. (But before enabling quotas we should do
some performance testing on bigger file systems with multiple
snapshots, as there are quite a few reports of the fs slowing
down considerably in such scenarios.)
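If we went that route, the size information could, for
instance, be emitted as a small header in front of the actual
btrfs send stream (purely illustrative; the field names and
sizes are made up):
    volume-size: 34359738368
    snapshot-size[foo]: 34359738368
    <btrfs send stream follows>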
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
it is not a storage plugin, and it makes more sense to have it
top-level, but there we cannot name it CephTools because of the
existing one in pve-manager
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
- ability to mount through kernel and fuse client
- allow mount options
- get MONs from ceph config if not in storage.cfg
- allow the use of ceph config with fuse client
- delete secret on cephfs storage creation
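For reference, a resulting storage.cfg entry could look
roughly like this (an illustrative sketch; the storage name,
path and monitor addresses are made up):
cephfs: cephfs-store
        path /mnt/pve/cephfs-store
        monhost 10.1.1.1 10.1.1.2 10.1.1.3
        content backup,iso,vztmpl
        fuse 1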
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Some methods for connecting to a ceph cluster are the same for
RBD and CephFS; these are merged into the helper modules.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Turned out it makes no sense to duplicate the DirPlugin
features, so I also changed the name to make it less
confusing. With this plugin we can only create zvols inside
a ZFS pool.
example of storage.cfg:
zfs: omnios
        blocksize 8k
        target iqn.2010-09.org.openindiana:target1
        pool pool1
        iscsiprovider comstar
        portal 192.168.0.1
        sudo 1 (optional)
        content images
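On the storage host each such volume then corresponds to a
zvol in the configured pool, created roughly like this (an
illustrative sketch; volume name and size are made up):
    zfs create -s -V 32G pool1/vm-100-disk-1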
note for fast ssh login:
on the solaris host, in /etc/ssh/sshd_config:
        LookupClientHostnames no
        VerifyReverseMapping no
        GSSAPIAuthentication no
note for nexenta:
rm /root/.bash_profile
to avoid dropping into the NMC console by default
Signed-off-by: Michael Rasmussen <mir@datanom.net>
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>