Try to detect active mounts and holders early, because it's cheap. The wipefs
command in the worker will detect even more situations where wiping alone is
not enough for the device to show up as unused, or where wiping could
otherwise be problematic.
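A rough sketch of such an early check (the helper name is made up and the
actual detection code may differ):

    sub assert_device_unused {
        my ($devname) = @_; # e.g. 'sdb'

        # check for holders, e.g. an LVM or device-mapper device on top
        my $holder_dir = "/sys/block/${devname}/holders";
        if (-d $holder_dir) {
            opendir(my $dh, $holder_dir) or die "opendir '$holder_dir' failed - $!\n";
            my @holders = grep { !/^\.\.?$/ } readdir($dh);
            closedir($dh);
            die "device '/dev/$devname' is in use (holders: @holders)\n" if @holders;
        }

        # check for active mounts of the device or one of its partitions
        open(my $fh, '<', '/proc/mounts') or die "unable to read /proc/mounts - $!\n";
        while (defined(my $line = <$fh>)) {
            die "device '/dev/$devname' is mounted\n"
                if $line =~ m|^/dev/\Q$devname\E(p?\d+)?\s|;
        }
        close($fh);
    }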
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
based on the wipe_disks method from pve-manager's Ceph/Tools.pm with the
following main differences:
* use wipefs to wipe labels first (to avoid sgdisk complaining about the
backed up GPT structure on a subsequent GPT initialization)
* only take one device as an argument
* do not use an absolute path for 'dd'
* die if one of the commands fails
The wipefs command performs checks and complains about e.g. mounted or active
devices. One could supply --force to wipefs, but in many such situations it
does not work as expected, because the device would still be detected as in-use
afterwards, and further manual steps would be needed.
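A minimal sketch of the wiping sequence described above, using
PVE::Tools::run_command (which dies on a non-zero exit status); the helper name
is made up and the real worker may differ in details such as the amount of data
overwritten:

    use PVE::Tools;

    sub wipe_device {
        my ($devpath) = @_; # e.g. '/dev/sdb'

        # remove file system/partition table signatures first, so that a later
        # GPT initialization does not complain about a backed up GPT structure
        PVE::Tools::run_command(['wipefs', '--all', $devpath]);

        # then zero out the beginning of the device
        PVE::Tools::run_command([
            'dd', 'if=/dev/zero', "of=${devpath}", 'bs=1M', 'count=200', 'conv=fdatasync',
        ]);
    }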
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This was never marked as stable, and the recommended version is the external
one, which is maintained by LINBIT themselves.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and avoid a warning. Auto-detecting the format of the base volume is
deprecated. See commit d9f059aa6cfccefaffa3532556e966df4a99ece2 in qemu for
more information.
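Illustrative only (paths and formats are made up): pass the backing file format
explicitly via '-F' instead of relying on auto-detection.

    use PVE::Tools;

    my ($base_path, $base_format, $clone_path) = ('/a/base.qcow2', 'qcow2', '/a/clone.qcow2');

    PVE::Tools::run_command([
        '/usr/bin/qemu-img', 'create',
        '-b', $base_path, '-F', $base_format,
        '-f', 'qcow2', $clone_path,
    ]);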
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
instead of just the snapshot, for consistency with other API endpoints and for
a possible future extension to VMA backups (where 'snapshot' would be rather
strange terminology).
add some additional checks (PBS storage type, backup volume type), completion
and magic (allow passing in either a full volume ID with the correct storage,
just the volume name, or just the snapshot, for easier API/CLI
usage/convenience).
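A hedged sketch of the described convenience handling; the helper name and the
details of the checks are illustrative, not the actual implementation:

    my $extract_backup_volume = sub {
        my ($storeid, $arg) = @_;
        # full volume ID like 'storage:backup/...'? (illustrative check only)
        if ($arg =~ m/^([A-Za-z][A-Za-z0-9_\.\-]*):(backup\/.+)$/) {
            die "storage '$1' does not match '$storeid'\n" if $1 ne $storeid;
            return $2;
        }
        return $arg; # plain volume name or snapshot, resolved later
    };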
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Includes list and restore calls.
Requires VM.Backup and Datastore.Audit permissions, for the accessed
VM/CT and the containing datastore, respectively.
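A hedged sketch of the permission checks described above; the ACL paths follow
the usual PVE conventions, but the surrounding code and example values are
illustrative:

    use PVE::RPCEnvironment;

    my ($vmid, $storeid) = (100, 'pbs-store'); # example values

    my $rpcenv = PVE::RPCEnvironment::get();
    my $authuser = $rpcenv->get_user();

    $rpcenv->check($authuser, "/vms/$vmid", ['VM.Backup']);
    $rpcenv->check($authuser, "/storage/$storeid", ['Datastore.Audit']);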
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
similar to the existing encryption key handling, but without
auto-generation since we only have the public part here.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Not replacing it with return, because the current behavior is dying:
Can't "next" outside a loop block
and the single existing caller in pve-manager's API2/Ceph/OSD.pm does not check
the return value.
Also check for $st, which can be undefined in case a non-existent path was
provided. Previously, this also led to dying:
Can't call method "mode" on an undefined value
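A hedged sketch of the resulting check; the function name is made up and the
actual code differs:

    use File::stat;
    use Fcntl ':mode';

    sub get_blockdev_stat {
        my ($path) = @_;
        my $st = File::stat::stat($path); # undef if $path does not exist
        die "unable to get device info for '$path'\n" if !$st || !S_ISBLK($st->mode);
        return $st;
    }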
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
in case a disk with partitions also has an fstype set, which happens for our ZFS
boot disks. Do not change the behavior without include-partitions, as we
prefer(red) to be more specific than simply 'partitions' in that case.
Reported in the enterprise support channel.
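A hedged sketch of the intended precedence when determining the usage of a
disk; variable names and usage strings are illustrative:

    my ($include_partitions, $has_partitions, $fstype) = (1, 1, 'zfs_member'); # example values

    my $usage;
    if ($include_partitions && $has_partitions) {
        $usage = 'partitions';
    } elsif ($fstype) {
        $usage = "${fstype} (file system)"; # more specific, preferred otherwise
    } elsif ($has_partitions) {
        $usage = 'partitions';
    } else {
        $usage = 'no usage detected';
    }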
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Bug reported in the community forum[0].
Currently, it's possible to break replication by:
1. have an existing snapshot whose name contains an uppercase letter
2. set up a replication job and run it
3. rollback to the existing snapshot
4. replicate again -> fails
The failure occurs because, after step 3, the most recent common snapshot is the
previously existing one, and currently no uppercase letters are allowed for
export/import.
The pve-snapshot-name option uses the CONFIGID_RE
qr/[a-z][a-z0-9_-]+/i
so it cannot be used here, because it would not allow for e.g. '__migrate__'.
Simply allow uppercase letters, to be backwards compatible and allow all
possible pve-snapshot-name values.
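Illustrative only: the kind of pattern change this boils down to, now also
accepting uppercase letters (the exact pattern used for export/import may
differ):

    my $old_re = qr/^[a-z0-9_\-]+$/;    # rejects e.g. 'MySnap'
    my $new_re = qr/^[a-zA-Z0-9_\-]+$/; # accepts all possible pve-snapshot-name values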
There is still an issue if there also was a state volume, but that's a different
bug[1].
[0]: https://forum.proxmox.com/threads/solved-migration-error-base-value-does-not-match-the-regex-pattern.85946/
[1]: https://bugzilla.proxmox.com/show_bug.cgi?id=3111
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
A restore to ZFS for a container that has a volume (rootfs / mount
point) of size 0 failed, because the refquota property does not accept
'0k' but expects 'none' in that situation.
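A hedged sketch of the described special case (dataset name and surrounding
context are made up):

    use PVE::Tools;

    my ($size_kb, $dataset) = (0, 'rpool/data/subvol-100-disk-0'); # example values

    # 'refquota' does not accept '0k', so use 'none' for a size of 0
    my $quota = $size_kb ? "${size_kb}k" : 'none';
    PVE::Tools::run_command(['zfs', 'set', "refquota=${quota}", $dataset]);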
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This test is intended to be run on a hyperconverged PVE cluster to test
the most common operations of VMs using a namespaced Ceph RBD pool.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
This patch introduces support for Ceph's RBD namespaces.
A new storage config parameter 'namespace' defines the namespace to be
used for the RBD storage.
The namespace must already exist in the Ceph cluster as it is not
automatically created.
The main intention is to use this for external Ceph clusters. With
namespaces, each PVE cluster can get its own namespace and will not
conflict with other PVE clusters.
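A hypothetical /etc/pve/storage.cfg entry using the new option; all values
below are made up:

    rbd: external-ceph
        monhost 192.168.10.1 192.168.10.2 192.168.10.3
        pool rbd
        namespace pve-cluster-a
        content images,rootdir
        username admin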
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
The <pool>/<image> paths are needed in quite a lot of places. Having one
single place where they are created helps to reduce duplicate code and
makes it easier to introduce new features.
The 'add_pool_to_disk' sub was already doing that but the name was not
really fitting. This commit renames it to the more general
'get_rbd_path' and changes the second parameter to the more widely used
$volume instead of $disk.
Furthermore, all occurrences where "$pool/$volume" was concatenated have been
replaced with a call to get_rbd_path.
Plus some minor code style cleanups for long function calls that were
touched.
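A hedged sketch of the helper described above; close to, but not necessarily
identical to, the actual implementation:

    sub get_rbd_path {
        my ($scfg, $volume) = @_;
        my $path = $scfg->{pool} // 'rbd';
        $path .= "/$scfg->{namespace}" if defined($scfg->{namespace});
        $path .= "/$volume" if defined($volume);
        return $path;
    }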
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
by relying on the vmid from archive_info first. archive_info is already used to
determine whether it's a standard name, and in that case the vmid is certainly set.
Also add asserts to make sure we got what we expected.
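A hedged sketch of the described priority and assertion; the function and
variable names are made up and the actual call sites differ:

    use PVE::Storage;

    sub get_backup_vmid {
        my ($archive, $vmid_from_volname) = @_;

        my $info = PVE::Storage::archive_info($archive);
        my $vmid = $info->{vmid} // $vmid_from_volname;

        die "internal error: VM ID mismatch\n"
            if defined($info->{vmid}) && defined($vmid_from_volname)
                && $info->{vmid} != $vmid_from_volname;

        return $vmid;
    }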
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>