Commit Graph

594 Commits

Author SHA1 Message Date
3d10acf89e Fix #2737: Can't call method "mode"
on an undefined value at /usr/share/perl5/PVE/Storage/Plugin.pm line 928

This error message crops up when a file is deleted after the file
list has been obtained but before the loop has processed that file entry.
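
A minimal sketch of such a guard, assuming File::stat is in use ($filelist is illustrative):

    use File::stat ();

    for my $fn (@$filelist) {
        my $st = File::stat::stat($fn);
        next if !defined($st); # the file vanished in the meantime, skip it
        # ... continue using $st->mode, $st->size as before
    }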

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2020-05-15 18:12:01 +02:00
014d36dbbb Fix: #2124 storage: add zstd support
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2020-04-30 18:37:19 +02:00
277cafc0ff backup: compact regex for backup file filter
The more compact form of the regex should allow easier addition of new
file extensions.
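
For illustration, a single alternation over the known extensions might look like this (pattern assumed, not necessarily the exact one committed):

    my $backup_re = qr/\.(?:tgz|(?:tar|vma)(?:\.(?:gz|lzo))?)$/;
    # a new compression suffix is then a one-word change, e.g. (?:gz|lzo|zst)
    print "is a backup archive\n" if $fname =~ $backup_re;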

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2020-04-30 18:37:19 +02:00
40c795e7df Fix: backup: ctime was from stat not file name
The vzdump file was passed to the regex with its full path. That regex
captures the time from the file name to calculate the epoch.

As the regex didn't match, the ctime from stat was taken instead. This
resulted in the shown ctime being the time the file was changed, not
the time the backup was made.
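
A sketch of the intended extraction, assuming the usual vzdump file name layout ($path is illustrative):

    use File::Basename qw(basename);
    use Time::Local qw(timelocal);

    # vzdump-qemu-100-2020_04_30-18_37_19.vma.gz -> epoch from the name
    my $ctime;
    if (basename($path) =~ m/-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\./) {
        $ctime = timelocal($6, $5, $4, $3, $2 - 1, $1);
    }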

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2020-04-30 18:37:19 +02:00
c48801b52a test: parse_volname
Test to reduce the potential for accidental breakage on regex changes,
and to make sure that all vtype_subdirs are parsed.
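
A minimal sketch of such a test (call signature and expected values assumed):

    use Test::More;
    use PVE::Storage::Plugin;

    my ($vtype, $name) = PVE::Storage::Plugin->parse_volname(
        'backup/vzdump-qemu-100-2020_04_30-18_37_19.vma.gz');
    is($vtype, 'backup', 'vtype parsed from backup subdir');
    done_testing();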

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2020-04-30 18:37:19 +02:00
92ae59df9e storage: replace built-in stat occurrences
with File::stat::stat to minimize variable declarations, and to allow
mocking this method in tests instead of the Perl built-in stat.
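
For example, the object interface replaces the positional list (sketch):

    use File::stat ();

    # instead of: my (undef, undef, $mode) = stat($path);
    my $st = File::stat::stat($path);
    my ($mode, $ctime) = ($st->mode, $st->ctime);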

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2020-04-30 18:37:19 +02:00
7435dc9071 s/ceph_version/local_ceph_version/ for clarity
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-04-25 11:37:26 +02:00
d4c31eff96 d/control: bump ceph dependency to 12.2
A version newer than Luminous is shipped with buster, and our
ceph repos are on Nautilus (14.2) in PVE 6.

This allows dropping a check for really old ceph versions (< 10, i.e.,
Infernalis and older).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-04-25 11:27:38 +02:00
81c5c736ca followup: only parse version if required, fix whitespace error
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-04-25 11:18:03 +02:00
e54c3e3347 Fix #2705: cephfs: mount fails with bad option
dmesg: libceph: bad option at 'conf=/etc/pve/ceph.conf'

After the upgrade to PVE 6 with Ceph Luminous, the mount.ceph helper
doesn't understand the conf= option yet, so the CephFS mount with the
kernel client fails. After upgrading to Ceph Nautilus the option exists
in the mount.ceph helper.
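
A sketch of the resulting guard (variable names assumed, version check simplified):

    my @opts = ("name=$userid", "secretfile=$secret_file");
    # mount.ceph only understands conf= from Nautilus (14) onwards
    push @opts, "conf=$config_file" if $ceph_major >= 14;
    my $mount_opts = join(',', @opts);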

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2020-04-25 11:15:23 +02:00
e05113fbe5 ZFS: use -p flag where possible
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-04-09 10:20:06 +02:00
3881e68025 ZFS: use -p flag and remove zfs_parse_size
ZFS has supported the -p flag in the list command for a few years now.
Let us use the real byte values and avoid the error-prone calculation
from human-readable numbers, which can lead to incorrect results if the
reported human-readable value is rounded.
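
A sketch of the listing call with -p, using PVE::Tools::run_command:

    use PVE::Tools qw(run_command);

    # -H: no header, -p: exact byte values instead of rounded ones
    run_command(['zfs', 'list', '-Hp', '-o', 'name,used,avail'], outfunc => sub {
        my ($name, $used, $avail) = split(/\s+/, $_[0]);
        # $used/$avail are plain byte counts, no unit parsing required
    });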

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-04-09 10:20:06 +02:00
d99de0f898 ZFSPoolPlugin: fix #2662 get volume size correctly
Getting the volume sizes as byte values instead of values converted to
human-readable units helps to avoid rounding errors in further
processing if the volume size is on the odd side.

The `zfs list` command has supported the -p(arseable) flag for a few
years now. When the size is returned in bytes, no calculation is
performed, so we need to explicitly cast the size to an integer before
returning it.
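
The cast itself is a one-liner, e.g. (variable names illustrative):

    my $size = int($zfs_raw->{volsize}); # explicit cast so callers get a number, not a string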

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-04-09 10:19:59 +02:00
a97d3ee49f Introduce allow_rename parameter for pvesm import and storage_migrate
and also return the ID of the allocated volume. This option
allows plugins to choose a new name if there is a collision.

In storage_migrate, the API version of the receiving side is checked.

In Storage.pm's volume_import, when a plugin returns 'undef',
it can be assumed that the import with the requested volid was
successful (it should've died otherwise) and so volid is returned.
This is done for backwards compatibility with foreign plugins.
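
A sketch of that fallback (argument list abridged, names assumed):

    my $imported = $plugin->volume_import($scfg, $storeid, $fh, $volname,
        $format, $base_snapshot, $with_snapshots, $allow_rename);
    # a foreign plugin returning undef would have died on failure,
    # so the requested volid must have been used unchanged
    $imported //= $volid;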

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-04-09 09:41:01 +02:00
b9364dc683 Fix #2647: Add snippet content type for Gluster
Our wiki mentions snippets as a supported content type for GlusterFS storages [0],
and all other directory-based storages have it enabled already [1].

[0] https://pve.proxmox.com/wiki/Storage:_GlusterFS
[1] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=d1eb35ea74cf27713625ab7e7c3767a8254a4aee
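
The change boils down to one more entry in the plugin's content description, roughly:

    my $plugindata = {
        content => [
            { images => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1 },
            { images => 1 },
        ],
    };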

Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
2020-04-08 07:53:17 +02:00
3587acc80a fix #2474: always show iscsi content
Instead of relying on list_volumes of Plugin.pm (which filters by
the content types set in the config), use our own implementation to
always show the LUNs of an iSCSI storage.

This makes sense here, since we need to show the LUNs when the storage
is used as base storage for LVM (where we have content type 'none' set).

It does not interfere with the rest of the GUI, since on e.g. disk
creation we already filter the storages in the dropdown by content
type; in other words, an iSCSI storage used this way still does not
show up when trying to create a disk.

This also shows the LUNs in the 'Content' tab now, but this is also
OK, since the user cannot actually do anything there with the LUNs
(besides looking at them).
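
A rough sketch of the idea (the lun_devices() helper is hypothetical):

    sub iscsi_list_luns {
        my ($storeid, $scfg) = @_;
        my $res = [];
        for my $lun (lun_devices($scfg->{target})) { # hypothetical helper
            push @$res, {
                volid => "$storeid:$lun->{name}",
                size => $lun->{size},
                format => 'raw',
                content => 'images',
            };
        }
        return $res;
    }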

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-04-07 18:09:50 +02:00
e6f4eed435 Allow passing options to volume_has_feature
With the option valid_target_formats it's possible
to let the caller specify the possible formats for the target
of an operation.
[0]: If the option is not set, assume that every format is valid.

In most cases the format of the target and the format
of the source will agree (and therefore assumption [0] is
not actually assuming very much and ensures backwards
compatibility). But when cloning a volume on a storage
using Plugin.pm's implementation (e.g. directory-based
storages), the result is always a qcow2 image.

When cloning containers, the new option can be used to detect
that qcow2 is not valid and hence that the clone feature is not
available.
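
A sketch of the caller side for a container clone (option value assumed):

    my $can_clone = PVE::Storage::volume_has_feature(
        $cfg, 'clone', $volid, $snapname, $running,
        { valid_target_formats => ['raw', 'subvol'] });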

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-03-27 08:50:23 +01:00
6c25dbd495 base plugin: get_subdir_files: split stat variables into single lines
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-03-06 19:27:24 +01:00
c05b1a8cb9 PBS plugin: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-03-06 19:26:45 +01:00
d65590d1be LVM list_images: return creation time
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2020-03-06 19:26:45 +01:00
51eee96d31 base plugin: return ctime for vm images
Changed file_size_info() to additionally return ctime to avoid
another stat() call.
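
Callers can then pick up both values from the single call, e.g. (return shape as sketched):

    my ($size, $format, $used, $parent, $ctime) = file_size_info($filename);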

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2020-03-06 19:26:45 +01:00
ff9c5451a5 base plugin: add ctime for all files
Creation time makes sense for other file types also.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2020-03-06 19:26:45 +01:00
545e127e52 PBS Plugin: list_volumes: add ctime
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2020-03-06 19:26:45 +01:00
9c629b3e76 base plugin: add ctime for backup files
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2020-03-06 19:26:45 +01:00
553c9b21a7 iscsi: add iscsi_session helper
allows writing some code slightly nicer
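
A sketch of the helper's shape (cache layout and iscsi_session_list assumed):

    sub iscsi_session {
        my ($cache, $target) = @_;
        $cache->{iscsi_sessions} //= iscsi_session_list();
        return $cache->{iscsi_sessions}->{$target};
    }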

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-03-03 11:33:34 +01:00
c29bad0d90 iscsi: sort and split module usage
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-03-03 11:33:34 +01:00
4f430a43ba ISCSI: whitespace cleanup
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-03-03 11:33:34 +01:00
42b988f735 fix #2620: storage API: iSCSI: return active field as integer
If active, the return value was the string "1", not an integer.
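
The fix amounts to forcing a boolean-to-integer conversion, e.g. (names illustrative):

    $res->{active} = $active ? 1 : 0; # integer, not the string "1"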

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-03-03 11:33:34 +01:00
93afc379a3 followup: fix VMID regex, use same as JSONSchema does
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-02-21 16:02:23 +01:00
1b39642528 list_volumes: try to return vmid also for backups
This way the content listing API also returns the vmid on content
listings, which, among other things, is useful for filtering in the
GUI.
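
A sketch of recovering the vmid from a backup archive name (pattern illustrative):

    if ($volname =~ m!vzdump-(?:qemu|lxc|openvz)-(\d+)-!) {
        $info->{vmid} = $1;
    }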

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-02-21 16:00:58 +01:00
f33533d4da cifs: followup fix for credential fallback
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-02-20 13:56:18 +01:00
319441e7cd cifs: move password credential file to storage subdirectory
Do not pollute the top-level private directory; use a "storage"
subfolder, but keep backward compatibility.
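
A sketch of the backward-compatible lookup (old path name assumed):

    my $cred_file = "/etc/pve/priv/storage/$storeid.pw";
    # fall back to the old top-level location for existing setups
    $cred_file = "/etc/pve/priv/$storeid.cred" if ! -f $cred_file;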

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-02-20 13:07:29 +01:00
b494636ac9 PBSPlugin.pm: fix password handling using new on_update_hook 2020-02-20 12:42:59 +01:00
e2fc55b413 CIFSPlugin.pm: fix credential handling using new on_update_hook 2020-02-20 12:39:50 +01:00
0ff4cfead1 PVE/Storage/Plugin.pm: introduce on_update_hook
We need this to correctly update the password file.
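
The base plugin's default can be a no-op that specific plugins override; a sketch:

    sub on_update_hook {
        my ($class, $storeid, $scfg, %param) = @_;
        # default: nothing to do; CIFS/PBS override this to
        # write or delete their password file
        return undef;
    }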
2020-02-20 12:39:44 +01:00
9e34813f6c pbs: ensure storage secret file directory exists
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-02-20 11:12:39 +01:00
bb8adeb226 PBSPlugin.pm - extract_vzdump_config: fix call to run_raw_client_cmd 2020-02-20 10:45:34 +01:00
462537a270 namespace storage specific secret files to 'priv/storage' folder
As /etc/pve/priv is already pretty polluted, having a
"<storage-id>.pw" file there smells like it could cause problems in
the future.

So let the PBS password file generator use /etc/pve/priv/storages as
its base path. Other storages should also move to that path in the
future if they save such secrets anywhere in /etc/pve.
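
A sketch of the resulting path helper (following the path convention stated above):

    sub pbs_password_file_name {
        my ($scfg, $storeid) = @_;
        return "/etc/pve/priv/storages/$storeid.pw";
    }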

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-02-19 15:00:54 +01:00
1574a590a5 check if client executable is installed before running command
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-02-19 14:50:42 +01:00
fee2ece310 use one-liner closure for outfunc
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-02-19 14:50:04 +01:00
f155c912d0 indentation fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-02-19 14:49:38 +01:00
c855ac150c implement extract_vzdump_config for PBSPlugin 2020-02-19 14:00:04 +01:00
271fe39460 PVE/Storage/PBSPlugin.pm: start new proxmox backup server plugin 2020-02-19 14:00:04 +01:00
75815bf556 Check whether 'zfs get mountpoint' returns a valid absolute path
The command 'zfs get mountpoint' can return 'none', and so 'mountpoint
none' was written to storage.cfg, which would block the fallback to
using the default mount point when requesting a path, see [0].

[0]: https://forum.proxmox.com/threads/zfs-backup-with-snapshot-mode-fails.61927/#post-284123
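
A sketch of the check (property access assumed):

    my $mountpoint = $info->{mountpoint};
    # 'none', 'legacy' and the like are not usable absolute paths
    undef $mountpoint if defined($mountpoint) && $mountpoint !~ m!^/!;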

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-02-18 13:26:46 +01:00
1022a7c4a9 systemd unit name escape helpers moved to common, use them
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-02-05 17:14:39 +01:00
b0373adc71 directory/cephfs: sort module usage
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-02-05 17:13:30 +01:00
9a80a3eae0 cephfs mount: reload systemd if existing unit gets regenerated
On the first write bringing the unit file into existence we can just
start it; after that we need to tell systemd that we want to actively
reload it.
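
A sketch of that logic (helper names from PVE::Tools, unit variables assumed):

    use PVE::Tools qw(file_set_contents run_command);

    my $existed = -e $unit_path;
    file_set_contents($unit_path, $unit_body);
    run_command(['systemctl', 'daemon-reload']) if $existed;
    run_command(['systemctl', 'start', $unit_name]);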

While this is slightly shaky due to the fact that we do not check all
paths where such a unit could reside, it is something we can do,
because earlier one couldn't have a unit override anyway (units
generated from procfs mountinfo do not support that), and adding such
override units from now on should work.

Also note that we can only get here, in the "user does no weird
stuff" case, when "cephfs_is_mounted" actively tells us that there is
no cephfs mounted at the $mountpoint - at which time we can safely
re-write the potentially updated unit file, reload, and mount again.

So let's make our life a bit easier here until a user actually
complains about a rational issue with this; maybe we'll have PVE 7.0
by then and can get rid of that anyway :)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-01-29 19:54:40 +01:00
25e222ca0d cephfs: mount fuse through systemd with correct order dependencies
This fixes a potential race where FUSE gets unmounted too late in the
shutdown process, i.e., at a time when the network is already down and
it cannot talk to any MDS or monitor anymore.

We could fix it the same way we did once with the kernel-based mount,
i.e., by adding _netdev, but doing so would require switching over
from "ceph-fuse" to "mount.fuse.ceph", which has better compatibility
with the common mount tool API.

As that helper exists, we can reuse the newer systemd_netmount
ephemeral unit generator; only some options differ in name between the
fuse and kernel variants.

So besides solving a potential issue we get a more unified handling
of those two cases.
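
Sketched as calls to the ephemeral unit generator (signature assumed):

    # kernel and fuse clients differ only in the fstype and a few option names
    systemd_netmount($mountpoint, 'ceph', $source, $options);      # kernel
    systemd_netmount($mountpoint, 'fuse.ceph', $source, $options); # fuse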

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-01-29 19:41:11 +01:00
d9ece228fb fix random hangs on reboot with active CephFS mount ordering cycle
commit 54e0b0034b introduced the
"_netdev" option for PVE 5.3. The systemd generator then correctly
resolved that into the following order dependencies:
> Wants=network-online.target
> Before=umount.target remote-fs.target
> After=remote-fs-pre.target system.slice network.target network-online.target -.mount

This worked well and all were happy. With the current systemd in 6.0
we sometimes also get the local-fs ones generated there. This is
fallout from an attempt to better handle nested mount hierarchies,
where a .mount unit needs to be mounted or unmounted before or after,
respectively, the parent mount is processed. It seems that this
sometimes glitches, and thus a "RequiresMountsFor=/mnt/pve" gets
thrown in, which sometimes results in the local-fs order constraints
being added.

The issue now is that one must not have ordering dependencies on all
of local-fs, local-fs-pre, remote-fs, and remote-fs-pre, as that gets
you an ordering cycle. Systemd tries to solve that cycle by randomly
dropping one constraint and retrying. With luck, the dropped one is a
not-so-important unit and all goes on well. Most of the time one isn't
that lucky and something important gets dropped, for example:

> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found ordering cycle on systemd-timesyncd.service/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on systemd-tmpfiles-setup.service/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on local-fs.target/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on mnt-pve-cephfs.mount/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on remote-fs-pre.target/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on rbdmap.service/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on sysinit.target/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Job remote-fs-pre.target/stop deleted to break ordering cycle starting with sysinit.target/stop

Then, most of the time, the host reboot hangs for ~10 minutes, often
showing scapegoat units like pve-ha-lrm as the cause of the hang
(even if no HA is configured >.<).

This behavior is fixed with newer systemd versions, e.g., the v244
from buster-backports, but that is not a real option for us for now.

So until 7.0 we generate the unit with the correct dependencies
directly in the ephemeral, tmpfs-backed /run/systemd/system path and
start it.
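
A sketch of the generated unit content (variable names assumed, written e.g. to /run/systemd/system/mnt-pve-cephfs.mount):

    my $unit = <<~"EOF";
        [Unit]
        Description=ceph mount $mountpoint
        Wants=network-online.target
        Before=umount.target remote-fs.target
        After=network.target network-online.target remote-fs-pre.target

        [Mount]
        Where=$mountpoint
        What=$source
        Type=ceph
        Options=$options
        EOF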

While FUSE gets only the local-fs ordering constraint, it seems to
cope very well with such symptoms. But it _is_ racy and probably only
works because systemd stops it early, as it has hardly any ordering
constraints at all. It should be moved over in the future nonetheless;
as there's a mount.fuse.ceph helper, that should not be an issue.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-01-29 18:25:21 +01:00
edaaf47aec CephFSPlugin: copy over systemd_escape
This is but a hack, but we have no general helper/tools module here
and I do not want to introduce a versioned dependency on pve-common
for this fast-tracked bugfix, so I'll have to live with the shame for
now.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-01-29 18:00:37 +01:00