Weird Issue with NFS and Synology Hyper Backup
-
1. Run a backup of a VM in XO to a remote NFS share on Synology device A.
2. Use Synology's Hyper Backup to back up the NFS share folder on Synology device A onto another Synology device B.
3. Power down XO.
4. Delete the folder on Synology A that contained XO's backup (simulating losing our XO backups).
5. Restore the backup folder on Synology A from the Hyper Backup stored on Synology device B.
6. Power up the XO VM and check the status of the NFS remotes: all remotes on Synology A are offline. The log shows:
Command failed with exit code 32: mount -o -t nfs optinas1:/volume1/test_nfs_share /run/xo-server/mounts/c9966433-081a-4d59-b109-3dbe9b054ad4
mount.nfs: access denied by server while mounting optinas1:/volume1/test_nfs_share
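(Side note: exit code 32 is mount's generic "mount failure" code.) To check whether the server still advertises the export at all, something like this from the XO VM should work (showmount queries mountd, so NFSv4-only exports may not show up):
# List the exports the server advertises
showmount -e optinas1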
However, executing:
sudo mount -t nfs optinas1:/volume1/test_nfs_share /var/test_nfs_share
on a Pop!_OS VM succeeds. (Pop!_OS is an Ubuntu variant.)
Now, ssh to XO and execute:
sudo mount -v -t nfs optinas1:/volume1/test_nfs_share /run/xo-server/mounts/c9966433-081a-4d59-b109-3dbe9b054ad4
Gives:
mount.nfs: timeout set for Fri Aug 26 17:29:08 2022
mount.nfs: trying text-based options 'vers=4.2,addr=172.22.1.49,clientaddr=172.22.1.42'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting optinas1:/volume1/test_nfs_share
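Since mount.nfs tries vers=4.2 first, one way to rule out a version-negotiation problem (a sketch, reusing the same mount point) is to pin an older protocol version explicitly:
# Force NFSv4.1 instead of letting mount.nfs start at 4.2
sudo mount -v -t nfs -o vers=4.1 optinas1:/volume1/test_nfs_share /run/xo-server/mounts/c9966433-081a-4d59-b109-3dbe9b054ad4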
After restarting the NFS service on Synology A and executing on XO:
xo@xo-ce:~$ sudo mount -v -t nfs optinas1:/volume1/test_nfs_share /run/xo-server/mounts/c9966433-081a-4d59-b109-3dbe9b054ad4
mount.nfs: timeout set for Fri Aug 26 17:41:49 2022
mount.nfs: trying text-based options 'vers=4.2,addr=172.22.1.49,clientaddr=172.22.1.42'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.1,addr=172.22.1.49,clientaddr=172.22.1.42'
xo@xo-ce:~$
The NFS shares work just fine now.
Maybe the NFS components in XO are a bit out of date?
I'm using: Xen Orchestra, commit 82452
xo-server 5.98.1, xo-web 5.100.0
-
What "NFS components" are you talking about?
-
@Kajetan321 First, is this XOA or XO Source? I guess XO Source?
Second, the NFS client is part of the OS (i.e. Debian 11), not of XO directly.
Third, your NFS server also needs to have the correct NFS support: Synology KB info
NFS4 works differently than NFS3, so you might want to make sure both are enabled on Synology to make client access easier. But it looks like it worked (with NFS4.1).
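If you want to confirm which versions the Synology actually offers, something like this from any Linux client should show it (assuming rpcinfo, from the rpcbind package, is installed):
# List the RPC programs/versions the server registers; look for the 'nfs' rows
rpcinfo -p optinas1
# Or probe the nfs service over TCP for all registered versions
rpcinfo -t optinas1 nfs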
-
@Andrew Hello Andrew, yes this is XO Source. Thank you for the clarification; I did not know which part is "responsible" for the "NFS bits". Maybe I should try with Debian 11 and report back my findings.
-
@olivierlambert Oh, I was just referring to all the software that is needed for NFS to work.
-
I just tried to reproduce the issue on Debian 11, and guess what: the issue DID manifest itself on Debian as well.
kajetan@debian:~$ sudo mount -v -t nfs -o vers=4.1 optinas1:/volume1/test_nfs_share /var/test_nfs_share/
mount.nfs: timeout set for Tue Aug 30 16:33:58 2022
mount.nfs: trying text-based options 'vers=4.1,addr=172.22.1.49,clientaddr=172.22.1.100'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting optinas1:/volume1/test_nfs_share
Moments later, I executed the same command on Pop!_OS:
kajetan@pop-os:~$ sudo mount -v -t nfs -o vers=4.1 optinas1:/volume1/test_nfs_share /var/test_nfs_share/
mount.nfs: timeout set for Tue Aug 30 16:33:32 2022
mount.nfs: trying text-based options 'vers=4.1,addr=172.22.1.49,clientaddr=172.22.1.51'
kajetan@pop-os:~$
And the mount worked fine. It looks to me like somewhere along the chain (Debian -> Ubuntu -> Pop!_os) the issue got fixed. And yes, I did apply
apt-get update && apt-get upgrade
to the Debian box.
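For anyone who wants to compare the two boxes, the NFS-relevant bits to check on each would be something like:
# Compare the kernel and the NFS userspace tooling between Debian and Pop!_OS
uname -r
dpkg -l nfs-common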
-
I'm wondering if I have the same issue. I'm trying to add a backup remote connected via NFS to an OpenMediaVault server (i.e. Debian-based) with XO Source. I can add an ISO store from the OMV server via NFS (with the "hard" option) with no issues, but the backup NFS mount, configured exactly the same, fails with the same error message @Kajetan321 gets. I'm using NFS 3; should I specify that somehow?
Or should I be less lazy and set up NFS4?
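If the remote form accepts extra mount options, I suppose vers=3 there would do it; the manual equivalent would be something like this (hostname and export made up for illustration):
# Hypothetical OMV hostname/export; pin NFSv3 explicitly
sudo mkdir -p /mnt/nfs_test
sudo mount -v -t nfs -o vers=3 omv-server:/export/backups /mnt/nfs_test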
Either way, it's very weird to have the same NFS server setup work in one instance (as an ISO store) and fail in another (as a backup remote). Also, is there a reason these have to be specified in different ways in different places? Why isn't "Backup" just a form of storage like "ISO", set up through the same method?
Sorry to be a little cranky but I've found this process quite unintuitive and I'm a little worried I can't back anything up easily in the meantime, particularly as the XO backup functionality looks great (once it works).
-
It's not weird at all. The NFS SR is connected via the XCP-ng dom0, and the remote/Backup repository is mounted via Xen Orchestra. They are 2 very different systems.
-
@olivierlambert Alright, but this is not immediately apparent to the befuddled newbie and it's not really necessary to make the distinction from a UX perspective. Just because it's that way under the hood doesn't mean it needs to be expressed that way to the user who is not as embedded in the history of the product.
I'm also not sure what to do with this information. The XO install is in a container running on the NFS host, so perhaps that causes it some distress?
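I suppose I could try a manual mount from a shell inside the container to separate an XO problem from a container-permissions problem; NFS mounts from inside a container typically need extra privileges (e.g. CAP_SYS_ADMIN), and a kernel-side "Operation not permitted" would point at the container setup rather than the server (hostname/export made up again):
# Run inside the XO container
mkdir -p /tmp/nfs_test
mount -v -t nfs -o vers=3 omv-server:/export/backups /tmp/nfs_test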
-
It's important to understand the components you are using, especially with software that you use without support/training.
In short:
- SR: a Storage Repository is accessed by your host/dom0/XCP-ng and is used to store ISOs and/or VM disks. The dom0 runs a relatively old CentOS release (7.x) and will mount with some specific settings.
- BR/remote: a Backup Repository is where Xen Orchestra will send the backups of your VMs. Usually, Xen Orchestra runs (in XOA) inside a Debian 11 VM, with default NFS mount options (default, i.e. from the operating system's point of view).
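To make that concrete, the same NAS ends up being mounted by two different clients, roughly like this (illustrative hostnames/paths, not the exact commands each component runs):
# On the XCP-ng host (dom0): the NFS SR is mounted by the host itself
mount -t nfs nas:/export/vm_disks /run/sr-mount/<sr-uuid>
# Inside the XO VM: the backup remote is mounted by xo-server
mount -t nfs nas:/export/backups /run/xo-server/mounts/<remote-uuid>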
It's hard to pinpoint the exact issue since you are likely using XO from the sources, i.e. an XO that you chose to install on a system over which we (aka the "XO devs") have precisely zero control. Plus, the NFS configuration part is also out of our scope because it's not running inside our infrastructure either.
That's a lot of moving pieces, which is why it's important to understand what you're doing, so we can assist here in the most efficient way we can.