Editing a Remote always fails
-
With the announcement that file-level restore now works with the new backup mode, I figured it was time I tried it out.
So I upgraded my XO from sources (now on commit 42432) and tried to edit my existing remote to enable the "multiple data blocks" mode, but it fails.
This reminded me that I don't think I have ever been able to edit an existing remote.
I am selecting the "pencil" button to edit the remote, which populates the fields under the "New file system remote" banner with the information for that remote. It does not matter whether I change anything or not: when I select the "Save configuration" button, I always get an error.
The error appears as a red pop-up with the text "Save configuration E is undefined". Selecting the "Show logs" button in this pop-up takes me to the XO logs, but there is no log entry for this error.
-
Hi,
You can't transform a remote from "normal" VHD to "multiple data blocks". You have to create a new one in this mode.
-
@olivierlambert
That doesn't surprise me, but https://xen-orchestra.com/blog/xen-orchestra-5-72/ says: "To configure it, edit your current remote (or create a new one) and check 'Store backup as multiple data block instead of a whole VHD file'".
So I was doing the "edit your current remote" part like it said.
I did delete the remote and added a new one in XO, but I pointed it at the same NFS share, which I assume is also not right, because it had a couple of issues:
- The backup completed, but with an "Unused VHD" error.
- The backup took almost 6x longer than the previous "full" backup (44 min vs 8 min). Or is this expected?
I will create a new NFS share and a new Remote for tonight's backups and see how that goes.
This also doesn't change the fact that editing a Remote always fails with the same error when I try to save it, no matter what I change (including changing nothing), and that nothing is logged.
Thanks
-
Adding @florent to the loop
-
I added a new NFS share and Remote and kicked the backup off manually rather than waiting for the scheduled job tonight.
To my points above:
- The backup job on the new remote/share worked without errors.
- There was an mdadm "check data" scan running on the array used for the remote, so things aren't as bad as they first looked. But the backup still took nearly twice as long as before (15 min vs 8 min). Is this expected, or do I have another issue to figure out?
XO reports the same amount of data transferred (21.87 GiB) for the old and new jobs, but on the remote the backup is 22 GB the old way and 18 GB the new way (as reported by ls and du respectively on the NFS server).
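In case it helps, here is a small Node sketch of the comparison I mean (apparent size vs blocks actually allocated; the path is just a placeholder for a file under my remote):

```ts
// Sketch only (placeholder path): compare a file's apparent size, which is
// what `ls -l` reports, with its allocated size, which is roughly what
// `du` reports. `blocks` is POSIX-only (undefined on Windows).
import { statSync } from "node:fs";

const st = statSync("/mnt/remote/xo-vm-backups/example.vhd");
console.log("apparent size:", st.size);
console.log("allocated size:", st.blocks * 512); // blocks are 512 bytes
```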
Is there some compression being done? If so, can it be changed somewhere?
Thanks
-
Yes, there's compression by default IIRC. All those questions are for @florent anyway
-
@mjtbrady compression is used (Brotli, fastest mode by default)
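To give an idea (not our exact code, just the principle), it is similar to compressing each data block with Node's built-in Brotli at the lowest quality setting:

```ts
// Illustration only: Brotli at quality 0 (fastest), similar in spirit to
// the remote's "fastest mode". The 2 MiB block size is just an example.
import { brotliCompressSync, constants } from "node:zlib";

const block = Buffer.alloc(2 * 1024 * 1024, "example data ");
const compressed = brotliCompressSync(block, {
  params: { [constants.BROTLI_PARAM_QUALITY]: 0 }, // 0 = fastest, 11 = smallest
});
console.log(`raw: ${block.length} B, compressed: ${compressed.length} B`);
```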
There is a problem with the reported size, especially with encryption. I will add a task to the backlog to fix it (encryption adds padding, which complicates the maths)
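For example, with a 16-byte block cipher like AES-CBC, every chunk is padded up to a multiple of 16 bytes, so the stored size never matches the payload size exactly (again, a sketch of the principle, not the actual XO code):

```ts
// PKCS#7 padding demo: 1000 bytes of payload become 1008 bytes of
// ciphertext, plus the IV if it is stored alongside. This is the kind of
// accounting that makes the reported sizes slightly off.
import { createCipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32);
const iv = randomBytes(16);
const payload = randomBytes(1000);

const cipher = createCipheriv("aes-256-cbc", key, iv);
const ciphertext = Buffer.concat([cipher.update(payload), cipher.final()]);
console.log(payload.length, "->", ciphertext.length + iv.length); // 1000 -> 1024
```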