Best posts made by poddingue
-
RE: Error while scanning disk
Thanks for following up and opening the issue; that's exactly the kind of report the team needs. I've subscribed to it, so I'll see when it moves. The second-run failure pattern is a useful clue; hopefully they can reproduce it from there.
-
RE: XOA Unable to connect xo server every 30s
Good news, or at least, that's my understanding.

This was a known bug and it's fixed in XO 6.4 (released April 29, PR #9681). Upgrading should make those alerts disappear for good. Thanks for the video, it made the report unambiguous.
Latest posts made by poddingue
-
RE: Mirror backup: Progress status and ETA
Thanks for testing it, and sorry for the noise.
My understanding from the changelog was that backup jobs now feed into the XO Tasks panel specifically rather than the old backup task view, but I might have got that wrong, or it may not cover mirror incremental. Are you seeing anything at all in XO Tasks while the backup runs?
If the Tasks panel is also empty, then either I misread what the fix actually covers, or there's something else going on that's worth raising directly on the XO tracker.
-
RE: Mirror backup: Progress status and ETA
As far as I understand, this landed in XO 6.4 (released April 29, PR #9734).
Backup jobs now feed into the unified XO Tasks system, which gives you live progress during a run: watch the XO Tasks panel while a backup is running.
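If you want to keep an eye on a run outside the UI, something like this should work too. I'm assuming the `/rest/v0/tasks` collection and cookie-token auth that the XO REST API exposes, so treat it as a sketch, not gospel:

```sh
# Sketch: list pending (i.e. running) XO tasks from a shell.
# XO_TOKEN is an authentication token created in XO; the cookie-based auth
# and the tasks collection are my assumptions about the current REST API.
curl -s -b "authenticationToken=$XO_TOKEN" \
  "https://your-xo-host/rest/v0/tasks?filter=status:pending" | jq .
```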
If I got it right, the roadmap item Florent mentioned is shipped.
-
RE: XO Console: Modifier keys stuck, unable to enter passwords
Just to close the loop on this: XO 6.4 (released on April 29) includes a specific fix for VNC reconnection, which may have been causing keyboard focus loss and, in turn, modifier keys to get stuck (PR #9727).
At least, that's my understanding; I could be completely wrong, so please take it with a grain of salt.

If you're on XO 6 and still seeing the issue, upgrading to 6.4 may help.
-
RE: XOA Unable to connect xo server every 30s
Good news, or at least, that's my understanding.

This was a known bug and it's fixed in XO 6.4 (released April 29, PR #9681). Upgrading should make those alerts disappear for good. Thanks for the video, it made the report unambiguous.
-
RE: Error while scanning disk
Thanks for following up and opening the issue; that's exactly the kind of report the team needs. I've subscribed to it, so I'll see when it moves. The second-run failure pattern is a useful clue; hopefully they can reproduce it from there.

-
RE: Error while scanning disk
I can't reproduce this myself, but this looks like something the XO team should see; the LVM volume group deactivation failure during restore is a specific enough error that it should be trackable.
The fact that it works on the first backup run but breaks on subsequent ones narrows it down a bit. Would you be able to open a bug report on the xen-orchestra GitHub with the error output and your XOA version? Including whether this is a delta or continuous backup job would help too. That gives the dev team something to actually pull up and reproduce.
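If it helps, these are the kinds of host-side commands I'd paste into the report alongside the error output; plain LVM2 tooling, both read-only:

```sh
# Run on the affected host after a failed restore attempt.
vgs -o vg_name,vg_attr,lv_count   # is the volume group still marked active?
dmsetup ls --tree                 # any leftover device-mapper nodes keeping it busy?
```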
Thanks.
-
RE: Storage domain server & Rolling pool upgrade
Yeah, as far as I understand, this looks like something RPU wasn't really designed to handle, and I'm not deep enough in the internals to say for sure how tricky it would be to add.
My instinct is this sits more in feature request territory than a bug, so I'd suggest filing it on https://feedback.vates.tech with something like "RPU: graceful shutdown/restart for PCI-pinned VMs" and your actual topology in the description. The product team will understand the constraint much better with a concrete Ceph setup to look at.
Someone here probably knows if there's a cleaner workaround than manually draining and shutting down the pinned VMs before each host update, but that's what I'd fall back on in the meantime.
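For what it's worth, that manual fallback could look something like this with the `xe` CLI; the UUIDs and host name-label are placeholders, and your Ceph draining step would go before the shutdowns:

```sh
# Placeholder UUIDs: shut the PCI-pinned VMs down cleanly before the host update.
for vm in "$PINNED_VM_UUID_1" "$PINNED_VM_UUID_2"; do
  xe vm-shutdown uuid="$vm"
done
# ...apply the host update and reboot, then bring them back on the same host:
for vm in "$PINNED_VM_UUID_1" "$PINNED_VM_UUID_2"; do
  xe vm-start uuid="$vm" on="host1-name-label"
done
```
-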
RE: Build XCP-ng ISO - issue at create-installimg
Thank you so much for your feedback, @Vagrantin!

-
RE: Build XCP-ng ISO - issue at create-installimg
Hi Vagrantin,
If I'm reading this right, the four-slash path is the tell. I can't confirm without seeing your exact Docker invocation, but this pattern points pretty clearly at one thing.
`misc.sh` builds its temp dir from `$PWD`; something like `$PWD/tmpdir-XXXXXX`. If Docker is running your build from `/`, that expands to `//tmpdir-XXXXXX`. Linux is fine with that. Yum is not. It turns the config path into a `file://` URI, and `////tmpdir-VcYHGW/yum-171swX.conf` is not a URI anything wants to parse.
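You can see the expansion with nothing but a shell; this is generic shell behavior, not anything specific to the build scripts:

```sh
# When PWD is "/", "$PWD/..." picks up a second leading slash.
cd /
tmpdir="$PWD/tmpdir-VcYHGW"
echo "$tmpdir"                  # //tmpdir-VcYHGW
echo "file://$tmpdir/yum.conf"  # file:////tmpdir-VcYHGW/yum.conf -- the URI yum rejects
```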
One other thing worth knowing: a `:ro` read-only mount on your repo directory causes the same symptom. `mktemp` fails silently, `TMPDIR` ends up empty, and you get the same mangled path. Different cause, same four-slash result.
Two things to try. The faster diagnostic is just to `cd` into your repo before calling the script: if the build completes, that was it. The cleaner fix for scripted runs is passing `-w /path/to/your/repo` to your `docker run` command, which sets the working directory explicitly.
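In script form, that might look like the following; the image name and paths are placeholders for whatever your setup uses:

```sh
# Placeholder image and paths; the -w flag is the actual point:
# - mount the repo writable (no :ro), since the build creates its tmpdir there
# - -w sets the working directory inside the container, so $PWD is /repo, not /
docker run --rm -v /path/to/your/repo:/repo -w /repo your-build-image ./build.sh
```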
Before you do either, this will tell you what you're actually dealing with: `echo $PWD && ls -la $TMPDIR`
That's my best guess, at least. I'm curious what those two commands show.