XO task watcher issue/CR broken
-
@Andrew do you still have the issue without the ignorePrematureClose option? -
@olivierlambert Yes. Still having problems on the new code without the option set. 90% of the VMs fail, 10% finish correctly on CR.
-
Thanks for your valuable feedback
-
Just for info:
I installed the latest version of XOCE, commit 083db67df9e1645a2f8fe2fac564b3aecf30d55e.
CR is OK and can be used in case of disaster, with ignorePrematureClose = true.
I managed to start the VM itself (that is, the CR copy) without problems.
At the moment I have a very slow copy problem with the VM that CR creates on the second server (18 Mb max); the same connection runs at 400 Mb for a CR copy. I want to make a copy of the CR VM and start it -
-
@Gheppy Weird... DR is now working for me, but CR still never finishes
-
@magicker
You need to add this to the config file. The location for me is /opt/xen-orchestra/packages/xo-server/.xo-server.toml:
[xapiOptions]
ignorePrematureClose = true
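For example, something like this could apply it from a shell (the config path is the one above; using systemd and a service named xo-server are assumptions, adjust to however you run xo-server):
# Append the option to the xo-server config
# (if an [xapiOptions] table already exists, add the line inside it instead)
cat >> /opt/xen-orchestra/packages/xo-server/.xo-server.toml <<'EOF'

[xapiOptions]
ignorePrematureClose = true
EOF
# Restart xo-server so the option is picked up (service name is an assumption)
systemctl restart xo-server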
-
@Gheppy Yes.. putting that in place seemed to fix DR.. but not CR.
-
Any log or something, @magicker? Are you sure it's a CR job started after modifying the config and restarting xo-server? -
@olivierlambert actually... just tried CR again, and this time it is working. Will test with a few more VMs.
-
@olivierlambert spoke too soon.. CR works in one direction but not the other
https://images.dx3webs.com/kgSC4E.png -
@magicker Any differences between the two hosts? XCP-ng versions maybe?
-
Jumping in as another "victim"... I have lost almost all ability to run any kind of backup.
System information:
XO from Sources
XO-Server: 5.109.0
XO-Web: 5.111.0
Commit: d4bbf
I had no issues running any kind of backup prior to applying the latest commits two weeks ago, on Monday, February 6, 2023. After applying that commit, everything seemed to be OK until late that evening. At that point, one of my CR jobs started and never completed. All of the subsequent jobs failed, of course, because another job was already running. I was able to delete the job, restart the toolstacks, and rebuild a new job. This started to spread until, eventually, all of my CR jobs would fail. The issue has now spread further and my DR jobs are failing, too. Of the seven backup jobs that were working with no issues prior to that date, I am now only able to run one successfully, and that is a DR job.
The DR and CR jobs are written to different repositories (the CR jobs are written to the storage repository on one of the other XCP-ng servers and the DR jobs are written to an NFS share).
I tried running the cr-issue branch update that @julien-f recommended, and modified my config.toml file with the ignorePrematureClose = true setting that was also recommended. Neither helped.
If I try to run a DR job now, even a brand-new one, it looks like it wants to start and create the snapshot, but then fails and actually shows it will start in 53.086 years.
If there's anything I can provide, just let me know what to pull and I'll add it here... hopefully that will help. Right now, I'm only able to back up a single VM!
-
@JamfoFL cr-issue was a test branch that I've just removed, please try on master. -
@julien-f OK... I just updated again and there's no change.
It certainly looks like a snapshot is created (I see one attached to the VM right now), but nothing ever progresses. Right now, the job is listed with this status:
[XO] Exporting content of VDI VXCTEST-22-02 Primary (on XCPHOST1) 0%
It's been there for about 10 minutes. I will allow this to continue to run to see if there's ever any progress... but this is usually where things jam up and the job never completes.
-
@JamfoFL If you can find the latest commit on which it worked for you, that would be very helpful
One way that you could find the correct commit without checking out every one of them is to use a binary search with git bisect. First you need to find an older commit that did not exhibit the issue, then you can start the search:
# Move to XO directory
cd xen-orchestra
# Ensure you are up-to-date on `master`
git checkout master
git pull --ff-only
# Tell git to start the search
git bisect start
# Current version is bad
git bisect bad
# Tell git which commit was good
git bisect good <commit id>
# Now that git knows the good and the bad commit, it will select a commit to test
Testing a commit:
# Re-install dependencies and rebuild XO
yarn; yarn build
# Run xo-server
./packages/xo-server/dist/cli.mjs
# Test XO to see if it has the problem
# Interrupt xo-server with Ctrl+C
# If the problem is present:
git bisect bad
# If the problem is absent:
git bisect good
# Continue testing commits until git tells you which commit is the first with the problem, example:
#
# Bisecting: 0 revisions left to test after this (roughly 0 steps)
# [31f850c19c2c2ad5054c292d0f22e9237869dc04] fix(xo-server-transport-email): log async errors
After the search, to go back to the initial state:
git bisect reset
-
@julien-f Thank you, Julien-f!
I ran through that procedure (sorry it took so long) and the last commit where everything works as expected was bf51b94. Of course, I could be back to the point where it might need to run all day before one fails, but I was, at least, able to run one of each type of backup on that commit.
Is there any way I can revert my environment back to that commit so I can have the security of backups again? Or should I ride this out while you take a look?
Thanks!
-
Just to be sure: if you go to the very next commit, like 263c23ae8f1bc6f3f32ab5fa02c3800b29db8d37, then do a git checkout 263c23ae8f1bc6f3f32ab5fa02c3800b29db8d37, then yarn, plus yarn build after, and re-run xo-server with ./packages/xo-server/dist/cli.mjs.
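In command form, that is roughly (a sketch of the same steps, run from the xen-orchestra directory):
# Check out the commit right after the known-good one
git checkout 263c23ae8f1bc6f3f32ab5fa02c3800b29db8d37
# Re-install dependencies and rebuild
yarn
yarn build
# Run xo-server from the built CLI
./packages/xo-server/dist/cli.mjs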
Start a backup job and report -
@olivierlambert I think I'm a bit confused. After I finished the testing as @julien-f recommended and ran git bisect reset, it put me back on the very latest commit (at the time). Right now, it shows me on commit 890b46b and nothing is working again.
Is there a way I can revert my current build back to bf51b94 permanently? I would assume that, after that point, I would then use your suggestions moving forward?
Sorry if I'm not understanding correctly!
-
Latest commit on the master branch is 2de9984945a970b7e2273e69c4c89662156d1824. Are you sure you are on master? (do a git branch to see it) -
@olivierlambert Thank you... yes, shortly after I replied, I did figure out how to "roll back" and am currently on commit bf51b945c5348ba76459e603be87416d3415b264, which is the last commit I was on when everything worked. With my XO on that commit, I am now able to run my routine backups and am in the process of doing so! I had no backups due to the error, so I want to get at least one good one for my test machines to make sure I'm covered.
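For reference, my "roll back" was essentially the same pattern as the earlier test procedure (exact steps may differ); something like, from the xen-orchestra directory:
# Return to the last known-good commit and rebuild
git checkout bf51b945c5348ba76459e603be87416d3415b264
yarn
yarn build
# Run xo-server again
./packages/xo-server/dist/cli.mjs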
I ran the "git branch" as you asked and can confirm I see two branches listed, master and cr-issue. The "*" was next to master (and it was highlighted in green), so I am assuming that is the current branch I am on.
So... now that everything appears to be working again, are you saying you would like me to run "git checkout 2de9984945a970b7e2273e69c4c89662156d1824" to update to the latest version? Does that have the effect of bypassing all of the commits between the working bf51b and the very latest 2de99? I'd hate to have done all of this and got my backups working only to break them again. I do realize I need to get up-to-date at some point!
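For reference, I assume getting back up to date would just be the same pattern again (commands taken from the earlier posts; whether to check out master or the specific commit is my assumption):
# Update to the tip of master (or check out the specific commit given above)
git checkout master
git pull --ff-only
# Rebuild and run xo-server
yarn
yarn build
./packages/xo-server/dist/cli.mjs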
Thanks!