@archw
FWIW...It happened again last night.
Start: 2024-05-15 04:00
End: 2024-05-15 07:07
Duration: 3 hours
Error: Body Timeout Error

Duration: 3 hours
Error: all targets have failed, step: writer.run()
Type: full
Scratch that one.....I should have googled first. I saw that someone suggested deleting the cache.json.gz file. I deleted it, ran the job again, and it worked.
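In case anyone else hits this, here's roughly what I did (a sketch; the mount point /mnt/nfs-backup is just an example, yours will differ):

# find the per-VM backup cache on the NFS remote (mount point is an assumption)
find /mnt/nfs-backup/xo-vm-backups -maxdepth 2 -name cache.json.gz
# remove it for the affected VM; XO rebuilds it on the next run
rm /mnt/nfs-backup/xo-vm-backups/<vm-uuid>/cache.json.gz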
And another one.....
For the last week or so, the backup on one particular VM gives this error:
VM2 (xcp-ng-2)
• Clean VM directory
uuid is duplicated
should delete /xo-vm-backups/91bad703-c792-1f05-17a5-042ae49247f7/vdis/371c0511-5058-40bc-b435-3122f21edf50/a888e4cc-19d6-41f4-978a-74bcac97da11/20240509T100018Z.vhd
unexpected number of entries in backup cache
o merge
Start: 2024-05-12 06:00
End: 2024-05-12 06:00
Duration: a few seconds
Error: EIO: i/o error, close
• Snapshot
Start: 2024-05-12 06:00
End: 2024-05-12 06:00
• NFS1
o transfer
Start: 2024-05-12 06:00
End: 2024-05-12 08:44
Duration: 3 hours
Size: 807.83 GiB
Speed: 84.01 MiB/s
o Clean VM directory
uuid is duplicated
should delete /xo-vm-backups/91bad703-c792-1f05-17a5-042ae49247f7/vdis/371c0511-5058-40bc-b435-3122f21edf50/a888e4cc-19d6-41f4-978a-74bcac97da11/20240509T100018Z.vhd
unexpected number of entries in backup cache
Start: 2024-05-12 08:44
End: 2024-05-12 08:44
Start: 2024-05-12 06:00
End: 2024-05-12 08:44
Duration: 3 hours
Type: full
The host is running XCP-ng 8.3 beta (updated through 5-12-24); XO is at "Xen Orchestra, commit 9b9c7".
This VM, and numerous other VMs, have backed up the same way (to two different NFS targets) for the last five months.
I've tried running it in the middle of the day (when other backups are not running), but it didn't help.
Backup type = “Delta Backup”
"Number of retries if VM backup fails"=0
"Timeout " = <blank>
"Compression"="zstd"
Any ideas?
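If I end up clearing that "uuid is duplicated" state by hand, I assume it would look something like this on the NFS remote (an untested sketch; the mount point is a placeholder, and deleting the wrong VHD can break a delta chain, so I'd double-check first):

# untested sketch -- /mnt/nfs1 stands in for wherever the remote is mounted
cd /mnt/nfs1/xo-vm-backups/91bad703-c792-1f05-17a5-042ae49247f7
# remove the duplicated VHD that the cleaner flags...
rm vdis/371c0511-5058-40bc-b435-3122f21edf50/a888e4cc-19d6-41f4-978a-74bcac97da11/20240509T100018Z.vhd
# ...and the stale cache so it is rebuilt on the next run
rm cache.json.gz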
For the last week or so, the backup on one particular VM gives this error:
VM1 (xcp-ng-1)
• Snapshot
Start: 2024-05-12 04:00
End: 2024-05-12 04:00
• NFS2
o transfer
Start: 2024-05-12 04:00
End: 2024-05-12 05:21
Duration: an hour
Error: Body Timeout Error
Start: 2024-05-12 04:00
End: 2024-05-12 05:23
Duration: an hour
Error: all targets have failed, step: writer.run()
Type: full
The host is running XCP-ng 8.3 beta (updated through 5-12-24); XO is at "Xen Orchestra, commit 9b9c7".
This VM, and numerous other VMs, have backed up the same way (to two different NFS targets) for the last five months.
I've tried running it in the middle of the day (when other backups are not running), but it didn't help.
"Number of retries if VM backup fails"=0
"Timeout " = <blank>
"Compression"="zstd"
Any ideas?
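One thing I may try is a raw write-speed check against the remote from the XO box, to rule out the NFS side (a sketch; the path under /run/xo-server/mounts is my guess at where the remote gets mounted):

# rough sequential-write check against the NFS remote (mount path is an assumption)
time dd if=/dev/zero of=/run/xo-server/mounts/<remote-id>/ddtest bs=1M count=2048 oflag=direct
rm /run/xo-server/mounts/<remote-id>/ddtest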
@stormi
Worked for me at the first site, but it did one weird thing that it's never done before.
When I applied the patches to the pool master, it disappeared from the XOA GUI. Normally, after you apply the updates, all the hosts (including the master) disappear for a few minutes and then come back, but this time they didn't. I had to reboot it from the Dell iDRAC GUI. It rebooted, and then all the hosts came back.
@stormi said in XCP-ng 8.3 beta:
yum update --enablerepo=xcp-ng-lab
After the new updates came out yesterday, I did the update, but this time I did what you said ("yum update --enablerepo=xcp-ng-lab") and it worked perfectly...thanks!
@stormi I didn't even think to try that. I just did it and it's rebooting.
I think all of this is my fault and I wouldn't worry about it. After I left the last post, I went in and saw a bunch of stuff from the xcp-ng-lab repo. I quasi-fixed it by doing a bunch of yum downgrades until I no longer got an error, and I think that fixed it. I had to remove a few things, but I took a "yum list installed" from a good system and made everything the same. Again, it's a dummy system that I don't really use except to test things.
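For the record, this is roughly how I compared the two boxes (a sketch from memory):

# on the good host
yum list installed | sort > /tmp/good.txt
# on the broken host
yum list installed | sort > /tmp/bad.txt
# copy one list over, then diff to see what to downgrade or remove
diff /tmp/good.txt /tmp/bad.txt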
Here is the output from "yum repolist --verbose"
Loading "fastestmirror" plugin
Config time: 0.010
Yum version: 3.4.3
Loading mirror speeds from cached hostfile
Excluding mirror: updates.xcp-ng.org
* xcp-ng-base: mirrors.xcp-ng.org
Excluding mirror: updates.xcp-ng.org
* xcp-ng-updates: mirrors.xcp-ng.org
Setting up Package Sacks
pkgsack time: 0.004
Repo-id : xcp-ng-base
Repo-name : XCP-ng Base Repository
Repo-revision: 1711706182
Repo-updated : Fri Mar 29 05:56:22 2024
Repo-pkgs : 3,599
Repo-size : 12 G
Repo-baseurl : http://mirrors.xcp-ng.org/8/8.3/base/x86_64/,
: http://updates.xcp-ng.org/8/8.3/base/x86_64/
Repo-expire : 21,600 second(s) (last: Sat Mar 30 15:37:49 2024)
Filter : read-only:present
Repo-filename: /etc/yum.repos.d/xcp-ng.repo
Repo-id : xcp-ng-updates
Repo-name : XCP-ng Updates Repository
Repo-revision: 1693236966
Repo-updated : Mon Aug 28 11:36:06 2023
Repo-pkgs : 2
Repo-size : 4.6 k
Repo-baseurl : http://mirrors.xcp-ng.org/8/8.3/updates/x86_64/,
: http://updates.xcp-ng.org/8/8.3/updates/x86_64/
Repo-expire : 21,600 second(s) (last: Sat Mar 30 15:37:50 2024)
Filter : read-only:present
Repo-filename: /etc/yum.repos.d/xcp-ng.repo
repolist: 3,601
Hmmmm...I'm gonna say a guarded "I don't think so", but that is a junk machine that I use to test things, so it may be a possibility.
How do I tell?
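Best guess while I wait: I assume something like this would show whether anything on the box came from the lab repo (a sketch; "@xcp-ng-lab" is my guess at the tag yum puts in the "from repo" column):

# installed packages whose origin column points at the lab repo (repo id assumed)
yum list installed | awk '$3 == "@xcp-ng-lab"'
# yum history can also show which transaction pulled things in
yum history list all | head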
I noticed new updates for 8.3 this morning. One of the machines will not update. When you try to run it from XOA you get this error:
pool.installPatches
{
"hosts": [
"7ce4f772-4391-4982-a1f9-d1de86be92cb"
]
}
{
"code": "-1",
"params": [
"Command '['yum', 'update', '--disablerepo=*', '--enablerepo=xcp-ng-base,xcp-ng-updates', '-y']' returned non-zero exit status 1",
"",
"Traceback (most recent call last):
File \"/etc/xapi.d/plugins/xcpngutils/__init__.py\", line 119, in wrapper
return func(*args, **kwds)
File \"/etc/xapi.d/plugins/updater.py\", line 96, in decorator
return func(*args, **kwargs)
File \"/etc/xapi.d/plugins/updater.py\", line 182, in update
return install_helper(session, args, 'update')
File \"/etc/xapi.d/plugins/updater.py\", line 153, in install_helper
raise error
CalledProcessError: Command '['yum', 'update', '--disablerepo=*', '--enablerepo=xcp-ng-base,xcp-ng-updates', '-y']' returned non-zero exit status 1
"
],
"call": {
"method": "host.call_plugin",
"params": [
"OpaqueRef:fab0b7b0-de37-a996-1760-92a38cf136c2",
"updater.py",
"update",
{}
]
},
"message": "-1(Command '['yum', 'update', '--disablerepo=*', '--enablerepo=xcp-ng-base,xcp-ng-updates', '-y']' returned non-zero exit status 1, , Traceback (most recent call last):
File \"/etc/xapi.d/plugins/xcpngutils/__init__.py\", line 119, in wrapper
return func(*args, **kwds)
File \"/etc/xapi.d/plugins/updater.py\", line 96, in decorator
return func(*args, **kwargs)
File \"/etc/xapi.d/plugins/updater.py\", line 182, in update
return install_helper(session, args, 'update')
File \"/etc/xapi.d/plugins/updater.py\", line 153, in install_helper
raise error
CalledProcessError: Command '['yum', 'update', '--disablerepo=*', '--enablerepo=xcp-ng-base,xcp-ng-updates', '-y']' returned non-zero exit status 1
)",
"name": "XapiError",
"stack": "XapiError: -1(Command '['yum', 'update', '--disablerepo=*', '--enablerepo=xcp-ng-base,xcp-ng-updates', '-y']' returned non-zero exit status 1, , Traceback (most recent call last):
File \"/etc/xapi.d/plugins/xcpngutils/__init__.py\", line 119, in wrapper
return func(*args, **kwds)
File \"/etc/xapi.d/plugins/updater.py\", line 96, in decorator
return func(*args, **kwargs)
File \"/etc/xapi.d/plugins/updater.py\", line 182, in update
return install_helper(session, args, 'update')
File \"/etc/xapi.d/plugins/updater.py\", line 153, in install_helper
raise error
CalledProcessError: Command '['yum', 'update', '--disablerepo=*', '--enablerepo=xcp-ng-base,xcp-ng-updates', '-y']' returned non-zero exit status 1
)
at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202403291838/packages/xen-api/_XapiError.mjs:16:12)
at file:///opt/xo/xo-builds/xen-orchestra-202403291838/packages/xen-api/transports/json-rpc.mjs:38:21"
If you go to the command line and do a "yum update" you get this:
Transaction Summary
===================================================================================================================================================================================
Install ( 1 Dependent package)
Upgrade 21 Packages
Total size: 84 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction check error:
file /usr/lib64/python2.7/site-packages/xen/__init__.py from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python2.7/site-packages/xen/lowlevel/__init__.py from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python3.6/site-packages/xen/__init__.py from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python3.6/site-packages/xen/lowlevel/__init__.py from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python2.7/site-packages/xen/__init__.pyc from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python2.7/site-packages/xen/lowlevel/__init__.pyc from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python2.7/site-packages/xen/__init__.pyo from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python2.7/site-packages/xen/lowlevel/__init__.pyo from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python2.7/site-packages/xen/lowlevel/xc.so from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python3.6/site-packages/xen/__pycache__/__init__.cpython-36.opt-1.pyc from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python3.6/site-packages/xen/lowlevel/__pycache__/__init__.cpython-36.opt-1.pyc from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python3.6/site-packages/xen/__pycache__/__init__.cpython-36.pyc from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python3.6/site-packages/xen/lowlevel/__pycache__/__init__.cpython-36.pyc from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
file /usr/lib64/python3.6/site-packages/xen/lowlevel/xc.cpython-36m-x86_64-linux-gnu.so from install of xen-installer-files-4.13.5-10.42.3.xcpng8.3.x86_64 conflicts with file from package xen-dom0-tools-4.17.3-2.0.xen417.1.xcpng8.3.x86_64
Any ideas?
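The only workaround I can think of, untested and assuming xen-installer-files isn't actually needed on this host, would be to keep the conflicting package out of the transaction:

# untested sketch -- skip the conflicting package during the update
yum update --exclude=xen-installer-files
# or, if the package really isn't needed on this host:
# yum remove xen-installer-files && yum update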
I did disk speed tests on about thirty VMs as I moved from ESXi 8 to XCP-ng (both 8.2.1 and 8.3). In about 60% of the tests ESXi was faster, and in 40% XCP-ng was faster. I used Crystalmark in my tests. When the test disk was a 1 GB disk, ESXi was a lot faster, but when I changed the test disks to 8 GB the results were split and there was not much of a difference between winner and loser.
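For anyone who wants to reproduce this on a Linux guest instead of Crystalmark, something like this fio run is roughly what I was measuring (a sketch; the file name, size, and I/O engine are arbitrary choices):

# sequential read, then sequential write, against an 8 GB test file with direct I/O
fio --name=seqread --filename=/tmp/fio.dat --size=8G --bs=1M --rw=read --direct=1 --ioengine=libaio
fio --name=seqwrite --filename=/tmp/fio.dat --size=8G --bs=1M --rw=write --direct=1 --ioengine=libaio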
Yesterday, as I was about to walk out of the office for a deposition, someone walked in and said the connection to one of the VMs was dead.
I opened up iDRAC on the Dell host (Dell Inc. PowerEdge R540) and found a black screen unlike any I've seen before with XCP-ng; my vague recollection is of a standard Linux screen with "system" or something like that on it. I had twenty minutes to get to the deposition, so I didn't have time for normal debugging; I rebooted the host and watched it do a normal reboot. It came back and all was well.
Now that the dust has cleared, this is my first chance to look into what happened. Where do I start? /var/log/xensource.log? /var/log/kern.log? Something else?
Thanks!
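Meanwhile, here's what I was going to check first unless someone points me elsewhere (a sketch; /var/crash is where XCP-ng keeps crash dumps if one was captured):

ls -lt /var/crash/                                    # any captured crash dumps?
grep -iE 'panic|oops|mce|segfault' /var/log/kern.log  # kernel-level trouble
xl dmesg                                              # hypervisor console ring buffer
zgrep -i error /var/log/xensource.log* | tail         # XAPI-level events around the hang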
@olivierlambert
Thanks!
I found out that the ones where nothing was happening would complete (and leave the Unhealthy VDIs section) if I did a snapshot of the VM and then immediately deleted the snapshot.
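From the CLI, the equivalent of that snapshot-then-delete kick is roughly this (a sketch; the uuid is a placeholder):

# take a throwaway snapshot to kick the coalesce, then remove it
SNAP=$(xe vm-snapshot vm=<vm-uuid> new-name-label=coalesce-kick)
xe snapshot-uninstall uuid=$SNAP force=true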
@olivierlambert
I clicked "forget" under all the SRs under "VDIs attached to Control Domain" and they immediately populated under "Orphan VDIs".
Two have the message "January 17, 2024 at 8:10 PM (last month)"
and one has "February 19, 2024 at 8:10 PM (10 days ago)".
One of the VMs' SRs shows up with both the January 17, 2024 at 8:10 PM date and the February 19, 2024 at 8:10 PM date.
Under Unhealthy VDIs there is still one VM listed (also in the list above).
After clicking on Dashboard > Health, a myriad of entries appear under all three of these:
Unhealthy VDIs, Orphan VDIs, VDIs attached to Control Domain.
I'm sure it seems obvious, but what are they, and can they be deleted? The VMs all have recent successful backups, and none of them have active snapshots.
Thanks!
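If it's just the coalesce/garbage collector still working through them, I assume this is how I'd see it from dom0 (a sketch; the SR uuid is a placeholder):

# is the coalesce/garbage collector still running against the SR?
grep -i coalesce /var/log/SMlog | tail
# force a rescan so XAPI re-reads the SR's VDI list
xe sr-scan uuid=<sr-uuid>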
@acomav said in Import from VMware fails after upgrade to XOA 5.91:
redid the job with a snapshot from a running VM to a local SR. Same issue occurred at the same time.
I too did the same thing; it didn't work for me either.
Here is what did work:
@florent
I'm running it right now!
I tried after you patched it...same error:
Error: no opaque ref found
    at importVm (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/importVm.mjs:28:19)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at importVdi (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/importVdi.mjs:6:17)
    at file:///usr/local/lib/node_modules/xo-server/src/xo-mixins/migrate-vm.mjs:260:21
    at Task.runInside (/usr/local/lib/node_modules/xo-server/node_modules/@vates/task/index.js:158:22)
    at Task.run (/usr/local/lib/node_modules/xo-server/node_modules/@vates/task/index.js:141:20)
@florent
Cool...I'll give it a shot...it takes about twenty minutes. (It always dies in the last minute.)