Tristis Oris
@Tristis Oris
Best posts made by Tristis Oris
-
RE: Clearing Failed XO Tasks
doogie06
register the CLI: xo-cli --register http://url login
then remove all tasks: xo-cli rest del tasks
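If you want to see what's there before wiping everything, xo-cli can query the same REST collection first. A minimal sketch, assuming the usual fields parameter of the REST collections:
# inspect the tasks collection and their status before deleting
xo-cli rest get tasks fields=status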
-
RE: New Rust Xen guest tools
The problem with the broken package on RHEL 8-9 looks resolved now. Did multiple tests.
-
RE: New Rust Xen guest tools
DustinB said in New Rust Xen guest tools:
systemctl enable xe
yep, found them.
full steps for RHEL:
wget https://gitlab.com/xen-project/xen-guest-agent/-/jobs/6041608360/artifacts/raw/RPMS/x86_64/xen-guest-agent-0.4.0-0.fc37.x86_64.rpm
rpm -i xen-guest-agent*
yum remove -y xe-guest-utilities-latest
systemctl enable xen-guest-agent.service --now
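To confirm the agent actually came up after that, a quick check (plain systemd commands, nothing XCP-specific):
# check the service state and its log output from this boot
systemctl status xen-guest-agent.service
journalctl -u xen-guest-agent.service -b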
-
RE: New Rust Xen guest tools
chrisfonte the new tools will remove the old ones during install.
-
RE: Can not create a storage for shared iso
ckargitest that was fixed a few weeks ago. Are you on the latest commit?
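If you run from sources, the quickest way to check is git itself (the /opt/xen-orchestra path is just my assumption about where your checkout lives):
# print the commit your XO checkout is currently on
git -C /opt/xen-orchestra log -1 --format=%H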
-
RE: Pool is connected but Unknown pool
julien-f olivierlambert
2371109b6fea26c15df28caed132be2108a0d88e
Fixed now, thank you.
-
RE: Our future backup code: test it!
1 VM, 1 storage, NBD connections: 1. Delta, first full.
Duration: 3 minutes
Size: 26.54 GiB
Speed: 160.71 MiB/s

Duration: 4 minutes
Size: 26.53 GiB
Speed: 113.74 MiB/s
-
RE: CBT: the thread to centralize your feedback
Updated to the fix_cbt branch.
CR NBD backup works.
Delta NBD backup works.
Just once, so we can't be sure yet. No broken tasks are generated.
Still confused why the CBT toggle is enabled on some VMs:
2 similar VMs on the same pool, same storage, same Ubuntu version. One is enabled automatically, the other is not (see the check below).
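For comparing the two VMs, the per-disk CBT flag is visible from the host CLI; a sketch using the standard xe VDI field, run on the pool master:
# list every disk that currently has CBT enabled
xe vdi-list cbt-enabled=true params=uuid,name-label,sr-uuid
-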
RE: Advice on good remote targets?
If you don't have any dedicated storage, it's possible to use literally any PC with NFS/SMB/an external drive; a Windows share also works.
Latest posts made by Tristis Oris
-
Metadata backup management
Sorting by date doesn't work. Sorting by name is also random.
There is no way to remove all backups. I found I still have a lot from an old/renamed pool.
-
about Anti-affinity
You wrote as if this is a new feature in 8.3, but it has always been available in 8.x. Or does the load balancer plugin use different methods?
https://docs.xcp-ng.org/releases/release-8-3/#vm-anti-affinity-xs
https://docs.xenserver.com/en-us/xenserver/8/vms/placement#anti-affinity-placement-groups
I wonder because of
Only 5 anti-affinity groups per pool are supported.
since I'm using many more.
-
RE: Backup stuck at 27%
Greg_E
All VMs were created on an Intel host, after I got my mini lab set up I warm migrated over to the AMD
I haven't heard of a limitation here, so it shouldn't be a problem. But the AMD V1756B is pretty weak and not a server platform, so I'm not sure. Some people have problems with some new Ryzen CPUs.
-
RE: Backup stuck at 27%
Greg_E simple questions first. Do you have enough free space on the host and the backup SR? The basic recommendation is 2x as much as required.
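A quick way to check that from the host (standard xe fields; free space is physical-size minus physical-utilisation):
# show total and used bytes for every SR
xe sr-list params=name-label,physical-size,physical-utilisation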
-
RE: Need support with Citrix server 6.1
MW6 IPMI usually shows the RAID/disk status. Probably it's dead.
-
RE: feature request: pause Sequences
Oh yes, the jobs menu needs some love.
"Current" jobs can be found on the "new" tab.
Schedules are located under a huge calendar, off screen.
And Sequences are located on the backup page, even though you can use more than just backup tasks there.
-
RE: Our future backup code: test it!
I tried to move the tests to another VM, but again I can't build it with the same commands :(
yarn start
yarn run v1.22.22
$ node dist/cli.mjs
node:internal/modules/esm/resolve:275
    throw new ERR_MODULE_NOT_FOUND(
    ^
Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/opt/xen-orchestra/@xen-orchestra/xapi/disks/XapiProgress.mjs' imported from /opt/xen-orchestra/@xen-orchestra/xapi/disks/Xapi.mjs
    at finalizeResolution (node:internal/modules/esm/resolve:275:11)
    at moduleResolve (node:internal/modules/esm/resolve:860:10)
    at defaultResolve (node:internal/modules/esm/resolve:984:11)
    at ModuleLoader.defaultResolve (node:internal/modules/esm/loader:685:12)
    at #cachedDefaultResolve (node:internal/modules/esm/loader:634:25)
    at ModuleLoader.resolve (node:internal/modules/esm/loader:617:38)
    at ModuleLoader.getModuleJobForImport (node:internal/modules/esm/loader:273:38)
    at ModuleJob._link (node:internal/modules/esm/module_job:135:49) {
  code: 'ERR_MODULE_NOT_FOUND',
  url: 'file:///opt/xen-orchestra/@xen-orchestra/xapi/disks/XapiProgress.mjs'
}
Node.js v22.14.0
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
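A missing .mjs like that usually just means the workspace wasn't rebuilt after switching branches; what fixes it for me is the usual from-sources sequence, run in the repo root (/opt/xen-orchestra here):
# refresh dependencies, then rebuild all workspace packages
yarn
yarn build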
-
RE: Our future backup code: test it!
well, that was my CPU bottleneck. XO lives in the most stable DC, but the oldest one.
- Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
flash:
Speed: 151.36 MiB/s
summary: { duration: '3m', cpuUsage: '131%', memoryUsage: '162.19 MiB' }
hdd:
Speed: 152 MiB/s
summary: { duration: '3m', cpuUsage: '201%', memoryUsage: '314.1 MiB' }
- Intel(R) Xeon(R) Gold 5215 CPU @ 2.50GHz
flash:
Speed: 196.78 MiB/s
summary: { duration: '3m', cpuUsage: '129%', memoryUsage: '170.8 MiB' }
hdd:
Speed: 184.72 MiB/s
summary: { duration: '3m', cpuUsage: '198%', memoryUsage: '321.06 MiB' }
- Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz
flash:
Speed: 222.32 MiB/s
Speed: 220 MiB/s
summary: { duration: '2m', cpuUsage: '155%', memoryUsage: '183.77 MiB' }
hdd:
Speed: 185.63 MiB/s
Speed: 185.21 MiB/s
summary: { duration: '3m', cpuUsage: '196%', memoryUsage: '315.87 MiB' }
Look at the high memory usage with hdd.
Sometimes I still get errors.
"id": "1744875242122:0", "message": "export", "start": 1744875242122, "status": "success", "tasks": [ { "id": "1744875245258", "message": "transfer", "start": 1744875245258, "status": "success", "end": 1744875430762, "result": { "size": 28489809920 } }, { "id": "1744875432586", "message": "clean-vm", "start": 1744875432586, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/d4950e88-f6aa-dbc1-e6fe-e3c73ebe9904/20250417T073405Z.json", "actual": 28489809920, "expected": 28496828928 }, "message": "cleanVm: incorrect backup size in metadata" }
"id": "1744876967012:0", "message": "export", "start": 1744876967012, "status": "success", "tasks": [ { "id": "1744876970075", "message": "transfer", "start": 1744876970075, "status": "success", "end": 1744877108146, "result": { "size": 28489809920 } }, { "id": "1744877119430", "message": "clean-vm", "start": 1744877119430, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/d4950e88-f6aa-dbc1-e6fe-e3c73ebe9904/20250417T080250Z.json", "actual": 28489809920, "expected": 28496828928 }, "message": "cleanVm: incorrect backup size in metadata" }
-
RE: Backup stuck at 27%
Greg_E restart the pool toolstack and maybe XO itself.
If a backup was interrupted, it can get stuck this way.
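On XCP-ng the toolstack restart is the stock helper; the xo-server service name below is my assumption for a from-sources install:
# on each host in the pool, master first
xe-toolstack-restart
# then on the XO VM, if XO itself looks wedged
systemctl restart xo-server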