@olivierlambert Just to be sure. Thanks.
-
Metadata backup management
Sorting by date doesn't work, and sorting by name also seems random.
There is no way to remove all backups; I found I still have a lot from an old/renamed pool.
-
about Anti-affinity
You wrote as if this were a new feature in 8.3, but it has always been available in 8.x. Or does the load balancer plugin use a different method?
https://docs.xcp-ng.org/releases/release-8-3/#vm-anti-affinity-xs
https://docs.xenserver.com/en-us/xenserver/8/vms/placement#anti-affinity-placement-groups
I'm asking because of
"Only 5 anti-affinity groups per pool are supported."
since I'm using many more.
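To double-check, a quick sketch from dom0 to count the groups (assumption: the xe vm-group-list command from XAPI's VM_group feature; the exact name may differ on your build):
# list VM groups with their placement policy
xe vm-group-list params=uuid,name-label,placement
# count them
xe vm-group-list --minimal | tr ',' '\n' | grep -c .
-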
RE: Backup stuck at 27%
@Greg_E
All VMs were created on an Intel host, after I got my mini lab set up I warm migrated over to the AMD
I haven't heard of a limitation here, so it shouldn't be a problem. But the AMD V1756B
is pretty weak and not a server platform, so I'm not sure. Some people have problems with some of the newer Ryzen CPUs.
-
RE: Backup stuck at 27%
@Greg_E Simple questions first. Do you have enough free space on the host and on the backup SR? The basic recommendation is twice as much space as required.
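For example, from dom0 you can compare size vs. current usage for every SR:
# physical-size and physical-utilisation are reported in bytes
xe sr-list params=name-label,physical-size,physical-utilisation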
-
RE: Need support with Citrix server 6.1
@MW6 IPMI usually shows the RAID/disk status. It's probably dead.
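If you can get a shell on the host, a quick sketch with ipmitool (assuming in-band IPMI access is available):
# read the sensor data repository and the event log; failed drives usually show up in one of these
ipmitool sdr elist
ipmitool sel list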
-
RE: feature request: pause Sequences
Oh yes, the jobs menu needs some love.
"Current" jobs can be found on the "new" tab.
Schedules are located under a huge calendar, off screen.
And Sequences live on the backup page, even though you can use more than just backup tasks there.
-
RE: Our future backup code: test it!
I tried to move the tests to another VM, but again I can't build it with the same commands :(
yarn start
yarn run v1.22.22
$ node dist/cli.mjs
node:internal/modules/esm/resolve:275
    throw new ERR_MODULE_NOT_FOUND(
    ^
Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/opt/xen-orchestra/@xen-orchestra/xapi/disks/XapiProgress.mjs' imported from /opt/xen-orchestra/@xen-orchestra/xapi/disks/Xapi.mjs
    at finalizeResolution (node:internal/modules/esm/resolve:275:11)
    at moduleResolve (node:internal/modules/esm/resolve:860:10)
    at defaultResolve (node:internal/modules/esm/resolve:984:11)
    at ModuleLoader.defaultResolve (node:internal/modules/esm/loader:685:12)
    at #cachedDefaultResolve (node:internal/modules/esm/loader:634:25)
    at ModuleLoader.resolve (node:internal/modules/esm/loader:617:38)
    at ModuleLoader.getModuleJobForImport (node:internal/modules/esm/loader:273:38)
    at ModuleJob._link (node:internal/modules/esm/module_job:135:49) {
  code: 'ERR_MODULE_NOT_FOUND',
  url: 'file:///opt/xen-orchestra/@xen-orchestra/xapi/disks/XapiProgress.mjs'
}
Node.js v22.14.0
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
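Maybe a stale checkout, since XapiProgress.mjs should be part of the repo? I guess the usual XO-from-sources rebuild would fix it (just a sketch):
cd /opt/xen-orchestra
git pull      # make sure XapiProgress.mjs is actually present after the pull
yarn          # reinstall workspace dependencies
yarn build    # rebuild all packages, including @xen-orchestra/xapi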
-
RE: Our future backup code: test it!
Well, that was my CPU bottleneck. XO lives in the most stable DC, but also the oldest one.
- Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
  flash:
  Speed: 151.36 MiB/s
  summary: { duration: '3m', cpuUsage: '131%', memoryUsage: '162.19 MiB' }
  hdd:
  Speed: 152 MiB/s
  summary: { duration: '3m', cpuUsage: '201%', memoryUsage: '314.1 MiB' }
- Intel(R) Xeon(R) Gold 5215 CPU @ 2.50GHz
  flash:
  Speed: 196.78 MiB/s
  summary: { duration: '3m', cpuUsage: '129%', memoryUsage: '170.8 MiB' }
  hdd:
  Speed: 184.72 MiB/s
  summary: { duration: '3m', cpuUsage: '198%', memoryUsage: '321.06 MiB' }
- Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz
  flash:
  Speed: 222.32 MiB/s
  Speed: 220 MiB/s
  summary: { duration: '2m', cpuUsage: '155%', memoryUsage: '183.77 MiB' }
  hdd:
  Speed: 185.63 MiB/s
  Speed: 185.21 MiB/s
  summary: { duration: '3m', cpuUsage: '196%', memoryUsage: '315.87 MiB' }
Look at the high memory usage with hdd.
Sometimes I still get errors.
"id": "1744875242122:0", "message": "export", "start": 1744875242122, "status": "success", "tasks": [ { "id": "1744875245258", "message": "transfer", "start": 1744875245258, "status": "success", "end": 1744875430762, "result": { "size": 28489809920 } }, { "id": "1744875432586", "message": "clean-vm", "start": 1744875432586, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/d4950e88-f6aa-dbc1-e6fe-e3c73ebe9904/20250417T073405Z.json", "actual": 28489809920, "expected": 28496828928 }, "message": "cleanVm: incorrect backup size in metadata" }
"id": "1744876967012:0", "message": "export", "start": 1744876967012, "status": "success", "tasks": [ { "id": "1744876970075", "message": "transfer", "start": 1744876970075, "status": "success", "end": 1744877108146, "result": { "size": 28489809920 } }, { "id": "1744877119430", "message": "clean-vm", "start": 1744877119430, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/d4950e88-f6aa-dbc1-e6fe-e3c73ebe9904/20250417T080250Z.json", "actual": 28489809920, "expected": 28496828928 }, "message": "cleanVm: incorrect backup size in metadata" }