• suggestions for upgrade path XCP-ng 8.2.1 -> XCP-ng 8.3.0

    XCP-ng
    2
    0 Votes
    2 Posts
    13 Views
    A
    @ditzy-olive You cannot join a new XCP-ng 8.3 host to an existing 8.2 pool. You can set up your new host as 8.2 and join it to the 8.2 pool, then upgrade everything to 8.3 using the correct procedure... or, as you stated, add the new 8.3 host to the upgraded 8.3 pool.
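    For reference, a minimal sketch of the join step, assuming the new host has been freshly installed with XCP-ng 8.2.1; the address and password are placeholders, and the command is run on the joining host:

        xe pool-join master-address=<pool-master-IP> master-username=root master-password=<password>

    Once every member runs 8.2.1, the pool can be upgraded to 8.3 host by host, master first, following the documented upgrade procedure.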
  • FILE RESTORE / overlapping loop device exists

    Backup
    2
    0 Votes
    2 Posts
    12 Views
    P
    Another log from listPartitions:

    { "id": "0miuq9mt5", "properties": { "method": "backupNg.listPartitions", "params": { "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da", "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251206T161106Z.alias.vhd" }, "name": "API call: backupNg.listPartitions", "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8", "type": "api.call" }, "start": 1765051796921, "status": "failure", "updatedAt": 1765051856924, "end": 1765051856924, "result": { "url": "https://10.xxx.xxx.61/api/v1", "originalUrl": "https://10.xxx.xxx.61/api/v1", "message": "HTTP connection has timed out", "name": "Error", "stack": "Error: HTTP connection has timed out\n at ClientRequest.<anonymous> (/usr/local/lib/node_modules/xo-server/node_modules/http-request-plus/index.js:61:25)\n at ClientRequest.emit (node:events:518:28)\n at ClientRequest.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n at TLSSocket.emitRequestTimeout (node:_http_client:849:9)\n at Object.onceWrapper (node:events:632:28)\n at TLSSocket.emit (node:events:530:35)\n at TLSSocket.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n at TLSSocket.Socket._onTimeout (node:net:595:8)\n at listOnTimeout (node:internal/timers:581:17)\n at processTimers (node:internal/timers:519:7)" } }

    { "id": "0miunp2s1", "properties": { "method": "backupNg.listPartitions", "params": { "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da", "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251203T161431Z.alias.vhd" }, "name": "API call: backupNg.listPartitions", "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8", "type": "api.call" }, "start": 1765047478609, "status": "failure", "updatedAt": 1765047530203, "end": 1765047530203, "result": { "code": -32000, "data": { "code": 5, "killed": false, "signal": null, "cmd": "vgchange -an cl", "stack": "Error: Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n Logical volume cl/root in use.\n Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n\n at genericNodeError (node:internal/errors:984:15)\n at wrappedFn (node:internal/errors:538:14)\n at ChildProcess.exithandler (node:child_process:422:12)\n at ChildProcess.emit (node:events:518:28)\n at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n at maybeClose (node:internal/child_process:1104:16)\n at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n at Process.callbackTrampoline (node:internal/async_hooks:130:17)" }, "message": "Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n Logical volume cl/root in use.\n Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n" } }
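    The second failure suggests the file-level restore mounted the backup through a loop device and could not release the guest's "cl" volume group because cl/root was still active. A possible manual cleanup on the machine running xo-server or the proxy, sketched with standard util-linux/LVM commands (the mount point below is hypothetical, not taken from the log):

        losetup -l                          # list loop devices left over from previous restores
        lvs -o lv_name,vg_name,lv_active    # check whether cl/root is still marked active
        umount /mnt/xo-restore              # hypothetical mount point: unmount first if still mounted
        vgchange -an cl                     # retry deactivating the guest VG once nothing uses it
        losetup -d /dev/loopX               # detach the stale loop device (replace loopX with the real one)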
  • 🛰️ XO 6: dedicated thread for all your feedback!

    Pinned Xen Orchestra
    35
    5 Votes
    35 Posts
    1k Views
    ForzaF
    @olivierlambert That's very good. I like dark mode, but not with high-contrast elements, as they hurt my eyes. This Nord theme is much better for me than the other purple dark mode.
  • V2V - Stops at 99%

    Migrate to XCP-ng
    18
    0 Votes
    18 Posts
    635 Views
    nikadeN
    I'm seeing something similar, not sure if it's the same issue, but mine stops at 95% and just hangs there:
        [12:18 sto-xcp1 ~]# xe task-list
        uuid ( RO)             : c1056d36-b195-056a-4121-e82d7fc851fb
        name-label ( RO)       : [XO] Importing content into VDI [ESXI]DEBIAN 12 fiona.iextreme.org-flat.vmdk on SR Local storage
        name-description ( RO) :
        status ( RO)           : pending
        progress ( RO)         : 0.950
    The nbdkit debug log can be found here: https://mirror2.iextreme.org/temp/stderr
    Edit: some additional info: Xen Orchestra, commit 1640a; Master, commit 1640a
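    Not from the thread, but a small sketch for keeping an eye on that task from dom0; the UUID is the one shown in the output above:

        # poll the import task every 10 seconds until it completes or fails
        watch -n 10 'xe task-list uuid=c1056d36-b195-056a-4121-e82d7fc851fb params=progress,status'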
  • Access XOA Logs via API?

    REST API
    2
    0 Votes
    2 Posts
    36 Views
    olivierlambertO
    Let me ping @mathieuRA
  • Unable to login to XO Lite

    XO Lite
    3
    0 Votes
    3 Posts
    42 Views
    W
    @ph7 Thank you!
  • Xen Orchestra Node 24 compatibility

    Xen Orchestra
    4
    0 Votes
    4 Posts
    242 Views
    M
    After moving from Node 22 to Node 24 on my XO instance, I started to see more "Error: ENOMEM: not enough memory, close" errors for my backup jobs, even though my XO VM has 8 GB of RAM... I will revert to Node 22 for now.
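    One possible experiment before reverting, assuming xo-server runs as a systemd unit (typical for a from-sources install): raise Node's V8 heap limit, which is separate from the VM's 8 GB of RAM. This is only a guess, since an ENOMEM raised on close can also come from the OS rather than the JavaScript heap.

        # assumption: the unit is named xo-server; adjust the heap size to taste
        systemctl edit xo-server
        # in the override file, add:
        #   [Service]
        #   Environment=NODE_OPTIONS=--max-old-space-size=6144
        systemctl daemon-reload && systemctl restart xo-server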
  • Test results for Dell Poweredge R770 with NVMe drives

    Hardware
    17
    0 Votes
    17 Posts
    980 Views
    olivierlambertO
    No pun, just wait for everyone to get back from our yearly Vates internal event
  • HOST_NOT_ENOUGH_FREE_MEMORY

    Xen Orchestra
    4
    0 Votes
    4 Posts
    49 Views
    P
    @ideal Perhaps you could take advantage of dynamic memory (https://docs.xcp-ng.org/vms/#dynamic-memory) to oversubscribe memory and have all 4 VMs up at once... or reduce the allocated memory of your VMs; one of them looks quite large in terms of memory compared to the other two in your screenshot.
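    As a rough illustration of the dynamic-memory option, a hedged xe sketch with placeholder values (the VM's static maximum must be at least the dynamic maximum):

        xe vm-memory-dynamic-range-set uuid=<VM UUID> min=2GiB max=4GiB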
  • Install mono on XCP-ng

    Compute
    2
    0 Votes
    2 Posts
    57 Views
    P
    @isdpcman-0 said in Install mono on XCP-ng: Our RMM tools will run on CentOS but fail to install because they are looking for mono to be on the system. How can I install Mono on an XCP-ng host so we can install our monitoring/management tools?
    I think it is advised to consider hosts as appliances and not to install any external packages (the repos are disabled for that purpose, which is probably why your installs fail). Even with a cluster of many hosts in a pool, you would have to deploy the same packages on all hosts to keep them consistent... Better to monitor your hosts via SNMP, or with the packages installed by default?
  • Mirror backup: No new data to upload for this vm?

    Backup
    9
    0 Votes
    9 Posts
    162 Views
    Bastien NolletB
    @Forza Thanks for the details, I managed to reproduce it after a few trials. I'll try to fix it and keep you informed.
  • SR.Scan performance within XOSTOR

    XOSTOR
    4
    0 Votes
    4 Posts
    67 Views
    D
    From another post I gathered that there is an auto-scan feature that runs by default every 30 seconds and seems to cause a lot of issues when the SR contains many disks or there is a lot of storage. It is not completely clear whether this auto-scan feature is actually necessary; the Vates helpdesk has suggested to some customers reducing the scan frequency from 30 seconds to 2 minutes, and that seems to have improved the overall experience. The command would be: xe host-param-set other-config:auto-scan-interval=120 uuid=<Host UUID>, where the UUID is the pool master's UUID. Of course I won't run that in production without reassurance from Vates support that it won't have a negative impact, but I think it is worth mentioning. In my situation I can see how frequent scans would delay other tasks, considering that my system is effectively always scanning, with the scan task itself probably affected too.
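    For anyone who wants to inspect the key before changing it, a small sketch based on the command quoted above; the other-config key name comes from that post and is not verified against the documentation:

        # read the current value, if any
        xe host-param-get uuid=<Host UUID> param-name=other-config param-key=auto-scan-interval
        # set the interval to 2 minutes, as suggested in the post
        xe host-param-set uuid=<Host UUID> other-config:auto-scan-interval=120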
  • 0 Votes
    6 Posts
    79 Views
    olivierlambertO
    Ah nice!
  • 0 Votes
    22 Posts
    2k Views
    T
    We are in the process of commissioning a batch of R770s with this Broadcom NIC. @stormi, has a new installer been made available?
  • Citrix or XCP-ng drivers for Windows Server 2022

    XCP-ng
    17
    0 Votes
    17 Posts
    5k Views
    V
    Hi Forza, have you finished migrating the workloads? If you are still looking for an easy migration solution, you can try Vinchin Backup & Recovery; it replaces the drivers intelligently during the migration.
  • 0 Votes
    7 Posts
    158 Views
    M
    @Pilow Agreed; however, as our XCP-ng hosts are in the cloud with OVH Cloud, we cannot use a physical appliance. We are currently setting up a dedicated OVH server for pfSense.
  • 0 Votes
    16 Posts
    171 Views
    P
    @MajorP93 Throw in multiple garbage collections during snap/desnap of backups on an XOSTOR SR, and these SR scans really get in the way.
  • 0 Votes
    11 Posts
    99 Views
    olivierlambertO
    Create an appliance with all those VMs and configure the order and delay inside it.
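    A minimal CLI sketch of that idea, using placeholder names and UUIDs:

        # create the appliance (vApp) and attach each VM to it
        xe appliance-create name-label="my-stack"
        xe vm-param-set uuid=<VM UUID> appliance=<appliance UUID>
        # lower order values start first; start-delay is the pause (in seconds) applied after the VM starts
        xe vm-param-set uuid=<VM UUID> order=0 start-delay=60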
  • 0 Votes
    2 Posts
    78 Views
    W
    I hope someone will reply to my post here about how to check the status of this, but I believe I've resolved the main issue: I disabled and re-enabled the whole task yesterday and it ran this morning as scheduled.
  • 0 Votes
    8 Posts
    140 Views
    florentF
    @henri9813 Yes, thank you for correcting my message.