Categories

  • All news regarding the Xen and XCP-ng ecosystem

    139 Topics
    4k Posts
    @dinhngtu said in XCP-ng Windows PV tools announcements: @probain The canonical way is to check the product_id instead: https://docs.ansible.com/projects/ansible/latest/collections/ansible/windows/win_package_module.html#parameter-product_id The ProductCode changes every time a new version of XCP-ng Windows PV tools is released, and you can get it from each release's MSI.
    No problem... If you ever decide to offer the .exe file as a separate item rather than bundled within the zip file, I would be even happier. But until then, thanks for everything!
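    For anyone automating this with Ansible, a minimal sketch of pinning win_package to a release's ProductCode, assuming a hypothetical inventory group named "windows" and placeholder path and GUID values; the real ProductCode comes from the MSI of the release you deploy:

        # Hypothetical ad-hoc win_package call; group, path, and GUID are placeholders
        ansible windows -m ansible.windows.win_package -a \
          "path=C:\\Temp\\<pv-tools>.msi product_id={<ProductCode-from-release-MSI>} state=present"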
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    @ditzy-olive You cannot join a new XCP-ng 8.3 host to an existing 8.2 pool. You can set up your new host as 8.2, join it to the 8.2 pool, and then upgrade everything to 8.3 using the correct procedure... Or, as you stated, add the new 8.3 host to the upgraded 8.3 pool.
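    A minimal sketch of checking versions before attempting the join, assuming placeholder addresses and credentials:

        # Confirm every host runs the same XCP-ng version before joining the pool
        xe host-list params=name-label,software-version
        # Run on the host that should join (not on the master):
        xe pool-join master-address=<master-ip> master-username=root master-password=<password>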
  • 3k Topics
    26k Posts
    Another log from listPartitions:

    {
      "id": "0miuq9mt5",
      "properties": {
        "method": "backupNg.listPartitions",
        "params": {
          "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
          "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251206T161106Z.alias.vhd"
        },
        "name": "API call: backupNg.listPartitions",
        "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
        "type": "api.call"
      },
      "start": 1765051796921,
      "status": "failure",
      "updatedAt": 1765051856924,
      "end": 1765051856924,
      "result": {
        "url": "https://10.xxx.xxx.61/api/v1",
        "originalUrl": "https://10.xxx.xxx.61/api/v1",
        "message": "HTTP connection has timed out",
        "name": "Error",
        "stack": "Error: HTTP connection has timed out\n at ClientRequest.<anonymous> (/usr/local/lib/node_modules/xo-server/node_modules/http-request-plus/index.js:61:25)\n at ClientRequest.emit (node:events:518:28)\n at ClientRequest.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n at TLSSocket.emitRequestTimeout (node:_http_client:849:9)\n at Object.onceWrapper (node:events:632:28)\n at TLSSocket.emit (node:events:530:35)\n at TLSSocket.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n at TLSSocket.Socket._onTimeout (node:net:595:8)\n at listOnTimeout (node:internal/timers:581:17)\n at processTimers (node:internal/timers:519:7)"
      }
    }

    {
      "id": "0miunp2s1",
      "properties": {
        "method": "backupNg.listPartitions",
        "params": {
          "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
          "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251203T161431Z.alias.vhd"
        },
        "name": "API call: backupNg.listPartitions",
        "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
        "type": "api.call"
      },
      "start": 1765047478609,
      "status": "failure",
      "updatedAt": 1765047530203,
      "end": 1765047530203,
      "result": {
        "code": -32000,
        "data": {
          "code": 5,
          "killed": false,
          "signal": null,
          "cmd": "vgchange -an cl",
          "stack": "Error: Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n Logical volume cl/root in use.\n Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n\n at genericNodeError (node:internal/errors:984:15)\n at wrappedFn (node:internal/errors:538:14)\n at ChildProcess.exithandler (node:child_process:422:12)\n at ChildProcess.emit (node:events:518:28)\n at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n at maybeClose (node:internal/child_process:1104:16)\n at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
        },
        "message": "Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n Logical volume cl/root in use.\n Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n"
      }
    }
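    For the "vgchange -an cl" failure in the second log, a minimal diagnostic sketch, assuming it is run on the machine that mounts the backup (the xo-server host or the proxy appliance):

        # List LVs with attributes; an "o" in the 6th lv_attr character means the LV is open
        lvs -o vg_name,lv_name,lv_attr,devices
        # Deactivating the guest's VG will keep failing while cl/root is held open
        vgchange -an cl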
  • Our hyperconverged storage solution

    38 Topics
    694 Posts
    From another post I gathered that there is an auto-scan feature that runs by default every 30 seconds, and it seems to cause a lot of issues when the storage contains many disks or there is a lot of storage overall. It is not completely clear whether this auto-scan feature is actually necessary; to some customers, Vates helpdesk has suggested reducing the scan frequency from 30 seconds to 2 minutes, and that seems to have improved the overall experience. The command would be: xe host-param-set other-config:auto-scan-interval=120 uuid=<Host UUID>, where the UUID is the pool master's UUID. Of course, I won't run that in production without reassurance from Vates support that it won't have a negative impact, but I think it is worth mentioning. In my situation I can see how frequent scans would delay other tasks, since the system is effectively always scanning, with the scan task itself probably being affected too.
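    A minimal sketch of the full sequence, assuming the other-config key behaves as described in the post above (worth confirming with Vates support before production use):

        # Find the pool master's UUID
        xe pool-list params=master --minimal
        # Lower the SR auto-scan frequency from 30 seconds to 2 minutes on the master
        xe host-param-set uuid=<master-uuid> other-config:auto-scan-interval=120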
  • 31 Topics
    90 Posts
    olivierlambert
    Yes, accounts aren't related