XCP-ng
    Pilow

    Posts

    • RE: CBT disabling itself / bug ?

      About these KEY backups, I think perhaps LTR got in the way @florent @bastien-nollet.

      Is there still no way of knowing WHEN a weekly/monthly backup is happening?

      posted in Backup
    • RE: CBT disabling itself / bug ?

      @flakpyro indeed seems related.

      I also have this bug:
      [screenshot]

      On some VMs, all backup points show as KEY, but in the backup logs they are indeed DELTA.

      [screenshot]

      You can see from the mere megabytes transferred that it's a delta backup... but the point is presented as KEY.

      Here is the log:
      [screenshot]

      posted in Backup
    • CBT disabling itself / bug ?

      Hi,

      Latest XOA, with fully patched XCP 8.3 here.

      I'm fiddling around again with NBD+CBT in backup jobs (I was avoiding CBT for a while, to reliably control my backups and avoid unnecessary KEY points), in the context of THICK SRs, to save some space.

      I know that CBT is reset when migrating from one SR to another.

      But here is what I encounter:

      • The VM has no CBT enabled on its VDIs; it is on a SHARED SR in a pool of 3 hosts
      • The backup option was changed to NBD+CBT (it was NBD only before)
      • CBT is enabled on the next run by the backup job, and I get a delta (I was expecting a FULL?)
      • Next run: delta, as expected
      • I migrate this VM to another HOST, without changing its SR
      • CBT is immediately disabled? Why??
      • The next backup run tries a delta, but "falls back to a full" (normal, as CBT has been disabled...), and does a KEY point on the remote
      • The next run is a delta, as expected

      Does this mean that if I do a rolling pool update or host maintenance that moves all the VMs around, all CBT will be disabled and I should expect a FALL BACK TO FULL on all my NBD+CBT enabled backup jobs??!

      Why disable CBT on a change of HOST with no move of SR?
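
      In case it helps with troubleshooting, here is a minimal sketch (not an official procedure) that checks which VDIs of a VM still have CBT enabled, using the XenAPI Python bindings; the host address, credentials and VM name are placeholders:

      import XenAPI

      # Minimal sketch: list the CBT status of every VDI of a VM via XAPI.
      # Host address, credentials and VM name below are placeholders.
      session = XenAPI.Session("https://pool-master.example")
      session.xenapi.login_with_password("root", "password")
      try:
          vm = session.xenapi.VM.get_by_name_label("my-vm")[0]
          for vbd in session.xenapi.VM.get_VBDs(vm):
              vdi = session.xenapi.VBD.get_VDI(vbd)
              if vdi == "OpaqueRef:NULL":  # skip empty CD drives
                  continue
              name = session.xenapi.VDI.get_name_label(vdi)
              print(f"{name}: cbt_enabled={session.xenapi.VDI.get_cbt_enabled(vdi)}")
      finally:
          session.xenapi.logout()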

      posted in Backup
    • RE: SR.Scan performance withing XOSTOR

      @denis.grilli really big news, I need to have XOSTOR working 😃
      Thanks for your support in correcting these problems 😄

      posted in XOSTOR
    • RE: Plugins in XO6?

      @olivierlambert so XO5 will have quite a long lifespan, as everything must be included in XO6?

      posted in Xen Orchestra
    • RE: FILE RESTORE / overlapping loop device exists

      @ph7 thank you for your tests

      Some Vates devs are lurking in these forums; they will probably stumble upon this post any time soon 😛

      posted in Backup
    • RE: FILE RESTORE / overlapping loop device exists

      @ph7 that's it. I can't; see the failed task logs I provided earlier.

      I can restore a full VM, but not its files. Whether Windows or different flavors of Linux (Debian, Ubuntu, Alma, ...), same problems.

      I think something is wrong somewhere, but I don't know where...

      posted in Backup
    • RE: FILE RESTORE / overlapping loop device exists

      Any idea, anyone?

      halp needed 😃

      posted in Backup
    • RE: DATE filter on backup logs ?

      annnnnnnd it was as simple as converting the date to milliseconds.

      start:>1765209598000 end:<1765231198000
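
      For reference, a quick sketch of the conversion in Python (the dates below are just examples, adjust to your own range and timezone):

      from datetime import datetime, timezone

      # Sketch: turn a date range into the epoch-millisecond values
      # expected by the backup-log filter. The dates here are examples.
      def to_ms(dt: datetime) -> int:
          return int(dt.timestamp() * 1000)

      start = to_ms(datetime(2025, 12, 8, 16, 0, tzinfo=timezone.utc))
      end = to_ms(datetime(2025, 12, 8, 22, 0, tzinfo=timezone.utc))
      print(f"start:>{start} end:<{end}")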
      
      posted in REST API
    • DATE filter on backup logs ?

      Hi,

      [screenshot]

      What would be the syntax to filter logs by start date / end date?
      An epoch timestamp?

      [screenshot]
      Any idea or how-to?

      posted in REST API
    • RE: Mirror backup: No new data to upload for this vm?

      @Forza you will have to switch to LATEST to benefit from the end-of-month release.
      STABLE is one version behind LATEST.

      Both are production ready.

      posted in Backup
    • RE: Orange Disks

      @bazzacad first time I've seen orange VDIs in this view 😮

      posted in Management
    • RE: NOT_SUPPORTED_DURING_UPGRADE()

      @paco it seems to be the 10 MB cloud-config drive left over after template deployment.

      You could delete it if it is not in use anymore (maybe you forgot about it?).

      Beware: do not delete anything before being sure of what you are deleting.

      posted in Management
    • RE: FILE RESTORE / overlapping loop device exists

      On another, simpler install (one host, one XOA, no proxy, an SMB remote on the same LAN rather than an S3 remote), XOA 5.112.1:

      same problem!

      I think something has been broken along the way @bastien-nollet @florent

      Granular file restore is important for us; otherwise we will have to use Veeam Agent backups instead of XO Backup.

      posted in Backup
    • RE: FILE RESTORE / overlapping loop device exists

      Another log, from listPartitions:

      {
        "id": "0miuq9mt5",
        "properties": {
          "method": "backupNg.listPartitions",
          "params": {
            "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251206T161106Z.alias.vhd"
          },
          "name": "API call: backupNg.listPartitions",
          "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
          "type": "api.call"
        },
        "start": 1765051796921,
        "status": "failure",
        "updatedAt": 1765051856924,
        "end": 1765051856924,
        "result": {
          "url": "https://10.xxx.xxx.61/api/v1",
          "originalUrl": "https://10.xxx.xxx.61/api/v1",
          "message": "HTTP connection has timed out",
          "name": "Error",
          "stack": "Error: HTTP connection has timed out\n    at ClientRequest.<anonymous> (/usr/local/lib/node_modules/xo-server/node_modules/http-request-plus/index.js:61:25)\n    at ClientRequest.emit (node:events:518:28)\n    at ClientRequest.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at TLSSocket.emitRequestTimeout (node:_http_client:849:9)\n    at Object.onceWrapper (node:events:632:28)\n    at TLSSocket.emit (node:events:530:35)\n    at TLSSocket.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at TLSSocket.Socket._onTimeout (node:net:595:8)\n    at listOnTimeout (node:internal/timers:581:17)\n    at processTimers (node:internal/timers:519:7)"
        }
      }
      
      {
        "id": "0miunp2s1",
        "properties": {
          "method": "backupNg.listPartitions",
          "params": {
            "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251203T161431Z.alias.vhd"
          },
          "name": "API call: backupNg.listPartitions",
          "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
          "type": "api.call"
        },
        "start": 1765047478609,
        "status": "failure",
        "updatedAt": 1765047530203,
        "end": 1765047530203,
        "result": {
          "code": -32000,
          "data": {
            "code": 5,
            "killed": false,
            "signal": null,
            "cmd": "vgchange -an cl",
            "stack": "Error: Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n  WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n  WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n  Logical volume cl/root in use.\n  Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n\n    at genericNodeError (node:internal/errors:984:15)\n    at wrappedFn (node:internal/errors:538:14)\n    at ChildProcess.exithandler (node:child_process:422:12)\n    at ChildProcess.emit (node:events:518:28)\n    at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at maybeClose (node:internal/child_process:1104:16)\n    at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n    at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
          },
          "message": "Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n  WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n  WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n  Logical volume cl/root in use.\n  Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n"
        }
      }
      
      posted in Backup
    • FILE RESTORE / overlapping loop device exists

      Hi, on latest-channel XOA, we get this error:

      {
        "id": "0miuqao5o",
        "properties": {
          "method": "backupNg.listFiles",
          "params": {
            "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "disk": "/xo-vm-backups/ec9e8a54-a78e-8ca8-596e-20ebeaaa4308/vdis/70dec2db-a660-4bf4-b8f9-7c90e7e45156/7fe5a104-e9a3-4e16-951c-f88ce78e3b2a/20251206T161309Z.alias.vhd",
            "path": "/",
            "partition": "6f2859cc-5df3-4c47-bd05-37d3b066f11e"
          },
          "name": "API call: backupNg.listFiles",
          "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
          "type": "api.call"
        },
        "start": 1765051845324,
        "status": "failure",
        "updatedAt": 1765051845346,
        "end": 1765051845346,
        "result": {
          "code": -32000,
          "data": {
            "code": 32,
            "killed": false,
            "signal": null,
            "cmd": "mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd",
            "stack": "Error: Command failed: mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd\nmount: /tmp/k96ucr0xyd: overlapping loop device exists for /tmp/g82oettc5oh/vhd0.\n\n    at genericNodeError (node:internal/errors:984:15)\n    at wrappedFn (node:internal/errors:538:14)\n    at ChildProcess.exithandler (node:child_process:422:12)\n    at ChildProcess.emit (node:events:518:28)\n    at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at maybeClose (node:internal/child_process:1104:16)\n    at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n    at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
          },
          "message": "Command failed: mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd\nmount: /tmp/k96ucr0xyd: overlapping loop device exists for /tmp/g82oettc5oh/vhd0.\n"
        }
      }
      

      Sometimes we can get through the volume/partition selection, but then the restoration never ends...

      The remote is working, tested OK.
      The remote is accessed by an XO PROXY that has been rebooted.

      Backups TO this remote are OK.
      Restoring a FULL VM of the same VM from the same remote is also OK.

      Only granular file restore is not working...

      Any idea?
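
      In case it helps anyone debugging the same thing, here is a rough diagnostic sketch (an assumption on my side, not an official fix) to run on the XOA or proxy doing the mount; it uses losetup to check whether a loop device is already attached to the temporary VHD file, with the path taken from the error log above:

      import subprocess

      # Diagnostic sketch: check whether a loop device is already attached
      # to the temporary VHD file the file-restore mount complains about.
      backing_file = "/tmp/g82oettc5oh/vhd0"  # example path from the log above

      # `losetup -j FILE` lists loop devices backed by FILE.
      out = subprocess.run(
          ["losetup", "-j", backing_file],
          capture_output=True, text=True,
      ).stdout.strip()

      if out:
          print("Existing loop device(s) found:")
          print(out)
          # A stale device could then be detached with `losetup -d /dev/loopN`,
          # once you are sure nothing is using it.
      else:
          print("No loop device attached to", backing_file)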

      posted in Backup
    • RE: HOST_NOT_ENOUGH_FREE_MEMORY

      @ideal perhaps you could take advantage of dynamic memory
      https://docs.xcp-ng.org/vms/#dynamic-memory
      to oversubscribe memory and have all 4 VMs up at once... or reduce the allocated memory of your VMs; one of your VMs seems pretty big memory-wise compared to the other 2 in your screenshot.
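
      As a rough illustration (a sketch, not an official how-to), the dynamic range can also be set through XAPI with the XenAPI Python bindings; the host address, credentials, VM name and sizes below are placeholders, and the values must stay within the VM's static memory limits:

      import XenAPI

      GiB = 1024 ** 3

      # Sketch: give a VM a 2-4 GiB dynamic memory range via XAPI.
      # Host address, credentials, VM name and sizes are placeholders.
      session = XenAPI.Session("https://pool-master.example")
      session.xenapi.login_with_password("root", "password")
      try:
          vm = session.xenapi.VM.get_by_name_label("my-vm")[0]
          # Dynamic min/max must stay within the VM's static memory limits.
          session.xenapi.VM.set_memory_dynamic_range(vm, str(2 * GiB), str(4 * GiB))
      finally:
          session.xenapi.logout()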

      posted in Xen Orchestra
    • RE: Install mono on XCP-ng

      @isdpcman-0 said in Install mono on XCP-ng:

      Our RMM tools will run on CentOS but fail to install because they are looking for mono to be on the system. How can I install Mono on an XCP-ng host so we can install our monitoring/management tools?


      I think it is advised to consider hosts as appliances and not to install any external packages (repos are disabled on purpose; that's probably why you can't install anything).
      Even in the case of a pool with many hosts, you would have to deploy the same packages on all hosts to keep them consistent...

      Better to use SNMP to monitor your hosts? Or the standard installed packages?

      posted in Compute
    • RE: HOST_NOT_ENOUGH_FREE_MEMORY

      @ideal you should, yes.
      Beware of dom0 memory (the host's control domain); it consumes memory too.
      [screenshot]

      posted in Xen Orchestra
    • RE: Test results for Dell Poweredge R770 with NVMe drives

      @olivierlambert said in Test results for Dell Poweredge R770 with NVMe drives:

      Hang on!

      No pun intended? 😃

      posted in Hardware