XCP-ng
    Pilow
    • Following 3
    • Followers 0
    • Topics 17
    • Posts 189
    • Groups 0

    Posts

    • RE: FILE RESTORE / overlapping loop device exists

      @ph7 thank you for your tests

      some Vates devs are lurking in these forums, they will probably stumble upon this post soon 😛

      posted in Backup
      Pilow
    • RE: FILE RESTORE / overlapping loop device exists

      @ph7 that's it. I can't, and you can see the failed task logs I provided earlier.

      I can restore a full VM, but not its files. Whether it's Windows or various flavors of Linux (Debian, Ubuntu, Alma, ...), same problem.

      I think something is wrong somewhere, but I don't know where...

      posted in Backup
      Pilow
    • RE: FILE RESTORE / overlapping loop device exists

      any ideas, anyone ?

      help needed 😃

      posted in Backup
      Pilow
    • RE: DATE filter on backup logs ?

      annnnnnnd it was as simple as converting the dates to epoch milliseconds.

      start:>1765209598000 end:<1765231198000
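
      for reference, the same values can be produced with GNU date (the human-readable dates here are reconstructed from the timestamps above) :

      date -d '2025-12-08T15:59:58Z' +%s%3N   # -> 1765209598000
      date -d '2025-12-08T21:59:58Z' +%s%3N   # -> 1765231198000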
      
      posted in REST API
      Pilow
    • DATE filter on backup logs ?

      Hi,


      what would be the syntax to filter logs by start date / end date ?
      an epoch timestamp ?

      any idea or how-to ?

      posted in REST API
      Pilow
    • RE: Mirror backup: No new data to upload for this vm?

      @Forza you will have to switch to LATEST to benefit from the end-of-month release.
      STABLE is one version behind LATEST.

      both are production ready.

      posted in Backup
      Pilow
    • RE: Orange Disks

      @bazzacad first time I've seen orange VDIs in this view 😮

      posted in Management
      Pilow
    • RE: NOT_SUPPORTED_DURING_UPGRADE()

      @paco it seems to be the 10 MB cloud-config drive left over after template deployment

      you could delete it, if it is not in use anymore (did you forget it ?)

      beware: do not delete anything before being sure of what you are deleting.
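
      if in doubt, listing VDIs with their sizes first helps spot the small leftover (a generic check, nothing specific to cloud-config drives) :

      # every VDI with its size; the cloud-config drive is the ~10 MiB one
      xe vdi-list params=name-label,uuid,virtual-size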

      posted in Management
      Pilow
    • RE: FILE RESTORE / overlapping loop device exists

      on another, simpler install (one host, one XOA, no proxy, SMB remote on the same LAN instead of an S3 remote), XOA 5.112.1

      same problem !

      I think something has been broken along the way @bastien-nollet @florent

      granular file restore is important for us, otherwise we will have to go with Veeam Agent backups instead of XO Backup

      posted in Backup
      Pilow
    • RE: FILE RESTORE / overlapping loop device exists

      another log from listPartitions :

      {
        "id": "0miuq9mt5",
        "properties": {
          "method": "backupNg.listPartitions",
          "params": {
            "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251206T161106Z.alias.vhd"
          },
          "name": "API call: backupNg.listPartitions",
          "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
          "type": "api.call"
        },
        "start": 1765051796921,
        "status": "failure",
        "updatedAt": 1765051856924,
        "end": 1765051856924,
        "result": {
          "url": "https://10.xxx.xxx.61/api/v1",
          "originalUrl": "https://10.xxx.xxx.61/api/v1",
          "message": "HTTP connection has timed out",
          "name": "Error",
          "stack": "Error: HTTP connection has timed out\n    at ClientRequest.<anonymous> (/usr/local/lib/node_modules/xo-server/node_modules/http-request-plus/index.js:61:25)\n    at ClientRequest.emit (node:events:518:28)\n    at ClientRequest.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at TLSSocket.emitRequestTimeout (node:_http_client:849:9)\n    at Object.onceWrapper (node:events:632:28)\n    at TLSSocket.emit (node:events:530:35)\n    at TLSSocket.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at TLSSocket.Socket._onTimeout (node:net:595:8)\n    at listOnTimeout (node:internal/timers:581:17)\n    at processTimers (node:internal/timers:519:7)"
        }
      }
      
      {
        "id": "0miunp2s1",
        "properties": {
          "method": "backupNg.listPartitions",
          "params": {
            "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251203T161431Z.alias.vhd"
          },
          "name": "API call: backupNg.listPartitions",
          "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
          "type": "api.call"
        },
        "start": 1765047478609,
        "status": "failure",
        "updatedAt": 1765047530203,
        "end": 1765047530203,
        "result": {
          "code": -32000,
          "data": {
            "code": 5,
            "killed": false,
            "signal": null,
            "cmd": "vgchange -an cl",
            "stack": "Error: Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n  WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n  WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n  Logical volume cl/root in use.\n  Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n\n    at genericNodeError (node:internal/errors:984:15)\n    at wrappedFn (node:internal/errors:538:14)\n    at ChildProcess.exithandler (node:child_process:422:12)\n    at ChildProcess.emit (node:events:518:28)\n    at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at maybeClose (node:internal/child_process:1104:16)\n    at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n    at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
          },
          "message": "Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n  WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n  WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n  Logical volume cl/root in use.\n  Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n"
        }
      }
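
      the second failure is LVM refusing to deactivate the guest's volume group "cl" because one of its logical volumes is still mounted. A hedged way to see what is holding it, run on the XOA or proxy VM that performs the mount (names taken from the log above) :

      losetup -l                          # loop devices backed by the mounted VHD files
      lvs -o lv_name,vg_name,lv_active    # which LVs of "cl" are still active
      findmnt | grep cl                   # what still has cl/root mounted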
      
      posted in Backup
      Pilow
    • FILE RESTORE / overlapping loop device exists

      Hi, on an XOA on the latest channel, we get this error :

      {
        "id": "0miuqao5o",
        "properties": {
          "method": "backupNg.listFiles",
          "params": {
            "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "disk": "/xo-vm-backups/ec9e8a54-a78e-8ca8-596e-20ebeaaa4308/vdis/70dec2db-a660-4bf4-b8f9-7c90e7e45156/7fe5a104-e9a3-4e16-951c-f88ce78e3b2a/20251206T161309Z.alias.vhd",
            "path": "/",
            "partition": "6f2859cc-5df3-4c47-bd05-37d3b066f11e"
          },
          "name": "API call: backupNg.listFiles",
          "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
          "type": "api.call"
        },
        "start": 1765051845324,
        "status": "failure",
        "updatedAt": 1765051845346,
        "end": 1765051845346,
        "result": {
          "code": -32000,
          "data": {
            "code": 32,
            "killed": false,
            "signal": null,
            "cmd": "mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd",
            "stack": "Error: Command failed: mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd\nmount: /tmp/k96ucr0xyd: overlapping loop device exists for /tmp/g82oettc5oh/vhd0.\n\n    at genericNodeError (node:internal/errors:984:15)\n    at wrappedFn (node:internal/errors:538:14)\n    at ChildProcess.exithandler (node:child_process:422:12)\n    at ChildProcess.emit (node:events:518:28)\n    at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at maybeClose (node:internal/child_process:1104:16)\n    at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n    at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
          },
          "message": "Command failed: mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd\nmount: /tmp/k96ucr0xyd: overlapping loop device exists for /tmp/g82oettc5oh/vhd0.\n"
        }
      }
      

      sometimes, we can get through the volume/partition selection, but then the restoration never ends...

      Remote is working, tested OK.
      Remote is accessed by an XO PROXY that has been rebooted.

      Backups TO this remote are OK.
      Restoration of the same FULL VM from the same remote is also OK.

      Only the granular file restore is not working...
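
      a diagnostic sketch for the "overlapping loop device" message, to run on the machine performing the mount (the proxy here); check device names before detaching anything :

      losetup -l | grep vhd     # stale loop devices left over from earlier restore attempts
      losetup -d /dev/loopN     # detach one only after verifying it is no longer in use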

      any idea ?

      posted in Backup
      Pilow
    • RE: HOST_NOT_ENOUGH_FREE_MEMORY

      @ideal perhaps you could take advantage of dynamic memory
      https://docs.xcp-ng.org/vms/#dynamic-memory
      to oversubscribe memory and have all 4 VMs up at once... or reduce the allocated memory of your VMs; on your screenshot, one VM seems pretty big memory-wise compared to the 2 others
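
      for example, the dynamic range can also be set from the CLI (a sketch; the UUID and sizes are placeholders to adapt) :

      # let the VM balloon between 2 GiB and 4 GiB
      xe vm-memory-dynamic-range-set uuid=<VM UUID> min=2GiB max=4GiB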

      posted in Xen Orchestra
      Pilow
    • RE: Install mono on XCP-ng

      @isdpcman-0 said in Install mono on XCP-ng:

      Our RMM tools will run on CentOS but fail to install because they are looking for mono to be on the system. How can I install Mono on an XCP-ng host so we can install our monitoring/management tools?


      I think it is advised to consider hosts as appliances and not to install any external packages (repos are disabled for that purpose; that's probably why your install fails)
      even with a cluster of many hosts in a pool, you would have to deploy the same packages on all hosts to keep them consistent...

      better to use SNMP to monitor your hosts ? or the standard installed packages ?
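
      a quick way to see why installs fail out of the box (repos ship disabled on purpose) :

      yum repolist        # enabled repos only
      yum repolist all    # also lists the disabled XCP-ng repos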

      posted in Compute
      Pilow
    • RE: HOST_NOT_ENOUGH_FREE_MEMORY

      @ideal you should, yes.
      beware of dom0 memory (the host itself): it consumes memory too
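
      a quick check of dom0's allocation from the CLI (is-control-domain selects dom0) :

      xe vm-list is-control-domain=true params=name-label,memory-actual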

      posted in Xen Orchestra
      Pilow
    • RE: Test results for Dell Poweredge R770 with NVMe drives

      @olivierlambert said in Test results for Dell Poweredge R770 with NVMe drives:

      Hang on!

      no pun intended ? 😃

      posted in Hardware
      Pilow
    • RE: Test results for Dell Poweredge R770 with NVMe drives

      @olivierlambert eager to test this new ISO, we have two XCP-ng clusters on 8.2 that need upgrading to 8.3 with these cards :

      • BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller

      Do you think we would be impacted ?

      These same servers also have an I350 Gigabit Network Connection card (quad port)
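
      for reference, this is how we inventory the NICs on a host (standard pciutils, nothing XCP-ng specific) :

      lspci | grep -i ethernet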

      posted in Hardware
      Pilow
    • RE: log_fs_usage / /var/log directory on pool master filling up constantly

      @MajorP93 throw in multiple garbage collections during snap/desnap of backups on a XOSTOR SR, and these SR scans really get in the way

      posted in XCP-ng
      Pilow
    • RE: When the XCPNG host restart, it restarts running directly, instead of being in maintenance mode

      @olivierlambert I even witnessed something today, on the same theme :

      • one pool of 2 hosts
      • multiple VMs not running, but some have AUTO POWER ON checked
      • reboot the slave host, and as soon as it comes back online / green, the VMs with auto power on start...

      they had been shut down on purpose... surprise 😃

      by the way, is this expected when AUTO POWER ON is also checked on the POOL advanced tab ? I assumed it was only there to enable auto power on for newly created VMs (a way to check the flags is sketched below)
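
      a hedged way to inspect the flags from the CLI (the pool UUID is a placeholder; auto power on is stored in other-config:auto_poweron) :

      # per-VM flag
      xe vm-list params=name-label,other-config | grep -B1 auto_poweron
      # pool-level flag
      xe pool-param-get uuid=<Pool UUID> param-name=other-config param-key=auto_poweron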

      posted in Compute
      Pilow
    • RE: log_fs_usage / /var/log directory on pool master filling up constantly

      @MajorP93 I guess so; if someone from the Vates team gets us the answer as to why it runs so frequently, perhaps it will enlighten us

      posted in XCP-ng
      Pilow
    • RE: log_fs_usage / /var/log directory on pool master filling up constantly

      @MajorP93 said in log_fs_usage / /var/log directory on pool master filling up constantly:

      will keep monitoring this but it seems to improve things quite substantially!

      Since it appears that multiple users are affected by this it may be a good idea to change the default value within XCP-ng and/or add this to official documentation.


      nice, but these SR scans have a purpose (when you create/extend an SR, discovering VDIs and ISOs, ...)
      as for the legitimacy of reducing the period, and its impact on logs, it should indeed be better documented, yeah

      xe host-param-set other-config:auto-scan-interval=120 uuid=<Host UUID> 
      

      I never saw this command line in the documentation, perhaps it should be there with full warnings ?
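
      for completeness (assuming the same other-config key), the current value can be read back with :

      xe host-param-get uuid=<Host UUID> param-name=other-config param-key=auto-scan-interval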

      posted in XCP-ng
      Pilow