XCP-ng

    Pilow (@Pilow)

    54 Reputation · 21 Profile views · 181 Posts · 16 Topics · 0 Followers · 3 Following
    Website: www.cloudbox.re
    Location: Reunion Island

    Best posts made by Pilow

    • Veeam backup with XCP NG

      just to #brag

      (screenshot)

      Deployment of the worker is okay (one big Rocky Linux with default settings: 6 vCPU, 6 GB RAM, 100 GB VDI).

      First backup is fine, with decent speed! (to an XCP-ng-hosted S3 MinIO)

      Will keep testing.

      posted in Backup
    • Racked today, entire hosting solution based on Vates stack

      (screenshot)

      Hey all,

      We are proud of our new setup, a full XCP-ng hosting solution we racked in a datacenter today.
      This is the production node; tomorrow I'll post the replica node!

      XCP-ng 8.3, HPE hardware obviously, and we are preparing full automation of client provisioning by API (from switch VLANs to firewall public IPs, and automatic VM deployment).

      This needs a sticker "Vates Inside" 😃 #vent

      posted in Share your setup!
    • RE: Is supermicro IPMI data display planned?

      @sluflyer06 and I wish HPE would be added too 😃

      posted in Xen Orchestra
    • RE: Cloudbase-init on Windows

      I stuck with version: 1 in my working configuration.

      (screenshot)

      I had to rename my "Ethernet 2" NIC to Ethernet2, without the space.

      You have to use the exact NIC name from the template for this to work.
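
      For reference, a minimal sketch of what a version: 1 network config can look like (the addresses are placeholders, not my real values); the important part is that name matches the template NIC name exactly:

      version: 1
      config:
        - type: physical
          name: Ethernet2
          subnets:
            - type: static
              address: 192.0.2.10/24
              gateway: 192.0.2.1
              dns_nameservers:
                - 192.0.2.53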

      posted in Advanced features
    • RE: log_fs_usage / /var/log directory on pool master filling up constantly

      @MajorP93 Throw in multiple garbage collections during the snapshot/snapshot-removal phases of backups on a XOSTOR SR, and these SR scans really get in the way.

      posted in XCP-ng
    • TAGS in BACKUP/RESTORE

      Hi,

      Smart backups are wonderful for managing a smart selection of VMs to back up.

      When we browse the RESTORE section, it would be cool to see the TAGs again visually, with the possibility to filter on them.
      I'd like an "all VMs restorable with this particular TAG" type of filter, hope I'm clear.
      (screenshot)
      Perhaps something to add to XO 6?

      posted in Backup
    • LTR cosmetics in UI

      Could we have a way to know which backups are part of LTR?
      In Veeam B&R, when doing LTR/GFS, there is a letter like W (weekly), M (monthly) or Y (yearly) to signal this in the UI.
      (screenshot)

      That's pure cosmetics indeed, but practical.

      posted in Backup
    • RE: XO logs to external syslog

      @Forza I didn't try, as my default Graylog Input was UDP and worked with the hosts...

      But guys, that was it. In TCP mode, it's working. I quickly set up a TCP input, and voilà.

      (screenshot)
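
      For anyone else setting this up, here is roughly what the syslog transport looks like in the xo-server config (IP and port are examples; check the sample config for the exact section name, and the config path differs on a from-sources install):

      # /etc/xo-server/config.toml on XOA -- send xo-server logs to a TCP syslog/Graylog input
      [logs.transport.syslog]
      target = 'tcp://10.0.0.10:514'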

      posted in Xen Orchestra
    • RE: 🛰️ XO 6: dedicated thread for all your feedback!

      Is there anywhere we can check the backlog / work in progress / to-do list for XO 6?

      posted in Xen Orchestra
    • RE: Cloudbase-init on Windows

      @MK.ultra I don't think so.

      It's working without it for me.

      posted in Advanced features

    Latest posts made by Pilow

    • RE: FILE RESTORE / overlapping loop device exists

      On another, simpler install (one host, one XOA, no proxy, SMB remote on the same LAN rather than an S3 remote), XOA 5.112.1:

      Same problem!

      I think something has been broken along the way @bastien-nollet @florent

      Granular file restore is important for us; otherwise we'll have to go with Veeam Agent backups instead of XO Backup.

      posted in Backup
    • RE: FILE RESTORE / overlapping loop device exists

      Another log, from listPartitions:

      {
        "id": "0miuq9mt5",
        "properties": {
          "method": "backupNg.listPartitions",
          "params": {
            "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251206T161106Z.alias.vhd"
          },
          "name": "API call: backupNg.listPartitions",
          "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
          "type": "api.call"
        },
        "start": 1765051796921,
        "status": "failure",
        "updatedAt": 1765051856924,
        "end": 1765051856924,
        "result": {
          "url": "https://10.xxx.xxx.61/api/v1",
          "originalUrl": "https://10.xxx.xxx.61/api/v1",
          "message": "HTTP connection has timed out",
          "name": "Error",
          "stack": "Error: HTTP connection has timed out\n    at ClientRequest.<anonymous> (/usr/local/lib/node_modules/xo-server/node_modules/http-request-plus/index.js:61:25)\n    at ClientRequest.emit (node:events:518:28)\n    at ClientRequest.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at TLSSocket.emitRequestTimeout (node:_http_client:849:9)\n    at Object.onceWrapper (node:events:632:28)\n    at TLSSocket.emit (node:events:530:35)\n    at TLSSocket.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at TLSSocket.Socket._onTimeout (node:net:595:8)\n    at listOnTimeout (node:internal/timers:581:17)\n    at processTimers (node:internal/timers:519:7)"
        }
      }
      
      {
        "id": "0miunp2s1",
        "properties": {
          "method": "backupNg.listPartitions",
          "params": {
            "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251203T161431Z.alias.vhd"
          },
          "name": "API call: backupNg.listPartitions",
          "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
          "type": "api.call"
        },
        "start": 1765047478609,
        "status": "failure",
        "updatedAt": 1765047530203,
        "end": 1765047530203,
        "result": {
          "code": -32000,
          "data": {
            "code": 5,
            "killed": false,
            "signal": null,
            "cmd": "vgchange -an cl",
            "stack": "Error: Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n  WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n  WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n  Logical volume cl/root in use.\n  Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n\n    at genericNodeError (node:internal/errors:984:15)\n    at wrappedFn (node:internal/errors:538:14)\n    at ChildProcess.exithandler (node:child_process:422:12)\n    at ChildProcess.emit (node:events:518:28)\n    at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at maybeClose (node:internal/child_process:1104:16)\n    at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n    at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
          },
          "message": "Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n  WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n  WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n  Logical volume cl/root in use.\n  Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n"
        }
      }
      
      posted in Backup
    • FILE RESTORE / overlapping loop device exists

      Hi, on the latest channel XOA, we get this error:

      {
        "id": "0miuqao5o",
        "properties": {
          "method": "backupNg.listFiles",
          "params": {
            "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "disk": "/xo-vm-backups/ec9e8a54-a78e-8ca8-596e-20ebeaaa4308/vdis/70dec2db-a660-4bf4-b8f9-7c90e7e45156/7fe5a104-e9a3-4e16-951c-f88ce78e3b2a/20251206T161309Z.alias.vhd",
            "path": "/",
            "partition": "6f2859cc-5df3-4c47-bd05-37d3b066f11e"
          },
          "name": "API call: backupNg.listFiles",
          "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
          "type": "api.call"
        },
        "start": 1765051845324,
        "status": "failure",
        "updatedAt": 1765051845346,
        "end": 1765051845346,
        "result": {
          "code": -32000,
          "data": {
            "code": 32,
            "killed": false,
            "signal": null,
            "cmd": "mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd",
            "stack": "Error: Command failed: mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd\nmount: /tmp/k96ucr0xyd: overlapping loop device exists for /tmp/g82oettc5oh/vhd0.\n\n    at genericNodeError (node:internal/errors:984:15)\n    at wrappedFn (node:internal/errors:538:14)\n    at ChildProcess.exithandler (node:child_process:422:12)\n    at ChildProcess.emit (node:events:518:28)\n    at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at maybeClose (node:internal/child_process:1104:16)\n    at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n    at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
          },
          "message": "Command failed: mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd\nmount: /tmp/k96ucr0xyd: overlapping loop device exists for /tmp/g82oettc5oh/vhd0.\n"
        }
      }
      

      Sometimes we can get through the volume/partition selection, but then the restore never finishes...

      The remote is working, tested OK.
      The remote is accessed through an XO proxy that has been rebooted.

      Backups TO this remote are OK.
      Restoring the FULL VM from the same remote is also OK.

      Only granular file restore is not working...

      Any idea?
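
      A rough sketch of what could be checked on the XOA / proxy shell (the loop device name is a placeholder; the backing file path comes from the error above):

      # list active loop devices and their backing files
      losetup -a
      # if a stale entry still points at a leftover /tmp/.../vhd0 backing file,
      # detach it (replace /dev/loopX with the device reported above)
      losetup -d /dev/loopX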

      posted in Backup
    • RE: HOST_NOT_ENOUGH_FREE_MEMORY

      @ideal perhaps you could take advantage of dynamic memory
      https://docs.xcp-ng.org/vms/#dynamic-memory
      to oversubscribe memory and have all 4 VMs up at once... or reduce the allocated memory of your VMs; one of them looks pretty big in terms of memory compared to the 2 others in your screenshot.
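
      If you prefer the CLI over the XO UI, something along these lines should set the dynamic range (UUID and sizes are examples; the dynamic max has to stay within the VM's static max):

      # example: let the VM balloon between 2 and 4 GiB
      xe vm-memory-dynamic-range-set uuid=<vm-uuid> min=2GiB max=4GiB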

      posted in Xen Orchestra
    • RE: Install mono on XCP-ng

      @isdpcman-0 said in Install mono on XCP-ng:

      Our RMM tools will run on CentOS but fail to install because they are looking for mono to be on the system. How can I install Mono on an XCP-ng host so we can install our monitoring/management tools?


      I think it is advised to consider hosts as appliances and not install any external packages (repos are disabled for that purpose; that's probably why you can't install anything).
      Even in a pool with many hosts, you should deploy the same packages on all hosts to keep them consistent...

      Better to use SNMP to monitor your hosts? Or the packages that are already installed?
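
      A rough sketch of enabling SNMP in dom0 (assuming snmpd is already shipped there; the community string, subnet and firewall rule are examples to adapt):

      # in dom0, as root
      echo 'rocommunity public 10.0.0.0/24' >> /etc/snmp/snmpd.conf   # example read-only community + ACL
      systemctl enable --now snmpd
      # open UDP 161 in the dom0 firewall (example rule; persist it in /etc/sysconfig/iptables)
      iptables -I INPUT -p udp --dport 161 -j ACCEPT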

      posted in Compute
    • RE: HOST_NOT_ENOUGH_FREE_MEMORY

      @ideal you should, yes.
      Beware of dom0 memory (the host itself); it consumes memory too.
      (screenshot)
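
      To see what dom0 and the host actually have, something like this from the host console (field names may vary slightly between versions):

      # dom0 is the control domain VM
      xe vm-list is-control-domain=true params=name-label,memory-actual
      # host totals
      xe host-list params=name-label,memory-total,memory-free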

      posted in Xen Orchestra
    • RE: Test results for Dell Poweredge R770 with NVMe drives

      @olivierlambert said in Test results for Dell Poweredge R770 with NVMe drives:

      Hang on!

      No pun intended? 😃

      posted in Hardware
    • RE: Test results for Dell Poweredge R770 with NVMe drives

      @olivierlambert eager to test this new ISO; we have two XCP-ng clusters on 8.2 that need upgrading to 8.3 with these cards:

      • BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller

      Do you think we would be impacted?

      These same servers also have an I350 Gigabit Network Connection card (quad port).

      posted in Hardware
    • RE: log_fs_usage / /var/log directory on pool master filling up constantly

      @MajorP93 Throw in multiple garbage collections during the snapshot/snapshot-removal phases of backups on a XOSTOR SR, and these SR scans really get in the way.

      posted in XCP-ng
    • RE: When the XCPNG host restart, it restarts running directly, instead of being in maintenance mode

      @olivierlambert I even witnessed something today, on the same theme:

      • one pool of 2 hosts
      • multiple VMs not running, but some of them have AUTO POWER ON checked
      • reboot the slave host, and as soon as it comes back online / green, the VMs with auto power on start...

      They were shut down on purpose... surprise 😃

      By the way, is this what's expected when AUTO POWER ON is also checked on the POOL advanced tab? I assumed that option was only there to enable auto power on for newly created VMs.
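
      For reference, assuming the XO toggle still maps to the classic other-config:auto_poweron flag, this is how it could be checked / cleared from the CLI (UUIDs are placeholders):

      # check the flag on a VM
      xe vm-param-get uuid=<vm-uuid> param-name=other-config param-key=auto_poweron
      # clear it on a VM that should stay down
      xe vm-param-set uuid=<vm-uuid> other-config:auto_poweron=false
      # the pool-level flag
      xe pool-param-get uuid=<pool-uuid> param-name=other-config param-key=auto_poweron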

      posted in Compute