XCP-ng
    Posts by Pilow
    • bug: BACKUP FELL BACK TO A FULL provoked by a DR job

      Hi,
      latest XOA 6.0.3

      I found a case where I can provoke the BACKUP FELL BACK TO A FULL behavior (screenshot):
      (screenshot)
      I do not understand why it happens, but it happens consistently.

      We have a normal DELTA backup job with 6 VMs in it, say VM1 to VM6.
      This job has the NBD + CBT advanced options checked, and an NBD network exists to carry the backup traffic.
      Normal snapshot mode; "Merge synchronously" is checked in the advanced options.

      Another job: CONTINUOUS REPLICATION, same 6 VMs, same advanced options checked.

      And a last job: DISASTER RECOVERY of a subset of two VMs, VM4 and VM6:
      ZSTD compression, normal snapshot mode, merge synchronously, 1-point retention (a full is done each time, as intended).

      All 3 jobs run in sequence and do not overlap: first the backup, then the replication, then the DR.

      All 3 jobs complete successfully, but in the BACKUP and REPLICA jobs,
      only VM4 and VM6 show this (screenshot), with "delta" backup type at the end.

      The fact that the DR job runs on these two VMs provokes this behavior. This is not wanted.

      What we see in the REPLICA job log:
      (screenshot)

      And in the BACKUP job log:
      (screenshot)

      I checked the VM's VDI after the DR job finished, and CBT is still enabled:
      (screenshot)

      Why does a DR job reset something that makes the other two jobs fall back to a full on these VDIs?
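
      In case it helps to reproduce: the CBT state of the affected VDIs can be compared before and after the DR run from the CLI. A minimal sketch, assuming dom0 access; the VDI UUID is a placeholder:

      # list every VDI that currently has CBT enabled, with its SR
      xe vdi-list cbt-enabled=true params=uuid,name-label,sr-name-label

      # check a single VDI of VM4 or VM6 right after the DR job finishes
      xe vdi-param-get uuid=<vdi-uuid> param-name=cbt-enabled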

      posted in Backup
    • RE: backup mail report says INTERRUPTED but it's not ?

      @Bastien-Nollet another day without a false INTERRUPTED
      (screenshot)

      Scrutinizing the backup email reports led me to find a new bug in backups (not in reports this time).
      I'll create a new topic about the dreaded BACKUP FELL BACK TO A FULL --> I can provoke it on purpose!

      posted in Backup
    • RE: Power Management (Power Efficiencies) Plugin Idea

      @olivierlambert on VMware, this was handled by the DRS functionality in the highest license tier.

      Easyvirt is a paid add-on to the XOA infrastructure.

      Would some equivalent be developed internally one day?

      posted in Xen Orchestra
    • RE: Homogeneous pools, how similar do the CPUs need to be?

      @arc1 I would suggest separate pools, using warm migration (in the Advanced tab) between pools,
      or shutting the VM down and doing a DR job or a VM copy.
      As simple as that.

      posted in Compute
    • RE: VDI not showing in XO 5 from Source.

      Tried to deploy a NEW VM for testing purposes on an impacted SR:
      (screenshot)

      And it appears! It's the only VDI visible; there are OTHER VMs on this same SR whose disks are invisible.

      Everything else behaves as if nothing were wrong:
      (screenshot)

      I guess we could migrate the impacted VMs off this SR and back, and it would correct the issue!

      does that help ?!
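
      For that migrate-and-come-back idea, something like this might do it per disk. A sketch only; the UUIDs are placeholders, and as far as I know xe vdi-pool-migrate only works on a VDI attached to a running (or suspended) VM:

      # live-migrate the VDI to another SR, then back, forcing a fresh copy
      xe vdi-pool-migrate uuid=<vdi-uuid> sr-uuid=<temporary-sr-uuid>
      xe vdi-pool-migrate uuid=<vdi-uuid> sr-uuid=<original-sr-uuid>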

      posted in Management
    • RE: VDI not showing in XO 5 from Source.

      😕

      I am also impacted on some SRs, latest XOA 6.0.3.

      In the VM's Disks tab, no disk:
      (screenshot)

      The VM is running OK; I can snapshot it and back it up.

      On the impacted SR (local RAID5 for this one), I noted that the green usage bar is gone, as if the SR were empty, yet it shows 47 VDIs and the occupied space is OK:
      (screenshot)

      The SR's Disks tab does show the VDIs of the running VMs:
      (screenshot)

      As @danp asked, here are the params of one impacted VDI & VBD:

      # xe vdi-param-list uuid=a81ecd87-3788-4528-819d-7d7c03aa6c61
      uuid ( RO)                    : a81ecd87-3788-4528-819d-7d7c03aa6c61
                    name-label ( RW): xxx-xxx-xxxxxx_sda
              name-description ( RW):
                 is-a-snapshot ( RO): false
                   snapshot-of ( RO): f9cbd30f-a261-4b95-97db-b6846147634a
                     snapshots ( RO): cb65b96a-bed9-4e9a-82d3-e73b5aed546d
                 snapshot-time ( RO): 20250912T17:38:57Z
            allowed-operations (SRO): snapshot; clone
            current-operations (SRO):
                       sr-uuid ( RO): b1b80611-7223-c829-8953-6aa2bf5865b3
                 sr-name-label ( RO): xxx-xx-xxxxxxx RAID5 Local
                     vbd-uuids (SRO): 51bb1797-c6c7-50f0-13a9-dfaad4c99d90
               crashdump-uuids (SRO):
                  virtual-size ( RO): 68719476736
          physical-utilisation ( RO): 30686765056
                      location ( RO): a81ecd87-3788-4528-819d-7d7c03aa6c61
                          type ( RO): User
                      sharable ( RO): false
                     read-only ( RO): false
                  storage-lock ( RO): false
                       managed ( RO): true
           parent ( RO) [DEPRECATED]: <not in database>
                       missing ( RO): false
                  is-tools-iso ( RO): false
                  other-config (MRW):
                 xenstore-data (MRO):
                     sm-config (MRO): vhd-parent: daeee201-3891-443e-8bdb-b00ed1051279; host_OpaqueRef:3e7283ba-5a42-1881-958a-9f96b71fb98f: RW; read-caching-enabled-on-f2868da5-4509-43d7-9ef9-2bb3857e1ba5: true
                       on-boot ( RW): persist
                 allow-caching ( RW): false
               metadata-latest ( RO): false
              metadata-of-pool ( RO): <not in database>
                          tags (SRW):
                   cbt-enabled ( RO): true
      
      
      # xe vbd-param-list uuid=51bb1797-c6c7-50f0-13a9-dfaad4c99d90
      uuid ( RO)                        : 51bb1797-c6c7-50f0-13a9-dfaad4c99d90
                           vm-uuid ( RO): 108ad69b-1fa5-d80b-fb16-a62509ad642a
                     vm-name-label ( RO): xxx-xxx-xxxxxx
                          vdi-uuid ( RO): a81ecd87-3788-4528-819d-7d7c03aa6c61
                    vdi-name-label ( RO): xxx-xxx-xxxxxx_sda
                allowed-operations (SRO): attach; unpause; pause
                current-operations (SRO):
                             empty ( RO): false
                            device ( RO): xvda
                        userdevice ( RW): 0
                          bootable ( RW): false
                              mode ( RW): RW
                              type ( RW): Disk
                       unpluggable ( RW): false
                currently-attached ( RO): true
                        attachable ( RO): true
                      storage-lock ( RO): false
                       status-code ( RO): 0
                     status-detail ( RO):
                qos_algorithm_type ( RW):
              qos_algorithm_params (MRW):
          qos_supported_algorithms (SRO):
                      other-config (MRW): owner:
                       io_read_kbs ( RO): 0.000
                      io_write_kbs ( RO): 93.752
      
      

      The VDI is seen as a snapshot...
      In XO6, the VDI appears OK.
      We have an internal webapp that accesses the API, and the disk appears OK there too, like in XO6.

      It seems to be rooted in the SR, not the VMs, as the entire SR is impacted...? Not all SRs in this XOA instance are impacted (we have other local RAID5 SRs and iSCSI SRs).
      All VMs hosted on this SR have invisible disks in XO5.

      We have an XO CE attached to the same servers, and it shows the same behavior: invisible disks.
      (screenshot)

      Edit:
      On the General tab of an impacted VM:
      (screenshot)
      we can see a 0-byte VDI, but there is activity:
      (screenshot)
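
      Since xapi clearly still knows about these disks, they can also be enumerated on the SR directly. A sketch, reusing the SR UUID from the vdi-param-list above:

      # rescan the SR, then list its managed, non-snapshot VDIs
      xe sr-scan uuid=b1b80611-7223-c829-8953-6aa2bf5865b3
      xe vdi-list sr-uuid=b1b80611-7223-c829-8953-6aa2bf5865b3 managed=true is-a-snapshot=false params=uuid,name-label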

      posted in Management
    • RE: backup mail report says INTERRUPTED but it's not ?

      Here is some first feedback on this annoying random INTERRUPTED issue (╯‵□′)╯︵┻━┻

      It started appearing on 12/10/2025 (mail inbox filtered on the word INTERRUPTED):
      (screenshot)

      All these backups are indeed SUCCESS in XOA, are present on the remotes, and are restorable.
      We have 14 backup reports each day. As you can see, some days have no INTERRUPTED, some days have several.
      Backups are spread from noon to late at night.

      Today (after applying your patch code) was either a good day, or the patch succeeded ¯\_(ツ)_/¯
      (screenshot)

      Will keep looking for rogue INTERRUPTIONs

      posted in Backup
    • RE: backup mail report says INTERRUPTED but it's not ?

      @Bastien-Nollet said in backup mail report says INTERRUPTED but it's not ?:

      file packages/xo-server-backup-reports/dist/index.js by a

      modification done, will give feedback

      posted in Backup
    • RE: VM Pool To Pool Migration over VPN

      @acebmxer I noticed the difference but can't quite explain it...
      We need some Vates storage guru to enlighten us 😅

      posted in Management
    • RE: VM Pool To Pool Migration over VPN

      @acebmxer could you try a VM copy with compression enabled, and benchmark it?

      posted in Management
    • RE: HUB is bugged ?

      @pierrebrunet thanks for the quick insight/fix. I went back to the stable channel to deploy templates, and it is working.

      posted in Management
    • RE: backup mail report says INTERRUPTED but it's not ?

      @florent yes they are, I posted screenshots earlier in the thread.

      posted in Backup
    • RE: backup mail report says INTERRUPTED but it's not ?

      Seeing more and more of this INTERRUPTED issue in mail reports... anyone else seeing this too?

      posted in Backup
    • RE: FILE RESTORE / overlapping loop device exists

      Still having the initial issue, even on the latest XOA or XO CE 😞

      posted in Backup
    • HUB is bugged ?

      Hi,

      Latest version of XOA, in the /v5 web UI.

      I can't install templates from the Hub anymore; I get this error message:
      (screenshot)

      Is this a new bug?

      posted in Management
    • RE: backup mail report says INTERRUPTED but it's not ?

      Some timeout or race condition between the end of the job and the mail generation?

      Perhaps adding a 10-second delay before sending the mail?

      posted in Backup
    • backup mail report says INTERRUPTED but it's not ?

      We have a strange behavior in the mail reports of XOA Backup.

      The backup is done, we see the delta point on the remote, and in XOA it's all green with no sign of INTERRUPTED, but the mail report says otherwise:
      (screenshot)

      The "interruption" seems to happen on the remote:
      (screenshot)

      The backup point on the remote:
      (screenshot)
      In the XOA logs:
      (screenshot)

      (screenshot)
      Other backups are okay, and this same one will be okay again tonight...

      What is happening? A false alarm? @florent @bastien-nollet

      The full job log:

      {
        "data": {
          "mode": "delta",
          "reportWhen": "always"
        },
        "id": "1766680469800",
        "jobId": "87966399-d428-431d-a067-bb99a8fdd67a",
        "jobName": "BCK_C_xxxx",
        "message": "backup",
        "proxyId": "5359db6e-841b-4a6d-b5e6-a5d19f43b6c0",
        "scheduleId": "56872f53-4c20-47fc-8542-2cd9aed2fdde",
        "start": 1766680469800,
        "status": "success",
        "infos": [
          {
            "data": {
              "vms": [
                "b1eef06b-52c1-e02a-4f59-1692194e2376"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "b1eef06b-52c1-e02a-4f59-1692194e2376",
              "name_label": "xxxx"
            },
            "id": "1766680472044",
            "message": "backup VM",
            "start": 1766680472044,
            "status": "success",
            "tasks": [
              {
                "id": "1766680472050",
                "message": "clean-vm",
                "start": 1766680472050,
                "status": "success",
                "end": 1766680473396,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1766680474042",
                "message": "snapshot",
                "start": 1766680474042,
                "status": "success",
                "end": 1766680504544,
                "result": "c4b42a79-532e-c376-833b-22707ddad571"
              },
              {
                "data": {
                  "id": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1766680504544:0",
                "message": "export",
                "start": 1766680504544,
                "status": "success",
                "tasks": [
                  {
                    "id": "1766680511990",
                    "message": "transfer",
                    "start": 1766680511990,
                    "status": "success",
                    "end": 1766680515706,
                    "result": {
                      "size": 423624704
                    }
                  },
                  {
                    "id": "1766680521053",
                    "message": "clean-vm",
                    "start": 1766680521053,
                    "status": "success",
                    "tasks": [
                      {
                        "id": "1766680521895",
                        "message": "merge",
                        "start": 1766680521895,
                        "status": "success",
                        "end": 1766680530887
                      }
                    ],
                    "end": 1766680531173,
                    "result": {
                      "merge": true
                    }
                  }
                ],
                "end": 1766680531192
              }
            ],
            "infos": [
              {
                "message": "Transfer data using NBD"
              },
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:d8aef4c9-5514-6623-1cda-f5e879c4990f"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1766680531211
          }
        ],
        "end": 1766680531267
      }
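
      For what it's worth, every task in that log reports "success". A quick way to double-check, assuming the JSON above is saved as backup-log.json and jq is available:

      # print the status of the job and of every nested task
      jq -r '.. | objects | select(has("status")) | "\(.message // "job"): \(.status)"' backup-log.json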
      
      posted in Backup
    • RE: FILE RESTORE / overlapping loop device exists

      No better with production upgraded to 6.0.1 (XOA and XO Proxies).
      We will open a support ticket.

      PS: if we log out of XOA and log back in as another user, we have a better chance of getting the file restore to work... not 100%, very unstable.
      Could the two be related?!

      posted in Backup
    • RE: FILE RESTORE / overlapping loop device exists

      @olivierlambert I had time to test connecting a remote from production on the latest XO CE in the replica datacenter,

      and file restore is working flawlessly... ultra fast, and working.

      Either we have a problem on production, or the latest update of XO6 corrected the bug?

      We are still on 5.113.2 in production.

      posted in Backup
    • RE: License no longer registered after upgrade

      @fluxtor is your XOA accessible over HTTP (port 80)?
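
      A quick way to check from any machine that can reach it (the address is a placeholder):

      # expect an HTTP response (often a redirect to HTTPS) rather than a timeout
      curl -I http://<xoa-ip-or-hostname>/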

      posted in Management