XCP-ng
    Pilow

    Posts

    • RE: backup mail report says INTERRUPTED but it's not?

      Some timeout or race condition between the end of the job and the mail generation?

      Perhaps add a 10-second delay before sending the mail?

      posted in Backup
      Pilow
    • backup mail report says INTERRUPTED but it's not?

      We have a strange behavior in the mail reports of XOA Backup.

      The backup is done, we see the delta point on the remote, and in XOA everything is green with no sign of INTERRUPTED, but the mail report tells otherwise:
      (screenshot)

      The "INTERRUPTION" seems to happen on the remote:
      (screenshot)

      The point on the remote:
      (screenshot)
      In XOA logs:
      (screenshot)

      (screenshot)
      Other backups are okay, and this same one will be okay again tonight...

      What is happening?
      False alarm? @florent @bastien-nollet

      {
        "data": {
          "mode": "delta",
          "reportWhen": "always"
        },
        "id": "1766680469800",
        "jobId": "87966399-d428-431d-a067-bb99a8fdd67a",
        "jobName": "BCK_C_xxxx",
        "message": "backup",
        "proxyId": "5359db6e-841b-4a6d-b5e6-a5d19f43b6c0",
        "scheduleId": "56872f53-4c20-47fc-8542-2cd9aed2fdde",
        "start": 1766680469800,
        "status": "success",
        "infos": [
          {
            "data": {
              "vms": [
                "b1eef06b-52c1-e02a-4f59-1692194e2376"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "b1eef06b-52c1-e02a-4f59-1692194e2376",
              "name_label": "xxxx"
            },
            "id": "1766680472044",
            "message": "backup VM",
            "start": 1766680472044,
            "status": "success",
            "tasks": [
              {
                "id": "1766680472050",
                "message": "clean-vm",
                "start": 1766680472050,
                "status": "success",
                "end": 1766680473396,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1766680474042",
                "message": "snapshot",
                "start": 1766680474042,
                "status": "success",
                "end": 1766680504544,
                "result": "c4b42a79-532e-c376-833b-22707ddad571"
              },
              {
                "data": {
                  "id": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1766680504544:0",
                "message": "export",
                "start": 1766680504544,
                "status": "success",
                "tasks": [
                  {
                    "id": "1766680511990",
                    "message": "transfer",
                    "start": 1766680511990,
                    "status": "success",
                    "end": 1766680515706,
                    "result": {
                      "size": 423624704
                    }
                  },
                  {
                    "id": "1766680521053",
                    "message": "clean-vm",
                    "start": 1766680521053,
                    "status": "success",
                    "tasks": [
                      {
                        "id": "1766680521895",
                        "message": "merge",
                        "start": 1766680521895,
                        "status": "success",
                        "end": 1766680530887
                      }
                    ],
                    "end": 1766680531173,
                    "result": {
                      "merge": true
                    }
                  }
                ],
                "end": 1766680531192
              }
            ],
            "infos": [
              {
                "message": "Transfer data using NBD"
              },
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:d8aef4c9-5514-6623-1cda-f5e879c4990f"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1766680531211
          }
        ],
        "end": 1766680531267
      }
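Every level of the report above says `"status": "success"`; a quick sketch to verify that mechanically by walking the nested `tasks` (the sample inlined below is an abbreviated copy of the log, not the full report):

```python
import json

# Walk the nested "tasks" of a backup report and collect every entry whose
# status is not "success" (same shape as the log above, abbreviated here).
def non_success(task, path=""):
    label = f"{path}/{task.get('message', '?')}"
    bad = []
    if task.get("status") != "success":
        bad.append((label, task.get("status")))
    for sub in task.get("tasks", []):
        bad.extend(non_success(sub, label))
    return bad

report = json.loads("""{
  "message": "backup", "status": "success",
  "tasks": [
    {"message": "backup VM", "status": "success",
     "tasks": [
       {"message": "snapshot", "status": "success"},
       {"message": "export", "status": "success",
        "tasks": [{"message": "transfer", "status": "success"}]}
     ]}
  ]
}""")

print(non_success(report))  # [] -> nothing in the report was interrupted
```

Run against the full log, this returns an empty list, which supports the "false alarm" reading: the INTERRUPTED flag in the mail does not come from any task status in the report itself.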
      
      posted in Backup
      Pilow
    • RE: FILE RESTORE / overlapping loop device exists

      It is no better with production upgraded to 6.0.1 (XOA and XO Proxies).
      We will open a support ticket.

      PS: if we log out of XOA and back in with another user, we have a better chance of getting the file restore to work... not 100%, very unstable.
      Is there a link?!

      posted in Backup
      Pilow
    • RE: FILE RESTORE / overlapping loop device exists

      @olivierlambert I had time to test connecting a REMOTE from production on the latest XO CE in the replica datacenter,

      and file restore is working flawlessly... ultra fast, and working.

      Either we have a problem in production, or the last update of XO6 fixed the bug?

      We are still on 5.113.2 in production.

      posted in Backup
      Pilow
    • RE: License no longer registered after upgrade

      @fluxtor is your XOA accessible over HTTP (port 80)?

      posted in Management
      Pilow
    • RE: License no longer registered after upgrade

      @fluxtor said in License no longer registered after upgrade:

      Looks to me like the XOA5 webUI is out of sync with the underlying updater status as everything still seems to work.

      CTRL+F5?
      Or private browsing, to see if it's not a cache issue?

      posted in Management
      Pilow
    • RE: DR error : (intermediate value) is not iterable

      On the source VM:

      • I tried switching to another network and then putting the correct network back, to no avail
      • I tried setting the network mentioned in the error (the UUID that is not the correct one), to no avail
      • I deleted the VIF and recreated it with the correct network for this VM

      And boom, DR is okay! So it was not a XAPI problem.
      (screenshot)

      So there was a problem with this VM's VIF... and the error message in XO6 made it possible to pinpoint it.

      Thanks for the advice @florent!

      posted in Backup
      Pilow
    • RE: DR error : (intermediate value) is not iterable

      @florent (screenshot)

      Here is the error in XO6.
      This network UUID is NOT the one on the source VM...

      How is that possible?

      posted in Backup
      Pilow
    • RE: DR error : (intermediate value) is not iterable

      @florent yes, I can try with XO from the sources on latest.
      Let me restore the VM and launch a DR with XO from the sources, and I'll report back.

      posted in Backup
      Pilow
    • RE: DR error : (intermediate value) is not iterable

      Source host:

      xe host-param-list uuid=161be695-e1f9-4271-b581-27b716fde9a5 |grep xapi
      

      software-version (MRO): product_version: 8.3.0; product_version_text: 8.3; product_version_text_short: 8.3; platform_name: XCP; platform_version: 3.4.0; product_brand: XCP-ng; build_number: cloud; git_id: 0; hostname: localhost; date: 20250909T12:59:54Z; dbv: 0.0.1; xapi: 25.6; xapi_build: 25.6.0; xen: 4.17.5-15; linux: 4.19.0+1; xencenter_min: 2.21; xencenter_max: 2.21; network_backend: openvswitch; db_schema: 5.786

      DR target host:

      xe host-param-list uuid=e604c3bf-373c-489b-b191-edecbabec43f |grep xapi
      
      

      software-version (MRO): product_version: 8.3.0; product_version_text: 8.3; product_version_text_short: 8.3; platform_name: XCP; platform_version: 3.4.0; product_brand: XCP-ng; build_number: cloud; git_id: 0; hostname: localhost; date: 20250909T12:59:54Z; dbv: 0.0.1; xapi: 25.6; xapi_build: 25.6.0; xen: 4.17.5-15; linux: 4.19.0+1; xencenter_min: 2.21; xencenter_max: 2.21; network_backend: openvswitch; db_schema: 5.786
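The two `software-version` strings above look identical; to compare them field by field rather than by eye, a small sketch (the strings below are abbreviated copies for illustration, not the full `xe` output):

```python
# Parse an xe "software-version" string ("key: value; key: value; ...")
# into a dict, then diff two hosts' versions field by field.
def parse_sv(s: str) -> dict:
    return dict(part.split(": ", 1) for part in s.split("; "))

# Abbreviated copies of the two outputs above (illustrative)
src = "product_version: 8.3.0; xapi: 25.6; xapi_build: 25.6.0; xen: 4.17.5-15"
dst = "product_version: 8.3.0; xapi: 25.6; xapi_build: 25.6.0; xen: 4.17.5-15"

a, b = parse_sv(src), parse_sv(dst)
diff = {k: (a.get(k), b.get(k)) for k in a.keys() | b.keys() if a.get(k) != b.get(k)}
print(diff)  # {} -> source and DR target run identical stacks
```

An empty diff confirms the two hosts are on the same XAPI/Xen builds, ruling out a version mismatch as the cause.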

      posted in Backup
      Pilow
    • RE: DR error : (intermediate value) is not iterable

      I tried deleting the VM and restoring it: same result.

      I tried having the job run by an XO Proxy instead of XOA: same result.

      The 3 VMs were deployed from the same Hub Template (Ubuntu 24.04) at the same time.

      Weird.

      posted in Backup
      Pilow
    • RE: DR error : (intermediate value) is not iterable

      @florent this is where it's strange: all 7 of my hosts were installed the same day, with the same patches...
      All 3 VMs were deployed the same day, the same way, too.

      Why are 2 OK and not the third?

      Indeed CR is working, but DR is hiding something from me.

      posted in Backup
      Pilow
    • DR error : (intermediate value) is not iterable

      Hi,

      XCP-ng 8.3, XOA 5.113.2 here.

      A DR job with 3 VMs: 2 are OK, one will not pass... and I don't understand why; it's the first time I've seen this error.

         "message": "(intermediate value) is not iterable",
              "name": "TypeError",
              "stack": "TypeError: (intermediate value) is not iterable\n    at Xapi.import (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/vm.mjs:610:21)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_writers/FullXapiWriter.mjs:56:21"
      
      

      (screenshot)

      Any ideas, anyone?
      Or @bastien-nollet @florent

      There is free space on the destination SR (iSCSI SR).

      posted in Backup
      Pilow
    • RE: 🛰️ XO 6: dedicated thread for all your feedback!

      @olivierlambert nice, better than megathreads! 😃 I added my first TAG request on it.

      posted in Xen Orchestra
      Pilow
    • RE: CBT disabling itself / bug ?

      About these KEY backups, I think perhaps LTR got in the way @florent @bastien-nollet

      Is there still no way of knowing WHEN a weekly/monthly backup is happening?

      posted in Backup
      Pilow
    • RE: CBT disabling itself / bug ?

      @flakpyro indeed, it seems related.

      I also have this bug:
      (screenshot)

      On some VMs all jobs show KEY points, but in the backup logs they are indeed DELTA.

      (screenshot)

      You can see from the mere megabytes transferred that it's a delta backup... but the point is presented as KEY.

      Here is the log:
      (screenshot)

      posted in Backup
      Pilow
    • CBT disabling itself / bug ?

      Hi,

      Latest XOA, with fully patched XCP-ng 8.3 here.

      I'm fiddling around again with NBD+CBT in backup jobs (I was avoiding CBT for a while, to reliably control my backups and avoid unnecessary KEY points) in the context of THICK SRs, to spare some space.

      I know that CBT is reset when migrating from one SR to another.

      But here is what I encounter:

      • The VM has no CBT enabled on its VDIs; it is on a SHARED SR in a pool of 3 hosts
      • The backup option was changed to NBD+CBT (it was NBD only before)
      • CBT is enabled on the next run by the backup job, and I get a delta (I was expecting a FULL?)
      • Next run: a delta, as expected
      • I migrate this VM to another HOST, without changing its SR
      • CBT is immediately disabled? Why??
      • On the next run the backup tries a delta but "falls back to a full" (normal, as CBT has been disabled...) and does a KEY point on the remote
      • The next run is a delta, as expected

      Does this mean that if I do a rolling pool update or host maintenance that moves all the VMs around, all CBT will be disabled and I should expect a FALL BACK TO FULL on all my NBD+CBT-enabled backup jobs??!

      Why disable CBT on a change of HOST with no move of SR?
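The fallback in the steps above boils down to a simple rule: once CBT is reset (whatever reset it), the job has no changed-block reference and must write a full. A toy sketch of that decision (names are illustrative, not XO internals):

```python
# Toy model of the observed behavior: a delta needs both CBT still enabled
# and a previous reference point; otherwise the job "falls back to a full"
# and writes a KEY point on the remote. Illustrative only, not XO code.
def next_backup_kind(cbt_enabled: bool, has_reference: bool) -> str:
    if cbt_enabled and has_reference:
        return "delta"
    return "full (KEY)"

# Steady state: CBT intact, reference exists
print(next_backup_kind(cbt_enabled=True, has_reference=True))   # delta
# Right after a host migration reset CBT:
print(next_backup_kind(cbt_enabled=False, has_reference=True))  # full (KEY)
```

Under this model, a rolling pool update that resets CBT on every moved VM would indeed force a KEY point on each affected job's next run, which is exactly the concern raised above.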

      posted in Backup
      Pilow
    • RE: SR.Scan performance within XOSTOR

      @denis.grilli really big news, I need to have XOSTOR working 😃
      Thanks for tackling these problems and for the support in correcting them 😄

      posted in XOSTOR
      Pilow
    • RE: Plugins in XO6?

      @olivierlambert so XO5 will have quite a long lifespan, as everything must first be included in XO6?

      posted in Xen Orchestra
      Pilow
    • RE: FILE RESTORE / overlapping loop device exists

      @ph7 thank you for your tests.

      Some Vates devs are lurking in these forums; they will probably stumble upon this post any time soon 😛

      posted in Backup
      Pilow