XCP-ng

    Posts

    • RE: Continuous replication problem with multiple SR

      Hmm,
      I found the solution after reading here on the forum.
      I enabled these options, saved, then disabled them and saved again, and now it is OK. The task is running again, with the latest commit too.
      So the problem is with these settings, which may not exist in tasks created before the update.
      05b77f69-6e87-49d2-9c62-d795a13d161a-image.png

      posted in Backup
      Gheppy
    • RE: Continuous replication problem with multiple SR

      @florent
      The same result.
      Logs below, but the task is blocked at the importing step.

      {
        "data": {
          "mode": "delta",
          "reportWhen": "always"
        },
        "id": "1751375761951",
        "jobId": "109e74e9-b59f-483b-860f-8f36f5223789",
        "jobName": "tb-xxxx-xxxx-vrs7",
        "message": "backup",
        "scheduleId": "40f57bd8-2557-4cf5-8322-705ec1d811d2",
        "start": 1751375761951,
        "status": "pending",
        "infos": [
          {
            "data": {
              "vms": [
                "629bdfeb-7700-561c-74ac-e151068721c2"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "629bdfeb-7700-561c-74ac-e151068721c2",
              "name_label": "tb-xxxx-xxxx-vrs7"
            },
            "id": "1751375768046",
            "message": "backup VM",
            "start": 1751375768046,
            "status": "pending",
            "tasks": [
              {
                "id": "1751375768586",
                "message": "snapshot",
                "start": 1751375768586,
                "status": "success",
                "end": 1751375772595,
                "result": "904c2b00-087f-45ae-9799-b6dad1680aff"
              },
              {
                "data": {
                  "id": "1afcdfda-6ede-3cb5-ecbf-29dc09ea605c",
                  "isFull": true,
                  "name_label": "tb-vrs1-RAID",
                  "type": "SR"
                },
                "id": "1751375772596",
                "message": "export",
                "start": 1751375772596,
                "status": "pending",
                "tasks": [
                  {
                    "id": "1751375773638",
                    "message": "transfer",
                    "start": 1751375773638,
                    "status": "pending"
                  }
                ]
              },
              {
                "data": {
                  "id": "a5d2b22e-e4be-c384-9187-879aa41dd70f",
                  "isFull": true,
                  "name_label": "tb-vrs6-RAID",
                  "type": "SR"
                },
                "id": "1751375772611",
                "message": "export",
                "start": 1751375772611,
                "status": "pending",
                "tasks": [
                  {
                    "id": "1751375773654",
                    "message": "transfer",
                    "start": 1751375773654,
                    "status": "pending"
                  }
                ]
              }
            ]
          }
        ]
      }
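
      The numeric `id` and `start` fields in these logs appear to be Unix timestamps in milliseconds, which makes it easy to see when a task started and how long it has been stuck in `pending`. A minimal standalone sketch (my own helper, not part of XO):

      ```python
      from datetime import datetime, timezone

      def fmt_ms(ms):
          """Convert an epoch-milliseconds value (as used in XO backup logs) to UTC ISO time."""
          return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).isoformat()

      # "start" values taken from the log above
      print(fmt_ms(1751375761951))  # job start
      print(fmt_ms(1751375773638))  # first "transfer" sub-task start
      ```
      
      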
      
      posted in Backup
      Gheppy
    • RE: Continuous replication problem with multiple SR

      @florent
      I will update to the latest version and look at the logs.
      Now I have version 7994fc52c31821c2ad482471319551ee00dc1472

      posted in Backup
      Gheppy
    • RE: Continuous replication problem with multiple SR

      @olivierlambert
      I updated to the latest commit; two more changes are on git now.

      d1aff2fd-b61f-45fa-a375-6155ee9bfc60-image.png

      The problem still exists. From what I've noticed, export tasks start being created and then disappear shortly after. And then a few moments later the import tasks appear and it remains blocked like this.

      42647e87-a3ae-451c-9001-555b3ad26bf8-image.png

      Everything is OK only up to commit 7994fc52c31821c2ad482471319551ee00dc1472 (inclusive).

      EDIT: I read on the forum about similar problems. I don't have NBD enabled.

      posted in Backup
      Gheppy
    • RE: Continuous replication problem with multiple SR

      @olivierlambert
      The problem starts with commit cbc07b319fea940cab77bc163dfd4c4d7f886776.
      Commit 7994fc52c31821c2ad482471319551ee00dc1472 is OK.

      posted in Backup
      Gheppy
    • RE: Continuous replication problem with multiple SR

      Hello, the problem has reappeared.
      This time with a different behavior.
      It hangs at this stage, and the export tasks do not even start/show.

      Info:

      XCP-ng 8.3 up to date,
      XOCE commit fcefaaea85651c3c0bb40bfba8199dd4e963211c
      

      93ca9782-1073-42d1-b0ac-d79a387d4bd9-image.png

      2413a478-b12c-4334-939f-088c813abb7d-image.png

      posted in Backup
      Gheppy
    • RE: Continuous replication problem with multiple SR

      I tested with commit e64c434 and it is ok.
      Thank you

      posted in Backup
      Gheppy
    • Continuous replication problem with multiple SR

      Hello,
      I have a problem with continuous replication: if I replicate to a single SR everything is OK, but if I add two or more SRs it fails with the error message below.
      It then does nothing and stays stuck, and it is always the first SR in the list that fails.
      Info:

      • XCP-ng 8.3 up to date,
      • XOCE commit 6ecab

      Here it is with only one SR and everything is ok:

      • configuration:
        ac70e819-d049-4012-ae36-1e11cbc74019-image.png

      • transfer:
        1a607aab-db25-4ab1-b970-9233d196c1d7-image.png

      • message

      • f225a84e-f2e6-443a-9a5a-5213b211caff-image.png

      • bc228099-8f29-434f-b0ca-bec7ebf69b47-image.png

      Here it is with two SRs

      • configuration:
        2d985a5f-7141-4b4d-9640-33671b458f3f-image.png

      • transfer:
        4158c90f-739b-4fce-8309-91865d6ba93b-image.png

      • message:
        c766f698-fcec-41d6-bd46-43323533b499-image.png

      • log

      {
        "data": {
          "mode": "delta",
          "reportWhen": "always"
        },
        "id": "1748599936065",
        "jobId": "109e74e9-b59f-483b-860f-8f36f5223789",
        "jobName": "********-vrs7",
        "message": "backup",
        "scheduleId": "40f57bd8-2557-4cf5-8322-705ec1d811d2",
        "start": 1748599936065,
        "status": "pending",
        "infos": [
          {
            "data": {
              "vms": [
                "629bdfeb-7700-561c-74ac-e151068721c2"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "629bdfeb-7700-561c-74ac-e151068721c2",
              "name_label": "********-vrs7"
            },
            "id": "1748599941103",
            "message": "backup VM",
            "start": 1748599941103,
            "status": "pending",
            "tasks": [
              {
                "id": "1748599941671",
                "message": "snapshot",
                "start": 1748599941671,
                "status": "success",
                "end": 1748599945608,
                "result": "5c030f40-0b34-d1b4-10aa-f849548aa0b7"
              },
              {
                "data": {
                  "id": "1afcdfda-6ede-3cb5-ecbf-29dc09ea605c",
                  "isFull": true,
                  "name_label": "********-RAID",
                  "type": "SR"
                },
                "id": "1748599945609",
                "message": "export",
                "start": 1748599945609,
                "status": "pending",
                "tasks": [
                  {
                    "id": "1748599948875",
                    "message": "transfer",
                    "start": 1748599948875,
                    "status": "failure",
                    "end": 1748599949159,
                    "result": {
                      "code": "HANDLE_INVALID",
                      "params": [
                        "SR",
                        "OpaqueRef:e6bbcba7-6a86-4d00-8391-3c1722b3552f"
                      ],
                      "call": {
                        "duration": 6,
                        "method": "VDI.create",
                        "params": [
                          "* session id *",
                          {
                            "name_description": "********-sdb-256gb",
                            "name_label": "********-sdb-256gb",
                            "other_config": {
                              "xo:backup:vm": "629bdfeb-7700-561c-74ac-e151068721c2",
                              "xo:copy_of": "59ee458d-99c8-4a45-9c91-263c9729208b"
                            },
                            "read_only": false,
                            "sharable": false,
                            "SR": "OpaqueRef:e6bbcba7-6a86-4d00-8391-3c1722b3552f",
                            "tags": [],
                            "type": "user",
                            "virtual_size": 274877906944,
                            "xenstore_data": {}
                          }
                        ]
                      },
                      "message": "HANDLE_INVALID(SR, OpaqueRef:e6bbcba7-6a86-4d00-8391-3c1722b3552f)",
                      "name": "XapiError",
                      "stack": "XapiError: HANDLE_INVALID(SR, OpaqueRef:e6bbcba7-6a86-4d00-8391-3c1722b3552f)\n    at XapiError.wrap (file:///opt/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)\n    at file:///opt/xen-orchestra/packages/xen-api/transports/json-rpc.mjs:38:21\n    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)"
                    }
                  }
                ]
              },
              {
                "data": {
                  "id": "a5d2b22e-e4be-c384-9187-879aa41dd70f",
                  "isFull": true,
                  "name_label": "********-vrs6-RAID",
                  "type": "SR"
                },
                "id": "1748599945617",
                "message": "export",
                "start": 1748599945617,
                "status": "pending",
                "tasks": [
                  {
                    "id": "1748599948887",
                    "message": "transfer",
                    "start": 1748599948887,
                    "status": "pending"
                  }
                ]
              }
            ]
          }
        ]
      }
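
      For logs like the one above, the nested `tasks` arrays can be walked recursively to pull out just the failures instead of scanning by eye. A small sketch of such a helper (my own illustration, not an XO API), run against a trimmed copy of the log:

      ```python
      def find_failures(entry, path=()):
          """Recursively collect (path, result) pairs for every sub-task with status 'failure'."""
          failures = []
          label = entry.get("message", "?")
          if entry.get("status") == "failure":
              failures.append((path + (label,), entry.get("result")))
          for sub in entry.get("tasks", []):
              failures.extend(find_failures(sub, path + (label,)))
          return failures

      # Trimmed version of the log above
      log = {
          "message": "backup",
          "status": "pending",
          "tasks": [{
              "message": "backup VM",
              "status": "pending",
              "tasks": [
                  {"message": "snapshot", "status": "success"},
                  {"message": "export", "status": "pending", "tasks": [
                      {"message": "transfer", "status": "failure",
                       "result": {"code": "HANDLE_INVALID",
                                  "message": "HANDLE_INVALID(SR, OpaqueRef:...)"}},
                  ]},
              ],
          }],
      }

      for path, result in find_failures(log):
          print(" > ".join(path), "->", result["code"])
      # backup > backup VM > export > transfer -> HANDLE_INVALID
      ```
      
      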
      
      posted in Backup
      Gheppy
    • RE: No more options for export

      It works for me again, thanks for your support

      posted in Backup
      Gheppy
    • No more options for export

      I have a problem after commit 6e508b0:
      I can no longer select the export method.
      With 6e508b0 itself, everything is OK.

      Commit a386680
      c7c57eaf-77f8-48ed-9d80-8cc4580df6c5-image.png

      b797f9de-105e-41a4-8990-93d07b7351f2-image.png

      Commit 6e508b0
      bbb271c5-daa6-45b7-97e9-4545b0b449c9-image.png

      b1614525-7ffb-4135-9475-dabd90818747-image.png

      posted in Backup
      Gheppy
    • RE: VUSBs options for backup/snapshot

      At a certain point (I don't know exactly when, since we are talking about a home lab here),
      I could attach the USB HDD the same as a virtual HDD, and I could put exclusions on it.

      posted in Backup
      Gheppy
    • RE: VUSBs options for backup/snapshot

      More detailed:
      At the moment I have this:
      A VM with TrueNAS that has:

      • sda, 100 GB, the operating system
      • sdb, 2 TB, the first HDD used in TrueNAS
      • sdc, 2 TB USB, a clone of sdb made in TrueNAS

      I want to back up the TrueNAS system, which means only sda.
      I excluded sdb via [NOBAK] [NOSNAP],
      but I can no longer exclude sdc via [NOBAK] [NOSNAP].
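
      For context, the [NOBAK]/[NOSNAP] convention works by putting the marker in the disk's name label, and the backup job skips any disk whose label carries it. A minimal sketch of such a filter (my own illustration of the convention, not XO's actual code) applied to the three disks above:

      ```python
      def is_excluded(name_label, marker):
          """True if a disk's name_label carries the given exclusion marker, e.g. '[NOBAK]'."""
          return marker in name_label

      disks = [
          ("sda", "TrueNAS system disk"),
          ("[NOBAK] [NOSNAP] sdb", "first data disk"),
          ("[NOBAK] [NOSNAP] sdc", "USB clone of sdb"),  # now a VUSB, so XO ignores the marker
      ]

      to_backup = [label for label, _ in disks if not is_excluded(label, "[NOBAK]")]
      print(to_backup)  # only sda remains
      ```
      
      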

      posted in Backup
      Gheppy
    • VUSBs options for backup/snapshot

      Is there a possibility to implement the same options as for disks for VUSBs too?
      I mean the option to use [NOBAK] [NOSNAP].
      Before the update, I saw the USB HDD as a normal disk that I could attach to a VM and use the [NOBAK] option to exclude it from the backup.
      XCP-ng 8.3 is installed on the server.

      posted in Backup
      Gheppy
    • RE: XO cant Snapshot itself ?

      This is what I was thinking

      • [NOSNAP] to be like SDA and disk 3
      • [NOBAK] to be with SDA and disk 2
      • In both cases disk 4 is excluded

      66a42055-8a10-4b46-a203-0dd3da2e9b54-image.png

      posted in Xen Orchestra
      Gheppy
    • RE: XO cant Snapshot itself ?

      From my point of view, it should work as an "OR": if a condition is fulfilled, act on it.
      E.g.:

      • if I have "[NOSNAP] 'VM name'", do not execute snapshots
      • if I have "[NOBAK] 'VM name'", do not execute backups
      • if I have "[NOSNAP] [NOBAK] 'VM name'", do not execute snapshots or backups
        But if you do make a backup, exclude only what is marked with [NOBAK]
      posted in Xen Orchestra
      Gheppy
    • RE: XOA: backup Active Directory vm

      It's not really necessary.
      As I said, I have 3 DCs; I restored them in the test lab and they were OK.
      All three were backed up at the same time with a single normal backup task.
      Below is the task I was talking about.
      e03d9889-dede-4e7c-ba5a-4d10a5b05937-image.png

      posted in Backup
      Gheppy
    • RE: XOA: backup Active Directory vm

      @fatek
      AD has a maximum allowed time difference between Domain Controllers, and as far as I know it is 24 h.
      If you don't do this, the oldest one will be out of sync and useless.

      posted in Backup
      Gheppy
    • RE: XOA: backup Active Directory vm

      I have something like this and I have no problems so far.
      The only thing is to do them all on the same day.
      I have a task that backs up all three VMs at once.
      And the restoration is done the same way, all from the same day.

      posted in Backup
      Gheppy
    • RE: VM migration within single server

      @AtaxyaNetwork
      thank you

      posted in Management
      Gheppy
    • RE: VM migration within single server

      @olivierlambert
      ok, thank you

      posted in Management
      Gheppy