
    Timestamp lost in Continuous Replication

• Pilow @florent

@florent Florent, I can see the benefit of a unified VM name, but could you at least push the timestamp into a note on the VM?
It is important to know which timestamp a replica VM corresponds to, in order to choose a failover option wisely.

• florent Vates 🪐 XO Team @Pilow

@Pilow the timestamp is on the snapshot, but you're right, we can add a note on the VM with the last replication's information.

Note that the older replicated VMs will be purged once we are sure they don't hold any useful data, so you will end up with only one replicated VM, with multiple snapshots.
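
Since the timestamp lives on the snapshot, one way to pick the right restore point for failover is to parse it back out of the snapshot name. A minimal Python sketch, assuming the name format shown further down in this thread ("privat - replicate_to_srv002 - (20260318T083542Z)"); this is not XO's actual code:

import re
from datetime import datetime, timezone

# matches the trailing "(20260318T083542Z)" part of a CR snapshot name
SNAPSHOT_TS = re.compile(r"\((\d{8}T\d{6}Z)\)\s*$")

def replication_timestamp(snapshot_name):
    """Return the UTC timestamp encoded in a replica snapshot name, or None."""
    m = SNAPSHOT_TS.search(snapshot_name)
    if m is None:
        return None
    return datetime.strptime(m.group(1), "%Y%m%dT%H%M%SZ").replace(tzinfo=timezone.utc)

print(replication_timestamp("privat - replicate_to_srv002 - (20260318T083542Z)"))
# -> 2026-03-18 08:35:42+00:00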

• Pilow @florent

@florent oh nice
We had as many "VMs with a timestamp in the name" as the number of replicas, plus multiple snapshots on the source VM.
Now we have "one replica VM with multiple snapshots"? Veeam-replica-style...
Do the multiple snapshots persist on the source VM too?

If so, that's a nice concept.

But when your replica sits on lvmoiscsi 😕 not so nice (LVM-based SRs are thick-provisioned, so snapshot chains are costly in space there).

PS: I haven't upgraded to the latest XOA/XCP-ng patches yet.

• florent Vates 🪐 XO Team @Pilow

            @Pilow

We had as many "VMs with a timestamp in the name" as the number of replicas, plus multiple snapshots on the source VM.
Now we have "one replica VM with multiple snapshots"? Veeam-replica-style...

We didn't look at Veeam, but it's reassuring to see that we converge toward the solutions used elsewhere.

It shouldn't change anything on the source.
I am currently doing more tests to see if we missed something.

edit: as an additional benefit, it should use less space on the target if you have a retention > 1, since we will only have one active disk.

• kratos @florent

              Hello everyone,

              I’m not sure if my information is useful, but I’m experiencing the same problem. I am using continuous replication between two servers with a retention of 4.
              Previously, four VM replicas were created, each with a timestamp in the name. With the current version, four VMs are created with identical names.

              My environment:
              XO: from source commit 598ab
              xcp-ng: 8.3.0 with the latest patches

              My backup job:
              Name of backup job: replicate_to_srv002
              Source server: srv003
              Target server: srv002
              VM name: privat
              Retention: 4

              Result on the target server:
              4 VMs with the name "[XO Backup replicate_to_srv002] privat - replicate_to_srv002", where only the newest one contains a snapshot named: "privat - replicate_to_srv002 - (20260318T083542Z)"

              Additionally, full backups are created on every run of the backup job, and no deltas are being used.

              If I can help with any additional information, I’d be happy to do so.

              Best regards,
              Simon

• Pilow @kratos

@kratos @florent shouldn't he have one VM with 4 snapshots, and delta replicas between each snap?

• kratos @Pilow

                  @Pilow
                  actually it looks like this:

[screenshot]

                  edit:

                  Log of last run:

                  {
                    "data": {
                      "mode": "delta",
                      "reportWhen": "failure"
                    },
                    "id": "1773822934377",
                    "jobId": "a95ac100-0e20-49c5-9270-c0306ee2852f",
                    "jobName": "replicate_to_srv002",
                    "message": "backup",
                    "scheduleId": "1014584a-228c-4049-8912-51ab1b24925a",
                    "start": 1773822934377,
                    "status": "success",
                    "infos": [
                      {
                        "data": {
                          "vms": [
                            "224a73db-9bc6-13d6-cc8e-0bf22dbede73"
                          ]
                        },
                        "message": "vms"
                      }
                    ],
                    "tasks": [
                      {
                        "data": {
                          "type": "VM",
                          "id": "224a73db-9bc6-13d6-cc8e-0bf22dbede73",
                          "name_label": "privat"
                        },
                        "id": "1773822936247",
                        "message": "backup VM",
                        "start": 1773822936247,
                        "status": "success",
                        "tasks": [
                          {
                            "id": "1773822937378",
                            "message": "snapshot",
                            "start": 1773822937378,
                            "status": "success",
                            "end": 1773822940361,
                            "result": "d0ba1483-f5ae-ce72-fb4f-dbd9eafbf272"
                          },
                          {
                            "data": {
                              "id": "8205e6c4-4d8f-69d9-6315-9ee89af8e307",
                              "isFull": true,
                              "name_label": "Local storage",
                              "type": "SR"
                            },
                            "id": "1773822940361:0",
                            "message": "export",
                            "start": 1773822940361,
                            "status": "success",
                            "tasks": [
                              {
                                "id": "1773822942354",
                                "message": "transfer",
                                "start": 1773822942354,
                                "status": "success",
                                "tasks": [
                                  {
                                    "id": "1773823497635",
                                    "message": "target snapshot",
                                    "start": 1773823497635,
                                    "status": "success",
                                    "end": 1773823500290,
                                    "result": "OpaqueRef:53bceb07-a69c-504d-e824-28f5384cb763"
                                  }
                                ],
                                "end": 1773823500290,
                                "result": {
                                  "size": 61941481472
                                }
                              },
                              {
                                "id": "1773823501512",
                                "message": "health check",
                                "start": 1773823501512,
                                "status": "success",
                                "tasks": [
                                  {
                                    "id": "1773823501515",
                                    "message": "cloning-vm",
                                    "start": 1773823501515,
                                    "status": "success",
                                    "end": 1773823504720,
                                    "result": "OpaqueRef:43e9644f-fc99-963a-4a54-da3b845e823b"
                                  },
                                  {
                                    "id": "1773823504722",
                                    "message": "vmstart",
                                    "start": 1773823504722,
                                    "status": "success",
                                    "end": 1773823545662
                                  }
                                ],
                                "end": 1773823549312
                              }
                            ],
                            "end": 1773823549312
                          }
                        ],
                        "end": 1773823549319
                      }
                    ],
                    "end": 1773823549319
                  }
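
One way to spot the "full instead of delta" symptom in such a log is to walk the task tree and flag exports whose isFull is set. A minimal sketch, assuming the log shape shown above (the file name is hypothetical):

import json

def full_exports(task, out=None):
    """Collect SR names of export tasks that ran as full instead of delta."""
    if out is None:
        out = []
    data = task.get("data", {})
    if task.get("message") == "export" and data.get("isFull"):
        out.append(data.get("name_label", "unknown SR"))
    for sub in task.get("tasks", []):
        full_exports(sub, out)
    return out

with open("backup-log.json") as f:  # hypothetical dump of the log above
    print(full_exports(json.load(f)))
# -> ['Local storage'] for the run shown above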
                  
• florent Vates 🪐 XO Team

@kratos @Pilow is it possible that your replications are in the same pool?

• kratos @florent

                      @florent
                      Yes, that is absolutely correct. I have a pool with two members without shared storage. Some VMs run on the master, and some on the second pool member. I replicate between the pool members so that, if necessary, I can start the VMs on the other member. This may not be best practice.

• florent Vates 🪐 XO Team @kratos

@kratos you probably heard the sound of my head hitting my desk when I found the cause.
The fix is in review; you will be able to use it in a few hours.

• kratos @florent

                          @florent
                          I’m a developer myself, so I can totally relate—just when you think everything is working perfectly, someone like me comes along 🙂
                          I’m really glad I could help contribute to finding a solution, and I’ll report back once I’ve tested the new commit. Thanks a lot for your work.

                          However, this does raise the question for me: is my use case for continuous replication really that unusual?

• florent Vates 🪐 XO Team @kratos

@kratos no, it's not that rare. I even saw replication onto the same storage in the wild (wouldn't recommend it, though).

Cross-pool replication is a little harder since the objects are split across their own XAPI instances, so the calls must be routed to the right one.
We tested the harder part, not the mono-XAPI case.
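
To illustrate the routing issue: with cross-pool CR there is one XAPI session per pool, and each call must go through the session of the pool that owns the object. A rough, hypothetical sketch using the official XenAPI Python bindings (the sessions mapping and pool labels are illustrative, not XO's internals):

import XenAPI

def connect(url, user, password):
    session = XenAPI.Session(url)
    session.xenapi.login_with_password(user, password)
    return session

# one session per pool; with mono-pool CR both entries point at the same
# XAPI, which is the corner case the fix above addresses
sessions = {
    "source": connect("https://srv003", "root", "secret"),
    "target": connect("https://srv002", "root", "secret"),
}

def snapshot_on(pool, vm_ref, new_name):
    # route the call to the XAPI of the pool that owns the VM
    return sessions[pool].xenapi.VM.snapshot(vm_ref, new_name)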

• Pilow @kratos

                              @kratos said:

                              This may not be best practice.

In a two-host pool, if your replicated VMs live on the master and it's gone, you won't be able to start the replicated VMs.

You will first need to transition the slave to master.

Indeed, CR to another pool is better 😃
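
The promotion step can be scripted; a hypothetical sketch wrapping the real xe commands (run it on the surviving member, and only when the old master is really gone):

import subprocess

def promote_surviving_host():
    # make this slave the new pool master
    subprocess.run(["xe", "pool-emergency-transition-to-master"], check=True)
    # repoint any remaining members at the new master
    subprocess.run(["xe", "pool-recover-slaves"], check=True)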
