XCP-ng

    mirror backup to S3

    • acebmxer @robyt

      @robyt

      Create a backup sequence.

      Screenshot 2025-06-23 112013.png

      It will ignore the times set in the individual schedules: if the previous job is still running when the next scheduled job is due to start, the sequence holds the next job until the first one completes.
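
      In other words, the sequence simply runs its jobs back to back. A minimal sketch of that behaviour (illustrative TypeScript, not XO's actual code):

      // Illustrative sketch: a sequence runs its jobs one after the other,
      // so a long-running job delays the next one instead of overlapping it.
      async function runSequence(jobs: Array<() => Promise<void>>): Promise<void> {
        for (const job of jobs) {
          await job(); // the next job only starts once this one has finished
        }
      }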

      • robyt @acebmxer

        @acebmxer OK, I don't use the old scheduling for full + delta (one job but two schedules, one full and one delta), but must I separate the two jobs?

        • acebmxer @robyt

          @robyt

          Unless I'm misunderstanding what you're asking:

          I have 1 backup job. That job has 2 schedules: one for daily incremental backups and one that has "Force full backup" checked. The full backup only runs on Sunday; the incremental does not run on Sunday.

          When creating the sequence, it uses the schedules from that backup job to build the sequence.

          Screenshot 2025-06-23 142518.png

          Screenshot 2025-06-23 142238.png
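
          Concretely, the two schedules on that single job could look roughly like this; the cron times and field names below are illustrative, not taken from the screenshots or from XO's real schema:

          // Illustrative sketch of one backup job with two schedules
          // (field names and times are examples, not XO's real settings).
          const schedules = [
            // Monday-Saturday at 01:00, normal incremental run
            { name: "daily incremental", cron: "0 1 * * 1-6", forceFull: false },
            // Sunday at 01:00, with "Force full backup" enabled
            { name: "weekly full", cron: "0 1 * * 0", forceFull: true },
          ];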

          • DustinB @acebmxer

            @acebmxer Not a backup job, a backup sequence.

            89e58b60-caf6-4a0a-9902-da92c88b2776-image.png

            Jobs are for creating separate backups (usually on different schedules).

            Sequences are "run a backup, then do XYZ".

            • acebmxer @DustinB

              @DustinB

              Yes, but you can't create the sequence without the schedules from a regular backup job.

              Screenshot 2025-06-23 154634.png

              • robyt @acebmxer

                @acebmxer [excuse the poor English!]
                I now have this situation:
                1 backup job with two disabled schedules, one full and one delta, to a NAS
                1 full mirror backup to Wasabi (S3)
                1 incremental mirror backup

                I've created two sequences:
                one starting on Sunday for the full backup (the sequence is full backup, then full mirror)
                one every 3 hours with the delta backup, then the incremental mirror

                The jobs start at the correct time, but the incremental mirror sends the same amount of data every time.
                Backup to NAS:

                dns_interno1 (ctx1.tosnet.it)
                Transfer data using NBD
                    Clean VM directory
                    cleanVm: incorrect backup size in metadata
                    Start: 2025-06-24 16:00
                    End: 2025-06-24 16:00
                    Snapshot
                    Start: 2025-06-24 16:00
                    End: 2025-06-24 16:00
                    Backup XEN OLD
                        transfer
                        Start: 2025-06-24 16:00
                        End: 2025-06-24 16:01
                        Duration: a few seconds
                        Size: 132 MiB
                        Speed: 11.86 MiB/s
                    Start: 2025-06-24 16:00
                    End: 2025-06-24 16:01
                    Duration: a minute
                
                Start: 2025-06-24 16:00
                End: 2025-06-24 16:01
                Duration: a minute
                Type: delta
                
                 dns_interno1 (ctx1.tosnet.it)
                    Wasabi
                        transfer
                        Start: 2025-06-24 16:02
                        End: 2025-06-24 16:15
                        Duration: 13 minutes
                        Size: 25.03 GiB
                        Speed: 34.14 MiB/s
                        transfer
                        Start: 2025-06-24 16:15
                        End: 2025-06-24 16:15
                        Duration: a few seconds
                        Size: 394 MiB
                        Speed: 22.49 MiB/s
                    Start: 2025-06-24 16:02
                    End: 2025-06-24 16:17
                    Duration: 15 minutes
                    Wasabi
                    Start: 2025-06-24 16:15
                    End: 2025-06-24 16:17
                    Duration: 2 minutes
                
                Start: 2025-06-24 16:02
                End: 2025-06-24 16:17
                Duration: 15 minutes
                

                The job sends 25 GB to Wasabi every time, not just the incremental data.

                • acebmxer @robyt

                  @robyt

                  What I posted was just an example of how I have my backups configured; you don't have to name your backup jobs the same as mine. Maybe someone else with more experience can step in. Double-check your schedules to make sure you didn't set incorrect settings in one of the schedules/jobs.

                  • robyt @acebmxer

                    @acebmxer Of course, this is only a test.
                    The problem is not the scheduling but why the incremental mirror sends all the data every time.

                    • florent (Vates 🪐 XO Team)

                      @robyt said in mirror backup to S3:

                      1 backup job with two disabled schedules, one full and one delta, to a NAS

                      Hi robyt

                      In the mirror settings: a full mirror is for mirroring full backups (called "backup"), which are stored as one file per VM.
                      The incremental mirror takes care of all the files created by a delta backup.

                      Can you show me what retention you have on both sides? There is more explanation of the synchronization algorithm here: https://docs.xen-orchestra.com/mirror_backup. In short, a delta mirror can only be done if part of the disk chains is common to both remotes.
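
                      To picture the idea (a sketch of the principle only, not XO's implementation): the mirror compares the disk chain it sees on the source remote with what is already on the destination and transfers only the missing tail. If nothing is common any more, for example because retention on one side has already deleted the older links, the whole chain has to be sent again.

                      // Sketch only: decide what an incremental mirror has to transfer,
                      // assuming each backup in a disk chain is identified by an id,
                      // ordered oldest (key/full disk) to newest (latest delta).
                      type Chain = string[];

                      function toTransfer(source: Chain, destination: Chain): Chain {
                        // length of the common prefix between the two chains
                        let common = 0;
                        while (
                          common < source.length &&
                          common < destination.length &&
                          source[common] === destination[common]
                        ) {
                          common++;
                        }
                        // nothing in common => the whole chain is sent again;
                        // otherwise only the newer deltas are sent
                        return source.slice(common);
                      }

                      // The destination already holds the key disk and the first delta,
                      // so only "delta-3" would be mirrored.
                      console.log(toTransfer(["key-1", "delta-2", "delta-3"], ["key-1", "delta-2"]));

                      That matching is why retention on both remotes matters: once the destination chain no longer overlaps the source chain, every run degrades into a full transfer, which is what the repeated 25 GiB uploads above look like.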

                      • robyt @florent

                        @florent Hi, I've adjusted the retention parameters and I'm waiting for a few days of backups/mirrors to check.

                        • robyt @florent

                          Hi @florent, I've cleaned up the backup data, added the correct retention, and now it's fine.
                          591a0ded-9ab4-40f4-803e-81ea50270e87-immagine.png
                          I'm lowering the NBD connections (from 4 to 1); the speed of "test backup con mirror" is too low.

                          • florent (Vates 🪐 XO Team) @robyt

                            @robyt 2 is generally a sweet spot

                            • robyt @florent

                              @florent I have a little problem with the backup to S3/Wasabi.

                              For delta, everything seems OK:

                              {
                                "data": {
                                  "mode": "delta",
                                  "reportWhen": "failure"
                                },
                                "id": "1751914964818",
                                "jobId": "e4adc26c-8723-4388-a5df-c2a1663ed0f7",
                                "jobName": "Mirror wasabi delta",
                                "message": "backup",
                                "scheduleId": "62a5edce-88b8-4db9-982e-ad2f525c4eb9",
                                "start": 1751914964818,
                                "status": "success",
                                "infos": [
                                  {
                                    "data": {
                                      "vms": [
                                        "2771e7a0-2572-ca87-97cf-e174a1d35e6f",
                                        "b89670f6-b785-7df0-3791-e5e41ec8ee08",
                                        "cac6afed-5df8-0817-604c-a047a162093f"
                                      ]
                                    },
                                    "message": "vms"
                                  }
                                ],
                                "tasks": [
                                  {
                                    "data": {
                                      "type": "VM",
                                      "id": "b89670f6-b785-7df0-3791-e5e41ec8ee08"
                                    },
                                    "id": "1751914968373",
                                    "message": "backup VM",
                                    "start": 1751914968373,
                                    "status": "success",
                                    "tasks": [
                                      {
                                        "id": "1751914968742",
                                        "message": "clean-vm",
                                        "start": 1751914968742,
                                        "status": "success",
                                        "end": 1751914979708,
                                        "result": {
                                          "merge": false
                                        }
                                      },
                                      {
                                        "data": {
                                          "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88",
                                          "type": "remote"
                                        },
                                        "id": "1751914984503",
                                        "message": "export",
                                        "start": 1751914984503,
                                        "status": "success",
                                        "tasks": [
                                          {
                                            "id": "1751914984667",
                                            "message": "transfer",
                                            "start": 1751914984667,
                                            "status": "success",
                                            "end": 1751914992365,
                                            "result": {
                                              "size": 125829120
                                            }
                                          },
                                          {
                                            "id": "1751914995521",
                                            "message": "clean-vm",
                                            "start": 1751914995521,
                                            "status": "success",
                                            "tasks": [
                                              {
                                                "id": "1751915004208",
                                                "message": "merge",
                                                "start": 1751915004208,
                                                "status": "success",
                                                "end": 1751915018911
                                              }
                                            ],
                                            "end": 1751915020075,
                                            "result": {
                                              "merge": true
                                            }
                                          }
                                        ],
                                        "end": 1751915020077
                                      }
                                    ],
                                    "end": 1751915020077
                                  },
                                  {
                                    "data": {
                                      "type": "VM",
                                      "id": "2771e7a0-2572-ca87-97cf-e174a1d35e6f"
                                    },
                                    "id": "1751914968380",
                                    "message": "backup VM",
                                    "start": 1751914968380,
                                    "status": "success",
                                    "tasks": [
                                      {
                                        "id": "1751914968903",
                                        "message": "clean-vm",
                                        "start": 1751914968903,
                                        "status": "success",
                                        "end": 1751914979840,
                                        "result": {
                                          "merge": false
                                        }
                                      },
                                      {
                                        "data": {
                                          "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88",
                                          "type": "remote"
                                        },
                                        "id": "1751914986808",
                                        "message": "export",
                                        "start": 1751914986808,
                                        "status": "success",
                                        "tasks": [
                                          {
                                            "id": "1751914987416",
                                            "message": "transfer",
                                            "start": 1751914987416,
                                            "status": "success",
                                            "end": 1751914993152,
                                            "result": {
                                              "size": 119537664
                                            }
                                          },
                                          {
                                            "id": "1751914996024",
                                            "message": "clean-vm",
                                            "start": 1751914996024,
                                            "status": "success",
                                            "tasks": [
                                              {
                                                "id": "1751915005023",
                                                "message": "merge",
                                                "start": 1751915005023,
                                                "status": "success",
                                                "end": 1751915035567
                                              }
                                            ],
                                            "end": 1751915039414,
                                            "result": {
                                              "merge": true
                                            }
                                          }
                                        ],
                                        "end": 1751915039414
                                      }
                                    ],
                                    "end": 1751915039415
                                  },
                                  {
                                    "data": {
                                      "type": "VM",
                                      "id": "cac6afed-5df8-0817-604c-a047a162093f"
                                    },
                                    "id": "1751915020089",
                                    "message": "backup VM",
                                    "start": 1751915020089,
                                    "status": "success",
                                    "tasks": [
                                      {
                                        "id": "1751915020443",
                                        "message": "clean-vm",
                                        "start": 1751915020443,
                                        "status": "success",
                                        "end": 1751915030194,
                                        "result": {
                                          "merge": false
                                        }
                                      },
                                      {
                                        "data": {
                                          "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88",
                                          "type": "remote"
                                        },
                                        "id": "1751915034962",
                                        "message": "export",
                                        "start": 1751915034962,
                                        "status": "success",
                                        "tasks": [
                                          {
                                            "id": "1751915035142",
                                            "message": "transfer",
                                            "start": 1751915035142,
                                            "status": "success",
                                            "end": 1751915052723,
                                            "result": {
                                              "size": 719323136
                                            }
                                          },
                                          {
                                            "id": "1751915056146",
                                            "message": "clean-vm",
                                            "start": 1751915056146,
                                            "status": "success",
                                            "tasks": [
                                              {
                                                "id": "1751915064681",
                                                "message": "merge",
                                                "start": 1751915064681,
                                                "status": "success",
                                                "end": 1751915116508
                                              }
                                            ],
                                            "end": 1751915117838,
                                            "result": {
                                              "merge": true
                                            }
                                          }
                                        ],
                                        "end": 1751915117839
                                      }
                                    ],
                                    "end": 1751915117839
                                  }
                                ],
                                "end": 1751915117839
                              }
                              

                              For full, I'm not sure:

                                 {
                                    "data": {
                                      "mode": "full",
                                      "reportWhen": "always"
                                    },
                                    "id": "1751757492933",
                                    "jobId": "35c78a31-67c5-47ba-9988-9c4cb404ed8e",
                                    "jobName": "Mirror wasabi full",
                                    "message": "backup",
                                    "scheduleId": "476b863d-a651-42e5-9bb3-db830dbdac7c",
                                    "start": 1751757492933,
                                    "status": "success",
                                    "infos": [
                                      {
                                        "data": {
                                          "vms": [
                                            "2771e7a0-2572-ca87-97cf-e174a1d35e6f",
                                            "b89670f6-b785-7df0-3791-e5e41ec8ee08",
                                            "cac6afed-5df8-0817-604c-a047a162093f"
                                          ]
                                        },
                                        "message": "vms"
                                      }
                                    ],
                                    "end": 1751757496499
                                  }
                              

                              XOA sent me an email with this report:

                              Job ID: 35c78a31-67c5-47ba-9988-9c4cb404ed8e
                              Run ID: 1751757492933
                              Mode: full
                              Start time: Sunday, July 6th 2025, 1:18:12 am
                              End time: Sunday, July 6th 2025, 1:18:16 am
                              Duration: a few seconds
                              

                              Four seconds for 203 GB?

                              • robyt @robyt

                                Any ideas?

                                • florent (Vates 🪐 XO Team) @robyt

                                  @robyt Are you using full backups (called "backup") on the source? Because an incremental mirror will transfer all the backups generated by a "delta backup", whether it's the first full transfer or the following deltas.

                                  (Our terminology can be confusing for now.)

                                  • robyt @florent

                                    @florent said in mirror backup to S3:

                                    @robyt Are you using full backups (called "backup") on the source? Because an incremental mirror will transfer all the backups generated by a "delta backup", whether it's the first full transfer or the following deltas.

                                    (Our terminology can be confusing for now.)

                                    Hi, I have two delta jobs, one with "Force full backup" checked.
                                    In the log I only have this:
                                    31356631-ce43-43a0-b626-e9b0dbe52da2-immagine.png

                                    • florent (Vates 🪐 XO Team) @robyt

                                      @robyt You're doing incremental backup, which has two steps: a complete backup (full/key disks) and deltas (differencing/incremental). Both of these are transferred through an incremental mirror.

                                      On the other hand, if you do a Backup, it builds one XVA file per VM containing all the VM data at each backup. These are transferred through a full backup mirror.

                                      We are working on clarifying the vocabulary.
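
                                      A compact way to read that mapping (the names below are illustrative, not XO's API):

                                      // Illustrative only: which mirror type replicates which source job.
                                      type SourceJob = "incremental backup" | "backup"; // VHD chain vs. one XVA per VM
                                      type MirrorJob = "incremental mirror" | "full mirror";

                                      function mirrorFor(source: SourceJob): MirrorJob {
                                        // An incremental mirror replicates everything an incremental ("delta")
                                        // backup produces, key disks included; a full mirror replicates
                                        // XVA-based Backups only.
                                        return source === "incremental backup" ? "incremental mirror" : "full mirror";
                                      }

                                      Since the source jobs in this thread are delta backups, a separate full mirror has nothing to pick up, which matches the near-empty full-mirror report earlier in the thread.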

                                      • robyt @florent

                                        @florent said in mirror backup to S3:

                                        @robyt You're doing incremental backup, which has two steps: a complete backup (full/key disks) and deltas (differencing/incremental). Both of these are transferred through an incremental mirror.

                                        On the other hand, if you do a Backup, it builds one XVA file per VM containing all the VM data at each backup. These are transferred through a full backup mirror.

                                        We are working on clarifying the vocabulary.

                                        Ahhhh...
                                        so the full mirror to S3 is not necessary.
