XCP-ng

    Continuous Replication jobs create full backups every time since 2025-09-06 (XO from source)

    • peo @peo

      Too bad it becomes so inexplicably slow when it has to fall back to a full backup:
      (screenshot attached)

      The first (full) backup was 10x faster:
      (screenshot attached)

      • Andrew Top contributor @olivierlambert

        @olivierlambert This happens to me too with a459015ca91c159123bb682f16237b4371a312a6.

        I opened an issue: https://github.com/vatesfr/xen-orchestra/issues/8969

        GitHub issue #8969 (opened by andrew64k in vatesfr/xen-orchestra): a459015 causes delta replication to always be full

        • olivierlambert Vates 🪐 Co-Founder CEO @Andrew

          @Andrew And it doesn't with the commit just before?

          • Andrew Top contributor @olivierlambert

            @olivierlambert Correct. Running commit 4944ea902ff19f172b1b86ec96ad989e322bec2c works.

            • olivierlambert Vates 🪐 Co-Founder CEO

              @florent So it looks like https://github.com/vatesfr/xen-orchestra/commit/a459015ca91c159123bb682f16237b4371a312a6 might have introduced a regression?

              Linked commit by fbeauchamp in vatesfr/xen-orchestra: Fix(replication): VDI_NOT_MANAGED error (#8935), from ticket #40151
              • florent Vates 🪐 XO Team @Andrew

                @Andrew Then again, with such a precise report, the fix is easier.

                The fix should land in master soon.

                • olivierlambert Vates 🪐 Co-Founder CEO

                  If you want to test, you can switch to the relevant branch, which is fix_replication

                  https://github.com/vatesfr/xen-orchestra/pull/8971

                  PR #8971 (opened by fbeauchamp in vatesfr/xen-orchestra): fix(backups): replication doiing always a full

                  • Andrew Top contributor @olivierlambert

                    @olivierlambert fix_replication works for some... but most of my VMs now have an error:
                    the writer IncrementalXapiWriter has failed the step writer.checkBaseVdis() with error Cannot read properties of undefined (reading 'managed').

                          "result": {
                            "message": "Cannot read properties of undefined (reading 'managed')",
                            "name": "TypeError",
                            "stack": "TypeError: Cannot read properties of undefined (reading 'managed')\n    at file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_writers/IncrementalXapiWriter.mjs:29:15\n    at Array.filter (<anonymous>)\n    at IncrementalXapiWriter.checkBaseVdis (file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_writers/IncrementalXapiWriter.mjs:26:8)\n    at file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:159:54\n    at callWriter (file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:33:15)\n    at IncrementalXapiVmBackupRunner._callWriters (file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:52:14)\n    at IncrementalXapiVmBackupRunner._selectBaseVm (file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:158:16)\n    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:378:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                          }
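
                    For what it's worth, a minimal, self-contained sketch of how that trace can arise (hypothetical names and data, not the actual XO code): the stack shows checkBaseVdis filtering candidate base VDIs and reading a managed flag, so if the lookup for one of those VDIs comes back empty, that read throws exactly this TypeError. An optional-chaining guard would instead just drop that VDI from the candidates (forcing a full for that disk rather than failing the whole run):

                      // Hypothetical stand-in for the VDI records the writer finds on the target SR.
                      const replicatedVdiByUuid = new Map([
                        ['vdi-1', { uuid: 'vdi-1', managed: true }],
                        // 'vdi-2' has no record on the target, e.g. it was removed or never replicated.
                      ])

                      // Candidate base VDI UUIDs coming from the source side (hypothetical shape).
                      const baseVdiUuids = ['vdi-1', 'vdi-2']

                      // Naive filter: throws "Cannot read properties of undefined (reading 'managed')"
                      // as soon as one UUID has no matching record.
                      const usableBasesNaive = uuids => uuids.filter(uuid => replicatedVdiByUuid.get(uuid).managed)

                      // Guarded filter: a missing record simply disqualifies that VDI as a base.
                      const usableBasesGuarded = uuids => uuids.filter(uuid => replicatedVdiByUuid.get(uuid)?.managed === true)

                      try {
                        usableBasesNaive(baseVdiUuids)
                      } catch (error) {
                        console.log(error.message) // Cannot read properties of undefined (reading 'managed')
                      }
                      console.log(usableBasesGuarded(baseVdiUuids)) // [ 'vdi-1' ]

                    This only illustrates the failure mode, not the actual fix on the branch.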
                    
                    • olivierlambert Vates 🪐 Co-Founder CEO

                      Feedback for you @florent 😛

                      • JB @peo

                        @peo Yes, same problem!

                        • Andrew Top contributor @olivierlambert

                          @olivierlambert @florent Looks like 1471ab0c7c79fa6dca9a1598e7be2a141753ba91 (in current master) has fixed the issue.

                          But during testing I found a new, related issue (no error messages). Running current XO master b16d5...

                          During the normal CR delta backup job, two VMs show the warning message Backup fell back to a full, but the job actually did a delta (not a full) backup.

                          I caused this by running a different CR backup job (for testing) that used the same two VMs but targeted a different SR. The new test job did a full backup and a delta backup correctly. Since there is only one CBT snapshot, the normal backup job recognized that the snapshot was not its own and should have done a full (and started a new CBT snapshot), but it only did a short delta.

                          As the VMs were off, I guess it could correctly use the other CBT snapshot as no blocks had changed... Just odd that CR backup said it was going to do a full and then did a delta.
