XCP-ng

    Continuous Replication jobs create full backups every time since 2025-09-06 (XO from source)

    • Andrew Top contributor @olivierlambert

      @olivierlambert Correct. Running commit 4944ea902ff19f172b1b86ec96ad989e322bec2c works.

      • olivierlambert Vates 🪐 Co-Founder CEO

        @florent so it looks like https://github.com/vatesfr/xen-orchestra/commit/a459015ca91c159123bb682f16237b4371a312a6 might have introduced a regression?

        fbeauchamp committed to vatesfr/xen-orchestra
        Fix(replication): VDI_NOT_MANAGED error (#8935)
        from ticket #40151
        • florent Vates 🪐 XO Team @Andrew

          @Andrew then again, with such a precise report, the fix is easier

          the fix should land in master soon

          • olivierlambert Vates 🪐 Co-Founder CEO

            If you want to test, you can switch to the relevant branch, which is fix_replication

            https://github.com/vatesfr/xen-orchestra/pull/8971

            fbeauchamp opened this pull request in vatesfr/xen-orchestra
            fix(backups): replication doiing always a full #8971 (open)
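
            For anyone on a from-source install who wants to try it, a minimal sketch of switching an existing checkout to the fix_replication branch and rebuilding; the checkout path and the xo-server systemd unit name are assumptions, so adjust them to your setup:

                # go to your xen-orchestra source checkout (example path)
                cd /path/to/xen-orchestra
                # fetch the fix branch from the PR above and switch to it
                git fetch origin
                git checkout fix_replication
                git pull --ff-only origin fix_replication
                # rebuild XO from source (standard yarn build)
                yarn
                yarn build
                # restart the service; assumes xo-server runs as a systemd unit
                sudo systemctl restart xo-server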

            • Andrew Top contributor @olivierlambert

              @olivierlambert fix_replication works for some VMs... but most of my VMs now have an error:
              the writer IncrementalXapiWriter has failed the step writer.checkBaseVdis() with error Cannot read properties of undefined (reading 'managed').

                    "result": {
                      "message": "Cannot read properties of undefined (reading 'managed')",
                      "name": "TypeError",
                      "stack": "TypeError: Cannot read properties of undefined (reading 'managed')\n    at file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_writers/IncrementalXapiWriter.mjs:29:15\n    at Array.filter (<anonymous>)\n    at IncrementalXapiWriter.checkBaseVdis (file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_writers/IncrementalXapiWriter.mjs:26:8)\n    at file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:159:54\n    at callWriter (file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:33:15)\n    at IncrementalXapiVmBackupRunner._callWriters (file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:52:14)\n    at IncrementalXapiVmBackupRunner._selectBaseVm (file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:158:16)\n    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:378:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202509150025/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                    }
              
              • olivierlambert Vates 🪐 Co-Founder CEO

                Feedback for you @florent 😛

                • JB @peo

                  @peo Yes, same problem!

                  • Andrew Top contributor @olivierlambert

                    @olivierlambert @florent Looks like 1471ab0c7c79fa6dca9a1598e7be2a141753ba91 (in current master) has fixed the issue.
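
                     A quick, generic way to confirm which commit a from-source build is actually on (the path below is the one from the stack trace earlier in this thread; yours may differ):

                         # show the exact commit the checkout is on
                         git -C /opt/xo/xo-builds/xen-orchestra-202509150025 log --oneline -1
                         # check whether the fix commit mentioned above is included in that build
                         git -C /opt/xo/xo-builds/xen-orchestra-202509150025 \
                             merge-base --is-ancestor 1471ab0c7c79fa6dca9a1598e7be2a141753ba91 HEAD \
                             && echo "fix included" || echo "fix missing"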

                    But, during testing I found a new related issue (no error messages). Running current XO master b16d5...

                     During the normal CR delta backup job, two VMs show the warning message "Backup fell back to a full", but the job actually did a delta (not a full) backup.

                     I caused this by running a different CR backup job (for testing) that used the same two VMs but targeted a different SR. The new test job did a full backup and then a delta backup correctly. Since there is only one CBT snapshot, the normal backup job did recognize that the snapshot was not created by it and that it should have done a full (and started a new CBT snapshot), but it only did a short delta.

                     As the VMs were off, I guess it could correctly use the other job's CBT snapshot since no blocks had changed... It is just odd that the CR backup said it was going to do a full and then did a delta.

                    • JB @Andrew

                      @Andrew [screenshot attached: backup-vms.jpeg]

                      • florent Vates 🪐 XO Team @JB

                        @JB this means that it should have done a delta (as per the full backup interval), but had to fall back to a full for at least one disk.
                        This can happen after a failed transfer, a newly added disk, and some edge cases. The issue was not really visible before the latest release, even though its impact can be significant, saturating network and storage.

                        We are investigating this (especially @Bastien-Nollet), and expect to have a fix and/or an explanation quickly.

                        Are you using the "purge snapshot data" option? Is there anything in the journalctl logs?
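
                        On the journalctl side, a minimal sketch for pulling recent xo-server logs, assuming a systemd-based install where the service unit is named xo-server (the unit name depends on how XO was installed):

                            # follow live xo-server logs while a replication job runs
                            sudo journalctl -u xo-server -f
                            # or dump the last few hours and filter for backup/replication messages
                            sudo journalctl -u xo-server --since "3 hours ago" | grep -iE "backup|replication|error"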

                        • JB @florent

                          @florent Thanks, Florent! No to both questions!
