XCP-ng

    CBT: the thread to centralize your feedback

    • flakpyro @rtjdamen

      @rtjdamen Interesting, is this with iSCSI (block) or with an NFS SR?

      • rtjdamen @flakpyro

        @flakpyro Both scenarios.

        • flakpyro @rtjdamen

          @rtjdamen Hmm, very strange.

          The only thing I can think of is that this may be due to the fact that these VMs were imported from VMware.

          Next week I can try creating a brand new NFSv3 SR (since NFSv4 has caused issues in the past), as well as a new clean-install VM that was not imported from VMware, and see if the issue persists.
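
          (For reference, a minimal sketch of creating such an NFSv3 SR from the CLI; the server name, export path and SR label below are placeholders, not values from this thread:)

            # Create a shared NFS SR pinned to NFSv3 (placeholder server/path)
            xe sr-create type=nfs shared=true name-label="test-nfs3-sr" \
                device-config:server=<nfs-server> \
                device-config:serverpath=</export/path> \
                device-config:nfsversion=3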

          • flakpyro @flakpyro

            This is a completely different 5-host pool backed by a Pure Storage array with SRs mounted via NFSv3; migrating a VM between hosts results in the same issue.

            Before migration:
            [01:41 xcpng-prd-03 b04d9910-8671-750f-050e-8b55c64fbede]# cbt-util get -c -n 83035854-b5a9-4f7e-869f-abe43ddc658d.cbtlog 
            e28065ff-342f-4eae-a910-b91842dd39ca
            
             After migration:
            [01:41 xcpng-prd-03 b04d9910-8671-750f-050e-8b55c64fbede]# cbt-util get -c -n 83035854-b5a9-4f7e-869f-abe43ddc658d.cbtlog 
            00000000-0000-0000-0000-000000000000
            

            I don't think I have anything "custom" running that would be causing this, so I have no idea why this is happening, but it's happening on multiple pools for us.
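
            (For anyone who wants to run the same check on their own pool, a minimal sketch; the UUIDs are placeholders, and the SR path assumes a file-based SR mounted under /run/sr-mount:)

              # Find the VDI(s) attached to the VM
              xe vbd-list vm-uuid=<vm-uuid> params=vdi-uuid --minimal

              # Ask XAPI whether it still considers CBT enabled on the VDI
              xe vdi-param-get uuid=<vdi-uuid> param-name=cbt-enabled

              # Inspect the on-disk CBT log; an all-zero UUID here means the chain link was reset
              cd /run/sr-mount/<sr-uuid>
              cbt-util get -c -n <vdi-uuid>.cbtlog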

            • rtjdamen @flakpyro

              @flakpyro Is there any difference between migrating with the VM powered on or powered off?

              • rtjdamen @flakpyro

                @flakpyro I have just tested live and offline migration on our end; both kept the CBT alive. Tested on both iSCSI and NFS.
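
                (For anyone wanting to repeat the same test, a rough sketch; the UUIDs are placeholders, and the cbt-util check is the one shown earlier in the thread:)

                  # Live-migrate the VM to another host in the pool
                  xe vm-migrate uuid=<vm-uuid> host-uuid=<destination-host-uuid> live=true

                  # Then re-check the CBT log on the SR
                  cbt-util get -c -n <vdi-uuid>.cbtlog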

                • flakpyro @rtjdamen

                  @rtjdamen

                  Looks like it does this if the VM is powered off as well. I'm really not sure what else to try, since this is happening on two different pools for us.

                  I may need to end up submitting a ticket with Vates for them to get to the bottom of it.

                  • rtjdamen @flakpyro

                    @flakpyro Are you running the latest XCP-ng version, 8.2 or 8.3?

                    • flakpyro @rtjdamen

                      @rtjdamen Both pools are on 8.3 with all the latest updates.
                      I did find this PR on GitHub and wonder if it may be related: https://github.com/vatesfr/xen-orchestra/pull/8127, but I'm not sure why it would only happen after a migration...

                      fbeauchamp opened this pull request in vatesfr/xen-orchestra: fix(backups): handle slow enable cbt #8127 (open)

                      • rtjdamen @flakpyro

                        @flakpyro We are still on 8.2, so maybe there is some difference there.

                        • olivierlambert Vates 🪐 Co-Founder CEO

                          Thanks for the feedback @flakpyro; it shows it's not an XO issue. Something is resetting CBT in your case when it shouldn't be, and I don't know why. But clearly, you have a way to test it easily, which is progress 🙂

                          • flakpyro @olivierlambert

                            @olivierlambert So I guess the next thing we need to do is have someone else running 8.3 test this using an NFS SR?

                            • florent Vates 🪐 XO Team @flakpyro

                              @flakpyro said in CBT: the thread to centralize your feedback:

                              This is a completely different 5-host pool backed by a Pure Storage array with SRs mounted via NFSv3; migrating a VM between hosts results in the same issue.

                              Before migration:
                              [01:41 xcpng-prd-03 b04d9910-8671-750f-050e-8b55c64fbede]# cbt-util get -c -n 83035854-b5a9-4f7e-869f-abe43ddc658d.cbtlog 
                              e28065ff-342f-4eae-a910-b91842dd39ca
                              
                               After migration:
                              [01:41 xcpng-prd-03 b04d9910-8671-750f-050e-8b55c64fbede]# cbt-util get -c -n 83035854-b5a9-4f7e-869f-abe43ddc658d.cbtlog 
                              00000000-0000-0000-0000-000000000000
                              

                              I don't think I have anything "custom" running that would be causing this, so I have no idea why this is happening, but it's happening on multiple pools for us.

                              This is a very interesting clue, and we will investigate it with Damien.

                              There are a lot of edge cases that can happen (a lying network/drive/...), and most of the time XCP-ng/XAPI are self-healing, but sometimes XO has to do a little work to clean up. The CBT should be reset correctly after a storage migration.
                              We'll make the enable/disable CBT calls async, since they could lead to a bogus state, and maybe add a more in-depth cleanup of CBT after a "VDI not related" error.

                              • flakpyro @florent

                                @florent Thanks for looking into this, as we'd love to be able to use this feature. If you need me to test anything or provide any additional logs/info about our environment, let me know!

                                • flakpyro @florent

                                  @florent Testing a storage migration, I do see CBT get disabled and reset during the process, which is expected! I do notice it leaves the .cbtlog file on the old SR after the storage migration is complete, but that's easy enough to clean up manually.

                                  The issue I posted above, however, is just a VM migration from host to host on a shared NFS SR; the SR the VM is on is not changing.
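
                                  (For reference, a minimal sketch of that manual cleanup, assuming a file-based SR mounted under /run/sr-mount; the UUIDs are placeholders, and it's worth confirming the VDI really has moved before deleting anything:)

                                    # Check which VDIs are still on the old SR
                                    xe vdi-list sr-uuid=<old-sr-uuid> params=uuid,name-label

                                    # Remove the orphaned CBT log left behind by the moved VDI
                                    rm /run/sr-mount/<old-sr-uuid>/<vdi-uuid>.cbtlog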

                                  • Rhodderz

                                    We appear to have a similar issue to @flakpyro.
                                    We don't have NFS storage, but use iSCSI from Dell SC5020s.
                                    We had backups with NBD and CBT enabled.
                                    We updated one of our pools to the latest (stable branch) yesterday to try and get rid of the iSCSI disconnecting bug, which meant all the VMs were shuffled around and migrated.
                                    This morning the majority of the VMs failed their backup with "can't create a stream from a metadata VDI, fall back to a base".
                                    A quick search brought me here, and following what flakpyro did, I found that the cbtlog for one of the failed VMs is also zeroed, as shown below:

                                    [09:40 xcp101 VG_XenStorage-6c2ec0ce-01ba-6975-741c-e2e86bc45e21]# cbt-util get -c -n cc2f2443-eb13-4eeb-951b-5faa3c7b8c55.cbtlog
                                    00000000-0000-0000-0000-000000000000
                                    

                                    We have enterprise support, with a ticket already open about NBD being slow (it was on 1 NBD connection) and a support tunnel open, which I will update as well.
                                    Hopefully that gives you another point of reference to check from.

                                    Is it possible to force a clean, fresh start for the backups, similar to Veeam's "Active Full"?

                                    • Forza @Rhodderz

                                      @Rhodderz said in CBT: the thread to centralize your feedback:

                                      Is it possible to force a clean, fresh start for the backups, similar to Veeam's "Active Full"?

                                      Perhaps delete the snapshots for each VM. When the backup job starts, it should then do a 'full' backup.
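
                                      (A rough sketch of doing that from the CLI, assuming you can identify the snapshot created by the backup job for each VM; the UUIDs are placeholders, snapshot-uninstall removes the snapshot's disks as well, and `xe help snapshot-uninstall` will confirm the exact parameters on your version:)

                                        # List the snapshots of the VM and pick out the one created by the backup job
                                        xe snapshot-list snapshot-of=<vm-uuid> params=uuid,name-label

                                        # Remove that snapshot so the next delta run has no base and falls back to a full
                                        xe snapshot-uninstall snapshot-uuid=<snapshot-uuid> force=true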

                                      • rtjdamen @Rhodderz

                                        @Rhodderz Are you also on 8.3?

                                        • Rhodderz @rtjdamen

                                          @rtjdamen Having a look, I assumed we were on 8.3 since we updated yesterday and there are no available patches, but we are on 8.2.1:

                                          NAME="XCP-ng"
                                          VERSION="8.2.1"
                                          ID="xenenterprise"
                                          ID_LIKE="centos rhel fedora"
                                          VERSION_ID="8.2.1"
                                          PRETTY_NAME="XCP-ng 8.2.1"

                                          release/yangtze/master/58

                                          Apologies, I forgot to check that and (wrongly) assumed.

                                          • Rhodderz @Forza

                                            @Forza Tested this on a VM, and it seems I still get the same error, sadly.
