XCP-ng

CBT: the thread to centralize your feedback

Backup · 439 Posts · 37 Posters · 386.5k Views · 29 Watching
• florent (Vates πŸͺ XO Team)

dataDestroy will be enable-able (not sure if it's really a word) today; in the meantime:

Please note that the metadata snapshot won't be visible in the UI, since it's not a VM snapshot but only the metadata of the VDI snapshots.

The latest commits in the fix_cbt branch add an additional check on dom0 connect and more error handling.
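
Here's a rough sketch of what a CBT-only snapshot looks like at the XAPI level (illustrative only: the URL, credentials and VDI UUID are placeholders, using the xen-api JS client): after VDI.data_destroy, the snapshot VDI's type becomes cbt_metadata, which is why there is no VM snapshot left for the UI to show.

    import { createClient } from 'xen-api'

    // Placeholder connection details for the pool master.
    const xapi = createClient({
      url: 'https://xcp-host.example',
      allowUnauthorized: true,
      auth: { user: 'root', password: 'secret' },
    })
    await xapi.connect()

    // Snapshot a CBT-enabled VDI, then destroy the snapshot's data:
    // only the CBT metadata remains.
    const vdiRef = await xapi.call('VDI.get_by_uuid', '<vdi-uuid>')
    const snapshotRef = await xapi.call('VDI.snapshot', vdiRef, {})
    await xapi.call('VDI.data_destroy', snapshotRef)

    console.log(await xapi.call('VDI.get_type', snapshotRef)) // 'cbt_metadata'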

• rtjdamen @florent

@florent OK, so currently the data remains? When do you think this addition will be ready for testing? I am interested, as we saw some issues with this on NFS, and I am curious whether this code will make a difference.

@olivierlambert I now understand there is in general no difference in coalesce as long as the data destroy is not done. So you were right on that part, and it's safe to push it this way!

• olivierlambert (Vates πŸͺ Co-Founder & CEO)

Yes, that's why we'll be able to offer a safe route for people not using the data destroy, while letting those who want to explore it opt in πŸ™‚

• florent (Vates πŸͺ XO Team) @rtjdamen

@rtjdamen it's still fresh, but on the other hand, the worst that can happen is falling back to a full backup. So for now I would not use it on the bigger VMs (multi-terabyte).
We are sure that it will be a game changer on thick provisioning (because a snapshot costs the full virtual size) or on fast-changing VMs, where coalescing an older snapshot is a major hurdle.

If everything goes well it will be in stable by the end of July, and we'll probably enable it by default on new backups in the near future.
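
You can picture the "worst case is a full backup" behaviour like this (hypothetical sketch with illustrative names, not the actual XO code): try an incremental export from the changed-block bitmap, and fall back to a full export whenever the CBT data is unusable.

    // Hypothetical sketch of the fallback described above (not the actual XO code).
    async function exportVdi(xapi, vdiRef, lastSnapshotRef) {
      try {
        // VDI.list_changed_blocks returns a base64-encoded bitmap of changed 64 KiB blocks.
        const bitmap = await xapi.call('VDI.list_changed_blocks', lastSnapshotRef, vdiRef)
        return { mode: 'delta', bitmap }
      } catch (error) {
        // CBT data missing or invalid: the worst case is simply a new full backup.
        console.warn('CBT unusable, falling back to a full export', error)
        return { mode: 'full' }
      }
    }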

• Tristis Oris (Top contributor)

Can't commit, too small for a ticket.

Typo:

              preferNbdInformation:
                  'A network accessible by XO or the proxy must have NBD enabled,. Storage must support Change Block Tracking (CBT) to ue it in a backup',
              

              enabled,.
              to ue
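
For reference, the corrected string would read:

    preferNbdInformation:
        'A network accessible by XO or the proxy must have NBD enabled. Storage must support Change Block Tracking (CBT) to use it in a backup',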

• Tristis Oris (Top contributor)

Updated to the fix_cbt branch.

CR NBD backup works.
Delta NBD backup works.
Just once so far, so we can't be sure yet.

No broken tasks are generated.

Still confused why the CBT toggle is enabled on some VMs: 2 similar VMs on the same pool, same storage, same Ubuntu version, and one is enabled automatically while the other is not.
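
One way to see which disks actually have CBT on is to query each VDI's cbt_enabled field (a minimal sketch reusing the connected xen-api client from the earlier example; the VM UUID is a placeholder):

    // List the CBT status of every VDI attached to a VM.
    const vmRef = await xapi.call('VM.get_by_uuid', '<vm-uuid>')
    for (const vbdRef of await xapi.call('VM.get_VBDs', vmRef)) {
      const vdiRef = await xapi.call('VBD.get_VDI', vbdRef)
      if (vdiRef === 'OpaqueRef:NULL') continue // skip empty drives such as CD-ROMs
      const name = await xapi.call('VDI.get_name_label', vdiRef)
      const cbtEnabled = await xapi.call('VDI.get_cbt_enabled', vdiRef)
      console.log(name, cbtEnabled)
    }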

• rtjdamen @florent

@florent I did some testing with the data_destroy branch in my lab; it seems to work as required, and indeed the snapshot is hidden when it is CBT-only.

What I am not sure is correct: when the data destroy action is done, I would expect a snapshot to show up for coalesce, but it does not. Is it so small and removed so quickly that it will not be visible in XOA? On larger VMs in our production I can see these snapshots showing up for coalesce. Or when you do vdi.data_destroy, will it try to coalesce directly, without garbage collection afterwards?

• rtjdamen @florent

@florent what happens when we upgrade to this version by the end of July? We currently use NBD without CBT on most backups. Will they all need to run a full, or does it 'convert' the method to the CBT situation? I assume that, as the data destroy checkbox will be disabled in general, not much will change about the backup on day one as long as you don't switch to the data destroy option?

• olivierlambert (Vates πŸͺ Co-Founder & CEO)

The transition to CBT should be done smoothly and without any manual intervention. @florent will provide more details on how πŸ™‚

• rtjdamen

All tests with 2 VMs were successful so far; no issues found in our lab. Good job guys!

• robyt @olivierlambert

@olivierlambert how long until this is available for us with the precompiled XOA?

• olivierlambert (Vates πŸͺ Co-Founder & CEO)

Tomorrow πŸ™‚

• rtjdamen @olivierlambert

@olivierlambert sounds good!

• Delgado

Things are looking good on my end as well.

• Andrew (Top contributor) @olivierlambert

@olivierlambert Looks like it's back to single-threaded bottlenecks...

I see a lot of single-core 100% utilization on the XO VM.

• rtjdamen @Andrew

@Andrew Hi Andrew, I can't reproduce this on my end; all cores are utilized at the same time, at around 30 to 40% for 2 simultaneous backups.

• Andrew (Top contributor) @rtjdamen

@rtjdamen It happens when Continuous Replication is running. The source, destination, and network can do 10 Gb/s.

I'll have to work on a better set of conditions and tests to replicate the issue.

I know it's slower because the hourly replication was taking 5-6 minutes and now takes 9-10 minutes. It's more of an issue when the transfer is >300 GB.

Just feedback...

• rtjdamen @Andrew

@Andrew understood! We do not use that at this time.

• rtjdamen @olivierlambert

@olivierlambert Hi Olivier, do you have an ETA?
