XCP-ng

Large incremental backups

• McHenry

I have a Windows RDS server with an hourly delta backup. I expect these backups to be only 1.5 GB or so; however, they are now over 150 GB.

Is the delta backup only looking for file changes? If so, 150 GB of changes should be easy enough to identify when comparing two deltas.

Is there a recommended way to diagnose large delta backups?

• olivierlambert Vates 🪐 Co-Founder CEO

Backups are made by doing a VHD diff. A VHD block is 2 MiB; if you modify just one bit inside a 2 MiB block, the entire block has to be exported.

Microsoft OSes are known to write bits all over the disk, so I'm not entirely surprised.
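
To make the block math concrete, here is a rough sketch of the granularity effect (an illustration only, not the actual backup code; block size per the 2 MiB figure above):

```python
# Illustration: 2 MiB block granularity in a VHD delta.
# Any 2 MiB block containing at least one changed byte must be
# exported in full, so a single-byte write costs a whole block.

BLOCK = 2 * 1024 * 1024  # VHD block size: 2 MiB

def delta_size(changed_offsets):
    """Exported delta size (bytes) for a set of changed byte offsets."""
    dirty_blocks = {offset // BLOCK for offset in changed_offsets}
    return len(dirty_blocks) * BLOCK

print(delta_size([4096]))          # 1 byte changed -> 2097152 (a full 2 MiB)
print(delta_size([0, 100, 2000]))  # 3 writes in the same block -> still 2 MiB
print(delta_size([0, 5 * BLOCK]))  # 2 writes in different blocks -> 4 MiB
```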

• McHenry @olivierlambert

@olivierlambert

Thanks for the clarification.

So this means it is not directly related to files being updated, but rather to disk changes, which may or may not be the result of files being updated. So searching for large files that have changed since the last delta is a waste of time?

I guess something like a defrag would result in a large delta too.

• olivierlambert Vates 🪐 Co-Founder CEO

Yes, anything moving blocks around will generate more data to back up. So it might not be any specific files; it could just be log rotation or anything doing random writes in many places.
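
As a rough illustration (hypothetical numbers, same 2 MiB granularity as above): the same amount of written data can dirty one block or hundreds, depending on where the writes land.

```python
# Illustration: write locality determines how much a delta exports.
import random

BLOCK = 2 * 1024 * 1024  # 2 MiB VHD blocks
IO = 4 * 1024            # 4 KiB per write
WRITES = 512             # 512 writes = 2 MiB of real data

def dirty_mib(offsets):
    """MiB that must be exported for these changed byte offsets."""
    return len({off // BLOCK for off in offsets}) * BLOCK // (1024 * 1024)

# Sequential writes to one region, e.g. appending to a single log file.
sequential = [i * IO for i in range(WRITES)]

# The same writes scattered across a 100 GiB disk, e.g. paging,
# defrag, or log rotation moving blocks around.
disk_size = 100 * 1024**3
scattered = [random.randrange(0, disk_size, IO) for _ in range(WRITES)]

print("sequential:", dirty_mib(sequential), "MiB exported")  # 2 MiB
print("scattered: ", dirty_mib(scattered), "MiB exported")   # typically ~1024 MiB
```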

• McHenry @olivierlambert

@olivierlambert

The server had high memory usage, so I expect lots of paging, which could explain the block writes. I've increased the memory and want to see what difference that makes.

• DustinB

The question begging to be answered, then, is whether Changed Block Tracking (CBT) would address this issue.

Presumably it would, but that thread is still actively getting updates: https://xcp-ng.org/forum/topic/9268/cbt-the-thread-to-centralize-your-feedback

• McHenry @DustinB

@DustinB

I do not know how CBT works but will take a look.

• McHenry

@McHenry

The results are good!

Before increasing the VM memory to reduce paging:
[screenshot]

After increasing the VM memory to reduce paging:
[screenshot]

@olivierlambert
One problem we now have is that the new VM memory exceeds the DR host's memory, so the health check fails with "no hosts available". It would be good if a health check could be started with a lower memory value, as the VM only needs the network to come up for the check to pass.

• DustinB @McHenry

@McHenry said in Large incremental backups:

One problem we now have is that the new VM memory exceeds the DR host's memory, so the health check fails with "no hosts available". It would be good if a health check could be started with a lower memory value, as the VM only needs the network to come up for the check to pass.

Is this possible on any other hypervisor? If so, it would definitely be worth looking into...

• tjkreidl Ambassador @McHenry

@McHenry I'm wondering if defragmenting the drives might help, at least somewhat; if nothing else, perhaps slightly better I/O performance?

• McHenry @DustinB

@DustinB

Other hypervisors I have used do not perform the health check as an automatic restore on a different host, so I cannot say. It would be good if the health check could start the VM with its configured minimum memory value.

[screenshot: VM memory limits]

• Davidj 0 @McHenry

@McHenry Can you put the paging file on a separate disk, and then tag that disk not to be backed up?
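
If I remember correctly (please double-check), Xen Orchestra will skip any disk whose name starts with [NOBAK], so renaming the pagefile VDI to something like "[NOBAK] pagefile" (in the XO UI, or with xe vdi-param-set uuid=<VDI-UUID> name-label='[NOBAK] pagefile') should keep it out of the delta.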

• DustinB @McHenry

@McHenry said in Large incremental backups:

Other hypervisors I have used do not perform the health check as an automatic restore on a different host, so I cannot say. It would be good if the health check could start the VM with its configured minimum memory value.

Those minimums aren't really "minimums" in the sense you're thinking. They are the template minimums, and changing those configurations on the fly would affect the guest as a whole from startup.

Changing the dynamic memory to something less than the guest's configured memory causes other issues with backups. I'm not certain why, but I've found other administrators who changed that setting to 32 GB/64 GB and then suddenly the VM couldn't be backed up or had other issues. Someone else would have to elaborate on why this is the case.

Setting it to 64 GB/64 GB fixes said issues.

• olivierlambert Vates 🪐 Co-Founder CEO @McHenry

@McHenry That's a very interesting result 🙂 (adding @florent to see it and @thomas-dkmt to maybe document this).

Regarding the health check feature, how could we guess how far down in memory we could go when testing whether the recovery VM boots?

• McHenry @olivierlambert

@olivierlambert

Hyper-V has the concept of Dynamic Memory, as shown below. It allows a startup memory value to be specified, and something similar could be used for health checks, since the VM only needs the network to connect before it gets killed.

[screenshot: Hyper-V Dynamic Memory settings]
