XCP-ng

    Backup Alternatives

    29 Posts 7 Posters 3.0k Views 8 Watching
    • olivierlambert (Vates 🪐 Co-Founder, CEO)

      We benched ZFS dedup without compression and the result was bad. I'm convinced everything will change with a proper data path that supports TRIM.

    • rfx77 @olivierlambert

      @olivierlambert Dedup has nothing to do with forever-incremental.

      If you are doing backups, only the changed blocks are stored, and blocks no longer used by any backup are deleted. All blocks are backed up, but the software stores only the blocks that are new.

      With our Kopia solution we always do full backups; Kopia manages the dedup. With Commvault we do fulls and incrementals.

      I think most people here are thinking of backup too much as the way Veeam does it, with reverse incrementals and forever incrementals. Veeam does it in a very specific way. With Commvault, NetBackup, ... we do classic fulls and incrementals.

      I don't know how to describe it better. If you do an incremental backup, you only back up the changed blocks. When you do a full afterwards, those blocks are already in the backup repo, so they don't have to be written again. You can do incs and fulls, and the repo will only hold the blocks necessary to restore those backups. So one year of daily fulls or incs may not use more than 3-4 times the size of a single full in the backup repo.
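
      To make that concrete, here is a toy content-addressed store in Python. This is my own illustration, not Kopia's or Commvault's actual format; the 4 KiB chunk size and SHA-256 keying are assumptions. A "full" reads everything but only pays for chunks the repo has never seen:

          import hashlib, os

          class ChunkStore:
              """Toy content-addressed backup repo: chunks keyed by digest."""
              def __init__(self):
                  self.chunks = {}                      # digest -> chunk bytes

              def full_backup(self, data, chunk_size=4096):
                  """Read every block, store only digests the repo lacks."""
                  manifest, new = [], 0
                  for i in range(0, len(data), chunk_size):
                      chunk = data[i:i + chunk_size]
                      d = hashlib.sha256(chunk).hexdigest()
                      if d not in self.chunks:
                          self.chunks[d] = chunk
                          new += 1
                      manifest.append(d)    # the manifest alone restores this full
                  return manifest, new

          store = ChunkStore()
          disk = bytearray(os.urandom(1 << 20))         # pretend 1 MiB VM disk
          _, n1 = store.full_backup(bytes(disk))        # first full: all chunks new
          _, n2 = store.full_backup(bytes(disk))        # second full: writes nothing
          disk[8192:8196] = b"XCP!"                     # guest rewrites one block
          _, n3 = store.full_backup(bytes(disk))        # third full: one new chunk
          print(n1, n2, n3)                             # -> 256 0 1

      Deleting a backup just drops its manifest; any chunk no longer referenced by a manifest can then be garbage-collected, which is the "blocks not used in any backup are deleted" part.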

    • rfx77 @rfx77

      To be clear: I never talked about deduplicated storage; I am always talking about a deduplicated backup store. The dedup is not done by the filesystem but by the backup software.

      Deduplicated storage is a completely different topic, which could be addressed with SMAPIv3, but it's not the thing my posts are about.

    • rfx77 @olivierlambert

      @olivierlambert We are talking about different things. I don't mean dedup storage; I am talking about a deduplicated backup store. The dedup is done by the backup software (Kopia, Restic, Borg, Commvault, ...). They support various backends like S3, FTP, SFTP, local disk, ...
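
      Because the dedup happens in the backup software, the backend only has to store and look up keyed blobs. A minimal sketch of that split (my illustration; the class names are made up, not any tool's real API):

          import os
          from abc import ABC, abstractmethod

          class Backend(ABC):
              """All a dedup backup tool needs from storage: keyed blobs."""
              @abstractmethod
              def put(self, key: str, blob: bytes) -> None: ...
              @abstractmethod
              def has(self, key: str) -> bool: ...

          class LocalDisk(Backend):
              """Local-disk backend: one file per chunk, named by its digest."""
              def __init__(self, root: str):
                  self.root = root
                  os.makedirs(root, exist_ok=True)
              def put(self, key, blob):
                  with open(os.path.join(self.root, key), "wb") as f:
                      f.write(blob)
              def has(self, key):
                  return os.path.exists(os.path.join(self.root, key))

      An S3 or SFTP backend would implement the same two operations; the chunking and dedup logic above it never changes, which is why these tools can offer so many backends.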

    • rfx77 @nikade

      @nikade You can see the ratios in the picture above; they are all around 90%.

    • olivierlambert (Vates 🪐 Co-Founder, CEO) @rfx77

      @rfx77 That's exactly what I'm talking about. As I said, we tested the dedup ratio with exported VHD files, and it tends to diminish pretty quickly as you get more and more fragmentation, to the point where it's worthless. But yes, it works pretty well if you have similar templates underneath and little fragmentation.
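
      For what it's worth, here is one way alignment can ruin block-level dedup. This sketch assumes fixed-size chunking (an assumption about the failing setup; the thread doesn't spell it out): the same content shifted by a few bytes shares almost nothing at fixed 4 KiB boundaries:

          import hashlib, os

          def fixed_digests(data, size=4096):
              """Digest every fixed-size block, as a naive dedup engine would."""
              return {hashlib.sha256(data[i:i + size]).hexdigest()
                      for i in range(0, len(data), size)}

          base = os.urandom(1 << 20)          # pretend 1 MiB exported image
          shifted = os.urandom(7) + base      # same content behind 7 new bytes
          d1, d2 = fixed_digests(base), fixed_digests(shifted)
          print(f"{len(d1 & d2)} of {len(d1)} blocks shared")   # -> ~0 of 256

      Fragmentation can act the same way: if the guest's data lands at different offsets inside each exported image, the fixed boundaries stop lining up and the measured ratio collapses.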

    • wtdrisco @olivierlambert

      @olivierlambert This backup looks really good. In my VMware environment I currently use Veeam. It's OK. I mainly just run backups of my VMs and move them to offline storage, so if a ransomware attack gets in it can't encrypt the disks. I run a script that moves the files through a link to a Synology. That covers just the VMs.

      For all my SQL Servers I run SSMS jobs to back up the databases, plus transaction log backups every 15 minutes, again moved to offline storage. That gets me recovery to within 15 minutes of a crash. So I don't focus too much on the backup software beyond getting a VM back; my data I handle separately, so I can restore it locally or move it to an external resource. People here have had some good discussions on backup. Good thread.

    • rfx77 @olivierlambert

      @olivierlambert Fragmentation does not matter with deduplication (at least the way dedup is used in backup repos). I don't know why it does not work in your case. We do raw and not VHD; maybe that's the point. There can also be a problem when the guest filesystem is ZFS or Btrfs, because of how those filesystems align their blocks.

      In general (with a non-ROW/COW guest filesystem) fragmentation should not do any harm. At least that's what we see in our customers' backup repos, and they have Windows and various Linux guests. We do not use base templates, so there should not be any hidden benefit from that.
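
      The usual reason backup-side dedup shrugs off shifted data is content-defined chunking, which tools like Kopia, Restic, and Borg use in some form: chunk boundaries are derived from the bytes themselves, so an insertion or shift only disturbs the chunks immediately around it. A toy sketch (a made-up rolling condition, not any tool's real chunker):

          import hashlib, os

          def cdc_digests(data, mask=0x7FF, min_size=256):
              """Cut a chunk wherever a rolling fingerprint of the last few
              bytes matches a bit pattern; cut points follow the content."""
              out, start, fp = set(), 0, 0
              for i, b in enumerate(data):
                  fp = ((fp << 1) + b) & 0xFFFFFFFF   # window of ~32 bytes
                  if i - start >= min_size and (fp & mask) == mask:
                      out.add(hashlib.sha256(data[start:i]).hexdigest())
                      start = i
              out.add(hashlib.sha256(data[start:]).hexdigest())
              return out

          base = os.urandom(1 << 20)
          shifted = os.urandom(7) + base      # the shift that broke fixed blocks
          d1, d2 = cdc_digests(base), cdc_digests(shifted)
          print(f"{len(d1 & d2)} of {len(d1)} chunks shared")   # -> all but one

      After the first boundary past the inserted bytes, every later cut lands at the same content position in both images, so only the first chunk differs.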

    • olivierlambert (Vates 🪐 Co-Founder, CEO)

      We tested with VHD format, not with raw. We could do a test, but with raw you can't do deltas.

    • rtjdamen @wtdrisco

      @wtdrisco We are testing Alike A3 from Quadric Software; it is a good solution as well. They have a lot of experience with XenServer and XCP too, and they also have built-in deduplication.
