XCP-ng
    Backup Alternatives

    • wtdrisco

      I was watching a YT video indicating that Vates and Proxmox were working on integrating the Veeam and Commvault backup solutions into their environments. More tightly integrated, I assume? I understand that many companies have purchased these solutions, but I do know you have a strong backup solution, as does Proxmox. What I am wondering is whether these two products offer something that XCP-ng does not at the moment. And if they are point-for-point comparable, and the built-in backup is included in the software, wouldn't these companies see that as another way of reducing cost? Or is it just the IT lead using the "since my dad did this... I will always do this..." philosophy?
      I was just curious, as there were posts in the past from Vates about how their backup was more efficient, but I also thought you could still use Veeam on the environment. Thanks for any feedback. I have Veeam with VMware, but would be glad to remove it from my costs since XCP-ng has backup features; I was just not fully aware of the differences, if any.

    • olivierlambert (Vates 🪐 Co-Founder & CEO)

        Hi,

        The default/advised solution for VM backup is Xen Orchestra. Have you tried it? It's included in the XCP-ng support plans anyway, so it would be a shame to miss out on it since it's "bundled" 😄

      • wtdrisco @olivierlambert

          @olivierlambert No sir, not yet. I am still building out a test configuration here to replace VMware. I have been able to take an older Dell EqualLogic PS4100 and two older Dell servers whose CPUs VMware will no longer support (why they do that, I do not know). I'm getting this together and set up, reading a lot and screening posts and YT videos to get a good setup. Then I will test the backup. My question was more rhetorical: there is a backup tool, yet everyone seems so concerned about Veeam and Commvault. I just thought that was interesting, along with the YT video indicating support for these now.

      • nikade (Top contributor)

            Both Veeam and Commvault have application-aware backups and also support backing up Active Directory domain controllers, which plays a big part in many enterprise setups.
            Other than that, the built-in XOA backups are pretty decent. We use an agent inside the guest OS to back up SQL Server or Active Directory when those run on our XCP platform.
            On our VMware platform we just use Veeam for everything, since it has application awareness.

      • olivierlambert (Vates 🪐 Co-Founder & CEO)

              We made the effort to gather some validated solutions here: https://docs.xcp-ng.org/project/ecosystem/#-vm-backup

      • planedrop (Top contributor)

                Been using Xen Orchestra for backups for many years now without many issues. They are very easy to set up and probably the best supported and easiest way to get backups going.

                You can also use Veeam inside the VMs with agents, as you would on any normal server. It works totally fine and you can restore pretty easily, but Xen Orchestra is still more straightforward and better integrated.

      • KPS (Top contributor) @planedrop

                  The discussion about backup solutions comes up quite frequently here.

                  Application-aware backups, in particular, are often mentioned. I cannot speak for other environments, but I feel the discussion is often more "heated" than necessary:

                  For the few applications that REALLY need application-aware backups, an agent is advantageous, but I also don't see a major limitation in running a second, smaller backup solution for that purpose, for example to secure Active Directory.

                  Even if XOA should be able to do this in the future, it will take a LONG time for the feature set to be sufficient. In my view, XOA has significant weaknesses in the "visibility" of what's happening (backup merges, etc.), and application awareness would certainly eat up a lot of development time that I personally would prefer to see allocated elsewhere.

                  Using e.g. Veeam with the free 10-host license should be sufficient for many XCP-ng users.

                  I want to do 95% of backups with XOA!

      • olivierlambert (Vates 🪐 Co-Founder & CEO)

                    That's the current priority: improving XOA backup (tasks and so on) rather than doing agent work. Maybe we'll plan agent work in the future, but it's clearly not possible now with our team size.

      • rfx77

                      The problem we have with XOA backup is that it does not support deduplication like Kopia, Borg, or Restic, so we have to do our VM backups with a script.

                      The problem we faced with a simple solution (dumping XVA or OVA files into kopia/restic/borg) is that the tar file format and the metadata (the Ref... directory names) differ on every export, even when you export the same snapshot twice, so dedup does not work (well). We had to do a separate export for metadata and VDIs, but because of this a restore is not unattended.

                      I looked a little into the implementation of the XOA OVA export code, and in my opinion it should be possible for Vates to alter the format so that it could be deduplicated well. You could then easily stream it via stdin to kopia; kopia can also dump to stdout, so restore should also be possible.
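                      The metadata effect described above can be reproduced without `xe` at all: wrapping identical payloads in tar under different member names (standing in for the random Ref... directories) yields archives that share nothing, so naive dedup between two exports finds no common data. A toy sketch (the `Ref:101`/`Ref:202` names are made up for illustration):

```shell
#!/bin/sh
# Same 64 KiB payload, archived under two different directory names,
# as happens with the random Ref... names in consecutive XVA exports.
set -e
work=$(mktemp -d)
mkdir -p "$work/Ref:101" "$work/Ref:202"
head -c 65536 /dev/zero > "$work/Ref:101/chunk"    # identical payload...
cp "$work/Ref:101/chunk" "$work/Ref:202/chunk"     # ...under another name
tar -cf "$work/a.tar" -C "$work" "Ref:101"
tar -cf "$work/b.tar" -C "$work" "Ref:202"
# The archives differ byte-for-byte despite holding identical data:
cmp -s "$work/a.tar" "$work/b.tar" && echo same || echo different
rm -rf "$work"
```

Fixed names (e.g. Disk_1, Disk_2) would make the two archives byte-identical and hence trivially deduplicable.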

      • olivierlambert (Vates 🪐 Co-Founder & CEO)

                        Is deduplication such a big deal? I mean, its cost is not negligible, and (at least with SMAPIv1) the gain isn't great at all.

      • rfx77 @olivierlambert

                          @olivierlambert

                          By dedup I mean dedup of the backup repo, and it's an extremely big deal for backup repos. Backup vendors put a lot of effort into this: Commvault and NetBackup have it natively, and Veeam recommends deduplicated storage (Data Domain, ...).

                          In our case we use an S3 backend, and if you pay for a month's worth of full backups, 2-3 times a full backup's size (or 30 times a compressed full backup's size) is really a big deal. And we sometimes keep our backups for months.

                          We are doing bi-weekly fulls with our home-built kopia strategy of a 1.3 TB SR (no empty disks) and are using approx. 1.37 TB of S3.

                          (graph: S3 repository size over time)

                          It grew from 0.64 TB up to 1.37 TB. Green is the size used and black are the blocks that could be deleted.
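                          As a back-of-envelope comparison (the sizes are from this post; the retention length is my assumption for illustration), keeping a few months of bi-weekly 1.3 TB fulls without dedup would cost several times what the deduplicated repo uses:

```shell
#!/bin/sh
# Six bi-weekly fulls (~3 months retention, an assumed figure) of a
# 1.3 TB SR, stored raw, vs. the ~1.37 TB deduplicated kopia repo.
awk 'BEGIN {
  full  = 1.3   # TB per full backup
  fulls = 6     # bi-weekly fulls kept over ~3 months
  printf "raw fulls: %.1f TB vs deduped repo: ~1.37 TB\n", full * fulls
}'
# prints: raw fulls: 7.8 TB vs deduped repo: ~1.37 TB
```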

                          If anyone is wondering why we use kopia instead of restic: restic is super slow on dumps to stdout and on mount, and we use mount in combination with libguestfs to do file-level recovery.

                          This is the script. It's not super professional and the error handling isn't the best, but it may be useful:

                          #!/bin/bash
                          # Back up one VM: snapshot it, stream each VDI as raw into kopia,
                          # then store the metadata tarball and remove the snapshot.

                          VMNAME=$1
                          SERVER=${2:-172.25.10.2}

                          BACKUPDESC="Backup $(date +'%Y-%m-%d %H:%M:%S') of VM $VMNAME"

                          echo "$(date +'%Y-%m-%d %H:%M:%S') Starting backup of VM $VMNAME"
                          echo "SERVER: $SERVER"

                          # Point xe at the pool master; root password is read from a file.
                          PASS=$(cat /root/.pass)
                          export XE_EXTRA_ARGS="server=$SERVER,port=443,username=root,password=$PASS"

                          VMID=$(xe vm-list name-label="$VMNAME" | grep uuid | sed "s/.*: //g")
                          if [ -z "$VMID" ]; then
                            echo "VMID not found"
                            exit 1
                          fi
                          echo "VMID: $VMID"

                          SNAPID=$(xe vm-snapshot uuid="$VMID" new-name-label="backup_$VMNAME")
                          if [ -z "$SNAPID" ]; then
                            echo "SNAPID not found"
                            exit 1
                          fi
                          echo "SNAPID: $SNAPID"

                          declare -a DISKID
                          declare -a DISKNAME

                          TEMPD=$(mktemp -d)
                          mkdir "$TEMPD/data"
                          echo "$(date +'%Y-%m-%d %H:%M:%S') Temp directory: $TEMPD"

                          # Export VM metadata and parameter dumps alongside the disk data.
                          xe vm-export metadata=true uuid="$VMID" filename="$TEMPD/data/metadata.ova.tar"
                          xe snapshot-disk-list uuid="$SNAPID" vdi-params=all vbd-params=all > "$TEMPD/data/vdi.txt"
                          xe vm-list name-label="$VMNAME" params=all > "$TEMPD/data/vm.txt"
                          ls -l "$TEMPD/data"

                          DISKID=( $(xe snapshot-disk-list uuid="$SNAPID" vdi-params=uuid,name-label vbd-params=false | grep uuid | awk '{print $5}') )
                          DISKNAME=( $(xe snapshot-disk-list uuid="$SNAPID" vdi-params=uuid,name-label vbd-params=false | grep name-label | awk '{print $4}') )
                          echo "DISKID: ${DISKID[@]}"
                          echo "DISKNAME: ${DISKNAME[@]}"

                          # Tag each snapshot VDI with the backup description.
                          for i in "${!DISKID[@]}"; do
                            DN=${DISKNAME[$i]}
                            DID=${DISKID[$i]}
                            echo "$(date +'%Y-%m-%d %H:%M:%S') tagging disk $i: ${DID} ${DN}"
                            xe vdi-param-set name-description="$BACKUPDESC" uuid="$DID"
                          done

                          # Stream each VDI as a raw image straight into the kopia repo
                          # (empty filename= sends the export to stdout), then destroy
                          # the snapshot VDI.
                          for i in "${!DISKID[@]}"; do
                            DN=${DISKNAME[$i]}
                            DID=${DISKID[$i]}
                            echo "$(date +'%Y-%m-%d %H:%M:%S') backing up disk $i: ${DID} ${DN}"
                            xe vdi-export uuid="$DID" filename= format=raw | kopia snapshot create --tags=vm:$VMNAME,data=$DN --no-progress --log-level=warning --override-source=/vms/$VMNAME/$DN --stdin-file=${DID}.vhd - || echo "ERROR"
                            sleep 5
                            xe vdi-destroy uuid="$DID"
                          done

                          # Store the collected metadata as its own kopia snapshot.
                          tar cvpf "$TEMPD/metadata.tar" -C "$TEMPD/data" .
                          kopia snapshot create --tags=vm:$VMNAME,data=meta --no-progress --log-level=info --override-source=/vms/$VMNAME "$TEMPD/metadata.tar" || echo "ERROR"
                          rm -rf "$TEMPD"
                          echo "$(date +'%Y-%m-%d %H:%M:%S') waiting 30s"
                          sleep 30

                          echo "$(date +'%Y-%m-%d %H:%M:%S') deleting snapshot $SNAPID of VM $VMNAME"
                          xe snapshot-uninstall uuid="$SNAPID" force=true || echo ERROR
                          sleep 20
                          # Retry once, silently, in case the first uninstall raced the VDI cleanup.
                          xe snapshot-uninstall uuid="$SNAPID" force=true > /dev/null 2>&1 || true

                          echo "$(date +'%Y-%m-%d %H:%M:%S') Done backup of VM $VMNAME"
      • olivierlambert (Vates 🪐 Co-Founder & CEO)

                            We tried dedup for the backup repo with ZFS, to see what ratio we'd keep. Due to VHD fragmentation, the gain was between 10 and 30% tops, far from being useful. That's why I think it would make sense with SMAPIv3 and not v1 (because of the VHD format and its fragmentation).

                            edit: also, a simple script like this won't be able to deal with chain protection and many other important features.

      • rfx77 @olivierlambert

                              @olivierlambert The problem is what you dedup.

                              We dedup raw VDI exports, and they are nearly perfect dedup candidates.
                              I attached our script in the previous post.

                              With XVA I also only got a 25% dedup ratio, because of the tar metadata problem. We decided that this is useless.

      • olivierlambert (Vates 🪐 Co-Founder & CEO)

                                We tested deduping the exported VHDs. Your dedup ratio will diminish inexorably over time, as your VHDs become more and more fragmented. With raw, it should be similar at some point, since blocks aren't freed after use (no trim support in SMAPIv1).
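                                A toy illustration of why fragmentation hurts (my own sketch, not Vates' benchmark): block-level dedup like ZFS's compares fixed-size blocks at fixed offsets, so once content shifts inside an exported image, previously identical blocks stop matching even though the data is still there:

```shell
#!/bin/sh
# Two tiny "images": the second holds the same data shifted by one byte,
# roughly what fragmentation does to a VHD export over time.
printf 'AAAABBBBCCCCDDDD' > img1
printf 'XAAAABBBBCCCCDDD' > img2
# Compare 4-byte "blocks" at the same offsets, as fixed-block dedup would:
for off in 0 4 8 12; do
  a=$(dd if=img1 bs=1 skip=$off count=4 2>/dev/null)
  b=$(dd if=img2 bs=1 skip=$off count=4 2>/dev/null)
  [ "$a" = "$b" ] && echo "block @$off: shared" || echo "block @$off: unique"
done
rm -f img1 img2
```

Every block comes out unique, so the dedup ratio collapses even though almost all the bytes are shared.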

      • rfx77

                                  To clarify: in an XVA the folder names are of the form Ref..., and the number in the folder name is random. This kills the dedup. If we could force the folder names to be like Disk_[1,2,3,4], XVAs would be very deduplicable.

      • rfx77 @olivierlambert

                                    @olivierlambert
                                    We have been using dedup backup stores for years now, and yes, they grow if you keep very long time ranges, but they never grow as much as compressed fulls would. And if you only keep shorter ranges (1-2 months), the dedup ratio will be nearly constant after some months.

                                    I just checked the Commvault repos of our customers: they are all above 70% dedup saving, and about 50% of the repos are above 90% dedup saving.

                                    Depending on the size of their backup NAS, they all have retention settings between 60 days and one year.

                                    Our own XenBackup S3 repo for kopia will stop growing when we delete older snapshots; for now we just keep backing up, and that is why it grows.

                                    (table: per-customer repository statistics)

                                    I removed the first column because it would show confidential customer information.

      • nikade (Top contributor)

                                      Dedup is a huge deal: in our SAN we're seeing a 6.7:1 ratio, which means we're saving a ton of money. With that said, Veeam for example also has dedup, but it totally sucks.
                                      I think it might be hard to get a good ratio with backup data.
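                                      For reference, converting a ratio like that into the "saving" percentage quoted earlier in the thread is just 1 − 1/ratio:

```shell
#!/bin/sh
# A 6.7:1 dedup ratio means roughly 85% of the logical data is never
# stored physically: saving = 1 - 1/ratio.
awk 'BEGIN { printf "%.0f%% saved\n", (1 - 1/6.7) * 100 }'
# prints: 85% saved
```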

      • olivierlambert (Vates 🪐 Co-Founder & CEO) @rfx77

                                        @rfx77 I don't know how you can keep a shorter range if you do forever-incremental 🤔 The fragmentation will only grow.

      • KPS (Top contributor) @olivierlambert

                                          Is there any possibility to disable compression?
                                          I think it could be a good option to give ZFS full control over the remote:

                                          • ZFS can do compression
                                          • Without "external compression", deduplication could be much more efficient
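                                          On a ZFS-backed remote, that idea would look something like this (a sketch; the dataset name `tank/xo-remote` is hypothetical, and ZFS dedup is famously RAM-hungry, so test before enabling it):

```shell
# Let ZFS handle both compression and dedup on the backup remote
# (hypothetical dataset name; dedup needs lots of RAM for its tables).
zfs set compression=lz4 tank/xo-remote
zfs set dedup=on tank/xo-remote
# Check the resulting ratios afterwards:
zfs get compressratio tank/xo-remote
zpool list -o name,dedupratio
```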
      • olivierlambert (Vates 🪐 Co-Founder & CEO)

                                            We benchmarked ZFS dedup without compression and the result was bad. I'm convinced everything will change with a proper data path that supports trim.
