two separate process for backup?
-
Hi, is there a way to separate the snapshot and the transfer?
My problem is the slow transfer: when a transfer takes days, I go days without a backup.
But if the two processes weren't linked, I could take one snapshot every six hours; the new transfer starts if the previous one finished OK, otherwise XOA waits and starts the new transfer as soon as the previous one completes, and so on.
(Excuse my English!) -
Not sure I get it, but that's a question for @florent when he has 5 minutes (we are pretty busy closing this month's release)
-
@olivierlambert This is related to the concurrency settings I was asking for previously. That way @robyt could set snapshot concurrency fairly high and transfer concurrency low.
-
@CJ said in two separate process for backup?:
@olivierlambert This is related to the concurrency settings I was asking for previously. That way @robyt could set snapshot concurrency fairly high and transfer concurrency low.
Hi, but that's all within the same backup session.
If I start another session while the previous one is still transferring, the new backup job is aborted.
Job starts
snapshot_1
transfer --> another backup job starts: XOA creates snapshot_2 but doesn't transfer anything yet
End of transfer of snapshot_1, then the transfer of snapshot_2 starts -
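The flow robyt describes is essentially a producer/consumer pipeline: snapshots queue up independently while a single slow transfer worker drains them in order. A minimal toy sketch (this is not Xen Orchestra's code, just an illustration of the requested behavior):

```python
import queue
import threading
import time

# Illustration only: snapshots are produced on schedule and queued;
# one slow transfer worker ("transfer concurrency = 1") drains the
# queue in order, so no snapshot is ever skipped or aborted.
snapshots = queue.Queue()
transferred = []

def take_snapshot(name):
    # Taking a snapshot is fast and never waits on the transfer.
    snapshots.put(name)

def transfer_worker():
    while True:
        snap = snapshots.get()
        if snap is None:   # sentinel: no more snapshots
            break
        time.sleep(0.01)   # simulate a slow transfer
        transferred.append(snap)

worker = threading.Thread(target=transfer_worker)
worker.start()

# Snapshots pile up in the queue instead of aborting because the
# previous transfer is still running.
for i in range(1, 4):
    take_snapshot(f"snapshot_{i}")

snapshots.put(None)
worker.join()
print(transferred)  # all snapshots transferred, in order
```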
@robyt Yes, that's exactly what I had talked about. With separate concurrency settings all of your snapshots would have finished before your first transfer did.
Also, you should look into the new backup SR. I assume that your slow transfer is due to internet speeds, so you could do backups locally and then use the backup SR to do the transfer offsite.
-
Backups aren't very fast anyway, because of XAPI limitations. I see a cap of about 300-400 Mbit on a 10 Gbit local network.
-
@CJ said in two separate process for backup?:
@robyt Yes, that's exactly what I had talked about. With separate concurrency settings all of your snapshots would have finished before your first transfer did.
Also, you should look into the new backup SR. I assume that your slow transfer is due to internet speeds, so you could do backups locally and then use the backup SR to do the transfer offsite.
Hi, I've got a little NAS (QNAP 419 with a TR-004 expansion unit) connected over gigabit Ethernet (bonded, but not in parallel; only one link at a time).
Now I'm trying an update/reboot of the NAS and updating Xen Orchestra ;-( -
I'm contending with a similar issue as well, where my backups complete to my backup device in very short order, but syncing to cloud literally takes days.
It may be the way that the sync to cloud was setup, which is to Backup to a NAS, then sync from the NAS to cloud storage.
I'm checking to see if I can find any reporting from my firewall that would pinpoint IOPS/bandwidth as the issue before I go and make any major changes, like how my backups are sync'd to cloud storage.
While it's possible to sync directly from XO, I really want to better understand how that sync operation occurs: does the backup sync through XO to the cloud, or from my storage to the cloud (and just remove the sync connection my NAS has)?
Concurrency in backup settings (specifically around Remote/Cloud Storage) would be huge I think, though I can't say it would fix my issue - yet anyways.
-
@DustinB
With 10/12 NBD connections I get a lot of HTTP timeouts:
Transfer: Start: Feb 3, 2024, 06:03:22 AM, End: Feb 3, 2024, 05:04:54 PM, Duration: 11 hours, Error: HTTP connection has timed out
and the next delta becomes a full...
The speed is very, very low for some transfers:
Transfer: Start: Feb 6, 2024, 11:39:45 AM, End: Feb 6, 2024, 12:12:32 PM, Duration: 33 minutes, Size: 686.66 MiB, Speed: 357.42 KiB/s
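As a sanity check, the reported speed is consistent with the timestamps, so the number is real and not a display bug. The arithmetic (my own calculation, not XO output):

```python
# 686.66 MiB moved between 11:39:45 and 12:12:32 on the same day.
size_kib = 686.66 * 1024                                       # MiB -> KiB
duration_s = (12 * 3600 + 12 * 60 + 32) - (11 * 3600 + 39 * 60 + 45)
speed = size_kib / duration_s
print(f"{duration_s} s, {speed:.2f} KiB/s")  # ~1967 s, ~357 KiB/s
```

That matches the reported 357.42 KiB/s almost exactly, i.e. the bottleneck is the actual data path, not the reporting.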
-
@DustinB What are you using for cloud sync?
-
@robyt When you say 10/12 NBD, are you referring to the setting of NBD connections per disk? Or connexion as the UI says. How are you checking that NBD is being used?
-
@CJ said in two separate process for backup?:
@DustinB What are you using for cloud sync?
It's not a cloud sync; it's a copy to a QNAP connected via gigabit Ethernet.
-
@CJ said in two separate process for backup?:
@robyt When you say 10/12 NBD, are you referring to the setting of NBD connections per disk? Or connexion as the UI says. How are you checking that NBD is being used?
Connexion in UI
-
@robyt said in two separate process for backup?:
@CJ said in two separate process for backup?:
@DustinB What are you using for cloud sync?
It's not a cloud sync; it's a copy to a QNAP connected via gigabit Ethernet.
You'll note that that was a reply to @DustinB who is using cloud sync, not you.
@robyt said in two separate process for backup?:
@CJ said in two separate process for backup?:
@robyt When you say 10/12 NBD, are you referring to the setting of NBD connections per disk? Or connexion as the UI says. How are you checking that NBD is being used?
Connexion in UI
What happens if you reduce the number? From my understanding your Qnap isn't all that powerful, so it sounds like you're overwhelming the poor thing. I'm using 1 per disk and seeing over 1Gbps speeds. My main limitation is the 2.5G connection to XO.
However, I will say that I'm not sure if I'm actually using NBD or not. I don't see anything in the UI to indicate whether it's being taken advantage of. I also have my backup concurrency set to 1 to avoid running multiple transfers through XO as the switching can eat into the overall bandwidth.
-
@robyt oops....
-
@CJ said in two separate process for backup?:
@robyt said in two separate process for backup?:
@CJ said in two separate process for backup?:
@DustinB What are you using for cloud sync?
It's not a cloud sync; it's a copy to a QNAP connected via gigabit Ethernet.
You'll note that that was a reply to @DustinB who is using cloud sync, not you.
@robyt said in two separate process for backup?:
@CJ said in two separate process for backup?:
@robyt When you say 10/12 NBD, are you referring to the setting of NBD connections per disk? Or connexion as the UI says. How are you checking that NBD is being used?
Connexion in UI
What happens if you reduce the number? From my understanding your Qnap isn't all that powerful, so it sounds like you're overwhelming the poor thing. I'm using 1 per disk and seeing over 1Gbps speeds. My main limitation is the 2.5G connection to XO.
However, I will say that I'm not sure if I'm actually using NBD or not. I don't see anything in the UI to indicate whether it's being taken advantage of. I also have my backup concurrency set to 1 to avoid running multiple transfers through XO as the switching can eat into the overall bandwidth.
When I increased the connexion count I saw a lot of HTTP timeouts.
In the UI, if I open the details of a backup job, I see "Transfer data using NBD".
I know the QNAP isn't a wonderful or very fast NAS, but I don't want to buy other (and more expensive) hardware without more testing.
Some VMs are very, very slow (the transfer of the mail server's snapshot is... slooooooowwww), but the configuration is the same (network, disks partly on SSD and partly on mechanical HDs, etc.) -
@robyt Can you post a screenshot of where you see NBD? I don't see it anywhere in mine but I'm not sure if I'm looking at the right place.
I think you need to get things working first and then start trying to push the envelope. I would set it to 1 and see what your speeds look like. Then you can slowly increase the number of connections until you stop seeing speed increases and/or you start seeing HTTP connection timeouts.
I'll admit that I haven't been paying attention to the whole thread, but I'm not sure what more testing you feel you need in order to determine that the problem is your NAS. What is the goal you're attempting to achieve that you're concerned that different hardware wouldn't solve?
I'm not sure what your configuration is (if you've posted it already, my apologies), but posting your XCP-ng host(s) details, XO specs, QNAP specs, and what other duties, if any, the QNAP is performing, that would be helpful. They make some powerful models, but most of them don't have that much oomph and can really only do one, maybe two things at once. And this goes doubly so if you're using very large VM disks instead of network storage.
In my case I'm backing up to a fairly beefy TrueNAS running an 11z3 setup on a 10G network. My VMs are all configured with smallish disks and pull the majority of their data from the same NAS. My biggest bottleneck is that my XCP-ng hosts only have 2.5G NICs, which is why I'm considering moving XO to something with 10G in order to support the full bandwidth.