Switching to XCP-NG, want to hear your problems
-
@olivierlambert @john-c So I have feedback from Commvault. There is an existing Request for Enhancement, but no timeline and no guarantee it will ever be implemented. The CMR is 346970, for reference.
To speed things up, many more customers would need to reference this CMR. That's a chicken-and-egg problem, and I don't think it's the way to go.
But maybe Vates could communicate with Commvault directly and reference the CMR. It would clearly help Commvault to have better numbers on how many customers use XCP-NG and what the potential is.
-
@olivierlambert Thanks for this info. I also cannot understand where all the love for Proxmox comes from, and that's coming from someone whose office is in their neighborhood.
-
Have you tried CBT with XO before?
-
@rfx77 Proxmox has existed since 2008 and has built a big home-labber community (which is great!). All those people are ambassadors and love using it for their labs. Having a big and vocal community doesn't mean it's deployed everywhere in the corporate world.
-
@olivierlambert I have an XO installation running that backs up a small part of our infrastructure, and it works well. It is a single host with local thin-provisioned storage.
We also tried it on our main pool with iSCSI storage and had mostly the same problems as others above.
We decided to wait a little longer before giving it another try on our main production system, mainly due to some hanging snapshots, ...
In the future we are planning to use it as the main VM-level backup solution for standalone hosts, in combination with Commvault agents inside the VMs.
Our current test status is that we are trying to run XO as a Docker container, which would fit better into our existing backup infrastructure, as we could easily integrate it into our Linux backup appliances (think of them like a Linux NAS). The Docker tests are running very well and we don't have any issues at the moment; even file-level recovery is working well in this scenario.
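For anyone curious, the setup is roughly the following. This is only a sketch of how we run it: it assumes the community-maintained ronivay/xen-orchestra image, and the port mapping and volume paths may differ depending on the image version, so check that image's documentation before relying on it.

```bash
# Rough sketch: Xen Orchestra (built from sources) running in Docker.
# Assumption: the community-maintained ronivay/xen-orchestra image;
# port mapping and volume paths may differ for your image version.
docker volume create xo-data
docker volume create xo-redis

docker run -d \
  --name xen-orchestra \
  --restart unless-stopped \
  -p 8080:80 \
  -v xo-data:/var/lib/xo-server \
  -v xo-redis:/var/lib/redis \
  ronivay/xen-orchestra

# The web UI should then be reachable at http://<docker-host>:8080
```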
-
So I advise testing CBT; it might solve your coalesce issues, since the snapshot will be removed a lot faster.
-
@olivierlambert I do agree; in our experience CBT did solve these issues. Snapshots are removed in seconds, whereas this took 10 to 15 minutes prior to using CBT. There is still some work to be done with CBT, but it is usable at this time.
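If you want to double-check what CBT is doing, the state can be inspected per VDI from a host's CLI. This is just a rough sketch; XO normally manages CBT itself when the backup job is configured for it, and the UUIDs below are placeholders.

```bash
# Rough sketch: inspecting CBT state from a host's CLI (UUIDs are placeholders).
# XO enables/disables CBT itself when the backup job uses it; these commands
# are only useful for verification.

# List the VDIs on an SR and whether CBT is currently enabled on them
xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,cbt-enabled

# Enable or disable CBT on a single VDI (normally left to XO)
xe vdi-enable-cbt uuid=<vdi-uuid>
xe vdi-disable-cbt uuid=<vdi-uuid>
```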
-
@rfx77 How has your experience been with Commvault and XCP-NG for backups? We are bumping into similar issues migrating away from VMware/Veeam. XO backups work, but the snapshots that remain on larger VMs can grow quite large between nightly backup runs.
Trying CBT has been a mixed bag: snapshot coalesces take anywhere from seconds up to 2-3 hours on larger VMs (with looping timeouts in SMlog), backups fail if a VM has moved between hosts in the same pool, and in the worst cases our NFS-based storage (Pure Storage) expires all NFS leases to the pool master that is trying to coalesce, causing NFS timeouts and storage outages. We have Pure looking into why this is happening for us, but until that's figured out we are back to CBT-less backups that leave a snapshot behind.
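For reference, the looping timeouts I mentioned show up in /var/log/SMlog on the host running the garbage collector. A rough way to watch them (the grep pattern is only a guess and will vary between SM versions):

```bash
# Rough sketch: watch the storage manager log for coalesce activity on a host.
# The grep pattern is a guess and varies between SM versions.
tail -f /var/log/SMlog | grep -Ei 'coalesce|exception|timed out'
```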
From what I have read, Commvault uses its proxy VM to track changed blocks without leaving the snapshot in place? Sounds like it may be worth checking out.
-
@flakpyro Are you using SSD storage? If so, I would recommend increasing the leaf coalesce parameters. This will prevent these loops from occurring. We had issues with this prior to changing it; now we do not see it anymore.
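For anyone wondering where those parameters live: as far as I know they are constants in the SM cleanup script on each host. This is only a pointer, not a recommendation of specific values; the defaults and the supported way to override them can differ between XCP-NG versions, so check with Vates/support before changing anything in production.

```bash
# Rough sketch: locating the leaf-coalesce limits on an XCP-NG host.
# The constant names are what I have seen in the SM cleanup script; defaults
# and the supported way to override them may differ between versions.
grep -n -E 'LIVE_LEAF_COALESCE_(MAX_SIZE|TIMEOUT)' /opt/xensource/sm/cleanup.py
```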
-
@rtjdamen The Pure Storage array is all-flash. I did see your comments about increasing those values; I did that, and while it helped, it can still be hit or miss for us. For example, our SharePoint server took 2-3 hours the other night to coalesce when using full CBT with snapshot delete enabled. Running it without snapshot delete, the traditional snapshot coalesce only takes maybe 10-15 minutes.
With that said, we are also having issues with Pure's NFS implementation and how it interacts with XCP-NG, causing storage timeouts for us. According to them, the array, when under load, is disconnecting hosts due to "expired NFS leases". We are currently working with them to stabilize that; perhaps then I can revisit full CBT backups. Weirdly enough, the disconnects do not appear to happen under regular VM operations even under high load; we mostly run into them during backup runs, with CBT-enabled runs increasing the chance of it happening.
What we ideally want to end up with is local backups, offsite backups, and archival backups, which is what we had with Veeam before. Doing this with traditional XOA backups would result in 3 snapshots per VM (1 for each job), which led me to look into Commvault, since it does things a bit differently. CBT would also solve this, I assume.
-
@flakpyro Ah, I understand. I have seen issues with NFS as well; doing it with iSCSI gave us much better results.
Also, I would recommend you look into Alike A3 backup. We use it with high-I/O VMs with 2 TB disk size, no issues there, and you need only one snapshot per VM.
-
@flakpyro What NFS version are you using?
We're backing up 50-60 VMs every night (without CBT) and the coalesce is pretty quick. I've tried NFS 4, but we had some timeout issues, so we went back to NFS 3, which seems stable. Have you tried experimenting with the mount options?
Just curious, since we didn't have any luck with NFS 4 on either TrueNAS or our Dell PowerStore; the latter is NVMe all-flash and the TrueNAS boxes are 10K SAS with SSD as cache.
-
@nikade Interesting to hear we are not the only ones experiencing timeouts with v4. We are on NFS 4.1 with the Pure array on the latest firmware. I initially wanted to use v4 because it is a stateful protocol and I thought it might handle controller failovers better because of that. Perhaps I should try v3 and see if it fares better. We had okay luck with v4 on TrueNAS, but never really ran it under any extreme load. It will run for 2 or 3 days without issue, then suddenly NFS drops appear in dmesg on the hosts.
For example:
dmesg -T from the host shows the following, with 10.174.199.25 being the array:
[Thu Aug 15 01:28:30 2024] nfs: server 10.174.199.25 not responding, still trying
[Thu Aug 15 01:28:30 2024] nfs: server 10.174.199.25 not responding, still trying
[Thu Aug 15 01:28:30 2024] nfs: server 10.174.199.25 not responding, still trying
[Thu Aug 15 01:28:30 2024] nfs: server 10.174.199.25 not responding, still trying
Followed by recovery after some time:
[Thu Aug 15 01:29:49 2024] nfs: server 10.174.199.25 OK
[Thu Aug 15 01:29:49 2024] nfs: server 10.174.199.25 OK
[Thu Aug 15 01:29:49 2024] nfs: server 10.174.199.25 OK
[Thu Aug 15 01:29:49 2024] nfs: server 10.174.199.25 OK
Is that what you were seeing on your PowerStore with v4 as well?
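(For completeness, this is roughly how I confirm which NFS version a host actually negotiated for the SR mount; the paths are just what I see on our XCP-NG installs, so treat it as a sketch.)

```bash
# Rough sketch: confirm the negotiated NFS version for the SR mounts.
# NFS SRs are mounted under /run/sr-mount/<sr-uuid> on our hosts; the
# vers= option in /proc/mounts shows the protocol version in use.
grep ' nfs' /proc/mounts

# Or, if nfs-utils' nfsstat is available in dom0:
nfsstat -m
```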
-
@flakpyro Thanks for replying!
Yeah, we've not had much luck with NFS 4... but we've been using NFS 3 for years (since 2017, I think) and it has been rock solid; failover between controllers doesn't even cause a single timeout in dmesg on the hosts. I've tried to tweak the mount options, but I didn't have much luck; that's the reason why I asked if you had played around with it.
-
@flakpyro said in Switching to XCP-NG, want to hear your problems:
@nikade Interesting to hear we are not the only ones experiencing timeouts with v4. We are on NFS 4.1 with the Pure array on the latest firmware. I initially wanted to use v4 because it is a stateful protocol and I thought it might handle controller failovers better because of that. Perhaps I should try v3 and see if it fares better. We had okay luck with v4 on TrueNAS, but never really ran it under any extreme load. It will run for 2 or 3 days without issue, then suddenly NFS drops appear in dmesg on the hosts.
For example:
dmesg -T from the host shows the following, with 10.174.199.25 being the array:
[Thu Aug 15 01:28:30 2024] nfs: server 10.174.199.25 not responding, still trying
[Thu Aug 15 01:28:30 2024] nfs: server 10.174.199.25 not responding, still trying
[Thu Aug 15 01:28:30 2024] nfs: server 10.174.199.25 not responding, still trying
[Thu Aug 15 01:28:30 2024] nfs: server 10.174.199.25 not responding, still trying
Followed by recovery after some time:
[Thu Aug 15 01:29:49 2024] nfs: server 10.174.199.25 OK
[Thu Aug 15 01:29:49 2024] nfs: server 10.174.199.25 OK
[Thu Aug 15 01:29:49 2024] nfs: server 10.174.199.25 OK
[Thu Aug 15 01:29:49 2024] nfs: server 10.174.199.25 OK
Is that what you were seeing on your PowerStore with v4 as well?
We actually had some timeouts like these on our "slower" TrueNAS; we got help from @yannik, I think, to tweak the mount options when our backups failed.
Ever since they tweaked the mount options it has been rock solid and we haven't seen any timeouts in dmesg.
-
@nikade I am not using any custom mount options other than "hard", to do a hard NFS mount and prevent data loss when drops like these happen.
Did you have to use custom mount options with v3 as well, or just with v4? I may try moving VMs over to a v3 mount from v4 to see if that helps stabilize things.
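If I do try that, my rough plan for the new SR would be something like the sketch below. The export path and the option string are placeholders, and I'd double-check the exact device-config keys against the XCP-NG docs for our release first.

```bash
# Rough sketch: creating an NFS SR pinned to v3 with extra mount options.
# The export path and the option string are placeholders; confirm the
# device-config keys against the XCP-NG docs before using this in production.
xe sr-create \
  name-label="pure-nfs-v3" \
  type=nfs content-type=user shared=true \
  device-config:server=10.174.199.25 \
  device-config:serverpath=/xcp-sr \
  device-config:nfsversion=3 \
  device-config:options="hard,timeo=600,retrans=5"
```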
-
@flakpyro
To be honest, we gave up on Xen for our systems where we have shared storage and switched to Hyper-V. The lack of CBT support in Commvault was a major problem. We cannot use XO for backups because it lacks 90% of the features we now have with Commvault (dedup, tape, agents, IntelliSnap of VM disks, aux copies, ...), and we don't want to combine it with Commvault agents, since that complicates our backups and does not make real sense.
Hyper-V with clustering is free in our setup, so it was not a big discussion there. We can also utilize the full potential (performance- and feature-wise) of our SAN and network, which was always a major issue with Xen.
We are keeping XCP-NG for now on some specialized installs where we need to map physical hardware into a VM.
-
@rfx77 Makes sense. I was thinking of giving the Commvault trial a try with XCP-NG, since it looked like, while they don't use Xen's CBT, they still track changed blocks within their helper VM while still doing dedupe.
Backups have been the biggest setback in our move from VMware; I knew going in that I would miss Veeam more than I'd miss VMware itself.
As for NFS 4.1 vs 3, if the timeouts return this week I think I will give v3 a try, since it has worked more reliably for you.
-
@flakpyro said in Switching to XCP-NG, want to hear your problems:
@nikade I am not using any custom mount options other than "hard", to do a hard NFS mount and prevent data loss when drops like these happen.
Did you have to use custom mount options with v3 as well, or just with v4? I may try moving VMs over to a v3 mount from v4 to see if that helps stabilize things.
In XOA we got some mount options from @yannik, but on the XCP-NG side we have not had to use any special options when mounting the NFS SR (as long as we're using NFS 3).
-
@flakpyro said in Switching to XCP-NG, want to hear your problems:
@rfx77 Makes sense. I was thinking of giving the Commvault trial a try with XCP-NG, since it looked like, while they don't use Xen's CBT, they still track changed blocks within their helper VM while still doing dedupe.
Backups have been the biggest setback in our move from VMware; I knew going in that I would miss Veeam more than I'd miss VMware itself.
As for NFS 4.1 vs 3, if the timeouts return this week I think I will give v3 a try, since it has worked more reliably for you.
Yeah, Veeam really is the king of backups. We're backing up about 50 VMs with Veeam from our VMware clusters and man, it is so fast and reliable; I've seen backups go at 7 Gbit/s, which is incredible.