Thank you @florent. I'll do that.
Update: It worked.
@Forza Sorry, you were correct; I just mixed another new issue into this. NFS is currently used only for backups, and all my SRs are on local storage. It just happens that my backups are now failing not only because of the NFS issue but also because of the VDI issue. I think the VDI problem is a side effect of the NFS problem: the interrupted backup left the VDI stuck attached to dom0. I should have made that clearer, or never mentioned the VDI issue at all.
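For my own notes, here's roughly how I expect to free the stuck VDI once the failed backup isn't holding it, assuming it's just a leftover VBD plugged into the control domain (the host name and UUIDs below are placeholders, and the commands are standard xe, not anything XO-specific):

```
# List VBDs attached to dom0 and find the one pointing at the stuck VDI
xe vbd-list vm-name-label="Control domain on host: <hostname>" params=uuid,vdi-uuid,currently-attached

# Unplug and remove the stale VBD (UUID from the previous command)
xe vbd-unplug uuid=<vbd-uuid>
xe vbd-destroy uuid=<vbd-uuid>
```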
@Forza Seems you are correct about showmount. On UNRAID, running v4, showmount says there are no clients connected; I previously assumed that meant XO only connected during the backup. When I look at /proc/fs/nfsd/clients I do see the connections.
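For anyone else checking the same thing, this is roughly what I'm comparing, assuming a Linux NFS server that exposes nfsd's proc interface (the paths are from my UNRAID box and may differ elsewhere):

```
# showmount relies on the old MOUNT protocol, which NFSv4 clients never use,
# so a v4-only client like XO simply doesn't appear here
showmount -a

# NFSv4 client state is tracked by nfsd itself
ls /proc/fs/nfsd/clients/            # one directory per connected client
cat /proc/fs/nfsd/clients/*/info     # client address, name, NFS minor version
```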
On Synology, running v3, showmount does show the XO IP connected. Synology is supposed to support v4 and I have the share set to allow v4, but XO had trouble connecting that way. Synology is pretty limited in what options it lets me set for NFS shares.
Synology doesn't have rpcdebug available. I'll see if I can figure out how to get more logging info about NFS.
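On a stock Linux NFS server I'd normally turn up kernel-side logging with rpcdebug, roughly like this; the binary isn't on the Synology, so this is only for reference, and the module names are the standard ones rather than anything Synology-specific:

```
# Enable verbose nfsd and RPC debugging in the kernel log
rpcdebug -m nfsd -s all
rpcdebug -m rpc -s all
dmesg -w                   # watch the messages as a backup runs

# Turn it back off when done
rpcdebug -m nfsd -c all
rpcdebug -m rpc -c all
```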
@olivierlambert Well, last night the backup completed just fine despite me taking no action.
I updated XO to the latest commit when I got in this morning, so hopefully the issue I had back in June doesn't come back.
@julien-f Thank you, that's super helpful and even easier than I thought it would be.
@florent Yeah, it's a lot of data; thankfully my other VMs are not nearly as large. I'm still not sure why it failed when none of the virtual drives is 2TB. The largest ones are configured with a 1.82TB maximum, so even the full capacity of the drive is below 2TB.
I'm moving ahead with a file level sync attempt to see if that works.
To be clear, this post is as much about helping you figure out what's wrong, so other people don't hit the same issue, as it is about making this import work for my VM. With the flood of VMware refugees you are getting, I figure I'm not the only person with large drives to import. In other words, if there's something I can do to help you figure out why it fails, I'm willing to help.
@gskger Yeah, looks like it would be too tight. Ouch, those T4s are an order of magnitude more expensive. I'm definitely not interested in going that route.
@DustinB Nothing useful yet. I rebooted the servers and explored a bit in the BIOS to see if there were any settings, or at least to tweak some things to see if it would reset whatever went wrong in the mid-December reboot. While doing that I found that one of the two impacted servers was a version behind on the BIOS as well as the iDRAC, so I updated both. Unfortunately, that made no change to the fan speeds.
I've been out sick all of this week, so far, but I'll be looking into this more when I get back to the office. I've read about ways to manually control the fans, but I'd rather not have to depend on a script running somewhere that makes those kinds of decisions; I'd much rather have iDRAC, or whatever normally controls it, handle it like it used to.
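For reference, the manual workaround I keep seeing described (and would rather not depend on) looks roughly like this; it assumes a PowerEdge generation whose iDRAC still honors the raw IPMI fan commands, and the exact bytes vary by model, so treat it as an unverified sketch:

```
# Take fan control away from iDRAC
ipmitool -I lanplus -H <idrac-ip> -U <user> -P <password> raw 0x30 0x30 0x01 0x00

# Force all fans to roughly 20% duty cycle (0x14 = 20 decimal)
ipmitool -I lanplus -H <idrac-ip> -U <user> -P <password> raw 0x30 0x30 0x02 0xff 0x14

# Hand automatic control back to iDRAC
ipmitool -I lanplus -H <idrac-ip> -U <user> -P <password> raw 0x30 0x30 0x01 0x01
```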
@olivierlambert It was DR. I was testing DR a while ago and after running it once I disabled the backup job so these backups have just been sitting on the server. I don't think I've rebooted that server since running that backup.
@JamfoFL Thank you for the follow-up. Now I feel like I can install those patches. Bonus that it appears there are 23 patches now rather than the 22 I saw last week. Now I'm glad I waited, though I think the newest patch doesn't apply to my hardware anyway.
Thank you @florent. I'll do that.
Update: It worked.
Those 5 mirror backups continued to fail over the weekend.
The good news is that the normal delta backups succeeded over the weekend and didn't have any of the random failures I've been seeing.
I have a couple Delta backups that target the same remote. Yesterday I set up a Mirror backup between that remote and another remote. When I ran the Mirror backup yesterday, it was successful.
Today, 7 of the VMs succeeded in the backup but 5 of them failed with:
Missing vdi <guid> which is a base for a delta
Could this be caused by the fact that the target for the Mirror backup used to be a target for the Delta backup, so it already has files on it? Do I need to purge the folder entirely? I thought maybe that would give it a head start, and the reason I'm using the Mirror backup is that the Delta backup used to fail randomly on random VMs when backing up to both remotes.
How would I go about diagnosing this? I assume that since the GUID for the VDI is missing, it's going to be hard to track down; I can't exactly find something that isn't there. Alternatively, maybe it's because the remote is currently a weird mix of old Delta backups and new Mirror backups, and over the next few days it will purge enough old data that it starts working cleanly.
Should I delete the entire remote target of the Mirror to get it started cleanly? Or, more likely, delete all backups on that Mirror for the 5 VMs that failed with this error.
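In case it helps with the diagnosis, this is the kind of check I'm planning to run against the mirror target; it assumes the remote uses XO's usual xo-vm-backups/<VM UUID>/ layout with JSON metadata next to a vdis/ tree of VHDs, and the mount point and UUID below are placeholders for my setup:

```
# Placeholders for my environment
REMOTE=/mnt/mirror-remote
MISSING_VDI=<uuid-from-the-error-message>

# Which backup metadata files still reference the missing VDI?
grep -rl "$MISSING_VDI" "$REMOTE"/xo-vm-backups/*/*.json

# Is there actually a VHD chain for it on disk?
find "$REMOTE"/xo-vm-backups -type d -name "*$MISSING_VDI*"
```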
@JamfoFL Any luck getting this to work? I've put off installing the patches because I don't want to lose my XO installs.
@olivierlambert I totally forgot about mirror backups, thanks for the reminder.
I see that a mirror will let me have different backup retention on the mirror target than I do on the mirror source. That's fantastic in my case because the mirror source has less storage space than the mirror target. Using rsync would have limited the backup retention on the Synology.
Now can you tell me how to recover the hours I spent figuring out that I couldn't get it working by pulling from the Synology side, plus the time spent learning how to schedule tasks in UNRAID, learning rsync, and confirming it would do what I want?
I have two remotes that I use for delta backups. One is on a 10GbE network and the other only has a single 1GbE port. Since XO writes those backups simultaneously, the backup takes a really long time and frequently fails on random VMs.
I was thinking it would be better to have XO perform the backup to the 10GbE remote running UNRAID and then schedule an rsync job to mirror that backup to the 1GbE remote on a Synology array. I've already NFS-mounted the Synology share on UNRAID and confirmed I can rsync to it from a prompt; I just need to create a cron job to do it daily. In addition to, hopefully, preventing the random failures, it would let XO complete the backup a lot faster, and the UNRAID server and Synology would take care of the rest, reducing the load on my main servers during the workday. The current backup takes long enough that it sometimes spills into the workday.
I'm wondering if that will cause any restore problems. When XO writes to two remotes, does it write exactly the same files to both remotes, such that I could restore from the secondary even if XO never wrote the files to that remote itself? I would assume they are all the same, but I want to make sure I don't create a set of backup files that won't be usable. Granted, I can, and will, test a restore from that other location, but I don't want to destroy all the existing backups on that remote if this would be a known-bad setup.
Note: I did a --dry-run with rsync and saw quite a few files that it would delete and a bunch it would copy. That surprises me because it is still an active remote for the delta backups. I'm suspicious that the differences are due to previously failed backups.
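For the record, the job I have in mind is a minimal sketch along these lines, assuming the UNRAID backup share lives at /mnt/user/xo-backups and the Synology NFS mount is at /mnt/synology-backup (both paths are placeholders for my setup):

```
# Preview what a mirror would change; --delete makes the target match the source exactly
rsync -aH --delete --partial --dry-run /mnt/user/xo-backups/ /mnt/synology-backup/

# Once the preview looks right, drop --dry-run and schedule it daily, e.g. at 01:00:
# 0 1 * * * rsync -aH --delete --partial /mnt/user/xo-backups/ /mnt/synology-backup/ >> /var/log/xo-mirror.log 2>&1
```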
This happens to me on 8.2 with one specific VM. It doesn't always happen, but about once a week (it feels like) the backup will get stuck in Starting, and it's always, or almost always, the same VM it's stuck on. I usually reboot the VM running XO, which makes the backup task change to Interrupted, and then the next scheduled backup usually works.
It would be nice to know if there's something causing this that I can try to fix. It's not a world-ending problem, but it's a nuisance since it stops all future backups for that job until it's resolved. I'm happy that all the other VMs in that job get backed up at least.
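One thing I may try next time instead of rebooting the whole VM is restarting just the XO service; this assumes an XO-from-sources install managed by systemd with a unit named xo-server, which may not match every setup:

```
# Restart only xo-server rather than rebooting the XO VM
sudo systemctl restart xo-server

# Watch the log while the stuck task clears
sudo journalctl -u xo-server -f
```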
@flakpyro Oh thank you. I try to go through every update post but I must have missed that one.
It worked.
Makes a big difference for this VM if I back up a partially filled 250GB disk vs a largely full 8,250GB set of disks when 8TB of that doesn't need to be backed up.
Will this add the ability to control which disks on the VM will be backed up? I'd love to be able to select specific disks on a VM to back up and leave others out.
Maybe configured at the VM level rather than the backup level: flag a disk as not needing backup, and the regular backup procedure would ignore it. However, I could also see why it might be better to control it by creating a specific backup job for that VM, so you could have different backup schedules, some that back up those extra disks and some that don't. I have no need to ever back up the extra disks at the moment, though.