Best posts made by archw
-
RE: XCP-ng 8.3 betas and RCs feedback 🚀
@stormi
Fantastic and way to go...thanks for all your hard work!
-
RE: Switching to XCP-NG, want to hear your problems
I moved four sites from ESXi to XCP-ng (about twenty-ish virtual machines in total).
Once I figured out how to make XCP-ng work, it was relatively easy. I began by installing it on an unused old Dell.
My comments are from the standpoint of a general contractor (construction) who also does IT work, so take some of my terminology with a grain (boulder) of salt.
Things that gave me some pause:
-
Figuring out how XOA works vs XO was somewhat confusing. I ended up watching two of Tom's videos at Lawrence Systems (shown below) to get me started.
https://lawrence.technology/xcp-ng-and-xen-orchestra-tutorials/
https://lawrence.technology/virtualization/getting-started-tutorial-building-an-open-source-xcp-ng-8-xen-orchestra-virtualization-lab/
-
NIC failover - this was much easier in ESXi. It took me a night to figure out how to do the bonding thing.
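In case it saves someone else a night, here is a minimal sketch of building an active-backup (failover) bond with the xe CLI; the network name and all UUIDs are placeholders for your own setup:
# List the physical interfaces on the host to find the PIF UUIDs:
xe pif-list host-name-label=<host> params=uuid,device
# Create a network for the bond to attach to (the name is arbitrary):
xe network-create name-label="Bond0"
# Bond two PIFs in active-backup (failover) mode:
xe bond-create network-uuid=<network-uuid> pif-uuids=<pif1-uuid>,<pif2-uuid> mode=active-backup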
-
The whole "NIC2:" has to be the same "NIC2:" on every machine was a pain in the a##. Again the way esxi does it is easier.
-
Figuring out the proper terminology to create a local repository:
Find the disk ID of the "sdb" or "cciss/c0d1" disk:
ll /dev/disk/by-id
Use gdisk to create partitions, then:
xe sr-create host-uuid=c691140b-966e-43b1-8022-1d1e05081b5b content-type=user name-label="Local EXT4 SR-SSD1" shared=false device-config:device=/dev/disk/by-id/scsi-364cd98f07c7ef8002d2c3c86296c4242-part1 type=ext
Expanding an existing drive (i.e. after you grow a RAID array) was tough (I have a post on this site that shows how I did it).
-
Moving a VM from ESXi to XCP-ng was just a long process, and a few vomited along the way and had to be re-done. In some cases I used the built-in XCP-ng migration and, in others (the huge VMs), I figured out how to do it via Clonezilla (much, much faster once I got the hang of it).
-
Having to shut down a running VM to increase the disk size is a bit of a PITA, but it's not that big of a deal.
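Once the VM is down, the resize itself is quick; a rough xe CLI sketch (the UUIDs are placeholders):
xe vm-shutdown uuid=<vm-uuid>
# Find the VDI behind the disk you want to grow:
xe vm-disk-list uuid=<vm-uuid>
# Grow the virtual disk, then boot the VM and extend the partition inside the guest:
xe vdi-resize uuid=<vdi-uuid> disk-size=100GiB
xe vm-start uuid=<vm-uuid>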
-
Over-committing memory...I still don't have a great grasp on that one.
Before I made the move, I did a ton of speed tests of ESXi vs XCP-ng. About 60% were slightly faster on ESXi and 40% were faster on XCP-ng. In the end, the differences were negligible.
With all that said, I think XCP-ng is much easier to use than ESXi and I like it better. vCenter seemed to last about six months and then always died and had to be rebuilt (and the restore utility was about as reliable as gas station sushi). With XOA, it always works and is much faster than vCenter.
The backup is awesome. With ESXi I was using Nakivo.
Just my two cents!
-
RE: Import from VMware fails after upgrade to XOA 5.91
@florent
I'm running it right now!
-
RE: How to enable NBD?
@archw
Ignore that entire post...user error. I have two networks whose names are spelled almost identically, and I was using the wrong one.
-
RE: Backup fails with "VM_HAS_VUSBS" error
@Danp
80 to 84! Building science...never saw Bo, but Lionel "Little Train" James was in the class I went to (he was leaving as I was walking in...nice dude!!!).
-
RE: VM migration from esxi fails with "Cannot read properties of undefined (reading 'startsWith')"
@florent
For the heck of it, I made a new VM in ESXi and pointed it to the vmdk file. At that point, XCP-ng would perform the loading/conversion process using the new VM setup. I have no idea what was wrong with the original vmx setup file in ESXi.
-
RE: XCP-ng 8.3 betas and RCs feedback 🚀
11 hosts and thirty-something VMs (a Windows, Linux, and BSD mix), and the update went fine.
-
RE: XCP-ng 8.3 betas and RCs feedback 🚀
@stormi
Sorry...I clicked reply to the wrong post. Yes, I ran "yum update edk2 --enablerepo=xcp-ng-testing,xcp-ng-cii" and let it install. I then rebooted the VMs (several of them) and tried to start them, and none worked. I then ran "yum downgrade edk2-20180522git4b8552d-1.5.1.xcpng8.3" and the VMs fired right up.
Latest posts made by archw
-
RE: Potential bug with Windows VM backup: "Body Timeout Error"
@olivierlambert
Since I'm having the same issue, can I give that suggestion a shot? If so, how do you do it from the command line (with the xe CLI)?
-
RE: Potential bug with Windows VM backup: "Body Timeout Error"
@Hex
Ditto
https://xcp-ng.org/forum/topic/10532/backup-failed-with-body-timeout-error/8
I have it happen on almost every large backup. I had to give up. For the VMs that would not back up, I moved to delta backups.
FWIW, here were my results:
- Regular backup to TrueNAS, “compression” set to “Zstd”: backup fails.
- Regular backup to TrueNAS, “compression” set to “disabled”: backup is successful.
- Regular backup to a vanilla Ubuntu test VM, “compression” set to “Zstd”: backup is successful.
- Delta backup to TrueNAS: backup is successful.
-
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey
Is a reboot required for this batch of updates?
-
Question on NBD backups
While testing NBD over the last few days, I saw this note: “Remember that it works ONLY with a remote (Backup Repository) configured with "multiple data blocks".” I then saw this note:
“Store backup as multiple data blocks instead of a whole VHD file. (creates 500-1000 files per backed up GB but allows faster merge)” I set one up and backed up a small VM and, sure enough, with a 5 GB VM, it made 15,587 files. I can only imagine a large VM drive would generate a massive number of files.
Questions:
-
Is there anything of concern with respect to generating all these files on the NFS server vs just a couple of large files?
-
One area said “your remote must be able to handle parallel access (up to 16 write processes per backup)”…how does one know if it will do so?
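One rough way to find out is to point 16 parallel writers at the mounted remote and watch what happens. A sketch, assuming fio is installed and the remote is mounted at /mnt/remote (both are placeholders):
# 16 concurrent sequential writers, 256 MiB each, against the remote mount:
fio --name=parallel-write-test --directory=/mnt/remote --rw=write --bs=1M --size=256m --numjobs=16 --group_reporting
# If jobs stall or error out, or throughput collapses, the remote will likely
# struggle with the up-to-16 write processes a backup can open.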
-
Also, do you still need to do this: “edit your XO config.toml file (near the end of it) with:
[backups]
useNbd = true”
-
Does anyone use this vs the other methods of backup?
Thanks!
-
RE: Backup failed with "Body Timeout Error"
Update:
It’s weird:
- There are three VMs on this host. The backup works with two but not with the third. It fails with the “Body Timeout Error”.
- Two of the VMs are almost identical (same drive sizes). The only difference is that one was set up as “Other install media” (it came over from ESXi) and the one that fails was set up using the “Windows Server 2022” template.
- I normally back up to multiple NFS servers, so I changed it to try one at a time; both failed.
- After watching it do the backup too many times to count, I found that, at about the 95% stage, the snapshot stops writing to the NFS share.
- About that time, the file /var/log/xensource.log records this information:
  Feb 26 09:43:33 HOST1 xapi: [error||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|xapi_compression] nice failed to compress: exit code 70
  Feb 26 09:43:33 HOST1 xapi: [ warn||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|pervasiveext] finally: Error while running cleanup after failure of main function: Failure("nice failed to compress: exit code 70")
  Feb 26 09:43:33 HOST1 xapi: [debug||922 |xapi events D:d21ea5c4dd9a|xenops] Event on VM fbcbd709-a9d9-4cc7-80de-90185a74eba4; resident_here = true
  Feb 26 09:43:33 HOST1 xapi: [debug||922 |xapi events D:d21ea5c4dd9a|dummytaskhelper] task timeboxed_rpc D:a08ebc674b6d created by task D:d21ea5c4dd9a
  Feb 26 09:43:33 HOST1 xapi: [debug||922 |timeboxed_rpc D:a08ebc674b6d|xmlrpc_client] stunnel pid: 339060 (cached) connected to 192.168.1.6:443
  Feb 26 09:43:33 HOST1 xapi: [debug||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|xmlrpc_client] stunnel pid: 296483 (cached) connected to 192.168.1.6:443
  Feb 26 09:43:33 HOST1 xapi: [error||19977 :::80|VM.export D:c62fe7a2b4f2|backtrace] [XO] VM export R:c14f4c4c1c4c failed with exception Server_error(CLIENT_ERROR, [ INTERNAL_ERROR: [ Unix.Unix_error(Unix.EPIPE, "write", "") ] ])
  Feb 26 09:43:33 HOST1 xapi: [error||19977 :::80|VM.export D:c62fe7a2b4f2|backtrace] Raised Server_error(CLIENT_ERROR, [ INTERNAL_ERROR: [ Unix.Unix_error(Unix.EPIPE, "write", "") ] ])
- I have no idea if it means anything, but the “failed to compress” part made me try something. I changed “compression” from “Zstd” to “disabled”, and that time it worked. Here are my results:
- Regular backup to TrueNAS, “compression” set to “Zstd”: backup fails.
- Regular backup to TrueNAS, “compression” set to “disabled”: backup is successful.
- Regular backup to a vanilla Ubuntu test VM, “compression” set to “Zstd”: backup is successful.
- Delta backup to TrueNAS: backup is successful.
Sooooo…the $64,000 question is: why doesn’t it work on that one VM when compression is on and the target is a TrueNAS box?
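If anyone wants to rule XO out, a compressed export straight from the xe CLI should exercise the same compression path. A sketch (the UUID and output path are placeholders; recent XCP-ng releases also accept compress=zstd here):
# Export the problem VM with zstd compression from dom0; if this also fails
# with "nice failed to compress", the problem is below XO.
xe vm-export vm=<vm-uuid> filename=/mnt/backup/test-export.xva compress=zstd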
-
RE: Backup failed with "Body Timeout Error"
@archw
I didn't even know that was a thing....thanks!
Question: Let's say dummy_remote-1 is where I write data and the mirror job copies the data from dummy_remote-1 to dummy_remote-2.
-
What happens if the backup to dummy_remote-1 fails part of the way through? I don't know what gets left behind in a partial backup, but I assume it's a bunch of nothingness/worthless data?? Would the mirror copy the failed data over the good data on dummy_remote-2?
-
What happens if the backup on dummy_remote-1 gets wiped out by a bad actor (erased)? Would the mirror copy the zeroed-out data over the good data on dummy_remote-2?
-
RE: Dumb question about multiple remotes
@olivierlambert
Very cool...thanks for the explanation (but I'll admit I had to google "Node streams")!