@stormi
Fantastic and way to go...thanks for all your hard work!
Best posts made by archw
-
RE: XCP-ng 8.3 betas and RCs feedback 🚀
-
RE: Switching to XCP-NG, want to hear your problems
I moved four sites from ESXi to XCP-NG (there are about twenty-ish virtual machines).
Once I figured out how to make XCP-NG work, it was relatively easy. I began by installing it on an unused old Dell.
My comments are from the standpoint of a general contractor (construction) that also does IT work, so take some of my terminology with a grain (boulder) of salt.
Things that gave me some pause:
-
Figuring out how XOA works vs XO was somewhat confusing. I ended up watching two of Tom's videos at Lawrence Systems (shown below) to get me started.
https://lawrence.technology/xcp-ng-and-xen-orchestra-tutorials/
https://lawrence.technology/virtualization/getting-started-tutorial-building-an-open-source-xcp-ng-8-xen-orchestra-virtualization-lab/
NIC failover - this was much easier in ESXi. It took me a night to figure out how to do the bonding thing.
-
The whole "NIC2" has to be the same "NIC2" on every machine thing was a pain in the a##. Again, the way ESXi does it is easier.
-
Figuring out the proper terminology to properly create a local repository:
Find the disk ID of the "sdb" or "cciss/c0d1" disk:
ll /dev/disk/by-id
Use gdisk to create partitions, then:
xe sr-create host-uuid=c691140b-966e-43b1-8022-1d1e05081b5b content-type=user name-label="Local EXT4 SR-SSD1" shared=false device-config:device=/dev/disk/by-id/scsi-364cd98f07c7ef8002d2c3c86296c4242-part1 type=ext
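Pulled together, the local-SR steps look roughly like this as one sequence (a sketch; the device path, name label, and host UUID are from my setup, so substitute your own):

```shell
# Sketch of the local-SR steps, run on the XCP-ng host as root.
# Device path and name label are from my setup; substitute your own.

# 1. Find the stable by-id path of the target disk (e.g. sdb):
ls -l /dev/disk/by-id

# 2. Partition the disk (interactive):
gdisk /dev/disk/by-id/scsi-364cd98f07c7ef8002d2c3c86296c4242

# 3. Create the EXT SR on the new partition:
xe sr-create \
  host-uuid=c691140b-966e-43b1-8022-1d1e05081b5b \
  content-type=user \
  name-label="Local EXT4 SR-SSD1" \
  shared=false \
  device-config:device=/dev/disk/by-id/scsi-364cd98f07c7ef8002d2c3c86296c4242-part1 \
  type=ext
```

The by-id path matters because /dev/sdb can change between reboots while the by-id link stays stable.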
Expanding an existing drive (i.e., after you grow a RAID array) was tough (I have a post on this site that shows how I did it).
-
Moving a VM from ESXi to XCP-NG was just long, and a few vomited in the process and had to be re-done. In some cases I used the built-in XCP-NG migration and, in others (the huge VMs), I figured out how to do it via Clonezilla (much, much faster once I got the hang of it).
-
Having to shut down a running VM to increase the disk size is a bit of a PITA, but it's not that big of a deal.
-
Over-committing memory...I still don't have a great grasp on that one.
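On the disk-resize point above, the offline grow itself is only a couple of commands once the VM is down (a sketch; the UUIDs are placeholders you would look up with `xe vm-list` and `xe vbd-list`):

```shell
# Offline grow of a VM disk; <vm-uuid> and <vdi-uuid> are placeholders.
xe vm-shutdown uuid=<vm-uuid>
xe vdi-resize uuid=<vdi-uuid> disk-size=100GiB
xe vm-start uuid=<vm-uuid>
# ...then extend the partition and filesystem inside the guest.
```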
Before I made the move, I did a ton of speed tests of ESXi vs XCP-NG. About 60% were slightly faster on ESXi and 40% were faster on XCP-NG. In the end, the differences were negligible.
With all that said, I think XCP-NG is much easier to use than ESXi and I like it better. vCenter seemed to last about six months and then always died and had to be rebuilt (and the restore utility was about as reliable as gas station sushi). XOA, on the other hand, always works and is much faster than vCenter.
The backup is awesome. With ESXi I was using Nakivo.
Just my two cents!
-
-
RE: Import from VMware fails after upgrade to XOA 5.91
@florent
It's running right now!
RE: How to enable NBD?
@archw
Ignore that entire post...user error. I have two networks whose names are spelled almost identically and I was using the wrong one.
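For anyone landing here from search, the step itself looks roughly like this (a sketch from my notes; "Backup Network" is a hypothetical name label, and with two near-identical names it pays to double-check the UUID):

```shell
# Enable NBD on a specific network; grab the UUID carefully when two
# networks have nearly identical names!
NET_UUID=$(xe network-list name-label="Backup Network" --minimal)
xe network-param-add uuid="$NET_UUID" param-name=purpose param-key=nbd

# Verify which purposes the network now carries:
xe network-param-get uuid="$NET_UUID" param-name=purpose
```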
RE: Backup fails with "VM_HAS_VUSBS" error
@Danp
80 to 84! Building science...never saw Bo, but Lionel "Little Train" James was in the class I went to (he was leaving as I was walking in...nice dude!!!).
RE: VM migration from esxi fails with "Cannot read properties of undefined (reading 'startsWith')"
@florent
For the heck of it, I made a new VM in ESXi and pointed it to the vmdk file. At that point, XCP-ng would perform the loading/conversion process using the new VM setup. I have no idea what was wrong with the original vmx setup file in ESXi.
-
RE: XCP-ng 8.3 betas and RCs feedback 🚀
11 hosts and thirty-something VMs (Windows, Linux, BSD mix) and the update went fine.
-
RE: XCP-ng 8.3 betas and RCs feedback 🚀
@stormi
Sorry...I clicked reply to the wrong post. Yes, I ran "yum update edk2 --enablerepo=xcp-ng-testing,xcp-ng-cii" and let it install. I then rebooted the VMs (several of them) and tried to start them and none worked. I then ran "yum downgrade edk2-20180522git4b8552d-1.5.1.xcpng8.3" and the VMs fired right up.
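In case anyone else hits this, here's the rollback plus a hold so yum doesn't re-upgrade it behind your back (a sketch; the versionlock plugin is an assumption, so check it's available in your repos first):

```shell
# Roll edk2 back to the working build...
yum downgrade edk2-20180522git4b8552d-1.5.1.xcpng8.3
# ...and optionally pin it there (assumes yum-plugin-versionlock
# can be installed on the host).
yum install -y yum-plugin-versionlock
yum versionlock edk2
```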
-
RE: Introduce yourself!
I'm the president of a general contractor in the South. I've also been our IT person for 30+ years.
Last fall, when the winds of change at VMware began blowing, I started looking into alternatives to ESXi. I happened upon videos about XCP-NG from Tom at Lawrence Systems and decided to install it on a test system.
It's a great product and I've enjoyed getting to know it!
Latest posts made by archw
-
Question on NBD backups
While testing NBD over the last few days, I saw this note: "Remember that it works ONLY with a remote (Backup Repository) configured with "multiple data blocks"." I then saw this note:
"Store backup as multiple data blocks instead of a whole VHD file. (creates 500-1000 files per backed up GB but allows faster merge)"
I set one up and backed up a small VM and, sure enough, with a 5 GB VM, it made 15,587 files. I can only imagine a large VM drive would generate a massive number of files.
Questions:
Is there anything of concern with respect to generating all these files on the NFS server vs just a couple of large files?
-
One area said “your remote must be able to handle parallel access (up to 16 write processes per backup)”…how does one know if it will do so?
-
Also, do you still need to edit your XO config.toml file (near the end of it) with:
[backups]
useNbd = true
Does anyone use this vs the other methods of backup?
Thanks!
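On the "up to 16 write processes" question, a crude smoke test I'd run against the mounted remote (the target path is a placeholder): spawn 16 concurrent writers and confirm they all finish.

```shell
# Crude parallel-write check; point TARGET at your mounted NFS remote.
TARGET="${1:-/tmp/xo-remote-test}"   # placeholder path
mkdir -p "$TARGET"
for i in $(seq 1 16); do
  dd if=/dev/zero of="$TARGET/stream-$i" bs=1M count=8 status=none &
done
wait
echo "files written: $(ls "$TARGET" | wc -l)"   # should report 16
```

If all 16 land without errors or stalls, the share can at least take that much concurrency; it says nothing about sustained throughput.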
-
-
RE: Backup failed with "Body Timeout Error"
Update:
It’s weird:
• There are three VMs on this host. The backup works with two but not with the third. It fails with the "Body Timeout Error" error.
• Two of the VMs are almost identical (same drive sizes). The only difference is that one was set up as "Other install media" (it came over from ESXi) and the one that fails was set up using the "Windows Server 2022" template.
• I normally back up to multiple NFS servers, so I changed the job to try one at a time; both failed.
• After watching it do the backup too many times to count, I found that, at about the 95% stage, the snapshot stops writing to the NFS share.
• About that time, the file /var/log/xensource.log records this information:
Feb 26 09:43:33 HOST1 xapi: [error||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|xapi_compression] nice failed to compress: exit code 70
Feb 26 09:43:33 HOST1 xapi: [ warn||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|pervasiveext] finally: Error while running cleanup after failure of main function: Failure("nice failed to compress: exit code 70")
Feb 26 09:43:33 HOST1 xapi: [debug||922 |xapi events D:d21ea5c4dd9a|xenops] Event on VM fbcbd709-a9d9-4cc7-80de-90185a74eba4; resident_here = true
Feb 26 09:43:33 HOST1 xapi: [debug||922 |xapi events D:d21ea5c4dd9a|dummytaskhelper] task timeboxed_rpc D:a08ebc674b6d created by task D:d21ea5c4dd9a
Feb 26 09:43:33 HOST1 xapi: [debug||922 |timeboxed_rpc D:a08ebc674b6d|xmlrpc_client] stunnel pid: 339060 (cached) connected to 192.168.1.6:443
Feb 26 09:43:33 HOST1 xapi: [debug||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|xmlrpc_client] stunnel pid: 296483 (cached) connected to 192.168.1.6:443
Feb 26 09:43:33 HOST1 xapi: [error||19977 :::80|VM.export D:c62fe7a2b4f2|backtrace] [XO] VM export R:c14f4c4c1c4c failed with exception Server_error(CLIENT_ERROR, [ INTERNAL_ERROR: [ Unix.Unix_error(Unix.EPIPE, "write", "") ] ])
Feb 26 09:43:33 HOST1 xapi: [error||19977 :::80|VM.export D:c62fe7a2b4f2|backtrace] Raised Server_error(CLIENT_ERROR, [ INTERNAL_ERROR: [ Unix.Unix_error(Unix.EPIPE, "write", "") ] ])
• I have no idea if it means anything, but the "failed to compress" made me try something. I changed "compression" from "Zstd" to "disabled" and that time it worked. Here are my results:
- Regular backup to TrueNAS, "compression" set to "Zstd": backup fails.
- Regular backup to TrueNAS, "compression" set to "disabled": backup is successful.
- Regular backup to a vanilla Ubuntu test VM, "compression" set to "Zstd": backup is successful.
- Delta backup to TrueNAS: backup is successful.
Sooooo…the $64,000 question is why doesn't it work on that one VM when compression is on and it's a TrueNAS box?
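One way to take XO out of the loop and poke the compression path directly: export the problem VM by hand to the same TrueNAS mount, once with Zstd and once without (a sketch; the mount path and UUID are placeholders, and `compress=zstd` is my reading of the xe vm-export options):

```shell
# Manual export of the problem VM to the same NFS mount the job uses.
# <problem-vm-uuid> and /mnt/truenas are placeholders.
xe vm-export vm=<problem-vm-uuid> compress=zstd \
  filename=/mnt/truenas/test-zstd.xva
xe vm-export vm=<problem-vm-uuid> \
  filename=/mnt/truenas/test-plain.xva
```

If the zstd export dies the same way outside XO, the issue is between xapi's compressor and the share rather than the backup job itself.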
-
RE: Backup failed with "Body Timeout Error"
@archw
I didn't even know that was a thing....thanks!
Question: Let's say dummy_remote-1 is where I write data and the mirror job copies the data from dummy_remote-1 to dummy_remote-2.
-
What happens if the backup to dummy_remote-1 fails partway through? I don't know what gets left behind in a partial backup, but I assume it's a bunch of nothingness/worthless data?? Would the mirror copy the failed data over the good data on dummy_remote-2?
-
What happens if the backup on dummy_remote-1 got wiped out by a bad actor (erased)? Would the mirror copy the zeroed-out data over the good data on dummy_remote-2?
-
-
RE: Dumb question about multiple remotes
@olivierlambert
Very cool...thanks for the explanation (but I'll admit I had to google "Node streams")!
Dumb question about multiple remotes
Many of my backups are set to go to multiple remotes. When the backup finishes, the status shows that they finished at the same time.
One is a faster backup device than the other, so how does it work? Does it write parallel data/blocks/etc. to one share and then wait for the other share to catch up, or is it some other machination?
Just curious.
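A loose shell analogy for the multi-remote behavior (my illustration, not XO's actual code): one source stream is duplicated to both targets at once, so the whole pipeline paces at the slower writer and both copies finish together.

```shell
# One reader, two writers: tee duplicates the stream, and the pipeline
# advances at the pace of the slower destination.
SRC=$(mktemp)
dd if=/dev/zero of="$SRC" bs=1M count=4 status=none
dd if="$SRC" bs=1M status=none | tee /tmp/remote1.bin > /tmp/remote2.bin
cmp -s /tmp/remote1.bin /tmp/remote2.bin && echo "both copies identical"
rm -f "$SRC" /tmp/remote1.bin /tmp/remote2.bin
```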
-
RE: Backup failed with "Body Timeout Error"
It also backs up two very similar VMs, so I went to the status of their last backup and they show this:
VM1
Duration: 8 hours
Size: 1.1 TiB
Speed: 42.2 MiB/s
VM2
Duration: 5 hours
Size: 676.8 GiB
Speed: 41.29 MiB/s
It's a 10 Gb connection but the storage media is an NFS share made from 7200 RPM SATA drives (I'm jealous of your NVMe!).
I don't have a timeout set, but for the heck of it I just set one at 8 hours. Also, I have the backup set to go to two NFS shares. I just changed it to only go to one and started it. If that fails, I'll change to the other and see what happens.
-
Backup failed with "Body Timeout Error"
Most of the backups complete without any error, but I have one that keeps failing with "Body Timeout Error"; it says it is dying at the four-hour mark.
It is backing up to two NFS boxes. Like I said above, it works fine for most of the other VMs; however, one other one does this every once in a while.
Backup running on Xen Orchestra, commit f18b0
Any ideas?
Thanks! The log file is as follows:
{
"data": {
"mode": "full",
"reportWhen": "failure"
},
"id": "1740449460003",
"jobId": "2cc39019-5201-43f8-ad8a-d13870e948be",
"jobName": "Exchange-2025",
"message": "backup",
"scheduleId": "1f01cbda-bae3-4a60-a29c-4d88b79a38d2",
"start": 1740449460003,
"status": "failure",
"infos": [
{
"data": {
"vms": [
"558c3009-1e8d-b1e4-3252-9cf3915d1fe4"
]
},
"message": "vms"
}
],
"tasks": [
{
"data": {
"type": "VM",
"id": "558c3009-1e8d-b1e4-3252-9cf3915d1fe4",
"name_label": "VM1"
},
"id": "1740449463052",
"message": "backup VM",
"start": 1740449463052,
"status": "failure",
"tasks": [
{
"id": "1740449472998",
"message": "snapshot",
"start": 1740449472998,
"status": "success",
"end": 1740449481725,
"result": "d73124a9-d08c-4623-f28c-7503fcef9260"
},
{
"data": {
"id": "0d6da7ba-fde1-4b37-927b-b56c61ce8e59",
"type": "remote",
"isFull": true
},
"id": "1740449483456",
"message": "export",
"start": 1740449483456,
"status": "failure",
"tasks": [
{
"id": "1740449483464",
"message": "transfer",
"start": 1740449483464,
"status": "failure",
"end": 1740462845094,
"result": {
"name": "BodyTimeoutError",
"code": "UND_ERR_BODY_TIMEOUT",
"message": "Body Timeout Error",
"stack": "BodyTimeoutError: Body Timeout Error\n at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202502241838/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202502241838/node_modules/undici/lib/util/timers.js:162:13)\n at listOnTimeout (node:internal/timers:581:17)\n at process.processTimers (node:internal/timers:519:7)"
}
}
],
"end": 1740462845094,
"result": {
"name": "BodyTimeoutError",
"code": "UND_ERR_BODY_TIMEOUT",
"message": "Body Timeout Error",
"stack": "BodyTimeoutError: Body Timeout Error\n at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202502241838/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202502241838/node_modules/undici/lib/util/timers.js:162:13)\n at listOnTimeout (node:internal/timers:581:17)\n at process.processTimers (node:internal/timers:519:7)"
}
},
{
"data": {
"id": "656f82d9-5d68-4e26-a75d-b57b4cb17d5e",
"type": "remote",
"isFull": true
},
"id": "1740449483457",
"message": "export",
"start": 1740449483457,
"status": "failure",
"tasks": [
{
"id": "1740449483469",
"message": "transfer",
"start": 1740449483469,
"status": "failure",
"end": 1740462860953,
"result": {
"name": "BodyTimeoutError",
"code": "UND_ERR_BODY_TIMEOUT",
"message": "Body Timeout Error",
"stack": "BodyTimeoutError: Body Timeout Error\n at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202502241838/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202502241838/node_modules/undici/lib/util/timers.js:162:13)\n at listOnTimeout (node:internal/timers:581:17)\n at process.processTimers (node:internal/timers:519:7)"
}
}
],
"end": 1740462860953,
"result": {
"name": "BodyTimeoutError",
"code": "UND_ERR_BODY_TIMEOUT",
"message": "Body Timeout Error",
"stack": "BodyTimeoutError: Body Timeout Error\n at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202502241838/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202502241838/node_modules/undici/lib/util/timers.js:162:13)\n at listOnTimeout (node:internal/timers:581:17)\n at process.processTimers (node:internal/timers:519:7)"
}
},
{
"id": "1740463648921",
"message": "clean-vm",
"start": 1740463648921,
"status": "success",
"warnings": [
{
"data": {
"path": "/backup-storage/558c3009-1e8d-b1e4-3252-9cf3915d1fe4/.20250224T232125Z.xva"
},
"message": "unused XVA"
}
],
"end": 1740463649103,
"result": {
"merge": false
}
},
{
"id": "1740463649144",
"message": "clean-vm",
"start": 1740463649144,
"status": "success",
"warnings": [
{
"data": {
"path": "/backup-storage/558c3009-1e8d-b1e4-3252-9cf3915d1fe4/.20250224T232125Z.xva"
},
"message": "unused XVA"
}
],
"end": 1740463651524,
"result": {
"merge": false
}
}
],
"end": 1740463651532,
"result": {
"errors": [
{
"name": "BodyTimeoutError",
"code": "UND_ERR_BODY_TIMEOUT",
"message": "Body Timeout Error",
"stack": "BodyTimeoutError: Body Timeout Error\n at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202502241838/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202502241838/node_modules/undici/lib/util/timers.js:162:13)\n at listOnTimeout (node:internal/timers:581:17)\n at process.processTimers (node:internal/timers:519:7)"
},
{
"name": "BodyTimeoutError",
"code": "UND_ERR_BODY_TIMEOUT",
"message": "Body Timeout Error",
"stack": "BodyTimeoutError: Body Timeout Error\n at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202502241838/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202502241838/node_modules/undici/lib/util/timers.js:162:13)\n at listOnTimeout (node:internal/timers:581:17)\n at process.processTimers (node:internal/timers:519:7)"
}
],
"message": "all targets have failed, step: writer.run()",
"name": "Error",
"stack": "Error: all targets have failed, step: writer.run()\n at FullXapiVmBackupRunner._callWriters (file:///opt/xo/xo-builds/xen-orchestra-202502241838/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:64:13)\n at async FullXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202502241838/@xen-orchestra/backups/_runners/_vmRunners/FullXapi.mjs:55:5)\n at async FullXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202502241838/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:396:9)\n at async file:///opt/xo/xo-builds/xen-orchestra-202502241838/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
}
}
],
"end": 1740463651533
}
Upgrading to Server 2025 and Xenserver VM tools
This is in the FWIW department. Some of the upgrades (Windows Server 2022 to Server 2025) have gone through without an issue, but two would not upgrade until I removed the XenServer VM Tools (version 9.4). As it was, it would do the upgrade, reboot twice, and then sit with a spinning wheel.
Once I removed the XenServer VM Tools, the upgrade went through without issue.
-
RE: Backup fails with "Body Timeout Error", "all targets have failed, step: writer.run()"
Many, many happy hours have since transpired.
I ended up wiping out the XO VM that was running the process and making a new one. That seems to have fixed it.
With all that said, I got one again last night backing up the same VM that has caused issues in the past. I just told the backup to restart, so let's see what happens.