Best posts made by flakpyro
-
RE: XCP-ng 8.3 updates announcements and testing
@stormi Installed on our 2 production pools, DR, and remote sites, 46 hosts in total, a mix of Dell, Lenovo, HP, and Supermicro servers; no issues to report!
-
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey Updated my usual test hosts (Minisforum and Supermicro X11), as well as two two-host AMD pools (one pool of HP DL320 Gen10s and another of Asus Epyc servers of some sort), and lastly a Dell R360, all without issue.
-
RE: XCP-ng 8.3 updates announcements and testing
Installed on 2 test machines.
Machine 1:
Intel Xeon E-2336
SuperMicro board
Machine 2:
Minisforum MS-01
i9-13900H
32 GB RAM
Using Intel X710 onboard NIC
Both machines installed fine and all VMs came up without issue after. My one test backup job also seemed to run without any issues.
-
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey Installed on 2 test machines.
Machine 1:
Intel Xeon E-2336
SuperMicro board
Machine 2:
Minisforum MS-01
i9-13900H
32 GB RAM
Using Intel X710 onboard NIC
Both machines installed fine and all VMs came up without issue after.
I ran a backup job afterwards to test snapshot coalescing; no issues there.
-
RE: XCP-ng 8.3 updates announcements and testing
@stormi Updated a test machine running only a couple of VMs. Everything installed fine and it rebooted without issue.
Machine is:
Intel Xeon E-2336
SuperMicro board.
One VM happens to be Windows-based with an Nvidia GPU passed through to it, running Blue Iris and using the MSR fix found elsewhere on these forums; the fix continues to work with this version of Xen.
-
RE: CBT: the thread to centralize your feedback
@dthenot @olivierlambert Thanks guys, I'll hold off on submitting a ticket for now to keep the conversation centralized here, but if you need any more info, would like me to try anything, or would like a remote support tunnel opened, just let me know!
-
RE: Host failure after patches
@McHenry I think the command you need to run on a current slave is
xe pool-emergency-transition-to-master followed by xe-toolstack-restart
This will make that slave the new pool master. You should only do this, though, if the current pool master is definitely dead.
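Roughly, and assuming the old master really is unrecoverable, the sequence looks something like this (a sketch from memory, so double-check it against the docs linked below):
# On a surviving slave, promote it to pool master:
xe pool-emergency-transition-to-master
xe-toolstack-restart
# Then, from the new master, point the remaining hosts at it:
xe pool-recover-slaves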
This may also be useful:
https://docs.xenserver.com/en-us/xenserver/8/dr/machine-failures.html
-
RE: XO (self build) tasks spamming
@olivierlambert On my test instance which is admittedly not very busy (only 2 hosts) this fixed the issue!
-
RE: Switching to XCP-NG, want to hear your problems
@CodeMercenary I am using NFS v4 from the XO server to our backup remotes and it seems to work just fine. However, using v4 for a storage SR was nothing but problems; as @nikade mentioned, we had tons of "NFS server not responding" issues which would lock up hosts and VMs, causing downtime. Since moving to v3 that hasn't happened.
Checking a host's NFS retransmission stats after 9 days of uptime, I see we have had some retransmissions, but they have not caused any downtime or even any timeout messages in dmesg on the host.
[xcpng-prd-02 ~]# nfsstat -rc
Client rpc stats:
calls      retrans    authrefrsh
268513028  169        268537542
From what I gather from this blog post from Red Hat (https://www.redhat.com/sysadmin/using-nfsstat-nfsiostat), that amount of retransmissions is VERY low and not an issue.
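To put a number on it, using the figures above, the retransmission rate works out to a tiny fraction of a percent:
# 169 retransmissions out of 268513028 calls
awk 'BEGIN { printf "%.6f%%\n", 169 / 268513028 * 100 }'   # prints roughly 0.000063%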
-
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey Installed on about 50 servers across various pools and remote sites. No issues. Ran a couple backup jobs as well which completed without issue.
Latest posts made by flakpyro
-
RE: Intel i40e drivers not working with X710-T2L on kernel-alt
@Kreeblah Not related to your i40e issue, but when you are running the standard kernel, before the system shuts down due to a thermal shutdown, do you see anything when running the command "xl dmesg" from SSH? I have seen Xen thermally throttle CPU cores and report it there when it happens. On some SFF systems (Minisforum MS-01) I have simply adjusted the boost clocks (which you can do from inside XCP-ng) to stop it from happening, though you mention you are not going over 60C, which should not trigger any throttling.
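For example, something along these lines from dom0 (a rough sketch; check the exact messages on your host):
# Look for thermal/throttling messages from Xen:
xl dmesg | grep -i -e thermal -e throttl
# One blunt way to rein in boost clocks is to disable turbo via xenpm
# (my understanding is that omitting a cpuid applies it to all CPUs):
xenpm disable-turbo-mode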
-
RE: Updating XenTools on Windows 2022 - duplicate NIC
I believe this was a bug that was fixed in later versions:
https://support.citrix.com/support-home/kbsearch/article?articleNumber=CTX678047
-
RE: Veeam backup with XCP NG
@Pilow You can watch it via the CLI using journalctl with this command. Not as nice as having it in the UI, but useful if you are waiting for a merge to finish before running maintenance on your remotes.
sudo journalctl -u xo-server -f -n 50
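If you only care about merge progress, you can also filter the stream; the exact log wording can vary between XO versions, so treat the grep pattern as a guess:
# Follow xo-server logs and only show lines mentioning merges:
sudo journalctl -u xo-server -f -n 200 | grep -i merge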
-
UI Bug when falling back to Full Backup
I had a job that was interrupted for some reason during its backup run. Upon retrying, a full backup was required (expected), however the UI shows contradictory information about the VM's backup job.
I think ideally it should be clarified that a delta is not running but rather a full? (At the bottom)
-
RE: Windows 2025 Standard 24H2.11 (iso release of sept 25) crash on reboot with "INACCESSIBLE BOOT DEVICE 0x7B" in XCP 8.2.1 and XCP 8.3
@dinhngtu Well, that is slightly concerning! I'll be sure not to remove Xen Tools on any of these VMs until we can get this resolved. We have a handful of production Server 2025 VMs that I'm now slightly worried about! I should note that last week we did update them to version 9.4.2 of the tools, which tends to require 2 reboots to fully install, and we didn't run into any issues.
-
RE: Windows 2025 Standard 24H2.11 (iso release of sept 25) crash on reboot with "INACCESSIBLE BOOT DEVICE 0x7B" in XCP 8.2.1 and XCP 8.3
@dinhngtu Very strange! I wonder why installing from an older ISO and then installing all the updates causes no issues at all? I'm relieved to know that our current 2025 VMs won't stop working after a reboot!
-
RE: Host failure after patches
@McHenry No, the pool master must always be patched and rebooted first. Do you have a pool metadata backup? Are your VMs on shared storage of some sort, in case you need to rebuild the pool?
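If you don't already have one, a pool metadata backup can be taken from the pool master; as a rough sketch (adjust the file path to taste):
# Dump the pool database to a file on the master:
xe pool-dump-database file-name=/root/pool-db-backup
# It can later be restored on a rebuilt master with:
xe pool-restore-database file-name=/root/pool-db-backup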
-
RE: Host failure after patches
@McHenry I am wondering if you are in a situation where you need to reboot. You have patches installed but have not rebooted, yet have restarted the toolstacks on the slaves, which means some components have restarted and are now running on their new versions. If you have support, it may be best to reach out to Vates for guidance.
-
RE: Host failure after patches
@McHenry Have you tried restarting the toolstack on the hosts since running pool-recover-slave?
I have only had to do this once before, but I remember it going fairly smoothly at the time. (As smoothly as you can expect a host failure to go, anyway.)
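For reference, the restart itself is just the following on each affected host, after which a quick sanity check from the master should show the hosts as enabled again:
# Restart the toolstack on a host:
xe-toolstack-restart
# Then confirm the hosts report as enabled:
xe host-list params=name-label,enabled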