@DustinB doesn't it use the exact same mechanism?
I have to find out.
Posts
-
RE: USB-Passthrough does not survive reboot of VM
-
USB-Passthrough does not survive reboot of VM
I had that many times with XCP-ng 8.2, and it felt like the problem was gone with 8.3, but it seems it's not (completely) gone.
I can enable passthrough, add it to the VM, and it runs fine.
I shut down the VM for whatever reason (recently it was just a reboot for Windows updates) and the USB device got lost. What could be the reason, and how can I debug that?

I can enable it again, shut down the VM (hotplug sadly still doesn't work to this day?), attach it, and it's fine again.
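For debugging, the passthrough state can at least be inspected from dom0 with the standard xe CLI; a quick sketch based on the XCP-ng USB passthrough docs (all UUIDs are placeholders):

```shell
# List physical USB devices and whether passthrough is enabled for them
xe pusb-list

# List virtual USB attachments; after the VM reboot the entry may be gone
xe vusb-list

# Re-enable passthrough on the physical device and re-attach it to the VM
xe pusb-param-set uuid=<pusb-uuid> passthrough-enabled=true
xe vusb-create usb-group-uuid=<usb-group-uuid> vm-uuid=<vm-uuid>
```

Comparing the pusb-list/vusb-list output before and after the VM reboot should show whether the VUSB record disappears.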
Regards
- Christof
-
RE: Question on CPU masking with qemu and xen
As I can't find it anywhere:
What exactly is happening on join/leave?
Is the masking changed at runtime, affecting every newly started VM, or does the pool need to be rebooted after an older server has been removed, to "unlock" all features?
I have a pool with HPE DL380 Gen 9 and 10, where the Gen 9 will be replaced with Gen 12.
So I assume it makes sense to remove 9 first, then add 12.
But how exactly is the masking process working?
Should I reboot the Gen 10 first (shutting down all VMs), or add the Gen 12 and move the VMs over... or just shut down and restart all currently running VMs?
In the early days (~XenServer 6) it had to be done manually, by getting the CPU features, calculating the common subset, and passing it as a parameter to the join command.
What I did not see, though, is how exactly it works on removal, and what the best practice is. (This should also be added to the documentation.)
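For anyone wanting to watch what the masking actually does: the feature sets can be read per host and at pool level; a sketch, assuming the cpu_info parameter is exposed on both objects (UUIDs are placeholders):

```shell
# CPU feature flags of a single host
xe host-param-get uuid=<host-uuid> param-name=cpu_info

# Effective (levelled) feature set of the pool
xe pool-param-get uuid=<pool-uuid> param-name=cpu_info
```

Comparing the two before and after a join/leave should show when the mask actually changes.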
-
RE: Booting to Dracut (I trusted ChatGPT)
@AtaxyaNetwork did you ever think about "upgrading" the system?
The installer should be able to detect and back up your current one, install the new one and migrate all relevant data.
That should exclude whatever home-brew-fiddlydiddly you did to your current installation.
The reason you're seeing 2 almost identical partitions is exactly that: the way upgrades work with XenServer/XCP-ng. One is your current system, the other one may contain a backup of a previous install (which can also be restored by the installer, if it exists).
As it's been mentioned: neither trust USB storage nor any AI without knowing what they suggest you do. But you may have learned your AI lesson: it can help, but it can end in a big pile of sh*t if you don't know what it's suggesting. It's more A than I.
-
RE: Citrix or XCP-ng drivers for Windows Server 2022
Small warning: XenTools are still a clusterfuck.
On (almost) all machines outside our main environment it keeps the IP configuration on the emulated Realtek NIC and never transfers it to the Xen device.
It's been broken for a long time now, but that doesn't matter if all VMs get their IPs via DHCP.
Currently struggling with: Windows Server 2016 + XenTools/Management 9.4.2.
At least it didn't break the whole VM by destroying the disk device drivers this round.
tl;dr: Snapshot your VMs and have documentation at hand when upgrading the tools/drivers.
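Following my own advice, the snapshot costs almost nothing; a minimal sketch (UUIDs and the name label are placeholders):

```shell
# Snapshot the VM right before upgrading the tools, so you can roll back
xe vm-snapshot uuid=<vm-uuid> new-name-label=pre-xentools-upgrade

# If the upgrade breaks the VM, revert to the snapshot
xe snapshot-revert snapshot-uuid=<snapshot-uuid>
```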
-
RE: Update Templates
@bikemuch I registered an XOA account to keep a few machines with XOA maintained. It's free, not much to lose.
Many of us are looking forward to a 9.0 release, including a modern 5.x or even 6.x kernel, with all the developments and improvements that have been made.
Also things like 16TB+ disks on block storage (incl. thin provisioning) etc.
If your company is running a bigger environment: you can put priorities on features as a paying customer.

-
RE: Update Templates
@bikemuch While you're technically not wrong about the impression, Linux distros especially usually work flawlessly with the former version's template.
I also saw that there's a user test running with updated templates, including Debian 13 etc.
It should be GA in about 2 weeks.
-
RE: Upgrade 8.2.1 -> 8.3 failed (manually fixed)
IIRC I just "tried again".
It failed 2 times, then I looked at the logs on the other console, removed the file (which shouldn't be of any importance for our instance) and retried without rebooting.
I copied the whole installer log to the USB stick before finishing the install.

(Could actually be a good hint, or even a menu option, for those where the install fails and won't leave it on the hard drive, e.g. when evaluating hardware)

[ 128.517356] ata1.00: exception Emask 0x0 SAct 0x800000 SErr 0x0 action 0x0
[ 128.517357] ata1.00: irq_stat 0x40000008
[ 128.517359] ata1.00: failed command: READ FPDMA QUEUED
[ 128.517362] ata1.00: cmd 60/80:b8:10:6c:d4/00:00:02:00:00/40 tag 23 ncq dma 65536 in res 41/40:10:80:6c:d4/00:00:02:00:00/00 Emask 0x409 (media error) <F>
[ 128.517363] ata1.00: status: { DRDY ERR }
[ 128.517364] ata1.00: error: { UNC }
[ 128.518008] ata1.00: configured for UDMA/133
[ 128.518018] sd 0:0:0:0: [sda] tag#23 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 128.518020] sd 0:0:0:0: [sda] tag#23 Sense Key : Medium Error [current]
[ 128.518021] sd 0:0:0:0: [sda] tag#23 Add. Sense: Unrecovered read error - auto reallocate failed
[ 128.518024] sd 0:0:0:0: [sda] tag#23 CDB: Read(10) 28 00 02 d4 6c 10 00 00 80 00
[ 128.518025] print_req_error: I/O error, dev sda, sector 47475840
[ 128.518039] ata1: EH complete
[ 128.581286] ata1.00: exception Emask 0x0 SAct 0x2000000 SErr 0x0 action 0x0
[ 128.581287] ata1.00: irq_stat 0x40000008
[ 128.581288] ata1.00: failed command: READ FPDMA QUEUED
[ 128.581291] ata1.00: cmd 60/08:c8:80:6c:d4/00:00:02:00:00/40 tag 25 ncq dma 4096 in res 41/40:08:80:6c:d4/00:00:02:00:00/00 Emask 0x409 (media error) <F>
[ 128.581292] ata1.00: status: { DRDY ERR }
[ 128.581293] ata1.00: error: { UNC }
[ 128.582111] ata1.00: configured for UDMA/133
[ 128.582117] sd 0:0:0:0: [sda] tag#25 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 128.582118] sd 0:0:0:0: [sda] tag#25 Sense Key : Medium Error [current]
[ 128.582119] sd 0:0:0:0: [sda] tag#25 Add. Sense: Unrecovered read error - auto reallocate failed
[ 128.582121] sd 0:0:0:0: [sda] tag#25 CDB: Read(10) 28 00 02 d4 6c 80 00 00 08 00
[ 128.582122] print_req_error: I/O error, dev sda, sector 47475840
[ 128.582133] ata1: EH complete
[ 128.629307] ata1.00: exception Emask 0x0 SAct 0x200 SErr 0x0 action 0x0
[ 128.629309] ata1.00: irq_stat 0x40000008
[ 128.629310] ata1.00: failed command: READ FPDMA QUEUED
[ 128.629313] ata1.00: cmd 60/08:48:80:6c:d4/00:00:02:00:00/40 tag 9 ncq dma 4096 in res 41/40:08:80:6c:d4/00:00:02:00:00/00 Emask 0x409 (media error) <F>
[ 128.629314] ata1.00: status: { DRDY ERR }
[ 128.629315] ata1.00: error: { UNC }
[ 128.630068] ata1.00: configured for UDMA/133
[ 128.630074] sd 0:0:0:0: [sda] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 128.630076] sd 0:0:0:0: [sda] tag#9 Sense Key : Medium Error [current]
[ 128.630077] sd 0:0:0:0: [sda] tag#9 Add. Sense: Unrecovered read error - auto reallocate failed
[ 128.630078] sd 0:0:0:0: [sda] tag#9 CDB: Read(10) 28 00 02 d4 6c 80 00 00 08 00
[ 128.630079] print_req_error: I/O error, dev sda, sector 47475840
[ 128.630092] ata1: EH complete

Indeed it looks like the SSD should be replaced.
8.3 is running stable on this (and all other hosts I upgraded) so far.
It's a system at a UAS, running various student projects for several years now, originally coming from XenServer. I maintain it voluntarily. Thx for the hint!
-
RE: Windows 2025 Standard 24H2.11 (iso release of sept 25) crash on reboot with "INACCESSIBLE BOOT DEVICE 0x7B" in XCP 8.2.1 and XCP 8.3
@dinhngtu I can't say on XCP-ng side, but it's likely linked to:
the August patch (and following ones), as Microsoft changed something in the NVMe stack.
Google gives a lot of results about it. It seems it most likely doesn't kill NVMes, but can cause trouble.
We have a few PCs becoming more unstable (BSODs) or even very slow after that upgrade.
-
RE: Debian 9 virtual machine does not start in xcp-ng 8.3
I often wondered what's the general purpose of that option.
As I only have 1 - 2 socket servers, I always choose 1 socket with x cores (mostly 2 - 8, not exceeding 1 real CPU).
Also for historic reasons: sockets have been limited, but not cores.
Does it generally make any difference on the Xen side/backend?
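For reference, this is the knob I mean; a sketch with placeholder values, assuming the platform:cores-per-socket flag:

```shell
# Give the VM 8 vCPUs in total (VM must be halted)
xe vm-param-set uuid=<vm-uuid> VCPUs-max=8 VCPUs-at-startup=8

# Present them to the guest as 1 socket with 8 cores
xe vm-param-set uuid=<vm-uuid> platform:cores-per-socket=8
```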
The VM OS might handle it differently due to NUMA optimizations.
-
Upgrade 8.2.1 -> 8.3 failed (manually fixed)
After upgrading my first server successfully, I upgraded another one recently (different environments, no pool), but it failed.
I remembered from the forum to check the installer logs, so I copied the whole directory (in case it contains useful info) and switched to another console (ALT+F3?) to see where it failed.
I don't know if it's documented somewhere, but following that console is pretty informative, rather than just watching a progress bar.
Older Windows versions didn't hide what the installer is doing; it's a bit sad that XCP-ng "hides" that.
tl;dr: The problem seemed to be:
STANDARD ERROR:
cp: error reading '/tmp/primary-jqbXmQ/usr/lib64/python2.7/lib-dynload/_codecs_hk.so': Input/output error
cp: failed to extend '/tmp/backup-TbutMQ/usr/lib64/python2.7/lib-dynload/_codecs_hk.so': Input/output error
As it happened during the backup phase, nothing was broken and I could just retry... only to end up with the same problem.
As it looks like some Hong Kong locale codecs, I just removed the file and tried again: with success.
Backup ran through, the install/upgrade went fine, and the box has been running since.
I didn't back the file up, but with "ls" it looked fine, like everything else, and nobody ever touched that file. I can't say why, but wanted to drop it here for archival purposes. Maybe someone else stumbles over a close or similar problem.
As the logs and the other terminal give quite some information about the current actions, debugging was somewhat fun, and it was interesting to dig a bit into what the installer is actually doing. A big pro over Microsoft, which is often a big pain to debug.
If you want the whole install-log directory: I still have it, but will delete it in the next few days if not.
Greetings
- Christof
-
RE: First SMAPIv3 driver is available in preview
@nikade I found out the HPE MSA2060 has a full-flash bundle option, which is surprisingly cheap, so our SAN has 3.84 TB SAS SSDs (a rebuild is done within a few hours), but our backup server has a RAID6 with 10 TB HDDs.
-
RE: First SMAPIv3 driver is available in preview
@nikade It's also RAS. The risk of a 2nd disk failing during a rebuild is a lot higher than usual.
Our B2D2T server needs about 24 hours for that.
-
RE: First SMAPIv3 driver is available in preview
@olivierlambert said in First SMAPIv3 driver is available in preview:
Hi,
8.3 release is eating a lot of resources, so that's the opposite: when it's out, this will leave more time to move forward on SMAPIv3

Lots of work means lots of changes, which means I'm excited about it. It also sounds more like a 9.0, if that much work is going into it.

-
RE: First SMAPIv3 driver is available in preview
@hsnyder AFAIK every, even not-so-modern, RAID controller can do 'verification reads', 'disk scrubbing', or whatever they call it. It won't fix bitrot with single parity, but it can fix single and detect dual bit failures.
That's why the only option for our SAN is RAID6, or any DP algorithm respectively.
-
RE: First SMAPIv3 driver is available in preview
@Paolo If it's only for that: Any HW-RAID with DP should do the job. (in case you don't fully go for SW-RAID)
-
RE: First SMAPIv3 driver is available in preview
@bufanda RAM is mostly "required" if you go for things like online deduplication, as that needs to be handled in RAM.
I configured our servers to 12 GB memory for dom0 anyway, as it helps with overall performance. 3 GB can be pretty tight if you have a bunch of VMs running.
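In case someone wants to do the same: dom0 memory is set on the Xen command line and needs a host reboot. This follows the documented xen-cmdline helper; the 12 GiB value is just my choice:

```shell
# Raise dom0 memory to 12 GiB (takes effect after a host reboot)
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=12288M,max:12288M

# Verify the current setting
/opt/xensource/libexec/xen-cmdline --get-xen dom0_mem
```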
IIRC generally rather 8 GB are recommended nowadays (unless you have a rather small environment).
-
RE: First SMAPIv3 driver is available in preview
@olivierlambert I know the problem of a shared FS; the question I had was rather: does qcow2 or VHDX have benefits over the other? What are the pros/cons of choosing one?
Does it matter at all?
-
RE: First SMAPIv3 driver is available in preview
@still_at_work The question is technically wrong: it depends less on SMAPI and more on the "drivers" it'll be able to use.
Someone needs to implement something for thin provisioned shared storage that could handle it.
e.g. via GFS2 or something else.
You could make your own "adapter"/"driver" (I forgot what they call it) for it, like they did with ZFS.
-
RE: First SMAPIv3 driver is available in preview
@john-c as well as FC. Basically all shared storage that is production-ready.
What are the up/downsides of qcow2 vs. VHDX?