Hi, this XAPI plugin, multi, is called on another host but is failing with an IOError.
It does a few things on a host related to LVM handling.
It's failing on one of them; look at the host that raised the error to get the full error in its SMlog.
The plugin itself is located in /etc/xapi.d/plugins/on-slave, and the function is named multi.
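To find it quickly, grepping the log on the failing host should be enough (on XCP-ng hosts, SMlog lives in /var/log):
grep -B5 IOError /var/log/SMlog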

Posts
- RE: Issue with SR and coalesce
- RE: XCP-ng 8.3 updates announcements and testing
@ph7 It's only enabled for the two yum commands where --enablerepo is explicitly used.
It's disabled in the config otherwise.
No need to do anything.
- RE: Non-persistent disk
@elsif2 I'm not too sure, but I think it's the on-boot parameter on the VDI. It can have two values: persist and reset.
Maybe setting it to reset would be enough. I have not tried to use it myself, though.
You can set it like this:
xe vdi-param-set uuid=<VDI UUID> on-boot=reset
The VM the VDI is connected to needs to be offline.
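To double-check the value afterwards, the generic xe parameter syntax should do it:
xe vdi-param-get uuid=<VDI UUID> param-name=on-boot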
- RE: Moving VDIs - Am I doing it correctly
@buldamoosh Oh yeah, you can't disable CBT on a snapshot; it is inherited from the VDI it is a snapshot of.
You can vdi-destroy the snapshots instead of forgetting them.
But yeah, it is "normal" that the snapshots are not removed by the migration. CBT will be enabled by XO if you have NBD+CBT enabled on your backup job.
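For example, assuming you have the snapshot's VDI UUID at hand:
xe vdi-destroy uuid=<snapshot VDI UUID>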
- RE: Moving VDIs - Am I doing it correctly
@buldamoosh You just need to disable CBT on the VDI you want to move.
Are you not using XO to migrate the VDI? It should be doing this itself, from what I understood.
Otherwise, you can just disable CBT on the VDI with:
xe vdi-disable-cbt uuid=<VDI UUID>
before migrating the VDI.
- RE: Moving VDIs - Am I doing it correctly
@Greg_E No, it's because CBT is special. You don't need to remove backups, because they will be collected by the GC either way.
CBT needs to be disabled before moving a VDI (and, if needed, re-enabled on the new VDI after migration).
The CBT metadata VDI won't be removed automatically, though. I think XO automatically disables CBT before moving a VDI, but it does not automatically remove the CBT metadata VDI when it is no longer needed.
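Done by hand, the sequence looks roughly like this (a sketch: vdi-pool-migrate is the xe command for live storage migration of a single VDI; use the new VDI UUID for the re-enable if the migration produced a new VDI):
xe vdi-disable-cbt uuid=<VDI UUID>
xe vdi-pool-migrate uuid=<VDI UUID> sr-uuid=<destination SR UUID>
xe vdi-enable-cbt uuid=<VDI UUID>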
- RE: Multiple base copies taking up storage space
@McHenry Base copies are not copies per se, but the base of a VDI chain.
Here you can see that A and B are base copies for the snapshots C and E, while D is the active VDI.
You can find more info on this page: https://docs.xcp-ng.org/storage/#-coalesce
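If you want to look at the chains of an LVM-based SR yourself, something like this should work (same command as in the doc page above; adapt the volume group name to your SR UUID):
vhd-util scan -f -m "VHD-*" -l VG_XenStorage-<SR UUID> -p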
- RE: XCP-ng 8.3 betas and RCs feedback 🚀
@jhansen
Hello,
I created a thread about the NBD issue where VBDs are left connected to Dom0.
I added what I already know about the situation on my side.
Anyone observing the error can help us by sharing what they observed in the thread: https://xcp-ng.org/forum/topic/9864/vdi-staying-connected-to-dom0-when-using-nbd-backups
Thanks
- RE: Create a shared ISO SR - option to pass SMB version
@stormi So I looked into the code of the ISOSR, and there is a parameter for the version in the device-config.
The parameter is vers, but it can only be V1 or V3.
And if it can't connect with V3, it will also try with V1.
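For reference, passing it at SR creation would look something like this (a sketch: I'm writing the version value as 3.0 from memory, so double-check the expected format against ISOSR.py):
xe sr-create name-label="ISO library" type=iso content-type=iso shared=true device-config:location=//server/share device-config:type=cifs device-config:username=<user> device-config:cifspassword=<password> device-config:vers=3.0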
- RE: Create a shared ISO SR - option to pass SMB version
@dthenot Sorry, you were asking about an ISO SR.
In this case, it's in /opt/xensource/sm/ISOSR.py:appendCIFSMountOptions.
- RE: Create a shared ISO SR - option to pass SMB version
@stormi There is no way to do this currently.
You could add the option manually in /opt/xensource/sm/SMBSR.py:getMountOptions by appending it there to try.
That simple approach would add the parameter to all SMB SRs, though.
And it would be overridden by an sm update.
- RE: Can't pass multiple PCI devices
@sluflyer06 In your example, only your first BDF is correct.
In the other one, you put a dot instead of a colon.
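For the record, a full BDF is domain:bus:device.function, e.g. 0000:03:00.1. Passing several devices would then look something like this (a sketch from memory of the other-config:pci syntax, so double-check the index prefix):
xe vm-param-set uuid=<VM UUID> other-config:pci=0/0000:03:00.0,0/0000:04:00.0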
- RE: LargeBlockSR for 4KiB blocksize disks
Hello again,
It is now available in 8.2.1 with the testing packages; you can install them by enabling the testing repository and updating.
Available in sm 2.30.8-10.2:
yum update --enablerepo=xcp-ng-testing sm xapi-core xapi-xe xapi-doc
You then need to restart the toolstack.
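If I remember correctly, the usual helper for that is:
xe-toolstack-restart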
Afterwards, you can create the SR with the command in the post above.
- LargeBlockSR for 4KiB blocksize disks
Hello,
As some of you may know, there is currently a problem with disks with a blocksize of 4KiB: they cannot be used as an SR disk.
It is an error in the vhd-util utilities that is not easily fixed.
As such, we quickly developed a SMAPI driver using losetup's ability to emulate another sector size, to work around the problem for the moment. The real solution will involve SMAPIv3, for which the first driver is available to test: https://xcp-ng.org/blog/2024/04/19/first-smapiv3-driver-is-available-in-preview/
Back to the LargeBlock driver: it is available in 8.3 in sm 3.0.12-12.2.
Setting it up is as simple as creating an EXT SR with the xe CLI, but with type=largeblock:
xe sr-create host-uuid=<host UUID> type=largeblock name-label="LargeBlock SR" device-config:device=/dev/nvme0n1
It does not support using multiple devices because of quirks with LVM and the EXT SR driver.
It automatically creates a loop device with a sector size of 512B on top of the 4KiB device, and then creates an EXT SR on top of this emulated device.
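To give an idea of what happens under the hood, the emulation is similar to doing this by hand (an illustration, not the driver's actual code):
losetup -f --sector-size 512 --show /dev/nvme0n1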
This driver is a workaround; we have automated tests, but they can't catch everything.
If you have any feedback or problems, don't hesitate to share it here.
- RE: Get Local Disk WWID for Oracle ASM drive identification.
@James9103 Hello, sorry for the delayed answer.
scsi_id would not work on /dev/xvdX devices, since those are not SCSI-based.
They are xen-blkfront devices using the blkif protocol.
It took me a bit of time to answer since I don't know Oracle ASM. From what I can read in the ASM documentation, you only need a stable identifier for the disks.
Could you use some other kind of unique identifier?
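For instance, a filesystem or partition UUID is stable across reboots and works on xen-blkfront devices; something along these lines (just an idea, I haven't validated it against ASM's requirements):
blkid /dev/xvdb1
ls -l /dev/disk/by-uuid/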
That said, there is currently no way to obtain a unique identifier for the xen-blkfront devices themselves, but that is something we could try to do something about.
I will look into it a bit more.
- RE: PCI Passthrough back to dom0
@lotusdew Your second PCI address is wrong. You have a dot instead of a colon:
0000:03.00.1 -> 0000:03:00.1
- RE: Best CPU performance settings for HP DL325/AMD EPYC servers?
@s-pam

> I can't look at the dmesg today as I'm home with a cold...

I hope you get well soon!

> I did experiment with xl cpupool-numa-split but this did not generate good results for multithreaded workloads. I believe this is because VMs get locked to use only as many cores as there are in each NUMA domain.

Indeed, a VM in a pool gets locked to using only the cores of that pool, and its maximum number of VCPUs is the number of cores in the pool. It is useful if you need to isolate a VM completely.
You need to be careful when benchmarking these things, because the memory allocation of a running VM is not moved, while its VCPUs will run on the pinned node. I don't remember exactly whether cpupools behave differently from simple pinning in that case, though. I do remember that hard-pinning a guest's VCPUs definitely does not move its memory; you can only change this before booting.
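If you want to experiment with it anyway, the cpupool side is easy to drive from Dom0 with standard xl commands (the Pool-node0 name is what numa-split generates, if I recall correctly):
xl cpupool-numa-split
xl cpupool-list
xl cpupool-migrate <VM name> Pool-node0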
- RE: Best CPU performance settings for HP DL325/AMD EPYC servers?
@s-pam Damn, computers really are magic. I'm very surprised by these results.
Does NONUMA really mean that no NUMA info is exposed by the firmware?
I have no idea how the Xen scheduler uses this information. I do know that the memory allocator stripes the memory of the VM across all the nodes the VM is configured to be allocated on. If so, it would mean the scheduler does a good job of scheduling the VCPUs on nodes without even knowing the memory placement of the process currently running inside the guest.
Did you touch anything in the guest's config? Interesting results nonetheless. Can you share the memory allocation of the VM? You can obtain it with xl debug-keys u; xl dmesg from the Dom0.
- RE: Best CPU performance settings for HP DL325/AMD EPYC servers?
@olivierlambert @S-Pam Indeed, that's normal: Dom0 doesn't see the NUMA information, and the hypervisor handles the compute and memory allocation. You can look at the wiki about manipulating VM allocation on NUMA architectures if you want, but in normal use cases it's not worth the effort.
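For the curious, the hypervisor's view is visible from Dom0 with plain xl (the -n flag adds the NUMA topology to the output):
xl info -n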