Best posts made by vaewyn
-
RE: XOSTOR hyperconvergence preview
@olivierlambert I've understood that part... what I am wondering is: if I have 3 hosts in one data center and 3 hosts in another, and I have asked for a redundancy of 3 copies, is there a way to ensure all three copies are never in the same data center at the same time?
Latest posts made by vaewyn
-
RE: Can't designate new master on XO source pool
Further testing/playing... I detached 3 hosts into a new pool, and within that pool I can reassign the master at will with no issues.
-
RE: Can't designate new master on XO source pool
I have checked the hosts and they all have non-expired self-signed certificates with:
subject=CN = 10.10.48.152
issuer=CN = 10.10.48.152
matching their IP addresses.
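A quick way to check this on each host, assuming the stock XAPI certificate lives at /etc/xensource/xapi-ssl.pem (adjust the path if yours differs):
# print the subject, issuer and expiry date of the host's XAPI certificate
openssl x509 -in /etc/xensource/xapi-ssl.pem -noout -subject -issuer -enddate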
-
RE: Can't designate new master on XO source pool
@bleader This is a new, stock source install with no attempt to install local certs. In XO I have server certificate checking turned off for the connection. All other functions are working... migrations... monitoring... etc... I just can't change the pool master. Does that make sense? I can work on setting up and installing self-signed certificates across the board, but from my understanding I should already be in that state with the default install.
-
Can't designate new master on XO source pool
Edit to add version info: xcp-ng 8.3.0 with Xen Orchestra, commit 71fa8; Master, commit 32b3c
Have a working pool with several hosts... can migrate, etc... but trying to change the master always results in:
XapiError: INTERNAL_ERROR(Xmlrpc_client.Connection_reset)
    at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202512011349/packages/xen-api/_XapiError.mjs:16:12)
    at file:///opt/xo/xo-builds/xen-orchestra-202512011349/packages/xen-api/transports/json-rpc.mjs:38:21
    at runNextTicks (node:internal/process/task_queues:65:5)
    at processImmediate (node:internal/timers:453:9)
    at process.callbackTrampoline (node:internal/async_hooks:130:17)
Looking at the host I am trying to change to I see:
Dec 16 21:39:13 is-r10-wbxcptest01 xapi: [debug||15 HTTPS 10.10.48.245->:::80|pool.designate_new_master R:e89ae3e02f50|stunnel] 2025.12.16 21:39:13 LOG5[0]: Service [stunnel] accepted connection from unnamed socket
Dec 16 21:39:13 is-r10-wbxcptest01 xapi: [debug||15 HTTPS 10.10.48.245->:::80|pool.designate_new_master R:e89ae3e02f50|stunnel] 2025.12.16 21:39:13 LOG5[0]: s_connect: connected 10.10.48.248:443
Dec 16 21:39:13 is-r10-wbxcptest01 xapi: [debug||15 HTTPS 10.10.48.245->:::80|pool.designate_new_master R:e89ae3e02f50|stunnel] 2025.12.16 21:39:13 LOG5[0]: Service [stunnel] connected remote server from 10.10.48.151:48190
Dec 16 21:39:13 is-r10-wbxcptest01 xapi: [debug||15 HTTPS 10.10.48.245->:::80|pool.designate_new_master R:e89ae3e02f50|stunnel] 2025.12.16 21:39:13 LOG3[0]: SSL_connect: ssl/record/rec_layer_s3.c:1544: error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure
Dec 16 21:39:13 is-r10-wbxcptest01 xapi: [debug||15 HTTPS 10.10.48.245->:::80|pool.designate_new_master R:e89ae3e02f50|stunnel] 2025.12.16 21:39:13 LOG5[0]: Connection reset: 0 byte(s) sent to TLS, 0 byte(s) sent to socket
Dec 16 21:39:13 is-r10-wbxcptest01 xapi: [debug||15 HTTPS 10.10.48.245->:::80|pool.designate_new_master R:e89ae3e02f50|xapi_pool_transition] Phase 1 aborting, caught exception: INTERNAL_ERROR: [ Xmlrpc_client.Connection_reset ]
HTTPS to a port 80 connection... sslv3? This seems quite wrong. Anyone run into this that didn't put the symptoms where Google could find them?
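For anyone hitting the same thing, a quick way to reproduce the handshake stunnel is choking on is a probe like this (a sketch; run it from the host showing the error, against the same 10.10.48.248:443 endpoint from the log, and adjust the address to match yours):
# probe the XAPI TLS endpoint the same way stunnel does and keep only the interesting lines
openssl s_client -connect 10.10.48.248:443 </dev/null 2>&1 | grep -E 'subject=|issuer=|alert|Verify return'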
-
RE: Migrating complicated windows fileservers from vmware quickly?
@rtjdamen Correct... in VMware, RDMs created quite a few issues, so they are all VMFS datastores with VMDKs inside them that are then exposed to the VM as drives.
Unfortunately, this is about the ONLY thing that RDMs would actually shine for.

-
RE: Migrating complicated windows fileservers from vmware quickly?
@rtjdamen They are currently VMware datastores (VMFS) that have drives on them (VMDKs) attached to the VMs, so I can easily expose the iSCSI targets, but neither Windows nor XCP-ng understands what to do with a datastore target from VMware.
-
Migrating complicated windows fileservers from vmware quickly?
We have several large file servers (each has 20+ disks that are 2-4 TB in size). VMware is currently connecting via iSCSI to the SAN they are on and exposing them as virtual disks to the VMs... and therein is the problem: VMFS.
I can't just attach those iSCSI targets to XCP-ng and have them show up as usable disks on the new system. I have room on other SANs or on XOSTOR for the new disks... but I can think of only one (sketchy) way to do a conversion that won't take DAYS of downtime to complete. I'm wondering, even if I changed datastores from the external SANs to vSAN, whether the migration process would even have a prayer of completing at all.
The only idea I have come up with, and yes... this seems VERY stupid, is to:
Expose some iSCSI targets from a SAN directly to the files server VM
Set up an OS RAID mirror for each disk
Let them complete mirroring in the OS
Remove the old drives from the mirror
Shutdown the VM
Migrate the VM to our XCP cluster
Bring up the VM
Add new disks coming from XOSTOR to the VM
Add the new disks as mirror members in the OS RAID mirror
Wait for the mirror to complete
Remove the iSCSI disks on the SAN from the mirror
Destroy the mirror and return it to a standard disk setup
Looking for feedback on my level of stupidity and wondering if someone has been down this road before. (A rough sketch of the mirror shuffle is below.)
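Purely as an illustration of the add/resync/remove pattern: if these were Linux guests, the disk swap would look roughly like the sketch below. Since these are Windows file servers the real steps would be mirrored volumes in Disk Management instead, and every device name here is hypothetical.
# assuming /dev/md10 is the guest's degraded mirror and /dev/xvdc is the new XOSTOR-backed disk
mdadm /dev/md10 --add /dev/xvdc                       # add the new disk; the mirror rebuilds onto it automatically
watch cat /proc/mdstat                                # wait for the rebuild to reach 100%
mdadm /dev/md10 --fail /dev/sdx --remove /dev/sdx     # then retire the old SAN-attached iSCSI member
-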
RE: XOSTOR Performance
@olivierlambert iodepth didn't change it much...
Read speeds are good, I'm seeing 1,113 MiB/s on both the raid0 and the single drive... so does SMAPIv1 have a limiting factor only on the writes?
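For reference, the read side is just the same fio command with --rw=write swapped for --rw=read, and a quick queue-depth sweep can be scripted like this (an illustrative sketch, not the exact commands from these runs):
# sequential-read version of the earlier job
fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=read --size=10g --numjobs=1
# sweep a few queue depths to see whether iodepth moves the write numbers at all
for d in 1 8 32 64; do
  fio --name=/mnt/a --direct=1 --bs=1M --iodepth=$d --ioengine=libaio --rw=write --size=10g --numjobs=1
done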
-
RE: XOSTOR Performance
@olivierlambert Well... that actually went both worse and better: 68.8 MB/s for the first test... but then 266 MiB/s when it was no longer "thin" on the second run.
This is still only 1/3 of "raw" performance, so... it's not a deal breaker, but man... having to build fake RAID devices and still coming in that slow, in comparison, is rough. I've seen some forum posts with waaaaaay better speeds... I'll need to do some searching and see if they have any "magic" for me.
My methodology was: I created 4 drives on XOSTOR and assigned them to the VM. Then, in the VM, I did:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/xvdb /dev/xvdc /dev/xvde /dev/xvdf
mkfs.xfs /dev/md0
mount /dev/md0 /mnt
fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=write --size=10g --numjobs=1
-
RE: XOSTOR Performance
@olivierlambert I created a 100 GB drive on my XOSTOR... mounted it in a VM on /mnt and ran:
fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=write --size=10g --numjobs=1
Which basically says to write a 10 GB file as fast as you can.
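For completeness, here is what each flag in that command does (the annotations are mine, not from the original post):
# --name=/mnt/a       job name; with no --filename given, fio derives the test file path from it
# --direct=1          O_DIRECT, bypass the guest page cache
# --bs=1M             1 MiB blocks, i.e. large sequential I/O
# --iodepth=32        keep up to 32 I/Os in flight (only effective with an async engine)
# --ioengine=libaio   Linux native asynchronous I/O
# --rw=write          sequential writes
# --size=10g          10 GiB written in total
# --numjobs=1         a single worker job
fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=write --size=10g --numjobs=1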