10Gb transfer speed
-
I am hoping I can get some help with this. I have been testing for a few days and nothing seems to be working. My setup is below.
vSphere 7 environment (VxRail, 4 hosts) connected via Brocade 10Gb switches. The management network is 192.168.1.0/24 and routes through a virtual Palo Alto sitting in the vSphere environment. VLANs are established for management, iSCSI, and the office network, all running over trunks on the Brocades.
New XCP-ng environment stood up: 6 x R715 with 2 DAC cables from each server to the Brocades, all ports configured as trunks, and MTU set to 9000 on the bond and VLAN VIFs in XCP-ng. Management IPs for the hosts and XOA (built from source) are on the 192.168.1.0/24 network. XCP-ng has a default SR connected to a TrueNAS via the iSCSI VLAN. I have validated with iperf3 from an XCP-ng host to the TrueNAS that I am seeing 9.7Gb/s speeds. The drives in the TrueNAS are a ZFS zvol with ARC and log SSDs, and the rest of the drives are twelve 7200 RPM 4TB drives.
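For reference, the bandwidth check was just the standard iperf3 pair; the hostname below is a placeholder for my storage IP:

```
# On the TrueNAS box: start the iperf3 server
iperf3 -s

# On an XCP-ng host: run a 30-second test against the storage IP
iperf3 -c truenas.storage.lan -t 30
```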
XOA was built with a management IP on the 192.168.1.0/24 network that routes via the virtual Palo Alto. When I go to import a VM from VMware, the max speed I am seeing is 135 Mbit/s (as seen in the Netdata app on TrueNAS SCALE), so a 100GB VM takes almost 3 hours to transfer.
I bring up the Palo Alto because I'm sure our license does not support 10Gb throughput, and if XOA has to connect to an ESXi node via the management interface to detect what VMs are on it, it probably transfers via that same connection. What is the best way to transfer VMs without having to build a different router that is not speed-limited by license restrictions?
-
Hi,
VMware import speed seems pretty normal; we have to make a lot of conversions on the fly. Adding @florent to the loop.
-
@olivierlambert yes, the direct import process is quite slow. In our lab it is also between 80-100Mbps per disk; it seems to be a limit of the VMware API we use here. Maybe you can transfer multiple VMs in parallel to reduce the total migration time, but be careful not to overload the VMware cluster with too many queries.
In some cases, it can be faster to export the VM as an OVA, copy it, and import it on XO's side.
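For example, VMware's ovftool can pull a powered-off VM from a host straight into an OVA; the host, credentials, and VM name below are placeholders:

```
# Export a VM from an ESXi host to a local OVA file
ovftool vi://root@esxi01.example.com/MyTestVM ./MyTestVM.ova
```

From there you can copy the OVA wherever you want and import it through the XO web UI.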
-
Also wanted to chime in here that I was seeing about the same speeds when doing some ESXi importing to XCP-ng as well; seems pretty normal to me, as @olivierlambert and @florent have already mentioned.
-
BTW, if any VMware wizard knows a faster way to extract the data, we are interested.
(nice to have: it doesn't require us to sacrifice any living being to an entity from another plane)
-
So I have been doing some testing. A question I am curious about: when you do the import from VMware, it just copies in the .vmdk, no conversions it seems. The file seems to have a label piece added in front of the .vmdk with [ESXI]. Is this flag a manual piece that can be added through the CLI in some way?
How I have set things up: I have a TrueNAS SCALE with a 10Gb iSCSI network. I added a second NIC to XOA, put it on the same iSCSI network, and selected that interface as the main interface. I have the iSCSI network added to all the XCP-ng hosts and mapped an NFS share over that 10Gb network as a storage device. I then went in and selected the 10Gb iSCSI network in my vSphere environment and added management to each VMkernel port. I have a Server 2019 Veeam box on the same 10Gb iSCSI network that I use to access the XOA GUI and also to access each ESXi host directly.

I have the TrueNAS NFS share mapped in XCP-ng to all hosts and also to one ESXi node for testing. In vSphere, I migrated a test VM from its main vSAN to the TrueNAS NFS share. I then went into the TrueNAS CLI and validated I could see that folder structure with all files in it. What I don't see is the folder and files when rescanning the NFS share in XOA, so I am assuming that when an import is done in XOA, it flags the files in a way that XOA can see them.
The speed is there; I am getting 1.2GB/s write speeds on my TrueNAS iSCSI ZFS mount.
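For reference, this is roughly the CLI equivalent of how I attached the NFS export as an SR; the server address and export path below are placeholders for my setup:

```
# Create a shared NFS SR over the 10Gb storage network
xe sr-create type=nfs shared=true content-type=user \
  name-label="truenas-nfs" \
  device-config:server=10.10.10.5 \
  device-config:serverpath=/mnt/tank/xcp-sr
```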
-
@bughatti the vmdk can be, in fact, multiple files (the base and a chain of deltas between each snapshot).
The formats are: raw for the -flat.vmdk, VMDK COWD for the deltas before ESXi 6.5, and SESparse for 6.5+.
There are a lot of differences between these and the VHD format used by Xen, and the translation is done without using any additional storage.

If you can afford to shut down the VM and have enough free storage on disk, you can export the vmdk from your VMware platform, convert it to VHD with `qemu-img convert`, and then load it via Xen Orchestra or `xo-cli`. That will be faster, but will need more resources and more manipulation on your end.
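A minimal sketch of that conversion, assuming the VM is shut down and the vmdk has been exported somewhere with enough free space; file names are placeholders, and `vpc` is simply what qemu-img calls the VHD format:

```
# Inspect the snapshot chain first (base plus deltas)
qemu-img info --backing-chain MyTestVM.vmdk

# Convert (and flatten) the VMDK into a single dynamic VHD
qemu-img convert -f vmdk -O vpc MyTestVM.vmdk MyTestVM.vhd
```
-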
@florent So I tried and converted a vmdk to a VHDX and dumped it in the NFS share; I rescanned the SR in XOA and XCP-ng Center.
I used Veeam to export it out as a VHDX.
No luck.
-
You need 2 conditions to do something like this:
- only VHD format is supported (not VHDX, which is different)
- the file MUST have the format `<uuid>.vhd` and be placed inside the SR folder (no sub-folders)
-
@olivierlambert So, a couple of things I am confused about.
First, can no folders exist in the NFS datastore at all for it to read .vhd files? I just created a brand new NFS datastore on TrueNAS SCALE, and when I attached it to XOA it created a folder called `ee832e6c-6974-f568-c236-9274307c40f1` in it.

Second, does the .vhd actually need to be named `<uuid>.vhd` like you stated, or can it be a `name.vhd`? I currently have the two VHDs below in the SR and neither XCP-ng Center nor XOA sees them:
`CasaOS_Disk_CasaOS.vhd`
`CasaOS_Disk_CasaOS_1.vhd`
-
That's exactly what I said. When you create an SR on an NFS path, it will create a new folder with a unique ID (UUID).
Then, all `.vhd` files must be there "flat" (no subfolder). Also, the only naming format that works is `<uuid>.vhd`, nothing else. You can generate a UUID pretty easily with `uuidgen`.
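A rough sketch of the whole dance, using the SR UUID from earlier in the thread and assuming XCP-ng mounts NFS SRs under /run/sr-mount/<sr-uuid> (check the actual mount path on your host):

```
# Generate a fresh UUID for the disk
DISK_UUID=$(uuidgen)

# Move the VHD into the SR folder, flat, named <uuid>.vhd
SR_UUID=ee832e6c-6974-f568-c236-9274307c40f1
mv CasaOS_Disk_CasaOS.vhd /run/sr-mount/$SR_UUID/$DISK_UUID.vhd

# Rescan so the SR picks up the new disk
xe sr-scan uuid=$SR_UUID
```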