Posts
-
RE: VM migration time
Is the VM on shared storage?
Storage migration is quite slow, but if the VM is on shared storage the migration is usually pretty quick; we've seen about 7 Gbit/s on a 10G network with MTU 9000.
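If you want jumbo frames like that, here is a minimal sketch with the xe CLI (just an illustration, not necessarily how our network is set up; the network UUID is a placeholder, the switch ports and the storage box need MTU 9000 as well, and the PIFs need a re-plug or a host reboot before the new MTU applies):
```
# Find the network that carries the storage/migration traffic
xe network-list params=uuid,name-label,MTU
# Set jumbo frames on it (placeholder UUID); re-plug the PIFs or reboot the hosts afterwards
xe network-param-set uuid=<storage-network-uuid> MTU=9000
```
-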
RE: Moving management network to another adapter and backups now fail
@syscon-chrisl OK that's another story, totally weird.
It should totally work if the networking is correct; we've done this at work many times (migrating from a 1G to a 10G NIC, for example). -
RE: Moving management network to another adapter and backups now fail
@syscon-chrisl OK - So you just changed the mgmt interface to another physical NIC?
That should definitely work. Did you have any IP configuration on any other interface? -
RE: Moving management network to another adapter and backups now fail
Changing the mgmt interface on a pool isn't something I have much experience with, but on a standalone host it is rather easy:
Make sure that the new interface is connected and has the correct VLAN. If you're using DHCP it will automatically grab an IP; if not, you'll have to go on the console and configure the IP manually after changing the interface.
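One way to do it from the CLI, as a rough sketch (the PIF UUID is a placeholder and DHCP is just the example mode; xsconsole works too if you prefer menus):
```
# List the physical interfaces and note the UUID of the new NIC
xe pif-list params=uuid,device,IP,management

# Give the new PIF an address (DHCP here; use mode=static with IP=, netmask= and gateway= otherwise)
xe pif-reconfigure-ip uuid=<new-pif-uuid> mode=dhcp

# Move the management interface over to the new PIF
xe host-management-reconfigure pif-uuid=<new-pif-uuid>
```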
-
RE: Moving management network to another adapter and backups now fail
One thing I learned over the years with XenServer and now XCP-NG:
Don't change anything unnecessarily; if you have a pool, all hosts need to have the same NIC specs; make sure to sync the clock with NTP; don't use DHCP; and don't mess with iptables or similar, since it will mess up XAPI.
From the error message I'd say you have missed some vital part. What does a traceroute or ping from the XCP-ng host to the destination look like? Can you telnet to the destination and port?
I know the traceroute binary isn't present, but you can install it from the EPEL repos.
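Something along these lines from the dom0 shell, with the destination host and port as placeholders:
```
# Basic reachability towards the backup destination
ping -c 4 <destination-host>
# traceroute isn't shipped by default; as noted above it can be pulled from the EPEL repo
yum install -y traceroute --enablerepo=epel
traceroute <destination-host>
# Port check (telnet may also need installing first)
telnet <destination-host> <port>
```
-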
RE: How to pass MAC address to pfSense VM
@fred974 said in How to pass MAC address to pfSense VM:
@nikade do you use 2 IP subnets? One for xcp-ng management + ssh and another extra that you use for the VMs, or is it all 1 single subnet? Also, did you request a separate MAC on the Hetzner backend or leave it as is?
@nikade said in How to pass MAC address to pfSense VM:
We then enabled forwarding on the xcp-ng host and setup the first usable IP as "local" subnet
How did you achieve this? Thank you very much in advance. I really appreciate the feedback.
1 IPv4 on the dom0, the one that came with the server. Then we ordered a /29 IPv4 subnet which is routed to the dom0 IPv4 address.
You then need to set up IPv4 forwarding on the dom0 by editing the sysctl config, and then set up the 1st usable IPv4 from the /29 subnet on an internal interface. I just created a dummy VLAN interface and put it there and that worked fine.
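Roughly along these lines on the dom0 - a sketch only, where 203.0.113.64/29 stands in for the routed Hetzner subnet and <internal-interface> for whatever dummy VLAN/internal interface you put the gateway IP on (note that the ip addr command on its own doesn't persist across reboots):
```
# Enable IPv4 forwarding now and persist it for future boots
sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf

# Put the first usable IP of the routed /29 on the internal interface;
# the VMs (pfSense etc.) then use this address as their default gateway
ip addr add 203.0.113.65/29 dev <internal-interface>
```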
-
RE: How to pass MAC address to pfSense VM
We've used Hetzner many times with xcp-ng and pfSense; we ordered a subnet which was routed to the xcp-ng host.
We then enabled forwarding on the xcp-ng host, set up the first usable IP as the "local" subnet, and then we were able to use the rest of the IPs on our VMs using that first IP as the default gateway. -
RE: Backup / Migration Performance
@planedrop said in Backup / Migration Performance:
@KPS Regarding the 2TiB limitation, it'll definitely be nice when we have SMAPIv3 so we can go over this, but it's worth noting that IMO no VMs should be larger than this anyway. Generally speaking, if you need that kind of space it'd be better to just use a NAS/iSCSI setup. Something like TrueNAS can deliver that at high speed, and then handle its own backups and replication of it.
I know most probably already know this, and all environments are different (I manage one that requires a 7TiB local disk, at least for the time being, plan is to migrate it to a NAS once the software vendor supports it), but it's worth noting anytime I see the 2TiB limit come up, ideally it should be architected around so the VMs are nimble.
I do something similar w/ a pretty massive SMB share and TrueNAS can back this up at whatever speed the WAN can handle, in my case 2 gigabits and it'll maintain that 2 gigabit upload for 8+ hours without slowing down. (and I'm confident even 10 gigabit would be possible with this box)
We have one exception and that is for the Windows file servers which are backing our DFS.
Apart from those we don't allow VMs larger than 1 TB, and if they're that big we do not back them up because it usually breaks and causes all kinds of problems. -
RE: What should i expect from VM migration performance from Xen-ng ?
@Greg_E yeah I can imagine. I don't think the RAM does much except give you some "margins" under heavier load. With 12 VMs I wouldn't bother increasing it from the default values.
Yeah, the mgmt interface in ESXi is nice; I think the standalone interface is almost better than vCenter. I'm pretty sure XO Lite will be able to do more when it is done, for example you'll be able to manage a small pool with it.
-
RE: Issue installing latest pfSense Plus (24.03 release)
Googling the error "supervisor read data page not present" gives a lot of hints towards bad memory. Are you using ECC or non-ECC?
-
RE: What should i expect from VM migration performance from Xen-ng ?
@Greg_E said in What should i expect from VM migration performance from Xen-ng ?:
I'm doing upgrades on both XCP-NG and TrueNAS; when they are done I have 4 small VMs that I'm going to fire up at the same time and see what I can see. I might be able to hit 2gbps reads while all 4 are booting.
I have one last thing to try in my speed testing: going to see if increasing the RAM for each host makes a difference. Probably not, since each already has a decent amount (defaults go up when you pass a certain amount of available RAM). I read somewhere that going up to 16GB for each host might help; with my lab being so lightly used, I might try going up to 32GB for fun. I have lots of slots in my production system where I can add RAM if this helps in the lab; the lab machines are full.
We usually set our busy hosts to 16 GB for the dom0 - it does make a stability difference in our case (we could have 30-40 running VMs per host), especially when there is a bit of load inside the VMs.
Normal hosts get 4-8 GB of RAM depending on the total amount of RAM in the host: 4 GB on the ones with 32 GB, 6 GB for 64 GB, and 8 GB and upwards for the ones with 128 GB+.
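If you want to do the same, a rough sketch of one way on XCP-ng 8.x using the xen-cmdline helper (16 GiB here just to match the example above; the host needs a reboot before it takes effect):
```
# Raise the dom0 memory allocation to 16 GiB (applied at the next boot)
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=16384M,max:16384M
# After the reboot, verify from inside the dom0
free -m
```
-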
RE: Issue installing latest pfSense Plus (24.03 release)
@nikade they have always been created using the "other" template.
The original VMs had 1 vCPU and 2 GB preset on the template. Aside from the values changed initially, I didn't make changes to the memory/resources.
NIC: 2 NICs (one for WAN, another for LAN). Initially they always have the "Realtek" driver, so all I did on one test was changing that to Intel e1000. But there was no change to the outcome.
Hi,
Can you go to the "Advanced" tab of the VM and show the memory settings?
I want to make sure you're not using dynamic memory since it is really unstable in Xen.
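You can also check it from the CLI if that's easier; a hedged sketch with the xe CLI, where the VM UUID and the 4 GiB value are placeholders:
```
# Dynamic memory is in use when dynamic-min differs from dynamic-max
xe vm-param-get uuid=<vm-uuid> param-name=memory-dynamic-min
xe vm-param-get uuid=<vm-uuid> param-name=memory-dynamic-max

# Pin all four limits to the same value to make the memory static
xe vm-memory-limits-set uuid=<vm-uuid> static-min=4GiB dynamic-min=4GiB dynamic-max=4GiB static-max=4GiB
```
-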
RE: Issue installing latest pfSense Plus (24.03 release)
What happens if you create a new VM with the template "other" and attempt to make a clean installation? Do you get the same error?
Also, what kind of configuration have you given the machine? For example NIC? RAM? Dynamic RAM or static?
-
RE: What should i expect from VM migration performance from Xen-ng ?
@Greg_E said in What should i expect from VM migration performance from Xen-ng ?:
I've spent a bunch of time trying to find some dark magic to make the VDI migration faster, so far nothing. My VM (memory) migration is fast enough that I'm not concerned right now, and I don't have any testing to show for it.
Currently migrating the test VDI from storage1 to storage2 (again) and getting an average of 400/400 mbps (lower case m and b). If I do three VDIs at once, I can get over a gigabit and sometimes close to 2 gigabit.
It's either SMAPIv1 or it is a file "block" size issue, bigger blocks can get me benchmarks up to 600MBps to almost 700MBps (capital M and B) on my slow storage over a 10gbps network. Testing this with XCP-NG 8.3 release to see if anything changed from the Beta, so far all is the same. Also all testing done with thin provisioned file shares (SMB and NFS). If I could get half my maximum tests for the VDI migration, I'd be happy. In fact I'm extremely pleased that my storage can go as fast as it is showing, it's all old stuff on SATA.
I have a whole thread on this testing if you want to read more.
You can see the migrate which was 400/400 and then the benchmark across the ethernet interface of my Truenas, this example was migrate from SMB to NFS, and benchmark on the NFS. Settings for that NFS are in the thread mentioned and certainly my fastest non-real world performance to date.
That's impressive!
We're not seeing speeds as high as you are; we have 3 different storages, mostly doing NFS though. We're still running 8.2.0 but I don't really think it matters, as the issue is most likely tied to SMAPIv1.
We also noted that it goes a bit faster when doing 3-4 VDIs in parallel, but the individual speed per migration is about the same.
-
RE: Rolling Pool Update - host took too long to restart
@tuxpowered said in Rolling Pool Update - host took too long to restart:
@nikade Both clusters have shared storage. Kind of a pre-req to have a cluster
Yes, both systems were online and working well. One is TrueNAS SCALE and the other is a QNAP. Both massively overkill systems.
Yeah, I was hoping that you were using shared storage, but I've actually seen people using clusters/pools without shared storage so I felt I had to ask
-
RE: What should i expect from VM migration performance from Xen-ng ?
@henri9813 said in What should i expect from VM migration performance from Xen-ng ?:
Hello,
Thanks for your answer @nikade
I migrate both the storage and the VM.
I have a 25Gb/s NIC, but I have a 5Gb/s limitation at the switch level.
But I found on another topic an explanation about this, this could be related to SMAPIv1.
Best regards
Yeah, when migrating storage the speeds are pretty much the same no matter what NIC you have...
This is a known limitation and I hope it will be resolved soon. -
RE: Rolling Pool Update - host took too long to restart
@tuxpowered just to make sure, you're using the "Rolling pool update" button, right?
And then the master is patched, VMs migrated off and then no bueno, correct?
Did you happen to have any shared storage in this pool or is all storage local storage?
-
RE: Please review - XCP-ng Reference Architecture
@john-c said in Please review - XCP-ng Reference Architecture:
@nikade said in Please review - XCP-ng Reference Architecture:
@john-c said in Please review - XCP-ng Reference Architecture:
TrueSecure
What's that? Never heard of TrueSecure on TrueNAS.
It's an application or feature for TrueNAS Scale, TrueNAS Core and/or TrueNAS Enterprise. Which enables the enabling and configuration of security features of TrueNAS instances (software and/or hardware).
Alright - I didn't know that, thanks for the info.
-
RE: Please review - XCP-ng Reference Architecture
@john-c said in Please review - XCP-ng Reference Architecture:
TrueSecure
What's that? Never heard of TrueSecure on TrueNAS.
-
RE: Please review - XCP-ng Reference Architecture
@TS79 I don't think it really matters; we run ours in one of our pools and we've been doing that since 2016 without any issues.