Citrix Hypervisor 8.0 landed
@olivierlambert .. Thanks for the heads-up that XS 8.0 has landed .. and now we must wait for XCP-ng 8 ... patiently .. hopefully not too long
So the core changes:
- Kernel version: Linux 4.19
- Xen hypervisor version: 4.11
- Control domain operating system version: CentOS 7.5
- Guest UEFI boot
- Virtual disk images larger than 2 TiB on GFS2 SR
- Disk and memory snapshots for vGPU-enabled VMs
I'm concerned that the Citrix XS 8.0 docs state that support for "legacy" CPUs has been dropped, yet those same legacy CPUs are listed as supported on the HCL.
The following processors are now supported in Citrix Hypervisor 8.0:
Xeon 82xx/62xx/52xx/42xx/32xx CascadeLake-SP
The following legacy processors are no longer supported in Citrix Hypervisor 8.0:
- Opteron 13xx (Budapest)
- Opteron 23xx/83xx (Barcelona)
- Opteron 23xx/83xx (Shanghai)
- Opteron 24xx/84xx (Istanbul)
- Opteron 41xx (Lisbon)
- Opteron 61xx (Magny Cours)
- Xeon 53xx (Clovertown)
- Xeon 54xx (Harpertown)
- Xeon 55xx (Nehalem)
- Xeon 56xx (Westmere-EP)
- Xeon 65xx/75xx (Nehalem-EX)
- Xeon 73xx (Tigerton)
- Xeon 74xx (Dunnington)
The XS HCL, meanwhile, shows that the following Intel CPUs are supported:
The following legacy drivers will be deprecated:
Citrix continues to support them in this release but they will be removed in a future Current Release.
Can someone clarify, or am I just misreading the docs? If they've dropped support for CPUs such as the Xeon 56xx Westmere in XS 8.0, can XCP-ng add those back in? Both my XCP-ng hosts are running Intel Xeon 56xx CPUs, and I know a lot of others are as well.
So, none of these will be supported in 8.x+.
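If you want a quick way to see which camp a host falls into, you can grep the model name out of `/proc/cpuinfo` and match it against the dropped families. A rough sketch (the `is_dropped_cpu` helper and its patterns are hand-written from the release-notes list above, not any official Citrix check, and only cover the Nehalem/Westmere Xeons):

```shell
# Rough sketch: match a CPU model string against a few of the Xeon
# families Citrix lists as dropped in 8.0 (not exhaustive).
is_dropped_cpu() {
    case "$1" in
        *X55[0-9][0-9]*|*E55[0-9][0-9]*|*L55[0-9][0-9]*) echo dropped ;;  # Xeon 55xx Nehalem
        *X56[0-9][0-9]*|*E56[0-9][0-9]*|*L56[0-9][0-9]*) echo dropped ;;  # Xeon 56xx Westmere-EP
        *) echo ok ;;
    esac
}

# On a live host you'd feed it the real model name:
#   is_dropped_cpu "$(grep -m1 'model name' /proc/cpuinfo)"
is_dropped_cpu "Intel(R) Xeon(R) CPU X5650 @ 2.67GHz"   # prints "dropped"
```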
I do have to chuckle a bit at the "experimental feature" in an "Official Release Version" from a company like Citrix. Perhaps they are feeling some pressure from XCP-ng.
If your screenshot is complete, I notice that they don't officially support the current Xeon E-21xx Series aka Coffee Lake.
"Not supported", btw, doesn't mean they won't work; it just means you fall back to "best effort" support if a problem occurs.
@cg I have a feeling that it may not actually even boot without messing with grub configuration if my experience serves me right.
@jcpt928 regarding the old Xeons or the current ones?
I may have the chance to test an HPE DL20 with an E-21xx aka Coffee Lake in a few weeks, once I have a plan on what to do with/for a customer (probably XCP-ng; test whether USB passthrough works with those dongles...).
@cg Well, I know I've had to make grub modifications even up to E3 v2 CPUs, including ones as old as the L-, E-, and X-series 56xx CPUs, using the latest version of XCP-ng. I've also had to do so on first- through third-generation Core CPUs on a couple of occasions. I imagine 8.0 will be even more limited; VMware takes the same, more aggressive, approach.
What can I help to make XCP-ng 8.0 available sooner?
@stormi has a nice TODO list right now, but I'm sure a lot of testing will be involved, so we'll keep you posted as soon as we have a testable ISO, even if it's an alpha
Adding mirrors could help spread the load for netinstall, or for `yum update` once it's out, too.
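For anyone curious what pointing a host at a mirror looks like in practice, a hypothetical sketch follows. The repo id, name, and `baseurl` are made-up placeholders, not real XCP-ng mirror names:

```shell
# Hypothetical sketch: point yum at a closer mirror by dropping in a
# .repo file. All names and URLs below are placeholders.
cat > /tmp/xcp-ng-mirror.repo <<'EOF'
[xcp-ng-base-mirror]
name=XCP-ng Base (mirror)
baseurl=http://mirror.example.com/xcp-ng/8.0/base/x86_64/
enabled=1
gpgcheck=1
EOF
# On a real host this file would go in /etc/yum.repos.d/ instead of /tmp.
```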
I hope XCP-ng will support legacy CPUs as I have E5-24XX series servers
I guess they will work fine, but as they are not officially supported ...
Regarding the news of this version:
- Kernel version: Linux 4.19
- Xen hypervisor version: 4.11
- Control domain operating system version: CentOS 7.5
- Guest UEFI boot (only for Windows?)
- Virtual disk images larger than 2 TiB on GFS2 SR (already there before, SMAPIv3/qcow2, I don't see the point?)
- Disk and memory snapshots for vGPU-enabled VM (hard to test here)
Note that it could have been Xen 4.12 or CentOS 7.6, but still, it's far more recent than the content of 7.6!
We'll see if we can bundle Xen 4.12 in an experimental repo in the future, but we know that the ABI breaks, so we'll need a more recent XAPI too, which can be difficult and will require loads of tests anyway.
About UEFI: it's not completely open source, but the license inside says we can redistribute it. However, we'd like to have something really open, i.e. with the sources. So we'll see.
@xisco I suppose it's only meant in terms of Citrix support and not the fact "it doesn't work". Note that if it works, we (via XCP-ng Pro support) will support it and do our best to assist if you have a problem.
Virtual disk images larger than 2 TiB on GFS2 SR (already there before, SMAPIv3/qcow2, I don't see the point?)
Is there a real implementation of SMAPIv3 that people can use today (via XAPI)? Or is it going to be released in 8.0?
VHD (NFS, EXT, LVM, LVMoISCSI, LVMoHBA) has a 2 TiB restriction. I'm sure the community would benefit from larger-than-2 TiB virtual disks on these SR types.
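That ~2 TiB ceiling falls out of the VHD format itself: as I understand the spec, block offsets are 32-bit sector numbers over 512-byte sectors, so the addressable maximum works out to:

```shell
# VHD addresses data in 512-byte sectors using 32-bit offsets,
# so the format can address at most 2^32 * 512 bytes.
max_bytes=$(( (1 << 32) * 512 ))
echo "$max_bytes bytes"              # 2199023255552
echo "$(( max_bytes >> 40 )) TiB"    # exactly 2 TiB
```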
I don't think GFS2 will be fully open source and available to all.
GFS2 has been using SMAPIv3 since 7.5. I wonder how they can sell it given the very, very poor performance (it was catastrophic in 7.5). But as you said, the Citrix implementation isn't open source.
We ran some tests and managed to get `ext4` working on SMAPIv3. However, performance was so low that we waited for a new release (also because some parts of SMAPIv3 are open source BUT we don't have access to the dev branch).
We've started benchmarking on the latest Citrix release; if we get decent performance, expect the first drivers soon. However, this will still be considered experimental because there are a lot of restrictions: migration to legacy SRs, no deltas, etc.
@olivierlambert So you are saying that in future releases we may have ext4 support over iSCSI? (i.e. real VHD files instead of LVM over iSCSI)?
No, I never said that: don't mix up SMAPIv3 and shared block storage. SMAPIv3 is "just" a brand-new storage stack allowing far more flexibility thanks to its architecture.
Sharing a block device across multiple hosts is a completely different story. You can use LVM (but you'll end up with thick-provisioned storage), or a shared filesystem like GFS2/OCFS, plus a lock manager (Citrix uses corosync).
`ext4` on top of iSCSI is easy… as long as you have one host. Because with more than one, `ext4` isn't a "cluster-aware" filesystem.
UEEEEEEEEEEEEEEE Kernel 4.19????
Wow! This means this kernel already natively supports the full Ceph client feature set.
This means no feature downgrade on the server side.
This means a HUGE step forward.
I'm about to take over again the project this month.
Very good news in the air!
This will probably help with connecting to Ceph; however, the performance level is unknown.
I've seen @stormi's to-do list.
Seems very goal-oriented.