Citrix Hypervisor 8.0 landed
-
What can I do to help make XCP-ng 8.0 available sooner?
-
@stormi has a nice TODO list right now, but I'm sure a lot of testing will be involved, so we'll keep you posted as soon as we have a testable ISO, even if it's only an alpha.
Adding mirrors could help to spread the load for netinstall or `yum update` when it's out, too.
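If mirrors happen, pointing a host at one should just be a yum repo override; a minimal sketch, assuming a hypothetical mirror URL and repo layout:

```
# Hypothetical example: override the base repo with a mirror.
# The URL and repo id below are made up, not a real mirror.
cat > /etc/yum.repos.d/xcp-ng-mirror.repo <<'EOF'
[xcp-ng-base-mirror]
name=XCP-ng Base (mirror)
baseurl=https://mirror.example.org/xcp-ng/8.0/base/
enabled=1
gpgcheck=1
EOF
yum update
```
-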
@cheese you can also have a look at our open issues at https://github.com/xcp-ng/xcp/issues and see if there's anything where you can help.
-
I hope XCP-ng will support legacy CPUs, as I have E5-24XX series servers.
I guess they will work fine, but they are not supported officially ... -
Regarding the news of this version:
- Kernel version: Linux 4.19
- Xen hypervisor version: 4.11
- Control domain operating system version: CentOS 7.5
- Guest UEFI boot (only for Windows?)
- Virtual disk images larger than 2 TiB on GFS2 SR (already there before with SMAPIv3/qcow2, so I don't see the point?)
- Disk and memory snapshots for vGPU-enabled VM (hard to test here)
Note that it could have been Xen 4.12 or CentOS 7.6, but still, it's far more recent than the content of 7.6!
We'll see if we can bundle Xen 4.12 in an experimental repo in the future, but we know that the ABI breaks, so we'll need a more recent XAPI too, which can be difficult and will require loads of tests anyway.
About UEFI: it's not completely Open Source, but the license inside says we can redistribute it. However, we'd like to have something really open, i.e. with the sources. So we'll see.
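Once an ISO is out, checking the shipped versions from dom0 is quick; these are standard commands, nothing XCP-ng-specific assumed:

```
# Verify the announced component versions from the control domain
uname -r                      # dom0 kernel, expected 4.19.x
xl info | grep xen_version    # hypervisor, expected 4.11.x
cat /etc/centos-release       # dom0 base OS, expected CentOS 7.5
```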
-
@xisco I suppose it's only meant in terms of Citrix support, not that "it doesn't work". Note that if it works, we (via XCP-ng Pro Support) will support it and do our best to assist if you have a problem.
-
@olivierlambert said in Citrix Hypervisor 8.0 landed:
Virtual disk images larger than 2 TiB on GFS2 SR (already there before, SMAPIv3/qcow2, I don't see the point?)
Is there a real implementation of SMAPIv3 that people can use today (via XAPI)? Or is it going to be released in 8.0?
VHD (NFS, EXT, LVM, LVMoISCSI, LVMoHBA) has a 2 TiB restriction (quick check below). I'm sure the community will benefit from virtual disks larger than 2 TiB on these SR types.
I don't think GFS2 will be fully open source and available to all.
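For reference, the restriction is easy to reproduce with the stock `xe` CLI on any VHD-based SR; the SR UUID below is a placeholder:

```
# Try to create a VDI past the VHD limit on a VHD-based SR.
# <sr-uuid> is a placeholder for your SR's UUID.
xe vdi-create sr-uuid=<sr-uuid> name-label=big-disk \
   virtual-size=3TiB type=user
# This fails on VHD-based SRs: the VHD format caps a virtual
# disk at a bit under 2 TiB.
```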
-
GFS2 has been using SMAPIv3 since 7.5. I wonder how they can sell it given the very, very poor performance (it was catastrophic in 7.5). But as you said, the Citrix implementation isn't Open Source.
We made some tests and managed to get `ext4` working on SMAPIv3. However, performance was so low that we waited for a new release (also because some parts of SMAPIv3 are Open Source BUT we don't have access to the dev branch). We have started to benchmark the latest Citrix release; if we get decent performance, then expect the first drivers soon. However, this will still be considered experimental, because you have a lot of restrictions: no migration to legacy SRs, no delta, etc.
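For the benchmarks mentioned, something like a direct-I/O fio run is the usual way to compare SR backends; a sketch, with the device path and sizes as placeholders:

```
# Random-write benchmark against a disk on the SR under test.
# /dev/xvdb and the sizes are placeholders.
fio --name=sr-bench --filename=/dev/xvdb \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --size=10G
```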
-
@olivierlambert So you are saying that in future releases we may have ext4 support over iSCSI (so real VHD files instead of LVM over iSCSI)?
-
No, I never said that: don't mix up SMAPIv3 and shared block storage. SMAPIv3 is "just" a brand new storage stack allowing far more flexibility thanks to its architecture.
Sharing a block device between multiple hosts is another story entirely. You can use LVM (but you'll end up with thick-provisioned storage), or a shared filesystem like GFS2/OCFS plus a lock manager (Citrix uses corosync).
Having `ext4` on top of iSCSI is easy… as long as you have one host. With more than one, it breaks, because `ext4` isn't a "cluster aware" filesystem.
UEEEEEEEEEEEEEEE Kernel 4.19????
Wow! This means this kernel already natively supports the full Ceph client feature set.
This means no feature downgrade server-side. This means a HUGE step forward.
I'm about to take the project up again this month.
Very good news in the air!
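As a rough illustration of what the in-kernel client gives you (pool, image and monitor values are placeholders, and this assumes the usual Ceph client tooling is installed):

```
# Map an RBD image with the in-kernel client; names are placeholders.
rbd map vmpool/disk01 --id admin
# ...or mount CephFS directly via the kernel driver.
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret
```
-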
This will probably help to connect to Ceph; however, the performance level is still unknown.
-
I've seen the @stormi to-do list.
It seems very goal-oriented. -
Link?
-
@olivierlambert I would be willing to help with testing for this. I have a few 6TB WD Golds I could throw onto four older Fat Twin^2 nodes, and maybe do passthrough for the OSDs (slightly esoteric and small, but it could give baselines, if E5645s are still supported).
Currently they're just "collecting dust" inside a chassis; they used to be part of a 6x6TB RAIDZ2 ZFS pool that was retired for a 10x10TB RAIDZ2 pool (general storage + endpoint backups).
-
People are watching me, such honour and responsibility!
-
@stormi said in Citrix Hypervisor 8.0 landed:
People are watching me, such honour and responsibility!
I told you that the people of the forum are "the watchmen".
It's even easier if you have subscribed to notifications on the GitHub project.
-
@maxcuttins Quis custodiet ipsos custodes? ("Who watches the watchmen?")
@stormi The honor is ours
-
I would also like to be an alpha/beta tester. I have HP and Dell blades and assorted Dell servers.
Best regards.