Citrix Hypervisor 8.0 landed
-
GFS2 has used SMAPIv3 since 7.5. I wonder how they can sell it given the very poor performance (it was catastrophic in 7.5). But as you said, the Citrix implementation isn't Open Source.
We ran some tests and managed to get `ext4` working on SMAPIv3. However, performance was so low that we waited for a new release (also because some parts of SMAPIv3 are Open Source BUT we don't have access to the dev branch). We started benchmarking the latest Citrix release; if we get decent performance, expect the first drivers soon. However, this will still be considered experimental because there are a lot of restrictions: migration to legacy SRs, no deltas, etc.
-
@olivierlambert So you are saying that in future releases we may have `ext4` support over iSCSI? (so real VHD files instead of LVM over iSCSI)?
-
No, I never said that: don't mix up SMAPIv3 and shared block storage. SMAPIv3 is "just" a brand new storage stack allowing far more flexibility thanks to its architecture.
Sharing a block device between multiple hosts is a completely different story. You can use LVM (but you'll end up with thick provisioned storage), or a shared filesystem like GFS2/OCFS plus a lock manager (Corosync is used by Citrix).
Having `ext4` on top of iSCSI is easy… as long as you have one host. With more than one it breaks, because `ext4` isn't a cluster-aware filesystem. -
UEEEEEEEEEEEEEEE Kernel 4.19????
Wow! This means that this kernel already natively supports the full client feature set of Ceph.
This means no feature downgrade server-side. This is a HUGE step forward.
I'm about to take over the project again this month.
Very good news in the air! -
This will probably help with connecting to Ceph; however, the performance level is unknown.
-
I've seen @stormi's to-do list.
Seems very goal oriented. -
Link?
-
@olivierlambert I would be willing to help with testing this. I have a few 6TB WD Golds I could throw onto four older Fat Twin^2 nodes, maybe with passthrough for the OSDs (slightly esoteric and small, but it could give baselines if E5645s are still supported).
Currently they're just "collecting dust" inside a chassis; they used to be part of a 6x6TB RAIDZ2 ZFS pool that was retired for a 10x10TB RAIDZ2 pool (general storage + endpoint backups).
-
People are watching me, such honour and responsibility!
-
@stormi said in Citrix Hypervisor 8.0 landed:
People are watching me, such honour and responsibility!
I told you that the people of the forum are "the watchmen".
It's even easier if you have subscribed to notifications for the project on GitHub.
-
@maxcuttins Quis custodiet ipsos custodes?
@stormi Honor is ours
-
I also would like to be an alpha/beta tester. HP and Dell blades and assorted Dell servers.
Best regards.
-
Be sure that as soon as we have something to test, you'll be notified
-
Hey all,
I am building a home lab and will be glad to test the new XCP with Cloudstack on top. Followed the repo!
-
Great! We really need CloudStack testing too
-
I finished testing XenServer 8 with Ceph.
It just works, without patches.
- Installing the needed packages doesn't try to update any packages from the original installation.
- The kernel is already recent enough to include a newer RBD client.
So you can mount RBD images manually in a few easy steps.
I quickly tested the connection and performance was not very good (but I'm working in a nested virtualized environment). I guess all the mess needed to set up the connection is finally over.
Now, what's needed is to create a VHD on top of an RBD image.
We could probably just fork the LVMoverISCSI plugin to cover the last mile of the connection.
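A hedged sketch of what that last mile could look like (the function names and the use of the `client.admin` keyring are illustrative assumptions, not the real SMAPI driver interface):

```shell
#!/bin/sh
# Sketch of the attach/detach primitives a driver forked from LVMoverISCSI
# would need: map the RBD image, hand the resulting block device to the SR,
# unmap it again on detach. Pool/image names match the example in this thread.

# udev publishes mapped images under /dev/rbd/<pool>/<image>
rbd_device_path() {
    echo "/dev/rbd/$1/$2"
}

# attach: map the image so it appears as a local block device
sr_attach() {
    rbd map "$2" --name client.admin -p "$1"
}

# detach: unmap the device again
sr_detach() {
    rbd unmap "$(rbd_device_path "$1" "$2")"
}

rbd_device_path XCP-Test-Pool mytestimage   # prints /dev/rbd/XCP-Test-Pool/mytestimage
```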
However, there are many alternatives for completing this last step. -
Can you write a few lines on how you did the initial steps? (so we can provide a SMAPIv3 driver for further testing)
-
@maxcuttins You can always have an LVM SR on that RBD image device. You need to whitelist /dev/rbd in lvm.conf though.
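A minimal sketch of that lvm.conf whitelisting, assuming a stock `/etc/lvm/lvm.conf` (the `types` entry declares the `rbd` block device type to LVM; verify against the comments in your own lvm.conf version before applying):

```
# /etc/lvm/lvm.conf, devices section (sketch, check your version)
devices {
    # declare the "rbd" block device type so LVM scans /dev/rbd*
    # (the second value is the maximum number of partitions)
    types = [ "rbd", 1024 ]
}
```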
I'll test once XCP-NG 8 is available.
-
@olivierlambert said in Citrix Hypervisor 8.0 landed:
Can you write few lines on how you did the initial steps? (so we can provide a SMAPIv3 driver for further testing)
Oh yess!
In reality, I already wrote down in the wiki yesterday everything we know as of today about integration with Ceph:
https://github.com/xcp-ng/xcp/wiki/Ceph-on-XCP-ng-7.5-or-later
But this was before my tests on XenServer 8.
However, the steps are exactly the same.
I summarize the steps here (they are explained better in the wiki):

```
yum install epel-release -y --enablerepo=extras
yum install centos-release-ceph-nautilus --enablerepo=extras
yum install yum-plugin-priorities --enablerepo=base
yum install ceph-common --enablerepo='base,extras,epel'
```
And that's all.
Until today, we always needed to install other connectors in order to use `rbd`.
This means no more need for `rbd-fuse` (rbd over FUSE), `rbd-nbd` (rbd over NBD), or `ceph-fuse` (CephFS over FUSE). We can use the native `rbd` directly, with kernel support.
To map an image:
Before you can connect, you just need to exchange keyrings to allow the client to connect.
Say we want to connect to an image called `mytestimage` created on the pool `XCP-Test-Pool`.
Map the block device:

```
rbd map mytestimage --name client.admin -p XCP-Test-Pool
```
Create the filesystem you prefer on top:

```
mkfs.ext4 -m0 /dev/rbd/XCP-Test-Pool/mytestimage
```

And mount it:

```
mkdir /mnt-test-ceph
mount /dev/rbd/XCP-Test-Pool/mytestimage /mnt-test-ceph
```
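For completeness, a hedged teardown sketch that undoes the mount and the mapping (paths match the example above; `mountpoint` comes from util-linux, and the guards make it safe to run even when nothing is mounted or mapped):

```shell
#!/bin/sh
# Undo the steps above: unmount the filesystem, then unmap the RBD image.
MNT=/mnt-test-ceph
DEV=/dev/rbd/XCP-Test-Pool/mytestimage

cleanup() {
    # umount only if actually mounted, unmap only if the device node exists
    if mountpoint -q "$MNT"; then umount "$MNT"; fi
    if [ -b "$DEV" ]; then rbd unmap "$DEV"; fi
}

cleanup
```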
I'm going to write down all these steps in the wiki as soon as XCP-ng 8 is out.
Now the hype for the next release is even higher.
I'm gonna stalk @stormi every day