Citrix Hypervisor 8.0 landed
-
No, I never said that: don't mix up SMAPIv3 and shared block storage. SMAPIv3 is "just" a brand new storage stack allowing far more flexibility thanks to its architecture.
Sharing a block device between multiple hosts is a completely different story. You can use LVM (but you'll end up with thick-provisioned storage), or a shared filesystem like GFS2/OCFS2, plus a lock manager (Citrix uses corosync).
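For reference, creating a shared GFS2 SR on Citrix Hypervisor 8.0 looks roughly like this (clustering has to be enabled on the pool first; the iSCSI target details below are placeholders to replace with your own values):
# enable pool-wide clustering (corosync) on the chosen network
xe cluster-pool-create network-uuid=<network-uuid>
# create a thin-provisioned GFS2 SR on an iSCSI LUN
xe sr-create type=gfs2 name-label="gfs2-sr" shared=true \
  device-config:provider=iscsi \
  device-config:target=<portal-ip> \
  device-config:targetIQN=<target-iqn> \
  device-config:SCSIid=<scsi-id>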
Having ext4 on top of iSCSI is easy… as long as you have one host. As soon as there's more than one, it breaks: ext4 isn't a "cluster aware" filesystem.
-
UEEEEEEEEEEEEEEE Kernel 4.19????
Wow! This means that this kernel already natively supports the full client feature set of Ceph.
This means no feature downgrade on the server side. This means a HUGE step forward.
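A quick way to confirm this on the host itself, assuming the rbd module actually ships with the new kernel, is something like:
uname -r                 # should report a 4.19.x kernel
modinfo rbd | head -n 3  # the in-kernel RBD client, if the module is shipped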
I'm about to take the project over again this month.
Very good news in the air!
-
This will probably help to connect to Ceph; however, the performance level is still unknown.
-
I've seen @stormi's to-do list.
Seems very goal oriented.
-
Link?
-
@olivierlambert I would be willing to help test this. I have a few 6TB WD Golds I could throw onto each of four older Fat Twin^2 nodes, maybe with passthrough for the OSDs (slightly esoteric and small, but it could give baselines if E5645s are still supported).
Currently they're just "collecting dust" inside a chassis; they used to be part of a 6x6TB RAIDZ2 ZFS pool that was retired for a 10x10TB RAIDZ2 pool (general storage + endpoint backups).
-
People are watching me, such honour and responsibility!
-
@stormi said in Citrix Hypervisor 8.0 landed:
People are watching me, such honour and responsibility!
I told you that the people of this forum are "the watchmen".
It's even easier if you have subscribed to notifications for the project on GitHub.
-
@maxcuttins Quis custodiet ipsos custodes?
@stormi Honor is ours
-
I would also like to be an alpha/beta tester. I have HP and Dell blades and assorted Dell servers.
Best regards.
-
Rest assured that as soon as we have something to test, you'll be notified.
-
Hey all,
I am building a home lab and will be glad to test the new XCP with CloudStack on top. Followed the repo!
-
Great! We really need CloudStack testing too
-
I finished testing XenServer 8 with Ceph.
It just works without patches:
- Installation of the needed packages doesn't try to update any package of the original installation.
- The kernel is already recent enough to include a newer RBD client.
So you can just mount RBD images manually in a few easy steps.
I quickly tested the connection and performance was not very good (but I'm working in a nested virtualized environment). I guess all the mess needed to set up the connection is finally over.
Now, what's needed is to create VHDs on top of an RBD image.
We could probably just fork the LVMoverISCSI plugin to handle the last mile of the connection.
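In the meantime, a plain (non-shared) LVM SR can already be created by hand on a mapped RBD device; just a sketch, with a placeholder device path and an arbitrary name-label:
xe sr-create type=lvm content-type=user shared=false \
  name-label="RBD-backed LVM SR" \
  device-config:device=/dev/rbd/<pool>/<image>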
However, there are many alternatives for completing this last step.
-
Can you write a few lines on how you did the initial steps? (so we can provide a SMAPIv3 driver for further testing)
-
@maxcuttins You can always have an LVM SR on that RBD image device. You need to whitelist /dev/rbd in lvm.conf though.
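For reference, the relevant bit of /etc/lvm/lvm.conf would look roughly like this (adapt it to whatever filter is already there):
devices {
    # let LVM recognise and scan RBD block devices
    types = [ "rbd", 1024 ]
    filter = [ "a|^/dev/rbd.*|", "a|.*|" ]
}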
I'll test once XCP-NG 8 is available.
-
@olivierlambert said in Citrix Hypervisor 8.0 landed:
Can you write a few lines on how you did the initial steps? (so we can provide a SMAPIv3 driver for further testing)
Oh yess!
Actually, I already wrote down in the wiki yesterday everything we know as of today about integration with Ceph:
https://github.com/xcp-ng/xcp/wiki/Ceph-on-XCP-ng-7.5-or-later
But that was before my test on XenServer 8. However, the steps are exactly the same.
I summarize them here (they are explained better in the wiki):
yum install epel-release -y --enablerepo=extras
yum install centos-release-ceph-nautilus --enablerepo=extras
yum install yum-plugin-priorities --enablerepo=base
yum install ceph-common --enablerepo='base,extras,epel'
And that's all.
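If you want a quick sanity check that the client tools are in place:
ceph --version
rbd --version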
Until today we always needed to install other connectors in order to use rbd.
This means no need for rbd-fuse (rbd over FUSE), rbd-nbd (rbd over NBD), or ceph-fuse (CephFS over FUSE). We can use the original rbd directly with kernel support.
To map an image:
Before you can connect, you just need to exchange keyrings so that the client is allowed to connect.
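In practice that just means copying the cluster config and an authorised keyring from one of the Ceph nodes to the host (hostname and keyring name below are only examples):
scp ceph-mon1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
scp ceph-mon1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring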
Say we want to connect to an image called mytestimage created in the pool XCP-Test-Pool.
Map the block device:
rbd map mytestimage --name client.admin -p XCP-Test-Pool
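To double-check what got mapped:
rbd showmapped    # lists mapped images and their /dev/rbdX devices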
Create the filesystem that you prefer on top:
mkfs.ext4 -m0 /dev/rbd/XCP-Test-Pool/mytestimage
And mount:
mkdir /mnt-test-ceph
mount /dev/rbd/XCP-Test-Pool/mytestimage /mnt-test-ceph
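When you're done, the tear-down is simply the reverse:
umount /mnt-test-ceph
rbd unmap /dev/rbd/XCP-Test-Pool/mytestimage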
I'm gonna write down all these steps in the wiki as soon as XCP-ng 8 is out.
Now the hype for the next release is even bigger.
I'm gonna be stalking @stormi every day
-
I wiped one of my XCP-ng hosts to set up a non-nested-virtualized XenServer 8 in order to test RBD speed. Performance is about 4x slower than it should be, but at least it runs almost like a standard local disk.
dd if=/dev/zero of=./test.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.86156 s, 577 MB/s
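Side note: a single 1G write with oflag=dsync mostly measures one big flush; a variant with direct I/O and smaller blocks (just a suggestion) may be more telling:
dd if=/dev/zero of=./test.img bs=4M count=256 oflag=direct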
-
Has anybody tried XCP-ng 8.0 with the Intel X56xx series CPUs?
Why was support for these CPUs dropped? Is there a technical reason behind it, or is it just that they are old and considered legacy?
I would like to know if these CPUs have been tried with this version of XCP-ng.