Citrix Hypervisor 8.0 landed


  • XCP-ng Team

Rest assured that as soon as we have something to test, you'll be notified 😉



  • Hey all,

I am building a home lab and will be glad to test the new XCP with CloudStack on top. Followed the repo!


  • XCP-ng Team

    Great! We really need CloudStack testing too 🙂



  • I finished testing XenServer 8 with Ceph.
    It just works, without patches.

    • Installing the needed packages doesn't try to update any packages from the original installation.
    • The kernel is already recent enough to include a newer RBD client.

    So you can mount RBD images manually in a few easy steps.
    I tested the connection quickly and performance was not very good (but I'm working in a nested virtualized environment).

    I guess all the mess needed just to set up the connection is finally over.

    Now what's needed is to create a VHD on top of an RBD image.
    We could probably just fork the LVMoverISCSI plugin to cover the last mile of the connection.
    However, there are many alternatives for completing this last step.


  • XCP-ng Team

    Can you write a few lines on how you did the initial steps? (so we can provide a SMAPIv3 driver for further testing)


  • XCP-ng Team

    @maxcuttins You can always put an LVM SR on that RBD image device. You need to whitelist /dev/rbd in lvm.conf, though.
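
    For reference, a sketch of that whitelisting step, done on a demo copy of the file first (the real file is /etc/lvm/lvm.conf; the `types` line is the usual way to let LVM scan /dev/rbd* devices, but treat the exact value as an assumption to verify against your LVM version):

```shell
# Work on a demo copy; the real file lives at /etc/lvm/lvm.conf.
conf=lvm.conf.demo
printf 'devices {\n    dir = "/dev"\n}\n' > "$conf"   # stand-in for the real file

# Add "rbd" to the accepted block-device types inside the devices { } section
# so LVM will consider /dev/rbd* when scanning for physical volumes.
sed -i 's/^devices {/devices {\n    types = [ "rbd", 1024 ]/' "$conf"

grep 'types' "$conf"
```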

    I'll test once XCP-NG 8 is available.



  • @olivierlambert said in Citrix Hypervisor 8.0 landed:

    Can you write a few lines on how you did the initial steps? (so we can provide a SMAPIv3 driver for further testing)

    Oh yes!
    Actually, I already wrote up everything we know as of today about integration with Ceph in the wiki yesterday:
    https://github.com/xcp-ng/xcp/wiki/Ceph-on-XCP-ng-7.5-or-later

    But that was before my test on XenServer 8.
    However, the steps are exactly the same.
    I'll summarize them here (they are explained in more detail in the wiki):

    yum install epel-release -y --enablerepo=extras
    yum install centos-release-ceph-nautilus --enablerepo=extras
    yum install yum-plugin-priorities --enablerepo=base
    yum install ceph-common --enablerepo='base,extras,epel'
    

    And that's all.
    Until today we always needed to install an extra connector in order to use RBD.
    This means no need for rbd-fuse (RBD over FUSE), rbd-nbd (RBD over NBD), or ceph-fuse (CephFS over FUSE). We can use native RBD directly, with kernel support.
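
    A quick way to sanity-check that prerequisite on any host; the minimum version below is a placeholder, not an official requirement, so check it against the release notes for your Ceph version:

```shell
min=4.4                           # placeholder minimum kernel for the rbd features you need
cur=$(uname -r | cut -d- -f1)     # strip the distro suffix, e.g. 4.19.0-8 -> 4.19.0
# sort -V orders version strings; if $min sorts first, the running kernel is new enough
if [ "$(printf '%s\n%s\n' "$min" "$cur" | sort -V | head -n1)" = "$min" ]; then
    echo "kernel $cur looks recent enough (>= $min)"
else
    echo "kernel $cur may be too old (< $min)"
fi
```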

    To map an image:

    Before you can connect, you just need to exchange keyrings to allow the client to connect.
    To connect to an image called mytestimage created on the pool XCP-Test-Pool, map the block device:

    rbd map mytestimage --name client.admin -p XCP-Test-Pool
    

    Create the filesystem that you prefer on top:

    mkfs.ext4 -m0 /dev/rbd/XCP-Test-Pool/mytestimage
    

    And mount:

    mkdir /mnt-test-ceph
    mount /dev/rbd/XCP-Test-Pool/mytestimage /mnt-test-ceph
    
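
    To confirm the mount actually took (paths from the steps above):

```shell
# mountpoint exits 0 only when the directory is an active mount point
mountpoint -q /mnt-test-ceph \
    && df -h /mnt-test-ceph \
    || echo "/mnt-test-ceph is not mounted"
```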

    I'm going to write all these steps down in the wiki as soon as XCP-ng 8 is out.
    Now the hype for the next release is even higher. 😍
    I'm going to stalk @stormi every day 😈



  • I tore down one of my XCP hosts to set up a non-nested-virtualized Xen 8 in order to test RBD speed. Performance is about 4x slower than it should be, but at least it runs almost like a standard local disk.

    dd if=/dev/zero of=./test.img bs=1G count=1 oflag=dsync
    1+0 records in
    1+0 records out
    1073741824 bytes (1.1 GB) copied, 1.86156 s, 577 MB/s
    


  • Has anybody tried XCP 8.0 with the Intel X56xx series CPUs?

    Why was support for these CPUs dropped? Are there technical reasons behind it, or is it just that they are old and considered legacy?

    I would like to know if these CPUs have been tried with this version of XCP.



  • @maxcuttins said in Citrix Hypervisor 8.0 landed:

    I tore down one of my XCP hosts to set up a non-nested-virtualized Xen 8 in order to test RBD speed. Performance is about 4x slower than it should be, but at least it runs almost like a standard local disk.

    dd if=/dev/zero of=./test.img bs=1G count=1 oflag=dsync
    1+0 records in
    1+0 records out
    1073741824 bytes (1.1 GB) copied, 1.86156 s, 577 MB/s
    

    1 G is usually a really bad test, as pretty small things can influence the result massively.
    You should run tests with 10 or, better, 100 GB if you can.
    That also diminishes the influence of any caches (on source and target!).
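
    Concretely, something along these lines (the sizes are placeholders; push `count` toward 10240+, i.e. 10 GiB+, on real hardware):

```shell
# conv=fsync forces a final flush so the reported time includes the real write;
# the small default count just keeps this sketch cheap to run.
count=${COUNT:-64}
dd if=/dev/zero of=./test.img bs=1M count="$count" conv=fsync
rm -f ./test.img                   # clean up the scratch file
```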


  • XCP-ng Center Team

    @Prilly XCP-ng 8.0 (the host) is not released yet (not even an alpha or beta); I only released XCP-ng Center 8.0 (the Windows client).



  • @Prilly I'm wondering the same thing: what could be wrong with those CPUs besides high power consumption, which is obvious because those CPUs are about 7 years old?

    VMware works with that CPU, but the 6.7 installer says that support for it will be dropped. That would not be good for me, because I would need to replace all of my servers, which is not an option.



  • @xisco said in Citrix Hypervisor 8.0 landed:

    I hope XCP-ng will support legacy CPUs as I have E5-24XX series servers 😞
    I guess they will work fine but as they are not supported officially ...

    E5s aren't legacy. Xeon X5xxx CPUs are.



  • According to the XS HCL Citrix Hypervisor 8.0 is supporting Dell R420 servers but not Dell R430 (same model, but newer, you can still buy the R430). The newer R440 is also supported. This seems very odd to me. Has anyone any idea why this is the case?


  • XCP-ng Team

    While you're waiting for the XCP-ng 8.0 beta, what about testing the latest security update to help us release it faster?

    https://xcp-ng.org/forum/post/11832

    Thanks!



  • I've been running CH 8.0 for a week to test it out. There's an issue with the guest tools: performance is slow because the guests' I/O isn't "optimized".

    I'm going to revert back to XCP-ng 7.6 today.


  • XCP-ng Team

    @nuts23 testing what? CH 8.0?



  • @olivierlambert yes, sorry, the Citrix Hypervisor 8.0


  • XCP-ng Team

    Guest tools on Windows (I suppose) aren't working well?



