
    Citrix Hypervisor 8.0 landed

• olivierlambert (Vates 🪐 Co-Founder & CEO)

      Regarding the news of this version:

      • Kernel version: Linux 4.19 ✔
      • Xen hypervisor version: 4.11 ✔
      • Control domain operating system version: CentOS 7.5 ✔
      • Guest UEFI boot ❓ (only for Windows?)
      • Virtual disk images larger than 2 TiB on GFS2 SR ❓ (already there before, SMAPIv3/qcow2, I don't see the point?)
      • Disk and memory snapshots for vGPU-enabled VM ❓ (hard to test here)

Note that it could have been Xen 4.12 or CentOS 7.6, but it's still far more recent than what 7.6 shipped! 🙂
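If you want to double-check those components on your own host, a few commands in dom0 are enough. A minimal sketch (assuming Python and `xl` are available in the control domain; the `xl info` parsing just follows its usual "key : value" output):

```python
# Report the dom0 component versions discussed above.
# Assumes this runs in the control domain with `xl` in PATH.
import subprocess

def run(cmd):
    return subprocess.check_output(cmd).decode().strip()

kernel = run(["uname", "-r"])                  # dom0 kernel (expect 4.19.x)

xen = "unknown"
for line in run(["xl", "info"]).splitlines():  # Xen version (expect 4.11)
    if line.startswith("xen_version"):
        xen = line.split(":", 1)[1].strip()

with open("/etc/redhat-release") as f:         # dom0 distro (expect CentOS 7.5)
    distro = f.read().strip()

print("kernel :", kernel)
print("xen    :", xen)
print("dom0   :", distro)
```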

We'll see if we can bundle Xen 4.12 in an experimental repo in the future, but we know the ABI breaks between Xen versions, so we'd need a more recent XAPI too, which can be difficult and will require a lot of testing anyway.

About UEFI: it's not completely Open Source, but the license inside says we may redistribute it. However, we'd like to have something truly open, i.e. with the sources. So we'll see.

• olivierlambert (Vates 🪐 Co-Founder & CEO)

xisco I suppose that's only meant in terms of Citrix support, not that it doesn't work. Note that if it works, we (via XCP-ng Pro Support) will support it and do our best to assist if you have a problem.

• r1 (XCP-ng Team)

          olivierlambert said in Citrix Hypervisor 8.0 landed:

          Virtual disk images larger than 2 TiB on GFS2 SR (already there before, SMAPIv3/qcow2, I don't see the point?)

Is there a real implementation of SMAPIv3 that people can use today (via XAPI)? Or is it going to be released in 8.0?

VHD (NFS, EXT, LVM, LVMoISCSI, LVMoHBA) has a 2 TiB restriction. I'm sure the community would benefit from virtual disks larger than 2 TiB on these SR types.
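For context, the 2 TiB ceiling comes from the VHD format itself, not from the SR drivers: dynamic VHDs address data with 32-bit sector numbers of 512 bytes each. A back-of-the-envelope check (my reading of the format, so treat it as a sketch):

```python
# Why VHD-based SRs stop around 2 TiB.
SECTOR_SIZE = 512        # bytes per sector in the VHD format
MAX_SECTORS = 2 ** 32    # sector offsets are 32-bit values

max_bytes = SECTOR_SIZE * MAX_SECTORS
print(max_bytes // 2 ** 40, "TiB")  # -> 2 TiB hard ceiling
# XenServer actually caps VDIs a bit lower (2040 GiB) to leave
# room for VHD metadata inside that addressable range.
```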

I don't think GFS2 will be fully open source and available to all.

• olivierlambert (Vates 🪐 Co-Founder & CEO)

GFS2 has been using SMAPIv3 since 7.5. I wonder how they can sell it given the very poor performance (it was catastrophic in 7.5). But as you said, the Citrix implementation isn't Open Source.

We ran some tests and managed to get ext4 working on SMAPIv3. However, performance was so low that we decided to wait for a new release (also because some parts of SMAPIv3 are Open Source BUT we don't have access to the dev branch).

We've started benchmarking the latest Citrix release; if we get decent performance, expect the first drivers soon 🙂 However, they will still be considered experimental, because there are a lot of restrictions: no migration to legacy SRs, no deltas, etc.
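For anyone who wants to compare an SMAPIv3 SR against a legacy one themselves, a quick random-write run with fio from inside a guest gives a rough idea. This is only a sketch, not our actual benchmark protocol; the target path and sizes are placeholders, and it assumes fio is installed in the VM:

```python
# Quick-and-dirty random-write test using fio, run inside a VM
# whose disk sits on the SR under test. Path/sizes are placeholders.
import subprocess

subprocess.run([
    "fio",
    "--name=sr-randwrite",
    "--filename=/mnt/test/fio.dat",  # file on the disk under test
    "--size=1G",                     # total data to touch
    "--rw=randwrite",                # random writes stress the storage stack
    "--bs=4k",                       # small blocks, a worst case for VHD chains
    "--ioengine=libaio",
    "--iodepth=32",
    "--direct=1",                    # bypass the guest page cache
    "--runtime=60", "--time_based",
], check=True)
```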

• ebrainte @olivierlambert

olivierlambert So you're saying that in future releases we may have ext4 support over iSCSI (i.e. real VHD files instead of LVM over iSCSI)?

• olivierlambert (Vates 🪐 Co-Founder & CEO)

No, I never said that: don't mix up SMAPIv3 and shared block storage. SMAPIv3 is "just" a brand new storage stack that allows far more flexibility thanks to its architecture 🙂

Sharing a block device between multiple hosts is another story entirely. You can use LVM (but you'll end up with thick-provisioned storage), or a shared filesystem like GFS2/OCFS2 plus a lock manager (Citrix uses corosync).

Having ext4 on top of iSCSI is easy… as long as you have one host. With more than one it breaks, because ext4 isn't a "cluster aware" filesystem.
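To make the "cluster aware" point concrete, here is a toy simulation (pure illustration, nothing to do with the real ext4 code): each host caches the allocation bitmap and does its own read-modify-write, so without a cluster-wide lock both hosts can hand out the same block:

```python
# Toy model of two hosts sharing a filesystem that isn't cluster aware.
# Each host works on a cached copy of the on-disk allocation bitmap.

def allocate_block(cached_bitmap):
    """Pick the first free block in this host's *cached* view."""
    for i, used in enumerate(cached_bitmap):
        if not used:
            return i
    raise RuntimeError("no free blocks")

shared_bitmap = [False] * 8        # the on-disk allocation bitmap

cache_a = list(shared_bitmap)      # host A reads the bitmap...
cache_b = list(shared_bitmap)      # ...and so does host B

block_a = allocate_block(cache_a)  # host A picks block 0
block_b = allocate_block(cache_b)  # host B also picks block 0!

shared_bitmap[block_a] = True      # both write their update back:
shared_bitmap[block_b] = True      # the same block now backs two files
print("host A got block", block_a, "- host B got block", block_b)

# A cluster filesystem (GFS2/OCFS2) avoids this by wrapping the
# read-modify-write in a cluster-wide lock from a lock manager.
```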

• maxcuttins

UEEEEEEEEEEEEEEE, kernel 4.19????
Wow! This means this kernel already natively supports the full Ceph client feature set.
That means no feature downgrade on the server side.

This is a HUGE step forward.
I'm about to pick the project up again this month.
Very good news in the air!

• olivierlambert (Vates 🪐 Co-Founder & CEO)

This will probably help with connecting to Ceph; however, the performance level is still unknown 🙂

• maxcuttins

I've seen stormi's to-do list.
It seems very goal-oriented.

• cg

                        Link? 😆

• Kalloritis @olivierlambert

olivierlambert I'd be willing to help test this. I have a few 6TB WD Golds I could throw onto four older Fat Twin² nodes, maybe with passthrough for the OSDs (slightly esoteric and small, but it could give baselines if E5645s are still supported).

Currently they're just "collecting dust" inside a chassis; they used to be part of a 6x6TB RAIDZ2 ZFS pool that was retired in favor of a 10x10TB RAIDZ2 pool (general storage + endpoint backups).

• olivierlambert (Vates 🪐 Co-Founder & CEO) @cg

                            cg https://github.com/xcp-ng/xcp/issues/180


• stormi (Vates 🪐 XCP-ng Team)

People are watching me, such an honour and responsibility!

• maxcuttins @stormi

                                stormi said in Citrix Hypervisor 8.0 landed:

                                People are watching me, such honour and responsibility!

I told you that the people of this forum are "the watchmen".
It's even easier if you've subscribed to the project's notifications on GitHub.
                                😆

• donato_marcos @maxcuttins

maxcuttins Quis custodiet ipsos custodes? ("Who watches the watchmen?")

stormi The honor is ours

• antoniolfdacruz

I'd also like to be an alpha/beta tester. I have HP and Dell blades and assorted Dell servers.

                                    Best regards.

• olivierlambert (Vates 🪐 Co-Founder & CEO)

Rest assured that as soon as we have something to test, you'll be notified 😉

• dredknight

                                        Hey all,

I'm building a home lab and will be glad to test the new XCP-ng with CloudStack on top. Followed the repo!

• olivierlambert (Vates 🪐 Co-Founder & CEO)

                                          Great! We really need CloudStack testing too 🙂

• maxcuttins

I've finished testing XenServer 8 with Ceph.
It just works, without patches.

• Installing the needed packages doesn't try to update any packages from the original installation.
• The kernel is already recent enough to include a recent RBD client.

So you can just mount RBD images manually in a few easy steps (sketched below).
I quickly tested the connection; performance was not very good (but I'm working in a nested virtualized environment).

I guess all the mess needed to set up the connection is finally over.

Now what's needed is to create a VHD on top of an RBD image.
We could probably just fork the LVMoISCSI plugin to cover the last mile of the connection.
However, there are many alternatives for completing this last step.
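For the curious, the "few easy steps" are just the standard rbd workflow. A sketch of what I did (pool and image names are made up, and it assumes ceph-common is installed in dom0 with a working cluster config and keyring in /etc/ceph):

```python
# Map a Ceph RBD image in dom0 with the stock rbd tool.
# Pool/image names are placeholders; adjust to your cluster.
import subprocess

def sh(*cmd):
    print("+", " ".join(cmd))
    return subprocess.check_output(cmd).decode().strip()

sh("rbd", "create", "rbd/test-disk", "--size", "10240")  # 10 GiB image
dev = sh("rbd", "map", "rbd/test-disk")                  # e.g. /dev/rbd0
sh("mkfs.ext4", dev)                                     # put a filesystem on it
sh("mount", dev, "/mnt/rbd")                             # and mount it
```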
