XCP-ng

    Citrix Hypervisor 8.0 landed

    • R
      r1 XCP-ng Team 🚀 last edited by

      @olivierlambert said in Citrix Hypervisor 8.0 landed:

      Virtual disk images larger than 2 TiB on GFS2 SR (already there before, SMAPIv3/qcow2, I don't see the point?)

      Is there a real implementation of SMAPIv3 that people can use today (via XAPI)? Or is it going to be released in 8.0?

      VHD-based SRs (NFS, EXT, LVM, LVMoISCSI, LVMoHBA) have a 2 TiB restriction. I'm sure the community would benefit from virtual disks larger than 2 TiB on these SR types.

      I don't think GFS2 will be fully open source and available to all.

      • olivierlambert
        olivierlambert Vates 🪐 Co-Founder🦸 CEO 🧑‍💼 last edited by

        GFS2 has been using SMAPIv3 since 7.5. I wonder how they can sell it given the very, very poor performance (it was catastrophic in 7.5). But as you said, the Citrix implementation isn't open source.

        We ran some tests and managed to get ext4 working on SMAPIv3. However, performance was so low that we decided to wait for a new release (also because some parts of SMAPIv3 are open source BUT we don't have access to the dev branch).

        We've started benchmarking on the latest Citrix release; if we get decent performance, expect the first drivers soon 🙂 However, this will still be considered experimental because there are a lot of restrictions: no migration to legacy SRs, no deltas, etc.

        • E
          ebrainte @olivierlambert last edited by

          @olivierlambert So are you saying that in future releases we may have ext4 support over iSCSI (i.e. real VHD files instead of LVM over iSCSI)?

          • olivierlambert
            olivierlambert Vates 🪐 Co-Founder🦸 CEO 🧑‍💼 last edited by

            No, I never said that: don't mix up SMAPIv3 and shared block storage. SMAPIv3 is "just" a brand new storage stack allowing far more flexibility thanks to its architecture 🙂

            Sharing a block device between multiple hosts is a completely different story. You can use LVM (but you'll end up with thick-provisioned storage), or a shared filesystem like GFS2/OCFS2, plus a lock manager (Citrix uses corosync).

            Having ext4 on top of iSCSI is easy… as long as you have one host. With more than one it falls apart, because ext4 isn't a "cluster-aware" filesystem.
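
            For illustration, a minimal single-host sketch of that "easy" case (portal address, IQN and device names are placeholders; it reuses the existing local "ext" SR driver, so it only makes sense with shared=false):

                # Attach the iSCSI LUN to this one host (placeholder portal/IQN)
                iscsiadm -m discovery -t sendtargets -p 192.0.2.10
                iscsiadm -m node -T iqn.2019-01.com.example:lun0 -p 192.0.2.10 --login
                # The LUN appears as a local block device (e.g. /dev/sdb); the "ext" SR
                # driver formats it with ext4 and stores VHD files on top of it
                xe sr-create host-uuid=<host-uuid> name-label="ext4 on iSCSI (single host)" \
                   type=ext content-type=user shared=false device-config:device=/dev/sdb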

            • M
              maxcuttins last edited by

              UEEEEEEEEEEEEEEE Kernel 4.19????
              Wow! This means that this kernel already natively supports the full Ceph client feature set.
              This means no feature downgrade on the server side.

              This is a HUGE step forward.
              I'm about to pick the project up again this month.
              Very good news in the air!

              • olivierlambert
                olivierlambert Vates 🪐 Co-Founder🦸 CEO 🧑‍💼 last edited by

                This will probably help with connecting to Ceph; however, the performance level is still unknown 🙂

                • M
                  maxcuttins last edited by

                  I've seen @stormi's to-do list.
                  It seems very goal-oriented.

                  • C
                    cg last edited by

                    Link? 😆

                    • K
                      Kalloritis @olivierlambert last edited by

                      @olivierlambert I would be willing to help test this. I have a few 6TB WD Golds I could throw onto four older Fat Twin^2 nodes, maybe with passthrough for the OSDs (slightly esoteric and small, but it could give baselines if E5645s are still supported).

                      Currently they're just "collecting dust" inside a chassis; they used to be part of a 6x6TB RAIDZ2 ZFS pool that was retired in favour of a 10x10TB RAIDZ2 pool (general storage + endpoint backups).

                      • olivierlambert
                        olivierlambert Vates 🪐 Co-Founder🦸 CEO 🧑‍💼 @cg last edited by

                        @cg https://github.com/xcp-ng/xcp/issues/180

                        stormi created this issue in xcp-ng/xcp: XCP-ng 8.0 (meta-issue) #180 (closed)

                        • stormi
                          stormi Vates 🪐 XCP-ng Team 🚀 last edited by

                          People are watching me, such honour and responsibility!

                          • M
                            maxcuttins @stormi last edited by maxcuttins

                            @stormi said in Citrix Hypervisor 8.0 landed:

                            People are watching me, such honour and responsibility!

                            I told you that the people of this forum are "the watchmen".
                            It's even easier if you've subscribed to the project's notifications on GitHub.
                            😆

                            • donato_marcos
                              donato_marcos @maxcuttins last edited by

                              @maxcuttins Quis custodiet ipsos custodes? (Who watches the watchmen?)

                              @stormi Honor is ours

                              • antoniolfdacruz
                                antoniolfdacruz last edited by

                                I would also like to be an alpha/beta tester. I have HP and Dell blades and assorted Dell servers.

                                Best regards.

                                • olivierlambert
                                  olivierlambert Vates 🪐 Co-Founder🦸 CEO 🧑‍💼 last edited by

                                  Rest assured that as soon as we have something to test, you'll be notified 😉

                                  • D
                                    dredknight last edited by

                                    Hey all,

                                    I am building a home lab and will be glad to test the new XCP-ng with CloudStack on top. Followed the repo!

                                    • olivierlambert
                                      olivierlambert Vates 🪐 Co-Founder🦸 CEO 🧑‍💼 last edited by

                                      Great! We really need CloudStack testing too 🙂

                                      • M
                                        maxcuttins last edited by maxcuttins

                                        I finished testing XenServer 8 with Ceph.
                                        It just works, without patches.

                                        • Installing the needed packages doesn't try to update any packages from the original installation.
                                        • The kernel is already recent enough to include a recent RBD client.

                                        So you can just mount RBD images manually in a few easy steps.
                                        I quickly tested the connection and performance was not very good (but I'm working in a nested virtualized environment).

                                        I guess all the mess needed to set up the connection is finally over.

                                        Now, what's needed is to create VHDs on top of RBD images.
                                        We can probably just fork the LVMoISCSI plugin to cover the last mile of the connection.
                                        However, there are many alternatives for completing this last step.
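
                                        For reference, a rough sketch of what those manual steps might look like on the host (pool/image names and size are made up, and it assumes a working /etc/ceph/ceph.conf plus a client keyring are already in place):

                                            # Install the RBD client tooling (placeholder package manager invocation)
                                            yum install ceph-common
                                            # Create and map a test image (placeholder pool/image names, 100 GiB)
                                            rbd create rbd/xcpng-test --size 102400
                                            rbd map rbd/xcpng-test          # shows up as /dev/rbd0 via the in-kernel client
                                            # Quick sanity check: put a filesystem on it and mount it on this host only
                                            mkfs.ext4 /dev/rbd0
                                            mount /dev/rbd0 /mnt/rbd-test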

                                        • olivierlambert
                                          olivierlambert Vates 🪐 Co-Founder🦸 CEO 🧑‍💼 last edited by

                                          Can you write a few lines on how you did the initial steps? (So we can provide a SMAPIv3 driver for further testing.)

                                          • R
                                            r1 XCP-ng Team 🚀 @maxcuttins last edited by

                                            @maxcuttins You can always put an LVM SR on that RBD image device. You need to whitelist /dev/rbd in lvm.conf though.

                                            I'll test it once XCP-ng 8 is available.
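
                                            A sketch of what that could look like, assuming the image is already mapped as /dev/rbd0 (the lvm.conf entries and device name are illustrative and may need adjusting for the dom0's LVM version):

                                                # /etc/lvm/lvm.conf, "devices" section: let LVM scan and accept rbd devices
                                                types  = [ "rbd", 1024 ]
                                                filter = [ "a|^/dev/rbd.*|", "a|.*|" ]

                                                # Then create a plain (thick) LVM SR on the mapped image
                                                xe sr-create host-uuid=<host-uuid> name-label="LVM on RBD" \
                                                   type=lvm content-type=user device-config:device=/dev/rbd0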
