XCP-ng

    XenServer 8.0 - Major update due Q1 2019

    • olivierlambert (Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼)

      I think there have been a lot of conversations around this forum about this 🙂 SMAPIv3 is able to use qcow2 instead of VHD, which gets rid of the 2 TiB limit.

      However, SMAPIv3 is far from production-ready right now. See the dev diary in the News section 🙂
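
      Just to illustrate why qcow2 lifts that ceiling (this is not an XCP-ng/SMAPIv3 workflow yet, only a generic sketch; the file name and size are made up), qemu-img will happily create an image well past 2 TiB:

          # Illustration only: qcow2 has no 2 TiB ceiling, unlike VHD.
          # Creates a sparse 4 TiB qcow2 image; file name and size are placeholders.
          qemu-img create -f qcow2 big-disk.qcow2 4T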

      • cg @olivierlambert

        @olivierlambert said in XenServer 8.0 - Major update due Q1 2019:

        However, SMAPIv3 is far from production-ready right now. See the dev diary in the News section 🙂

        We're all waiting for updates 😜

        • stormi (Vates 🪐 XCP-ng Team 🚀) @gangsterrapper22

          @gangsterrapper22 Thanks. The reason we restrict corosync in the license daemon is to avoid XCP-ng Center advertising it as available when it isn't.

          • dkleva

            Still a maximum of 32 vCPUs per VM. That is too little!

            • olivierlambert (Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼), last edited by olivierlambert

              This is not a real limit: we even unlocked that artificial cap in Xen Orchestra.

              It's 128 per HVM guest; see https://wiki.xenproject.org/wiki/Xen_Project_Release_Features
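
              To make that concrete, raising the cap on a VM is just a couple of xe calls. This is a rough sketch, not an official recipe: the UUID and the value 64 are placeholders, and the VM has to be halted first.

                  # Hypothetical sketch: raise the vCPU cap on a halted VM with the xe CLI.
                  # VCPUs-at-startup must stay <= VCPUs-max, so raise the max first.
                  xe vm-shutdown uuid=<vm-uuid>
                  xe vm-param-set uuid=<vm-uuid> VCPUs-max=64
                  xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=64
                  xe vm-start uuid=<vm-uuid>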

              • cg, last edited by cg

                You should consider that the efficiency of vCPUs goes down with each one you add. I don't have the link to that Citrix document handy, so you'll need to google it.
                If you really need that many cores, you should consider a physical machine, which should give you a serious bump in performance.
                AFAIR it was the overhead of the Xen scheduler, which needs to balance the needs of your VM: the more vCPUs a single VM has, the bigger the overhead. I'm sure that hasn't changed in more recent versions.

                • ruskofd

                  Absolutely @cg 👍

                  • olivierlambert (Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼)

                    Well, it's not entirely true. You can use vCPU pinning if you want to avoid bad placement on a very large core setup, so the Xen scheduling cost becomes virtually non-existent (see the sketch below). This works well.

                    The main reason Citrix limits the vCPU count is support: some odd combinations are possible in certain cases on certain hardware.
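
                    As a rough illustration of the pinning idea (not an official procedure; the domain name, UUID and CPU numbers below are placeholders):

                        # Runtime pinning with the xl toolstack: map vCPUs to specific physical CPUs.
                        xl vcpu-pin my-vm 0 0
                        xl vcpu-pin my-vm 1 1
                        # Persistent pinning via the xe CLI (applies at the next VM boot).
                        xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=0,1,2,3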

                    • AllooTikeeChaat

                      @Oli and the XCP-ng team ...

                      Will the Westmere EP Xeons (aka the X5xx series, etc.) be supported by XCP-ng 8.0, since the XS HCL no longer lists them as supported CPUs?

                      • cg

                        How about building your own hardware compatibility matrix?
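
                        If people want to report what their hosts run on, something like this would gather the relevant CPU details from dom0 (a generic sketch, nothing XCP-ng-specific):

                            # Rough sketch: collect CPU details to report for a community HCL.
                            grep -m1 "model name" /proc/cpuinfo    # CPU model string
                            xl info | grep -Ei "cpu|xen_version"   # core/socket counts and Xen version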
