XCP-ng

    Citrix Hypervisor 8.0 landed

    News · 65 Posts · 20 Posters · 34.8k Views
    • stormi (Vates 🪐 XCP-ng Team)

      From what I see in https://www.intel.com/content/dam/www/public/us/en/documents/corporate-information/SA00233-microcode-update-guidance_05132019.pdf, X5675 CPUs are no longer supported by Intel itself, so no mitigation against the MDS attacks for you 😕

      And that's why no vendor can say they "support" them anymore: no one can guarantee the security of anything running on them now.

      • Prilly @stormi

        @stormi As long as you don't have any untrusted VMs running on these CPUs, there is no security problem.

        • stormi (Vates 🪐 XCP-ng Team) @Prilly

          @Prilly You're fine if you're running trusted workloads. That includes the VMs themselves and everything executed inside them, possibly including JavaScript or WebAssembly from not-so-trusted websites. It also means that a VM compromised through a software flaw, a bad configuration, or access obtained by social engineering can leverage the hardware security flaws to reach sensitive data not only within that VM but also from other VMs.

          So, I agree with you, but we need to be careful about the definition of "trusted".

          • maxcuttins @cg

            @cg said in Citrix Hypervisor 8.0 landed:

            @maxcuttins said in Citrix Hypervisor 8.0 landed:

            I tore down one of my XCP-ng hosts to set up a non-nested Xen 8 in order to test RBD speed. Performance is about 4x slower than it should be, but at least it runs almost like a standard local disk.

            dd if=/dev/zero of=./test.img bs=1G count=1 oflag=dsync
            1+0 records in
            1+0 records out
            1073741824 bytes (1.1 GB) copied, 1.86156 s, 577 MB/s
            

            1G is usually a really bad test, as pretty small things can influence the result massively.
            You should run tests with 10G, or better 100G, if you can.
            That also diminishes the influence of any caches (on source and target!).

            Not very good.
            Here is 10M:

            dd if=/dev/zero of=./test1.img bs=10M count=1 oflag=dsync
            1+0 records in
            1+0 records out
            10485760 bytes (10 MB) copied, 0.0545468 s, 192 MB/s
            

            and here 100M:

            dd if=/dev/zero of=./test1.img bs=100M count=1 oflag=dsync
            1+0 records in
            1+0 records out
            104857600 bytes (105 MB) copied, 0.266544 s, 393 MB/s
            
            • cg @maxcuttins

              @maxcuttins Did you really measure 10 and 100 MB after I said 1 G is not enough for accurate results?

              Usually you set the block size to something useful, like 1M, and set the count to e.g. 10000.
              Of course you can vary the block size to test a bit, but that's usually between 64k and maybe 4M.
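cg's suggestion (a sensible block size, a large count) can be sketched as a short shell snippet. The file names are illustrative, and the count is deliberately scaled down so the example finishes quickly; raise it to e.g. 10000 (about 10 GiB) for a meaningful measurement.

```shell
# dd benchmark sketch: useful block size (1M), many blocks.
# count=100 (~100 MiB) keeps this example fast; use count=10000
# (~10 GiB) for a real result, as suggested above.
dd if=/dev/zero of=./ddtest.img bs=1M count=100 oflag=dsync 2> dd.log
cat dd.log          # dd prints its throughput summary on stderr
rm -f ddtest.img dd.log
```

Note that `oflag=dsync` syncs after every block, so this measures sustained write behavior rather than how fast the page cache can absorb data.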

              • maxcuttins @cg

                @cg said in Citrix Hypervisor 8.0 landed:

                @maxcuttins did you really measure 10 and 100 MB after I said 1 G is not enough for accurate results?

                Usually you set the block size to something useful, like 1M, and set the count to e.g. 10000.
                Of course you can vary the block size to test a bit, but that's usually between 64k and maybe 4M.

                Ah, did you mean 10G instead of 10M?

                • cg

                  First rule of all benchmarks: the longer and more often they run, the more precise they are.
                  If we're talking about 1G as a baseline, why would I switch to 10M or 100M? That doesn't make any sense at all.

                  • stormi (Vates 🪐 XCP-ng Team)

                    I've heard in several places on this forum that fio would be a better benchmark than dd. Does that apply here too?

                    • olivierlambert (Vates 🪐 Co-Founder & CEO)

                      It's always better than dd, because it's closer to a real load.

                      • cg @stormi

                        @stormi dd is essentially a data dump: it copies a single stream of data.
                        fio, however, can be configured for precise workloads: read/write mixes, parallel jobs, etc.

                        So the former only gives you sequential streaming benchmarks, which almost nobody cares about.
                        The latter can simulate real-world (VM, database, ...) workloads, where (controller) caches and non-magnetic storage (Flash, Optane, MRAM...) make the real difference.
                        Also use a large amount of data, since caches can skew small tests extremely. Don't get me wrong: we need caches and they can make huge differences, but as long as your benchmark fits entirely into them, it gives you nonsense/fake results. Also, (consumer) SSDs start throttling after some tens to at most a few hundred GB written: their caches fill up and they 'overheat'.

                        You can spend days on benchmarks and how to do what. 😉
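As a sketch of the kind of workload cg describes, a fio job file for a mixed random read/write test with parallel jobs might look like this. The job name, file path, and sizes are illustrative assumptions, not something taken from this thread:

```ini
; Hypothetical fio job: 70/30 random read/write mix, 4 parallel jobs.
; direct=1 bypasses the page cache so caches don't fake the result;
; size=10g is chosen large enough to outrun controller/SSD caches.
[vm-mixed]
ioengine=libaio
direct=1
rw=randrw
rwmixread=70
bs=4k
size=10g
iodepth=32
numjobs=4
runtime=120
time_based=1
filename=/srv/fio/testfile
group_reporting=1
```

Run it with `fio vm-mixed.fio` and compare IOPS and latency percentiles rather than raw MB/s.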
