XCP-ng

    Citrix Hypervisor 8.0 landed

    • maxcuttins @cg

      @cg said in Citrix Hypervisor 8.0 landed:

      @maxcuttins did you really measure 10 and 100 MB after I said 1 G is not enough for accurate results?

      Usually you set the block size to something useful, like 1M, and set count to e.g. 10000.
      Of course you can vary the block size to experiment a bit, but that's usually somewhere between 64k and maybe 4M.

      Ah, did you mean 10G instead of 10M?
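
      For reference, a dd run along those lines might look like this (a sketch only: the target path is a placeholder, and oflag=direct / conv=fdatasync are my additions to keep the page cache from skewing the result):

          # ~10 GB sequential write test (bs=1M x count=10000)
          dd if=/dev/zero of=/mnt/sr/testfile bs=1M count=10000 oflag=direct conv=fdatasync

      With bs=1M and count=10000 that writes about 10 GB in total, which matches the "10G" reading of the question above.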

    • cg

      First rule of all benchmarks: the longer and more often they run, the more precise they are.
      If we're talking about 1G as the baseline, why would I switch down to 10 or 100 M? That doesn't make any sense at all.

    • stormi (Vates 🪐 XCP-ng Team)

      I've heard in several places on this forum that fio is a better benchmark than dd. Does that apply here too?

    • olivierlambert (Vates 🪐 Co-Founder & CEO)

      It's always better than dd, because it's closer to a real load.

    • cg @stormi

      @stormi dd is essentially a disk dump tool and does exactly that: copy a stream of data.
      fio, however, can be configured for precise workloads: read/write mixes, parallel jobs, and so on.

      So the former only gives you streaming benchmarks, which almost nobody cares about.
      The latter can simulate real-world (VM/database...) workloads, where (controller) caches and non-magnetic storage (flash, Optane, MRAM...) make the real difference.
      Also, use a large amount of data, since caches can skew small runs dramatically. Don't get me wrong: we need them and they can make a huge difference, but as long as your benchmark fits entirely into them, you get nonsense/fake results. Also, (consumer) SSDs start throttling after some tens to a few hundred GB of data written: their caches fill up and they 'overheat'.

      You can spend days on benchmarks and how to run them properly. 😉
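
      To make that concrete, a fio job simulating a mixed random workload might look something like this (a sketch only: the job name, file path, 70/30 read/write mix, queue depth, and 20G size are illustrative choices of mine, with the size picked to be larger than typical caches):

          fio --name=vm-sim --filename=/mnt/sr/testfile --size=20G \
              --rw=randrw --rwmixread=70 --bs=4k \
              --ioengine=libaio --iodepth=32 --numjobs=4 --direct=1 \
              --runtime=300 --time_based --group_reporting

      Here --direct=1 bypasses the page cache, and --time_based with --runtime=300 keeps the job running for five minutes, in line with the "longer runs are more precise" rule above.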
