XCP-ng

    vaewyn (@vaewyn)

    1 Reputation · 1 Profile view · 13 Posts · 0 Followers · 0 Following

    Best posts made by vaewyn

    • RE: XOSTOR hyperconvergence preview

      @olivierlambert I've understood that part... what I am wondering is: if I have 3 hosts in one data center and 3 hosts in another, and I have asked for redundancy of 3 copies, is there a way to ensure that all three copies are never in the same data center at the same time?

      posted in XOSTOR
      vaewyn

    Latest posts made by vaewyn

    • RE: Migrating complicated windows fileservers from vmware quickly?

      @rtjdamen Correct... in VMware, RDMs created quite a few issues, so they are all VMFS datastores with VMDKs inside them that are then exposed to the VM as drives.

      Unfortunately, this is like the ONLY thing that RDMs would actually shine for 😟

      posted in Migrate to XCP-ng
      vaewyn
    • RE: Migrating complicated windows fileservers from vmware quickly?

      @rtjdamen They are currently VMware datastores (VMFS) that have drives on them (VMDKs) attached to the VMs, so 😞 I can easily expose the iSCSI targets, but neither Windows nor XCP-ng understands what to do with a datastore target from VMware.

      posted in Migrate to XCP-ng
      vaewyn
    • Migrating complicated windows fileservers from vmware quickly?

      We have several large file servers (each has 20+ disks that are 2-4TB in size). VMware currently connects via iSCSI to the SAN they are on and exposes them as virtual disks to the VM... and therein lies the problem: VMFS 😕 I can't just attach those iSCSI targets to XCP-ng and have them show up as happy disks on the new system.

      I have room on other SANs or on XOSTOR for the new disks... but I can think of only one (sketchy) way to do a conversion that won't take DAYS of downtime to complete. I'm also wondering, even if I changed datastores from the external SANs to vSAN, whether the migration process would even have a prayer of completing at all.

      The only idea I have come up with, and yes... this seems VERY stupid, is to:
      1. Expose some iSCSI targets from a SAN directly to the file server VM
      2. Set up an OS RAID mirror for each disk (rough diskpart sketch below)
      3. Let them finish mirroring in the OS
      4. Remove the old drives from the mirror
      5. Shut down the VM
      6. Migrate the VM to our XCP-ng cluster
      7. Bring up the VM
      8. Add new disks coming from XOSTOR to the VM
      9. Add the new disks as mirror members in the OS RAID mirror
      10. Wait for the mirror to complete
      11. Remove the iSCSI disks on the SAN from the mirror
      12. Destroy the mirror and return it to a standard disk setup
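
      For the mirror steps, a minimal diskpart sketch of the idea (disk and volume numbers below are placeholders, not our real layout, and each data volume would need the same treatment):
      rem both halves of a mirror have to be dynamic disks
      select disk 1
      convert dynamic
      select disk 2
      convert dynamic
      rem mirror the data volume living on disk 1 onto the new iSCSI disk 2
      select volume 3
      add disk=2
      rem ...wait for the resync to finish, then later drop the old half with:
      rem break disk=1 nokeep

      Steps 9-12 would then be the same dance in reverse against the XOSTOR-backed disks once the VM is on XCP-ng.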

      Looking for feedback on my level of stupidity 😅 and wondering if someone has been down this road before.

      posted in Migrate to XCP-ng
      vaewyn
    • RE: XOSTOR Performance

      @olivierlambert iodepth didn't change it much...

      Read speeds are good: I'm seeing 1,113 MiB/s on both the RAID 0 and the single drive... so does SMAPIv1 have a limiting factor only on the writes?
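
      (For anyone reproducing this: a matching read test is the same fio line with the direction flipped to --rw=read; a sketch, assuming the other parameters stay the same:)
      fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=read --size=10g --numjobs=1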

      posted in XOSTOR
      vaewyn
    • RE: XOSTOR Performance

      @olivierlambert Well... that actually went both worse and better 😞 68.8 MB/s for the first test... but then 266 MiB/s once it was no longer "thin" on the second run.

      This is still only 1/3 of "raw" performance so... it's not a deal breaker, but man... having to do fake RAID devices and still coming in that slow, in comparison, is rough. I've seen some forum posts with waaaaaay better speeds... I'll need to do some searching and see if they have any "magic" for me.

      My methodology was: I created 4 drives on XOSTOR assigned to the VM. Then in the VM I did:
      # stripe the four XOSTOR-backed disks together as RAID 0
      mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/xvdb /dev/xvdc /dev/xvde /dev/xvdf
      # format the array and mount it
      mkfs.xfs /dev/md0
      mount /dev/md0 /mnt
      # sequential 1M direct writes, 10GB, single job
      fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=write --size=10g --numjobs=1

      posted in XOSTOR
      vaewyn
    • RE: XOSTOR Performance

      @olivierlambert I created a 100GB drive on my XOSTOR, mounted it in a VM on /mnt, and ran:
      fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=write --size=10g --numjobs=1

      Which basically says to write a 10GB file as fast as you can.

      posted in XOSTOR
      vaewyn
    • XOSTOR Performance

      Doing our trial setup before we purchase... and I am getting performance numbers that are lower than I expected.

      Setup is 6 Dell R740xd chassis, each with 23 4TB Samsung EVO SSDs assigned to XOSTOR. Network setup is 2x 10Gb ports in an SLB bond from each host.
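
      (For reference, that SLB bond is the standard XCP-ng active/active bonding; a sketch of creating one from the CLI, with placeholder UUIDs:)
      # bond two physical NICs (PIFs) onto one network using source-load-balancing
      xe bond-create network-uuid=<bond-network-uuid> pif-uuids=<pif-uuid-1>,<pif-uuid-2> mode=balance-slb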

      Running TrueNAS on them with a ZFS setup, I normally see numbers around 885 MiB/s for fio write tests (10GB with 1 worker) from VMware mounting the iSCSI from them to a VM.

      With XOSTOR I am seeing 170 MiB/s from a VM running on the same host as the indicated "In Use" XOSTOR host.
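
      (To double-check where the replicas actually sit, a couple of read-only commands on the XOSTOR hosts help; a sketch, assuming the LINSTOR/DRBD tooling XOSTOR is built on:)
      # which node holds each replica, and its state
      linstor resource list
      # DRBD's own view: which peers are UpToDate and which are diskless
      drbdadm status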

      Wondering if these numbers are expected... where to go for "tuning" etc...

      Thanks for any info in advance!

      posted in XOSTOR
      vaewyn
    • Newbie questions

      #1 Why can't you have rolling upgrades when there is an XOSTOR on the pool?
      #2 Will that limitation go away anytime soon?
      #3 I'm guessing that doing the migrate out/maintenance/upgrade/down/up/back-in dance manually (rough xe sketch below) still works just fine?
      #4 "Only show Networks that meet XOSTOR requirements"... what are those requirements, as I am only seeing the management network when others "should" be available? (figured this one out on my own)

      Thanks!

      posted in XOSTOR
      vaewyn
    • RE: XOSTOR hyperconvergence preview

      For those who might run across my questions here... there is a nice blog post from Linbit on how to span availability zones correctly to keep your data redundancy up:
      https://linbit.com/blog/multi-az-replication-using-automatic-placement-rules-in-linstor/

      So CLI is doable 🙂 GUI would be nice in the future 😁
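
      The gist from that post, as a rough sketch (node names, the Aux property, and the group name here are just examples; check the post and the linstor client help for the exact syntax):
      # tag every node with the data center it lives in
      linstor node set-property xcp-host-1 Aux/site dc-a
      linstor node set-property xcp-host-4 Aux/site dc-b
      # have the auto-placer spread the 3 replicas across different values of that property
      # (the property key may need the Aux/ prefix depending on the client version)
      linstor resource-group create rg-3copies --place-count 3 --replicas-on-different site

      Resources created from that group should then never end up with all 3 copies in the same DC.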

      posted in XOSTOR
      vaewyn
    • RE: XOSTOR hyperconvergence preview

      @olivierlambert Correct... these DCs are across a campus on private fiber, so single-digit milliseconds worst case. We've historically had VMware keep 3 data copies and make sure at least one is in a separate DC... that way, when a DC is lost, the HA VMs can restart successfully on the remaining host pool because they still have their storage available.

      posted in XOSTOR
      vaewyn