XCP-ng
    Dark199 · Offline · Topics: 2 · Posts: 7

    Posts

    • RE: XOSTOR 8.3 controller crash with guest OSes shutting down filesystem

      @ronan-a
      [...]
      64 bytes from 172.27.18.161: icmp_seq=21668 ttl=64 time=0.805 ms
      64 bytes from 172.27.18.161: icmp_seq=21669 ttl=64 time=0.737 ms
      64 bytes from 172.27.18.161: icmp_seq=21670 ttl=64 time=0.750 ms
      64 bytes from 172.27.18.161: icmp_seq=21671 ttl=64 time=0.780 ms
      64 bytes from 172.27.18.161: icmp_seq=21672 ttl=64 time=0.774 ms
      64 bytes from 172.27.18.161: icmp_seq=21673 ttl=64 time=0.737 ms
      64 bytes from 172.27.18.161: icmp_seq=21674 ttl=64 time=0.773 ms
      64 bytes from 172.27.18.161: icmp_seq=21675 ttl=64 time=0.835 ms
      64 bytes from 172.27.18.161: icmp_seq=21676 ttl=64 time=0.755 ms
      1004711/1004716 packets, 0% loss, min/avg/ewma/max = 0.712/1.033/0.775/195.781 ms

      I am attaching simple ping stats for the last 11 days. I don't think we can blame the network šŸ™‚
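      The summary above is already telling: the average RTT is about 1 ms, but the maximum of 195.781 ms means at least one probe was two orders of magnitude slower. A minimal sketch for locating such spikes in a saved ping capture, so they can be matched against kern.log timestamps (the file name and the two sample lines are illustrative, not taken from the actual 11-day log):

```shell
# ping.log stands in for the saved ping output; the second line is an
# invented example matching the 195 ms maximum from the summary above.
cat > ping.log <<'EOF'
64 bytes from 172.27.18.161: icmp_seq=21668 ttl=64 time=0.805 ms
64 bytes from 172.27.18.161: icmp_seq=99999 ttl=64 time=195.781 ms
EOF

# Print every probe slower than 100 ms.
awk -F'time=' '/time=/ {
    split($2, a, " ")             # a[1] holds the RTT in ms
    if (a[1] + 0 > 100) print
}' ping.log
```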

      posted in XOSTOR
      Dark199
    • RE: XOSTOR 8.3 controller crash with guest OSes shutting down filesystem

      @ronan-a
      Hello,

      I am uploading kern.log and drbd-kern.log for both events.

      drbd-kern.Feb06.log.txt
      kern.Feb06.log.txt

      drbd-kern.Feb17.log.txt
      kern.Feb17.log.txt

      Disks and RAM are 100% OK, but the kernel logs make me wonder how XOSTOR should react to a short network outage. The VMs did have a local primary DRBD resource (diskful volume; all the data they needed was available on a local disk):

      # linstor resource list
      ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
      ā”Š ResourceName                                    ā”Š Node       ā”Š Port ā”Š Usage  ā”Š Conns ā”Š      State ā”Š CreatedOn           ā”Š
      ā•žā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•”
      ā”Š xcp-persistent-database                         ā”Š xencc-hp03 ā”Š 7000 ā”Š Unused ā”Š Ok    ā”Š   UpToDate ā”Š 2025-02-02 15:28:19 ā”Š
      ā”Š xcp-persistent-database                         ā”Š xenrt-1    ā”Š 7000 ā”Š InUse  ā”Š Ok    ā”Š   UpToDate ā”Š 2025-02-02 15:28:18 ā”Š
      ā”Š xcp-persistent-database                         ā”Š xenrt-2    ā”Š 7000 ā”Š Unused ā”Š Ok    ā”Š   Diskless ā”Š 2025-02-02 15:28:17 ā”Š
      ā”Š xcp-volume-623a917e-614f-4176-8e58-505248ee9db4 ā”Š xencc-hp03 ā”Š 7004 ā”Š InUse  ā”Š Ok    ā”Š   UpToDate ā”Š 2025-02-02 15:35:18 ā”Š
      ā”Š xcp-volume-623a917e-614f-4176-8e58-505248ee9db4 ā”Š xenrt-1    ā”Š 7004 ā”Š Unused ā”Š Ok    ā”Š   UpToDate ā”Š 2025-02-02 15:35:17 ā”Š
      ā”Š xcp-volume-623a917e-614f-4176-8e58-505248ee9db4 ā”Š xenrt-2    ā”Š 7004 ā”Š Unused ā”Š Ok    ā”Š TieBreaker ā”Š 2025-02-02 15:35:17 ā”Š
      ā”Š xcp-volume-9dd3dc66-aa58-40f2-aa56-14b8846a4278 ā”Š xencc-hp03 ā”Š 7007 ā”Š Unused ā”Š Ok    ā”Š   UpToDate ā”Š 2025-02-04 16:18:46 ā”Š
      ā”Š xcp-volume-9dd3dc66-aa58-40f2-aa56-14b8846a4278 ā”Š xenrt-1    ā”Š 7007 ā”Š Unused ā”Š Ok    ā”Š   UpToDate ā”Š 2025-02-04 16:18:46 ā”Š
      ā”Š xcp-volume-9dd3dc66-aa58-40f2-aa56-14b8846a4278 ā”Š xenrt-2    ā”Š 7007 ā”Š Unused ā”Š Ok    ā”Š TieBreaker ā”Š 2025-02-04 16:18:46 ā”Š
      ā”Š xcp-volume-e9428d9d-97a7-4a37-a2bb-630f8b5f3f0f ā”Š xencc-hp03 ā”Š 7005 ā”Š Unused ā”Š Ok    ā”Š   UpToDate ā”Š 2025-02-02 15:42:40 ā”Š
      ā”Š xcp-volume-e9428d9d-97a7-4a37-a2bb-630f8b5f3f0f ā”Š xenrt-1    ā”Š 7005 ā”Š InUse  ā”Š Ok    ā”Š   UpToDate ā”Š 2025-02-02 15:42:40 ā”Š
      ā”Š xcp-volume-e9428d9d-97a7-4a37-a2bb-630f8b5f3f0f ā”Š xenrt-2    ā”Š 7005 ā”Š Unused ā”Š Ok    ā”Š TieBreaker ā”Š 2025-02-02 15:42:39 ā”Š
      ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
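      As a rough health check on a listing like the one above: every row should show Conns = Ok and a State of UpToDate, Diskless, or TieBreaker. A minimal sketch that flags anything else, run against a whitespace-simplified copy of the table (resources.txt is a hypothetical saved copy; the third row below is invented to illustrate a failure):

```shell
# Columns: ResourceName Node Port Usage Conns State
cat > resources.txt <<'EOF'
xcp-persistent-database xencc-hp03 7000 Unused Ok UpToDate
xcp-volume-623a917e-614f-4176-8e58-505248ee9db4 xenrt-2 7004 Unused Ok TieBreaker
xcp-volume-9dd3dc66-aa58-40f2-aa56-14b8846a4278 xenrt-1 7007 Unused StandAlone Outdated
EOF

# Print rows whose connection or state column is not a healthy value.
awk '$5 != "Ok" || ($6 != "UpToDate" && $6 != "Diskless" && $6 != "TieBreaker")' resources.txt
```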
      
      posted in XOSTOR
      Dark199
    • RE: XOSTOR 8.3 controller crash with guest OSes shutting down filesystem

      @olivierlambert
      Hi,

      Yes, thank you, I am aware of that. I read all the docs/forums available, didn't find anything on the subject, and just wanted to share the experience. Should I assume it's a known problem? After all, that's what betas are for šŸ™‚

      Thanks,

      posted in XOSTOR
      Dark199
    • RE: XOSTOR 8.3 controller crash with guest OSes shutting down filesystem

      Afterwards, I left two VMs using XOSTOR storage, each on a different host, and "Shutting down filesystem" happened on only one of them, with the following report generated on the linstor controller:

      ErrorReport-67B37339-00000-000000.log.txt

      Kind regards,

      posted in XOSTOR
      Dark199
    • XOSTOR 8.3 controller crash with guest OSes shutting down filesystem

      Hello,

      I am currently testing an XOSTOR volume (XCP-ng 8.3, build of 11 Oct 2024, three hosts) and have experienced a two-part problem:

      1. linstor controller crashed, attaching /var/log/linstor-controller/ErrorReport, excerpt:
        Error message: Failed to start transaction
        Error message:
        Error message: IO Exception: null [90028-197]
        Error message: Reading from nio:/var/lib/linstor/linstordb.mv.db failed; file length 901120 read length 8192 at 0 [1.4.197/1]
        Error message: Input/output error

      As far as I can tell, the controller was immediately restarted on one of the remaining hosts, but:

      2. the Linux VMs (all 3 of them) lost access to their disks ("Shutting down filesystem"). They're up-to-date CentOS 7; here's a console screenshot:
        XOSTOR_1.png

      3. After the VMs rebooted, everything went back to normal without any other action.

      So it seems the biggest issue was the guest OSes giving up at the time of the controller crash.

      ErrorReport-679F8267-00000-000001.log.txt

      Can we do something about it?
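      For what it's worth, on CentOS 7 the "Shutting down filesystem" message is what XFS prints after a metadata I/O error it cannot recover from. A quick way to pin down the moment a guest gave up, for correlation with the controller crash timestamp (kern.sample.log below stands in for the attached kern.log; the lines are typical XFS wording and may differ slightly between kernel versions):

```shell
# Illustrative excerpt of what a guest's kernel log looks like when
# XFS forces a shutdown after an I/O error.
cat > kern.sample.log <<'EOF'
kernel: blk_update_request: I/O error, dev xvda, sector 2048
kernel: XFS (xvda1): metadata I/O error: block 0x800 ("xlog_iodone") error 5
kernel: XFS (xvda1): Log I/O Error Detected. Shutting down filesystem
kernel: XFS (xvda1): Please umount the filesystem and rectify the problem(s)
EOF

# Pull out the I/O-error and shutdown lines.
grep -E 'I/O error|Shutting down filesystem' kern.sample.log
```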

      posted in XOSTOR
      Dark199
    • RE: Linux HVM big performance drop on Xeon E7450@HP DL 580g5, somewhere between 6.5 and 7.6 - still visible on 8.1

      Thanks for the quick response,

      I've tried 'pti=off spectre_v2=off l1tf=off nospec_store_bypass_disable no_stf_barrier' (Xen/Dom0/guest), but saw no improvement.
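      One sanity check worth doing when testing these, in case it helps anyone reproducing this: confirm the parameters actually made it into the running kernel's command line. A small sketch (cmdline.txt stands in for /proc/cmdline on the guest; the sample contents are invented):

```shell
# cmdline.txt simulates /proc/cmdline as it would look after a reboot
# with the mitigation-disabling parameters applied.
printf '%s\n' 'ro root=/dev/mapper/root pti=off spectre_v2=off l1tf=off' > cmdline.txt

# Report each expected parameter as present or missing.
for p in pti=off spectre_v2=off l1tf=off; do
    grep -qw "$p" cmdline.txt && echo "$p: present" || echo "$p: MISSING"
done
```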

      I'm still curious enough to investigate further šŸ™‚

      posted in Compute
      Dark199
    • Linux HVM big performance drop on Xeon E7450@HP DL 580g5, somewhere between 6.5 and 7.6 - still visible on 8.1

      Hello everyone,

      Could someone point me to a possible cause (and hopefully possible workarounds) that would let me keep using Xeon E7450 cores, even if only for sandbox/testing purposes?

      The VMs were running perfectly on 6.5, but after the upgrade to 7.6 performance dropped significantly, to the point of unusability. The same VMs, when using PV, still run perfectly. Moreover, Windows machines (HVM) show no change in performance, and Dom0 also seems unaffected.
      CPU usage on those Linux HVM guests easily gets very high (100% on all cores) when given simple tasks, but I am unable to tell why.

      I've tried messing with guest I/O schedulers, and disabling ucode loader in Xen, but nothing helped. I've also tried different network hardware.

      I am aware that 'dunnington' cores are now unsupported (as of 8.1), but the problem also exists on 7.6.

      I've tested on both up-to-date CentOS 6 and CentOS 7.

      Thanks in advance,

      posted in Compute
      Dark199