XCP-ng
    DustyArmstrong · 11 Topics · 62 Posts

    Topics

    • DustyArmstrong

      AMD 'Barcelo' passthrough issues - any success stories?

      Hardware · 0 Votes · 12 Posts · 438 Views
      T
      @DustyArmstrong Thanks for responding to the GitHub issue. It’s great that more people want this working; it’s difficult to gain traction otherwise. Your list is correct, and the reboot belongs in second place: you only need to reboot to detach the PCI device (the video card) from its current driver so it can be bound to the pciback driver on the next boot. That effectively reserves the device and lets you assign it to VMs dynamically. Once the card is free of other kernel drivers, the rest requires no reboot.
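A sketch of that detach-and-reserve step from the XCP-ng dom0 shell, following the standard pciback workflow (the PCI address 0000:01:00.0 and the VM UUID are placeholders; take the real address from lspci):

```shell
# Find the PCI address (BDF) of the card to pass through
lspci | grep -i vga

# Hide the device from dom0 so xen-pciback claims it on the next boot
# (0000:01:00.0 is an example address; substitute your own)
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:01:00.0)"

# Reboot the host once, then attach the reserved device to a VM;
# reassigning it to other VMs later needs no further host reboots
xe vm-param-set other-config:pci=0/0000:01:00.0 uuid=<vm-uuid>
```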
    • DustyArmstrong

      Detached VM Snapshots after Warm Migration

      Backup · 0 Votes · 26 Posts · 957 Views
      DustyArmstrong
      @florent No problem, just thought it would be fun. Thanks for your work anyway!
    • DustyArmstrong

      Lots of "host.getMdadmHealth" Failure Logs

      Management · 0 Votes · 5 Posts · 853 Views
      DustyArmstrong
      Updated all my hosts but ended up with a bunch of stuck tasks for API host calls, which didn't seem too healthy. They looked genuinely stuck: an unhealthy host power state kept popping up and disappearing. I opted to select and delete all the tasks, and did the same with my logs (I monitor them externally anyway), which appears to have resolved it for the moment. I no longer see these mdadm logs being generated and everything appears normal.
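For anyone who prefers the dom0 CLI over the XO task view, the same cleanup can likely be done with the xe toolstack (the task UUID is a placeholder):

```shell
# List tasks with their UUIDs and current status
xe task-list params=uuid,name-label,status

# Cancel a stuck task by UUID (placeholder shown)
xe task-cancel uuid=<task-uuid>
```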
    • DustyArmstrong

      Backups (Config & VMs) Fail Following Updates

      Backup · 0 Votes · 7 Posts · 2k Views
      DustyArmstrong
      An update, in case anyone comes across this via a search engine. It turned out to be my container's timezone: the image defaulted to pure UTC with no timezone set, so when it wrote files to my network storage it introduced a discrepancy. The network share recorded file metadata in real time, so when the next backup ran, the file times XO expected were different, making it think the backup was "stale" or still being "held". I have now run both scheduled metadata and VM backups without any errors. In summary: make sure your time, date and timezone are set correctly!
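A quick sketch of the mismatch and the fix, assuming a Docker-style deployment (Europe/London is only an example zone, not from the original post):

```shell
# An image with no timezone configured effectively runs in UTC, so the
# timestamps it writes can disagree with a share recording local time.
TZ=UTC date +%Z                 # the bare container's view: UTC
TZ=Europe/London date +%Z       # the share's local view (example zone)

# The fix is to hand the container a real timezone, e.g.:
#   docker run -e TZ=Europe/London ...
```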
    • DustyArmstrong

      XO Backups - Offline Storage Best Practices?

      Backup · 0 Votes · 8 Posts · 2k Views
      planedrop
      @DustyArmstrong Got it got it! Makes total sense. I do think making sure you somehow are backing things up in a way that covers large natural disasters is important, not quite sure what the ideal solution here would be though.
    • DustyArmstrong

      Has REST API changed (Cannot GET backup logs)?

      Solved · REST API · 0 Votes · 7 Posts · 1k Views
      julien-f
      @DustyArmstrong Thanks for your report
    • DustyArmstrong

      XO Rest API Supported Queries

      REST API · 0 Votes · 16 Posts · 3k Views
      julien-f
      @DustyArmstrong Perfect, thanks for your feedback
    • DustyArmstrong

      XO Sources Build - Yarn ESOCKETTIMEDOUT ?

      Solved · Xen Orchestra · 0 Votes · 2 Posts · 1k Views
      DustyArmstrong
      I fixed it by running the following:

          yarn cache clean
          yarn --network-timeout 10000000