XCP-ng
    • maximsachs

      XCP-ng 8.3: Broadcom BCM57414 `bnxt_en` Driver Fails to Probe on HPE DL380a Gen12

      Hardware · 1 Vote · 2 Posts · 65 Views
      olivierlambert:
      Pinging @Team-OS-Platform-Release
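A probe failure like the one in the title is usually triaged from the dom0 shell. A minimal diagnostic sketch, assuming the standard `lspci` and `dmesg` tools in the XCP-ng dom0 (nothing here is specific to the reporter's host):

```shell
# Hedged diagnostic sketch for a NIC whose driver fails to probe.
# Run in the XCP-ng dom0; grep patterns are generic.
if command -v lspci >/dev/null 2>&1; then
  # Is the Broadcom adapter visible on the PCI bus at all?
  lspci -nn | grep -i broadcom || echo "no Broadcom device visible"
  # Probe errors from the driver usually land in the kernel log.
  dmesg 2>/dev/null | grep -i bnxt_en | tail -n 20 || true
  result="checked PCI bus and kernel log"
else
  result="lspci not available: run this in the XCP-ng dom0"
fi
echo "$result"
```

If the device shows on the bus but `dmesg` logs a probe error, that output is what the driver team will want to see.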
    • W

      VDI not showing in XO 5 from Source.

      Management (Unsolved) · 0 Votes · 39 Posts · 3k Views
      @wgomes yup thanks for bumping this topic, still having the problem too
    • C

      VM Migration | PIF is not attached

      Compute · 0 Votes · 2 Posts · 42 Views
      Had the same on one pool that needed 97 updates. RPU was going great, but we couldn't migrate VMs post-upgrade. It is a pool of 2 hosts, and the second host still showed 97 patches to apply, even though the RPU was finished. We waited 10 minutes, the 97 patches disappeared, and then we could migrate the VM.
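The mismatch described above can be confirmed before retrying the migration. A hedged sketch, assuming the standard `xe` CLI in dom0:

```shell
# Hedged sketch: after a Rolling Pool Update, confirm every host reports the
# same software version before retrying a VM migration.
if command -v xe >/dev/null 2>&1; then
  # Each host's version map should match once all patches are applied.
  xe host-list params=name-label,software-version
  status="listed host software versions"
else
  status="xe CLI not found: run this on an XCP-ng host"
fi
echo "$status"
```

If the versions differ between pool members, finish patching (or wait for the pending-patch list to clear, as above) before migrating.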
    • U

      Comparison with Ceph

      XOSTOR · 0 Votes · 2 Posts · 71 Views
      olivierlambert:
      Different use cases: Ceph is better with more hosts (at least 6 or 7 minimum), while XOSTOR is better between 3 and 7/8 hosts. We might have better Ceph support in the future for large clusters.
    • P

      Timestamp lost in Continuous Replication

      Backup · 0 Votes · 27 Posts · 799 Views
      florent:
      @joeymorin said: I observed similar behaviour. Two pools: pool A composed of two hosts, pool B single-host. B runs a VM with XO from source. Two VMs on host A1 (on local SR), one VM on host A2 (on local SR). Host A2 has a second local SR (separate physical disc) used as the target for a CR job. The CR job would back up all four VMs to the second local SR on host A2. The behaviour observed was that, although the VM on B would be backed up (as expected) as a single VM with multiple snapshots (up to the 'replication retention'), the three other VMs on the same pool as the target SR would see a new full VM created for each run of the CR job, which rather quickly filled up the target SR. I noticed the situation was corrected by a commit on or about the same date reported by @ph7. Incidentally, whatever broke this, and subsequently corrected it, appears to have corrected another issue I reported here. I never got a satisfactory answer regarding that question; questions were raised about the stability of my test environment, even though I could easily reproduce it with a completely fresh install. Thanks for the work!

      Sometimes it is hard to find a complete explanation without connecting to the hosts and XO and going through a lot of logs, which is out of the scope of community support. I am glad the continuous improvement of the code base fixed the issue. We will release a new patch today, because migrating from 6.2.2 to 6.3 triggered a full replication (source users that updated to the intermediate version are not affected).
    • C

      Cannot shutdown VM or migrate

      XCP-ng · 0 Votes · 2 Posts · 53 Views
      Not sure what the issue was, but I rebooted the host and everything is fine again. I had definitely rebooted the host twice already after applying the patches, and had physically unplugged and moved the host.
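Before rebooting a whole host for a stuck VM, a forced shutdown via `xe` is usually worth trying. A hedged sketch; the UUID is a placeholder, and `vm-reset-powerstate` is a last resort since it only clears XAPI's record of the VM's power state:

```shell
# Hedged sketch: force a stuck VM down via xe before resorting to a host reboot.
# VM_UUID is a placeholder; assumes the xe CLI in dom0.
VM_UUID="00000000-0000-0000-0000-000000000000"
if command -v xe >/dev/null 2>&1; then
  # Forced shutdown first; power-state reset only as a last resort.
  xe vm-shutdown uuid="$VM_UUID" --force || \
    xe vm-reset-powerstate uuid="$VM_UUID" --force || true
  note="attempted forced shutdown"
else
  note="xe CLI not found: run this on an XCP-ng host"
fi
echo "$note"
```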
    • P

      Found reproducible BUG with FLR

      Backup · 0 Votes · 1 Post · 17 Views
      No one has replied
    • E

      TrueNAS VM failing to start

      Compute · 0 Votes · 21 Posts · 1k Views
      That is a frustrating loop to be in, especially with TrueNAS. Usually, when the VM fails to start after a change, it’s because XCP-ng is trying to pass through a PCI device (like an HBA) that isn't being released properly by the host. Have you checked if the "hide" parameters in your grub config are still correct? Sometimes an update can reset those, and the host grabs the controller before the VM can. Another thing to try is toggling the BIOS/UEFI mode in the VM settings - TrueNAS can be picky about that depending on which version you’re running.
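The "hide" check suggested above can be scripted. A hedged sketch using the `xen-cmdline` helper shipped with XCP-ng; the PCI address in the comment is a placeholder:

```shell
# Hedged sketch: check whether dom0 is still hiding the HBA for passthrough.
# xen-cmdline ships with XCP-ng; the PCI address below is a placeholder.
CMDLINE=/opt/xensource/libexec/xen-cmdline
if [ -x "$CMDLINE" ]; then
  "$CMDLINE" --get-dom0 xen-pciback.hide || true   # prints hidden devices, if any
  # To (re)hide a controller, then reboot the host:
  # "$CMDLINE" --set-dom0 "xen-pciback.hide=(0000:04:00.0)"
  check="queried pciback hide list"
else
  check="xen-cmdline not found: run this on an XCP-ng host"
fi
echo "$check"
```

If the hide list comes back empty after an update, that matches the failure mode described above: dom0 grabs the controller before the VM can.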
    • R

      Boot device: Hard Disk - Success

      XCP-ng · 0 Votes · 3 Posts · 271 Views
      @DustinB If it hangs right after detecting the disk, it could be a bootloader or filesystem inconsistency from the snapshot state. Can you try booting the VM with a recovery ISO to check disk integrity or rebuild the bootloader?
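The recovery-ISO route suggested above might look like this, assuming a BIOS-booted Linux guest; `/dev/xvda1` and `/dev/xvda` are placeholders for the guest's root partition and disk:

```shell
# Hedged sketch of the recovery steps: run from a recovery ISO booted inside the
# affected VM, never against a mounted root filesystem.
ROOT_DEV=/dev/xvda1
if [ -b "$ROOT_DEV" ]; then
  fsck -f "$ROOT_DEV"                                  # check filesystem integrity
  mount "$ROOT_DEV" /mnt
  grub-install --boot-directory=/mnt/boot /dev/xvda    # rebuild the bootloader
  outcome="ran recovery steps"
else
  outcome="$ROOT_DEV not present: boot a recovery ISO inside the affected VM first"
fi
echo "$outcome"
```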
    • J

      Every VM in a CR backup job creates an "Unhealthy VDI"

      Backup · 0 Votes · 20 Posts · 1k Views
      This issue appears to have been resolved by a recent change.
    • P

      fell back to full and cannot delete snapshot

      Backup · 0 Votes · 1 Post · 29 Views
      No one has replied
    • Tristis Oris

      Unable to copy template

      Management · 0 Votes · 4 Posts · 173 Views
      Tristis Oris:
      While I was updating all the pools, another micro-update for 5 packages was released. They come out every week now. Same issue with basic migration; now it's impossible.
    • S

      How to Setup IPMI in XO

      Management · 0 Votes · 30 Posts · 2k Views
      Come on everyone! Click Here to vote to support HP IPMI info in XO!
    • olivierlambert

      🛰️ XO 6: dedicated thread for all your feedback!

      Xen Orchestra · 7 Votes · 169 Posts · 18k Views
      @AlexanderK XO 6 is rolling out features regularly; for many functions, you must continue to use XO 5. Backup is one of those at this time.
    • MathieuRA

      ACL V2 is coming soon and we need your feedback!

      Xen Orchestra · 4 Votes · 1 Post · 112 Views
      No one has replied
    • kruess

      XenServer 7.1.2 to XCP-ng 8.3: INTERNAL_ERROR(Not_found)

      Migrate to XCP-ng · 0 Votes · 6 Posts · 218 Views
      kruess:
      Good morning... The solution was pretty simple: a toolstack restart on the master (xcp83) got everything back on track, and it now allows me to move the systems with a simple shutdown/start.
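The fix described above is a one-liner on the pool master. A hedged sketch; `xe-toolstack-restart` is the standard XCP-ng helper and restarts XAPI without touching running VMs:

```shell
# Hedged sketch: restart the toolstack on the pool master, as in the fix above.
if command -v xe-toolstack-restart >/dev/null 2>&1; then
  xe-toolstack-restart
  state="toolstack restarted"
else
  state="xe-toolstack-restart not found: run this in dom0 on the pool master"
fi
echo "$state"
```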
    • K

      Feedback from Automation Project (vCPUs, VDI rename, boot order)

      REST API · 0 Votes · 5 Posts · 177 Views
      @olivierlambert & @mathieura thanks for the speedy response. Duly noted, very much appreciated.