XCP-ng
Popular

    • Backup Info under VM tab in v6 never loads...

      Backup
      0 Votes
      64 Posts
      468 Views
      @MathieuRA said: @ph7 @acebmxer Hi, regarding your backups which do not appear on the XO5 restore page, I suggest you open a new topic so that this one remains focused on the XO6 dashboards.

      Once I updated back to master, our restore points were visible again. The issue only appears with this test branch.
    • Restore only showing 1 VM

      Backup
      0 Votes
      8 Posts
      59 Views
      @Bastien-Nollet I don't know; I played around with it a little and everything seemed to work, so I updated to master a6c50 as @acebmxer did and it still seems OK. Maybe we can put this on hold for now and put the energy into something else.
    • Mirror backup broken since XO 6.3.0 release, "Error: Cannot read properties of undefined (reading 'id')"

      Backup
      0 Votes
      6 Posts
      64 Views
      Tried it both via a manual run and by relaunching the schedule (to see whether the schedule's context was needed to get its id), but to no avail: schedule or schedule.id seems to be undefined.
    • Axios NPM attack

      Xen Orchestra
      0 Votes
      4 Posts
      60 Views
      olivierlambert:
      I think I added it after the video ^^ (between the recording and the official release)
    • VM Migration | PIF is not attached

      Compute
      0 Votes
      2 Posts
      23 Views
      Had the same on one pool that needed 97 updates. The RPU was going great, but we couldn't migrate VMs post-upgrade. It is a pool of 2 hosts, and the second host still had 97 patches to apply... even though the RPU was finished. We waited 10 minutes, the 97 patches disappeared, and then we could migrate the VM.
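      For anyone hitting the same PIF error who would rather not just wait, a generic way to check and re-attach the interface from the host CLI (a hedged sketch with a placeholder UUID, not the exact steps used above):

        # Show every PIF in the pool with its attachment state
        xe pif-list params=uuid,device,host-name-label,currently-attached
        # Re-plug a PIF that reports currently-attached: false
        xe pif-plug uuid=<pif-uuid>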
    • Comparison with Ceph

      XOSTOR
      0 Votes
      2 Posts
      41 Views
      olivierlambert:
      Different use cases: Ceph is better with more hosts (at least 6 or 7 minimum), while XOSTOR is better with 3 to 7/8 hosts. We might have better Ceph support in the future for large clusters.
    • XCP-NG upgrade 8.2 to 8.3

      XCP-ng
      0 Votes
      3 Posts
      66 Views
      To add a bit more detail on the upgrade path: strictly speaking, you do not need to apply outstanding 8.2 patches before upgrading. When you upgrade to 8.3, you are replacing the entire base system with the 8.3 release, which already incorporates everything from the 8.2 patch stream; any 8.2 patches you hadn't yet applied will simply be superseded. That said, applying them first is still a reasonable approach if you want a clean upgrade history and a fully patched 8.2 baseline before jumping to 8.3.

      A few things worth checking before you start on a production pool:

      • Check VM compatibility. Run a quick review of your VMs for any that might have specific OS or toolstack dependencies tied to 8.2. Most guests upgrade cleanly, but it is worth knowing your environment.
      • Use rolling pool upgrade if you have more than one host. XCP-ng supports rolling upgrades: you migrate VMs off each host, upgrade it, rejoin the pool, then proceed to the next. This maintains VM availability throughout the process. The XO interface handles this workflow if you have XOA.
      • Back up before the jump. Export critical VM configurations or snapshots beforehand. If you use Xen Orchestra for backups, trigger a manual full backup job before starting.

      The upgrade itself via yum is straightforward: add the 8.3 repo, yum update, reboot (see the sketch after this post). The toolstack and XAPI will handle pool registration after the host comes back up. After upgrading all hosts, run the post-upgrade checks from the docs (pool metadata sync, storage rescans) and verify HA is healthy if you use it.
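      A minimal sketch of that yum path, assuming the stock XCP-ng 8.2 repo file; verify the file name and version strings against the official upgrade docs before running it on a real host:

        # Point the yum repos at 8.3 instead of 8.2 (assumes the default repo file)
        sed -i 's/8\.2/8.3/g' /etc/yum.repos.d/xcp-ng.repo
        # Drop the cached 8.2 metadata so yum sees the new release
        yum clean metadata
        # Replace the base system with the 8.3 packages
        yum update
        # Boot into the upgraded host; XAPI rejoins the pool on startup
        reboot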
    • Loss of connection during an action BUG

      Xen Orchestra
      0 Votes
      18 Posts
      243 Views
      @User-cxs Maybe an IP conflict, or the VM was taking the master's IP... good luck with the rest!
    • OIDC login - Internal Server Error

      Advanced features
      0 Votes
      12 Posts
      310 Views
      @dlgroep Thank you, we will take thin/thick tokens into account and use the 'claims' part for the other properties.
    • [FEEDBACK][COOKBOOK] A series of articles dedicated to XCP and the Vates ecosystem

      French (Français)
      1 Vote
      3 Posts
      78 Views
      @AtaxyaNetwork Thank you very much for this feedback. I will expand the article along those lines.
    • HOST_NOT_ENOUGH_FREE_MEMORY

      Xen Orchestra
      0 Votes
      6 Posts
      195 Views
      @Danp Okay, thank you.
    • Timestamp lost in Continuous Replication

      Backup
      0 Votes
      27 Posts
      772 Views
      florent:
      @joeymorin said: I observed similar behaviour. Two pools: pool A composed of two hosts, pool B single-host. B runs a VM with XO from source. Two VMs are on host A1 (on a local SR) and one VM on host A2 (on a local SR; corrected from B1 in an edit). Host A2 has a second local SR (a separate physical disc) used as the target for a CR job, which would back up all four VMs to that SR. The behaviour observed was that, although the VM on B would be backed up (as expected) as a single VM with multiple snapshots (up to the 'replication retention'), the three other VMs on the same pool as the target SR would see a new full VM created for each run of the CR job. That rather quickly filled up the target SR. I noticed the situation was corrected by a commit on or about the same date reported by @ph7. Incidentally, whatever broke this, and subsequently corrected it, appears to have corrected another issue I reported here. I never got a satisfactory answer regarding that question; questions were raised about the stability of my test environment, even though I could easily reproduce it with a completely fresh install. Thanks for the work!

      Sometimes it's hard to find a complete explanation without connecting to the hosts and XO and going through a lot of logs, which is out of the scope of community support. I am glad the continuous improvement of the code base fixed the issue. We will release a new patch today, because migrating from 6.2.2 to 6.3 forces a full replication (users who updated to the intermediate version are not affected).