• XOA 6.1.3 Replication fails with "VTPM_MAX_AMOUNT_REACHED(1)"

    Backup
    0 Votes
    5 Posts
    58 Views
    florent
    @flakpyro yep, the VM update code is broken; the fix is here. We probably won't release it today, so I advise source users to use this branch, and XOA users to fall back to stable until Thursday (Monday is a non-working day in France).
  • Restore only showing 1 VM

    Backup
    0 Votes
    14 Posts
    141 Views
    R
    The clue is in your own observation: re-enabling the disabled manual job made all VMs appear. The Backup Restore (V5) view likely filters its restore list based on which backup jobs are currently enabled; if a job is disabled, its associated VMs may not be indexed into the restore view even if backups from that job exist on the remote. This differs from the older restore view, which scanned the remote and showed everything regardless of job state. The V5 restore view appears to build its list from job metadata rather than from a direct remote scan.

    A few things are worth verifying. Check whether the one VM that does appear is covered by a currently-enabled job while the others are only covered by disabled jobs; if that pattern holds consistently, that's the root cause, and it's a V5 behavior change worth reporting to the XO team. In the XO UI, go to Backup → Jobs and check which jobs are enabled: if you temporarily enable all jobs, does the restore list show all VMs? And when you disable some again, does the list immediately drop those VMs?

    If the behavior is confirmed, filing it against the backup V5 restore view filter logic would be useful: the restore UI should show all VMs that have restorable backups on the remote, regardless of whether the originating job is currently active. Given you're on a test branch focused on reactivity fixes, this could also be a rendering/state issue where the component isn't re-fetching when job state changes: the list gets built once and doesn't update until you navigate away and back.
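    The suspected filter can be illustrated with a small self-contained sketch (hypothetical data and logic, not actual XO code): VMs whose only covering job is disabled drop out of a list derived from job metadata, even though their backups still exist on the remote.

```shell
# Hypothetical illustration of the suspected V5 behavior (not actual XO code).
# Input rows: job<TAB>enabled<TAB>vm. vm1/vm2 are only covered by the disabled
# "manual" job, so a restore list derived from enabled jobs shows vm3 alone,
# even though backups of all three VMs exist on the remote.
printf 'manual\tfalse\tvm1\nmanual\tfalse\tvm2\nnightly\ttrue\tvm3\n' |
awk -F'\t' '$2 == "true" { print $3 }'   # prints: vm3
```

    A remote-scan-based view would instead print all three VMs, which matches the older restore behaviour the post describes.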
  • Is v8.3 NUMA aware?

    Hardware
    0 Votes
    3 Posts
    105 Views
    C
    If anyone can point me to documentation on this, I would greatly appreciate it. I have looked in the docs and searched for an answer and come up empty.
  • XOA - Memory Usage

    Xen Orchestra
    0 Votes
    7 Posts
    273 Views
    P
    @flakpyro yup, still aggressive on memory consumption. I have a task that reboots XOA & XO proxies every two days to mitigate it. [image: 1775146146414-8e3474d6-3c71-4b21-bf1c-293fe13c61b8-image.jpeg] Since 6.3 / 6.3.1 it seems more aggressive on the ramp-up (I updated just yesterday... I was still on 6.1.2).
  • iso modification and gpg key check

    Solved Development
    0 Votes
    3 Posts
    44 Views
    olivierlambert
    Nice catch @vagrantin ! Feel free to open a new thread if you have other problems.
• Distributed backups don't clean up the deltas in the BRs

    Backup
    0 Votes
    11 Posts
    91 Views
    P
    @Pilow said: @ph7 try to set RETENTION to 1 in the schedule, as you are using LTR parameters. I was running the schedule with a 20-minute backup retention and it is running fine together with the LTR. And here is the manual run I did at 14:56, removed [image: 1775140196713-screenshot-2026-04-02-at-16-28-13-backup.png]
  • 1 Vote
    13 Posts
    165 Views
    P
    @Pilow perfect!
  • Backup retention policy and key backup interval

    Backup
    0 Votes
    5 Posts
    62 Views
    P
    @Bastien-Nollet why do I have weekly or monthly retention points that are not fulls? Couldn't these retention points be merged into fulls when tagged? Keeping a monthly incremental presumably also means keeping the previous incrementals and the base full.
  • VDI not showing in XO 5 from Source.

    Unsolved Management
    0 Votes
    39 Posts
    3k Views
    P
    @wgomes yup, thanks for bumping this topic; still having the problem too.
  • Boot device: Hard Disk - Success

    XCP-ng
    0 Votes
    3 Posts
    269 Views
    O
    @DustinB If it hangs right after detecting the disk, it could be a bootloader or filesystem inconsistency from the snapshot state. Can you try booting the VM with a recovery ISO to check disk integrity or rebuild the bootloader?
  • VM Migration | PIF is not attached

    Compute
    0 Votes
    2 Posts
    39 Views
    P
    Had the same on one pool that needed 97 updates. The RPU was going great, but we couldn't migrate VMs post-upgrade. It is a pool of 2 hosts, and the second host still showed 97 patches pending even though the RPU was finished. We waited 10 minutes, the 97 patches disappeared, and then we could migrate the VM.
  • Comparison with Ceph

    XOSTOR
    0 Votes
    2 Posts
    61 Views
    olivierlambert
    Different use cases: Ceph is better with more hosts (at least 6 or 7 minimum), while XOSTOR is better between 3 and 7/8 hosts. We might have better Ceph support in the future for large clusters.
  • Axios NPM attack

    Xen Orchestra
    0 Votes
    4 Posts
    86 Views
    olivierlambert
    I think I added it after the video ^^ (between the recording and the official release)
  • OIDC login - Internal Server Error

    Advanced features
    0 Votes
    12 Posts
    353 Views
    P
    @dlgroep Thank you, we will take thin/thick tokens into account and use the 'claims' part for the other properties.
  • XCP-NG upgrade 8.2 to 8.3

    XCP-ng
    0 Votes
    3 Posts
    84 Views
    R
    To add a bit more detail on the upgrade path: strictly speaking, you do not need to apply outstanding 8.2 patches before upgrading. When you upgrade to 8.3, you are replacing the entire base system with the 8.3 release, which already incorporates everything from the 8.2 patch stream; any 8.2 patches you hadn't yet applied will simply be superseded. That said, applying them first is still a reasonable approach if you want a clean upgrade history and a fully-patched 8.2 baseline before jumping to 8.3.

    A few things are worth checking before you start on a production pool:

    Check VM compatibility. Run a quick review of your VMs for any that might have specific OS or toolstack dependencies tied to 8.2. Most guests upgrade cleanly, but it is worth knowing your environment.

    Use rolling pool upgrade if you have more than one host. XCP-ng supports rolling upgrades: you migrate VMs off each host, upgrade it, rejoin the pool, then proceed to the next. This maintains VM availability throughout the process. The XO interface handles this workflow if you have XOA.

    Back up before the jump. Export critical VM configurations or snapshots beforehand. If you use Xen Orchestra for backups, trigger a manual full backup job before starting.

    The upgrade itself via yum is straightforward: add the 8.3 repo, yum update, reboot. The toolstack and XAPI will handle pool registration after the host comes back up. After upgrading all hosts, run the post-upgrade checks from the docs (pool metadata sync, storage rescans) and verify HA is healthy if you use it.
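    The yum path described above can be sketched as a few shell commands. This is a minimal sketch, assuming the stock repo layout under /etc/yum.repos.d/ with files referencing 8.2; the official XCP-ng upgrade documentation remains the authoritative reference and should be checked before running anything like this on production hosts.

```shell
# Sketch of an in-place 8.2 -> 8.3 upgrade via yum (run on each host, one at a time).
# Assumption: the repo definitions live in /etc/yum.repos.d/xcp-ng.repo and point at 8.2.
sed -i 's/8\.2/8.3/g' /etc/yum.repos.d/xcp-ng.repo

# Refresh metadata so yum sees the 8.3 repositories, then pull the new base system.
yum clean metadata
yum update -y

# Reboot into the upgraded system; XAPI re-registers the host with the pool on boot.
reboot
```

    Evacuate VMs from the host first (or drive the whole sequence with XO's Rolling Pool Update) so the reboot does not take workloads down.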
  • 1 Vote
    3 Posts
    89 Views
    B
    @AtaxyaNetwork Thank you very much for this feedback. I will expand the article accordingly.
  • Loss of connection during an action BUG

    Xen Orchestra
    0 Votes
    18 Posts
    264 Views
    P
    @User-cxs maybe an IP conflict, or the VM was taking the master's IP... good luck with the rest!
  • HOST_NOT_ENOUGH_FREE_MEMORY

    Xen Orchestra
    0 Votes
    6 Posts
    207 Views
    U
    @Danp Okay, thank you
  • Backup Info under VM tab in v6 never loads...

    Backup
    0 Votes
    64 Posts
    614 Views
    A
    @MathieuRA said: @ph7 @acebmxer Hi, regarding your backups which do not appear on the XO5 restore page, I suggest you open a new topic so that this one remains focused on XO6 dashboards.

    Once I updated back to master, all restore points were visible again. The issue only appears with this test branch.
  • Timestamp lost in Continuous Replication

    Backup
    0 Votes
    27 Posts
    784 Views
    florent
    @joeymorin said: I observed similar behaviour. Two pools: pool A composed of two hosts, pool B single-host. B runs a VM with XO from source. Two VMs on host A1 (on local SR), one VM on host A2 (on local SR). Host A2 has a second local SR (separate physical disc) used as the target for a CR job. The CR job would back up all four VMs to that second local SR on host A2. The behaviour observed was that, although the VM on B would be backed up (as expected) as a single VM with multiple snapshots (up to the 'replication retention'), the three other VMs on the same pool as the target SR would see a new full VM created for each run of the CR job. That rather quickly filled up the target SR. I noticed the situation was corrected by a commit on or about the same date reported by @ph7. Incidentally, whatever broke this, and subsequently corrected it, appears to have corrected another issue I reported here. I never got a satisfactory answer regarding that question; questions were raised about the stability of my test environment, even though I could easily reproduce it with a completely fresh install. Thanks for the work!

    Sometimes it's hard to find a complete explanation without connecting to the hosts and XO and going through a lot of logs, which is out of the scope of community support. I am glad the continuous improvement of the code base fixed the issue. We will release a new patch today, because migrating from 6.2.2 to 6.3 triggers a full replication (source users who updated to the intermediate version are not affected).