• Distributed backups don't clean up the deltas in the BRs

    Backup
    7
    8
    0 Votes
    7 Posts
    31 Views
    P
    @ph7 Hi, we found the issue. It should be fixed by https://github.com/vatesfr/xen-orchestra/pull/9667. However, given the number of deltas, it may take a while to merge the whole chain back to 2 backups. (fbeauchamp opened this pull request in vatesfr/xen-orchestra: fix(backups): schedule reference is missing #9667)
  • Backup retention policy and key backup interval

    Backup
    5
    3
    0 Votes
    5 Posts
    44 Views
    P
    @Bastien-Nollet Why do I have weekly or monthly retention points that are not fulls? Could these retention points not be merged into fulls when they are tagged? Keeping a monthly incremental presumably also means keeping the previous incrementals and the base full.
  • 0 Votes
    10 Posts
    115 Views
    P
    @Pilow Oh sorry, I did not see that you have a proxy. You need to apply the fix in the proxy too. Edit: in fact, you just have to put the changes to IncrementalRemoteWriter.mjs in the proxy's node_modules; the rest is optional.
  • Restore only showing 1 VM

    Backup
    11
    1
    0 Votes
    11 Posts
    90 Views
    Bastien NolletB
    @ph7 Ok, thanks. But the most important thing will be to run this test while some VMs are missing from the backup restore page (if it happens again).
  • VDI not showing in XO 5 from Source.

    Unsolved Management
    39
    2
    0 Votes
    39 Posts
    3k Views
    P
    @wgomes yup thanks for bumping this topic, still having the problem too
  • iso modification and gpg key check

    Development
    2
    4
    0 Votes
    2 Posts
    17 Views
    V
    Ok, just read the code and found my answer. It was a typo: it is repo-gpgcheck, not repo_gpgcheck.
        repo_gpgcheck = (None if getStrAttribute(i, ['repo-gpgcheck'], default=None) is None
                         else getBoolAttribute(i, ['repo-gpgcheck']))
        gpgcheck = (None if getStrAttribute(i, ['gpgcheck'], default=None) is None
                    else getBoolAttribute(i, ['gpgcheck']))
  • XOA 6.1.3 Replication fails with "VTPM_MAX_AMOUNT_REACHED(1)"

    Backup
    1
    0 Votes
    1 Posts
    12 Views
    No one has replied
  • Boot device: Hard Disk - Success

    XCP-ng
    3
    0 Votes
    3 Posts
    259 Views
    O
    @DustinB If it hangs right after detecting the disk, it could be a bootloader or filesystem inconsistency from the snapshot state. Can you try booting the VM with a recovery ISO to check disk integrity or rebuild the bootloader?
  • VM Migration | PIF is not attached

    Compute
    2
    0 Votes
    2 Posts
    30 Views
    P
    Had the same on one pool that needed 97 updates. The RPU went fine, but we couldn't migrate VMs post-upgrade. It is a pool of 2 hosts, and the second host still showed 97 patches pending even though the RPU had finished. We waited 10 minutes, the 97 patches disappeared, and then we could migrate the VM.
  • Comparison with Ceph

    XOSTOR
    2
    0 Votes
    2 Posts
    52 Views
    olivierlambertO
    Different use cases: Ceph works better with more hosts (at least 6 or 7 minimum), while XOSTOR is a better fit between 3 and 7/8 hosts. We might have better Ceph support in the future for large clusters.
  • Axios NPM attack

    Xen Orchestra
    4
    0 Votes
    4 Posts
    77 Views
    olivierlambertO
    I think I added it after the video ^^ (between the recording and the official release)
  • OIDC login - Internal Server Error

    Advanced features
    12
    0 Votes
    12 Posts
    334 Views
    P
    @dlgroep Thank you, we will take thin/thick token into account and use 'claims' part for other properties.
  • XCP-NG upgrade 8.2 to 8.3

    XCP-ng
    3
    0 Votes
    3 Posts
    72 Views
    R
    To add a bit more detail on the upgrade path: strictly speaking, you do not need to apply outstanding 8.2 patches before upgrading. When you upgrade to 8.3, you are replacing the entire base system with the 8.3 release, which already incorporates everything from the 8.2 patch stream. Any 8.2 patches you hadn't yet applied will simply be superseded. That said, applying them first is still a reasonable approach if you want a clean upgrade history and a fully patched 8.2 baseline before jumping to 8.3.

    A few things worth checking before you start on a production pool:
    - Check VM compatibility. Run a quick review of your VMs for any that might have specific OS or toolstack dependencies tied to 8.2. Most guests upgrade cleanly, but it is worth knowing your environment.
    - Use rolling pool upgrade if you have more than one host. XCP-ng supports rolling upgrades: you migrate VMs off each host, upgrade it, rejoin the pool, then proceed to the next. This maintains VM availability throughout the process. The XO interface handles this workflow if you have XOA.
    - Back up before the jump. Export critical VM configurations or snapshots beforehand. If you use Xen Orchestra for backups, trigger a manual full backup job before starting.

    The upgrade itself via yum is straightforward: add the 8.3 repo, yum update, reboot. The toolstack and XAPI will handle pool registration after the host comes back up. After upgrading all hosts, run the post-upgrade checks from the docs (pool metadata sync, storage rescans) and verify HA is healthy if you use it.
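    The yum path described above can be sketched roughly as follows. This is a hedged sketch, not a definitive procedure: the repo file name and the sed substitution are assumptions based on the XCP-ng documentation, so verify them against the official 8.3 upgrade guide before running anything on a production host.

    ```shell
    # Sketch of the in-place yum upgrade (assumes the standard repo file
    # location /etc/yum.repos.d/xcp-ng.repo; check the official docs first).

    # 1. Point the repository definitions at the 8.3 release
    sed -i 's/8\.2/8.3/g' /etc/yum.repos.d/xcp-ng.repo

    # 2. Refresh metadata and pull in the new base system
    yum clean metadata
    yum update

    # 3. Reboot so the new kernel, Xen hypervisor and toolstack take over
    reboot
    ```

    Run this on one host at a time during a rolling pool upgrade, starting with the pool master, and let each host rejoin the pool cleanly before moving on to the next.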
  • 1 Votes
    3 Posts
    87 Views
    B
    @AtaxyaNetwork Thank you very much for this feedback. I will expand the article accordingly.
  • Loss of connection during an action BUG

    Xen Orchestra
    18
    3
    0 Votes
    18 Posts
    255 Views
    P
    @User-cxs Maybe an IP conflict, or the VM was taking the master's IP... good luck with the rest!
  • HOST_NOT_ENOUGH_FREE_MEMORY

    Xen Orchestra
    6
    5
    0 Votes
    6 Posts
    201 Views
    U
    @Danp Okay, thank you
  • Backup Info under VM tab in v6 never loads...

    Backup
    64
    2
    0 Votes
    64 Posts
    536 Views
    A
    @MathieuRA said: @ph7 @acebmxer Hi, regarding your backups which do not appear on the XO5 restore page, I suggest you open a new topic so that this one remains focused on XO6 dashboards. Once I updated back to master, the restore points are visible again. The issue only appears with this test branch.
  • Timestamp lost in Continuous Replication

    Backup
    27
    2
    0 Votes
    27 Posts
    777 Views
    florentF
    @joeymorin said: I observed similar behaviour. Two pools: pool A composed of two hosts, pool B single-host. B runs a VM with XO from the sources. Two VMs on host A1 (on local SR), one VM on host A2 (on local SR). Host A2 has a second local SR (separate physical disc) used as the target for a CR job. The CR job would back up all four VMs to that second local SR on host A2. The behaviour observed was that, although the VM on B would be backed up (as expected) as a single VM with multiple snapshots (up to the 'replication retention'), the three other VMs on the same pool as the target SR would see a new full VM created for each run of the CR job. That rather quickly filled up the target SR. I noticed the situation was corrected by a commit on or about the same date reported by @ph7. Incidentally, whatever broke this, and subsequently corrected it, appears to have corrected another issue I reported here. I never got a satisfactory answer regarding that question; questions were raised about the stability of my test environment, even though I could easily reproduce it with a completely fresh install. Thanks for the work! (edit: corrected B1 to A2) Sometimes it's hard to find a complete explanation without connecting to the hosts and XO and going through a lot of logs, which is out of the scope of community support. I am glad the continuous improvement of the code base fixed the issue. We will release a new patch today, because migrating from 6.2.2 to 6.3 leads to a full replication (users of the sources who updated to the intermediate version are not affected).
  • Every VM in a CR backup job creates an "Unhealthy VDI"

    Backup
    20
    1
    0 Votes
    20 Posts
    1k Views
    J
    This issue appears to have been resolved by a recent change.
  • fell back to full and cannot delete snapshot

    Backup
    1
    1
    0 Votes
    1 Posts
    28 Views
    No one has replied