• Timestamp lost in Continuous Replication

    Backup
    0 Votes
    27 Posts
    786 Views
    florent
    @joeymorin said: I observed similar behaviour. Two pools: pool A composed of two hosts, pool B single-host. B runs a VM with XO from source. Two VMs sit on host A1 (on a local SR) and one VM on host A2 (on a local SR). Host A2 has a second local SR (a separate physical disc) used as the target for a CR job that would back up all four VMs. The behaviour observed was that, although the VM on B would be backed up (as expected) as a single VM with multiple snapshots (up to the 'replication retention'), the three other VMs on the same pool as the target SR would see a new full VM created for each run of the CR job. That rather quickly filled up the target SR. I noticed the situation was corrected by a commit on or about the same date reported by @ph7. Incidentally, whatever broke this, and subsequently corrected it, appears to have corrected another issue I reported here. I never got a satisfactory answer regarding that question; questions were raised about the stability of my test environment, even though I could easily reproduce it with a completely fresh install. Thanks for the work! (edit: corrected B1 to A2)

    Sometimes it's hard to find a complete explanation without connecting to the hosts and XO and going through a lot of logs, which is out of the scope of community support. I am glad the continuous improvement of the code base fixed the issue. We will release a new patch today, because migrating from 6.2.2 to 6.3 forces a full replication (users who updated to the intermediate version are not affected).
  • Every VM in a CR backup job creates an "Unhealthy VDI"

    Backup
    0 Votes
    20 Posts
    1k Views
    J
    This issue appears to have been resolved by a recent change.
  • fell back to full and cannot delete snapshot

    Backup
    0 Votes
    1 Post
    29 Views
    No one has replied
  • Unable to copy template

    Management
    0 Votes
    4 Posts
    170 Views
    Tristis Oris
    While I was updating all the pools, a micro-update for 5 packages was released again. They come out every week now. Same issue with basic migration; now it's impossible.
  • Backing up from Replica triggers full backup

    Backup
    1 Vote
    14 Posts
    299 Views
    florent
    @Pilow, open a dedicated topic on this; I will ping the relevant team there. Is it iSCSI + CBT?
  • How to Setup IPMI in XO

    Management
    0 Votes
    30 Posts
    2k Views
    A
    Come on everyone! Click Here to vote to support HP IPMI info in XO!
  • backup mail report says INTERRUPTED but it's not ?

    Backup
    0 Votes
    119 Posts
    8k Views
    florent
    Yes, the last changes will be released in latest (6.3) tomorrow, if everything proceeds as intended. Mostly:

    feat(xo-server): use index for getAllUnhealthyVdiChainsLength: https://github.com/vatesfr/xen-orchestra/pull/9622
    fix(backups): better handling of generator cleanup: https://github.com/vatesfr/xen-orchestra/pull/9557

    And this one was in 6.2, fix(xo-web): reducing polling for patches and license: https://github.com/vatesfr/xen-orchestra/commit/e36e1012e20c9678efa15148179941cb284c39a6

    That's nice to hear.
  • 🛰️ XO 6: dedicated thread for all your feedback!

    Pinned Xen Orchestra
    7 Votes
    169 Posts
    18k Views
    S
    @AlexanderK XO 6 is rolling out features regularly; for many functions, you must continue to use XO 5. Backup is one of those at this time.
  • ACL V2 is coming soon and we need your feedbacks!

    Xen Orchestra
    4 Votes
    1 Post
    99 Views
    No one has replied
  • 0 Votes
    6 Posts
    208 Views
    kruess
    Good morning... The solution was pretty simple: a toolstack restart on the master (xcp83) got everything back on track, and it now allows me to move the systems with a simple shutdown/start.
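    For reference, a minimal sketch of that fix, assuming an XCP-ng 8.x host and a root shell on the pool master:

        # Restart the XAPI toolstack on the master (running VMs are not affected)
        xe-toolstack-restart

        # Once XAPI is back, check that no tasks are stuck
        xe task-list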
  • Cannot shutdown VM or migrate

    XCP-ng
    0 Votes
    2 Posts
    45 Views
    C
    Not sure what the issue was, but I rebooted the host and everything is fine again. I had definitely rebooted the host twice after applying the patches, though, and had physically unplugged and moved the host.
  • 0 Votes
    5 Posts
    172 Views
    K
    @olivierlambert & @mathieura thanks for the speedy response. Duly noted, very much appreciated.
  • Install XO from sources.

    Xen Orchestra
    3 Votes
    21 Posts
    1k Views
    G
    @AlexanderK The ronivay script requires you to select an option (#2 to update). I look at it this way: it's good to have more people working on scripts like this.
  • PRs for wording cleanup - worth doing?

    Development
    0 Votes
    6 Posts
    100 Views
    olivierlambertO
    There are no small contributions; every one of them is great & welcome!
  • Rolling Pool Update and Affinity Host

    Management
    0 Votes
    3 Posts
    103 Views
    olivierlambertO
    Yes indeed, that's expected. XAPI affinity applies at VM boot, not throughout the VM's life. After that, it's the load balancer's job.
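    As a sketch of where that setting lives, assuming the standard xe CLI (the UUIDs are placeholders):

        # Show a VM's affinity, i.e. the host it prefers at boot
        xe vm-param-get uuid=<vm-uuid> param-name=affinity

        # Pin the boot-time affinity to a specific host
        xe vm-param-set uuid=<vm-uuid> affinity=<host-uuid>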
  • VMware to XCP-ng Migrate Only Specific Disks

    Migrate to XCP-ng
    0 Votes
    6 Posts
    202 Views
    planedropP
    @MajorP93 This was also my thought, the disconnection method you mentioned; I was just wanting some feedback on it. I do have to be a bit careful, because the migration must happen and then the VM on VMware must go back online while I work with the vendor to migrate the rest of the data. As for the bigger-than-2TB issue, that won't be a problem for me. This VM is HUGE, but only because it's thick provisioned and no longer needs to be; the new VM will have around 8 disks, but none of them will be over 2TB (most will be less than 100GB). So I am thinking of using V2V: power off the VM on VMware, disconnect the unneeded VDIs, migrate, then reconnect those VDIs and power the VM back on.
  • 0 Votes
    13 Posts
    465 Views
    M
    I can confirm that when using the Citrix/XenServer guest utilities version 8.4 (https://github.com/xenserver/xe-guest-utilities/releases/tag/v8.4.0), memory ballooning / DMC is working fine. After live migration, the RAM of the Linux guest is expanded to dynamic_max again. So this issue was in fact caused by the Rust-based xen-guest-agent. For now I'll keep using the Citrix/XenServer guest utilities on my Linux guests until the feature is implemented in the Vates Rust-based guest utilities. Best regards
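    For anyone verifying the same behaviour, a minimal sketch with the xe CLI (the UUID and sizes are placeholders):

        # Inspect the guest's dynamic memory range that DMC balloons within
        xe vm-param-get uuid=<vm-uuid> param-name=memory-dynamic-min
        xe vm-param-get uuid=<vm-uuid> param-name=memory-dynamic-max

        # Adjust the dynamic range if needed
        xe vm-memory-dynamic-range-set uuid=<vm-uuid> min=2GiB max=4GiB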
  • Backup Suddenly Failing

    Backup
    0 Votes
    28 Posts
    518 Views
    tjkreidlT
    @JSylvia007 Sorry, I'm really late to this thread, but note that backups can become problematic if the SR is around 90% or more full; there needs to be some buffer storage as part of the process. The fact that you could copy/clone VMs means your SR is working OK, but backups are a different situation. If need be, you can always migrate VMs to other storage, which is evidently what you ended up doing, and which frees up extra disk space. Also, backups are pretty intensive, so make sure you have both enough CPU capacity and memory to handle the load. Finally, a defective SR will definitely cause issues if there are I/O errors, so watch your /var/log/SMlog for any such entries.
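    A quick way to check both points, as a sketch on an XCP-ng host:

        # Compare physical utilisation against total size for each SR
        xe sr-list params=name-label,physical-utilisation,physical-size

        # Watch the storage manager log for I/O errors during a backup
        grep -i error /var/log/SMlog | tail -n 20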
• Intel Flex GPU with SR-IOV for GPU accelerated VDIs

    Hardware
    0 Votes
    52 Posts
    13k Views
    S
    @TeddyAstie I hadn't seen that anything special was needed in a VM other than a kernel new enough for support, which 6.17 has. Is there more to the story with adding things to Ubuntu?
  • XCP-ng 8.3 updates announcements and testing

    Pinned News
    1 Vote
    431 Posts
    171k Views
    A
    @gduperrey Installed on the home lab via rolling pool update; both hosts updated with no issues, and VMs migrated back to the 2nd host as expected this time. Fingers crossed the work servers have the same luck. I do have an open support ticket from the last round of updates for the work servers; waiting for a response before installing patches.