Posts
-
RE: XOA 6.1.3 Replication fails with "VTPM_MAX_AMOUNT_REACHED(1)"
Thanks to the prep work of the team, the patch can be released today. Expect a 6.3.2 in a few hours.
-
RE: XOA 6.1.3 Replication fails with "VTPM_MAX_AMOUNT_REACHED(1)"
@flakpyro yep, the code updating a VM is broken.
The fix is here.
We probably won't release it today, so I advise users on the sources to use this branch, and XOA users to fall back to stable until Thursday (Monday is a non-working day in France).
-
RE: Mirror backup broken since XO 6.3.0 release, "Error: Cannot read properties of undefined (reading 'id')"
(file:///usr/local/lib/node_modules/@x
The fix is here: https://github.com/vatesfr/xen-orchestra/pull/9667
-
RE: Timestamp lost in Continuous Replication
I observed similar behaviour.
Two pools. Pool A is composed of two hosts; pool B is single-host. B runs a VM with XO from the sources. Two VMs are on host A1 (on a local SR), one VM on host A2 (on a local SR). Host A2 has a second local SR (separate physical disc) used as the target for a CR job.
CR job would back up all four VMs to the second local SR on host A2.
The behaviour observed was that, although the VM on B would be backed up (as expected) as a single VM with multiple snapshots (up to the 'replication retention'), the three other VMs on the same pool as the target SR would see a new full VM created for each run of the CR job. That rather quickly filled up the target SR.
I noticed the situation was corrected by a commit on or about the same date reported by @ph7.
Incidentally, whatever broke this, and subsequently corrected it, appears to have corrected another issue I reported here. I never got a satisfactory answer regarding that question. Questions were raised about the stability of my test environment, even though I could easily reproduce it with a completely fresh install.
Thanks for the work!
edit: Correction: B1 → A2
-
RE: Timestamp lost in Continuous Replication
Sometimes it's hard to find a complete explanation without connecting to the hosts and XO and going through a lot of logs, which is out of the scope of community support.
I am glad the continuous improvement of the code base fixed the issue. We will release a new patch today, because migrating from 6.2.2 to 6.3 forces a full replication (source users that updated to the intermediate version are not affected).
-
RE: Backing up from Replica triggers full backup
@Pilow open a dedicated topic on this. I will ping the relevant team there.
Is it iSCSI + CBT?
-
RE: Backing up from Replica triggers full backup
@Andrew the NBD warning is probably doable.
The other solution is not: NBD is mandatory to use CBT (the fallback with export can't work without a real snapshot).
-
RE: Backing up from Replica triggers full backup
@flakpyro yes this one : https://github.com/vatesfr/xen-orchestra/pull/9635
Fixed this morning in our work branch and merged this afternoon. We also made some edge cases explicit here: https://github.com/vatesfr/xen-orchestra/blob/master/%40xen-orchestra/backups/docs/VM backups/incrementalReplication.md (that will probably be ported to the main doc shortly).
Thank you for your input.
-
RE: backup mail report says INTERRUPTED but it's not ?
Yes, the last changes will be released in latest (6.3) tomorrow if everything proceeds as intended. Mostly:
https://github.com/vatesfr/xen-orchestra/pull/9622
https://github.com/vatesfr/xen-orchestra/pull/9557
and this one was in 6.2:
https://github.com/vatesfr/xen-orchestra/commit/e36e1012e20c9678efa15148179941cb284c39a6
That's nice to hear.
-
RE: Backing up from Replica triggers full backup
@flakpyro you probably had multiple jobs on the same schedule.
I reworked the code, pushed an update, and added a doc with the edge cases: https://github.com/vatesfr/xen-orchestra/pull/9635/changes#diff-ef545af2ad06f2759c1a3787f266108b3b2cbc203bc8071bbd847278f3e6a5f0 (it will be in the XO docs when merged).
-
RE: Backing up from Replica triggers full backup
@flakpyro could you test this branch?
https://github.com/vatesfr/xen-orchestra/pull/9635/commits
-
RE: Backup Suddenly Failing
@JSylvia007
Error: stream has ended with not enough data (actual: 397, expected: 2097152)
is the root cause. Can you post the JSON here?
Remove the snapshot on the source to start a new backup chain.
-
RE: Backing up from Replica triggers full backup
I am on it, thanks for all the information.
-
RE: Question about Continuous Replication/ Backups always doing Full Backups
@Kraken89 I will look into the transition from the previous system to the new one
maybe that's the key
-
RE: Question about Continuous Replication/ Backups always doing Full Backups
@Kraken89 is it on all VMs or only one?
We made quite a lot of changes to the replication, maybe we missed something. Are you replicating inside the same pool or toward another pool? (Are both servers in the same pool?)
-
RE: Backup Suddenly Failing
@JSylvia007 this means that one of the tasks coalescing one of the older backups failed.
We are working on making it more observable, because for now it is a very opaque process. Even if failures are rare, they can happen.
I would advise you to check the "merge synchronously" toggle in the backup job if your backup window allows it. At least you will get an error message earlier.
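To illustrate why that toggle surfaces errors earlier, here is a hypothetical sketch (not XO's actual code; `runBackup`, the flag name, and the error message are made up for the example): with a synchronous merge the coalesce failure fails the same job run, while a deferred merge lets the run report success and the failure only appears later.

```javascript
// Illustrative sketch, NOT the real XO implementation: shows how a
// synchronous merge makes a coalesce failure visible in the same run.
async function runBackup({ mergeSynchronously }) {
  const transfer = 'transfer ok'; // pretend the data transfer succeeded
  const merge = async () => { throw new Error('coalesce failed'); };

  if (mergeSynchronously) {
    // Merge runs inside the job: a coalesce failure fails this run.
    await merge();
    return transfer;
  }
  // Deferred merge: this run reports success, and the failure only
  // shows up later, out-of-band, when the pending merge is attempted.
  merge().catch(() => { /* logged elsewhere, job already "succeeded" */ });
  return transfer;
}

runBackup({ mergeSynchronously: false }).then(console.log); // "transfer ok"
runBackup({ mergeSynchronously: true }).catch(e => console.log(e.message)); // "coalesce failed"
```

The trade-off is backup-window time: the synchronous merge makes the job longer, which is why the advice above is conditional on the window allowing it.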
-
RE: Timestamp lost in Continuous Replication
@ph7 that is good news.
Thank you for your patience and help.
-
RE: Timestamp lost in Continuous Replication
@kratos no, it's not that rare. I have even seen, in the wild, replication to the same storage (wouldn't recommend it, though).
Cross-pool replication is a little harder since the objects are each split across their own Xen API, so the calls must be routed to the right one.
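A minimal sketch of that routing problem (names like `PoolRegistry` and the `exportVdi`/`importVdi` methods are illustrative, not the real XO API): each pool has its own XAPI connection, and every call must be dispatched to the connection that owns the object.

```javascript
// Hypothetical sketch of routing calls to the right XAPI connection
// in a cross-pool replication; not the actual xen-orchestra code.
class PoolRegistry {
  constructor() {
    this.connections = new Map(); // pool UUID -> fake XAPI client
  }
  register(poolUuid, client) {
    this.connections.set(poolUuid, client);
  }
  // Dispatch a call to the connection owning the object's pool
  call(poolUuid, method, ...args) {
    const client = this.connections.get(poolUuid);
    if (client === undefined) {
      throw new Error(`no XAPI connection for pool ${poolUuid}`);
    }
    return client[method](...args);
  }
}

// Minimal stand-in clients for two pools
const registry = new PoolRegistry();
registry.register('pool-A', { exportVdi: uuid => `export:${uuid}@pool-A` });
registry.register('pool-B', { importVdi: data => `imported:${data}@pool-B` });

// Cross-pool: export from pool A's connection, import via pool B's.
// The mono-XAPI case degenerates to both UUIDs naming the same pool.
const stream = registry.call('pool-A', 'exportVdi', 'vdi-123');
const result = registry.call('pool-B', 'importVdi', stream);
console.log(result); // "imported:export:vdi-123@pool-A@pool-B"
```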
We tested the harder part, not the mono-XAPI case.
-
RE: Timestamp lost in Continuous Replication
@kratos you probably heard the sound of my head hitting my desk when I found the cause.
The fix is in review; you will be able to use it in a few hours.
-
RE: Timestamp lost in Continuous Replication
"We had as many 'VMs with a timestamp in the name' as the number of replicas, and multiple snapshots on the source VM; now we have 'one replica VM with multiple snapshots'? Veeam-replica-style..."
We didn't look at Veeam, but it's reassuring to see that we converge toward the solutions used elsewhere.
It shouldn't change anything on the source.
I am currently doing more tests to see if we missed something.
edit: as an additional benefit, it should use less space on the target if you have a retention > 1, since we will only have one active disk.