• CBT: the thread to centralize your feedback

    Pinned
    1 Vote
    455 Posts
    633k Views
    olivierlambert
    Okay, I thought the autoscan was only for like 10 minutes or so, but hey I'm not deep down in the stack anymore
  • Feedback on immutability

    Pinned
    2 Votes
    56 Posts
    19k Views
    olivierlambert
    Sadly, Backblaze often has issues on S3 (timeouts, reliability, etc.). We are updating our doc to provide "tiering" guidance.
  • backup mail report says INTERRUPTED but it's not ?

    0 Votes
    37 Posts
    1k Views
    J
    @MajorP93 said in backup mail report says INTERRUPTED but it's not ?:

    @john.c Considering how widely Node.js is used out there, I highly doubt that memory management itself is broken in Node 22 and 24. If that were the case, it would have been covered by IT bloggers and most users would have switched to something else. Classifying memory management as unstable for the whole LTS branches 22 and 24 is something an LLM would do. I think it is more likely an XO + Node issue. @pilow already said that they are using XOA, which (AFAIK) still uses Node 20. Even on Node 20 there seems to be an ongoing memory leak according to them, which makes an "XO + Node" issue even more likely than Node 22/24 being borked in general. //EDIT: even if using Node 20 improved anything here, sticking with it might not be the best idea, as Node 20 becomes EOL in April 2026.

    @bastien-nollet @florent @olivierlambert It takes placing enough stress on certain areas to trigger RSS spikes in Node.js 22 and 24. It has happened, and is happening, to other developers who use Node.js.

    Just to clarify a few things from the earlier AI-generated reply: in this case we are not dealing with a kernel OOM kill. The log I attached in my first post clearly shows a Node-level heap out-of-memory error, so statements like "no crash logs = kernel OOM" don't apply here.

    That said, it is still worth looking into Node 22/24 memory behavior, but not because those LTS branches are "broken". If Node's memory management were fundamentally unstable, the entire ecosystem would be in chaos. What seems more likely instead is: XO's backup workload + Node 22/24 = a known memory-management edge case. This is supported by the fact that even XOA (which uses Node 20) is showing signs of a slow leak, according to @pilow. That strongly suggests the issue is not "Node 22/24 bad", but rather an "XO + Node" interaction that becomes more visible under newer V8 versions.

    To support that, here are direct links to other developers and projects experiencing similar issues with Node 22+ memory behavior:

    1. Cribl's deep dive into Node 22 memory regressions. They observed significantly higher RSS and memory anomalies when upgrading from Node 20 to 22, and ended up contributing fixes upstream: "Understanding Node.js 22 memory behavior and our upstream contribution", https://cribl.io/blog/understanding-node-js-22-memory-behavior-and-our-upstream-contribution/ This is one of the clearest real-world examples of a production workload exposing V8 memory issues that didn't appear in Node 20.

    2. Node.js upstream issue: RetainedMaps memory leak in Node 22. This is a confirmed V8-level leak that affected Node 22 until it was fixed upstream: GitHub issue #57412, "Memory leak due to increasing RetainedMaps size in V8 (Fixed upstream)", https://github.com/nodejs/node/issues/57412 It shows that Node 22+ did have real memory regressions, even if they don't affect all workloads.

    3. Broader discussions about increased RSS in modern Node/V8. There are multiple reports of higher RSS and "apparent leaks" in Node 22+ under heavy async I/O, streaming, or buffer-intensive workloads, which is exactly what XO's backup pipeline does. Examples include Matteo Collina's posts on V8 memory behavior and GC tuning, various debugging guides for Node 22 memory regressions, and reports from teams running high-throughput streaming workloads. These aren't XO-specific, but they show the pattern is real.
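    To make the heap-vs-RSS distinction above easy to check, here is a minimal monitoring sketch (not XO code; the 60-second interval and running it inside the affected process are assumptions):

    ```ts
    // Log Node memory stats periodically. If heapUsed/heapTotal climb toward
    // the old-space limit, you are heading for the Node-level heap OOM seen
    // in the attached log; if only rss/external/arrayBuffers grow, the
    // pressure is outside the V8 heap (native buffers, fragmentation), the
    // pattern described in the Cribl post.
    import { memoryUsage } from "node:process";

    const mib = (n: number): string => (n / 1024 / 1024).toFixed(1);

    setInterval(() => {
      const m = memoryUsage();
      console.log(
        `rss=${mib(m.rss)}MiB heapUsed=${mib(m.heapUsed)}MiB ` +
          `heapTotal=${mib(m.heapTotal)}MiB external=${mib(m.external)}MiB ` +
          `arrayBuffers=${mib(m.arrayBuffers)}MiB`
      );
    }, 60_000).unref(); // unref: don't keep the process alive just for the timer
    ```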
    Why does this matter for XO? XO's backup pipeline is unusually heavy for a Node application (a sketch of this kind of pipeline follows this post):

    - large streaming buffers
    - compression
    - encryption
    - S3 multipart uploads
    - high concurrency
    - long-lived async chains

    This is exactly the kind of workload that tends to surface V8 memory issues that don't appear in typical web servers or CLIs. And since Node 20 goes EOL in April 2026, XO will eventually need to run reliably on Node 22/24 or an alternative runtime.

    So the more accurate framing is:

    - This is not a kernel OOM; it is a Node heap OOM, confirmed by the logs.
    - Node 22/24 are not globally unstable, but they do have documented memory regressions and behavior changes.
    - XO's backup workload is heavy enough to expose those issues.
    - Even Node 20 shows a slow leak in XOA, which strongly suggests an XO + Node interaction, not a Node-only problem.
    - Investigating Node 22/24 memory behavior is still worthwhile, because XO recommends using the latest LTS.
    - Long term, XO may need fixes, profiling, or architectural adjustments to run reliably on future Node versions.
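    As promised above, a sketch of the pipeline shape being described (placeholder file names, and gzip standing in for XO's real compression/encryption/upload stages; this is not XO's actual code):

    ```ts
    // stream.pipeline wires the stages so backpressure propagates end to end:
    // a slow sink pauses the source instead of letting chunks pile up in RAM.
    // Workloads like this keep many chunks in flight across long-lived async
    // chains, which is exactly where V8/native memory regressions surface.
    import { createReadStream, createWriteStream } from "node:fs";
    import { createGzip } from "node:zlib";
    import { pipeline } from "node:stream/promises";

    async function main(): Promise<void> {
      await pipeline(
        createReadStream("vm-disk.vhd"),    // stand-in for the VDI export stream
        createGzip(),                       // stand-in for compression/encryption
        createWriteStream("vm-disk.vhd.gz") // stand-in for the S3 multipart sink
      );
    }

    main().catch(console.error);
    ```

    The point of the sketch: when any stage buffers without respecting backpressure (manual data handlers, unbounded queues, high concurrency multiplied across VMs), RSS grows even though nothing "leaks" in the classic sense.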
  • VM export failing with Unix.Unix_error(Unix.EIO, "read"..)

    0 Votes
    2 Posts
    18 Views
    Danp
    Hi, were there any other errors or warnings just prior to the Feb 3 00:22:24 entries? Have you tried using vhd-cli check to verify the VDI's integrity? Dan
  • Execute pre-freeze and post-thaw

    0 Votes
    22 Posts
    743 Views
    J
    What is the purpose of this discussion? Could this feature be integrated into XCP-ng? We are looking for a new virtualization solution; Oracle is a bottleneck for backups. We tested the backup with the Veeam agent, which appears to be successful. However, replication via Veeam is not possible for this Oracle setup.
  • XOA 6.0.3 Backup Job Failure and VDI Export Hang

    0 Votes
    3 Posts
    93 Views
    planedrop
    @florent As it turns out, the job completed the next time it ran and this task went away; not sure why that first one failed, but it seems to be OK for now. I will update this if I see this error again, and be sure to include as many logs as I can. Thanks as always!
  • S3 Chunk Size

    0 Votes
    14 Posts
    393 Views
    olivierlambert
    Keep us posted, happy to hear about it!
  • Replication is leaving VDIs attached to Control Domain, again

    0 Votes
    7 Posts
    149 Views
    A
    @florent Yes.
  • bug about provoked BACKUP FELL BACK TO A FULL due to DR job

    0 Votes
    9 Posts
    395 Views
    G
    @LoTus111 Hello, I attempted to reproduce the issue without success. Could you please share the logs with us so we can conduct a deeper investigation? Feel free to send them to me via PM.
  • Restore job went off the rails. How to fix it?

    Solved
    0 Votes
    6 Posts
    185 Views
    B
    @florent increasing the timeout value was the solution. Thank you.
  • Restoring folder via backup file restore feature broken for .tar.gz

    1 Vote
    3 Posts
    107 Views
    M
    @florent I think you meant it was locking the XOA via .tar.gz but not via .zip, which seemed to be the case. Yeah, I just decompressed the restored .zip and checked the number of files and their size via ncdu; result: "Total disk usage: 194,5 MiB Apparent size: 193,1 MiB Items: 613". In case I can provide more information for investigating this issue, please let me know. Thanks and best regards
  • 0 Votes
    8 Posts
    360 Views
    anivard
    I recreated my ZFS pool. But next time, or for others stuck with this kind of issue, I think you should start by renaming your ZFS pool: I think the old zpool name stays in XO's configuration shadow settings. To rename a ZFS pool, export it with zpool export [poolname], then import it under the new name with zpool import [poolname] [newpoolname]. Then recreate (sr-create) your storage in XO.
  • Potential bug with Windows VM backup: "Body Timeout Error"

    2 Votes
    49 Posts
    5k Views
    psafont
    @olivierlambert Because Andriy did quite a bit of work before this merge, XCP-ng depends on XenServer releasing the package to the clients before we can integrate it into XCP-ng. I'd say we can release the package in February at the earliest.
  • http time out error on backup

    0 Votes
    4 Posts
    226 Views
    A
    @myles3 Hello Myles3. This post is over 20 days old. The issue has since been resolved by support. I don't remember the exact solution off the top of my head without looking it up.
  • 1 Vote
    14 Posts
    782 Views
    Bastien Nollet
    @cbaguzman For information, I made some changes to vhd-cli so that in the future we'll get a more explanatory error message when a command fails because an incorrect argument was passed: https://github.com/vatesfr/xen-orchestra/pull/9386
  • Backup and the replication - Functioning/Scale

    0 Votes
    14 Posts
    437 Views
    F
    @florent "the export is done by one of the host of the pool" How is this host selected ? Is it the master one, the one hosting the VM, or the less busy one in case of shared storage ?
  • Backup as .ova to remote NFS

    0 Votes
    7 Posts
    160 Views
    S
    @olivierlambert The client is a Proxmox shop and, whilst tolerating services running in the XCP-ng / XO world, they require that if everything fails they can pick up the pieces in-house. Proxmox has an easy import for .ova but a very manual process for .xva, hence....
  • "NOT_SUPPORTED_DURING_UPGRADE()" error after yesterday's update

    0 Votes
    23 Posts
    892 Views
    olivierlambert
    We could probably make the doc even more precise. Adding @thomas-dkmt in the loop for that.