• CBT: the thread to centralize your feedback

    Pinned
    1 Vote
    455 Posts
    648k Views
    olivierlambert
    Okay, I thought the autoscan was only for like 10 minutes or so, but hey I'm not deep down in the stack anymore
  • Feedback on immutability

    Pinned
    2 Votes
    56 Posts
    20k Views
    olivierlambert
    Sadly, Backblaze often has issues on S3 (timeouts, reliability, etc.). We are updating our doc to provide "tiering" support.
  • S3 Chunk Size

    0 Votes
    15 Posts
    586 Views
    B
    @olivierlambert Hi all. We've deployed one client on the latest version of XCP and XOA, and we need to back up to an S3 remote (running on Ceph and working with all of our other backup solutions: Veeam, Cinder...). The backup fails with a 502 error on the clean-vm step: { "id": "1772809915090", "message": "clean-vm", "start": 1772809915090, "status": "failure", "end": 1772811181179, "result": { "name": "502", "$fault": "client", "$metadata": { "httpStatusCode": 502, "attempts": 3, "totalRetryDelay": 70 }, This is my only destination (my customer needs about 50 TB). How can I fix this problem? (A ticket is open with Vates.)
  • VHD Check Error

    0 Votes
    1 Post
    8 Views
    No one has replied
  • Backup: ERR_OUT_OF_RANGE in RemoteVhdDisk.mergeBlock

    0 Votes
    13 Posts
    158 Views
    florent
    @wralb It is in master; we are preparing a patch release for XOA on Monday morning with this fix.
  • 0 Votes
    4 Posts
    46 Views
    W
    @simonp I'm not sure which one, as I can see two config.toml files. The first is under "/root/.config/xo-server/" (config.toml.txt); the second is under "/opt/xo/xo-server/" (config.toml2.txt). Both config.toml files attached. Thank you. Best regards, Azren
  • backup mail report says INTERRUPTED but it's not?

    0 Votes
    110 Posts
    6k Views
    F
    @florent I can try running this command next time memory usage is high and will report my findings!
  • Backup and the replication - Functioning/Scale

    0 Votes
    20 Posts
    781 Views
    florent
    Thanks Andryi. We use round robin when using NBD, but to be fair, it does not change the performance much in most cases. The concurrency setting (multiple connections to the same file) helps when there is high latency between XO and the host. So, @fcgo, if you have thousands of VMs, you should enable NBD: it will consume fewer resources on the dom0 and XO, and the load will be spread across all the available hosts.
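    On XCP-ng, NBD is enabled per network by tagging it with the `nbd` purpose. A minimal sketch of that, assuming the standard `xe` CLI; the network UUID is a placeholder, and the script dry-runs (prints commands) unless you set `XE=xe` on a real host:

```shell
#!/bin/sh
# Sketch: enable NBD on an XCP-ng network, per the advice above.
# Dry-run by default: commands are printed, not executed. Set XE=xe on a host.
XE="${XE:-echo xe}"
NET_UUID="${NET_UUID:-<network-uuid>}"   # placeholder; find yours via `xe network-list`

# Tag the network so hosts expose NBD on it
$XE network-param-add param-name=purpose param-key=nbd uuid="$NET_UUID"

# Verify the purpose field afterwards
$XE network-param-get param-name=purpose uuid="$NET_UUID"
```

    Once a network carries the `nbd` purpose, the "Use NBD" option in the XO backup job settings can take effect.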
  • Mirror backup: Progress status and ETA

    1 Vote
    4 Posts
    237 Views
    A
    @Forza Too funny. I came across this post and clicked on the URL you referenced... and that earlier question was from me! Well, nothing has changed. I'm doing mirrored backups and I'm still blind as a bat.
  • Backups routinely get stuck on the health check

    Unsolved
    0 Votes
    10 Posts
    733 Views
    D
    Hello @Austin.Payne, I wanted to share my experience. I had similar issues through multiple XO versions. However, after learning that health checks rely on the management port, I did some more digging. TL;DR: it was a network configuration problem, not an XO or XCP-ng problem. If you have a multi-NIC setup and run XO as a VM on your XCP-ng host, I would recommend that whatever network you use for management sit on the same NIC.

    Setup: XO is a VM on the XCP-ng host (only one host in the pool). Network setup: eth0 = 1GB NIC = management interface for the XCP-ng host (192.168.0.0/24 network); eth1 = 10GB DAC = NIC for the 192.168.0.0/24 network passed through to VMs (XO uses this NIC); eth1.200 = VLAN 200 on the eth1 NIC for the storage network (10.10.200.0/28), used by both the XCP-ng host and VMs (including the XO VM). IP setup: XCP-ng host = 192.168.0.201 on eth0, 10.10.200.1 on eth1.200; XO VM = 192.168.0.202 on eth1, 10.10.200.2 on eth1.200; remote NAS for backups = a different machine on the 10.10.200.0/28 network.

    In this setup, backups would always finish, but health checks would hang indefinitely. However, after changing the XCP-ng host to use eth1 for the management interface instead of eth0, health checks started passing flawlessly. I am not sure if the problem was the XCP-ng host connecting to the same network with two different NICs, or if eth1 was simply the better NIC and thus more reliable during the health check (which could also explain why backups always succeeded). It's also possible it was switch related: eth0 was connected to a Cisco switch, eth1/eth1.200 to a MikroTik switch. Again, I'm not sure what actually solved it, but consolidating everything onto a single NIC fixed the issue for me (along with physically unplugging eth0 after the eth1 consolidation). Hopefully sharing my experience helps solve this issue for you.
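    Since the fix above hinged on which NIC carries the management interface, a quick check on the host shows which PIF is the management one. A sketch assuming the standard `xe` CLI; dry-run by default (prints the command), set `XE=xe` on a real host:

```shell
#!/bin/sh
# List the management PIF(s) on an XCP-ng host: device, IP and network.
# Dry-run by default: the command is printed, not executed.
XE="${XE:-echo xe}"

$XE pif-list management=true params=device,IP,network-name-label
```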
  • XOA 6.2 parent VHD is missing followed by full Backup

    0 Votes
    7 Posts
    175 Views
    F
    @florent Awesome, I have patched a second location's XOA and am no longer experiencing the issue there either.
  • VM export failing with Unix.Unix_error(Unix.EIO, "read"..)

    0 Votes
    7 Posts
    236 Views
    V
    Hello, I have hardware issues, especially with the disks. Unfortunately I don't have full access to the machine to confirm whether it is really the disks (more likely) or the backplane. Interestingly, some of those VMs do work: I can boot and use them, but I cannot copy the disks. Since these are Windows Server VMs, I also tried the Windows backup tool, and it failed to finish the backup as well. So I'm closing this investigation as a hardware issue.
  • Delta Backup not deleting old snapshots

    0 Votes
    6 Posts
    146 Views
    P
    @Andrew This was the exact problem I was facing. The affected VMs still had the guest-tools.iso mounted in the virtual drive. Once I ejected the ISO, I was able to clear the errors and the backup jobs resumed normal operation. For anyone else hitting this "stuck delta" or VDI_IN_USE state, here is the exact workflow I followed to resolve it: Shut down the VM (Essential to release all storage locks). Eject the ISO/CD from the VM console or CLI. Delete all snapshots associated with the VM (including any old or "ghost" backup snapshots). Delete the VM's backup folder from the remote SR/Backup target to ensure a clean metadata start. Monitor the Coalesce process (via SMlog) and wait for it to finish flattening the VHD chain. Power the VM back on. Manually trigger a Backup Job. Following these steps, the first run was (as expected) a Full backup, but every subsequent run has been a successful Delta with no "hanging" snapshots left behind.
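    The workflow above can be sketched as `xe` CLI calls. This is my reading of the standard xe CLI, not the poster's exact commands; verify each one with `xe help` first, and note the UUIDs are placeholders. The script dry-runs (prints commands) unless you set `XE=xe` on a real host:

```shell
#!/bin/sh
# Sketch of the stuck-delta / VDI_IN_USE recovery steps above as xe CLI calls.
# Dry-run by default: commands are printed, not executed. Set XE=xe on a host.
XE="${XE:-echo xe}"
VM_UUID="${VM_UUID:-<vm-uuid>}"          # placeholder

$XE vm-shutdown uuid="$VM_UUID"          # 1. shut down to release storage locks
$XE vm-cd-eject uuid="$VM_UUID"          # 2. eject the guest-tools ISO
$XE snapshot-list snapshot-of="$VM_UUID" # 3a. find the VM's snapshots...
$XE snapshot-uninstall uuid="<snapshot-uuid>" force=true   # 3b. ...delete each one
# 4. delete the VM's backup folder on the remote (done from XO / on the NAS)
# 5. watch the coalesce on the host, e.g.:
#      tail -f /var/log/SMlog | grep -i coalesce
$XE vm-start uuid="$VM_UUID"             # 6. power the VM back on
# 7. re-run the backup job from Xen Orchestra; expect a full, then clean deltas
```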
  • S3 Timeout

    0 Votes
    4 Posts
    137 Views
    F
    @florent So if my Mikrotik router sits between XO and the HITRON, and the HITRON is power-cycled, then since the Mikrotik is the gateway as far as XO is concerned and that gateway is not down, will the 10-minute timeout apply, or might it time out faster than that?
  • Parent VHD missing - VHD linked to backup are missing

    0 Votes
    1 Post
    48 Views
    No one has replied
  • Potential bug with Windows VM backup: "Body Timeout Error"

    2 Votes
    52 Posts
    6k Views
    R
    I had the same problem for about the last 2 weeks, but it was about a Xen update; after applying the latest update, backup of Windows VMs works again.
  • File restore error on LVMs

    0 Votes
    26 Posts
    9k Views
    Tristis Oris
    Any news about this issue?
  • Detached VM Snapshots after Warm Migration

    0 Votes
    26 Posts
    838 Views
    DustyArmstrong
    @florent No problem, just thought it would be fun. Thanks for your work anyway!
  • bug about provoked BACKUP FELL BACK TO A FULL due to DR job

    0 Votes
    13 Posts
    599 Views
    A
    @florent Do the logs I provided help at all? Or any idea why I am seeing this issue as well? I do not have a DR job, just a delta backup and a mirror backup.
  • Master, commit a3139 failing backups

    Solved
    0 Votes
    20 Posts
    560 Views
    simonp
    Thanks for the heads-up and thanks for the help!