XCP-ng
    andrewreid

    Posts

    • RE: New and exciting backup errors

      @florent Thank you for your reply!

      No, bog-standard configuration with no object locking changes. This bucket has been receiving backups for months without fault, and no configuration has changed.

      The other remote is an NFS share and that's working perfectly well.

      Is your hypothesis that the S3 backup has become corrupt? If so, would the solution be simply to abandon these backups and create new ones?

      — Andrew

      posted in Xen Orchestra
    • RE: New and exciting backup errors

      @olivierlambert Ta – you're right, I should have checked that before posting, but the issue persists on 667d0. Slightly different words but the same smell:

      https://gist.github.com/andrewreid/cf4f7299b2ae7e52c61e31471675740f

      Is this the best spot to discuss this, or is a GitHub issue the better forum?

      posted in Xen Orchestra
    • New and exciting backup errors

      Running c0d58 from source.

      I can't work out what suddenly happened, because I haven't changed anything that I can think of.

      All of a sudden, some delta backups are failing. It seems to relate to timestamp formats and the VHD cleaning process, with two of my seven VMs failing to back up to the S3 remote (Backblaze). In the GUI, the errors look like:

      1. Expected values to be strictly equal:
         + actual - expected
         + 4294960416
         - 4294959235
           ^
      2. Clean VM directory, missing or broken alias target, some metadata VHDs are missing
      3. Invalid RFC-7231 date-time value

      Here's the error log if it helps: https://gist.github.com/andrewreid/4a8e7ac8da8d7f381884d4732a03d94f
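
      If it helps with error 3: RFC 7231 requires HTTP dates in IMF-fixdate form, e.g. "Tue, 15 Nov 1994 08:12:31 GMT". Here's a minimal sketch of one way to check a value – my own illustration, not XO's actual code:

      // Minimal sketch, not XO's code: validate an RFC 7231 IMF-fixdate.
      const IMF_FIXDATE =
        /^(Mon|Tue|Wed|Thu|Fri|Sat|Sun), \d{2} (Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) \d{4} \d{2}:\d{2}:\d{2} GMT$/;

      function isValidHttpDate(value: string): boolean {
        // Both the shape and the underlying calendar value must parse.
        return IMF_FIXDATE.test(value) && !Number.isNaN(Date.parse(value));
      }

      console.log(isValidHttpDate("Tue, 15 Nov 1994 08:12:31 GMT")); // true
      console.log(isValidHttpDate("1994-11-15T08:12:31Z")); // false - ISO 8601, not IMF-fixdate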

      Any ideas what I've done, or what might be happening?

      Cheers,

      Andrew

      posted in Xen Orchestra
    • RE: Understanding backup sizes

      @olivierlambert Do tools exist to defragment VHDs?

      posted in Xen Orchestra
    • RE: Understanding backup sizes

      Update: part of this is my own error. The NFS storage is on ZFS with compression; du -Ash (apparent size) shows a much larger amount of data, more consistent with what the backup report is showing. That still doesn't reconcile the backup size with the amount of data actually stored in the VHD, though.
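
      For anyone else hitting this, here's a rough Node sketch of the same comparison (the path is just an example): st_size is the apparent size that du -A reports, while st_blocks reflects what ZFS actually allocated after compression.

      import { statSync } from "node:fs";

      // Example path only - point this at a VHD on the ZFS-backed NFS share.
      const s = statSync("/mnt/nfs/xo-vm-backups/example.vhd");

      const apparent = s.size; // bytes the file claims to hold (du -A)
      const allocated = s.blocks * 512; // bytes actually allocated on disk (plain du)

      console.log(`apparent:  ${(apparent / 2 ** 30).toFixed(2)} GiB`);
      console.log(`allocated: ${(allocated / 2 ** 30).toFixed(2)} GiB`);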

      posted in Xen Orchestra
    • Understanding backup sizes

      I'm running XO af5d8 and have been following along with the delta backup issues over the last week or so. This is a homelab, so I cleared the decks and started with a fresh backup last night – new remote, all old backups and snapshots cleared.

      I'm trying to understand transfer sizes. I've configured a nightly delta backup job, and as you'd expect, the first backup was a full (given I'd started with a clean slate). I've got two remotes configured for this job: a local NFS share and a Backblaze S3 bucket:

      • The e-mailed backup report states 258.3 GiB transferred
      • The XO backup log on the backup overview screen shows 129.15 GiB transferred
      • The output of du -sh on the NFS share shows 39 GiB in the folder
      • The S3 bucket size according to Backblaze is 46.7 GiB
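
      (Though I notice that 129.15 GiB × 2 = 258.3 GiB exactly – could the e-mailed report be summing the transfers to both remotes? Just a guess.)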

      I'm confused!

      Equally confusing is the detail of one of my container VMs, cont-01, which:

      • Has a reported transfer size of 59.18 GiB in the XO backup overview screen logs
      • Has 3.8 GiB of reported disk usage according to df -h
      • Has an associated VHD file that du -h reports as 189 MiB on disk

      I'm missing something here. How do I make sense of this, and what is causing the disparity in sizes? Why would a VM that seemingly holds at most about 4 GB of data result in a reported transfer of nearly 60 GB, when the amount of data actually stored on the remote is clearly a fraction of that?
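
      For what it's worth, a dynamic VHD is sparse, so its virtual size can dwarf its file size on disk. A rough sketch of reading the virtual size from the footer – path hypothetical, and it assumes a footer-terminated dynamic VHD:

      import { closeSync, fstatSync, openSync, readSync } from "node:fs";

      // Hypothetical path to cont-01's VHD on the remote.
      const fd = openSync("/mnt/nfs/xo-vm-backups/cont-01.vhd", "r");
      const { size } = fstatSync(fd);

      // A VHD ends with a 512-byte footer; bytes 48-55 hold the virtual
      // ("current") size as a big-endian 64-bit integer.
      const footer = Buffer.alloc(512);
      readSync(fd, footer, 0, 512, size - 512);
      closeSync(fd);

      console.log("cookie:", footer.toString("ascii", 0, 8)); // should be "conectix"
      console.log("virtual size:", footer.readBigUInt64BE(48), "bytes");
      console.log("file size:", size, "bytes"); // far smaller when sparse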

      Is this a bug, a misunderstanding on my part (almost certainly), or something else?

      Cheers,

      Andrew

      posted in Xen Orchestra
    • RE: Delta Backup Changes in 5.66 ?

      I'm not sure if mine is an edge case or another "me too", but d3071 has partially fixed the "always a full backup" issue on my end.

      I'm not sure exactly what's going on, but this morning I did wonder if it's to do with having two delta backup jobs (backing up to two different remotes), and whether something strange was happening with the snapshots between the two jobs.

      For example, from my backup report for the first backup to a local NFS remote:

      [screenshot: f2136443-ab35-4881-ab3e-8ac3f9c8bf31-image.png]

      Half an hour or so later, that same VM yields a full backup-sized transfer:

      [screenshot: 6f6bf1a5-418e-4d8f-9508-c45ce9014338-image.png]

      (Incidentally, I don't understand how it's a 3.33 GiB transfer when the VHD size on the SR is ~111 MB and the output of df -h on the VM shows 1.8 GB used. I'm confused!)

      I'll roll onto feat-add-debug-backup and see what that yields!

      posted in Xen Orchestra
    • RE: Backblaze B2 as a backup remote

      @olivierlambert Ah! I did read your blog post. That's fantastic. I'm in no screaming hurry to implement S3 backups, but it's great that you've got some promising work in the pipeline! 🙂

      posted in Xen Orchestra
    • Backblaze B2 as a backup remote

      I'm consistently seeing timeout errors at the six-minute mark when attempting to use Backblaze B2 as a remote for VM backups. This doesn't happen with smaller backups (i.e., metadata), only with the larger backups of VMs.

      The relevant error seems to boil down to (full log):

      Error calling AWS.S3.upload: HTTP connection has timed out
      

      I'm not sure if this is a bug per se or a misconfiguration; I don't feel like it's a connectivity issue, as I'm able to make smaller backups without a problem. It's either a Backblaze thing, or there's an implementation problem in XO around writing large files.
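
      If it is a timeout on XO's side: as far as I know, the AWS SDK v2 client defaults to a 120000 ms HTTP timeout, which can be raised, and upload() can be told to use smaller multipart chunks so each individual request finishes quickly. A hedged sketch – the endpoint, bucket, and sizes here are examples, not XO's actual settings:

      import AWS from "aws-sdk";
      import { createReadStream } from "node:fs";

      // Example settings only - B2's S3-compatible endpoint varies by region.
      const s3 = new AWS.S3({
        endpoint: "https://s3.us-west-002.backblazeb2.com",
        httpOptions: { timeout: 600_000 }, // SDK default is 120000 ms
      });

      // upload() manages multipart transfers; smaller parts keep each
      // individual request well under the timeout.
      s3.upload(
        {
          Bucket: "example-bucket", // hypothetical
          Key: "xo-vm-backups/example.vhd", // hypothetical
          Body: createReadStream("/tmp/example.vhd"),
        },
        { partSize: 64 * 1024 * 1024, queueSize: 2 },
        (err, data) => (err ? console.error(err) : console.log("uploaded:", data.Location)),
      );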

      I've read previous forum posts about the XO server running out of RAM and confirmed that's not the case here: I watched it closely during the backup job and it doesn't swap.

      Grateful for any pointers!

      Cheers,

      — andrew

      posted in Xen Orchestra