XCP-ng

    Delta Backups Failing: AWS.S3.uploadPartCopy: CPU too busy

    Xen Orchestra · 8 Posts · 3 Posters · 514 Views
    • stevewest15

      Hi,

      Is the following error from XCP-ng or from Backblaze servers?

      [screenshot of the error: AWS.S3.uploadPartCopy: CPU too busy]

      Thx,

      SW

      • olivierlambert (Vates 🪐 Co-Founder & CEO)

        Hmm, never heard of this one. Any idea @florent?

        • florent (Vates 🪐 XO Team)

          @olivierlambert Never heard of this one either. I'll check whether it's a Backblaze or an XO message.
          Since it uses uploadPartCopy, it should occur during a full (XVA) backup.

          • florent (Vates 🪐 XO Team)

            @stevewest15

            There isn't much documentation online, but I found here (https://www.reddit.com/r/backblaze/comments/bvufz0/servers_are_often_too_busy_is_this_normal_b2/) that we should retry when this error occurs.

            Can you copy the full log of this backup? It would be easier if we could get a machine code (probably an HTTP code like 50x), as explained here: https://www.backblaze.com/blog/b2-503-500-server-error/

            You can get it by clicking on the second icon from the left in your job report.
            [screenshot: job report icons]

            • stevewest15

              @florent, thank you! I tried clicking on the report bug icon, but it returned an error saying the URL was too long, so I've attached the backup log: backup_log.txt

              • florent (Vates 🪐 XO Team)

                Thank you @stevewest15.
                I don't have the info I wanted inside, but I think we'll have to handle B2's 500 errors with a retry. These errors are uncommon on S3, but they're there by design in B2.
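
                A minimal sketch of what that retry could look like with the AWS SDK v3 (the wrapper name, backoff constants, and error handling are illustrative assumptions here, not XO's actual code):

                ```ts
                import {
                  S3Client,
                  UploadPartCopyCommand,
                  UploadPartCopyCommandInput,
                  UploadPartCopyCommandOutput,
                } from "@aws-sdk/client-s3";

                // B2 returns 500/503 by design under load; treat them as retryable.
                const TRANSIENT = new Set([500, 503]);

                async function uploadPartCopyWithRetry(
                  client: S3Client,
                  input: UploadPartCopyCommandInput,
                  maxAttempts = 5,
                ): Promise<UploadPartCopyCommandOutput> {
                  for (let attempt = 1; ; attempt++) {
                    try {
                      return await client.send(new UploadPartCopyCommand(input));
                    } catch (err) {
                      const status = (err as any)?.$metadata?.httpStatusCode;
                      // Give up on non-transient errors or when out of attempts.
                      if (!TRANSIENT.has(status) || attempt >= maxAttempts) {
                        throw err;
                      }
                      // Exponential backoff with jitter before the next attempt.
                      const delayMs = 2 ** attempt * 100 + Math.random() * 100;
                      await new Promise(resolve => setTimeout(resolve, delayMs));
                    }
                  }
                }
                ```

                (The SDK's own `maxAttempts` client option ships with a retry strategy that already covers 5xx responses, so tuning that may be enough.)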

                • stevewest15

                  @florent, thank you for your help on this! I'm wondering if we should rely on B2 for our offsite disaster recovery. Other than testing the backups made to B2, is there a way XO ensures the backups are valid and that all the data made it to the S3 buckets?

                  Thank You,

                  SW

                  • florent (Vates 🪐 XO Team)

                    @stevewest15 I can't advise on the reliability of B2.
                    Their design requires us to make a minor modification to our upload service.

                    During a full XVA backup to an S3-like service, each part is uploaded along with its hash, which is used to verify its integrity.
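
                    As a rough sketch of that mechanism (the helper name and arguments are placeholders, not XO's actual code), each UploadPart request can carry a Content-MD5 header that the server verifies on receipt:

                    ```ts
                    import { createHash } from "node:crypto";
                    import { S3Client, UploadPartCommand } from "@aws-sdk/client-s3";

                    async function uploadVerifiedPart(
                      client: S3Client,
                      bucket: string,
                      key: string,
                      uploadId: string,
                      partNumber: number,
                      body: Buffer,
                    ) {
                      // The server recomputes the MD5 of the bytes it received and
                      // rejects the part if it doesn't match this header, so a
                      // corrupted upload fails loudly instead of landing silently.
                      const contentMd5 = createHash("md5").update(body).digest("base64");
                      return client.send(
                        new UploadPartCommand({
                          Bucket: bucket,
                          Key: key,
                          UploadId: uploadId,
                          PartNumber: partNumber,
                          Body: body,
                          ContentMD5: contentMd5,
                        }),
                      );
                    }
                    ```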
