XCP-ng
    S3 Backup - maximum number of parts

    Xen Orchestra
    4 Posts 3 Posters 223 Views
    • deligatedgeek

      We use the S3 Swift proxy in front of Swift storage and have built some infrastructure around it.

      When we tried S3 for backups, we got this error:
      xo-server[144792]: error: InvalidArgument: Part number must be an integer between 1 and 1000, inclusive

      Swift defaults to a maximum of 1000 parts, as do several commercial S3 implementations. We have tried increasing it, but that caused instability in one of the applications built around Swift SLO (Static Large Objects).

      The workaround I've implemented is to reduce MAX_PART_NUMBER in
      @xen-orchestra/fs/dist/s3.js to 1000, but this will be overwritten on each upgrade.

      Could an option be added to the AWS S3 parameters to set this? Or even just the ability to override the value in a configuration file somewhere.
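      The arithmetic behind the error can be sketched as follows (a rough illustration, not XO's actual code; the helper names are made up). A backup only slightly larger than chunkSize × maxParts produces more parts than the backend allows:

```javascript
// Illustrative sketch: how many multipart-upload parts an object of a given
// size produces at a given chunk size, and whether that fits a backend's
// part limit. Helper names are hypothetical, not from @xen-orchestra/fs.
const MiB = 1024 * 1024;
const GiB = 1024 * MiB;

function partCount(objectSize, chunkSize) {
  // each part holds one chunk; the last part may be smaller
  return Math.ceil(objectSize / chunkSize);
}

function fitsPartLimit(objectSize, chunkSize, maxParts) {
  return partCount(objectSize, chunkSize) <= maxParts;
}

// An 8 GiB object with 4 MiB chunks needs 2048 parts:
// fine for AWS's 10000-part limit, but over Swift's default of 1000,
// which triggers the InvalidArgument error above.
console.log(partCount(8 * GiB, 4 * MiB)); // 2048
console.log(fitsPartLimit(8 * GiB, 4 * MiB, 1000)); // false
console.log(fitsPartLimit(8 * GiB, 4 * MiB, 10000)); // true
```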

      • deligatedgeek @deligatedgeek

        Hi @olivierlambert,

        Finally got round to following up on this.

        I found a better workaround using DEFAULT_CHUNK_SIZE, which is normally set to 4MB. With MAX_PART_NUMBER at 1000, this limits the largest object backed up to a third-party S3 implementation to about 4GB.

        Increasing DEFAULT_CHUNK_SIZE in the S3 code, or exposing it as a setting (in a configuration file or the web GUI), would allow backups of VMs larger than 4GB without the memory required to track several thousand object chunks.
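        The trade-off above can be sketched in two hypothetical helpers (illustrative only, not part of @xen-orchestra/fs): the object-size ceiling implied by a chunk size and part limit, and the smallest chunk size that keeps a given VM under that limit:

```javascript
// Sketch of the sizing trade-off behind DEFAULT_CHUNK_SIZE and
// MAX_PART_NUMBER. Helper names are made up for illustration.
const MiB = 1024 * 1024;
const GiB = 1024 * MiB;

function maxObjectSize(chunkSize, maxParts) {
  // largest object an upload can reach at this chunk size
  return chunkSize * maxParts;
}

function minChunkSize(objectSize, maxParts) {
  // smallest chunk size (rounded up to a whole MiB) that keeps the
  // upload within maxParts parts
  return Math.ceil(objectSize / maxParts / MiB) * MiB;
}

// 4 MiB chunks x 1000 parts gives 4000 MiB, the ~4GB ceiling above.
console.log(maxObjectSize(4 * MiB, 1000) / MiB); // 4000
// A 100 GiB VM needs at least 103 MiB chunks to fit in 1000 parts.
console.log(minChunkSize(100 * GiB, 1000) / MiB); // 103
```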

        Regards,

        Mark

        • olivierlambert Vates 🪐 Co-Founder CEO

          That's very interesting, let me make sure @florent got this

        • florent Vates 🪐 XO Team @deligatedgeek

          @deligatedgeek are you modifying the chunk size for delta or full backups? A lot of the incremental code assumes that blocks are aligned to 2MB.

          For full backups (one big XVA file per VM), the code should handle files bigger than 4GB by dynamically increasing the chunk size. We do not increase the chunk size by default because it also significantly increases memory consumption. The minimum chunk size is MIN_PART_SIZE (5MB, as per the AWS docs) and the maximum is MAX_PART_SIZE (5GB). The maximum file size is 50TB; the biggest full backup I have seen in the wild was a few hundred GB.
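          The dynamic sizing described above could be sketched like this (an illustration of the idea, not the actual @xen-orchestra/fs implementation; the function name is made up): start from the default chunk size and grow it only when the file would otherwise exceed the part limit, clamped between MIN_PART_SIZE and MAX_PART_SIZE:

```javascript
// Hypothetical sketch of dynamic part sizing for full backups.
const MiB = 1024 * 1024;
const GiB = 1024 * MiB;

const MAX_PART_NUMBER = 10000; // AWS limit; Swift often caps this at 1000
const MIN_PART_SIZE = 5 * MiB; // per the AWS multipart-upload docs
const MAX_PART_SIZE = 5 * GiB;

function computePartSize(fileSize, defaultChunkSize = 4 * MiB) {
  // smallest chunk size that fits the file in MAX_PART_NUMBER parts
  const needed = Math.ceil(fileSize / MAX_PART_NUMBER);
  const size = Math.max(defaultChunkSize, MIN_PART_SIZE, needed);
  if (size > MAX_PART_SIZE) {
    throw new Error('file too large for a single multipart upload');
  }
  return size;
}

// Small files stay at the minimum; big files get proportionally
// bigger chunks, so memory use only grows when it has to.
console.log(computePartSize(1 * GiB) / MiB); // 5
console.log(computePartSize(100 * GiB)); // 10737419
```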

