XCP-ng

Backup to S3 aborted: what permissions are required?

Xen Orchestra · Tags: backup, aws · 26 Posts · 5 Posters · 4.4k Views

• julien-f (Vates 🪐 Co-Founder, XO Team), in reply to @olivierlambert:

isFull: true means that it's a full copy (because there was no previous valid delta chain).

The issue is not related to this or to permissions. It appears that the S3 protocol has constraints that are not really compatible with our use case (more info); we are investigating, but it does not look great so far.

• jensolsson.se, in reply to @julien-f:

@julien-f Ah, of course. If I had thought about that isFull line for a few minutes, I would probably have figured that out 🙂

Regarding S3, I think that sounds a bit strange. I read the linked article, but as far as I understand there should not be a problem uploading a multipart object to S3 without knowing the total file size: https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html

Do you really need to sign the URL? I think the credentials can be used to upload without signing the URL. Or is this used so that the XCP-ng servers transfer directly to S3? I also found this article, which implies that every part of the multipart upload could be signed individually if presigned URLs are preferred:
https://www.altostra.com/blog/multipart-uploads-with-s3-presigned-url
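
Roughly what that article describes, as an untested sketch with the AWS SDK for JavaScript v3 (I don't know which SDK XO uses internally; bucket, key and part count are placeholders):

    // Untested sketch: presigning each part of a multipart upload individually,
    // as in the linked article. Bucket, key, part count and region are placeholders.
    import {
      S3Client,
      CreateMultipartUploadCommand,
      UploadPartCommand,
    } from "@aws-sdk/client-s3";
    import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

    const s3 = new S3Client({ region: "eu-west-1" }); // placeholder region

    async function presignParts(bucket: string, key: string, partCount: number) {
      // Start the multipart upload to obtain an UploadId.
      const { UploadId } = await s3.send(
        new CreateMultipartUploadCommand({ Bucket: bucket, Key: key })
      );

      // One presigned URL per part; whoever holds a URL can PUT that part
      // directly to S3 without having the credentials.
      const urls: string[] = [];
      for (let partNumber = 1; partNumber <= partCount; partNumber++) {
        urls.push(
          await getSignedUrl(
            s3,
            new UploadPartCommand({
              Bucket: bucket,
              Key: key,
              UploadId,
              PartNumber: partNumber,
            }),
            { expiresIn: 3600 } // URL validity in seconds
          )
        );
      }
      return { UploadId, urls };
    }

You would still have to collect the ETag of each uploaded part and finish with CompleteMultipartUpload.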

I don't know which API you are using with S3, but on the command line, if the source is a stream, one could do something like:
        <command streaming to stdout> | aws s3 cp - s3://bucket/...

I know that S3 needs an expected size for the upload. This is used to calculate the number of parts so that it neither exceeds the limit of 5 GB per part nor uses too many parts, but setting the expected upload size to 5 TB would probably work:
        https://loige.co/aws-command-line-s3-content-from-stdin-or-to-stdout/
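
For what it's worth, the JavaScript SDK has a helper that streams a multipart upload without knowing the total size, as long as the part size is fixed up front (on the CLI side there is an --expected-size option for exactly this). Untested sketch; bucket, key, region and the source stream are placeholders:

    // Untested sketch with @aws-sdk/lib-storage: a streaming multipart upload
    // where the total size is unknown. partSize must be picked so that
    // partSize * 10 000 parts covers the largest backup (max 5 GiB per part).
    import { S3Client } from "@aws-sdk/client-s3";
    import { Upload } from "@aws-sdk/lib-storage";
    import { createReadStream } from "node:fs";

    const s3 = new S3Client({ region: "eu-west-1" }); // placeholder region

    async function streamToS3() {
      const upload = new Upload({
        client: s3,
        params: {
          Bucket: "my-backup-bucket",           // placeholder
          Key: "backups/vm-export.vhd",         // placeholder
          Body: createReadStream("/dev/stdin"), // any Readable stream, size unknown
        },
        partSize: 512 * 1024 * 1024, // ~512 MB parts: 10 000 parts still reach ~5 TB
        queueSize: 2,                // how many parts are buffered in memory at once
      });
      await upload.done();
    }

The trade-off is memory: roughly partSize × queueSize is buffered at any given time.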

Does this make sense, or did I fully misunderstand the problem?

        Kind regards
        Jens

• nraynaud (XCP-ng Team), in reply to @jensolsson.se:

@jensolsson-se Hi Jens, you are on the right track. The other thing is that 5 TB / 10,000 parts leads to a fragment size of roughly 500 MB, which is a bit much to keep in memory.

• jensolsson.se, in reply to @nraynaud:

@nraynaud So the memory requirement is ~500 MB, and that is too much?

Is there a requirement that there be one single file in the destination, or would it be possible to set the fragment size to 100 MB and, if the file would be over 1 TB, just create a new file? .0 .1 .2 .3 and so on?

Maybe I am breaking some convention here, but I guess it should work well?

I also read that it is possible to copy existing objects into the parts of a multipart upload:
            https://aws.amazon.com/blogs/developer/efficient-amazon-s3-object-concatenation-using-the-aws-sdk-for-ruby/

I am wondering whether this would make it possible to do a multipart upload of 10 MB parts into one big 100 GB file, and then take 50 x 100 GB files and combine them into one big 5 TB file. As I understand it, this can be done 100% within S3, with no upload/download.
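
As an untested sketch with the AWS SDK for JavaScript v3 (bucket and key names are placeholders), the concatenation step could look like this; note that a copied part is itself limited to 5 GB, so larger source objects would have to be copied in CopySourceRange slices, and every part except the last must be at least 5 MB:

    // Untested sketch: server-side concatenation of existing S3 objects into one
    // object via UploadPartCopy, as in the linked article. No data is downloaded
    // or re-uploaded; S3 copies the bytes internally.
    import {
      S3Client,
      CreateMultipartUploadCommand,
      UploadPartCopyCommand,
      CompleteMultipartUploadCommand,
    } from "@aws-sdk/client-s3";

    const s3 = new S3Client({ region: "eu-west-1" }); // placeholder region

    async function concatenateObjects(bucket: string, targetKey: string, sourceKeys: string[]) {
      const { UploadId } = await s3.send(
        new CreateMultipartUploadCommand({ Bucket: bucket, Key: targetKey })
      );

      const parts: { PartNumber: number; ETag?: string }[] = [];
      for (let i = 0; i < sourceKeys.length; i++) {
        // Each existing object becomes one part of the target object.
        const { CopyPartResult } = await s3.send(
          new UploadPartCopyCommand({
            Bucket: bucket,
            Key: targetKey,
            UploadId,
            PartNumber: i + 1,
            CopySource: `${bucket}/${sourceKeys[i]}`,
          })
        );
        parts.push({ PartNumber: i + 1, ETag: CopyPartResult?.ETag });
      }

      await s3.send(
        new CompleteMultipartUploadCommand({
          Bucket: bucket,
          Key: targetKey,
          UploadId,
          MultipartUpload: { Parts: parts },
        })
      );
    }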

            Kind regards
            Jens

• nraynaud (XCP-ng Team), in reply to @jensolsson.se:

@jensolsson-se Yes, we have thought of complicated solutions too, but we haven't really dug into them yet. Because this is a backup situation, we'd like the state of things and the failure modes to stay manageable.

• jensolsson.se, in reply to @nraynaud:

@nraynaud Makes sense to keep it simple.

But this means that S3 backup in XO is currently broken, right? So I need to find some other way to back up my VMs for now.

• nraynaud (XCP-ng Team), in reply to @jensolsson.se:

                  @jensolsson-se Can you use SMB, NFS or local backups? I don't think S3 has ever worked for anyone.

• jensolsson.se, in reply to @nraynaud:

@nraynaud Yes, I use NFS today, but I would love to send it to the cloud somewhere as well.

• alexredston, in reply to @jensolsson.se:

@jensolsson-se

I have backups working to S3 using IAM permissions and KMS on S3.

Right now I am backing up a 1 TB VM to S3 in an hour, which is great.

First thing: don't create the directory where the backups will be stored on S3 in advance. It will be created automatically; otherwise the job fails, complaining that the directory should be empty.

Then you need these permissions:

I created them as a JSON policy, assigned it to a group, and assigned that group to an IAM user to be used as a service account. Note that the key referred to here is a key that is a property of that IAM user, not to be confused with the symmetric encryption key which will need to be assigned to your bucket.

                      {
                          "Version": "2012-10-17",
                          "Statement": [
                              {
                                  "Sid": "AllowBucketListing",
                                  "Effect": "Allow",
                                  "Action": [
                                      "s3:ListBucket",
                                      "s3:GetBucketLocation",
                                      "s3:ListBucketVersions"
                                  ],
                                  "Resource": [
                                      "arn:aws:s3:::your-bucket-name-here",
                                      "arn:aws:s3:::your-bucket-name-here/*"
                                  ]
                              },
                              {
                                  "Sid": "AllowObjectOperations",
                                  "Effect": "Allow",
                                  "Action": [
                                      "s3:GetObject",
                                      "s3:PutObject",
                                      "s3:DeleteObject",
                                      "s3:DeleteObjectVersion",
                                      "s3:ListBucketMultipartUploads",
                                      "s3:ListMultipartUploadParts",
                                      "s3:AbortMultipartUpload",
                                      "s3:GetObjectVersion",
                                      "kms:GenerateDataKey"
                                  ],
                                  "Resource": [
                                      "arn:aws:s3:::your-bucket-name-here/*",
                                      "arn:aws:s3:::your-bucket-name-here"
                                  ]
                              },
                              {
                                  "Sid": "AllowKeyAccess",
                                  "Effect": "Allow",
                                  "Action": [
                                      "kms:GenerateDataKey",
                                      "kms:Decrypt"
                                  ],
                                  "Resource": "arn:aws:kms:your-region-here:your-numeric-account-id-here:key/the-uuid-of-the-encryption-key-for-your-bucket-here"
                              }
                          ]
                      }
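
In case it helps, here is a rough, untested sketch of the same wiring with the AWS SDK for JavaScript v3 (@aws-sdk/client-iam); all names, file paths and the region are placeholders, and the console works just as well. The access key created at the end is the one you would then enter in the XO S3 remote:

    // Untested sketch: group with an inline policy, a service-account user in
    // that group, and an access key for that user. Names/region are placeholders.
    import {
      IAMClient,
      CreateGroupCommand,
      PutGroupPolicyCommand,
      CreateUserCommand,
      AddUserToGroupCommand,
      CreateAccessKeyCommand,
    } from "@aws-sdk/client-iam";
    import { readFile } from "node:fs/promises";

    const iam = new IAMClient({ region: "eu-west-1" }); // placeholder region

    async function createBackupServiceAccount() {
      // The JSON policy document shown above, saved to a local file (placeholder path).
      const policyDocument = await readFile("xo-backup-policy.json", "utf8");

      await iam.send(new CreateGroupCommand({ GroupName: "xo-backup" }));
      await iam.send(
        new PutGroupPolicyCommand({
          GroupName: "xo-backup",
          PolicyName: "xo-backup-s3-kms",
          PolicyDocument: policyDocument,
        })
      );

      await iam.send(new CreateUserCommand({ UserName: "xo-backup-svc" }));
      await iam.send(
        new AddUserToGroupCommand({ GroupName: "xo-backup", UserName: "xo-backup-svc" })
      );

      // This access key (ID + secret) is the IAM-user key mentioned above.
      const { AccessKey } = await iam.send(
        new CreateAccessKeyCommand({ UserName: "xo-backup-svc" })
      );
      return AccessKey;
    }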
                      
• alexredston, following up:

I posted this because I personally found this configuration quite involved, and the permissions earlier in the thread were insufficient to make it work when using AWS KMS for bucket encryption as well as the XO-provided encryption secret.
