
    S3 Chunk Size

    • rizaemet 0

      @florent We are a university in Türkiye. The S3 service is provided to us by the National Academic Network and Information Center (ULAKBİM), an official institution of Türkiye.

      • olivierlambert (Vates 🪐 Co-Founder CEO)

        Do you know what software they are using? Ceph? MinIO? Garage?

        • rizaemet 0

          @olivierlambert Ceph. There are a few configuration examples; that's how I learned it's Ceph.

          Edit: When I asked an AI some questions, it said something like this: "If the chunk size is too small, the risk of a 502 increases." Seeing this, I ran a few tests. A backup of a virtual machine with 80 GB of disk space (backup size: 70 GB) went through without any problems. However, a backup of a virtual machine with 16 GB of disk space (backup size: 3 GB) failed. The 502 error seems to have occurred during the clean-vm phase of the backup. The backup itself appears to have been created, though, and it worked when I restored it. I had been backing up virtual machines with large disks and had never encountered this error before.
          This section appears in the log both before the snapshot and after the export:

          ...
          {
            "id": "1768672615028",
            "message": "clean-vm",
            "start": 1768672615028,
            "status": "failure",
            "end": 1768673350279,
            "result": {
              "$metadata": {
                "httpStatusCode": 502,
                "clockSkewCorrected": true,
                "attempts": 3,
                "totalRetryDelay": 112
              },
              "message": "Expected closing tag 'hr' (opened in line 9, col 1) instead of closing tag 'body'.:11:1
            Deserialization error: to see the raw response, inspect the hidden field {error}.$response on this object.",
              "name": "Error",
              "stack": "Error: Expected closing tag 'hr' (opened in line 9, col 1) instead of closing tag 'body'.:11:1
            Deserialization error: to see the raw response, inspect the hidden field {error}.$response on this object.
              at st.parse (/opt/xo/xo-builds/xen-orchestra-202601171930/node_modules/fast-xml-parser/lib/fxp.cjs:1:20727)
              at parseXML (/opt/xo/xo-builds/xen-orchestra-202601171930/node_modules/@aws-sdk/xml-builder/dist-cjs/xml-parser.js:17:19)
              at /opt/xo/xo-builds/xen-orchestra-202601171930/node_modules/@aws-sdk/core/dist-cjs/submodules/protocols/index.js:1454:52
              at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
              at async parseXmlErrorBody (/opt/xo/xo-builds/xen-orchestra-202601171930/node_modules/@aws-sdk/core/dist-cjs/submodules/protocols/index.js:1475:17)
              at async de_CommandError (/opt/xo/xo-builds/xen-orchestra-202601171930/node_modules/@aws-sdk/client-s3/dist-cjs/index.js:5154:11)
              at async /opt/xo/xo-builds/xen-orchestra-202601171930/node_modules/@smithy/middleware-serde/dist-cjs/index.js:8:24
              at async /opt/xo/xo-builds/xen-orchestra-202601171930/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:488:18
              at async /opt/xo/xo-builds/xen-orchestra-202601171930/node_modules/@smithy/middleware-retry/dist-cjs/index.js:254:46
              at async /opt/xo/xo-builds/xen-orchestra-202601171930/node_modules/@aws-sdk/middleware-flexible-checksums/dist-cjs/index.js:318:18"
            }
          }
          ...
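
          The "Expected closing tag 'hr'" failure means the server (likely a proxy in front of the Ceph RGW) answered with an HTML 502 page, which the AWS SDK then tried to parse as an S3 XML error document. A minimal sketch, with a placeholder endpoint and bucket, of how to surface that raw body via the hidden $response field the error message itself points to:

          import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3'

          const client = new S3Client({
            endpoint: 'https://s3.example.edu', // placeholder, not the real endpoint
            region: 'us-east-1',
            forcePathStyle: true, // commonly required for Ceph RGW
          })

          try {
            await client.send(new ListObjectsV2Command({ Bucket: 'backups' }))
          } catch (error) {
            console.error(error.$metadata?.httpStatusCode) // e.g. 502
            // $response is non-enumerable; it carries the raw HTTP response,
            // so the HTML body the XML parser choked on can be inspected here.
            console.error(error.$response)
          }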
          
          • florent (Vates 🪐 XO Team)

            @rizaemet-0 The cleanVM step is the most demanding part of the backup job (mostly listing, moving, and deleting blocks).
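
            To illustrate the load (a sketch, not XO's actual implementation; the bucket, prefix, and isReferenced check are hypothetical), a clean-up pass has to paginate the whole listing, one request per page of up to 1,000 keys, and then delete in batches capped at 1,000 keys per call:

            import { S3Client, paginateListObjectsV2, DeleteObjectsCommand } from '@aws-sdk/client-s3'

            const client = new S3Client({ endpoint: 'https://s3.example.edu', region: 'us-east-1', forcePathStyle: true })

            async function pruneBlocks(bucket, prefix, isReferenced) {
              const doomed = []
              // One HTTP request per page of up to 1,000 keys.
              for await (const page of paginateListObjectsV2({ client }, { Bucket: bucket, Prefix: prefix })) {
                for (const { Key } of page.Contents ?? []) {
                  if (!isReferenced(Key)) doomed.push({ Key })
                }
              }
              // DeleteObjects accepts at most 1,000 keys per request.
              for (let i = 0; i < doomed.length; i += 1000) {
                await client.send(new DeleteObjectsCommand({
                  Bucket: bucket,
                  Delete: { Objects: doomed.slice(i, i + 1000) },
                }))
              }
            }

            A backup repository can hold a very large number of block objects, so this is many round trips, and a flaky gateway gets many chances to return a 502.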

            • olivierlambert (Vates 🪐 Co-Founder CEO)

              Yeah, it's not surprising. Ceph's S3 implementation is known to be "average" and failure-prone in those cases.

              • rizaemet 0

                Could you please share which version of the aws-sdk Xen Orchestra is currently using? I will share it with our S3 service provider.

                • florent (Vates 🪐 XO Team)

                  @rizaemet-0
                  Sure: "@aws-sdk/client-s3": "^3.54.0"

                  • john.c

                    @olivierlambert said in S3 Chunk Size:

                    Yeah, it's not surprising. Ceph's S3 implementation is known to be "average" and failure-prone in those cases.

                    @olivierlambert @florent Well, you're likely to see more use of Ceph by providers following MinIO entering maintenance mode. Also, Canonical is going to be doing more development on, and selling, what it calls MicroCeph; its blog post gives more details.

                    https://ubuntu.com/blog/microceph-why-its-the-superior-minio-alternative

                    • florent (Vates 🪐 XO Team)

                      @john.c Yes, for sure, and maybe we will be able to make the chunk size configurable at a future date (especially since we did some of the groundwork for VHD/QCOW2).
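
                      As a hedged illustration of what that could look like with the SDK XO already uses (this is not current XO behavior; endpoint, bucket, and file names are placeholders), the Upload helper from @aws-sdk/lib-storage exposes a partSize option, with S3 requiring at least 5 MiB per part except the last:

                      import { S3Client } from '@aws-sdk/client-s3'
                      import { Upload } from '@aws-sdk/lib-storage'
                      import { createReadStream } from 'node:fs'

                      const upload = new Upload({
                        client: new S3Client({ endpoint: 'https://s3.example.edu', region: 'us-east-1', forcePathStyle: true }),
                        params: { Bucket: 'backups', Key: 'vm.vhd', Body: createReadStream('vm.vhd') },
                        partSize: 64 * 1024 * 1024, // 64 MiB parts instead of the 5 MiB default
                        queueSize: 4, // number of parts uploaded in parallel
                      })
                      await upload.done()

                      Larger parts mean fewer requests per backup, which is the knob the "chunk size too small" warning quoted earlier in the thread is about.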

                      • olivierlambert (Vates 🪐 Co-Founder CEO)

                        I've heard better feedback about Garage and RustFS than about Ceph as a successor to MinIO.

                        • Pilow

                          @olivierlambert Planning to give RustFS a try, I'll report back (currently full MinIO).

                          • olivierlambert (Vates 🪐 Co-Founder CEO)

                            Keep us posted, happy to hear about it!
