XCP-ng

    robyt (@robyt)

    Reputation: 10
    Profile views: 24
    Posts: 68
    Followers: 0
    Following: 0
    Website: www.tosnet.it
    Location: Italy


    Best posts made by robyt

    • RE: mirror backup to S3

      @florent mmm.
      Sounds good 🙂

       dns_interno2 (ctx3.tosnet.it)
      
          Wasabi
              transfer
              Start: 2025-07-27 06:02
              End: 2025-07-27 06:15
              Duration: 13 minutes
              Size: 14.21 GiB
              Speed: 18.73 MiB/s
              transfer
              Start: 2025-07-27 06:15
              End: 2025-07-27 06:16
              Duration: a few seconds
              Size: 314 MiB
              Speed: 12.17 MiB/s
          Start: 2025-07-27 06:02
          End: 2025-07-27 06:19
          Duration: 16 minutes
          Wasabi
          Start: 2025-07-27 06:15
          End: 2025-07-27 06:19
          Duration: 3 minutes
      
      Start: 2025-07-27 06:02
      End: 2025-07-27 06:19
      Duration: 16 minutes
      dns_interno1 (ctx1.tosnet.it)
      
          Wasabi
              transfer
              Start: 2025-07-27 06:02
              End: 2025-07-27 06:20
              Duration: 17 minutes
              Size: 25.03 GiB
              Speed: 24.84 MiB/s
              transfer
              Start: 2025-07-27 06:20
              End: 2025-07-27 06:20
              Duration: a few seconds
              Size: 506 MiB
              Speed: 16.66 MiB/s
          Start: 2025-07-27 06:02
          End: 2025-07-27 06:23
          Duration: 21 minutes
          Wasabi
          Start: 2025-07-27 06:20
          End: 2025-07-27 06:23
          Duration: 3 minutes
      
      Start: 2025-07-27 06:02
      End: 2025-07-27 06:23
      Duration: 21 minutes
      FtpA TEST (ctx1.tosnet.it)
      
          Wasabi
              transfer
              Start: 2025-07-27 06:19
              End: 2025-07-27 07:40
              Duration: an hour
              Size: 164.59 GiB
              Speed: 34.67 MiB/s
              transfer
              Start: 2025-07-27 07:40
              End: 2025-07-27 07:41
              Duration: a few seconds
              Size: 804 MiB
              Speed: 52.72 MiB/s
          Start: 2025-07-27 06:19
          End: 2025-07-27 07:42
          Duration: an hour
          Wasabi
          Start: 2025-07-27 07:40
          End: 2025-07-27 07:42
          Duration: 2 minutes
      
      Start: 2025-07-27 06:19
      End: 2025-07-27 07:42
      Duration: an hour
      
      posted in Backup
    • RE: mirror backup to S3

      @florent hi, I've adjusted the retention parameters and I'm waiting a few days of backup/mirror runs to check.

      posted in Backup
    • huge number of API calls "sr.getAllUnhealthyVdiChainsLength" in tasks

      Hi, I have this situation in Tasks:
      6bd4ea87-1a46-48b4-a92f-0edba988c385-immagine.png
      What is this task?
      I now have ~50 of these calls.

      posted in Xen Orchestra
    • RE: New xoa: unable to login

      @olivierlambert it now works fine, thank you

      posted in Xen Orchestra

    Latest posts made by robyt

    • RE: mirror backup to S3

      @florent said in mirror backup to S3:

      @robyt you're doing incremental backup in 2 steps: a complete backup (full/key disks) and a delta (differencing/incremental). Both of these are transferred through an incremental mirror.

      On the other hand, if you do Backup, it builds one XVA file per VM containing all the VM data at each backup. These are transferred through a full backup mirror.

      We are working on clarifying the vocabulary.

      Ahhhh... so the full mirror to S3 is not necessary.

      posted in Backup
    • RE: mirror backup to S3

      @florent said in mirror backup to S3:

      @robyt are you using full backups (called "backup") on the source? Because an incremental mirror will transfer all the backups generated by a "delta backup", whether it's the first transfer or a following delta.

      (Our terminology can be confusing for now.)

      Hi, I have two delta jobs, one with "force full backup" checked.
      In the log I only have this:
      31356631-ce43-43a0-b626-e9b0dbe52da2-immagine.png

      posted in Backup
    • RE: mirror backup to S3

      any ideas?

      posted in Backup
    • RE: mirror backup to S3

      @florent I have a little problem with backup to S3/Wasabi.

      For delta, everything seems OK:

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1751914964818",
        "jobId": "e4adc26c-8723-4388-a5df-c2a1663ed0f7",
        "jobName": "Mirror wasabi delta",
        "message": "backup",
        "scheduleId": "62a5edce-88b8-4db9-982e-ad2f525c4eb9",
        "start": 1751914964818,
        "status": "success",
        "infos": [
          {
            "data": {
              "vms": [
                "2771e7a0-2572-ca87-97cf-e174a1d35e6f",
                "b89670f6-b785-7df0-3791-e5e41ec8ee08",
                "cac6afed-5df8-0817-604c-a047a162093f"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "b89670f6-b785-7df0-3791-e5e41ec8ee08"
            },
            "id": "1751914968373",
            "message": "backup VM",
            "start": 1751914968373,
            "status": "success",
            "tasks": [
              {
                "id": "1751914968742",
                "message": "clean-vm",
                "start": 1751914968742,
                "status": "success",
                "end": 1751914979708,
                "result": {
                  "merge": false
                }
              },
              {
                "data": {
                  "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88",
                  "type": "remote"
                },
                "id": "1751914984503",
                "message": "export",
                "start": 1751914984503,
                "status": "success",
                "tasks": [
                  {
                    "id": "1751914984667",
                    "message": "transfer",
                    "start": 1751914984667,
                    "status": "success",
                    "end": 1751914992365,
                    "result": {
                      "size": 125829120
                    }
                  },
                  {
                    "id": "1751914995521",
                    "message": "clean-vm",
                    "start": 1751914995521,
                    "status": "success",
                    "tasks": [
                      {
                        "id": "1751915004208",
                        "message": "merge",
                        "start": 1751915004208,
                        "status": "success",
                        "end": 1751915018911
                      }
                    ],
                    "end": 1751915020075,
                    "result": {
                      "merge": true
                    }
                  }
                ],
                "end": 1751915020077
              }
            ],
            "end": 1751915020077
          },
          {
            "data": {
              "type": "VM",
              "id": "2771e7a0-2572-ca87-97cf-e174a1d35e6f"
            },
            "id": "1751914968380",
            "message": "backup VM",
            "start": 1751914968380,
            "status": "success",
            "tasks": [
              {
                "id": "1751914968903",
                "message": "clean-vm",
                "start": 1751914968903,
                "status": "success",
                "end": 1751914979840,
                "result": {
                  "merge": false
                }
              },
              {
                "data": {
                  "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88",
                  "type": "remote"
                },
                "id": "1751914986808",
                "message": "export",
                "start": 1751914986808,
                "status": "success",
                "tasks": [
                  {
                    "id": "1751914987416",
                    "message": "transfer",
                    "start": 1751914987416,
                    "status": "success",
                    "end": 1751914993152,
                    "result": {
                      "size": 119537664
                    }
                  },
                  {
                    "id": "1751914996024",
                    "message": "clean-vm",
                    "start": 1751914996024,
                    "status": "success",
                    "tasks": [
                      {
                        "id": "1751915005023",
                        "message": "merge",
                        "start": 1751915005023,
                        "status": "success",
                        "end": 1751915035567
                      }
                    ],
                    "end": 1751915039414,
                    "result": {
                      "merge": true
                    }
                  }
                ],
                "end": 1751915039414
              }
            ],
            "end": 1751915039415
          },
          {
            "data": {
              "type": "VM",
              "id": "cac6afed-5df8-0817-604c-a047a162093f"
            },
            "id": "1751915020089",
            "message": "backup VM",
            "start": 1751915020089,
            "status": "success",
            "tasks": [
              {
                "id": "1751915020443",
                "message": "clean-vm",
                "start": 1751915020443,
                "status": "success",
                "end": 1751915030194,
                "result": {
                  "merge": false
                }
              },
              {
                "data": {
                  "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88",
                  "type": "remote"
                },
                "id": "1751915034962",
                "message": "export",
                "start": 1751915034962,
                "status": "success",
                "tasks": [
                  {
                    "id": "1751915035142",
                    "message": "transfer",
                    "start": 1751915035142,
                    "status": "success",
                    "end": 1751915052723,
                    "result": {
                      "size": 719323136
                    }
                  },
                  {
                    "id": "1751915056146",
                    "message": "clean-vm",
                    "start": 1751915056146,
                    "status": "success",
                    "tasks": [
                      {
                        "id": "1751915064681",
                        "message": "merge",
                        "start": 1751915064681,
                        "status": "success",
                        "end": 1751915116508
                      }
                    ],
                    "end": 1751915117838,
                    "result": {
                      "merge": true
                    }
                  }
                ],
                "end": 1751915117839
              }
            ],
            "end": 1751915117839
          }
        ],
        "end": 1751915117839
      }
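      As a side note, run logs like the one above can be checked programmatically. A minimal sketch (not an official XO tool; the nesting run → VM task → export → transfer is assumed from the log pasted above) that sums the transferred bytes per VM:

      ```python
      import json

      # Sketch only: sum the "transfer" task sizes per VM from an XO job log
      # shaped like the run above (run -> VM task -> export -> transfer).
      def transfer_sizes(log):
          sizes = {}
          for vm_task in log.get("tasks", []):
              total = sum(
                  sub["result"]["size"]
                  for export in vm_task.get("tasks", [])
                  for sub in export.get("tasks", [])
                  if sub.get("message") == "transfer"
              )
              sizes[vm_task["data"]["id"]] = total
          return sizes

      # Trimmed sample in the same shape as the log above.
      sample = json.loads("""
      {"tasks": [{"data": {"type": "VM", "id": "b89670f6"},
        "tasks": [{"message": "clean-vm"},
                  {"message": "export",
                   "tasks": [{"message": "transfer", "result": {"size": 125829120}}]}]}]}
      """)
      print(transfer_sizes(sample))  # {'b89670f6': 125829120}
      ```

      Comparing these per-run totals over a few days makes it obvious whether a mirror is really sending deltas or re-sending everything.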
      

      For full, I'm not sure:

         {
            "data": {
              "mode": "full",
              "reportWhen": "always"
            },
            "id": "1751757492933",
            "jobId": "35c78a31-67c5-47ba-9988-9c4cb404ed8e",
            "jobName": "Mirror wasabi full",
            "message": "backup",
            "scheduleId": "476b863d-a651-42e5-9bb3-db830dbdac7c",
            "start": 1751757492933,
            "status": "success",
            "infos": [
              {
                "data": {
                  "vms": [
                    "2771e7a0-2572-ca87-97cf-e174a1d35e6f",
                    "b89670f6-b785-7df0-3791-e5e41ec8ee08",
                    "cac6afed-5df8-0817-604c-a047a162093f"
                  ]
                },
                "message": "vms"
              }
            ],
            "end": 1751757496499
          }
      

      XOA sent me this report by email:

      Job ID: 35c78a31-67c5-47ba-9988-9c4cb404ed8e
      Run ID: 1751757492933
      Mode: full
      Start time: Sunday, July 6th 2025, 1:18:12 am
      End time: Sunday, July 6th 2025, 1:18:16 am
      Duration: a few seconds
      

      Four seconds for 203 GB?
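      A quick back-of-the-envelope check (assuming roughly 203 GiB of VM data, as stated above) shows why a 4-second run cannot have moved the data, which matches the log: the full-mirror run recorded no VM tasks at all.

      ```python
      # Sanity check: throughput needed to move ~203 GiB in the reported 4 seconds.
      size_mib = 203 * 1024           # ~203 GiB expressed in MiB
      required = size_mib / 4         # MiB/s that a real transfer would need
      print(f"{required:.0f} MiB/s")  # 51968 MiB/s, orders of magnitude above any WAN link
      ```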

      posted in Backup
    • RE: mirror backup to S3

      Hi @florent, I've cleaned the backup data, set the correct retention, and now it's fine.
      591a0ded-9ab4-40f4-803e-81ea50270e87-immagine.png
      I'm lowering the NBD connections (from 4 to 1); the speed of "test backup con mirror" is too low.

      posted in Backup
    • RE: mirror backup to S3

      @acebmxer of course, this is only a test.
      The problem is not the scheduling but why the incremental sends all the data every time.

      posted in Backup
    • RE: mirror backup to S3

      @acebmxer (excuse my poor English!)
      I now have this situation:
      1 backup job with two disabled schedules, one full and one delta, to a NAS
      1 full mirror backup to Wasabi (S3)
      1 incremental mirror backup

      I've set up two sequences:
      one starting on Sunday for the full backup (the sequence is full backup, then full mirror)
      one every 3 hours with delta backup, then incremental mirror

      The jobs start at the correct time, but the incremental mirror sends the same data size every time.
      Backup to NAS:

      dns_interno1 (ctx1.tosnet.it)
      Transfer data using NBD
          Clean VM directory
          cleanVm: incorrect backup size in metadata
          Start: 2025-06-24 16:00
          End: 2025-06-24 16:00
          Snapshot
          Start: 2025-06-24 16:00
          End: 2025-06-24 16:00
          Backup XEN OLD
              transfer
              Start: 2025-06-24 16:00
              End: 2025-06-24 16:01
              Duration: a few seconds
              Size: 132 MiB
              Speed: 11.86 MiB/s
          Start: 2025-06-24 16:00
          End: 2025-06-24 16:01
          Duration: a minute
      
      Start: 2025-06-24 16:00
      End: 2025-06-24 16:01
      Duration: a minute
      Type: delta
      
       dns_interno1 (ctx1.tosnet.it)
          Wasabi
              transfer
              Start: 2025-06-24 16:02
              End: 2025-06-24 16:15
              Duration: 13 minutes
              Size: 25.03 GiB
              Speed: 34.14 MiB/s
              transfer
              Start: 2025-06-24 16:15
              End: 2025-06-24 16:15
              Duration: a few seconds
              Size: 394 MiB
              Speed: 22.49 MiB/s
          Start: 2025-06-24 16:02
          End: 2025-06-24 16:17
          Duration: 15 minutes
          Wasabi
          Start: 2025-06-24 16:15
          End: 2025-06-24 16:17
          Duration: 2 minutes
      
      Start: 2025-06-24 16:02
      End: 2025-06-24 16:17
      Duration: 15 minutes
      

      The job sends 25 GB to Wasabi every time, not just the incremental data.
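      The symptom can be spotted mechanically: an "incremental" run whose transfer size is close to the known full size is effectively a full. A hypothetical helper (the names are mine, not an XO API; the 25.03 GiB figure comes from the log above):

      ```python
      # Hypothetical helper: flag "incremental" mirror runs whose transfer
      # size is close to the known full size of the VM's disks.
      FULL_SIZE_GIB = 25.03  # full size of dns_interno1, taken from the log above

      def looks_like_full(transfer_gib, full_gib=FULL_SIZE_GIB, tolerance=0.9):
          """True when a supposedly incremental run moved >= 90% of the full size."""
          return transfer_gib >= tolerance * full_gib

      print(looks_like_full(25.03))  # True: the mirror re-sent the whole disk
      print(looks_like_full(0.38))   # False: a genuine delta (~394 MiB)
      ```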

      posted in Backup
    • error in xo task with sequence?

      85c68ffa-9fd3-49ce-84e0-2eb9128babe3-immagine.png

      Good morning, the sequences work fine, but I have a long list of tasks that are closed yet shown at 50% (?).
      The raw log is correct:

      {
        "id": "0mca491c8",
        "properties": {
          "name": "Schedule sequence",
          "userId": "c5ce5e50-29d9-4c00-84e8-402e1063a5c7",
          "type": "xo:schedule:sequence",
          "progress": 50
        },
        "start": 1750744800007,
        "status": "success",
        "updatedAt": 1750746259107,
        "end": 1750746259107
      }
      

      Is this only a UI problem?
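      The raw log supports that reading: `start`/`end` are epoch milliseconds, and converting them (a small sketch, using the values pasted above) shows the sequence genuinely finished, so a `progress` of 50 alongside `"status": "success"` looks like stale UI state rather than a failed run.

      ```python
      from datetime import datetime, timezone

      # Epoch-millisecond timestamps taken from the raw log above.
      start, end = 1750744800007, 1750746259107
      duration_min = (end - start) / 1000 / 60
      print(datetime.fromtimestamp(start / 1000, tz=timezone.utc).isoformat())
      print(f"duration: {duration_min:.1f} minutes")  # ~24.3 minutes
      ```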

      posted in Backup