XCP-ng

    Posts by robyt

    • RE: mirror backup to S3

      @florent said in mirror backup to S3:

      @robyt you're doing incremental backups in two steps: a complete backup (full/key disks) and a delta (differencing/incremental). Both of these are transferred through an incremental mirror.

      On the other hand, if you do a Backup, it builds one XVA file per VM containing all the VM data at each backup. These are transferred through a full backup mirror.

      we are working on clarifying the vocabulary

      Ahhh... so the full mirror to S3 is not necessary.

      posted in Backup
    • RE: mirror backup to S3

      @florent said in mirror backup to S3:

      @robyt are you using full backups (called "backup") on the source?
      because an incremental mirror will transfer all the backups generated by a "delta backup", whether it's the first transfer or a following delta

      (our terminology can be confusing for now)

      Hi, I have two delta jobs, one with "force full backup" checked.
      In the log I have only this:
      (screenshot: 31356631-ce43-43a0-b626-e9b0dbe52da2-immagine.png)

      posted in Backup
    • RE: mirror backup to S3

      Any ideas?

      posted in Backup
    • RE: mirror backup to S3

      @florent I have a little problem with the backup to S3/Wasabi..

      For delta, everything seems OK:

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1751914964818",
        "jobId": "e4adc26c-8723-4388-a5df-c2a1663ed0f7",
        "jobName": "Mirror wasabi delta",
        "message": "backup",
        "scheduleId": "62a5edce-88b8-4db9-982e-ad2f525c4eb9",
        "start": 1751914964818,
        "status": "success",
        "infos": [
          {
            "data": {
              "vms": [
                "2771e7a0-2572-ca87-97cf-e174a1d35e6f",
                "b89670f6-b785-7df0-3791-e5e41ec8ee08",
                "cac6afed-5df8-0817-604c-a047a162093f"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "b89670f6-b785-7df0-3791-e5e41ec8ee08"
            },
            "id": "1751914968373",
            "message": "backup VM",
            "start": 1751914968373,
            "status": "success",
            "tasks": [
              {
                "id": "1751914968742",
                "message": "clean-vm",
                "start": 1751914968742,
                "status": "success",
                "end": 1751914979708,
                "result": {
                  "merge": false
                }
              },
              {
                "data": {
                  "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88",
                  "type": "remote"
                },
                "id": "1751914984503",
                "message": "export",
                "start": 1751914984503,
                "status": "success",
                "tasks": [
                  {
                    "id": "1751914984667",
                    "message": "transfer",
                    "start": 1751914984667,
                    "status": "success",
                    "end": 1751914992365,
                    "result": {
                      "size": 125829120
                    }
                  },
                  {
                    "id": "1751914995521",
                    "message": "clean-vm",
                    "start": 1751914995521,
                    "status": "success",
                    "tasks": [
                      {
                        "id": "1751915004208",
                        "message": "merge",
                        "start": 1751915004208,
                        "status": "success",
                        "end": 1751915018911
                      }
                    ],
                    "end": 1751915020075,
                    "result": {
                      "merge": true
                    }
                  }
                ],
                "end": 1751915020077
              }
            ],
            "end": 1751915020077
          },
          {
            "data": {
              "type": "VM",
              "id": "2771e7a0-2572-ca87-97cf-e174a1d35e6f"
            },
            "id": "1751914968380",
            "message": "backup VM",
            "start": 1751914968380,
            "status": "success",
            "tasks": [
              {
                "id": "1751914968903",
                "message": "clean-vm",
                "start": 1751914968903,
                "status": "success",
                "end": 1751914979840,
                "result": {
                  "merge": false
                }
              },
              {
                "data": {
                  "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88",
                  "type": "remote"
                },
                "id": "1751914986808",
                "message": "export",
                "start": 1751914986808,
                "status": "success",
                "tasks": [
                  {
                    "id": "1751914987416",
                    "message": "transfer",
                    "start": 1751914987416,
                    "status": "success",
                    "end": 1751914993152,
                    "result": {
                      "size": 119537664
                    }
                  },
                  {
                    "id": "1751914996024",
                    "message": "clean-vm",
                    "start": 1751914996024,
                    "status": "success",
                    "tasks": [
                      {
                        "id": "1751915005023",
                        "message": "merge",
                        "start": 1751915005023,
                        "status": "success",
                        "end": 1751915035567
                      }
                    ],
                    "end": 1751915039414,
                    "result": {
                      "merge": true
                    }
                  }
                ],
                "end": 1751915039414
              }
            ],
            "end": 1751915039415
          },
          {
            "data": {
              "type": "VM",
              "id": "cac6afed-5df8-0817-604c-a047a162093f"
            },
            "id": "1751915020089",
            "message": "backup VM",
            "start": 1751915020089,
            "status": "success",
            "tasks": [
              {
                "id": "1751915020443",
                "message": "clean-vm",
                "start": 1751915020443,
                "status": "success",
                "end": 1751915030194,
                "result": {
                  "merge": false
                }
              },
              {
                "data": {
                  "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88",
                  "type": "remote"
                },
                "id": "1751915034962",
                "message": "export",
                "start": 1751915034962,
                "status": "success",
                "tasks": [
                  {
                    "id": "1751915035142",
                    "message": "transfer",
                    "start": 1751915035142,
                    "status": "success",
                    "end": 1751915052723,
                    "result": {
                      "size": 719323136
                    }
                  },
                  {
                    "id": "1751915056146",
                    "message": "clean-vm",
                    "start": 1751915056146,
                    "status": "success",
                    "tasks": [
                      {
                        "id": "1751915064681",
                        "message": "merge",
                        "start": 1751915064681,
                        "status": "success",
                        "end": 1751915116508
                      }
                    ],
                    "end": 1751915117838,
                    "result": {
                      "merge": true
                    }
                  }
                ],
                "end": 1751915117839
              }
            ],
            "end": 1751915117839
          }
        ],
        "end": 1751915117839
      }
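
      As a cross-check, the per-VM sizes are in the nested "transfer" tasks of the log above. A minimal sketch with jq (jq is just a generic JSON tool assumed here; delta-log.json is a hypothetical filename for the saved raw log):

      # Sum the bytes reported by every "transfer" task in the mirror log
      jq '[.. | objects | select(.message? == "transfer") | .result.size] | add' delta-log.json
      # -> 964689920 (125829120 + 119537664 + 719323136, i.e. ~920 MiB across the three VMs)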
      

      For full, I'm not sure:

         {
            "data": {
              "mode": "full",
              "reportWhen": "always"
            },
            "id": "1751757492933",
            "jobId": "35c78a31-67c5-47ba-9988-9c4cb404ed8e",
            "jobName": "Mirror wasabi full",
            "message": "backup",
            "scheduleId": "476b863d-a651-42e5-9bb3-db830dbdac7c",
            "start": 1751757492933,
            "status": "success",
            "infos": [
              {
                "data": {
                  "vms": [
                    "2771e7a0-2572-ca87-97cf-e174a1d35e6f",
                    "b89670f6-b785-7df0-3791-e5e41ec8ee08",
                    "cac6afed-5df8-0817-604c-a047a162093f"
                  ]
                },
                "message": "vms"
              }
            ],
            "end": 1751757496499
          }
      

      XOA sent me an email with this report:

      Job ID: 35c78a31-67c5-47ba-9988-9c4cb404ed8e
      Run ID: 1751757492933
      Mode: full
      Start time: Sunday, July 6th 2025, 1:18:12 am
      End time: Sunday, July 6th 2025, 1:18:16 am
      Duration: a few seconds
      

      Four seconds for 203 GB?
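
      For what it's worth, the raw timestamps in the full-mirror log above agree with the report, and that log has no per-VM "tasks" entries at all, so no transfer appears to have been recorded. A quick jq sketch (full-log.json is a hypothetical filename for the saved raw log):

      # Duration of the run in milliseconds (epoch-ms timestamps from the log)
      jq '.end - .start' full-log.json
      # -> 3566, i.e. about 3.6 seconds

      # Number of per-VM backup tasks recorded in the run
      jq '.tasks | length' full-log.json
      # -> 0, since the log above contains no "tasks" array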

      posted in Backup
    • RE: mirror backup to S3

      Hi @florent, I've cleaned the backup data, added the correct retention, and now
      it's fine.
      (screenshot: 591a0ded-9ab4-40f4-803e-81ea50270e87-immagine.png)
      I'm lowering the NBD connections (from 4 to 1); the speed of "test backup con mirror" is too low.

      posted in Backup
    • RE: mirror backup to S3

      @florent Hi, I've adjusted the retention parameters and I'm waiting a few days of backup/mirror runs to check.

      posted in Backup
    • RE: mirror backup to S3

      @acebmxer Of course, this is only a test.
      The problem is not the scheduling but why the incremental sends all the data every time.

      posted in Backup
    • RE: mirror backup to S3

      @acebmxer [excuse my poor English!]
      I now have this situation:
      1 backup job to a NAS with two disabled schedules, one full and one delta
      1 full mirror backup to Wasabi (S3)
      1 incremental mirror backup

      I've set up two sequences:
      one starting on Sunday for the full backup (the sequence is full backup and then full mirror)
      one every 3 hours with delta backup and then incremental mirror

      The jobs start at the correct hour, but the incremental mirror sends the same data size every time..
      Backup to NAS:

      dns_interno1 (ctx1.tosnet.it)
      Transfer data using NBD
          Clean VM directory
          cleanVm: incorrect backup size in metadata
          Start: 2025-06-24 16:00
          End: 2025-06-24 16:00
          Snapshot
          Start: 2025-06-24 16:00
          End: 2025-06-24 16:00
          Backup XEN OLD
              transfer
              Start: 2025-06-24 16:00
              End: 2025-06-24 16:01
              Duration: a few seconds
              Size: 132 MiB
              Speed: 11.86 MiB/s
          Start: 2025-06-24 16:00
          End: 2025-06-24 16:01
          Duration: a minute
      
      Start: 2025-06-24 16:00
      End: 2025-06-24 16:01
      Duration: a minute
      Type: delta
      
       dns_interno1 (ctx1.tosnet.it)
          Wasabi
              transfer
              Start: 2025-06-24 16:02
              End: 2025-06-24 16:15
              Duration: 13 minutes
              Size: 25.03 GiB
              Speed: 34.14 MiB/s
              transfer
              Start: 2025-06-24 16:15
              End: 2025-06-24 16:15
              Duration: a few seconds
              Size: 394 MiB
              Speed: 22.49 MiB/s
          Start: 2025-06-24 16:02
          End: 2025-06-24 16:17
          Duration: 15 minutes
          Wasabi
          Start: 2025-06-24 16:15
          End: 2025-06-24 16:17
          Duration: 2 minutes
      
      Start: 2025-06-24 16:02
      End: 2025-06-24 16:17
      Duration: 15 minutes
      

      The job sends 25 GB to Wasabi every time, not just the incremental data.
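
      To see at a glance whether each run really resends the same amount, the transfer sizes can be pulled out of a few consecutive raw logs with jq (a sketch; mirror-run-*.json are hypothetical filenames for saved logs of this incremental mirror job):

      # Print run start time and total transferred bytes for each saved mirror log
      for f in mirror-run-*.json; do
          jq -r '[(.start / 1000 | floor | todate), ([.. | objects | select(.message? == "transfer") | .result.size] | add)] | @tsv' "$f"
      done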

      posted in Backup
    • error in xo task with sequence?

      (screenshot: 85c68ffa-9fd3-49ce-84e0-2eb9128babe3-immagine.png)

      Good morning, the sequence works fine, but I have a long list of tasks that are closed yet stuck at 50% (?)
      The raw log looks correct:

      {
        "id": "0mca491c8",
        "properties": {
          "name": "Schedule sequence",
          "userId": "c5ce5e50-29d9-4c00-84e8-402e1063a5c7",
          "type": "xo:schedule:sequence",
          "progress": 50
        },
        "start": 1750744800007,
        "status": "success",
        "updatedAt": 1750746259107,
        "end": 1750746259107
      }
      

      Is it only a UI problem?
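
      For reference, the fields in the record above can be summarised with a small jq sketch (task.json is a hypothetical filename for the exported task): the run itself ended with status "success"; only the progress property stays at 50.

      # Show the final status, the reported progress and the run duration in minutes
      jq '{status, progress: .properties.progress, duration_min: ((.end - .start) / 60000)}' task.json
      # -> status "success", progress 50, duration ~24.3 minutes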

      posted in Backup
    • RE: mirror backup to S3

      @acebmxer OK, so for full + delta I shouldn't use the old scheduling (one job with two schedules, one full and one delta), but must I separate the two jobs?

      posted in Backup
    • mirror backup to S3

      Good morning, I have some VMs (~30) in four logical groups.
      For every group I create a backup (one full weekly and 40 incrementals) and I want to mirror it to Wasabi S3 storage.
      How can I start the mirroring when one of the full/incremental backups ends?
      I don't want to start the mirror while a backup is still running!
      Thank you

      posted in Backup
    • RE: Short VM freeze when migrating to another host

      @olivierlambert Oops.. why is that the best topology?

      posted in Compute
    • RE: Short VM freeze when migrating to another host

      @olivierlambert Live migration; the VM is very important (today, during the Christmas holiday, I received some phone calls about the 7 minutes of freeze..)
      (screenshot: 17407a92-730e-4e68-885f-44a4141e863d-immagine.png)

      posted in Compute
    • RE: Short VM freeze when migrating to another host

      @olivierlambert Hi, today I upgraded my host..
      The big VM froze for ~7 minutes; it is a big VM (96 GB RAM and 32 CPUs), but 7 minutes is a very long time (for the customer!)
      I've set 96/06 in dynamic memory: is that a normal time?

      posted in Compute
    • xoa not show host patch?

      Good morning, today I see a strange thing in my pool:
      (screenshot: e07e1f27-058d-4038-b35f-a59cf9ecdef8-immagine.png)
      I have not updated ctx7..
      I logged into the hosts (ctx7 and ctx6 for comparison), ran a yum update, and I see the same 9 packages on both: why doesn't XOA see the patches for ctx7?
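
      In case it helps the comparison, the pending updates can be listed on each host without installing anything (standard yum on the host console, nothing XO-specific; the count is only a rough heuristic):

      # List pending packages on a host without installing them
      yum check-update
      # Rough count of pending packages, for a quick host-to-host comparison
      yum -q check-update | grep -c '^[[:alnum:]]'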

      posted in Xen Orchestra
    • RE: Clarification of "VM Limits" in XO

      @olivierlambert said in Clarification of "VM Limits" in XO:

      1. Static is the global range that can be modified only when the VM is halted. Dynamic is the range within which the VM memory can be changed while the VM is running. Obviously, the dynamic range is included inside the static one.

      Most of the time, except if you have a very good reason for it, do not use dynamic memory.

      OK.. but if I set the static memory limit to 1 GB-16 GB
      and the dynamic to 16 GB-16 GB, does XCP assign 16 GB to the VM?
      Is the static limit only a "barrier" for the dynamic one?
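
      If that is the intent (static as the outer bound, dynamic pinned at 16 GiB), the values have to respect static-min <= dynamic-min <= dynamic-max <= static-max, and they can be set together from the CLI. A sketch (the VM UUID is a placeholder; the sizes just mirror the example above):

      # Set all four memory limits in one call; the order constraint is
      # static-min <= dynamic-min <= dynamic-max <= static-max
      xe vm-memory-limits-set uuid=<vm-uuid> \
          static-min=1GiB dynamic-min=16GiB dynamic-max=16GiB static-max=16GiB

      With dynamic-min equal to dynamic-max, the VM effectively gets a fixed 16 GiB, which lines up with the advice above to avoid dynamic memory.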

      posted in Advanced features
    • RE: Short VM freeze when migrating to another host

      @nikade (screenshot: 2568c5bf-5336-4461-8f1f-60cf093f93a2-immagine.png)
      In the VM (Linux), running free I see 94 GB of total memory.

      posted in Compute
    • RE: update via yum or via xoa?

      @bleader said in update via yum or via xoa?:

      Yes, you're basically doing an RPU manually.
      But it is indeed odd that the process is stuck at 0%; installing the patches should be fairly fast. No errors in the logs?

      I've run another "install all patches" to install everything.
      Now I'll use a yum update and see if the speed is the same or not.

      posted in XCP-ng
    • RE: update via yum or via xoa?

      @bleader said in update via yum or via xoa?:

      It actually depends on whether you chose "Rolling pool update" or "Install all pool patches"; as you're talking about evacuating, I assume you went with the first one.

      Rolling pool update (RPU) is documented here and Pool updates here.

      But to sum it up, the "Install all pool patches" button will indeed run yum update on all servers, similar to doing it manually, while RPU will do hosts one by one: moving VMs to other hosts in the pool, installing updates and rebooting each host. Therefore it can take way longer to complete; the time will vary based on the number of VMs that have to be migrated around, network speed between hosts, etc.

      RPU is the recommended way, as it allows hosts to restart and therefore take hypervisor, microcode and dom0 kernel updates into account right away with no service interruption. But if you don't really mind shutting down some of the VMs to restart hosts, or if there are no low-level updates that require a reboot, you could get away with just the yum update manually. But if the RPU is already started, I would not advise trying to do things manually at the same time.

      OK, I've done an "install all patches" from the XOA host page; I want some control over moving VMs to other hosts.
      But it's basically the same thing (if I evacuate the host by hand, yum update, reboot the host), right?
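
      For comparison, the manual per-host flow being discussed would look roughly like this (a sketch; the host UUID is a placeholder, run on each host in turn starting with the master, and not while an RPU is already in progress):

      # Stop the host from accepting VMs, then move the running VMs elsewhere in the pool
      xe host-disable uuid=<host-uuid>
      xe host-evacuate uuid=<host-uuid>
      # Install the updates and reboot so kernel/microcode/hypervisor changes take effect
      yum update
      reboot
      # Once the host is back up, allow it to run VMs again
      xe host-enable uuid=<host-uuid>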

      posted in XCP-ng
    • update via yum or via xoa?

      Hi, I'm updating my pool; I evacuated the master, went to the patches, clicked install all patches..
      For now it is at 0% (11 minutes from the start).
      Why is it so slow?
      Is it the same if I go to the console and do a yum update on all servers?

      posted in XCP-ng