XCP-ng

    Posts by SylvainB

    • WORM Backups with XCP-ng / Xen Orchestra - Seeking Solutions & Experience

      Hello everyone,

      I'm exploring options for implementing WORM (Write Once, Read Many) capabilities for my backups within my XCP-ng environment, specifically using Xen Orchestra.

      My current setup:

      • XCP-ng Version: 8.3
      • Xen Orchestra Version: 5.106.4 (Stable)
      • Intended Backup Target: Synology NAS

      My primary goal is to ensure that my backup data, once written, becomes immutable for a defined retention period, offering protection against accidental deletion or ransomware attacks.
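
      To make it concrete, the kind of enforcement I have in mind on the storage side looks roughly like the sketch below (assuming the backup share is also mounted on a Linux host where chattr is available and the script runs as root; the mount point and retention window are just placeholders, not part of my current setup):

      import subprocess
      import time
      from pathlib import Path

      BACKUP_ROOT = Path("/mnt/backups")   # hypothetical mount point of the NAS share
      RETENTION_DAYS = 30                  # placeholder immutability window

      def seal_backups() -> None:
          """Mark backup files immutable (chattr +i) once written, and only lift
          the flag again after the retention window has elapsed."""
          now = time.time()
          for path in BACKUP_ROOT.rglob("*"):
              if not path.is_file():
                  continue
              age_days = (now - path.stat().st_mtime) / 86400
              if age_days >= RETENTION_DAYS:
                  # Past retention: allow normal pruning by the backup job again.
                  subprocess.run(["chattr", "-i", str(path)], check=True)
              elif age_days > 1 / 24:
                  # Older than ~1 hour, so presumably fully written: freeze it.
                  # Even root cannot modify or delete it without clearing the flag.
                  subprocess.run(["chattr", "+i", str(path)], check=True)

      if __name__ == "__main__":
          seal_backups()

      Ideally something like this would be enforced by the storage layer itself rather than by a script I have to trust, which is why I'm asking about native options first.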

      My questions are:

      1. Does Xen Orchestra offer any native WORM features or integrations that I might be overlooking for its backup jobs?
      2. If not directly, has anyone successfully implemented WORM backups in a similar environment (XCP-ng, Xen Orchestra, and potentially a Synology NAS or other storage solution)? I'm very interested in learning about your setup, the specific technologies you used (e.g., storage features, specific configurations), and any lessons learned or best practices.

      Any insights, architectural recommendations, or shared experiences would be highly valuable.

      Thank you in advance for your help!

      Best regards,

      SylvainB

      posted in Backup
    • RE: CBT: the thread to centralize your feedback

      @rtjdamen said in CBT: the thread to centralize your feedback:

      are u able to spin up a test xoa based on stable? Maybe u can check if it does work in that version?

      I'm already on the stable channel 😉

      posted in Backup
    • RE: CBT: the thread to centralize your feedback

      @olivierlambert @florent

      I still have the error, even after disabling CBT and purging the snapshot.

      "stream has ended with not enough data (actual: 446, expected: 512)"

      It's a production VM for a customer. What can I do quickly?

      Ticket #7729749

      Thanks!

      posted in Backup
    • RE: CBT: the thread to centralize your feedback

      @olivierlambert

      I can't test on the latest channel, because long backup jobs are running.

      posted in Backup
    • RE: CBT: the thread to centralize your feedback

      Hi,

      Same error here, current version: 5.98.1.

      I disabled the purge data option, but I still had the error.

      I opened support ticket #7729749.

      posted in Backup
    • Do I have to backup XOA VM ?

      Hello,

      A simple question: should I back up my XOA VM? If so, what are the recommendations?

      Thanks!

      posted in Backup
    • RE: CBT: the thread to centralize your feedback

      @olivierlambert Nice!

      Ticket #7727235

      Thanks!

      posted in Backup
    • RE: CBT: the thread to centralize your feedback

      Thanks @olivierlambert

      I'm available to install the patch on my installation if you want, and if it's possible.

      posted in Backup
    • RE: CBT: the thread to centralize your feedback

      In my case, even if I restart the failed jobs, the VDI_IN_USE error persists.

      posted in Backup
    • RE: CBT: the thread to centralize your feedback

      I followed the advice above and restarted all my hosts one by one, but I still have errors:

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1725368860234",
        "jobId": "9170c82b-ede4-491d-b792-f421b3e4b525",
        "jobName": "BACKUP_AND_REPLICATE_DEFAULT_BACKUP_JOB",
        "message": "backup",
        "scheduleId": "3e87844e-b48a-453f-87b4-e2a2cc3fa2c8",
        "start": 1725368860234,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "cc59f885-c138-675d-fc81-054713586bc1",
                "96cfde06-61c0-0f3e-cf6d-f637d41cc8c6",
                "e1da68e8-ef42-44a9-386b-aceb7a920463",
                "565c724d-020f-baa8-8e7d-cf54d8b57a28"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "cc59f885-c138-675d-fc81-054713586bc1",
              "name_label": "REC-APP-BCTI"
            },
            "id": "1725368863035",
            "message": "backup VM",
            "start": 1725368863035,
            "status": "failure",
            "tasks": [
              {
                "id": "1725368863044:0",
                "message": "clean-vm",
                "start": 1725368863044,
                "status": "success",
                "end": 1725368863493,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1725368863818",
                "message": "snapshot",
                "start": 1725368863818,
                "status": "success",
                "end": 1725368945306,
                "result": "7f7f7d83-4ac6-0cbd-0bd4-34f9fb94c4a5"
              },
              {
                "data": {
                  "id": "122ddf1f-090d-4c23-8c5e-fe095321f8b9",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1725368945306:0",
                "message": "export",
                "start": 1725368945306,
                "status": "success",
                "tasks": [
                  {
                    "id": "1725369159363",
                    "message": "clean-vm",
                    "start": 1725369159363,
                    "status": "success",
                    "end": 1725369159662,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1725369159662
              },
              {
                "data": {
                  "id": "beee944b-e502-61d7-e03b-e1408f01db8c",
                  "isFull": false,
                  "name_label": "iSCSI-STORE-CES-01_HDD-01",
                  "type": "SR"
                },
                "id": "1725368945306:1",
                "message": "export",
                "start": 1725368945306,
                "status": "interrupted"
              }
            ],
            "end": 1725369159662,
            "result": {
              "code": "VDI_IN_USE",
              "params": [
                "OpaqueRef:d1b7eaca-1bb8-41da-9a96-11103411b868",
                "destroy"
              ],
              "task": {
                "uuid": "e2318576-41e0-7c97-bef3-b66fb17dc12f",
                "name_label": "Async.VDI.destroy",
                "name_description": "",
                "allowed_operations": [],
                "current_operations": {},
                "created": "20240903T13:12:39Z",
                "finished": "20240903T13:12:39Z",
                "status": "failure",
                "resident_on": "OpaqueRef:c329c60f-e5d8-4797-9019-2dcb1083227c",
                "progress": 1,
                "type": "<none/>",
                "result": "",
                "error_info": [
                  "VDI_IN_USE",
                  "OpaqueRef:d1b7eaca-1bb8-41da-9a96-11103411b868",
                  "destroy"
                ],
                "other_config": {},
                "subtask_of": "OpaqueRef:NULL",
                "subtasks": [],
                "backtrace": "(((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 4711))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))"
              },
              "message": "VDI_IN_USE(OpaqueRef:d1b7eaca-1bb8-41da-9a96-11103411b868, destroy)",
              "name": "XapiError",
              "stack": "XapiError: VDI_IN_USE(OpaqueRef:d1b7eaca-1bb8-41da-9a96-11103411b868, destroy)\n    at XapiError.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)\n    at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)\n    at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1076:24)\n    at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1110:14\n    at Array.forEach (<anonymous>)\n    at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1100:12)\n    at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1273:14)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
            }
          },
          {
            "data": {
              "type": "VM",
              "id": "e1da68e8-ef42-44a9-386b-aceb7a920463",
              "name_label": "PROD-BDD-LIC01"
            },
            "id": "1725368863036",
            "message": "backup VM",
            "start": 1725368863036,
            "status": "failure",
            "tasks": [
              {
                "id": "1725368863043",
                "message": "clean-vm",
                "start": 1725368863043,
                "status": "success",
                "end": 1725368863472,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1725368863842",
                "message": "snapshot",
                "start": 1725368863842,
                "status": "success",
                "end": 1725368876837,
                "result": "882ebd40-e16a-e286-f0ac-580728a3ec19"
              },
              {
                "data": {
                  "id": "122ddf1f-090d-4c23-8c5e-fe095321f8b9",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1725368876838",
                "message": "export",
                "start": 1725368876838,
                "status": "success",
                "tasks": [
                  {
                    "id": "1725368983568",
                    "message": "clean-vm",
                    "start": 1725368983568,
                    "status": "success",
                    "end": 1725368983673,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1725368983675
              },
              {
                "data": {
                  "id": "beee944b-e502-61d7-e03b-e1408f01db8c",
                  "isFull": false,
                  "name_label": "iSCSI-STORE-CES-01_HDD-01",
                  "type": "SR"
                },
                "id": "1725368876839",
                "message": "export",
                "start": 1725368876839,
                "status": "interrupted"
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:f10a5b53-170b-4c2d-9975-d419197cbf2a"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1725368983676,
            "result": {
              "message": "can't create a stream from a metadata VDI, fall back to a base ",
              "name": "Error",
              "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:379:9)\n    at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
            }
          },
          {
            "data": {
              "type": "VM",
              "id": "565c724d-020f-baa8-8e7d-cf54d8b57a28",
              "name_label": "ADMIN-SEC-ADM01"
            },
            "id": "1725368863036:0",
            "message": "backup VM",
            "start": 1725368863036,
            "status": "failure",
            "tasks": [
              {
                "id": "1725368863044",
                "message": "clean-vm",
                "start": 1725368863044,
                "status": "success",
                "end": 1725368863755,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1725368864026",
                "message": "snapshot",
                "start": 1725368864026,
                "status": "success",
                "end": 1725369030903,
                "result": "3c37e2e9-9e08-6fa7-a1e4-c16d51b4106a"
              },
              {
                "data": {
                  "id": "122ddf1f-090d-4c23-8c5e-fe095321f8b9",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1725369030903:0",
                "message": "export",
                "start": 1725369030903,
                "status": "success",
                "tasks": [
                  {
                    "id": "1725369054746",
                    "message": "clean-vm",
                    "start": 1725369054746,
                    "status": "success",
                    "end": 1725369055027,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1725369055031
              },
              {
                "data": {
                  "id": "beee944b-e502-61d7-e03b-e1408f01db8c",
                  "isFull": false,
                  "name_label": "iSCSI-STORE-CES-01_HDD-01",
                  "type": "SR"
                },
                "id": "1725369030903:1",
                "message": "export",
                "start": 1725369030903,
                "status": "interrupted"
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:7a307227-d0bc-4810-92a8-6be38d0eecbb"
                },
                "message": "Snapshot data has been deleted"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:0048f302-86c2-4d69-81bc-feae8ef3cc15"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1725369055031,
            "result": {
              "message": "can't create a stream from a metadata VDI, fall back to a base ",
              "name": "Error",
              "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:379:9)\n    at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
            }
          },
          {
            "data": {
              "type": "VM",
              "id": "96cfde06-61c0-0f3e-cf6d-f637d41cc8c6",
              "name_label": "SRV-SQL"
            },
            "id": "1725368863038",
            "message": "backup VM",
            "start": 1725368863038,
            "status": "failure",
            "tasks": [
              {
                "id": "1725368863045",
                "message": "clean-vm",
                "start": 1725368863045,
                "status": "success",
                "end": 1725368863747,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1725368864026:0",
                "message": "snapshot",
                "start": 1725368864026,
                "status": "success",
                "end": 1725368911729,
                "result": "4db8f7fb-d7d0-cd20-24d1-9223180f0c09"
              },
              {
                "data": {
                  "id": "122ddf1f-090d-4c23-8c5e-fe095321f8b9",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1725368911729:0",
                "message": "export",
                "start": 1725368911729,
                "status": "success",
                "tasks": [
                  {
                    "id": "1725369104743",
                    "message": "clean-vm",
                    "start": 1725369104743,
                    "status": "success",
                    "end": 1725369105118,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1725369105118
              },
              {
                "data": {
                  "id": "beee944b-e502-61d7-e03b-e1408f01db8c",
                  "isFull": false,
                  "name_label": "iSCSI-STORE-CES-01_HDD-01",
                  "type": "SR"
                },
                "id": "1725368911729:1",
                "message": "export",
                "start": 1725368911729,
                "status": "interrupted"
              }
            ],
            "end": 1725369105118,
            "result": {
              "code": "VDI_IN_USE",
              "params": [
                "OpaqueRef:7b2ab98c-bfa5-4403-8890-73b88ce4b1dd",
                "destroy"
              ],
              "task": {
                "uuid": "d3978e94-c447-987c-c15d-21adcb7a8805",
                "name_label": "Async.VDI.destroy",
                "name_description": "",
                "allowed_operations": [],
                "current_operations": {},
                "created": "20240903T13:11:44Z",
                "finished": "20240903T13:11:44Z",
                "status": "failure",
                "resident_on": "OpaqueRef:c329c60f-e5d8-4797-9019-2dcb1083227c",
                "progress": 1,
                "type": "<none/>",
                "result": "",
                "error_info": [
                  "VDI_IN_USE",
                  "OpaqueRef:7b2ab98c-bfa5-4403-8890-73b88ce4b1dd",
                  "destroy"
                ],
                "other_config": {},
                "subtask_of": "OpaqueRef:NULL",
                "subtasks": [],
                "backtrace": "(((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 4711))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))"
              },
              "message": "VDI_IN_USE(OpaqueRef:7b2ab98c-bfa5-4403-8890-73b88ce4b1dd, destroy)",
              "name": "XapiError",
              "stack": "XapiError: VDI_IN_USE(OpaqueRef:7b2ab98c-bfa5-4403-8890-73b88ce4b1dd, destroy)\n    at XapiError.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)\n    at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)\n    at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1076:24)\n    at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1110:14\n    at Array.forEach (<anonymous>)\n    at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1100:12)\n    at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1273:14)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
            }
          }
        ],
        "end": 1725369159662
      }
      

      What do I have to do?
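
      For reference, here is the small script I used to pull the failing VMs and their error codes out of this log (just a sketch; it assumes the JSON above was saved as backup-log.json):

      import json

      # Assumes the backup job log pasted above was saved to backup-log.json.
      with open("backup-log.json") as f:
          log = json.load(f)

      print(f"Job {log['jobName']} finished with status: {log['status']}")

      for task in log.get("tasks", []):
          data = task.get("data", {})
          if data.get("type") != "VM":
              continue
          result = task.get("result", {})
          # XAPI failures carry a "code" (e.g. VDI_IN_USE); XO-level failures only a "message".
          error = result.get("code") or result.get("message", "unknown error")
          print(f"- {data.get('name_label')} ({data.get('id')}): {task['status']} -> {error}")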

      posted in Backup
    • RE: CBT: the thread to centralize your feedback

      Hi,

      Same here: I updated XOA to 5.98 and I get this error

      "can't create a stream from a metadata VDI, fall back to a base" on some VM

      I have an active support contract.

      Here is the detailed log:

      {
            "data": {
              "type": "VM",
              "id": "96cfde06-61c0-0f3e-cf6d-f637d41cc8c6",
              "name_label": "blabla_VM"
            },
            "id": "1725081943938",
            "message": "backup VM",
            "start": 1725081943938,
            "status": "failure",
            "tasks": [
              {
                "id": "1725081943938:0",
                "message": "clean-vm",
                "start": 1725081943938,
                "status": "success",
                "end": 1725081944676,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1725081944876",
                "message": "snapshot",
                "start": 1725081944876,
                "status": "success",
                "end": 1725081978972,
                "result": "46334bc0-cb3c-23f7-18e1-f25320a6c4b4"
              },
              {
                "data": {
                  "id": "122ddf1f-090d-4c23-8c5e-fe095321f8b9",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1725081978972:0",
                "message": "export",
                "start": 1725081978972,
                "status": "success",
                "tasks": [
                  {
                    "id": "1725082089246",
                    "message": "clean-vm",
                    "start": 1725082089246,
                    "status": "success",
                    "end": 1725082089709,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1725082089719
              },
              {
                "data": {
                  "id": "beee944b-e502-61d7-e03b-e1408f01db8c",
                  "isFull": false,
                  "name_label": "BLABLA_SR_HDD-01",
                  "type": "SR"
                },
                "id": "1725081978972:1",
                "message": "export",
                "start": 1725081978972,
                "status": "pending"
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:1b614f6b-0f69-47a1-a0cd-eee64007441d"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "warnings": [
              {
                "data": {
                  "error": {
                    "code": "VDI_IN_USE",
                    "params": [
                      "OpaqueRef:989f7dd8-0b73-4a87-b249-6cfc660a90bb",
                      "data_destroy"
                    ],
                    "call": {
                      "method": "VDI.data_destroy",
                      "params": [
                        "OpaqueRef:989f7dd8-0b73-4a87-b249-6cfc660a90bb"
                      ]
                    }
                  },
                  "vdiRef": "OpaqueRef:989f7dd8-0b73-4a87-b249-6cfc660a90bb"
                },
                "message": "Couldn't deleted snapshot data"
              }
            ],
            "end": 1725082089719,
            "result": {
              "message": "can't create a stream from a metadata VDI, fall back to a base ",
              "name": "Error",
              "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:379:9)\n    at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
            }
          },
      
      posted in Backup
    • RE: Delegating Operations in Xen Orchestra Without Granting Administrator Access

      Thanks,

      In a perfect world, self-service users should be able to manage ACLs on their VMs and to see the Backup tab on their VMs.

      posted in Management
    • Delegating Operations in Xen Orchestra Without Granting Administrator Access

      Hello, I need your help with Xen Orchestra. I have created a Self Service and associated a user group (GR_01). This user group independently manages resources (creation, deletion, etc.). The users of GR_01 want to delegate some operations (console access, reboot, etc.) to other users (GR_02).

      I do not want to define these users as administrators because my instance is shared.

      How can I do this?

      Thank you for your help!

      posted in Management
    • RE: First SMAPIv3 driver is available in preview

      @john-c You're right, thanks for the clarification.

      However, the lack of thin provisioning on iSCSI is a real blocker for me, and I'm sure I'm not alone 🙂

      Will SMAPIv3 enable thin provisioning on iSCSI SRs?

      posted in Development
    • RE: First SMAPIv3 driver is available in preview

      Hello @olivierlambert,

      I am joining this topic as I have a few questions about SMAPIv3:

      • Will it allow provisioning of VDIs larger than 2TB?

      • Will it enable thin provisioning on iSCSI SRs?

      Currently, the blockers I encounter are related to my iSCSI storage. This is a major differentiating factor compared to other vendors, and resolving these blockers would significantly increase your market share.

      Thanks!

      posted in Development
    • Need help for Replication/Backup Policy

      Hello,

      I would like to implement the following backup/replication policy:

      Replication of VMs to my second site every 6 hours

      Backup of VMs every night at 10 PM, with retention of backups as follows (see the sketch after this list):

      • Retention of the last 7 days

      • Retention of the last 4 weekends

      • Retention of the last 12 month-ends
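
      To be precise about the retention rules, the sketch below shows which nightly backups I would expect to be kept; it is only my own illustration (treating "weekends" as the Sunday backup), not a description of an existing XO feature.

      from datetime import date, timedelta

      def backups_to_keep(backup_dates: list[date], today: date) -> list[date]:
          """Nightly backups kept under the policy: the last 7 days,
          the last 4 Sunday backups, and the last 12 month-end backups."""
          daily = {d for d in backup_dates if 0 <= (today - d).days < 7}

          sundays = sorted((d for d in backup_dates if d.weekday() == 6), reverse=True)
          weekly = set(sundays[:4])

          month_ends = sorted(
              (d for d in backup_dates if (d + timedelta(days=1)).month != d.month),
              reverse=True,
          )
          monthly = set(month_ends[:12])

          return sorted(daily | weekly | monthly)

      if __name__ == "__main__":
          today = date(2024, 9, 3)
          history = [today - timedelta(days=i) for i in range(400)]  # one backup per night
          for d in backups_to_keep(history, today):
              print(d)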

      What do you recommend?

      Thank you for your help!

      posted in Backup
    • RE: XCPNG 8.3 availability?

      @olivierlambert Thanks! 👍

      posted in Development
    • RE: XCPNG 8.3 availability?

      @olivierlambert

      Thanks Olivier. If I accept the risk, how can I upgrade from 8.2 to the 8.3 beta?

      posted in Development
    • XCPNG 8.3 availability?

      Hello everyone,

      I am setting up a shared infrastructure for a client, and they are asking me to enable vTPM on the self-service. I read that this feature requires XCP-ng 8.3. When will XCP-ng 8.3 be publicly available? Is there an easy workaround in the meantime?

      Thank you!

      posted in Development
    • RE: Two company on single infrastructure

      @olivierlambert

      OK thanks!

      posted in Advanced features