XCP-ng

    • Too many snapshots
      Backup · 0 Votes · 40 Posts · 583 Views
      @Pilow [image] The number of snapshots shows 16, which makes sense as I have two backup schedules, one with a retention of 15 and one with a retention of 1. The daily backup with a retention of 1 resets the chain, as it is a full backup. [image]
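
      A quick way to sanity-check that count from dom0, if useful (a minimal sketch; the VM UUID is a placeholder and the expected total is simply the sum of the two retentions, 15 + 1 = 16):

        # List this VM's snapshots and count them; expect retention 15 + retention 1 = 16
        xe snapshot-list snapshot-of=<VM UUID> --minimal | tr ',' '\n' | grep -c .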

    • XOA - Memory Usage
      Xen Orchestra · 0 Votes · 27 Posts · 653 Views
      @florent was finally able to read the pull request, /clap! The fix seems totally legit and consistent with XOA RAM ramping up. When will this be officially published, so we can disable the daily reboot of XOA & XO proxies?

    • clean-vm (end) is stalling?
      Backup · 0 Votes · 15 Posts · 270 Views · last post by simonp
      @Pilow Thanks for the heads-up, you should be able to add back concurrency as it was before and get similar performance to before the refactoring.

    • When the XCP-ng host restarts, it comes back up running directly instead of in maintenance mode
      Compute · 0 Votes · 17 Posts · 501 Views
      perhaps "in the context of a proceeding RPU, do not start halted VMs" ? or "boot only halted VMs that have HA enabled" ? but I can imagine corner cases where this is not wanted. some chicken & egg problem.

    • XCP-ng 8.3 updates announcements and testing
      News · 1 Vote · 441 Posts · 183k Views · started by stormi
      @rzr (edit) After upgrading two main pools, I'm having CR delta backup issues. Everything was working before the XCP update, now every VM has the same error of Backup fell back to a full. Using XO master db9c4, but the same XO setup was working just fine before the XCP update. (edit 2) XO logs Apr 15 22:55:40 xo1 xo-server[1409]: 2026-04-16T02:55:40.613Z xo:backups:worker INFO starting backup Apr 15 22:55:42 xo1 xo-server[1409]: 2026-04-16T02:55:42.006Z xo:xapi:xapi-disks INFO export through vhd Apr 15 22:55:44 xo1 xo-server[1409]: 2026-04-16T02:55:44.093Z xo:xapi:vdi INFO OpaqueRef:07ac67ab-05cf-a066-5924-f28e15642d4e was already destroyed { Apr 15 22:55:44 xo1 xo-server[1409]: vdiRef: 'OpaqueRef:49ff18d3-5c18-176c-4930-0163c6727c2b', Apr 15 22:55:44 xo1 xo-server[1409]: vbdRef: 'OpaqueRef:07ac67ab-05cf-a066-5924-f28e15642d4e' Apr 15 22:55:44 xo1 xo-server[1409]: } Apr 15 22:55:44 xo1 xo-server[1409]: 2026-04-16T02:55:44.839Z xo:xapi:vdi INFO OpaqueRef:e5fa3d00-f629-6983-6ff2-841e9edacf82 has been disconnected from dom0 { Apr 15 22:55:44 xo1 xo-server[1409]: vdiRef: 'OpaqueRef:02f9ba92-1ee2-88eb-f660-a2cf3eeb287d', Apr 15 22:55:44 xo1 xo-server[1409]: vbdRef: 'OpaqueRef:e5fa3d00-f629-6983-6ff2-841e9edacf82' Apr 15 22:55:44 xo1 xo-server[1409]: } Apr 15 22:55:44 xo1 xo-server[1409]: 2026-04-16T02:55:44.910Z xo:xapi:vm WARN _assertHealthyVdiChain, could not fetch VDI { Apr 15 22:55:44 xo1 xo-server[1409]: error: XapiError: UUID_INVALID(VDI, 8f233bfc-9deb-4a06-aa07-0510de7496a1) Apr 15 22:55:44 xo1 xo-server[1409]: at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/_XapiError.mjs:16:12) Apr 15 22:55:44 xo1 xo-server[1409]: at file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/transports/json-rpc.mjs:38:21 Apr 15 22:55:44 xo1 xo-server[1409]: at process.processTicksAndRejections (node:internal/process/task_queues:104:5) { Apr 15 22:55:44 xo1 xo-server[1409]: code: 'UUID_INVALID', Apr 15 22:55:44 xo1 xo-server[1409]: params: [ 'VDI', '8f233bfc-9deb-4a06-aa07-0510de7496a1' ], Apr 15 22:55:44 xo1 xo-server[1409]: call: { duration: 3, method: 'VDI.get_by_uuid', params: [Array] }, Apr 15 22:55:44 xo1 xo-server[1409]: url: undefined, Apr 15 22:55:44 xo1 xo-server[1409]: task: undefined Apr 15 22:55:44 xo1 xo-server[1409]: } Apr 15 22:55:44 xo1 xo-server[1409]: } Apr 15 22:55:46 xo1 xo-server[1409]: 2026-04-16T02:55:46.732Z xo:xapi:xapi-disks INFO Error in openNbdCBT XapiError: SR_BACKEND_FAILURE_460(, Failed to calculate changed blocks for given VDIs. 
[opterr=Source and target VDI are unrelated], ) Apr 15 22:55:46 xo1 xo-server[1409]: at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/_XapiError.mjs:16:12) Apr 15 22:55:46 xo1 xo-server[1409]: at default (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/_getTaskResult.mjs:13:29) Apr 15 22:55:46 xo1 xo-server[1409]: at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1078:24) Apr 15 22:55:46 xo1 xo-server[1409]: at file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1112:14 Apr 15 22:55:46 xo1 xo-server[1409]: at Array.forEach (<anonymous>) Apr 15 22:55:46 xo1 xo-server[1409]: at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1102:12) Apr 15 22:55:46 xo1 xo-server[1409]: at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1275:14) Apr 15 22:55:46 xo1 xo-server[1409]: at process.processTicksAndRejections (node:internal/process/task_queues:104:5) { Apr 15 22:55:46 xo1 xo-server[1409]: code: 'SR_BACKEND_FAILURE_460', Apr 15 22:55:46 xo1 xo-server[1409]: params: [ Apr 15 22:55:46 xo1 xo-server[1409]: '', Apr 15 22:55:46 xo1 xo-server[1409]: 'Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated]', Apr 15 22:55:46 xo1 xo-server[1409]: '' Apr 15 22:55:46 xo1 xo-server[1409]: ], Apr 15 22:55:46 xo1 xo-server[1409]: call: undefined, Apr 15 22:55:46 xo1 xo-server[1409]: url: undefined, Apr 15 22:55:46 xo1 xo-server[1409]: task: task { Apr 15 22:55:46 xo1 xo-server[1409]: uuid: '8fae41b4-de82-789c-980a-5ff2d490d2d8', Apr 15 22:55:46 xo1 xo-server[1409]: name_label: 'Async.VDI.list_changed_blocks', Apr 15 22:55:46 xo1 xo-server[1409]: name_description: '', Apr 15 22:55:46 xo1 xo-server[1409]: allowed_operations: [], Apr 15 22:55:46 xo1 xo-server[1409]: current_operations: {}, Apr 15 22:55:46 xo1 xo-server[1409]: created: '20260416T02:55:46Z', Apr 15 22:55:46 xo1 xo-server[1409]: finished: '20260416T02:55:46Z', Apr 15 22:55:46 xo1 xo-server[1409]: status: 'failure', Apr 15 22:55:46 xo1 xo-server[1409]: resident_on: 'OpaqueRef:7b987b11-ada0-99ce-d831-6e589bf34b50', Apr 15 22:55:46 xo1 xo-server[1409]: progress: 1, Apr 15 22:55:46 xo1 xo-server[1409]: type: '<none/>', Apr 15 22:55:46 xo1 xo-server[1409]: result: '', Apr 15 22:55:46 xo1 xo-server[1409]: error_info: [ Apr 15 22:55:46 xo1 xo-server[1409]: 'SR_BACKEND_FAILURE_460', Apr 15 22:55:46 xo1 xo-server[1409]: '', Apr 15 22:55:46 xo1 xo-server[1409]: 'Failed to calculate changed blocks for given VDIs. 
[opterr=Source and target VDI are unrelated]', Apr 15 22:55:46 xo1 xo-server[1409]: '' Apr 15 22:55:46 xo1 xo-server[1409]: ], Apr 15 22:55:46 xo1 xo-server[1409]: other_config: {}, Apr 15 22:55:46 xo1 xo-server[1409]: subtask_of: 'OpaqueRef:NULL', Apr 15 22:55:46 xo1 xo-server[1409]: subtasks: [], Apr 15 22:55:46 xo1 xo-server[1409]: backtrace: '(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/storage_utils.ml)(line 150))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 141))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 228))((process xapi)(filename ocaml/xapi/rbac.ml)(line 238))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))' Apr 15 22:55:46 xo1 xo-server[1409]: } Apr 15 22:55:46 xo1 xo-server[1409]: } Apr 15 22:55:46 xo1 xo-server[1409]: 2026-04-16T02:55:46.735Z xo:xapi:xapi-disks INFO export through vhd Apr 15 22:55:48 xo1 xo-server[1409]: 2026-04-16T02:55:48.115Z xo:xapi:vdi WARN invalid HTTP header in response body { Apr 15 22:55:48 xo1 xo-server[1409]: body: 'HTTP/1.1 500 Internal Error\r\n' + Apr 15 22:55:48 xo1 xo-server[1409]: 'content-length: 318\r\n' + Apr 15 22:55:48 xo1 xo-server[1409]: 'content-type: text/html\r\n' + Apr 15 22:55:48 xo1 xo-server[1409]: 'connection: close\r\n' + Apr 15 22:55:48 xo1 xo-server[1409]: 'cache-control: no-cache, no-store\r\n' + Apr 15 22:55:48 xo1 xo-server[1409]: '\r\n' + Apr 15 22:55:48 xo1 xo-server[1409]: '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_INCOMPATIBLE_TYPE: [ OpaqueRef:3b37047e-11dd-f836-ebed-acfaff2072ac; CBT metadata ]</body></html>' Apr 15 22:55:48 xo1 xo-server[1409]: } Apr 15 22:55:48 xo1 xo-server[1409]: 2026-04-16T02:55:48.124Z xo:xapi:xapi-disks WARN can't compute delta OpaqueRef:e7de1446-34fd-1ae8-4680-351b1e72b2dd from OpaqueRef:3b37047e-11dd-f836-ebed-acfaff2072ac, fallBack to a full { Apr 15 22:55:48 xo1 xo-server[1409]: error: Error: invalid HTTP header in response body Apr 15 22:55:48 xo1 xo-server[1409]: at checkVdiExport (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/vdi.mjs:37:19) Apr 15 22:55:48 xo1 xo-server[1409]: at process.processTicksAndRejections (node:internal/process/task_queues:104:5) Apr 15 22:55:48 xo1 xo-server[1409]: at async Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/vdi.mjs:261:5) Apr 15 22:55:48 xo1 xo-server[1409]: at async #getExportStream (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:123:20) Apr 15 22:55:48 xo1 xo-server[1409]: at async XapiVhdStreamSource.init (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:135:23) Apr 15 22:55:48 xo1 xo-server[1409]: at async #openExportStream (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/Xapi.mjs:182:7) Apr 15 22:55:48 xo1 xo-server[1409]: at async #openNbdStream (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/Xapi.mjs:97:22) Apr 15 22:55:48 xo1 xo-server[1409]: at async XapiDiskSource.openSource 
(file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/Xapi.mjs:258:18) Apr 15 22:55:48 xo1 xo-server[1409]: at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41) Apr 15 22:55:48 xo1 xo-server[1409]: at async file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/backups/_incrementalVm.mjs:66:5 Apr 15 22:55:48 xo1 xo-server[1409]: } Apr 15 22:55:48 xo1 xo-server[1409]: 2026-04-16T02:55:48.126Z xo:xapi:xapi-disks INFO export through vhd Apr 15 22:56:24 xo1 xo-server[1409]: 2026-04-16T02:56:24.047Z xo:backups:worker INFO backup has ended Apr 15 22:56:24 xo1 xo-server[1409]: 2026-04-16T02:56:24.231Z xo:backups:worker INFO process will exit { Apr 15 22:56:24 xo1 xo-server[1409]: duration: 43618102, Apr 15 22:56:24 xo1 xo-server[1409]: exitCode: 0, Apr 15 22:56:24 xo1 xo-server[1409]: resourceUsage: { Apr 15 22:56:24 xo1 xo-server[1409]: userCPUTime: 45307253, Apr 15 22:56:24 xo1 xo-server[1409]: systemCPUTime: 6674413, Apr 15 22:56:24 xo1 xo-server[1409]: maxRSS: 30928, Apr 15 22:56:24 xo1 xo-server[1409]: sharedMemorySize: 0, Apr 15 22:56:24 xo1 xo-server[1409]: unsharedDataSize: 0, Apr 15 22:56:24 xo1 xo-server[1409]: unsharedStackSize: 0, Apr 15 22:56:24 xo1 xo-server[1409]: minorPageFault: 287968, Apr 15 22:56:24 xo1 xo-server[1409]: majorPageFault: 0, Apr 15 22:56:24 xo1 xo-server[1409]: swappedOut: 0, Apr 15 22:56:24 xo1 xo-server[1409]: fsRead: 0, Apr 15 22:56:24 xo1 xo-server[1409]: fsWrite: 0, Apr 15 22:56:24 xo1 xo-server[1409]: ipcSent: 0, Apr 15 22:56:24 xo1 xo-server[1409]: ipcReceived: 0, Apr 15 22:56:24 xo1 xo-server[1409]: signalsCount: 0, Apr 15 22:56:24 xo1 xo-server[1409]: voluntaryContextSwitches: 14665, Apr 15 22:56:24 xo1 xo-server[1409]: involuntaryContextSwitches: 962 Apr 15 22:56:24 xo1 xo-server[1409]: }, Apr 15 22:56:24 xo1 xo-server[1409]: summary: { duration: '44s', cpuUsage: '119%', memoryUsage: '30.2 MiB' } Apr 15 22:56:24 xo1 xo-server[1409]: }

    • Question about Continuous Replication / Backups always doing Full Backups
      Backup · 0 Votes · 16 Posts · 368 Views
      @tsukraw No worries! Just glad that we can all help each other out!

    • VM backup fails with INVALID_VALUE
      Backup · 0 Votes · 8 Posts · 130 Views · by burbilog
      main.xxx (azazel.xxx)
        Snapshot: Start 2026-04-10 00:03, End 2026-04-10 00:03
        Local storage (137.41 GiB free - thin) - legion.xxx
          transfer: Start 2026-04-10 00:03, End 2026-04-10 00:09, Duration 6 minutes, Size 17.08 GiB, Speed 47.42 MiB/s
          Start 2026-04-10 00:03, End 2026-04-10 00:09, Duration 6 minutes
        Start 2026-04-10 00:03, End 2026-04-10 00:09, Duration 6 minutes
        Type: full

    • GPU share to more Windows VMs on same XCP-ng node
      Hardware · 0 Votes · 6 Posts · 112 Views
      @Aleksander There are plenty of examples on this forum. We have used Nvidia T4, it works very well.

    • Question about a master node crash in a pool
      XO Lite · 0 Votes · 6 Posts · 67 Views
      Hi @olivierlambert and @pilow, thank you for your answers, it helps a lot. Regards, Olivier

    • 🛰️ XO 6: dedicated thread for all your feedback!
      Xen Orchestra · 7 Votes · 174 Posts · 21k Views · by olivierlambert
      Let me ping @Team-XO-Frontend

    • Build XCP-ng ISO - issue at create-installimg
      Development · 0 Votes · 3 Posts · 81 Views
      @poddingue thank you, that was it! I had the feeling that the issue was around the path with the 4 slashes but couldn't figure out why, what and where. So essentially, after setting the working directory to /tmp for my docker run, it worked. Here is the extract of the working build step for install.img:

        - name: Build install.img
          run: |
            XCPNG_VER="${{ github.event.inputs.xcpng_version }}"
            docker run --rm \
              --user root -w /tmp \
              -v "$(pwd)/create-install-image:/create-install-image:ro" \
              -v "/tmp/RPM-GPG-KEY-xcp-ng-ce:/etc/pki/rpm-gpg/RPM-GPG-KEY-xcp-ng-ce" \
              -v "$(pwd):/output" \
              xcp-ng-build-ready \
              bash -ce "
                rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-xcpng
                rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-xcp-ng-ce
                /create-install-image/scripts/create-installimg.sh \
                  --output /output/install-${XCPNG_VER}.img \
                  --define-repo base!https://updates.xcp-ng.org/8/${XCPNG_VER}/base \
                  --define-repo updates!https://updates.xcp-ng.org/8/${XCPNG_VER}/updates \
                  ${XCPNG_VER}
                echo 'install.img built'

      Regarding the output you wanted to see, here it is when it fails. First, the way I trigger the container, for context:

        sudo docker run --rm -it -v "$(pwd)/create-install-image:/create-install-image:ro" -v "$(pwd):/output" b292e8a21068 /bin/bash
        ./create-install-image/scripts/create-installimg.sh --output /output/instal.img 8.3

        -----Set REPOS-----
        --- PWD var and TMPDIR content----
        /
        total 20
        drwx------ 4 root root 4096 Apr 16 00:54 .
        drwxr-xr-x 1 root root 4096 Apr 16 00:54 ..
        drwx------ 2 root root 4096 Apr 16 00:54 rootfs-FJWbFM
        -rw------- 1 root root 295 Apr 16 00:54 yum-HRyIb1.conf
        drwx------ 2 root root 4096 Apr 16 00:54 yum-repos-1FbWwV.d
        --- ISSUE happens here *setup_yum_repos* ----
        CRITICAL:yum.cli:Config error: Error accessing file for config file:////tmpdir-sApL80/yum-HRyIb1.conf

      As soon as I move to a different directory other than the root /, this issue goes away. Now going through the ISO build. With kind regards.
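
      For readers skimming: the actual fix described above is simply giving the container a working directory other than /. A stripped-down sketch of that idea (image name and script path taken from the post above, everything else assumed, not a verified build command):

        # -w /tmp runs the script from /tmp instead of /, which the poster reports
        # avoids the file://// yum config path seen in the error above
        docker run --rm -w /tmp \
          -v "$(pwd)/create-install-image:/create-install-image:ro" \
          -v "$(pwd):/output" \
          xcp-ng-build-ready \
          /create-install-image/scripts/create-installimg.sh --output /output/install.img 8.3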

    • log_fs_usage / /var/log directory on pool master filling up constantly
      XCP-ng · 0 Votes · 20 Posts · 1k Views
      @denis.grilli The problem is not the performance of the scan... the problem is that this storage device only consists of block devices (disks) that should go into standby mode when not in use... but I think I've found a line of code that checks whether an SR's other-config contains auto-scan: false... I think...
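
      If that code path is indeed what the poster found, the flag would presumably be set per SR from dom0 along these lines (a hedged sketch based on the post's wording, not verified against the current sm/xapi code; the SR UUID is a placeholder):

        # Disable periodic auto-scan for one SR so its idle disks can spin down
        xe sr-param-set uuid=<SR UUID> other-config:auto-scan=false
        # Check the current value
        xe sr-param-get uuid=<SR UUID> param-name=other-config param-key=auto-scan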

    • Backup mail report says INTERRUPTED but it's not?
      Backup · 0 Votes · 122 Posts · 10k Views
      I updated to branch "mra-fix-rest-memory-leak". I will look at backup job results tomorrow and report back.

    • File-based restore is missing tons of files
      Backup · 0 Votes · 3 Posts · 80 Views · last post by Danp
      @archw Could the results you are observing be due to the fact that some of the files weren't modified in the specific delta backup that you selected as your restore point? For example: In another subdirectory there are supposed to be 7 files but in the selection window there are only two. When were these 5 "missing" files last modified?

    • Xen Orchestra 6.3.2 Random Replication Failure
      Backup · 0 Votes · 8 Posts · 192 Views · last post by florent
      @flakpyro that's good news (but at least one other user saw this). We are currently testing the branch to make sure the fix doesn't create other issues.

    • Just FYI: current update seems to break NUT dependencies
      XCP-ng · 0 Votes · 28 Posts · 1k Views · last post by rzr
      @cobordism said:
        yum update --disablerepo=* --enablerepo=xcp-ng-base,xcp-ng-updates
      It's currently in testing and will move to updates if everything (not only nut) is ok:
        yum install --disablerepo=* \
          --enablerepo=xcp-ng-base,xcp-ng-updates,xcp-ng-testing nut

    • XCP-ng 8.3: Broadcom BCM57414 `bnxt_en` Driver Fails to Probe on HPE DL380a Gen12
      Hardware · 2 Votes · 6 Posts · 255 Views · by maximsachs
      @yannsionneau Thanks for the update! We are eagerly awaiting your findings! Thanks for looking into it.

    • Timestamp lost in Continuous Replication
      Backup (Solved) · 0 Votes · 29 Posts · 1k Views · last post by olivierlambert
      Thank you for your feedback @kratos !

    • Warnings with Backups?
      Backup (tags: backup, backup failure) · 0 Votes · 5 Posts · 193 Views · started by TechGrips
      @TechGrips Sorry, there is no quick test to be sure the VM is not corrupted. The usual way would be to run a health check. We cannot be sure everything is OK, as it concerns multiple tars linked to each other. If it keeps warning you on the same backups, it may be due to a faulty parent; in that case you would need to create a new snapshot chain.

    • Second (and final) Release Candidate for QCOW2 image format support
      News · 5 Votes · 2 Posts · 215 Views · by stormi
      Here's a work-in-progress version of the FAQ that will go with the release.

      QCOW2 FAQ

      Q: How much free space do I need on my SR for large QCOW2 disks to support snapshots?
      A: Whether the SR is thin or thick provisioned, the answer is the same as for VHD. On a thin-provisioned SR it is almost free, just a bit of data for the metadata of a few new VDIs. On a thick-provisioned SR, you need the space for the base copy, the snapshot and the active disk.

      Q: Must I create new SRs to create large disks?
      A: No. Most existing SRs will support QCOW2. LinstorSR and SMBSR (for VDIs) do not support QCOW2.

      Q: Can we have multiple different types of VDIs (VHD and QCOW2) on the same SR?
      A: Yes, it's supported. Any existing SR (unless unsupported, e.g. linstor) will be able to create QCOW2 beside VHD after installing the new sm package.

      Q: What happens in live migration scenarios?
      A: preferred-image-formats on the PBD of the SR's master will choose the destination format in case of a migration:

        Source         | Destination: preferred-image-format VHD or no format specified | Destination: preferred-image-format qcow2
        qcow2 > 2 TiB  | X                                                               | qcow2
        qcow2 < 2 TiB  | vhd                                                             | qcow2
        vhd            | vhd                                                             | qcow2

      Q: Can we create QCOW2 VDIs from XO?
      A: XO hasn't yet added the possibility to choose the image format at VDI creation. But if you try to create a VDI bigger than 2 TiB on an SR without any preferred-image-formats configuration, or if preferred-image-formats contains QCOW2, it will create a QCOW2.

      Q: Can we change the cluster size?
      A: Yes. On a file-based SR, you can create a QCOW2 with a different cluster size with:

        qemu-img create -f qcow2 -o cluster_size=2M $(uuidgen).qcow2 10G
        xe sr-scan uuid=<SR UUID>   # to introduce it in the XAPI

      The qemu-img command will print the name; the VDI is the <VDI UUID>.qcow2 from that output. We have not exposed the cluster size in any API call, which would allow you to create these VDIs more easily.

      Q: Can you create an SR which only ever manages QCOW2 disks? How?
      A: Yes, you can, by setting the preferred-image-formats parameter to only qcow2.

      Q: Can you convert an existing SR so that it only manages QCOW2 disks? If so, and it had VHDs, what happens to them?
      A: You can modify an SR to manage QCOW2 by modifying the preferred-image-formats parameter of the PBD's device-config. Modifying the PBD requires deleting it and recreating it with the new parameter. This implies stopping access to all VDIs of the SR on the master (for a shared SR you can migrate all VMs with VDIs to other hosts in the pool and temporarily stop the PBD of the master to recreate it; the parameter only needs to be set on the PBD of the master). If the SR had VHDs, they will continue to exist and be usable but won't be automatically transformed into QCOW2.

      Q: Can I resize my VDI above 2 TiB?
      A: A disk in VHD format can't be resized above 2 TiB, and no automatic format change is implemented. It is technically possible to resize above 2 TiB following a migration that would have transferred the VDI to QCOW2.

      Q: Is there anything to do to enable the new feature?
      A: Installing the updated packages that support QCOW2 is enough to enable it (packages: xapi, sm, blktap). Creating a VDI bigger than 2 TiB in XO will then create a QCOW2 VDI instead of failing.

      Q: Can I create QCOW2 disks smaller than 2 TiB?
      A: Yes, but you need to create them manually while setting sm-config:image-format=qcow2, or configure preferred image formats on the SR.

      Q: Is QCOW2 the default format now? Is it the best practice?
      A: We kept VHD as the default format in order to limit the impact on production. In the future, QCOW2 will become the default image format for new disks, and VHD will be progressively deprecated.

      Q: What's the maximum disk size?
      A: The current limit is set to 16 TiB. It's not a technical limit; it's a limit that corresponds to what we tested. We will raise it progressively in the future. We'll be able to go up to 64 TiB before meeting a new technical limit related to live migration support, which we will address at that point. The theoretical maximum is even higher; we're not limited by the image format anymore.

      Q: Can I import my KVM QCOW2 disks into XCP-ng without modification?
      A: No. You can import them, but they need to be configured to boot with the right drivers, as in this documentation: https://docs.xcp-ng.org/installation/migrate-to-xcp-ng/#-from-kvm-libvirt (you can just skip the conversion-to-VHD step). So it should work, depending on your configuration.
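
      For the "create it manually while setting sm-config:image-format=qcow2" answer above, a minimal sketch of what that could look like from dom0 (hedged: the key/value is taken from the FAQ wording, the UUIDs and size are placeholders, so double-check against the release notes before relying on it):

        # Create a 500 GiB VDI and ask the SM backend for the QCOW2 image format
        xe vdi-create sr-uuid=<SR UUID> name-label="data-qcow2" \
          virtual-size=500GiB type=user sm-config:image-format=qcow2
        # Verify the format recorded for the new VDI
        xe vdi-param-get uuid=<VDI UUID> param-name=sm-config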