OVA Export Timeout...
-
@chelle_belle I have had (and reported) similar issues. See post #5871
I'm on XO from source (current master) with XCP 8.2.1 and I have most of the same issues, except I do get a valid download that finishes.
How large is your VM that you are trying to export?
For me:
- the export starts and I get a small download (4-5KB), then it pauses
- export task continues to run until 100%
- the rest of the download continues slowly
- I get a complete download after a while
- a task gets stuck and I have to restart the toolstack (on the master host) to make it go away.
-
@Andrew - Thanks for your response. Although the outcome is slightly different, the symptoms look very similar; perhaps your experience was like my "fluke". Citrix XenCenter reports the disk is 32GB in size, which I assumed meant it was thinly provisioned. However, an OVF export indicates the corresponding VHD is 32GB, so it looks like disks are not thinly provisioned by default in Citrix Hypervisor 8.x.x.
-
32GiB isn't that big to export
@florent any idea what could go wrong?
edit: at some point, it would be interesting to get your VM in XVA format so we can import it in our lab, try to export it to OVA, and see if we can reproduce.
-
@olivierlambert - okay, one thing I haven't tried is an XVA export via XOA. If that fails, I can export to some other format, zip it up, and ping it across...
-
@chelle_belle - So I tried an XVA export from XOA and that works like a dream: the transfer starts immediately and I can see the file grow right away...
-
@florent will come here to ask some questions
-
@chelle_belle @florent I have the same result. XVA export works great.
-
@chelle_belle so the OVA is a tar archive containing several files: one ~4KB OVF file with the metadata and one VMDK file per disk with the data.
Since it's a tar, we need to know the size of each file up front to be able to stream it.
But the most widely supported VMDK subformat is stream-optimized, which mandates compressing each 64KB chunk of the disk. Since we can't predict the compressed size of the VHD once transformed to VMDK, we have to transfer the disk, compress it to VMDK, and keep track of the compressed size, then start again, this time actually sending the data.
Since we can't store the VMDK on the xo-server disks, we need to transfer and transform the disk to VMDK twice, and the transform process is quite slow.
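Roughly, the two passes look like this (a simplified sketch with hypothetical helper names, not the actual xo-server code):

```ts
import { deflateRawSync } from 'node:zlib'

const GRAIN_SIZE = 64 * 1024 // stream-optimized VMDK compresses the disk in 64KB grains

// Pass 1: transform the whole disk once, only to learn the compressed size,
// because the tar header of the VMDK entry must state its exact length before
// any of its bytes are written. readGrain() is a hypothetical helper reading
// grain i (GRAIN_SIZE bytes) of the source VHD; deflate is used here purely
// for illustration.
async function measureCompressedVmdkSize(
  readGrain: (i: number) => Promise<Buffer>,
  grainCount: number
): Promise<number> {
  let total = 0
  for (let i = 0; i < grainCount; i++) {
    // per-grain markers and VMDK headers add a little more; omitted for brevity
    total += deflateRawSync(await readGrain(i)).length
  }
  return total
}

// Pass 2: read and compress every grain again, this time streaming the bytes
// into the tar entry whose size was computed above. Nothing is cached on the
// xo-server disks, which is why the disk is transferred twice.
```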
As I write this, I am wondering if I can create a less efficient vmdk with empty spaces at the end so that I can do it in one pass.
I will test it and let you know the result
-
@chelle_belle can you test this branch: fix_ova_speed
There will still be a pause after the first 5KB and after each disk, but it should speed things up. I will look for other performance optimizations during this sprint.
-
@florent That branch is a LOT faster for OVA export but does not compress the data as much.
| Format | Time | Size |
| --- | --- | --- |
| XVA / NoComp | 40 seconds to complete | 1760MB |
| XVA / Zstd | 30 seconds to complete | 560MB |
| XVA / gzip | 90 seconds to complete | 586MB |
| New OVA | 40 seconds to start, 60 seconds to download | 1570MB |
| Old OVA | 6 minutes to start, 6 minutes to download | 597MB |
It still leaves a zombie task....
[XO] VM OVA export (TestSmall7-64 on xcp1) 0%
-
Thanks for your feedback as usual, @Andrew
-
Ah I see, @florent just disabled compression entirely to see the difference, so maybe setting the compression level to 1 would be better while keeping most of that speed.
-
@olivierlambert I see that now...
I guess the best choice would be to have an option during export like with XVA: Quick-NoComp OR Slow-Compression. It could also be applied to the export disk function.
I tried level 1 but it did not seem to change the size or the time. Maybe I did not change it correctly.
I guess technically the no-compression option could stream the data without the pre-calculation pass, since that work only exists to measure the compressed size and is no longer needed.
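Something like this is the idea (hypothetical names, not existing XO options):

```ts
// Hypothetical option shape, mirroring the XVA export choices.
type OvaCompression = 'none' | 'fast' | 'max'

// With 'none' the payload size is already known (the raw disk size), so the
// tar header can be written immediately and the data streamed in one pass.
// With any compression level, the pre-calculation pass is still needed to
// learn the compressed size first.
function needsPreCalculationPass(compression: OvaCompression): boolean {
  return compression !== 'none'
}
```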
-
Maybe try 6? Don't forget to rebuild after that
-
@olivierlambert @florent My mistake.... I did not rebuild XO correctly.
Using level 1, it takes about 1 minute to start and about 1 minute to download, and the result was 637MB. That seems to be a reasonable tradeoff: a significant increase in speed over level 9 for a minimal decrease in compression.
So level 0 (no compression) does not actually help unless the code is modified to just stream the data without the pre-calculation pass, which would cut the export time in half (no scanning/calculation needed).
Level 1 is reasonable if the current procedure is maintained. Level 9 should be optional, not forced; it is very slow.... If compression is left in place then it would be nice to have some options: 1..3..6..9, or some names (min=1, fast=3, standard=6, max=9).
The quickest change is to just make it level 1. That still offers compression and still takes time, but it is MUCH faster, and I think that's what most people want: speed...speed...speed...
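To make the level tradeoff concrete, a small standalone snippet like this (illustration only, not xo-server code; the sample data is made up) shows how the zlib level option trades time for size:

```ts
import { deflateSync, constants } from 'node:zlib'
import { randomBytes } from 'node:crypto'

// ~1MB of semi-compressible sample data: a few random bytes per block, the
// rest zeros, standing in for a sparse disk image.
const block = () => randomBytes(4096).fill(0, 64)
const sample = Buffer.concat(Array.from({ length: 256 }, block))

// Levels matching the min/fast/standard/max idea above (1, 3, 6, 9).
for (const level of [constants.Z_BEST_SPEED, 3, 6, constants.Z_BEST_COMPRESSION]) {
  const start = process.hrtime.bigint()
  const out = deflateSync(sample, { level })
  const ms = Number(process.hrtime.bigint() - start) / 1e6
  console.log(`level ${level}: ${out.length} bytes in ${ms.toFixed(1)} ms`)
}
```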
-
Sounds like a good modification to do in the short term until we make it editable, indeed.
@chelle_belle if you need assistance configuring it this way on your XOA, let me know
-
@olivierlambert - I'm struggling to switch to the channel and do the update. When I go to set it I get this error message:
06/06/2022, 17:07:33: Start updating...
06/06/2022, 17:07:33: No manifest found for fix_ova_speed channel.
I assumed I had to switch to "Unlisted Channel" and then input fix_ova_speed as the unlisted channel name...
-
@chelle_belle a release channel is not the same thing as a branch on GitHub; a branch can only be used if you installed XO from the sources (https://xen-orchestra.com/docs/installation.html#from-the-sources).
In theory, we could build packages to match such a release channel, but it's not trivial on our side. It might be easier to just install XO from the sources and use the fix_ova_speed branch instead of master. If you need assistance doing that, I'm here.
-
@chelle_belle If you can open a support tunnel I can deploy the fix today (with a more reasonable compression level of 1, thanks @Andrew).
Can you open a support ticket with the tunnel information?
-
@florent @julien-f Trying the new OVA export (XO source commit c11e0)... I exported a VM quickly and without a zombie task left over (good), but when I tried to import it, clicking import gave no error and did nothing (it failed without notice). The logs do show an error:
HTTP handler of vm.import undefined { "code": "INTERNAL_ERROR", "params": [ "(Failure \"Expected string, got 'N'\")" ], "call": { "method": "VM.create", "params": [ { "actions_after_crash": "restart", "actions_after_reboot": "restart", "actions_after_shutdown": "destroy", "affinity": null, "HVM_boot_params": { "order": "cdn" }, "HVM_boot_policy": "BIOS order", "is_a_template": false, "memory_dynamic_max": 4294967296, "memory_dynamic_min": 4294967296, "memory_static_max": 4294967296, "memory_static_min": 4294967296, "other_config": { "vgpu_pci": "", "base_template_name": "Other install media", "install-methods": "cdrom" }, "PCI_bus": "", "platform": { "timeoffset": "0", "nx": "true", "acpi": "1", "apic": "true", "pae": "true", "hpet": "true", "viridian": "true" }, "PV_args": "", "PV_bootloader_args": "", "PV_bootloader": "", "PV_kernel": "", "PV_legacy_args": "", "PV_ramdisk": "", "recommendations": "<restrictions><restriction field=\"memory-static-max\" max=\"137438953472\" /><restriction field=\"vcpus-max\" max=\"32\" /><restriction property=\"number-of-vbds\" max=\"255\" /><restriction property=\"number-of-vifs\" max=\"7\" /><restriction field=\"has-vendor-device\" value=\"false\" /></restrictions>", "user_version": 1, "VCPUs_at_startup": 2, "VCPUs_max": 2, "VCPUs_params": {}, "blocked_operations": {}, "has_vendor_device": false, "HVM_shadow_multiplier": 1, "name_description": "", "name_label": "NginX Test (ova)", "order": 0, "shutdown_delay": 0, "start_delay": 0, "version": 0 } ] }, "message": "INTERNAL_ERROR((Failure \"Expected string, got 'N'\"))", "name": "XapiError", "stack": "XapiError: INTERNAL_ERROR((Failure \"Expected string, got 'N'\")) at Function.wrap (/opt/xo/xo-builds/xen-orchestra-202206091200/packages/xen-api/src/_XapiError.js:16:12) at /opt/xo/xo-builds/xen-orchestra-202206091200/packages/xen-api/src/transports/json-rpc.js:37:27 at AsyncResource.runInAsyncScope (node:async_hooks:202:9) at cb (/opt/xo/xo-builds/xen-orchestra-202206091200/node_modules/bluebird/js/release/util.js:355:42) at tryCatcher (/opt/xo/xo-builds/xen-orchestra-202206091200/node_modules/bluebird/js/release/util.js:16:23) at Promise._settlePromiseFromHandler (/opt/xo/xo-builds/xen-orchestra-202206091200/node_modules/bluebird/js/release/promise.js:547:31) at Promise._settlePromise (/opt/xo/xo-builds/xen-orchestra-202206091200/node_modules/bluebird/js/release/promise.js:604:18) at Promise._settlePromise0 (/opt/xo/xo-builds/xen-orchestra-202206091200/node_modules/bluebird/js/release/promise.js:649:10) at Promise._settlePromises (/opt/xo/xo-builds/xen-orchestra-202206091200/node_modules/bluebird/js/release/promise.js:729:18) at _drainQueueStep (/opt/xo/xo-builds/xen-orchestra-202206091200/node_modules/bluebird/js/release/async.js:93:12) at _drainQueue (/opt/xo/xo-builds/xen-orchestra-202206091200/node_modules/bluebird/js/release/async.js:86:9) at Async._drainQueues (/opt/xo/xo-builds/xen-orchestra-202206091200/node_modules/bluebird/js/release/async.js:102:5) at Immediate.Async.drainQueues [as _onImmediate] (/opt/xo/xo-builds/xen-orchestra-202206091200/node_modules/bluebird/js/release/async.js:15:14) at processImmediate (node:internal/timers:466:21) at process.callbackTrampoline (node:internal/async_hooks:130:17)" }