cloud-init disk not created when cloning a cloud-init ready template
-
Hi all
I'm doing some testing with an installation of XCP-ng managed from an XOA appliance built from source.
It's all good, I can do everything except create a cloud-init-powered VM.
This is what I've done:
- Downloaded Debian 11 latest daily image, nocloud variant: https://cloud.debian.org/images/cloud/bullseye/daily/20210909-760/debian-11-nocloud-amd64-daily-20210909-760.qcow2
- Converted to VHD
- Imported in my SR
- Created a blank VM and used the imported disk as the OS disk
- Converted the VM to a template
- Created a simple cloud-init config file (a minimal example is sketched after this list)
- Created a new VM from the previous template and the config file (commenting out the network config placeholder or not makes no difference)
- Booted the VM
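In case it's useful to reproduce, something like the following should give the same starting point (file names, UUIDs and sizes are placeholders, and the import can just as well be done from the XO web UI):

# convert the qcow2 cloud image to a dynamic VHD
qemu-img convert -O vpc -o subformat=dynamic debian-11-nocloud-amd64-daily-20210909-760.qcow2 debian-11-nocloud.vhd

# one possible CLI path: create an empty VDI on the target SR, then import the VHD into it
xe vdi-create sr-uuid=<sr-uuid> name-label="debian-11-nocloud" type=user virtual-size=<size-in-bytes>
xe vdi-import uuid=<new-vdi-uuid> filename=debian-11-nocloud.vhd format=vhd

The cloud-init config itself is nothing special, just something minimal along these lines (hostname, user and key are placeholders):

#cloud-config
hostname: deb11-test
users:
  - name: debian
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... me@laptop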
The result is that the VM boots, but cloud-init does not find any config drive and it goes straight to the login screen (making it unusable since there is no configuration).
I've noticed that every time I clone the template there is a task that remains stuck at 0%:
[XO] VDI Content Import (XO CloudConfigDrive on node02) 0%
The only way to stop this task is to restart the toolstack
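(For the record, this is roughly what I poke at from dom0; I'm not sure the stuck task even shows up on the XAPI side, and only the toolstack restart actually clears it for me. The task UUID is whatever task-list reports.)

xe task-list                       # check whether a matching pending task exists on the host
xe task-cancel uuid=<task-uuid>    # worth trying if one is listed
xe-toolstack-restart               # this is what actually clears it for me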
Just to be sure, I booted one of the failed attempts with a Parted live ISO, and I can confirm that the config disk is not even formatted.
Any help on this? I've found nothing in the logs (and I've been reading extensively through the official documentation).
Thanks
-
Is your host behind a NAT?
-
@olivierlambert Yes, it's behind a router: a normal, plain /24 network with NAT to the Internet.
-
Okay, that's why. I suppose XO is trying to access the host IP address (the host.address field), and it's likely returning the private IPv4, so XO can't connect to it.
In your host/pool advanced tab, do you have a default migration network set?
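Also, you can double-check what XAPI reports in the address field directly from the host:

xe host-list params=name-label,address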
-
@olivierlambert sorry, maybe my explanation was too light on details!
The XO appliance and the XCP-ng host are both on the same network, since the appliance is a VM on the host:
xoa: 192.168.10.66
xcp-ng: 192.168.10.241
Maybe that's what you supposed anyway.
The default migration network was not set; I've now set it to the default "Pool-wide network associated with eth0".
No change, though.
-
I've found this issue on GitHub:
https://github.com/vatesfr/xen-orchestra/issues/5896
It's pretty much the same problem.
This is the xo-server log:
2021-09-10T14:03:20.653Z xo:xapi WARN importVdiContent: {
  error: Error: connect ECONNREFUSED 127.0.0.1:80
      at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1148:16)
      at TCPConnectWrap.callbackTrampoline (internal/async_hooks.js:131:17) {
    errno: -111, code: 'ECONNREFUSED', syscall: 'connect', address: '127.0.0.1', port: 80, url: 'http://localhost/',
    pool_master: host { uuid: 'b6213bdb-7eb4-4ca4-91e4-795bd09328c5', name_label: 'node02', name_description: 'Default install', memory_overhead: 730824704, allowed_operations: [Array], current_operations: {}, API_version_major: 2, API_version_minor: 16, API_version_vendor: 'XenSource', API_version_vendor_implementation: {}, enabled: true, software_version: [Object], other_config: [Object], capabilities: [Array], cpu_configuration: {}, sched_policy: 'credit', supported_bootloaders: [Array], resident_VMs: [Array], logging: {}, PIFs: [Array], suspend_image_sr: 'OpaqueRef:2f6820bd-028f-4ea0-8373-99a2de567a0c', crash_dump_sr: 'OpaqueRef:2f6820bd-028f-4ea0-8373-99a2de567a0c', crashdumps: [], patches: [], updates: [], PBDs: [Array], host_CPUs: [Array], cpu_info: [Object], hostname: 'node02', address: '192.168.10.241', metrics: 'OpaqueRef:5e566621-a235-4450-9e86-d4818e878d17', license_params: [Object], ha_statefiles: [], ha_network_peers: [], blobs: {}, tags: [], external_auth_type: '', external_auth_service_name: '', external_auth_configuration: {}, edition: 'xcp-ng', license_server: [Object], bios_strings: [Object], power_on_mode: '', power_on_config: {}, local_cache_sr: 'OpaqueRef:NULL', chipset_info: [Object], PCIs: [Array], PGPUs: [Array], PUSBs: [Array], ssl_legacy: false, guest_VCPUs_params: {}, display: 'enabled', virtual_hardware_platform_versions: [Array], control_domain: 'OpaqueRef:9084856c-0678-48ca-a01b-b2cbd8bb5f5b', updates_requiring_reboot: [], features: [], iscsi_iqn: 'iqn.2021-09.mlan:80327267', multipathing: false, uefi_certificates: '', certificates: [], editions: [Array] },
    SR: SR { uuid: '3a66d84f-d28a-ebfc-ced0-35d6f62c1a7f', name_label: 'nvme', name_description: 'nvme pool', allowed_operations: [Array], current_operations: {}, VDIs: [Array], PBDs: [Array], virtual_allocation: 35443965952, physical_utilisation: 35546726400, physical_size: 256045481984, type: 'lvm', content_type: 'user', shared: false, other_config: {}, tags: [], sm_config: [Object], blobs: {}, local_cache_enabled: false, introduced_by: 'OpaqueRef:NULL', clustered: false, is_tools_sr: false },
    VDI: VDI { uuid: 'f4f0801d-6d14-4677-abc1-274ef19605a5', name_label: 'XO CloudConfigDrive', name_description: '', allowed_operations: [Array], current_operations: {}, SR: 'OpaqueRef:cefff33f-8dff-48fb-93d0-3c5b2c0dd34e', VBDs: [], crash_dumps: [], virtual_size: 10485760, physical_utilisation: 16777216, type: 'user', sharable: false, read_only: false, other_config: {}, storage_lock: false, location: 'f4f0801d-6d14-4677-abc1-274ef19605a5', managed: true, missing: false, parent: 'OpaqueRef:NULL', xenstore_data: {}, sm_config: [Object], is_a_snapshot: false, snapshot_of: 'OpaqueRef:NULL', snapshots: [], snapshot_time: '19700101T00:00:00Z', tags: [], allow_caching: false, on_boot: 'persist', metadata_of_pool: '', metadata_latest: false, is_tools_iso: false, cbt_enabled: false }
  }
}
-
Oh okay, so it's not a NAT issue then (I thought XOA and the host were not on the same network).
It might be that issue then; please comment on the GH issue you posted to report that you have the same problem.
edit: with your log too!