Posts made by bullerwins
-
RE: Intel iGPU passthrough
@xerxist have you tried with the 8.3 beta of XCP-ng? I believe it may have a newer kernel.
-
RE: Intel iGPU passthrough
@xerxist my Ubuntu 22.04 install came with kernel 5.15. I have it updated regularly, but it doesn't seem to update the kernel; newer fresh installs of Ubuntu 22.04 do come with a newer kernel. I'll check whether the kernel needs to be updated manually.
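In case it helps anyone reading later, a minimal sketch of how I'd check and bump the kernel on a 22.04 install using Ubuntu's stock HWE stack (plain Ubuntu tooling, nothing XCP-ng specific; adapt to your own setup):
uname -r    # show the running kernel
sudo apt update
sudo apt install --install-recommends linux-generic-hwe-22.04    # hardware-enablement kernel
sudo reboot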
-
RE: Intel iGPU passthrough
@xerxist in BIOS mode; I would say it was the default for my Ubuntu VM.
-
RE: Intel iGPU passthrough
@xerxist are you using Plex in Docker or a native install?
-
RE: Intel iGPU passthrough
@adriangabura Gigabyte B760M Gaming X / DDR4 / MicroATX
-
RE: Intel iGPU passthrough
I got it working!
Pic while transcoding with Plex:
Plex info:
Detected:
For future reference, if anyone finds this with Google:
I had to run "sudo chmod -R 777 /dev/dri" inside the Ubuntu VM, otherwise it didn't work.
I'm using the binhex Plex Docker image, so I have to add the device to the container. It's really easy with Portainer:
The "/dev/dri/" part is the important one.
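For anyone doing the same from the CLI instead of Portainer, a rough equivalent could look like this (image name, paths and options are only illustrative, check the binhex docs for the real ones; the part that matters is passing /dev/dri through):
# --device /dev/dri:/dev/dri exposes the iGPU render nodes to the container
docker run -d \
  --name=plex \
  --net=host \
  --device /dev/dri:/dev/dri \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  binhex/arch-plex
As a less heavy-handed alternative to chmod 777, adding the container to the host's render/video groups (docker's --group-add flag with the render group GID) should also work.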
PS: @olivierlambert there is a typo in the XCP-ng docs; a space is missing in step 5:
https://docs.xcp-ng.org/compute/#5-put-this-pci-device-into-your-vm
This copy/pastes as:
xe vm-param-set other-config:pci=0/0000:04:01.0uuid=<vm uuid>
It should be:
xe vm-param-set other-config:pci=0/0000:04:01.0 uuid=<vm uuid>
-
RE: Intel iGPU passthrough
@bullerwins Update: the VM recognized the Intel GPU:
intel_gpu_top
lspci -k | grep -EA2 'VGA|3D'
I will test with Plex Encoding
-
RE: Intel iGPU passthrough
@olivierlambert I tried, but I'm getting this error when turning on the VM:
INTERNAL_ERROR(xenopsd internal error: (Failure
"Error from xenguesthelper: Populate on Demand and PCI Passthrough are mutually exclusive"))
Not sure what it means.
EDIT: after googling it, it seems that static and dynamic memory have to be the same:
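In case someone else hits the same error, a sketch of how to pin static and dynamic memory to the same value from the CLI (4GiB is only an example value; the same can be done from the XO UI by setting the memory min/max equal):
xe vm-memory-limits-set uuid=<vm uuid> static-min=4GiB static-max=4GiB dynamic-min=4GiB dynamic-max=4GiB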
-
RE: Intel iGPU passthrough
@olivierlambert I'm not sure, as it's not like a discrete GPU in a dedicated PCIe slot; it's integrated into the CPU. The model I'm testing in particular is an Intel Core i5-12400.
This is the lspci output:
It looks like there is a VGA there. Should I try this guide? https://docs.xcp-ng.org/compute/
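For reference, the gist of that guide, from memory (the linked doc is the authoritative source, and 0000:00:02.0 is only where Intel iGPUs usually sit; use the address from your own lspci output):
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:00:02.0)"    # hide the device from dom0, then reboot the host
xe vm-param-set other-config:pci=0/0000:00:02.0 uuid=<vm uuid>    # attach it to the VM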
-
Intel iGPU passthrough
Hi!
I've searched this topic but only found old threads without any clear solution, or ones using the Windows Center tool, which is no longer supported.
Is there any guide on how to pass the Intel iGPU to a VM in order to use it, for example, for Plex encoding? CLI is fine.
-
RE: XOA Recipe for Kubernetes cluster requirements?
@olivierlambert
I declared the minimum just in case:
I rebooted the server and this time I didn't get any error (it seems there was a task stuck from previous attempts).
It has created the master node, but no worker node.
I didn't get a green confirmation message; the "create" button is stuck spinning.
It's been like this for 15min.
This is the console output for the master node:
I'm not sure where to look to see if there is any progress creating the worker node, or if it has failed somewhere.
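In case it's useful, this is what I'd check on the master node (assuming the recipe provisions through cloud-init and installs kubectl there, which are assumptions on my part):
sudo tail -f /var/log/cloud-init-output.log    # provisioning progress and errors
kubectl get nodes -o wide    # shows whether the worker ever joined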
Thanks a lot for the help!
-
XOA Recipe for Kubernetes cluster requirements?
Hi!
I'm trying to deploy a k8s cluster but I'm getting a "500 internal error"; this is the log. Is there maybe a minimum free RAM/disk requirement that I'm not meeting? Is there somewhere I can check the recipe config file, or something similar?
The target SR free space is 350 GB.
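For reference, the free space figure can be checked from the CLI like this (the SR UUID is the one that appears in the log below; free space being physical-size minus physical-utilisation):
xe sr-list uuid=<sr uuid> params=name-label,physical-size,physical-utilisation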
These are the host stats:
Error log file (I've only taken out the public key):
xoa.recipe.createKubernetesCluster { "clusterName": "k8sRodri", "controlPlanePoolSize": 1, "k8sVersion": "1.28.3-00", "nbNodes": 1, "network": "fe86cad9-df93-deb8-0f56-afe8e453b8e0", "sr": "a1316ff3-b8ad-fdd8-4bb3-4ed660bbdb8b", "sshKey": "AAAAB3xx[censored]xxxx2kQ==" } { "originalUrl": "https://192.168.10.198/import/?sr_id=OpaqueRef%3A3cec6886-1be2-4dce-b152-1e190fd8aa5b&session_id=OpaqueRef%3A3c71c7f9-27ce-470d-b5a5-524f0f73f887", "url": "https://192.168.10.198/import/?sr_id=OpaqueRef%3A3cec6886-1be2-4dce-b152-1e190fd8aa5b&session_id=OpaqueRef%3A3c71c7f9-27ce-470d-b5a5-524f0f73f887", "pool_master": { "uuid": "b5a406c7-e3c3-4656-a0c4-f01baf26d2b4", "name_label": "xcp-ng-12400", "name_description": "Default install", "memory_overhead": 655278080, "allowed_operations": [ "vm_migrate", "provision", "vm_resume", "evacuate", "vm_start" ], "current_operations": {}, "API_version_major": 2, "API_version_minor": 16, "API_version_vendor": "XenSource", "API_version_vendor_implementation": {}, "enabled": true, "software_version": { "product_version": "8.2.1", "product_version_text": "8.2", "product_version_text_short": "8.2", "platform_name": "XCP", "platform_version": "3.2.1", "product_brand": "XCP-ng", "build_number": "release/yangtze/master/58", "hostname": "localhost", "date": "2023-10-18", "dbv": "0.0.1", "xapi": "1.20", "xen": "4.13.5-9.37", "linux": "4.19.0+1", "xencenter_min": "2.16", "xencenter_max": "2.16", "network_backend": "openvswitch", "db_schema": "5.603" }, "other_config": { "agent_start_time": "1700030285.", "boot_time": "1700030236.", "rpm_patch_installation_time": "1699202788.539", "iscsi_iqn": "iqn.2023-07.com.example:aa9ef3b7" }, "capabilities": [ "xen-3.0-x86_64", "hvm-3.0-x86_32", "hvm-3.0-x86_32p", "hvm-3.0-x86_64", "" ], "cpu_configuration": {}, "sched_policy": "credit", "supported_bootloaders": [ "pygrub", "eliloader" ], "resident_VMs": [ "OpaqueRef:9758ce9f-bfc4-49d4-a5c9-92ac9457fe64", "OpaqueRef:3da033b8-1a94-4e56-bed5-0edda49315ab", "OpaqueRef:8e78839e-826f-429f-bb8b-ac6141fe7abe", "OpaqueRef:b11b30e1-353a-42c1-b5d4-1c7e44c5ff24", "OpaqueRef:2f0bea77-1af8-4e25-a9a5-1b8758732343" ], "logging": {}, "PIFs": [ "OpaqueRef:d44da9d7-0dff-4919-b891-dc6342f73570" ], "suspend_image_sr": "OpaqueRef:9f3db79f-b06d-4ef9-adc2-df280d476d3c", "crash_dump_sr": "OpaqueRef:9f3db79f-b06d-4ef9-adc2-df280d476d3c", "crashdumps": [], "patches": [], "updates": [], "PBDs": [ "OpaqueRef:feb86773-f35c-41b0-8e37-741c6a315613", "OpaqueRef:7619d6a7-db7c-4670-97cc-f4fc0fd06ec4", "OpaqueRef:5a1a4090-0602-4298-a837-fd718dc689a8", "OpaqueRef:561f9549-a6e9-41be-b711-1619bc54e309", "OpaqueRef:537ce3e9-f134-4de3-8274-06675517b111", "OpaqueRef:3088a871-d7ab-4011-94f0-d961dff9eb36", "OpaqueRef:19d98d1e-5166-4b88-b861-d5fe2335fc16", "OpaqueRef:07b0be80-9c78-4216-9745-02cd9ebe8b9a" ], "host_CPUs": [ "OpaqueRef:0d0e820a-1785-450c-be71-2ba3bafd6185", "OpaqueRef:aeb53d4a-2a40-4713-b8a3-647ddf4820d1", "OpaqueRef:d5630f42-a5b6-4226-8611-4b2ff3204c2c", "OpaqueRef:27f06ef0-3a1b-48c0-b93b-435a91da639e", "OpaqueRef:267853ae-f85f-44b8-b07a-7d0233f84110", "OpaqueRef:057190e0-b7db-474b-b9e2-27f78c271cf3", "OpaqueRef:ac60e622-673e-47dd-8abb-66a8a4ab4ede", "OpaqueRef:66aa9e2d-65b0-4ba0-b3ef-0881d41f3ab6", "OpaqueRef:f441c719-c49f-4b75-be34-98701e36d684", "OpaqueRef:50cd74cb-bd30-48b0-b67e-62b2589b7fef", "OpaqueRef:0fe4767c-a5bd-46f8-a02c-a505d3313f1b", "OpaqueRef:29cadecc-d48f-4ad1-b95b-db6baab5a41e" ], "cpu_info": { "cpu_count": "12", "socket_count": "1", "vendor": "GenuineIntel", "speed": "2496.126", "modelname": "12th Gen 
Intel(R) Core(TM) i5-12400", "family": "6", "model": "151", "stepping": "2", "flags": "fpu de tsc msr pae mce cx8 apic sep mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid pni pclmulqdq monitor est ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 bmi2 erms rdseed adx clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 gfni vaes vpclmulqdq rdpid arch_capabilities", "features_pv": "1fc9cbf5-f6f83203-2991cbf5-00000123-00000007-218c0329-00400700-00000000-00001000-ac000400-00000000-00000000-00000000-00000001-00000000-00000000-0408e167-00000000-00000000-00000000-00000000-00000000", "features_hvm": "1fcbfbff-f7fa3223-2d93fbff-00000523-0000000f-219c07ab-0040070c-00000000-00001000-bc000400-00000000-00000000-00000000-00000001-00000000-00000000-0408e167-00000000-00000000-00000000-00000000-00000000", "features_hvm_host": "1fcbfbff-f7fa3223-2c100800-00000121-0000000f-219c07ab-0040070c-00000000-00001000-bc000400-00000000-00000000-00000000-00000001-00000000-00000000-0408e163-00000000-00000000-00000000-00000000-00000000", "features_pv_host": "1fc9cbf5-f6f83203-28100800-00000121-00000007-218c0329-00400700-00000000-00001000-ac000400-00000000-00000000-00000000-00000001-00000000-00000000-0408e163-00000000-00000000-00000000-00000000-00000000" }, "hostname": "xcp-ng-12400", "address": "192.168.10.198", "metrics": "OpaqueRef:ab7679c5-31e0-4975-8e72-ca800ad31e02", "license_params": { "restrict_vswitch_controller": "false", "restrict_lab": "false", "restrict_stage": "false", "restrict_storagelink": "false", "restrict_storagelink_site_recovery": "false", "restrict_web_selfservice": "false", "restrict_web_selfservice_manager": "false", "restrict_hotfix_apply": "false", "restrict_export_resource_data": "false", "restrict_read_caching": "false", "restrict_cifs": "false", "restrict_health_check": "false", "restrict_xcm": "false", "restrict_vm_memory_introspection": "false", "restrict_batch_hotfix_apply": "false", "restrict_management_on_vlan": "false", "restrict_ws_proxy": "false", "restrict_vlan": "false", "restrict_qos": "false", "restrict_pool_attached_storage": "false", "restrict_netapp": "false", "restrict_equalogic": "false", "restrict_pooling": "false", "enable_xha": "true", "restrict_marathon": "false", "restrict_email_alerting": "false", "restrict_historical_performance": "false", "restrict_wlb": "false", "restrict_rbac": "false", "restrict_dmc": "false", "restrict_checkpoint": "false", "restrict_cpu_masking": "false", "restrict_connection": "false", "platform_filter": "false", "regular_nag_dialog": "false", "restrict_vmpr": "false", "restrict_vmss": "false", "restrict_intellicache": "false", "restrict_gpu": "false", "restrict_dr": "false", "restrict_vif_locking": "false", "restrict_storage_xen_motion": "false", "restrict_vgpu": "false", "restrict_integrated_gpu_passthrough": "false", "restrict_vss": "false", "restrict_guest_agent_auto_update": "false", "restrict_pci_device_for_auto_update": "false", "restrict_xen_motion": "false", "restrict_guest_ip_setting": "false", "restrict_ad": "false", "restrict_nested_virt": "false", "restrict_live_patching": "false", "restrict_set_vcpus_number_live": "false", "restrict_pvs_proxy": "false", "restrict_igmp_snooping": "false", "restrict_rpu": "false", "restrict_pool_size": "false", "restrict_cbt": "false", "restrict_usb_passthrough": "false", "restrict_network_sriov": "false", 
"restrict_corosync": "true", "restrict_zstd_export": "false", "restrict_pool_secret_rotation": "false" }, "ha_statefiles": [], "ha_network_peers": [], "blobs": {}, "tags": [], "external_auth_type": "", "external_auth_service_name": "", "external_auth_configuration": {}, "edition": "xcp-ng", "license_server": { "address": "localhost", "port": "27000" }, "bios_strings": { "bios-vendor": "American Megatrends International, LLC.", "bios-version": "F11", "system-manufacturer": "Gigabyte Technology Co., Ltd.", "system-product-name": "B760M GAMING X DDR4", "system-version": "Default string", "system-serial-number": "Default string", "baseboard-manufacturer": "Gigabyte Technology Co., Ltd.", "baseboard-product-name": "B760M GAMING X DDR4", "baseboard-version": "x.x", "baseboard-serial-number": "Default string", "oem-1": "Xen", "oem-2": "MS_VM_CERT/SHA1/bdbeb6e0a816d43fa6d3fe8aaef04c2bad9d3e3d", "oem-3": "Default string", "hp-rombios": "" }, "power_on_mode": "", "power_on_config": {}, "local_cache_sr": "OpaqueRef:9f3db79f-b06d-4ef9-adc2-df280d476d3c", "chipset_info": { "iommu": "false" }, "PCIs": [ "OpaqueRef:c3ccbcef-757c-426e-a6c0-71ca59c392a8", "OpaqueRef:b634af83-c4f6-44c7-be28-aad0158d2b72", "OpaqueRef:2d18b304-b03a-45d7-9378-4055cddf68a0", "OpaqueRef:16189584-ccdd-4b31-acfd-28ccd80a0c9e", "OpaqueRef:0dc37b73-b7f5-4c67-a55e-18ae3c7e2588" ], "PGPUs": [ "OpaqueRef:1b9f4c3b-1136-4670-969a-4d8e72946a43" ], "PUSBs": [], "ssl_legacy": false, "guest_VCPUs_params": {}, "display": "enabled", "virtual_hardware_platform_versions": [ 0, 1, 2 ], "control_domain": "OpaqueRef:2f0bea77-1af8-4e25-a9a5-1b8758732343", "updates_requiring_reboot": [], "features": [], "iscsi_iqn": "iqn.2023-07.com.example:aa9ef3b7", "multipathing": false, "uefi_certificates": "", "certificates": [], "editions": [ "xcp-ng" ], "https_only": false }, "SR": { "uuid": "a1316ff3-b8ad-fdd8-4bb3-4ed660bbdb8b", "name_label": "Local storage", "name_description": "", "allowed_operations": [ "vdi_enable_cbt", "vdi_list_changed_blocks", "unplug", "plug", "pbd_create", "vdi_disable_cbt", "update", "pbd_destroy", "vdi_resize", "forget", "vdi_clone", "vdi_data_destroy", "scan", "vdi_snapshot", "vdi_mirror", "vdi_create", "vdi_destroy", "vdi_set_on_boot" ], "current_operations": {}, "VDIs": [ "OpaqueRef:e711b1d9-8a68-4774-9625-92d50da403e1", "OpaqueRef:9c46f21b-23d4-4aa4-9cfe-a047f7309fab" ], "PBDs": [], "virtual_allocation": 42949672960, "physical_utilisation": 31253938176, "physical_size": 448251158528, "type": "ext", "content_type": "user", "shared": false, "other_config": { "dirty": "", "i18n-original-value-name_label": "Local storage", "i18n-key": "local-storage" }, "tags": [], "sm_config": { "devserial": "scsi-3500a075110f7b66d" }, "blobs": {}, "local_cache_enabled": true, "introduced_by": "OpaqueRef:NULL", "clustered": false, "is_tools_sr": false }, "message": "500 Internal Error", "name": "Error", "stack": "Error: 500 Internal Error at Object.assertSuccess (/usr/local/lib/node_modules/xo-server/node_modules/http-request-plus/index.js:144:19) at httpRequestPlus (/usr/local/lib/node_modules/xo-server/node_modules/http-request-plus/index.js:211:22) at Xapi.putResource (/usr/local/lib/node_modules/xo-server/node_modules/xen-api/src/index.js:508:22) at Xapi._importVm (file:///usr/local/lib/node_modules/xo-server/src/xapi/index.mjs:677:19) at Xapi.importVm (file:///usr/local/lib/node_modules/xo-server/src/xapi/index.mjs:799:48) at Xoa._downloadAndInstallHubXva (/usr/local/lib/node_modules/xo-server-xoa/src/index.js:702:16) at Xoa.createCluster 
(/usr/local/lib/node_modules/xo-server-xoa/src/recipes/kubernetes-cluster.js:235:18) at Api.#callApiMethod (file:///usr/local/lib/node_modules/xo-server/src/xo-mixins/api.mjs:417:20)" }
-
RE: Question about using HUB as a XO built from sources
@olivierlambert Thanks for the answer!
So, could I maybe deploy a XOA Free VM, use the Hub to deploy the recipe, and then switch back to my normal VM with XO from sources to manage everything? Or does the Hub recipe require some sort of GUI that would not be available in XO from sources?
-
Question about using HUB as a XO built from sources
Hi!
I'm currently using XO built from sources in my homelab to run a few VMs. Working great! But I want to test a deployment of a Kubernetes cluster in my homelab before using it at work.
I've seen that the easiest way to deploy a k8s cluster on XCP-ng would be to use the Hub recipes, but I see that the Hub is only available for XOA (not for XO built from sources). My question is: is the Hub available for the XOA Free version? Am I able to use the XOA Free account at the same time as XO built from sources? I'm a bit confused here.
If not, any other recommendation to easily deploy a Kubernetes cluster? 1 master and 2-3 nodes is enough; just to avoid having to create the VMs myself from scratch.
Thanks!
-
RE: Continuous Replication health check fails
Just updated to the latest commit https://github.com/vatesfr/xen-orchestra/commit/4bd5b38aeb9d063e9666e3a174944cbbfcb92721
It's fixed now. Working wonderfully. Thanks!
-
RE: Continuous Replication health check fails
@florent said in Continuous Replication health check fails:
@bullerwins nice catch, the fix is trivial with such a detailed error: https://github.com/vatesfr/xen-orchestra/pull/6830
Nice! As soon as it's merged I'll update and check. I'm happy if this report helped you guys in any way!
-
RE: Continuous Replication health check fails
@florent said in Continuous Replication health check fails:
@bullerwins hi, can you export the full JSON log (the button with the downward arrow in the title of the modal)?
Sure!
I copied it to Pastebin; I can't upload .json files, I believe.
-
Continuous Replication health check fails
Hi!
I've been testing both the normal and delta backups, and the DR and Continuous Replication backups with the health checks.
The "normal" backups work fine: it creates a backup, and with the health check it boots a copy of the VM, waits for the management tools to load, and then destroys it. Works wonders:
But when using the Continuous Replication method, it only works as long as I don't enable the "health check". It actually works "fine": it creates the backup VM but doesn't boot it; I can boot it manually afterwards and it boots fine. Is this normal behavior?
I get the error: Error: task has already ended
Full error log:
{ "data": { "mode": "delta", "reportWhen": "failure" }, "id": "1683735233267", "jobId": "0ae3a8f1-b21c-4302-a4f4-a175b3766380", "jobName": "ubuntuBackup", "message": "backup", "scheduleId": "fde39c31-a81a-4534-8265-3d63264cd629", "start": 1683735233267, "status": "failure", "infos": [ { "data": { "vms": [ "89cc6c0e-4a19-eaf7-edd2-c3189094e66e" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "89cc6c0e-4a19-eaf7-edd2-c3189094e66e", "name_label": "Ubuntu Focal Fossa 20.04 ori" }, "id": "1683735235212", "message": "backup VM", "start": 1683735235212, "status": "failure", "tasks": [ { "id": "1683735235906", "message": "snapshot", "start": 1683735235906, "status": "success", "end": 1683735237813, "result": "3504e7ab-240e-5d4b-d09b-97b59a4ae6a8" }, { "data": { "id": "340350a1-1198-edb3-b477-caaad85c02e3", "isFull": true, "name_label": "truenas", "type": "SR" }, "id": "1683735237814", "message": "export", "start": 1683735237814, "status": "success", "tasks": [ { "id": "1683735237848", "message": "transfer", "start": 1683735237848, "status": "success", "end": 1683735442805, "result": { "size": 5181273600 } } ], "end": 1683735442889 } ], "end": 1683735442907, "result": { "log": { "result": { "log": { "message": "health check", "parentId": "aw473bidagq", "event": "start", "taskId": "wzq5it9o9ip", "timestamp": 1683735442901 }, "message": "task has already ended", "name": "Error", "stack": "Error: task has already ended\n at Task.logAfterEnd (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:7:17)\n at Task.onLog (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:66:37)\n at #log (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:146:16)\n at new Task (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:82:14)\n at Task.run (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:40:12)\n at DeltaReplicationWriter.healthCheck (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/writers/_MixinReplicationWriter.js:26:19)\n at /opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:136:32\n at /opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:110:24\n at Zone.run (/opt/xo/xo-builds/xen-orchestra-202305091555/node_modules/node-zone/index.js:80:23)\n at Task.run (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:108:23)" }, "status": "failure", "event": "end", "taskId": "aw473bidagq", "timestamp": 1683735442901 }, "message": "task has already ended", "name": "Error", "stack": "Error: task has already ended\n at Task.logAfterEnd (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:7:17)\n at #log (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:146:16)\n at #end (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:141:14)\n at Task.failure (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:90:14)\n at /opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:119:14\n at Zone.run (/opt/xo/xo-builds/xen-orchestra-202305091555/node_modules/node-zone/index.js:80:23)\n at Task.run (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:108:23)\n at DeltaReplicationWriter.healthCheck (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/Task.js:136:19)\n at 
/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/_VmBackup.js:458:46\n at callWriter (/opt/xo/xo-builds/xen-orchestra-202305091555/@xen-orchestra/backups/_VmBackup.js:148:15)" } } ], "end": 1683735442908 }
XO built from sources as of today.
-
RE: XOA Proxy Error when updating
@julien-f said in XOA Proxy Error when updating:
@bullerwins XO Proxy Appliance deployment is not supported in XO built from sources.
My bad, I believe it used to work a few months back.
@olivierlambert said in XOA Proxy Error when updating:
I missed that in the original message, indeed
FYI @bullerwins there's no such thing as XOA built from sources, you have either:
- XOA is Xen Orchestra virtual Appliance, that's the VM we distribute with pro support and QA/stable channel, updater and such
- XO from the sources is from GitHub, without pro support but community support, no QA, no stable version, no updater and such
Thanks, I got confused with the naming, I'm using the second option.
As a workaround I used a VPN to manage the remote XCP-ng host, thanks!
-
XOA Proxy Error when updating
Hi!
I just installed the XOA Proxy following the guide at https://xen-orchestra.com/blog/xo-proxy-a-concrete-guide/, but I get an error when trying to upgrade the proxy.
The whole error log is:
proxy.upgradeAppliance { "id": "3e2ad3d7-f4d1-4eb1-bea0-a6129f12325c", "ignoreRunningJobs": true } { "message": "this._app.getCurrentChannel is not a function", "name": "TypeError", "stack": "TypeError: this._app.getCurrentChannel is not a function at Proxy._getChannel (file:///opt/xo/xo-builds/xen-orchestra-202305091555/packages/xo-server/src/xo-mixins/proxies.mjs:92:41) at Proxy.updateProxyAppliance (file:///opt/xo/xo-builds/xen-orchestra-202305091555/packages/xo-server/src/xo-mixins/proxies.mjs:236:79) at Proxy.upgradeProxyAppliance (file:///opt/xo/xo-builds/xen-orchestra-202305091555/packages/xo-server/src/xo-mixins/proxies.mjs:223:18) at Api.#callApiMethod (file:///opt/xo/xo-builds/xen-orchestra-202305091555/packages/xo-server/src/xo-mixins/api.mjs:417:20)" }
This is with XOA built from sources, build commit https://github.com/vatesfr/xen-orchestra/commit/90ce1c4d1ed8d97c024098e26d1fb8ecfd78cb25