@JBlessing You're welcome!
Posts
-
RE: XCP-ng Center 25.04 Released
@JBlessing At the bottom of the release page (https://github.com/xcp-ng/xenadmin/releases/tag/v25.04) are the links to download either the MSI installer or a Zip file containing the entire application (if you want a portable package).
-
RE: XO6 Possible Issue with Lock Sidebar Button
@lsouai-vates Duly noted.
If and when you do get a moment, could you have a look at https://xcp-ng.org/forum/post/94297 and offer any guidance on how I can stop the debug log entries, as well as the double timestamp entries in the audit.log file?
Thank you.
-
RE: XO6 Possible Issue with Lock Sidebar Button
@lsouai-vates Actually I do. Don't want to derail this thread, mind if I DM you? I promise not to monopolize your time.
-
RE: XO6 Possible Issue with Lock Sidebar Button
@lsouai-vates No worries, I think we can mark this issue as resolved (or close it altogether).
While capturing the screenshots for you, I noticed that I was holding the mouse cursor over the sidebar - thus preventing it from collapsing. Once I clicked anywhere else on the page, the sidebar collapsed accordingly.
Sorry for causing any panic - the UI looks like it's behaving as designed. Thanks again for engaging so quickly.
-
XO6 Possible Issue with Lock Sidebar Button
Good-day Folks,
XOCE Version: Commit 2effd85
Wondering if anyone else is seeing this behavior on XO6 at Commit 2effd85. It looks like if you click the Lock Sidebar button, the sidebar does lock; however, the whole page still behaves as if you'd clicked the Close Sidebar button.
I created an animated GIF to share, but that file type isn't allowed here, so I can't attach it directly. Instead, I uploaded it to my Google Photos account; here's the link: https://photos.app.goo.gl/RVT1CE19tPv8SbF37
-
RE: XOA vs XO vs Backup feature
@abudef I know your question was targeted to the Vates team, however, I'd like to chime in....if that's ok?
As a Windows SysAdmin myself, I had to eat the proverbial "humble pie" and eventually learn Linux to take advantage of a lot of open-source software. It goes without saying that you'll have to do the same here.
Now, if you really wanted to, you could install XO from sources using Ronivay's script into a Debian VM running on VirtualBox or Hyper-V on your Windows machine, then use that to manage your pool.
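If it helps, the usual invocation of that script (from memory of the ronivay/XenOrchestraInstallerUpdater README - double-check the repo before running, as options may have changed) looks something like this inside the Debian VM:

```shell
# Fetch the installer/updater and run it (interactive menu);
# the config file controls branch, ports, plugins, etc.
git clone https://github.com/ronivay/XenOrchestraInstallerUpdater.git
cd XenOrchestraInstallerUpdater
cp sample.xo-install.cfg xo-install.cfg   # adjust settings to taste
sudo ./xo-install.sh                      # choose "Install" from the menu
```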
-
RE: Can't get slave out of maintenance mode after yum updates
@rustylh Not a pro here, but in my experience thus far, most weird issues are solved with a Toolstack Restart. Have you tried that already? If not, do so from whichever node is currently the master, then report back.
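For anyone finding this later, a Toolstack restart can be triggered from the Host view in XO or straight from the host's console/SSH; this sketch assumes the stock XCP-ng helper script:

```shell
# On the host (start with the pool master). This only restarts the
# management daemons (xapi and friends); running VMs are unaffected.
xe-toolstack-restart
```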
-
RE: Shipping System Logs to a Remote Syslog Server
@ThasianXi Thanks, I will check out the resources you've shared. I had already figured out the XCP-ng side and wanted to get the XO side covered as well.
Thanks again.
-
Shipping System Logs to a Remote Syslog Server
Good-day Folks,
Has anyone been successful in shipping off Xen Orchestra logs to a remote syslog server?
I'm in the process of configuring the XCP-ng hosts to forward logs to a Logstash server for ingest into Elasticsearch, as part of an effort to demonstrate compliance to RMF and DISA STIGs (similar to VMware ESXi + VCenter).
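In case it's useful to anyone else, the XCP-ng host side can be pointed at a remote collector with `xe` (the uuid and hostname below are placeholders for your environment):

```shell
# Set the remote syslog destination on a host, then apply it:
xe host-param-set uuid=<host-uuid> logging:syslog_destination=logstash01.example.com
xe host-syslog-reconfigure host-uuid=<host-uuid>
```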
-
RE: VM Console Screen Suddenly Inaccessible
Hmm, interesting observation about this potentially being a failed-migration issue.
I can confirm that none of my users in the lab are migrating any VMs, let alone this particular one. However, I do have the Load Balancer plugin running, and it could be the source of a migration that failed. How can I confirm this in the logs? That is, what should I be looking for?
-
VM Console Screen Suddenly Inaccessible
Good-day Folks,
I trust you're all doing well. For the past couple of days, I've been noticing that the Console of one of my VMs (DC02) becomes inaccessible (from within Xen Orchestra); however, during this state I am still able to use RDP to access the VM remotely.
To begin troubleshooting, I attempted to reboot the VM from Xen Orchestra's VM controls menu. This failed, and I was greeted by the following error message:
INTERNAL_ERROR: Object with type VM and id 9fa84df4-3912-5cbf-09a6-3374dd27eead/config does not exist in xenopsd
Next, I attempted to force a VM Reboot or Shutdown from the VM's Advanced tab, and was met with the same error message.
The Temporary Solution (Workaround)
This is what got my VM back into a working/running state - though I'm not sure if the order is important:
- First, I ran Rescan all disks on the SR where the VM's VDI was located.
- Second, I restarted the Toolstack of the host the VM was running on. I immediately noticed that the previous shutdown attempt had taken effect, but I was now unable to restart the VM. All attempts to start the VM resulted in the error:
INTERNAL_ERROR: xenopsd internal error: Storage_error ([S(Illegal_transition);[[S(Attached);S(RO)];[S(Attached);S(RW)]]])
- Last, I restarted the Toolstack of the master host. Once Xen Orchestra reconnected, I was able to start up the VM without any issues.
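For reference, rough CLI equivalents of those workaround steps (I did them through the XO UI; the uuid is a placeholder):

```shell
xe sr-scan uuid=<sr-uuid>    # "Rescan all disks" on the SR holding the VDI
xe-toolstack-restart         # run on the host the VM was running on
# ...then run xe-toolstack-restart again on the pool master
```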
Unfortunately, I am unable to tell when the issue is occurring or what conditions lead to the VM getting into this state. Since this is a lab environment being used for a Proof of Concept (POC), we're in the lab sporadically. I've observed this issue twice now, generally in the mornings.
Anyway, I thought I'd report it to the community to see if anyone has encountered a similar issue before and could offer some hints on a permanent solution. Thanks.
My Environment:
Hosts: 3-node XCP-ng v8.3.0
XOA: v5.102.1 Build: 20250220 (air-gapped)
-
RE: Email to Sales Team Bouncing Back as SPAM
Thanks to both of you; I'll adjust accordingly.
The strange thing is that I have an open ticket, and the way I've always corresponded with the team is by simply replying to that email - it's always gotten through. This is the first time I've seen it bounce (which suggests a change somewhere).
No worries though, I'll just log into the helpdesk portal and add a comment directly to the ticket.
-
Email to Sales Team Bouncing Back as SPAM
Hi Team,
I just sent an email to the sales team from my work address and it bounced back as SPAM. Are there any email issues going on?
-
Storage Repository Maintenance Mode When XO VM's VDI is Remote
Good-day Folks,
A few days ago, I got myself into a little jam while trying to do what I thought was the proper way of handling the reboot of the only storage server in my test lab. Now, I managed to get myself out of trouble but I'm here for guidance on how I could've done things differently. So, here's what happened.
For those who don't know, I'm running a small test lab where I'm evaluating the Vates VMS stack as a viable drop-in replacement for VMware's VCF stack. Unfortunately, I don't have a lot of funding, so I don't have a lot of hardware. As such, I only had 4 physical machines available to dedicate as servers. I used three of them as XCP-ng hosts and turned the last into an Active Directory Domain Controller, DHCP Server, and File Server (SMB/CIFS and NFS). I also have an 8TB external HDD attached to this same box, which I'm sharing out over NFS and using as Remotes (to test the backup features of XO). This entire setup isn't ideal, but hey, it's what I've got - and it works. Actually, the fact that the Vates VMS stack works under such conditions at all is a huge testament to the resiliency of the solution. Anyway, I digress; back to the issue at hand.
Given the above setup, a need arose to reboot this server (let's call it DC01). After reading through this documentation - https://docs.xen-orchestra.com/manage_infrastructure#maintenance-mode-1 - I decided that it was a good idea to place the SRs into Maintenance Mode before doing the reboot. I had done this before in another environment (at my church) and never ran into the problems I'm about to describe (however, in hindsight, I think the difference may have been that the VDI of the XO VM was local to the host it was running on).
When I clicked the button to enable maintenance mode, it gave me the usual warning that running VMs would be halted, so I clicked OK to proceed. What I didn't realize was that, because the XO appliance's VDI was on the SR I had just put into maintenance mode, I would immediately lose connectivity with it, and it would subsequently refuse to start. I had a backup plan: use XCP-ng Center (vNext) to connect to the pool master and attempt to start the XOA VM, hoping I'd be prompted to move the VDI - but that never happened. The startup attempts kept failing, citing a timeout error. Running out of ideas, I simply decided to reboot all three hosts, hoping that once they came back up they would reconnect the SRs and I'd be able to start XOA. Unfortunately for me, the reboots took a very long time to complete - so long that I left the lab (this was around 9pm) and returned around 2am. I'm not sure exactly how long the reboots took, but when I got back to the lab, all hosts were back up and no SRs were connected.
At this point, my thinking was that the SRs didn't reconnect on each host likely because XOA wasn't running to instruct them to (I don't know if this is entirely accurate). I googled around and found that I could re-introduce the SRs directly on each host using the xe pbd-plug/unplug commands. Strangely enough, while I was able to run those commands on the CLI of each host without any errors, the SR reconnected on only one host. It wasn't until I used XCP-ng Center (vNext) to perform a repair on the SR that I got anywhere: the repair wizard clearly showed that the SR was connected to Host #2 but not #1 and #3. I proceeded through the wizard and it successfully repaired the connections. I was then able to start the XOA VM and got the lab back up and running.
So my ultimate question:
- When the VDI of the XOA or XOCE VM resides on an SR that's being targeted for maintenance mode enablement, what is the proper procedure?
Thanks in advance to anyone who reads through my long narration and then offers a response. You are very much appreciated!
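For completeness, the manual PBD re-plug I attempted looks roughly like this on each host's CLI (uuids are placeholders):

```shell
# List the SR's PBDs and see which hosts have them attached:
xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,currently-attached
# Re-plug a detached PBD:
xe pbd-plug uuid=<pbd-uuid>
```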
-
RE: Rolling pool update failure: not enough PCPUs even though all should fit (dom0 culprit?)
I have seen this problem before in my test lab. Unfortunately, I didn't document it enough to report here. For me, the solution was also to simply power off the culprit VM to prevent the attempted migration.
In my mind, the RPU logic should use the current running state of VMs to determine the resources actually in use and which hosts can support them, since the move is only temporary. Then again, I'm not privy to all the factors that went into the decision to have it work the way it does. I'm sure there's a valid reason.
-
RE: IMPORT_ERROR - Tar.Header.Checksum_mismatch
@Danp No sir, I have not.
I have, however, exported the same VM in the OVA format. It ended up being about 47 GiB, yet I was able to import it into XOA without any issues, and the import took just under 10 minutes to complete.
-
IMPORT_ERROR - Tar.Header.Checksum_mismatch
Good-day Folks,
I am running into some trouble while attempting to import a VM I exported from another XCP-ng pool. The size of the export is about 26.3 GiB. The import kicks off and I see it running in the tasks section. However, once it gets to about 26%, the task stops and throws the following XapiError in the logs:
"IMPORT_ERROR(INTERNAL_ERROR: [ Tar.Header.Checksum_mismatch ])"
The VM in question does not have any of the XCP-ng or Citrix guest tools installed, and the export was done with the XVA format.
My Environment:
- Hosts: 3-node XCP-ng v8.3.0
- XOA: v5.102.1 Build: 20250220 (air-gapped)
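For what it's worth, an XVA export is a plain tar archive, so a Tar.Header.Checksum_mismatch usually points at the file itself being corrupted (e.g. during transfer) rather than at the importing pool. A quick sanity check before importing is to list the archive with tar. The self-contained demo below (no XVA required; paths are temporary files it creates itself) shows how a single flipped header byte produces the same class of error:

```shell
# Build a tiny tar, corrupt one byte in its first header, and observe
# that tar rejects it - the same failure class xapi reports as
# Tar.Header.Checksum_mismatch when an XVA is damaged in transit.
tmp=$(mktemp -d)
echo "hello" > "$tmp/file.txt"
tar -cf "$tmp/good.tar" -C "$tmp" file.txt
cp "$tmp/good.tar" "$tmp/bad.tar"
# Flip one byte inside the first 512-byte tar header:
printf 'X' | dd of="$tmp/bad.tar" bs=1 seek=10 conv=notrunc 2>/dev/null
tar -tf "$tmp/good.tar"                        # lists file.txt
tar -tf "$tmp/bad.tar" 2>/dev/null || echo "corrupt header detected"
rm -rf "$tmp"
```

The real-world check would simply be `tar -tf export.xva > /dev/null` on the exported file before (re)uploading it.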
Here's the full text of the error:
HTTP handler of vm.import undefined { "code": "IMPORT_ERROR", "params": [ "INTERNAL_ERROR: [ Tar.Header.Checksum_mismatch ]" ], "url": "", "task": { "uuid": "9fcd80c7-4e8c-8ab6-05a4-59d2e2b842c1", "name_label": "[XO] VM import", "name_description": "", "allowed_operations": [], "current_operations": {}, "created": "20250403T22:46:56Z", "finished": "20250403T22:59:02Z", "status": "failure", "resident_on": "OpaqueRef:268f8817-f152-7be6-9797-bc643f3d355f", "progress": 1, "type": "<none/>", "result": "", "error_info": [ "IMPORT_ERROR", "INTERNAL_ERROR: [ Tar.Header.Checksum_mismatch ]" ], "other_config": { "object_creation": "complete" }, "subtask_of": "OpaqueRef:NULL", "subtasks": [], "backtrace": "(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/import.ml)(line 2204))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 72)))" }, "pool_master": { "uuid": "d7400002-a65f-4bb6-b3df-e781b670c129", "name_label": "VMH01", "name_description": "XCP-ng Virtualization Host #1", "memory_overhead": 1979920384, "allowed_operations": [ "vm_migrate", "provision", "vm_resume", "enable", "evacuate", "vm_start" ], "current_operations": {}, "API_version_major": 2, "API_version_minor": 21, "API_version_vendor": "XenSource", "API_version_vendor_implementation": {}, "enabled": true, "software_version": { "product_version": "8.3.0", "product_version_text": "8.3", "product_version_text_short": "8.3", "platform_name": "XCP", "platform_version": "3.4.0", "product_brand": "XCP-ng", "build_number": "cloud", "git_id": "2", "hostname": "localhost", "date": "20241011T11:26:51Z", "dbv": "0.0.1", "xapi": "24.19", "xapi_build": "24.19.2", "xen": "4.17.5-4", "linux": "4.19.0+1", "xencenter_min": "2.21", "xencenter_max": "2.21", "network_backend": "openvswitch", "db_schema": "5.780" }, "other_config": { "agent_start_time": "1743706565.", "boot_time": "1743635624.", "rpm_patch_installation_time": "1739235125.987", "iscsi_iqn": 
"iqn.2024-10.com.example:6d765dba" }, "capabilities": [ "xen-3.0-x86_64", "hvm-3.0-x86_32", "hvm-3.0-x86_32p", "hvm-3.0-x86_64", "" ], "cpu_configuration": {}, "sched_policy": "credit", "supported_bootloaders": [ "pygrub", "eliloader" ], "resident_VMs": [ "OpaqueRef:5fdd98ae-2900-7fbc-70c5-535cad7d5cff", "OpaqueRef:9e8b526a-c4c4-4496-1757-8c9d7025c9c6", "OpaqueRef:9631ea16-4411-24e0-03a5-4d2c51dea877", "OpaqueRef:869d147a-0d78-f40d-e11b-d5606cad768d", "OpaqueRef:7af5d282-c912-cffe-868c-2c7f439581ef", "OpaqueRef:1bdaf1df-de8d-d2ea-9450-285a619c35a6" ], "logging": { "syslog_destination": "netxms01.labnet.local" }, "PIFs": [ "OpaqueRef:d9c64aa6-358e-a865-aeee-a480af51e0a8", "OpaqueRef:babfd358-d676-cc20-f95a-464a1e61c303", "OpaqueRef:a5bafa19-02cb-637c-0a35-61cbb61c7d2c", "OpaqueRef:a05d800a-f00b-f4c5-f5e3-107fbb50847f", "OpaqueRef:7b60aff1-5641-e936-bfa2-3f68ca52da5c", "OpaqueRef:3d349a09-19af-5ab2-aae1-42502b405ebb", "OpaqueRef:19261bea-498d-15f5-088f-eff5444ea7a2" ], "suspend_image_sr": "OpaqueRef:c72bf8c5-573e-6778-2d37-1da8ed3897b4", "crash_dump_sr": "OpaqueRef:c72bf8c5-573e-6778-2d37-1da8ed3897b4", "crashdumps": [], "patches": [], "updates": [], "PBDs": [ "OpaqueRef:e684325a-060f-5d48-121e-ee49667210f1", "OpaqueRef:be237cf6-8a9c-dbb6-59dd-682026316449", "OpaqueRef:aaa2cffa-480c-6438-9a29-6256f7928860", "OpaqueRef:a35fddeb-9514-bd24-9a3c-856a2ecb7164", "OpaqueRef:80775942-38e8-c2cb-35cc-2108e2fe704b", "OpaqueRef:8065eb1d-defd-c9bd-bc6a-9d3c619d1b8d", "OpaqueRef:7a9f7eb2-b380-b3ea-d121-6e737af1bd99", "OpaqueRef:698d6565-d120-c0cf-9b2e-63a028d395ca", "OpaqueRef:32cb7d1c-3f29-7421-b38c-0fe2c1291611", "OpaqueRef:29e0a576-97e7-a7b6-cd51-23828994c1d8", "OpaqueRef:075941f1-060c-1284-3379-726280775f5d" ], "host_CPUs": [ "OpaqueRef:686227aa-341e-c8ea-79cf-dd1c19b432c4", "OpaqueRef:d24ba30c-3adf-6dc3-e8a5-bb579be5554f", "OpaqueRef:e88f4b5f-cb7f-1443-104d-f3a215b5ca70", "OpaqueRef:5d6868a5-0bbc-9a5a-ca65-0b369109513a", "OpaqueRef:cadf048d-979b-898a-33c7-8c934e51fda5", 
"OpaqueRef:deda1083-0aed-8cb1-9625-94075edd0b7c", "OpaqueRef:1b9201cf-dbbc-e3ce-fb47-0cc21947ab20", "OpaqueRef:1f194b30-262e-e8db-6ac3-7a65c5dda95c", "OpaqueRef:85b61bd5-bd77-ad41-f290-c5ee60e10428", "OpaqueRef:cd7e04f5-b6ec-63a0-30fb-ebf8c10e1f7a", "OpaqueRef:3015da21-a7e8-ab20-7825-fa7cca9e4fe4", "OpaqueRef:b960425b-85b4-35aa-c5c8-e48816443458", "OpaqueRef:bac1124b-fc1e-86a7-e42b-3d7d4794133d", "OpaqueRef:4f6b2644-d774-fc54-fe16-84968f4f3dac", "OpaqueRef:300c48d8-7521-1518-2cbe-a71923599ae6", "OpaqueRef:34d35616-d126-6580-8137-21d99175a654" ], "cpu_info": { "cpu_count": "16", "socket_count": "1", "threads_per_core": "2", "vendor": "GenuineIntel", "speed": "3096.024", "modelname": "Intel(R) Xeon(R) w3-2435", "family": "6", "model": "143", "stepping": "8", "flags": "fpu de tsc msr pae mce cx8 apic sep mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid pni pclmulqdq monitor est ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 bmi2 erms rtm avx512f avx512dq rdseed adx avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 avx512vbmi avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote arch_capabilities", "features_pv": "1fc9cbf5-f6f83203-2991cbf5-00000123-00000007-f1af0b39-1a405f42-00000100-00001000-ac014c10-00001c30-00000000-00000000-00000001-00000000-00000000-1c0ae167-00000000-00000000-00000000-00000000-00000000", "features_hvm": "1fcbfbff-f7fa3223-2d93fbff-00000523-0000000f-f1bf0fbb-9a405f4e-00000100-00101000-bc014c10-00001c30-00000000-00000000-00000017-00000000-00000000-1c0ae167-00000000-00000000-00000000-00000000-00000000", "features_hvm_host": 
"1fcbfbff-f7fa3203-2c100800-00000121-0000000f-f1bf0fab-82405f4e-00000000-00101000-bc004410-00001c30-00000000-00000000-00000017-00000000-00000000-0c08e163-00000000-00000000-00000000-00000000-00000000", "features_pv_host": "1fc9cbf5-f6d83203-28100800-00000121-00000007-f1af0b29-02405f42-00000000-00001000-ac004410-00001c30-00000000-00000000-00000001-00000000-00000000-0c08e163-00000000-00000000-00000000-00000000-00000000" }, "hostname": "VMH01", "address": "10.0.10.11", "metrics": "OpaqueRef:6aef20de-70f3-ae80-77bf-8649cbb1bb13", "license_params": { "restrict_vswitch_controller": "false", "restrict_lab": "false", "restrict_stage": "false", "restrict_storagelink": "false", "restrict_storagelink_site_recovery": "false", "restrict_web_selfservice": "false", "restrict_web_selfservice_manager": "false", "restrict_hotfix_apply": "false", "restrict_export_resource_data": "false", "restrict_read_caching": "false", "restrict_cifs": "false", "restrict_health_check": "false", "restrict_xcm": "false", "restrict_vm_memory_introspection": "false", "restrict_batch_hotfix_apply": "false", "restrict_management_on_vlan": "false", "restrict_ws_proxy": "false", "restrict_cloud_management": "false", "restrict_vtpm": "false", "restrict_nrpe": "false", "restrict_vlan": "false", "restrict_qos": "false", "restrict_pool_attached_storage": "false", "restrict_netapp": "false", "restrict_equalogic": "false", "restrict_pooling": "false", "enable_xha": "true", "restrict_marathon": "false", "restrict_email_alerting": "false", "restrict_historical_performance": "false", "restrict_wlb": "false", "restrict_rbac": "false", "restrict_dmc": "false", "restrict_checkpoint": "false", "restrict_cpu_masking": "false", "restrict_connection": "false", "platform_filter": "false", "regular_nag_dialog": "false", "restrict_vmpr": "false", "restrict_vmss": "false", "restrict_intellicache": "false", "restrict_gpu": "false", "restrict_dr": "false", "restrict_vif_locking": "false", "restrict_storage_xen_motion": "false", 
"restrict_vgpu": "false", "restrict_integrated_gpu_passthrough": "false", "restrict_vss": "false", "restrict_guest_agent_auto_update": "false", "restrict_pci_device_for_auto_update": "false", "restrict_xen_motion": "false", "restrict_guest_ip_setting": "false", "restrict_ad": "false", "restrict_nested_virt": "false", "restrict_live_patching": "false", "restrict_set_vcpus_number_live": "false", "restrict_pvs_proxy": "false", "restrict_igmp_snooping": "false", "restrict_rpu": "false", "restrict_pool_size": "false", "restrict_cbt": "false", "restrict_usb_passthrough": "false", "restrict_network_sriov": "false", "restrict_corosync": "true", "restrict_zstd_export": "false", "restrict_pool_secret_rotation": "false", "restrict_certificate_verification": "false", "restrict_updates": "false", "restrict_internal_repo_access": "false", "restrict_vm_groups": "false" }, "ha_statefiles": [], "ha_network_peers": [], "blobs": {}, "tags": [], "external_auth_type": "", "external_auth_service_name": "", "external_auth_configuration": {}, "edition": "xcp-ng", "license_server": { "address": "localhost", "port": "27000" }, "bios_strings": { "bios-vendor": "Dell Inc.", "bios-version": "2.4.1", "system-manufacturer": "Dell Inc.", "system-product-name": "Precision 5860 Tower", "system-version": "", "system-serial-number": "DT2YP54", "baseboard-manufacturer": "Dell Inc.", "baseboard-product-name": "097FW8", "baseboard-version": "A01", "baseboard-serial-number": "/DT2YP54/CNFCW0045D00LF/", "oem-1": "Xen", "oem-2": "MS_VM_CERT/SHA1/bdbeb6e0a816d43fa6d3fe8aaef04c2bad9d3e3d", "oem-3": "Dell System", "oem-4": "1[0A3C]", "oem-5": "3[1.0]", "oem-6": "12[www.dell.com]", "oem-7": "14[1]", "oem-8": "15[3]", "oem-9": "27[30056667592]", "hp-rombios": "" }, "power_on_mode": "", "power_on_config": {}, "local_cache_sr": "OpaqueRef:c72bf8c5-573e-6778-2d37-1da8ed3897b4", "chipset_info": { "iommu": "true" }, "PCIs": [ "OpaqueRef:f3a35d1a-19c3-a313-4611-1a7788c0896d", 
"OpaqueRef:cbd7fe1c-2cc2-1eee-d59d-223ff78f9277", "OpaqueRef:b07528dd-f7d2-42f3-b1f5-228ea084d02f", "OpaqueRef:a01436df-ae6e-bece-4f98-512d594f6e9d", "OpaqueRef:9cd31e03-1754-baa8-dee9-d607732f5482", "OpaqueRef:9941a915-e0a3-51f8-06fb-b49662eeda73", "OpaqueRef:9131387f-2fc7-b01e-4586-10d8bfc1bbce", "OpaqueRef:7a225dd7-1ef2-d91f-8c43-e246f36ccd07", "OpaqueRef:6c4df35d-29ca-2ac8-41c4-ba1898b080a4", "OpaqueRef:4c3d4721-c3f0-8191-4141-cfa5cf185330", "OpaqueRef:30fcb852-1940-94c1-5030-291e2333cf52", "OpaqueRef:2bdbbb5e-3486-2c94-9442-b435e4005fd5" ], "PGPUs": [ "OpaqueRef:7ad0a91f-a0bf-e8e0-862b-8e258c3d062b" ], "PUSBs": [], "ssl_legacy": false, "guest_VCPUs_params": {}, "display": "enabled", "virtual_hardware_platform_versions": [ 0, 1, 2 ], "control_domain": "OpaqueRef:7af5d282-c912-cffe-868c-2c7f439581ef", "updates_requiring_reboot": [], "features": [], "iscsi_iqn": "iqn.2024-10.com.example:6d765dba", "multipathing": false, "uefi_certificates": "L3Vzci9zaGFyZS92YXJzdG9yZWQvUEsuYXV0aAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NDQAMDAwMDAwMAAwMDAwMDAwADAwMDAwMDAzNjc3ADE0NjEwNTIyMjc2ADAwMTE3MjcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADoBwQTEQcJAAAAAAAAAAAAkAQAAAAC8Q6d0q9K32juSYqpNH03VmWnMIIEdAYJKoZIhvcNAQcCoIIEZTCCBGECAQExDzANBglghkgBZQMEAgEFADALBgkqhkiG9w0BBwGgggLzMIIC7zCCAdegAwIBAgIJAIDCz0kJ/Vi6MA0GCSqGSIb3DQEBCwUAMA0xCzAJBgNVBAMMAlBLMCAXDTI0MDQxOTE3MDcwOVoYDzIxMjQwMzI2MTcwNzA5WjANMQswCQYDVQQDDAJQSzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANJ0SX/t3u0EMN25WmC7VoS89zDcC7XGyG9xcpNzFa7R5W
5emDdZZPGP4LpPPpMV5jEUtzrx/YtV6oGAOC8zsTpRREMtFSHAVzo5lkcSevC94DCvOgLC1x0DXh9nCkVdkbhVzpCDGyTMZsAXSZnfwFxjnU5tJ8MB/wDla0qFCnANcJ2ZbQ8djY0zGXkF9cY2zWtiUpoxXk26Hd1YsYo2wzGrZpHoKjb3ApFCeNqfOPN/5M+oLLHhcfX3jdg4AqmuRkAfZyRvojQfMoDnEdhWGXxxj12kHzJ2mPLiCzepE4YNPMEjAjGP6BZytDJFmz7219QWPOVsGPXWseULtpE9Fr8CAwEAAaNQME4wHQYDVR0OBBYEFKPphYGYn39j2SAc9G+n2YyjsZjoMB8GA1UdIwQYMBaAFKPphYGYn39j2SAc9G+n2YyjsZjoMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGn142+V+r2s3bVjdmGVHyvZQzD3FVMWJN/nR+6V8MLdhdxYCK3VvLFhDtLpSWFMYqX0LHoUTx99IY+SNrlHwJuF/paaGFieWHHCdYd5DAER+F9pO/wawpuhQlrMpFIuOAVRWg9QTJtIKz5c8M01y7BXVmVK+Wo6DhkNUeknwTLsKQqRfCLKJCI/xNOiXT1NEdu+pvE4EeXvXH53Ya2spULjYzlCcvlnylj8SXGrAkN9K4L3c4iKXFfFfMSpS5WssbHMi/WaF5n0J4PBfPRvOegtv5Yjtl64745nu+fXDynKLBcwDxcsdcLGtBo3FIwLJ6aLPfpKkgNwc1vF+IpYVDkxggFFMIIBQQIBATAaMA0xCzAJBgNVBAMMAlBLAgkAgMLPSQn9WLowDQYJYIZIAWUDBAIBBQAwDQYJKoZIhvcNAQEBBQAEggEAXe1Bg8qgiZoGOoQdB3w0OMCP+0rWqcsLngEGoF3iM+uN5GvJ+nUeVW/dFfJZtByyscvxES1rz103TWHJbl/adiDSpd8mt2GtgXINare5TCtSH1IEJ8VcqLpFbBi+Bub4OPcn3bRIdHk/kXXPwFISQ5dG7l3tDA9ELkk8j50BFNqXDUrqA43M61FlSyLFlUoN93F41n0ye1ZGxW5aB/PVkBDH7aotzVWW52ix4trZkATByhmt2W3Cws2RIynnlUSIMQY/S0DZt+syMQZVLoes80RIdCRlsX4Fb1FSWa1IUhFyvTjpXi1dw1+MLIJTZQLLLXN/c8wIcZuT8EtGLOW0Y6FZwKXklKdKh7WrFVwr8HIfAwAAAAAAAAMDAAA1xazAyCVGZJJbXdfQsvWqMIIC7zCCAdegAwIBAgIJAIDCz0kJ/Vi6MA0GCSqGSIb3DQEBCwUAMA0xCzAJBgNVBAMMAlBLMCAXDTI0MDQxOTE3MDcwOVoYDzIxMjQwMzI2MTcwNzA5WjANMQswCQYDVQQDDAJQSzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANJ0SX/t3u0EMN25WmC7VoS89zDcC7XGyG9xcpNzFa7R5W5emDdZZPGP4LpPPpMV5jEUtzrx/YtV6oGAOC8zsTpRREMtFSHAVzo5lkcSevC94DCvOgLC1x0DXh9nCkVdkbhVzpCDGyTMZsAXSZnfwFxjnU5tJ8MB/wDla0qFCnANcJ2ZbQ8djY0zGXkF9cY2zWtiUpoxXk26Hd1YsYo2wzGrZpHoKjb3ApFCeNqfOPN/5M+oLLHhcfX3jdg4AqmuRkAfZyRvojQfMoDnEdhWGXxxj12kHzJ2mPLiCzepE4YNPMEjAjGP6BZytDJFmz7219QWPOVsGPXWseULtpE9Fr8CAwEAAaNQME4wHQYDVR0OBBYEFKPphYGYn39j2SAc9G+n2YyjsZjoMB8GA1UdIwQYMBaAFKPphYGYn39j2SAc9G+n2YyjsZjoMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGn142+V+r2s3bVjdmGVHyvZQzD3FVMWJN/nR+6V8MLdhdxYCK3VvLFhDt
LpSWFMYqX0LHoUTx99IY+SNrlHwJuF/paaGFieWHHCdYd5DAER+F9pO/wawpuhQlrMpFIuOAVRWg9QTJtIKz5c8M01y7BXVmVK+Wo6DhkNUeknwTLsKQqRfCLKJCI/xNOiXT1NEdu+pvE4EeXvXH53Ya2spULjYzlCcvlnylj8SXGrAkN9K4L3c4iKXFfFfMSpS5WssbHMi/WaF5n0J4PBfPRvOegtv5Yjtl64745nu+fXDynKLBcwDxcsdcLGtBo3FIwLJ6aLPfpKkgNwc1vF+IpYVDkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=", "certificates": [ "OpaqueRef:753a9b7d-215e-6b07-ed18-2cf4da965823", "OpaqueRef:72f838a9-b683-d6b0-2707-a633eacbe58d" ], "editions": [ "xcp-ng" ], "pending_guidances": [], "tls_verification_enabled": true, "last_software_update": "19700101T00:00:00Z", 
"https_only": false, "latest_synced_updates_applied": "unknown", "numa_affinity_policy": "default_policy", "pending_guidances_recommended": [], "pending_guidances_full": [], "last_update_hash": "" }, "SR": { "uuid": "921094bf-12a3-78a2-f470-a2b6ad342647", "name_label": "DS2-NFS", "name_description": "Datastore #2 over NFS", "allowed_operations": [ "vdi_enable_cbt", "vdi_list_changed_blocks", "unplug", "plug", "pbd_create", "vdi_disable_cbt", "update", "pbd_destroy", "vdi_resize", "vdi_clone", "vdi_data_destroy", "scan", "vdi_snapshot", "vdi_mirror", "vdi_create", "vdi_destroy", "vdi_set_on_boot" ], "current_operations": {}, "VDIs": [ "OpaqueRef:ae6a3380-9948-fbb9-6023-80e5e45e97d4", "OpaqueRef:ff80a7f0-9346-8f9b-5334-cec19c53bd36", "OpaqueRef:fd1c8a18-fff8-1b40-ee8e-e2fb60e0f1e7", "OpaqueRef:fa2541e1-667a-d138-fedd-df054eb87229", "OpaqueRef:f89fd341-0f08-0a62-f48b-32fc96aff45f", "OpaqueRef:f482e185-139b-ee54-c406-72e6b87a009d", "OpaqueRef:f13d394f-6f10-970a-ef4d-34bdb01b9b16", "OpaqueRef:ea490b53-6fd0-ff02-7060-febf42610ca5", "OpaqueRef:e68eb2bf-1aa8-5096-8e5c-6750d9ead875", "OpaqueRef:e3585d3f-13a4-3707-51f2-3888455d5d09", "OpaqueRef:e2da7061-7df3-d57d-7e97-3f776f99e1f7", "OpaqueRef:e24553b4-4fd2-e0dc-b224-e9c44718ccd3", "OpaqueRef:e200cb5d-272d-e8ee-8fa7-ab5ecfc77d89", "OpaqueRef:e118a788-51c4-2095-e431-0a64495e552a", "OpaqueRef:e07886f7-4863-209e-f76a-793a7aae43b3", "OpaqueRef:e070d1ae-a4de-9ef7-0150-580243e03817", "OpaqueRef:dc4c16d7-7007-1d6c-2d77-9607d23be141", "OpaqueRef:d8cb9eaf-d4e3-6712-c8c0-eb1df5e91e08", "OpaqueRef:d8680065-6be7-7727-1282-714c374e6785", "OpaqueRef:d5dac6ef-8425-fbdb-0117-326823b9477f", "OpaqueRef:d4e0a11b-492e-6da4-b790-25894c36182d", "OpaqueRef:d2578d9b-d37a-4ae4-93ff-964c039b46b2", "OpaqueRef:cffbc408-b8ca-a540-7744-d9320937307a", "OpaqueRef:c7d617d7-e614-e822-978a-22342c56724b", "OpaqueRef:c77d6a65-3cd1-f653-2626-00ea2536faee", "OpaqueRef:c7248ef8-1dcd-fbf5-cf05-a32326d9161b", "OpaqueRef:bd5d55d0-8885-bf08-68f4-b852e02b4707", 
"OpaqueRef:bb1550af-54ad-d38b-d6e1-365c049f1687", "OpaqueRef:ba762abc-26d8-2714-a0da-ff1c5acde266", "OpaqueRef:b66f43c7-0d7d-48d1-3834-11fb89e3c2f0", "OpaqueRef:af4dce96-000f-8a65-d99a-3e4ecf0c9423", "OpaqueRef:aebc94d1-553c-503e-be48-b8d0b3edb70b", "OpaqueRef:ab9fdd6a-fb16-6838-719f-a1d89cbcecd7", "OpaqueRef:a93193d5-b282-e925-a7c2-4a6a4b4c536f", "OpaqueRef:a3336727-b8fd-a4db-4a70-4073b9281323", "OpaqueRef:a2cb2583-0c1b-9c62-e25d-c0a27de663f7", "OpaqueRef:a2bc8c53-00c6-e388-ae91-8efea64694aa", "OpaqueRef:a1a6cbdf-099f-a9e4-6ad0-9d6c1ed31adc", "OpaqueRef:a0b9c17c-5065-7898-eb82-9d5fd73ae1dd", "OpaqueRef:9e56ec14-abe0-b38c-6fa3-5dcf3bab1f93", "OpaqueRef:9c73d1b9-dc2f-6fec-0d0f-277012959546", "OpaqueRef:937228ba-705e-064c-f749-868083db8ab4", "OpaqueRef:8e4b4c40-74e6-fd5d-1b99-52fa135a9488", "OpaqueRef:8c83c729-0702-1ca3-b420-868390402d7f", "OpaqueRef:8c564285-2cb2-67e6-cb5b-6303bf526f24", "OpaqueRef:8acde1b1-8a5c-93cd-3172-8f5737ec49bc", "OpaqueRef:8861137f-3c42-7adb-6038-718fae177ccc", "OpaqueRef:87c20687-8c5d-7ed1-283e-080b40b055fd", "OpaqueRef:87716617-3870-780e-91cc-77c7469581aa", "OpaqueRef:84d4f159-d1a9-78be-939d-f71222a935c3", "OpaqueRef:82e6c355-0145-54ac-7a94-9902a93321bd", "OpaqueRef:811cb282-f2be-8297-2ba4-40b29889a3c2", "OpaqueRef:7d18351e-89de-61b7-c557-72c32c3a724d", "OpaqueRef:7887ad7b-506a-129f-a431-8f15e8d7efd8", "OpaqueRef:730a1bb7-293d-5332-53d1-b1c81690061c", "OpaqueRef:72fba226-8564-9c9a-8c76-f952518c176c", "OpaqueRef:6ac0a7d7-abbd-a974-5220-2fbfe32b347c", "OpaqueRef:65fe775b-af8a-ef5d-ca80-731c16c98a41", "OpaqueRef:60822c69-4e9f-67b2-9b6d-244ce979d9e5", "OpaqueRef:5dee4a21-528e-bd16-e434-05e32af1ea0c", "OpaqueRef:5bd4b666-87e7-bf06-c027-63d927d9fcce", "OpaqueRef:5aa12303-0dd3-13bd-28cb-ad40c0d1ecad", "OpaqueRef:5a3aaec4-07c8-81ae-ae62-a237bf9712ca", "OpaqueRef:4ed22ec4-5ed3-4a40-f138-746151ab719b", "OpaqueRef:4be586a8-22c6-056e-4cfa-235ce6fe9ce4", "OpaqueRef:498d8c62-d6a8-4882-d3fa-4aecaa48ccb2", "OpaqueRef:45a4be38-fadc-e616-711a-d3c9a7b264b4", 
"OpaqueRef:44458616-3e25-bb51-4aa8-a86ee3c54cff", "OpaqueRef:41e180d5-ac48-4121-769b-4e93a92d1707", "OpaqueRef:40f45c8d-0447-ec0f-52a1-42196f43e8a5", "OpaqueRef:3fbb39b3-68e3-2cab-6d2b-9e69eb6ea595", "OpaqueRef:3c08a0b7-d0d2-7551-a8d1-8639141c9762", "OpaqueRef:3b730d7b-e633-be54-9f0c-e703ffb1e859", "OpaqueRef:3b6db74a-c71b-bf63-0b01-15a17348c898", "OpaqueRef:38e7925a-ed7d-48f2-0226-17519af8e876", "OpaqueRef:3677ab4a-db31-60c8-f87a-877466f77c4e", "OpaqueRef:331a6fd8-d127-db18-4336-0749e927a1a3", "OpaqueRef:31bb7228-6f1a-3c09-fff8-ff387d6f5d2a", "OpaqueRef:296543b7-8032-622c-943e-985f7b7bd7ee", "OpaqueRef:2936310b-4036-b7cb-9ef2-0529f06f52cd", "OpaqueRef:28dc831c-f9e2-63b9-fca5-6729d392bd67", "OpaqueRef:286180b0-c8ea-261f-944e-b6cc3c03b6b4", "OpaqueRef:26749d46-d2bd-c340-dac9-5b190457631c", "OpaqueRef:24dd74c5-b5f9-bfe8-d51f-11e730fafe56", "OpaqueRef:209d97b5-cd5a-faad-b422-8610022311da", "OpaqueRef:1cc6360d-5a4c-96e7-2a27-dfcacd48b933", "OpaqueRef:1bcc33d0-112d-e0e7-b965-5ae1caadc93b", "OpaqueRef:1a43cad2-836b-0b62-84ec-a0b1b8b247da", "OpaqueRef:1a2c104a-56ef-87c0-3660-da3a8eda39ea", "OpaqueRef:182c68e0-d5b4-7a13-d244-4ec597f4096c", "OpaqueRef:15ca8ae2-a5a3-146d-1086-310bb438060f", "OpaqueRef:14eddb45-2dbe-b047-c507-b2d833062f5a", "OpaqueRef:135174ad-9903-5822-6d40-9f9e5df3fd0e", "OpaqueRef:11b3ebcf-63fd-d6da-758a-295d049f1e51", "OpaqueRef:103bb7b6-a81f-f0e4-3ff4-181ac161068d", "OpaqueRef:08fd21e1-4ed0-a382-556f-22f6d53343b8", "OpaqueRef:087db81c-d77c-5bcf-330c-a59dcb9d0fbd", "OpaqueRef:082baac1-c937-e809-7196-ed741f7bcaec", "OpaqueRef:051d3a89-2b83-7fc5-fee0-09b612dd04b3", "OpaqueRef:044a06f1-5d3b-5898-0363-ecc14066cf67", "OpaqueRef:01f19765-ed16-778b-4a39-f41831663e91", "OpaqueRef:00b68d6d-6d4c-f0c0-3924-d51d0ab36679" ], "PBDs": [ "OpaqueRef:fb6aa82c-ae09-bc48-2f25-14aa700fd6c7", "OpaqueRef:7a9f7eb2-b380-b3ea-d121-6e737af1bd99", "OpaqueRef:5bda504e-5502-5ca3-7931-d1c66a8097c3" ], "virtual_allocation": 15192373067776, "physical_utilisation": 437784510464, 
"physical_size": 8001427603456, "type": "nfs", "content_type": "user", "shared": true, "other_config": { "auto-scan": "true" }, "tags": [ "Shared" ], "sm_config": {}, "blobs": {}, "local_cache_enabled": false, "introduced_by": "OpaqueRef:NULL", "clustered": false, "is_tools_sr": false }, "message": "IMPORT_ERROR(INTERNAL_ERROR: [ Tar.Header.Checksum_mismatch ])", "name": "XapiError", "stack": "XapiError: IMPORT_ERROR(INTERNAL_ERROR: [ Tar.Header.Checksum_mismatch ]) at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12) at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29) at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1068:24) at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1102:14 at Array.forEach (<anonymous>) at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1092:12) at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1265:14)" }
-
RE: Ldap plugin : filter to allow only specific group to login ?
@Chico008 Seems like you're duplicating your inquiries. As I suggested in the previous thread, I think your memberOf filter is missing the full DN of the group.
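For example (hypothetical DNs - substitute your own directory layout), the filter needs the group's full distinguished name rather than just its name:

```
(memberOf=CN=XO-Admins,OU=Groups,DC=example,DC=com)
```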