@DustyArmstrong that's super strange, I actually have the same setup at home: two HP Z240 machines running XCP-ng in a small pool.
xcp1 is always up and running, while xcp2 is powered down when I don't need it. Everything important runs on xcp1; maybe that's why I don't run into these issues.
OK, my syntax was outdated.
resize_rootfs: true
growpart:
  mode: auto
  devices: ['/dev/xvda3']
  ignore_growroot_disabled: false
runcmd:
  - pvresize /dev/xvda3
  - lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv || true
This one works.
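In case it helps anyone verify the result, a quick post-boot check (assuming the same Ubuntu LVM layout as above) would be something like:
# confirm the partition, PV and LV picked up the new space
lsblk /dev/xvda
sudo pvs && sudo lvs
# confirm the root filesystem now spans the full LV
df -h /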
final_message doesn't support any macro like %(uptime) or %(UPTIME).
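For what it's worth, I believe cloud-init's final_message uses its own $-style substitution (jinja-style on recent versions) rather than %(...); something along these lines might work, but double-check it against your cloud-init version:
final_message: "Cloud-init finished at $TIMESTAMP, up $UPTIME seconds"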
@florent Thanks, had to put the DFIR hat on.
May as well ask, since I've thought about a PR for this: would it be feasible/practical/desirable to allow this to be done from XO's UI? I don't know how much of an edge case my situation was, but being able to remove "other-config" data following a migration (e.g. you do what I did and want the VMs to start over independently on a new host) might be beneficial to others.
Obviously it could be quite destructive if used inappropriately, I imagine. Even just reporting those ghostly associations would be nice. Again, I'm not sure of your overall design ethos, so there may be good reasons why it's not a solid idea.
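For anyone else who ends up here, the manual route today is the xe CLI; roughly something like this (the UUID and key name are placeholders, so double-check which keys you actually want gone before removing anything):
# list the other-config entries on a VM
xe vm-param-get uuid=<vm-uuid> param-name=other-config
# remove one specific key
xe vm-param-remove uuid=<vm-uuid> param-name=other-config param-key=<key-name>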
My validation of this was not successful; I used the Quick Start PoC.
Pods eventually went into CrashLoopBackOff after ErrImagePull and ImagePullBackOff.
I created a GitHub token with these permissions: public_repo and read:packages. I also tried a token with broader permissions (that proved futile as well), but I figured it would require at least the aforementioned ones.
I have since uninstalled via the script but captured the following events from the controller and one of the node pods:
kgp -nkube-system | grep csi*
csi-xenorchestra-controller-748db9b45b-z26h6 1/3 CrashLoopBackOff 31 (2m31s ago) 77m
csi-xenorchestra-node-4jw9z 1/3 CrashLoopBackOff 18 (42s ago) 77m
csi-xenorchestra-node-7wcld 1/3 CrashLoopBackOff 18 (58s ago) 77m
csi-xenorchestra-node-8jrlq 1/3 CrashLoopBackOff 18 (34s ago) 77m
csi-xenorchestra-node-hqwjj 1/3 CrashLoopBackOff 18 (50s ago) 77m
Pod events:
csi-xenorchestra-controller-748db9b45b-z26h6
Normal BackOff 3m48s (x391 over 78m) kubelet Back-off pulling image "ghcr.io/vatesfr/xenorchestra-csi-driver:edge"
csi-xenorchestra-node-4jw9z
Normal BackOff 14m (x314 over 79m) kubelet Back-off pulling image "ghcr.io/vatesfr/xenorchestra-csi-driver:edge"
Warning BackOff 4m21s (x309 over 78m) kubelet Back-off restarting failed container node-driver-registrar in pod csi-xenorchestra-node-4jw9z_kube-system(b533c28b-1f28-488a-a31e-862117461964)
I can deploy again and capture more information if needed.
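For reference, the way I'd normally feed a GHCR token to the kubelet is a docker-registry pull secret along these lines (the secret name is just an example, and the deployment would still need to reference it via imagePullSecrets):
kubectl create secret docker-registry ghcr-pull \
  --docker-server=ghcr.io \
  --docker-username=<github-username> \
  --docker-password=<token> \
  -n kube-system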
@Danp No sir, I have not.
I have, however, exported the same VM in OVA format. It ended up being about 47 GiB, but I was able to import it into XOA without any issues, and the import took just under 10 minutes to complete.
@Chico008 Seems like you're duplicating your inquiries. As I suggested in the previous thread, I think your memberOf is missing the full DN of the group.
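In other words, the filter needs the complete DN (the values below are purely illustrative), not just the group name:
(memberOf=CN=xo-users,OU=Groups,DC=example,DC=com)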
@dinhngtu said in Windows VMGuest changing network cause the guest to crash:
This is a driver bug that we fixed in XCP-ng Windows tools v9.0.9030 but hasn't been integrated by Citrix yet. You can try it out if you're not running a production system.
Gosh, we are now officially faster than Citrix at detecting and fixing bugs, even in the Windows PV driver.
@flakpyro Yes, indeed, that's exactly what I was talking about. The new XO 6 interface will provide more information, but I am noting the missing bond type so I can discuss it with the XO team for a future feature.
@yzgulec said in How to deploy XO on ESXi:
I just used VMware Converter for V2V (seems more practical for me)
Installing from source or using an installation script from the community is also very straightforward.
Maybe 10 minutes' worth of setup for the OS, and then, at least with the script on my GitHub, it's a single-line installation.
@splastunov I realize this thread is old, but I think there is important info to keep connected to this thread for future readers.
The IP locking trick doesn't seem to prevent all traffic -- it only blocks traffic with an IP other than 255.255.255.255 (and possibly other addresses as well). That is, I can still successfully acquire a DHCP IP even when the IP locking mechanism is engaged. I think this is important for others to know, since it doesn't fully isolate VMs the way the OP wanted.
UPDATE: Setting the locking mode to "disabled" is what you want -- not "locked". Disabled will drop all traffic; locked simply checks if the set of IPs in the VM is permitted. Source: https://docs.xenserver.com/en-us/citrix-hypervisor/networking/manage.html#vif-locking-mode-states
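For reference, the xe equivalent of flipping that switch looks like this (the VIF UUID is a placeholder; per the page above, the valid modes are network_default, locked, unlocked and disabled):
# drop all traffic on the VIF
xe vif-param-set uuid=<vif-uuid> locking-mode=disabled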
@irtaza9
Sometimes when I have been tinkering with a host, I get a red triangle.
Then just pressing Enabled and then Disabled can get rid of the triangle and get the host back in business again.
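If the UI buttons don't do it, the same toggle from the CLI would be roughly (host UUID is a placeholder):
xe host-disable uuid=<host-uuid>
xe host-enable uuid=<host-uuid>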
@julien-f said in Enhancement suggestion: Filter showing VMs that don't have the agent installed:
XO does not support this for halted VMs but that could be added indeed.
Can you change the title of my thread to "Enhancement suggestion: Update 'ManagementAgentDetected?' to support halted VMs" ?
@yzgulec You will likely need to install the fix given in the following post for XOA to function correctly on vSphere --
https://xcp-ng.org/forum/post/50279