@olivierlambert You got it, I'll do that. Thank you.
-
RE: What metadata restore really do?
@olivierlambert Understood; however, I'm failing to see what part of my post was irrelevant to the topic at hand. I was merely sharing what I'd experienced and providing context.
I thought this was the kind of feedback expected of community members who want to see this project flourish?
-
RE: What metadata restore really do?
@Tristis-Oris Just wanted to add my two cents: I, too, ran into a similar issue just a couple of days ago, and it feels like this could be a bug.
My scenario:
I had a fully working three-node pool. One of the hosts had 12 CPUs, whereas the other two had 16 each, so I would occasionally run into errors with the load balancer trying to migrate VMs (usually something like "VM Lacks Feature"). So I decided to replace that host with another one that was identical to the other two hosts.
I did what I thought was logical: detached that host from the pool, installed XCP-ng 8.3 on the new host, gave it the same hostname and IP address, used the same root password, then powered it up. Now, without my doing anything at all, when I went into XO and viewed the Pools page I saw the new host in a pool by itself (which was strange, because I wasn't expecting that). I was expecting to have to manually add a new server to XO - but hey, perhaps there is some kind of discovery going on.
Anyway, I attempted to add this new host to the pool but was unsuccessful. I was met with an error about the SDN certificate already existing. Unfortunately, I didn't write it down, but ultimately the solution for me was to run the same
xe pool-uninstall-ca-certificate name=sdn-controller-ca.pem
command on the new host. Anyone from Vates care to comment?
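For anyone who hits the same wall, the cleanup sequence looked roughly like this (a sketch from memory; the certificate name is from my environment, the placeholders are illustrative, and the listing subcommand may vary by XCP-ng release):

```shell
# On the new host, check which CA certificates are installed
# (the stale SDN controller cert should show up here).
xe certificate-list

# Remove the stale SDN controller CA certificate by name.
xe pool-uninstall-ca-certificate name=sdn-controller-ca.pem

# Then retry the join from the new host.
xe pool-join master-address=<master-ip> master-username=root master-password=<password>
```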
-
RE: XCP-ng Center 25.04 Released
@JBlessing At the bottom of the release page (https://github.com/xcp-ng/xenadmin/releases/tag/v25.04) are the links to download either the MSI installer or a Zip file containing the entire application (if you want a portable package).
-
RE: XO6 Possible Issue with Lock Sidebar Button
@lsouai-vates Duly noted.
If and when you do get a moment, could you have a look at https://xcp-ng.org/forum/post/94297 and offer any guidance on how I can stop the debug log entries, and also the double timestamp entries, in the audit.log file?
Thank you.
-
RE: XO6 Possible Issue with Lock Sidebar Button
@lsouai-vates Actually, I do. I don't want to derail this thread - mind if I DM you? I promise not to monopolize your time.
-
RE: XO6 Possible Issue with Lock Sidebar Button
@lsouai-vates No worries, I think we can mark this issue as resolved (or close it altogether).
While capturing the screenshots for you, I noticed that I was holding the mouse cursor over the sidebar - thus preventing it from collapsing. Once I clicked anywhere else on the page, the sidebar collapsed accordingly.
Sorry for causing any panic - the UI looks like it's behaving as designed. Thanks again for engaging so quickly.
-
XO6 Possible Issue with Lock Sidebar Button
Good-day Folks,
XOCE Version: Commit 2effd85
Wondering if anyone else is seeing this behavior on XO6 at Commit 2effd85. It looks like if you click the Lock Sidebar button, the sidebar does indeed lock; however, the whole page still behaves as if you'd clicked the Close Sidebar button.
I created an animated GIF to share, but that file type isn't allowed here, so I'm unable to attach it directly. I uploaded it to my Google Photos account instead; here's a link: https://photos.app.goo.gl/RVT1CE19tPv8SbF37
-
RE: XOA vs XO vs Backup feature
@abudef I know your question was directed at the Vates team; however, I'd like to chime in... if that's OK?
As a Windows sysadmin myself, I had to eat the proverbial humble pie and eventually learn Linux to take advantage of a lot of open-source software. It goes without saying that you'll have to do the same here.
Now, if you really wanted to, you could install XO from sources using ronivay's script in a Debian VM running on VirtualBox or Hyper-V on your Windows machine, then use that to manage your pool.
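If it helps, the rough sequence is something like the following (from memory - double-check the README in ronivay's XenOrchestraInstallerUpdater repo, as the file names here are my best recollection):

```shell
# Inside the Debian VM:
git clone https://github.com/ronivay/XenOrchestraInstallerUpdater.git
cd XenOrchestraInstallerUpdater
# Copy and adjust the sample config, then run the installer.
cp sample.xo-install.cfg xo-install.cfg
sudo ./xo-install.sh
```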
-
RE: Can't get slave out of maintenance mode after yum updates
@rustylh Not a pro here, but in my experience thus far, most weird issues are solved with a Toolstack Restart. Have you tried that already? If not, do so from whichever node is currently the master, then report back.
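For reference, the restart can also be kicked off from the host's own CLI (standard XCP-ng tooling; it should be safe for running VMs, though XAPI is briefly unavailable while it restarts):

```shell
# Run on the pool master first, then on slaves if needed.
xe-toolstack-restart
```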
-
RE: Shipping System Logs to a Remote Syslog Server
@ThasianXi Thanks, I will check out the resources you've shared. I had already figured out the XCP-ng side and wanted to get the XO side sorted as well.
Thanks again.
-
Shipping System Logs to a Remote Syslog Server
Good-day Folks,
Has anyone been successful in shipping off Xen Orchestra logs to a remote syslog server?
I'm in the process of configuring the XCP-ng hosts to forward logs to a Logstash server for ingestion into Elasticsearch, as part of an effort to demonstrate compliance with RMF and the DISA STIGs (similar to VMware ESXi + vCenter).
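For anyone curious, the host-side forwarding I set up follows the standard XAPI syslog mechanism (a sketch; the UUID and IP are placeholders for your environment):

```shell
# Point the host's syslog at the remote collector (Logstash in my case)...
xe host-param-set uuid=<host-uuid> logging:syslog_destination=<logstash-ip>
# ...then tell the host to apply the new syslog configuration.
xe host-syslog-reconfigure host-uuid=<host-uuid>
```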
-
RE: VM Console Screen Suddenly Inaccessible
Hmm, interesting observation - this could potentially be a failed-migration issue.
I can confirm that none of my users in the lab are migrating any VMs - this particular one included. However, I do have the Load Balancer plugin running, and it could be the source of a migration that failed. How can I confirm this in the logs? I mean, what should I be looking for?
-
VM Console Screen Suddenly Inaccessible
Good-day Folks,
I trust you're all doing well. For the past couple of days, I've been noticing that the console of one of my VMs (DC02) becomes inaccessible (from within Xen Orchestra); however, during this state I am able to use RDP to remotely access the VM.
To begin troubleshooting, I attempted to reboot the VM from Xen Orchestra's VM controls menu. This failed, and I was greeted by the following error message:
INTERNAL_ERROR: Object with type VM and id 9fa84df4-3912-5cbf-09a6-3374dd27eead/config does not exist in xenopsd
Next, I attempted to force a VM Reboot or Shutdown from the VM's Advanced tab, and was met with the same error message.
The Temporary Solution (Workaround)
This is what got my VM back into a working/running state - though I'm not sure if the order is important:
- First, I ran Rescan all disks on the SR where the VDI of the VM was located.
- Second, I restarted the Toolstack of the host that the VM was running on. I immediately noticed that the previous shutdown attempt had taken effect, but I was now unable to restart the VM. All attempts to start the VM resulted in the error:
INTERNAL_ERROR: xenopsd internal error: Storage_error ([S(Illegal_transition);[[S(Attached);S(RO)];[S(Attached);S(RW)]]])
- Last, I restarted the Toolstack of the master host. Once Xen Orchestra reconnected, I was able to start up the VM without any issues.
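For anyone wanting the CLI equivalents of those steps (a sketch; the SR UUID is a placeholder, and the toolstack restarts are run locally on each host):

```shell
# Step 1: rescan the SR holding the VM's VDI.
xe sr-scan uuid=<sr-uuid>
# Steps 2 and 3: restart the toolstack on the VM's host, then on the master.
# Running VMs should not be affected by a toolstack restart.
xe-toolstack-restart
```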
Unfortunately, I am unable to tell when the issue is occurring or what conditions lead to the VM getting into this state. Since this is a lab environment being used for a Proof of Concept (POC), we're in the lab sporadically. I've observed this issue twice now, generally in the mornings.
Anyway, I thought I'd report it to the community to see if anyone has encountered a similar issue before and could offer some hints on a permanent solution. Thanks.
My Environment:
Hosts: 3-node XCP-ng v8.3.0
XOA: v5.102.1 Build: 20250220 (air-gapped)
-
RE: Email to Sales Team Bouncing Back as SPAM
Thanks to both of you - I'll adjust accordingly.
The strange thing is that I have an open ticket, and the way I've always corresponded with the team is by simply replying to that email - it's always gotten through. This is the first time I've seen a bounce (which suggests a change somewhere).
No worries though, I'll just log into the helpdesk portal and add a comment directly to the ticket.
-
Email to Sales Team Bouncing Back as SPAM
Hi Team,
I just sent an email to the sales team from my work address, and it bounced back as SPAM. Are there any email issues going on?
-
Storage Repository Maintenance Mode When XO VM's VDI is Remote
Good-day Folks,
A few days ago, I got myself into a little jam while trying to do what I thought was the proper way of handling the reboot of the only storage server in my test lab. Now, I managed to get myself out of trouble but I'm here for guidance on how I could've done things differently. So, here's what happened.
For those who don't know, I'm running a small test lab where I'm testing out the Vates VMS stack as a viable drop-in replacement for VMware's VCF stack. Unfortunately, I don't have a lot of funding, so I don't have a lot of hardware. As such, I only have 4 physical machines available to dedicate as servers. I used three of them as XCP-ng hosts and turned the last one into an Active Directory Domain Controller, DHCP server, and file server (SMB/CIFS and NFS). I also have an 8TB external HDD attached to this same box, which I'm sharing out over NFS and using as Remotes (to test the backup features of XO). This entire setup isn't ideal, but hey, it's what I've got - and it works. Actually, the fact that the Vates VMS stack works at all in such conditions is a huge testament to the resiliency of the solution. Anyway, I digress; back to the issue at hand.
Given the above setup, a need arose to reboot this server (let's call it DC01). After reading through this documentation - https://docs.xen-orchestra.com/manage_infrastructure#maintenance-mode-1 - I decided that it was a good idea to place the SRs into Maintenance Mode before doing the reboot. I had done this before in another environment (at my church) and never ran into the problems I'm about to describe (however, in hindsight, I think the difference may have been that the VDI of the XO VM was local to the host it was running on).
When I clicked the button to enable maintenance mode, it gave me the usual warning that running VMs would be halted, so I said OK to proceed. What I didn't realize was that, because the XO appliance was running with its VDI on the SR I had just put into maintenance mode, I would immediately lose connectivity with it, and it would subsequently refuse to start. I had a backup plan: use XCP-ng Center (vNext) to connect to the pool master and see if I could start the XOA VM, hoping that maybe I'd get prompted to move the VDI - but this never happened. The startup attempts kept failing, citing a timeout error. So, running out of ideas, I simply decided to reboot all three hosts, hoping that once they came back up they would reconnect the SRs and I would then be able to start XOA. Unfortunately for me, the reboots took a very long time to complete - so long that I left the lab (around 9pm) and returned around 2am. I'm not sure exactly how long the reboots took, but when I got back to the lab all hosts were back up and no SRs were connected.
At this point, my thinking was that the SRs didn't reconnect on each host likely because XOA wasn't running to instruct them to (I don't know if this is entirely accurate). I googled around and found that I could re-introduce the SRs directly on each host by using the
xe pbd-plug/unplug
commands. Strangely enough, while I was able to run those commands on the CLI of each host without any errors, the SR reconnected on only one host. It wasn't until I used XCP-ng Center (vNext) to perform a repair on the SR that things became clear: it showed that the SR was connected to Host #2 but not to Hosts #1 and #3. So I proceeded through the wizard, and it successfully repaired the connections. I was then able to start the XOA VM and got the lab back up and running.
So my ultimate question:
- When the VDI of the XOA or XOCE VM resides on an SR that's being targeted for maintenance mode enablement, what is the proper procedure?
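For completeness, the per-host re-plug attempt I described looked roughly like this (a sketch; the UUIDs are placeholders):

```shell
# Find the PBD linking this host to the SR.
xe pbd-list sr-uuid=<sr-uuid> host-uuid=<host-uuid>
# Re-plug it (unplug first if it is listed but stale).
xe pbd-unplug uuid=<pbd-uuid>
xe pbd-plug uuid=<pbd-uuid>
```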
Thanks in advance to anyone who reads through my long narration and then offers a response. You are very much appreciated!
-
RE: Rolling pool update failure: not enough PCPUs even though all should fit (dom0 culprit?)
I have seen this problem before in my test lab. Unfortunately, I didn't document it enough to report here. For me, the solution was also to simply power off the culprit VM to prevent the attempted migration.
In my mind, the RPU logic should use the current running state of VMs to determine the resources actually in use and which hosts can support them, since the move is only temporary. Then again, I'm not privy to all the factors that went into the decision to have it work the way it does. I'm sure there's a valid reason.