@olivierlambert Sorry, but it's still not overly clear.
If I create a VM with 4 vCPUs, then I expect it to have 4 vCPUs.
So what is the point of Max vCPUs? If I want more vCPUs, I just shut down the VM and change it to 8 or something.
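For anyone else wondering, the distinction shows up in the VM parameters on the host. This is just a sketch assuming the standard xe CLI on an XCP-ng host (`<vm-uuid>` is a placeholder):

```shell
# Show both vCPU parameters for a VM:
xe vm-param-get uuid=<vm-uuid> param-name=VCPUs-at-startup
xe vm-param-get uuid=<vm-uuid> param-name=VCPUs-max

# VCPUs-max is a ceiling and can only be changed while the VM is halted:
xe vm-param-set uuid=<vm-uuid> VCPUs-max=8

# But up to that ceiling, vCPUs can be hot-plugged with no shutdown:
xe vm-vcpu-hotplug uuid=<vm-uuid> new-vcpus=6
```

So the point of Max vCPUs is that, with a higher ceiling set in advance, you can add vCPUs while the VM is running instead of shutting it down first.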
@olivierlambert Your points are valid, but I guess, like many others who would like to contribute to speed up the process, we don't have the skills, time, and/or resources to do it ourselves, and besides, we wouldn't know where to start to code this in.
Again, money. Just because we are a business doesn't mean we have money to spend on things like this. It's not so much that we won't spend money; it's more that there are other priorities, especially considering that we have only just come out of a very severe and long drought, plus COVID-19, both of which have severely impacted our business. So on one hand, having this feature would be a big help, as we could cut corners a little on GPUs, but on the other hand, sorting this feature out ourselves takes resources that we simply don't have spare.
In our case, we have plenty of redundant systems, to the point that I can easily and comfortably move everything over to VMware so we have this feature. Like others, we simply have to take the path of least resistance.
Personally, I'm not sure if we will move platforms yet, as we can get by on some M4000 cards for now.
When creating a new VM in XO, there are the usual vCPUs and topology settings, which I am familiar with.
Then I noticed that in the Advanced settings there is a Max vCPUs setting.
What is it, do I need to do anything there, and how is it different (or not) from the vCPUs and topology settings?
What is the current status on this? We are hoping for this soonish, and if it isn't coming at all or not for a long time, then we need to plan a migration to another platform.
Our use case is business, but we are not a very big business and do not wish to spend thousands on Quadro cards when much cheaper GTX and other consumer cards would do the job perfectly if it weren't for NVIDIA's artificial limitation.
We run a custom kitchen-design program as RemoteApps under an RDS setup. At this point we are running on Quadro 4000 cards, as we already had them, but because of this I cannot upgrade to Server 2019, as there are no drivers. To upgrade, we have to upgrade the cards, and the best bang-for-buck cards for our use are P4000s, which do run under Server 2019 but are $1000+ each. We want two cards in each server, totalling four cards. We can get even better bang for buck from some RTX 2070s or the like at nearly half the cost.
The only thing stopping us is this stupid Code 43 "bug".
We are willing to wait, as we haven't quite hit the limits of our current setup, but we are getting close, and we need to plan our next move. I love XCP-ng and honestly cannot fault it for anything, other than lacking the capability to hide the fact that the VM is a VM.
We have already run a test setup on Proxmox and the Code 43 fix works perfectly.
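For anyone else testing this, the Proxmox-side workaround is roughly the following (a sketch based on the standard Proxmox VE PCI passthrough approach; VM ID 100 and the PCI address are placeholders, and the VM is assumed to use the q35 machine type):

```shell
# Hide the KVM hypervisor signature from the guest so the NVIDIA
# driver does not refuse to load (the cause of the Code 43 error):
qm set 100 --cpu host,hidden=1,flags=+pcid

# Pass the GPU through as a PCIe device with primary-VGA enabled:
qm set 100 --hostpci0 01:00,pcie=1,x-vga=1
```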
@olivierlambert I don't think that is a complete solution.
Yes, you can spin one up whenever needed. But to do that you need access to the servers. So if XO is your only means of access at the time, how does one just spin up another XOA?
I don't think it's about having to set up another XOA, or about it being easy or difficult to set one up. It's more about the times when it is your only access to the VMs and XCP-ng servers: then what? If you have at least two XOAs running, then if one fails you still have full access to the servers.
I like the idea from @shwetkprabhat but in my case I may just run a second XOA on our second Pool and just not run any backups from it.
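For what it's worth, one low-effort way to run that second, backup-free instance is XO built from sources in a container. This is only a sketch using the community-maintained `ronivay/xen-orchestra` image (an assumption on my part; any XO-from-sources install would serve the same purpose):

```shell
# Standby Xen Orchestra instance on the second pool, exposed on port 8080.
# Volumes persist the XO config and its Redis data across restarts.
docker run -d --name xo-standby \
  -p 8080:80 \
  -v xo-data:/var/lib/xo-server \
  -v xo-redis:/var/lib/redis \
  ronivay/xen-orchestra

# Then connect it to the pool masters and simply don't configure any
# backup jobs, so it only serves as an emergency management path.
```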
I actually want a way to HA this as well. There is a use case for this.
I use XO for remote access to the servers and VMs. The issue is that if the server hosting XO, or the XO VM itself, goes down, I have no access to any of the other servers or pools.
At the moment I personally am not too far from work and my home network is a part of my work network, so not too much of an issue to get back in another way. But in January I am heading to Fiji for 2 weeks and will not be able to do anything at all if the XO VM goes down.
@olivierlambert I did dig through the logs and couldn't find anything.
That said, since the last time it happened (mentioned above), it hasn't happened again. All servers are up to date and I haven't had any problems at all with XCP-ng.
Sorry for the late reply.
If it helps, I'll explain exactly what happened to me.
We have had 2 Cisco C210 M2 servers running XCP since release (and Xen prior) as a single pool. This had worked for around 3 years. Never once saw the time error.
A few months ago we set up two R720s with XCP-ng 8 as a single pool. Once it was all running, I ran VMs on them just fine for around two weeks. Then, for some reason, one of them completely disappeared from the pool. It was still there: the iDRAC was still accessible, and SSH to the server still worked. When SSHing in and looking at the console, it reported that there were no network cards or storage. It seemed completely broken but was still functioning. The VMs that had been running on it had moved over to the second server. I rebooted the machine and it was back to normal.
I can't be precise, as I hadn't written it all down, but I remember looking through the logs (via XO) and seeing something about XOA time not matching something time...
I didn't take much notice at that point, and within 24 hours the second server did exactly what the first server did.
After that, I saw the same error messages about time again. I then double-checked the BIOS time on the servers (both set to UTC) and both were correct. I then went into xsconsole and saw that neither had the timezone set, so I set them, and also set up NTP on both. For a few weeks they seemed to behave.
I had to run some XCP-ng patches on them recently, and in the process (doing the install, reboot, restarting the toolstack, etc.) I noticed that XO briefly reported the time error again. My guess is that in this instance it had to do with the toolstack restarting. Once the updates were complete, everything returned to normal and I haven't seen the time error since (yet).
I hope the above information helps.
To be honest, I would like to report this as a bug.
I have had the same issue on four servers running as two pools on XCP-ng 8. I never had the problem before. All four servers are set to UTC time in the BIOS, and after the first time I got the message I made sure XO and all four XCP-ng installs were set to the correct timezone and double-checked that they reported the same times and dates (they did, though I'm not sure whether they did before I checked them thoroughly), and yet I still occasionally get the same error.
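In case it helps anyone reproduce or rule things out, here is the quick per-host check I would run. Just a sketch; XCP-ng 8 is CentOS 7 based, so these standard tools should be present on the hosts:

```shell
# Compare these across all hosts and the XO VM; they should agree.
date -u          # current UTC time as the host sees it
hwclock --show   # hardware clock (set to UTC in the BIOS in our case)
timedatectl      # timezone setting and whether NTP sync is enabled
ntpq -p          # are the configured NTP peers actually reachable?
```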