The ronivay script requires you to select an option (#2 to update).
I look at it this way: it's good to have more people working on scripts like this.
I haven't tried this yet, but I'm liking the menu you just showed!
How many of these are critical? I haven't even had time to apply the last round of patches to either lab or production. 
In theory Harvester supports nested virtualization, but I haven't tried it yet. It might be worth setting up a single host to try it. I think Proxmox also supports it; again, a single host might be worth it if it would handle the load.
Do you have compression turned on for the backups?
I have a couple of VMs that I don't compress because of the old error; I haven't updated yet or re-enabled compression to see what the status is.
It's normal for the pool master to take a bit longer, but I've never seen 15 minutes before.
The wait period will be nice; after an RPU I've watched VMs bounce back and forth trying to put themselves on another host. It's been a few updates since I last noticed that, but a cooldown seems like a good idea, especially an adjustable one like you've made this one.
This reminds me that I have some work to do on my policy, mostly for anti-affinity.
I just set my lab back up from scratch. I can't remember for certain, but I think it pushed me over to v5 to set up the SR.
Indeed I will, but are the prices really that close? I was looking a few weeks ago at some replacements, and spinning was still enough cheaper that it made a difference. I was looking at around 4 TB enterprise-class drives.
Just wondering about a possible XOstor configuration as far as performance:
3 hosts, each of which can have 3 spinning disks for HCI. I need about 6 TB of usable free space, with the understanding that each host is essentially a mirror of the other hosts. Total of 12 drives in the pool.
What size drives would be best to achieve this? Would 3 x 2 TB drives give me this because they are not put into a RAID-like array on each host? Would I need to go to 3 x 4 TB drives because it would make a RAID3-like array on each host? Or would I need 3 larger drives in each host? I only have SATA on these hosts.
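Just to show how I'm thinking about the math, here's a rough sketch. It assumes the disks on each host get pooled into one volume group (no RAID within the host) and that every block is replicated to all three hosts, which is my reading of how XOstor works; please correct me if that assumption is wrong.

```python
# Back-of-envelope XOstor sizing. Assumptions (mine, not confirmed):
#  - disks on a host are aggregated into a single volume group (linear, no RAID)
#  - every block is replicated once per host, so replicas == number of hosts
def usable_tb(hosts: int, disks_per_host: int, disk_tb: float, replicas: int) -> float:
    raw = hosts * disks_per_host * disk_tb     # total raw capacity in the pool
    return raw / replicas                      # usable space before any overhead

print(usable_tb(3, 3, 2, 3))  # 3 hosts x 3 x 2 TB -> 6.0 TB usable
print(usable_tb(3, 3, 4, 3))  # 3 hosts x 3 x 4 TB -> 12.0 TB usable
```

If that math holds, 3 x 2 TB per host would only just reach 6 TB usable with nothing to spare, while 3 x 4 TB would leave headroom.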
Also, how close are we to a "from sources" guide? This would certainly get prototyped in my lab with a different drive configuration (single NVMe), but I'd need to plan ahead for production. With time always against me, it would take months to get this prototyped, and the demo period would run out multiple times while I'm trying to work on it.
As an example of time being against me, I've had Harvester up for a month and still haven't built the first VM, and I haven't tried to get Rancher installed in Docker and integrated. It's certainly a much heavier lift to get to where XCP-ng and XO get you. I just feel like putting all my eggs in one basket is not something I should be doing (looking at you, Broadcom and IBM).
The only GPU I've ever tried was an NVIDIA Quadro series card, and that was probably under 8.2.
TrueNAS on bare metal just as storage; XCP-ng is also bare metal on 3 hosts to make a pool. That's the minimum if you want to enable High Availability, and it also works really well as a "normal" pool, which is what I have.
Rolling Pool functions are great: click the button and the system moves the VMs off of the host that needs updates, reboots it, moves VMs off of the next host, and repeats. It only works with shared storage. A rough sketch of what I understand it to be doing is below.
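This is only my understanding of the per-host sequence RPU automates, written out as a manual fallback; the host UUIDs are placeholders, and the real feature also handles patch installs, master ordering, and a lot of sanity checks.

```python
# Rough sketch of the loop I believe RPU automates, run from the pool master.
# UUIDs are placeholders; this is not the actual XO implementation.
import subprocess
import time

def xe(*args: str) -> str:
    """Run an xe CLI command on the host and return its output."""
    return subprocess.run(["xe", *args], check=True,
                          capture_output=True, text=True).stdout

HOSTS = ["<host-uuid-1>", "<host-uuid-2>", "<host-uuid-3>"]  # placeholders

for uuid in HOSTS:
    xe("host-disable", f"uuid={uuid}")   # stop new VMs from landing on this host
    xe("host-evacuate", f"uuid={uuid}")  # live-migrate its VMs away (needs shared storage)
    xe("host-reboot", f"uuid={uuid}")    # reboot once it is empty
    time.sleep(600)                      # crude wait; XO actually polls until the host is back
    xe("host-enable", f"uuid={uuid}")    # let VMs land on it again
```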
I will make one suggestion about something that might be a problem for some users with really strict password requirements: make the default password something more complex that doesn't contain the username or the word "password".
One capital, one lower case, one number or special character, and a minimum of 8 characters.
An alternative would be a note calling out where in the script the default can be edited; that way you don't have to do much, and those who need the default to be more complex can change it themselves before they run the script.
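Just to illustrate the kind of rule I mean, here's a quick sketch; the function names are mine, and this isn't anything from the script itself.

```python
# Hypothetical check/generator for the rule described above:
# one upper, one lower, one digit or special, minimum 8 characters.
import re
import secrets
import string

def meets_policy(pw: str) -> bool:
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[0-9!@#$%^&*]", pw) is not None)

def random_default(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if meets_policy(pw):
            return pw

print(random_default())  # a default that should pass most strict policies
```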
I'll try this in the future when I move my lab up to XCP-ng version 9; no timeline on this, since I wanted it done two weeks ago.
[edit] full of typos today, please excuse my mistakes
My TrueNAS runs on bare metal; I have an SMB SR for ISOs, an NFS SR for VMs, and another SMB SR for VMs.
In production I have a second, older TrueNAS with SMB and NFS that I use for storage updates: migrate from faster to slower storage (which is still generally enough for my needs), update TrueNAS, then migrate back to the faster storage. I also have a third TrueNAS that holds user data, but it's big, so I set up a backup remote over SMB to spread out my disaster footprint.
My lab is just a single bare-metal TrueNAS with whatever kind of share I need, generally just SMB and NFS for ISOs and VMs.
Both systems run three XCP-ng hosts so I can do things like rolling pool updates with no VM downtime. RPU is genius: all automated, just click a button and watch VMs move and hosts reboot.
I would restart the xcp-ng tool stack or do a rolling pool reboot.
Since your SR is hosted on Windows, I might uninstall the last updates and see if things start working. Then make a plan to move storage to TrueNAS.
I wish I had newer generations of the big HP DL360s on the bottom; mine do not support UEFI, and that's getting in the way. They're sitting powered off to keep the BIOS battery from running down. Those are 20 cores with 128 GB each, and again 10 GbE networking.
HP T740 with 64 GB of RAM, 256 GB SATA for the OS, optional NVMe for faster storage, and optional 10 GbE networking through a low-profile card. Works great for my lab.

I ran vSphere 8 on three of them, and I'm running Harvester on three of them now. XCP-ng and vSphere 8 work great; Harvester is a bit more resource hungry. Only the bottom three have NVMe, and they currently have Harvester 1.7.0 running while I try to find time to learn more about this system.
XO from sources means you are probably in XO 6. Have you tried from the XO 5 link, or tried XO Lite?
Is the storage where you are trying to put it properly connected and working?
I haven't had to create a VM since version 6 came out, but I thought there was a thread mentioning some difficulty making a new VM from XO 6.
I think I need to update mine, probably a ton of commits old by now.