The server had high memory usage so I expect lots of paging, which could explain the block writes. I've increased the memory and want to see what difference that makes.
Best posts made by McHenry
-
RE: Large incremental backups
-
RE: Disaster Recovery hardware compatibility
Results are in...
Four VMs migrated. Three used warm migration and all worked. The fourth used straight migration and BSOD'd, but worked after a reboot.
-
RE: ZFS for a backup server
Thanks Oliver. We have used GFS with Veeam previously and it will be a great addition.
-
RE: Windows11 VMs failing to boot
Thank you so much. If you want me, I'll be at the pub.
-
RE: Zabbix on xcp-ng
We have successfully installed using:
rpm -Uvh https://repo.zabbix.com/zabbix/7.0/rhel/7/x86_64/zabbix-release-latest.el7.noarch.rpm
yum install zabbix-agent2 zabbix-agent2-plugin-* --enablerepo=base,updates
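After that the agent just needs to be pointed at your Zabbix server and enabled, something like this (the server address is a placeholder and the config path is the agent2 default):
# point the agent at the Zabbix server (placeholder address)
sed -i 's/^Server=127.0.0.1/Server=<zabbix server ip>/' /etc/zabbix/zabbix_agent2.conf
# enable and start the agent
systemctl enable --now zabbix-agent2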
-
RE: Migrating a single host to an existing pool
Worked perfectly. Thanks guys.
-
RE: from Hyper-V
Our Hyper-V servers have no GUI and the process I use is:
- RDP to the Hyper-V host
- Open PowerShell
- Get a list of the VMs on the host
Get-VM
- Stop the VM
Stop-VM -Name <name of VM>
- Identify the VM's disk(s) for conversion
Get-VMHardDiskDrive -VMName <name of VM>
- Convert the VHDX to VHD (destination file extension sets the type so use ".vhd")
Convert-VHD -Path <source path> -DestinationPath <destination path> -VHDType Dynamic
To transfer the newly created .vhd files to xcp-ng we use PuTTY via the CLI.
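For example, with pscp from the PuTTY suite (the host address and paths below are placeholders):
# copy the converted disk to the xcp-ng host over SSH
pscp C:\<source path>\<name of VM>.vhd root@<xcp-ng host>:<destination path>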
-
RE: from Hyper-V
Either way.
-
If you can have the server offline then shut down the VM and create the VHD from the VHDX. The process creates another disk file so the original remains unchanged, and if it all goes wrong you can simply restart the VM in Hyper-V and try again another day. You will need enough disk space for the original VM and the new VHD file.
-
If the server cannot be offline then export the VM and then convert the VHDX to VHD. The issue is that the original VM will still be updated whilst the migration to xcp-ng takes place. You will need enough disk space for the original VM, the exported VM and the new VHD file.
-
RE: from Hyper-V
We have a simplified process now.
- Shutdown VM in Hyper-V
- Convert VHDX to VHD using PowerShell
- Move VHD to xcp-ng using SSH
- Generate new name using uuidgen
- Rename VHD
- Create VM in XO and attach VHD
After much trial and error this works every time.
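On the xcp-ng side it looks roughly like this over SSH, assuming a file-based SR (ext or NFS) mounted under /run/sr-mount; the SR UUID and file names are placeholders:
# generate a fresh UUID for the imported disk
NEW_UUID=$(uuidgen)
# rename the uploaded VHD to <uuid>.vhd inside the SR's mount point
mv /run/sr-mount/<SR UUID>/<name of VM>.vhd /run/sr-mount/<SR UUID>/$NEW_UUID.vhd
# rescan the SR so the VHD appears as a VDI that can be attached in XO
xe sr-scan uuid=<SR UUID>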
Latest posts made by McHenry
-
RE: Best strategy for Continuous Replication
Here's the new model. We've tried a few combinations and I think with TrueNAS shared storage this will now work well.
-
RE: Best strategy for Continuous Replication
For anyone else looking to connect TrueNAS with XCP-NG
-
RE: Best strategy for Continuous Replication
In TrueNAS NFS share settings I set this and it now works.
-
RE: Best strategy for Continuous Replication
I have set up TrueNAS with an NFS share, however I am unable to connect it as a remote.
Is there a guide on how to configure connecting XO to NFS?
-
RE: Best strategy for Continuous Replication
Makes perfect sense.
I expect having separate storage for the production VMs and CR VMs makes sense too.
I am now thinking a good robust model would be:
- One or more production hosts in a single pool (allows host migration for updates)
- One TrueNAS Scale for production shared storage
- One CR host with local storage
-
Best strategy for Continuous Replication
I had a server dedicated to CR that was part of my pool.
I recently lost the pool master and in turn lost access to the CR host too.
The official docs state that the CR can be used if the main pool fails, which indicates having the CR host as part of the pool is not a good idea.
Is it best practice to not have the CR host as part of the main pool?
Alternatively, would a better setup not be having multiple xcp-ng hosts with central shared storage for both production VMs and CR VMs? That way, if a single xcp-ng host fails, the CR VMs can easily be started on the other host. A variation of this would be to have two shared storage repos, one for production VMs and one for CR VMs.
I am keen to hear others' thoughts on this.
-
RE: Host failure after patches
All this complexity makes me question the advantages of having all hosts in the same pool.
-
RE: Host failure after patches
My setup is pretty basic.
I have two hosts in the pool, one for running VMs on local storage and one for DR backups on local storage.
I'd like to set up shared storage so I could run the VMs on multiple hosts and seamlessly move them between hosts without migrating storage too. To set up shared storage, would this be on an xcp-ng host or totally independent of xcp-ng?
-
RE: Host failure after patches
Can I restart the slave and install patches if the master has not been patched yet?
-
RE: Host failure after patches
I have restarted the toolstack but it made no difference.
When I view hst110 I see:
I was tempted to restart it, however it has patches pending installation after a restart, and as the master is not fully patched I thought it best not to restart it. I understand the master needs to be patched first.
I just checked xsconsole on hst110 and it still shows the pool master as unavailable.
Do I need to change the pool master used by the slave?