Best posts made by McHenry
-
RE: Large incremental backups
-
RE: Disaster Recovery hardware compatibility
Results are in...
Four VMs migrated. Three used warm migration and all worked. The fourth used straight migration and hit a BSOD, but worked after a reboot.
-
RE: ZFS for a backup server
Thanks Olivier. We have used GFS with Veeam previously and it will be a great addition.
-
RE: Zabbix on xcp-ng
We have successfully installed using:
rpm -Uvh https://repo.zabbix.com/zabbix/7.0/rhel/7/x86_64/zabbix-release-latest.el7.noarch.rpm
yum install zabbix-agent2 zabbix-agent2-plugin-* --enablerepo=base,updates
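After installing, the agent still needs to be enabled and started. A minimal sketch, assuming xcp-ng's CentOS-based dom0 and that the systemd unit name matches the package:
systemctl enable zabbix-agent2
systemctl start zabbix-agent2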
-
RE: Migrating a single host to an existing pool
Worked perfectly. Thanks guys.
-
RE: from Hyper-V
Our Hyper-V servers have no GUI and the process I use is:
- RDP to the Hyper-V host
- Open PowerShell
- Get a list of the VMs on the host
Get-VM
- Stop the VM
Stop-VM -Name <name of VM>
- Identify the VM's disk(s) for conversion
Get-VMHardDiskDrive -VMName <name of VM>
- Convert the VHDX to VHD (destination file extension sets the type so use ".vhd")
Convert-VHD -Path <source path> -DestinationPath <destination path> -VHDType Dynamic
To transfer the newly created .vhd files to xcp-ng we use PuTTY via the CLI.
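If the Windows OpenSSH client is installed, plain scp from PowerShell should do the same job; the file name, host and destination path here are placeholders:
scp C:\Exports\myvm.vhd root@xcp-ng-host:/var/tmp/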
-
RE: from Hyper-V
Either way.
-
If you can have the server offline, then shut it down and create the VHD from the VHDX. The process creates another disk file, so the original remains unchanged, and if it all goes wrong you can simply restart the VM in Hyper-V and try again another day. You will need enough disk space for the original VM and the new VHD file.
-
If the server cannot be offline then export the VM and convert the exported VHDX to VHD. The issue is that the original VM will still be updated whilst the migration to xcp-ng takes place. You will need enough disk space for the original VM, the exported VM and the new VHD file.
-
-
RE: from Hyper-V
We have a simplified process now.
- Shutdown VM in Hyper-V
- Convert VHDX to VHD using PowerShell
- Move VHD to xcp-ng using SSH
- Generate new name using uuidgen
- Rename VHD
- Create VM in XO and attach VHD
After much trial and error this works every time.
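For reference, the rename-and-attach steps on the xcp-ng side look roughly like this, assuming a file-based SR (the SR UUID and source path are placeholders; on file-based SRs the VHD filename must be a UUID for the scan to pick it up):
NEW_UUID=$(uuidgen)
mv /var/tmp/myvm.vhd /run/sr-mount/<sr-uuid>/$NEW_UUID.vhd
xe sr-scan uuid=<sr-uuid>
After the scan the disk appears as a VDI in that SR and can be attached when creating the VM in XO.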
-
Paste from console
Is there any way to paste from the console?
We have long passwords for Windows and these need to be typed manually to obtain console access, which is a real pain.
Latest posts made by McHenry
-
RE: Identify job from notification email
Still trying to identify the owner of a job so that I can determine which backup job failed. The email report does not identify the host/pool. Accordingly, I need to log in to every individual XO/XOA and check the backup log manually to find the failed job.
-
RE: Large incremental backups
Hyper-V has the concept of Dynamic Memory as documented below. This allows a startup memory value to be specified, and something similar could be used for health checks, as the VM only needs enough memory for the network to connect before it gets killed.
-
RE: Large incremental backups
Other hypervisors I have used do not perform the health check as an auto restore on a different host, so I cannot say. It would be good if the health check could start the VM with the minimum memory value configured.
-
RE: Large incremental backups
The results are good!
Before increasing the VM memory to reduce paging:
After increasing the VM memory to reduce paging:
@olivierlambert
One problem we now have is that the new VM memory exceeds the DR host's memory, so the health check fails with "no hosts available". It would be good if a health check could be started using a lower memory value, as it only needs the network to activate to pass.
-
RE: Large incremental backups
I do not know how CBT works but will take a look.
-
RE: Export backup reports
Thank you. Can a filter be used too?
root@xo:~# xo-cli --list-commands | grep backupNg.getLogs
`--list-commands` is deprecated and will be removed in the future, use `list-commands` subcommand instead
backupNg.getLogs [after=<number|string>] [before=<number|string>] [limit=<number>] *=<unknown type>
root@xo:~#
root@xo:~# xo-cli backupNg.getLogs --json limit=500
invalid parameters property @./limit: must be number
root@xo:~#
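Reading that error again, limit is being parsed as a string. If I understand xo-cli's parameter handling correctly, values prefixed with json: are parsed as JSON, so something like this might work:
xo-cli backupNg.getLogs limit=json:500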
-
Export backup reports
I have the Backup Reports plugin working and it sends an email after each report.
I would like to export the backup logs for the last 3 months and I have tried:
xo-cli backupNg.getLogs
However, the export appears to be truncated, ending with:
... 1919 more items
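Possibly redirecting the output to a file would avoid the console truncation, assuming the --json flag applies to the command output:
xo-cli backupNg.getLogs --json > backup-logs.json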
-
RE: Large incremental backups
The server had high memory usage so I expect lots of paging, which could explain the block writes. I've increased the memory and want to see what difference that makes.
-
RE: Large incremental backups
Thanks for the clarification.
So this means it is not directly related to files being updated, but rather to disk changes, which may or may not be the result of files being updated. So searching for large files that have changed since the last delta is a waste of time?
I guess something like a defrag would result in a large delta too.
-
Large incremental backups
I have a Windows RDS server that has an hourly delta backup. I expect these backups to be only 1.5GB or so, however they are now over 150GB.
Is the delta backup only looking for file changes? If so, 150GB should be easy enough to identify when comparing two deltas.
Is there a recommended way to diagnose large delta backups?