Results are in...
Four VMs migrated. Three used warm migration and all worked. The fourth used straight migration and hit a BSOD, but it worked after a reboot.
Thanks Oliver. We have used GFS with Veeam previously, and it will be a great addition.
We have a simplified process now.
After much trial and error, this works every time.
Is there any way to paste from the console?
We have long passwords for Windows, and these need to be typed manually to obtain console access, which is a real pain.
Tried many times without success.
What am I missing?
Error:
sr.scan
{
"id": "c8dc68cc-f065-f06f-9de1-946362817953"
}
{
"code": "SR_BACKEND_FAILURE_40",
"params": [
"",
"The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8]",
""
],
"task": {
"uuid": "47ebf82e-0996-f107-f0f0-54c0c36d2416",
"name_label": "Async.SR.scan",
"name_description": "",
"allowed_operations": [],
"current_operations": {},
"created": "20240929T09:59:42Z",
"finished": "20240929T09:59:42Z",
"status": "failure",
"resident_on": "OpaqueRef:702151e8-1c6a-1208-4d10-e738b883cf1a",
"progress": 1,
"type": "<none/>",
"result": "",
"error_info": [
"SR_BACKEND_FAILURE_40",
"",
"The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8]",
""
],
"other_config": {},
"subtask_of": "OpaqueRef:NULL",
"subtasks": [],
"backtrace": "(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/storage_access.ml)(line 36))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 143))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 191))((process xapi)(filename ocaml/xapi/rbac.ml)(line 200))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 75)))"
},
"message": "SR_BACKEND_FAILURE_40(, The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8], )",
"name": "XapiError",
"stack": "XapiError: SR_BACKEND_FAILURE_40(, The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8], )
at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/_XapiError.mjs:16:12)
at default (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/_getTaskResult.mjs:13:29)
at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1041:24)
at file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1075:14
at Array.forEach (<anonymous>)
at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1065:12)
at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1238:14)"
}
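The opterr in the message names a specific VDI uuid, so a reasonable first step is to search the storage manager log on the pool master for it (standard SMlog path on xcp-ng; the uuid is the one from the error above):

grep 25ab0e4d-a91d-4a19-964b-0f2b157818f8 /var/log/SMlog    # trace the failing VDI through the SM log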
I have a backup job with three schedules that will occasionally conflict:
Is the following the way to resolve this:
My understanding is that schedule 3 will trump schedules 2 and 1, and schedule 2 will trump schedule 1.
I do not think sequences can help here as the three conflicts are within the same backup job.
I wish to perform a full backup to Wasabi monthly on a Sunday, when the bandwidth impact will be least noticed.
I cannot see any way to schedule a backup for the 1st Sunday of the month in XOA/XO.
The cron command is:
00 09 * * 7 [ $(date +\%d) -le 07 ] && /run/your/script
https://stackoverflow.com/questions/3241086/how-to-schedule-to-run-first-sunday-of-every-month
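Unrolled into a script, the guard logic is simply this (plain sh; the script path is a placeholder, and note that % only needs the backslash escape inside a crontab line, not in a script):

#!/bin/sh
# First Sunday of the month = a Sunday whose day-of-month is 1-7.
if [ "$(date +%u)" -eq 7 ] && [ "$(date +%d)" -le 7 ]; then
    /run/your/script
fi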
Can I edit the cron file directly in XOA/XO?
We have successfully installed using:
rpm -Uvh https://repo.zabbix.com/zabbix/7.0/rhel/7/x86_64/zabbix-release-latest.el7.noarch.rpm
yum install zabbix-agent2 zabbix-agent2-plugin-* --enablerepo=base,updates
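A sensible follow-up after the install, assuming the standard zabbix-agent2 systemd unit name:

systemctl enable --now zabbix-agent2    # start the agent now and on every boot
systemctl status zabbix-agent2          # confirm it is running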
As per the Zabbix install guide is it safe to issue this command on xcp-ng?
rpm -Uvh https://repo.zabbix.com/zabbix/7.0/rhel/7/x86_64/zabbix-release-latest.el7.noarch.rpm
When installing the Zabbix agent, the correct distro and version need to be selected.
Based upon the information below, it appears xcp-ng is compatible with CentOS/RHEL/Fedora; however, I am unsure which OS version to select. I am running xcp-ng 8.3.
What are your thoughts on my last three questions, please?
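For what it is worth, xcp-ng 8.x is built on a CentOS 7 base (which matches the el7 package above), and the host can confirm this itself, assuming the usual CentOS-style release files are present:

cat /etc/redhat-release    # release string of the dom0 base
cat /etc/os-release        # distro ID and version fields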
Our Hyper-V servers have no GUI and the process I use is:
Get-VM                                       # list the VMs on the host
Stop-VM -Name <name of VM>                   # shut the VM down cleanly
Get-VMHardDiskDrive -VMName <name of VM>     # locate the VM's disk path(s)
Convert-VHD -Path <source path> -DestinationPath <destination path> -VHDType Dynamic    # VHDX to dynamic VHD
To transfer the newly created .vhd files to xcp-ng we use PuTTY via the CLI.
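For reference, the transfer-and-import step can look like this (pscp ships with PuTTY; the host name, SR uuid, VDI size and file paths are placeholders, and this assumes your xe version supports format=vhd on vdi-import):

pscp C:\Converted\myvm.vhd root@xcp-ng-host:/tmp/
xe vdi-create sr-uuid=<SR uuid> name-label=myvm-disk virtual-size=50GiB    # create an empty VDI large enough for the disk
xe vdi-import uuid=<new VDI uuid> filename=/tmp/myvm.vhd format=vhd        # import the converted VHD into it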
When installing patches what is the difference between updating the hosts in a pool individually or simply installing the pool patches?
I understand that the pool master needs to be updated before the slaves when updating hosts individually; however, would it not be a simpler process just to install the pool patches?
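On xcp-ng the per-host route boils down to yum either way (run on the pool master first, then each slave; a reboot is only needed when kernel or Xen packages change):

yum update    # apply all pending patches on this host
reboot        # only if kernel/xen updates were installed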
Either way.
If you can have the server offline, then shut it down and create the VHD from the VHDX. The process creates another disk file, so the original remains unchanged; if it all goes wrong you can simply restart the VM in Hyper-V and try again another day. You will need enough disk space for the original VM & the new VHD file.
If the server cannot be offline, then export the VM and convert the exported VHDX to VHD. The issue is that the original VM will still be updated whilst the migration to xcp-ng takes place. You will need enough disk space for the original VM, the exported VM and the new VHD file.
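For the online route, the export-then-convert step looks like this in PowerShell (the VM name and all paths are examples; Export-VM writes the VHDX under a Virtual Hard Disks subfolder):

Export-VM -Name <name of VM> -Path D:\Exports
Convert-VHD -Path "D:\Exports\<name of VM>\Virtual Hard Disks\<disk>.vhdx" -DestinationPath D:\Converted\<disk>.vhd -VHDType Dynamic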