Results are in...
Four VMs migrated. Three used warm migration and all worked. The fourth used a straight migration and blue-screened (BSOD), but it worked after a reboot.
Thanks Oliver. We have used GFS with Veeam previously and it will be a great addition.
We have successfully installed using:
rpm -Uvh https://repo.zabbix.com/zabbix/7.0/rhel/7/x86_64/zabbix-release-latest.el7.noarch.rpm
yum install zabbix-agent2 zabbix-agent2-plugin-* --enablerepo=base,updates
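After installing, the agent still needs to be pointed at the Zabbix server and started. A minimal sketch, assuming a systemd-based host and the default config path /etc/zabbix/zabbix_agent2.conf (set Server= and Hostname= there first):
systemctl enable zabbix-agent2
systemctl start zabbix-agent2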
Our Hyper-V servers have no GUI and the process I use is:
Get-VM
Stop-VM -Name <name of VM>
Get-VMHardDiskDrive -VMName <name of VM>
Convert-VHD -Path <source path> -DestinationPath <destination path> -VHDType Dynamic
To transfer the newly created .vhd files to xcp-ng we use PuTTY via the CLI.
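For anyone following along, a rough sketch of that transfer using PuTTY's pscp from the Hyper-V host (the paths and hostname below are placeholders, not our actual setup):
pscp "D:\Converted\myvm.vhd" root@xcp-ng-host:/mnt/import/
The .vhd can then be imported as a disk from Xen Orchestra.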
Either way.
If you can have the server offline, then shut it down and create the VHD from the VHDX. The process creates another disk file, so the original remains unchanged; if it all goes wrong you can simply restart the VM in Hyper-V and try again another day. You will need enough disk space for the original VM and the new VHD file.
If the server cannot be offline, then export the VM and convert the exported VHDX to VHD. The issue is that the original VM will still be updated whilst the migration to xcp-ng takes place. You will need enough disk space for the original VM, the exported VM and the new VHD file.
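A rough sketch of that online path, assuming PowerShell on the Hyper-V host (paths and disk names are placeholders):
Export-VM -Name <name of VM> -Path D:\Exports
Convert-VHD -Path "D:\Exports\<name of VM>\Virtual Hard Disks\<disk>.vhdx" -DestinationPath "D:\Exports\<disk>.vhd" -VHDType Dynamic
The export captures the VM while it keeps running, which is why any changes made after the export are not carried across.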
We have a simplified process now.
After much trial and error this works every time.
Is there any way to paste from the console?
We have long passwords for Windows and these need to be typed manually to obtain console access which is a real pain.
Tried many times without success.
What am I missing?
Error:
sr.scan
{
"id": "c8dc68cc-f065-f06f-9de1-946362817953"
}
{
"code": "SR_BACKEND_FAILURE_40",
"params": [
"",
"The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8]",
""
],
"task": {
"uuid": "47ebf82e-0996-f107-f0f0-54c0c36d2416",
"name_label": "Async.SR.scan",
"name_description": "",
"allowed_operations": [],
"current_operations": {},
"created": "20240929T09:59:42Z",
"finished": "20240929T09:59:42Z",
"status": "failure",
"resident_on": "OpaqueRef:702151e8-1c6a-1208-4d10-e738b883cf1a",
"progress": 1,
"type": "<none/>",
"result": "",
"error_info": [
"SR_BACKEND_FAILURE_40",
"",
"The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8]",
""
],
"other_config": {},
"subtask_of": "OpaqueRef:NULL",
"subtasks": [],
"backtrace": "(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/storage_access.ml)(line 36))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 143))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 191))((process xapi)(filename ocaml/xapi/rbac.ml)(line 200))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 75)))"
},
"message": "SR_BACKEND_FAILURE_40(, The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8], )",
"name": "XapiError",
"stack": "XapiError: SR_BACKEND_FAILURE_40(, The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8], )
at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/_XapiError.mjs:16:12)
at default (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/_getTaskResult.mjs:13:29)
at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1041:24)
at file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1075:14
at Array.forEach (<anonymous>)
at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1065:12)
at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1238:14)"
}
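From what I can tell, re-running the scan from the host CLI and checking SMlog should give more detail than SR_BACKEND_FAILURE_40 on its own (a troubleshooting sketch only, using the UUIDs from the error above):
xe sr-scan uuid=c8dc68cc-f065-f06f-9de1-946362817953
grep 25ab0e4d /var/log/SMlog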
Whilst this request awaits implementation: if I check all hosts individually, how can I identify the host based upon the jobID or runID from the notification?
A quick follow up for anyone performing warm migrations.
If you elect for the migrated VM to be powered on post migration, then the option to protect against accidental shutdown needs to be off so that the source VM can be turned off when the migrated VM is turned on.
If you like the thrill of adrenaline and elect for the source VM to be deleted post migration, then protection against accidental deletion would also need to be turned off. I am not that brave.
Is it possible to rename the pool from the host CLI?
https://docs.xcp-ng.org/appendix/cli_reference/#pool-commands
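From that reference, renaming from the host CLI appears to be a pool-param-set call along these lines (pool UUID from xe pool-list; the name below is just an example):
xe pool-list
xe pool-param-set uuid=<pool-uuid> name-label="New pool name"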
Thanks Dustin. This is exactly our goal setup.
Thinking further, as XO/XOA could be hosted on either host, it would make sense to host it on the DR host.
Our setup is to have two xcp-ng servers at a client site:
I just performed an update of both servers and it appeared that when the pool master was offline the DR server was also unavailable. Is this correct?
If so, then, as I would need the DR server to be available in the event of the Production server failing, the two servers would need to be in separate single-server pools.
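For reference (an assumption on my part rather than something I have tested), if both hosts stayed in one pool and the master went down, the surviving slave could be promoted in an emergency with:
xe pool-emergency-transition-to-master
xe pool-recover-slaves
Keeping the DR server in its own single-server pool avoids relying on that step during a failover.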
I have a backup job to Wasabi (S3) that kicks off at 8pm.
When the job runs, a task is created in the task list; however, after a while the task disappears.
The backup job is still listed as active:
The router shows large amounts of traffic flowing from Xen Orchestra to Wasabi.
This indicates to me that the backup is still actively transferring data somehow...