Results are in...
Four VMs migrated. Three used warm migration and all worked. The fourth used straight migration and hit a BSOD, but worked after a reboot.
Thanks Oliver. We have used GFS with Veeam previously and it will be a great addition.
We have successfully installed using:
rpm -Uvh https://repo.zabbix.com/zabbix/7.0/rhel/7/x86_64/zabbix-release-latest.el7.noarch.rpm
yum install zabbix-agent2 zabbix-agent2-plugin-* --enablerepo=base,updates
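If it helps, the follow-up would be something along these lines (a sketch only, assuming the default unit name and config path shipped by the zabbix-agent2 package; the server address is a placeholder):
vi /etc/zabbix/zabbix_agent2.conf        # set Server=<your Zabbix server>
systemctl enable zabbix-agent2           # start at boot
systemctl start zabbix-agent2
systemctl status zabbix-agent2           # confirm it is running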
Our Hyper-V servers have no GUI, so the process I use is:
# List the VMs on the host
Get-VM
# Shut down the VM to be migrated
Stop-VM -Name <name of VM>
# Find the path(s) of the VM's virtual disks
Get-VMHardDiskDrive -VMName <name of VM>
# Convert the VHDX to a dynamic VHD (writes a new file; the original is left untouched)
Convert-VHD -Path <source path> -DestinationPath <destination path> -VHDType Dynamic
To transfer the newly created .vhd files to xcp-ng we use PuTTY via the CLI.
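For example, something along these lines with pscp (which ships with PuTTY); the file name, host name and destination path are placeholders, not our real ones:
pscp C:\Exports\server01.vhd root@xcp-ng-host:/mnt/import/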
Either way.
If you can have the server offline then shut it down and create the VHD from the VHDX. The conversion creates another disk file, so the original remains unchanged; if it all goes wrong you can simply restart the VM in Hyper-V and try again another day. You will need enough disk space for the original VM and the new VHD file.
If the server cannot be offline then export the VM and then convert the exported VHDX to VHD. The issue is that the original VM will still be updated whilst the migration to xcp-ng takes place. You will need enough disk space for the original VM, the exported VM and the new VHD file.
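Once the VHD is on the host, one way to bring it in from the CLI is roughly the following (a sketch only; the SR UUID, virtual size and file name are placeholders, and it assumes an xe recent enough to accept format=vhd on vdi-import):
xe vdi-create sr-uuid=<SR UUID> name-label=server01-disk virtual-size=<size, e.g. 100GiB> type=user
xe vdi-import uuid=<VDI UUID returned by vdi-create> filename=/mnt/import/server01.vhd format=vhd --progress
The resulting VDI can then be attached to a new VM as its disk, or the whole import can be done through Xen Orchestra instead.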
We have a simplified process now.
After much trial and error this works every time.
Is there any way to paste from the console?
We have long passwords for Windows, and these need to be typed manually to obtain console access, which is a real pain.
Tried many times without success.
What am I missing?
Error:
sr.scan
{
"id": "c8dc68cc-f065-f06f-9de1-946362817953"
}
{
"code": "SR_BACKEND_FAILURE_40",
"params": [
"",
"The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8]",
""
],
"task": {
"uuid": "47ebf82e-0996-f107-f0f0-54c0c36d2416",
"name_label": "Async.SR.scan",
"name_description": "",
"allowed_operations": [],
"current_operations": {},
"created": "20240929T09:59:42Z",
"finished": "20240929T09:59:42Z",
"status": "failure",
"resident_on": "OpaqueRef:702151e8-1c6a-1208-4d10-e738b883cf1a",
"progress": 1,
"type": "<none/>",
"result": "",
"error_info": [
"SR_BACKEND_FAILURE_40",
"",
"The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8]",
""
],
"other_config": {},
"subtask_of": "OpaqueRef:NULL",
"subtasks": [],
"backtrace": "(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/storage_access.ml)(line 36))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 143))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 191))((process xapi)(filename ocaml/xapi/rbac.ml)(line 200))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 75)))"
},
"message": "SR_BACKEND_FAILURE_40(, The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8], )",
"name": "XapiError",
"stack": "XapiError: SR_BACKEND_FAILURE_40(, The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8], )
at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/_XapiError.mjs:16:12)
at default (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/_getTaskResult.mjs:13:29)
at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1041:24)
at file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1075:14
at Array.forEach (<anonymous>)
at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1065:12)
at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1238:14)"
}
I thought so; however, booting the VM on the LAN will cause IP conflicts, and if the VM is kept off the LAN it will be inaccessible.
What is the best way to restore a single folder from a CR backup?
I do not wish to restore the entire VM, just a folder that was accidentally deleted.
Thanks in advance...
Is there a maximum run time for a backup before it will timeout?
I have a backup of a large VM to the cloud that takes longer than 24 hours to complete. After 24 hours the backup Export task disappears from the task list; however, the backup appears to continue running.
Should I simply wait longer or has this backup reached a timeout limit?
Is there a better way to complete backups that will take a long time?
What happens to a backup if the VM restarts during the backup?
We restart client VMs on a Saturday night, and some offsite backups can take longer than 24 hours.
I have a backup job that has hung. What does this error mean?
{
"data": {
"type": "VM",
"id": "af2b36be-d1cf-4e67-e1a7-37f0c94300d5",
"name_label": "SERVER01"
},
"id": "1733738841196",
"message": "backup VM",
"start": 1733738841196,
"status": "pending",
"tasks": [
{
"id": "1733738841312",
"message": "clean-vm",
"start": 1733738841312,
"status": "success",
"warnings": [
{
"data": {
"path": "/xo-vm-backups/af2b36be-d1cf-4e67-e1a7-37f0c94300d5/vdis/578e1ed8-4005-4010-8b46-2590a258120d/ca6904e4-2fa2-4cad-927b-5d20529cd26c/data/a76f8556-58ff-42a6-8836-639ce2ed5024.vhd"
},
"message": "no alias references VHD"
}
],
"end": 1733739422534,
"result": {
"merge": false
}
},
{
"id": "1733739422820",
"message": "snapshot",
"start": 1733739422820,
"status": "success",
"end": 1733739425172,
"result": "a0aecb03-4178-5614-9b8d-90091fcf0cc4"
},
{
"data": {
"id": "2b68764f-1824-46ab-b280-2d83dcff0202",
"isFull": true,
"type": "remote"
},
"id": "1733739425172:0",
"message": "export",
"start": 1733739425172,
"status": "pending",
"tasks": [
{
"id": "1733739426063",
"message": "transfer",
"start": 1733739426063,
"status": "pending"
}
]
}
]
}
I have a backup job with three schedules that will occasionally conflict:
Is the way to resolve this:
My understanding is that schedule 3 will trump schedules 2 & 1, and schedule 2 will trump schedule 1.
I do not think sequences can help here as the three conflicts are within the same backup job.
I wish to perform a full backup to WASABI monthly on a Sunday when the bandwidth impact will be least noticed.
I cannot see any way to schedule a backup for the 1st Sunday of the month in XOA/XO
The cron command is:
00 09 * * 7 [ $(date +\%d) -le 07 ] && /run/your/script
https://stackoverflow.com/questions/3241086/how-to-schedule-to-run-first-sunday-of-every-month
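In case it helps, the same line with the pieces spelled out (the script path is just the placeholder from that answer):
# 09:00 every Sunday; the date test only succeeds when the day of the
# month is 7 or less, i.e. the first Sunday. Inside a crontab the %
# has to be escaped as \% or cron treats it as a newline.
00 09 * * 7 [ $(date +\%d) -le 07 ] && /run/your/script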
Can I edit the cron file directly in XOA/XO?
As per the Zabbix install guide, is it safe to issue this command on xcp-ng?
rpm -Uvh https://repo.zabbix.com/zabbix/7.0/rhel/7/x86_64/zabbix-release-latest.el7.noarch.rpm