Results are in...
Four VMs migrated. Three used warm migration and all worked; the fourth used a straight migration and hit a BSOD, but worked after a reboot.
Thanks Oliver. We have used GFS with Veeam previously, and it will be a great addition.
Is there any way to paste into the console?
We have long passwords for Windows, and these need to be typed manually to obtain console access, which is a real pain.
Tried many times without success.
What am I missing?
Error:
sr.scan
{
  "id": "c8dc68cc-f065-f06f-9de1-946362817953"
}
{
  "code": "SR_BACKEND_FAILURE_40",
  "params": [
    "",
    "The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8]",
    ""
  ],
  "task": {
    "uuid": "47ebf82e-0996-f107-f0f0-54c0c36d2416",
    "name_label": "Async.SR.scan",
    "name_description": "",
    "allowed_operations": [],
    "current_operations": {},
    "created": "20240929T09:59:42Z",
    "finished": "20240929T09:59:42Z",
    "status": "failure",
    "resident_on": "OpaqueRef:702151e8-1c6a-1208-4d10-e738b883cf1a",
    "progress": 1,
    "type": "<none/>",
    "result": "",
    "error_info": [
      "SR_BACKEND_FAILURE_40",
      "",
      "The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8]",
      ""
    ],
    "other_config": {},
    "subtask_of": "OpaqueRef:NULL",
    "subtasks": [],
    "backtrace": "(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/storage_access.ml)(line 36))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 143))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 191))((process xapi)(filename ocaml/xapi/rbac.ml)(line 200))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 75)))"
  },
  "message": "SR_BACKEND_FAILURE_40(, The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8], )",
  "name": "XapiError",
  "stack": "XapiError: SR_BACKEND_FAILURE_40(, The SR scan failed [opterr=uuid=25ab0e4d-a91d-4a19-964b-0f2b157818f8], )
    at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/_XapiError.mjs:16:12)
    at default (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/_getTaskResult.mjs:13:29)
    at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1041:24)
    at file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1075:14
    at Array.forEach (<anonymous>)
    at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1065:12)
    at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202409161442/packages/xen-api/index.mjs:1238:14)"
}
Got it. As we only have a single production server there is no shared storage, so I guess the idea of a pool is moot.
@Andrew said in Replication retention & max chain size:
It's a quick DR server that can start a VM (or a clone) immediately if the main pool fails
Does this mean you do not have your production server(s) and DR server in the same pool?
At each client site we have a single production server and a single DR server. Both have been added to the same client-specific pool; however, I have no idea whether this is good or bad.
We are looking to migrate from Datto backups to XCP-ng/XO.
The standard Datto DR device we use for clients is:
https://continuity.datto.com/help/Content/kb/siris-alto-nas/208645136.html#S4B3
4 cores with 8 threads, so hyper-threading (HT) must already be enabled
https://www.intel.com/content/www/us/en/products/sku/136429/intel-xeon-d2123it-processor-8-25m-cache-2-20-ghz/specifications.html
The Datto system also does a type of health check (screenshot verification). I am not sure how they do it, as the VMs never fail to boot regardless of VM resources.
https://continuity.datto.com/help/Content/kb/siris-alto-nas/KB205330860.html
The lower-spec'd DR machine is not perfect; however, it has served its purposes to date, being:
The second scenario is not very likely, from my experience; however, if the VMs had to be virtualised on the DR host then they would work, albeit slower.
What I am still not understanding is this concept of "CPU limits": if a host has 32 CPUs and a VM is allocated 8, why does the Advanced tab then let me allocate the VM only a fraction of those 8, such as "4/8"? What is the difference between setting the VM's vCPUs on the General tab and the CPU Limits section on the Advanced tab?
At each client site we run two hosts on premise.
The DR host usually has fewer resources than the production host.
I am trying to comprehend how best to allocate resources (RAM & CPU) to a VM to ensure it gets the maximum benefit from the production host whilst ensuring it can perform a health check on the DR host.
For example:
I can set my VM to run with 24 GB RAM; however, if I allocate it only 8 vCPUs then it runs slowly on production. If I allocate it more than 8 vCPUs, it fails the health check with "No Hosts Available".
I am not fully understanding the concept of CPU limits.
Convert-VHD -Path <source path> -DestinationPath <destination path> -VHDType Dynamic
The disk format is determined by the file extension, so ensure the destination path's filename ends in ".vhd".
We have a simplified process now.
After much trial and error this works every time.
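In case it helps anyone following along, here is a minimal sketch with hypothetical paths (Convert-VHD comes from the Hyper-V PowerShell module, so run it on a machine with that module available; adjust the paths to your environment):

# Hypothetical example paths for illustration only.
# Reads the source VHDX and writes a new dynamically expanding VHD;
# the ".vhd" extension on the destination selects the VHD format.
Convert-VHD -Path "D:\Exports\server01.vhdx" -DestinationPath "D:\Converted\server01.vhd" -VHDType Dynamic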
Sorry, 100% confused now
In a virtualized environment, the compute capacity limit is based on the number of logical processors, not cores. The reason is that the processor architecture isn't visible to the guest applications.
https://learn.microsoft.com/en-us/sql/sql-server/compute-capacity-limits-by-edition-of-sql-server?view=sql-server-2016
As I am using a Windows VM, does this mean I need to base my calculations on logical processors?
A logical processor is the number of the processor's cores multiplied by the number of threads per core.
https://www.intel.com/content/www/us/en/support/articles/000036898/processors/intel-xeon-processors.html
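Following that formula, a CPU with 8 cores and 2 threads per core would have 8 × 2 = 16 logical processors. As a sanity check, here is a minimal sketch (assuming a Windows guest) that queries the standard Win32_Processor CIM class to compare cores against logical processors as Windows sees them:

# Compare physical cores with logical processors as reported by Windows.
Get-CimInstance Win32_Processor | Select-Object Name, NumberOfCores, NumberOfLogicalProcessors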
Xen Orchestra shows my CPU as having 15 cores whereas the Intel website shows my CPU as having 8 cores and 16 threads.
Would I be correct in concluding that the number of cores shown in XO equates to the number of logical processors? So the combination of sockets and cores on the VM does not matter as long as the total does not exceed the number of cores listed on the Microsoft website?
Got it. So the DR host may have fewer resources than the production host, which would work for a failure of one or two VMs but not for a total failure of the production host, whereby all VMs need to run on the DR host.
Is it only RAM that is a consideration between hosts? Meaning, if my production host has 16 vCPUs and my DR host only has 8, can I still set the VM to use 16 vCPUs and have it work on both hosts?
@Andrew said in xcp-ng host RAM:
If you have a pool of machines with different sizes of memory this can be an issue because a VM may not be able to run on all hosts in the pool.
That is the issue I have experienced. Our model is one production host and one DR host per client; however, as the DR host has fewer resources than the production host, the VMs are limited to the lesser value for the health check to work.