Got it. As we only have a single production server there is no shared storage, so I guess the idea of a pool is moot.
Posts made by McHenry
-
RE: Replication retention & max chain size
-
RE: Replication retention & max chain size
@Andrew said in Replication retention & max chain size:
It's a quick DR server that can start an VM (or a clone) immediately if the main pool fails
Does this mean you do not have your production server(s) and DR server in the same pool?
At each client site we have a single production server and a single DR server. Both have been added to the same client-specific pool, however I have no idea if this is good or bad.
-
RE: VM RAM & CPU allocation for health check on DR host
We are looking to migrate from Datto backups to xcp-ng/XO
The standard Datto DR device we use for clients is:
https://continuity.datto.com/help/Content/kb/siris-alto-nas/208645136.html#S4B3
4 cores with 8 threads, so HT must be enabled already:
https://www.intel.com/content/www/us/en/products/sku/136429/intel-xeon-d2123it-processor-8-25m-cache-2-20-ghz/specifications.html
The Datto system also does a type of health check (screenshot verification). I am not sure how they do it, as the VMs never fail to boot regardless of VM resources.
https://continuity.datto.com/help/Content/kb/siris-alto-nas/KB205330860.html
The lower spec'd DR machine is not perfect, however it has served its purposes to date, being:
- Automatic backup verification (health check / screenshot verification)
- DR hardware in the event of a production server failure
The second scenario is not very likely, from my experience, however if the VMs had to be virtualised on the DR host then they would work, albeit slower.
What I am still not understanding is this concept of "CPU Limits"... if a host has 32 CPUs and a VM is allocated 8, why can I then, on the Advanced tab, allocate the VM only a fraction of those 8, such as "4/8"? What is the difference between setting the VM vCPUs on the General tab vs the CPU Limits section on the Advanced tab?
-
VM RAM & CPU allocation for health check on DR host
At each client site we run two hosts on premise.
- Production
- DR
The DR host usually has fewer resources than the production host.
I am trying to work out how best to allocate resources (RAM & CPU) to a VM so that it gets the maximum benefit from the production host whilst still being able to perform a health check on the DR host.
For example:
- Production host = 128GB & 32 CPUs
- DR host = 32GB & 8 CPUs
I can set my VM to run with 24GB RAM, however if I allocate it only 8 vCPUs then it runs slow on production. If I allocate it more than 8 vCPUs then it fails the health check with "No Hosts Available".
I am not fully understanding the concept of CPU Limits
-
RE: from Hyper-V
Convert-VHD -Path <source path> -DestinationPath <destination path> -VHDType Dynamic
The disk type is set by the file extension, so ensure the destination path filename ends in ".vhd"
-
RE: from Hyper-V
We have a simplified process now.
- Shutdown VM in Hyper-V
- Convert VHDX to VHD using PowerShell
- Move VHD to xcp-ng using SSH
- Generate new name using uuidgen
- Rename VHD
- Create VM in XO and attach VHD
After much trial and error this works every time.
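For anyone doing the same thing, here is a rough sketch of how the xcp-ng side of those steps can be scripted. It assumes a file-based SR (e.g. local ext) mounted under /run/sr-mount/<SR-UUID>; the hostname, filenames and <SR-UUID> are placeholders, not my actual values.

# On the machine holding the converted VHD: copy it to the xcp-ng host over SSH
scp server1.vhd root@xcp-host:/run/sr-mount/<SR-UUID>/

# In the xcp-ng host console (dom0): rename the file to a fresh UUID so the SR will pick it up
cd /run/sr-mount/<SR-UUID>
NEW_UUID=$(uuidgen)
mv server1.vhd "${NEW_UUID}.vhd"

# Rescan the SR so the VHD appears as a VDI, then create the VM in XO and attach it
xe sr-scan uuid=<SR-UUID>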
-
RE: VM vCPU allocation
Sorry, 100% confused now
In a virtualized environment, the compute capacity limit is based on the number of logical processors, not cores. The reason is that the processor architecture isn't visible to the guest applications.
https://learn.microsoft.com/en-us/sql/sql-server/compute-capacity-limits-by-edition-of-sql-server?view=sql-server-2016
As I am using a Windows VM, does this mean I need to base my calculations on logical processors?
A logical processor is the number of the processor's cores multiplied by the number of threads per core.
https://www.intel.com/content/www/us/en/support/articles/000036898/processors/intel-xeon-processors.html
Xen Orchestra shows my CPU as having 15 cores whereas the Intel website shows my CPU as having 8 cores and 16 threads (8 cores x 2 threads per core = 16 logical processors).
Would I be correct in concluding that the number of cores in XO equates to the number of logical processors? So the combination of sockets and cores on the VM does not matter as long as the total does not exceed the number of cores listed on the Microsoft website?
-
RE: xcp-ng host RAM
Got it. So the DR host may have fewer resources than the production host, which would work for a failure of one or two VMs but not for a total failure of the production host whereby all VMs need to run on the DR host.
Is it only RAM that is a consideration between hosts? Meaning, if my production host has 16 vCPUs and my DR host only has 8 vCPUs, can I still set the VM to use 16 vCPUs and have it work on both hosts?
-
RE: xcp-ng host RAM
@Andrew said in xcp-ng host RAM:
If you have a pool of machines with different sizes of memory this can be an issue because a VM may not be able to run on all hosts in the pool.
That is the issue I have experienced. Our model is one production host and one DR host per client; however, as the DR host has fewer resources than the production host, the VMs are limited to the lesser value for the health check to work.
-
xcp-ng host RAM
When determining how much RAM to allocate to a VM to ensure it can boot on another host, I need to take into account the RAM allocated to the host OS.
I can see this value can be set here:
My question is, how to determine what this value should be set to?
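As a starting point, this is what I have been running from the host console to see what dom0 currently has and is actually using; just a sanity check rather than a sizing rule.

# Total and free memory as seen by the hypervisor (MiB)
xl info | grep -E 'total_memory|free_memory'

# RAM allocated to and used by dom0 (the control domain) itself
free -m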
-
RE: VM vCPU allocation
What about topology?
- 1 X 8
- 2 X 4
- 4 X 2
- 8 X 1
I understand this to mean:
- With SQL Server Express 1 X 8 would be optimal
- With SQL Server Standard 4 X 2 would be optimal
Does that sound right?
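To make sure I am thinking about this correctly, here is how I understand the topology is applied on the XCP-ng side; a rough sketch using 2 x 4 as the example, with a placeholder VM UUID, so please correct me if the mechanism is different.

# The VM must be halted before changing its vCPU settings
xe vm-shutdown uuid=<VM-UUID>

# 8 vCPUs in total...
xe vm-param-set uuid=<VM-UUID> VCPUs-max=8
xe vm-param-set uuid=<VM-UUID> VCPUs-at-startup=8

# ...presented to the guest as 2 sockets of 4 cores each
xe vm-param-set uuid=<VM-UUID> platform:cores-per-socket=4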
-
RE: VM vCPU allocation
So if a VM will only ever use the CPU resources it requires, regardless of what is allocated to it, what is the case for ever allocating fewer than the maximum number of vCPUs available?
-
VM vCPU allocation
If a host has 8 CPUs:
If running one VM, will there be a difference in performance between allocating to the VM:
- 2 vCPUs
- 8 vCPUs
If running two VMs, will there be a difference in performance between allocating to each VM:
- 2 vCPUs
- 4 vCPUs
- 8 vCPUs
Edit: Windows Server VMs
-
RE: Health Check - No Hosts Available
So, when determining the resource allocation on a VM to ensure the health check does not encounter the "NO_HOSTS_AVAILABLE" error, should I set the static or the dynamic memory values?
In Hyper-V there is a toggle to select between static and dynamic memory allocation.
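For reference, XCP-ng appears to track four memory values per VM rather than a single toggle; this is how I have been reading them (placeholder VM UUID), to work out which ones the health check cares about.

# The four memory limits on a VM: static-min <= dynamic-min <= dynamic-max <= static-max
xe vm-list uuid=<VM-UUID> params=memory-static-min,memory-dynamic-min,memory-dynamic-max,memory-static-max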
-
RE: Health Check - No Hosts Available
What is the difference between the static and dynamic values for RAM?
My production host has 128GB and my DR host has 32GB so I want to have enough RAM on the VM in production but still allow the health check on the DR host.
-
RE: Health Check - No Hosts Available
I am not sure if we are booting the VM on the production host or the BCDR host, however there should be plenty of RAM.
Production - 41GB free
BCDR - 27GB free
How can I further diagnose this?
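In case it is useful, this is the kind of check I can run from the pool master to compare what the VM needs against what each host reports; the UUIDs are placeholders.

# The amount of memory the VM must be able to claim in order to start
xe vm-param-get uuid=<VM-UUID> param-name=memory-static-max

# Memory figures reported for a host (filter the full parameter list)
xe host-param-list uuid=<HOST-UUID> | grep -i memory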
-
RE: Health Check - No Hosts Available
This is using XO.
There are three VMs in the backup however to isolate the issue I only have the health check being performed on one.
{ "data": { "mode": "delta", "reportWhen": "failure" }, "id": "1730359077342", "jobId": "73df5884-3e18-4ce9-9833-53be5a6e0a34", "jobName": "Production", "message": "backup", "scheduleId": "dae62e27-1434-45d5-930b-489e58fd7909", "start": 1730359077342, "status": "failure", "infos": [ { "data": { "vms": [ "ae29cc56-8db5-560a-7694-ba74c66f9b21", "14de66b1-4010-083c-a6f1-36718eec5c71", "af2b36be-d1cf-4e67-e1a7-37f0c94300d5" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "ae29cc56-8db5-560a-7694-ba74c66f9b21", "name_label": "SERVER3" }, "id": "1730359078760", "message": "backup VM", "start": 1730359078760, "status": "success", "tasks": [ { "id": "1730359079179", "message": "snapshot", "start": 1730359079179, "status": "success", "end": 1730359080449, "result": "ba1ea463-f4df-189e-1215-2cf200da84eb" }, { "id": "1730359089180", "message": "health check", "start": 1730359089180, "status": "success", "infos": [ { "message": "This VM doesn't match the health check's tags for this schedule" } ], "end": 1730359089181 }, { "data": { "id": "eb9076c3-13a5-f9a7-43e3-ff1146e5599d", "isFull": false, "name_label": "SVR10533 ZFS", "type": "SR" }, "id": "1730359080450", "message": "export", "start": 1730359080450, "status": "interrupted", "tasks": [ { "id": "1730359081779", "message": "transfer", "start": 1730359081779, "status": "success", "end": 1730359087928, "result": { "size": 205881344 } } ] } ], "end": 1730359089208 }, { "data": { "type": "VM", "id": "14de66b1-4010-083c-a6f1-36718eec5c71", "name_label": "SERVER2" }, "id": "1730359078763", "message": "backup VM", "start": 1730359078763, "status": "success", "tasks": [ { "id": "1730359079179:0", "message": "snapshot", "start": 1730359079179, "status": "success", "end": 1730359399186, "result": "8b2645a2-13ef-72f7-9b67-6c5a0d0ae055" }, { "id": "1730359704874", "message": "health check", "start": 1730359704874, "status": "success", "infos": [ { "message": "This VM doesn't match the health check's tags for this schedule" } ], "end": 1730359704875 }, { "data": { "id": "eb9076c3-13a5-f9a7-43e3-ff1146e5599d", "isFull": false, "name_label": "SVR10533 ZFS", "type": "SR" }, "id": "1730359399186:0", "message": "export", "start": 1730359399186, "status": "interrupted", "tasks": [ { "id": "1730359400614", "message": "transfer", "start": 1730359400614, "status": "success", "end": 1730359421833, "result": { "size": 570976768 } } ] } ], "end": 1730359704909 }, { "data": { "type": "VM", "id": "af2b36be-d1cf-4e67-e1a7-37f0c94300d5", "name_label": "SERVER01" }, "id": "1730359089212", "message": "backup VM", "start": 1730359089212, "status": "failure", "tasks": [ { "id": "1730359089388", "message": "snapshot", "start": 1730359089388, "status": "success", "end": 1730359091775, "result": "775b85a8-25e9-c424-5661-29b5658a32b0" }, { "data": { "id": "eb9076c3-13a5-f9a7-43e3-ff1146e5599d", "isFull": false, "name_label": "SVR10533 ZFS", "type": "SR" }, "id": "1730359091775:0", "message": "export", "start": 1730359091775, "status": "failure", "tasks": [ { "id": "1730359093218", "message": "transfer", "start": 1730359093218, "status": "success", "end": 1730359139809, "result": { "size": 2256734720 } }, { "id": "1730359703451", "message": "health check", "start": 1730359703451, "status": "failure", "tasks": [ { "id": "1730359703457", "message": "cloning-vm", "start": 1730359703457, "status": "success", "end": 1730359712724, "result": "OpaqueRef:97325b70-1993-bfc6-d976-0578cd13fafc" }, { "id": "1730359712732", "message": "vmstart", "start": 1730359712732, 
"status": "failure", "end": 1730359713131, "result": { "code": "NO_HOSTS_AVAILABLE", "params": [], "task": { "uuid": "d4383fd4-5a0a-bf4d-ee58-9dd87e5ef698", "name_label": "Async.VM.start", "name_description": "", "allowed_operations": [], "current_operations": {}, "created": "20241031T07:28:32Z", "finished": "20241031T07:28:32Z", "status": "failure", "resident_on": "OpaqueRef:1cfd3a0d-5417-0899-2351-6da1b9449f09", "progress": 1, "type": "<none/>", "result": "", "error_info": [ "NO_HOSTS_AVAILABLE" ], "other_config": {}, "subtask_of": "OpaqueRef:NULL", "subtasks": [], "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_placement.ml)(line 106))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1484))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/helpers.ml)(line 1512))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1476))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1876))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1858))((process xapi)(filename ocaml/xapi/rbac.ml)(line 191))((process xapi)(filename ocaml/xapi/rbac.ml)(line 200))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 75)))" }, "message": "NO_HOSTS_AVAILABLE()", "name": "XapiError", "stack": "XapiError: NO_HOSTS_AVAILABLE()\n at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/_XapiError.mjs:16:12)\n at default (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/_getTaskResult.mjs:13:29)\n at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1047:24)\n at file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1081:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1071:12)\n at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1244:14)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)" } } ], "end": 1730359720906, "result": { "code": "NO_HOSTS_AVAILABLE", "params": [], "task": { "uuid": "d4383fd4-5a0a-bf4d-ee58-9dd87e5ef698", "name_label": "Async.VM.start", "name_description": "", "allowed_operations": [], "current_operations": {}, "created": "20241031T07:28:32Z", "finished": "20241031T07:28:32Z", "status": "failure", "resident_on": "OpaqueRef:1cfd3a0d-5417-0899-2351-6da1b9449f09", "progress": 1, "type": "<none/>", "result": "", "error_info": [ "NO_HOSTS_AVAILABLE" ], "other_config": {}, "subtask_of": "OpaqueRef:NULL", "subtasks": [], "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_placement.ml)(line 106))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1484))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename 
ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/helpers.ml)(line 1512))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1476))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1876))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1858))((process xapi)(filename ocaml/xapi/rbac.ml)(line 191))((process xapi)(filename ocaml/xapi/rbac.ml)(line 200))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 75)))" }, "message": "NO_HOSTS_AVAILABLE()", "name": "XapiError", "stack": "XapiError: NO_HOSTS_AVAILABLE()\n at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/_XapiError.mjs:16:12)\n at default (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/_getTaskResult.mjs:13:29)\n at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1047:24)\n at file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1081:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1071:12)\n at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1244:14)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)" } } ], "end": 1730359720906, "result": { "code": "NO_HOSTS_AVAILABLE", "params": [], "task": { "uuid": "d4383fd4-5a0a-bf4d-ee58-9dd87e5ef698", "name_label": "Async.VM.start", "name_description": "", "allowed_operations": [], "current_operations": {}, "created": "20241031T07:28:32Z", "finished": "20241031T07:28:32Z", "status": "failure", "resident_on": "OpaqueRef:1cfd3a0d-5417-0899-2351-6da1b9449f09", "progress": 1, "type": "<none/>", "result": "", "error_info": [ "NO_HOSTS_AVAILABLE" ], "other_config": {}, "subtask_of": "OpaqueRef:NULL", "subtasks": [], "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_placement.ml)(line 106))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1484))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/helpers.ml)(line 1512))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1476))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1876))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1858))((process xapi)(filename ocaml/xapi/rbac.ml)(line 191))((process xapi)(filename ocaml/xapi/rbac.ml)(line 200))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 
75)))" }, "message": "NO_HOSTS_AVAILABLE()", "name": "XapiError", "stack": "XapiError: NO_HOSTS_AVAILABLE()\n at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/_XapiError.mjs:16:12)\n at default (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/_getTaskResult.mjs:13:29)\n at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1047:24)\n at file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1081:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1071:12)\n at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1244:14)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)" } } ], "end": 1730359720956, "result": { "code": "NO_HOSTS_AVAILABLE", "params": [], "task": { "uuid": "d4383fd4-5a0a-bf4d-ee58-9dd87e5ef698", "name_label": "Async.VM.start", "name_description": "", "allowed_operations": [], "current_operations": {}, "created": "20241031T07:28:32Z", "finished": "20241031T07:28:32Z", "status": "failure", "resident_on": "OpaqueRef:1cfd3a0d-5417-0899-2351-6da1b9449f09", "progress": 1, "type": "<none/>", "result": "", "error_info": [ "NO_HOSTS_AVAILABLE" ], "other_config": {}, "subtask_of": "OpaqueRef:NULL", "subtasks": [], "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_placement.ml)(line 106))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1484))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/helpers.ml)(line 1512))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1476))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1876))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1858))((process xapi)(filename ocaml/xapi/rbac.ml)(line 191))((process xapi)(filename ocaml/xapi/rbac.ml)(line 200))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 75)))" }, "message": "NO_HOSTS_AVAILABLE()", "name": "XapiError", "stack": "XapiError: NO_HOSTS_AVAILABLE()\n at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/_XapiError.mjs:16:12)\n at default (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/_getTaskResult.mjs:13:29)\n at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1047:24)\n at file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1081:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1071:12)\n at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202410311742/packages/xen-api/index.mjs:1244:14)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)" } } ], "end": 1730359720957 }
-
Health Check - No Hosts Available
Not sure where to start to diagnose this issue.
The backup works, however it fails on the health check. My backup is configured to use the DR host for the health check:
The VM has the following resource limits, which I understand means it starts with 8 vCPUs and 16GB RAM.
Here are the specs of my DR host: