Best posts made by Davidj 0
-
RE: Our future backup code: test it!
-
RE: Recommended CPU & RAM for physical hypervisor node
@zgulec
You might build your first pool using your existing rules (eg, pCPU:vCPU = 1:3). As the business expands, you will gain experience that lets you adjust those rules when building future pools. I would be interested in hearing what you find.
Be sure to give the control domain (aka "dom0") enough RAM.
The hosts in a pool should all be the same - same CPU type, same number of cores, same amount of RAM, same number of network ports, and a consistent network design.
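To make the ratio concrete, a purely illustrative calculation (the host size is made up): with 32 physical cores and a 1:3 rule, you would budget at most 32 x 3 = 96 vCPUs across the VMs on that host, then adjust the rule as you learn how your workloads behave.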
-
RE: Remove running host
@assebasse said in Remove running host:
I am just trying to remove the host, because it no longer exists
In that case, try
xe host-declare-dead
You will need the prior UUID of the host, that is, the UUID assigned to the host before you reinstalled xcp-ng.
xe host-list
will provide that. Read the warnings about host-declare-dead carefully.
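For illustration, the commands would look something like this (the UUID is a placeholder):
xe host-list params=uuid,name-label    # identify the UUID of the stale host entry
xe host-declare-dead uuid=<old-host-uuid>    # tell the pool that host is permanently gone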
-
RE: DevOps Megathread: what you need and how we can help!
The existing technical documentation is great. An operations guide would be helpful. Here are a couple of chapter ideas:
- Best practices for setting up your environment (eg, odd number of hosts, isolate the management network, treat your hosts like cattle, etc)
- Preparing for disasters (eg, requirements for restoring pool metadata, how to recover a single VM if your pool data is gone, etc; see the example after this list)
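On the pool-metadata point, a minimal sketch of what such a chapter might walk through, using the standard xe CLI (the file path is only an example):
xe pool-dump-database file-name=/root/pool-metadata.backup    # export the pool database from the master
xe pool-restore-database file-name=/root/pool-metadata.backup dry-run=true    # rehearse a restore without applying it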
-
Latest posts made by Davidj 0
-
RE: Multiple AD sources to Xen Orchestra
@uckertt Are both AD domains in the same forest?
-
RE: Large incremental backups
@cHenry Can you put the paging file on a separate disk, and then tag that disk not to be backed up?
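If I remember correctly, Xen Orchestra skips disks whose name starts with the [NOBAK] prefix, so tagging the paging-file disk could look roughly like this (disk name and UUID are placeholders):
xe vdi-list name-label="pagefile" params=uuid --minimal    # find the VDI holding the paging file
xe vdi-param-set uuid=<vdi-uuid> name-label="[NOBAK] pagefile"    # rename it so backup jobs ignore it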
-
RE: Our future backup code: test it!
@ristis Oris My setup is similar to yours, and I get similar errors on any existing backup job.
However, if I create a new backup job, then it works without any error.
ping @lorent, maybe these data points are useful.
-
RE: Our future backup code: test it!
Like @ristis Oris, my first tests were done using a clone of my production XO. They failed with "No NBD" errors.
Creating a new backup works.
-
RE: Our future backup code: test it!
Testing a continuous replication backup, I get Error: can't connect to any nbd client
Running the same job on my regular XO works fine. Let me know what logs you want to see.
mail (h2)
Snapshot - Start: 2025-04-10 18:18, End: 2025-04-10 18:18
local storage (130.02 GiB free) - h3
Start: 2025-04-10 18:18
Start: 2025-04-10 18:18, End: 2025-04-10 18:18, Duration: a few seconds
Error: can't connect to any nbd client
Type: delta
-
RE: Our future backup code: test it!
@lorent I'm doing this from source.
Running git pull --rebase seems to have fixed the build problems.
-
RE: Our future backup code: test it!
I'm still getting "Cannot find module '@vates/generator-toolbox'" errors when I try to build it. I'm using the 'feat_generator_backups' branch.
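For anyone following along, a minimal sketch of the from-source rebuild flow I mean, assuming XO's usual yarn-based build (the exact commands are not spelled out in this thread):
git checkout feat_generator_backups    # the branch under test
git pull --rebase    # pick up the latest commits on that branch
yarn    # refresh dependencies such as @vates/generator-toolbox
yarn build    # rebuild xo-server and xo-web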
-
Suggestion: Backup of VM with no disk should not fail with "Error: NO_HOSTS_AVAILABLE()"
I found this while investigating repeated backup failure reports. The error report says "NO_HOSTS_AVAILABLE()".
I finally noticed that the user had removed all of the disks from the VM.
Could the backup code be adjusted to ignore VMs that don't have disks, or to return a different error?
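In the meantime, a hedged way to spot such VMs with only standard xe commands (the VM UUID is a placeholder):
xe vm-list is-control-domain=false params=uuid --minimal    # list candidate VM UUIDs
xe vbd-list vm-uuid=<vm-uuid> type=Disk params=uuid --minimal    # empty output means the VM has no disks attached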
-
RE: Using Multiple Servers in LDAP Plug-in
@agbasi-ngc
First, a disclaimer: it's been a while since I last designed an HA solution for AD, and things may have improved.
You should reconsider your approach to designing a highly available AD. Configuring the client to guess which AD server to use will work fine, but only as long as both AD servers are healthy. If there is ever a problem between them, you will have no control over which AD server the client hits.
If I remember correctly, best practice for highly available Active Directory is either to use Microsoft's AD VIP solution, or to use round-robin DNS (multiple records with the same name but different IP addresses). Either way, all of the clients in a single domain should have the same configuration.
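To illustrate the round-robin option, the zone would simply carry one A record per domain controller under a single name (names and addresses here are made up):
ad.example.com.    IN  A  192.0.2.11    ; first domain controller
ad.example.com.    IN  A  192.0.2.12    ; second domain controller
Every client is then configured against ad.example.com only, and DNS rotates the order in which the two addresses are returned.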