XCP-ng Forum

    Posts by McHenry
    • RE: VM association with shared storage

      @ph7

      To automatically update the hosts? I expect a host reboot would be required for the patches to take effect, but how can this be automated if the host has running VMs?
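      For what it's worth, a host can usually be emptied of running VMs before a reboot. A rough sketch with the xe CLI, assuming shared storage and at least one other host in the pool (the UUID is a placeholder):

```shell
# Placeholder UUID -- find yours with: xe host-list
HOST_UUID=<host-uuid>

# Stop new VMs landing on the host, then live-migrate
# its running VMs to other pool members
xe host-disable uuid=$HOST_UUID
xe host-evacuate uuid=$HOST_UUID

# Reboot the (now empty) host; re-enable it once it is back up
xe host-reboot uuid=$HOST_UUID
xe host-enable uuid=$HOST_UUID
```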

      posted in Management
      McHenry
    • RE: Alarms in XO

      @Danp @DustinB @ph7

      This host does not run any VMs; it is just used for CR.

      I've increased the dom0 RAM to 4GB and there are no more alarms.
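      For reference, the dom0 memory on XCP-ng can be set from the command line; a sketch (the 4096M value matches what I used, adjust to taste):

```shell
# Set dom0 memory to 4 GiB; takes effect after the host is rebooted
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=4096M,max:4096M
```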

      (screenshot)

      posted in Management
    • Alarms in XO

      When I check "Health" in XO everything appears fine, but I do see a number of alarms; the problem is I have no idea what they mean. I do not think I have any system performance issues, but I am sure these should not be ignored.

      HST150 is a host used for disaster recovery via CR.

      (screenshot)

      posted in Management
    • RE: VM association with shared storage

      @olivierlambert

      Why did I not do this sooner 🙂

      posted in Management
    • VM association with shared storage

      I have recently changed our setup to use FreeNAS shared storage for VMs. Now that I have shared storage and two hosts, I can move running VMs between hosts. This makes it easy to patch and restart a host by moving the VMs off it first.

      As opposed to moving the VMs, I could schedule a maintenance window, shut down the VMs, then patch and reboot the host. In this scenario, if the host were to fail, I expect nothing would be lost, as the shared storage is independent. I could then simply start the VMs on the remaining host, meaning there is no hard link between a host and a VM.
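      That maintenance-window sequence could be sketched with the xe CLI roughly as below; the host UUID is a placeholder, and it assumes the VMs sit on the shared SR:

```shell
HOST=<host-uuid>   # placeholder -- see: xe host-list

# Cleanly shut down every VM resident on the host
for VM in $(xe vm-list resident-on=$HOST is-control-domain=false --minimal | tr ',' ' '); do
  xe vm-shutdown uuid=$VM
done

# Patch and reboot the host; if it never comes back, the VMs
# can be started on the remaining host since the storage is shared
xe host-disable uuid=$HOST
xe host-reboot uuid=$HOST
```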

      Does this sound correct?

      posted in Management
    • RE: Backup Issue: "timeout reached while waiting for OpaqueRef"

      @stevewest15

      I believe this issue was resolved when the health check system was changed to detect network connectivity at startup, so it does not need to wait for the entire VM to boot. It needs the Xen tools to be installed. I have not had an issue since this change.

      posted in Backup
    • VM resource usage

      I have a host with 48 CPUs and 96GB RAM.
      (screenshot)

      I understand that RAM cannot be over-allocated to VMs or I get the "no hosts available" message at startup. However, I understand CPUs can be, so I could allocate all VMs 48 CPUs and they will only utilise what they require. Is this correct, and if so, is there any reason not to allocate the max CPUs to a VM?

      Further to this, if I only allocate the VMs a smaller number of CPUs, do they all overlap on the first X CPUs and leave the others unused? i.e. if VM1 has 8 vCPUs it will only use the first 8, and if VM2 has 4 it will only use the first 4.

      I ask as I currently have multiple VMs using fewer CPUs than the host has, however the status only shows the first 15 CPUs in use on the host:
      (screenshot)
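      For context on the overlap question: as I understand it, the Xen scheduler places vCPUs on any available physical CPU rather than pinning them to the first N, unless CPU affinity is explicitly configured. The per-VM vCPU counts can be changed with xe (the UUID is a placeholder):

```shell
# VCPUs-max can only be changed while the VM is halted;
# VCPUs-at-startup is how many vCPUs come online at boot
xe vm-param-set uuid=<vm-uuid> VCPUs-max=8
xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=8
```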

      posted in Management
    • RE: Best strategy for Continuous Replication

      @olivierlambert

      Here's the new model. We've tried a few combinations and I think with TrueNAS shared storage this will now work well.
      (screenshot)

      posted in Backup
    • RE: Best strategy for Continuous Replication

      For anyone else looking to connect TrueNAS with XCP-ng:

      https://www.youtube.com/watch?v=mdHmcwWTNWA

      posted in Backup
    • RE: Best strategy for Continuous Replication

      In the TrueNAS NFS share settings I set this, and it now works.

      (screenshot)

      posted in Backup
    • RE: Best strategy for Continuous Replication

      @Andrew @olivierlambert

      I have set up TrueNAS with an NFS share, however I am unable to connect it as a remote.
      (screenshot)

      (screenshot)

      Is there a guide on how to configure connecting XO to NFS?
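      One quick check from the XCP-ng host itself is whether the TrueNAS export is visible and mountable at all; the hostname and export path below are made-up examples:

```shell
# List the exports the NAS is publishing
showmount -e truenas.local

# Manual test mount to rule out permission/maproot problems
mkdir -p /mnt/nfstest
mount -t nfs truenas.local:/mnt/tank/xcp /mnt/nfstest
umount /mnt/nfstest
```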

      posted in Backup
    • RE: Best strategy for Continuous Replication

      @olivierlambert

      Makes perfect sense.

      I expect having separate storage for the production VMs and CR VMs makes sense too.

      I am now thinking a good robust model would be:

      1. One or more production hosts in a single pool (allows host migration for updates)
      2. One TrueNAS Scale for production shared storage
      3. One CR host with local storage
      posted in Backup
    • Best strategy for Continuous Replication

      I had a server dedicated to CR that was part of my pool.

      I recently lost the pool master and in turn lost access to the CR host too.

      The official docs state that CR can be used if the main pool fails, which indicates that having the CR host as part of the pool is not a good idea.
      (screenshot)

      Is it best practice to not have the CR host as part of the main pool?

      Alternatively, would a better setup not be to have multiple XCP-ng hosts with central shared storage for both production VMs and CR VMs? This way, if a single XCP-ng host fails, the CR VMs can easily be started on the other host. A variation of this would be to have two shared storage repos, one for production VMs and one for CR VMs.

      I am keen to hear others' thoughts on this.

      posted in Backup
    • RE: Host failure after patches

      @flakpyro

      All this complexity makes me question the advantages of having all hosts in the same pool.

      posted in Management
    • RE: Host failure after patches

      @flakpyro

      My setup is pretty basic.
      I have two hosts in the pool: one for running VMs on local storage and one for DR backups on local storage.
      I'd like to set up shared storage so I could run the VMs on multiple hosts and seamlessly move them between hosts without migrating storage too.

      To set up shared storage, would this be on an XCP-ng host or totally independent of XCP-ng?

      posted in Management
    • RE: Host failure after patches

      @flakpyro

      Can I restart the slave and install patches if the master has not been patched yet?

      posted in Management
    • RE: Host failure after patches

      @flakpyro

      I have restarted the toolstack but it made no difference.

      When I view hst110 I see:
      (screenshot)

      I was tempted to restart it, however it has patches pending installation after a restart, and as the master is not fully patched I thought it best not to restart it. I understand the master needs to be patched first.

      I just checked xsconsole on hst110 and it still shows the pool master as unavailable.
      (screenshot)

      Do I need to change the pool master used by the slave?
      (screenshot)
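      If a slave is still pointing at the failed master, my understanding is it can be re-pointed from the slave itself; the IP below is a placeholder:

```shell
# Run on the slave: point it at the new pool master
xe pool-emergency-reset-master master-address=<new-master-ip>
```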

      posted in Management
    • RE: Host failure after patches

      @flakpyro

      Apologies, I missed that. I have run the command and now see:
      (screenshot)

      hst103 is the new pool master. hst100 is the old pool master that failed.

      In XO I can only see hst103 under hosts, however all three hosts are listed under the pool:
      (screenshot)

      posted in Management
    • RE: Host failure after patches

      @flakpyro

      No I didn't. Does this need to be run on the slave itself or the new master?

      When you say "after selecting a new master" do you mean after I did this on the new master?

      xe pool-emergency-transition-to-master
      

      Edit:
      Found this, which shows the command is run on the new pool master:
      https://www.ervik.as/how-to-change-the-pool-master-in-a-xenserver-farm/
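      For completeness: after running the transition command on the new master, the remaining slaves can usually be reconnected from there with:

```shell
# Run on the new pool master, after pool-emergency-transition-to-master
xe pool-recover-slaves
```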

      posted in Management