XCP-ng

    crembz

    @crembz

    Reputation: 1 · Profile views: 1 · Posts: 7 · Followers: 0 · Following: 0


    Best posts made by crembz

    • RE: programmatically connect SR (NFS)

      @Danp That worked a treat, thanks for the link!

      posted in REST API
      crembz

    Latest posts made by crembz

    • Intel Arc & GVT-g

      Has anyone tried getting an Intel Arc working with GVT-d for vGPU? I've seen that some people have tried it with the iGPUs.

      posted in Hardware
      crembz
    • RE: VM Autostart order

      @olivierlambert Yes, I was playing around with that, but I couldn't seem to get it behaving properly, so I went looking through the doco and then came here. I must be missing something.

      Are VMs started in a random sequence, or will everything boot storm?
      If I add delays, how can that control the order across a larger sequence?

      posted in Management
      crembz
    • VM Autostart order

      I'm just getting my head around xcpng/XO after spending a fair few years in ESXi and Proxmox.

      I'm trying to understand how I can control the order in which VMs start and shutdown.

      e.g. I want XO up first, followed by any primary services, followed by 'the rest'.

      Proxmox has the concept of both order and delay. Is there a similar mechanism or a different way of achieving this in XO?
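
      To illustrate what I'm after, here's roughly what I'd otherwise end up scripting by hand with xo-cli once XO itself is up (just a sketch, not a built-in XO feature; it assumes xo-cli is already registered against the XO instance, and the VM UUIDs are placeholders):

      #!/bin/sh
      # Rough start-order workaround: run after XO is up (e.g. from a cron @reboot
      # job on the XO VM). Assumes xo-cli is already registered against this XO
      # instance; the UUIDs below are placeholders for real VM IDs.
      set -e

      # Tier 1: primary services first
      xo-cli vm.start id=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
      xo-cli vm.start id=bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb
      sleep 60   # crude delay so tier 1 settles before the rest

      # Tier 2: 'the rest'
      xo-cli vm.start id=cccccccc-cccc-cccc-cccc-cccccccccccc
      xo-cli vm.start id=dddddddd-dddd-dddd-dddd-dddddddddddd

      That obviously doesn't scale well, which is why I'm hoping there's a proper order/delay mechanism built in.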

      posted in Management
      crembz
    • RE: programmatically connect SR (NFS)

      @Danp That worked a treat, thanks for the link!

      posted in REST API
      crembz
    • programmatically connect SR (NFS)

      I'm somewhat new to xcpng and am trying to figure out a way to connect an NFS SR (or several) once an NFS server comes online.

      Use Case:

      • Following a power outage there is a race condition between xcpng and my NAS; typically the xcpng hosts start first.
      • The SR does not connect to the NAS as it is unavailable at that time.
      • Any VMs hosted on the NFS server fail to start.
      • The SR remains disconnected even once the NAS is up.

      Currently I need to manually reconnect the SR and start the VMs.

      I'm trying to understand how I can have the NAS make an API call to XO to reconnect the SRs and trigger the VMs to boot.

      Using the xo-cli I'm able to issue

      xo-cli sr.connectAllPbds id={SRUID}
      

      but I'm not sure how to automate this on XO or call it via an API call from the NAS.
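
      For context, here's roughly the kind of post-boot hook I'm imagining the NAS would run (just a sketch; it assumes xo-cli is installed on the NAS and already registered against my XO instance, that showmount is available to check the export, and the SR/VM UUIDs are placeholders):

      #!/bin/sh
      # Sketch of a startup task / cron @reboot job on the NAS.
      # Assumes xo-cli is registered against the XO instance and that the UUIDs
      # below are replaced with the real SR and VM IDs.
      set -e

      # Wait until the NFS exports are actually being served before poking XO.
      until showmount -e localhost >/dev/null 2>&1; do
          sleep 10
      done

      # Reconnect the NFS SR by plugging all of its PBDs.
      xo-cli sr.connectAllPbds id=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa

      # Then start the VMs that live on that SR.
      xo-cli vm.start id=bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb
      xo-cli vm.start id=cccccccc-cccc-cccc-cccc-cccccccccccc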

      Can anyone help guide me here?

      Cheers

      posted in REST API
      crembz
    • RE: Nas/Home/Lab heterogeneous pools and failure

      @olivierlambert interesting, I do like how XO manages all hosts in the one place. My understanding though is that HA is not possible across clusters, right?

      I tend to pick up hosts over time, so having a homogeneous cluster is practically impossible.

      posted in Compute
      crembz
    • Nas/Home/Lab heterogeneous pools and failure

      I'm new to xcp-ng so please bear with me. I'm in the process of rebuilding my home network and lab and have been having a play with xcp-ng. I'm coming from years of running Proxmox, and KVM before that.

      I'm trying to understand whether I can achieve the following using xcp-ng:

      I have a cluster of 7 hosts.

      1. Virtual NAS with PCIE passthrough - threadripper 2950x
      2. One host for critical network infra: fw, dns, vpn, omada - i5 8500t
      3. Virtual ESXi host - 5900x
      4. Virtual Hyper V host - i5 10500t
      5. Virtual Nutanix host - i7 6700
      6. Virtual KVM host - i5 6500t
      7. Docker host - i5 8500

      I run everything virtual so I can easily snapshot and revert states; I'm testing a bunch of automation across hypervisors and public clouds.

      Hosts 1, 2, 3 and 7 are connected with 10 Gb NICs; the others all have 1 Gb NICs. The NAS hosts the VM drives for all but host #2.

      I'd love to have everything in one giant pool and turn all bar the critical network host off at night to save on power (which is why host #2 runs its VMs on local storage). I've not found an elegant way to do this on PVE. Are there any clever ideas on how I could achieve this on xcp-ng?
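
      To make the power-saving idea concrete, this is roughly the nightly job I'd picture running from cron on the always-on host (purely a sketch; it assumes xo-cli is registered against XO, that host.stop and host.start can be driven this way, and the host UUIDs are placeholders):

      #!/bin/sh
      # Rough evening/morning power schedule, run from cron on the always-on host (#2).
      # Assumes xo-cli is registered against XO and that host.start has a working
      # power-on method (WoL/IPMI) configured; the UUIDs below are placeholders.
      set -e

      LAB_HOSTS="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"

      case "$1" in
        evening)
            for h in $LAB_HOSTS; do
                xo-cli host.stop id="$h"     # shut the lab hosts down for the night
            done
            ;;
        morning)
            for h in $LAB_HOSTS; do
                xo-cli host.start id="$h"    # power them back on in the morning
            done
            ;;
      esac

      I don't know yet whether that plays nicely with VMs still running on those hosts, which is partly why I'm asking.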

      Also, I noticed VIFs are set up pool-wide and mapped to a PIF. How do you handle hosts with different NIC counts? In the 10g boxes my PIF0 is mapped to the 1g ports, which are not being used (I'm using the 10g).

      NOTE - I've been able to have both AMD and Intel hosts in the same cluster in PVE and live migrate guests between them. I don't think having mixed CPU vendors in the same pool works in xcp-ng. My understanding though is that on xcp-ng I can cold migrate between pools, which I might be able to live with.

      posted in Compute
      crembz