XCP-ng

    Posts

    • RE: Unable to add host to pool (db mismatch)

      @olivierlambert yes we have. What's funny is that we were able to add the other member to the pool a couple of months ago (this is the final one we need to add before we can actually schedule downtime to do proper pool maintenance). Our current pool only has 2 hosts in it, so we cannot really do master / pool restarts: a reboot would leave a single host, which would more than likely be overloaded.

      We have installed exactly the same patches as the pool. The new host has also been rebooted. Is there a manual mechanism to upgrade the schema on the new host? As the new host is currently empty, we're happy to perform whatever is needed to get it to join the pool.
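For anyone hitting the same wall: as far as I understand, the schema version pair in the error (5.602 vs 5.603) is tied to the installed xapi packages, so a package-level diff between the pool master and the joining host is usually the quickest sanity check. A rough sketch (hostnames are placeholders):

```shell
# Compare installed packages between the pool master and the joining host.
# "pool-master" and "new-host" are placeholder hostnames.
ssh root@pool-master 'rpm -qa | sort' > master-pkgs.txt
ssh root@new-host    'rpm -qa | sort' > newhost-pkgs.txt
diff master-pkgs.txt newhost-pkgs.txt
# Any xapi / xcp-ng related lines in the diff point at the version gap
# behind POOL_JOINING_HOST_MUST_HAVE_SAME_DB_SCHEMA.
```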

      posted in Xen Orchestra
      mauzilla
    • RE: Unable to add host to pool (db mismatch)

      @olivierlambert - what can we provide to assist, log-wise? I can confirm the only 2 packages that differ per the yum changes are iftop and the Intel fibre card package. The rest are identical.

      Is there another command we can run to retrigger the database migration on the new host?

      posted in Xen Orchestra
      mauzilla
    • Unable to add host to pool (db mismatch)

      We've been building on a running pool and want to add our last host to it. The host had a fresh install (with renamed interfaces), but adding it gives:

      POOL_JOINING_HOST_MUST_HAVE_SAME_DB_SCHEMA(5.602, 5.603)

      We opened a ticket and were advised to do the following:
      https://help.vates.tech/kb/en-us/14/103

      We completed this step and the new host now has identical packages to the rest of the pool. We tried to add it again, same error:
      POOL_JOINING_HOST_MUST_HAVE_SAME_DB_SCHEMA(5.602, 5.603)

      We also rebooted the new host (in case this was the issue), but same result.

      How do we update the db schema?

      posted in Xen Orchestra
      mauzilla
    • RE: Socket topology in a pool

      @Greg_E we're running into an issue at the moment where, on one of our hypervisors with 4 sockets, the CPU is approximately 30-50% utilized, but the VMs are battling with CPU usage (or contention, rather). Most VMs run fine, but when the CPU comes into play, it's quite obvious that some of the VMs are dramatically slower.

      I have a mixed bag of configurations (mostly 1-2 sockets x cores), so I am trying to assess whether there is a configuration issue (i.e. whether it would be better to specify 4 sockets x cores than 1-2 sockets if you have a 4-socket system).

      posted in Management
      mauzilla
    • RE: Socket topology in a pool

      So, to simplify further (sorry, the previous reply was from my phone): I should state upfront that although I know we specify the topology / RAM / disk etc. for each VM, when it comes down to RAM and CPU I am a complete novice in how the underlying technology distributes resources to the VMs. I know we can set it, but how it works is a new landscape.

      What I am trying to assess is whether there is a possibility of bad design / rollout. Say my hypervisor has 4 physical CPUs (or sockets, I presume, within XOA) with 10 cores per socket, so the oversimplification is that I have, in theory, 40 cores available for my VMs.

      Say I set up 10 VMs, each with 2 cores; in theory I am consuming at most 20 cores. What I am trying to assess is: if I set up my VMs to use only 1 socket (so my XOA setup is 1 socket x 2 cores), does this setting refer to the actual underlying physical CPU 1, or is this a virtualized topology (so the VM is simply under the impression it has 1 socket)?

      If it is the underlying socket / physical CPU, would this then imply that physical CPUs 2, 3 and 4 would never be utilized, because my VMs are all set up as 1 socket x 2 cores? If, however, my understanding is incorrect and the x sockets x cores setting simply gives the VM a topology of what it thinks it has, what benefit is there for the VM in having different sockets / CPUs if this is purely a virtualized setting?
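From what I've read, the sockets x cores shown in XOA is indeed just a virtual topology presented to the guest; under the hood it maps to xe parameters roughly like this (a sketch; the UUID is a placeholder):

```shell
# Give a halted VM 8 vCPUs, presented to the guest as 2 sockets x 4 cores.
# <vm-uuid> is a placeholder; run on the host (or remotely via xe).
xe vm-param-set uuid=<vm-uuid> VCPUs-max=8 VCPUs-at-startup=8
xe vm-param-set uuid=<vm-uuid> platform:cores-per-socket=4
# The guest sees 2 sockets x 4 cores, but Xen's scheduler is still free
# to run those 8 vCPUs on any physical core of the host.
```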

      posted in Management
      mauzilla
    • RE: Socket topology in a pool

      I think I did not ask the question correctly. To simplify: if I have 4 sockets but set my VMs up to use 2 sockets x cores, does this mean they will utilize sockets 1 and 2 and never 3 and 4? If so, how would my config need to change to use any available socket but x cores?

      I'm trying to assess whether, in certain situations, I am not distributing the load of my VMs correctly.

      posted in Management
      mauzilla
    • Socket topology in a pool

      We're finalizing our pool. We have 2 hosts with 4x CPUs and a last host with 2x CPUs (same CPU range, just fewer sockets).

      This obviously means that if I have VMs with a topology of, say, 4 sockets 2 cores, I will not be able to move those VMs from a 4-socket host to a 2-socket host.

      How does XCP-ng distribute the load? If we change our topology so that all VMs utilize 2 sockets, will only 2 sockets be used (so sockets 3/4 on a 4-socket host would never have any VMs utilizing those CPUs), or will XCP-ng still distribute the load across all CPUs and use the least busy CPU when booting that VM up?
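One way to see this for yourself (a sketch, run on the host's console): unless vCPUs have been explicitly pinned, the affinity column shows that every vCPU may run on any physical CPU, whatever virtual socket count the VM was given.

```shell
# List vCPU placement and affinity for all running VMs on this host.
xl vcpu-list
# The CPU column shows where each vCPU is running right now; an affinity
# of "all" means the scheduler may move it to any pCPU, including those
# on sockets 3 and 4.
```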

      posted in Management
      mauzilla
    • RE: Benchmarks between XCP & TrueNAS

      @andrewperry yes, we replicate TrueNAS to a "standby" TrueNAS using zpool / TrueNAS replication. Our current policy is hourly. We then plan to do incremental replication of all VMs to a standby TrueNAS over weekends, giving us 3 possible methods for recovery.

      Currently we're running into a major drawback with VMs on TrueNAS over NFS (specifically VMs that rely on fast storage, such as databases and recordings). We did not anticipate such a huge drop in performance with VHD files over NFS. We're reaching out to the XCP-ng team for advice, as it could partly come down to tuning we can do on our side.
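For anyone comparing notes: NFS SRs are mounted under /run/sr-mount/&lt;sr-uuid&gt; on the host, and sync-write latency is often the culprit for database-style workloads. A rough probe (paths are examples, run on the host):

```shell
# Measure small-block synchronous write throughput on the NFS SR mount.
# <sr-uuid> is a placeholder for the SR's UUID.
cd /run/sr-mount/<sr-uuid>
dd if=/dev/zero of=ddtest bs=4k count=10000 oflag=direct,sync
rm ddtest
# Compare the rate reported here against the same command on local
# storage; a large gap suggests SLOG / sync-write latency on the NAS.
```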

      posted in Xen Orchestra
      mauzilla
    • RE: Urgent: how to stop a backup task

      @olivierlambert do you have a link where we can read up on chained backups?

      posted in Xen Orchestra
      mauzilla
    • RE: Benchmarks between XCP & TrueNAS

      @andrewperry - we've started our migration from local storage to TrueNAS-based NAS (NFS) in the last couple of weeks, and ironically ran into our first "confirmed" issue on Friday. For most of the transferred VMs we had few "direct" issues, but we can definitely see a major issue with VMs that are reliant on heavy databases (and call recordings).

      We will be investigating options this week and will keep everyone posted. Right now we don't know where the issue is (or what a reasonable benchmark is), but with 4x TrueNAS, 2 pools per TrueNAS with enterprise SSDs, SLOGs for each pool and a dual 10GB fibre connection to Arista, having only 2-3 VMs per pool is giving mixed results.

      We will first try to rule out other hardware, but will update the thread here as we think it will be of value for others, and maybe we'll find a solution. Right now I am of the opinion that it's NFS; I'm hoping we can do some tweaks, as the prospect of just running iSCSI is concerning: thin provisioning is then no longer possible.

      posted in Xen Orchestra
      mauzilla
    • RE: Urgent: how to stop a backup task

      @andrewperry I think health checks are the answer here. We're not backing up VMs but rather doing incremental replication with health checks. If a VM passes its health checks, I cannot see a reason for a full backup unless it becomes a snapshot-chain issue. Olivier might be able to provide better insights here; we're in the process of implementing the above and will keep you posted.

      posted in Xen Orchestra
      mauzilla
    • RE: Incremental Replication Health Testing?

      @olivierlambert sorry yes, I meant replication - Would be a nice little feature 🙂

      Thank you again for everything your team is doing!

      posted in Xen Orchestra
      mauzilla
    • Incremental Replication Health Testing?

      Backups have the ability to do health tests; do you have any plans to incorporate a similar feature for incremental replication? We're opting to do IR instead of backups, as it vastly changes DR timeframes, but to avoid "full backups" after x backups, it would be a great addition if the consistency of IR could also be tested, to ensure you can continue the chain until an issue occurs.

      posted in Xen Orchestra
      mauzilla
    • RE: Replicating bare metal to VM

      @olivierlambert yeah, something similar: we have a prospective client with bare-metal servers looking at creating "DR" VMs with us. I imagine some agent or similar would need to replicate this from within the VM. I'm getting some more info; I was just wondering if anyone else has done something similar.

      posted in Xen Orchestra
      mauzilla
    • RE: Replicating bare metal to VM

      @olivierlambert I could be wrong, but with Clonezilla the partition needs to be offline; we're trying to replicate a production bare-metal server to a VM on an ongoing basis.

      posted in Xen Orchestra
      mauzilla
    • Replicating bare metal to VM

      I imagine this isn't available out of the box, but I'm curious whether anyone has had any luck replicating bare-metal servers to an XCP-ng VM? I imagine this would need to happen from within the VM (it won't be host-to-host).
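One approach we're considering (a sketch only, not something we've validated): an initial full copy plus periodic rsync deltas, run from inside the live bare-metal server into a matching VM.

```shell
# Periodic delta sync from the live bare-metal root into a target VM.
# "dr-vm" is a placeholder hostname for the destination VM; requires
# root SSH access and bash (for the brace expansion below).
rsync -aHAX --delete \
  --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*"} \
  / root@dr-vm:/
# The target still needs its bootloader and fstab fixed once, and a live
# copy of a busy database is only crash-consistent.
```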

      posted in Xen Orchestra
      mauzilla
    • RE: Running XO / XCP's on a "backup" network

      Just bumping my post 🙂 Hoping someone has some recommendations?

      posted in Xen Orchestra
      mauzilla
    • Running XO / XCP's on a "backup" network

      We're replacing our 10GB switch with a redundant 10GB network next weekend. During this maintenance our 10GB network will be off and our normal 1GB network (WAN network) will remain active. Our 10GB network isn't used for any client-facing / WAN traffic; it's only really there to facilitate backups and, of course, access between the XOA appliance and the various XCP-ng hosts (they are all on the same private LAN).

      During the maintenance we still want a "view" of the hosts to confirm they remain active (our teams work separately and will not be at the DC, so the network team wants to ensure we have access to the hosts while the 10GB network is taken out for new cabling etc., which can take a couple of hours).

      Our plan is to set up a temp XOA appliance that will run on the WAN network and add the hosts to it via a designated IP range we will set up. This is where we need some assistance.

      For example:
      1. 10GB LAN (current): 10.1.1.0/24, where each server's management IP is configured on the 10GB interface (say 10.1.1.1) and the XOA appliance is also set up on a 10GB port.
      2. We want to set up a "temp" network on the 1GB side for each host and the appliance, e.g. a 1GB LAN on 10.1.2.0/24, giving each host an additional (non-management) IP on one of its 1GB interfaces, and set up an XOA appliance in the same range: say 10.1.2.1 for the host and 10.1.2.2 for the appliance.

      We have tested this on our testbench and it seems to work. We're able to access the XCP-ng hosts on one of the 1GB NICs instead of the 10GB NIC, using a separate XOA server; our only concern is the "management" part. We don't want to change the management interface of each host to the 1GB network during this period; we effectively just want a "view" to see that hosts are still online and operational. This is a short-term option for a couple of hours, so changing management interfaces seems like overkill, as we will terminate the temp XOA after the maintenance is completed.

      So questions:

      1. It appears I am able to access a host on both switches, on independent subnets (provided the XOA has access to the subnet, either via firewall rules or by being on the same subnet). What is the difference between this and the "management" interface? As the management interface will remain in the example 10.1.1.0/24 range but I am still able to add the server on another subnet, what actions may not work if my access is not via the management interface but via another interface?
      2. The second question relates to the new redundant 10GB network. Currently each host has a single 10GB port on which the management interface is set up. After the maintenance, we need to create an LACP bond that goes to each switch. What is the process for achieving this? We would like the management interface on the bond rather than on a single interface as it is now. I assume we need to create a new network > bond > choose the interfaces and choose LACP, but as the management interface is already set up on one of the interfaces we want to add to the bond, would we even be able to create the bond while that interface is active?

      We have our testbench here and are happy to run some individual tests, but we're hoping the XCP-ng community can assist us with some tips considering the above.
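On question 2, a sketch of the xe side (UUIDs are placeholders): my understanding is that xapi moves the management interface onto the bond automatically when one of the bonded PIFs carries it, but please verify on the testbench before doing this remotely.

```shell
# Create a LACP bond from two physical interfaces on one host.
# <pif-uuid-1>/<pif-uuid-2> are placeholders (see: xe pif-list).
NET=$(xe network-create name-label="bond0 (LACP)")
xe bond-create network-uuid=$NET \
  pif-uuids=<pif-uuid-1>,<pif-uuid-2> mode=lacp
# If one of the bonded PIFs was the management interface, xapi should
# re-plumb management onto the bond; confirm on the testbench first.
```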

      posted in Xen Orchestra
      mauzilla
    • RE: XOSAN, what a time to be alive!

      @nikade thank you, we're on Premium. I see it's indicated as optional on the price matrix, but I cannot see the optional costs anywhere?

      posted in Xen Orchestra
      mauzilla
    • RE: XOSAN, what a time to be alive!

      @tjkreidl I'm beginning to wonder if I have XOSAN and XOSTOR confused? Are these different solutions? Also (we have a Premium subscription), is this included in the XO license cost, or are there other costs involved?

      posted in Xen Orchestra
      mauzilla