XCP-ng
Popular topics

• Does dom0 require a GPU? (Hardware · 0 votes · 5 posts · 73 views)

  J: Hey @Andrew, thanks very much for looking that up and providing the suggestions! Re. option 2: is it possible to just hide the T600 from dom0 and remove the R9 270 from the machine? Won't dom0 complain that it's missing a GPU, or take control of the only GPU in the system?
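For the "hide it from dom0" part, a minimal sketch of the usual XCP-ng approach: pass the GPU's PCI address to `xen-pciback.hide` on the dom0 command line. The address `0000:01:00.0` below is a placeholder; look yours up with `lspci` first.

```shell
# Find the T600's PCI address (the address shown below is a placeholder)
lspci -D | grep -i nvidia

# Hide that device from dom0 at boot, then reboot the host for the
# change to take effect. 0000:01:00.0 is a placeholder address.
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:01:00.0)"
```

This only covers the hiding step; whether dom0 is happy running fully headless is the open question in the thread.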
• botched pool patching and now we can't change pool master (XCP-ng · 0 votes · 8 posts · 141 views)

  R: @Danp To answer your earlier question about the state of the patched hosts: three hosts are currently fully patched, one of them being the current pool master. I've tried to promote both of the other two and got the same "Cannot restore" error. For the sake of completeness I also tried promoting one of the non-patched hosts: same error. And I'm still unable to deploy a new VM ("NOT_SUPPORTED_DURING_UPGRADE").
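For reference, the usual way to switch masters when a pool is healthy is the standard `xe` call below (a sketch; `<host-uuid>` is a placeholder, run on the current master):

```shell
# List hosts to find the candidate's UUID
xe host-list params=uuid,name-label

# Designate a new pool master; <host-uuid> is a placeholder
xe pool-designate-new-master host-uuid=<host-uuid>
```

The "NOT_SUPPORTED_DURING_UPGRADE" error suggests the pool still considers a rolling upgrade to be in progress, in which case operations like this are typically refused until every host has been patched.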
• Are there any commands that allow me to verify the integrity of the backup files? (Backup · 1 vote · 12 posts · 475 views)

  cbaguzman: @florent, I tried running "vhd-cli raw alias.vhd /dev/null". I was reading that vhd-cli raw copies the entire VM disk from the backup to /dev/null, reading the incremental backup file and its parent. But something seems off here. The VM disk in the backup has 133 GB of block files under /data, yet when I ran the tool it took 7 seconds. The backup is on a USB hard drive (HDD), so I suspect the read speed is no higher than 30-60 MB/s; just reading all the blocks should take at least 40 minutes. Did I misunderstand how vhd-cli raw works? Thank you for your attention.
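The poster's timing math holds up. A quick back-of-the-envelope check, using the figures from the post (133 GB at an assumed 60 MB/s, integer arithmetic with 1 GB = 1024 MB):

```shell
# Minimum seconds needed to read SIZE_GB of data at SPEED_MBPS
min_read_seconds() {
  echo $(( $1 * 1024 / $2 ))
}

min_read_seconds 133 60   # prints 2269, i.e. roughly 38 minutes
```

A 7-second run therefore cannot have streamed all 133 GB, which supports the suspicion that the tool read far less than the full data set.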
• Backup and replication - Functioning/Scale (Backup · 0 votes · 13 posts · 185 views)

  florent: @fcgo If the storage is shared, the export is done by one of the hosts of the pool; if it is not, the export is done by the host attached to the storage. The same goes for the host receiving the data. The command channel, as you said, always runs from the master to the XOA (and possibly an xo-proxy). So for a replication:

  [source SR] => source host =(HTTPS export call)=> XOA / xo-proxy =(HTTPS import call)=> target host => [target SR]

  If the XOA is running on the host doing the export, it does not use the physical network. The network used between a host and its SR depends on the storage type.
• Backup as .ova to remote NFS (Backup · 0 votes · 2 posts · 2 views)

  Danp: The REST API supports exporting a VM in OVA format.
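If the OVA route follows the pattern of the existing XVA export, the request might look like the sketch below. The host name, token, VM UUID, and the `.ova` suffix are all assumptions to verify against the Xen Orchestra REST API documentation.

```shell
# Hypothetical example: download an OVA export of a VM via the XO REST API.
# <token> and <vm-uuid> are placeholders; xoa.example.lan is a made-up host.
curl -b "authenticationToken=<token>" \
  "https://xoa.example.lan/rest/v0/vms/<vm-uuid>.ova" \
  -o vm-export.ova
```

The resulting file could then be copied to the NFS remote, or the command run from a machine that already mounts it.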
• XCP-ng 8.3 and Dell R660 - crash during boot, halts remainder of installer process (bnxt_en?) (Hardware · 0 votes · 37 posts · 3k views)

  olivierlambert: Yes, but why does it work with some and not others? I suspect the firmware version in the NIC itself.
• backup mail report says INTERRUPTED but it's not ? (Backup · 0 votes · 17 posts · 362 views)

  P: @Bastien-Nollet Okay, I'll do that tonight and will report back.
• "NOT_SUPPORTED_DURING_UPGRADE()" error after yesterday's update (Backup · 0 votes · 20 posts · 467 views)

  S: As an observation, I'd like to draw attention to @majorp93's point about rebooting the servers only after ALL nodes have been upgraded. Historically we would move all VMs off the master, upgrade the master, restart its toolstack, reboot the master, then move VMs from node 1 to the master so we could begin upgrading node 1. That normally works, but last time around it caused all sorts of problems. Previously it had felt right to upgrade the master in its entirety, including the reboot, before moving on to the next host, then rinse and repeat; but this cost us a lot of time, corruption, and pain.

  TL;DR: Perhaps add a footnote to the docs that when upgrading a pool, the reboots should take place as a final step across the pool, only after all nodes have been updated.
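The reboot-last flow described above could be sketched roughly as follows. The host names `master`, `node1`, `node2` are hypothetical, patching is shown as a plain `yum update` (adjust to your update method), and the `xe host-list hostname=` filter is an assumption to verify on your version.

```shell
# 1. Patch every host first, master included; restart only the toolstack.
for h in master node1 node2; do
  ssh "$h" 'yum update -y && xe-toolstack-restart'
done

# 2. Only once the whole pool is patched, evacuate and reboot each host,
#    finishing with the master.
for h in node2 node1 master; do
  uuid=$(ssh master "xe host-list hostname=$h --minimal")
  ssh master "xe host-evacuate uuid=$uuid"
  ssh "$h" reboot
done
```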