Subcategories

  • All Xen related stuff

    572 Topics
    6k Posts
    @olivierlambert Thanks for the quick response, Olivier. We will update both hosts soon and test the export/import again. We'll keep you informed.
  • The integrated web UI to manage XCP-ng

    19 Topics
    271 Posts
    lsouai-vates
    @olivierlambert can you close this thread?
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    93 Topics
    1k Posts
    @iLix The sizing of the VM in XCP-ng was larger than on the source
  • Hardware related section

    118 Topics
    1k Posts
    @gb.123 In XO, on the "Advanced" tab for the VM, I added the GPU devices by first adding them both as "Attached PCIs" near the bottom of the page. I also disabled the "VGA" option under "Xen Settings", then clicked the "+" next to "GPUs" and added the vGPU type "passthrough () 0x0", which was available in the drop-down list. I don't know if it matters or not, but I also set the "Static Max", "Dynamic Min", and "Dynamic Max" memory limits under "VM limits" to the total RAM size I allocated to the VM.
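    For anyone who prefers the command line, here is a rough xe equivalent of those XO steps. This is a sketch only: the VM name, PCI addresses and memory size are placeholders, and the XO "Attached PCIs" field corresponds, as far as I know, to the other-config:pci key used below.
      # UUID of the VM (placeholder name)
      xe vm-list name-label=gpu-vm params=uuid --minimal
      # attach both GPU functions by their PCI addresses (placeholder addresses)
      xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:01:00.0,0/0000:01:00.1
      # pin all memory limits to the same value, as described above (placeholder size)
      xe vm-memory-limits-set uuid=<vm-uuid> static-min=16GiB static-max=16GiB dynamic-min=16GiB dynamic-max=16GiB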
  • The place to discuss new additions into XCP-ng

    240 Topics
    3k Posts
    olivierlambert
  • VM Graceful shutdown using apc network shutdown

    0 Votes
    6 Posts
    951 Views
    @olivierlambert I have been trying to get graceful power off to work with my NAS, which is TrueNAS by the way, but I haven't gotten it to work yet. Oddly, I can't seem to get them talking yet.
  • 0 Votes
    10 Posts
    717 Views
    @Danp said in "Patching and trying to Pool Hosts after they've been in production": "Warm migration should work in this case because the VM is halted then restarted as part of the process. See here for more details." Sweet, I'll set up something small on the old host for testing and use the Warm Migration process.
  • Can not recover /dev/xvda2

    0 Votes
    4 Posts
    631 Views
    olivierlambert
    There's no issue restoring everything from scratch, as long as your backup repo (BR/remote) is available. For example: do a fresh XCP-ng install, deploy XO, connect to the BR, and it will find all your previous backups. Then restore, that's it!
  • Largest Stack?

    0 Votes
    10 Posts
    833 Views
    Now we have 76 VMs running on a 3-host pool. Each server has 320 GB of RAM. Our scenario doesn't need big CPU resources, so everything works fine.
  • 0 Votes
    11 Posts
    1k Views
    RAG67958472
    Anyone have any other ideas? I seem to be lost as to what to do.
  • VM Templates: does choosing the correct one matter?

    0 Votes
    4 Posts
    745 Views
    planedrop
    @wilsonqanda Yes, BIOS as well, assuming I am remembering right, haha. If you just pick whatever is closest and go with it, I doubt you'll run into issues; if you do, just make a post here and I'm sure someone will be willing to help out or work on a new template or something.
  • 1 Vote
    6 Posts
    761 Views
    @wilsonqanda Downgrading the EDK2 package fixes it for now, as posted: yum downgrade edk2-20180522git4b8552d-1.5.1.xcpng8.3
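    In case a later yum update pulls the newer EDK2 build straight back in, the downgraded package can be held with a plain yum exclude until a fixed build is released. This is a generic yum technique, not an official XCP-ng recommendation; remove the exclude once a fixed edk2 package lands.
      yum downgrade edk2-20180522git4b8552d-1.5.1.xcpng8.3
      # temporarily stop updates from reinstalling the broken build
      echo "exclude=edk2*" >> /etc/yum.conf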
  • Add kernel boot params for dom0

    0 Votes
    4 Posts
    2k Views
    stormi
    The grub setup is rather simple and not very flexible. There's just one file to modify, as you found out (/etc/grub.cfg in BIOS mode, /etc/grub-efi.cfg in EFI mode, both being symbolic links to the actual file location). You can add an entry to it, but there's a small chance this doesn't play well with scripts from either XenServer or ourselves which may want to update the file and get confused. It's usually better to just modify the existing entries, ideally using /opt/xensource/libexec/xen-cmdline, so that the file structure remains unchanged.
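    To illustrate that suggestion, here is a minimal sketch of the xen-cmdline helper; the parameter shown is only a placeholder, substitute whatever dom0 option you actually need.
      # show the current extra parameters on the dom0 kernel line
      /opt/xensource/libexec/xen-cmdline --get-dom0
      # add or change a dom0 boot parameter in the grub config (placeholder key=value)
      /opt/xensource/libexec/xen-cmdline --set-dom0 some.module_param=value
      # the Xen hypervisor line can be edited the same way with --get-xen / --set-xen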
  • Menu Migrate to server missing

    Moved
    0 Votes
    4 Posts
    305 Views
    olivierlambert
    Ah that makes sense then
  • Migration woes - SR_BACKEND_FAILURE_78

    0 Votes
    13 Posts
    1k Views
    olivierlambert
    You need a fully up-to-date XO (check that first). If you are on XOA Free, you might need a trial (send me a private chat message with the email registered to this XOA, so I can unlock the free trial for you). Alternatively, you can also use XO from the sources.
  • 0 Votes
    7 Posts
    799 Views
    Just updating for anyone that has the same issue: we ended up just rebooting the master and, as @JamuelStarkey said, everything automatically fell into place. We did have to exit maintenance mode on the master and replug the PBD, but everything else went back to normal immediately. Still frustrating to experience, and I would really love to know what caused this. If there are any logs I can pull to figure this out, do let me know @olivierlambert
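    For reference, here is a rough xe version of the "exit maintenance mode and replug the PBD" steps mentioned above. It is a sketch with placeholder UUIDs, run from the pool master.
      # maintenance mode is a disabled host; re-enable the master
      xe host-enable uuid=<master-host-uuid>
      # find the PBD linking the SR to that host, then plug it back in
      xe pbd-list sr-uuid=<sr-uuid> host-uuid=<master-host-uuid> params=uuid,currently-attached
      xe pbd-plug uuid=<pbd-uuid>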
  • XCP-ng Center 20.04.01 - console no "remote desktop RDP" on Windows 11 VM

    Solved
    0 Votes
    15 Posts
    2k Views
    olivierlambert
    Perfect!
  • Multiple PCI-E passthrough problem

    0 Votes
    3 Posts
    358 Views
    kasn_code
    So I managed to get the VM to see both drives. In dmesg I can see something like this:
      [ 1.160149] ahci 0000:00:08.0: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x20 impl SATA mode
      [ 1.161043] ahci 0000:00:08.0: flags: 64bit ncq sntf ilck led clo only pmp fbs pio slum part
      [ 1.187888] ahci 0000:00:09.0: SSS flag set, parallel bus scan disabled
      [ 2.186211] ahci 0000:00:09.0: controller reset failed (0xffffffff)
      [ 2.187307] ahci: probe of 0000:00:09.0 failed with error -5
    So I removed the PCI device and did a rescan with:
      root@mars-test-uefi:~# echo "1" > /sys/bus/pci/devices/0000\:00\:09.0/remove
      root@mars-test-uefi:~# echo "1" > /sys/bus/pci/rescan
    Now when I check dmesg it shows up properly:
      [ 795.538214] ahci 0000:00:09.0: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x4 impl SATA mode
      [ 795.538219] ahci 0000:00:09.0: flags: 64bit ncq sntf ilck led clo only pmp fbs pio slum part
    fdisk -l also shows both drives now:
      root@mars-test-uefi:~# fdisk -l
      Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk model: ST8000DM004-2U91
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk /dev/sdb: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk model: WDC WD80EFZZ-68B
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    It seems like on boot it tries to find the device but fails, yet after a remove/rescan it's fine. The question now becomes: how do I make this work on boot?
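    One possible way to automate that remove/rescan at boot is a oneshot systemd unit inside the guest. This is only a sketch, not something confirmed in the thread: the unit name is made up, the PCI address 0000:00:09.0 comes from the dmesg output above, and a udev rule or a simple boot delay might work just as well.
      # /etc/systemd/system/ahci-reprobe.service  (hypothetical unit name)
      [Unit]
      Description=Re-probe the passed-through AHCI controller that fails its first scan
      After=multi-user.target

      [Service]
      Type=oneshot
      # same remove + rescan the post did by hand, with a short pause in between
      ExecStart=/bin/sh -c 'echo 1 > /sys/bus/pci/devices/0000:00:09.0/remove; sleep 2; echo 1 > /sys/bus/pci/rescan'

      [Install]
      WantedBy=multi-user.target
    Then enable it inside the guest:
      systemctl daemon-reload
      systemctl enable --now ahci-reprobe.service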
  • Upgrade from 8.1 to 8.2

    Solved
    0 Votes
    12 Posts
    657 Views
    olivierlambert
    Thanks a lot for your feedback!
  • Ubuntu desktop issue

    0 Votes
    1 Post
    175 Views
    No one has replied
  • Host dbsync failed and XAPI restarts every time after patch installation

    0 Votes
    4 Posts
    216 Views
    olivierlambert
    Really, you should, because it would have prevented this situation in the first place, as well as some other future questions or problems you might have. That's one of the few capital rules in the XCP-ng world: your master must ALWAYS be more recent than the slaves (or at the same level, obviously), otherwise slaves won't be able to connect. I think you really need some assistance on best practices and such; please contact us so we can assist more generally. If this basic requirement is not understood, you might have other/deeper issues.
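    A quick way to sanity-check that rule from the command line (generic xe/yum commands, not an official procedure): compare every host's version and always patch the master first.
      # UUID of the current pool master
      xe pool-list params=master --minimal
      # product version and build number of every host in the pool
      xe host-list params=name-label,software-version
      # run 'yum update' on the master first (reboot if required), then on each slave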
  • 0 Votes
    8 Posts
    2k Views
    olivierlambert
    Any of the hosts: if they are in the same pool, that's logical. Only the master needs to be reachable. XAPI will probably be in a "Starting" state as long as all SRs aren't plugged. If you have the SR on a VM on a host other than the master, reboot the master only; you should be able to connect sooner. Alternatively, check https://docs.xcp-ng.org/troubleshooting/