Categories

  • All news regarding the Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    TeddyAstie
    @benapetr said in New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client: "@Pilow I know them a little bit, I will have a look, but I am now working on another cool new thing! It's called xen_exporter: https://github.com/benapetr/xen_exporter It's a Prometheus exporter that hooks directly into the Xen kernel via the xenctrl library from dom0 and extracts all the low-level metrics from the host, allowing very detailed graphs at very fine granularity, with stuff I always missed in both XenOrchestra and XenAdmin: ..." We have a similar project, https://github.com/xcp-ng/xcp-metrics, but unfortunately it is not used as of today (though it could get revived as Rust for Xen matures, i.e. becomes easier to build). There is also Xen Orchestra's OpenMetrics support, but that runs in Xen Orchestra, not on XCP-ng itself.
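    The exporter idea above can be sketched in miniature. This is not xen_exporter's actual code, and the metric name, labels, and values below are hypothetical; it only shows how a collector that reads per-domain counters (e.g. via xenctrl in dom0) could render them in the Prometheus exposition format that tools like xen_exporter serve over HTTP.

    ```python
    # Minimal sketch of Prometheus exposition-format output. NOT the actual
    # xen_exporter code: metric names, labels, and values are hypothetical.
    def render_metrics(samples):
        """Render (name, labels, value) samples, one metric per line."""
        lines = []
        for name, labels, value in samples:
            label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
            lines.append(f"{name}{{{label_str}}} {value}")
        return "\n".join(lines) + "\n"

    # Example: per-vCPU time a xenctrl-backed collector might expose.
    samples = [
        ("xen_vcpu_time_seconds", {"domain": "0", "vcpu": "0"}, 123.4),
        ("xen_vcpu_time_seconds", {"domain": "0", "vcpu": "1"}, 98.7),
    ]
    print(render_metrics(samples), end="")
    ```

    A real exporter would serve this text from an HTTP endpoint (conventionally `/metrics`) for Prometheus to scrape at a short interval.
    
    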
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    A
    @semarie This pool is still on 8.2.1; we are trying to add this host in order to upgrade with little to no downtime.
  • 3k Topics
    27k Posts
    J
    @rama said: "@olivierlambert thank you, but is it possible to keep tracking all the CRUD operations, like we have in Terraform? Currently the MCP server only has read tasks. For example, if a new intern in my lab doesn't know the platform, with this agentic framework they could create, update, or delete VMs very quickly; it would save many hours. I hope this will be available in the future, or if you plan to do it, tell me how far along it is." The MCP Server plugin is read-only by design to keep using it safe; having one MCP server for reading and another for writing is best practice. If you would like a separate MCP server for write actions, feel free to suggest it on the feedback portal. You can even develop your own MCP server that makes calls to the write side of the XO REST API. https://modelcontextprotocol.io/
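    To illustrate the suggested "own write-side MCP server" route: the sketch below only builds (and does not send) an authenticated request against the XO REST API. The endpoint path, the VM UUID, and the exact payload conventions here are assumptions for illustration, not the documented XO API; check the XO REST API docs before relying on any of them.

    ```python
    import urllib.request

    # Sketch of how a hypothetical write-side MCP tool could target the XO
    # REST API. The endpoint path and auth-cookie name are assumptions for
    # illustration only; verify them against the real XO documentation.
    def build_vm_delete_request(base_url, vm_uuid, token):
        """Build (but do not send) an authenticated DELETE request for a VM."""
        return urllib.request.Request(
            url=f"{base_url}/rest/v0/vms/{vm_uuid}",
            method="DELETE",
            headers={"Cookie": f"authenticationToken={token}"},
        )

    req = build_vm_delete_request("https://xo.example", "1234-abcd", "secret")
    print(req.get_method(), req.full_url)
    ```

    Keeping destructive verbs like this in a separate server (with its own credentials) is exactly the read/write split the reply recommends.
    
    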
  • Our hyperconverged storage solution

    42 Topics
    719 Posts
    P
    @Greg_E hi there. Beyond the minimum number of hosts to be supported (I think it's 3) and the minimum number of disks for good redundancy (I think it's 3 per host, and they must be identical), you have a replication parameter when building an XOSTOR. [image: 1772477657867-b8a05f84-bd06-40a5-bf11-ca8ac16f9f01-image.png] It defaults to two (you have two copies of each workload), and this parameter can impact your total usable space. Also beware of the network requirements (for satellite connections and DRBD replication): a minimum of 2 NICs per server, and DRBD replication should use at least 10 Gbps NICs. Tip: the linstor-controller is not always the pool master. Go here: https://docs.xcp-ng.org/xostor/#prerequisites
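    The impact of the replication parameter on usable space can be shown with back-of-the-envelope math: with a replication factor of r, each logical byte is stored r times, so usable space is roughly raw space divided by r (ignoring filesystem and DRBD metadata overhead). The host/disk sizes below are just an example.

    ```python
    # Rough usable-capacity estimate for a replicated store like XOSTOR.
    # Ignores metadata and filesystem overhead; numbers are illustrative.
    def usable_capacity_gib(hosts, disks_per_host, disk_gib, replication=2):
        raw = hosts * disks_per_host * disk_gib
        return raw / replication

    # 3 hosts x 3 identical 1000 GiB disks at replication 2:
    # 9000 GiB raw -> ~4500 GiB usable.
    print(usable_capacity_gib(3, 3, 1000, replication=2))
    ```

    Raising replication to 3 on the same hardware would drop the usable figure to ~3000 GiB, which is why the parameter deserves attention at build time.
    
    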
  • 32 Topics
    94 Posts
    olivierlambert
    It's hard to talk about "reality" with benchmarks. Also check the iodepth (which depends on the kind of hardware you have; on flash/NVMe you can go up to 128 or 256), and latency comes into play too, of course. The main bottleneck is tapdisk's single-threading: if you test across several different VMs, the aggregate will scale fairly steadily.
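    The scaling behaviour described above can be sketched with a toy model. All the numbers here are made up for illustration: a per-VM cap stands in for the single-threaded tapdisk, and a backend cap stands in for the storage itself saturating.

    ```python
    # Illustrative model (all numbers made up): each VM's disk throughput is
    # capped by its single-threaded tapdisk, so the aggregate across VMs
    # grows roughly linearly until the backend storage saturates.
    def aggregate_iops(vms, per_vm_cap_iops, backend_cap_iops):
        return min(vms * per_vm_cap_iops, backend_cap_iops)

    print(aggregate_iops(1, 20_000, 150_000))   # one VM: tapdisk-bound
    print(aggregate_iops(4, 20_000, 150_000))   # four VMs: ~linear scaling
    print(aggregate_iops(10, 20_000, 150_000))  # many VMs: backend-bound
    ```

    This is why a single-VM fio run understates what the pool can deliver, while per-VM numbers barely move as you add more VMs until the backend limit is reached.
    
    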