XCP-ng

    benapetr

    @benapetr


    Best posts made by benapetr

    • New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      Hello,

      I know some people here will absolutely hate the idea of reviving that old-school thick client (yes, I am talking about XenAdmin), but there are also some of us who just love it and can't get used to those web-based UIs 😄

      Despite running Xen Orchestra for many things almost everywhere (it's really handy), I still prefer doing many things in the original thick client - especially setting up new pools and clusters, low-level stuff like HA configuration, and so on (also - deploying that first VM with Xen Orchestra - hah).

      I already have experience rewriting C# apps in Qt (I have many years of experience with both languages), and I had this idea in my head for a long time, but I never felt like I could take on such a challenge all by myself - the original XenAdmin is about 1715 files (files, not lines) of code. But then AI happened, and it turns out it's really great at translating one language to another.

      But fret not, this is not some vibe-coded AI slop. I was just using AI (for months) to help convert large pieces of code, then gave everything a manual review and corrected most of the issues, and I plan to keep doing that until everything works completely.

      The project lives here for now: https://github.com/benapetr/XenAdminQt

      It's using exactly the same license as the original XenAdmin. I also took the liberty of reusing the icons, as I am terrible with graphics. If you have any problem with that (especially the rocket logo), let me know and I will have AI generate some slop logo instead - but I really would like to expand the xcp-ng open source world with this 🙂

      Here is a screenshot from Debian:

      [screenshot of XenAdminQt running on Debian]

      (Now that I am looking at it, there are some visual artefacts in that bottom toolbar - but that's just because dark mode is active; in light mode it looks fine :P)

      The client already works great on both macOS and Linux (I use both platforms extensively - I don't do much Windows TBH). If you don't have Windows and want to give it a try, you can compile it easily: assuming you have Qt installed, just open the .pro file, compile it and run it. No tricks needed - it's very easy and straightforward, because I purposefully decided not to add any 3rd party dependencies besides Qt itself, to keep it easy to port anywhere.

      In fact, thanks to WASM this thing could probably even run in a browser, but the networking stack would need some overhaul for that to work.

      Just keep in mind this is alpha - run it at your own risk. I am only using it on my lab clusters myself, but it hasn't broken anything so far 🙂

      posted in News
      benapetr
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      I just released 0.0.4! Still an alpha, but it's becoming pretty usable - there were over 60 commits of code cleanup, polish and also limited testing of all visible features. Pretty much everything that is now visible in the UI should be operational at this point: options pages, properties of hosts, pools and VDIs, configuration of NICs, even bonding and advanced stuff like pool password / secret rotation. All XAPI actions and commands were already ported over from the C# variant, so my focus now is only on finishing it into a final usable product.

      We also have some features that original client doesn't have (such as XenCache explorer)!

      Next on my to-do list is to add all the features that are still completely missing: HA, GPU, PCIe, VM import / export etc.

      Note: I will not be porting over any licensed or proprietary Citrix stuff. This tool may work with XenServer just fine, but I will not be porting over any of the proprietary features, because I will never use them and I have no way to test them either.

      posted in News
      benapetr
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      Version 0.0.5 alpha was just released.

      The tree view got a major fix - it's now almost fully on par with the C# version; all logic, missing icons and context menus were fixed.

      This is rather a quality-of-life release; feature-wise only GPU support was added, the rest were bug fixes - but a lot of them. XenAdminQt is now so stable that I am even using it on my production servers. It already feels the same as (and even better in some aspects than) the C# version.

      I also added a minor new feature - all table views now support export to CSV via the context menu (to the clipboard), very handy if you need to export data from various views.

      posted in News
      benapetr
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      Here are some more screenshots from Debian:

      [six more screenshots of XenAdminQt on Debian]

      posted in News
      benapetr
    • Native Ceph RBD SM driver for XCP-ng

      Hello,

      I have been using CEPH for a very long time - I started back in the days of the old Citrix XenServer (version 6, I think?).

      I used RBD for a long time in a hacky way via LVM with a shared attribute, which had major issues in the latest xcp-ng, then I migrated to CephFS (which is relatively stable, but has its own issues stemming from the nature of CephFS - reduced performance, dependency on the MDS etc.).

      I finally decided to move outside of my comfort zone, write my own SM driver for actual RBD, and after some testing move it to my production cluster. I know there is already another GitHub project written by someone else, but it has been unmaintained for many years and exists in various versions. I didn't want to bother trying to understand how that one is implemented - I already know how to use RBD with CEPH, I just needed to put it into an SM driver. So I made my own.

      I will see how it goes; there are already drawbacks to "native" RBD compared to CephFS - while IO performance is superior, the "meta" performance (creating disks, snapshots, deleting disks, rescanning the SR) is actually slower, because it relies on native CEPH APIs rather than the very fast low-level "file access" of CephFS. But I still think it could be a valuable addition for people who need raw IO performance.

      My driver is fully open source and available here - I currently target XCP-ng 8.2 and SMAPIv2, because that's what I run on the production cluster I am primarily making this for. Eventually I will try to test it with XCP-ng 8.3, and when SMAPIv3 is finally ready I might port it there as well.

      Here is the code: https://github.com/benapetr/CephRBDSR/blob/master/CephRBDSR.py

      There is also an installation script that makes installing the SM driver pretty simple; it may need a tweak, as I am not sure manually adding SM modules to /etc/xapi.conf is really a good idea 🙂

      Please note it's a work in progress - it's unfinished and some features probably don't work yet.

      What is already tested and works:

      • SR creation, unplug, plug, rescan, stat
      • Basic RBD / VDI manipulation - creating VDIs, deleting VDIs, opening / mapping VDIs, copying VDIs

      It really just manages RBD block devices for you and uses aio to map VDIs to them.
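Conceptually that means the driver is a thin wrapper around the rbd CLI. A minimal sketch of the kind of command construction involved (the function names and the `VHD-<uuid>` naming are my illustration, not the actual CephRBDSR code - only the `rbd create` and `rbd map` subcommands themselves are real):

```python
def rbd_cmd(action, pool, image, *extra):
    """Build an rbd CLI invocation for a VDI-backing image."""
    return ["rbd", action, f"{pool}/{image}", *extra]

def create_vdi_cmd(pool, vdi_uuid, size_mib):
    # 'rbd create' takes the image size via --size (MiB by default)
    return rbd_cmd("create", pool, f"VHD-{vdi_uuid}", "--size", str(size_mib))

def map_vdi_cmd(pool, vdi_uuid):
    # 'rbd map' exposes the image as a /dev/rbdX block device on the host
    return rbd_cmd("map", pool, f"VHD-{vdi_uuid}")
```

The SR driver would then hand the mapped block device to the tapdisk/aio layer, as the post describes.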

      What is not tested yet:

      • Snapshots
      • Cluster behaviour

      I only recommend it for use on dev xcp-ng environments at this moment. I think within a month I might have a fully tested version 1.0.

      Feedback and suggestions welcome!

      posted in Development
      benapetr
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      I just released 0.0.3: https://github.com/benapetr/XenAdminQt/releases/tag/v0.0.3-alpha It brings it even closer to the original client, with packages for macOS, Debian 12, Debian 13, Ubuntu 22, Ubuntu 24, Fedora 43 and Windows.

      posted in News
      benapetr
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      Anyway, I release most of my personal tools for XCP-ng as open source, so if I ever find a need for anything like rvtools and start such a project (if nobody else does first), I would publish it as open source too.

      posted in News
      benapetr
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      I just released 0.0.2 https://github.com/benapetr/XenAdminQt/releases/tag/v0.0.2-alpha

      The first version was really just a proof-of-concept demonstration; this second version is already pretty usable. It can handle all the basic stuff, including provisioning of new VMs, VM control (start / stop / suspend / pause), force actions, parallel connections to multiple clusters, etc.

      Status of what is tested and works, and what doesn't:

      # Needs work
      * Menu items
      * Tree view - should show Virtual Disks, Networks in objects view
      * Pool HA tab missing
      * Actions and commands, see actions todo
      * Performance tab
      * Search tab has unfinished options panel
      * VM import / export
      * Folder and tag views
      * Network tab (host) - needs finish and test, especially wizards and properties
      * New pool wizard
      * New storage wizard
      * New VM wizard
      * VM deleting
      * HA tab
      * NIC tab - bonding
      * Clone VM
      * Create template from VM
      * Create VM from template
      
      # Needs polish
      * General tab - shows data, but access to the data is weird (should use native XenObjects and their properties instead of scraping QVariantMaps); overall layout is also not good
      * Memory tabs - they already work, but could look better
      * Console - it works most of the time, but there are random scaling issues during boot, RDP not supported
      * UI - menus and toolbar buttons sometimes don't refresh on events (unpause -> still shows force shutdown)
      
      # Needs testing
      * VM disk resize
      * VM disk move
      * VM live migration
      * VM cross pool migration
      * Properties of Hosts, VMs and Pools
      * VM deleting
      * Options
      * Maintenance mode
      
      # Finished and tested
      * Add server
      * Connection to pool - redirect from slave to master
      * Connection persistence
      * Basic VM controls (start / reboot / shutdown)
      * Pause / Unpause
      * Suspend / Resume
      * Snapshots
      * VM disk management (create / delete / attach / detach)
      * SR attach / detach / repair stack
      * CD management
      * Grouping and search filtering (clicking hosts / VMs in tree view top level)
      * Tree view (infrastructure / objects)
      * Events history
      * Network tab (VM)
      * Host command -> Restart toolstack
      
      posted in News
      benapetr
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      I made a first "demonstrator" preview release for macOS; I will upload .deb and .rpm packages as well: https://github.com/benapetr/XenAdminQt/releases/tag/v0.0.1-alpha

      posted in News
      benapetr
    • RE: Native Ceph RBD SM driver for XCP-ng

      Well, so I found it. The culprit is not even the SM framework itself, but rather the XAPI implementation. The problem is here:

      https://github.com/xapi-project/xen-api/blob/master/ocaml/xapi/xapi_vm_snapshot.ml#L250

      This logic is hard-coded into XAPI - when you revert to a snapshot, it:

      • First deletes the VDI image(s) of the VM (which are the "root" of the entire snapshot hierarchy - deleting such an image is actually illegal in CEPH)
      • Then it creates new VDIs from the snapshot
      • Then it modifies the VM and rewrites all VDI references to newly created clones from the snapshot

      This is fundamentally incompatible with the native CEPH snapshot logic because in CEPH:

      • You can create any number of snapshots of an RBD image - but that makes it illegal to delete the RBD image as long as any snapshot exists. CEPH uses layered CoW for this; however, snapshots are always read-only (which is actually fine in the Xen world, it seems).
        • Creation of snapshot is very fast
        • Deletion of snapshot is also very fast
        • Rollback of a snapshot is somehow also very fast (I don't know how CEPH does this, though)
      • You can create a clone (a new RBD image) from a snapshot - that creates a parent reference to the snapshot, i.e. the snapshot can't be deleted until you make the new RBD image independent of it via a flatten operation (IO-heavy and very slow).

      In simple terms, when:

      • You create image A
      • You create snapshot S (A -> S)

      You can very easily (cheap IO) drop S or revert A to S. However if you do what Xen does:

      • You create image A
      • You create snapshot S (A -> S)
      • You want to revert to S - Xen clones S to B (A -> S -> B) and replaces the VDI ref in the VM from A to B

      Now it's really hard for CEPH to clean up both A and S, as long as B depends on both of them in the CoW hierarchy. Making B independent is IO-heavy.
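The dependency chain above can be sketched as a tiny model of CEPH's deletion rules (a simplification I wrote for illustration, not real librbd semantics):

```python
class RbdImage:
    def __init__(self, name, parent_snap=None):
        self.name = name
        self.parent_snap = parent_snap  # snapshot this image was cloned from
        self.snapshots = []             # snapshots taken of this image

    def snapshot(self, snap_name):
        snap = (self, snap_name)
        self.snapshots.append(snap)
        return snap

    def can_delete(self):
        # CEPH refuses to delete an image that still has snapshots
        return not self.snapshots

    def flatten(self):
        # copies all parent data into this image (IO heavy), detaching it
        self.parent_snap = None

def can_delete_snapshot(snap, images):
    # a snapshot is pinned while any clone still references it as parent
    return all(img.parent_snap is not snap for img in images)

# XAPI's revert: clone B from S, leaving A and S behind
A = RbdImage("A")
S = A.snapshot("S")
B = RbdImage("B", parent_snap=S)

assert not A.can_delete()                  # A still has snapshot S
assert not can_delete_snapshot(S, [A, B])  # S is pinned by clone B
B.flatten()                                # expensive CoW resolution
assert can_delete_snapshot(S, [A, B])      # only now can S (and then A) go
```

The asserts trace exactly the problem described: nothing upstream of B can be freed until the expensive flatten has run.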

      What I can do as a nasty workaround is hide the VDI for A, and when the user decides to delete S, hide S as well and schedule a flatten of B as a background GC cleanup job (I need to investigate my options here), which after finishing would wipe S and subsequently A (if it was the last remaining snapshot of it).

      That would work, but it would still be an awfully inefficient software emulation of CoW, completely ignoring that we can get real CoW from CEPH that is actually efficient and IO-cheap (because it happens at the storage-array level).

      Now I perfectly understand why nobody ever managed to deliver native RBD support for Xen - the XAPI design makes it near-impossible. No wonder we ended up with weird (and also inefficient) hacks like an LVM pool on top of a single RBD, or CephFS.

      posted in Development
      benapetr

    Latest posts made by benapetr

    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      @Pilow I know them a little bit; I will have a look, but I am now working on another cool new thing! It's called xen_exporter: https://github.com/benapetr/xen_exporter

      It's a Prometheus exporter that hooks directly into the Xen kernel via the xenctrl library from dom0 and extracts all the low-level metrics from the host, allowing very detailed graphs at very fine granularity, with stuff I always missed in both Xen Orchestra and XenAdmin:

      Detailed information about the host: per-core CPU utilization, load, average load, current frequency, P-state, C-state, number of active Xen domains, memory utilization etc. Here is an example from a host with 80 logical CPUs (an older 2-socket ProLiant G9 I have in my lab):

      # curl localhost:9120/metrics
      # HELP xen_domain_cpu_seconds_total Total domain CPU time in seconds from libxenctrl domain info.
      # TYPE xen_domain_cpu_seconds_total counter
      xen_domain_cpu_seconds_total{domid="0",uuid="5ac0df60-1089-4c07-8095-42a693bc7150"} 19356.072263010999
      # HELP xen_domain_online_vcpus Online vCPUs for domain from libxenctrl.
      # TYPE xen_domain_online_vcpus gauge
      xen_domain_online_vcpus{domid="0",uuid="5ac0df60-1089-4c07-8095-42a693bc7150"} 16
      # HELP xen_domain_runnable_vcpus Runnable vCPUs for domain (online and not blocked), aligned with xcp-rrdd-cpu hostload counting.
      # TYPE xen_domain_runnable_vcpus gauge
      xen_domain_runnable_vcpus{domid="0",uuid="5ac0df60-1089-4c07-8095-42a693bc7150"} 1
      # HELP xen_domain_cpu_usage_ratio Domain CPU usage ratio derived from libxenctrl cpu_time; semantics align with xcp-rrdd-cpu cpu_usage.
      # TYPE xen_domain_cpu_usage_ratio gauge
      xen_domain_cpu_usage_ratio{domid="0",uuid="5ac0df60-1089-4c07-8095-42a693bc7150"} 0.0094380522656288216
      xen_domain_cpu_seconds_total{domid="1",uuid="52c4640b-a257-1db5-e587-233a7c9873d9"} 3418.0258182819998
      xen_domain_online_vcpus{domid="1",uuid="52c4640b-a257-1db5-e587-233a7c9873d9"} 4
      xen_domain_runnable_vcpus{domid="1",uuid="52c4640b-a257-1db5-e587-233a7c9873d9"} 0
      xen_domain_cpu_usage_ratio{domid="1",uuid="52c4640b-a257-1db5-e587-233a7c9873d9"} 0.012024372432675305
      # HELP xen_host_cpu_usage_ratio Physical CPU usage ratio per CPU from Xen idletime counters; semantics align with xcp-rrdd-cpu cpuN.
      # TYPE xen_host_cpu_usage_ratio gauge
      xen_host_cpu_usage_ratio{cpu="0"} 0
      xen_host_cpu_usage_ratio{cpu="1"} 0.0055116846393780117
      xen_host_cpu_usage_ratio{cpu="2"} 0.0039004025966321576
      xen_host_cpu_usage_ratio{cpu="3"} 0
      xen_host_cpu_usage_ratio{cpu="4"} 0.0066811401678944504
      xen_host_cpu_usage_ratio{cpu="5"} 0
      xen_host_cpu_usage_ratio{cpu="6"} 0.0061615590518341312
      xen_host_cpu_usage_ratio{cpu="7"} 0
      xen_host_cpu_usage_ratio{cpu="8"} 0
      xen_host_cpu_usage_ratio{cpu="9"} 0.018294401829196061
      xen_host_cpu_usage_ratio{cpu="10"} 0.0097828084505896529
      xen_host_cpu_usage_ratio{cpu="11"} 0
      xen_host_cpu_usage_ratio{cpu="12"} 0
      xen_host_cpu_usage_ratio{cpu="13"} 0.011313510038158392
      xen_host_cpu_usage_ratio{cpu="14"} 0
      xen_host_cpu_usage_ratio{cpu="15"} 0.0073604364414601164
      xen_host_cpu_usage_ratio{cpu="16"} 0.017064714271418868
      xen_host_cpu_usage_ratio{cpu="17"} 0
      xen_host_cpu_usage_ratio{cpu="18"} 0.019081688214508952
      xen_host_cpu_usage_ratio{cpu="19"} 0
      xen_host_cpu_usage_ratio{cpu="20"} 0
      xen_host_cpu_usage_ratio{cpu="21"} 0.0050337631428650775
      xen_host_cpu_usage_ratio{cpu="22"} 0
      xen_host_cpu_usage_ratio{cpu="23"} 0.0090213716778614339
      xen_host_cpu_usage_ratio{cpu="24"} 0
      xen_host_cpu_usage_ratio{cpu="25"} 0.010063162005635951
      xen_host_cpu_usage_ratio{cpu="26"} 0.0066331410932402024
      xen_host_cpu_usage_ratio{cpu="27"} 0
      xen_host_cpu_usage_ratio{cpu="28"} 0.010268124843823001
      xen_host_cpu_usage_ratio{cpu="29"} 0
      xen_host_cpu_usage_ratio{cpu="30"} 0
      xen_host_cpu_usage_ratio{cpu="31"} 0.011560252191338383
      xen_host_cpu_usage_ratio{cpu="32"} 0
      xen_host_cpu_usage_ratio{cpu="33"} 0.0099933533399266805
      xen_host_cpu_usage_ratio{cpu="34"} 0.0094337603182127472
      xen_host_cpu_usage_ratio{cpu="35"} 0
      xen_host_cpu_usage_ratio{cpu="36"} 0
      xen_host_cpu_usage_ratio{cpu="37"} 0
      xen_host_cpu_usage_ratio{cpu="38"} 0
      xen_host_cpu_usage_ratio{cpu="39"} 0
      xen_host_cpu_usage_ratio{cpu="40"} 0
      xen_host_cpu_usage_ratio{cpu="41"} 0
      xen_host_cpu_usage_ratio{cpu="42"} 0
      xen_host_cpu_usage_ratio{cpu="43"} 0
      xen_host_cpu_usage_ratio{cpu="44"} 0
      xen_host_cpu_usage_ratio{cpu="45"} 0
      xen_host_cpu_usage_ratio{cpu="46"} 0
      xen_host_cpu_usage_ratio{cpu="47"} 0
      xen_host_cpu_usage_ratio{cpu="48"} 0
      xen_host_cpu_usage_ratio{cpu="49"} 0
      xen_host_cpu_usage_ratio{cpu="50"} 0
      xen_host_cpu_usage_ratio{cpu="51"} 0
      xen_host_cpu_usage_ratio{cpu="52"} 0
      xen_host_cpu_usage_ratio{cpu="53"} 0
      xen_host_cpu_usage_ratio{cpu="54"} 0.012625092098940027
      xen_host_cpu_usage_ratio{cpu="55"} 0
      xen_host_cpu_usage_ratio{cpu="56"} 0.0091633436869092977
      xen_host_cpu_usage_ratio{cpu="57"} 0
      xen_host_cpu_usage_ratio{cpu="58"} 0
      xen_host_cpu_usage_ratio{cpu="59"} 0
      xen_host_cpu_usage_ratio{cpu="60"} 0
      xen_host_cpu_usage_ratio{cpu="61"} 0
      xen_host_cpu_usage_ratio{cpu="62"} 0
      xen_host_cpu_usage_ratio{cpu="63"} 0
      xen_host_cpu_usage_ratio{cpu="64"} 0
      xen_host_cpu_usage_ratio{cpu="65"} 0
      xen_host_cpu_usage_ratio{cpu="66"} 0
      xen_host_cpu_usage_ratio{cpu="67"} 0
      xen_host_cpu_usage_ratio{cpu="68"} 0
      xen_host_cpu_usage_ratio{cpu="69"} 0
      xen_host_cpu_usage_ratio{cpu="70"} 0
      xen_host_cpu_usage_ratio{cpu="71"} 0
      xen_host_cpu_usage_ratio{cpu="72"} 0
      xen_host_cpu_usage_ratio{cpu="73"} 0
      xen_host_cpu_usage_ratio{cpu="74"} 0
      xen_host_cpu_usage_ratio{cpu="75"} 0
      xen_host_cpu_usage_ratio{cpu="76"} 0
      xen_host_cpu_usage_ratio{cpu="77"} 0
      xen_host_cpu_usage_ratio{cpu="78"} 0
      xen_host_cpu_usage_ratio{cpu="79"} 0
      # HELP xen_host_cpu_avg_usage_ratio Average physical CPU usage ratio from Xen idletime counters; semantics align with xcp-rrdd-cpu cpu_avg.
      # TYPE xen_host_cpu_avg_usage_ratio gauge
      xen_host_cpu_avg_usage_ratio 0.0024868463762477951
      # HELP xen_host_cpu_avg_frequency_mhz Average physical CPU frequency in MHz from Xen power-management stats.
      # TYPE xen_host_cpu_avg_frequency_mhz gauge
      xen_host_cpu_avg_frequency_mhz{cpu="0"} 2090950
      xen_host_cpu_avg_frequency_mhz{cpu="1"} 2377080
      xen_host_cpu_avg_frequency_mhz{cpu="2"} 2267030
      xen_host_cpu_avg_frequency_mhz{cpu="3"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="4"} 2399090
      xen_host_cpu_avg_frequency_mhz{cpu="5"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="6"} 2377080
      xen_host_cpu_avg_frequency_mhz{cpu="7"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="8"} 2112960
      xen_host_cpu_avg_frequency_mhz{cpu="9"} 2333060
      xen_host_cpu_avg_frequency_mhz{cpu="10"} 2377080
      xen_host_cpu_avg_frequency_mhz{cpu="11"} 2178990
      xen_host_cpu_avg_frequency_mhz{cpu="12"} 2178990
      xen_host_cpu_avg_frequency_mhz{cpu="13"} 2971350
      xen_host_cpu_avg_frequency_mhz{cpu="14"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="15"} 2421100
      xen_host_cpu_avg_frequency_mhz{cpu="16"} 2971350
      xen_host_cpu_avg_frequency_mhz{cpu="17"} 2178990
      xen_host_cpu_avg_frequency_mhz{cpu="18"} 2949340
      xen_host_cpu_avg_frequency_mhz{cpu="19"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="20"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="21"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="22"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="23"} 2509140
      xen_host_cpu_avg_frequency_mhz{cpu="24"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="25"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="26"} 2399090
      xen_host_cpu_avg_frequency_mhz{cpu="27"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="28"} 2443110
      xen_host_cpu_avg_frequency_mhz{cpu="29"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="30"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="31"} 2112960
      xen_host_cpu_avg_frequency_mhz{cpu="32"} 2156980
      xen_host_cpu_avg_frequency_mhz{cpu="33"} 2333060
      xen_host_cpu_avg_frequency_mhz{cpu="34"} 2355070
      xen_host_cpu_avg_frequency_mhz{cpu="35"} 2068940
      xen_host_cpu_avg_frequency_mhz{cpu="36"} 2002910
      xen_host_cpu_avg_frequency_mhz{cpu="37"} 2024920
      xen_host_cpu_avg_frequency_mhz{cpu="38"} 2002910
      xen_host_cpu_avg_frequency_mhz{cpu="39"} 2024920
      xen_host_cpu_avg_frequency_mhz{cpu="40"} 1892860
      xen_host_cpu_avg_frequency_mhz{cpu="41"} 1914870
      xen_host_cpu_avg_frequency_mhz{cpu="42"} 1892860
      xen_host_cpu_avg_frequency_mhz{cpu="43"} 1914870
      xen_host_cpu_avg_frequency_mhz{cpu="44"} 1892860
      xen_host_cpu_avg_frequency_mhz{cpu="45"} 1914870
      xen_host_cpu_avg_frequency_mhz{cpu="46"} 1980900
      xen_host_cpu_avg_frequency_mhz{cpu="47"} 2024920
      xen_host_cpu_avg_frequency_mhz{cpu="48"} 2024920
      xen_host_cpu_avg_frequency_mhz{cpu="49"} 2046930
      xen_host_cpu_avg_frequency_mhz{cpu="50"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="51"} 1958890
      xen_host_cpu_avg_frequency_mhz{cpu="52"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="53"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="54"} 2773260
      xen_host_cpu_avg_frequency_mhz{cpu="55"} 2024920
      xen_host_cpu_avg_frequency_mhz{cpu="56"} 2068940
      xen_host_cpu_avg_frequency_mhz{cpu="57"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="58"} 1892860
      xen_host_cpu_avg_frequency_mhz{cpu="59"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="60"} 1914870
      xen_host_cpu_avg_frequency_mhz{cpu="61"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="62"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="63"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="64"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="65"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="66"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="67"} 1958890
      xen_host_cpu_avg_frequency_mhz{cpu="68"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="69"} 1958890
      xen_host_cpu_avg_frequency_mhz{cpu="70"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="71"} 1958890
      xen_host_cpu_avg_frequency_mhz{cpu="72"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="73"} 1958890
      xen_host_cpu_avg_frequency_mhz{cpu="74"} 1936880
      xen_host_cpu_avg_frequency_mhz{cpu="75"} 1958890
      xen_host_cpu_avg_frequency_mhz{cpu="76"} 1958890
      xen_host_cpu_avg_frequency_mhz{cpu="77"} 1980900
      xen_host_cpu_avg_frequency_mhz{cpu="78"} 1980900
      xen_host_cpu_avg_frequency_mhz{cpu="79"} 1980900
      # HELP xen_host_cpu_pstate_residency_ratio Proportion of time a physical CPU spent in a P-state from Xen PM residency counters.
      # TYPE xen_host_cpu_pstate_residency_ratio gauge
      xen_host_cpu_pstate_residency_ratio{cpu="0",state="P0"} 1.0208140011677872e-08
      xen_host_cpu_pstate_residency_ratio{cpu="0",state="P1"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="0",state="P2"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="0",state="P3"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="0",state="P4"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="0",state="P5"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="0",state="P6"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="0",state="P7"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="0",state="P8"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="0",state="P9"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="0",state="P10"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="0",state="P11"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="1",state="P0"} 0.0055227636738445791
      xen_host_cpu_pstate_residency_ratio{cpu="1",state="P1"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="1",state="P2"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="1",state="P3"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="1",state="P4"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="1",state="P5"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="1",state="P6"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="1",state="P7"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="1",state="P8"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="1",state="P9"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="1",state="P10"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="1",state="P11"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="2",state="P0"} 0.0039114692212028641
      xen_host_cpu_pstate_residency_ratio{cpu="2",state="P1"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="2",state="P2"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="2",state="P3"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="2",state="P4"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="2",state="P5"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="2",state="P6"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="2",state="P7"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="2",state="P8"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="2",state="P9"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="2",state="P10"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="2",state="P11"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="3",state="P0"} 2.0015960807211515e-09
      xen_host_cpu_pstate_residency_ratio{cpu="3",state="P1"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="3",state="P2"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="3",state="P3"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="3",state="P4"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="3",state="P5"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="3",state="P6"} 0
      xen_host_cpu_pstate_residency_ratio{cpu="3",state="P7"} 0
      
      ...
      
      0.84987212423135072
      xen_host_cpu_cstate_residency_ratio{cpu="27",state="C0"} 0.00015908865793218979
      xen_host_cpu_cstate_residency_ratio{cpu="27",state="C1"} 0
      xen_host_cpu_cstate_residency_ratio{cpu="27",state="C2"} 0
      xen_host_cpu_cstate_residency_ratio{cpu="27",state="C3"} 0
      xen_host_cpu_cstate_residency_ratio{cpu="27",state="C4"} 0.99986911723355854
      xen_host_cpu_cstate_residency_ratio{cpu="28",state="C0"} 0.013343047999847271
      xen_host_cpu_cstate_residency_ratio{cpu="28",state="C1"} 0.0490509266050786
      xen_host_cpu_cstate_residency_ratio{cpu="28",state="C2"} 0.00615403285041105
      xen_host_cpu_cstate_residency_ratio{cpu="28",state="C3"} 0.029453632044006424
      
      ...
      
      0.0012707002614712984
      xen_host_cpu_cstate_residency_ratio{cpu="79",state="C3"} 0.0014181654508031323
      xen_host_cpu_cstate_residency_ratio{cpu="79",state="C4"} 0.99640523311850959
      # HELP xen_host_memory_total_kib Total amount of memory on the Xen host in KiB (xc_physinfo total_pages).
      # TYPE xen_host_memory_total_kib gauge
      xen_host_memory_total_kib 536737912
      # HELP xen_host_memory_free_kib Free memory on the Xen host in KiB (xc_physinfo free_pages).
      # TYPE xen_host_memory_free_kib gauge
      xen_host_memory_free_kib 518571944
      # HELP xen_host_memory_reclaimed_bytes Host memory reclaimed by squeezing in bytes (sum of dynamic-max minus target across domains).
      # TYPE xen_host_memory_reclaimed_bytes gauge
      xen_host_memory_reclaimed_bytes 0
      # HELP xen_host_memory_reclaimed_max_bytes Host memory that could be reclaimed by squeezing in bytes (sum of target minus dynamic-min across domains).
      # TYPE xen_host_memory_reclaimed_max_bytes gauge
      xen_host_memory_reclaimed_max_bytes 0
      # HELP xen_host_running_domains Total number of running domains from libxenctrl domain flags; semantics align with xcp-rrdd-cpu running_domains.
      # TYPE xen_host_running_domains gauge
      xen_host_running_domains 2
      # HELP xen_host_running_vcpus Total running/runnable vCPUs from libxenctrl vcpu info; semantics align with xcp-rrdd-cpu running_vcpus.
      # TYPE xen_host_running_vcpus gauge
      xen_host_running_vcpus 1
      # HELP xen_host_pcpu_count_xen Physical CPU count from libxenctrl xc_physinfo.
      # TYPE xen_host_pcpu_count_xen gauge
      xen_host_pcpu_count_xen 80
      # HELP xen_hostload_ratio Host load per physical CPU from libxenctrl runnable vCPU counting; semantics align with xcp-rrdd-cpu hostload.
      # TYPE xen_hostload_ratio gauge
      xen_hostload_ratio 0.012500000000000001
      # HELP xen_exporter_collector_success Whether a collector update succeeded.
      # TYPE xen_exporter_collector_success gauge
      xen_exporter_collector_success{collector="xenctrl"} 1
      # HELP xen_exporter_collector_duration_seconds Collector update duration in seconds.
      # TYPE xen_exporter_collector_duration_seconds gauge
      xen_exporter_collector_duration_seconds{collector="xenctrl"} 0.0095337210000000002
      # HELP xen_exporter_uptime_seconds Exporter uptime in seconds.
      # TYPE xen_exporter_uptime_seconds gauge
      xen_exporter_uptime_seconds 180.63357241
      

      This, in combination with node_exporter, gives you extremely detailed graphs that describe CPU, memory, network and IO at the lowest level: you can literally see the IO queue on individual paths, disk / path latencies, network usage, waits, and CPU and RAM utilization.

      A minor downside is that this is an extra daemon that needs to be installed on dom0 to expose those diagnostic metrics, but it's written in Go (I followed the original node_exporter, which is also in Go), with cgo linking against the xenctrl library. It's extremely fast (almost zero CPU overhead) and uses very little RAM (13 MB), and I think it's going to be a brilliant tool for extremely detailed, low-level per-host performance metrics in the future.
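      To give an idea of what the exporter side of this looks like, here is a minimal, stdlib-only sketch (this is not the actual xen_exporter code; the function names are mine) of rendering gauges in the Prometheus text exposition format shown in the dump above. The real daemon gets its values from cgo calls into libxenctrl and serves the rendered page from a net/http handler on /metrics:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// renderGauge renders one gauge metric family in the Prometheus text
// exposition format: a # HELP / # TYPE header followed by one sample per
// label set, matching the output shown above.
func renderGauge(name, help string, samples map[string]float64) string {
	var b strings.Builder
	fmt.Fprintf(&b, "# HELP %s %s\n", name, help)
	fmt.Fprintf(&b, "# TYPE %s gauge\n", name)
	// Sort label sets so the rendered page is stable between scrapes.
	keys := make([]string, 0, len(samples))
	for k := range samples {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, labels := range keys {
		b.WriteString(name)
		if labels != "" {
			b.WriteString("{" + labels + "}")
		}
		// 'f' formatting avoids scientific notation for large values.
		b.WriteString(" " + strconv.FormatFloat(samples[labels], 'f', -1, 64) + "\n")
	}
	return b.String()
}

func main() {
	// In the real exporter these values come from libxenctrl
	// (xc_physinfo and friends); here they are hard-coded for illustration.
	fmt.Print(renderGauge("xen_host_memory_total_kib",
		"Total amount of memory on the Xen host in KiB.",
		map[string]float64{"": 536737912}))
	fmt.Print(renderGauge("xen_host_cpu_cstate_residency_ratio",
		"Fraction of time each CPU spent in a given C-state.",
		map[string]float64{`cpu="28",state="C2"`: 0.00615403285041105}))
}
```

      The format is deliberately trivial, which is a big part of why a hand-rolled dom0 daemon can stay this cheap: no client library is strictly required, just deterministic text output.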

      Note: There is already a similar project, also called xen_exporter, but that one is architecturally unacceptable to me because it's just a wrapper around xapi's rrdd endpoint, the same data source that XenAdmin and Xen Orchestra already use for their limited graphs. It has to run on a third VM (which is arguably "cleaner" than running anything on dom0), but it needs to connect to xapi with root credentials over the HTTPS endpoint and scrape data from rrdd, which is less efficient and much less granular, and it probably only works through the active master node, so configuring that exporter in Prometheus is tricky.
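      Because this exporter listens on each dom0 directly, there is no master-node indirection to work around: the Prometheus side is just a static target list. A minimal scrape job might look like this (the job name and port are my assumptions, not something the exporter prescribes):

```yaml
scrape_configs:
  - job_name: "xen"              # hypothetical job name
    scrape_interval: 15s
    static_configs:
      - targets:                 # one entry per dom0 running the exporter
          - "xcp-host1:9100"     # hypothetical host:port
          - "xcp-host2:9100"
```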

      Edit: had to trim that output, too much text for the forum

      posted in News
      B
      benapetr
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      Version 0.0.5 alpha was just released.

      The tree view got a major fix: it's now almost fully on par with the C# version; all the logic, missing icons and context menus were fixed.

      This is rather a quality-of-life release; feature-wise only GPU support was added, the rest were bug fixes, but a lot of them. XenAdminQt is now so stable that I am even using it on my production servers. It already feels the same as (even better in some aspects than) the C# version.

      I also added a minor new feature: all table views now support export to CSV (into the clipboard) via the context menu, very handy if you need to export data from various views.

      posted in News
    • RE: Import from VMware err: name: AssertionError: Expected "actual" to be strictly unequal to: undefined

      Hello, since this never had a clear resolution, here is an explanation of the bug and why it happens:

      There is a bug in the disk iteration in the XO VMware plugin (somewhere in that esxi.mjs, I don't remember the exact location): it basically expects all disks of a VM to exist in the same datastore, and if they don't, it crashes because the next disk is unexpectedly missing (undefined).

      The workaround is rather simple: select the VM in VMware, go to Migrate -> Storage, disable DRS (important), and then select a datastore that none of the disks currently exist on. If you select a DS that is already used by the same VM, it sometimes won't get fixed! This can also happen even when the VM "apparently" sits on a single DS; even if it reports as such, still migrate it to another DS, with DRS disabled, so that really all files, even meta files, end up in the same directory.

      Then run the XO import again; it will magically work.

      posted in Migrate to XCP-ng
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      I just released 0.0.4! Still an alpha, but it's becoming pretty usable: there were over 60 commits of code cleanup and polish, plus limited testing of all visible features. Pretty much everything that is now visible in the UI should be operational at this point: option pages, properties of hosts, pools and VDIs, configuration of NICs, even bonding and advanced stuff like pool password / secret rotation. All XAPI actions and commands have already been ported over from the C# variant, so my focus now is solely on finishing it into a usable final product.

      We even have some features that the original client doesn't have (such as the XenCache explorer)!

      Next on my to-do list is to add all the features that are currently missing completely: HA, GPU, PCIe, VM import / export, etc.

      Note: I will not be porting over any licensed or proprietary Citrix stuff. This tool may work with XenServer just fine, but I will not port any of the proprietary features, because I will never use them and I have no way to test them either.

      posted in News
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      @olivierlambert ok that's great, I didn't even notice, thanks!

      posted in News
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      @TeddyAstie good idea, thanks, I will do it once I get close to releasing the first non-alpha version; right now it's still too immature for production use (although the latest master version is so good and stable that I already use it to manage most of my own clusters).

      posted in News
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      @Tristis-Oris There are many features missing since it's still in alpha, but you can open a bug tracker request on GitHub if you want to keep track of progress for this particular feature.

      posted in News
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      @Tristis-Oris just FYI, I fixed all of the issues you had. Also, regarding the portable stuff: I implemented a CLI switch -c <path> where you can specify the directory in which to store the config, so you can wrap the app in a .bat or .sh script that starts it using local storage (like a flash drive) for config files; that should achieve portability.

      If you expected something more sophisticated, let me know.

      posted in News
    • RE: New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      @Tristis-Oris the logic was directly ported over from the C# version, so it filters the same things as the original. It just toggles this search scope:

      QueryScope* TreeSearch::GetTreeSearchScope()
      {
          // Start from the default object types and always include pools.
          ObjectTypes types = Search::DefaultObjectTypes();
          types |= ObjectTypes::Pool;
      
          SettingsManager& settings = SettingsManager::instance();
      
          // Each view setting toggles an extra object type into the scope.
          if (settings.getDefaultTemplatesVisible())
              types |= ObjectTypes::DefaultTemplate;
      
          if (settings.getUserTemplatesVisible()) // these are custom templates
              types |= ObjectTypes::UserTemplate;
      
          if (settings.getLocalSRsVisible())
              types |= ObjectTypes::LocalSR;
      
          return new QueryScope(types); // caller takes ownership
      }
      

      I assume it matches all user-defined templates; the defaults you see are "system-defined" and part of the xcp-ng .rpm packages. It's possible that this logic just wasn't ported correctly; I will look into it.

      posted in News