Tag-Based Automation: Manage VM CPU Priority via assigned tag.
-
WHAT: Automatically assigns CPU weights and I/O priorities based on the assigned VM tag (i.e. replicating what vCenter did via resource pools, etc.).
HOW: Run via /etc/cron.{schedule-folder} for regular enforcement.
WHY: Automatically applies performance settings to all pool VMs (and prevents configuration drift if settings are accidentally changed).
TAGS: The Performance Tiering Concept: a 4-tier system with a naming convention that sorts logically in XO:
TAG        CPU WEIGHT   I/O PRIORITY   USE CASE
0-core     2048         7 (Highest)    Domain Controllers, DNS, DHCP, Core DBs
1-high     1024         7              Critical App Servers
2-normal   256          4              Standard Workloads
3-low      128          1              Dev/Test, Noisy Neighbors

Why the "0-" prefix? It forces core VMs to the top of the VM list in XO for easy visibility and management.
Important: CPU weights only matter during contention. When the host is under-utilized, all VMs get the performance they need regardless of weight. These are an insurance policy.
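To make the "insurance policy" point concrete, here is a small illustrative calculation (not part of the script): under contention, Xen's credit scheduler divides CPU time in proportion to VCPUs-params:weight, so two busy VMs fighting over one saturated core split it roughly like this:

```shell
# Illustration only: proportional shares for two VMs competing for one
# saturated core, using the 0-core and 3-low weights from the table above.
core=2048
low=128
echo "0-core share: $(( core * 100 / (core + low) ))%"   # prints: 0-core share: 94%
echo "3-low  share: $(( low  * 100 / (core + low) ))%"   # prints: 3-low  share: 5%
```

When the host is idle, both VMs would still get whatever CPU they ask for; the ratio only kicks in when demand exceeds supply.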
Change-Log
# ============================================
# CHANGE-LOG: set-performance.sh
# ============================================

v3.2 - 2026-05-14
- Fixed Master Host Check: replaced invalid "xe host-is-master" command with correct method: compare "xe pool-list params=master" UUID against "xe host-list name-label=$(hostname)"

v3.1 - 2026-05-14
- Moved config files to /usr/local/etc/set-performance.conf.d/
- Split into default.conf (generic) and custom.conf (pool-specific)
- Removed TAG_SUFFIX from default.conf (belongs in custom.conf only)
- Removed all non-ASCII characters for editor compatibility
- Removed pool-specific suffix examples from defaults
- Initialize now verifies config dir, default.conf, and cron dir before making any changes
- Initialize warns (non-fatal) if custom.conf is missing

v3.0 - 2026-05-14
- Added nested conf.d override pattern (mirrors Linux standard: sshd_config.d, sudoers.d, etc.)
- Added TAG_SUFFIX for pool-specific tag naming (e.g. "-1" produces 0-core-1, 1-high-1, etc.)
- Base tag names defined separately, combined with suffix at runtime
- Added SCHEDULE option (hourly/daily/weekly/monthly)
- Symlink auto-created in correct cron folder on initialize
- No manual crontab editing required
- Initialize confirms active tags and suffix on completion

v2.0 - 2026-05-14
- Added external configuration file (set-performance.conf) for easier management without editing the script directly
- Added Master Host Check: script safely exits on non-master hosts, allowing deployment across all pool hosts via cron
- Added detailed main log (/var/log/set-performance.log)
- Added summary counters per tier (0-core, 1-high, 2-normal, 3-low)
- Added one-line timestamped summary output to separate /var/log/set-performance-summary.log for easy auditing
- Added optional NFS copy of summary log for centralized reporting via Xen Orchestra

v1.0 - Initial Release
- Tag-based CPU weight and I/O priority enforcement
- Four performance tiers: 0-core, 1-high, 2-normal, 3-low
- Network QoS cap (100Mbps) for 3-low tagged VMs
- Designed for hourly cron execution

The Files:
File 1-of-3: /usr/local/etc/set-performance.conf.d/default.conf
# ============================================
# default.conf
# Global default configuration for
# set-performance.sh
# DO NOT edit for pool-specific settings --
# use custom.conf in this same directory
# ============================================

# --- SCHEDULING ---
# Options: hourly, daily, weekly, monthly
SCHEDULE="hourly"

# --- LOG FILES ---
MAIN_LOG="/var/log/set-performance.log"
SUMMARY_LOG="/var/log/set-performance-summary.log"

# --- BASE TAG NAMES ---
# These are the base tag names without any suffix.
# To add a pool-specific suffix (e.g. "-1", "-2"),
# set TAG_SUFFIX in custom.conf
CORE_BASE="0-core"
HIGH_BASE="1-high"
NORMAL_BASE="2-normal"
LOW_BASE="3-low"

# --- PERFORMANCE SETTINGS ---
CORE_WEIGHT="2048"
CORE_IO_PRI="7"
HIGH_WEIGHT="1024"
HIGH_IO_PRI="7"
NORMAL_WEIGHT="256"
NORMAL_IO_PRI="4"
LOW_WEIGHT="128"
LOW_IO_PRI="1"

File 2-of-3: /usr/local/etc/set-performance.conf.d/custom.conf
# ============================================
# custom.conf
# Pool-specific overrides for set-performance.sh
# This file overrides matching settings from
# default.conf if the same variable is defined
# here.
#
# INSTRUCTIONS:
# 1. Set TAG_SUFFIX to match your pool:
#    POOL-1  --> TAG_SUFFIX="-1"
#    POOL-2  --> TAG_SUFFIX="-2"
#    POOL-3  --> TAG_SUFFIX="-3"
#    Generic --> TAG_SUFFIX=""
#
# 2. Uncomment and adjust any performance
#    settings below if this pool needs
#    different values than the defaults.
# ============================================

# --- TAG SUFFIX ---
# Appended to all base tag names at runtime.
# Example: TAG_SUFFIX="-1" produces:
#   0-core-1, 1-high-1, 2-normal-1, 3-low-1
TAG_SUFFIX=""

# --- OPTIONAL OVERRIDES ---
# Uncomment to override default.conf values:
# SCHEDULE="daily"
# CORE_WEIGHT="4096"
# HIGH_WEIGHT="2048"
# NORMAL_WEIGHT="512"
# LOW_WEIGHT="256"

File 3-of-3: /usr/local/bin/set-performance.sh
#!/bin/bash
# ============================================
# XCP-ng set-performance.sh (v3.2)
# Purpose: Enforce VM Performance Tiers via Tags
# Config:  /usr/local/etc/set-performance.conf.d/
# Runs:    Via cron symlink (set by initialize)
# ============================================

CONF_DIR="/usr/local/etc/set-performance.conf.d"
DEFAULT_CONF="$CONF_DIR/default.conf"
CUSTOM_CONF="$CONF_DIR/custom.conf"
SCRIPT_PATH="/usr/local/bin/set-performance.sh"

# --- STEP 1: LOAD DEFAULT CONFIGURATION ---
if [ -f "$DEFAULT_CONF" ]; then
    source "$DEFAULT_CONF"
else
    echo "Error: Default configuration not found at $DEFAULT_CONF"
    exit 1
fi

# --- STEP 2: LOAD CUSTOM OVERRIDES (if present) ---
if [ -f "$CUSTOM_CONF" ]; then
    source "$CUSTOM_CONF"
fi

# --- STEP 3: CONSTRUCT FINAL TAG NAMES ---
CORE_TAG="${CORE_BASE}${TAG_SUFFIX}"
HIGH_TAG="${HIGH_BASE}${TAG_SUFFIX}"
NORMAL_TAG="${NORMAL_BASE}${TAG_SUFFIX}"
LOW_TAG="${LOW_BASE}${TAG_SUFFIX}"

# ============================================
# INITIALIZE FUNCTION
# Run once manually to set up cron symlink.
# Usage: /usr/local/bin/set-performance.sh initialize
# ============================================
initialize_script() {
    echo ""
    echo "set-performance.sh -- Initialization"
    echo "============================================"

    if [ ! -d "$CONF_DIR" ]; then
        echo "Error: Config directory not found: $CONF_DIR"
        echo "Please create it and add default.conf and custom.conf first."
        exit 1
    fi

    if [ ! -f "$DEFAULT_CONF" ]; then
        echo "Error: default.conf not found in $CONF_DIR"
        exit 1
    fi

    if [ ! -f "$CUSTOM_CONF" ]; then
        echo "Warning: custom.conf not found. Using defaults only."
    fi

    TARGET_DIR="/etc/cron.${SCHEDULE}"
    SYMLINK_PATH="${TARGET_DIR}/set-performance"

    if [ ! -d "$TARGET_DIR" ]; then
        echo "Error: Cron directory not found: $TARGET_DIR"
        echo "Valid SCHEDULE options: hourly, daily, weekly, monthly"
        exit 1
    fi

    echo "Cleaning up existing cron symlinks..."
    for interval in hourly daily weekly monthly; do
        rm -f "/etc/cron.${interval}/set-performance"
    done

    ln -sf "$SCRIPT_PATH" "$SYMLINK_PATH"

    if [ -L "$SYMLINK_PATH" ]; then
        echo ""
        echo "[OK] Symlink created : $SYMLINK_PATH"
        echo "[OK] Schedule        : $SCHEDULE"
        echo "[OK] Tag suffix      : ${TAG_SUFFIX:-(none)}"
        echo "[OK] Active tags     : $CORE_TAG, $HIGH_TAG, $NORMAL_TAG, $LOW_TAG"
        echo ""
        echo "Initialization complete. Script will now run $SCHEDULE via cron."
    else
        echo "Error: Failed to create symlink at $SYMLINK_PATH"
        exit 1
    fi
}

# --- CHECK FOR initialize ARGUMENT ---
if [ "$1" == "initialize" ]; then
    initialize_script
    exit 0
fi

# ============================================
# NORMAL EXECUTION (called by cron)
# ============================================

# --- MASTER HOST CHECK ---
# Compare pool master UUID to this host's local UUID
POOL_MASTER=$(xe pool-list params=master --minimal)
LOCAL_HOST=$(xe host-list name-label="$(hostname)" --minimal)
if [ "$POOL_MASTER" != "$LOCAL_HOST" ]; then
    exit 0
fi

# --- INITIALIZE COUNTERS ---
count_0=0
count_1=0
count_2=0
count_3=0

# --- REDIRECT ALL OUTPUT TO MAIN LOG ---
exec >> "$MAIN_LOG" 2>&1
echo "--- Starting Performance Sync: $(date) ---"
echo "    Tags: $CORE_TAG | $HIGH_TAG | $NORMAL_TAG | $LOW_TAG"

# --- 0-core : CORE CRITICAL VMs ---
echo "=== Applying $CORE_TAG (Weight: $CORE_WEIGHT, Pri: $CORE_IO_PRI) ==="
for uuid in $(xe vm-list tags:contains="$CORE_TAG" --minimal | tr ',' '\n'); do
    [ -z "$uuid" ] && continue
    xe vm-param-set uuid="$uuid" VCPUs-params:weight="$CORE_WEIGHT" other-config:sched-pri="$CORE_IO_PRI"
    echo "  [OK] CORE applied: $uuid"
    ((count_0++))
done

# --- 1-high : HIGH PRIORITY VMs ---
echo "=== Applying $HIGH_TAG (Weight: $HIGH_WEIGHT, Pri: $HIGH_IO_PRI) ==="
for uuid in $(xe vm-list tags:contains="$HIGH_TAG" --minimal | tr ',' '\n'); do
    [ -z "$uuid" ] && continue
    xe vm-param-set uuid="$uuid" VCPUs-params:weight="$HIGH_WEIGHT" other-config:sched-pri="$HIGH_IO_PRI"
    echo "  [OK] HIGH applied: $uuid"
    ((count_1++))
done

# --- 2-normal : NORMAL PRIORITY VMs ---
echo "=== Applying $NORMAL_TAG (Weight: $NORMAL_WEIGHT, Pri: $NORMAL_IO_PRI) ==="
for uuid in $(xe vm-list tags:contains="$NORMAL_TAG" --minimal | tr ',' '\n'); do
    [ -z "$uuid" ] && continue
    xe vm-param-set uuid="$uuid" VCPUs-params:weight="$NORMAL_WEIGHT" other-config:sched-pri="$NORMAL_IO_PRI"
    echo "  [OK] NORMAL applied: $uuid"
    ((count_2++))
done

# --- 3-low : LOW PRIORITY VMs ---
echo "=== Applying $LOW_TAG (Weight: $LOW_WEIGHT, Pri: $LOW_IO_PRI) ==="
for uuid in $(xe vm-list tags:contains="$LOW_TAG" --minimal | tr ',' '\n'); do
    [ -z "$uuid" ] && continue
    xe vm-param-set uuid="$uuid" VCPUs-params:weight="$LOW_WEIGHT" other-config:sched-pri="$LOW_IO_PRI"
    echo "  [OK] LOW applied: $uuid"
    ((count_3++))
done

echo "--- Sync Complete: $(date) ---"

# --- ONE-LINE SUMMARY --> set-performance-summary.log ---
echo "$(date '+%Y-%m-%d %H:%M:%S') $(hostname) $CORE_TAG:$count_0 $HIGH_TAG:$count_1 $NORMAL_TAG:$count_2 $LOW_TAG:$count_3" >> "$SUMMARY_LOG"

Deployment
# 1. Create config directory
mkdir -p /usr/local/etc/set-performance.conf.d

# 2. Place config files
cp default.conf /usr/local/etc/set-performance.conf.d/
cp custom.conf /usr/local/etc/set-performance.conf.d/

# 3. Place and permission the script
cp set-performance.sh /usr/local/bin/
chmod +x /usr/local/bin/set-performance.sh

# 4. Edit custom.conf for your pool (if needed), and initialize
vi /usr/local/etc/set-performance.conf.d/custom.conf
/usr/local/bin/set-performance.sh initialize

-
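As a sanity check after step 4, you mainly want to confirm the cron symlink resolves and executes. The snippet below rehearses that check in a temp directory (the paths are stand-ins for /etc/cron.hourly and /usr/local/bin, and the stub script is hypothetical), so it is safe to run anywhere:

```shell
# Simulated post-deploy check: does the cron symlink resolve and execute?
TMP=$(mktemp -d)
mkdir -p "$TMP/cron.hourly" "$TMP/bin"
printf '#!/bin/bash\necho sync-ok\n' > "$TMP/bin/set-performance.sh"   # stub script
chmod +x "$TMP/bin/set-performance.sh"
ln -sf "$TMP/bin/set-performance.sh" "$TMP/cron.hourly/set-performance"

readlink "$TMP/cron.hourly/set-performance"   # shows the link target
"$TMP/cron.hourly/set-performance"            # prints: sync-ok
rm -rf "$TMP"
```

On a real host the equivalent would be `ls -l /etc/cron.hourly/set-performance` plus a tail of the summary log after the next cron run.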
Nice. I believe that I read elsewhere that our dev team was working on something similar.
FYI, you have a typo (performace.sh vs performance.sh)

-
It would be even better if you could split the configuration section off, so that it's in its own conf file. It would make it easier to manage, and also helps if this ends up being used by Vates in the Vates VMS software. There could then be a vendor-recommended configuration, with the option of the customer's own workflow-based configuration.
-
Ping @julienxovates
-
@Danp Thanks for the correction - "Good eye, good eye!"

-
@john.c Great idea, and done! ("Keep 'em coming")
-
Thanks for the change. By the way, what I meant for the vendor config and customer workflow config is that implementing it requires nested config files. In other words, a set-performance.conf, then a file such as set-performance.conf.d/custom.conf (or similar). The custom.conf in the conf.d directory overrides the same property (and section) within set-performance.conf.
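The override mechanism described here boils down to sourcing the default file first, then the custom file, so later definitions win. A minimal runnable demo of that pattern (temp files with illustrative variable names, not the script's actual configs):

```shell
# Demo of the conf.d override pattern: custom.conf wins over default.conf
TMP=$(mktemp -d)
cat > "$TMP/default.conf" <<'EOF'
SCHEDULE="hourly"
CORE_WEIGHT="2048"
EOF
cat > "$TMP/custom.conf" <<'EOF'
CORE_WEIGHT="4096"
EOF

source "$TMP/default.conf"                                # defaults first
[ -f "$TMP/custom.conf" ] && source "$TMP/custom.conf"    # overrides second

echo "SCHEDULE=$SCHEDULE CORE_WEIGHT=$CORE_WEIGHT"        # prints: SCHEDULE=hourly CORE_WEIGHT=4096
rm -rf "$TMP"
```

Note that this is variable-level overriding only; unlike some conf.d schemes there are no sections, so a custom file that omits a variable simply inherits the default.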
-
@john.c Great ideas (especially if Vates decides to bake something similar into XO someday) but may be getting too far into the weeds for now...
-
That's okay. Just putting it out there - no rush! Anyway, to maintain Linux good practice, relocate the conf file to /usr/local/etc/ (or /etc) and keep the script in /usr/local/bin.
-
@johnnezero Also, there are shortcut directories, including one called cron.hourly; you can place a symlink (or hard link) to the script there. Cron will then execute the script without you needing to edit the crontab file.
Just drop the .sh extension when using the shortcut directory, as the script will not run there otherwise.
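For context on why the extension matters: the cron.* directories are executed by run-parts, and Debian-style run-parts only runs filenames matching ^[A-Za-z0-9_-]+$, so a dot disqualifies the file (some distros' run-parts are more lenient, but dropping the extension is safe everywhere). A quick emulation of that filename filter:

```shell
# Emulate the Debian run-parts filename filter: names with dots are skipped
for f in set-performance set-performance.sh; do
  if [[ "$f" =~ ^[A-Za-z0-9_-]+$ ]]; then
    echo "would run: $f"     # prints: would run: set-performance
  else
    echo "skipped:   $f"     # prints: skipped:   set-performance.sh
  fi
done
```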
-
@john.c Yet another awesome idea - adding it to the "ToDo List", thanks!
-
@john.c Wandered off through the weeds (with Claude/AI, that is), and got it done.

-
@john.c Also done! Thanks for all the great input, keep 'em coming...
-
@johnnezero this could be a plugin in XOA!
-
@Pilow Sounds like an awesome idea. Send any details you may have on how to make plugins (if you know how, that is).
Adding to the ToDo list - Thanks!
-
Looks like I can help out again - just tracked down this past thread on the forums:
https://xcp-ng.org/forum/topic/7202/how-do-i-create-a-new-plugin
How's your JavaScript (TypeScript), JSON, etc.? These are the languages needed to develop plugins for Xen Orchestra.
-
@john.c Thanks much, looking into it.
"Open-Source for the Win!"
-
You're welcome!
-
@johnnezero It would also be interesting to take UMA/NUMA into account, as VMs -- in particular, VMs with vGPUs -- can run much more efficiently if they do not cross memory bank boundaries that span more than one CPU instance. On some Linux systems -- not sure about the one hosting XCP-ng -- you can even disable NUMA. Just an additional thought here. I published a number of years ago a three-part series, "A Tale of Two Servers", discussing a number of related optimizations, but alas, the Citrix blogs were eliminated and I'm not sure where copies of these articles exist anymore. But there are plenty of articles on this, in particular by Frank Denneman, and also ones like the following:
https://indico.cern.ch/event/304944/contributions/1672535/attachments/578723/796898/numa.pdf
https://docs.xenserver.com/en-us/xenserver/9/numa.html
-
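[Editor's aside on the NUMA point above: a quick way to eyeball the host's NUMA layout from dom0 is `xl info -n`, which is standard on Xen hosts such as XCP-ng. The snippet is guarded so it degrades gracefully on machines without `xl`, and the pinning line at the end is a hypothetical example, not a recommendation.]

```shell
# Inspect NUMA topology from dom0 (guarded: only runs where xl exists)
if command -v xl >/dev/null 2>&1; then
  xl info -n | grep -i -A2 node    # per-node CPU and memory layout
else
  echo "xl not present (not a Xen dom0)"
fi

# Hypothetical example: keep a VM's vCPUs on one node's cores (0-7)
# xe vm-param-set uuid=<VM-UUID> VCPUs-params:mask=0,1,2,3,4,5,6,7
```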
If you remember the URL and date, maybe try the Wayback Machine of the Internet Archive. They're known to archive sites and articles wherever they can, and may hold a copy that's accessible.