You can install NUT on a separate (small) machine and let it ssh to your master and run a shutdown script.
That's how I've done it since a while back, when the NUT package was temporarily removed from the extra repo.
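A minimal sketch of that setup, assuming key-based ssh from the NUT machine to the master is already in place. The hostname `xcpng-master` and the script path `/root/ups-shutdown.sh` are placeholders, not anything NUT ships with:

```shell
#!/bin/sh
# Sketch of the "small NUT box" approach: a hook that NUT's upsmon runs when
# the UPS goes critical, which sshes to the XCP-ng master and runs a shutdown
# script there. Wire it in via /etc/nut/upsmon.conf on the NUT machine, e.g.:
#   SHUTDOWNCMD "/usr/local/bin/shutdown-master.sh"

MASTER_HOST="${MASTER_HOST:-root@xcpng-master}"          # placeholder hostname
REMOTE_SCRIPT="${REMOTE_SCRIPT:-/root/ups-shutdown.sh}"  # script on the master

shutdown_master() {
  if [ -n "$DRY_RUN" ]; then
    # With DRY_RUN set, only print what would run, so the hook
    # can be tested without powering anything off.
    echo "ssh $MASTER_HOST $REMOTE_SCRIPT"
  else
    ssh "$MASTER_HOST" "$REMOTE_SCRIPT"
  fi
}

# The installed hook would simply call:  shutdown_master
```

The actual shutdown logic (stopping VMs, powering off the host) lives in the script on the master, so the NUT box only needs ssh access.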
-
RE: Just FYI: current update seams to break NUT dependancies
-
RE: Failed backup jobs since updating
Error: _removeUnusedSnapshots don't handle vdi related to multiple VMs
I had the same error when I updated to 449e7.
It ran 2 CR jobs, then threw the error.
There were other problems, so I rolled back to 5bdd7.
https://xcp-ng.org/forum/topic/11969/timestamp-lost-in-continuous-replication -
RE: Timestamp lost in Continuous Replication
I did get a snapshot on one of the CR VMs with the 449e7 version.
I don't with 5bdd7 -
RE: Timestamp lost in Continuous Replication
I did a rollback to 5bdd7 and got the date back, and also the delta -
RE: Timestamp lost in Continuous Replication
The latest XCP-ng update had a change regarding NTP:
XAPI, XCP-ng's control plane, was updated to version 26.1.3. Added API for controlling NTP.
This might be a long shot, and I don't know if it has anything to do with my "problem".
The timestamp on ContRep VMs has always been in UTC.
I'm using Stockholm SE as the timezone, and other bugs regarding the presentation of time have been fixed, but not this one. -
RE: Timestamp lost in Continuous Replication
{ "data": { "mode": "delta", "reportWhen": "failure" }, "id": "1773474108084", "jobId": "3dcd13a8-de8b-47b6-945b-12dbad9c6234", "jobName": "ContRep", "message": "backup", "scheduleId": "522d611e-7cd9-4fe0-a9e1-b409927cd8c8", "start": 1773474108084, "status": "success", "infos": [ { "data": { "vms": [ "b1940325-7c09-7342-5a90-be2185c6d5b9", "86ab334a-92dc-324c-0c42-43aad3ae3bc2", "0f5c4931-a468-e75d-fa54-e1f9da0227a1" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "b1940325-7c09-7342-5a90-be2185c6d5b9", "name_label": "PiHole wifi" }, "id": "1773474110343", "message": "backup VM", "start": 1773474110343, "status": "success", "tasks": [ { "id": "1773474111649", "message": "snapshot", "start": 1773474111649, "status": "success", "end": 1773474113141, "result": "d4b0607f-0837-c7ae-5c2d-6426995470bd" }, { "data": { "id": "4f2f7ae2-024a-9ac7-add4-ffe7d569cae7", "isFull": true, "name_label": "Q1-ContRep", "type": "SR" }, "id": "1773474113142", "message": "export", "start": 1773474113142, "status": "success", "tasks": [ { "id": "1773474114002", "message": "transfer", "start": 1773474114002, "status": "success", "tasks": [ { "id": "1773474189159", "message": "target snapshot", "start": 1773474189159, "status": "success", "end": 1773474190075, "result": "OpaqueRef:fa356b5f-4b25-d3e6-5507-6ed81c32b1d8" } ], "end": 1773474190075, "result": { "size": 4299161600 } } ], "end": 1773474190682 } ], "end": 1773474191523 }, { "data": { "type": "VM", "id": "86ab334a-92dc-324c-0c42-43aad3ae3bc2", "name_label": "Home Assistant" }, "id": "1773474191534", "message": "backup VM", "start": 1773474191534, "status": "success", "tasks": [ { "id": "1773474191707", "message": "snapshot", "start": 1773474191707, "status": "success", "end": 1773474193196, "result": "c3f038c3-7ca9-cbbb-9f84-61e1fd30c9d5" }, { "data": { "id": "4f2f7ae2-024a-9ac7-add4-ffe7d569cae7", "isFull": true, "name_label": "Q1-ContRep", "type": "SR" }, "id": "1773474193196:0", "message": "export", "start": 1773474193196, "status": "success", "tasks": [ { "id": "1773474194123", "message": "transfer", "start": 1773474194123, "status": "success", "tasks": [ { "id": "1773474462529", "message": "target snapshot", "start": 1773474462529, "status": "success", "end": 1773474463434, "result": "OpaqueRef:c13f3cab-29c8-4ef0-253f-de5998580cd9" } ], "end": 1773474463434, "result": { "size": 15548284928 } } ], "end": 1773474464311 } ], "end": 1773474466186 }, { "data": { "type": "VM", "id": "0f5c4931-a468-e75d-fa54-e1f9da0227a1", "name_label": "Sync Mate" }, "id": "1773474466193", "message": "backup VM", "start": 1773474466193, "status": "success", "tasks": [ { "id": "1773474466371", "message": "snapshot", "start": 1773474466371, "status": "success", "end": 1773474470399, "result": "36a17271-1c2f-4b26-0d86-dc0faf27fa17" }, { "data": { "id": "4f2f7ae2-024a-9ac7-add4-ffe7d569cae7", "isFull": true, "name_label": "Q1-ContRep", "type": "SR" }, "id": "1773474470399:0", "message": "export", "start": 1773474470399, "status": "success", "tasks": [ { "id": "1773474471561", "message": "transfer", "start": 1773474471561, "status": "success", "tasks": [ { "id": "1773476263789", "message": "target snapshot", "start": 1773476263789, "status": "success", "end": 1773476264925, "result": "OpaqueRef:88bd55f4-64ea-fb16-f389-4b9f42ac459f" } ], "end": 1773476264925, "result": { "size": 105526591488 } } ], "end": 1773476267354 } ], "end": 1773476268187 } ], "end": 1773476268187 } -
RE: Timestamp lost in Continuous Replication
I was on 5bdd7, installed at 202603091803 -
Timestamp lost in Continuous Replication
Updated to 449e7 (ronivay script) today. There used to be a timestamp at the end of the name of the Cont. Repl. VM.

Now it looks like this

-
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey
In my homelab I've had the same problem for at least 18 months:
when shutting down the XO VM, it hangs for 2-3 minutes while it tries to unmount the remotes:
umount /run/xo-server/mounts/xxxxx (SMB and NFS)
umount /run/xo-server/mounts/yyyyy (SMB)
I did report this in an earlier thread.
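One workaround for a hang like this, sketched below under the assumption that the remotes are simply slow to respond at shutdown: lazily detach anything still mounted under XO's mount root before rebooting. The mount root path comes from the post; nothing here is an official XO mechanism:

```shell
#!/bin/sh
# Pre-shutdown sketch: lazily unmount remotes under XO's mount root so an
# unresponsive NFS/SMB server cannot block shutdown for minutes.
MOUNT_ROOT="${MOUNT_ROOT:-/run/xo-server/mounts}"

list_xo_mounts() {
  # One mountpoint per line (from /proc/mounts); prints nothing if no
  # remotes are currently mounted under MOUNT_ROOT.
  awk -v root="$MOUNT_ROOT" 'index($2, root) == 1 { print $2 }' /proc/mounts
}

for m in $(list_xo_mounts); do
  # -l (lazy): detach from the filesystem tree now and finish cleanup when
  # the mount is no longer busy, instead of waiting on the server.
  umount -l "$m"
done
```

Running this (e.g. from a shutdown hook) before the reboot should let the VM go down immediately; the trade-off of a lazy unmount is that in-flight writes to the remote are not waited for.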
-
RE: XCP-ng 8.3 updates announcements and testing
Well, I was and still am on v0.19.0:

I am running an old AMD Ryzen 5 2400GE homelab with NFS
no XOSTOR or SDN.
Tested NTP (default and DHCP), ssh, wget, cont. repl. ...
all worked fine so far. -
RE: backup mail report says INTERRUPTED but it's not ?
I also have this problem in my XO-CE (ronivay-script) at home
I get the mail report after 4 days
A reboot resets the memory; the XO-CE VM has 3.1 GB and the control domain has 2 GB of memory.
Node v24.13.1
Running Continuous Replication and Delta Backups -
RE: VM Pool To Pool Migration over VPN
@acebmxer said in VM Pool To Pool Migration over VPN:
Maybe VPN overhead.
Have you checked the VPN capacity spec of your firewalls?
-
RE: Failed unmounting remotes at XO/XOA shutdown
No idea if anyone has "fixed" anything.
No, the XO at commit 5fcb6 hung for ~3 min at reboot today.
edit: I disabled the scheduled reboot yesterday. -
RE: Failed unmounting remotes at XO/XOA shutdown
@DwightHat
No
I did schedule a reboot of XO every morning, and it seems to have worked, because I "forgot" about it.
No idea if anyone has "fixed" anything.
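For reference, a daily reboot like that is a one-line crontab entry on the XO VM. This is a config fragment; the 05:00 slot is a placeholder, pick a time outside your backup windows:

```shell
# crontab -e (as root) on the XO VM: reboot every morning at 05:00.
# Placeholder schedule; choose a slot when no backup jobs are running.
0 5 * * * /sbin/shutdown -r now
```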
-
RE: XO5 breaks after defaulting to XO6 (from source)
I'm not running https and still on node v22
-
RE: XO5 breaks after defaulting to XO6 (from source)
@Gheppy
My links from 6 to 5 work fine.
Haven't tested all of them of course, but quite a few, and none of them have failed -
RE: 🛰️ XO 6: dedicated thread for all your feedback!
I run these kinds of backup jobs
- Full Backup x2
- Delta backup
- Continuous Replication
- XO Config and Metadata
In ContRep, Conf/meta and in one of the Full Backup jobs I get
Report when: Never

But they are enabled in XO5 with mail address and
Report when: Skipped and failure


