Good things take time.
Your team is doing god's work.
Remember to stay healthy, in both mind and body!
Latest posts made by ScarfAntennae
-
RE: Xen Orchestra Prometheus Backup Metrics?
Got it, thanks!
I'm certain many, if not all, enterprise users would be interested in such a feature.
-
RE: Xen Orchestra Prometheus Backup Metrics?
@olivierlambert That's where my question comes in:
Any plans on having these exposed for monitoring in Grafana?
-
RE: Xen Orchestra Prometheus Backup Metrics?
@olivierlambert Could you please confirm whether netdata provides such metrics? ^
-
RE: Xen Orchestra Prometheus Backup Metrics?
@olivierlambert Just to confirm: I want to look at the size of the daily backups created by Xen Orchestra, and the time/speed it took them to complete; basically this page, but on a graph:
I'm not sure that's something netdata can offer?
-
Xen Orchestra Prometheus Backup Metrics?
Any plans on having these backup metrics exposed for monitoring in Grafana?
I would very much like to see a graph that correlates backup duration and backup size, per VM, per job, etc.
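In case it helps to make the idea concrete, here is a minimal sketch of the kind of thing I'm after, assuming the backup results can be pulled as JSON from XO's REST API; the /rest/v0/backup/logs path, the authenticationToken cookie and the field names below are my assumptions, not the documented API. It publishes the size and duration of the last backup per VM/job as Prometheus gauges:

# Hypothetical exporter sketch: publish XO backup size/duration to Prometheus.
# Endpoint, auth cookie and JSON field names are assumptions, not XO's documented API.
import time
import requests
from prometheus_client import Gauge, start_http_server

XO_URL = "https://xo.example.local"   # assumed XO address
TOKEN = "xo-api-token"                # assumed authentication token

backup_size = Gauge("xo_backup_size_bytes", "Size of the last backup", ["vm", "job"])
backup_duration = Gauge("xo_backup_duration_seconds", "Duration of the last backup", ["vm", "job"])

def scrape():
    r = requests.get(f"{XO_URL}/rest/v0/backup/logs",           # assumed endpoint
                     cookies={"authenticationToken": TOKEN}, verify=False)
    for entry in r.json():                                       # assumed: list of backup log entries
        labels = (entry["vmName"], entry["jobName"])             # assumed field names
        backup_size.labels(*labels).set(entry["size"])
        backup_duration.labels(*labels).set(entry["end"] - entry["start"])  # assuming seconds

if __name__ == "__main__":
    start_http_server(9105)   # Prometheus scrapes this port
    while True:
        scrape()
        time.sleep(300)

Prometheus would scrape :9105 and Grafana could then graph xo_backup_size_bytes and xo_backup_duration_seconds per VM and per job.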
-
cleanVm: incorrect backup size in metadata
cleanVm: incorrect backup size in metadata path "/xo-vm-backups/db8b6e5f-6e3a-e919-f5c1-d097f46eb259/20240321T000510Z.json" actual 21861989376 expected 21861989888
I've been having this warning for ~3 months, on all my VMs.
At first the difference was just a few bytes (if those are bytes), but now it seems to be growing. I'm running XO from the sources, updated multiple times during those 3 months (most recently yesterday), and the same warning still occurred during last night's backup.
Any ideas how to get rid of this?
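For what it's worth, the delta in the warning above is exactly 512 bytes. To see whether the recorded sizes really drift from one run to the next, I've been tempted to just dump whatever size each metadata file records; a throwaway sketch, assuming the metadata JSON carries a top-level "size" field (that field name is my assumption):

# Throwaway sketch: print the size recorded in each XO backup metadata file
# so values can be compared across runs. The "size" key is an assumption.
import glob, json, os, sys

backup_root = sys.argv[1] if len(sys.argv) > 1 else "/xo-vm-backups"
for meta_path in sorted(glob.glob(os.path.join(backup_root, "*", "*.json"))):
    try:
        with open(meta_path) as f:
            meta = json.load(f)
    except (OSError, json.JSONDecodeError):
        continue
    print(meta_path, meta.get("size"))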
-
NBD Connections per Disk?
Hi,
I do not understand what this setting does.
To tailor the backup process to your specific needs and infrastructure, you can adjust the number of NBD connections per exported disk. This setting is accessible in the Advanced backup job section and allows for further customization to optimize performance according to your network conditions and backup requirements:
More connections = faster backup? What's the trade-off?
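My naive mental model of the trade-off, as a toy sketch only (this is not how XO's NBD export is implemented, and the file path and numbers are made up): more connections mean more chunks of the disk can be read in parallel, so throughput rises until the network, the NBD server or the storage becomes the bottleneck, after which extra connections only add CPU/memory overhead on both ends.

# Toy illustration of the idea behind multiple NBD connections per disk:
# read a disk image in fixed-size chunks with N parallel workers.
# Purely conceptual; not XO's actual export code.
from concurrent.futures import ThreadPoolExecutor
import os

DISK = "/tmp/disk.img"     # hypothetical exported disk image
CHUNK = 4 * 1024 * 1024    # 4 MiB per request
CONNECTIONS = 4            # the "NBD connections per disk" knob

def read_chunk(offset):
    # One file handle per call, standing in for one NBD connection.
    with open(DISK, "rb") as f:
        f.seek(offset)
        return len(f.read(CHUNK))

size = os.path.getsize(DISK)
with ThreadPoolExecutor(max_workers=CONNECTIONS) as pool:
    total = sum(pool.map(read_chunk, range(0, size, CHUNK)))
print(f"read {total} bytes using {CONNECTIONS} parallel readers")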
-
RE: Error: Connection refused (calling connect ) (XCP-ng toolstack hang on boot)
@olivierlambert I do.
I have also noticed something extremely weird.
I have 3 HDDs attached to one host.
2x2TB raid 1 (software raid done on the XCP-ng host)
1x 4TB
lsblk shows:
... SNIP ...
sda       8:0    0  1.8T  0 disk
├─sda2    8:2    0  1.8T  0 part
└─sda1    8:1    0    2G  0 part
...
sdb       8:16   0  1.8T  0 disk
├─sdb2    8:18   0  1.8T  0 part
└─sdb1    8:17   0    2G  0 part
  └─md127 9:127  0    2G  0 raid1
...
sde       8:64   0  3.7T  0 disk
├─sde2    8:66   0  3.7T  0 part
└─sde1    8:65   0    2G  0 part
  └─md127 9:127  0    2G  0 raid1
All 3 disks are passed through to a TrueNAS VM on the host, and all the data is properly stored, but I have no idea why mdadm shows the 4TB disk as part of the RAID instead of the other 2TB disk:
/dev/md127:
           Version : 1.2
     Creation Time : Sun Aug 27 14:32:08 2023
        Raid Level : raid1
        Array Size : 2094080 (2045.00 MiB 2144.34 MB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
       Update Time : Sun Oct 8 12:07:28 2023
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : resync
              Name : november:swap0
              UUID : ae045fa0:74b00896:3134ede5:c837bec3
            Events : 27

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       1       8       17        1      active sync   /dev/sdb1
Anyway, this doesn't seem to be the issue: the other host has no HDDs attached, only M.2 VM SRs, and it also took exactly 10 minutes for the toolstack to come up.
Now XO can't reach any of the hosts, even though all the VMs are up.
-
RE: Error: Connection refused (calling connect ) (XCP-ng toolstack hang on boot)
@olivierlambert Ah, I understand the naming convention now.
So it's XO, but XO is irrelevant to this issue. The problem is the 10 minutes it took the toolstack to come up, compared to the 1-2 minutes it has always taken.
I updated XCP-ng now, rebooted, and both hosts took 10 minutes for the stack to come up again. Any ideas what could be causing this delay and how we could troubleshoot it?