@MajorP93 throw in multiple garbage collections during snap/desnap of backups on a XOSTOR SR, and these SR scans really get in the way
Posts
-
RE: log_fs_usage / /var/log directory on pool master filling up constantly
-
RE: When the XCPNG host restart, it restarts running directly, instead of being in maintenance mode
@olivierlambert I even witnessed something today, on the same topic:
- one pool of 2 hosts
- multiple VMs not running, but some have the AUTO POWER ON checked
- reboot the slave host, and as soon as it gets back online / green, the VMs with auto power on start...
they had been shut down on purpose... surprise

by the way, is this the expected behavior if AUTO POWER ON is also checked in the pool's Advanced tab? I assumed it was only there to enable auto power on for newly created VMs
-
RE: log_fs_usage / /var/log directory on pool master filling up constantly
@MajorP93 I guess so; if someone from the Vates team gives us the answer as to why it runs so frequently, perhaps it will enlighten us
-
RE: log_fs_usage / /var/log directory on pool master filling up constantly
@MajorP93 said in log_fs_usage / /var/log directory on pool master filling up constantly:
will keep monitoring this but it seems to improve things quite substantially!
Since it appears that multiple users are affected by this it may be a good idea to change the default value within XCP-ng and/or add this to official documentation.
nice, but these SR scans have a purpose (when you create/extend an SR, to discover VDIs and ISOs, ...)
on the legitimacy of reducing the period, and the impact on logs, it should be better documented, yeah:
xe host-param-set other-config:auto-scan-interval=120 uuid=<Host UUID>
I never saw this command line in the documentation; perhaps it should be there, with full warnings?
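for what it's worth, a rough sketch to check / apply it pool-wide from dom0 (the auto-scan-interval key itself is only what is shown above, I did not find it in the official docs either):
# read back the value on one host
xe host-param-get uuid=<Host UUID> param-name=other-config param-key=auto-scan-interval
# apply it to every host of the pool
for h in $(xe host-list --minimal | tr ',' ' '); do xe host-param-set uuid=$h other-config:auto-scan-interval=120; done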
-
RE: SR.Scan performance withing XOSTOR
@denis.grilli My experience with XOSTOR was very similar (3 hosts, 18 TB) on XCP-ng 8.2 at the time
but fewer VMs... ~20
we had catastrophic failures, with tapdisk locking the VDIs, and VMs hard to start/stop (10+ minutes to start a VM, while the same VM on local RAID5 storage, or even NFS storage, started in 5 seconds max)
more problems with large VDIs (1.5 TB) on XOSTOR, and backups were painful to obtain
after many back-and-forths with support, we decided to get our VMs off XOSTOR for the time being, back to local RAID5 with replicas between hosts. No VM mobility, but redundancy anyway.
I think that the way XOSTOR is implemented is not really the root of the problem.
the combo DRBD + SMAPIv1 is OK for a small number of small VMs; at scale it is another story. We still have to upgrade to 8.3 and give it another try.
the more we moved VDIs off XOSTOR, the more 'normal' and expected the behavior became.
-
RE: Site outage. pfSense VM offline after pool master reboot
@manilx there are some cheap Netgate appliances (Netgate 1100 or 2100) to take your pfSense+ out of the virtual infrastructure.
this is the way.
-
RE: Feature Request / Community Input – VM Boot Order & Delayed Startup
@olivierlambert nice !
a new discovery... no way to manage it from the UI in XO 5 / XO 6 AFAIK?
-
RE: Feature Request / Community Input – VM Boot Order & Delayed Startup
@olivierlambert hmmm could we consider an appliance as a "group of VMs" ?
if the appliance is destroyed, what about the underlying VMs? are they still there?
-
RE: Feature Request / Community Input – VM Boot Order & Delayed Startup
@Cygace in advanced settings of VMs
there is a start delay setting (in seconds)
I know at scale it is not easy with 100+ VMs but...
try to put 40s for all VMs, except 20s for the domain controller, and 0 for XOA? That will not change the start order the way you think, but in case of a full restart it should do the job?
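if you would rather script it than click through the UI, this is roughly what it maps to on the XAPI side (just a sketch, assuming the standard per-VM start-delay field is what the UI writes; UUIDs are placeholders):
# 40s for a regular VM, 20s for the domain controller, 0 for XOA
xe vm-param-set uuid=<VM UUID> start-delay=40
xe vm-param-set uuid=<DC VM UUID> start-delay=20
xe vm-param-set uuid=<XOA VM UUID> start-delay=0
# read back the current value
xe vm-param-get uuid=<VM UUID> param-name=start-delay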
-
RE: Unable to update XOA
@fred974 you should have an NTP server configured on both the XOA and the XCP-ng host
perhaps the daemons are not started?
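a few quick checks (a sketch; XCP-ng 8.x ships chrony in dom0, XOA is Debian-based, adjust to your setup):
# on the XCP-ng host
chronyc tracking        # is the clock actually synchronised?
systemctl status chronyd
# on the XOA VM
timedatectl             # shows the NTP sync state and the current date/time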
-
RE: Unable to update XOA
or a good reboot of the XOA if there are no backups running at this time

-
RE: Unable to update XOA
@fred974 mmmm I tested downloading this template on my 5.113.1 XOA and it worked as intended.
could you verify the time/date on the XOA, and on the XCP-ng host where you want to deploy it?
does your XOA have good internet access?
-
RE: Unable to update XOA
do you see a task progressing / failing when installing the template from the Hub?

-
RE: Mirror of full backups with low retention - copies all vms and then deletes them
@Forza I can confirm the behavior: all restore points are copied, then pruned to the target retention...
for now it seems to be not a bug but a feature, to avoid having to think about what happens in the source job in smart mode (VMs added to / removed from the source job)
"we copy all and see afterward"
but @bastien-nollet and @florent can confirm that?
PS: I saw this on a mirror incremental backup, as I do not do full backups.
-
RE: 🛰️ XO 6: dedicated thread for all your feedback!
is there anywhere we can check the backlog / work in progress / to-do list for XO 6?
-
RE: Lock file is already being held whereas no backup are running.
@henri9813 yep, merging is totally transparent at this time. It can be seen afterward in the raw backup logs
-
RE: Lock file is already being held whereas no backup are running.
@flakpyro there is an option to merge synchronously while exporting instead of after the export
at the very bottom of the backup job config. It should avoid most of your current merge problems
-
RE: Disable TX checksumming with API
@SethNY you could even enhance this VM_LIST =
with TAGs on the VMs, so that you can manage the selection directly in XOA
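something along these lines for example (a rough sketch, assuming xo-cli is registered against your XOA; the tag name no-tx-checksum is only an example, jq does the filtering):
# build VM_LIST from a tag instead of a hardcoded list of UUIDs
VM_LIST=$(xo-cli --list-objects type=VM | jq -r '.[] | select(.tags | index("no-tx-checksum")) | .uuid')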
-
RE: delta backups with offline snapshot: VMs do not start after snapshot, they start after transfer is done.
@k11maris said in delta backups with offline snapshot: VMs do not start after snapshot, they start after transfer is done.:
I think there is a way with changed block tracking to use delta backups without snapshots, right? I'll have to look into that.
you need to check those

and yes, a VM that is BACKUPed and REPLICAted has 2 snaps.
I didn't see a snapshot for a Disaster Recovery job though (it seems a snapshot is taken, but deleted afterward as there is no need to delta against it on the next run); you could use these, but full only, with one temporary snapshot during the DR export.
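by the way, once a CBT-enabled job has run, you can verify from dom0 that CBT is really active on a disk (a sketch; cbt-enabled is a standard VDI field):
xe vdi-param-get uuid=<VDI UUID> param-name=cbt-enabled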
