just to #brag

Deploy of the worker is okay (one big Rocky Linux with default settings: 6 vCPUs, 6 GB RAM, 100 GB VDI)
first backup is fine, with decent speed! (to an XCP-ng-hosted MinIO S3)
will keep testing

Hey all,
We are proud of our new setup: a full XCP-ng hosting solution we racked in a datacenter today.
This is the production node; tomorrow I'll post the replica node!
XCP-ng 8.3, HPE hardware obviously, and we are preparing full automation of client provisioning by API (from switch VLANs to firewall public IPs, and automatic VM deployment).
This needs a "Vates Inside" sticker
#vent
@sluflyer06 and I wish HPE would be added too 
is there anywhere we can check the backlog / work in progress / to-do list for XO6?
I stuck with version: 1 in my working configuration

Had to rename my "Ethernet 2" NIC to Ethernet2, without the space.
You have to put the exact NIC name from the template for this to work.
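For reference, a minimal sketch of the kind of version: 1 network config this refers to, assuming cloud-init's network-config format and a guest adapter named Ethernet2 (the exact name from the template); the DHCP subnet is just illustrative:

version: 1
config:
  - type: physical
    # name must match the template's NIC name exactly (no space)
    name: Ethernet2
    subnets:
      - type: dhcp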
@MajorP93 throw in multiple garbage collection runs during the snapshot/snapshot-removal phases of backups on an XOSTOR SR, and these SR scans really get in the way
Hi,
Smart backups are wonderful for managing the selection of VMs to be backed up.
When we browse the RESTORE section, it would be cool to get the tags back visually, and the possibility to filter on them.
I'd like an "all VMs restorable with this particular tag" type of filter, hope I'm clear.

Perhaps something to add to XO6 ?
Could we have a way to know which backups are part of LTR?
In Veeam B&R, when doing LTR/GFS, there is a letter like W (weekly), M (monthly) or Y (yearly) to signal it in the UI.

That's purely cosmetic indeed, but practical.
@Forza I didn't try, as my default Graylog input was UDP and worked with the hosts...
But guys, that was it. In TCP mode it's working. I quickly set up a TCP input, and voilà.

@MK.ultra I don't think so,
it's working without it for me.
@pierrebrunet thanks for the quick insight/fix; I went back to stable to deploy templates, and it's working
@florent yes they are, I posted a screenshot earlier in the thread
seeing more and more of this INTERRUPTED issue in mail reports... anyone else seeing this too?
still having the initial issue, even on the latest XOA or XO CE
Hi,
latest version of XOA
I'm in the /v5 web UI
I can't install templates from the Hub anymore; I get this error message:

is this a new bug?
some timeout or race condition between the end of the job and the mail generation?
perhaps adding a 10-second delay before sending the mail?
we have a strange behavior in the XOA Backup mail reports.
The backup is done, we see the delta point on the remote, in XOA it's all green with no sign of INTERRUPTED, but the mail report says otherwise:

the "INTERRUPTION" seems to happen on the remote

the point on the remote:
in the XOA logs:


other backups are okay, and this same one will be okay again tonight...
what is happening?
false alarm? @florent @bastien-nollet
{
  "data": {
    "mode": "delta",
    "reportWhen": "always"
  },
  "id": "1766680469800",
  "jobId": "87966399-d428-431d-a067-bb99a8fdd67a",
  "jobName": "BCK_C_xxxx",
  "message": "backup",
  "proxyId": "5359db6e-841b-4a6d-b5e6-a5d19f43b6c0",
  "scheduleId": "56872f53-4c20-47fc-8542-2cd9aed2fdde",
  "start": 1766680469800,
  "status": "success",
  "infos": [
    {
      "data": {
        "vms": [
          "b1eef06b-52c1-e02a-4f59-1692194e2376"
        ]
      },
      "message": "vms"
    }
  ],
  "tasks": [
    {
      "data": {
        "type": "VM",
        "id": "b1eef06b-52c1-e02a-4f59-1692194e2376",
        "name_label": "xxxx"
      },
      "id": "1766680472044",
      "message": "backup VM",
      "start": 1766680472044,
      "status": "success",
      "tasks": [
        {
          "id": "1766680472050",
          "message": "clean-vm",
          "start": 1766680472050,
          "status": "success",
          "end": 1766680473396,
          "result": {
            "merge": false
          }
        },
        {
          "id": "1766680474042",
          "message": "snapshot",
          "start": 1766680474042,
          "status": "success",
          "end": 1766680504544,
          "result": "c4b42a79-532e-c376-833b-22707ddad571"
        },
        {
          "data": {
            "id": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "isFull": false,
            "type": "remote"
          },
          "id": "1766680504544:0",
          "message": "export",
          "start": 1766680504544,
          "status": "success",
          "tasks": [
            {
              "id": "1766680511990",
              "message": "transfer",
              "start": 1766680511990,
              "status": "success",
              "end": 1766680515706,
              "result": {
                "size": 423624704
              }
            },
            {
              "id": "1766680521053",
              "message": "clean-vm",
              "start": 1766680521053,
              "status": "success",
              "tasks": [
                {
                  "id": "1766680521895",
                  "message": "merge",
                  "start": 1766680521895,
                  "status": "success",
                  "end": 1766680530887
                }
              ],
              "end": 1766680531173,
              "result": {
                "merge": true
              }
            }
          ],
          "end": 1766680531192
        }
      ],
      "infos": [
        {
          "message": "Transfer data using NBD"
        },
        {
          "message": "will delete snapshot data"
        },
        {
          "data": {
            "vdiRef": "OpaqueRef:d8aef4c9-5514-6623-1cda-f5e879c4990f"
          },
          "message": "Snapshot data has been deleted"
        }
      ],
      "end": 1766680531211
    }
  ],
  "end": 1766680531267
}
not better with production upgraded to 6.0.1 (XOA and XO proxies)
we will open a support ticket
PS: if we log out of XOA and log back in as another user, we have a better chance of file restore working... not 100%, very unstable
is there a link ?!
@olivierlambert I had time to test connecting a remote from production on the latest XO CE in the replica datacenter,
and file restore is working flawlessly... ultra fast.
Either we have a problem on production, or the last update of XO6 fixed the bug?
We are still on 5.113.2 on production.
@fluxtor is your XOA accessible over HTTP on port 80?