just to #brag

Deploy of the worker is OK (one big Rocky Linux VM with default settings: 6 vCPU, 6 GB RAM, 100 GB VDI).
First backup is fine, with decent speed! (to an XCP-ng hosted MinIO S3)
Will keep testing.

Hey all,
We are proud of our new setup, a full XCP-ng hosting solution we racked in a datacenter today.
This is the production node; tomorrow I'll post the replica node!
XCP-ng 8.3, HPE hardware obviously, and we are preparing full automation of client provisioning via the API (from switch VLANs to firewall public IPs, and automatic VM deployment).
This needs a "Vates Inside" sticker.
#vent
@sluflyer06 and I wish HPE would be added too.
Is there anywhere we can check the backlog / work in progress / to-do list for XO 6?
I stuck to "version: 1" in my working configuration.

Had to rename my "Ethernet 2" NIC to "Ethernet2", without the space.
You have to put the exact template NIC name for this to work.
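For reference, roughly what my working network config looks like (a minimal sketch using the version 1 schema; the static addressing values are placeholders, the important part is the NIC name):

version: 1
config:
  - type: physical
    name: Ethernet2   # must match the template NIC name exactly (no space)
    subnets:
      - type: static
        address: 192.168.1.10/24   # placeholder addressing
        gateway: 192.168.1.1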
@MajorP93 throw in multiple garbage collections during the snapshot/snapshot-removal phases of backups on an XOSTOR SR, and these SR scans really get in the way.
Hi,
Smart backups are wonderful for managing a smart selection of VMs to be backed up.
When we browse the RESTORE section, it would be cool to get the tags back visually, with the possibility to filter on them.
I'd like an "all VMs restorable with this particular tag" type of filter, hope I'm clear.

Perhaps something to add to XO 6?
Could we have a way to know which backup is part of LTR?
In Veeam B&R, when doing LTR/GFS, there is a letter like W (weekly), M (monthly), Y (yearly) to signal this in the UI.

That's pure cosmetics indeed, but practical.
@Bastien-Nollet said in backup mail report says INTERRUPTED but it's not ?:
file packages/xo-server-backup-reports/dist/index.js by a
Modification done, will give feedback.
@Forza I didn't try, as my default Graylog input was UDP and worked with the hosts...
But guys, that was it. In TCP mode it's working. I rapidly set up a TCP input, and voilà.

@Bastien-Nollet another good day
I think I'll swap to giving feedback only if the problem comes back, rather than repeating that the problem is gone.

Good luck on your side finding something sexier than a delay ^^', but it seems to be a race condition.
Hi,
We stumbled upon what I would call a missing check... or a bug?
You can create a VDI for a VM on a shared ISO SR...
We did it unexpectedly.
.rpc("disk.create", {
name,
size,
sr,
vm: vm.id,
bootable: false,
mode: "RW",
})
disk.create name=<string> size=<integer|string> sr=<string> [vm=<string>] [bootable=<boolean>] [mode=<string>] [position=<string>]
create a new disk on a SR
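For instance, a call like this goes through without any content-type check on the SR (a sketch; the UUIDs are placeholders):

xo-cli disk.create name=test size=100GB sr=<iso-sr-uuid> vm=<vm-uuid>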
This is permitted; should it not be refused?
It was a 50 GB CIFS ISO SR, and we could create a 100 GB VDI, appearing as a 100 GB .img file in the SR.


Totally broken UI:

In the Disks tab of the VM:

And it is really an ISO SR!

Should this not be prevented?
@Thiago_FS_Dantas hi, did you check NESTED VIRTUALISATION in the Advanced tab of your VM?

@wilsonqanda tried your workaround on a halted VM, and it worked!
If I snapshot -> disk still invisible
delete the snapshot -> disk still invisible
but
if I snapshot -> disk still invisible
revert to the snapshot (with the "take snapshot" option) -> disk APPEARS again
delete the two snapshots -> disks still there
Edit: even without the "take snapshot" before revert, it works; tried on another VM.
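For anyone wanting to script the workaround, a rough xe equivalent (an untested sketch, done on a halted VM; <vm-uuid> is a placeholder):

# take a throwaway snapshot, revert to it, then delete it
SNAP=$(xe vm-snapshot vm=<vm-uuid> new-name-label=tmp-unhide)
xe snapshot-revert snapshot-uuid=$SNAP
xe snapshot-uninstall uuid=$SNAP force=true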
@wilsonqanda doing a snapshot and deleting it does not resolve the issue for me.
I'll try to snapshot and revert to the snapshot, and I'll tell you if it's OK this way for me.
@Bastien-Nollet nice.
I can confirm from the dates in the backup logs that this problem appears on the two other jobs since the exact start date of the DR job.
When not doing NBD+CBT, the backup & replication jobs keep the "full" snapshot on the VM.
DR does not do that; it deletes the snapshot.
I guess this is a silent failure in the way DR manages its snapshot deletion, which could delete the CBT bitmap? And then, baaam, fall back to full on the other jobs relying on said bitmap...?
This is a run of the DR job:

There is no notion of NBD/CBT in the DR job's advanced parameters; it treats that differently.

@wilsonqanda perhaps it is the same problem as here?
https://xcp-ng.org/forum/topic/11715/vdi-not-showing-in-xo-5-from-source./9
Invisible VDIs on some SRs:
they are seen as snapshots and not presented in the XO 5 web UI, but you can see them in XO 6 or via the API.
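A quick way to spot them from the CLI (a sketch; replace <sr-uuid> with the SR to inspect):

# VDIs flagged as snapshots on the SR: these are the ones XO 5 hides
xe vdi-list sr-uuid=<sr-uuid> is-a-snapshot=true params=uuid,name-label,is-a-snapshot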
Hi,
Latest XOA 6.0.3.
I found a case where I can provoke the dreaded "backup fell back to a full" behavior.
I do not understand why it happens, but it happens anyway.
We have a normal DELTA backup job with 6 VMs in it, say VM1 to VM6.
This job has the NBD+CBT advanced options checked, and an NBD network exists to do the backup.
Normal snapshot mode, with "merge synchronously" checked in the advanced settings.
Another job: CONTINUOUS REPLICATION, same 6 VMs, same advanced options checked.
And a last job: DISASTER RECOVERY of a subset of two VMs, VM4 and VM6.
ZSTD compression, normal snapshot mode, merge backup synchronously, 1-point retention (a full is done each time, as intended).
All 3 jobs run in sequence and do not overlap: first the backup, then the replication, then the DR.
All 3 jobs execute successfully, but in the BACKUP and REPLICATION jobs,
only VM4 and VM6 show the dreaded "backup fell back to a full", with "delta" backup type at the end.
The fact that the DR job runs on these two VMs provokes this behavior; this is not wanted.
What we see in the REPLICATION job log:

And in the BACKUP job log:

I checked the VM's VDIs after the DR job finished, and CBT is still enabled.

Why does a DR job reset something that makes the other two jobs fall back to full on these VDIs?
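For reference, this is how the CBT flag can be checked from the CLI (a sketch; <vdi-uuid> is a placeholder):

xe vdi-param-get uuid=<vdi-uuid> param-name=cbt-enabled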
@Bastien-Nollet new day without a false INTERRUPTED.

The scrutiny of the backup email reports made me find a new bug in backups (not in reports this time).
I'll create a new topic about the dreaded BACKUP FELL BACK TO A FULL; I can provoke it on purpose!
@olivierlambert on VMware, this was handled by the DRS functionality in the highest licence tier.
EasyVirt is a paid add-on to an XOA infrastructure.
Would some equivalent be developed internally one day?