just to #brag

Deployment of the worker is okay (one big Rocky Linux VM with default settings: 6 vCPU, 6 GB RAM, 100 GB VDI).
First backup is fine, with decent speed! (to an XCP-ng-hosted MinIO S3)
will keep testing

Hey all,
We are proud of our new setup, a full XCP-ng hosting solution we racked in a datacenter today.
This is the production node; tomorrow I'll post the replica node!
XCP-ng 8.3, HPE hardware obviously, and we are preparing full automation of client provisioning via the API (from switch VLANs to firewall public IPs, to automatic VM deployment).
This needs a sticker "Vates Inside"
#vent
@sluflyer06 and I wish HPE would be added too 
Is there anywhere we can check the backlog / work in progress / to-do list for XO 6?
I stuck with version: 1 in my working configuration.

I had to rename my "Ethernet 2" NIC to "Ethernet2", without the space.
You have to use the exact NIC name from the template for this to work.
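For reference, here is a minimal sketch of the kind of version: 1 network config that ended up working, assuming the standard cloud-init v1 schema that cloudbase-init also understands; the addresses are placeholders, the only important part is the exact NIC name:

version: 1
config:
  - type: physical
    name: Ethernet2   # must match the template's NIC name exactly, no space
    subnets:
      - type: static
        address: 192.168.1.10/24
        gateway: 192.168.1.1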
@MajorP93 throw in multiple garbage collections during the snapshot/snapshot-removal phases of backups on a XOSTOR SR, and these SR scans really get in the way.
Hi,
Smart backups are wonderful for managing the smart selection of VMs to be backed up.
When browsing the RESTORE section, it would be cool to get the tags back visually, with the possibility to filter on them.
I'd like an "all VMs restorable with this particular tag" type of filter; I hope I'm clear.

Perhaps something to add to XO 6?
Could we have a way to know which backups are part of LTR?
In Veeam B&R, when doing LTR/GFS, a letter like W (weekly), M (monthly) or Y (yearly) signals it in the UI.

That's purely cosmetic indeed, but practical.
@Bastien-Nollet said in backup mail report says INTERRUPTED but it's not ?:
"file packages/xo-server-backup-reports/dist/index.js by a"
Modification done, will give feedback.
@Forza I didn't try, as my default Graylog input was UDP and worked for the hosts...
But guys, that was it. In TCP mode it's working. I quickly set up a TCP input, and voilà.
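For the sender side, the mirror-image detail: with a plain rsyslog forwarder (generic rsyslog syntax, not an XO setting; host and port are placeholders), a single @ forwards over UDP and a double @@ over TCP:

*.* @graylog.example.com:1514    # UDP forwarding
*.* @@graylog.example.com:1514   # TCP forwarding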

@fcgo when adding an XO Proxy to a job, the data flows from the XO Proxy to the remote.
I think XOA is not involved: I checked the network bandwidth, and the XOA was idle.
The XO Proxy reads/writes from the source remote to the destination remote.
@fcgo hello there,
From my experience: XO proxies are just stripped-down XOAs, with no limitation on backup functionality.
I agree with you, it lacks in-depth information on how the backups are done...
The only place to find compression information is in Disaster Recovery jobs.
We upgraded the CPU & RAM of our XOA, but offloaded all backup tasks to proxies.
If you have 1000+ VMs, I think you already have at least 16 GB of RAM for dom0 (or you have 500 hosts with 2 VMs each and stick with the default...).
Assigning jobs to proxies is kind of manual (for example, Veeam can manage a pool of proxies and pick the least busy one, or the chosen one(s) in the pool).
If you implement proxies, you will have to assign them to a job AND subsequently assign them to remotes too! You need to plan carefully what you want done (a remote locked to a proxy is not seen by other proxies... you sometimes need to create the SAME remote twice to attach it to two proxies... and be sure not to run those two in parallel...).
Planning the backups of 1000 VMs is something.
@Bastien-Nollet 100% of our backup jobs are done by proxy.
We offload that from the main XOA, which is purely for administration/management.
@Bastien-Nollet another good day
I think I'll follow up with feedback if the problem comes back; for now, it's not here anymore.

It's on your side to find something sexier than a delay ^^' but it seems to be a race condition.
Hi,
We stumbled upon what I would call a missing check... or a bug?
You can create a VDI for a VM on a shared ISO SR...
We did it by accident.
.rpc("disk.create", {
name,
size,
sr,
vm: vm.id,
bootable: false,
mode: "RW",
})
disk.create name=<string> size=<integer|string> sr=<string> [vm=<string>] [bootable=<boolean>] [mode=<string>] [position=<string>]
create a new disk on a SR
This is permitted; shouldn't it be refused?
It was a 50 GB CIFS ISO SR, and we could create a 100 GB VDI, appearing as a 100 GB .img file in the SR.
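A client-side guard is easy to sketch. This is a hypothetical example, not XO's actual code: client is whatever wrapper exposes the .rpc() call above, and fetchSr is an assumed helper returning the SR object with its XAPI content_type field ("iso" for ISO libraries):

// Hypothetical guard, not XO's code; fetchSr is an assumed helper.
async function createDiskSafely(client, fetchSr, { name, size, sr, vm }) {
  const srRecord = await fetchSr(sr)
  if (srRecord.content_type === "iso") {
    // refuse to put a data VDI on an ISO library
    throw new Error(`SR ${sr} is an ISO SR, refusing to create a VDI on it`)
  }
  return client.rpc("disk.create", { name, size, sr, vm, bootable: false, mode: "RW" })
}

Of course, the same check inside the disk.create handler server-side would close the hole for every client at once.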


totally broken UI

In the Disks tab of the VM:

And it really is an ISO SR!

Shouldn't this be prevented?
@Thiago_FS_Dantas hi, did you check NESTED VIRTUALISATION in the Advanced tab of your VM?

@wilsonqanda I tried your workaround on a halted VM, and it worked!
If I snapshot -> disk still invisible
delete the snapshot -> disk still invisible
but
if I snapshot -> disk still invisible
revert to the snapshot (with the "take snapshot" option) -> disk APPEARS again
delete the two snapshots -> disks still there
Edit: it even works without the "take snapshot" before the revert; tried it on another VM.
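For anyone who wants to script this workaround, here is a sketch in the same JSON-RPC style as the disk.create excerpt above. The method and parameter names (vm.snapshot, vm.revert, vm.delete) are written from memory, so verify them against your xo-cli before relying on this:

// Sketch of the snapshot -> revert -> cleanup workaround on a halted VM.
// Method/parameter names are assumptions to check against the XO API.
async function revealInvisibleDisks(client, vmId) {
  // take a snapshot; the revert below is what makes the VDIs visible again
  const snapshotId = await client.rpc("vm.snapshot", {
    id: vmId,
    name: "workaround-invisible-vdi",
  })
  await client.rpc("vm.revert", { snapshot: snapshotId })
  // snapshots are VM objects in XAPI, so the cleanup is a VM deletion
  await client.rpc("vm.delete", { id: snapshotId })
}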
@wilsonqanda doing a snapshot and deleting it does not resolve the issue for me.
I'll try to snapshot and revert to the snapshot, and tell you if it's OK this way for me.
@Bastien-Nollet nice.
I can confirm from the dates in the backup logs that the problem on the two other jobs appeared on the exact date the DR job started.
When not doing NBD+CBT, the backup & replication jobs keep the "full" snapshot on the VM.
DR does not do that; it deletes the snapshot.
My guess: a silent failure in the way DR manages its snapshot deletion could delete the CBT bitmap? And then, baaam, the other jobs relying on said bitmap fall back to full...?
This is a run of the DR job:

There is no notion of NBD/CBT in the DR job's advanced parameters; it handles that differently.

@wilsonqanda perhaps it is the same problem as here?
https://xcp-ng.org/forum/topic/11715/vdi-not-showing-in-xo-5-from-source./9
Invisible VDIs on some SRs:
they are seen as snapshots and not shown in the XO 5 web UI, but you can see them in XO 6 or via the API.