@MajorP93 this setting exists (but not in the UI).
If you use an XOA, you can create a configuration file named /etc/xo-server/config.diskConcurrency.toml containing:
[backups]
diskPerVmConcurrency = 2
@MajorP93
Interesting. Note that this is orthogonal to NBD.
There is probably more work to do to improve performance; I will retest with a VM that has a lot of disks.
Performance really depends on the underlying storage.
Compression and encryption can't be done in "legacy mode", since we wouldn't be able to merge blocks in place in that case.
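To make the "merge in place" constraint concrete, here is a minimal sketch (hypothetical block layout and function names, not XO's actual code) of what legacy-mode merging relies on:

```ts
// Minimal sketch (hypothetical layout): merging a delta into one monolithic
// disk file by overwriting blocks in place at fixed offsets.
import { open } from 'node:fs/promises'

const BLOCK_SIZE = 2 * 1024 * 1024 // assume fixed-size blocks, e.g. 2 MiB

// each delta entry says "block N now contains this data"
interface DeltaBlock {
  index: number
  data: Buffer // exactly BLOCK_SIZE bytes
}

async function mergeInPlace(parentPath: string, delta: DeltaBlock[]): Promise<void> {
  const parent = await open(parentPath, 'r+')
  try {
    for (const { index, data } of delta) {
      // the whole point of legacy mode: block N always lives at a fixed byte
      // offset, so it can be overwritten without rewriting the rest of the file
      await parent.write(data, 0, data.length, index * BLOCK_SIZE)
    }
  } finally {
    await parent.close()
  }
}

// If the parent file were compressed or encrypted as a single stream, block N
// would no longer sit at index * BLOCK_SIZE (and its stored size would vary),
// so this in-place overwrite would not be possible, hence the limitation.
```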
@acebmxer is 10.120.20.119 your XO proxy?
I misread your question: you want to exclude the XOA from the process, not the proxy. Today, the disk conversion can only be done directly by the XOA.
The data flow is always {vSphere/ESXi} => XOA => XCP-ng, optionally with a proxy between the XOA and XCP-ng.
The only way to handle this without going through delilah would be to deploy a second XOA/XO CE directly in the tilton pool. You can have multiple XOAs connected to the same pool.
interesting
Can you try to do a performance test while using block storage?

This will store backups as multiple small (typically 1 MB) files that are easy to deduplicate, and the merge process will be moving/deleting files instead of modifying one big monolithic file per disk. It could sidestep the hydration process.
This is the default mode on S3/Azure, and given its advantages it will probably become the default everywhere in the future.
(Note for later: don't use XO encryption at rest if you need dedup, since the same block encrypted twice will give different results.)
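To illustrate the idea (made-up directory layout and helper name, not the real on-remote format), a block-store merge is mostly renames and deletions:

```ts
// Simplified sketch: each ~1 MB block is its own small file, so merging the
// oldest delta into the next backup means moving or deleting files, never
// rewriting data inside a big monolithic disk image.
import { access, readdir, rename, unlink } from 'node:fs/promises'
import { join } from 'node:path'

async function mergeBlockStore(oldestDeltaDir: string, targetDir: string): Promise<void> {
  for (const blockFile of await readdir(oldestDeltaDir)) {
    const src = join(oldestDeltaDir, blockFile)
    const dst = join(targetDir, blockFile)
    try {
      await access(dst)
      // the target already holds a newer version of this block: drop the old file
      await unlink(src)
    } catch {
      // no newer version exists: promote the old block by simply moving the file
      await rename(src, dst)
    }
  }
}
```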
@olivierlambert the tar stream is passed as is
Is there any warning in the relevant backup job log?
@acebmxer is the proxy used as an HTTP proxy for one of the involved pools?
Can you show a screenshot of your "server" screen?
@Austin.Payne a health check will download the full backup from your remote to the storage repository.
What is the size of the VM?
@planedrop you can't change the remote encryption if the remote is not empty
In the future we intend to support rolling encryption (that is, encrypting new blocks/files with the new key) to permit easier upgrades and key rotation.
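As an illustration only (a hypothetical design sketch, not the actual implementation), the idea is that every stored block remembers which key version encrypted it, so a new key only affects new writes:

```ts
import { createCipheriv, randomBytes } from 'node:crypto'

// Hypothetical keyring: keyVersion -> 32-byte AES key. Old blocks keep their
// original key version, so nothing has to be re-encrypted when a key is added.
const keyring = new Map<number, Buffer>()
let currentKeyVersion = 1
keyring.set(currentKeyVersion, randomBytes(32))

interface StoredBlock {
  keyVersion: number // which key decrypts this block
  iv: Buffer
  ciphertext: Buffer // ciphertext followed by the GCM auth tag
}

function encryptBlock(plaintext: Buffer): StoredBlock {
  // random IV: this is also why the same block encrypted twice never produces
  // identical output, which is what defeats deduplication
  const iv = randomBytes(12)
  const cipher = createCipheriv('aes-256-gcm', keyring.get(currentKeyVersion)!, iv)
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final(), cipher.getAuthTag()])
  return { keyVersion: currentKeyVersion, iv, ciphertext }
}

function rotateKey(): void {
  // only new blocks/files use the new key; existing data stays readable as-is
  currentKeyVersion += 1
  keyring.set(currentKeyVersion, randomBytes(32))
}
```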
@MBNext is it possible to post the full JSON? (You can download it at the top of the window that shows the backup progress.)
@MajorP93 the sizes are different between the disks; did you modify them since the snapshots?
Would it be possible to take a new snapshot with the same disk structure?
@Pilow said in Veeam backup with XCP NG:
@olivierlambert I get you.
Same in Veeam with synthetic fulls: it's only between the server/proxy and the repository.
But we get a progressing percentage while the job sits doing the synthetic full. It would be cool to "see" the progress of the merging.
Why? Bad habits from previous technologies, I guess.
Better observability is something we are actively working on; it is tightly coupled with being able to stop a task, and I hope we will be able to show the progress soon.
@Pilow said in Veeam backup with XCP NG:
@flakpyro thank you
@florent, it would be nice to add this to XCP backups: merge progress.
You can check the "merge synchronously" toggle in the advanced section of the backup job, to at least know when the merge is done.
(we intend to make this the default choice in the near future)
nice
I think they compute speed as "disk size / time", whereas we compute it as "used data / time".
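A quick worked example with made-up numbers shows how far apart the two conventions can be for the same job:

```ts
// Made-up numbers: one backup job, two reporting conventions
const diskSizeGiB = 500  // provisioned virtual disk size
const usedDataGiB = 100  // data actually read and transferred
const durationSec = 30 * 60

const speedFromDiskSize = (diskSizeGiB * 1024) / durationSec // "disk size / time"
const speedFromUsedData = (usedDataGiB * 1024) / durationSec // "used data / time"
console.log(`${speedFromDiskSize.toFixed(0)} MiB/s vs ${speedFromUsedData.toFixed(0)} MiB/s`)
// ~284 MiB/s vs ~57 MiB/s for the exact same backup
```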
It's great to have more choice in this space.
@olivierlambert that is a discrepancy between what was planned (a VM delta backup) and what we can do with the data available (a full). In this case, only one disk fell back to a full.
We have to improve how we show the information for mixed states.
@yaroz it looks like there is an issue with some errors not stopping the job as expected; I am currently working on this.
In the meantime, you can restart XO; it will forcefully close all running jobs.
@olivierlambert because of time constraints and some decisions to make about which tags to use if the VM has changed. We pushed back these decisions until mirror backups were more widely used.
(The easiest way would be to always use the latest tags.)
To be fair, if possible I would prefer to wait for XO 6; the backup form in XO 5 is quite complex to modify.
@Pilow This one should be doable; I am adding it to our backlog.
@Pilow that would require changing a lot of things. We are starting a project to rewrite the ACLs, and I think that is a better way to handle it. But it will take time before it is released.
Different platforms have different internals, and so different edge cases. We are here to help bridge the gap.
@farokh the job name is in the snapshot name. Are these snapshots from the same job?
Did you enable rolling snapshots on the backup job?