@Delgado the relevant packages aren't published yet, and we are testing them as extensively as possible before letting you play with them

Posts
-
RE: Our future backup code: test it!
-
RE: Restoring a disk only not the whole VM
@McHenry you can select the SR for each disk, and ignore the disks you don't need
then you can attach the disks to any other VM
-
RE: Backblaze as Remote error Unsupported header 'x-amz-checksum-mode' received for this API call.
Thanks for pointing out the exact issue in the AWS SDK. It gave us enough material to find the fix
-
RE: Question on Mirror backups
@manilx yes, that's it
we should change the "full backup interval" label to "base/complete backup interval" to clarify what a full is. And we will do it.
-
RE: Backblaze as Remote error Unsupported header 'x-amz-checksum-mode' received for this API call.
@mguimond @jr-m4 I would like to be able to keep the library up to date.
Would it be possible to patch a file in your install and check whether the backup runs? The patch disables the checksum computation. Note that if you use backup encryption, the checksums are already checked on restore.
You can either use this branch: fix_s3_backblaze, or directly modify the code in your installation in <xo>/node_modules/@xen-orchestra/fs/dist/s3.js
Replace:
requestHandler: new _nodeHttpHandler.NodeHttpHandler({ socketTimeout: 600000, httpAgent: new _http.Agent({ keepAlive: true }), httpsAgent: new _https.Agent({ rejectUnauthorized: !allowUnauthorized, keepAlive: true }) })
with:
requestHandler: new _nodeHttpHandler.NodeHttpHandler({ socketTimeout: 600000, httpAgent: new _http.Agent({ keepAlive: true }), httpsAgent: new _https.Agent({ rejectUnauthorized: !allowUnauthorized, keepAlive: true }) }), requestChecksumCalculation: "WHEN_REQUIRED", responseChecksumValidation: "WHEN_REQUIRED"
(the difference is the addition of
, requestChecksumCalculation: "WHEN_REQUIRED", responseChecksumValidation: "WHEN_REQUIRED"
at the end of the options object)
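For context, these two settings are standard client options in recent versions of the AWS SDK for JavaScript v3, not something XO-specific. A minimal standalone sketch (endpoint, region and credentials are placeholders, this is not the XO code path):

// sketch: an S3 client that only computes/validates checksums when an API call requires it
const { S3Client } = require('@aws-sdk/client-s3')

const client = new S3Client({
  endpoint: 'https://s3.us-west-004.backblazeb2.com', // placeholder Backblaze endpoint
  region: 'us-west-004', // placeholder
  credentials: { accessKeyId: 'KEY_ID', secretAccessKey: 'APPLICATION_KEY' }, // placeholders
  requestChecksumCalculation: 'WHEN_REQUIRED',
  responseChecksumValidation: 'WHEN_REQUIRED',
})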
-
RE: Question on Mirror backups
There is a lot of room to improve the UX of the backup form, and we'll do better when rewriting it for XO 6. Mainly, there is confusion between a full backup (an XVA file exported by XCP-ng) and the base backup of an incremental backup chain.
A mirror incremental backup will retransfer the base + deltas. You can check this by looking into the backup job. A mirror job will always transfer all new data, and will then prune the backups on the targets according to the retention set.
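To illustrate the pruning part, here is a minimal sketch (not the XO implementation, the names are made up): after a mirror run has transferred everything new, only the newest "retention" backups are kept on the target.

// sketch, not XO code: keep only the newest `retention` backups on the mirror target
function pruneMirrorTarget(backups, retention) {
  const newestFirst = [...backups].sort((a, b) => b.timestamp - a.timestamp)
  return {
    keep: newestFirst.slice(0, retention),
    remove: newestFirst.slice(retention), // these older backups get deleted on the target
  }
}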
As for the schedule: the backup job itself must be disabled, since you don't want it to run on its own. Only the schedule must be enabled and run.
-
RE: Feedback on immutability
@afk the agent is as dumb as possible
Also, if you encrypt the backups, the agent would need to decrypt the metadata to detect the chains, and thus have access to the encryption key, which would require getting the encryption key out of XO and transferring it to the immutability agent.
I think it will be easier to have XO provide more feedback on the immutable backups, since it has access to the chain, and/or alert when something seems strange.
-
RE: Backup issues with S3 remote
@peo do you have anything in the XO logs? Can you post the JSON of a failed backup job?
-
RE: Backup issues with S3 remote
@peo in the object storage world (S3-compatible), there is not really a notion of "directory"; it's more a convention that / marks a logical level in the object key. But rclone reports errors like "Directory not empty", which makes me think rclone does things a little differently than plain AWS S3.
We try to be as compatible as possible with the S3 implementations out there, but each object storage has its own quirks, so we only support the most used setups for now.
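To illustrate that convention, here is a hedged sketch with the AWS SDK v3 (bucket name and prefix are placeholders): listing with a Delimiter is the closest thing to reading a directory, and a "directory" is nothing more than a shared key prefix.

// sketch: "directories" are just key prefixes; a listing with Delimiter groups deeper keys as CommonPrefixes
const { S3Client, ListObjectsV2Command } = require('@aws-sdk/client-s3')

const client = new S3Client({ region: 'us-east-1' }) // placeholder region, credentials taken from the environment

async function listLevel(bucket, prefix) {
  const res = await client.send(
    new ListObjectsV2Command({ Bucket: bucket, Prefix: prefix, Delimiter: '/' })
  )
  for (const obj of res.Contents ?? []) console.log('object:', obj.Key) // objects directly under the prefix
  for (const p of res.CommonPrefixes ?? []) console.log('prefix:', p.Prefix) // pseudo-directories one level down
}

listLevel('my-backup-bucket', 'xo-vm-backups/').catch(console.error) // placeholder names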
-
RE: the writer IncrementalRemoteWriter has failed the step writer.beforeBackup() with error Lock file is already being held. It won't be used anymore in this job execution.
@SudoOracle could you post the full JSON log of the backup?
(You can get it by clicking the download button at the top of a failed job execution.) If possible, one per type of failed backup job.
-
RE: the writer IncrementalRemoteWriter has failed the step writer.beforeBackup() with error Lock file is already being held. It won't be used anymore in this job execution.
@manilx are there multiple jobs running on the same VM?
-
RE: the writer IncrementalRemoteWriter has failed the step writer.beforeBackup() with error Lock file is already being held. It won't be used anymore in this job execution.
This happens when two backup jobs are working on the same VM, or a backup and a background merge worker.
If it's the first case, you can use a job sequence to ensure the backups run one after another. If it's the latter, there is a checkbox to ensure the merge is done directly inside the job (at the bottom of the advanced settings of the backup job) instead of in a background job. It will slow the job down a little, but ensures it is completely done.
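Conceptually, the error comes from an exclusive per-VM lock: the first process creates the lock file, any concurrent one fails immediately instead of corrupting the chain. A minimal Node sketch of that idea (this is not XO's actual implementation):

// sketch of an advisory per-VM lock file: the 'wx' flag fails if the file already exists
const fs = require('node:fs/promises')

async function withVmLock(lockPath, fn) {
  let handle
  try {
    handle = await fs.open(lockPath, 'wx') // throws EEXIST if another job already holds the lock
  } catch (err) {
    if (err.code === 'EEXIST') throw new Error('Lock file is already being held')
    throw err
  }
  try {
    return await fn() // run the backup while holding the lock
  } finally {
    await handle.close()
    await fs.rm(lockPath, { force: true }) // release the lock
  }
}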
-
RE: Compressed backups
@abudef
Incremental backups on block storage are compressed with brotli in fastest mode. That is enough for a massive gain, since we compress all the zeroes. Incremental backups (*.vhd files) on non-block storage aren't compressed.
Full backups (XVA files) aren't compressed by XO, but can be compressed by XCP-ng (using gzip or zstd).
For now, this is not configurable; we are waiting for the future remote form to handle this.
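As an aside, the fastest brotli mode is available directly in Node's built-in zlib module; a minimal sketch (file names are placeholders, this is not the XO code path):

// sketch: brotli at its fastest quality still collapses long runs of zeroes very well
const zlib = require('node:zlib')
const { createReadStream, createWriteStream } = require('node:fs')
const { pipeline } = require('node:stream/promises')

const brotliFastest = zlib.createBrotliCompress({
  params: { [zlib.constants.BROTLI_PARAM_QUALITY]: zlib.constants.BROTLI_MIN_QUALITY },
})

pipeline(createReadStream('disk.vhd'), brotliFastest, createWriteStream('disk.vhd.br')).catch(console.error)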
-
RE: What is the status/roadmap of V2V (Migrating from VMware to XCPng/XO) ?
@afk the newer VMFS versions put more locks on the files, locking the full chain of snapshots and base disks instead of only the active disk.
Even VMFS 5 sometimes locks the full chain.
The most open storage is NFS, which can additionally be accessed directly by XO without going through the SOAP API, giving a nice performance boost and not hammering the ESXi too much (it is only used for the metadata). We don't intend to have XO automatically migrate VMs running in VMware. A user could script it, though, by combining the VMware APIs and xo-cli.
We never succeeded in getting the disks through NBD, but it should be possible ( https://github.com/vexxhost/migratekit/blob/main/internal/vmware_nbdkit/vmware_nbdkit.go#L91 ) and ( https://vdc-download.vmware.com/vmwb-repository/dcr-public/8ed923df-bad4-49b3-b677-45bca5326e85/d2d90bb6-d1b3-4266-8ce5-443680187a9a/vim.vm.device.VirtualDevice.BackingInfo.html )
What we are mostly missing here is internal knowledge on the VMware side: how to get the NBD server address, how to authenticate, and how to get the export name of a disk through the SOAP API. We already have the knowledge to use NBD to read massive volumes of data, as long as we can connect.
-
RE: Can .keeper files be deleted from the backup share?
They are created to ensure the Samba mount is not removed by the OS or another thread on XO.
I would not delete them, especially since they are 0-byte files
-
RE: Feedback on immutability
@rtjdamen said in Feedback on immutability:
@florent so this does mean it will never work when a forever incremental is used?
you can't have an immutable forever-incremental backup without an infinite chain length and an infinite retention to go with it.
It may be possible only if we relax the constraints.
The immutability script could lift the immutability and merge the disks, but that means the immutability would be lifted from time to time, the responsibilities of the immutability script would be greater, and we would need a way to track the VHDs to merge and transmit that information to the immutability script
-
RE: Feedback on immutability
@rtjdamen for the immutability to be useful, the full chain must be immutable and must never fall out of immutability.
The merge process can't lift/put back the immutability, and increasing the synchronization between processes would extend the attack surface.
The immutability duration must be longer than or equal to 2 times the full backup interval - 1.
The retention must be strictly longer than the immutability. For example, with a full backup interval of 7, a retention of 14 and an immutability duration of 13 (K = key backup, D = delta; the oldest backup, shown leftmost, has already dropped out of immutability, so any chain that depends on it is no longer protected):
KDDDDDDKDDDDDD  worst case, only one full chain protected
KDDDDDKDDDDDDK
KDDDDKDDDDDDKD
KDDDKDDDDDDKDD
KDDKDDDDDDKDDD
KDKDDDDDDKDDDD
KKDDDDDDKDDDDD  best case, almost 2 full chains protected
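A small sketch (not XO code) that encodes those two rules and checks them against the example above:

// sketch: sanity-check an immutability configuration against the rules stated above
function checkImmutabilitySettings({ fullBackupInterval, retention, immutabilityDuration }) {
  const issues = []
  if (immutabilityDuration < 2 * fullBackupInterval - 1) {
    issues.push('immutability duration must be >= 2 * full backup interval - 1')
  }
  if (retention <= immutabilityDuration) {
    issues.push('retention must be strictly longer than the immutability duration')
  }
  return issues.length === 0 ? ['ok'] : issues
}

console.log(checkImmutabilitySettings({ fullBackupInterval: 7, retention: 14, immutabilityDuration: 13 }))
// → [ 'ok' ]
-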
RE: Feedback on immutability
@rtjdamen great work
- the immutability duration is per repository, to limit the attack surface to the bare minimum
- nothing can really be protected in software against the root user. This is where physical write-once devices win
- it should ignore the cache.json.gz, but the JSON files containing the backup metadata are protected along with the disk data. Same for the pool metadata/XO config
An additional note: to ensure that an incremental backup is really protected for n days, you must have
- a full backup interval smaller than n
- a retention greater than 2n - 1
That way an attacker won't be able to modify the base disk used for restore
-
RE: Backup from replicas possible?
@flakpyro for now there is no tag selector, but you can now select the list of VMs to be replicated