@pierrebrunet
For Google Workspace:
Yes, it's in the "Service Provider details" section. See the screenshot for an example.

Edit: Removed doubled screenshot
@pierrebrunet
I'm jumping in here as well. Reporting that the PR fixes it for Google Workspace as well!
However, the checkbox in GW is called "Signed response".
No further adjustments to the plugin itself were needed.
@marcoi
I do this constantly, using either Ctrl+click or a middle mouse click.
I treat XO as almost ephemeral, and I regularly take scheduled backups of its config, both locally and off-site.
For XO itself, I use the community edition. But I've built an entire Ansible role that sets it up within minutes on any configured machine. Mainly because I don't like/trust scripts from random people on the internet, and also because my approach stays as close to a step-by-step install from the docs as you can get.
I've been thinking about sharing the Ansible role, but I'm unsure how much interest there is in such things.
For XOA there isn't a need for such a thing anyway. And with how extremely easy it is to restore the config from a backup file, I wouldn't do anything crazy or overly complicated either.
@florent said in Backblaze as Remote error Unsupported header 'x-amz-checksum-mode' received for this API call.:
```
requestChecksumCalculation: "WHEN_REQUIRED",
responseChecksumValidation: "WHEN_REQUIRED"
```
I (previously as @jr-m4) just tried the patch you suggested, and I can confirm that it does indeed make the backup complete successfully!
Great work finding a solution that quickly!
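For anyone else landing here, a minimal sketch of where those two options live, assuming the AWS SDK v3 S3 client (the endpoint, region, and env var names are placeholders):

```ts
import { S3Client } from "@aws-sdk/client-s3";

// "WHEN_REQUIRED" skips the newer CRC checksum headers that Backblaze B2
// rejects with "Unsupported header 'x-amz-checksum-mode'".
const client = new S3Client({
  region: "us-west-004", // example B2 region
  endpoint: "https://s3.us-west-004.backblazeb2.com",
  credentials: {
    accessKeyId: process.env.B2_KEY_ID ?? "",
    secretAccessKey: process.env.B2_APP_KEY ?? "",
  },
  requestChecksumCalculation: "WHEN_REQUIRED",
  responseChecksumValidation: "WHEN_REQUIRED",
});
```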
@dsmteam
Unfortunately, this is the limit of what I know regarding SAML. The Google Workspace variant was cobbled together after many hours of experimenting.
I wish you the best of luck. And if you do find a solution, please consider adding it to the docs, as this is an area where we desperately need more comprehensive documentation (as you're experiencing).
@dsmteam
Make sure to enter a newline at the end of the certificate field.
The SAML docs online are lacking, but I've written these instructions for Google Workspace SAML. Maybe they could help? And feel free to add to the docs if you happen to find a solution.
https://docs.xen-orchestra.com/users#google-workspace---saml-supportgooglecom
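To illustrate the newline tip above: the certificate field should end with a blank line, i.e. press Enter after the last line (placeholder certificate body):

```
-----BEGIN CERTIFICATE-----
MIIDdzCCAl+gAw...rest of the certificate body...
-----END CERTIFICATE-----

```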
@Danp responded to a similar/same issue here:
https://xcp-ng.org/forum/post/89858
@R2rho
Faulty gear always sucks. But who would've guessed that two separate systems would produce the same problems? Highly unlikely, but never impossible.
Good luck with the RMA
Well, unfortunately I got nothin... Extremely weird indeed
Given that the BIOS and everything else is updated to the latest version possible, the first thing I do with these kinds of symptoms is disable all power management and/or C-states in the BIOS.
Some combinations of OS and hardware just don't work properly together.
If nothing else, it's an easy, non-intrusive test to do.
Update: I see that your motherboard has an IPMI interface. If the issues happen again after you've disabled power management/C-states, you could use the remote functionality of the IPMI to hopefully get more info from the sensors and such.
Following, as I'm very interested in hearing more about this.
Looking a tiny bit further: the same discrepancy is present with Disaster Recovery too, being referenced as "Full Replication (formerly: Disaster recovery)".
XO and the docs conflict with each other on what the backup function should actually be called. See the attached pic for an example.

This is truly a niche situation. But I noticed that when I have VMs without any disks attached, the Rolling Snapshot schedule doesn't remove snapshots in accordance with the schedule's Snapshot retention.
So I'm guessing the schedule only looks at cleaning up snapshots of disks. But since the snapshots are actually of the entire VM, maybe this should be taken into account as well?
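In case it helps triage, a hypothetical sketch of the retention pass I'm imagining (all names are invented for illustration, not taken from XO's code base); the point is that pruning keyed off disk snapshots alone would never touch snapshots of disk-less VMs:

```ts
// Hypothetical sketch only: these names are invented, not from XO's code base.
interface Snapshot {
  id: string;
  scheduleId: string;
  hasDisks: boolean; // false for a snapshot of a VM with no VDIs attached
  created: number;   // epoch ms
}

// Returns the snapshots that should be deleted for one schedule.
function snapshotsToPrune(snaps: Snapshot[], scheduleId: string, retention: number): Snapshot[] {
  const candidates = snaps
    // Suspected current behaviour: an extra `&& s.hasDisks` filter here
    // would explain why disk-less VM snapshots never get cleaned up.
    .filter((s) => s.scheduleId === scheduleId)
    .sort((a, b) => b.created - a.created); // newest first
  return candidates.slice(retention); // everything past the newest N
}
```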
If this is working as intended, then just ignore this post.
I finally have some new hardware to play with, and I'm noticing that the Health check fails due to the vGPU being busy with the actual host:
INTERNAL_ERROR(xenopsd internal error: Cannot_add(0000:81:00.0, Xenctrlext.Unix_error(4, "16: Device or resource busy")))
My suggestion is that Health checks should unassign any attached PCIe devices for the duration of the check. If it's crucial that they stay attached, then maybe add an opt-in checkbox, either on the VM or next to the Health check portion of backups?
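Purely to illustrate the suggestion, a hypothetical flow (detachPci, attachPci, and this VM shape are all invented names, not XO's real API):

```ts
// Hypothetical flow only: detachPci, attachPci, and this VM shape are
// invented for illustration, not XO's real API.
interface HealthCheckVm {
  id: string;
  pciDevices: string[];             // e.g. ["0000:81:00.0"]
  detachPciForHealthCheck: boolean; // the proposed checkbox
}

async function healthCheckBoot(vm: HealthCheckVm, boot: (id: string) => Promise<void>): Promise<void> {
  const toDetach = vm.detachPciForHealthCheck ? [...vm.pciDevices] : [];
  // Unassign passed-through devices so the test boot doesn't fight the host
  // for hardware (the "Device or resource busy" failure above).
  for (const dev of toDetach) await detachPci(vm.id, dev);
  try {
    await boot(vm.id);
  } finally {
    // Always reattach, even if the test boot fails.
    for (const dev of toDetach) await attachPci(vm.id, dev);
  }
}

// Placeholders standing in for whatever XO would actually call:
async function detachPci(vmId: string, device: string): Promise<void> {}
async function attachPci(vmId: string, device: string): Promise<void> {}
```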
The address field doesn't trim trailing whitespace. Not a big deal breaker, but it did take me a couple of minutes to figure out why my copy/pasted address was giving me errors.
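A minimal sketch of the normalization I mean, with a made-up handler name:

```ts
// Made-up handler name; the point is just the trim.
function normalizeAddress(raw: string): string {
  return raw.trim(); // drop the whitespace that copy/paste tends to bring along
}

normalizeAddress("192.168.1.10 "); // -> "192.168.1.10"
```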
I'm unsure if Threadripper is affected, but the EPYCs have a problem with networking speeds, and I'm quite sure they haven't found the root cause yet.