The address field doesn't trim trailing whitespace. Not a deal breaker, but it did take me a couple of minutes to figure out why my copy/pasted address was giving me errors.
Posts made by probain
-
RE: VMware migration tool: we need your feedback!
-
RE: Threadripper TRX50 7960x....yaay/nay?
I'm unsure if Threadripper is affected. But the EPYCs have a problem with networking speeds. And I'm quite sure they haven't found the root cause yet.
-
RE: Slow Read & Okay Write Speeds with Xen Orchestra to NFS
@Houbsi
what NAS are you using? -
RE: Tips on installing XO
I've actually got a PR to help improve the documentation regarding the yarn forever part. Since I never could get that to work, I implemented a systemd variant instead.
Ping @olivierlambert for visibility
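For anyone curious, a minimal sketch of what such a unit could look like. The paths, the ExecStart line, and the install location under /opt/xen-orchestra are assumptions for a from-source build; adjust them to your own setup:

```ini
# /etc/systemd/system/xo-server.service -- hypothetical paths, adjust to your install
[Unit]
Description=Xen Orchestra server
After=network-online.target
Wants=network-online.target

[Service]
# Assumes a from-source checkout in /opt/xen-orchestra
WorkingDirectory=/opt/xen-orchestra/packages/xo-server
ExecStart=/usr/local/bin/node ./dist/cli.mjs
Restart=on-failure
SyslogIdentifier=xo-server

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload && systemctl enable --now xo-server`, and logs end up in the journal instead of a forever logfile.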
-
RE: Backup / Migration Performance
@zach
XCP-ng is unfortunately quite slow on individual storage streams. It kind of disappears when using lots of VMs, but looking at any single stream, the slowness is surprising.
A good read is the write-up Understanding the storage stack in XCP-ng.
-
RE: Disaster recovery backups crazy slow
@ludovic78
In the case of such a power outage, the backup would fail anyway.
And I'm also assuming that you have a separate share/dataset for those specific backups.
Cobble it together with the Health Checks, and it is semi-production at least.
If it is such a highly critical environment that it wouldn't tolerate more than that, then obviously you should open a support ticket -
RE: Disaster recovery backups crazy slow
@ludovic78
Try setting the Sync setting to Disabled on the target dataset that you're sharing, and see if that makes any difference.
This makes a GIANT difference for me.
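For reference, the same thing can be done from the shell on the NAS. The dataset name here is a placeholder, and keep in mind that sync=disabled trades crash consistency for speed:

```shell
# "tank/xo-backups" is hypothetical -- use your actual backup dataset
zfs get sync tank/xo-backups        # inspect the current value (default is "standard")
zfs set sync=disabled tank/xo-backups
```

Sync writes going straight to RAM instead of waiting on the ZIL is exactly why the difference is so big.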
-
RE: How do I reset my XO password?
@jasonnix
Yeah that folder can fill up quite heavily.
See the following for how to clear the cache with yarn:
https://yarnpkg.com/cli/cache/clean -
RE: SAML-plugin with Google Workspace?
@olivierlambert If I could get a pointer as to which source-document we're talking about. Then yes, I could whip something up.
-
RE: SAML-plugin with Google Workspace?
I've managed to get this working to a 93% satisfactory state. Technically I get SAML working. However, when the login is authorized, it just kicks the user (me) back to the login page. Manually going to the bare URL (https://xo.company.net), on the other hand, takes the login further.
I'm sharing sanitized screenshots below for how to get this to work. XO<->Google Workspace SAML.
@olivierlambert Maybe this can be of use to further flesh out the minimalistic section for SAML within the docs?
Sorry for the screenshots being huge. But they're better than nothing.
-
RE: Orchestra logon screen is messed up after update
@JamfoFL I'm still guessing cache... You could try incognito mode.
OR... you got a broken URL. I can replicate similar stuff if I do bad things with the address.
e.g.
Browse to https://<xo.example.net>/signin
Try https://<xo.example.net>/signin/callback
Continue to https://<xo.example.net>/signin again
The sign-in page is now borked, and it needs to be closed and opened in a new window/tab instead.
But it might not be the same as what you're experiencing.
-
Feature idea: Make disk migration reflink-aware for "Block Cloning" features
Scenario:
Backups are done to a (remote/networked/local) SR. The filesystem on the SR has features enabled, e.g. ZFS Block Cloning. Having support for reflink=always (or similar) would enable near-instant replication of any copied data, e.g. when a Health Check is done. Another benefit would be that the data blocks being tested in said Health Check would be the actual data blocks of the backup, e.g. for Disaster Recovery.
Now my guess is that the VDIs are being migrated/imported through XAPI, and this would therefore probably need an upstream implementation. But as the title says, this is only an idea/suggestion/wish.
ZFS has block cloning; BTRFS and other filesystems have similar advanced features. This could greatly improve perceived speeds far beyond what is currently being seen.
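As a rough illustration of the idea (file names are hypothetical, and it requires a filesystem with reflink support, e.g. XFS, BTRFS, or ZFS 2.2+ with block cloning enabled):

```shell
# A reflink copy shares the underlying blocks instead of rewriting them,
# so even a multi-TB disk image "copies" near-instantly
cp --reflink=always big-backup.vhd restore-test.vhd

# --reflink=always fails loudly if the filesystem can't do it,
# rather than silently falling back to a full data copy
```

That is the behaviour I'd love to see when XO copies VDIs around on a capable SR.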
Two cents deposited.
Thanks for this great software. -
RE: Unhealthy VDI's
Unless you have thin provisioning, the garbage collection won't happen automatically. Or at least, not nearly often enough (imho). Instead it is only triggered when snapshots/VDIs are deleted.
IMHO, automatic scanning and garbage collection should happen regardless of whether the provisioning is thin or thick. But that is only my two cents.
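In the meantime, the coalesce/garbage-collection pass can at least be nudged manually. As far as I understand, rescanning the SR kicks it off (the UUID is a placeholder):

```shell
# List your SRs to find the right UUID
xe sr-list

# Rescan the SR -- this should trigger the coalesce/GC pass on it
xe sr-scan uuid=<sr-uuid>
```
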
-
Feature Idea: Retry only the operation that failed (health check)?
So I was doing more backup-testing today.
When doing a "Full Backup" with a subsequent health check enabled, the job became interrupted. The backup itself was successful; it was the health check portion that failed/interrupted. This doesn't seem to have triggered the retry function.
I propose that health checks are included in the retry criteria, but with the added granularity that a retry only retries/continues the actual operation that failed. So if it was the backup itself, then retry and continue the job from there. However, if it's the health check, then it's unnecessary to redo the entire backup, and only the health check needs to be redone.
If I'm wrong about how things work, then feel free to correct me. I'm only spitballing ideas based on my understanding of what I'm seeing.
Thanks
Screenshot of one of the interrupted backup reports:
-
Will XCP-NG / XO consider migrating over to Valkey from Redis?
With Redis now going source-available, and therefore no longer being true open source, larger and larger vendors are switching over to the fork Valkey.
I would be interested to hear how Vates sees the future of the XCP-ng and XO projects going forward, in regards to this. -
RE: CBT: the thread to centralize your feedback
Enabling the new Purge snapshot data when using CBT on Delta backups
results in
cleanVm: incorrect backup size in metadata. But it seems to be successful anyway.
Update: I'm getting this regardless of what "Purge snapshot data when using CBT" is set to. So right now I'm not sure what causes it.
XO source: commit 96b76. ?