@olivierlambert whoops! Well spotted. Will post in the real one.
Best posts made by mauzilla
-
RE: Best way to determine whether files in the backups are still relevant
@florent you guys are absolute geniuses, there isn't a company in the world that matches the quality of your support.
I have a question though: say I have my backup job set to a retention of 2 on delta, and we do a full backup every 20 backups. Am I right to assume that when the full backup occurs (so after 20 backups), there will in theory be 2 full sets on the backup storage?
Example:
- Day 19: full.vhd + delta.vhd
- Day 20: oldfull.vhd (a merge of the day 19 full + delta) + newfull.vhd (a new full of the VM)
- Day 21: full.vhd (which is now the old newfull.vhd) + delta.vhd (the delta for day 21)
If so, that means that with large VMs (some are 2.5TB in size), there will come a time when that VM uses up double its size on the backup storage.
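If that reading is right, the peak is simple arithmetic; a minimal sketch, assuming the merge behaviour described above and a purely illustrative delta size:

```python
# Back-of-envelope arithmetic for the question above: around the key (full)
# backup, the old full+delta chain and the fresh full coexist until the old
# chain is merged/rotated out, so peak usage briefly approaches 2x the VM.
vm_size_tb = 2.5   # large VM from this post
delta_tb = 0.1     # assumed daily delta; purely illustrative

steady_state = vm_size_tb + delta_tb        # full + delta within retention
peak_at_full = steady_state + vm_size_tb    # old chain + new full together
print(f"steady state        : ~{steady_state:.1f} TB")
print(f"peak around new full: ~{peak_at_full:.1f} TB")
```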
Lastly, as we will be moving to XCP Pro+ support in January, we want to move our backups from the now historical / community edition to the new one. I know at some point I saw you had moved to NBD backups, and there is also an option to "test" the backups. Would this then make periodic full backups irrelevant, as there would be no need to worry about backup corruption over time?
-
RE: Best way to determine whether files in the backups are still relevant
Good morning @florent
I once again have to thank you guys for your amazing support (even in a case like this). I have added the 2 remotes to our paid enterprise version (configured identically to our community version).
Other info:
- Backups are set up as incremental
- Backups run either daily or weekly (2 backup jobs), but both are configured the same
- 1 concurrent backup
- Backup retention of 2
I will open a ticket now, thank you!
-
RE: Benchmarks between XCP & TrueNAS
@andrewperry yes, we replicate TrueNAS to a "standby" TrueNAS using zpool / TrueNAS replication. Our current policy is hourly. We also plan to do incremental replication of all VMs to a standby TrueNAS over weekends, giving us 3 possible methods for recovery.
Currently we're running into a major drawback with VMs on TrueNAS over NFS (specifically VMs that rely on fast storage, such as databases and recordings). We did not anticipate such a huge drop in performance from hosting VHD files over NFS. We're reaching out to XCP to ask for advice, as it could likely be due in part to customizations we can make.
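For context on what that hourly leg does under the hood, here's a minimal sketch of one incremental ZFS replication step; the dataset, snapshot and host names are hypothetical, and TrueNAS normally drives this itself through its replication tasks:

```python
# One incremental replication step: send the delta between the last two
# snapshots and pipe it into a receive on the standby box.
import subprocess

DATASET = "tank/vm-storage"      # hypothetical dataset name
STANDBY = "standby-nas"          # hypothetical standby host
prev_snap = f"{DATASET}@hourly-09"
new_snap = f"{DATASET}@hourly-10"

send = subprocess.Popen(
    ["zfs", "send", "-i", prev_snap, new_snap],
    stdout=subprocess.PIPE,
)
# -F rolls the standby dataset back so the incremental stream applies cleanly.
recv = subprocess.run(["ssh", STANDBY, "zfs", "recv", "-F", DATASET],
                      stdin=send.stdout)
send.stdout.close()
ok = send.wait() == 0 and recv.returncode == 0
print("replication ok" if ok else "replication failed")
```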
-
RE: Urgent: how to stop a backup task
@andrewperry I think health checks are the answer here. We're not backing up VMs but rather doing incremental replication with health checks. If a VM does not fail its health check, I cannot see a reason for a full backup unless it becomes a snapshot-chain issue. Olivier might be able to provide better insights here; we're in the process of implementing the above and will keep you posted.
-
RE: Benchmarks between XCP & TrueNAS
@nikade we'll be doing it right away, and given that I now know there are potential performance challenges I can address, I'd rather give it a go. Thank you all for your feedback so far, a huge relief!
-
RE: Shared Storage Redundancy Testing
We will test this today and let you know. Ultimately the use case here is to be able to make use of a failover NAS (which is replicated at NAS level) so that switching to a failover in the event of failure is a simpler process (else there is no practical point in replicating the VHDs between external storage servers if we cannot "switch" to another NAS).
I will let you know the outcome, but I agree with @Forza that if this does work, it would be a great addition to the GUI to allow for a "switch to failover" scenario.
-
RE: increase 24 hour timeout-limit on backup/copy
You guys are absolute rockstars! Yannick connected and was able to resolve the issue, thank you again for all of your hard work!
-
RE: increase 24 hour timeout-limit on backup/copy
@olivierlambert strange one. Thank you Olivier, I created ticket 7714072.
-
RE: XCP / XO and Truenas Scale or Core
Thank you all, it seems I've got my answer; will go Core.
Latest posts made by mauzilla
-
RE: Socket topology in a pool
@Greg_E we're running into an issue at the moment where, on one of our hypervisors with 4 sockets, the CPU is approximately 30-50% utilized, but the VMs are battling with CPU usage (or rather contention). Most VMs run fine, but when the CPU comes under load, it's quite obvious that some of the VMs are dramatically slower.
I have a mix of configurations (mostly 1-2 sockets x cores), so I am trying to assess whether there is a configuration issue (whether it would be better to specify 4 sockets x cores rather than 1-2 sockets on a 4-socket system).
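One cheap way to check for contention from inside an affected guest is steal time, i.e. time the hypervisor spent running something else while this VM's vCPU was runnable. A minimal probe for Linux guests, reading /proc/stat directly:

```python
# Sample aggregate CPU counters twice and report the "steal" share.
# Field order after the "cpu" label, per proc(5): user nice system idle
# iowait irq softirq steal ...
import time

def cpu_fields():
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

before = cpu_fields()
time.sleep(5)
after = cpu_fields()
delta = [b - a for a, b in zip(before, after)]
total = sum(delta)
steal_pct = 100.0 * delta[7] / total if total else 0.0
print(f"steal over 5s: {steal_pct:.1f}%")  # persistently high => contention
```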
-
RE: Socket topology in a pool
So to further simplify (sorry, the previous reply was from my phone). I need to state upfront that although I know we specify the topology / RAM / disk etc. for each VM, when it comes down to the RAM and CPU side I am a complete novice in how the underlying technology distributes those resources to the VMs. I know we can set it, but how it works is a new landscape.
What I am trying to assess is whether there is a possibility of bad design / rollout. Say my hypervisor has 4 physical CPUs (or sockets, I presume, within XOA), with 10 cores per socket, so the oversimplification is that I have in theory 40 cores available for my VMs.
Say I set up 10 VMs, each with 2 cores. In theory I have allocated / am at maximum consuming 20 cores. What I am trying to assess is: if I set up my VMs to use only 1 socket (so my XOA setup is 1 socket x 2 cores), does this setting refer to the actual socket / underlying physical CPU 1, or is it a virtualized topology (so the VM is merely under the impression it has 1 socket)?
If it is the underlying socket / physical CPU, would this imply that physical CPUs 2, 3 and 4 would never be utilized, because my VMs are all set up as 1 socket x 2 cores? If, however, my understanding is incorrect and the x sockets x cores setting simply gives the VM a topology of what it thinks it has, what benefit is there for the VM in having different sockets / CPUs if this is simply a virtualized setting?
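To put numbers on the allocation side of that question, a back-of-envelope sketch using the figures from this post (illustrative only, not output from any XCP-ng tool):

```python
# 4 sockets x 10 cores on the host, 10 VMs at "1 socket x 2 cores" each.
host_sockets, cores_per_socket = 4, 10
physical_cores = host_sockets * cores_per_socket    # 40 physical cores

vms, vcpus_per_vm = 10, 2
allocated_vcpus = vms * vcpus_per_vm                # 20 vCPUs in total

print(f"physical cores  : {physical_cores}")
print(f"allocated vCPUs : {allocated_vcpus}")
print(f"allocation ratio: {allocated_vcpus / physical_cores:.2f}")  # 0.50
```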
-
RE: Socket topology in a pool
I think I did not ask the question correctly. To simplify: if I have 4 sockets but set my VMs up to use 2 sockets x cores, does this mean they will utilize sockets 1 and 2 and never 3 and 4? If so, how would my config need to change to, say, use any available sockets but x cores?
I'm trying to assess whether, in certain situations, I am not distributing the load of my VMs correctly.
-
Socket topology in a pool
We're finalizing our pool. We have 2 hosts with 4 CPUs each and a last host with 2 CPUs (same range, just fewer sockets).
This obviously means that if I have VMs with a topology of, say, 4 sockets x 2 cores, I will not be able to move those VMs from a 4-socket host to a 2-socket host.
How does XCP distribute the load? If we change our topology to have all VMs utilize 2 sockets, will only 2 sockets be used (so sockets 3/4 on a 4-socket host will never have any VMs utilizing those CPUs), or will XCP still distribute the load across all CPUs but utilize the least busy CPU when booting that VM up?
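As a footnote on what topologies are even expressible, here's a tiny sketch enumerating the sockets x cores-per-socket splits for a given vCPU count; the helper is hypothetical, just divisor arithmetic, but it shows which layouts could be presented on both the 4-socket and 2-socket hosts:

```python
# List every sockets x cores-per-socket layout for a given vCPU count.
def topologies(vcpus: int):
    return [(s, vcpus // s) for s in range(1, vcpus + 1) if vcpus % s == 0]

for sockets, cores in topologies(8):
    print(f"{sockets} socket(s) x {cores} core(s) = 8 vCPUs")
```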
-
RE: Benchmarks between XCP & TrueNAS
@andrewperry yes, we replicate TrueNAS to a "standby" TrueNAS using zpool / TrueNAS replication. Our current policy is hourly. We also plan to do incremental replication of all VMs to a standby TrueNAS over weekends, giving us 3 possible methods for recovery.
Currently we're running into a major drawback with VMs on TrueNAS over NFS (specifically VMs that rely on fast storage, such as databases and recordings). We did not anticipate such a huge drop in performance from hosting VHD files over NFS. We're reaching out to XCP to ask for advice, as it could likely be due in part to customizations we can make.
-
RE: Urgent: how to stop a backup task
@olivierlambert do you have a link where we can read up on chained backups?
-
RE: Benchmarks between XCP & TrueNAS
@andrewperry - we've started our migration from local storage to TrueNAS-based NAS (NFS) over the last couple of weeks, and ironically ran into our first "confirmed" issue on Friday. For most of the transferred VMs we had few "direct" issues, but we can definitely see a major issue with VMs that are reliant on heavy databases (and call recordings).
We will be investigating options this week and will keep everyone posted. Right now we don't know where the issue is (or what a reasonable benchmark is), but with 4 x TrueNAS, 2 pools per TrueNAS with enterprise SSDs, SLOGs for each pool and dual 10Gb fibre connections to Arista, having only 2-3 VMs per pool is giving mixed results.
We will first try to rule out other hardware, but will update the thread here as we think it will be of value for others, and maybe we find a solution. Right now I am of the opinion that it's NFS; hoping we can do some tweaks, as the prospect of just running iSCSI is concerning since thin provisioning is then no longer possible.
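Since the database workloads are the tell, sync-write latency is the first thing worth measuring: every database fsync becomes a synchronous NFS round trip, which is exactly where a SLOG does (or does not) earn its keep. A minimal probe below; the mount path is hypothetical, and fio remains the proper benchmarking tool:

```python
# Time a series of small synchronous writes, the pattern databases generate.
import os
import time

TEST_FILE = "/mnt/nfs-sr/fsync-probe.bin"  # hypothetical path on the NFS SR
BLOCK = b"\0" * 4096                       # 4 KiB, a typical database page
ROUNDS = 200

fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT, 0o600)
latencies = []
for _ in range(ROUNDS):
    start = time.perf_counter()
    os.write(fd, BLOCK)
    os.fsync(fd)                           # forces the synchronous round trip
    latencies.append(time.perf_counter() - start)
os.close(fd)
os.unlink(TEST_FILE)

latencies.sort()
print(f"median fsync: {latencies[ROUNDS // 2] * 1e3:.2f} ms")
print(f"p99 fsync   : {latencies[int(ROUNDS * 0.99)] * 1e3:.2f} ms")
```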
-
RE: Urgent: how to stop a backup task
@andrewperry I think health checks are the answer here. We're not backing up VMs but rather doing incremental replication with health checks. If a VM does not fail its health check, I cannot see a reason for a full backup unless it becomes a snapshot-chain issue. Olivier might be able to provide better insights here; we're in the process of implementing the above and will keep you posted.
-
RE: Incremental Replication Health Testing?
@olivierlambert sorry, yes, I meant replication - it would be a nice little feature.
Thank you again for everything your team is doing!
-
Incremental Replication Health Testing?
Backups have the ability to do health tests; do you have any plans to incorporate a similar feature for incremental replication? We're opting to do IR rather than backups as it vastly changes DR timeframes, but to avoid "full backups" after x backups, it would be a great addition if the consistency of IR could also be tested, to ensure you can continue the chain until an issue occurs.