Best posts made by mauzilla
-
RE: Best way to determine whether files in the backups are still relevant
@olivierlambert whoops! Well spotted. Will post in the real one.
-
RE: Best way to determine whether files in the backups are still relevant
@florent you guys are absolute geniuses, there isn't a company in the world that matches your quality support.
I have a question though: say I have my backup set to a retention of 2 on delta, and we do a full backup every 20 backups. Am I right to assume that when the full backup occurs (so after 20 backups), there will in theory be 2 full sets on the backup storage?
Example:
- Day 19: full.vhd + delta.vhd
- Day 20: oldfull.vhd (a merge of the day 19 full + delta) + newfull.vhd (the new full backup)
- Day 21: full.vhd (which is now the old newfull.vhd) + delta.vhd (the delta for day 21)
If so, that means that if we have large VMs (some are 2.5TB in size), there will come a time when that VM uses up double its size on the backup storage.
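For what it's worth, one way to watch this happen is to check the remote's on-disk usage around the merge window. A rough sketch (the mount point is hypothetical, and I'm assuming the remote keeps per-VM folders under xo-vm-backups, which may differ by XO version):

```sh
# Hypothetical mount point of the XO backup remote - adjust to your setup
REMOTE=/mnt/xo-backups

# Per-VM usage on the remote; during the merge window you should briefly
# see roughly two full VHDs' worth of space for the affected VM
du -sh "$REMOTE"/xo-vm-backups/*/

# Overall headroom left on the backup storage
df -h "$REMOTE"
```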
Lastly, as we will be moving to XCP Pro + support in January, we want to move our backups from the now historical / community edition to the new one. I know at some point I saw you had moved to NBD backups, and there is also an option to "test" the backups. Would this make periodic full backups irrelevant, as there would be no need to worry about backup corruption over time?
-
RE: Best way to determine whether files in the backups are still relevant
Good morning @florent
I once again have to thank you guys for your amazing support (even in a case like this). I have added the 2 remotes to our paid enterprise version (configured verbatim as per our community version).
Other info:
- Backups are set up as incremental
- Backups run either daily or weekly (2 backup jobs), but both are configured the same
- 1 concurrent backup
- Backup retention of 2
I will open a ticket now, thank you!
-
RE: Benchmarks between XCP & TrueNAS
@nikade we'll be doing it right away, and given that I now know there are potential performance challenges I can address, I'd rather give it a go. Thank you all for your feedback so far, a huge relief!
-
RE: Shared Storage Redundancy Testing
We will test this today and let you know. Ultimately the use case here is to be able to make use of a failover NAS (which is replicated at NAS level), so that it's a simpler process to switch to a failover in the event of failure (else there is no practical point in replicating the VHDs between external storage servers if we cannot "switch" to another NAS).
I will let you know the outcome, but I agree with @Forza that if this does work, it would be a great addition to the GUI to allow for a "switch to failover" scenario.
-
RE: increase 24 hour timeout-limit on backup/copy
You guys are absolute rockstars! Yannick connected and was able to resolve the issue, thank you again for all of your hard work!
-
RE: increase 24 hour timeout-limit on backup/copy
@olivierlambert strange one, thank you Olivier. I created ticket 7714072.
-
RE: XCP / XO and Truenas Scale or Core
Thank you all, it seems I've got my answer, will go Core.
-
RE: Backups not starting after failed NFS mount
@olivierlambert you're a grandmaster, never even considered that. It seems to be stuck, but it's likely the firewall, will have a look. Thank you!
-
log4j vulnerability impact
Just checking whether there is anything in XCP-ng and/or XO that may be impacted by the log4j vulnerability? Not sure if some of the APIs use Java?
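For anyone wanting to do a quick (non-authoritative) check themselves, scanning a host for Java or log4j artifacts could look like this:

```sh
# XCP-ng dom0 is CentOS-based, so check the RPM database for Java/log4j
rpm -qa | grep -i -e java -e log4j

# Belt and braces: look for stray log4j jars on the root filesystem
find / -xdev -name 'log4j*.jar' 2>/dev/null
```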
Latest posts made by mauzilla
-
RE: Running XO / XCP's on a "backup" network
Just bumping my post, hoping someone has some recommendations?
-
Running XO / XCP's on a "backup" network
We're replacing our 10GB switch with a redundant 10GB network next weekend. During this maintenance our 10GB network will be off and our normal 1GB network (WAN network) will remain active. Our 10GB network isn't used for any client-facing / WAN traffic; it's only really there to facilitate backups and, of course, access between the XOA appliance and the various XCP hosts (they are all on the same private LAN).
During the maintenance we still want to have a "view" of the hosts to confirm they remain active (our teams work separately and will not be at the DC, so the network team wants to ensure we have access to the hosts whilst the 10GB network is taken out for new cabling etc., which can take a couple of hours).
Our plan is to setup a temp XOA appliance that will run on the WAN network and add the hosts to the temp XOA via a designated IP range we will setup. This is where we need some assistance.
For example:
1 - 10GB LAN (current) - 10.1.1.0/24, where each server's management IP is configured on the 10GB interface (say 10.1.1.1) and the XOA appliance is also set up on a 10GB port.
2 - We want to set up a "temp" network on the 1GB network for each host and appliance, for example a 1GB LAN on 10.1.2.0/24, and give each host an additional IP (not management) on one of the 1GB interfaces (something like the sketch below), then set up an XOA appliance in the same range, say 10.1.2.1 for the host and 10.1.2.2 for the appliance.
We have tested this in our testbench and it seems to work. We're able to access the XCP hosts on one of the 1GB NICs instead of the 10GB NIC using a separate XOA server; our only concern is about the "management" part. We don't want to change the management interfaces of each host to the 1GB network during this period, we effectively just want a "view" to see that hosts are still online and operational. This is a short-term option for a couple of hours, so changing management interfaces seems like overkill as we will terminate the temp XOA after the maintenance is completed.
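For reference, this is roughly how we'd assign that extra (non-management) IP with xe; a sketch only, with placeholder UUIDs and example device/addressing:

```sh
# Find the PIF of the 1GB NIC on the host (device name is an example)
xe pif-list device=eth0 params=uuid,host-name-label,IP

# Give it a static IP on the temp 10.1.2.0/24 network; this does not
# touch the existing management interface on the 10GB side
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=10.1.2.1 netmask=255.255.255.0
```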
So questions:
- As it appears I am able to access the host on both switches in independent subnets (providing the XOA has access to the subnet, either via firewall or by being on the same subnet), what is the difference between this and a "management" interface? As the management interface will remain in the example 10.1.1.0/24 range but I am still able to add the server on another subnet, what actions may not be operational if my access is not on the management interface but on another interface?
- The second question relates to the new redundant 10GB network. Currently each host has a single 10GB port on which the management interface is set up. After this, we need to create an LACP bond that goes to each switch. What is the process for achieving this, as we would like the management interface to be on the bond and not on a single interface as it is now? I assume we need to create a new network > bond > choose the interfaces and choose LACP (roughly the flow sketched below), but as the management interface is already set up on one of the interfaces we want to add to the bond, would we even be able to create a bond while the interface is active?
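In case it helps frame that second question, here is what I assume the flow looks like at the xe level (a sketch only, with placeholder UUIDs; whether XAPI happily moves the management interface onto the bond is exactly the part we'd like confirmed):

```sh
# Create the network the bond will live on
xe network-create name-label="bond0 (LACP)"

# Find the PIF UUIDs of the two 10GB interfaces to bond
xe pif-list params=uuid,device,host-name-label

# Create the LACP bond from those PIFs; one of them currently carries
# the management interface
xe bond-create network-uuid=<network-uuid> pif-uuids=<pif1-uuid>,<pif2-uuid> mode=lacp
```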
We have our testbench here and are happy to run some individual tests, but we're hoping the XCP community can assist us with some tips considering the above.
-
RE: XOSAN, what a time to be alive!
@nikade thank you, we're on premium. I see it indicates optional on the price matrix, but I cannot see the optional costs anywhere?
-
RE: XOSAN, what a time to be alive!
@tjkreidl I'm beginning to wonder if I have XOSAN and XOSTOR confused? Are these different solutions? Also (we have a premium subscription), is this included in the XO license cost or are there other costs involved?
-
XOSAN, what a time to be alive!
I have to give it to your team at Vates, you guys really bring out amazing features!
In your recent blog I happened to read about XOSAN, and although it's not a new service, it is an amazing feature!
My understanding is that with XOSAN I can pool my local storages together and data is replicated to the other pool members' storage, leaving a copy of that data on all of the hosts.
Can you exclude hosts, or rather, do I select which local storages to pool together?
-
RE: Shared Storage Redundancy Testing
@olivierlambert, we're simulating the PBD disconnect to see what would happen in production. The NAS was shut down (albeit with the VMs still running); we then force-shutdown the VMs.
Running xe pbd-unplug is stuck (and I assume this is likely due to dom0 being unable to umount the now-stale NFS mount point). This could normally be resolved with a lazy unmount (if one has access to dom0), but obviously we only interact with it through XAPI, so I'm not sure if there is an option to achieve this?
What we're trying to do is avoid a reboot if a NAS fails (as it may affect the entire pool and not just 1 host). Any ideas?
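For context, this is the kind of sequence that hangs (placeholder UUIDs; the unplug is the step that blocks on the stale mount):

```sh
# List the PBDs of the affected NFS SR and see which hosts still
# think they are attached
xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,currently-attached

# This is the call that gets stuck while dom0 waits on the stale NFS mount
xe pbd-unplug uuid=<pbd-uuid>
```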
-
RE: Benchmarks between XCP & TrueNAS
Considering the feedback we got yesterday (and not previously knowing TrueNAS supports iSCSI), we took a chance this morning and set up an iSCSI connection:
- Created a "file" iSCSI connection on the same pool with the SLOG
- Set it up in XCP / XO
- Migrated a VM to the iSCSI SR
And wow, what a difference: jumping from approximately 44.6MB/s to 76MB/s, and IOPS jumping from 10.9k to 17.8k.
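(For reference, numbers like these usually come from an in-VM disk benchmark; a typical fio invocation for this kind of comparison, purely illustrative and not necessarily the exact tool or parameters used here, would be:)

```sh
# Illustrative only - random 4k reads against a test file inside the VM
fio --name=randread --rw=randread --bs=4k --size=4G \
    --iodepth=32 --ioengine=libaio --direct=1 \
    --filename=/tmp/fio-test --runtime=60 --time_based
```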
This leads me to believe that if we designate a subsection of the pool in TrueNAS for iSCSI, we will be able to reap the benefits of both by leaving "data" disks on the NFS (so that we can benefit from thin provisioning) and putting disks within the iSCSI SR (understanding that they will be full VDIs).
Questions:
- With regards to backups / delta replication, can I mix / match backups where some disks are VHD (thin) and some are full LVM? I assume that the first backup will always be the longest (as it will need to back up the full, say, 50GB of the disk instead of the thin file) - will this impact delta replication / backups?
- Any risks with the intended setup? I can only assume that other people might be doing something similar?
I was really amazed by the simplicity of the iSCSI setup between TrueNAS and XCP - we have an EqualLogic and I attempted to set up iSCSI on it, but it was extremely difficult if you don't understand the semantics involved.
-
RE: Benchmarks between XCP & TrueNAS
@nikade we'll be doing it right away, and given that I now know there are potential performance challenges I can address, I'd rather give it a go. Thank you all for your feedback so far, a huge relief!
-
RE: Benchmarks between XCP & TrueNAS
@john-c thank you for this info, I will investigate it tonight.
(Not wanting to misuse the XO forum for TrueNAS questions.) We're moving 4x TrueNAS servers into production next week. We have a couple of TrueNAS Core servers already and have been very happy with their reliability, and we opted to install TrueNAS Core on the new servers simply on the basis of "what you know is what you know". It has, however, come to my attention that TrueNAS Core is effectively "end of life".
Based on what you are sharing now, it would be a stupid idea for us to continue using Core if better performance is forecast for our use case. We don't use the TrueNAS servers for anything other than NFS shares (couldn't care less about the rest of the apps / plugins etc.).
Are you perhaps using Scale for NFS VM sharing? And if so, would you recommend we onboard ourselves with Scale rather than Core?
-
RE: Benchmarks between XCP & TrueNAS
@nikade thank you so much, this one would have given me a fair amount of sleepless nights for the next week! I knew NFS would in most cases be the most viable solution, but as my experience has mostly been with local storage, running some benchmarks had me a bit worried.
From where we are now (local storage), the jump to network storage is the obvious choice (redundancy, replication, being able to run shared pools instead of independent hosts, etc.). We fortunately don't have too many high-intensity workloads, and we are spreading the load across different TrueNAS servers instead of a single pool (not that it would probably make much of a difference, as it's not the disks that are being throttled but rather the connection to the disks).
Just something worth noting with regards to the SMAPIv1 integration: we ran a test directly on one of the hosts with an NFS share and the benchmark was about the same. I am not sure if SMAPIv3 would improve this, maybe something @olivierlambert and team can confirm?
Long story short, I would rather offer a slower but more reliable solution than something fast whose recovery time suffers when there's trouble.