@olivierlambert
I meant the source for this package: xcp-ng-xapi-storage-volume-zfsvol,
so that we can see how this new driver is implemented.
Can you provide a link to the GitHub repo where we can find the source code of this SMAPIv3 driver?
Thanks. The problem is that we don't have an NFS datastore to use, so we had to do export/import.
@Carello
We also didn't get it to work with XO (vSphere 7 and 8). We used the Xen Conversion Manager VM (from XenServer) to do the conversion (with XCP-ng Center). This worked most of the time.
When it did not work, we used Clonezilla.
Hi!
How can I pass through a serial port from the XCP-ng host to a VM?
PCI passthrough is no problem, but I didn't find an example of how to do it with stty.
Thanks, Franz
I would install the guest tools right from the start, after the base install of the OS.
Create at least 3 nodes, install them as you like with guest tools, and then install the first RKE2 node. Read the docs on how to configure MetalLB on RKE2; you have to make a small config tweak in the nginx ingress controller config.
Then add the other nodes.
To manage RKE2 I would use OpenLens.
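To make the MetalLB step above more concrete, here is a minimal sketch under my assumptions: the address range is a placeholder you must adapt to your network, and the ingress tweak shown is (to my recollection) an RKE2 HelmChartConfig that switches the bundled ingress-nginx service to type LoadBalancer, so check the current RKE2 and MetalLB docs before applying:

```yaml
# MetalLB (v0.13+ CRD style): an address pool plus an L2 advertisement.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool          # placeholder name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # placeholder range, adapt to your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
---
# RKE2 ingress tweak (assumption: drop this file into
# /var/lib/rancher/rke2/server/manifests/ on a server node):
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      service:
        enabled: true
        type: LoadBalancer   # MetalLB then assigns it an IP from the pool
```

With this in place, the ingress controller gets an external IP from the pool instead of relying on NodePorts.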
You can use democratic-csi, but be aware that you are completely trusting TrueNAS and this open-source project with your data. I don't think I would go that route if I were not experienced with k8s.
I would use the NFS provisioner, and once everything works fine and you have a solid CSI-enabled backup, you can add democratic-csi to the mix.
For us, backup and restore was the biggest problem. In theory everything seems easy with K10 or Velero, but if you completely shoot your cluster you will have a very hard time.
To be honest, after six months and several installations we gave up on k8s and migrated our customers and our internal IT to a setup where we use an openSUSE MicroOS VM for every docker-compose project. We now have approximately the same number of VMs as we had namespaces, but with the benefit of complete control over resources with very little overhead. And we have the benefit of optimal backup and restore.
K8s bit me quite a bit too often.
Hi!
You can use the NFS provisioner: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
It provisions persistent volumes as subdirectories on an NFS share. This is a good solution if you have highly available NFS storage.
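A minimal sketch of how the provisioner is consumed, assuming it was already installed: the class name `nfs-client` is just a convention, and the `provisioner` string must match whatever name was set at install time (the value below is the chart's usual default, verify it in your release):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: cluster.local/nfs-subdir-external-provisioner  # must match your install
parameters:
  archiveOnDelete: "false"   # delete the subdirectory when the PV is released
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany          # NFS supports shared read-write mounts
  resources:
    requests:
      storage: 1Gi
```

Each bound claim then shows up as its own subdirectory on the NFS export, which makes the data easy to inspect and back up from outside the cluster.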
k8s really can be very problematic, so I would recommend using standard solutions as much as possible and as few vendor solutions (like CSI drivers) as possible. We had a really bad time with the VMware CSI driver, for example.
If you would like a capable k8s distribution, I would recommend RKE2 (from Rancher); it is free and easy to install.
As load balancer we used MetalLB, which is quite standard and very easy to configure.
If you run the ingress controller as a NodePort service, you must have an external load balancer.
We are using it exclusively for all our customers. Enterprise level. Not as easy to configure as Veeam; more complex, but much more capable.
We mostly do VM backups for DR, plus additional agent-based backups, because the application agents are very good, support a lot of different applications, and do not cost additional licenses when used in VMs.
I am not a big fan of app-aware backups that rely only on the VM backups.
But to be honest, I would really like to see a CBT-enabled XOA backup for the many tiny customers who cannot afford a CommVault installation.
For now I have coded my own backup, which takes Xen VM snapshots and does raw backups of the VDIs to Kopia (via stdin), which does the dedup to S3. So we have dedup and save a lot of space; we have to read the whole VDI for the incrementals anyway.
@olivierlambert
No. A proxy VM (Windows) has to be deployed in the pool. Nothing in dom0.
To my understanding, CommVault mounts the snapshot VDIs to the proxy VM and does the backup this way. They keep track of the block changes on the proxy VM.
In the simplest setup, the CommVault server VM itself can be the proxy VM. The only requirement is that it has access to all SRs where the VMs to back up reside.
You can read the CommVault docs on the internet; it is described very well.