No, I am not bypassing cloud-init; I am using a working template that has a local cloud config, which I was not able to get working on my own. If you use the template, you will see two drives under Disks, like below.
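
You can verify the same thing from the CLI instead of the XO Disks tab; a quick sketch, where the VM name label is a placeholder:

```
# List the disks attached to the VM; you should see the root disk
# plus the small cloud-init config drive as a second entry
xe vm-disk-list vm=debian-cloudinit
```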

@eperez539 just install anything you want; for example, I added Docker and Kubernetes and converted it to a template, and now when I create a new VM it retains everything.
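
A rough sketch of what I ran inside the VM before converting it back to a template (the Kubernetes apt repository setup is elided here; configure it first per the upstream install docs):

```
# inside the fresh VM created from the Hub template
sudo apt-get update
sudo apt-get install -y docker.io

# Kubernetes tooling (assumes the upstream Kubernetes apt repo
# is already configured as per the official install docs)
sudo apt-get install -y kubelet kubeadm kubectl

# reset cloud-init state so the next VM created from the template
# runs cloud-init again from scratch
sudo cloud-init clean --logs
sudo shutdown -h now
# then in Xen Orchestra: VM -> Advanced -> Convert to template
```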
@eperez539 this guide won't work. This is what I did: installed XOA, created the VM from the Hub, modified it, and converted it back to a template.
@olivierlambert - ok, I will continue troubleshooting. The guide is pretty basic, i.e. install plain vanilla Debian and install cloud-init, so I was not sure what could go wrong.
@olivierlambert I will certainly try that; however, I wanted to customize it for a k8s cluster, so I was wondering if the old guide needs updating at all.
@olivierlambert I wish; the Community Edition does not have Hub support, I believe.
@olivierlambert - I was trying the Debian cloud template for XenServer with Debian 10. The installation process mostly works, except that 'dpkg-reconfigure cloud-init' does not bring up the data source selection window. I also read the XO cloud-init guide, which talks about the OpenStack vs NoCloud config drive; I have tried the defaults and the Cloud config tab, but the VM does not pick up the cloud-init settings. Not sure if I should start a new topic, but I am having a similar issue.
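
For the missing dialog, one thing worth trying is setting the datasource list non-interactively; a sketch, assuming the Debian cloud-init package still exposes the cloud-init/datasources debconf question (XO's config drive is the NoCloud type):

```
# preseed the datasource selection instead of relying on the dialog
echo 'cloud-init cloud-init/datasources multiselect NoCloud, None' | sudo debconf-set-selections
sudo dpkg-reconfigure -f noninteractive cloud-init

# equivalently, the list can be written straight into a config file
echo 'datasource_list: [ NoCloud, None ]' | sudo tee /etc/cloud/cloud.cfg.d/90_dpkg.cfg
```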
@r1 the way rook-ceph works is that you pass a raw disk to the Debian host, and then it creates the OSDs and the cluster. I have a few issues with this:
Instead, if we had an XCP-ng-native local cluster that could be used by VMs as a local block-type SR, that would simplify the implementation many times over.
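
For context, this is roughly how rook-ceph consumes the raw disks I mentioned; a minimal sketch, where the node name, device name, and Ceph image version are all assumptions:

```
cat <<'EOF' | kubectl apply -f -
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.9   # version is an assumption
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  storage:
    useAllNodes: false
    nodes:
      - name: k8s-node-1       # placeholder node name
        devices:
          - name: xvdb         # the raw disk passed through to the Debian VM
EOF
```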
@r1 I am not sure I understand the question, but let me explain how it's set up today:
I hope I was able to explain what I was trying to achieve, but it's never that simple, lol.
@olivierlambert in the VMs, and yes, I am looking for something that is supported and does not break when upgrading. I have alternatives now that I can use, but hyperconverged storage like XOSAN is the best way to handle it.
@olivierlambert hyperconvergence - that's the goal.
@olivierlambert I have been following it and installed it on a k8s cluster using Heketi; I don't know much more than that. My goal is to create a replica of each SSD, exposed as an SR to the VMs.
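
For reference, this is the shape of replicated volume I have in mind; a sketch with placeholder hostnames and brick paths, one brick per SSD:

```
# one brick per SSD on each of two nodes, mirrored (replica 2);
# note replica 2 is prone to split-brain, replica 3 or an arbiter is safer
gluster volume create k8s-vol replica 2 \
  node1:/data/ssd1/brick node2:/data/ssd1/brick
gluster volume start k8s-vol
gluster volume info k8s-vol
```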
@olivierlambert that's great to hear! Is there a guide that I can look into?
@r1 I have multiple Debian VMs running as worker nodes for my k8s cluster. For most apps, the config and data live on an NFS share; only a few apps require block storage due to internal database locking requirements. Fundamentally, apps can start on any of the k8s nodes when restarted, so if an app needs access to block storage, it should be available to every node. Another requirement is that it needs to be as fast as possible, so Gluster or Ceph running over attached SSDs would do the job. Currently I am running rook-ceph within k8s, and migrating it is a headache, hence I am evaluating other options.
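
To make the requirement concrete, this is the kind of claim those apps make; a sketch where the storageClassName is an assumption (it is whatever your Ceph/Gluster provisioner registers). Because the volume is network-backed, any worker node can attach it when the pod is rescheduled:

```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-db
spec:
  accessModes:
    - ReadWriteOnce          # single writer, but attachable from any node
  storageClassName: rook-ceph-block   # placeholder provisioner class
  resources:
    requests:
      storage: 10Gi
EOF
```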
@jmccoy555 I don't like block storage either; all my VMs run off an NVMe NAS over a 10 Gig network, and all my k8s apps use dynamically provisioned NFS PVs/PVCs. Unfortunately, some of the apps today are not cloud native and need block storage for their built-in database. I looked at CephFS and it looks like another option besides NFS; I will test the performance at some point...
@jmccoy555 that does not meet my need. I have a k8s cluster and everything works fine, except that some apps require block storage, and it should be fast. I have a rook-ceph cluster running inside the k8s cluster, and I was looking to move it out so I don't have to worry about it during k8s cluster upgrades/migrations.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/002815.html
@olivierlambert any update on this topic? It's been almost a year now, so I am wondering: is it safe to use now?
Also, I stumbled upon a Ceph implementation using SMAPIv3; I believe this is a better option than GlusterFS?
https://xcp-ng.org/forum/topic/1151/ceph-qemu-dp-in-xcp-ng-7-6/12
@eangulus what I am doing is running 2 instances, one in a VM and another on an Intel NUC. I am only connected to one at a time, and only one runs the backups; if one goes down, I can use the other by connecting it to the pool.
I assumed that. Yes, I do have metadata, pool, daily delta, and weekly full backups on schedule.
The issue I faced was when the entire pool was destroyed, including the XO VM. I had to manually figure out which one of the backups (on the NFS share) was XO, import it using xcp-ng, then restore the other VMs. So now I have one instance running as a VM and the other on an Intel NUC...
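
The manual recovery looked roughly like this (paths and file names are placeholders; the full backups on the share are plain .xva files):

```
# mount the NFS backup share on a host in the (rebuilt) pool
mkdir -p /mnt/backups
mount -t nfs nas:/backups /mnt/backups

# import the XO full backup first, then boot it and restore the rest
xe vm-import filename=/mnt/backups/xo-server.xva
```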