-
@ronan-a said in XOSTOR hyperconvergence preview:
@Swen This tool is useful to dump all key-values; there is no interpretation during dump calls: all values are strings. The metadata key is special: it contains a JSON object dump, and the quotes are escaped by the smapi driver to store the object as a string.
I suppose we can probably add an option to "resolve" the values to the right types, like the driver itself does.
It would be a great help to add an option to produce some kind of JSON output. With that, you could copy & paste it into a JSON validator for troubleshooting. I find the default output hard to read at the moment when using several volumes.
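Until such an option exists, the raw dump can be massaged into JSON externally. A minimal sketch, assuming the tool prints one `key = value` pair per line (a hypothetical format; adapt the separator to the tool's real output):

```shell
# Hypothetical dump output, one "key = value" pair per line.
dump='volume/a44a5d25 = xcp-volume-a44a5d25
volume/b1c2d3e4 = xcp-volume-b1c2d3e4'

# Convert the pairs into a single JSON object with jq,
# so the result can be pasted into any JSON validator.
printf '%s\n' "$dump" | jq -Rn '
  [inputs
   | select(length > 0)
   | capture("(?<k>[^=]+) = (?<v>.*)")
   | {(.k): .v}]
  | add'
```

Note this treats every value as a string, just like the dump itself; resolving types would still need driver knowledge.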
-
@Swen said in XOSTOR hyperconvergence preview:
@ronan-a: I am playing around with XCP-ng, LINSTOR and CloudStack. Sometimes when I create a new VM I run into this error: The VDI is not available
CS automatically retries after this error, and then it works and the new VM starts. CS uses a template, which is also on the LINSTOR SR, to create new VMs.
I attached the SMlog of the host.
SMlog.txt
-
Ok, I got it:
Mar 29 14:46:52 pc-xcp21 SM: [8299] ['/bin/dd', 'if=/dev/zero', 'of=/dev/drbd/by-res/xcp-volume-a44a5d25-24a8-4f83-8b74-63fe36d9ec44/0', 'bs=1', 'seek=5268045312', 'count=512']
Mar 29 14:46:52 pc-xcp21 SM: [8299] FAILED in util.pread: (rc 1) stdout: '', stderr: '/bin/dd: '/dev/drbd/by-res/xcp-volume-a44a5d25-24a8-4f83-8b74-63fe36d9ec44/0': cannot seek: Invalid argument
Mar 29 14:46:52 pc-xcp21 SM: [8299] 0+0 records in
Mar 29 14:46:52 pc-xcp21 SM: [8299] 0+0 records out
Mar 29 14:46:52 pc-xcp21 SM: [8299] 0 bytes (0 B) copied, 0.0013104 s, 0.0 kB/s
Mar 29 14:46:52 pc-xcp21 SM: [8299] '
It's related to this trace. The problem is fixed in the latest LINBIT packages; I haven't synced them to our own repository yet.
-
@Swen said in XOSTOR hyperconvergence preview:
Perfect, thank you for fixing it. Is this fix already part of the code I download when I install from scratch?
Not yet; I will probably add other fixes first.
-
@ronan-a If you want me to test some of your fixes, please don't hesitate.
-
Hello all,
I got it working just fine so far in all lab tests, but one thing I couldn't find here or in other posts is how to use a dedicated storage network other than the management network. Can it be modified? In this lab we have 2 hosts with 2 network cards each: one for management and external VM traffic, and the second should be exclusively for storage, since it is much faster.
-
@andersonalipio said in XOSTOR hyperconvergence preview:
Hello all,
I got it working just fine so far in all lab tests, but one thing I couldn't find here or in other posts is how to use a dedicated storage network other than the management network. Can it be modified? In this lab we have 2 hosts with 2 network cards each: one for management and external VM traffic, and the second should be exclusively for storage, since it is much faster.
We are using a separate network in our lab. What we do is this:
- get the node list from the running controller via
linstor node list
- take a look at a node's interface list via
linstor node interface list <node name>
- modify each node's interface via
linstor node interface modify <node name> default --ip <ip>
- check the addresses via
linstor node list
Hope that helps!
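The steps above can be sketched as a small loop; the node names and storage-network IPs below are examples, so substitute your own mapping before running it:

```shell
# Sketch: point every node's "default" LINSTOR interface
# at its storage-network IP (example names and addresses).
declare -A STORAGE_IP=(
  [node1]=10.0.0.21
  [node2]=10.0.0.22
)

for node in "${!STORAGE_IP[@]}"; do
  linstor node interface modify "$node" default --ip "${STORAGE_IP[$node]}"
done

# Verify that the addresses changed.
linstor node list
```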
-
@Swen Thanks bud! It did the trick!
I ran the interface modify commands on the master only; it changed all hosts online, with guest VMs running and no downtime at all!
-
Is it possible to only use 2 hosts for XOSTOR?
-
@TheiLLeniumStudios It should be possible, but it's not recommended: you can end up in a split-brain scenario.
-
We do not want to support 2 hosts for now. In theory it can work if you add a 3rd machine acting as a "tie breaker", but it's more complex to set up. However, for a home lab, that should be doable.
-
@olivierlambert can you please provide an update, or better, a roadmap for the LINSTOR implementation in XCP-ng? I find it hard to tell what status this project is in at the moment. As you know, we are really looking forward to using it in production with our CloudStack installation. Thx for any news.
-
We are close to a first release (at least an RC). That will be CLI-only, but we already have plans to replace the XOSAN UI in Xen Orchestra with XOSTOR.
-
@olivierlambert thx for the quick reply! Does "close" mean days, weeks or months?
-
Weeks for the RC, I think.
-
@olivierlambert will XOSTOR support deduplication?
-
Not yet, but I think I've read that LINSTOR supports VDO, so it's possible as a future addition.
-
@ronan-a did you test some LINSTOR properties like these?
'DrbdOptions/auto-diskful': makes a resource diskful if it was continuously diskless primary for X minutes
'DrbdOptions/auto-diskful-allow-cleanup': allows this resource to be cleaned up after toggle-disk + resync is finished
Thx for your feedback!
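For reference, LINSTOR properties like these are normally set on a resource definition. A hedged sketch, assuming the standard `set-property` subcommand applies here (the resource name and values are examples only):

```shell
# Example resource name; substitute one of your xcp-volume resources.
RES=xcp-volume-a44a5d25-24a8-4f83-8b74-63fe36d9ec44

# Promote the resource to diskful after it has been a diskless
# primary for 5 minutes (assumed: value is in minutes).
linstor resource-definition set-property "$RES" DrbdOptions/auto-diskful 5

# Allow automatic cleanup after toggle-disk + resync finishes
# (assumed boolean format; check the LINSTOR docs for the exact value).
linstor resource-definition set-property "$RES" DrbdOptions/auto-diskful-allow-cleanup true
```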
-
@Swen I suppose that can work with our driver. Unfortunately I haven't tested it.
It could be useful, but we would have to see what impact it has on the DRBD network: for example, a bad case where a chain of diskless VHDs is suddenly activated on a host.
-
@olivierlambert
Hi, is it possible to create more than one LINSTOR SR in a pool?
I have this error:
Error code: SR_BACKEND_FAILURE_5006
Error parameters: , LINSTOR SR creation error [opterr=LINSTOR SR must be unique in a pool],
Also, is it possible to have a hybrid SSD+HDD volume (cache/auto-tiering)?