-
@Swen said in XOSTOR hyperconvergence preview:
Great to know, thx for the info. Is there a reason not to use the same UUID in xcp-ng and LINSTOR? Does it make sense to add the VDI and/or VBD UUID to the output of the command?
The main reason is that you cannot rename a LINSTOR resource once it has been created, and we need to be able to do this to implement the snapshot feature. To work around that, a shared dictionary is used to map XAPI UUIDs to the LINSTOR resources.
Using the XAPI UUIDs as the LINSTOR resource names would hurt readability, because VDI UUIDs are renamed when a snapshot is created.
I don't see a good reason to add the VBD UUIDs to the dictionary. You already have the VDIs; you can use xe commands to fetch the other info.
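For example (just a sketch, the UUID here is a placeholder), given a VDI UUID from the dictionary you can fetch the attached VBDs and the VDI details with xe:
# List the VBDs attached to a given VDI:
xe vbd-list vdi-uuid=7ca7b184-ec9e-40bd-addc-082483f8e420
# Show a few fields of the VDI itself:
xe vdi-list uuid=7ca7b184-ec9e-40bd-addc-082483f8e420 params=name-label,virtual-size,sr-uuid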
-
@ronan-a sorry, I was just unsure whether you need more than the SMlog files.
I will send you the log files via mail because of their size.
-
@ronan-a said in XOSTOR hyperconvergence preview:
The main reason is that you cannot rename a LINSTOR resource once it has been created, and we need to be able to do this to implement the snapshot feature. To work around that, a shared dictionary is used to map XAPI UUIDs to the LINSTOR resources.
Using the XAPI UUIDs as the LINSTOR resource names would hurt readability, because VDI UUIDs are renamed when a snapshot is created.
I don't see a good reason to add the VBD UUIDs to the dictionary. You already have the VDIs; you can use xe commands to fetch the other info.
Ok, that makes sense. But what do you mean by "You already have the VDIs"? As far as I can see, the only mapping from the linstor-kv-tool output to the disk on xcp-ng is the name_label; is that correct?
-
@Swen said in XOSTOR hyperconvergence preview:
Ok, that makes sense. But what do you mean by "You already have the VDIs"? As far as I can see, the only mapping from the linstor-kv-tool output to the disk on xcp-ng is the name_label; is that correct?
No, you have the VDI UUIDs:
"7ca7b184-ec9e-40bd-addc-082483f8e420/volume-name": "xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8"
The first UUID here is the VDI.
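So, as a quick sketch reusing the entry above, you can resolve both sides of the mapping:
# XAPI side: which disk is this?
xe vdi-list uuid=7ca7b184-ec9e-40bd-addc-082483f8e420 params=name-label
# LINSTOR side: where does the volume live?
linstor resource list | grep xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8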
-
@ronan-a sorry, I totally missed that info.
-
@ronan-a said in XOSTOR hyperconvergence preview:
linstor-kv-tool -u xostor-2 -g xcp-sr-linstor_group_thin_device --dump-volumes -n xcp/volume
{
"7ca7b184-ec9e-40bd-addc-082483f8e420/metadata": "{"read_only": false, "snapshot_time": "", "vdi_type": "vhd", "snapshot_of": "", "name_label": "debian 11 hub disk", "name_description": "Created by XO", "type": "user", "metadata_of_pool": "", "is_a_snapshot": false}",
"7ca7b184-ec9e-40bd-addc-082483f8e420/not-exists": "0",
"7ca7b184-ec9e-40bd-addc-082483f8e420/volume-name": "xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8"
}
One more question regarding the output of this command, if you don't mind.
Can you explain why it masks all quotation marks? It looks like JSON, but it is not really JSON format. Are you open to reformatting the output? My goal is to be able to troubleshoot more easily and quickly.
-
@Swen This tool is useful to dump all key-value pairs; there is no interpretation during dump calls: all values are strings. And metadata is a special key containing a JSON object dump; the quotes are escaped by the SMAPI driver to store the object.
I suppose we can probably add an option to "resolve" the values with the right type, like what is done in the driver itself.
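In the meantime, a possible workaround (just a sketch, assuming jq is installed on the host): the dump itself is valid JSON, the quotes are only escaped inside the metadata strings, so you can let jq parse any value that is itself a JSON dump and leave plain strings untouched:
# Parse nested JSON values (like metadata), keep plain strings as-is:
linstor-kv-tool -u xostor-2 -g xcp-sr-linstor_group_thin_device --dump-volumes -n xcp/volume \
  | jq 'map_values(. as $v | try ($v | fromjson) catch $v)'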
-
@Swen said in XOSTOR hyperconvergence preview:
@olivierlambert I need to be more clear about this: when doing the sr-create for the linstor storage no error is shown, but the pbd will not be plugged on the pool master. On every other host in the cluster it works automatically. After doing a pbd-plug on the pool master, the SR will be plugged. No error is shown at all.
Ok, so I confirm that it is a problem of timing when retrieving the list of volumes (more precisely their sizes) using the LINSTOR API. I modified the driver to retry in case of failure, so it's not a big issue.
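The idea is simply to retry the query a few times instead of failing on the first error; conceptually it looks like this (a rough sketch, not the actual driver code):
# Retry the LINSTOR volume query a few times before giving up:
for attempt in 1 2 3 4 5; do
    linstor volume list && break
    sleep 2
done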
-
@ronan-a perfect, thank you for fixing it. Is this fix already part of the code I download to install it from scratch?
-
@ronan-a said in XOSTOR hyperconvergence preview:
@Swen This tool is useful to dump all key-value pairs; there is no interpretation during dump calls: all values are strings. And metadata is a special key containing a JSON object dump; the quotes are escaped by the SMAPI driver to store the object.
I suppose we can probably add an option to "resolve" the values with the right type, like what is done in the driver itself.
It would be a great help to add an option to create some kind of JSON output. With that you could copy & paste it into a JSON validator for troubleshooting. I find the default output hard to read at the moment when using several volumes.
-
@Swen said in XOSTOR hyperconvergence preview:
@ronan-a: I am playing around with xcp-ng, linstor and CloudStack. Sometimes when I create a new VM I run into this error: The VDI is not available
CS automatically retries after this error, and then it works and the new VM starts. CS uses a template, which is also on the linstor SR, to create new VMs.
I attached the SMlog of the host.
SMlog.txt
-
Ok, I got it:
Mar 29 14:46:52 pc-xcp21 SM: [8299] ['/bin/dd', 'if=/dev/zero', 'of=/dev/drbd/by-res/xcp-volume-a44a5d25-24a8-4f83-8b74-63fe36d9ec44/0', 'bs=1', 'seek=5268045312', 'count=512']
Mar 29 14:46:52 pc-xcp21 SM: [8299] FAILED in util.pread: (rc 1) stdout: '', stderr: '/bin/dd: '/dev/drbd/by-res/xcp-volume-a44a5d25-24a8-4f83-8b74-63fe36d9ec44/0': cannot seek: Invalid argument
Mar 29 14:46:52 pc-xcp21 SM: [8299] 0+0 records in
Mar 29 14:46:52 pc-xcp21 SM: [8299] 0+0 records out
Mar 29 14:46:52 pc-xcp21 SM: [8299] 0 bytes (0 B) copied, 0.0013104 s, 0.0 kB/s
Mar 29 14:46:52 pc-xcp21 SM: [8299] '
It's related to this trace; the problem is fixed in the latest LINBIT packages, I just haven't synced them to our own repository yet.
-
@Swen said in XOSTOR hyperconvergence preview:
perfect, thank you for fixing it. Is this fix already part of the code I download to install it from scratch?
Not yet; I will probably add other fixes first.
-
@ronan-a If you want me to test some of your fixes, please don't hesitate to ask.
-
Hello all,
I got it working just fine so far in all lab tests, but one thing I couldn't find here or in other posts is how to use a dedicated storage network other than the management network. Can it be modified? In this lab we have 2 hosts with 2 network cards each: one for management and external VM traffic, and the second should be exclusively for storage, since it is way faster.
-
@andersonalipio said in XOSTOR hyperconvergence preview:
Hello all,
I got it working just fine so far in all lab tests, but one thing I couldn't find here or in other posts is how to use a dedicated storage network other than the management network. Can it be modified? In this lab we have 2 hosts with 2 network cards each: one for management and external VM traffic, and the second should be exclusively for storage, since it is way faster.
We are using a separate network in our lab. What we do is this (a concrete example follows the list):
- get the node list from the running controller via
linstor node list
- take a look at the node interface list via
linstor node interface list <node name>
- modify each node's interface via
linstor node interface modify <node name> default --ip <ip>
- check the addresses again via
linstor node list
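As a concrete sketch of the modify step on a 3-node pool (node names and IPs are placeholders for your environment):
linstor node interface modify xcp-ng-node1 default --ip 10.10.10.11
linstor node interface modify xcp-ng-node2 default --ip 10.10.10.12
linstor node interface modify xcp-ng-node3 default --ip 10.10.10.13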
Hope that helps!
-
@Swen Thanks bud! It did the trick!
I ran the interface modify commands on the master only, and it changed all hosts online, with running guest VMs and no downtime at all!
-
Is it possible to only use 2 hosts for XOSTOR?
-
@TheiLLeniumStudios It should be possible, but it is not recommended: you can end up in a split-brain scenario.
-
We do not want to support 2 hosts for now. In theory, it can work if you add a 3rd machine acting as a "Tie Breaker", but it's more complex to set up. However, for a home lab, that should be doable.