<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[XOSTOR Creation Issues]]></title><description><![CDATA[<p dir="auto">I originally presented this in the Discord channel, but I think it may be better suited here so as not to flood the chat.</p>
<p dir="auto">First, I want to state that even though this is a lab and we're not a customer yet, the support and community have been phenomenal.</p>
<p dir="auto">We are testing XOSTOR in our lab. I got it running, but HA wouldn't enable, so I decided to burn it down and rebuild it. For whatever reason, XOSTOR will not build now, but I do have a log file. For clarity, I tried it via the CLI as well and got basically the same result. Here is the log file.</p>
<pre><code>xostor.create
{
  "description": "Test Virtual SAN Part 2",
  "disksByHost": {
    "8c2c3bff-71ab-491a-9240-66973b0f1fe0": [
      "/dev/sdb",
      "/dev/sdc"
    ],
    "872a4289-595b-4428-aaa5-2caa1e70162a": [
      "/dev/sdb",
      "/dev/sdc"
    ],
    "9a7c2fb9-8db8-4032-b7fb-e4c3206bfe9c": [
      "/dev/sdb",
      "/dev/sdc"
    ]
  },
  "name": "XCP Storage 2",
  "provisioning": "thin",
  "replication": 2
}
{
  "errors": [
    {
      "code": "LVM_ERROR(5)",
      "params": [
        "File descriptor 3 (/var/log/lvm-plugin.log) leaked on pvcreate invocation. Parent PID 18436: python
File descriptor 9 (/dev/urandom) leaked on pvcreate invocation. Parent PID 18436: python
  Can't initialize physical volume \"/dev/sdb\" of volume group \"linstor_group\" without -ff
  /dev/sdb: physical volume not initialized.
  Can't initialize physical volume \"/dev/sdc\" of volume group \"linstor_group\" without -ff
  /dev/sdc: physical volume not initialized.
",
        "",
        "",
        "[XO] This error can be triggered if one of the disks is a 'tapdevs' disk.",
        "[XO] This error can be triggered if one of the disks have children"
      ],
      "call": {
        "method": "host.call_plugin",
        "params": [
          "OpaqueRef:bfc91b0f-1edb-4962-a784-29c02b603bef",
          "lvm.py",
          "create_physical_volume",
          {
            "devices": "/dev/sdb,/dev/sdc",
            "ignore_existing_filesystems": "false",
            "force": "false"
          }
        ]
      }
    },
    {
      "code": "LVM_ERROR(5)",
      "params": [
        "File descriptor 3 (/var/log/lvm-plugin.log) leaked on pvcreate invocation. Parent PID 6553: python
File descriptor 9 (/dev/urandom) leaked on pvcreate invocation. Parent PID 6553: python
  Can't initialize physical volume \"/dev/sdb\" of volume group \"linstor_group\" without -ff
  /dev/sdb: physical volume not initialized.
  Can't initialize physical volume \"/dev/sdc\" of volume group \"linstor_group\" without -ff
  /dev/sdc: physical volume not initialized.
",
        "",
        "",
        "[XO] This error can be triggered if one of the disks is a 'tapdevs' disk.",
        "[XO] This error can be triggered if one of the disks have children"
      ],
      "call": {
        "method": "host.call_plugin",
        "params": [
          "OpaqueRef:629e58ed-eafb-49af-b45f-7f6c21d1458a",
          "lvm.py",
          "create_physical_volume",
          {
            "devices": "/dev/sdb,/dev/sdc",
            "ignore_existing_filesystems": "false",
            "force": "false"
          }
        ]
      }
    },
    {
      "code": "LVM_ERROR(5)",
      "params": [
        "File descriptor 3 (/var/log/lvm-plugin.log) leaked on pvcreate invocation. Parent PID 8643: python
File descriptor 9 (/dev/urandom) leaked on pvcreate invocation. Parent PID 8643: python
  Can't initialize physical volume \"/dev/sdb\" of volume group \"linstor_group\" without -ff
  /dev/sdb: physical volume not initialized.
  Can't initialize physical volume \"/dev/sdc\" of volume group \"linstor_group\" without -ff
  /dev/sdc: physical volume not initialized.
",
        "",
        "",
        "[XO] This error can be triggered if one of the disks is a 'tapdevs' disk.",
        "[XO] This error can be triggered if one of the disks have children"
      ],
      "call": {
        "method": "host.call_plugin",
        "params": [
          "OpaqueRef:51a2ab7e-f792-4aad-a613-ddbe0a03c9f7",
          "lvm.py",
          "create_physical_volume",
          {
            "devices": "/dev/sdb,/dev/sdc",
            "ignore_existing_filesystems": "false",
            "force": "false"
          }
        ]
      }
    }
  ],
  "message": "",
  "name": "Error",
  "stack": "Error: 
    at next (/usr/local/lib/node_modules/xo-server/node_modules/@vates/async-each/index.js:83:24)
    at onRejected (/usr/local/lib/node_modules/xo-server/node_modules/@vates/async-each/index.js:65:11)
    at onRejectedWrapper (/usr/local/lib/node_modules/xo-server/node_modules/@vates/async-each/index.js:67:41)"
}

</code></pre>
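<p dir="auto">For reference, the two "[XO]" hints in the log above point at leftover partitions, device-mapper children, or old signatures on the disks. A sketch of read-only checks to run on each host before retrying (standard <code>lsblk</code>/<code>pvs</code>/<code>wipefs</code> commands; printed here as a dry run, remove the <code>echo</code> to execute):</p>

```shell
# Dry run: print the read-only inspection commands for each data disk.
# None of these modify the disk; they only report leftover state.
for dev in /dev/sdb /dev/sdc; do
  echo "lsblk $dev"            # any children listed here will block pvcreate
  echo "pvs $dev"              # shows whether an old LVM PV label survives
  echo "wipefs --no-act $dev"  # lists on-disk signatures without touching them
done
```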
<p dir="auto">The only thing I have left to try is completely swapping out the 6TB HDDs for new ones.<br />
Thanks in advance!</p>
]]></description><link>https://xcp-ng.org/forum/topic/8340/xostor-creation-issues</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 15:30:49 GMT</lastBuildDate><atom:link href="https://xcp-ng.org/forum/topic/8340.rss" rel="self" type="application/rss+xml"/><pubDate>Thu, 08 Feb 2024 21:15:57 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to XOSTOR Creation Issues on Tue, 13 Feb 2024 21:19:09 GMT]]></title><description><![CDATA[<p dir="auto">Quick update. I ran this command for each drive on each host...</p>
<pre><code>wipefs --all --force /dev/sdX
</code></pre>
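<p dir="auto">The same wipe, scripted over both data disks (device names as used in this thread; shown as a dry run, since removing the <code>echo</code> makes it destructive):</p>

```shell
# Dry run: print the wipe command for each data disk on this host.
# Removing the leading 'echo' erases all filesystem, RAID, and LVM
# signatures on the device -- destructive, double-check the device names.
for dev in /dev/sdb /dev/sdc; do
  echo wipefs --all --force "$dev"
done
```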
<p dir="auto">Then I tried building the XOSTOR again. This time I got an error on the XOSTOR page saying that some random UUID already had XOSTOR on it, yet it built the XOSTOR anyway. I have no idea how or what happened, but it did.</p>
<p dir="auto">So I have my XOSTOR back.</p>
]]></description><link>https://xcp-ng.org/forum/post/71248</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/71248</guid><dc:creator><![CDATA[Midget]]></dc:creator><pubDate>Tue, 13 Feb 2024 21:19:09 GMT</pubDate></item><item><title><![CDATA[Reply to XOSTOR Creation Issues on Tue, 13 Feb 2024 21:14:06 GMT]]></title><description><![CDATA[<p dir="auto">So I burnt it all down. I thought it was going to go through, but it didn't create the XOSTOR. I do have this log, though...</p>
<pre><code>xostor.create
{
  "description": "Test Virtual SAN Part 2",
  "disksByHost": {
    "e9b5aa92-660c-4dad-98c7-97de52556f22": [
      "/dev/sdb",
      "/dev/sdc"
    ],
    "eb4cab8c-2234-4c7f-af84-d1b1494da60e": [
      "/dev/sdb",
      "/dev/sdc"
    ],
    "68b9dc54-0bf3-4dc0-854f-d4cdabb47c23": [
      "/dev/sdb",
      "/dev/sdc"
    ]
  },
  "name": "XCP Storage 2",
  "provisioning": "thick",
  "replication": 2
}
{
  "code": "SR_UNKNOWN_DRIVER",
  "params": [
    "linstor"
  ],
  "call": {
    "method": "SR.create",
    "params": [
      "e9b5aa92-660c-4dad-98c7-97de52556f22",
      {
        "group-name": "linstor_group/thin_device",
        "redundancy": "2",
        "provisioning": "thick"
      },
      0,
      "XCP Storage 2",
      "Test Virtual SAN Part 2",
      "linstor",
      "user",
      true,
      {}
    ]
  },
  "message": "SR_UNKNOWN_DRIVER(linstor)",
  "name": "XapiError",
  "stack": "XapiError: SR_UNKNOWN_DRIVER(linstor)
    at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)
    at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/transports/json-rpc.mjs:35:21
    at runNextTicks (node:internal/process/task_queues:60:5)
    at processImmediate (node:internal/timers:447:9)
    at process.callbackTrampoline (node:internal/async_hooks:130:17)"
}
</code></pre>
<p dir="auto">And after this I got an alert that the pool needed to be updated again. So I applied the updates, rebooted the hosts, and tried to create the XOSTOR again. This time I got this...</p>
<pre><code>xostor.create
{
  "description": "Test Virtual SAN Part 2",
  "disksByHost": {
    "e9b5aa92-660c-4dad-98c7-97de52556f22": [
      "/dev/sdb",
      "/dev/sdc"
    ],
    "eb4cab8c-2234-4c7f-af84-d1b1494da60e": [
      "/dev/sdb",
      "/dev/sdc"
    ],
    "68b9dc54-0bf3-4dc0-854f-d4cdabb47c23": [
      "/dev/sdb",
      "/dev/sdc"
    ]
  },
  "name": "XCP Storage 2",
  "provisioning": "thick",
  "replication": 2
}
{
  "errors": [
    {
      "code": "LVM_ERROR(5)",
      "params": [
        "File descriptor 3 (/var/log/lvm-plugin.log) leaked on pvcreate invocation. Parent PID 5262: python
File descriptor 9 (/dev/urandom) leaked on pvcreate invocation. Parent PID 5262: python
  Can't initialize physical volume \"/dev/sdb\" of volume group \"linstor_group\" without -ff
  /dev/sdb: physical volume not initialized.
  Can't initialize physical volume \"/dev/sdc\" of volume group \"linstor_group\" without -ff
  /dev/sdc: physical volume not initialized.
",
        "",
        "",
        "[XO] This error can be triggered if one of the disks is a 'tapdevs' disk.",
        "[XO] This error can be triggered if one of the disks have children"
      ],
      "call": {
        "method": "host.call_plugin",
        "params": [
          "OpaqueRef:fd2fcfdf-576b-4ea9-b4ac-20e91e1b4bbd",
          "lvm.py",
          "create_physical_volume",
          {
            "devices": "/dev/sdb,/dev/sdc",
            "ignore_existing_filesystems": "false",
            "force": "false"
          }
        ]
      }
    },
    {
      "code": "LVM_ERROR(5)",
      "params": [
        "File descriptor 3 (/var/log/lvm-plugin.log) leaked on pvcreate invocation. Parent PID 4884: python
File descriptor 9 (/dev/urandom) leaked on pvcreate invocation. Parent PID 4884: python
  Can't initialize physical volume \"/dev/sdb\" of volume group \"linstor_group\" without -ff
  /dev/sdb: physical volume not initialized.
  Can't initialize physical volume \"/dev/sdc\" of volume group \"linstor_group\" without -ff
  /dev/sdc: physical volume not initialized.
",
        "",
        "",
        "[XO] This error can be triggered if one of the disks is a 'tapdevs' disk.",
        "[XO] This error can be triggered if one of the disks have children"
      ],
      "call": {
        "method": "host.call_plugin",
        "params": [
          "OpaqueRef:057c701d-7d4a-4d59-8a36-db0a0ef65960",
          "lvm.py",
          "create_physical_volume",
          {
            "devices": "/dev/sdb,/dev/sdc",
            "ignore_existing_filesystems": "false",
            "force": "false"
          }
        ]
      }
    },
    {
      "code": "LVM_ERROR(5)",
      "params": [
        "File descriptor 3 (/var/log/lvm-plugin.log) leaked on pvcreate invocation. Parent PID 4623: python
File descriptor 9 (/dev/urandom) leaked on pvcreate invocation. Parent PID 4623: python
  Can't initialize physical volume \"/dev/sdb\" of volume group \"linstor_group\" without -ff
  /dev/sdb: physical volume not initialized.
  Can't initialize physical volume \"/dev/sdc\" of volume group \"linstor_group\" without -ff
  /dev/sdc: physical volume not initialized.
",
        "",
        "",
        "[XO] This error can be triggered if one of the disks is a 'tapdevs' disk.",
        "[XO] This error can be triggered if one of the disks have children"
      ],
      "call": {
        "method": "host.call_plugin",
        "params": [
          "OpaqueRef:48af9637-fc0f-402b-94da-64eac63d31f8",
          "lvm.py",
          "create_physical_volume",
          {
            "devices": "/dev/sdb,/dev/sdc",
            "ignore_existing_filesystems": "false",
            "force": "false"
          }
        ]
      }
    }
  ],
  "message": "",
  "name": "Error",
  "stack": "Error: 
    at next (/usr/local/lib/node_modules/xo-server/node_modules/@vates/async-each/index.js:83:24)
    at onRejected (/usr/local/lib/node_modules/xo-server/node_modules/@vates/async-each/index.js:65:11)
    at onRejectedWrapper (/usr/local/lib/node_modules/xo-server/node_modules/@vates/async-each/index.js:67:41)"
}
</code></pre>
]]></description><link>https://xcp-ng.org/forum/post/71247</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/71247</guid><dc:creator><![CDATA[Midget]]></dc:creator><pubDate>Tue, 13 Feb 2024 21:14:06 GMT</pubDate></item><item><title><![CDATA[Reply to XOSTOR Creation Issues on Tue, 13 Feb 2024 20:54:53 GMT]]></title><description><![CDATA[<p dir="auto">So I burnt it all down to ashes. Completely redid the storage. Reinstalled XCP-ng. Let's see what happens...</p>
]]></description><link>https://xcp-ng.org/forum/post/71242</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/71242</guid><dc:creator><![CDATA[Midget]]></dc:creator><pubDate>Tue, 13 Feb 2024 20:54:53 GMT</pubDate></item><item><title><![CDATA[Reply to XOSTOR Creation Issues on Tue, 13 Feb 2024 14:29:41 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/learningdaily" aria-label="Profile: learningdaily">@<bdi>learningdaily</bdi></a> I have to believe there is a way to fix this. Maybe once I get the time I will reload everything. It won't take long I guess.</p>
]]></description><link>https://xcp-ng.org/forum/post/71201</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/71201</guid><dc:creator><![CDATA[Midget]]></dc:creator><pubDate>Tue, 13 Feb 2024 14:29:41 GMT</pubDate></item><item><title><![CDATA[Reply to XOSTOR Creation Issues on Tue, 13 Feb 2024 13:23:29 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/midget" aria-label="Profile: Midget">@<bdi>Midget</bdi></a> I misunderstood; I thought you were referring to the linstor error and attempting to troubleshoot that. Reviewing the rest of your thread, that is not the case.</p>
<p dir="auto">You mentioned you attempted to burn down the XOSTOR and rebuild it. Here's the issue: your steps for burning down XOSTOR weren't complete.</p>
<p dir="auto">Simplest method, if you're okay with it: wipe all drives (no partitions, no data) on all of your XCP-ng hosts and rebuild the hosts from scratch from the ISO. That lets the installer lay out the storage as XOSTOR expects.</p>
<p dir="auto">Advanced method: if that seems too drastic a measure, you'll probably need to review the LINSTOR project documentation to find out how to fully remove the partially removed remnants. It involves deleting volume groups and logical volumes, and manually removing some linstor components.</p>
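<p dir="auto">A sketch of what that manual cleanup can look like on one host, based on the names seen in this thread (<code>linstor_group</code>, <code>/dev/sdb</code>, <code>/dev/sdc</code>); the exact steps vary per install, so the commands are printed as a dry run:</p>

```shell
# Dry run: print the cleanup commands; remove the 'echo' prefixes to
# execute (destructive). Order matters: VG first, then PV labels,
# then any remaining on-disk signatures.
echo vgremove -f linstor_group        # delete the leftover volume group
echo pvremove -ff /dev/sdb /dev/sdc   # clear the LVM physical-volume labels
echo wipefs --all /dev/sdb            # erase remaining signatures
echo wipefs --all /dev/sdc
```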
]]></description><link>https://xcp-ng.org/forum/post/71200</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/71200</guid><dc:creator><![CDATA[learningdaily]]></dc:creator><pubDate>Tue, 13 Feb 2024 13:23:29 GMT</pubDate></item><item><title><![CDATA[Reply to XOSTOR Creation Issues on Mon, 12 Feb 2024 14:01:05 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/learningdaily" aria-label="Profile: learningdaily">@<bdi>learningdaily</bdi></a> said in <a href="/forum/post/71100">XOSTOR Creation Issues</a>:</p>
<blockquote>
<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/midget" aria-label="Profile: Midget">@<bdi>Midget</bdi></a> I believe the linstor manager only runs on one XCP-ng Host at a time. So if you ssh to each of your XCP-ng hosts, and run the command:</p>
<pre><code>linstor resource list
</code></pre>
<p dir="auto">The XCP-ng Host running the linstor manager would display the expected results. The other XCP-ng Hosts will display an error similar to what you saw.</p>
<p dir="auto">Prior to implementing your fix, did you attempt the command from each XCP-ng host and what were the results?</p>
<p dir="auto">I'd recommend undoing the 127.0.0.1 change and attempting from each host.</p>
</blockquote>
<p dir="auto">I haven't implemented any fix. It was just something I read.</p>
]]></description><link>https://xcp-ng.org/forum/post/71127</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/71127</guid><dc:creator><![CDATA[Midget]]></dc:creator><pubDate>Mon, 12 Feb 2024 14:01:05 GMT</pubDate></item><item><title><![CDATA[Reply to XOSTOR Creation Issues on Sat, 10 Feb 2024 19:23:53 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/midget" aria-label="Profile: Midget">@<bdi>Midget</bdi></a> I believe the linstor manager only runs on one XCP-ng Host at a time. So if you ssh to each of your XCP-ng hosts, and run the command:</p>
<pre><code>linstor resource list
</code></pre>
<p dir="auto">The XCP-ng Host running the linstor manager would display the expected results. The other XCP-ng Hosts will display an error similar to what you saw.</p>
<p dir="auto">Prior to implementing your fix, did you attempt the command from each XCP-ng host and what were the results?</p>
<p dir="auto">I'd recommend undoing the 127.0.0.1 change and attempting from each host.</p>
]]></description><link>https://xcp-ng.org/forum/post/71100</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/71100</guid><dc:creator><![CDATA[learningdaily]]></dc:creator><pubDate>Sat, 10 Feb 2024 19:23:53 GMT</pubDate></item><item><title><![CDATA[Reply to XOSTOR Creation Issues on Fri, 09 Feb 2024 17:39:58 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/ronan-a" aria-label="Profile: ronan-a">@<bdi>ronan-a</bdi></a> will come here as soon as he can (he's busy rebuilding a more recent version of DRBD <img src="https://xcp-ng.org/forum/assets/plugins/nodebb-plugin-emoji/emoji/android/1f604.png?v=0594cb2b96d" class="not-responsive emoji emoji-android emoji--smile" style="height:23px;width:auto;vertical-align:middle" title=":D" alt="😄" /> )</p>
]]></description><link>https://xcp-ng.org/forum/post/71073</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/71073</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Fri, 09 Feb 2024 17:39:58 GMT</pubDate></item><item><title><![CDATA[Reply to XOSTOR Creation Issues on Fri, 09 Feb 2024 17:35:54 GMT]]></title><description><![CDATA[<p dir="auto">A small update, not sure if it matters because the XOSTOR isn't built. But I got this while poking around...</p>
<pre><code>[12:09 xcp-ng2 ~]# linstor resource list
Error: Unable to connect to linstor://localhost:3370: [Errno 99] Cannot assign requested address
</code></pre>
<p dir="auto">So something from linstor still exists on the hosts. Again, not sure whether that matters.</p>
<p dir="auto"><strong>EDIT</strong><br />
I did find multiple threads of people having the same issue, and the answer was to bind to 127.0.0.1 instead of localhost.</p>
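<p dir="auto">For anyone who wants to try that from the client side first: the linstor client can be pointed at an explicit controller address through its config file, which I believe is <code>/etc/linstor/linstor-client.conf</code> (an assumption on my part; check the LINSTOR client docs for your version). Something like:</p>

```ini
[global]
controllers=linstor://127.0.0.1
```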
]]></description><link>https://xcp-ng.org/forum/post/71072</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/71072</guid><dc:creator><![CDATA[Midget]]></dc:creator><pubDate>Fri, 09 Feb 2024 17:35:54 GMT</pubDate></item><item><title><![CDATA[Reply to XOSTOR Creation Issues on Fri, 09 Feb 2024 15:25:45 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/danp" aria-label="Profile: Danp">@<bdi>Danp</bdi></a> said in <a href="/forum/post/71015">XOSTOR Creation Issues</a>:</p>
<blockquote>
<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/midget" aria-label="Profile: Midget">@<bdi>Midget</bdi></a> said in <a href="/forum/post/71012">XOSTOR Creation Issues</a>:</p>
<blockquote>
<p dir="auto">linstor_group</p>
</blockquote>
<p dir="auto">It thinks that there's still a volume group with this name. You can try wiping the drive with wipefs to remove the previous partition.</p>
<p dir="auto">Regards, Dan</p>
</blockquote>
<p dir="auto">Thanks for the input. I ran this command across all drives...</p>
<pre><code>wipefs --all --force /dev/sdb
</code></pre>
<p dir="auto">I then rebooted each host and attempted the XOSTOR creation again. This is the log file I got...</p>
<pre><code>xostor.create
{
  "description": "Test Virtual SAN Part 2",
  "disksByHost": {
    "8c2c3bff-71ab-491a-9240-66973b0f1fe0": [
      "/dev/sdb",
      "/dev/sdc"
    ],
    "872a4289-595b-4428-aaa5-2caa1e70162a": [
      "/dev/sdb",
      "/dev/sdc"
    ],
    "9a7c2fb9-8db8-4032-b7fb-e4c3206bfe9c": [
      "/dev/sdb",
      "/dev/sdc"
    ]
  },
  "name": "XCP Storage 2",
  "provisioning": "thick",
  "replication": 2
}
{
  "code": "SR_BACKEND_FAILURE_5006",
  "params": [
    "",
    "LINSTOR SR creation error [opterr=Failed to remove old node `xcp-ng4`: No connection to satellite 'xcp-ng2', No connection to satellite 'XCP-ng3', No connection to satellite 'xcp-ng4', No connection to satellite 'xcp-ng2', No connection to satellite 'XCP-ng3', No connection to satellite 'xcp-ng4']",
    ""
  ],
  "call": {
    "method": "SR.create",
    "params": [
      "8c2c3bff-71ab-491a-9240-66973b0f1fe0",
      {
        "group-name": "linstor_group/thin_device",
        "redundancy": "2",
        "provisioning": "thick"
      },
      0,
      "XCP Storage 2",
      "Test Virtual SAN Part 2",
      "linstor",
      "user",
      true,
      {}
    ]
  },
  "message": "SR_BACKEND_FAILURE_5006(, LINSTOR SR creation error [opterr=Failed to remove old node `xcp-ng4`: No connection to satellite 'xcp-ng2', No connection to satellite 'XCP-ng3', No connection to satellite 'xcp-ng4', No connection to satellite 'xcp-ng2', No connection to satellite 'XCP-ng3', No connection to satellite 'xcp-ng4'], )",
  "name": "XapiError",
  "stack": "XapiError: SR_BACKEND_FAILURE_5006(, LINSTOR SR creation error [opterr=Failed to remove old node `xcp-ng4`: No connection to satellite 'xcp-ng2', No connection to satellite 'XCP-ng3', No connection to satellite 'xcp-ng4', No connection to satellite 'xcp-ng2', No connection to satellite 'XCP-ng3', No connection to satellite 'xcp-ng4'], )
    at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)
    at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/transports/json-rpc.mjs:35:21
    at runNextTicks (node:internal/process/task_queues:60:5)
    at processImmediate (node:internal/timers:447:9)
    at process.callbackTrampoline (node:internal/async_hooks:130:17)"
}
</code></pre>
]]></description><link>https://xcp-ng.org/forum/post/71062</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/71062</guid><dc:creator><![CDATA[Midget]]></dc:creator><pubDate>Fri, 09 Feb 2024 15:25:45 GMT</pubDate></item><item><title><![CDATA[Reply to XOSTOR Creation Issues on Fri, 09 Feb 2024 09:12:22 GMT]]></title><description><![CDATA[<p dir="auto">Sounds like this, adding <a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/ronan-a" aria-label="Profile: ronan-a">@<bdi>ronan-a</bdi></a> in the loop too <img src="https://xcp-ng.org/forum/assets/plugins/nodebb-plugin-emoji/emoji/android/1f642.png?v=0594cb2b96d" class="not-responsive emoji emoji-android emoji--slightly_smiling_face" style="height:23px;width:auto;vertical-align:middle" title=":)" alt="🙂" /></p>
]]></description><link>https://xcp-ng.org/forum/post/71040</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/71040</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Fri, 09 Feb 2024 09:12:22 GMT</pubDate></item><item><title><![CDATA[Reply to XOSTOR Creation Issues on Thu, 08 Feb 2024 22:12:00 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/midget" aria-label="Profile: Midget">@<bdi>Midget</bdi></a> said in <a href="/forum/post/71012">XOSTOR Creation Issues</a>:</p>
<blockquote>
<p dir="auto">linstor_group</p>
</blockquote>
<p dir="auto">It thinks that there's still a volume group with this name. You can try wiping the drive with wipefs to remove the previous partition.</p>
<p dir="auto">Regards, Dan</p>
]]></description><link>https://xcp-ng.org/forum/post/71015</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/71015</guid><dc:creator><![CDATA[Danp]]></dc:creator><pubDate>Thu, 08 Feb 2024 22:12:00 GMT</pubDate></item></channel></rss>