<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[CEPH FS Storage Driver]]></title><description><![CDATA[<p dir="auto">As a side experiment, I was able to extract and test the CephFS kernel module for XCP-NG 7.4+ and mount a multi-mon Ceph cluster (Luminous).</p>
<p dir="auto">Once the ceph.ko module is in place, XCP-NG can mount a CephFS mount point much like NFS.</p>
<p dir="auto">e.g. in NFSSR we use <code>#mount.nfs4 addr:remotepath localpath</code>,</p>
<p dir="auto">while for CEPHSR we can use <code>#mount.ceph addr1,addr2,addr3,addr4:remotepath localpath</code>.</p>
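To make the parallel concrete, a hedged sketch of the two mounts as the SR driver would issue them (addresses, paths, and the SR uuid are placeholders, not from a real setup):

```shell
# NFS SR: one server address, one export path
mount.nfs4 192.0.2.10:/export/xcpsr /var/run/sr-mount/<sr-uuid>

# CephFS SR: a comma-separated list of monitor addresses, plus cephx auth options
mount.ceph 192.0.2.21,192.0.2.22,192.0.2.23:/xcpsr /var/run/sr-mount/<sr-uuid> \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```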
<p dir="auto">I'm currently looking at <code>NFSSR.py</code> to create <code>CEPHFSSR.py</code>, and will share it once ready. Meanwhile, if anyone wants to help by testing ceph.ko or developing <code>CEPHFSSR.py</code>, kindly ping here. Like EXT4 and XFS, this too will have some cleanup issues.</p>
<p dir="auto">If this works as expected, I'd request <a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> and <a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/borzel" aria-label="Profile: borzel">@<bdi>borzel</bdi></a> to look at possibilities of integrating CEPH FS option in XO/XC near NFS SR.</p>
<p dir="auto"><code>Note</code>: This is completely different from RBDSR, which is being worked on by <a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/rposudnevskiy" aria-label="Profile: rposudnevskiy">@<bdi>rposudnevskiy</bdi></a> and uses the RBD image protocol of Ceph.</p>
<p dir="auto"><code>Bonus</code>: If someone knows how nfs-ganesha fits into this, buzz in. e.g. each CephFS node could run an NFS server while each XCP-NG host mounts all of them using NFS 4.1 (pNFS), bypassing the need for ceph.ko. We could also rebuild the NFS module, as CONFIG_NFSD_PNFS is not set by default.</p>
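From the XCP-NG side, the ganesha variant would be just an NFS mount; a minimal sketch, assuming a hypothetical ganesha server on each CephFS node exporting a pseudo-path /cephfs (host names are made up):

```shell
# Mount one ganesha gateway over NFS 4.1 -- no ceph.ko needed on the XCP-NG host
mount -t nfs -o vers=4.1 ceph-node1:/cephfs /mnt/cephfs

# With pNFS the client could spread I/O across the gateways, but only once the
# server advertises layouts; CONFIG_NFSD_PNFS matters for the NFS *server*
# side, not for the client mount above.
```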
<p dir="auto"><code>Caveats</code> : CEPH FS caching is not fully explored.</p>
<p dir="auto"><code>State</code>: Ready. See post below for instructions on how to use.</p>
]]></description><link>https://xcp-ng.org/forum/topic/1006/ceph-fs-storage-driver</link><generator>RSS for Node</generator><lastBuildDate>Tue, 10 Mar 2026 00:09:07 GMT</lastBuildDate><atom:link href="https://xcp-ng.org/forum/topic/1006.rss" rel="self" type="application/rss+xml"/><pubDate>Sat, 23 Feb 2019 19:47:11 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to CEPH FS Storage Driver on Wed, 23 Nov 2022 19:35:56 GMT]]></title><description><![CDATA[<p dir="auto">We always try to work with Citrix ("XenServer" division now), to push things upstream and manage to get it merged.</p>
]]></description><link>https://xcp-ng.org/forum/post/55349</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/55349</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Wed, 23 Nov 2022 19:35:56 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Wed, 23 Nov 2022 17:21:56 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> so are you completely on your own release schedule now, or are you still tied to Citrix version releases? I've used you since the 7.x versions and have had zero issues. I still have one 6.5 host I'm going to migrate a VM off of; it was on local storage and quite large, and I didn't want to redo it because of the storage changes from 6 to 7. Then I'll have a pool with all 8.2 after I patch up my other 8.2 host, and then I'll rebuild the 6.5 to 8.2.</p>
]]></description><link>https://xcp-ng.org/forum/post/55347</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/55347</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Wed, 23 Nov 2022 17:21:56 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Wed, 23 Nov 2022 17:09:56 GMT]]></title><description><![CDATA[<p dir="auto">Okay, good to know. We hope to do an even better native integration on SMAPIv3, but the hardest part isn't writing the driver itself; it's improving SMAPIv3 itself to support what's missing (live storage migration and so on).</p>
]]></description><link>https://xcp-ng.org/forum/post/55345</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/55345</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Wed, 23 Nov 2022 17:09:56 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Wed, 23 Nov 2022 17:11:36 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> that is right, and it's up and running using the driver you guys built, taking a good load with no issues. Hats off to you and your team for making this work. I used NFS for exporting off my old Ceph system, but the importing is strictly on your native CephFS driver. On my old setup I was on XCP-ng 7.6 with NFS sitting over CephFS and just a gigabit network; it worked, but with no live movement at all. Now I can live migrate with no problems, with bonded dual 10 Gb fiber and a single 10 Gb fiber to the hosts.</p>
]]></description><link>https://xcp-ng.org/forum/post/55344</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/55344</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Wed, 23 Nov 2022 17:11:36 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Wed, 23 Nov 2022 16:55:59 GMT]]></title><description><![CDATA[<p dir="auto">So just to be sure I get it, you have a dedicated Ceph storage on dedicated hardware and you wanted to connect to it via XCP-ng without using NFS or iSCSI, right?</p>
]]></description><link>https://xcp-ng.org/forum/post/55342</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/55342</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Wed, 23 Nov 2022 16:55:59 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Wed, 23 Nov 2022 16:41:48 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> this is holding up quite well. I've pounded it good, doing 3 exports and an import simultaneously, and maintained about 60 MB/s writes while doing asynchronous reads and writes, since the imports and exports are on the same CephFS. I had an issue with the Ceph repository moving to the pool when I created the pool, and wound up hosing my pool manager trying to fix it; but after setting a new pool manager and rebuilding the other, it has been flawless since. I've built XO from source, and it has come a long way from the last time I looked at it. We went with a small vendor out of Canada called 45drives for the Ceph hardware and deployment; they have a nice Ansible-derived delivery and config package that is pretty slick, if anyone is looking for a supported solution.</p>
]]></description><link>https://xcp-ng.org/forum/post/55339</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/55339</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Wed, 23 Nov 2022 16:41:48 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Mon, 21 Nov 2022 16:25:39 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> never mind, I fixed it. I had forgotten to add the public side of my Ceph network to the new host; after that, it all scanned correctly. Thanks for being responsive, and a little education always helps.</p>
]]></description><link>https://xcp-ng.org/forum/post/55260</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/55260</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Mon, 21 Nov 2022 16:25:39 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Mon, 21 Nov 2022 16:21:06 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a></p>
<p dir="auto">Nov 21 10:20:11 xcp4-1 SM: [30250] vhd=/var/run/sr-mount/51b80ad1-820d-c29a-1f9c-a50d6454f927/*.vhd scan-error=-5 error-message='failure scanning target'<br />
Nov 21 10:20:11 xcp4-1 SM: [30250] scan failed: -5<br />
Nov 21 10:20:11 xcp4-1 SM: [30250] ', stderr: ''<br />
Nov 21 10:20:12 xcp4-1 SM: [30250] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/51b80ad1-820d-c29a-1f9c-a50d6454f927/*.vhd']<br />
Nov 21 10:20:12 xcp4-1 SM: [30250] FAILED in util.pread: (rc 5) stdout: 'vhd=/var/run/sr-mount/51b80ad1-820d-c29a-1f9c-a50d6454f927 scan-error=2 error-message='failure<br />
scanning target'<br />
Nov 21 10:20:12 xcp4-1 SM: [30250] vhd=/var/run/sr-mount/51b80ad1-820d-c29a-1f9c-a50d6454f927/*.vhd scan-error=-5 error-message='failure scanning target'<br />
Nov 21 10:20:12 xcp4-1 SM: [30250] scan failed: -5</p>
]]></description><link>https://xcp-ng.org/forum/post/55259</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/55259</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Mon, 21 Nov 2022 16:21:06 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Mon, 21 Nov 2022 16:04:50 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/scboley" aria-label="Profile: scboley">@<bdi>scboley</bdi></a> Storage related errors will be in SMlog</p>
]]></description><link>https://xcp-ng.org/forum/post/55257</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/55257</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Mon, 21 Nov 2022 16:04:50 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Mon, 21 Nov 2022 16:02:34 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> adding another host to the pool and it fails to connect to the ceph shared storage:</p>
<p dir="auto">Nov 21 09:57:48 xcp4-1 xapi: [debug||116026 /var/lib/xcp/xapi|SR.scan R:05af02328263|helpers] Waiting for up to 12.902806 seconds before retrying...<br />
Nov 21 09:57:59 xcp4-1 xapi: [debug||116027 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:79aefd48b34b created by task D:f32e5efdeec8<br />
Nov 21 09:57:59 xcp4-1 xapi: [ info||116027 /var/lib/xcp/xapi|session.logout D:67032978d90c|xapi_session] Session.destroy trackid=c8a5d1fe7e932298b267edb677909a4b<br />
Nov 21 09:57:59 xcp4-1 xapi: [debug||116028 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:0366d884ee46 created by task D:f32e5efdeec8<br />
Nov 21 09:57:59 xcp4-1 xapi: [ info||116028 /var/lib/xcp/xapi|session.slave_login D:b39585e0b07e|xapi_session] Session.create trackid=fc78c651286146c61742b0ca74212bb9 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49<br />
Nov 21 09:57:59 xcp4-1 xapi: [debug||116029 /var/lib/xcp/xapi||dummytaskhelper] task dis</p>
<p dir="auto">Nov 21 09:59:34 xcp4-1 xapi: [ info||116009 HTTPS 192.168.254.101-&gt;|Async.PBD.plug R:631710626e67|xapi_session] Session.destroy trackid=726402fee499e51bb72de7fd054a93d0<br />
Nov 21 09:59:34 xcp4-1 xapi: [debug||116009 HTTPS 192.168.254.101-&gt;|Async.PBD.plug R:631710626e67|message_forwarding] Unmarking SR after PBD.plug (task=OpaqueRef:63171062-6e67-4cbd-b3be-91bb534a94bf)<br />
Nov 21 09:59:34 xcp4-1 xapi: [error||116009 ||backtrace] Async.PBD.plug R:631710626e67 failed with exception Server_error(SR_BACKEND_FAILURE_12, [ ; mount failed with return code 32;  ])<br />
Nov 21 09:59:34 xcp4-1 xapi: [error||116009 ||backtrace] Raised Server_error(SR_BACKEND_FAILURE_12, [ ; mount failed with return code 32;  ])<br />
Nov 21 09:59:34 xcp4-1 xapi: [error||116009 ||backtrace] 1/1 xapi Raised at file (Thread 116009 has no backtrace table. Was with_backtraces called?, line 0<br />
Nov 21 09:59:34 xcp4-1 xapi: [error||116009 ||backtrace]</p>
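mount failing with return code 32 usually means the mount itself failed on the new host; a hedged way to narrow it down by hand before re-plugging the PBD (monitor addresses and paths are from the earlier sr-create post and may not match your cluster):

```shell
# From the new host: verify it can reach the monitors on the Ceph public network
ping -c 2 172.30.254.23

# Try the driver's mount by hand to see the underlying error message
mkdir -p /mnt/cephtest
mount -t ceph 172.30.254.23,172.30.254.24,172.30.254.25:/fsgw/xcpsr /mnt/cephtest \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```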
]]></description><link>https://xcp-ng.org/forum/post/55255</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/55255</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Mon, 21 Nov 2022 16:02:34 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Mon, 14 Nov 2022 19:37:44 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/scboley" aria-label="Profile: scboley">@<bdi>scboley</bdi></a> I figured it out finally. I used another key created by the cluster and got it to connect and mount the ceph.</p>
]]></description><link>https://xcp-ng.org/forum/post/54951</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54951</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Mon, 14 Nov 2022 19:37:44 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Mon, 14 Nov 2022 19:16:09 GMT]]></title><description><![CDATA[<p dir="auto">Ok, I've got this set up and I have a cluster serving the CephFS, and here are my errors:<br />
xe sr-create type=cephfs name-label=ceph device-config:server=172.30.254.23,172.30.254.24,172.30.254.25 device-config:serverport=6789 device-config:serverpath=/fsgw/xcpsr device-config:options=name=admin,secretfile=/etc/ceph/admin.secret<br />
Error code: SR_BACKEND_FAILURE_111<br />
Error parameters: , CephFS mount error [opterr=mount failed with return code 1],</p>
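In case it saves someone a round-trip, a sketch of the flow matching the fix mentioned in the follow-up (getting a usable key into the secret file; <code>client.admin</code> and the paths are assumptions, adjust to your cluster):

```shell
# The secret file must contain only the base64 key, not the whole keyring;
# 'ceph auth get-key' prints exactly that (client.admin is an assumption).
ceph auth get-key client.admin > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret

# Then retry the SR creation with the same device-config as before
xe sr-create type=cephfs name-label=ceph \
    device-config:server=172.30.254.23,172.30.254.24,172.30.254.25 \
    device-config:serverport=6789 device-config:serverpath=/fsgw/xcpsr \
    device-config:options=name=admin,secretfile=/etc/ceph/admin.secret
```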
]]></description><link>https://xcp-ng.org/forum/post/54948</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54948</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Mon, 14 Nov 2022 19:16:09 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Fri, 11 Nov 2022 11:31:51 GMT]]></title><description><![CDATA[<p dir="auto">We use an officially supported kernel (4.19 in LTS) and yes, sometimes we even backport stuff to it specifically for XCP-ng <img src="https://xcp-ng.org/forum/assets/plugins/nodebb-plugin-emoji/emoji/android/1f642.png?v=c63c1619ba5" class="not-responsive emoji emoji-android emoji--slightly_smiling_face" style="height:23px;width:auto;vertical-align:middle" title=":)" alt="🙂" /></p>
<p dir="auto">A kernel isn't "linked" to a distro; it's up to the distro maintainers to choose which kernel they want. We do that for XCP-ng and XenServer (with Citrix).</p>
<p dir="auto">In short: we make our own choices regarding Xen and the kernel, entirely outside CentOS project.</p>
]]></description><link>https://xcp-ng.org/forum/post/54886</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54886</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Fri, 11 Nov 2022 11:31:51 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Thu, 10 Nov 2022 21:20:03 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> I know <a href="http://kernel.org" target="_blank" rel="noopener noreferrer nofollow ugc">kernel.org</a> maintains a lot of very new kernels, way newer than the locked, backported mess the default CentOS kernels are. So do you build off that base, change out the virtual parts, and put them in your builds?</p>
]]></description><link>https://xcp-ng.org/forum/post/54876</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54876</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Thu, 10 Nov 2022 21:20:03 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Thu, 10 Nov 2022 20:33:23 GMT]]></title><description><![CDATA[<p dir="auto">We don't use any kernel from CentOS project (nor the Xen package). We only use "the rest".</p>
<p dir="auto">So in order, it will be:</p>
<ul>
<li>newer Xen version (easiest thing)</li>
<li>more recent kernel (some patches are needed at different places)</li>
<li>more recent user space/base distro (bigger work, but started already, like migrating all Python 2 stuff to Python 3!)</li>
</ul>
<p dir="auto">SMAPIv3 is done in parallel and with XS teams too <img src="https://xcp-ng.org/forum/assets/plugins/nodebb-plugin-emoji/emoji/android/1f642.png?v=c63c1619ba5" class="not-responsive emoji emoji-android emoji--slightly_smiling_face" style="height:23px;width:auto;vertical-align:middle" title=":)" alt="🙂" /></p>
]]></description><link>https://xcp-ng.org/forum/post/54874</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54874</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Thu, 10 Nov 2022 20:33:23 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Thu, 10 Nov 2022 19:29:00 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> so what are your plans for going to a Streams 8 version, which would give the updated kernel platform and, hopefully soon after, SMAPIv3? IO throughput on 8 over 7 is vastly superior, and the changes are not near as big as the 6 to 7 changes were.</p>
]]></description><link>https://xcp-ng.org/forum/post/54869</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54869</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Thu, 10 Nov 2022 19:29:00 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Thu, 10 Nov 2022 18:33:56 GMT]]></title><description><![CDATA[<p dir="auto">No, not really, see <a href="https://xcp-ng.org/blog/2020/12/17/centos-and-xcpng-future/" target="_blank" rel="noopener noreferrer nofollow ugc">https://xcp-ng.org/blog/2020/12/17/centos-and-xcpng-future/</a> (so no biggie)</p>
]]></description><link>https://xcp-ng.org/forum/post/54867</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54867</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Thu, 10 Nov 2022 18:33:56 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Thu, 10 Nov 2022 18:24:31 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> Ok, I see even with 8.x you are still based on CentOS 7. When is it going up to 8? I'd assume Rocky would be the choice, given the Red Hat streaming snafu, cough cough.</p>
]]></description><link>https://xcp-ng.org/forum/post/54866</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54866</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Thu, 10 Nov 2022 18:24:31 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Thu, 10 Nov 2022 18:13:36 GMT]]></title><description><![CDATA[<p dir="auto">Very likely once the platform is more modern (upgraded kernel and platform, and using SMAPIv3).</p>
]]></description><link>https://xcp-ng.org/forum/post/54865</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54865</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Thu, 10 Nov 2022 18:13:36 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Thu, 10 Nov 2022 18:08:47 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> what are the plans to elevate it? I have a feeling it's really starting to gain traction in the storage world.</p>
]]></description><link>https://xcp-ng.org/forum/post/54864</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54864</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Thu, 10 Nov 2022 18:08:47 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Thu, 10 Nov 2022 18:04:54 GMT]]></title><description><![CDATA[<p dir="auto">I don't think we updated anything on that aspect, since Ceph isn't a "main" supported SR <img src="https://xcp-ng.org/forum/assets/plugins/nodebb-plugin-emoji/emoji/android/1f642.png?v=c63c1619ba5" class="not-responsive emoji emoji-android emoji--slightly_smiling_face" style="height:23px;width:auto;vertical-align:middle" title=":)" alt="🙂" /></p>
]]></description><link>https://xcp-ng.org/forum/post/54863</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54863</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Thu, 10 Nov 2022 18:04:54 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Thu, 10 Nov 2022 17:51:52 GMT]]></title><description><![CDATA[<p dir="auto">Ok, I see the package listed in the documentation is still Nautilus; has that been updated to any newer Ceph versions as of yet? <a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a></p>
]]></description><link>https://xcp-ng.org/forum/post/54862</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54862</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Thu, 10 Nov 2022 17:51:52 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Thu, 10 Nov 2022 17:33:21 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/scboley" aria-label="Profile: scboley">@<bdi>scboley</bdi></a> Yes, it's mostly just a matter of doing yum update: <a href="https://xcp-ng.org/docs/updates.html" target="_blank" rel="noopener noreferrer nofollow ugc">https://xcp-ng.org/docs/updates.html</a></p>
]]></description><link>https://xcp-ng.org/forum/post/54859</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54859</guid><dc:creator><![CDATA[msgerbs]]></dc:creator><pubDate>Thu, 10 Nov 2022 17:33:21 GMT</pubDate></item><item><title><![CDATA[Reply to CEPH FS Storage Driver on Thu, 10 Nov 2022 17:16:00 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/jmccoy555" aria-label="Profile: jmccoy555">@<bdi>jmccoy555</bdi></a> I'm talking about 8.2.1, 8.2.2, and so forth. Is that a simple yum update on the system? I've just left it at the default version and never updated; I was on 7.6 for a long time and just took it all to 8.2, with one straggler XenServer 6.5 still in production. I've loved the stability I've had with XCP-ng, without even messing with it at all.</p>
]]></description><link>https://xcp-ng.org/forum/post/54858</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/54858</guid><dc:creator><![CDATA[scboley]]></dc:creator><pubDate>Thu, 10 Nov 2022 17:16:00 GMT</pubDate></item></channel></rss>