[ceph-users] mount cephfs on ceph servers
devster at posteo.net
Wed Mar 6 15:18:07 PST 2019
Is the deadlock risk still an issue in containerized deployments? For
example, with OSD daemons running in containers and the filesystem
mounted on the host machine?
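
For concreteness, by "mounting on the host" I mean a plain
kernel-client mount along these lines; the monitor address, client
name and paths below are just placeholders for whatever the local
setup uses:

    # kernel-client mount on the same host that runs the containerized OSDs
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret
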
On 06/03/19 16:40, Jake Grimmett wrote:
> Just to add a "+1" to this datapoint: based on one month of usage on
> Mimic 13.2.4, essentially "it works great for us".
> Prior to this, we had issues with the kernel driver on 12.2.2. This
> could have been due to limited RAM on the OSD nodes (128GB / 45 OSDs)
> and an older kernel.
> Upgrading the RAM to 256GB and using a RHEL 7.6-derived kernel has
> allowed us to reliably use the kernel driver.
> We keep 30 snapshots (one per day), have one active metadata server,
> and change several TB daily - it's much, *much* faster than with FUSE.
> The cluster has 10 OSD nodes, currently storing 2PB with 8+2 erasure
> coding.
> ta ta
> On 3/6/19 11:10 AM, Hector Martin wrote:
>> On 06/03/2019 12:07, Zhenshi Zhou wrote:
>>> For various reasons I'm going to mount CephFS on my Ceph servers,
>>> including the monitors, metadata servers, and OSD servers. I know
>>> it's not best practice, but what is the exact potential danger of
>>> mounting CephFS on its own servers?
>> As a datapoint, I have been doing this on two machines (single-host Ceph
>> clusters) for months with no ill effects. The FUSE client performs a lot
>> worse than the kernel client, so I switched to the latter, and it's been
>> working well with no deadlocks.
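
For reference, the FUSE mount that both reports above moved away from
is invoked roughly like this (client name, monitor address and mount
point are placeholders again; the kernel-client equivalent is the
mount command at the top of this message):

    # FUSE client mount, running in userspace via ceph-fuse
    ceph-fuse -n client.admin -m 10.0.0.1:6789 /mnt/cephfs
    # unmount with: fusermount -u /mnt/cephfs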