[ceph-users] cephfs manila snapshots best practices

Paul Emmerich paul.emmerich at croit.io
Fri Mar 22 08:16:02 PDT 2019


On Fri, Mar 22, 2019 at 4:05 PM Dan van der Ster <dan at vanderster.com> wrote:
>
> Perfect, thanks for that input.
> So, in your experience the async snaptrim mechanism doesn't make it as transparent as scrubbing etc.?
> On the rbd side, the impact of snaptrim seems invisible to us.

Snaptrim works great (with the osd snap trim sleep option set to a
small non-zero value instead of the default of 0)
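
For reference, that knob can be set like this (the value 0.1 is
illustrative, not a recommendation; tune it for your hardware):

```shell
# osd_snap_trim_sleep: seconds to sleep between snap trim operations
# (default 0, i.e. no throttling). 0.1 below is just an example value.
ceph tell 'osd.*' injectargs '--osd_snap_trim_sleep 0.1'   # apply at runtime
ceph config set osd osd_snap_trim_sleep 0.1                # persist (Mimic and later)
```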

The problem is just that users tend to do crazy things that lead to
slowness for everyone: you can end up with far too many snapshots of
lots of small objects, causing more overhead and system load than you
initially expected or scaled for....

(But I'm biased towards seeing clusters that have a problem as we
offer "fix up your cluster" consulting services...)


Paul

>
> -- dan
>
>
> On Fri, Mar 22, 2019 at 3:42 PM Paul Emmerich <paul.emmerich at croit.io> wrote:
>>
>> I wouldn't give users the ability to perform snapshots directly on
>> Ceph unless you have full control over your users or fully trust them.
>> Too easy to ruin your day by creating lots of small files and lots of
>> snapshots that will wreck your performance...
>>
>> Also, snapshots aren't really accounted for in users' quotas by CephFS :/
>>
>>
>> Paul
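
[The 's' flag Paul mentions is granted per path in the MDS cap. A
hedged sketch -- the client name, pool name, and share path are
placeholders, not from this thread:

```shell
# Read/write access to a Manila share WITHOUT the snapshot ('s') flag;
# client.share-user and pool cephfs_data are assumed names.
ceph auth caps client.share-user \
  mon 'allow r' \
  mds 'allow rw path=/volumes/_nogroup/<uuid>' \
  osd 'allow rw pool=cephfs_data'

# Adding 's' to the MDS cap would let the client create/delete
# snapshots itself:
#   mds 'allow rws path=/volumes/_nogroup/<uuid>'
```
]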
>> On Wed, Mar 20, 2019 at 4:34 PM Dan van der Ster <dan at vanderster.com> wrote:
>> >
>> > Hi all,
>> >
>> > We're currently upgrading our cephfs (managed by OpenStack Manila)
>> > clusters to Mimic, and want to start enabling snapshots of the file
>> > shares.
>> > There are different ways to approach this, and I hope someone can
>> > share their experiences with:
>> >
>> > 1. Do you give users the 's' flag in their cap, so that they can
>> > create snapshots themselves? We're currently planning *not* to do this
>> > -- we'll create snapshots for the users.
>> > 2. We want to create periodic snaps for all cephfs volumes. I can see
>> > pros/cons to creating the snapshots in /volumes/.snap or in
>> > /volumes/_nogroup/<uuid>/.snap. Any experience there? Or maybe even
>> > just an fs-wide snap in /.snap is the best approach?
>> > 3. I found this simple cephfs-snap script which should do the job:
>> > http://images.45drives.com/ceph/cephfs/cephfs-snap  Does anyone have a
>> > different recommendation?
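
[The cephfs-snap approach in question 3 boils down to creating and
pruning directories under .snap: on CephFS, mkdir inside <share>/.snap
takes a snapshot and rmdir removes one. A minimal sketch -- the function
name, naming scheme, and retention count are illustrative, and the
pruning relies on GNU head:

```shell
#!/bin/sh
# Sketch of a rotate-style CephFS snapshot helper (hypothetical, not
# the cephfs-snap script itself).
# Usage: snap_rotate <share-path> <snapshots-to-keep>
snap_rotate() {
    share="$1"
    keep="$2"
    # Take a new snapshot named after the current UTC time.
    mkdir "$share/.snap/$(date -u +%Y-%m-%d_%H%M%S)"
    # Prune the oldest snapshots beyond the retention count.
    # Timestamped names sort chronologically, so a plain sort works.
    # (head -n -N, "all but the last N lines", is a GNU extension.)
    ls -1 "$share/.snap" | sort | head -n "-$keep" | while read -r old; do
        rmdir "$share/.snap/$old"
    done
}
```

Run from cron, e.g. hourly with a retention of 24, against each share path.]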
>> >
>> > Thanks!
>> >
>> > Dan
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users at lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

