[ceph-users] Cephfs snapshot work
jspray at redhat.com
Tue Nov 7 07:51:55 PST 2017
On Tue, Nov 7, 2017 at 3:28 PM, Dan van der Ster <dan at vanderster.com> wrote:
> On Tue, Nov 7, 2017 at 4:15 PM, John Spray <jspray at redhat.com> wrote:
>> On Tue, Nov 7, 2017 at 3:01 PM, Dan van der Ster <dan at vanderster.com> wrote:
>>> On Tue, Nov 7, 2017 at 12:57 PM, John Spray <jspray at redhat.com> wrote:
>>>> On Sun, Nov 5, 2017 at 4:19 PM, Brady Deetz <bdeetz at gmail.com> wrote:
>>>>> My organization has a production cluster primarily used for cephfs upgraded
>>>>> from jewel to luminous. We would very much like to have snapshots on that
>>>>> filesystem, but understand that there are risks.
>>>>> What kind of work could cephfs admins do to help the devs stabilize this feature?
>>>> If you have a disposable test system, then you could install the
>>>> latest master branch of Ceph (which has a stream of snapshot fixes in
>>>> it) and run a replica of your intended workload. If you can find
>>>> snapshot bugs (especially crashes) on master then they will certainly
>>>> attract interest.
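>>>> As a minimal sketch of that kind of test (the filesystem name
>>>> "cephfs_test" and the mount point are assumptions for illustration):
>>>>
>>>>   # Snapshots are disabled by default; opt in on the test filesystem.
>>>>   ceph fs set cephfs_test allow_new_snaps true
>>>>
>>>>   # Snapshots are created and removed via the special .snap directory
>>>>   # that exists (invisibly) in every directory of the tree.
>>>>   mkdir /mnt/cephfs_test/mydir/.snap/before-workload
>>>>   # ... run the replica workload, watch for MDS/client crashes ...
>>>>   rmdir /mnt/cephfs_test/mydir/.snap/before-workload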
>>> Hi John. Are those fixes in master expected to make snapshots stable
>>> with rados namespaces (a la Manila) ?
>> Was there a specific bug with snapshots+namespaces? I may be having a
>> failure of memory or have not been paying enough attention :-)
> I might be misremembering, but I recall a mail from Greg? a few months
> ago saying that snapshots would surely not work with multiple
> filesystems. Can't find the mail, though....
I think that was about not being able to safely use snapshots when
multiple filesystems share the same data pool (and are separated by
namespaces).
If you have a single filesystem using multiple namespaces within the
same data pool then I don't think there's an issue.
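For reference, that single-filesystem-with-namespaces layout (as Manila
uses it) is set per directory via the layout vxattrs; the namespace name
and paths below are illustrative assumptions:

  # Confine all objects under this directory to a RADOS namespace
  # within the directory's current data pool.
  setfattr -n ceph.dir.layout.pool_namespace -v share-0001 /mnt/cephfs/shares/share-0001

  # Verify the layout that new files under the directory will inherit.
  getfattr -n ceph.dir.layout /mnt/cephfs/shares/share-0001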