[ceph-users] Cephfs snapshot work

John Spray jspray at redhat.com
Tue Nov 7 06:45:04 PST 2017

On Tue, Nov 7, 2017 at 2:40 PM, Brady Deetz <bdeetz at gmail.com> wrote:
> Are there any existing fuzzing tools you'd recommend? I know about ceph osd
> thrash, which could be tested against, but what about on the client side? I
> could just use something pre-built for POSIX, but that wouldn't coordinate
> simulated failures on the storage side with actions against the fs. If there
> is no current tooling for coordinating server and client simulation,
> maybe that's where I start.

We do have "thrasher" classes for randomly failing MDS daemons in the
main test suite, but those will only be useful to you if you're
working on automated tests that run inside that framework.
If you're working on a local cluster, then it's pretty simple (and
useful) to write a small shell script that e.g. sleeps a few minutes,
then does a "ceph mds fail" on a random rank.
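Such a script might look something like the following minimal sketch. The MAX_RANK and CEPH knobs are illustrative conveniences, not Ceph options; setting CEPH to "echo ceph" gives a dry run:

```shell
#!/usr/bin/env bash
# fail_random_mds_rank: pick a random MDS rank and "ceph mds fail" it.
# MAX_RANK (highest active rank to target) and CEPH (command prefix;
# "echo ceph" for a dry run) are illustrative knobs, not Ceph settings.
fail_random_mds_rank() {
    local max_rank=${MAX_RANK:-0}
    local rank=$(( RANDOM % (max_rank + 1) ))
    ${CEPH:-ceph} mds fail "$rank"
}

# Uncomment to thrash every ~5 minutes:
# while true; do sleep 300; fail_random_mds_rank; done
```

With a single active MDS the only rank is 0; a standby daemon should take over after each failure, which is exactly the recovery path this exercises.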

Something more sophisticated that would include client failures is a
long standing wishlist item for the automated testing.


> On Nov 7, 2017 5:57 AM, "John Spray" <jspray at redhat.com> wrote:
>> On Sun, Nov 5, 2017 at 4:19 PM, Brady Deetz <bdeetz at gmail.com> wrote:
>> > My organization has a production cluster primarily used for cephfs
>> > upgraded
>> > from jewel to luminous. We would very much like to have snapshots on
>> > that
>> > filesystem, but understand that there are risks.
>> >
>> > What kind of work could cephfs admins do to help the devs stabilize this
>> > feature?
>> If you have a disposable test system, then you could install the
>> latest master branch of Ceph (which has a stream of snapshot fixes in
>> it) and run a replica of your intended workload.  If you can find
>> snapshot bugs (especially crashes) on master then they will certainly
>> attract interest.
>> John
>> >
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users at lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
