[ceph-users] Slow requests in cache tier with rep_size 2

David Turner drakonstein at gmail.com
Wed Nov 1 07:08:49 PDT 2017


PPS - or min_size 1 in production

On Wed, Nov 1, 2017 at 10:08 AM David Turner <drakonstein at gmail.com> wrote:

> What is your min_size in the cache pool?  If min_size is 2, then taking
> down one of your two SSD OSDs leaves the PGs with only a single copy,
> which is below min_size, so the cluster blocks requests to that pool
> until the OSD is back up.
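>
> For example, you can check the current values (the pool name
> "hot-cache" below is just a placeholder for your cache pool):
>
>   ceph osd pool get hot-cache size
>   ceph osd pool get hot-cache min_size
>
> and, for a test setup only, let I/O continue with a single copy:
>
>   ceph osd pool set hot-cache min_size 1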
>
> PS - Please don't consider using rep_size 2 in production.
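>
> (The usual recommendation is size 3 with min_size 2, e.g.:
>
>   ceph osd pool set hot-cache size 3
>   ceph osd pool set hot-cache min_size 2
>
> again with "hot-cache" standing in for your cache pool.)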
>
> On Wed, Nov 1, 2017 at 5:14 AM Eugen Block <eblock at nde.ag> wrote:
>
>> Hi experts,
>>
>> We have upgraded our cluster to Luminous successfully, no big issues
>> so far. We are also testing a cache tier with only 2 SSDs (we know
>> it's not recommended), and there's one issue to be resolved:
>>
>> Every time we have to restart the cache OSDs we get slow requests
>> that impact our client VMs. We never restart both SSDs at the same
>> time, of course, so we are wondering why we see the slow requests.
>> What exactly is happening with a replication size of 2? We assumed
>> that if one SSD is restarted the data is still available on the
>> other disk, so we didn't expect any problems. But this occurs every
>> time we have to restart them. Can anyone explain what is going on
>> there? We could use your expertise on this :-)
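>>
>> (For reference, we watch the blocked requests with something like:
>>
>>    ceph health detail
>>    ceph daemon osd.<id> dump_ops_in_flight
>>
>> where osd.<id> stands for the cache OSD that is still up; the second
>> command has to run on that OSD's host.)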
>>
>> Best regards,
>> Eugen
>>
>> --
>> Eugen Block                             voice   : +49-40-559 51 75
>> NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
>> Postfach 61 03 15
>> D-22423 Hamburg                         e-mail  : eblock at nde.ag
>>
>>          Vorsitzende des Aufsichtsrates: Angelika Mozdzen
>>            Sitz und Registergericht: Hamburg, HRB 90934
>>                    Vorstand: Jens-U. Mozdzen
>>                     USt-IdNr. DE 814 013 983
>>