[ceph-users] io-schedulers

Sergey Malinin hell at newmail.com
Mon Nov 5 15:25:36 PST 2018


Using "noop" makes sense only with SSD/NVMe drives. "noop" is a simple FIFO, and using it with HDDs can lead to unexpected blocking of useful I/O when the queue is flooded by a burst of requests, such as a background purge, which effectively becomes foreground work in that case.
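The rule of thumb above (FIFO for flash, a deadline-style scheduler for spinning disks) can be sketched as a small helper. This is only an illustration, not project code: the function name is invented, and it keys off the kernel's per-device "rotational" flag (0 for SSD/NVMe, 1 for HDD) as exposed under /sys/block/<dev>/queue/rotational.

```shell
# Choose an I/O scheduler from a device's rotational flag.
# $1 = contents of /sys/block/<dev>/queue/rotational (0 = SSD/NVMe, 1 = HDD)
pick_scheduler() {
    if [ "$1" = "0" ]; then
        echo noop        # flash: a plain FIFO is sufficient
    else
        echo deadline    # HDD: avoid bursts starving useful I/O
    fi
}
```

On a legacy (non-blk-mq) kernel the choice could then be applied with e.g. `echo noop > /sys/block/sda/queue/scheduler`; on blk-mq kernels the equivalent options are named differently (e.g. "none" instead of "noop").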


> On 5.11.2018, at 23:39, Jack <ceph at jack.fr.eu.org> wrote:
> 
> We simply use the "noop" scheduler on our nand-based ceph cluster
> 
> 
> On 11/05/2018 09:33 PM, solarflow99 wrote:
>> I'm interested to know about this too.
>> 
>> 
>> On Mon, Nov 5, 2018 at 10:45 AM Bastiaan Visser <bvisser at flexyz.com> wrote:
>> 
>>> 
>>> There are lots of rumors around about the benefit of changing
>>> io-schedulers for OSD disks.
>>> Even some benchmarks can be found, but they are all more than a few years
>>> old.
>>> Since ceph is moving forward with quite a pace, i am wondering what the
>>> common practice is to use as io-scheduler on OSD's.
>>> 
>>> And since blk-mq is around these days, are the multi-queue schedules
>>> already being used in production clusters?
>>> 
>>> Regards,
>>> Bastiaan
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users at lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> 
>> 
>> 
>> 
>> 
> 
