[ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

David Turner drakonstein at gmail.com
Sat Oct 7 11:08:48 PDT 2017


Just to make sure you understand: reads are served by the primary OSD of a
PG, not by the nearest OSD, which means reads will cross between the
datacenters. Also, each write will not ack until all 3 replicas have been
written, adding the inter-datacenter latency to writes and reads both.
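
You can check where reads for a given object will land by asking for its
mapping; the first OSD in the acting set is the primary, which serves all
reads and coordinates the writes. A quick sketch (the pool, object, and PG
names here are made up):

    # Map a hypothetical object to its PG and OSDs; the first id in the
    # acting set is the primary OSD, i.e. where every read is served from.
    ceph osd map hybridpool someobject

    # Or inspect a hypothetical PG directly:
    ceph pg map 3.1f

Sketches of the CRUSH rule and the primary-affinity setting being discussed
follow after the quoted thread below.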

On Sat, Oct 7, 2017, 1:48 PM Peter Linder <peter.linder at fiberdirekt.se>
wrote:

> On 10/7/2017 7:36 PM, Дробышевский, Владимир wrote:
>
> > Hello!
>
> > 2017-10-07 19:12 GMT+05:00 Peter Linder <peter.linder at fiberdirekt.se>:
>
> > > The idea is to select an NVMe OSD, and
> > > then select the rest from HDD OSDs in different datacenters (see CRUSH
> > > map below for hierarchy).
>>
> > It's a bit of an aside to the question, but why do you want to mix SSDs
> > and HDDs in the same pool? Do you have a read-intensive workload, and are
> > you going to use primary-affinity to get all reads from the NVMe?
>
>
> Yes, this is pretty much the idea: getting the read performance of NVMe
> while still maintaining triple redundancy at a reasonable cost.
>
>
> > --
> > Regards,
> > Vladimir
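
For readers without Peter's original mail: a minimal sketch of the kind of
rule being discussed, assuming separate CRUSH roots named "nvme" and "hdd"
(the names and ids are hypothetical, since the actual map is not shown in
this excerpt). Note that the two take/emit passes choose independently,
which is exactly why replicas can end up in the same datacenter, as in the
subject of this thread:

    # Hypothetical rule: first replica on an NVMe OSD, the rest on HDD
    # OSDs. Assumes separate roots "nvme" and "hdd" in the CRUSH map.
    rule hybrid {
            id 1
            type replicated
            min_size 3
            max_size 3
            # Pass 1: pick 1 datacenter under the nvme root, then one NVMe
            # OSD inside it. Being emitted first, this OSD becomes the
            # primary (absent primary-affinity overrides).
            step take nvme
            step chooseleaf firstn 1 type datacenter
            step emit
            # Pass 2: pick (pool size - 1) = 2 datacenters under the hdd
            # root, one HDD OSD in each. This pass knows nothing about
            # pass 1, so one of these datacenters can be the same physical
            # site as the NVMe pick.
            step take hdd
            step chooseleaf firstn -1 type datacenter
            step emit
    }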
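And for the primary-affinity part of the question, a sketch of the knobs
involved (OSD ids are made up; on some releases the monitors must first be
told to honor primary affinity at all):

    # May be required depending on release; pre-Luminous this was off by
    # default:
    ceph tell mon.* injectargs '--mon-osd-allow-primary-affinity=true'

    # Prefer the NVMe OSD as primary (affinity ranges from 0 to 1):
    ceph osd primary-affinity osd.12 1.0

    # Keep the HDD OSDs from being chosen as primary:
    ceph osd primary-affinity osd.3 0
    ceph osd primary-affinity osd.7 0

With a rule like the sketch above, the NVMe OSD is emitted first and becomes
primary anyway, so primary-affinity mainly matters when the rule does not
guarantee that ordering.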

