[ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

Peter Linder peter.linder at fiberdirekt.se
Sat Oct 7 12:31:25 PDT 2017


Yes, I realized that; I've updated it to 3.

On 10/7/2017 8:41 PM, Sinan Polat wrote:
> You are talking about the min_size, which should be 2 according to
> your text.
>
> Please be aware, the min_size in your CRUSH is _not_ the replica size.
> The replica size is set with your pools.
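
For reference, a minimal sketch of the distinction, assuming a pool named
"hybrid" (the name is a placeholder): size is the replica count, while
min_size is the number of copies that must be up before the pool accepts
I/O. The min_size/max_size fields inside a CRUSH rule only bound which pool
sizes the rule applies to; they do not set either value.

    # replica count: keep 3 copies of each object
    ceph osd pool set hybrid size 3
    # copies that must be available for the pool to accept I/O
    ceph osd pool set hybrid min_size 2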
>
> On 7 Oct 2017, at 19:39, Peter Linder
> <peter.linder at fiberdirekt.se> wrote:
>
>> On 10/7/2017 7:36 PM, Дробышевский, Владимир wrote:
>>> Hello!
>>>
>>> 2017-10-07 19:12 GMT+05:00 Peter Linder <peter.linder at fiberdirekt.se>:
>>>
>>>     The idea is to select an nvme osd, and then select the rest from
>>>     hdd osds in different datacenters (see crush map below for
>>>     hierarchy).
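
A minimal sketch of a rule along those lines, assuming Luminous device
classes and datacenter buckets in the hierarchy (the rule name and id are
placeholders). Note that because the two take/emit branches are chosen
independently, the rule itself does nothing to stop an HDD copy from landing
in the same datacenter as the NVMe copy, which is the behaviour described in
the subject:

    rule hybrid_nvme_hdd {
        id 1
        type replicated
        min_size 3
        max_size 3
        # first replica comes from an NVMe OSD in one datacenter
        step take default class nvme
        step chooseleaf firstn 1 type datacenter
        step emit
        # remaining replicas come from HDD OSDs, one per datacenter
        step take default class hdd
        step chooseleaf firstn -1 type datacenter
        step emit
    }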
>>>
>>> It's a little bit aside of the question, but why do you want to mix
>>> SSDs and HDDs in the same pool? Do you have read-intensive workload
>>> and going to use primary-affinity to get all reads from nvme?
>>>  
>>>
>> Yes, this is pretty much the idea, getting the performance from NVMe
>> reads, while still maintaining triple redundancy and a reasonable cost.
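
If the CRUSH rule emits the NVMe OSD first, it will normally act as the
primary already; lowering primary affinity on the HDD OSDs is an extra
safeguard to keep reads on NVMe. A sketch, with osd.10 standing in for one
of the HDD OSDs (on older releases, mon osd allow primary affinity may need
to be enabled first):

    # 0 = never prefer this OSD as primary, 1 = default
    ceph osd primary-affinity osd.10 0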
>>
>>
>>> -- 
>>> Regards,
>>> Vladimir
>>
>>

