[ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
peter.linder at fiberdirekt.se
Sat Oct 7 12:36:16 PDT 2017
On 10/7/2017 8:08 PM, David Turner wrote:
> Just to make sure you understand that the reads will happen on the
> primary osd for the PG and not the nearest osd, meaning that reads
> will go between the datacenters. Also, each write will not ack until
> all 3 writes happen, adding latency to both writes and reads.
Yes, I understand this. It is actually fine; the datacenters have been
selected so that they are about 10-20 km apart, which yields around a
0.1 - 0.2 ms round trip time simply from the finite speed of light in
fiber. In any case, network latency shouldn't be a problem, and it's all
a 40G (dedicated) TRILL network for the moment.
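(Rough check, assuming a signal speed of ~200,000 km/s in fiber: 20 km
of separation means a 40 km round trip, and 40 km / 200,000 km/s =
0.0002 s = 0.2 ms; 10 km gives about 0.1 ms.)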
I just want to be able to select 1 NVMe OSD and 2 HDD OSDs, all spread
across different datacenters. I can almost do that, but one of the HDDs
ends up in the same datacenter as the NVMe OSD, probably because I'm
using the "take" command twice (does each "take" reset the bucket
selection?). The rule is sketched below.
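The rule looks roughly like this (a sketch, not the exact rule from my
map; the root, class and id values are stand-ins):

    rule hybrid {
            id 1
            type replicated
            min_size 1
            max_size 3
            # first pass: one NVMe OSD in some datacenter
            step take default class nvme
            step chooseleaf firstn 1 type datacenter
            step emit
            # second pass: the remaining replicas on HDDs,
            # spread over different datacenters
            step take default class hdd
            step chooseleaf firstn -1 type datacenter
            step emit
    }

As far as I can tell, each "take ... emit" pass is an independent
selection, so the HDD pass has no memory of which datacenter the NVMe
OSD landed in. With 3 datacenters, one of the two HDD datacenters will
then coincide with the NVMe datacenter about 2 times out of 3, which
would explain the collisions I keep seeing.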
> On Sat, Oct 7, 2017, 1:48 PM Peter Linder <peter.linder at fiberdirekt.se> wrote:
> On 10/7/2017 7:36 PM, Дробышевский, Владимир wrote:
>> 2017-10-07 19:12 GMT+05:00 Peter Linder <peter.linder at fiberdirekt.se>:
>> The idea is to select an nvme osd, and
>> then select the rest from hdd osds in different datacenters
>> (see crush
>> map below for hierarchy).
>> It's a bit aside from the question, but why do you want to
>> mix SSDs and HDDs in the same pool? Do you have a read-intensive
>> workload and plan to use primary-affinity to get all reads from
>> the NVMe OSDs?
> Yes, this is pretty much the idea, getting the performance from
> NVMe reads, while still maintaining triple redundancy and a
> reasonable cost.
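For what it's worth, my understanding is that the OSD emitted first by
the rule ends up first in the acting set and therefore becomes the
primary, so with the NVMe pass emitted first the reads should already
land on NVMe without touching primary-affinity. If that doesn't hold in
practice, lowering the affinity of the HDD OSDs would be a fallback,
something like:

    ceph osd primary-affinity osd.12 0   # osd.12 is just an example HDD OSD id

so that those OSDs are not preferred as primary.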