[ceph-users] Supermicro server 5019D8-TR12P for new Ceph cluster
singapore at amerrick.co.uk
Tue Nov 13 06:15:03 PST 2018
I’d say those CPUs should be more than fine for your use case, and
you have more than one thread per OSD, which seems to be the ongoing
recommendation.
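As a rough sanity check of that ratio (purely illustrative, using the figures
from your mail below):

    # Back-of-envelope thread-per-OSD check for the proposed node:
    # Xeon D-2146NT (8 cores / 16 threads with HT), twelve 10TB SATA OSDs.
    cpu_threads = 16
    osds_per_node = 12
    print(f"{cpu_threads / osds_per_node:.2f} CPU threads per OSD")  # ~1.33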
On Tue, 13 Nov 2018 at 10:12 PM, Michal Zacek <zacekm at img.cas.cz> wrote:
> The server supports up to 128GB RAM, so upgrading the RAM will not be a problem.
> The storage will be used for storing data from microscopes. Users will
> download data from the storage to a local PC, make some changes and then
> upload the data back to the storage. We want to use the cluster for direct
> computing in the future, but for now we only need to separate the
> microscope data from normal office data. We are expecting up to 10TB of
> upload/download per day.
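> A rough back-of-envelope for that rate (a sketch only, averaging the 10TB
> over 24 hours; real traffic will be burstier):
>
>     # Average throughput implied by ~10TB of transfers per day,
>     # compared against a 10Gbit/s public network link.
>     tb_per_day = 10
>     seconds_per_day = 24 * 3600
>     avg_mb_s = tb_per_day * 1_000_000 / seconds_per_day  # ~116 MB/s (decimal units)
>     link_mb_s = 10_000 / 8                               # 10Gbit/s ~= 1250 MB/s
>     print(f"average ~{avg_mb_s:.0f} MB/s vs ~{link_mb_s:.0f} MB/s link capacity")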
> On 13. 11. 18 at 14:50, Ashley Merrick wrote:
> Not sure about the CPU, but I would definitely suggest more than 64GB of RAM.
> With the next release of Mimic the default memory target will be set to 4GB per
> OSD (if I am correct), and this only covers the BlueStore layer, so I'd easily
> expect to see you getting close to 64GB after OS caches etc., and the
> last thing you want on a Ceph OSD box is an OOM.
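> For a 12-OSD node that works out to roughly 12 x 4GiB = 48GiB for the OSD
> daemons alone, before the OS and page cache. If you do stay at 64GB, one option
> is to lower the per-OSD target via osd_memory_target; a hypothetical ceph.conf
> sketch (the value here is illustrative, not a recommendation):
>
>     [osd]
>     # Cap each BlueStore OSD's memory target at ~3GiB so twelve OSDs
>     # leave headroom for the OS on a 64GB node.
>     osd_memory_target = 3221225472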
> Are you looking at near-cold storage for these photos? Or is it storage
> for designers working out of programs which require low latency and quick
> access?
> On Tue, Nov 13, 2018 at 9:43 PM Michal Zacek <zacekm at img.cas.cz> wrote:
>> What do you think about this Supermicro server:
>> http://www.supermicro.com/products/system/1U/5019/SSG-5019D8-TR12P.cfm ?
>> We are considering eight or ten servers, each with twelve 10TB SATA
>> drives, one M.2 SSD and 64GB RAM. The public and cluster networks will be
>> 10Gbit/s. The question is whether one Intel Xeon D-2146NT with eight cores (16
>> with HT) will be enough for 12 SATA disks. The cluster will be used for storing
>> pictures. File sizes range from 1MB to 2TB ;-).