[ceph-users] Ceph SSD array with Intel DC S3500's

Adam Boyhan adamb at medent.com
Fri Oct 3 03:39:37 PDT 2014


I appreciate the input! 

Yep, yesterday's discussion is what really led me to make this post and get some input. Looks like the 800GB S3500 has a TBW rating of 450TB, which if my math is correct works out to about 246GB per day over a five-year warranty period. I haven't sat down and crunched any real numbers yet, as I'm still trying to figure out exactly how to do that. Our application is so read intensive that I think the S3500 is more than enough for our purposes. We really need fast read performance across the board. 
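
Just to sanity check myself, here's roughly how I'm doing that math (a quick Python sketch, assuming the 450TB rating is spread over a five-year warranty window; the drive size and rating are just the figures quoted above): 

    tbw_tb = 450              # total TB written rating quoted for the 800GB S3500
    warranty_days = 5 * 365   # assuming a five-year warranty window

    gb_per_day = tbw_tb * 1000 / warranty_days
    dwpd = gb_per_day / 800   # drive writes per day for an 800GB drive

    print(f"{gb_per_day:.0f} GB/day write budget")   # ~246 GB/day
    print(f"{dwpd:.2f} drive writes per day")        # ~0.31 DWPD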

Unsure on the cache idea; we have done significant testing with a number of cache solutions and they all seem to fall short (LSI, Intel, bcache, ZFS). In our physical servers we actually do RAM caching with vmtouch. I know it sounds crazy, but the performance is fantastic, and two years ago it was cheaper for us to load servers with RAM than SSDs. Unfortunately, for our cloud it's not viable to provide that much RAM to a hosted client, so we will be forced down the SSD path. 

Going to start digging through the archives on how best to calculate some base numbers to work from. If anyone has any tips or suggestions, throw them my way! 

Thanks, 
Adam Boyhan 
System Administrator 
MEDENT(EMR/EHR) 
15 Hulbert Street - P.O. Box 980 
Auburn, New York 13021 
www.medent.com 
Phone: (315)-255-1751 
Fax: (315)-255-3539 
Cell: (315)-729-2290 
adamb at medent.com 

This message and any attachments may contain information that is protected by law as privileged and confidential, and is transmitted for the sole use of the intended recipient(s). If you are not the intended recipient, you are hereby notified that any use, dissemination, copying or retention of this e-mail or the information contained herein is strictly prohibited. If you received this e-mail in error, please immediately notify the sender by e-mail, and permanently delete this e-mail. 

----- Original Message -----

From: "Christian Balzer" <chibi at gol.com> 
To: ceph-users at lists.ceph.com 
Sent: Thursday, October 2, 2014 8:54:41 PM 
Subject: Re: [ceph-users] Ceph SSD array with Intel DC S3500's 


Hello, 

On Thu, 2 Oct 2014 13:48:27 -0400 (EDT) Adam Boyhan wrote: 

> Hey everyone, loving Ceph so far! 
> 
> We are looking to roll out a Ceph cluster with all SSDs. Our 
> application is around 30% writes and 70% reads, random IO. The plan is to 
> start with roughly 8 servers with 8 800GB Intel DC S3500's per server. I 
> wanted to get some input on the use of the DC S3500. Seeing that we are 
> primarily a read environment, I was thinking we could easily get away 
> with the S3500 instead of the S3700, but I am unsure. Obviously the price 
> point of the S3500 is very attractive, but if they start failing on us 
> too soon, it might not be worth the savings. My largest concern is the 
> journaling of Ceph, so maybe I could use the S3500's for the bulk of the 
> data and utilize an S3700 for the journaling? 
> 
Essentially Mark touched on all the pertinent points in his answer: run the 
numbers, and don't bother with extra journals. 
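
For "run the numbers", something along these lines is all it takes (a minimal 
sketch; every workload figure in it is a made-up placeholder, so substitute 
your own, and the x2 assumes the FileStore journal sits on the same SSD as 
its OSD): 

    client_writes_gb_per_day = 2000   # hypothetical cluster-wide client writes
    replication_factor = 3            # each client write lands on this many OSDs
    journal_amplification = 2         # journal on the same SSD roughly doubles writes
    num_osd_ssds = 64                 # 8 servers x 8 SSDs

    per_ssd_gb_per_day = (client_writes_gb_per_day
                          * replication_factor
                          * journal_amplification) / num_osd_ssds

    budget_gb_per_day = 450 * 1000 / (5 * 365)   # ~246 GB/day for an 800GB S3500

    print(f"per-SSD writes:   {per_ssd_gb_per_day:.0f} GB/day")
    print(f"endurance budget: {budget_gb_per_day:.0f} GB/day")
    print("fine" if per_ssd_gb_per_day < budget_gb_per_day else "expect early wear-out")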

And not directed at you in particular, but searching the ML archives can 
be quite helpful, as part of this was discussed as recently as 
yesterday; see the "SSD MTBF" thread. 

As another example, I recently purchased journal SSDs and did run all the 
numbers. A DC S3500 240GB was going to survive 5 years according to my 
calculations, but a DC S3700 100GB was still going to be fast enough, 
while being CHEAPER in purchase cost and of course a no-brainer when it 
comes to TBW/$ and a good night's sleep. ^^ 
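
If you want to reproduce that comparison, the TBW/$ part is trivial (a 
sketch; the endurance ratings are the datasheet figures as I remember them, 
and the prices are placeholders to be replaced with whatever you'd actually 
pay): 

    drives = {
        "DC S3500 240GB": {"tbw_tb": 140,  "price_usd": 200},   # placeholder price
        "DC S3700 100GB": {"tbw_tb": 1830, "price_usd": 200},   # placeholder price
    }

    for name, d in drives.items():
        print(f"{name}: {d['tbw_tb'] / d['price_usd']:.2f} TB written per dollar")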

I'd venture that a cluster made of 2 really fast (and thus expensive) nodes for 
a cache pool and 6 "classic" Ceph storage nodes for permanent storage 
might be good enough for your use case with a future version of Ceph. 
But unfortunately, cache pools currently aren't quite there yet. 

Christian 
-- 
Christian Balzer Network/Systems Engineer 
chibi at gol.com Global OnLine Japan/Fusion Communications 
http://www.gol.com/ 
_______________________________________________ 
ceph-users mailing list 
ceph-users at lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
