[ceph-users] USB 3.0 or eSATA for externally mounted OSDs?

Stuart Longland stuartl at longlandclan.id.au
Sat Feb 2 16:34:07 PST 2019

Hi all,

I'm looking at expanding the storage of my cluster with some external
HDDs and was looking for advice on the connection interface.

I have 3 storage nodes that are combined Ceph
monitor+manager+metadata+OSD with 1TB hard drives (HGST
HTS541010A9E680).  The nodes themselves are built on Supermicro
A1SAi-2750F motherboards with 16GB* RAM each and 8-core Intel Atom C2750
CPUs soldered on, with the 4 gigabit Ethernet interfaces bonded into
pairs, one for the private cluster VLAN and one for the public network.

(* Actually one node has 8GB RAM because a stick died; I have replacement
RAM, but I'll wait until I have another reason to power the node down
before putting the extra 8GB back in, as right now it's going fine.)

I'm not making a lot of use of CephFS yet, but I intend to use it with
Docker and for some OpenNebula tasks.  99% of the workload right now is
RBDs for virtual machines.

These have served well, but with the addition of a few VMs, space is
getting just a fraction tight.  The cases can fit two 2.5" HDDs, and
have one 120GB SSD and the aforementioned 1TB HDD fitted.  No more space
inside.  The cases are mounted on a DIN rail.

2TB 2.5" HDDs are an option, but that seems to be where 2.5" HDDs max
out, unless I want to trust my data to Seagate.  (Many times bitten,
quite shy.)  I'm not made of money, so that rules out SSDs at this size.

The other option is to move to 3.5" HDDs, which will have to go in a
separate external case.  4TB HDDs are pretty cheap these days and so
that would see my needs out for a while.

I can buy off-the-shelf USB 3.0 HDD cases that will plug directly into
the Ceph nodes to provide additional OSDs.  There are two USB 3.0 ports
I can use for this -- I then just need to add a DIN rail bracket to the
case and rig up a 12V regulator to power it -- not rocket science.

Alternatively, I can go eSATA.  It seems eSATA cases have gone the way
of the dinosaur in favour of USB3 and ThunderBolt (which I do not have).

There are no eSATA ports on the motherboard, but for AU$20 I can buy an
adaptor bracket with two SATA-to-eSATA adaptors mounted, so adding an
eSATA port to the server is no problem.  (The cable length will be less
than 1m, so there's no need for a dedicated eSATA HBA.)
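On raw bandwidth, neither interface should be the bottleneck for a single spinning disk.  A quick back-of-envelope check (the line rates are the nominal USB 3.0 and SATA III spec figures; the ~150 MB/s HDD rate is a rough assumption for drives like these, not a measured number):

```python
def effective_mb_s(line_rate_gbps, encoding_efficiency):
    """Usable payload bandwidth in MB/s after line encoding overhead."""
    return line_rate_gbps * 1e9 * encoding_efficiency / 8 / 1e6

usb3 = effective_mb_s(5.0, 8 / 10)   # USB 3.0: 5 Gb/s with 8b/10b encoding
sata3 = effective_mb_s(6.0, 8 / 10)  # SATA III: 6 Gb/s with 8b/10b encoding
hdd_seq = 150.0                      # assumed sequential rate of one 5400 rpm HDD

print(f"USB 3.0 ~{usb3:.0f} MB/s, SATA III ~{sata3:.0f} MB/s, "
      f"HDD ~{hdd_seq:.0f} MB/s")
```

Either interface comfortably outruns one spinning disk, so the deciding factor is CPU overhead and long-term reliability rather than throughput.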

I *think* I might be able to use the same adaptor to go from eSATA back
to the HDD itself, so DIY may be an option here.

My understanding is that eSATA has a lower CPU overhead than USB 3.0.
ceph-osd can be quite CPU intensive at times (recommended dual-core
CPUs), and the nodes also run the metadata storage daemons which are
said to be hungry for CPU (recommended quad-core).
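One way to put a number on that overhead would be to run an identical sequential workload (dd or fio) against a USB-attached and an eSATA-attached disk and compare how busy the CPU is during each run.  A minimal sketch of the comparison, assuming only the standard Linux /proc/stat field layout (the helper itself is illustrative, not a Ceph tool):

```python
def cpu_busy_fraction(sample_a, sample_b):
    """Fraction of non-idle CPU time between two /proc/stat 'cpu' lines.

    Fields after the 'cpu' label are cumulative jiffies:
    user nice system idle iowait irq softirq ...
    Idle and iowait are treated as "not busy"; irq/softirq time (where
    USB mass-storage overhead tends to show up) counts as busy.
    """
    a = [int(x) for x in sample_a.split()[1:]]
    b = [int(x) for x in sample_b.split()[1:]]
    deltas = [y - x for x, y in zip(a, b)]
    total = sum(deltas)
    not_busy = deltas[3] + deltas[4]  # idle + iowait
    return (total - not_busy) / total

# Synthetic before/after samples; on a real node you would read
# the first line of /proc/stat before and after the benchmark run.
before = "cpu  1000 0 500 8000 500 0 0 0"
after  = "cpu  1600 0 900 9000 600 0 0 0"
print(f"busy: {cpu_busy_fraction(before, after):.0%}")
```

Running that around the same fio job on each enclosure would turn the "USB costs more CPU" folklore into a concrete percentage for this exact hardware.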

I did migrate my cluster from FileStore/btrfs to BlueStore, which
involved plugging in a temporary OSD over USB 3.0 (a WDC
WD1002FAEX-00Z3A0 in a 3.5" drive dock).  While that worked fine, it
wasn't a real-world test.

The question is: is the CPU overhead of USB 3.0 likely to matter at
speed?  Has anyone tried it long term, and do you have any comments?

Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.
