[ceph-users] Set existing pools to use hdd device class only

Enrico Kern enrico.kern at glispa.com
Mon Aug 20 02:42:15 PDT 2018


Hmm, then that is not really an option for me. Maybe one of the devs can
shed some light on why it triggers a migration even when you only have OSDs
of the same class? I have a few petabytes of storage in each cluster; if it
starts migrating everything again, that will be a serious performance
bottleneck. My plan was to switch the existing pools to the new hdd-only
crush rule and add SSD OSDs later.
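
Before I touch the production pools I will probably try to estimate the
impact offline. This is only a sketch and I may be misreading crushtool:
the rule ids 0 and 1, the replica count of 3 and the file names are
assumptions ("ceph osd crush rule dump" and "ceph osd pool get volumes
size" show the real values):

# creating the class-aware rule by itself moves no data,
# only pointing a pool at it does
ceph osd crush rule create-replicated volume-hdd-only default host hdd
# grab the current crush map
ceph osd getcrushmap -o crushmap.bin
# simulate placements for the pool's current rule (id 0 assumed),
# stripping the rule id so only the mappings get compared
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings \
  | sed 's/^CRUSH rule [0-9]* //' > before.txt
# same inputs against the new hdd-only rule (id 1 assumed)
crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-mappings \
  | sed 's/^CRUSH rule [0-9]* //' > after.txt
# each mapping that would move shows up as one '<' and one '>' line
diff before.txt after.txt | grep -c '^>'

If that count is anywhere near the total number of PGs, I will at least
know what to expect.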

On Mon, Aug 20, 2018 at 11:22 AM Marc Roos <M.Roos at f1-outsourcing.eu> wrote:

>
> I just recently did the same. Take into account that everything starts
> migrating. However weird it may be: I had an hdd-only test cluster and
> changed the crush rule to one targeting hdd, and it still took a few
> days, totally unnecessarily as far as I am concerned.
>
>
>
>
> -----Original Message-----
> From: Enrico Kern [mailto:enrico.kern at glispa.com]
> Sent: Monday, 20 August 2018 11:18
> To: ceph-users at lists.ceph.com
> Subject: [ceph-users] Set existing pools to use hdd device class only
>
> Hello,
>
> Right now we have multiple HDD-only clusters with either filestore
> journals on SSDs or, on newer installations, the WAL etc. on SSD.
>
> I plan to extend our Ceph clusters with SSDs to provide SSD-only pools.
> Luminous has device classes, so I should be able to do this without
> editing the crush map by hand.
>
> The device class documentation says I can create "new" pools that use
> only SSDs, for example:
>
> ceph osd crush rule create-replicated fast default host ssd
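>
> (I assume these read-only commands would let me sanity-check the classes
> and the new rule before touching any pool:)
>
> ceph osd crush class ls                # list known device classes
> ceph osd crush rule ls                 # confirm the rule was created
> ceph osd crush tree --show-shadow      # show the per-class shadow hierarchy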
>
> What happens if I apply this to an existing pool, but with the hdd device
> class? I wasn't able to test this in our staging cluster yet and wanted
> to ask what the right way to do this is.
>
> I want to set an existing pool called volumes to use only OSDs with the
> hdd class. Right now all OSDs are hdds, so in theory it should not use
> newly created SSD OSDs once I have set up all the classes, right?
>
> So for the existing pool, running:
>
> ceph osd crush rule create-replicated volume-hdd-only default host hdd
> ceph osd pool set volumes crush_rule volume-hdd-only
>
> should be the way to go, right?
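>
> (And I assume I can verify the pool picked up the rule afterwards with:)
>
> ceph osd pool get volumes crush_rule   # should report volume-hdd-only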
>
>
> Regards,
>
> Enrico
>

-- 

*Enrico Kern*
VP IT Operations

enrico.kern at glispa.com
+49 (0) 30 555713017 / +49 (0) 152 26814501
skype: flyersa
LinkedIn Profile <https://www.linkedin.com/in/enricokern>


<https://www.glispa.com/>

*Glispa GmbH* | Berlin Office
Stromstr. 11-17  <https://goo.gl/maps/6mwNA77gXLP2>
Berlin, Germany, 10551  <https://goo.gl/maps/6mwNA77gXLP2>

Managing Director Din Karol-Gavish
Registered in Berlin
AG Charlottenburg | HRB 114678B
–––––––––––––––––––––––––––––