[ceph-users] Erasure Pools.

huang jun hjwsm1989 at gmail.com
Sun Mar 31 03:13:50 PDT 2019


What's the output of 'ceph osd dump' and 'ceph osd crush dump' and
'ceph health detail'?
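
For the EC pool itself, the profile and pool settings would help too. A
minimal sketch of what to run (pool and profile names taken from your
commands below):

ceph osd erasure-code-profile get ec-42-profile2
ceph osd pool get storage all
ceph pg dump_stuck inactive

One guess: if ec-42-profile2 really means k=4, m=2, then every PG in that
pool needs k+m = 6 shards on distinct OSDs (or distinct hosts, with
crush-failure-domain=host). A 5-OSD cluster can never satisfy that, which
would match 128 PGs stuck creating+incomplete.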

Andrew J. Hutton <andrew.john.hutton at gmail.com> wrote on Sat, Mar 30, 2019 at 7:05 AM:
>
> I have tried to create erasure pools for CephFS using the examples
> given at
> https://swamireddy.wordpress.com/2016/01/26/ceph-diff-between-erasure-and-replicated-pool-type/
> but this is resulting in some weird behaviour.  The only number the
> warning has in common with my commands is the 128 used when creating
> the metadata store; is this related?
>
> [ceph at thor ~]$ ceph -s
>    cluster:
>      id:     b688f541-9ad4-48fc-8060-803cb286fc38
>      health: HEALTH_WARN
>              Reduced data availability: 128 pgs inactive, 128 pgs incomplete
>
>    services:
>      mon: 3 daemons, quorum thor,odin,loki
>      mgr: odin(active), standbys: loki, thor
>      mds: cephfs-1/1/1 up  {0=thor=up:active}, 1 up:standby
>      osd: 5 osds: 5 up, 5 in
>
>    data:
>      pools:   2 pools, 256 pgs
>      objects: 21 objects, 2.19KiB
>      usage:   5.08GiB used, 7.73TiB / 7.73TiB avail
>      pgs:     50.000% pgs not active
>               128 creating+incomplete
>               128 active+clean
>
> Pretty sure these were the commands used.
>
> ceph osd pool create storage 1024 erasure ec-42-profile2
> ceph osd pool create storage 128 erasure ec-42-profile2
> ceph fs new cephfs storage_metadata storage
> ceph osd pool create storage_metadata 128
> ceph fs new cephfs storage_metadata storage
> ceph fs add_data_pool cephfs storage
> ceph osd pool set storage allow_ec_overwrites true
> ceph osd pool application enable storage cephfs
> fs add_data_pool default storage
> ceph fs add_data_pool cephfs storage
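
For comparison, the usual sequence for CephFS on an EC data pool looks
roughly like this; a sketch only, reusing your pool/profile names, with
the k/m values and pg counts being assumptions on my part:

# define the profile before creating the pool (k=4/m=2 guessed from the name)
ceph osd erasure-code-profile set ec-42-profile2 k=4 m=2 crush-failure-domain=host
# replicated metadata pool, erasure-coded data pool
ceph osd pool create storage_metadata 128
ceph osd pool create storage 128 128 erasure ec-42-profile2
# EC overwrites must be enabled before CephFS can use the pool
ceph osd pool set storage allow_ec_overwrites true
# --force is required when the default data pool is erasure-coded
ceph fs new cephfs storage_metadata storage --force

On a cluster this small, crush-failure-domain=host in the profile is
unlikely to map; crush-failure-domain=osd (and enough OSDs to hold k+m
shards) would at least let the PGs go active for testing.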



-- 
Thank you!
HuangJun

