[ceph-users] XFS no space left on device

Irek Fasikhov malmyzh at gmail.com
Tue Oct 25 05:55:39 PDT 2016


Hi, Vasily.

It looks like your inodes are exhausted. See "df -i".
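
For example (a minimal check, using the OSD mount point from your output below):

df -i /var/lib/ceph/osd/ceph-123
# If IUse% is at or near 100%, XFS returns ENOSPC ("No space left on device")
# for new files even though "df -h" still shows free blocks.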

Best regards, Irek Nurgayazovich Fasikhov
Mob.: +79229045757

2016-10-25 15:52 GMT+03:00 Василий Ангапов <angapov at gmail.com>:

> Here is a bit more information about that XFS filesystem:
>
> root at ed-ds-c178:[~]:$ xfs_info /dev/mapper/disk23p1
> meta-data=/dev/mapper/disk23p1   isize=2048   agcount=6, agsize=268435455 blks
>          =                       sectsz=4096  attr=2, projid32bit=1
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=1465130385, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=521728, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> root at ed-ds-c178:[~]:$ xfs_db /dev/mapper/disk23p1
> xfs_db> frag
> actual 25205642, ideal 22794438, fragmentation factor 9.57%
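>
> (Side note: free-space fragmentation can also trigger ENOSPC on XFS even
> when blocks and inodes remain free. An additional read-only check, assuming
> xfs_db may be pointed at the device, would be something like:
>
> xfs_db -r -c "freesp -s" /dev/mapper/disk23p1
>
> which prints a histogram of free extents by size plus a summary.)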
>
> 2016-10-25 14:59 GMT+03:00 Василий Ангапов <angapov at gmail.com>:
> > Actually all OSDs are already mounted with the inode64 option. Otherwise I
> > could not write beyond 1TB.
> >
> > 2016-10-25 14:53 GMT+03:00 Ashley Merrick <ashley at amerrick.co.uk>:
> >> Sounds like the 32-bit inode limit. If you mount with -o inode64 (not 100%
> >> sure how you would do that in Ceph), data would be able to continue being
> >> written.
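> >>
> >> For example (an untested sketch, assuming FileStore OSDs whose XFS mount
> >> options are taken from ceph.conf), something along these lines:
> >>
> >> [osd]
> >> # hypothetical example: mount FileStore XFS OSDs with inode64
> >> osd mount options xfs = rw,noatime,inode64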
> >>
> >> ,Ashley
> >>
> >> -----Original Message-----
> >> From: ceph-users [mailto:ceph-users-bounces at lists.ceph.com] On Behalf
> Of Василий Ангапов
> >> Sent: 25 October 2016 12:38
> >> To: ceph-users <ceph-users at lists.ceph.com>
> >> Subject: [ceph-users] XFS no space left on device
> >>
> >> Hello,
> >>
> >> I have a Ceph 10.2.1 cluster with 10 nodes, each having 29 * 6TB OSDs.
> >> Yesterday I found that 3 OSDs were down and out with 89% space
> utilization.
> >> In the logs there is:
> >> 2016-10-24 22:36:37.599253 7f8309c5e800  0 ceph version 10.2.1 (
> 3a66dd4f30852819c1bdaa8ec23c795d4ad77269), process ceph-osd, pid
> >> 2602081
> >> 2016-10-24 22:36:37.600129 7f8309c5e800  0 pidfile_write: ignore empty
> --pid-file
> >> 2016-10-24 22:36:37.635769 7f8309c5e800  0
> >> filestore(/var/lib/ceph/osd/ceph-123) backend xfs (magic 0x58465342)
> >> 2016-10-24 22:36:37.635805 7f8309c5e800 -1
> >> genericfilestorebackend(/var/lib/ceph/osd/ceph-123) detect_features:
> >> unable to create /var/lib/ceph/osd/ceph-123/fiemap_test: (28) No space
> left on device
> >> 2016-10-24 22:36:37.635814 7f8309c5e800 -1
> >> filestore(/var/lib/ceph/osd/ceph-123) _detect_fs: detect_features
> >> error: (28) No space left on device
> >> 2016-10-24 22:36:37.635818 7f8309c5e800 -1
> >> filestore(/var/lib/ceph/osd/ceph-123) FileStore::mount: error in
> >> _detect_fs: (28) No space left on device
> >> 2016-10-24 22:36:37.635824 7f8309c5e800 -1 osd.123 0 OSD:init: unable
> to mount object store
> >> 2016-10-24 22:36:37.635827 7f8309c5e800 -1 ** ERROR: osd init
> failed: (28) No space left on device
> >>
> >> root at ed-ds-c178:[/var/lib/ceph/osd/ceph-123]:$ df -h
> /var/lib/ceph/osd/ceph-123
> >> Filesystem            Size  Used Avail Use% Mounted on
> >> /dev/mapper/disk23p1  5.5T  4.9T  651G  89% /var/lib/ceph/osd/ceph-123
> >>
> >> root at ed-ds-c178:[/var/lib/ceph/osd/ceph-123]:$ df -i
> /var/lib/ceph/osd/ceph-123
> >> Filesystem              Inodes    IUsed     IFree IUse% Mounted on
> >> /dev/mapper/disk23p1 146513024 22074752 124438272   16%
> >> /var/lib/ceph/osd/ceph-123
> >>
> >> root at ed-ds-c178:[/var/lib/ceph/osd/ceph-123]:$ touch 123
> >> touch: cannot touch ‘123’: No space left on device
> >>
> >> root at ed-ds-c178:[/var/lib/ceph/osd/ceph-123]:$ grep ceph-123
> /proc/mounts
> >> /dev/mapper/disk23p1 /var/lib/ceph/osd/ceph-123 xfs
> rw,noatime,attr2,inode64,noquota 0 0
> >>
> >> The same situation exists for all three down OSDs. The OSD can be unmounted
> >> and remounted without problems:
> >> root at ed-ds-c178:[~]:$ umount /var/lib/ceph/osd/ceph-123
> >> root at ed-ds-c178:[~]:$ mount /var/lib/ceph/osd/ceph-123
> >> root at ed-ds-c178:[~]:$ touch /var/lib/ceph/osd/ceph-123/123
> >> touch: cannot touch ‘/var/lib/ceph/osd/ceph-123/123’: No space left on device
> >>
> >> xfs_repair reports no errors for the filesystem.
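> >>
> >> (A read-only check of that kind would be, e.g.:
> >> umount /var/lib/ceph/osd/ceph-123
> >> xfs_repair -n /dev/mapper/disk23p1
> >> )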
> >>
> >> Kernel is
> >> root at ed-ds-c178:[~]:$ uname -r
> >> 4.7.0-1.el7.wg.x86_64
> >>
> >> What else can I do to rectify that situation?
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

