[ceph-users] pg scrub and auto repair in hammer

Christian Balzer chibi at gol.com
Tue Jun 28 00:06:57 PDT 2016


On Tue, 28 Jun 2016 08:34:26 +0200 Stefan Priebe - Profihost AG wrote:

> Am 27.06.2016 um 02:14 schrieb Christian Balzer:
> > On Sun, 26 Jun 2016 19:48:18 +0200 Stefan Priebe wrote:
> > 
> >> Hi,
> >>
> >> is there any option or chance to have auto repair of pgs in hammer?
> >>
> > Short answer: 
> > No, in any version of Ceph.
> > 
> > Long answer:
> > There are currently no checksums generated by Ceph that would
> > facilitate this.
> Yes, but with a replication count of 3, ceph pg repair has always
> worked for me since bobtail. I've never seen corrupted data.
That's good and lucky for you.

Not seeing corrupted data also doesn't mean there wasn't any corruption;
it could simply mean that the affected data was never read again, or was
overwritten before being read.

In the handful of scrub errors I ever encountered, there was one case
where blindly repairing from the primary PG would have been the wrong
thing to do.
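For reference, the manual workflow on hammer looks roughly like the
sketch below: find the inconsistent PGs from the health output, inspect
the replicas by hand, and only then run "ceph pg repair <pgid>". The
health output here is made up for illustration; on a real cluster you
would pipe the live "ceph health detail" command instead.

```shell
# Hypothetical "ceph health detail" output, for illustration only.
health_detail='HEALTH_ERR 1 pgs inconsistent; 2 scrub errors
pg 2.37 is active+clean+inconsistent, acting [3,5,7]'

# Extract the inconsistent PG ids. Each would then be checked manually
# (which replica is actually bad?) before running "ceph pg repair",
# since hammer's repair simply copies from the primary.
echo "$health_detail" | awk '/inconsistent/ && $1 == "pg" {print $2}'
```

The point of the manual inspection step is exactly the case above: if
the primary holds the corrupt copy, a blind repair propagates the bad
data to the healthy replicas.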
> > If you'd run BTRFS or ZFS with filestore you'd be closer to an
> > automatic state of affairs, as these filesystems do strong checksums
> > and check them on reads and would create an immediate I/O error if
> > something got corrupted, thus making it clear which OSD is in need of
> > the hammer of healing.
> Yes, but at least BTRFS is still not working for Ceph due to
> fragmentation. I even tested a 4.6 kernel a few weeks ago, but it
> doubles its I/O after a few days.
Nobody (well, certainly not me) suggested to use BTRFS, especially with
Bluestore "around the corner".

Just pointing out that it has the necessary checksumming features.

Christian Balzer        Network/Systems Engineer                
chibi at gol.com   	Global OnLine Japan/Rakuten Communications
