[ceph-users] bad crc/signature errors
jdillama at redhat.com
Wed Oct 4 11:44:56 PDT 2017
Perhaps this is related to a known issue on some 4.4 and later kernels
 where the stable write flag was not preserved by the kernel?
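If you want to check whether stable pages are in effect for a mapped device, the per-device flag is exposed in sysfs. The following is a minimal sketch (the sysfs layout is a general kernel interface, not a Ceph-specific diagnostic, and using it here is my assumption):

```python
# Sketch: report the stable_pages_required flag for each backing device.
# "1" means the kernel enforces stable pages (pages are not modified while
# under writeback); "0" means writes can race with checksumming, which is
# one way the CRC mismatches discussed here can arise.
from pathlib import Path

def stable_pages_status(bdi_root="/sys/class/bdi"):
    """Return {device: flag} for each device exposing stable_pages_required."""
    status = {}
    for entry in Path(bdi_root).glob("*/stable_pages_required"):
        status[entry.parent.name] = entry.read_text().strip()
    return status

if __name__ == "__main__":
    for dev, flag in sorted(stable_pages_status().items()):
        print(f"{dev}: stable_pages_required={flag}")
```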
On Wed, Oct 4, 2017 at 2:36 PM, Gregory Farnum <gfarnum at redhat.com> wrote:
> That message indicates that the checksums of messages between your kernel
> client and OSD are incorrect. It could be actual physical transmission
> errors, but if you don't see other issues then this isn't fatal; they can
> recover from it.
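One quick way to see whether the errors point at a flaky link is to check whether they cluster on one OSD/address or are spread evenly. A small tally script (a hypothetical helper, not part of Ceph; the regex is based on the message format quoted below) could look like:

```python
# Sketch: count "bad crc/signature" kernel log lines per (osd, address) pair,
# e.g. fed from `dmesg`. A heavy skew toward one address suggests a problem
# on that path; an even spread points elsewhere.
import re
import sys
from collections import Counter

LINE_RE = re.compile(r"libceph: (osd\d+) (\S+) bad crc/signature")

def tally_bad_crc(lines):
    """Return a Counter of bad crc/signature events keyed by (osd, address)."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            counts[(m.group(1), m.group(2))] += 1
    return counts

if __name__ == "__main__":
    # Usage: dmesg | python3 tally_bad_crc.py
    for (osd, addr), n in tally_bad_crc(sys.stdin).most_common():
        print(f"{osd} {addr}: {n}")
```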
> On Wed, Oct 4, 2017 at 8:52 AM Josy <josy at colossuscloudtech.com> wrote:
>> We have set up a cluster with 8 OSD servers (31 disks).
>> Ceph health is OK.
>> [root at las1-1-44 ~]# ceph -s
>> id: de296604-d85c-46ab-a3af-add3367f0e6d
>> health: HEALTH_OK
>> mon: 3 daemons, quorum
>> mgr: ceph-las-mon-a1(active), standbys: ceph-las-mon-a2
>> osd: 31 osds: 31 up, 31 in
>> pools: 4 pools, 510 pgs
>> objects: 459k objects, 1800 GB
>> usage: 5288 GB used, 24461 GB / 29749 GB avail
>> pgs: 510 active+clean
>> We created a pool and mounted it as RBD on one of the client servers.
>> While adding data to it, we see the errors below:
>> [939656.039750] libceph: osd20 10.255.0.9:6808 bad crc/signature
>> [939656.041079] libceph: osd16 10.255.0.8:6816 bad crc/signature
>> [939735.627456] libceph: osd11 10.255.0.7:6800 bad crc/signature
>> [939735.628293] libceph: osd30 10.255.0.11:6804 bad crc/signature
>> Can anyone explain what this is and whether I can fix it?
>> ceph-users mailing list
>> ceph-users at lists.ceph.com