[ceph-users] CEPH DR RBD Mount

David C dcsysengineer at gmail.com
Fri Nov 30 08:24:11 PST 2018


Is that one big XFS filesystem? Are you able to mount it with krbd?
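A krbd-based test could look roughly like this (pool and image names are taken from the status output quoted below; the device path, filesystem type, and mount options are assumptions for illustration, not details confirmed in this thread):

```shell
# Map the image with the kernel RBD client instead of rbd-nbd
rbd --cluster cephdr map nfs/dir_research

# If the filesystem is XFS, try a read-only mount that skips log
# recovery: a non-primary mirror image is read-only, so XFS cannot
# replay its journal, which can surface as "can't read superblock"
mount -t xfs -o ro,norecovery /dev/rbd0 /mnt
```

The `norecovery` option only sidesteps journal replay; whether the data is consistent still depends on how far the replica lags behind the primary.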

On Tue, 27 Nov 2018, 13:49 Vikas Rana <vikasrana3 at gmail.com> wrote:

> Hi There,
>
> We are replicating a 100TB RBD image to DR site. Replication works fine.
>
> rbd --cluster cephdr mirror pool status nfs --verbose
>
> health: OK
>
> images: 1 total
>
>     1 replaying
>
>
>
> dir_research:
>
>   global_id:   11e9cbb9-ce83-4e5e-a7fb-472af866ca2d
>
>   state:       up+replaying
>
>   description: replaying, master_position=[object_number=591701,
> tag_tid=1, entry_tid=902879873], mirror_position=[object_number=446354,
> tag_tid=1, entry_tid=727653146], entries_behind_master=175226727
>
>   last_update: 2018-11-14 16:17:23
>
>
>
>
> We then use rbd-nbd to map the RBD image at the DR site, but when we try
> to mount it, we get:
>
>
> # mount /dev/nbd2 /mnt
>
> mount: block device /dev/nbd2 is write-protected, mounting read-only
>
> mount: /dev/nbd2: can't read superblock
>
>
>
> We are using Ceph 12.2.8.
>
>
> Any help will be greatly appreciated.
>
>
> Thanks,
>
> -Vikas
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>