[ceph-users] NVME Intel Optane - same servers different performance

Steven Vacaroaia stef97 at gmail.com
Thu Oct 25 09:32:06 PDT 2018


Thanks for the suggestion.
However, the same "bad" server was working fine until I updated the firmware.
Now all 4 servers have the same firmware, but one has lower performance.
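For reference, the gap in the fio results quoted further down the thread is modest but consistent - roughly 5% by read IOPS (a quick back-of-the-envelope, using the 583k vs 554k IOPS figures from the two runs below):

```shell
# 583k IOPS on the good server vs 554k IOPS on the bad one:
awk 'BEGIN { printf "%.1f%%\n", (583 - 554) / 583 * 100 }'   # prints: 5.0%
```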

I will try what you suggested, although, as I said, the same server with the
same NVMe had good performance before updating the server firmware.
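One low-level check worth scripting across all four hosts is the negotiated PCIe link, since a server firmware/BIOS update can silently change link training. A minimal sketch - the helper only parses output captured from `lspci -vv`, and the sample "LnkSta" line is illustrative, not taken from these servers:

```shell
#!/bin/sh
# Hypothetical helper: pull the negotiated PCIe speed and width out of an
# "LnkSta:" line (as captured with `lspci -vv -s 04:00.0` on each host),
# so the four servers can be diffed quickly.
link_status() {
    sed -n 's/.*Speed \([0-9.]*GT\/s\)[^W]*Width \(x[0-9]*\).*/\1 \2/p'
}

# Illustrative sample line - a P4800X should negotiate PCIe Gen3 x4:
sample='LnkSta: Speed 8GT/s, Width x4, TrErr- Train- SlotClk+'
echo "$sample" | link_status    # prints: 8GT/s x4
```

On the real hosts, `lspci -vv -s 04:00.0 | link_status` (run as root) should print the same speed/width on every server; the drive firmware revision can likewise be compared across hosts with `nvme list` from nvme-cli, in case the BIOS update bundled an NVMe firmware change on only some of them.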

Thanks

On Thu, 25 Oct 2018 at 12:20, Martin Verges <martin.verges at croit.io> wrote:

> Hello Steven,
>
> You could swap the SSDs between the hosts to see if the problem migrates.
>
> If it migrates, I would suspect that the affected SSD simply offers
> less performance than the others - possibly an RMA reason, depending on
> what the manufacturer guarantees.
> If not, the system must be searched further for other possible sources of error.
>
> --
> Martin Verges
> Managing director
>
> Mobile: +49 174 9335695
> E-Mail: martin.verges at croit.io
> Chat: https://t.me/MartinVerges
>
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
>
> Web: https://croit.io
> YouTube: https://goo.gl/PGE1Bx
>
>
> 2018-10-25 18:06 GMT+02:00 Steven Vacaroaia <stef97 at gmail.com>:
> > Hi Martin,
> >
> > Yes, they are in the same slot. I also checked:
> >      the BIOS - the PCIe speed and type are properly negotiated
> >      the system profile (Performance)
> >
> > Note:
> > This happened after I upgraded the firmware on the servers - however, they
> > all have the same firmware
> >
> > BAD server
> > lspci | grep -i Optane
> > 04:00.0 Non-Volatile memory controller: Intel Corporation Optane DC P4800X Series SSD
> >
> > GOOD server
> > lspci | grep -i Optane
> > 04:00.0 Non-Volatile memory controller: Intel Corporation Optane DC P4800X Series SSD
> >
> >
> >
> > On Thu, 25 Oct 2018 at 11:59, Martin Verges <martin.verges at croit.io>
> wrote:
> >>
> >> Hello Steven,
> >>
> >> are you sure that the systems are exactly the same? Sometimes vendors
> >> place extension cards into different PCIe slots.
> >>
> >> --
> >> Martin Verges
> >> Managing director
> >>
> >> Mobile: +49 174 9335695
> >> E-Mail: martin.verges at croit.io
> >> Chat: https://t.me/MartinVerges
> >>
> >> croit GmbH, Freseniusstr. 31h, 81247 Munich
> >> CEO: Martin Verges - VAT-ID: DE310638492
> >> Com. register: Amtsgericht Munich HRB 231263
> >>
> >> Web: https://croit.io
> >> YouTube: https://goo.gl/PGE1Bx
> >>
> >>
> >> 2018-10-25 17:46 GMT+02:00 Steven Vacaroaia <stef97 at gmail.com>:
> >> > Hi,
> >> > I have 4 x DELL R630 servers with the exact same specs.
> >> > I installed an Intel Optane SSDPED1K375GA in each.
> >> >
> >> > When comparing fio performance (both read and write), one is lower than
> >> > the other 3 (see below - just read).
> >> >
> >> > Any suggestions as to what to check/fix?
> >> >
> >> > BAD server
> >> > [root at osd04 ~]# fio --filename=/dev/nvme0n1 --direct=1 --sync=1
> >> > --rw=read
> >> > --bs=4k --numjobs=100 --iodepth=1 --runtime=60 --time_based
> >> > --group_reporting --name=journal-test
> >> > journal-test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
> >> > 4096B-4096B, ioengine=psync, iodepth=1
> >> > ...
> >> > fio-3.1
> >> > Starting 100 processes
> >> > Jobs: 100 (f=100): [R(100)][100.0%][r=2166MiB/s,w=0KiB/s][r=554k,w=0
> >> > IOPS][eta 00m:00s]
> >> >
> >> >
> >> > GOOD server
> >> > [root at osd02 ~]# fio --filename=/dev/nvme0n1 --direct=1 --sync=1
> >> > --rw=read
> >> > --bs=4k --numjobs=100 --iodepth=1 --runtime=60 --time_based
> >> > --group_reporting --name=journal-test
> >> > journal-test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
> >> > 4096B-4096B, ioengine=psync, iodepth=1
> >> > ...
> >> > fio-3.1
> >> > Starting 100 processes
> >> > Jobs: 100 (f=100): [R(100)][100.0%][r=2278MiB/s,w=0KiB/s][r=583k,w=0
> >> > IOPS][eta 00m:00s]
> >> >
> >> >
> >> > many thanks
> >> > steven
> >> >
> >> > _______________________________________________
> >> > ceph-users mailing list
> >> > ceph-users at lists.ceph.com
> >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >> >
>