Just finished a hardware build expecting a big gain from the new NVMe drive. Prior performance was lower than expected (~3GB transferred), so I replaced the motherboard/CPU/memory/HDD. While performance doubled from what it was, it is still half of what I get on my 3-year-old MacBook Pro with a SATA6 drive.
- CPU: i7-5820K 6-core
- Mobo: MSI X99A MPOWER
- Memory: 32GB
- Drive: Samsung 950 Pro NVMe PCIe
Ubuntu (also confirmed with 16.04.1 LTS):
Release: 15.10
Codename: wily
4.2.0-16-generic
$ sudo blkid
[sudo] password for kross:
/dev/nvme0n1p4: UUID="2997749f-1895-4581-abd3-6ccac79d4575" TYPE="swap"
/dev/nvme0n1p1: LABEL="SYSTEM" UUID="C221-7CA5" TYPE="vfat"
/dev/nvme0n1p3: UUID="c7dc0813-3d18-421c-9c91-25ce21892b9d" TYPE="ext4"
Here are my test results:
sysbench --test=fileio --file-total-size=128G prepare
sysbench --test=fileio --file-total-size=128G --file-test-mode=rndrw --max-time=300 --max-requests=0 run
sysbench --test=fileio --file-total-size=128G cleanup
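As a cross-check (my addition, not part of the original runs), the same workload can be repeated with O_DIRECT so the page cache doesn't inflate the numbers; a sketch assuming sysbench's standard fileio option --file-extra-flags=direct:
sysbench --test=fileio --file-total-size=128G --file-test-mode=rndrw --file-extra-flags=direct --max-time=300 --max-requests=0 run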
Operations performed: 228000 Read, 152000 Write, 486274 Other = 866274 Total
Read 3.479Gb Written 2.3193Gb Total transferred 5.7983Gb (19.791Mb/sec)
1266.65 Requests/sec executed
Test execution summary:
total time: 300.0037s
total number of events: 380000
total time taken by event execution: 23.6549
per-request statistics:
min: 0.01ms
avg: 0.06ms
max: 4.29ms
approx. 95 percentile: 0.13ms
Threads fairness:
events (avg/stddev): 380000.0000/0.00
execution time (avg/stddev): 23.6549/0.00
The scheduler is set to none:
# cat /sys/block/nvme0n1/queue/scheduler
none
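As an aside (my addition), the same check can be run across every block device at once; on this 4.2 kernel the NVMe queue uses blk-mq, which is why only none is reported:
# grep . /sys/block/*/queue/scheduler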
Here is the lspci info:
# lspci -vv -s 02:00.0
02:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd Device a802 (rev 01) (prog-if 02 [NVM Express])
Subsystem: Samsung Electronics Co Ltd Device a801
Physical Slot: 2-1
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Interrupt: pin A routed to IRQ 45
Region 0: Memory at fb610000 (64-bit, non-prefetchable) [size=16K]
Region 2: I/O ports at e000 [size=256]
Expansion ROM at fb600000 [disabled] [size=64K]
Capabilities: [40] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [50] MSI: Enable- Count=1/8 Maskable- 64bit+
Address: 0000000000000000 Data: 0000
Capabilities: [70] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
MaxPayload 128 bytes, MaxReadReq 512 bytes
DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ TransPend-
LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L0s <4us, L1 <64us
ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR+, OBFF Not Supported
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance De-emphasis: -6dB
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
Capabilities: [b0] MSI-X: Enable+ Count=9 Masked-
Vector table: BAR=0 offset=00003000
PBA: BAR=0 offset=00002000
Capabilities: [100 v2] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
Capabilities: [148 v1] Device Serial Number 00-00-00-00-00-00-00-00
Capabilities: [158 v1] Power Budgeting <?>
Capabilities: [168 v1] #19
Capabilities: [188 v1] Latency Tolerance Reporting
Max snoop latency: 0ns
Max no snoop latency: 0ns
Capabilities: [190 v1] L1 PM Substates
L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
PortCommonModeRestoreTime=10us PortTPowerOnTime=10us
Kernel driver in use: nvme
hdparm:
$ sudo hdparm -tT --direct /dev/nvme0n1
/dev/nvme0n1:
Timing O_DIRECT cached reads: 2328 MB in 2.00 seconds = 1163.98 MB/sec
Timing O_DIRECT disk reads: 5250 MB in 3.00 seconds = 1749.28 MB/sec
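A rough cross-check of that sequential figure (my addition, using standard coreutils dd flags) is a plain O_DIRECT read straight off the device:
$ sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct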
hdparm -v:
sudo hdparm -v /dev/nvme0n1
/dev/nvme0n1:
SG_IO: questionable sense data, results may be incorrect
multcount = 0 (off)
readonly = 0 (off)
readahead = 256 (on)
geometry = 488386/64/32, sectors = 1000215216, start = 0
fstab
UUID=453cf71b-38ca-49a7-90ba-1aaa858f4806 / ext4 noatime,nodiratime,errors=remount-ro 0 1
# /boot/efi was on /dev/sda1 during installation
#UUID=C221-7CA5 /boot/efi vfat defaults 0 1
# swap was on /dev/sda4 during installation
UUID=8f716653-e696-44b1-8510-28a1c53f0e8d none swap sw 0 0
UUID=C221-7CA5 /boot/efi vfat defaults 0 1
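To confirm those mount options actually took effect on the running root filesystem (my addition, using util-linux findmnt):
$ findmnt -no OPTIONS /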
fio
Here is a comparable benchmark. When I test with fio and disable sync, it's a different story:
sync=1
1 job - write: io=145712KB, bw=2428.5KB/s, iops=607, runt= 60002msec
7 jobs - write: io=245888KB, bw=4097.9KB/s, iops=1024, runt= 60005msec
sync=0
1 job - write: io=8157.9MB, bw=139225KB/s, iops=34806, runt= 60001msec
7 jobs - write: io=32668MB, bw=557496KB/s, iops=139373, runt= 60004msec
sync
Here are the full results for one job and 7 jobs:
$ sudo fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/2368KB/0KB /s] [0/592/0 iops] [eta 00m:00s]
journal-test: (groupid=0, jobs=1): err= 0: pid=18009: Wed Nov 18 18:14:03 2015
write: io=145712KB, bw=2428.5KB/s, iops=607, runt= 60002msec
clat (usec): min=1442, max=12836, avg=1643.09, stdev=546.22
lat (usec): min=1442, max=12836, avg=1643.67, stdev=546.23
clat percentiles (usec):
| 1.00th=[ 1480], 5.00th=[ 1496], 10.00th=[ 1512], 20.00th=[ 1528],
| 30.00th=[ 1576], 40.00th=[ 1592], 50.00th=[ 1608], 60.00th=[ 1608],
| 70.00th=[ 1608], 80.00th=[ 1624], 90.00th=[ 1640], 95.00th=[ 1672],
| 99.00th=[ 2192], 99.50th=[ 6944], 99.90th=[ 7328], 99.95th=[ 7328],
| 99.99th=[ 7520]
bw (KB /s): min= 2272, max= 2528, per=100.00%, avg=2430.76, stdev=61.45
lat (msec) : 2=98.44%, 4=0.58%, 10=0.98%, 20=0.01%
cpu : usr=0.39%, sys=3.11%, ctx=109285, majf=0, minf=8
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=36428/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=145712KB, aggrb=2428KB/s, minb=2428KB/s, maxb=2428KB/s, mint=60002msec, maxt=60002msec
Disk stats (read/write):
nvme0n1: ios=69/72775, merge=0/0, ticks=0/57772, in_queue=57744, util=96.25%
$ sudo fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=7 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
fio-2.1.11
Starting 7 processes
Jobs: 6 (f=6): [W(2),_(1),W(4)] [50.4% done] [0KB/4164KB/0KB /s] [0/1041/0 iops] [eta 01m:00s]
journal-test: (groupid=0, jobs=7): err= 0: pid=18025: Wed Nov 18 18:15:10 2015
write: io=245888KB, bw=4097.9KB/s, iops=1024, runt= 60005msec
clat (usec): min=0, max=107499, avg=6828.48, stdev=3056.21
lat (usec): min=0, max=107499, avg=6829.10, stdev=3056.16
clat percentiles (usec):
| 1.00th=[ 0], 5.00th=[ 2992], 10.00th=[ 4512], 20.00th=[ 4704],
| 30.00th=[ 5088], 40.00th=[ 6176], 50.00th=[ 6304], 60.00th=[ 7520],
| 70.00th=[ 7776], 80.00th=[ 9024], 90.00th=[10048], 95.00th=[12480],
| 99.00th=[15936], 99.50th=[18048], 99.90th=[22400], 99.95th=[23936],
| 99.99th=[27008]
bw (KB /s): min= 495, max= 675, per=14.29%, avg=585.60, stdev=28.07
lat (usec) : 2=4.41%
lat (msec) : 2=0.57%, 4=4.54%, 10=80.32%, 20=9.92%, 50=0.24%
lat (msec) : 250=0.01%
cpu : usr=0.14%, sys=0.72%, ctx=173735, majf=0, minf=63
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=61472/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=245888KB, aggrb=4097KB/s, minb=4097KB/s, maxb=4097KB/s, mint=60005msec, maxt=60005msec
Disk stats (read/write):
nvme0n1: ios=21/122801, merge=0/0, ticks=0/414660, in_queue=414736, util=99.90%
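For completeness: only the --sync=1 invocations are shown above; the sync=0 numbers presumably came from the same command with --sync=0, e.g.:
$ sudo fio --filename=/dev/nvme0n1 --direct=1 --sync=0 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test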
Alignment
I've verified alignment with parted, as well as the math based on http://www.intel.com/content/dam/www/public/us/en/documents/technology-briefs/ssd-partition-alignment-tech-brief.pdf
kross@camacho:~$ sudo parted
GNU Parted 3.2
Using /dev/nvme0n1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) print all
Model: Unknown (unknown)
Disk /dev/nvme0n1: 1000215216s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 2048s 206847s 204800s fat32 EFI system partition boot, esp
2 206848s 486957055s 486750208s ntfs msftdata
3 486957056s 487878655s 921600s ntfs hidden, diag
4 590608384s 966787071s 376178688s ext4
5 966787072s 1000214527s 33427456s linux-swap(v1)
kross@camacho:~$ sudo parted /dev/nvme0n1
GNU Parted 3.2
Using /dev/nvme0n1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) align-check opt 1
1 aligned
(parted) align-check opt 2
2 aligned
(parted) align-check opt 3
3 aligned
(parted) align-check opt 4
4 aligned
(parted) align-check opt 5
5 aligned
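The math itself reduces to checking that each partition's starting byte offset is a multiple of the flash page size; a minimal sketch (my addition, assuming a 4 KiB page size and the 512 B logical sectors parted reports):
for p in 1 2 3 4 5; do
    start=$(cat /sys/block/nvme0n1/nvme0n1p$p/start)   # start sector of partition p
    echo "p$p: offset=$((start * 512)) bytes, mod 4096 = $(( start * 512 % 4096 ))"
done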
TLDR;
I think I have something fundamentally set incorrectly, though my research hasn't turned up anything. I'm expecting roughly 4x the throughput of my 3-year-old MacBook Pro with SATA6, and instead I'm getting half of it with NVMe. I added noatime,nodiratime, which gave a small improvement, but nothing like the 4x I'm expecting. I've re-partitioned/re-installed a fresh 15.10 server just to be sure nothing was lingering around, and got the same results.
Are my fio results above, sync vs. no sync, indicative of a problem?
So I have a clean slate and can try anything. What can I try to bring performance up to par? Any references are welcome.
apt-get install smartmontools fails with grub-probe: error: cannot find a GRUB drive for /dev/nvme0n1p3. Check your device.map. From what I can tell, update-grub doesn't work properly because of that grub-probe error. smartctl -i /dev/nvme0n1 returns /dev/nvme0n1: Unable to detect device type. Please specify device type with the -d option. and NVMe doesn't appear in smartctl -h as a device type.
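(Side note, not from the original exchange: smartmontools gained NVMe support in version 6.5, so on a newer release the device type can be passed explicitly. A sketch, assuming smartmontools >= 6.5:)
$ sudo smartctl -d nvme -a /dev/nvme0    # query NVMe identify/health data directly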
What is the result of uname --kernel-release && lsb_release --code --short?
4.2.0-16-generic wily
You need a Skylake processor to run that SSD at full speed.
Try smartctl --scan and then smartctl --all /dev/xxx, where xxx is whatever came up in the first command?