
CPU vs PCH PCIe lanes for SSD



Has anyone tried to compare the real effect of locating the music library on an SSD connected to:
- M.2 on board, CPU lanes
- PCIe-to-M.2 adapter, CPU lanes
- M.2 on board, chipset lanes
I'm struggling with the choice of motherboard; there are never enough direct lanes, and the bifurcation options are mostly limited.

Do you think it matters more for a system disk than for a data disk?

Link to comment

The 12th gen Intel CPU's DMI bus is PCIe Gen4 x8 lanes, so it can accommodate the traffic of two Gen4 M.2 or four Gen3 M.2 drives, and there is no meaningful difference between CPU-direct PCIe ports and PCH PCIe ports in a music playback scenario. It is challenging to saturate the 16 GB/s DMI bus even when reading uncompressed WAV PCM files from a RAID0 M.2 array into main memory. If that rate were achieved, 74 minutes of Compact Disc data could be transferred in about 0.05 second. (And FLAC reads should be CPU-bottlenecked, because decompression takes more time than the storage read.)
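A quick back-of-the-envelope check of that figure, assuming 16-bit/44.1 kHz stereo CD audio and the nominal 16 GB/s of the Gen4 x8 DMI link:

// Back-of-the-envelope check: how long does 74 minutes of CD audio
// take to cross a 16 GB/s DMI link?
#include <cstdio>

int main() {
    const double sampleRate     = 44100.0;       // Hz
    const double bytesPerFrame  = 2.0 * 2.0;     // 2 channels x 16-bit samples
    const double seconds        = 74.0 * 60.0;   // 74-minute CD
    const double cdBytes        = sampleRate * bytesPerFrame * seconds;  // ~783 MB
    const double dmiBytesPerSec = 16e9;          // nominal 16 GB/s DMI

    std::printf("CD data: %.0f MB, transfer time: %.3f s\n",
                cdBytes / 1e6, cdBytes / dmiBytesPerSec);  // ~0.049 s
    return 0;
}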

Sunday programmer since 1985

Developer of PlayPcmWin

Link to comment

I know that throughput is not the problem, but posts on this forum often state that it depends on latency, which is greater when connected via the PCH. For example, plugging a USB card into a direct slot has, according to these opinions, a positive effect on sound. So my question was whether the same would hold for a disk holding a music library.
I need to plug in one USB card and three NVMe drives (a system drive and two music library drives). No 12th-gen board that I've seen allows connecting all of these directly to the CPU (only Z590 for 11th gen, if I don't count X299 etc.). If it didn't matter, the selection would be wider...

Link to comment

Surely your music player software doesn't send the music bits directly from the SSD to the DAC via the USB port. Hopefully there is at least one RAM buffer in between, which should obliterate any SSD connection latency differential.
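To illustrate the point, here is a minimal sketch of that pattern (hypothetical file name; real players typically stream in chunks rather than slurping the whole file): the track is read into RAM once, and playback never touches the SSD again.

// Sketch: buffer the whole track in RAM before playback begins.
#include <cstdio>
#include <fstream>
#include <vector>

int main() {
    std::ifstream in("track.wav", std::ios::binary | std::ios::ate);
    if (!in) return 1;
    std::streamsize size = in.tellg();
    in.seekg(0, std::ios::beg);

    std::vector<char> buffer(static_cast<size_t>(size));  // the RAM buffer
    in.read(buffer.data(), size);                         // one bulk read from the SSD

    // From here, the audio pipeline consumes buffer[]; storage latency is
    // out of the signal path entirely.
    std::printf("buffered %lld bytes\n", (long long)size);
    return 0;
}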

Custom room treatments for headphone users.

Link to comment

I performed a comparison test to measure the file read latency of PCH NVMe, direct CPU PCIe NVMe, and SATA SSD.

 

The test computer is an Intel i9-9900K on an Asus Z370-I (DMI 3.0, PCIe Gen3), running Windows 11.

 

Read file size: 1 byte.

Power plans tested: "High Performance" and "Power Saver".

The network is disconnected during the test.

 

The test is the Cygwin bash script below. The ReadOneFileWithoutCache program source is at https://sourceforge.net/p/playpcmwin/code/HEAD/tree/PlayPcmWin/00experiments/ReadOneFileWithoutCache/main.cpp

#!/bin/bash
# Flush the OS file cache with Sysinternals RAMMap before each round,
# then time an uncached 1-byte read from each drive.

rammap="/cygdrive/c/apps/SysinternalsSuite/RAMMap.exe"

for i in `seq 0 100`;do   # seq 0 100 yields 101 rounds (the off-by-one noted below)
    $rammap -Ew    # empty process working sets
    $rammap -Es    # empty system working set
    $rammap -Em    # empty modified page list
    $rammap -Et    # empty standby list

    sleep 10

    # paths are quoted so bash preserves the backslashes
    ./x64/Release/ReadOneFileWithoutCache 'C:\1byte.txt'
    ./x64/Release/ReadOneFileWithoutCache 'D:\1byte.txt'
    ./x64/Release/ReadOneFileWithoutCache 'E:\1byte.txt'
    ./x64/Release/ReadOneFileWithoutCache 'F:\1byte.txt'
done

The batch program has an off-by-one error 😁
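For reference, the essential shape of such a probe looks roughly like this. This is a minimal sketch, not the actual ReadOneFileWithoutCache source; it assumes the cache is bypassed with FILE_FLAG_NO_BUFFERING, which requires sector-aligned reads, so it reads one 4096-byte sector rather than literally 1 byte:

// Minimal sketch of an uncached read-latency probe (not the actual
// ReadOneFileWithoutCache source). FILE_FLAG_NO_BUFFERING bypasses the
// Windows file cache but requires sector-aligned buffers and read sizes.
#include <windows.h>
#include <cstdio>

int main(int argc, char *argv[]) {
    if (argc != 2) { std::printf("usage: %s path\n", argv[0]); return 1; }

    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    alignas(4096) static unsigned char buf[4096];
    DWORD bytesRead = 0;

    QueryPerformanceCounter(&t0);
    HANDLE h = CreateFileA(argv[1], GENERIC_READ, FILE_SHARE_READ, nullptr,
                           OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, nullptr);
    if (h == INVALID_HANDLE_VALUE) { std::printf("open failed\n"); return 1; }
    ReadFile(h, buf, sizeof buf, &bytesRead, nullptr);  // returns the 1 valid byte
    CloseHandle(h);
    QueryPerformanceCounter(&t1);

    double us = 1e6 * double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
    std::printf("%s: open+read+close = %.1f us\n", argv[1], us);
    return 0;
}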

 

Test 1:

  • Samsung 960 Evo 500GB as C:\, connected to the PCH M.2 port on the back of the motherboard.
  • Samsung 980 Pro 2TB as D:\, connected to the PCH M.2 port on the front of the motherboard.
  • WD Blue 2TB SATA drive as E:\

Test 2:

  • Samsung 980 Pro 2TB (the same disk as in Test 1) as D:\, connected to the PCIe x16 slot (direct CPU connection) using a PCIe-to-M.2 adapter.
  • Other drives are unchanged.


The result tables are shown in Fig. 1 below.

  • The C:\ drive (PCH NVMe) is consistently faster than the D:\ NVMe drive, for an unknown reason.
  • With the "High Performance" power plan, direct PCIe is about 10% faster than PCH for the same drive, and SATA is about 20 times slower than NVMe. Yet the C:\ PCH-connected NVMe is still faster than the D:\ direct-PCIe NVMe, so factors other than the NVMe connection method also determine file read latency.
  • With "Power Saver" selected, direct PCIe NVMe performance fluctuates; it is 50% slower than PCH PCIe on average, and the worst-case NVMe latency is 4 times worse than SATA.

Conclusion


If read latency is important, the "High Performance" power plan is recommended, and with it selected an NVMe drive is better than SATA. With the "Power Saver" plan, NVMe read latency sometimes becomes worse than SATA.

 

 

[Fig. 1: file read latency result tables]

 


Sunday programmer since 1985

Developer of PlayPcmWin

Link to comment

Interesting topic, and one I am currently researching. I have been learning more about PMEM/NVDIMM. This is non-volatile memory that can act as block storage. In theory it is faster than the best NVMe SSDs (e.g. the Optane P5800X) and only slightly slower than DRAM.

 

It's probably not the best solution for a large music library, but it is intriguing as a boot drive. Some Linux distros can boot from memory, but most OSes, such as Windows, cannot.

Link to comment
3 hours ago, JJSim said:

Interesting topic, and one I am currently researching. I have been learning more about PMEM/NVDIMM. This is non-volatile memory that can act as block storage. In theory it is faster than the best NVMe SSDs (e.g. the Optane P5800X) and only slightly slower than DRAM.

 

Did you read the following article? It seems PMEM "requires significant changes to data center applications to leverage the full benefits"; it never gained broad application support, and it will eventually be replaced by PCIe Gen5 CXL-connected persistent memory.

 

https://www.servethehome.com/glorious-complexity-of-intel-optane-dimms-and-micron-exiting-3d-xpoint/2/

A lot of words there, but the point is simple: PMEM is slower than DRAM, and a DRAM slot is better occupied by DRAM.

 

I'm using a hardware RAID controller with a 4 GB DRAM cache and supercapacitor backup power on one of my PCs. IMO this technology is much easier to set up and use to improve the read/write access latency of a boot drive or system drive (operating system files). This is their latest product catalog:

https://ww1.microchip.com/downloads/en/DeviceDoc/00003270.pdf

 

Sunday programmer since 1985

Developer of PlayPcmWin

Link to comment

I had not read that article before, but I did watch STH's video on the topic. It's true that most applications are not PMEM-aware, which does reduce its performance. It's also not as fast as DRAM. My interest in PMEM is as a boot drive for the OS.

 

Thanks for the info on those RAID adapters. Multiple SSDs in RAID can offer better throughput than PMEM, but PMEM still has the advantage in latency. The DRAM cache probably helps the RAID adapters.

 

https://www.storagereview.com/review/intel-storage-performance-windows-server

Link to comment

I tested Windows 10 Pro running on Proxmox Virtual Environment on a different computer (sorry, so this is not an apples-to-apples comparison), configured to cache operating system reads/writes in DRAM.

 

The experiment measures the read time of a 1-byte file: open the file, read 1 byte, close the file.
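In outline, the timed sequence is this (a minimal portable sketch of the three steps, not the author's exact code):

// Times the open / read-1-byte / close sequence described above.
#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;
    auto t0 = clock::now();

    std::FILE *f = std::fopen("1byte.txt", "rb");  // open
    if (f == nullptr) return 1;
    char c;
    std::fread(&c, 1, 1, f);                       // read 1 byte
    std::fclose(f);                                // close

    auto t1 = clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0);
    std::printf("open+read+close: %lld us\n", (long long)us.count());
    return 0;
}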

 

Results are as follows. The left two columns of the table are DRAM-cached. DRAM is twice as fast as the native NVMe of the bare-metal OS, but that is not the 10x improvement I expected from DRAM. I'm not sure of the exact reason, but I guess the filesystem directory hierarchy info is already DRAM-cached and takes constant time, or maybe the antivirus disturbs the performance.

 

[Image: read-latency results table]

 

Operating system cold boot is pretty quick.

Sunday programmer since 1985

Developer of PlayPcmWin

Link to comment
  • 7 months later...

I found a document on fully utilizing persistent memory on Windows: https://learn.microsoft.com/en-us/windows-server/storage/storage-spaces/persistent-memory-direct-access

 

It says:

  • The filesystem should be DAX-enabled, with no parity and no redundancy.
  • The app should use a DAX-aware API, some form of mmap instead of classic fopen/fread; otherwise DAX provides no significant benefit. A sketch of that access pattern follows this list.
  • When DAX works correctly, 40 μs latency can be achieved! That is 3x faster than a 100% DRAM cache hit on NTFS over NVMe.
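Here is a minimal sketch of that mmap-style access on Windows (hypothetical path; error handling trimmed). On a DAX-enabled NTFS volume, the mapped view is backed directly by persistent memory, so a plain load replaces the whole ReadFile path:

// Sketch: memory-mapped read, the access pattern DAX rewards.
#include <windows.h>
#include <cstdio>

int main() {
    HANDLE file = CreateFileA("D:\\dax-volume\\data.bin", GENERIC_READ,
                              FILE_SHARE_READ, nullptr, OPEN_EXISTING,
                              FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
    if (mapping == nullptr) { CloseHandle(file); return 1; }

    // On a DAX volume this view maps the persistent memory directly.
    const unsigned char *p =
        (const unsigned char *)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
    if (p != nullptr) {
        std::printf("first byte: %02x\n", p[0]);  // a plain load, no ReadFile call
        UnmapViewOfFile(p);
    }
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}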

 

Sunday programmer since 1985

Developer of PlayPcmWin

Link to comment
  • 9 months later...
On 7/12/2022 at 11:07 AM, novaca said:

Has anyone tried to compare the real effect of locating the music library on an SSD connected to:
- M.2 on board, CPU lanes
- PCIe-to-M.2 adapter, CPU lanes
- M.2 on board, chipset lanes
I'm struggling with the choice of motherboard; there are never enough direct lanes, and the bifurcation options are mostly limited.

Do you think it matters more for a system disk than for a data disk?

The only effect it can have on a music library is a slight delay before loading a file or between files. Unless you put your library on a slow SD card or USB stick, latency and bandwidth won't affect the music itself, as the file will be buffered faster than it's played.

 

Based on yamamoto2002's tests, the delay is in the few-hundred-microsecond range for SSDs and around 5 ms for a standard SATA drive. Even a 5 ms delay before a song starts would be quite hard to notice.

Link to comment
