[chibi@stream9 ~]$ cd NVIDIA_CUDA-11.5_Samples
[chibi@stream9 NVIDIA_CUDA-11.5_Samples]$ ls
0_Simple     4_Finance        EULA.txt                       common
1_Utilities  5_Simulations    Makefile
2_Graphics   6_Advanced       README_Samples_are_Moving.txt
3_Imaging    7_CUDALibraries  bin
[chibi@stream9 NVIDIA_CUDA-11.5_Samples]$ cd 0_Simple
[chibi@stream9 0_Simple]$ ls
UnifiedMemoryStreams          simpleCubemapTexture
asyncAPI                      simpleCudaGraphs
bf16TensorCoreGemm            simpleDrvRuntime
binaryPartitionCG             simpleIPC
cdpSimplePrint                simpleLayeredTexture
cdpSimpleQuicksort            simpleMPI
clock                         simpleMultiCopy
clock_nvrtc                   simpleMultiGPU
cppIntegration                simpleOccupancy
cppOverload                   simpleP2P
cudaNvSci                     simplePitchLinearTexture
cudaOpenMP                    simplePrintf
cudaTensorCoreGemm            simpleSeparateCompilation
dmmaTensorCoreGemm            simpleStreams
fp16ScalarProduct             simpleSurfaceWrite
globalToShmemAsyncCopy        simpleTemplates
immaTensorCoreGemm            simpleTemplates_nvrtc
inlinePTX                     simpleTexture
inlinePTX_nvrtc               simpleTextureDrv
matrixMul                     simpleVoteIntrinsics
matrixMulCUBLAS               simpleVoteIntrinsics_nvrtc
matrixMulDrv                  simpleZeroCopy
matrixMul_nvrtc               streamOrderedAllocation
memMapIPCDrv                  streamOrderedAllocationIPC
simpleAWBarrier               streamOrderedAllocationP2P
simpleAssert                  systemWideAtomics
simpleAssert_nvrtc            template
simpleAtomicIntrinsics        tf32TensorCoreGemm
simpleAtomicIntrinsics_nvrtc  vectorAdd
simpleAttributes              vectorAddDrv
simpleCallback                vectorAddMMAP
simpleCooperativeGroups       vectorAdd_nvrtc
[chibi@stream9 0_Simple]$ cd simpleP2P
[chibi@stream9 simpleP2P]$ ./simpleP2P
[./simpleP2P] - Starting...
Checking for multiple GPUs...
CUDA-capable device count: 2

Checking GPU(s) for support of peer to peer memory access...
> Peer access from NVIDIA TITAN RTX (GPU0) -> NVIDIA TITAN RTX (GPU1) : Yes
> Peer access from NVIDIA TITAN RTX (GPU1) -> NVIDIA TITAN RTX (GPU0) : Yes
Enabling peer access between GPU0 and GPU1...
Allocating buffers (64MB on GPU0, GPU1 and CPU Host)...
Creating event handles...
cudaMemcpyPeer / cudaMemcpy between GPU0 and GPU1: 43.53GB/s
Preparing host buffer and memcpy to GPU0...
Run kernel on GPU1, taking source data from GPU0 and writing to GPU1...
Run kernel on GPU0, taking source data from GPU1 and writing to GPU0...
Copy data back to host from GPU0 and verify results...
Disabling peer access...
Shutting down...
Test passed
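For reference, the core of what simpleP2P just demonstrated can be reproduced with a handful of CUDA runtime calls. The following is only a minimal sketch, not the sample's source: it assumes two visible GPUs and uses an illustrative 64 MB buffer, 100 copy iterations, and a home-made CHECK error macro.

// p2p_copy.cu -- minimal sketch of a peer-to-peer device copy (not the NVIDIA sample source)
#include <cstdio>
#include <cuda_runtime.h>

#define CHECK(call)                                                   \
    do {                                                              \
        cudaError_t err_ = (call);                                    \
        if (err_ != cudaSuccess) {                                    \
            fprintf(stderr, "%s failed: %s\n", #call,                 \
                    cudaGetErrorString(err_));                        \
            return 1;                                                 \
        }                                                             \
    } while (0)

int main() {
    const size_t bytes = 64u << 20;  // 64 MB, matching the sample's buffer size
    int deviceCount = 0;
    CHECK(cudaGetDeviceCount(&deviceCount));
    if (deviceCount < 2) { printf("Need at least two GPUs.\n"); return 0; }

    // Ask the runtime whether each GPU can address the other's memory directly.
    int access01 = 0, access10 = 0;
    CHECK(cudaDeviceCanAccessPeer(&access01, 0, 1));
    CHECK(cudaDeviceCanAccessPeer(&access10, 1, 0));
    printf("GPU0 -> GPU1 peer access: %s\n", access01 ? "Yes" : "No");
    printf("GPU1 -> GPU0 peer access: %s\n", access10 ? "Yes" : "No");

    // Enable peer access in both directions (one call per direction, issued on the source device).
    if (access01) { CHECK(cudaSetDevice(0)); CHECK(cudaDeviceEnablePeerAccess(1, 0)); }
    if (access10) { CHECK(cudaSetDevice(1)); CHECK(cudaDeviceEnablePeerAccess(0, 0)); }

    // One buffer on each GPU.
    float *d0 = nullptr, *d1 = nullptr;
    CHECK(cudaSetDevice(0)); CHECK(cudaMalloc(&d0, bytes));
    CHECK(cudaSetDevice(1)); CHECK(cudaMalloc(&d1, bytes));

    // Time repeated peer copies GPU0 -> GPU1 with CUDA events on device 0.
    const int reps = 100;
    cudaEvent_t start, stop;
    CHECK(cudaSetDevice(0));
    CHECK(cudaEventCreate(&start));
    CHECK(cudaEventCreate(&stop));
    CHECK(cudaEventRecord(start));
    for (int i = 0; i < reps; ++i) {
        CHECK(cudaMemcpyPeer(d1, 1, d0, 0, bytes));
    }
    CHECK(cudaEventRecord(stop));
    CHECK(cudaEventSynchronize(stop));

    float ms = 0.0f;
    CHECK(cudaEventElapsedTime(&ms, start, stop));
    double gbps = (double)bytes * reps / (ms / 1e3) / 1e9;
    printf("cudaMemcpyPeer GPU0 -> GPU1: %.2f GB/s\n", gbps);

    CHECK(cudaEventDestroy(start));
    CHECK(cudaEventDestroy(stop));
    CHECK(cudaFree(d0));
    CHECK(cudaSetDevice(1)); CHECK(cudaFree(d1));
    return 0;
}

Built with nvcc p2p_copy.cu -o p2p_copy, this prints a one-way cudaMemcpyPeer rate, so it will not line up exactly with the 43.53 GB/s figure the sample itself reports.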
[chibi@stream9 simpleP2P]$ cd ~/NVIDIA_CUDA-11.5_Samples/1_Utilities
[chibi@stream9 1_Utilities]$ ls
UnifiedMemoryPerf  deviceQuery     p2pBandwidthLatencyTest
bandwidthTest      deviceQueryDrv  topologyQuery
[chibi@stream9 1_Utilities]$ cd p2pBandwidthLatencyTest
[chibi@stream9 p2pBandwidthLatencyTest]$ ./p2pBandwidthLatencyTest
[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]
Device: 0, NVIDIA TITAN RTX, pciBusID: 81, pciDeviceID: 0, pciDomainID:0
Device: 1, NVIDIA TITAN RTX, pciBusID: 82, pciDeviceID: 0, pciDomainID:0
Device=0 CAN Access Peer Device=1
Device=1 CAN Access Peer Device=0

***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.
So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.

P2P Connectivity Matrix
     D\D     0     1
     0       1     1
     1       1     1
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
   D\D     0      1
     0 559.95   5.89
     1   5.89 565.85
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
   D\D     0      1
     0 546.81  47.11
     1  47.09 558.95
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
   D\D     0      1
     0 549.41   8.25
     1   8.39 555.46
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
   D\D     0      1
     0 555.67  94.13
     1  94.13 554.67
P2P=Disabled Latency Matrix (us)
   GPU     0      1
     0   1.37  22.69
     1  12.58   1.27

   CPU     0      1
     0   3.38  10.34
     1  10.27   3.31
P2P=Enabled Latency (P2P Writes) Matrix (us)
   GPU     0      1
     0   1.35   1.74
     1   1.70   1.27

   CPU     0      1
     0   3.35   2.71
     1   2.79   3.35

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
[chibi@stream9 p2pBandwidthLatencyTest]$ cd
[chibi@stream9 ~]$ nvidia-smi
Mon Dec 6 05:01:00 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.29.05    Driver Version: 495.29.05    CUDA Version: 11.5     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA TITAN RTX    Off  | 00000000:81:00.0  On |                  N/A |
| 41%   29C    P8    34W / 280W |    205MiB / 24217MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA TITAN RTX    Off  | 00000000:82:00.0 Off |                  N/A |
| 41%   26C    P8    17W / 280W |      6MiB / 24220MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2613      G   /usr/libexec/Xorg                 172MiB |
|    0   N/A  N/A      2782      G   /usr/bin/gnome-shell               31MiB |
|    1   N/A  N/A      2613      G   /usr/libexec/Xorg                   4MiB |
+-----------------------------------------------------------------------------+
[chibi@stream9 ~]$ nvidia-smi nvlink -c
GPU 0: NVIDIA TITAN RTX (UUID: GPU-5a71d61e-f130-637a-b33d-4df555b0ed88)
         Link 0, P2P is supported: true
         Link 0, Access to system memory supported: true
         Link 0, P2P atomics supported: true
         Link 0, System memory atomics supported: true
         Link 0, SLI is supported: true
         Link 0, Link is supported: false
         Link 1, P2P is supported: true
         Link 1, Access to system memory supported: true
         Link 1, P2P atomics supported: true
         Link 1, System memory atomics supported: true
         Link 1, SLI is supported: true
         Link 1, Link is supported: false
GPU 1: NVIDIA TITAN RTX (UUID: GPU-7fb51c1d-c1e7-35cc-aad7-66971f05ddb7)
         Link 0, P2P is supported: true
         Link 0, Access to system memory supported: true
         Link 0, P2P atomics supported: true
         Link 0, System memory atomics supported: true
         Link 0, SLI is supported: true
         Link 0, Link is supported: false
         Link 1, P2P is supported: true
         Link 1, Access to system memory supported: true
         Link 1, P2P atomics supported: true
         Link 1, System memory atomics supported: true
         Link 1, SLI is supported: true
         Link 1, Link is supported: false
[chibi@stream9 ~]$ cat /etc/redhat-release
CentOS Stream release 9
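The per-link capabilities that nvidia-smi nvlink -c lists above also have a device-pair view in the CUDA runtime, via cudaDeviceGetP2PAttribute. A short sketch under the same two-GPU assumption (the file name and output layout are illustrative):

// p2p_attr.cu -- query peer-to-peer attributes between GPU 0 and GPU 1
#include <cstdio>
#include <cuda_runtime.h>

static void queryPair(int src, int dst) {
    int access = 0, atomics = 0, rank = 0;
    // Per-pair attributes: can src access dst, are native atomics over the link supported,
    // and a relative performance rank for the path between the two devices.
    cudaDeviceGetP2PAttribute(&access,  cudaDevP2PAttrAccessSupported,       src, dst);
    cudaDeviceGetP2PAttribute(&atomics, cudaDevP2PAttrNativeAtomicSupported, src, dst);
    cudaDeviceGetP2PAttribute(&rank,    cudaDevP2PAttrPerformanceRank,       src, dst);
    printf("GPU%d -> GPU%d  access: %d  native atomics: %d  perf rank: %d\n",
           src, dst, access, atomics, rank);
}

int main() {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n < 2) {
        printf("Fewer than two CUDA devices visible.\n");
        return 0;
    }
    queryPair(0, 1);
    queryPair(1, 0);
    return 0;
}

On an NVLink-bridged pair like the one above, access and native-atomic support should both come back as 1 in each direction.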
[chibi@stream9 ~]$ sudo nvme list
[sudo] password for chibi:
Node                  SN                   Model                                    Namespace Usage                      Format           FW Rev
--------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1          7QF006NZ             Seagate FireCuda 520 SSD ZP500GM30002    1         500.11 GB / 500.11 GB      512 B + 0 B      STNSC014
[chibi@stream9 ~]$ sudo nvme smart-log /dev/nvme0n1
Smart Log for NVME device:nvme0n1 namespace-id:ffffffff
critical_warning                        : 0
temperature                             : 24 C
available_spare                         : 100%
available_spare_threshold               : 5%
percentage_used                         : 0%
endurance group critical warning summary: 0
data_units_read                         : 9,362,611
data_units_written                      : 8,813,148
host_read_commands                      : 165,953,120
host_write_commands                     : 154,503,501
controller_busy_time                    : 713
power_cycles                            : 200
power_on_hours                          : 2,262
unsafe_shutdowns                        : 68
media_errors                            : 0
num_err_log_entries                     : 540
Warning Temperature Time                : 0
Critical Composite Temperature Time     : 0
Thermal Management T1 Trans Count       : 0
Thermal Management T2 Trans Count       : 0
Thermal Management T1 Total Time        : 0
Thermal Management T2 Total Time        : 0
[chibi@stream9 ~]$ sensors
i350bb-pci-4100
Adapter: PCI adapter
loc1:         +38.0°C  (high = +120.0°C, crit = +110.0°C)

k10temp-pci-00c3
Adapter: PCI adapter
Tctl:         +24.0°C
Tccd1:        +22.5°C
Tccd3:        +23.0°C
Tccd5:        +23.0°C
Tccd7:        +22.8°C
[chibi@stream9 ~]$ cat /etc/redhat-release
CentOS Stream release 9
[chibi@stream9 ~]$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0
[chibi@stream9 ~]$