{"id":14422,"date":"2023-01-22T04:11:48","date_gmt":"2023-01-21T19:11:48","guid":{"rendered":"https:\/\/wp.study3.biz\/?p=14422"},"modified":"2023-01-22T04:12:30","modified_gmt":"2023-01-21T19:12:30","slug":"amd-epyc-7h12-64-core-processor-128gb-windpws-11pro-rtx2080ti-x2-cuda-11-3-samples-nvidia-smi-nvcc-v-nvidia-smi-nvlink-c-devicequery%e3%82%92%e8%a1%a8%e7%a4%ba%e3%81%95%e3%81%9b%e3%81%a6%e3%81%bf","status":"publish","type":"post","link":"https:\/\/wp.study3.biz\/?p=14422","title":{"rendered":"AMD EPYC 7H12 64-Core Processor 128GB Windows 11 Pro RTX2080Ti x2 CUDA 11.3 Samples nvidia-smi nvcc -V nvidia-smi nvlink -c deviceQuery: displaying the output"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/wp.study3.biz\/wp-content\/uploads\/2022\/11\/7H12-windows-11-pro.jpg\" alt=\"\" width=\"3840\" height=\"2160\" class=\"alignnone size-full wp-image-14431\" \/><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/wp.study3.biz\/wp-content\/uploads\/2022\/11\/deviceQuery-2.jpg\" alt=\"\" width=\"3840\" height=\"2160\" class=\"alignnone size-full wp-image-14432\" \/><\/p>\n<p>C:\\ProgramData\\NVIDIA Corporation\\CUDA Samples\\v11.3\\bin\\win64\\Debug>deviceQuery<br \/>\ndeviceQuery Starting&#8230;<\/p>\n<p> CUDA Device Query (Runtime API) version (CUDART static linking)<\/p>\n<p>Detected 2 CUDA Capable device(s)<\/p>\n<p>Device 0: &#8220;NVIDIA GeForce RTX 2080 Ti&#8221;<br \/>\n  CUDA Driver Version \/ Runtime Version          11.3 \/ 11.3<br \/>\n  CUDA Capability Major\/Minor version number:    7.5<br \/>\n  Total amount of global memory:                 11264 MBytes (11811160064 bytes)<br \/>\n  (068) Multiprocessors, (064) CUDA Cores\/MP:    4352 CUDA Cores<br \/>\n  GPU Max Clock rate:                            1635 MHz (1.63 GHz)<br \/>\n  Memory Clock rate:                             7000 Mhz<br \/>\n  Memory Bus Width:                              352-bit<br \/>\n  L2 Cache Size:
              5767168 bytes<br \/>\n  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)<br \/>\n  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers<br \/>\n  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers<br \/>\n  Total amount of constant memory:               65536 bytes<br \/>\n  Total amount of shared memory per block:       49152 bytes<br \/>\n  Total shared memory per multiprocessor:        65536 bytes<br \/>\n  Total number of registers available per block: 65536<br \/>\n  Warp size:                                     32<br \/>\n  Maximum number of threads per multiprocessor:  1024<br \/>\n  Maximum number of threads per block:           1024<br \/>\n  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)<br \/>\n  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)<br \/>\n  Maximum memory pitch:                          2147483647 bytes<br \/>\n  Texture alignment:                             512 bytes<br \/>\n  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)<br \/>\n  Run time limit on kernels:                     Yes<br \/>\n  Integrated GPU sharing Host Memory:            No<br \/>\n  Support host page-locked memory mapping:       Yes<br \/>\n  Alignment requirement for Surfaces:            Yes<br \/>\n  Device has ECC support:                        Disabled<br \/>\n  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)<br \/>\n  Device supports Unified Addressing (UVA):      Yes<br \/>\n  Device supports Managed Memory:                Yes<br \/>\n  Device supports Compute Preemption:            Yes<br \/>\n  Supports Cooperative Kernel Launch:            Yes<br \/>\n  Supports MultiDevice Co-op Kernel Launch:      No<br \/>\n  Device PCI Domain ID \/ Bus ID \/ location ID:   0 \/ 129 \/ 0<br \/>\n  Compute Mode:<br \/>\n     < Default (multiple host threads can 
use ::cudaSetDevice() with device simultaneously) ><\/p>\n<p>Device 1: &#8220;NVIDIA GeForce RTX 2080 Ti&#8221;<br \/>\n  CUDA Driver Version \/ Runtime Version          11.3 \/ 11.3<br \/>\n  CUDA Capability Major\/Minor version number:    7.5<br \/>\n  Total amount of global memory:                 11264 MBytes (11811160064 bytes)<br \/>\n  (068) Multiprocessors, (064) CUDA Cores\/MP:    4352 CUDA Cores<br \/>\n  GPU Max Clock rate:                            1635 MHz (1.63 GHz)<br \/>\n  Memory Clock rate:                             7000 Mhz<br \/>\n  Memory Bus Width:                              352-bit<br \/>\n  L2 Cache Size:                                 5767168 bytes<br \/>\n  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)<br \/>\n  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers<br \/>\n  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers<br \/>\n  Total amount of constant memory:               65536 bytes<br \/>\n  Total amount of shared memory per block:       49152 bytes<br \/>\n  Total shared memory per multiprocessor:        65536 bytes<br \/>\n  Total number of registers available per block: 65536<br \/>\n  Warp size:                                     32<br \/>\n  Maximum number of threads per multiprocessor:  1024<br \/>\n  Maximum number of threads per block:           1024<br \/>\n  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)<br \/>\n  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)<br \/>\n  Maximum memory pitch:                          2147483647 bytes<br \/>\n  Texture alignment:                             512 bytes<br \/>\n  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)<br \/>\n  Run time limit on kernels:                     Yes<br \/>\n  Integrated GPU sharing Host Memory:            No<br \/>\n  Support host page-locked memory mapping:       Yes<br \/>\n  
Alignment requirement for Surfaces:            Yes<br \/>\n  Device has ECC support:                        Disabled<br \/>\n  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)<br \/>\n  Device supports Unified Addressing (UVA):      Yes<br \/>\n  Device supports Managed Memory:                Yes<br \/>\n  Device supports Compute Preemption:            Yes<br \/>\n  Supports Cooperative Kernel Launch:            Yes<br \/>\n  Supports MultiDevice Co-op Kernel Launch:      No<br \/>\n  Device PCI Domain ID \/ Bus ID \/ location ID:   0 \/ 130 \/ 0<br \/>\n  Compute Mode:<br \/>\n     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) ><\/p>\n<p>deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.3, CUDA Runtime Version = 11.3, NumDevs = 2<br \/>\nResult = PASS<\/p>\n<p>C:\\ProgramData\\NVIDIA Corporation\\CUDA Samples\\v11.3\\bin\\win64\\Debug> <strong>Details here \u2193<\/strong><br \/>\n<a href=\"https:\/\/wp.study3.biz\/wp-content\/uploads\/2022\/11\/AMD-EPYC-7H12-64-Core-Processor-128GB-Windpws-11Pro-CUDA-11.3-Samples-nvidia-smi-nvcc-V-nvidia-smi-nvlink-c-deviceQuery.txt\">AMD EPYC 7H12 64-Core Processor 128GB Windows 11 Pro CUDA 11.3 Samples nvidia-smi nvcc -V nvidia-smi nvlink -c deviceQuery<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>C:\\ProgramData\\NVIDIA Corporation\\CUDA Samples\\v11.3\\bin\\win64\\Debug>deviceQuery deviceQuery Starting&#8230; C &hellip; <a href=\"https:\/\/wp.study3.biz\/?p=14422\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[18,10],"tags":[],"class_list":["post-14422","post","type-post","status-publish","format-standard","hentry","category-nvidia","category-windows"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/wp.study3.biz\/index.php?rest_route=\/wp\/v2\/posts\/14422","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wp.study3.biz\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wp.study3.biz\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wp.study3.biz\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/wp.study3.biz\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14422"}],"version-history":[{"count":4,"href":"https:\/\/wp.study3.biz\/index.php?rest_route=\/wp\/v2\/posts\/14422\/revisions"}],"predecessor-version":[{"id":14435,"href":"https:\/\/wp.study3.biz\/index.php?rest_route=\/wp\/v2\/posts\/14422\/revisions\/14435"}],"wp:attachment":[{"href":"https:\/\/wp.study3.biz\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14422"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wp.study3.biz\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14422"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wp.study3.biz\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14422"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}