This is the message I got when I ran a script to check whether TensorFlow is working:
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcurand.so.8.0 locally
W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
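A minimal check of roughly this shape (just a sketch of a typical verification command, not necessarily the exact script that was run) produces this kind of output on a TensorFlow 1.x GPU build:

# creating a Session is what triggers the CUDA library loading and CPU feature warnings shown above
python -c "import tensorflow as tf; sess = tf.Session(); print(sess.run(tf.constant('Hello, TensorFlow')))"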
I noticed that it mentions SSE4.2 and AVX.
- What are SSE4.2 and AVX?
- How do SSE4.2 and AVX improve CPU computations for TensorFlow tasks?
- How can I compile TensorFlow using these two instruction sets?
NOTE on gcc 5 or later: the binary pip packages available on the TensorFlow website are built with gcc 4, which uses the older ABI. To make your build compatible with the older ABI, you need to add --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" to your bazel build command. ABI compatibility allows custom ops built against the TensorFlow pip package to continue to work against your built package.
Don't forget this note from tensorflow.org/install/install_sources.
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --config=cuda -k //tensorflow/tools/pip_package:build_pip_package
This gives me about a 3x improvement in 8K matmul CPU speed compared to the official release (0.35 -> 1.05 T ops/sec) on a Xeon E5 v3.
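For completeness, the remaining steps from the source-install guide, once the bazel build above finishes, look roughly like this (the /tmp/tensorflow_pkg output directory is just an example):

# if you build with gcc 5 or later, also add --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" to the bazel build line above
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
# install the generated wheel (the exact filename depends on your version and platform)
sudo pip install /tmp/tensorflow_pkg/tensorflow-*.whl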