Tensorflow-gpu not working: Blas GEMM launch failed

I have installed tensorflow-gpu to run my TensorFlow code on my GPU, but I cannot get it to work: it keeps raising the error in the title ("Blas GEMM launch failed"). Below is a minimal sample, followed by the error stack trace:

import tensorflow as tf
import numpy as np

def check(W,X):
    return tf.matmul(W,X)


def main():
    W = tf.Variable(tf.truncated_normal([2,3], stddev=0.01))
    X = tf.placeholder(tf.float32, [3,2])
    check_handle = check(W,X)
    with tf.Session() as sess:
        tf.initialize_all_variables().run()
        num = sess.run(check_handle,
                       feed_dict={X: np.reshape(np.arange(6), (3, 2))})
        print(num)
if __name__ == '__main__':
    main()


My GPU is a GeForce GTX 1080 Ti with 11 GB of memory, and nothing heavy is running on it (just Chrome and the desktop), as you can see in nvidia-smi:

Fri Aug  4 16:34:49 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 381.22                 Driver Version: 381.22                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 0000:07:00.0      On |                  N/A |
| 30%   55C    P0    79W / 250W |    711MiB / 11169MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      7650    G   /usr/lib/xorg/Xorg                             380MiB |
|    0      8233    G   compiz                                         192MiB |
|    0     24226    G   ...el-token=963C169BB38ADFD67B444D57A299CE0A   136MiB |
+-----------------------------------------------------------------------------+


Below is the error stack trace:

2017-08-04 15:44:21.585091: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-04 15:44:21.585110: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-04 15:44:21.585114: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-04 15:44:21.585118: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-04 15:44:21.585122: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-08-04 15:44:21.853700: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties: 
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.582
pciBusID 0000:07:00.0
Total memory: 10.91GiB
Free memory: 9.89GiB
2017-08-04 15:44:21.853724: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 
2017-08-04 15:44:21.853728: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y 
2017-08-04 15:44:21.853734: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:07:00.0)
2017-08-04 15:44:24.948616: E tensorflow/stream_executor/cuda/cuda_blas.cc:365] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2017-08-04 15:44:24.948640: W tensorflow/stream_executor/stream.cc:1601] attempting to perform BLAS operation using StreamExecutor without BLAS support
2017-08-04 15:44:24.948805: W tensorflow/core/framework/op_kernel.cc:1158] Internal: Blas GEMM launch failed : a.shape=(1, 5), b.shape=(5, 10), m=1, n=10, k=5
     [[Node: layer1/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](_arg_Placeholder_0_0/_11, layer1/weights/read)]]
Traceback (most recent call last):
  File "test.py", line 51, in <module>
    _, loss_out, res_out = sess.run([train_op, loss, res], feed_dict)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 789, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 997, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1132, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1152, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(1, 5), b.shape=(5, 10), m=1, n=10, k=5
     [[Node: layer1/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](_arg_Placeholder_0_0/_11, layer1/weights/read)]]
     [[Node: layer2/MatMul/_17 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_158_layer2/MatMul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op u'layer1/MatMul', defined at:
  File "test.py", line 18, in <module>
    pre_activation = tf.matmul(input_ph, weights)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 1816, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 1217, in _mat_mul
    transpose_b=transpose_b, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1269, in __init__
    self._traceback = _extract_stack()

InternalError (see above for traceback): Blas GEMM launch failed : a.shape=(1, 5), b.shape=(5, 10), m=1, n=10, k=5
     [[Node: layer1/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](_arg_Placeholder_0_0/_11, layer1/weights/read)]]
     [[Node: layer2/MatMul/_17 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_158_layer2/MatMul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]


To add to this, my previous CPU-only TensorFlow setup worked fine. Any help is appreciated. Thanks!

Note: I have CUDA 8.0 with cuDNN 5.1 installed, and their paths are added to my .bashrc.
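
The relevant entries in .bashrc are along these lines (these are the standard cuda-8.0 install paths; exact locations may differ):

# CUDA 8.0 / cuDNN paths picked up by tensorflow-gpu
export PATH=/usr/local/cuda-8.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH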

2 answers


For me, the cause of this error was that my CUDA installation directory, along with all of its subdirectories and files, required root privileges. As a result, TensorFlow also needed root privileges in order to use CUDA. Uninstalling TensorFlow and reinstalling it as the root user solved the problem for me. Roughly, that amounts to the commands below.
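
This is only a sketch; the pip invocation and the CUDA install path are assumptions that may differ on your machine:

# Reinstall tensorflow-gpu system-wide as root so it can read the CUDA libraries
sudo pip uninstall tensorflow-gpu
sudo pip install tensorflow-gpu

# Alternatively, instead of installing as root, make the CUDA tree world-readable:
# sudo chmod -R a+rX /usr/local/cuda-8.0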





I had a very similar problem. For me, it coincided with an update of the NVIDIA driver, so it looked like a driver issue, but switching driver versions had no effect. What ultimately worked for me was clearing the NVIDIA cache:

sudo rm -rf ~/.nv/




Found this suggestion on the NVIDIA Developer Forum: https://devtalk.nvidia.com/default/topic/1007071/cuda-setup-and-installation/cuda-error-when-running-matrixmulcublas-sample-ubuntu-16-04/post/5169223/

I suspect that after the driver update there were stale compiled cache files left over that were incompatible with the new driver, or were even corrupted during the update. Speculation aside, this solved the problem for me.
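
To verify the result, a quick check along these lines can be used afterwards (an illustrative one-liner that forces a small GPU matmul; it is not part of the fix itself):

# Clear the cache, then run a tiny matmul through TensorFlow on the GPU
sudo rm -rf ~/.nv/
python -c "import tensorflow as tf; sess = tf.Session(); print(sess.run(tf.matmul(tf.ones([2, 3]), tf.ones([3, 2]))))"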
