Keras ignores GPU when using TensorFlow

The docs say that when using the TensorFlow backend, Keras will automatically run on the GPU if one is detected. I logged into a remote GPU machine and am running my Keras program, but for some reason it only uses the CPU. How do I get Keras to run on the GPU to speed things up?

If it helps, it looks like this:

from keras.models import Sequential
from keras.layers import SimpleRNN, Dense

model = Sequential()
model.add(SimpleRNN(out_dim, input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=False))
model.add(Dense(num_classes, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
hist = model.fit(X_train, dummy_y, validation_data=(X_test, dummy_y_test), nb_epoch=epochs, batch_size=b_size)
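One way to check whether TensorFlow itself (independent of Keras) detects the GPU is to list the visible devices. This is a minimal sketch using the TF 1.x `device_lib` module; on a working GPU install the list should include a `'GPU'` entry alongside `'CPU'`:

```python
# List the devices TensorFlow can see (TensorFlow 1.x API).
from tensorflow.python.client import device_lib

devices = device_lib.list_local_devices()
device_types = [d.device_type for d in devices]
print(device_types)  # a GPU build should show 'GPU' here, not just 'CPU'
```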


Here is the output of `which python`, along with proof that Keras is using the TensorFlow backend:

user@GPU6:~$ which python
/mnt/data/user/pkgs/anaconda2/bin/python
user@GPU6:~$ python
Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import keras
Using TensorFlow backend.


And here is the output of `nvidia-smi`. I have several processes like the one above running, but they only use the CPU:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX TIT...  Off  | 0000:03:00.0     Off |                  N/A |
| 26%   27C    P8    13W / 250W |      9MiB /  6082MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX TIT...  Off  | 0000:83:00.0     Off |                  N/A |
| 26%   31C    P8    13W / 250W |      9MiB /  6082MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX TIT...  Off  | 0000:84:00.0     Off |                  N/A |
| 26%   31C    P8    14W / 250W |      9MiB /  6082MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      2408    G   Xorg                                             9MiB |
|    1      2408    G   Xorg                                             9MiB |
|    2      2408    G   Xorg                                             9MiB |
+-----------------------------------------------------------------------------+


None of my processes are running on the GPU. How can I fix this?



1 answer


You probably have the CPU-only version of TensorFlow installed.

Since it looks like you are using Anaconda with Python 2.7, follow these steps to install the GPU version of TensorFlow in a conda environment:



conda create -n tensorflow
source activate tensorflow
pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.1-cp27-none-linux_x86_64.whl


See this GitHub issue for more details.
