Multiplying a rank-3 tensor by a rank-2 tensor in TensorFlow

Consider a tensor of rank 3:

import numpy as np
import tensorflow as tf

sentence_max_length = 5
batch_size = 3
n_hidden = 10
n_classes = 2
x = tf.constant(np.reshape(np.arange(150), (batch_size, sentence_max_length, n_hidden)), dtype=tf.float32)

And a tensor of rank 2:

W = tf.constant(np.reshape(np.arange(20), (n_hidden, n_classes)), dtype=tf.float32)

And the bias tensor of rank 1:

b = tf.constant(np.arange(n_classes), dtype=tf.float32)

I was wondering how to multiply x along its last axis by W so that the resulting tensor Z has shape (batch_size, sentence_max_length, n_classes), even though batch_size will not be known at graph-construction time (I only gave it a concrete value here for demonstration purposes).

So to clarify:

Z[0] = tf.matmul(x[0,:,:], W) + b

That is, W and b are shared across all batch entries. The reason for this is that I am trying to take the output of tf.dynamic_rnn, which has shape (batch_size, sentence_max_length, n_hidden), and add a projection layer on top of it that applies the same weights W and b at every position.
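As a sanity check, the desired operation can be sketched in NumPy (a stand-in for the TensorFlow graph, using the same shapes as above): collapsing the batch and time axes into one, applying the shared weights, and restoring the shape gives the same result as the per-example matmul.

```python
import numpy as np

batch_size, sentence_max_length, n_hidden, n_classes = 3, 5, 10, 2
x = np.arange(150, dtype=np.float32).reshape(batch_size, sentence_max_length, n_hidden)
W = np.arange(20, dtype=np.float32).reshape(n_hidden, n_classes)
b = np.arange(n_classes, dtype=np.float32)

# Collapse (batch, time) into one axis, apply the shared weights, restore the shape.
z = (x.reshape(-1, n_hidden) @ W + b).reshape(batch_size, sentence_max_length, n_classes)

# Same result as applying matmul to each batch entry separately.
z0 = x[0] @ W + b
print(z.shape)                 # (3, 5, 2)
print(np.allclose(z[0], z0))   # True
```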



1 answer


One approach might be ...

import tensorflow as tf
import numpy as np
from tensorflow.python.layers.core import Dense

sentence_max_length = 5
batch_size = 3
n_hidden = 10
n_classes = 2
x = tf.constant(np.reshape(np.arange(150),(batch_size,sentence_max_length, n_hidden)), dtype = tf.float32)

linear_layer = Dense(n_classes, use_bias=True)  # projects the last dimension to n_classes
z = linear_layer(x)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    res = sess.run(z)

res.shape
# (3, 5, 2)



The Dense layer creates the trainable variables W and b internally, and uses the standard_ops.tensordot operation to project the last dimension of its input to n_classes. For more information, see the source code: https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/python/layers/core.py
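The contraction Dense performs can be mimicked in NumPy as a rough sketch (np.tensordot standing in for standard_ops.tensordot; Dense would initialize W randomly, so arange here only illustrates the shapes):

```python
import numpy as np

x = np.arange(150, dtype=np.float32).reshape(3, 5, 10)  # (batch, time, n_hidden)
W = np.arange(20, dtype=np.float32).reshape(10, 2)      # (n_hidden, n_classes)
b = np.arange(2, dtype=np.float32)                      # (n_classes,)

# Contract the last axis of x with the first axis of W, then broadcast-add the bias.
z = np.tensordot(x, W, axes=[[2], [0]]) + b
print(z.shape)  # (3, 5, 2)
```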







