Tensorflow conv2d_transpose output_shape
I want to implement a generative adversarial network (GAN) with an unspecified input size, e.g. a 4-D tensor (batch_size, None, None, 3).

But tf.nn.conv2d_transpose takes an output_shape parameter, and this parameter must be the true size of the tensor after the deconvolution operation.
For example, if batch_img has shape (64, 32, 32, 128) and w is a filter of shape (3, 3, 64, 128), then after

deconv = tf.nn.conv2d_transpose(batch_img, w, output_shape=[64, 64, 64, 64], strides=[1, 2, 2, 1], padding='SAME')

I get deconv with shape (64, 64, 64, 64). This works fine as long as I pass the true size as output_shape.
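The working fixed-shape case can be sketched as a runnable snippet. The sizes below are scaled down from the question so it runs quickly, and it is written against the tf.compat.v1 API so it also runs under TensorFlow 2; these are illustrative values, not the original ones:

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

batch_img = tf.constant(np.random.rand(4, 8, 8, 16).astype(np.float32))
# Filter layout for conv2d_transpose: [height, width, out_channels, in_channels]
w = tf.constant(np.random.rand(3, 3, 8, 16).astype(np.float32))

# Fully specified output_shape: stride 2 with SAME padding doubles H and W.
deconv = tf.nn.conv2d_transpose(batch_img, w,
                                output_shape=[4, 16, 16, 8],
                                strides=[1, 2, 2, 1], padding='SAME')

with tf.Session() as sess:
    out = sess.run(deconv)
print(out.shape)  # (4, 16, 16, 8)
```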
But I want to use an unspecified input size (64, None, None, 128) and get deconv with shape (64, None, None, 64). With that, it throws an error:

TypeError: Failed to convert object of type <type 'list'> to Tensor...

So what can I do to avoid having to specify output_shape in the deconv, or is there another way to implement a GAN over inputs of unspecified size?
- The output_shape list cannot contain None, because None cannot be converted to a tensor.
- None is only allowed in the shapes of tf.placeholder.
- Instead of None in output_shape, try -1. For example, if you need the size (64, None, None, 128), try [64, -1, -1, 128]. I'm not completely sure [64, -1, -1, 128] will work, but it worked for me for the batch size: my first dimension was not a fixed size, so I used -1.
- There is also a high-level API for transposed convolution, tf.layers.conv2d_transpose().
- I'm sure the high-level tf.layers.conv2d_transpose() will work for you, because it accepts inputs with dynamic shapes.
- You don't even need to specify an output shape; you only need to specify the number of output channels and the kernel you use.
- For more details: https://www.tensorflow.org/api_docs/python/tf/layers/conv2d_transpose ... I hope this helps.
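A minimal sketch of that suggestion, written against the tf.compat.v1 API so it also runs under TensorFlow 2. Only the number of output channels (filters) and the kernel size are given; the spatial dimensions stay None until run time:

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Height and width are left unspecified; only the channel count is fixed.
xin = tf.placeholder(tf.float32, shape=(None, None, None, 128))
# No output_shape needed: just output channels, kernel size, and stride.
y = tf.layers.conv2d_transpose(xin, filters=64, kernel_size=3,
                               strides=2, padding='same')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={xin: np.zeros((2, 16, 16, 128), np.float32)})
print(out.shape)  # (2, 32, 32, 64)
```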
I faced this problem too. Using -1, as suggested in the other answer here, does not work. Instead, you have to take the shape of the incoming tensor and build the output_shape argument from it. Here is an excerpt from a test I wrote. In this case it is the first dimension that is unknown, but the approach should work for any combination of known and unknown dimensions.
output_shape = [8, 8, 4]  # height, width, channels-out; batch size is handled below
filter_shape = [2, 2, 4, 2]  # kernel h, kernel w, channels-out, channels-in (chosen to be consistent with the shapes above)
xin = tf.placeholder(dtype=tf.float32, shape=(None, 4, 4, 2), name='input')
filt = tf.placeholder(dtype=tf.float32, shape=filter_shape, name='filter')
## Find the batch size of the input tensor and prepend it to output_shape
dimxin = tf.shape(xin)
ncase = dimxin[0:1]
oshp = tf.concat([ncase, output_shape], axis=0)
z1 = tf.nn.conv2d_transpose(xin, filt, oshp, strides=[1, 2, 2, 1], name='xpose_conv')
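A sketch of running that excerpt end-to-end, feeding two different batch sizes through the same graph. The filter_shape value is an assumption chosen to match the placeholder shapes, and the snippet uses tf.compat.v1 so it also runs under TensorFlow 2:

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

output_shape = [8, 8, 4]     # height, width, channels-out
filter_shape = [2, 2, 4, 2]  # assumed: kernel h, kernel w, channels-out, channels-in

xin = tf.placeholder(dtype=tf.float32, shape=(None, 4, 4, 2), name='input')
filt = tf.placeholder(dtype=tf.float32, shape=filter_shape, name='filter')
# Prepend the dynamic batch size to the static part of the output shape.
oshp = tf.concat([tf.shape(xin)[0:1], output_shape], axis=0)
z1 = tf.nn.conv2d_transpose(xin, filt, oshp, strides=[1, 2, 2, 1],
                            name='xpose_conv')

with tf.Session() as sess:
    f = np.ones(filter_shape, np.float32)
    for n in (1, 5):  # two different batch sizes, same graph
        out = sess.run(z1, feed_dict={xin: np.zeros((n, 4, 4, 2), np.float32),
                                      filt: f})
        print(out.shape)  # (n, 8, 8, 4)
```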
I found a solution: use tf.shape() for the unspecified dimensions and get_shape() for the specified ones.
def get_deconv_lens(H, k, d):
    # Output length along one dimension for a VALID-padding transposed conv;
    # H may be a dynamic scalar from tf.shape().
    return tf.multiply(H, d) + k - 1

def deconv2d(x, c_out, k_h=2, k_w=2, d_h=2, d_w=2, stddev=0.02, name='deconv2d'):
    # c_out: number of output channels of the deconv op
    shape = tf.shape(x)                   # dynamic shape, works with None dims
    H, W = shape[1], shape[2]
    N, _, _, C = x.get_shape().as_list()  # static shape; N and C must be known
    H1 = get_deconv_lens(H, k_h, d_h)
    W1 = get_deconv_lens(W, k_w, d_w)
    with tf.variable_scope(name):
        # Filter layout for conv2d_transpose is [height, width, out_ch, in_ch].
        w = tf.get_variable('weights', [k_h, k_w, c_out, C],
                            initializer=tf.random_normal_initializer(stddev=stddev))
        biases = tf.get_variable('biases', shape=[c_out],
                                 initializer=tf.zeros_initializer())
        deconv = tf.nn.conv2d_transpose(x, w, output_shape=[N, H1, W1, c_out],
                                        strides=[1, d_h, d_w, 1], padding='VALID')
        deconv = tf.nn.bias_add(deconv, biases)
    return deconv
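As a sanity check of the length formula in get_deconv_lens: under VALID padding, an output size O is compatible with input length H exactly when the forward convolution of an O-length input gives back H, i.e. floor((O - k) / d) + 1 == H. A small pure-Python check with hypothetical helper names:

```python
# Check that out = H*d + k - 1 (the formula in get_deconv_lens) is a valid
# VALID-padding deconv output length: the forward conv must map it back to H.
def deconv_len(H, k, d):
    return H * d + k - 1

def forward_conv_len(O, k, d):
    # Output length of a forward VALID conv with kernel k and stride d.
    return (O - k) // d + 1

for H in (1, 4, 7, 32):
    for k in (2, 3, 5):
        for d in (1, 2, 3):
            assert forward_conv_len(deconv_len(H, k, d), k, d) == H
print("length formula is consistent with the forward convolution")
```

Note that for strides larger than 1, several output lengths map back to the same input length under VALID padding, which is exactly why conv2d_transpose asks for output_shape explicitly.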