How to avoid rebuilding the graph in TensorFlow when optimizing hyperparameters

I have a TensorFlow model whose hyperparameters I am optimizing with Optunity.

I have an objective function that builds a model and returns the best loss that model achieves. I pass this function to Optunity, which evaluates it with a different set of hyperparameters on each trial, and the graph gets built again on every evaluation.

In my code I call tf.reset_default_graph() before instantiating my model, so the graph is rebuilt from scratch every time. A simplified sketch of this setup is shown below.
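Roughly, my setup looks like this (simplified and using the TF 1.x API; the data and the hyperparameter names learning_rate / num_hidden are just placeholders for my real model):

```python
import numpy as np
import optunity
import tensorflow as tf

# Placeholder training data, standing in for the real dataset.
X_train = np.random.rand(256, 10).astype(np.float32)
y_train = np.random.rand(256, 1).astype(np.float32)

def objective(learning_rate, num_hidden):
    # Wipe the default graph so each trial starts from a clean slate.
    tf.reset_default_graph()

    num_hidden = int(num_hidden)  # Optunity passes floats

    # Build the model graph from scratch for this hyperparameter combination.
    x = tf.placeholder(tf.float32, [None, 10])
    y = tf.placeholder(tf.float32, [None, 1])
    hidden = tf.layers.dense(x, num_hidden, activation=tf.nn.relu)
    pred = tf.layers.dense(hidden, 1)
    loss = tf.losses.mean_squared_error(y, pred)
    train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)

    best = float("inf")
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(100):
            _, l = sess.run([train_op, loss],
                            feed_dict={x: X_train, y: y_train})
            best = min(best, l)
    return best

# Optunity calls objective() once per hyperparameter combination,
# so the graph above is rebuilt on every single evaluation.
optimal_pars, details, _ = optunity.minimize(
    objective, num_evals=50,
    learning_rate=[1e-4, 1e-1], num_hidden=[16, 256])
```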

My problem is that rebuilding the graph for every new combination of hyperparameters takes a long time. Is there a way to make this faster?

If I don't use tf.reset_default_graph(), I get errors about conflicting tensors, as illustrated below.
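For example, a minimal sketch of the kind of conflict I run into in TF 1.x (build_model here is a hypothetical stand-in for my model-building code):

```python
import tensorflow as tf

def build_model():
    # Creating the same named variable twice in one graph is not allowed.
    with tf.variable_scope("model"):
        return tf.get_variable("w", shape=[10, 1])

build_model()  # first call: creates model/w in the default graph
build_model()  # second call without tf.reset_default_graph():
               # ValueError: Variable model/w already exists, disallowed.
```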
