Feature vectors in a radial basis function network

I'm trying to use an RBF neural network (RBFNN) to reconstruct a surface from a point cloud, but I can't figure out what my feature vectors should be.

Can anyone help me figure this out?

Goal: http://www.creatis.insa-lyon.fr/site/sites/default/files/resize/bunny5-200x200.jpg

From inputs like this: [image of the input point cloud]



1 answer


An RBF network essentially fits data with a linear combination of functions that obey a set of basic properties, the main one being radial symmetry: the output is f(x) = sum_i w_i * phi(||x - c_i||), where the c_i are the centers. The parameters of each of these functions can be learned incrementally from the errors produced as the inputs are presented repeatedly.

If I understand correctly (it's been a very long time since I used one of these networks), your question is about preprocessing the point-cloud data. Each point in the point cloud should serve as one input. Your features are the three coordinate dimensions, so each point can already be considered a feature vector.
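To make that concrete, here's a minimal sketch of how a 3D point cloud could feed an RBF network: each point is one feature vector, and the design matrix holds one Gaussian response per (point, center) pair. The points, centers, and sigma here are hypothetical placeholders, not values from your data.

```python
import numpy as np

# Hypothetical tiny point cloud: each row is one input (a 3D feature vector)
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.5],
                   [0.0, 1.0, 1.0],
                   [1.0, 1.0, 0.2]])   # shape (n_points, 3)

# Centers for the hidden RBF nodes; here just a subset of the points
centers = points[:2]                   # shape (n_centers, 3)
sigma = 1.0                            # assumed kernel width

# Euclidean distance from every point to every center,
# then a Gaussian RBF applied to each distance.
# G has shape (n_points, n_centers): one hidden activation per pair.
dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
G = np.exp(-dists**2 / (2 * sigma**2))

print(G.shape)   # (4, 2)
```

A point that coincides with a center gets the maximum activation of 1.0, and activations decay with distance, which is the radial symmetry mentioned above.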

That leaves a few other choices, namely the number of radial basis neurons in your hidden layer and which radial basis function to use (the Gaussian is a popular first choice). Training the network and reconstructing the surface can each be done in several ways, but I believe that is beyond the scope of the question.

I don't know if it helps, but here's a simple Python implementation of an RBF network performing function approximation with one-dimensional inputs:



import numpy as np
import matplotlib.pyplot as plt

def fit_me(x):
    return (x-2) * (2*x+1) / (1+x**2)

def rbf(x, mu, sigma=1.5):
    return np.exp( -(x-mu)**2 / (2*sigma**2))

# Core parameters including number of training
# and testing points, minimum and maximum x values
# for training and testing points, and the number
# of rbf (hidden) nodes to use
num_points = 100    # number of inputs (each 1D)
num_rbfs = 20       # number of centers (must be an int for np.linspace)
x_min = -5
x_max = 10

# Training data, evenly spaced points
x_train = np.linspace(x_min, x_max, num_points)
y_train = fit_me(x_train)

# Testing data, more evenly spaced points
x_test  = np.linspace(x_min, x_max, num_points*3)
y_test  = fit_me(x_test)

# Centers of each of the rbf nodes
centers = np.linspace(x_min, x_max, num_rbfs)

# Everything is in place to train the network
# and attempt to approximate the function 'fit_me'.

# Start by creating a matrix G in which each row
# corresponds to an x value within the domain and each 
# column i contains the values of rbf_i(x).
center_cols, x_rows = np.meshgrid(centers, x_train)
G = rbf(center_cols, x_rows)

plt.plot(x_train, G)
plt.title('Radial Basis Functions')
plt.show()

# Simple training in this case: use pseudoinverse to get weights
weights = np.dot(np.linalg.pinv(G), y_train)

# To test, create meshgrid for test points
center_cols, x_rows = np.meshgrid(centers, x_test)
G_test = rbf(center_cols, x_rows)

# apply weights to G_test
y_predict = np.dot(G_test, weights)

plt.plot(x_test, y_predict)
plt.title('Predicted function')
plt.show()

error = y_predict - y_test

plt.plot(x_test, error)
plt.title('Function approximation error')
plt.show()


From this you can see how inputs are fed into the network and how the RBF nodes are used. This should extend to 2D inputs in a straightforward way, although training may be somewhat more involved.
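As a sketch of that extension, here is the same pseudoinverse training applied to 2D inputs. The target function, grid sizes, and kernel width are all assumptions for illustration; only the structure (design matrix from pairwise distances, weights via `np.linalg.pinv`) mirrors the 1D code above.

```python
import numpy as np

# Hypothetical smooth 2D target function to approximate
def f(xy):
    return np.sin(xy[:, 0]) * np.cos(xy[:, 1])

def design_matrix(xy, centers, sigma=1.0):
    # Pairwise Euclidean distances, shape (n_samples, n_centers),
    # passed through a Gaussian RBF
    d = np.linalg.norm(xy[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-d**2 / (2 * sigma**2))

# Training inputs on a regular 15x15 grid over [-2, 2]^2
g = np.linspace(-2, 2, 15)
gx, gy = np.meshgrid(g, g)
xy_train = np.column_stack([gx.ravel(), gy.ravel()])
z_train = f(xy_train)

# RBF centers on a coarser 7x7 grid
c = np.linspace(-2, 2, 7)
cx, cy = np.meshgrid(c, c)
centers = np.column_stack([cx.ravel(), cy.ravel()])

# Same training step as the 1D example: pseudoinverse least squares
G = design_matrix(xy_train, centers)
weights = np.linalg.pinv(G) @ z_train

z_predict = design_matrix(xy_train, centers) @ weights
print(np.max(np.abs(z_predict - z_train)))  # residual should be small
```

The only real change from the 1D version is that distances are computed between 2D vectors rather than scalars; 3D point-cloud inputs would follow the same pattern with a third column.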

To properly reconstruct a surface, you will most likely need a surface representation quite different from the function representation learned here. I'm not sure how to go about that last step.
