We approximate a function using a neural network
To master libraries for working with neural networks, let us solve the problem of approximating a function of a single argument with neural network algorithms for training and prediction.
Introduction
Let a function f: [x0, x1] -> R be given.

We approximate the given function f by the formula

P(x) = SUM W[i] * E(x, M[i])

where

i = 1..n
M[i] from R
W[i] from R
E(x, M) = { 0 with x < M; 1 with x >= M }

Obviously, with a uniform distribution of the values M[i] over the interval (x0, x1), there exist values W[i] for which the formula P(x) approximates the function f(x) best. Moreover, for given values M[i] defined on the segment (x0, x1) and ordered in ascending order, one can describe a sequential algorithm for computing the W[i] values for the formula P(x).
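For illustration, here is a minimal sketch of one such sequential algorithm in Python, assuming E(x, M) is the unit step written above: with the nodes M[i] sorted in ascending order, each W[i] is the increment of f between neighbouring nodes, so that P reproduces f exactly at every node (the node count and range are illustrative).

import numpy as np

def fit_weights(f, m):
    # m must be sorted in ascending order
    w = np.zeros(len(m))
    acc = 0.0                     # value of P just to the right of the previous node
    for i, mi in enumerate(m):
        w[i] = f(mi) - acc        # step height needed so that P(mi) == f(mi)
        acc += w[i]
    return w

def P(x, w, m):
    # P(x) = SUM W[i] * E(x, M[i]) with the unit-step basis E
    return sum(wi for wi, mi in zip(w, m) if x >= mi)

m = np.linspace(10, 20, 50)       # nodes M[i] spread uniformly over (x0, x1)
w = fit_weights(np.sin, m)
print(P(15.0, w, m), np.sin(15.0))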
And here is the neural network

We transform the formula P(x) = SUM W[i] * E(x, M[i]) into a neural network model with one input neuron, one output neuron, and n neurons in the hidden layer:

P(x) = SUM W[i] * S(K[i] * x + B[i]) + C

where
the variable x is the "input" layer, consisting of a single neuron;
{K, B} are the parameters of the "hidden" layer, consisting of n neurons with a sigmoid activation function;
{W, C} are the parameters of the "output" layer, consisting of a single neuron that computes the weighted sum of its inputs;
S is the sigmoid;
with

initial parameters of the "hidden" layer K[i] = 1;
initial parameters of the "hidden" layer B[i] uniformly distributed on the segment (-x1, -x0).

All parameters of the neural network K, B, W, and C are determined by training the neural network on samples (x, y) of the values of the function f.
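To make the correspondence with the formula concrete, here is a minimal NumPy sketch of the forward pass; the names mirror K, B, W and C above, and the numeric values are only illustrative placeholders, not trained parameters.

import numpy as np

def S(t):
    return 1.0 / (1.0 + np.exp(-t))      # sigmoid (defined in the next section)

def P(x, K, B, W, C):
    hidden = S(K * x + B)                # hidden layer: S(K[i]*x + B[i]), shape (n,)
    return np.dot(W, hidden) + C         # output neuron: SUM W[i]*hidden[i] + C

n = 10
K = np.ones(n)                           # initial K[i] = 1
B = -np.random.uniform(10, 20, n)        # initial B[i] uniform on (-x1, -x0), here (-20, -10)
W = np.random.randn(n) * 0.1             # W and C are placeholders until training
C = 0.0
print(P(15.0, K, B, W, C))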
Sigmoid
A sigmoid is a smooth, monotonically increasing, nonlinear function

S(x) = 1 / (1 + exp(-x))
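This choice of activation is what links the two formulas: as the weight K grows, S(K * (x - M)) tightens into an ever sharper step at x = M, approaching the step function E(x, M) from the introduction. A quick numeric check (the values of K and M are arbitrary here):

import numpy as np

def S(t):
    return 1.0 / (1.0 + np.exp(-t))

M = 15.0
for K in (1.0, 5.0, 50.0):
    below = S(K * (14.5 - M))            # sigmoid value slightly below the node M
    above = S(K * (15.5 - M))            # sigmoid value slightly above the node M
    print(K, round(float(below), 4), round(float(above), 4))
# as K grows, the pair (below, above) tends to (0, 1), i.e. the unit step at M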
Program

We use the Tensorflow package to describe our neural network:
# node to which we will feed the function arguments
x = tf.placeholder(tf.float32, [None, 1], name="x")

# node to which we will feed the function values
y = tf.placeholder(tf.float32, [None, 1], name="y")

# hidden layer
nn = tf.layers.dense(x, hiddenSize,
                     activation=tf.nn.sigmoid,
                     kernel_initializer=tf.initializers.ones(),
                     bias_initializer=tf.initializers.random_uniform(minval=-x1, maxval=-x0),
                     name="hidden")

# output layer
model = tf.layers.dense(nn, 1,
                        activation=None,
                        name="output")

# error (loss) function
cost = tf.losses.mean_squared_error(y, model)

train = tf.train.GradientDescentOptimizer(learn_rate).minimize(cost)
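Note that this listing uses the TensorFlow 1.x graph API (tf.placeholder, tf.layers, tf.Session). Under TensorFlow 2 it will, as far as I know, only run through the v1 compatibility layer, roughly like this (a sketch, not part of the original program):

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()   # restore the graph/session workflow used in the listing
# after this, tf.placeholder, tf.layers.dense, tf.losses.mean_squared_error and
# tf.train.GradientDescentOptimizer resolve as in the code above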
Training

init = tf.initializers.global_variables()

with tf.Session() as session:
    session.run(init)

    for _ in range(iterations):
        train_dataset, train_values = generate_test_values()

        session.run(train, feed_dict={
            x: train_dataset,
            y: train_values
        })
Full text
import math
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

x0, x1 = 10, 20  # function argument range

test_data_size = 2000  # amount of data per training iteration
iterations = 20000  # number of training iterations
learn_rate = 0.01  # learning rate (assumed value)

hiddenSize = 10  # size of the hidden layer

# function for generating test values
def generate_test_values():
    train_x = []
    train_y = []

    for _ in range(test_data_size):
        x = x0 + (x1 - x0) * np.random.rand()
        y = math.sin(x)  # function under investigation
        train_x.append([x])
        train_y.append([y])

    return np.array(train_x), np.array(train_y)

# node to which we will feed the function arguments
x = tf.placeholder(tf.float32, [None, 1], name="x")

# node to which we will feed the function values
y = tf.placeholder(tf.float32, [None, 1], name="y")

# hidden layer
nn = tf.layers.dense(x, hiddenSize,
                     activation=tf.nn.sigmoid,
                     kernel_initializer=tf.initializers.ones(),
                     bias_initializer=tf.initializers.random_uniform(minval=-x1, maxval=-x0),
                     name="hidden")

# output layer
model = tf.layers.dense(nn, 1,
                        activation=None,
                        name="output")

# error (loss) function
cost = tf.losses.mean_squared_error(y, model)

train = tf.train.GradientDescentOptimizer(learn_rate).minimize(cost)

init = tf.initializers.global_variables()

with tf.Session() as session:
    session.run(init)

    for _ in range(iterations):
        train_dataset, train_values = generate_test_values()

        session.run(train, feed_dict={
            x: train_dataset,
            y: train_values
        })

        if _ % 1000 == 999:
            print("cost = {}".format(session.run(cost, feed_dict={
                x: train_dataset,
                y: train_values
            })))

    train_dataset, train_values = generate_test_values()

    train_values1 = session.run(model, feed_dict={
        x: train_dataset,
    })

    plt.plot(train_dataset, train_values, "bo",
             train_dataset, train_values1, "ro")
    plt.show()

    with tf.variable_scope("hidden", reuse=True):
        w = tf.get_variable("kernel")
        b = tf.get_variable("bias")
        print("hidden:")
        print("kernel =", w.eval())
        print("bias =", b.eval())

    with tf.variable_scope("output", reuse=True):
        w = tf.get_variable("kernel")
        b = tf.get_variable("bias")
        print("output:")
        print("kernel =", w.eval())
        print("bias =", b.eval())
That's what happened
[plot of the results]

Blue is the original function.
Red is the approximation of the function.

Console output
cost = ...
cost = ...
...
cost = ...
hidden:
kernel = [[1.1523403  1.181032   1.1671464  0.9644377  0.8377886  1.0919508
  0.87283015 1.0875995  0.9677301  0.6194152 ]]
bias = [-14.812331 -12.219926 -12.067375 -14.872566 -10.633507 -14.014006
 -13.379829 -20.508204 -14.923473 -19.354435]
output:
kernel = [[ 2.0069902 ]
 [-1.0321712 ]
 [-0.8878887 ]
 [-2.0531905 ]
 [ 1.4293027 ]
 [ 2.1250408 ]
 [-1.578137  ]
 [ 4.141281  ]
 [-2.1264815 ]
 [-0.60681605]]
bias = [-0.2812019]
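As a sanity check, the printed parameters can be substituted directly into the formula P(x) = SUM W[i] * S(K[i] * x + B[i]) + C. A small sketch, with the arrays copied from the console output above:

import numpy as np

K = np.array([1.1523403, 1.181032, 1.1671464, 0.9644377, 0.8377886,
              1.0919508, 0.87283015, 1.0875995, 0.9677301, 0.6194152])
B = np.array([-14.812331, -12.219926, -12.067375, -14.872566, -10.633507,
              -14.014006, -13.379829, -20.508204, -14.923473, -19.354435])
W = np.array([2.0069902, -1.0321712, -0.8878887, -2.0531905, 1.4293027,
              2.1250408, -1.578137, 4.141281, -2.1264815, -0.60681605])
C = -0.2812019

def P(x):
    s = 1.0 / (1.0 + np.exp(-(K * x + B)))   # hidden layer: S(K[i]*x + B[i])
    return np.dot(W, s) + C                  # output neuron: weighted sum plus bias

for x in (11.0, 14.0, 17.0):
    print(x, float(P(x)), float(np.sin(x)))  # approximation vs. the original sin(x)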
Source code
https://github.com/dprotopopov/nnfunc