0. Overall explanation: compute the hidden-neuron values from the inputs using the weights, then compute the network output from those hidden values. Compute the error between the obtained output and the target value, and update the weights through backpropagation to minimize that error.
1. Each neuron is composed of two units. The first unit sums the products of the weight coefficients and the input signals. The second unit applies a nonlinear function, called the neuron activation function. The signal e is the adder's output, and y = f(e) is the output of the nonlinear element; y is also the output signal of the neuron.
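The two units of a single neuron can be sketched in a few lines of Python. This is a minimal illustration, not the tutorial's code; the sigmoid activation is an assumption (the tutorial leaves f unspecified):

```python
import math

def neuron(weights, inputs):
    # First unit (adder): e = sum of weight * input products.
    e = sum(w * x for w, x in zip(weights, inputs))
    # Second unit (activation): y = f(e); a sigmoid is assumed here.
    y = 1.0 / (1.0 + math.exp(-e))
    return y
```

With zero net input the sigmoid returns 0.5, e.g. `neuron([1.0, -1.0], [0.3, 0.3])`.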
2. The symbols W(Xm)n represent the weights of the connections between network input Xm and neuron n in the input layer. The symbol Yn represents the output signal of neuron n.
3. The training data set consists of input signals (x1 and x2) paired with a corresponding target (desired output) z. The error signal is the difference between the output signal of the network y and the desired output value z (the target).
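The error signal for one training pair is just this difference; a minimal sketch (the function name is mine, not from the tutorial):

```python
def output_error(y, z):
    # Error signal for the output neuron: target z minus network output y.
    return z - y
```

For example, a network output of 0.8 against a target of 1.0 gives an error signal of 0.2.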
4. The idea is to propagate the error signal (computed in a single teaching step) back to all neurons whose output signals were inputs to the neuron in question. The weight coefficients wmn used to propagate the errors back are the same as those used during computation of the output value.
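A hidden neuron's error is therefore a weighted sum of the errors of the neurons it feeds, reusing the forward-pass weights. A minimal sketch under that reading:

```python
def propagate_error(out_weights, next_deltas):
    # Error signal for a hidden neuron: sum of downstream error signals
    # weighted by the same wmn coefficients used in the forward pass.
    return sum(w * d for w, d in zip(out_weights, next_deltas))
```
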
5. Finally, the weight coefficients of each neuron's input connections may be modified, scaled by the learning rate. In the formulas below, df(e)/de represents the derivative of the neuron activation function.
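The weight-update step can be sketched as follows. The sigmoid activation and the variable names (`eta` for the learning rate, `delta` for the propagated error signal) are assumptions for illustration; the update rule w' = w + eta * delta * df(e)/de * x follows the referenced tutorial:

```python
import math

def sigmoid(e):
    return 1.0 / (1.0 + math.exp(-e))

def dsigmoid_de(e):
    # df(e)/de for the sigmoid: f(e) * (1 - f(e)).
    s = sigmoid(e)
    return s * (1.0 - s)

def update_weight(w, eta, delta, x, e):
    # New weight: old weight plus learning rate * error signal
    # * activation derivative at the adder output e * input signal x.
    return w + eta * delta * dsigmoid_de(e) * x
```

At e = 0 the sigmoid derivative is 0.25, so `update_weight(0.5, 0.1, 1.0, 1.0, 0.0)` moves the weight from 0.5 to 0.525.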
# Reference: http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html
# Reference: https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/