The module contains the basic network architectures
Network Type | Create function | Number of layers | Supported train functions | Error function
---|---|---|---|---
Single-layer perceptron | newp | 1 | train_delta | SSE
Multi-layer perceptron | newff | more than 1 | train_gd, train_gdm, train_gda, train_gdx*, train_rprop, train_bfgs, train_cg | SSE
Competitive layer | newc | 1 | train_wta, train_cwta* | SAE
LVQ | newlvq | 2 | train_lvq | MSE

Note: * marks the default train function.
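For orientation, the sketch below creates one network of each type from the table. It assumes the constructors are exposed through the neurolab package under nl.net (the import path is an assumption; the examples in this module call the constructors unqualified):

    >>> import neurolab as nl
    >>> # one network of each type; the first argument is always the list of input ranges
    >>> p = nl.net.newp([[-1, 1], [-1, 1]], 1)                   # single-layer perceptron
    >>> ff = nl.net.newff([[-1, 1], [-1, 1]], [5, 1])            # multi-layer perceptron
    >>> c = nl.net.newc([[-1, 1], [-1, 1]], 4)                   # competitive (Kohonen) layer
    >>> lvq = nl.net.newlvq([[-1, 1], [-1, 1]], 4, [0.5, 0.5])   # LVQ network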
Create a competitive layer (Kohonen network)
Parameters:
    minmax: list of [min, max] pairs, one per input (range of input values)
    cn: int, number of neurons
Returns:
    net: Net
Example:
    >>> # create network with 2 inputs and 10 neurons
    >>> net = newc([[-1, 1], [-1, 1]], 10)
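A usage sketch for the competitive layer: training on random 2-D points with the default train function (train_cwta, per the table above). The nl.net module path and the Net.train/Net.sim methods are assumed from the package layout:

    >>> import numpy as np
    >>> import neurolab as nl
    >>> inp = np.random.rand(100, 2) * 2 - 1          # 100 points in [-1, 1]^2
    >>> net = nl.net.newc([[-1, 1], [-1, 1]], 10)
    >>> err = net.train(inp, epochs=100)              # unsupervised: no target needed
    >>> out = net.sim(inp)                            # one-hot winner per sample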
Create a multilayer perceptron
Parameters:
    minmax: list of [min, max] pairs, one per input (range of input values)
    size: list of int, number of neurons in each layer
Returns:
    net: Net
Example:
    >>> # create neural net with 2 inputs, 1 output and 2 layers
    >>> net = newff([[-0.5, 0.5], [-0.5, 0.5]], [3, 1])
    >>> net.ci
    2
    >>> net.co
    1
    >>> len(net.layers)
    2
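A usage sketch for the multilayer perceptron: fitting a sine curve with the default train function (train_gdx, per the table above). Net.train and Net.sim are assumed from the package layout:

    >>> import numpy as np
    >>> import neurolab as nl
    >>> x = np.linspace(-7, 7, 20).reshape(20, 1)     # train inputs, shape (l, net.ci)
    >>> y = np.sin(x) * 0.5                           # train targets, shape (l, net.co)
    >>> net = nl.net.newff([[-7, 7]], [5, 1])
    >>> err = net.train(x, y, epochs=500, show=100, goal=0.02)
    >>> out = net.sim(x)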
Create a learning vector quantization (LVQ) network
Parameters:
    minmax: list of [min, max] pairs, one per input (range of input values)
    cn0: int, number of neurons in the competitive (first) layer
    pc: list of float, fraction of neurons assigned to each output class (sums to 1)
Returns:
    net: Net
Example:
    >>> # create network with 2 inputs, 10 competitive neurons
    >>> # and 2 output classes
    >>> net = newlvq([[-1, 1], [-1, 1]], 10, [0.6, 0.4])
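A usage sketch for LVQ: two classes with one-hot targets, trained with the default train_lvq. The data and module paths are illustrative assumptions:

    >>> import numpy as np
    >>> import neurolab as nl
    >>> inp = np.array([[-0.6, -0.5], [-0.4, -0.7], [0.5, 0.6], [0.7, 0.4]])
    >>> tar = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])   # one-hot class labels
    >>> net = nl.net.newlvq([[-1, 1], [-1, 1]], 4, [0.5, 0.5])
    >>> err = net.train(inp, tar, epochs=100)
    >>> out = net.sim(inp)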
Create a single-layer perceptron
Parameters:
    minmax: list of [min, max] pairs, one per input (range of input values)
    cn: int, number of neurons
Returns:
    net: Net
Example:
    >>> # create network with 2 inputs and 10 neurons
    >>> net = newp([[-1, 1], [-1, 1]], 10)
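A usage sketch for the perceptron: learning logical AND with the default train_delta. The module paths and the lr keyword are assumptions based on the package these docs describe:

    >>> import numpy as np
    >>> import neurolab as nl
    >>> inp = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    >>> tar = np.array([[0], [0], [0], [1]])          # logical AND
    >>> net = nl.net.newp([[0, 1], [0, 1]], 1)
    >>> err = net.train(inp, tar, epochs=100, lr=0.1)
    >>> out = net.sim(inp)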
Gradient descent backpropagation
Supported networks:
    newff (multi-layer perceptron)
Parameters:
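The train function of an existing network can be replaced before training; a minimal sketch, assuming the train functions are exposed as nl.train.* and can be assigned to net.trainf (the same assignment pattern applies to the momentum and adaptive-rate variants below):

    >>> import numpy as np
    >>> import neurolab as nl
    >>> inp = np.linspace(-1, 1, 10).reshape(10, 1)
    >>> tar = inp ** 2
    >>> net = nl.net.newff([[-1, 1]], [3, 1])
    >>> net.trainf = nl.train.train_gd        # replace the default train_gdx
    >>> err = net.train(inp, tar, epochs=200, goal=0.01)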
Gradient descent with momentum backpropagation
Supported networks:
    newff (multi-layer perceptron)
Parameters:
Gradient descent with adaptive learning rate
Supported networks:
    newff (multi-layer perceptron)
Parameters:
Gradient descent with momentum backpropagation and adaptive learning rate
Supported networks:
    newff (multi-layer perceptron)
Parameters:
Resilient backpropagation (Rprop)
Supported networks:
    newff (multi-layer perceptron)
Parameters:
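Rprop adapts a separate step size per weight from the sign of the gradient, so it needs no global learning-rate tuning; selecting it follows the same net.trainf pattern (assumed module paths):

    >>> import neurolab as nl
    >>> net = nl.net.newff([[-1, 1]], [3, 1])
    >>> net.trainf = nl.train.train_rprop     # resilient backpropagation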
Winner Take All algorithm
Supported networks:
    newc (Kohonen layer)
Parameters:
Conscience Winner Take All algorithm
Supported networks:
    newc (Kohonen layer)
Parameters:
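train_cwta extends plain WTA with a conscience mechanism that handicaps frequently winning neurons, spreading wins across the layer; per the table above it is the default for newc. A sketch of selecting plain WTA instead (assumed module paths):

    >>> import numpy as np
    >>> import neurolab as nl
    >>> inp = np.random.rand(50, 2)
    >>> net = nl.net.newc([[0, 1], [0, 1]], 5)
    >>> net.trainf = nl.train.train_wta       # plain WTA instead of the default train_cwta
    >>> err = net.train(inp, epochs=50)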
Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, using scipy.optimize.fmin_bfgs
Supported networks:
    newff (multi-layer perceptron)
Parameters:
Conjugate gradient algorithm, using scipy.optimize.fmin_cg
Supported networks:
    newff (multi-layer perceptron)
Parameters:
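Both of these train functions delegate the weight search to SciPy optimizers, so scipy must be installed; selection follows the same net.trainf pattern (assumed module paths):

    >>> import numpy as np
    >>> import neurolab as nl
    >>> inp = np.linspace(-1, 1, 20).reshape(20, 1)
    >>> tar = np.sin(inp)
    >>> net = nl.net.newff([[-1, 1]], [5, 1])
    >>> net.trainf = nl.train.train_bfgs      # or nl.train.train_cg
    >>> err = net.train(inp, tar, epochs=100, goal=0.001)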
LVQ1 train function
Supported networks:
    newlvq
Parameters:
Train with the Delta rule
Supported networks:
    newp (single-layer perceptron)
Parameters:
Train error functions with derivatives
Example:
    >>> msef = MSE()
    >>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
    >>> msef(x)
    1.25
    >>> # calc derivative:
    >>> msef.deriv(x[0])
    array([ 1., 0.])
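The numbers in the example can be checked by hand; judging from the outputs, MSE divides the sum of squared errors by the number of elements, and its derivative is 2*e/N:

    >>> import numpy as np
    >>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
    >>> np.sum(x ** 2) / x.size     # (1 + 4) / 4
    1.25
    >>> x[0] * 2 / x[0].size        # derivative 2*e/N for e = [1, 0]
    array([ 1., 0.])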
Mean squared error function
Parameters:
    e: ndarray, error values (target - output)
Returns:
    v: float, mean squared error
Example:
    >>> f = MSE()
    >>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
    >>> f(x)
    1.25
Derivative of MSE error function
Parameters:
    e: ndarray, error values (target - output)
Returns:
    d: ndarray, derivative of MSE with respect to e
Example:
    >>> f = MSE()
    >>> x = np.array([1.0, 0.0])
    >>> # calc derivative:
    >>> f.deriv(x)
    array([ 1., 0.])
Transfer functions with derivatives
Example:
    >>> import numpy as np
    >>> f = TanSig()
    >>> x = np.linspace(-5, 5, 100)
    >>> y = f(x)
    >>> df_on_dy = f.deriv(x, y)  # calc derivative
    >>> f.out_minmax  # output range [min, max]
    [-1, 1]
    >>> f.inp_active  # active input range [min, max]
    [-2, 2]
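The out_minmax and inp_active attributes describe the saturation behaviour; plotting a function against its derivative makes this visible. A sketch assuming matplotlib is available:

    >>> import numpy as np
    >>> import matplotlib.pyplot as plt
    >>> f = TanSig()
    >>> x = np.linspace(-5, 5, 100)
    >>> y = f(x)
    >>> _ = plt.plot(x, y, label='tansig')
    >>> _ = plt.plot(x, f.deriv(x, y), label='derivative')
    >>> _ = plt.legend()
    >>> plt.show()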
Competitive transfer function
Parameters:
    x: ndarray, input values
Returns:
    y: ndarray, 1.0 at the position of the minimum of x, 0.0 elsewhere
Example:
    >>> f = Competitive()
    >>> f([-5, -0.1, 0, 0.1, 100])
    array([ 1., 0., 0., 0., 0.])
    >>> f([-5, -0.1, 0, -6, 100])
    array([ 0., 0., 0., 1., 0.])
Hard limit transfer function
Parameters:
    x: ndarray, input values
Returns:
    y: ndarray, 1.0 where x > 0, else 0.0
Example:
    >>> f = HardLim()
    >>> x = np.array([-5, -0.1, 0, 0.1, 100])
    >>> f(x)
    array([ 0., 0., 0., 1., 1.])
Derivative of transfer function HardLim
Symmetric hard limit transfer function
Parameters:
    x: ndarray, input values
Returns:
    y: ndarray, 1.0 where x > 0, else -1.0
Example:
    >>> f = HardLims()
    >>> x = np.array([-5, -0.1, 0, 0.1, 100])
    >>> f(x)
    array([-1., -1., -1., 1., 1.])
Derivative of transfer function HardLims
Logarithmic sigmoid transfer function
Parameters:
    x: ndarray, input values
Returns:
    y: ndarray, 1 / (1 + exp(-x))
Example:
    >>> f = LogSig()
    >>> x = np.array([-np.inf, 0.0, np.inf])
    >>> f(x).tolist()
    [0.0, 0.5, 1.0]
Derivative of transfer function LogSig
Linear transfer function
Parameters:
    x: ndarray, input values
Returns:
    y: ndarray, equal to x
Example:
    >>> import numpy as np
    >>> f = PureLin()
    >>> x = np.array([-100., 50., 10., 40.])
    >>> f(x).tolist()
    [-100.0, 50.0, 10.0, 40.0]
Derivative of transfer function PureLin
Hyperbolic tangent sigmoid transfer function
Parameters:
    x: ndarray, input values
Returns:
    y: ndarray, tanh(x)
Example:
    >>> f = TanSig()
    >>> f([-np.inf, 0.0, np.inf])
    array([-1., 0., 1.])
Derivative of transfer function TanSig
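Transfer functions are normally supplied when a network is created; a sketch, assuming newff accepts a transf list with one function per layer (as in the neurolab package these docs appear to describe):

    >>> import neurolab as nl
    >>> # tanh hidden layer, linear output layer
    >>> net = nl.net.newff([[-1, 1]], [5, 1],
    ...                    transf=[nl.trans.TanSig(), nl.trans.PureLin()])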