This module contains the basic network architectures.

Network type | Constructor | Layers | Supported train functions | Error function
---|---|---|---|---
Single-layer perceptron | newp | 1 | train_delta | SSE
Multi-layer perceptron | newff | >= 1 | train_gd, train_gdm, train_gda, train_gdx*, train_rprop, train_bfgs, train_cg | SSE
Competitive layer | newc | 1 | train_wta, train_cwta* | SAE
LVQ | newlvq | 2 | train_lvq | MSE
Elman | newelm | >= 1 | train_gdx | MSE
Hopfield | newhop | 1 | None | None

Note: * marks the default train function.
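As a quick orientation, the sketch below creates a feed-forward net and inspects its training function; it assumes the package is imported as `neurolab` with constructors under `neurolab.net` and trainers under `neurolab.train`, as in the examples that follow.

```python
import neurolab as nl

# build a 1-input feed-forward net; per the table above,
# newff's default training function is train_gdx
net = nl.net.newff([[-1, 1]], [3, 1])
print(net.trainf)                  # currently selected trainer
net.trainf = nl.train.train_rprop  # any supported trainer can be swapped in
```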
Create a competitive layer (Kohonen network)

Parameters:
    minmax: list of [min, max] pairs, one per input
    cn: number of neurons
Returns:
    net: Net
Example:
>>> # create network with 2 inputs and 10 neurons
>>> net = newc([[-1, 1], [-1, 1]], 10)
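A minimal training sketch for the competitive layer, assuming the usual neurolab entry points (`nl.net.newc`, `net.train`, `net.sim`); the data and keyword values are illustrative only.

```python
import numpy as np
import neurolab as nl

# 100 random 2-D points inside the declared input ranges
data = np.random.uniform(-1, 1, (100, 2))

net = nl.net.newc([[-1, 1], [-1, 1]], 10)
error = net.train(data, epochs=200, show=20)  # train_cwta is the default
winners = net.sim(data)  # one-hot row per sample: the winning neuron is 1
```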
Create an Elman recurrent network

Parameters:
    minmax: list of [min, max] pairs, one per input
    size: list with the number of neurons in each layer
    transf: list of transfer functions, one per layer
Returns:
    net: Net
Example:
>>> net = newelm([[-1, 1]], [1], [trans.PureLin()])
>>> net.layers[0].np['w'][:] = 1
>>> net.layers[0].np['b'][:] = 0
>>> net.sim([[1], [1], [1], [3]])
array([[ 1.],
       [ 2.],
       [ 3.],
       [ 6.]])
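Beyond pure simulation, the sketch below trains an Elman network on a toy sequence with its default train_gdx; the layer sizes and keyword values are illustrative.

```python
import numpy as np
import neurolab as nl

# predict the next value of a sine sequence
t = np.linspace(0, 4 * np.pi, 80)
seq = np.sin(t).reshape(-1, 1)

net = nl.net.newelm([[-1, 1]], [8, 1],
                    [nl.trans.TanSig(), nl.trans.PureLin()])
error = net.train(seq[:-1], seq[1:], epochs=300, show=50, goal=0.01)
pred = net.sim(seq[:-1])  # one-step-ahead predictions
```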
Create a multilayer perceptron

Parameters:
    minmax: list of [min, max] pairs, one per input
    size: list with the number of neurons in each layer
Returns:
    net: Net
Example:
>>> # create a neural net with 2 inputs, 1 output and 2 layers
>>> net = newff([[-0.5, 0.5], [-0.5, 0.5]], [3, 1])
>>> net.ci
2
>>> net.co
1
>>> len(net.layers)
2
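A small end-to-end sketch, training a perceptron like the one above on a 1-D regression task with the default train_gdx; names and keyword values follow the conventions already shown.

```python
import numpy as np
import neurolab as nl

x = np.linspace(-0.5, 0.5, 20).reshape(-1, 1)
y = np.sin(x * 2 * np.pi)

net = nl.net.newff([[-0.5, 0.5]], [5, 1])
error = net.train(x, y, epochs=500, show=100, goal=0.01)
approx = net.sim(x)  # network output after training
```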
Create a Hopfield recurrent network

Parameters:
    target: array of target vectors to store
Returns:
    net: Net
Example:
>>> net = newhop([[-1, -1, -1], [1, -1, 1]])
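The stored vectors act as attractors. A sketch, assuming `net.sim` accepts a batch of start states as in the other network types:

```python
import neurolab as nl

target = [[-1, -1, -1], [1, -1, 1]]
net = nl.net.newhop(target)

# the stored patterns are fixed points of the network
print(net.sim(target))
# a corrupted probe is iterated toward a stored pattern
print(net.sim([[1, -1, -1]]))
```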
Create a learning vector quantization (LVQ) network

Parameters:
    minmax: list of [min, max] pairs, one per input
    cn0: number of neurons in the competitive (first) layer
    pc: list of fractions of the classes, one per output neuron
Returns:
    net: Net
Example:
>>> # create network with 2 inputs, 10 neurons in the
>>> # competitive layer and 2 output classes
>>> net = newlvq([[-1, 1], [-1, 1]], 10, [0.6, 0.4])
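A hedged training sketch: LVQ expects one-hot class targets whose width matches len(pc); the data and keyword values here are illustrative.

```python
import numpy as np
import neurolab as nl

# two labelled clusters in 2-D
x0 = np.random.uniform(-1, 0, (30, 2))  # class 0
x1 = np.random.uniform(0, 1, (30, 2))   # class 1
inp = np.vstack([x0, x1])

tgt = np.zeros((60, 2))  # one-hot targets, one column per class in pc
tgt[:30, 0] = 1
tgt[30:, 1] = 1

net = nl.net.newlvq([[-1, 1], [-1, 1]], 10, [0.5, 0.5])
error = net.train(inp, tgt, epochs=100, show=20)  # train_lvq is the default
```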
Create a one-layer perceptron

Parameters:
    minmax: list of [min, max] pairs, one per input
    cn: number of neurons
Returns:
    net: Net
Example:
>>> # create network with 2 inputs and 10 neurons
>>> net = newp([[-1, 1], [-1, 1]], 10)
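A classic single-neuron sketch, training the perceptron on logical AND with the default train_delta; the input ranges and keyword values are illustrative.

```python
import numpy as np
import neurolab as nl

inp = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
tgt = np.array([[0], [0], [0], [1]])  # logical AND

net = nl.net.newp([[0, 1], [0, 1]], 1)  # 2 inputs, 1 neuron
error = net.train(inp, tgt, epochs=100, show=10)
print(net.sim(inp))  # expected: [[0], [0], [0], [1]]
```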
Gradient descent backpropagation

Supported networks:
    newff (multilayer perceptron)
Parameters:
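Train functions are not called directly; they are attached to a net via its trainf attribute and driven through net.train. A sketch, assuming the common keyword arguments (epochs, show, goal, lr) exposed by neurolab's gradient trainers:

```python
import numpy as np
import neurolab as nl

x = np.linspace(-1, 1, 20).reshape(-1, 1)
y = x ** 2

net = nl.net.newff([[-1, 1]], [4, 1])
net.trainf = nl.train.train_gd  # replace the default train_gdx
error = net.train(x, y, epochs=500, show=100, goal=0.01, lr=0.05)
```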
Gradient descent with momentum backpropagation

Supported networks:
    newff (multilayer perceptron)
Parameters:
Gradient descent with adaptive learning rate

Supported networks:
    newff (multilayer perceptron)
Parameters:
Gradient descent with momentum backpropagation and adaptive learning rate

Supported networks:
    newff (multilayer perceptron)
Parameters:
Resilient backpropagation (Rprop)

Supported networks:
    newff (multilayer perceptron)
Parameters:
Winner Take All algorithm

Supported networks:
    newc (Kohonen layer)
Parameters:
Conscience Winner Take All algorithm

Supported networks:
    newc (Kohonen layer)
Parameters:
Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, using scipy.optimize.fmin_bfgs

Supported networks:
    newff (multilayer perceptron)
Parameters:
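The scipy-backed trainers are selected the same way as the gradient trainers; this sketch assumes scipy is installed and that the common keyword arguments carry over.

```python
import numpy as np
import neurolab as nl

x = np.linspace(-1, 1, 20).reshape(-1, 1)
y = np.abs(x)

net = nl.net.newff([[-1, 1]], [4, 1])
net.trainf = nl.train.train_bfgs  # wraps scipy.optimize.fmin_bfgs
error = net.train(x, y, epochs=100, show=20, goal=0.001)
```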
Newton-CG method, using scipy.optimize.fmin_ncg

Supported networks:
    newff (multilayer perceptron)
Parameters:
Conjugate gradient algorithm, using scipy.optimize.fmin_cg

Supported networks:
    newff (multilayer perceptron)
Parameters:
LVQ1 train function

Supported networks:
    newlvq
Parameters:
Train with the Delta rule

Supported networks:
    newp (one-layer perceptron)
Parameters:
Train error functions with derivatives

Example:
>>> msef = MSE()
>>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
>>> msef(x)
1.25
>>> # calc derivative:
>>> msef.deriv(x[0])
array([ 1., 0.])
Mean absolute error function

Parameters:
    e: ndarray of current errors (target - output)
Returns:
    v: float, error value
Derivative of MAE error function

Parameters:
    e: ndarray of current errors (target - output)
Returns:
    d: ndarray, derivative dE/de
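For comparison with the MSE examples, a short MAE sketch assuming the same call convention (the error array target - output is passed directly) and that the error classes live in neurolab.error:

```python
import numpy as np
from neurolab.error import MAE  # assumed module path

f = MAE()
e = np.array([[1.0, -1.0], [2.0, 0.0]])
print(f(e))           # mean of |e| -> 1.0
print(f.deriv(e[0]))  # elementwise derivative of the mean absolute error
```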
Mean squared error function

Parameters:
    e: ndarray of current errors (target - output)
Returns:
    v: float, error value
Example:
>>> f = MSE()
>>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
>>> f(x)
1.25
Derivative of MSE error function

Parameters:
    e: ndarray of current errors (target - output)
Returns:
    d: ndarray, derivative dE/de
Example:
>>> f = MSE()
>>> x = np.array([1.0, 0.0])
>>> # calc derivative:
>>> f.deriv(x)
array([ 1., 0.])
Transfer functions with derivatives

Example:
>>> import numpy as np
>>> f = TanSig()
>>> x = np.linspace(-5, 5, 100)
>>> y = f(x)
>>> df_on_dy = f.deriv(x, y)  # calc derivative
>>> f.out_minmax  # output range [min, max]
[-1, 1]
>>> f.inp_active  # active input range [min, max]
[-2, 2]
Competitive transfer function

Parameters:
    x: ndarray of input values
Returns:
    y: ndarray; 1.0 for the neuron with the minimal input, 0.0 elsewhere
Example:
>>> f = Competitive()
>>> f([-5, -0.1, 0, 0.1, 100])
array([ 1., 0., 0., 0., 0.])
>>> f([-5, -0.1, 0, -6, 100])
array([ 0., 0., 0., 1., 0.])
Hard limit transfer function

Parameters:
    x: ndarray of input values
Returns:
    y: ndarray; 1.0 where x > 0, else 0.0
Example:
>>> f = HardLim()
>>> x = np.array([-5, -0.1, 0, 0.1, 100])
>>> f(x)
array([ 0., 0., 0., 1., 1.])
Derivative of transfer function HardLim
Symmetric hard limit transfer function

Parameters:
    x: ndarray of input values
Returns:
    y: ndarray; 1.0 where x > 0, else -1.0
Example:
>>> f = HardLims()
>>> x = np.array([-5, -0.1, 0, 0.1, 100])
>>> f(x)
array([-1., -1., -1., 1., 1.])
Derivative of transfer function HardLims
Logarithmic sigmoid transfer function

Parameters:
    x: ndarray of input values
Returns:
    y: ndarray, 1 / (1 + exp(-x))
Example:
>>> f = LogSig()
>>> x = np.array([-np.inf, 0.0, np.inf])
>>> f(x).tolist()
[0.0, 0.5, 1.0]
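The deriv interface shown in the module example applies here too; a small sketch, assuming neurolab.trans as the module path:

```python
import numpy as np
from neurolab.trans import LogSig  # assumed module path

f = LogSig()
x = np.array([0.0])
y = f(x)
print(f.deriv(x, y))  # sigmoid derivative y * (1 - y) -> array([ 0.25])
```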
Derivative of transfer function LogSig
Linear transfer function

Parameters:
    x: ndarray of input values
Returns:
    y: ndarray, equal to x
Example:
>>> import numpy as np
>>> f = PureLin()
>>> x = np.array([-100., 50., 10., 40.])
>>> f(x).tolist()
[-100.0, 50.0, 10.0, 40.0]
Derivative of transfer function PureLin
Saturating linear transfer function

Parameters:
    x: ndarray of input values
Returns:
    y: ndarray; 0 where x < 0, x where 0 <= x <= 1, 1 where x > 1
Example:
>>> f = SatLin()
>>> x = np.array([-5, -0.1, 0, 0.1, 100])
>>> f(x)
array([ 0. , 0. , 0. , 0.1, 1. ])
Derivative of transfer function SatLin
Symmetric saturating linear transfer function

Parameters:
    x: ndarray of input values
Returns:
    y: ndarray; -1 where x < -1, x where -1 <= x <= 1, 1 where x > 1
Example:
>>> f = SatLins()
>>> x = np.array([-5, -1, 0, 0.1, 100])
>>> f(x)
array([-1. , -1. , 0. , 0.1, 1. ])
Derivative of transfer function SatLins
Hyperbolic tangent sigmoid transfer function

Parameters:
    x: ndarray of input values
Returns:
    y: ndarray, tanh(x)
Example:
>>> f = TanSig()
>>> f([-np.inf, 0.0, np.inf])
array([-1., 0., 1.])
Derivative of transfer function TanSig
Functions for initializing layers

Initialize the specified properties of a layer with random numbers within given limits

Initialize the specified property of a layer with random numbers within given limits

Parameters:

Set all layer properties to zero

Initialize weights and biases in the range defined by the activation function (transf.inp_active)

Set weights to the centers of the input ranges
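Init functions are attached per layer and applied by net.init(); a sketch assuming the functions live in neurolab.init and that each layer exposes an initf attribute:

```python
import neurolab as nl

net = nl.net.newff([[-1, 1]], [4, 1])

# replace the default initializer with init_zeros on every layer,
# then re-initialize the whole net
for layer in net.layers:
    layer.initf = nl.init.init_zeros
net.init()

print(net.layers[0].np['w'])  # all zeros after re-initialization
```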