This module contains the basic network architectures:
Network type | Function | Number of layers | Supported train functions | Error function
---|---|---|---|---
Single-layer perceptron | newp | 1 | TrainDelta | SSE
Multi-layer perceptron | newff | more than 1 | TrainGD, TrainGDM, TrainGDA, TrainGDX*, TrainRprop, TrainRpropM, TrainBFGS, TrainCG | SSE
Competitive layer | newc | 1 | TrainWTA, TrainCWTA* | SAE
LVQ | newlvq | 2 | TrainLVQ | MSE
Note: * marks the default train function for that network type.
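For orientation, here is a minimal end-to-end sketch of creating, training, and simulating one of these networks. It assumes the package is imported as `neurolab`, and that `net.train` accepts the usual `epochs`, `show`, and `goal` keyword arguments; treat it as an illustration, not a verbatim recipe.

    import numpy as np
    import neurolab as nl

    # approximate y = sin(pi * x) with a 2-layer perceptron (5 hidden neurons, 1 output)
    x = np.linspace(-0.5, 0.5, 20).reshape(20, 1)
    y = np.sin(np.pi * x)
    net = nl.net.newff([[-0.5, 0.5]], [5, 1])
    error = net.train(x, y, epochs=500, show=100, goal=0.02)  # TrainGDX by default
    out = net.sim(x)  # run the trained network on the inputs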
Create a competitive layer (Kohonen network)
Parameters :
    minmax: list of lists, each inner list gives the [min, max] range of one input
    cn: int, number of neurons
Returns :
    net: Net
Example :
    >>> # create network with 2 inputs and 10 neurons
    >>> net = newc([[-1, 1], [-1, 1]], 10)
Create a multi-layer perceptron
Parameters :
    minmax: list of lists, each inner list gives the [min, max] range of one input
    size: list of int, number of neurons in each layer (the last entry is the number of outputs)
Returns :
    net: Net
Example :
    >>> # create neural net with 2 inputs, 1 output and 2 layers
    >>> net = newff([[-0.5, 0.5], [-0.5, 0.5]], [3, 1])
    >>> net.ci
    2
    >>> net.co
    1
    >>> len(net.layers)
    2
Create a learning vector quantization (LVQ) network
Parameters :
    minmax: list of lists, each inner list gives the [min, max] range of one input
    cn0: int, number of neurons in the first (competitive) layer
    pc: list of float, fraction of the neurons assigned to each output class (must sum to 1)
    lr: float, default learning rate stored in the train function defaults
Returns :
    net: Net
Example :
    >>> # create network with 2 inputs,
    >>> # 2 layers and 10 neurons in each layer
    >>> net = newlvq([[-1, 1], [-1, 1]], 10, [0.6, 0.4], 0.02)
    >>> net.trainf.defaults['lr']
    0.02
Create a single-layer perceptron
Parameters :
    minmax: list of lists, each inner list gives the [min, max] range of one input
    cn: int, number of neurons
Returns :
    net: Net
Example :
    >>> # create network with 2 inputs and 10 neurons
    >>> net = newp([[-1, 1], [-1, 1]], 10)
Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, using scipy.optimize.fmin_bfgs
Returns :
    error: list of train error values (error history)
Parameters :
Supported networks :
    newff (multi-layer perceptron)
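A minimal usage sketch, assuming the library is imported as `neurolab as nl`, that this trainer is exposed as `nl.train.train_bfgs`, and that `net.train` takes `epochs`, `show`, and `goal` keyword arguments; the data is made up for illustration.

    import numpy as np
    import neurolab as nl

    net = nl.net.newff([[-0.5, 0.5], [-0.5, 0.5]], [3, 1])
    net.trainf = nl.train.train_bfgs            # select the BFGS trainer
    inp = np.random.uniform(-0.5, 0.5, (50, 2))
    tar = (inp[:, 0] * inp[:, 1]).reshape(50, 1)
    error = net.train(inp, tar, epochs=100, show=10, goal=0.01)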
Conjugate gradient algorithm, using scipy.optimize.fmin_cg
Returns :
    error: list of train error values (error history)
Parameters :
Supported networks :
    newff (multi-layer perceptron)
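A usage sketch under the same assumptions (module-level name `nl.train.train_cg`; `epochs`, `show`, `goal` keywords; illustrative data):

    import numpy as np
    import neurolab as nl

    net = nl.net.newff([[-1, 1]], [5, 1])
    net.trainf = nl.train.train_cg              # conjugate-gradient trainer
    x = np.linspace(-1, 1, 20).reshape(20, 1)
    y = x ** 2
    error = net.train(x, y, epochs=200, show=20, goal=0.02)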
Conscience Winner Take All (CWTA) algorithm
Returns :
    error: list of train error values (error history)
Parameters :
Supported networks :
    newc
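A usage sketch for unsupervised training of a competitive layer, assuming the alias `nl.train.train_cwta` and the `epochs`/`show` keywords:

    import numpy as np
    import neurolab as nl

    # cluster 2-D points with a competitive (Kohonen) layer; no targets are needed
    net = nl.net.newc([[-1, 1], [-1, 1]], 3)
    net.trainf = nl.train.train_cwta            # conscience WTA trainer (default for newc)
    data = np.random.uniform(-1, 1, (100, 2))
    error = net.train(data, epochs=200, show=20)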
Train with the Delta rule
Returns :
    error: list of train error values (error history)
Parameters :
Supported networks :
    newp (single-layer perceptron)
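A usage sketch, assuming the alias `nl.train.train_delta` and the `epochs`/`show` keywords; the logical-AND data is illustrative:

    import numpy as np
    import neurolab as nl

    # single-layer perceptron learning logical AND with the Delta rule
    net = nl.net.newp([[0, 1], [0, 1]], 1)
    net.trainf = nl.train.train_delta           # Delta-rule trainer (default for newp)
    inp = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    tar = np.array([[0], [0], [0], [1]])
    error = net.train(inp, tar, epochs=100, show=10)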
Gradient descent backpropagation
Returns :
    error: list of train error values (error history)
Parameters :
Other parameters :
Supported networks :
    newff (multi-layer perceptron)
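A usage sketch, assuming the alias `nl.train.train_gd`, the `epochs`/`show`/`goal` keywords, and an `lr` learning-rate option of the kind listed under "Other parameters":

    import numpy as np
    import neurolab as nl

    net = nl.net.newff([[-1, 1]], [5, 1])
    net.trainf = nl.train.train_gd              # plain gradient descent
    x = np.linspace(-1, 1, 20).reshape(20, 1)
    y = 0.5 * x
    error = net.train(x, y, epochs=500, show=100, goal=0.01, lr=0.01)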
Gradient descent with adaptive learning rate
Returns :
    error: list of train error values (error history)
Parameters :
Other parameters :
Supported networks :
    newff (multi-layer perceptron)
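A usage sketch, assuming the alias `nl.train.train_gda` and the `epochs`/`show`/`goal` keywords:

    import numpy as np
    import neurolab as nl

    net = nl.net.newff([[-1, 1]], [5, 1])
    net.trainf = nl.train.train_gda             # gradient descent with adaptive learning rate
    x = np.linspace(-1, 1, 20).reshape(20, 1)
    y = np.sin(np.pi * x) * 0.5
    error = net.train(x, y, epochs=500, show=100, goal=0.02)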
Gradient descent with momentum backpropagation
Returns :
    error: list of train error values (error history)
Parameters :
Other parameters :
Supported networks :
    newff (multi-layer perceptron)
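A usage sketch, assuming the alias `nl.train.train_gdm` and the `epochs`/`show`/`goal` keywords:

    import numpy as np
    import neurolab as nl

    net = nl.net.newff([[-1, 1]], [5, 1])
    net.trainf = nl.train.train_gdm             # gradient descent with momentum
    x = np.linspace(-1, 1, 20).reshape(20, 1)
    y = np.abs(x)
    error = net.train(x, y, epochs=500, show=100, goal=0.02)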
Gradient descent with momentum backpropagation and adaptive learning rate
Returns :
    error: list of train error values (error history)
Parameters :
Other parameters :
Supported networks :
    newff (multi-layer perceptron)
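A usage sketch, assuming the alias `nl.train.train_gdx` and the `epochs`/`show`/`goal` keywords; per the table above this is the default trainer for newff:

    import numpy as np
    import neurolab as nl

    # train_gdx is the default trainer for newff, so selecting it explicitly is optional
    net = nl.net.newff([[-1, 1]], [5, 1])
    net.trainf = nl.train.train_gdx
    x = np.linspace(-1, 1, 20).reshape(20, 1)
    y = x ** 3
    error = net.train(x, y, epochs=500, show=100, goal=0.02)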
LVQ1 train function
Returns :
    error: list of train error values (error history)
Parameters :
Supported networks :
    newlvq
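A usage sketch following the newlvq constructor call shown above; the one-hot class coding of the target and the `epochs`/`show` keywords are assumptions:

    import numpy as np
    import neurolab as nl

    # LVQ network: 60% of the neurons code class 0, 40% code class 1
    net = nl.net.newlvq([[-1, 1], [-1, 1]], 10, [0.6, 0.4], 0.02)
    inp = np.random.uniform(-1, 1, (100, 2))
    tar = np.zeros((100, 2))                    # one-hot class coding, one column per class
    tar[inp[:, 0] > 0, 0] = 1
    tar[inp[:, 0] <= 0, 1] = 1
    error = net.train(inp, tar, epochs=200, show=20)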
Resilient Backpropagation (Rprop)
Returns :
    error: list of train error values (error history)
Parameters :
Other parameters :
Supported networks :
    newff (multi-layer perceptron)
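A usage sketch, assuming the alias `nl.train.train_rprop` and the `epochs`/`show`/`goal` keywords:

    import numpy as np
    import neurolab as nl

    net = nl.net.newff([[-1, 1]], [10, 1])
    net.trainf = nl.train.train_rprop           # resilient backpropagation
    x = np.linspace(-1, 1, 40).reshape(40, 1)
    y = np.sin(2 * np.pi * x) * 0.5
    error = net.train(x, y, epochs=500, show=100, goal=0.02)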
Modified Resilient Backpropagation
Returns :
    error: list of train error values (error history)
Parameters :
Other parameters :
Supported networks :
    newff (multi-layer perceptron)
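A sketch under the same pattern; the module-level name `nl.train.train_rpropm` is an assumption here (the exact alias for the TrainRpropM class may differ between versions), as are the `epochs`/`show`/`goal` keywords:

    import numpy as np
    import neurolab as nl

    net = nl.net.newff([[-1, 1]], [10, 1])
    net.trainf = nl.train.train_rpropm          # modified Rprop trainer (alias name assumed)
    x = np.linspace(-1, 1, 40).reshape(40, 1)
    y = np.cos(np.pi * x) * 0.5
    error = net.train(x, y, epochs=500, show=100, goal=0.02)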
Winner Take All (WTA) algorithm
Returns :
    error: list of train error values (error history)
Parameters :
Supported networks :
    newc
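A usage sketch, assuming the alias `nl.train.train_wta` and the `epochs`/`show` keywords; like the conscience variant, it trains a competitive layer without targets:

    import numpy as np
    import neurolab as nl

    # pure Winner Take All training of a competitive layer (unsupervised, no targets)
    net = nl.net.newc([[0, 1], [0, 1]], 4)
    net.trainf = nl.train.train_wta
    data = np.random.rand(200, 2)
    error = net.train(data, epochs=300, show=30)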
Train error functions with derivatives
Example :
    >>> import numpy as np
    >>> msef = MSE()
    >>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
    >>> msef(x)
    1.25
    >>> # calc derivative:
    >>> msef.deriv(x[0])
    array([ 1., 0.])
Mean squared error function
Parameters :
    e: ndarray, current errors (target - output)
Returns :
    v: float, mean squared error
Example :
    >>> f = MSE()
    >>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
    >>> f(x)
    1.25
Derivative of the MSE error function
Parameters :
    e: ndarray, current errors (target - output)
Returns :
    d: ndarray, derivative dE/de, same shape as e
Example :
    >>> f = MSE()
    >>> x = np.array([1.0, 0.0])
    >>> # calc derivative:
    >>> f.deriv(x)
    array([ 1., 0.])
Transfer functions with derivatives
Example :
    >>> import numpy as np
    >>> f = TanSig()
    >>> x = np.linspace(-5, 5, 100)
    >>> y = f(x)
    >>> df_on_dy = f.deriv(x, y)  # calc derivative
    >>> f.out_minmax  # output range [min, max]
    [-1, 1]
    >>> f.inp_active  # active input range [min, max]
    [-2, 2]
Competitive transfer function
Parameters :
    x: array-like of input values
Returns :
    y: ndarray with 1.0 at the position of the minimum input and 0.0 elsewhere
Example :
    >>> f = Competitive()
    >>> f([-5, -0.1, 0, 0.1, 100])
    array([ 1., 0., 0., 0., 0.])
    >>> f([-5, -0.1, 0, -6, 100])
    array([ 0., 0., 0., 1., 0.])
Hard limit transfer function
Parameters :
    x: array-like of input values
Returns :
    y: ndarray, 1.0 where x >= 0 and 0.0 elsewhere
Example :
    >>> f = HardLim()
    >>> x = np.array([-5, -0.1, 0, 0.1, 100])
    >>> f(x)
    array([ 0., 0., 1., 1., 1.])
Derivative of transfer function HardLim
Symmetric hard limit transfer function
Parameters :
    x: array-like of input values
Returns :
    y: ndarray, 1.0 where x >= 0 and -1.0 elsewhere
Example :
    >>> f = HardLims()
    >>> x = np.array([-5, -0.1, 0, 0.1, 100])
    >>> f(x)
    array([-1., -1., 1., 1., 1.])
Derivative of transfer function HardLims
Logarithmic sigmoid transfer function
Parameters :
    x: array-like of input values
Returns :
    y: ndarray, 1 / (1 + exp(-x))
Example :
    >>> f = LogSig()
    >>> x = np.array([-np.Inf, 0.0, np.Inf])
    >>> f(x).tolist()
    [0.0, 0.5, 1.0]
Derivative of transfer function LogSig
Linear transfer function
Parameters :
    x: array-like of input values
Returns :
    y: ndarray, equal to x
Example :
    >>> import numpy as np
    >>> f = PureLin()
    >>> x = np.array([-100., 50., 10., 40.])
    >>> f(x).tolist()
    [-100.0, 50.0, 10.0, 40.0]
Derivative of transfer function PureLin
Hyperbolic tangent sigmoid transfer function
Parameters :
    x: array-like of input values
Returns :
    y: ndarray, tanh(x)
Example :
    >>> f = TanSig()
    >>> f([-np.Inf, 0.0, np.Inf])
    array([-1., 0., 1.])
Derivative of transfer function TanSig