Library

net: Neural Networks

This module contains the basic network architectures.

Network type              Function  Layers       Supported train functions                    Error function
Single-layer perceptron   newp      1            train_delta                                  SSE
Multi-layer perceptron    newff     more than 1  train_gd, train_gdm, train_gda, train_gdx*,  SSE
                                                 train_rprop, train_bfgs, train_cg
Competitive layer         newc      1            train_wta, train_cwta*                       SAE
LVQ                       newlvq    2            train_lvq                                    MSE

Note

* - default train function

neurolab.net.newc(minmax, cn)

Create competitive layer (Kohonen network)

Parameters :
minmax: list ci x 2

Range of each input value

cn: int

Number of neurons

Returns :

net: Net

Example :
>>> # create network with 2 inputs and 10 neurons
>>> net = newc([[-1, 1], [-1, 1]], 10)
neurolab.net.newff(minmax, size, transf=None)

Create multilayer perceptron

Parameters :
minmax: list ci x 2

Range of each input value

size: list of length equal to the number of layers

Contains the number of neurons for each layer

transf: list (default TanSig)

List of activation function for each layer

Returns :

net: Net

Example :
>>> # create neural net with 2 inputs, 1 output and 2 layers
>>> net = newff([[-0.5, 0.5], [-0.5, 0.5]], [3, 1])
>>> net.ci
2
>>> net.co
1
>>> len(net.layers)
2
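>>> # a quick simulation check of the created net (a sketch, not from the original docs)
>>> import numpy as np
>>> out = net.sim(np.array([[0.1, 0.2]]))
>>> out.shape
(1, 1)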
neurolab.net.newlvq(minmax, cn0, pc)

Create a learning vector quantization (LVQ) network

Parameters :
minmax: list ci x 2

Range of each input value

cn0: int

Number of neurons in input layer

pc: list

List of class proportions; sum(pc) must equal 1

Returns :

net: Net

Example :
>>> # create network with 2 inputs, 10 neurons in the
>>> # input layer and 2 neurons in the output layer
>>> net = newlvq([[-1, 1], [-1, 1]], 10, [0.6, 0.4])
neurolab.net.newp(minmax, cn, transf=HardLim())

Create a single-layer perceptron

Parameters :
minmax: list ci x 2

Range of each input value

cn: int

Number of neurons

transf: func (default HardLim)

Activation function

Returns :

net: Net

Example :
>>> # create network with 2 inputs and 10 neurons
>>> net = newp([[-1, 1], [-1, 1]], 10)

train: Train Algorithms

Train algorithms based on gradient descent

neurolab.train.train_gd(net, input, target=None, **kwargs)

Gradient descent backpropagation

Support networks:

newff (multi-layer perceptron)

Parameters :
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.01)

learning rate

adapt: bool (default False)

type of learning
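
Example (a minimal sketch, not from the library reference; assumes numpy and neurolab are importable, and the same trainf-assignment pattern applies to train_gdm, train_gda and train_gdx below):
>>> import numpy as np
>>> import neurolab as nl
>>> x = np.linspace(-1, 1, 20).reshape(20, 1)
>>> y = x ** 2
>>> net = nl.net.newff([[-1, 1]], [5, 1])
>>> net.trainf = nl.train.train_gd
>>> # progress is printed every `show` epochs; err is the list of epoch errors
>>> err = net.train(x, y, epochs=200, show=100, goal=0.02, lr=0.05)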

neurolab.train.train_gdm(net, input, target=None, **kwargs)

Gradient descent with momentum backpropagation

Support networks:

newff (multi-layer perceptron)

Parameters :
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.01)

learning rate

adapt: bool (default False)

type of learning

neurolab.train.train_gda(net, input, target=None, **kwargs)

Gradient descent with adaptive learning rate

Support networks:

newff (multi-layer perceptron)

Parameters :
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.01)

learning rate

adapt: bool (default False)

type of learning

lr_inc: float (> 1, default 1.05)

Ratio to increase learning rate

lr_dec: float (< 1, default 0.7)

Ratio to decrease learning rate

max_perf_inc: float (> 1, default 1.04)

Maximum performance increase

neurolab.train.train_gdx(net, input, target=None, **kwargs)

Gradient descent with momentum backpropagation and adaptive lr

Support networks:

newff (multi-layer perceptron)

Parameters :
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.01)

learning rate

adapt: bool (default False)

type of learning

lr_inc: float (default 1.05)

Ratio to increase learning rate

lr_dec: float (default 0.7)

Ratio to decrease learning rate

max_perf_inc: float (default 1.04)

Maximum performance increase

mc: float (default 0.9)

Momentum constant

neurolab.train.train_rprop(net, input, target=None, **kwargs)

Resilient Backpropagation

Support networks:

newff (multi-layer perceptron)

Parameters :
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.07)

learning rate (init rate)

adapt: bool (default False)

type of learning

rate_dec: float (default 0.5)

Decrement to weight change

rate_inc: float (default 1.2)

Increment to weight change

rate_min: float (default 1e-9)

Minimum performance gradient

rate_max: float (default 50)

Maximum weight change
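
A short usage sketch (assumes the net and data from the train_gd example above; the rate_inc/rate_dec values shown are just the defaults):
>>> net.trainf = nl.train.train_rprop
>>> err = net.train(x, y, epochs=200, goal=0.02, rate_inc=1.2, rate_dec=0.5)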

Train algorithms based on the Winner Take All rule

neurolab.train.train_wta(net, input, target=None, **kwargs)

Winner Take All algorithm

Support networks:

newc (Kohonen layer)

Parameters :
input: array like (l x net.ci)

train input patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train
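
A usage sketch for competitive training (a sketch, not from the library reference; assumes numpy and neurolab, with input points forming two clusters, one per neuron):
>>> import numpy as np
>>> import neurolab as nl
>>> centers = np.array([[0.2, 0.2], [0.8, 0.8]])
>>> inp = np.repeat(centers, 10, axis=0) + np.random.rand(20, 2) * 0.05
>>> net = nl.net.newc([[0, 1], [0, 1]], 2)
>>> net.trainf = nl.train.train_wta
>>> err = net.train(inp, epochs=100, show=20)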

neurolab.train.train_cwta(net, input, target=None, **kwargs)

Conscience Winner Take All algorithm

Support networks:

newc (Kohonen layer)

Parameters :
input: array like (l x net.ci)

train input patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

Train algorithms based on scipy.optimize

neurolab.train.train_bfgs(net, input, target=None, **kwargs)

Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, using scipy.optimize.fmin_bfgs

Support networks:

newff (multi-layer perceptron)

Parameters :
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train
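
A usage sketch (assumes the net and data from the train_gd example above; train_cg and train_ncg are called the same way):
>>> net.trainf = nl.train.train_bfgs
>>> err = net.train(x, y, epochs=100, goal=0.01)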

neurolab.train.train_cg(net, input, target=None, **kwargs)

Conjugate gradient algorithm, using scipy.optimize.fmin_cg

Support networks:

newff (multi-layer perceptron)

Parameters :
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

neurolab.train.train_ncg(net, input, target=None, **kwargs)

Newton-CG method, using scipy.optimize.fmin_ncg

Support networks:

newff (multi-layer perceptron)

Parameters :
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

Train algorithms for LVQ networks

neurolab.train.train_lvq(net, input, target=None, **kwargs)

LVQ1 train function

Support networks:

newlvq

Parameters :
input: array like (l x net.ci)

train input patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.01)

learning rate

adapt: bool (default False)

type of learning
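
A usage sketch (a sketch only, not from the library reference; target rows are one-hot vectors with net.co columns, and goal=-1 forces training for the full number of epochs):
>>> import numpy as np
>>> import neurolab as nl
>>> inp = np.array([[-0.5, -0.5], [-0.6, -0.4], [0.5, 0.5], [0.4, 0.6]])
>>> tar = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
>>> net = nl.net.newlvq([[-1, 1], [-1, 1]], 4, [0.5, 0.5])
>>> err = net.train(inp, tar, epochs=100, goal=-1)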

Delta rule

neurolab.train.train_delta(net, input, target=None, **kwargs)

Train with Delta rule

Support networks:

newp (single-layer perceptron)

Parameters :
input: array like (l x net.ci)

train input patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.01)

learning rate
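
A usage sketch on AND-like data (a sketch; per the table above, newp uses train_delta by default, so no trainf assignment is needed):
>>> import numpy as np
>>> import neurolab as nl
>>> inp = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
>>> tar = np.array([[0], [0], [0], [1]])
>>> net = nl.net.newp([[0, 1], [0, 1]], 1)
>>> err = net.train(inp, tar, epochs=100, show=10, lr=0.1)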

error: Error functions

Train error functions with derivatives

Example:
>>> import numpy as np
>>> from neurolab.error import MSE
>>> msef = MSE()
>>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
>>> msef(x)
1.25
>>> # calc derivative:
>>> msef.deriv(x[0])
array([ 1.,  0.])
class neurolab.error.MAE

Mean absolute error function

Parameters :
e: ndarray

current errors: target - output

Returns :
v: float

Error value

deriv(e)

Derivative of MAE error function

Parameters :
e: ndarray

current errors: target - output

Returns :
d: ndarray

Derivative: dE/d_out
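
A usage sketch (no output values asserted, since the exact normalization is not shown here; SAE and SSE below expose the same call/deriv interface):
>>> import numpy as np
>>> from neurolab.error import MAE
>>> f = MAE()
>>> e = np.array([[1.0, -2.0], [0.5, 0.0]])
>>> v = f(e)           # scalar error value
>>> d = f.deriv(e[0])  # gradient dE/d_out for one sample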

class neurolab.error.MSE

Mean squared error function

Parameters :
e: ndarray

current errors: target - output

Returns :
v: float

Error value

Example :
>>> f = MSE()
>>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
>>> f(x)
1.25
deriv(e)

Derivative of MSE error function

Parameters :
e: ndarray

current errors: target - output

Returns :
d: ndarray

Derivative: dE/d_out

Example :
>>> f = MSE()
>>> x = np.array([1.0, 0.0])
>>> # calc derivative:
>>> f.deriv(x)
array([ 1.,  0.])
class neurolab.error.SAE

Sum absolute error function

Parameters :
e: ndarray

current errors: target - output

Returns :
v: float

Error value

deriv(e)

Derivative of SAE error function

Parameters :
e: ndarray

current errors: target - output

Returns :
d: ndarray

Derivative: dE/d_out

class neurolab.error.SSE

Sum squared error function

Parameters :
e: ndarray

current errors: target - output

Returns :
v: float

Error value

deriv(e)

Derivative of SSE error function

Parameters :
e: ndarray

current errors: target - output

Returns :
d: ndarray

Derivative: dE/d_out

trans: Transfer functions

Transfer functions with derivatives

Example:
>>> import numpy as np
>>> f = TanSig()
>>> x = np.linspace(-5,5,100)
>>> y = f(x)
>>> df_on_dy = f.deriv(x, y) # calc derivative
>>> f.out_minmax    # list output range [min, max]
[-1, 1]
>>> f.inp_active    # list input active range [min, max]
[-2, 2]
class neurolab.trans.Competitive

Competitive transfer function

Parameters :
x: ndarray

Input array

Returns :
y : ndarray

may take the values 0 or 1: 1 for the minimal element of x, 0 otherwise

Example :
>>> f = Competitive()
>>> f([-5, -0.1, 0, 0.1, 100])
array([ 1.,  0.,  0.,  0.,  0.])
>>> f([-5, -0.1, 0, -6, 100])
array([ 0.,  0.,  0.,  1.,  0.])
class neurolab.trans.HardLim

Hard limit transfer function

Parameters :
x: ndarray

Input array

Returns :
y : ndarray

may take the following values: 0, 1

Example :
>>> f = HardLim()
>>> x = np.array([-5, -0.1, 0, 0.1, 100])
>>> f(x)
array([ 0.,  0.,  0.,  1.,  1.])
deriv(x, y)

Derivative of transfer function HardLim

class neurolab.trans.HardLims

Symmetric hard limit transfer function

Parameters :
x: ndarray

Input array

Returns :
y : ndarray

may take the following values: -1, 1

Example :
>>> f = HardLims()
>>> x = np.array([-5, -0.1, 0, 0.1, 100])
>>> f(x)
array([-1., -1., -1.,  1.,  1.])
deriv(x, y)

Derivative of transfer function HardLims

class neurolab.trans.LogSig

Logarithmic sigmoid transfer function

Parameters :
x: ndarray

Input array

Returns :
y : ndarray

The corresponding logarithmic sigmoid values.

Example :
>>> f = LogSig()
>>> x = np.array([-np.Inf, 0.0, np.Inf])
>>> f(x).tolist()
[0.0, 0.5, 1.0]
deriv(x, y)

Derivative of transfer function LogSig

class neurolab.trans.PureLin

Linear transfer function

Parameters :
x: ndarray

Input array

Returns :
y : ndarray

copy of x

Example :
>>> import numpy as np
>>> f = PureLin()
>>> x = np.array([-100., 50., 10., 40.])
>>> f(x).tolist()
[-100.0, 50.0, 10.0, 40.0]
deriv(x, y)

Derivative of transfer function PureLin

class neurolab.trans.TanSig

Hyperbolic tangent sigmoid transfer function

Parameters :
x: ndarray

Input array

Returns :
y : ndarray

The corresponding hyperbolic tangent values.

Example :
>>> f = TanSig()
>>> f([-np.Inf, 0.0, np.Inf])
array([-1.,  0.,  1.])
deriv(x, y)

Derivative of transfer function TanSig
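
Transfer functions are typically supplied to newff as a list of instances, one per layer. A sketch (LogSig hidden layer, PureLin output; not from the original docs):
>>> import neurolab as nl
>>> net = nl.net.newff([[-1, 1], [-1, 1]], [4, 1],
...                    transf=[nl.trans.LogSig(), nl.trans.PureLin()])
>>> len(net.layers)
2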

init: Initializing functions

Functions for initializing layers

class neurolab.init.InitRand(minmax, init_prop)

Initialize the specified properties of the layer with random numbers within the specified limits

neurolab.init.init_rand(layer, min=-0.5, max=0.5, init_prop='w')

Initialize the specified property of the layer with random numbers within the specified limits

Parameters :
layer:

Initialized layer

min: float (default -0.5)

minimum value after the initialization

max: float (default 0.5)

maximum value after the initialization

init_prop: str (default ‘w’)

name of initialized property, must be in layer.np
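
A usage sketch (assigning the function to each layer's initf is one common way to apply it; a direct call such as init_rand(layer) also works, since every argument except layer has a default):
>>> import neurolab as nl
>>> net = nl.net.newff([[-1, 1]], [3, 1])
>>> for l in net.layers:
...     l.initf = nl.init.init_rand
>>> net.init()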

neurolab.init.init_zeros(layer)

Set all layer properties to zero

neurolab.init.initwb_reg(layer)

Initialize weights and bias in the range defined by the activation function (transf.inp_active)

neurolab.init.midpoint(layer)

Sets the weights to the center of the input ranges