numdifftools package

Submodules

numdifftools.core module

numerical differentiation functions: Derivative, Gradient, Jacobian, and Hessian

Author: Per A. Brodtkorb

Created: 01.08.2008
Copyright: (c) pab 2008
Licence: New BSD

Based on the MATLAB functions derivest.m, gradest.m, hessdiag.m, hessian.m and jacobianest.m, version 1.0 released 12/27/2006 by John D’Errico (e-mail: woodchips@rochester.rr.com)

Also based on the Python functions approx_fprime, approx_fprime_cs, approx_hess_cs, approx_hess1, approx_hess2 and approx_hess3 in the statsmodels.tools.numdiff module, released in 2014 and written by Josef Perktold.

numdifftools.core.dea3(v0, v1, v2, symmetric=False)[source]

Extrapolate a slowly convergent sequence

v0, v1, v2 : array-like
3 values of a convergent sequence to extrapolate
result : array-like
extrapolated value
abserr : array-like
absolute error estimate

DEA3 attempts to extrapolate nonlinearly to a better estimate of the sequence’s limiting value, thus improving the rate of convergence. The routine is based on the epsilon algorithm of P. Wynn, see [1].

# integrate sin(x) from 0 to pi/2

>>> import numpy as np
>>> import numdifftools as nd
>>> Ei= np.zeros(3)
>>> linfun = lambda i : np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(3):
...    x = linfun(k)
...    Ei[k] = np.trapz(np.sin(x),x)
>>> [En, err] = nd.dea3(Ei[0], Ei[1], Ei[2])
>>> truErr = Ei-1.
>>> (truErr, err, En)
(array([ -2.00805680e-04,  -5.01999079e-05,  -1.25498825e-05]),
array([ 0.00020081]), array([ 1.]))

dea

[1] C. Brezinski (1977) “Acceleration de la convergence en analyse numerique”, Lecture Notes in Math., vol. 584, Springer-Verlag, New York.
class numdifftools.core.Derivative(f, step=None, method='central', order=2, n=1, full_output=False)[source]

Bases: numdifftools.core._Derivative

Calculate n-th derivative with finite difference approximation

f : function
function of one array f(x, *args, **kwds)
step : float, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. The default is MinStepGenerator(base_step=step, step_ratio=None) if step is given or method is in [‘complex’, ‘multicomplex’], otherwise MaxStepGenerator(step_ratio=None, num_extrap=14). The results are extrapolated if the StepGenerator generates more than 3 steps.
method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
n : int, optional
Order of the derivative.
full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
x : array_like
value at which function derivative is evaluated
args : tuple
Arguments for function f.
kwds : dict
Keyword arguments for function f.
der : ndarray
array of derivatives

Complex methods are usually the most accurate provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for n larger, so truncation error can be eliminated by choosing steps to be very small. In particular, the first-order complex-step derivative avoids the problem of round-off error with small steps because there is no subtraction. However, this method fails if f(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default method, but sometimes one can only allow evaluation in the forward or backward direction.

For all methods one should be careful not to decrease the step size too much, because of round-off errors.

Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.

Ridout, M.S. (2009) Statistical applications of the complex-step method
of numerical differentiation. The American Statistician, 63, 66-74
K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step
derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical
Differentiation. Numerische Mathematik.
Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for
Integrals of Derivatives. Numerische Mathematik.
>>> import numpy as np
>>> import numdifftools as nd

# 1’st derivative of exp(x), at x == 1

>>> fd = nd.Derivative(np.exp)
>>> np.allclose(fd(1), 2.71828183)
True
>>> d2 = fd([1, 2])
>>> np.allclose(d2, [ 2.71828183,  7.3890561 ])
True
>>> def f(x):
...     return x**3 + x**2
>>> df = nd.Derivative(f)
>>> np.allclose(df(1), 5)
True
>>> ddf = nd.Derivative(f, n=2)
>>> np.allclose(ddf(1), 8)
True
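
As a sketch of the point in the notes above that sometimes only one-sided evaluation is possible (e.g. near the edge of a function's domain), the 'forward' method evaluates f only at x and to its right; the function and tolerance here are illustrative choices:

>>> dlog = nd.Derivative(np.log, method='forward')
>>> np.allclose(dlog(1), 1.0)
True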

Gradient, Hessian

n
class numdifftools.core.Jacobian(f, step=None, method='central', order=2, full_output=False)[source]

Bases: numdifftools.core.Gradient

Calculate Jacobian with finite difference approximation

f : function
function of one array f(x, *args, **kwds)
step : float, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. The default is MinStepGenerator(base_step=step, step_ratio=None) if step is given or method is in [‘complex’, ‘multicomplex’], otherwise MaxStepGenerator(step_ratio=None, num_extrap=14). The results are extrapolated if the StepGenerator generates more than 3 steps.
method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
x : array_like
value at which function derivative is evaluated
args : tuple
Arguments for function f.
kwds : dict
Keyword arguments for function f.
jacob : array
Jacobian

Complex methods are usually the most accurate provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for n larger, so truncation error can be eliminated by choosing steps to be very small. In particular, the first-order complex-step derivative avoids the problem of round-off error with small steps because there is no subtraction. However, this method fails if f(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default method, but sometimes one can only allow evaluation in the forward or backward direction.

For all methods one should be careful not to decrease the step size too much, because of round-off errors.

Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.

If f returns a 1d array, it returns a Jacobian. If f returns a 2d array (e.g., with a value for each observation), it returns a 3d array with the Jacobian of each observation, with shape xk x nobs x xk; i.e., the Jacobian of the first observation would be [:, 0, :].

Ridout, M.S. (2009) Statistical applications of the complex-step method
of numerical differentiation. The American Statistician, 63, 66-74
K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step
derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical
Differentiation. Numerische Mathematik.
Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for
Integrals of Derivatives. Numerische Mathematik.
>>> import numpy as np
>>> import numdifftools as nd

#(nonlinear least squares)

>>> xdata = np.reshape(np.arange(0,1,0.1),(-1,1))
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> Jfun = nd.Jacobian(fun)
>>> val = Jfun([1,2,0.75])
>>> np.allclose(val, np.zeros((10,3)))
True
>>> fun2 = lambda x : x[0]*x[1]*x[2] + np.exp(x[0])*x[1]
>>> Jfun3 = nd.Jacobian(fun2)
>>> Jfun3([3.,5.,7.])
array([ 135.42768462,   41.08553692,   15.        ])
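
The example below (not part of the original docstring) differentiates a genuinely vector-valued function whose Jacobian is known analytically; it assumes the conventional layout with one row per function component and one column per variable:

>>> g = lambda x: np.array([x[0]**2 + x[1], x[0]*x[1]])
>>> Jg = nd.Jacobian(g)
>>> np.allclose(Jg([2., 3.]), [[4., 1.], [3., 2.]])
True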

Derivative, Hessian, Gradient

class numdifftools.core.Gradient(f, step=None, method='central', order=2, full_output=False)[source]

Bases: numdifftools.core.Derivative

Calculate Gradient with finite difference approximation

f : function
function of one array f(x, *args, **kwds)
step : float, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. The default is MinStepGenerator(base_step=step, step_ratio=None) if step is given or method is in [‘complex’, ‘multicomplex’], otherwise MaxStepGenerator(step_ratio=None, num_extrap=14). The results are extrapolated if the StepGenerator generates more than 3 steps.
method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
x : array_like
value at which function derivative is evaluated
args : tuple
Arguments for function f.
kwds : dict
Keyword arguments for function f.
grad : array
gradient

Complex methods are usually the most accurate provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for n larger, so truncation error can be eliminated by choosing steps to be very small. In particular, the first-order complex-step derivative avoids the problem of round-off error with small steps because there is no subtraction. However, this method fails if f(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default method, but sometimes one can only allow evaluation in the forward or backward direction.

For all methods one should be careful not to decrease the step size too much, because of round-off errors.

Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.

Ridout, M.S. (2009) Statistical applications of the complex-step method
of numerical differentiation. The American Statistician, 63, 66-74
K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step
derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical
Differentiation. Numerische Mathematik.
Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for
Integrals of Derivatives. Numerische Mathematik.
>>> import numpy as np
>>> import numdifftools as nd
>>> fun = lambda x: np.sum(x**2)
>>> dfun = nd.Gradient(fun)
>>> dfun([1,2,3])
array([ 2.,  4.,  6.])

# At [x,y] = [1,1], compute the numerical gradient of the function sin(x-y) + y*exp(x)

>>> sin = np.sin; exp = np.exp
>>> z = lambda xy: sin(xy[0]-xy[1]) + xy[1]*exp(xy[0])
>>> dz = nd.Gradient(z)
>>> grad2 = dz([1, 1])
>>> grad2
array([ 3.71828183,  1.71828183])

# At the global minimizer (1,1) of the Rosenbrock function, compute the gradient. It should be essentially zero.

>>> rosen = lambda x : (1-x[0])**2 + 105.*(x[1]-x[0]**2)**2
>>> rd = nd.Gradient(rosen)
>>> grad3 = rd([1,1])
>>> np.allclose(grad3,[0, 0])
True
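
Since the functions above are analytic, the complex-step method discussed in the notes can be used as well; a minimal sketch:

>>> fun2 = lambda x: np.sum(np.exp(x)*x)
>>> dfun2 = nd.Gradient(fun2, method='complex')
>>> np.allclose(dfun2([0., 1.]), [1., 2*np.exp(1)])
True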

Derivative, Hessian, Jacobian

class numdifftools.core.Hessian(f, step=None, method='central', full_output=False)[source]

Bases: numdifftools.core._Derivative

Calculate Hessian with finite difference approximation

f : function
function of one array f(x, *args, **kwds)
step : float, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. The default is MinStepGenerator(base_step=step, step_ratio=None) if step is given or method is in [‘complex’, ‘multicomplex’], otherwise MaxStepGenerator(step_ratio=None, num_extrap=14). The results are extrapolated if the StepGenerator generates more than 3 steps.
method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
x : array_like
value at which function derivative is evaluated
args : tuple
Arguments for function f.
kwds : dict
Keyword arguments for function f.
hess : ndarray
array of partial second derivatives, Hessian

Complex methods are usually the most accurate provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for n larger, so truncation error can be eliminated by choosing steps to be very small. In particular, the first-order complex-step derivative avoids the problem of round-off error with small steps because there is no subtraction. However, this method fails if f(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default method, but sometimes one can only allow evaluation in the forward or backward direction.

For all methods one should be careful not to decrease the step size too much, because of round-off errors.

Computes the Hessian according to method as ‘forward’ (1), ‘central’ (2) or ‘complex’ (3):

(1) H_jk = (f(x + d_j e_j + d_k e_k) - f(x + d_j e_j) - f(x + d_k e_k) + f(x)) / (d_j d_k)

(2) H_jk = (f(x + d_j e_j + d_k e_k) - f(x + d_j e_j - d_k e_k) - f(x - d_j e_j + d_k e_k) + f(x - d_j e_j - d_k e_k)) / (4 d_j d_k)

(3) H_jk = imag(f(x + i d_j e_j + d_k e_k) - f(x + i d_j e_j - d_k e_k)) / (2 d_j d_k)

where e_j is a unit vector whose j-th element is one and all other elements are zero, and d_j is a scalar spacing (steps_j).
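
To make formula (2) concrete, the sketch below (not part of the original documentation) evaluates the central-difference cross term by hand for one fixed step and compares it with the adaptive, extrapolated estimate from Hessian; the step size 1e-4 is an arbitrary illustrative choice:

import numpy as np
import numdifftools as nd

def central_cross(f, x, j, k, d=1e-4):
    # formula (2): central-difference estimate of d^2 f / (dx_j dx_k)
    ej, ek = np.zeros_like(x), np.zeros_like(x)
    ej[j], ek[k] = d, d
    return (f(x + ej + ek) - f(x + ej - ek)
            - f(x - ej + ek) + f(x - ej - ek)) / (4*d*d)

rosen = lambda x: (1. - x[0])**2 + 105.*(x[1] - x[0]**2)**2
x = np.array([1., 1.])
print(central_cross(rosen, x, 0, 1))   # roughly -420
print(nd.Hessian(rosen)(x)[0, 1])      # adaptive estimate of the same element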

Ridout, M.S. (2009) Statistical applications of the complex-step method
of numerical differentiation. The American Statistician, 63, 66-74
K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step
derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical
Differentiation. Numerische Mathematik.
Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for
Integrals of Derivatives. Numerische Mathematik.
>>> import numpy as np
>>> import numdifftools as nd

# Rosenbrock function, minimized at [1,1]

>>> rosen = lambda x : (1.-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> Hfun = nd.Hessian(rosen)
>>> h = Hfun([1, 1])
>>> h
array([[ 842., -420.],
       [-420.,  210.]])

# cos(x-y), at (0,0)

>>> cos = np.cos
>>> fun = lambda xy : cos(xy[0]-xy[1])
>>> Hfun2 = nd.Hessian(fun)
>>> h2 = Hfun2([0, 0])
>>> h2
array([[-1.,  1.],
       [ 1., -1.]])

Derivative, Hessian

class numdifftools.core.Hessdiag(f, step=None, method='central', order=2, full_output=False)[source]

Bases: numdifftools.core.Derivative

Calculate Hessian diagonal with finite difference approximation

f : function
function of one array f(x, *args, **kwds)
step : float, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. The default is MinStepGenerator(base_step=step, step_ratio=None) if step is given or method is in [‘complex’, ‘multicomplex’], otherwise MaxStepGenerator(step_ratio=None, num_extrap=14). The results are extrapolated if the StepGenerator generates more than 3 steps.
method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
x : array_like
value at which function derivative is evaluated
args : tuple
Arguments for function f.
kwds : dict
Keyword arguments for function f.
hessdiag : array
hessian diagonal

Complex methods are usually the most accurate provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for n larger, so truncation error can be eliminated by choosing steps to be very small. In particular, the first-order complex-step derivative avoids the problem of round-off error with small steps because there is no subtraction. However, this method fails if f(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default method, but sometimes one can only allow evaluation in the forward or backward direction.

For all methods one should be careful not to decrease the step size too much, because of round-off errors.

Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.

Ridout, M.S. (2009) Statistical applications of the complex-step method
of numerical differentiation. The American Statistician, 63, 66-74
K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step
derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical
Differentiation. Numerische Mathematik.
Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for
Integrals of Derivatives. Numerische Mathematik.
>>> import numpy as np
>>> import numdifftools as nd
>>> fun = lambda x : x[0] + x[1]**2 + x[2]**3
>>> Hfun = nd.Hessdiag(fun, full_output=True)
>>> hd, info = Hfun([1,2,3])
>>> np.allclose(hd, [  0.,   2.,  18.])
True
>>> info.error_estimate < 1e-11
array([ True,  True,  True], dtype=bool)

Derivative, Hessian, Jacobian, Gradient

class numdifftools.core.MinStepGenerator(base_step=None, step_ratio=2, num_steps=None, offset=0, scale=None, num_extrap=0, use_exact_steps=True, check_num_steps=True)[source]

Bases: object

Generates a sequence of steps

where steps = base_step * step_ratio ** (np.arange(num_steps) + offset)

base_step : float, array-like, optional
Defines the base step. If None, then base_step is set to EPS**(1/scale)*max(log(1+|x|), 1), where x is supplied at runtime through the __call__ method.
step_ratio : real scalar, optional, default 2
Ratio between sequential steps generated. Note: Ratio > 1. If None, then step_ratio is 2 for n=1, otherwise step_ratio is 1.6.
num_steps : scalar integer, optional, default n + order - 1 + num_extrap
defines the number of steps generated. It should be larger than n + order - 1.
offset : real scalar, optional, default 0
offset to the base step
scale : real scalar, optional
scale used in base step. If not None it will override the default computed with the default_scale function.
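
A minimal usage sketch (assuming MinStepGenerator is exported at the package level, as in recent releases; otherwise import it from numdifftools.core) of supplying a custom step generator to Derivative via the step argument; the parameter values are illustrative only:

>>> import numpy as np
>>> import numdifftools as nd
>>> step_gen = nd.MinStepGenerator(base_step=1e-4, num_steps=10, step_ratio=2)
>>> d_exp = nd.Derivative(np.exp, step=step_gen)
>>> np.allclose(d_exp(1), np.exp(1))
True
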
class numdifftools.core.MaxStepGenerator(step_max=2.0, step_ratio=2.0, num_steps=15, step_nom=None, offset=0, num_extrap=0, use_exact_steps=False, check_num_steps=True)[source]

Bases: numdifftools.core.MinStepGenerator

Generates a sequence of steps

where
steps = base_step * step_ratio ** (-np.arange(num_steps) + offset)
base_step = step_max * step_nom
step_max : float, array-like, optional, default 2
Defines the maximum step
step_ratio : real scalar, optional, default 2
Ratio between sequential steps generated. Note: Ratio > 1
num_steps : scalar integer, optional, default n + order - 1 + num_extrap
defines the number of steps generated. It should be larger than n + order - 1.
step_nom : default maximum(log1p(abs(x)), 1)
Nominal step.
offset : real scalar, optional, default 0
offset to the base step: max_step * nom_step
class numdifftools.core.Richardson(step_ratio=2.0, step=1, order=1, num_terms=2)[source]

Bases: object

Extrapolates a sequence with Richardson's method

Suppose you have a series expansion of the form

L = f(h) + a0 * h^p_0 + a1 * h^p_1 + a2 * h^p_2 + ...

where p_i = order + step * i and f(h) -> L as h -> 0, but f(0) != L.

If we evaluate the right-hand side for different step sizes h, we can fit a polynomial to that sequence of approximations. This is exactly what this class does.
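
As a hand-worked illustration of the idea (separate from this class's internals), a single Richardson step with step ratio 2 and leading error order 2 removes the O(h**2) term of a central difference:

import numpy as np

def central(f, x, h):
    # central difference; error behaves like c*h**2 + O(h**4)
    return (f(x + h) - f(x - h)) / (2*h)

d_h = central(np.exp, 1.0, 0.1)
d_h2 = central(np.exp, 1.0, 0.05)
d_extrap = (4*d_h2 - d_h) / 3    # (r**p * f(h/r) - f(h)) / (r**p - 1) with r=2, p=2
print(abs(d_h - np.exp(1.0)), abs(d_extrap - np.exp(1.0)))  # extrapolated error is far smaller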

>>> import numpy as np
>>> import numdifftools as nd
>>> n = 3
>>> Ei = np.zeros((n,1))
>>> h = np.zeros((n,1))
>>> linfun = lambda i : np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(n):
...    x = linfun(k)
...    h[k] = x[1]
...    Ei[k] = np.trapz(np.sin(x),x)
>>> En, err, step = nd.Richardson(step=1, order=1)(Ei, h)
>>> truErr = Ei-1.
>>> (truErr, err, En)
(array([[ -2.00805680e-04],
       [ -5.01999079e-05],
       [ -1.25498825e-05]]), array([[ 0.00320501]]), array([[ 1.]]))
extrapolate(sequence, steps)[source]

numdifftools.extrapolation module

Created on 28. aug. 2015

@author: pab

class numdifftools.extrapolation.Dea(limexp=3)[source]

Bases: object

LIMEXP is the maximum number of elements the epsilon table data can contain. The epsilon table is stored in the first (LIMEXP+2) entries of EPSTAB.

E0,E1,E2,E3 - DOUBLE PRECISION
The 4 elements on which the computation of a new element in the epsilon table is based.
NRES - INTEGER
Number of extrapolation results actually generated by the epsilon algorithm in prior calls to the routine.
NEWELM - INTEGER
Number of elements to be computed in the new diagonal of the epsilon table. The condensed epsilon table is computed. Only those elements needed for the computation of the next diagonal are preserved.
RES - DOUBLE PRECISION
New element in the new diagonal of the epsilon table.
ERROR - DOUBLE PRECISION
An estimate of the absolute error of RES. Routine decides whether RESULT=RES or RESULT=SVALUE by comparing ERROR with ABSERR from the previous call.
RES3LA - DOUBLE PRECISION
Vector of DIMENSION 3 containing at most the last 3 results.
class numdifftools.extrapolation.Richardson(step_ratio=2.0, step=1, order=1, num_terms=2)[source]

Bases: object

Extrapolates a sequence with Richardson's method

Suppose you have a series expansion of the form

L = f(h) + a0 * h^p_0 + a1 * h^p_1 + a2 * h^p_2 + ...

where p_i = order + step * i and f(h) -> L as h -> 0, but f(0) != L.

If we evaluate the right-hand side for different step sizes h, we can fit a polynomial to that sequence of approximations. This is exactly what this class does.

>>> import numpy as np
>>> import numdifftools as nd
>>> n = 3
>>> Ei = np.zeros((n,1))
>>> h = np.zeros((n,1))
>>> linfun = lambda i : np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(n):
...    x = linfun(k)
...    h[k] = x[1]
...    Ei[k] = np.trapz(np.sin(x),x)
>>> En, err, step = nd.Richardson(step=1, order=1)(Ei, h)
>>> truErr = Ei-1.
>>> (truErr, err, En)
(array([[ -2.00805680e-04],
       [ -5.01999079e-05],
       [ -1.25498825e-05]]), array([[ 0.00320501]]), array([[ 1.]]))
extrapolate(sequence, steps)[source]
numdifftools.extrapolation.convolve(sequence, rule, **kwds)[source]

Wrapper around scipy.ndimage.convolve1d that allows complex input

numdifftools.extrapolation.dea3(v0, v1, v2, symmetric=False)[source]

Extrapolate a slowly convergent sequence

v0, v1, v2 : array-like
3 values of a convergent sequence to extrapolate
result : array-like
extrapolated value
abserr : array-like
absolute error estimate

DEA3 attempts to extrapolate nonlinearly to a better estimate of the sequence’s limiting value, thus improving the rate of convergence. The routine is based on the epsilon algorithm of P. Wynn, see [1].

# integrate sin(x) from 0 to pi/2

>>> import numpy as np
>>> import numdifftools as nd
>>> Ei= np.zeros(3)
>>> linfun = lambda i : np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(3):
...    x = linfun(k)
...    Ei[k] = np.trapz(np.sin(x),x)
>>> [En, err] = nd.dea3(Ei[0], Ei[1], Ei[2])
>>> truErr = Ei-1.
>>> (truErr, err, En)
(array([ -2.00805680e-04,  -5.01999079e-05,  -1.25498825e-05]),
array([ 0.00020081]), array([ 1.]))

dea

[1] C. Brezinski (1977) “Acceleration de la convergence en analyse numerique”, Lecture Notes in Math., vol. 584, Springer-Verlag, New York.
numdifftools.extrapolation.test_dea()[source]
numdifftools.extrapolation.test_epsal()[source]

numdifftools.info module

Introduction to Numdifftools

Numdifftools is a suite of tools written in Python to solve automatic numerical differentiation problems in one or more variables. Finite differences are used in an adaptive manner, coupled with a Richardson extrapolation methodology to provide a maximally accurate result. The user can configure many options, such as the order of the method or of the extrapolation, and can specify whether complex, multicomplex, central, forward or backward differences are used. The methods provided are:

Derivative:
Computes the derivative of order 1 through 10 of any scalar function.
Gradient:
Computes the gradient vector of a scalar function of one or more variables.
Jacobian:
Computes the Jacobian matrix of a vector valued function of one or more variables.
Hessian:
Computes the Hessian matrix of all 2nd partial derivatives of a scalar function of one or more variables.
Hessdiag:
Computes only the diagonal elements of the Hessian matrix

All of these methods also produce error estimates on the result.
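
For example, a short sketch of retrieving such an error estimate through the full_output option described in the class documentation:

>>> import numpy as np
>>> import numdifftools as nd
>>> fd = nd.Derivative(np.exp, full_output=True)
>>> val, info = fd(1)            # info is a Results object
>>> np.allclose(val, np.exp(1))
True
>>> err = info.error_estimate    # estimated absolute error of val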

Numdifftools also provides an easy-to-use interface to derivatives calculated with AlgoPy. AlgoPy stands for Algorithmic Differentiation in Python. The purpose of AlgoPy is the evaluation of higher-order derivatives in the forward and reverse mode of Algorithmic Differentiation (AD) of functions that are implemented as Python programs.

Documentation is at: http://numdifftools.readthedocs.org/

Code and issue tracker is at https://github.com/pbrod/numdifftools.

Latest stable release is at http://pypi.python.org/pypi/Numdifftools.

To test if the toolbox is working, paste the following into an interactive Python session:

import numdifftools as nd
nd.test(coverage=True, doctests=True)

Getting Started

Compute 1’st and 2’nd derivative of exp(x), at x == 1:

>>> import numpy as np
>>> import numdifftools as nd
>>> fd = nd.Derivative(np.exp)        # 1'st derivative
>>> fdd = nd.Derivative(np.exp, n=2)  # 2'nd derivative
>>> np.allclose(fd(1), 2.7182818284590424)
True
>>> np.allclose(fdd(1), 2.7182818284590424)
True

Nonlinear least squares:

>>> xdata = np.reshape(np.arange(0,1,0.1),(-1,1))
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> Jfun = nd.Jacobian(fun)
>>> np.allclose(np.abs(Jfun([1,2,0.75])), 0) # should be numerically zero
True

Compute gradient of sum(x**2):

>>> fun = lambda x: np.sum(x**2)
>>> dfun = nd.Gradient(fun)
>>> dfun([1,2,3])
array([ 2.,  4.,  6.])

Compute the same with the easy-to-use interface to AlgoPy:

>>> import numdifftools.nd_algopy as nda
>>> import numpy as np
>>> fd = nda.Derivative(np.exp)        # 1'st derivative
>>> fdd = nda.Derivative(np.exp, n=2)  # 2'nd derivative
>>> np.allclose(fd(1), 2.7182818284590424)
True
>>> np.allclose(fdd(1), 2.7182818284590424)
True

Nonlinear least squares:

>>> xdata = np.reshape(np.arange(0,1,0.1),(-1,1))
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> Jfun = nda.Jacobian(fun, method='reverse')
>>> np.allclose(np.abs(Jfun([1,2,0.75])), 0) # should be numerically zero
True

Compute gradient of sum(x**2):

>>> fun = lambda x: np.sum(x**2)
>>> dfun = nda.Gradient(fun)
>>> dfun([1,2,3])
array([ 2.,  4.,  6.])

See also

scipy.misc.derivative

numdifftools.info.test_docstrings()[source]

numdifftools.limits module

Created on 27. aug. 2015

@author: pab
Author: John D’Errico
e-mail: woodchips@rochester.rr.com
Release: 1.0
Release date: 5/23/2008

class numdifftools.limits.Limit(f, step=None, method='above', order=4, full_output=False)[source]

Bases: object

Compute limit of a function at a given point

f : callable
function of one array f(z, *args, **kwds) to compute the limit for. The function, f, is assumed to return a result of the same shape and size as its input, z.
step: float, complex, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. Default is MinStepGenerator(base_step=step, step_ratio=4)
method : {‘above’, ‘below’}
defines if the limit is taken from above or below
order: positive scalar integer, optional.
defines the order of approximation used to find the specified limit. The order must be a member of [1, 2, 3, 4, 5, 6, 7, 8]. 4 is a good compromise.
limit_fz : array-like
estimated limit of f(z) as z --> z0
info :
Only given if full_output is True and contains the following:
error estimate : ndarray
95% uncertainty estimate around the limit, such that abs(limit_fz - lim z->z0 f(z)) < error_estimate
final_step : ndarray
final step used in approximation

Limit computes the limit of a given function at a specified point, z0. When the function can be evaluated at the point in question, this is a simple task. But when the function cannot be evaluated at that location due to a singularity, you may need a tool to compute the limit. Limit does this, and also produces an uncertainty estimate for the final result.

The method used by Limit is Richardson extrapolation in combination with Wynn’s epsilon algorithm, which also yields an error estimate. The user can specify the method order, as well as the path into z0. z0 may be real or complex. Limit uses a proportionally cascaded series of function evaluations, moving away from your point of evaluation along a path on the real line (or in the complex plane for complex z0 or step). The step_ratio is the ratio used between sequential steps. The sign of step allows you to specify a limit from above or below. Negative values of step will cause the limit to be taken approaching z0 from below.

A smaller step_ratio means that Limit will take more function evaluations to evaluate the limit, but the result will potentially be less accurate. The step_ratio MUST be a scalar larger than 1. A value in the range [2,100] is recommended. 4 seems a good compromise.

Compute the limit of sin(x)/x at x == 0. The limit is 1.
>>> import numpy as np
>>> from numdifftools.limits import Limit
>>> def f(x): return np.sin(x)/x
>>> lim_f0, err = Limit(f, full_output=True)(0)
>>> np.allclose(lim_f0, 1)
True
>>> np.allclose(err.error_estimate, 1.77249444610966e-15)
True
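
The method parameter described in the notes lets the same limit be taken from below; a minimal sketch reusing f from above:

>>> lim_below, err = Limit(f, method='below', full_output=True)(0)
>>> np.allclose(lim_below, 1)
True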

Compute the derivative of cos(x) at x == pi/2. It should be -1. The limit will be taken as a function of the differential parameter, dx.

>>> x0 = np.pi/2;
>>> def g(x): return (np.cos(x0+x)-np.cos(x0))/x
>>> lim_g0, err = Limit(g, full_output=True)(0)
>>> np.allclose(lim_g0, -1)
True
>>> err.error_estimate < 1e-14
True

Compute the residue at a first-order pole at z = 0. The function 1/(1-exp(2*z)) has a pole at z == 0. The residue is given by the limit of z*fun(z) as z --> 0. Here, that residue should be -0.5.

>>> def h(z): return -z/(np.expm1(2*z))
>>> lim_h0, err = Limit(h, full_output=True)(0)
>>> np.allclose(lim_h0, -0.5)
True
>>> err.error_estimate < 1e-14
True

A more difficult limit is one where there is significant subtractive cancellation at the limit point. In the following example, the cancellation is second order. The true limit should be 0.5.

>>> def k(x): return (x*np.exp(x)-np.exp(x)+1)/x**2
>>> lim_k0,err = Limit(k, full_output=True)(0)
>>> np.allclose(lim_k0, 0.5)
True
>>> np.allclose(err.error_estimate, 7.4e-9)
True
>>> def h(x): return  (x-np.sin(x))/x**3
>>> lim_h0, err = Limit(h, full_output=True)(0)
>>> lim_h0, err
class info(error_estimate, final_step, index)

Bases: tuple

error_estimate

Alias for field number 0

final_step

Alias for field number 1

index

Alias for field number 2

Limit.limit(x, *args, **kwds)[source]
class numdifftools.limits.MinStepGenerator(base_step=None, step_ratio=4.0, num_steps=None, offset=0, scale=1.2, use_exact_steps=True)[source]

Bases: object

Generates a sequence of steps

where steps = base_step * step_ratio ** (np.arange(num_steps) + offset)

base_step : float, array-like, optional
Defines the base step. If None, then base_step is set to EPS**(1/scale)*max(log(1+|x|), 1), where x is supplied at runtime through the __call__ method.
step_ratio : real scalar, optional, default 4
Ratio between sequential steps generated.
num_steps : scalar integer, optional, default n + order - 1 + num_extrap
defines the number of steps generated. It should be larger than n + order - 1.
offset : real scalar, optional, default 0
offset to the base step
scale : real scalar, optional
scale used in base step. If not None it will override the default computed with the default_scale function.
numdifftools.limits.nom_step(x=None)[source]

Return nominal step

numdifftools.limits.test_docstrings()[source]
numdifftools.limits.valarray(shape, value=nan, typecode=None)[source]

Return an array of the given shape filled with value.

numdifftools.multicomplex module

Created on 22. apr. 2015

@author: pab

References

Gregory Lantoine (2010), “A methodology for robust optimization of low-thrust trajectories in multi-body environments”, PhD thesis, Georgia Institute of Technology.

Gregory Lantoine, Ryan P. Russell and Thierry Dargent (2012), “Using multicomplex variables for automatic computation of high-order derivatives”, ACM Transactions on Mathematical Software, Vol. 38, No. 3, Article 16, April 2012, 21 pages.

M.E. Luna-Elizarraras, M. Shapiro, D.C. Struppa, A. Vajiac (2012), CUBO A Mathematical Journal, Vol. 14, No. 2, pp. 61-80, June 2012.

Adriaen Verheyleweghen (2014), “Computation of higher-order derivatives using the multi-complex step method”, project report, NTNU.

class numdifftools.multicomplex.bicomplex(z1, z2)[source]

Bases: object

bicomplex(z1, z2) creates an instance of a bicomplex object: zeta = z1 + j*z2, where z1 and z2 are complex numbers.
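
A minimal sketch (not from the original docstring) of using a bicomplex number for a second derivative, assuming bicomplex(z1, z2) accepts plain floats and that imag12 returns the mixed i1*i2 component; this mirrors the multicomplex-step idea behind the ‘multicomplex’ method in numdifftools.core:

import numpy as np
from numdifftools.multicomplex import bicomplex

h = 1e-6
z = bicomplex(1.0 + 1j*h, h)       # zeta = (x + i1*h) + i2*h at x = 1
d2 = z.exp().imag12 / h**2         # second derivative of exp at x = 1
print(d2, np.exp(1.0))             # the two numbers should agree closely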

arccos()[source]
arccosh()[source]
arcsin()[source]
arcsinh()[source]
arctan()[source]
arctanh()[source]
arg_c()[source]
arg_c1p()[source]
static asarray(other)[source]
conjugate()[source]
cos()[source]
cosh()[source]
cot()[source]
coth()[source]
csc()[source]
csch()[source]
dot(other)[source]
exp()[source]
exp2()[source]
expm1()[source]
flat(index)[source]
imag
imag1
imag12
imag2
log()[source]
log10()[source]
log1p()[source]
log2()[source]
logaddexp(other)[source]
logaddexp2(other)[source]
static mat2bicomp(arr)[source]
mod_c()[source]

Complex modulus

norm()[source]
real
sec()[source]
sech()[source]
shape
sin()[source]
sinh()[source]
size
sqrt()[source]
tan()[source]
tanh()[source]
numdifftools.multicomplex.c_abs(z)[source]
numdifftools.multicomplex.c_atan2(x, y)[source]
numdifftools.multicomplex.c_max(x, y)[source]
numdifftools.multicomplex.c_min(x, y)[source]
numdifftools.multicomplex.example_derivative()[source]

numdifftools.nd_algopy module

Numdifftools.nd_algopy

This module provides an easy-to-use interface to derivatives calculated with AlgoPy. AlgoPy stands for Algorithmic Differentiation in Python.

The purpose of AlgoPy is the evaluation of higher-order derivatives in the forward and reverse mode of Algorithmic Differentiation (AD) of functions that are implemented as Python programs. Particular focus is on functions that contain numerical linear algebra operations, as they often appear in statistically motivated functions. The intended use of AlgoPy is easy prototyping at reasonable execution speeds. More precisely, for a typical program a directional derivative takes on the order of 10 times as much time as the function evaluation. This is approximately also true for the gradient.

Algorithmic differentiation

Algorithmic differentiation (AD) is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

Algorithmic differentiation is not:

Symbolic differentiation, nor numerical differentiation (the method of finite differences). These classical methods run into problems: symbolic differentiation leads to inefficient code (unless carefully done) and faces the difficulty of converting a computer program into a single expression, while numerical differentiation can introduce round-off errors in the discretization process and cancellation. Both classical methods have problems with calculating higher derivatives, where the complexity and errors increase. Finally, both classical methods are slow at computing the partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Algorithmic differentiation solves all of these problems.

Reference

Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013

https://en.wikipedia.org/wiki/Automatic_differentiation

https://pythonhosted.org/algopy/index.html

class numdifftools.nd_algopy.Derivative(f, n=1, method='forward')[source]

Bases: numdifftools.nd_algopy._Common

Calculate n-th derivative with Algorithmic Differentiation method

f : function
function of one array f(x, *args, **kwds)
n : int, optional
Order of the derivative.
method : string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
der : ndarray
array of derivatives

Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013

https://en.wikipedia.org/wiki/Automatic_differentiation

# 1’st and 5’th derivative of exp(x), at x == 1

>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
>>> fd = nda.Derivative(np.exp)              # 1'st derivative
>>> np.allclose(fd(1), 2.718281828459045)
True
>>> fd5 = nda.Derivative(np.exp, n=5)         # 5'th derivative
>>> np.allclose(fd5(1), 2.718281828459045)
True

# 1’st derivative of x^3+x^4, at x = [0,1]

>>> f = lambda x: x**3 + x**4
>>> fd3 = nda.Derivative(f)
>>> np.allclose(fd3([0,1]), [ 0.,  7.])
True

Gradient, Hessdiag, Hessian, Jacobian

class numdifftools.nd_algopy.Gradient(f, method='forward')[source]

Bases: numdifftools.nd_algopy._Common

Calculate Gradient with Algorithmic Differentiation method

f : function
function of one array f(x, *args, **kwds)
method : string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
grad : array
gradient

Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013

https://en.wikipedia.org/wiki/Automatic_differentiation

>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
>>> f = lambda x: np.sum(x**2)
>>> df = nda.Gradient(f, method='reverse')
>>> df([1,2,3])
array([ 2.,  4.,  6.])

# At [x,y] = [1,1], compute the numerical gradient of the function sin(x-y) + y*exp(x)

>>> sin = np.sin; exp = np.exp
>>> z = lambda xy: sin(xy[0]-xy[1]) + xy[1]*exp(xy[0])
>>> dz = nda.Gradient(z)
>>> grad2 = dz([1, 1])
>>> grad2
array([ 3.71828183,  1.71828183])

# At the global minimizer (1,1) of the Rosenbrock function, compute the gradient. It should be essentially zero.

>>> rosen = lambda x : (1-x[0])**2 + 105.*(x[1]-x[0]**2)**2
>>> rd = nda.Gradient(rosen)
>>> grad3 = rd([1,1])
>>> grad3==np.array([ 0.,  0.])
array([ True,  True], dtype=bool)

Derivative Jacobian, Hessdiag, Hessian,

class numdifftools.nd_algopy.Hessdiag(f, method='forward')[source]

Bases: numdifftools.nd_algopy.Hessian

Calculate Hessian diagonal with Algorithmic Differentiation method

f : function
function of one array f(x, *args, **kwds)
method : string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
hessdiag : ndarray
Hessian diagonal array of partial second order derivatives.

Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013

https://en.wikipedia.org/wiki/Automatic_differentiation

>>> import numpy as np
>>> import numdifftools.nd_algopy as nda

# Rosenbrock function, minimized at [1,1]

>>> rosen = lambda x : (1.-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> Hfun = nda.Hessdiag(rosen)
>>> h = Hfun([1, 1]) #  h =[ 842, 210]
>>> h
array([ 842.,  210.])

# cos(x-y), at (0,0)

>>> cos = np.cos
>>> f = lambda xy : cos(xy[0]-xy[1])
>>> Hfun2 = nda.Hessdiag(f)
>>> h2 = Hfun2([0, 0]) # h2 = [-1, -1]
>>> h2
array([-1., -1.])
>>> Hfun3 = nda.Hessdiag(f, method='reverse')
>>> h3 = Hfun3([0, 0]) # h3 = [-1, -1]
>>> h3
array([-1., -1.])

Derivative Gradient, Jacobian, Hessian,

class numdifftools.nd_algopy.Hessian(f, method='forward')[source]

Bases: numdifftools.nd_algopy._Common

Calculate Hessian with Algorithmic Differentiation method

f : function
function of one array f(x, *args, **kwds)
method : string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
hess : ndarray
array of partial second derivatives, Hessian

Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013

https://en.wikipedia.org/wiki/Automatic_differentiation

>>> import numpy as np
>>> import numdifftools.nd_algopy as nda

# Rosenbrock function, minimized at [1,1]

>>> rosen = lambda x : (1.-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> Hf = nda.Hessian(rosen)
>>> h = Hf([1, 1]) #  h =[ 842 -420; -420, 210];
>>> h
array([[ 842., -420.],
       [-420.,  210.]])

# cos(x-y), at (0,0)

>>> cos = np.cos
>>> f = lambda xy : cos(xy[0]-xy[1])
>>> Hfun2 = nda.Hessian(f)
>>> h2 = Hfun2([0, 0]) # h2 = [-1 1; 1 -1]
>>> h2
array([[-1.,  1.],
       [ 1., -1.]])
>>> Hfun3 = nda.Hessian(f, method='reverse')
>>> h3 = Hfun3([0, 0]) # h3 = [-1, 1; 1, -1]
>>> h3
array([[-1.,  1.],
       [ 1., -1.]])

Derivative Gradient, Jacobian, Hessdiag,

class numdifftools.nd_algopy.Jacobian(f, method='forward')[source]

Bases: numdifftools.nd_algopy._Common

Calculate Jacobian with Algorithmic Differentiation method

f : function
function of one array f(x, *args, **kwds)
method : string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
jacob : array
Jacobian

Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013

https://en.wikipedia.org/wiki/Automatic_differentiation

>>> import numpy as np
>>> import numdifftools.nd_algopy as nda

#(nonlinear least squares)

>>> xdata = np.reshape(np.arange(0,1,0.1),(-1,1))
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> f = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2

Jfun = nda.Jacobian(f)   # Todo: This does not work
Jfun([1,2,0.75])         # should be numerically zero
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
>>> Jfun2 = nda.Jacobian(f, method='reverse')
>>> Jfun2([1,2,0.75]).T # should be numerically zero
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
>>> f2 = lambda x : x[0]*x[1]*x[2] + np.exp(x[0])*x[1]
>>> Jfun3 = nda.Jacobian(f2)
>>> Jfun3([3.,5.,7.])
array([[ 135.42768462,   41.08553692,   15.        ]])
>>> Jfun4 = nda.Jacobian(f2, method='reverse')
>>> Jfun4([3,5,7])
array([[ 135.42768462,   41.08553692,   15.        ]])

Derivative Gradient, Hessdiag, Hessian,

numdifftools.nd_algopy.hessian_forward()[source]
numdifftools.nd_algopy.test_docstrings()[source]

numdifftools.run_benchmark module

class numdifftools.run_benchmark.BenchmarkFunction(N)[source]

Bases: object

numdifftools.run_benchmark.compute_gradients(gradient_funs, problem_sizes)[source]
numdifftools.run_benchmark.compute_hessians(hessian_funs, problem_sizes)[source]
numdifftools.run_benchmark.loglimits(data, border=0.05)[source]
numdifftools.run_benchmark.plot_errors(error_objects, problem_sizes, symbols)[source]
numdifftools.run_benchmark.plot_runtimes(run_time_objects, problem_sizes, symbols)[source]

numdifftools.test_functions module

Created on 17. mai 2015

@author: pab

numdifftools.test_functions.dcos(x)[source]
numdifftools.test_functions.ddcos(x)[source]
numdifftools.test_functions.get_function(fun_name, n=1)[source]

Module contents

Introduction to Numdifftools

Numdifftools is a suite of tools written in Python to solve automatic numerical differentiation problems in one or more variables. Finite differences are used in an adaptive manner, coupled with a Richardson extrapolation methodology to provide a maximally accurate result. The user can configure many options, such as the order of the method or of the extrapolation, and can specify whether complex, multicomplex, central, forward or backward differences are used. The methods provided are:

Derivative:
Computes the derivative of order 1 through 10 of any scalar function.
Gradient:
Computes the gradient vector of a scalar function of one or more variables.
Jacobian:
Computes the Jacobian matrix of a vector valued function of one or more variables.
Hessian:
Computes the Hessian matrix of all 2nd partial derivatives of a scalar function of one or more variables.
Hessdiag:
Computes only the diagonal elements of the Hessian matrix

All of these methods also produce error estimates on the result.

Numdifftools also provides an easy-to-use interface to derivatives calculated with AlgoPy. AlgoPy stands for Algorithmic Differentiation in Python. The purpose of AlgoPy is the evaluation of higher-order derivatives in the forward and reverse mode of Algorithmic Differentiation (AD) of functions that are implemented as Python programs.

Documentation is at: http://numdifftools.readthedocs.org/

Code and issue tracker is at https://github.com/pbrod/numdifftools.

Latest stable release is at http://pypi.python.org/pypi/Numdifftools.

To test if the toolbox is working, paste the following into an interactive Python session:

import numdifftools as nd
nd.test(coverage=True, doctests=True)

Getting Started

Compute 1’st and 2’nd derivative of exp(x), at x == 1:

>>> import numpy as np
>>> import numdifftools as nd
>>> fd = nd.Derivative(np.exp)        # 1'st derivative
>>> fdd = nd.Derivative(np.exp, n=2)  # 2'nd derivative
>>> np.allclose(fd(1), 2.7182818284590424)
True
>>> np.allclose(fdd(1), 2.7182818284590424)
True

Nonlinear least squares:

>>> xdata = np.reshape(np.arange(0,1,0.1),(-1,1))
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> Jfun = nd.Jacobian(fun)
>>> np.allclose(np.abs(Jfun([1,2,0.75])), 0) # should be numerically zero
True

Compute gradient of sum(x**2):

>>> fun = lambda x: np.sum(x**2)
>>> dfun = nd.Gradient(fun)
>>> dfun([1,2,3])
array([ 2.,  4.,  6.])

Compute the same with the easy-to-use interface to AlgoPy:

>>> import numdifftools.nd_algopy as nda
>>> import numpy as np
>>> fd = nda.Derivative(np.exp)        # 1'st derivative
>>> fdd = nda.Derivative(np.exp, n=2)  # 2'nd derivative
>>> np.allclose(fd(1), 2.7182818284590424)
True
>>> np.allclose(fdd(1), 2.7182818284590424)
True

Nonlinear least squares:

>>> xdata = np.reshape(np.arange(0,1,0.1),(-1,1))
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> Jfun = nda.Jacobian(fun, method='reverse')
>>> np.allclose(np.abs(Jfun([1,2,0.75])), 0) # should be numerically zero
True

Compute gradient of sum(x**2):

>>> fun = lambda x: np.sum(x**2)
>>> dfun = nda.Gradient(fun)
>>> dfun([1,2,3])
array([ 2.,  4.,  6.])

See also

scipy.misc.derivative