Model Properties

Todo

Populate these with more text/examples

harold.minimal_realization(A, B, C, mu_tol=1e-09)

Given state matrices \(A,B,C\), computes minimal state matrices such that the system is controllable and observable within the given tolerance \(\mu\).

Implements a basic two-pass algorithm: first, the distance to mode cancellation is computed; then the Hessenberg (staircase) form is obtained together with the identified observable/controllable block numbers. If the staircase form reports that there are no cancellations but the distance is less than the tolerance, the distance wins and the respective mode is removed.

Uses cancellation_distance() and staircase() for the tests.

Parameters:
  • A,B,C ({(n,n), (n,m), (p,n)} array_like) – System matrices to be checked for minimality
  • mu_tol (float) – The sensitivity threshold for the cancellation, compared with the first output of the cancellation_distance() function. The default value is \(10^{-9}\)
Returns:

A,B,C – System matrices identified as minimal, with \(k\) states instead of the original \(n\), where \(k \leq n\)

Return type:

{(k,k), (k,m), (p,k)} array_like
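
A usage sketch (the numerical values below are illustrative, not taken from the documentation): in this diagonal system the second state is unobservable and the third is uncontrollable, so a one-state realization is expected.

    import numpy as np
    import harold

    A = np.diag([-1., -2., -3.])
    B = np.array([[1.], [1.], [0.]])   # third state is unreachable
    C = np.array([[1., 0., 1.]])       # second state is unobservable

    Am, Bm, Cm = harold.minimal_realization(A, B, C)  # default mu_tol=1e-9
    # Am, Bm, Cm should describe a single-state system equivalent to the
    # transfer function 1/(s+1).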

harold.transmission_zeros(A, B, C, D)

Computes the transmission zeros of an (A,B,C,D) system matrix quartet.

Note

This is a straightforward implementation of the algorithm of Misra, Van Dooren, Varga 1994, but skipping the descriptor matrix, which in turn reduces it to Emami-Naeini, Van Dooren 1979. I don't know whether anyone actually uses descriptor systems in practice, so the descriptor parts were removed to reduce the clutter. Hence, it is possible to directly row/column compress the matrices without caring about the upper Hessenberg structure of the E matrix.

Parameters:
  • A,B,C,D ({(n,n), (n,m), (p,n), (p,m)} 2D Numpy arrays) –
Returns:

z – The array of computed transmission zeros. The array is returned empty if no transmission zeros are found.

Return type:

1D Numpy array
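
A usage sketch with hand-picked numbers (not from the documentation); the quartet below realizes \((s+3)/((s+1)(s+4))\), so a single transmission zero near \(-3\) is expected.

    import numpy as np
    import harold

    A = np.array([[0., 1.], [-4., -5.]])
    B = np.array([[0.], [1.]])
    C = np.array([[3., 1.]])
    D = np.array([[0.]])

    z = harold.transmission_zeros(A, B, C, D)
    # z should contain a value close to -3; an empty array is returned when
    # there are no transmission zeros.
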
harold.staircase(A, B, C, compute_T=False, form='c', invert=False, block_indices=False)

The staircase form is used very often to assess system properties. Given a state system matrix triplet A,B,C, this function computes the so-called controller/observer-Hessenberg form such that the resulting system matrices have the block-form (x denoting the nonzero blocks)

\[\begin{split}\begin{array}{c|c} \begin{bmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & \times \end{bmatrix} & \begin{bmatrix} \times \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \\ \hline \begin{bmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \end{bmatrix} \end{array}\end{split}\]

For controllability and observability, the existence of zero-rank subdiagonal blocks can be checked, as opposed to forming the Kalman matrix and checking its rank. The staircase method can be numerically more stable, since for certain matrices the \(A^n\) computations can introduce large errors (for some \(A\) whose entries vary over several orders of magnitude). But it is also prone to numerical rank-guessing mismatches.

Notice that if we use the pertransposed data, we obtain the observer form; the user is usually asked to supply the data as \(A,B,C \Rightarrow A^T,C^T,B^T\) and then transpose back the result. It is just silly to ask the user to do that. Hence the additional form option denotes whether the observer or the controller form is requested.

Parameters:
  • A,B,C ({(n,n),(n,m),(p,n)} array_like) – System Matrices to be converted
  • compute_T (bool, optional) – Whether the transformation matrix T should be computed or not
  • form ({ 'c' , 'o' }, optional) – Determines whether the controller- or observer-Hessenberg form will be computed.
  • invert (bool, optional) – Selects from which side the B or C matrix will be compressed. For example, the default case returns the B matrix with (if any) zero rows at the bottom. The invert option flips this choice in either the B or C matrix, depending on the form switch.
  • block_indices (bool, optional) – Whether the identified controllable/observable block sizes should also be returned.
Returns:

  • Ah,Bh,Ch ({(n,n),(n,m),(p,n)} 2D numpy arrays) – Converted system matrices

  • T ((n,n) 2D numpy array) – If the boolean compute_T is true, returns the transformation matrix such that

    \[\begin{split}\left[\begin{array}{c|c} T^{-1}AT &T^{-1}B \\ \hline CT & D \end{array}\right]\end{split}\]

    is in the desired staircase form.

  • k (Numpy array) – If the boolean block_indices is true, returns the array of controllable/observable block sizes identified during block diagonalization
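
A usage sketch (values chosen here for illustration); the unpacking below assumes the optional outputs are appended in the order given in the Returns list.

    import numpy as np
    import harold

    A = np.array([[-1., 2., 0.],
                  [ 0., -3., 1.],
                  [ 1., 0., -2.]])
    B = np.array([[1.], [0.], [0.]])
    C = np.array([[0., 1., 0.]])

    Ah, Bh, Ch, T, k = harold.staircase(A, B, C, compute_T=True,
                                        block_indices=True)
    # Ah, Bh, Ch are the controller-Hessenberg form (form='c' is the default),
    # T is returned because compute_T=True, and k lists the identified
    # controllable block sizes. A zero subdiagonal block in Ah would signal an
    # uncontrollable part.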

harold.system_norm(state_or_transfer, p=inf, validate=False, verbose=False, max_iter_limit=100, hinf_tolerance=1e-10, eig_tolerance=1e-12)

Computes the system p-norm. Currently, no balancing is done on the system; however, a scaling of some sort will be introduced in the future. Another shortcoming is that, despite the general-sounding name, only the \(\mathcal{H}_2\) and \(\mathcal{H}_\infty\) norms are understood.

For the \(\mathcal{H}_2\) norm, the standard Gramian definition via the controllability Gramian, which can be found elsewhere, is used.

Currently, the \(\mathcal{H}_\infty\) norm is computed via the so-called Boyd-Balakrishnan-Bruinsma-Steinbuch algorithm (see e.g. [2]).

However (with the kind and generous help of Melina Freitag), the algorithm given in [1] is being implemented and, depending on the speed benefit, might replace it as the default.

[1] M.A. Freitag, A. Spence, P. Van Dooren: Calculating the \(\mathcal{H}_\infty\)-norm using the implicit determinant method. SIAM J. Matrix Anal. Appl., 35(2), 619-635, 2014

[2] N.A. Bruinsma, M. Steinbuch: A fast algorithm to compute the \(\mathcal{H}_\infty\)-norm of a transfer function matrix. Systems & Control Letters, 14, 1990

Parameters:
  • state_or_transfer ({State,Transfer}) – System for which the norm is computed
  • p ({int, np.inf}) – The norm to be computed; currently only 2 (for \(\mathcal{H}_2\)) and np.inf (for \(\mathcal{H}_\infty\), the default) are understood.
  • validate (boolean) – If applicable and if the resulting norm is finite, the result is validated via other means.
  • verbose (boolean) – If True, some of the internal progress is printed out.
  • max_iter_limit (int) – Stops the iteration after the loop has run max_iter_limit times. Very unlikely to be needed, but might be required for pathological examples.
  • hinf_tolerance (float) – Convergence threshold; when the progress is below this tolerance the result is accepted as converged.
  • eig_tolerance (float) – The algorithm relies on checking whether the eigenvalues of the Hamiltonian lie on the imaginary axis. Eigenvalues whose absolute real part is smaller than this value are accepted as purely imaginary.
Returns:

  • n (float) – Computed norm. In NumPy, infinity is also float-type
  • omega (float) – For Hinf norm, omega is the frequency where the maximum is attained (technically this is a numerical approximation of the supremum).
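
A usage sketch, assuming a model is built as harold.State(A, B, C, D) (the matrices below are illustrative).

    import numpy as np
    import harold

    G = harold.State(np.array([[-1., 0.], [0., -2.]]),
                     np.array([[1.], [1.]]),
                     np.array([[1., 1.]]),
                     np.array([[0.]]))

    h2 = harold.system_norm(G, p=2)          # H2 norm
    hinf = harold.system_norm(G, p=np.inf)   # Hinf norm; per the Returns
                                             # section, the peak frequency
                                             # omega is also reported for
                                             # this case.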

Auxiliary Functions

These are functions that were either not available or missing a certain functionality in their native form; hence the harold prefix. Over time, as scipy (hopefully) keeps adding functionality, they will become obsolete.

harold.haroldsvd(D, also_rank=False, rank_tol=None)

This is a wrapper/container function for both the SVD decomposition and the rank computation. Since the regular rank computation is itself implemented via the SVD, it doesn't make much sense to recompute the SVD when we already have it at hand. Thus, instead of typing two commands back to back for the SVD and the rank, both are returned. To reduce the clutter, the rank information is suppressed by default.

NumPy's svd is a bit strange because it compresses the S matrix and loses its structure. The manual advises using u.dot(np.diag(s).dot(v)) to recover the original matrix, but that won't work for rectangular matrices. Hence this function recreates the rectangular S matrix of the U,S,V triplet.

Parameters:
  • D ((m,n) array_like) – Matrix to be decomposed
  • also_rank (bool, optional) – Whether the rank of the matrix should also be reported or not. The returned rank is computed via the definition taken from the official numpy.linalg.matrix_rank and appended here.
  • rank_tol ({None,float} optional) – The tolerance used for deciding the numerical rank. The default is set to None and uses the default definition of matrix_rank() from numpy.
Returns:

  • U,S,V ({(m,m),(m,n),(n,n)} 2D numpy arrays) – Decomposed-form matrices
  • r (integer) – If the boolean “also_rank” is true, this variable is the numerical rank of the matrix D
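
A short sketch with an illustrative rank-1 matrix; since S comes back rectangular, the plain triple product reproduces the input.

    import numpy as np
    import harold

    D = np.array([[1., 2., 3.],
                  [2., 4., 6.]])              # rank-1, 2x3 matrix
    U, S, V, r = harold.haroldsvd(D, also_rank=True)
    # S is 2x3, so U @ S @ V should reproduce D, and r should report
    # the numerical rank, here 1.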

harold.haroldker(N, side='right')

This function is a straightforward basis computation for the right/left nullspace for rank deficient or fat/tall matrices.

It simply returns the remaining columns of the right factor of the singular value decomposition whenever applicable. Otherwise it returns a zero vector with the same number of rows as the argument has columns, so that the dot product makes sense.

The basis columns have unity 2-norm except for the trivial zeros.

Parameters:
  • N ((m,n) array_like) – Matrix for which the nullspace basis to be computed
  • side ({'right','left'} string) – The switch for the right or left nullspace computation.
Returns:

Nn – Basis for the nullspace. dim is the dimension of the nullspace. If the nullspace is trivial then dim is 1 for consistent 2D array output

Return type:

(n,dim) array_like
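
A usage sketch with an illustrative rank-deficient matrix.

    import numpy as np
    import harold

    N = np.array([[1., 2., 3.],
                  [2., 4., 6.]])          # rank 1, so the right nullspace is 2D
    Nn = harold.haroldker(N)              # right nullspace basis, expected (3,2)
    # N @ Nn should be numerically zero and each basis column should have
    # unit 2-norm.
    Nl = harold.haroldker(N, side='left') # left nullspace basis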

harold.matrix_slice(M, M11shape)

Takes a two dimensional array of size \(p\times m\) and slices into four parts such that

\[\begin{split}\left[\begin{array}{c|c}A&B\\ \hline C&D\end{array}\right]\end{split}\]

where the shape of \(A\) is determined by M11shape=(r,q).

Parameters:
  • M (2D Numpy array) –
  • M11shape (tuple) – An integer valued 2-tuple for the shape of \((1,1)\) block element
Returns:

  • A (2D Numpy array) – The upper left \(r\times q\) block of \(M\)
  • B (2D Numpy array) – The upper right \(r\times (m-q)\) block of \(M\)
  • C (2D Numpy array) – The lower left \((p-r)\times q\) block of \(M\)
  • D (2D Numpy array) – The lower right \((p-r)\times (m-q)\) block of \(M\)
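
For example, slicing an illustrative \(3\times 4\) matrix with M11shape=(2, 3):

    import numpy as np
    import harold

    M = np.arange(12.).reshape(3, 4)
    A, B, C, D = harold.matrix_slice(M, (2, 3))
    # A is the upper-left 2x3 block, B is 2x1, C is 1x3 and D is 1x1,
    # matching the block sizes listed above.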

harold.e_i(width, nth=0)

Returns the nth column of the identity matrix with shape (width,width). Slicing is permitted with the nth parameter.
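
A small sketch (whether the result is a 1D or a 2D column array is not specified above, so it is not asserted here).

    import harold

    v = harold.e_i(3, 1)
    # v holds the second column (index 1) of the 3x3 identity matrix,
    # i.e. a unit vector with a 1 in position 1 and zeros elsewhere.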

harold.haroldtrimleftzeros(somearray)

Trims the insignificant zeros in an array on the left hand side, e.g., [0,0,2,3,1,0] becomes [2,3,1,0].

Parameters:
  • somearray (1D Numpy array) –
Returns:

anotherarray

Return type:

1D Numpy array
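
Following the example in the description:

    import numpy as np
    import harold

    harold.haroldtrimleftzeros(np.array([0, 0, 2, 3, 1, 0]))
    # expected to return the trimmed array [2, 3, 1, 0]
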
harold.haroldcompanion(somearray)

Takes a 1D numpy array or list and returns the companion matrix of the monic polynomial of somearray. Hence [0.5,1,2] will be first converted to [1,2,4].

Example:

>>> haroldcompanion([2,4,6])
    array([[ 0.,  1.],
           [-3., -2.]])

>>> haroldcompanion([1,3])
    array([[-3.]])

>>> haroldcompanion([1])
    array([], dtype=float64)

The following function is particularly useful if there is a significant amount of copy/pasting to MATLAB. When used inside a print call, this small function prints a NumPy array in MATLAB syntax.

harold.matrix_printer(M, num_format='f', var_name=None)

This is a simple helper function for quickly carrying matrix data over to MATLAB. The array is assumed to be 1D or 2D. If the array is 3D or higher, use a slice in a for loop with explicit naming, since MATLAB does not have a multidimensional entry syntax.

The var_name can be used in a loop for such purposes e.g.:

matrix_printer(M, var_name='A(:,:,2)')

which will include the string A(:,:,2) = at the beginning of the output string.

This function is inspired by Lee Archer’s gist

Parameters:
  • M (Numpy array) – The numpy array that is going to be pretty printed in matlab format.
  • num_format (str) – Format specifier for the entries (see the Python documentation on format string syntax)
  • var_name (str) – The name of the variable to be created in matlab.
Returns:

mat – Resulting string

Return type:

str
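
A usage sketch (the exact spacing of the printed entries depends on num_format and is not reproduced here).

    import numpy as np
    import harold

    M = np.array([[1., 2.],
                  [3., 4.]])
    print(harold.matrix_printer(M, var_name='A'))
    # matrix_printer returns the resulting string, printed here; per the
    # description above it starts with "A = " followed by the entries of M
    # in MATLAB bracket syntax.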