sporco.admm.cbpdntv

Classes for ADMM algorithms for convolutional sparse coding with Total Variation regularisation terms

Classes

ConvBPDNScalarTV(*args, **kwargs)

ADMM algorithm for an extension of Convolutional BPDN including terms penalising the total variation of each coefficient map [58].

ConvBPDNVectorTV(*args, **kwargs)

ADMM algorithm for an extension of Convolutional BPDN including a term penalising the vector total variation of the coefficient maps [58].

ConvBPDNRecTV(*args, **kwargs)

ADMM algorithm for an extension of Convolutional BPDN including terms penalising the total variation of the reconstruction from the sparse representation [58].


Class Descriptions

class sporco.admm.cbpdntv.ConvBPDNScalarTV(*args, **kwargs)[source]

Bases: sporco.admm.admm.ADMM

ADMM algorithm for an extension of Convolutional BPDN including terms penalising the total variation of each coefficient map [58].


Inheritance diagram of ConvBPDNScalarTV


Solve the optimisation problem

\[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \mu \sum_m \left\| \sqrt{\sum_i (G_i \mathbf{x}_m)^2} \right\|_1 \;\;,\]

where \(G_i\) is an operator computing the derivative along index \(i\), via the ADMM problem

\[\begin{split}\mathrm{argmin}_\mathbf{x} \; (1/2) \left\| D \mathbf{x} - \mathbf{s} \right\|_2^2 + \lambda \| \mathbf{y}_L \|_1 + \mu \sum_m \left\| \sqrt{\sum_{i=0}^{L-1} \mathbf{y}_i^2} \right\|_1 \quad \text{ such that } \quad \left( \begin{array}{c} \Gamma_0 \\ \Gamma_1 \\ \vdots \\ I \end{array} \right) \mathbf{x} = \left( \begin{array}{c} \mathbf{y}_0 \\ \mathbf{y}_1 \\ \vdots \\ \mathbf{y}_L \end{array} \right) \;\;,\end{split}\]

where

\[\begin{split}D = \left( \begin{array}{ccc} D_0 & D_1 & \ldots \end{array} \right) \qquad \mathbf{x} = \left( \begin{array}{c} \mathbf{x}_0 \\ \mathbf{x}_1 \\ \vdots \end{array} \right) \qquad \Gamma_i = \left( \begin{array}{ccc} G_i & 0 & \ldots \\ 0 & G_i & \ldots \\ \vdots & \vdots & \ddots \end{array} \right) \;\;.\end{split}\]

For multi-channel signals with a single-channel dictionary, scalar TV is applied independently to each coefficient map for channel \(c\) and filter \(m\). Since multi-channel signals with a multi-channel dictionary also have one coefficient map per filter, the behaviour is the same as for single-channel signals.

After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The fields of the named tuple IterationStats are:

Iter : Iteration number

ObjFun : Objective function value

DFid : Value of data fidelity term \((1/2) \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \|_2^2\)

RegL1 : Value of regularisation term \(\sum_m \| \mathbf{x}_m \|_1\)

RegTV : Value of regularisation term \(\sum_m \left\| \sqrt{\sum_i (G_i \mathbf{x}_m)^2} \right\|_1\)

PrimalRsdl : Norm of primal residual

DualRsdl : Norm of dual residual

EpsPrimal : Primal residual stopping tolerance \(\epsilon_{\mathrm{pri}}\)

EpsDual : Dual residual stopping tolerance \(\epsilon_{\mathrm{dua}}\)

Rho : Penalty parameter

XSlvRelRes : Relative residual of X step solver

Time : Cumulative run time


Call graph

[call graph image: cbpdnstv_init.svg]

Parameters
D : array_like

Dictionary matrix

S : array_like

Signal vector or matrix

lmbda : float

Regularisation parameter (l1)

mu : float

Regularisation parameter (gradient)

opt : ConvBPDNScalarTV.Options object

Algorithm options

dimK : 0, 1, or None, optional (default None)

Number of dimensions in input signal corresponding to multiple independent signals

dimN : int, optional (default 2)

Number of spatial dimensions
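
The following is a minimal usage sketch (not part of the class docstring): it constructs a ConvBPDNScalarTV solver on random data and runs it. The array shapes, regularisation values, and option settings are illustrative assumptions rather than recommended choices.

    import numpy as np
    from sporco.admm import cbpdntv

    # Illustrative data: a random 8x8x32 dictionary and a random 256x256
    # "image"; in practice D would be a learned dictionary and S a
    # highpass-filtered signal.
    np.random.seed(0)
    D = np.random.randn(8, 8, 32)
    S = np.random.randn(256, 256)

    lmbda = 1e-2     # l1 regularisation parameter
    mu = 1e-2        # scalar TV regularisation parameter

    # Option names are those of the parent admm.cbpdn.ConvBPDN.Options class.
    opt = cbpdntv.ConvBPDNScalarTV.Options(
        {'Verbose': True, 'MaxMainIter': 200, 'RelStopTol': 5e-3})

    b = cbpdntv.ConvBPDNScalarTV(D, S, lmbda, mu, opt=opt, dimK=None, dimN=2)
    X = b.solve()                    # coefficient maps
    its = b.getitstat()              # per-iteration statistics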

class Options(opt=None)[source]

Bases: sporco.admm.cbpdn.ConvBPDN.Options

ConvBPDNScalarTV algorithm options

Options include all of those defined in admm.cbpdn.ConvBPDN.Options, together with additional options:

TVWeight : An array of weights \(w_m\) for the term penalising the gradient of the coefficient maps. If this option is defined, the regularisation term is \(\sum_m w_m \left\| \sqrt{\sum_i (G_i \mathbf{x}_m)^2} \right\|_1\) where \(w_m\) is the weight for filter index \(m\). The array should be an \(M\)-vector where \(M\) is the number of filters in the dictionary.

Parameters
opt : dict or None, optional (default None)

ConvBPDNScalarTV algorithm options
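
A short sketch of the TVWeight option described above; the number of filters M and the chosen weight values are assumptions for illustration only.

    import numpy as np
    from sporco.admm import cbpdntv

    M = 32                            # assumed number of dictionary filters
    wgt = np.ones(M)
    wgt[M // 2:] = 0.5                # weaker TV penalty on the second half

    opt = cbpdntv.ConvBPDNScalarTV.Options(
        {'MaxMainIter': 100, 'TVWeight': wgt})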

setdict(D=None)[source]

Set dictionary array.

rhochange()[source]

Update cached c array when rho changes.

xstep()[source]

Minimise Augmented Lagrangian with respect to \(\mathbf{x}\).

ystep()[source]

Minimise Augmented Lagrangian with respect to \(\mathbf{y}\).

obfn_fvarf()[source]

Variable to be evaluated in computing data fidelity term, depending on fEvalX option value.

var_y0()[source]

Get \(\mathbf{y}_0\) variable, consisting of all blocks of \(\mathbf{y}\) corresponding to a gradient operator.

var_y1()[source]

Get \(\mathbf{y}_1\) variable, the block of \(\mathbf{y}\) corresponding to the identity operator.

var_yx()[source]

Get component block of \(\mathbf{y}\) that is constrained to be equal to \(\mathbf{x}\).

var_yx_idx()[source]

Get index expression for component block of \(\mathbf{y}\) that is constrained to be equal to \(\mathbf{x}\).

getmin()[source]

Get minimiser after optimisation.

getcoef()[source]

Get final coefficient array.

obfn_g0var()[source]

Variable to be evaluated in computing the TV regularisation term, depending on the gEvalY option value.

obfn_g1var()[source]

Variable to be evaluated in computing the \(\ell_1\) regularisation term, depending on the gEvalY option value.

obfn_gvar()[source]

Method providing compatibility with the interface of admm.cbpdn.ConvBPDN and derived classes in order to make this class compatible with classes such as AddMaskSim.
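
Because of this interface compatibility, the class can be wrapped by admm.cbpdn.AddMaskSim for mask decoupling problems such as inpainting. The fragment below is a hedged sketch of that pattern; the mask W, the data arrays, and the parameter values are illustrative assumptions.

    import numpy as np
    from sporco.admm import cbpdn, cbpdntv

    D = np.random.randn(8, 8, 32)     # illustrative dictionary
    S = np.random.randn(256, 256)     # illustrative corrupted image
    # Mask of known pixels (1 = known, 0 = missing)
    W = (np.random.rand(256, 256) > 0.2).astype(np.float32)

    lmbda, mu = 2e-2, 1e-2
    opt = cbpdntv.ConvBPDNScalarTV.Options({'Verbose': True, 'MaxMainIter': 100})

    # AddMaskSim wraps the solver class, handling the spatial mask W.
    ams = cbpdn.AddMaskSim(cbpdntv.ConvBPDNScalarTV, D, S, W, lmbda, mu, opt=opt)
    X = ams.solve()
    Sr = ams.reconstruct().squeeze()  # estimate including masked pixels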

eval_objfn()[source]

Compute components of objective function as well as total contribution to objective function.

obfn_dfd()[source]

Compute data fidelity term \((1/2) \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \|_2^2\).

obfn_reg()[source]

Compute regularisation term and contribution to objective function.

itstat_extra()[source]

Non-standard entries for the iteration stats record tuple.

cnst_A0(X, Xf=None)[source]

Compute \(A_0 \mathbf{x}\) component of ADMM problem constraint. In this case \(A_0 \mathbf{x} = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots )^T \mathbf{x}\).

cnst_A0T(X)[source]

Compute \(A_0^T \mathbf{x}\) where \(A_0 \mathbf{x}\) is a component of the ADMM problem constraint. In this case \(A_0^T \mathbf{x} = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots ) \mathbf{x}\).

cnst_A1(X)[source]

Compute \(A_1 \mathbf{x}\) component of ADMM problem constraint. In this case \(A_1 \mathbf{x} = \mathbf{x}\).

cnst_A1T(X)[source]

Compute \(A_1^T \mathbf{x}\) where \(A_1 \mathbf{x}\) is a component of the ADMM problem constraint. In this case \(A_1^T \mathbf{x} = \mathbf{x}\).

cnst_A(X, Xf=None)[source]

Compute \(A \mathbf{x}\) component of ADMM problem constraint. In this case \(A \mathbf{x} = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots \;\; I)^T \mathbf{x}\).

cnst_AT(X)[source]

Compute \(A^T \mathbf{x}\) where \(A \mathbf{x}\) is a component of ADMM problem constraint. In this case \(A^T \mathbf{x} = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots \;\; I) \mathbf{x}\).

cnst_B(Y)[source]

Compute \(B \mathbf{y}\) component of ADMM problem constraint. In this case \(B \mathbf{y} = -\mathbf{y}\).

cnst_c()[source]

Compute constant component \(\mathbf{c}\) of ADMM problem constraint. In this case \(\mathbf{c} = \mathbf{0}\).

relax_AX()[source]

Implement relaxation if option RelaxParam != 1.0.

reconstruct(X=None)[source]

Reconstruct representation.
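
For example, a short sketch continuing the hypothetical solver instance b from the usage example above:

    # Reconstruct the signal estimate from the computed coefficient maps.
    Srec = b.reconstruct().squeeze()
    # If S was the highpass component of a pre-filtered image, the final
    # estimate is obtained by adding back the corresponding lowpass component.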

solve()

Call graph

[call graph image: cbpdnstv_solve.svg]
class sporco.admm.cbpdntv.ConvBPDNVectorTV(*args, **kwargs)[source]

Bases: sporco.admm.cbpdntv.ConvBPDNScalarTV

ADMM algorithm for an extension of Convolutional BPDN including a term penalising the vector total variation of the coefficient maps [58].


Inheritance diagram of ConvBPDNVectorTV


Solve the optimisation problem

\[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \mu \left\| \sqrt{\sum_m \sum_i (G_i \mathbf{x}_m)^2} \right\|_1 \;\;,\]

where \(G_i\) is an operator computing the derivative along index \(i\), via the ADMM problem

\[\begin{split}\mathrm{argmin}_\mathbf{x} \; (1/2) \left\| D \mathbf{x} - \mathbf{s} \right\|_2^2 + \lambda \| \mathbf{y}_L \|_1 + \mu \left\| \sqrt{\sum_{i=0}^{L-1} I_B \mathbf{y}_i^2} \right\|_1 \quad \text{ such that } \quad \left( \begin{array}{c} \Gamma_0 \\ \Gamma_1 \\ \vdots \\ I \end{array} \right) \mathbf{x} = \left( \begin{array}{c} \mathbf{y}_0 \\ \mathbf{y}_1 \\ \vdots \\ \mathbf{y}_L \end{array} \right) \;\;,\end{split}\]

where

\[\begin{split}D = \left( \begin{array}{ccc} D_0 & D_1 & \ldots \end{array} \right) \qquad \mathbf{x} = \left( \begin{array}{c} \mathbf{x}_0 \\ \mathbf{x}_1 \\ \vdots \end{array} \right) \qquad \Gamma_i = \left( \begin{array}{ccc} G_i & 0 & \ldots \\ 0 & G_i & \ldots \\ \vdots & \vdots & \ddots \end{array} \right) \qquad I_B = \left( \begin{array}{ccc} I & I & \ldots \end{array} \right) \;\;.\end{split}\]

For multi-channel signals with a single-channel dictionary, vector TV is applied jointly over the coefficient maps for channel \(c\) and filter \(m\). Since multi-channel signals with a multi-channel dictionary also have one coefficient map per filter, the behaviour is the same as for single-channel signals.

After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The fields of the named tuple IterationStats are:

Iter : Iteration number

ObjFun : Objective function value

DFid : Value of data fidelity term \((1/2) \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \|_2^2\)

RegL1 : Value of regularisation term \(\sum_m \| \mathbf{x}_m \|_1\)

RegTV : Value of regularisation term \(\left\| \sqrt{\sum_m \sum_i (G_i \mathbf{x}_m)^2} \right\|_1\)

PrimalRsdl : Norm of primal residual

DualRsdl : Norm of dual residual

EpsPrimal : Primal residual stopping tolerance \(\epsilon_{\mathrm{pri}}\)

EpsDual : Dual residual stopping tolerance \(\epsilon_{\mathrm{dua}}\)

Rho : Penalty parameter

XSlvRelRes : Relative residual of X step solver

Time : Cumulative run time


Call graph

[call graph image: cbpdnvtv_init.svg]

Parameters
D : array_like

Dictionary matrix

S : array_like

Signal vector or matrix

lmbda : float

Regularisation parameter (l1)

mu : float

Regularisation parameter (gradient)

opt : ConvBPDNScalarTV.Options object

Algorithm options

dimK : 0, 1, or None, optional (default None)

Number of dimensions in input signal corresponding to multiple independent signals

dimN : int, optional (default 2)

Number of spatial dimensions
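
A minimal sketch of applying ConvBPDNVectorTV to a multi-channel (RGB) signal with a single-channel dictionary, so that the vector TV term couples gradients across the channel coefficient maps; shapes and parameter values are illustrative assumptions.

    import numpy as np
    from sporco.admm import cbpdntv

    D = np.random.randn(8, 8, 32)      # single-channel dictionary
    S = np.random.randn(256, 256, 3)   # illustrative RGB image (highpass component)

    lmbda, mu = 1e-2, 5e-3
    opt = cbpdntv.ConvBPDNVectorTV.Options(
        {'Verbose': True, 'MaxMainIter': 200})

    # dimK=0 indicates that the trailing axis of S is a channel axis rather
    # than an index over multiple independent signals.
    b = cbpdntv.ConvBPDNVectorTV(D, S, lmbda, mu, opt=opt, dimK=0)
    X = b.solve()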

ystep()[source]

Minimise Augmented Lagrangian with respect to \(\mathbf{y}\).

obfn_reg()[source]

Compute regularisation term and contribution to objective function.

solve()

Call graph

[call graph image: cbpdnvtv_solve.svg]
class sporco.admm.cbpdntv.ConvBPDNRecTV(*args, **kwargs)[source]

Bases: sporco.admm.admm.ADMM

ADMM algorithm for an extension of Convolutional BPDN including terms penalising the total variation of the reconstruction from the sparse representation [58].


Inheritance diagram of ConvBPDNRecTV


Solve the optimisation problem

\[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \mu \left\| \sqrt{\sum_i \left( G_i \left( \sum_m \mathbf{d}_m * \mathbf{x}_m \right) \right)^2} \right\|_1 \;\;,\]

where \(G_i\) is an operator computing the derivative along index \(i\), via the ADMM problem

\[\begin{split}\mathrm{argmin}_\mathbf{x} \; (1/2) \left\| D \mathbf{x} - \mathbf{s} \right\|_2^2 + \lambda \| \mathbf{y}_0 \|_1 + \mu \left\| \sqrt{\sum_{i=1}^L \mathbf{y}_i^2} \right\|_1 \quad \text{ such that } \quad \left( \begin{array}{c} I \\ \Gamma_0 \\ \Gamma_1 \\ \vdots \\ \Gamma_{L-1} \end{array} \right) \mathbf{x} = \left( \begin{array}{c} \mathbf{y}_0 \\ \mathbf{y}_1 \\ \mathbf{y}_2 \\ \vdots \\ \mathbf{y}_L \end{array} \right) \;\;,\end{split}\]

where

\[\begin{split}D = \left( \begin{array}{ccc} D_0 & D_1 & \ldots \end{array} \right) \qquad \mathbf{x} = \left( \begin{array}{c} \mathbf{x}_0 \\ \mathbf{x}_1 \\ \vdots \end{array} \right) \qquad \Gamma_i = \left( \begin{array}{ccc} G_{i,0} & G_{i,1} & \ldots \end{array} \right) \;\;,\end{split}\]

and linear operator \(G_{i,m}\) is defined such that

\[G_{i,m} \mathbf{x} = \mathbf{g}_i * \mathbf{d}_m * \mathbf{x} \;\;,\]

where \(\mathbf{g}_i\) is the filter corresponding to \(G_i\), i.e. \(G_i \mathbf{x} = \mathbf{g}_i * \mathbf{x}\).

For multi-channel signals, vector TV is applied jointly over the reconstructions of all channels.

After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The fields of the named tuple IterationStats are:

Iter : Iteration number

ObjFun : Objective function value

DFid : Value of data fidelity term \((1/2) \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \|_2^2\)

RegL1 : Value of regularisation term \(\sum_m \| \mathbf{x}_m \|_1\)

RegTV : Value of regularisation term \(\left\| \sqrt{\sum_i \left( G_i \left( \sum_m \mathbf{d}_m * \mathbf{x}_m \right) \right)^2} \right\|_1\)

PrimalRsdl : Norm of primal residual

DualRsdl : Norm of dual residual

EpsPrimal : Primal residual stopping tolerance \(\epsilon_{\mathrm{pri}}\)

EpsDual : Dual residual stopping tolerance \(\epsilon_{\mathrm{dua}}\)

Rho : Penalty parameter

XSlvRelRes : Relative residual of X step solver

Time : Cumulative run time


Call graph

[call graph image: cbpdnrtv_init.svg]

Parameters
D : array_like

Dictionary matrix

S : array_like

Signal vector or matrix

lmbda : float

Regularisation parameter (l1)

mu : float

Regularisation parameter (gradient)

opt : ConvBPDNRecTV.Options object

Algorithm options

dimK : 0, 1, or None, optional (default None)

Number of dimensions in input signal corresponding to multiple independent signals

dimN : int, optional (default 2)

Number of spatial dimensions
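
A minimal usage sketch for ConvBPDNRecTV, under the same illustrative assumptions as the earlier examples; here the TV penalty acts on the reconstruction rather than on the coefficient maps.

    import numpy as np
    from sporco.admm import cbpdntv

    D = np.random.randn(8, 8, 32)      # illustrative dictionary
    S = np.random.randn(256, 256)      # illustrative image (highpass component)

    lmbda, mu = 1e-2, 1e-2
    opt = cbpdntv.ConvBPDNRecTV.Options(
        {'Verbose': True, 'MaxMainIter': 200, 'AutoRho': {'Enabled': True}})

    b = cbpdntv.ConvBPDNRecTV(D, S, lmbda, mu, opt=opt)
    X = b.solve()
    Srec = b.reconstruct().squeeze()   # TV-regularised reconstruction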

class Options(opt=None)[source]

Bases: sporco.admm.cbpdn.ConvBPDN.Options

ConvBPDNRecTV algorithm options

Options include all of those defined in admm.cbpdn.ConvBPDN.Options, together with additional options:

TVWeight : An array of weights \(w_m\) for the term penalising the gradient of the coefficient maps. If this option is defined, the regularisation term is \(\left\| \sqrt{\sum_i \left( G_i \left( \sum_m w_m (\mathbf{d}_m * \mathbf{x}_m) \right) \right)^2} \right\|_1\) where \(w_m\) is the weight for filter index \(m\). The array should be an \(M\)-vector where \(M\) is the number of filters in the dictionary.

Parameters
opt : dict or None, optional (default None)

ConvBPDNRecTV algorithm options
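
A sketch of the ConvBPDNRecTV TVWeight option described above; note that these weights scale the per-filter contributions \(\mathbf{d}_m * \mathbf{x}_m\) inside the reconstruction rather than separate per-map penalties. The value of M and the weights are illustrative assumptions.

    import numpy as np
    from sporco.admm import cbpdntv

    M = 32                             # assumed number of dictionary filters
    wgt = np.ones(M)
    wgt[:4] = 0.0                      # omit the first four filters from the TV term

    opt = cbpdntv.ConvBPDNRecTV.Options({'MaxMainIter': 100, 'TVWeight': wgt})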

setdict(D=None)[source]

Set dictionary array.

block_sep0(Y)[source]

Separate variable into component corresponding to Y0 in Y.

block_sep1(Y)[source]

Separate variable into component corresponding to Y1 in Y.

block_cat(Y0, Y1)[source]

Concatenate components corresponding to Y0 and Y1 blocks into Y.

xstep()[source]

Minimise Augmented Lagrangian with respect to \(\mathbf{x}\).

ystep()[source]

Minimise Augmented Lagrangian with respect to \(\mathbf{y}\).

obfn_fvarf()[source]

Variable to be evaluated in computing data fidelity term, depending on fEvalX option value.

var_y0()[source]

Get \(\mathbf{y}_0\) variable, the block of \(\mathbf{y}\) corresponding to the identity operator.

var_y1()[source]

Get \(\mathbf{y}_1\) variable, consisting of all blocks of \(\mathbf{y}\) corresponding to a gradient operator.

var_yx()[source]

Get component block of \(\mathbf{y}\) that is constrained to be equal to \(\mathbf{x}\).

var_yx_idx()[source]

Get index expression for component block of \(\mathbf{y}\) that is constrained to be equal to \(\mathbf{x}\).

getmin()[source]

Get minimiser after optimisation.

getcoef()[source]

Get final coefficient array.

obfn_g0var()[source]

Variable to be evaluated in computing the TV regularisation term, depending on the gEvalY option value.

obfn_g1var()[source]

Variable to be evaluated in computing the \(\ell_1\) regularisation term, depending on the gEvalY option value.

obfn_gvar()[source]

Method providing compatibility with the interface of admm.cbpdn.ConvBPDN and derived classes in order to make this class compatible with classes such as AddMaskSim.

eval_objfn()[source]

Compute components of objective function as well as total contribution to objective function.

obfn_dfd()[source]

Compute data fidelity term \((1/2) \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \|_2^2\).

obfn_reg()[source]

Compute regularisation term and contribution to objective function.

itstat_extra()[source]

Non-standard entries for the iteration stats record tuple.

cnst_A0(X)[source]

Compute \(A_0 \mathbf{x}\) component of ADMM problem constraint. In this case \(A_0 \mathbf{x} = \mathbf{x}\).

cnst_A0T(Y0)[source]

Compute \(A_0^T \mathbf{y}_0\) component of \(A^T \mathbf{y}\). In this case \(A_0^T \mathbf{y}_0 = \mathbf{y}_0\), i.e. \(A_0 = I\).

cnst_A1(X, Xf=None)[source]

Compute \(A_1 \mathbf{x}\) component of ADMM problem constraint. In this case \(A_1 \mathbf{x} = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots )^T \mathbf{x}\).

cnst_A1T(Y1)[source]

Compute \(A_1^T \mathbf{y}_1\) component of \(A^T \mathbf{y}\). In this case \(A_1^T \mathbf{y}_1 = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots) \mathbf{y}_1\).

cnst_A(X, Xf=None)[source]

Compute \(A \mathbf{x}\) component of ADMM problem constraint. In this case \(A \mathbf{x} = (I \;\; \Gamma_0^T \;\; \Gamma_1^T \;\; \ldots)^T \mathbf{x}\).

cnst_AT(Y)[source]

Compute \(A^T \mathbf{y}\). In this case \(A^T \mathbf{y} = (I \;\; \Gamma_0^T \;\; \Gamma_1^T \;\; \ldots) \mathbf{y}\).

cnst_B(Y)[source]

Compute \(B \mathbf{y}\) component of ADMM problem constraint. In this case \(B \mathbf{y} = -\mathbf{y}\).

solve()

Call graph

[call graph image: cbpdnrtv_solve.svg]
cnst_c()[source]

Compute constant component \(\mathbf{c}\) of ADMM problem constraint. In this case \(\mathbf{c} = \mathbf{0}\).

relax_AX()[source]

Implement relaxation if option RelaxParam != 1.0.

reconstruct(X=None)[source]

Reconstruct representation.