Classes for ADMM algorithms for Total Variation (TV) optimisation with an $$\ell_2$$ data fidelity term

Classes

TVL2Denoise(S, lmbda[, opt, axes, caxis])
    ADMM algorithm for $$\ell_2$$-TV denoising problem [28], [20], [5].

TVL2Deconv(A, S, lmbda[, opt, axes, caxis])
    ADMM algorithm for $$\ell_2$$-TV deconvolution problem.

Class Descriptions

class sporco.admm.tvl2.TVL2Denoise(S, lmbda, opt=None, axes=(0, 1), caxis=None)[source]

ADMM algorithm for $$\ell_2$$-TV denoising problem [28], [20], [5].

Solve the optimisation problem

$\mathrm{argmin}_\mathbf{x} \; (1/2) \| W_{\mathrm{df}}(\mathbf{x} - \mathbf{s}) \|_2^2 + \lambda \left\| W_{\mathrm{tv}} \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2} \right\|_1$

via the ADMM problem

$\begin{split}\mathrm{argmin}_{\mathbf{x},\mathbf{y}_r,\mathbf{y}_c} \; (1/2) \| W_{\mathrm{df}}(\mathbf{x} - \mathbf{s}) \|_2^2 + \lambda \left\| W_{\mathrm{tv}} \sqrt{(\mathbf{y}_r)^2 + (\mathbf{y}_c)^2} \right\|_1 \;\text{such that}\; \left( \begin{array}{c} G_r \\ G_c \end{array} \right) \mathbf{x} - \left( \begin{array}{c} \mathbf{y}_r \\ \mathbf{y}_c \end{array} \right) = \left( \begin{array}{c} \mathbf{0} \\ \mathbf{0} \end{array} \right) \;\;.\end{split}$
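The regularisation term above can be sketched in a few lines of NumPy. Here $$G_r$$ and $$G_c$$ are taken as forward differences with circular boundary conditions, and $$W_{\mathrm{tv}}$$ is taken as the identity; both are illustrative assumptions, not a statement of how the implementation defines these operators.

```python
import numpy as np

def tv_term(x, lmbda=1.0):
    """Isotropic TV term lmbda * || sqrt((Gr x)^2 + (Gc x)^2) ||_1,
    with Gr, Gc as circular forward differences (illustrative choice)."""
    gr = np.roll(x, -1, axis=0) - x  # difference along rows
    gc = np.roll(x, -1, axis=1) - x  # difference along columns
    return lmbda * np.sum(np.sqrt(gr**2 + gc**2))

# A constant image has zero gradient, hence zero TV
print(tv_term(np.ones((8, 8))))  # 0.0
```

An image with edges (e.g. `np.eye(4)`) yields a strictly positive value, since the isotropic gradient magnitude is nonzero along the discontinuities.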

After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The fields of the named tuple IterationStats are:

Iter : Iteration number

ObjFun : Objective function value

DFid : Value of data fidelity term $$(1/2) \| W_{\mathrm{df}} (\mathbf{x} - \mathbf{s}) \|_2^2$$

RegTV : Value of regularisation term $$\| W_{\mathrm{tv}} \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2} \|_1$$

PrimalRsdl : Norm of primal residual

DualRsdl : Norm of dual residual

EpsPrimal : Primal residual stopping tolerance $$\epsilon_{\mathrm{pri}}$$

EpsDual : Dual residual stopping tolerance $$\epsilon_{\mathrm{dua}}$$

Rho : Penalty parameter

GSIter : Number of Gauss-Seidel iterations

GSRelRes : Relative residual of Gauss-Seidel solution

Time : Cumulative run time

Parameters:

S : array_like
    Signal vector or matrix
lmbda : float
    Regularisation parameter
opt : TVL2Denoise.Options object
    Algorithm options
axes : tuple or list
    Axes on which TV regularisation is to be applied
caxis : int or None, optional (default None)
    Axis on which channels of a multi-channel image are stacked. If None, TV regularisation is applied independently to each channel, otherwise Vector TV [5] regularisation is applied jointly to all channels.
class Options(opt=None)[source]

Bases: sporco.admm.admm.Options

TVL2Denoise algorithm options

Options include all of those defined in sporco.admm.admm.ADMM.Options, together with additional options:

gEvalY : Flag indicating whether the $$g$$ component of the objective function should be evaluated using variable Y (True) or X (False) as its argument.

MaxGSIter : Maximum Gauss-Seidel iterations.

GSTol : Gauss-Seidel stopping tolerance.

DFidWeight : Data fidelity weight matrix.

TVWeight : TV term weight matrix.

Parameters:

opt : dict or None, optional (default None)
    TVL2Denoise algorithm options
uinit(ushape)[source]

Return initialiser for working variable U.

xstep()[source]

Minimise Augmented Lagrangian with respect to $$\mathbf{x}$$.

ystep()[source]

Minimise Augmented Lagrangian with respect to $$\mathbf{y}$$.

obfn_gvar()[source]

Variable to be evaluated in computing regularisation term, depending on ‘gEvalY’ option value.

eval_objfn()[source]

Compute components of objective function as well as total contribution to objective function. Data fidelity term is $$(1/2) \| W_{\mathrm{df}} (\mathbf{x} - \mathbf{s}) \|_2^2$$ and regularisation term is $$\| W_{\mathrm{tv}} \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2}\|_1$$.

itstat_extra()[source]

Non-standard entries for the iteration stats record tuple.

cnst_A(X)[source]

Compute $$A \mathbf{x}$$ component of ADMM problem constraint. In this case $$A \mathbf{x} = (G_r^T \;\; G_c^T)^T \mathbf{x}$$.

cnst_AT(X)[source]

Compute $$A^T \mathbf{x}$$ where $$A \mathbf{x}$$ is a component of ADMM problem constraint. In this case $$A^T \mathbf{x} = (G_r^T \;\; G_c^T) \mathbf{x}$$.

cnst_B(Y)[source]

Compute $$B \mathbf{y}$$ component of ADMM problem constraint. In this case $$B \mathbf{y} = -\mathbf{y}$$.

cnst_c()[source]

Compute constant component $$\mathbf{c}$$ of ADMM problem constraint. In this case $$\mathbf{c} = \mathbf{0}$$.
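The relation between the `cnst_A` and `cnst_AT` operators can be checked numerically. The sketch below uses circular forward differences as stand-ins for $$G_r$$ and $$G_c$$ (an assumption for illustration; boundary handling in the actual implementation may differ) and verifies the adjoint identity $$\langle A \mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, A^T \mathbf{y} \rangle$$.

```python
import numpy as np

# Stand-ins for Gr, Gc: circular forward differences (illustrative assumption)
Gr = lambda x: np.roll(x, -1, axis=0) - x
Gc = lambda x: np.roll(x, -1, axis=1) - x
# Transpose of a circular forward difference is a circular backward difference
GrT = lambda y: np.roll(y, 1, axis=0) - y
GcT = lambda y: np.roll(y, 1, axis=1) - y

def cnst_A(x):
    # A x = (Gr^T  Gc^T)^T x : stack both gradient components
    return np.stack([Gr(x), Gc(x)])

def cnst_AT(y):
    # A^T y = (Gr^T  Gc^T) y : sum of transposed differences
    return GrT(y[0]) + GcT(y[1])

# Verify the adjoint relation <A x, y> = <x, A^T y>
rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))
y = rng.standard_normal((2, 6, 6))
print(np.isclose(np.sum(cnst_A(x) * y), np.sum(x * cnst_AT(y))))  # True
```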

LaplaceCentreWeight()[source]

Centre weighting matrix for TV Laplacian.

GaussSeidelStep(S, X, ATYU, rho, lcw, W2)[source]

Gauss-Seidel step for linear system in TV problem.

solve()

class sporco.admm.tvl2.TVL2Deconv(A, S, lmbda, opt=None, axes=(0, 1), caxis=None)[source]

ADMM algorithm for $$\ell_2$$-TV deconvolution problem.

Solve the optimisation problem

$\mathrm{argmin}_\mathbf{x} \; (1/2) \| H \mathbf{x} - \mathbf{s} \|_2^2 + \lambda \left\| W_{\mathrm{tv}} \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2} \right\|_1 \;\;,$

where $$H$$ denotes the linear operator corresponding to a convolution, via the ADMM problem

$\begin{split}\mathrm{argmin}_{\mathbf{x},\mathbf{y}_r,\mathbf{y}_c} \; (1/2) \| H \mathbf{x} - \mathbf{s} \|_2^2 + \lambda \left\| W_{\mathrm{tv}} \sqrt{(\mathbf{y}_r)^2 + (\mathbf{y}_c)^2} \right\|_1 \;\text{such that}\; \left( \begin{array}{c} G_r \\ G_c \end{array} \right) \mathbf{x} - \left( \begin{array}{c} \mathbf{y}_r \\ \mathbf{y}_c \end{array} \right) = \left( \begin{array}{c} \mathbf{0} \\ \mathbf{0} \end{array} \right) \;\;.\end{split}$
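For the data fidelity term above, the linear operator $$H$$ can be applied efficiently in the DFT domain. The sketch below assumes circular boundary conditions, a common choice for frequency-domain solvers, though this is an illustrative assumption rather than a statement about the implementation.

```python
import numpy as np

def apply_H(kernel, x):
    """Apply the convolution operator H via the 2D DFT, assuming
    circular boundary conditions (illustrative assumption)."""
    Hf = np.fft.fft2(kernel, s=x.shape)  # zero-pad kernel to image size
    return np.real(np.fft.ifft2(Hf * np.fft.fft2(x)))

# Sanity check: a delta kernel makes H the identity operator
x = np.random.randn(8, 8)
delta = np.zeros((3, 3))
delta[0, 0] = 1.0
print(np.allclose(apply_H(delta, x), x))  # True
```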

After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The fields of the named tuple IterationStats are:

Iter : Iteration number

ObjFun : Objective function value

DFid : Value of data fidelity term $$(1/2) \| H \mathbf{x} - \mathbf{s} \|_2^2$$

RegTV : Value of regularisation term $$\| W_{\mathrm{tv}} \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2} \|_1$$

PrimalRsdl : Norm of primal residual

DualRsdl : Norm of dual residual

EpsPrimal : Primal residual stopping tolerance $$\epsilon_{\mathrm{pri}}$$

EpsDual : Dual residual stopping tolerance $$\epsilon_{\mathrm{dua}}$$

Rho : Penalty parameter

XSlvRelRes : Relative residual of X step solver

Time : Cumulative run time

Parameters:

A : array_like
    Filter kernel corresponding to operator $$H$$ above
S : array_like
    Signal vector or matrix
lmbda : float
    Regularisation parameter
opt : TVL2Deconv.Options object
    Algorithm options
axes : tuple or list
    Axes on which TV regularisation is to be applied
caxis : int or None, optional (default None)
    Axis on which channels of a multi-channel image are stacked. If None, TV regularisation is applied independently to each channel, otherwise Vector TV [5] regularisation is applied jointly to all channels.
class Options(opt=None)[source]

Bases: sporco.admm.admm.Options

TVL2Deconv algorithm options

Options include all of those defined in sporco.admm.admm.ADMM.Options, together with additional options:

gEvalY : Flag indicating whether the $$g$$ component of the objective function should be evaluated using variable Y (True) or X (False) as its argument.

LinSolveCheck : If True, compute relative residual of X step solver.

TVWeight : TV term weight matrix.

Parameters:

opt : dict or None, optional (default None)
    TVL2Deconv algorithm options
uinit(ushape)[source]

Return initialiser for working variable U.

xstep()[source]

Minimise Augmented Lagrangian with respect to $$\mathbf{x}$$.

ystep()[source]

Minimise Augmented Lagrangian with respect to $$\mathbf{y}$$.

obfn_gvar()[source]

Variable to be evaluated in computing regularisation term, depending on ‘gEvalY’ option value.

eval_objfn()[source]

Compute components of objective function as well as total contribution to objective function. Data fidelity term is $$(1/2) \| H \mathbf{x} - \mathbf{s} \|_2^2$$ and regularisation term is $$\| W_{\mathrm{tv}} \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2}\|_1$$.

itstat_extra()[source]

Non-standard entries for the iteration stats record tuple.

cnst_A(X, Xf=None)[source]

Compute $$A \mathbf{x}$$ component of ADMM problem constraint. In this case $$A \mathbf{x} = (G_r^T \;\; G_c^T)^T \mathbf{x}$$.

solve()

cnst_AT(X)[source]

Compute $$A^T \mathbf{x}$$ where $$A \mathbf{x}$$ is a component of ADMM problem constraint. In this case $$A^T \mathbf{x} = (G_r^T \;\; G_c^T) \mathbf{x}$$.

cnst_B(Y)[source]

Compute $$B \mathbf{y}$$ component of ADMM problem constraint. In this case $$B \mathbf{y} = -\mathbf{y}$$.

cnst_c()[source]

Compute constant component $$\mathbf{c}$$ of ADMM problem constraint. In this case $$\mathbf{c} = \mathbf{0}$$.