Classes for ADMM algorithms for Total Variation (TV) optimisation with an $$\ell_1$$ data fidelity term

Classes

TVL1Denoise(S, lmbda[, opt, axes, caxis]) : ADMM algorithm for $$\ell_1$$-TV denoising problem [3] [14] (Sec. 2.4.4)

TVL1Deconv(A, S, lmbda[, opt, axes, caxis]) : ADMM algorithm for $$\ell_1$$-TV deconvolution problem

## Class Descriptions

class sporco.admm.tvl1.TVL1Denoise(S, lmbda, opt=None, axes=(0, 1), caxis=None)[source]

ADMM algorithm for $$\ell_1$$-TV denoising problem [3] [14] (Sec. 2.4.4).

Solve the optimisation problem

$\mathrm{argmin}_\mathbf{x} \; \| W_{\mathrm{df}} (\mathbf{x} - \mathbf{s}) \|_1 + \lambda \left\| W_{\mathrm{tv}} \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2} \right\|_1$

via the ADMM problem

$\begin{split}\mathrm{argmin}_{\mathbf{x},\mathbf{y}_d,\mathbf{y}_r,\mathbf{y}_c} \; \| W_{\mathrm{df}} \mathbf{y}_d \|_1 + \lambda \left\| W_{\mathrm{tv}} \sqrt{(\mathbf{y}_r)^2 + (\mathbf{y}_c)^2} \right\|_1 \;\text{such that}\; \left( \begin{array}{c} G_r \\ G_c \\ I \end{array} \right) \mathbf{x} - \left( \begin{array}{c} \mathbf{y}_r \\ \mathbf{y}_c \\ \mathbf{y}_d \end{array} \right) = \left( \begin{array}{c} \mathbf{0} \\ \mathbf{0} \\ \mathbf{s} \end{array} \right) \;\;.\end{split}$
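The unconstrained form of the objective can be evaluated directly. The following is a minimal NumPy sketch (not the class implementation), assuming unit weighting matrices $$W_{\mathrm{df}}$$ and $$W_{\mathrm{tv}}$$ and periodic boundary conditions for the difference operators $$G_r$$ and $$G_c$$:

```python
import numpy as np

def tvl1_objective(x, s, lmbda):
    """Evaluate the l1-TV denoising objective for a 2D image x and
    reference s, assuming unit weights W_df, W_tv and periodic
    boundary conditions for the difference operators."""
    Grx = np.roll(x, -1, axis=0) - x   # G_r x: forward difference along rows
    Gcx = np.roll(x, -1, axis=1) - x   # G_c x: forward difference along columns
    dfid = np.abs(x - s).sum()                  # || x - s ||_1
    regtv = np.sqrt(Grx**2 + Gcx**2).sum()      # || sqrt((G_r x)^2 + (G_c x)^2) ||_1
    return dfid + lmbda * regtv
```

Under these assumptions, this is the quantity reported as ObjFun (the sum of the DFid and $$\lambda$$-weighted RegTV fields) in the iteration statistics described below.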

After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The fields of the named tuple IterationStats are:

Iter : Iteration number

ObjFun : Objective function value

DFid : Value of data fidelity term $$\| W_{\mathrm{df}} (\mathbf{x} - \mathbf{s}) \|_1$$

RegTV : Value of regularisation term $$\| W_{\mathrm{tv}} \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2} \|_1$$

PrimalRsdl : Norm of primal residual

DualRsdl : Norm of dual residual

EpsPrimal : Primal residual stopping tolerance $$\epsilon_{\mathrm{pri}}$$

EpsDual : Dual residual stopping tolerance $$\epsilon_{\mathrm{dua}}$$

Rho : Penalty parameter

GSIter : Number of Gauss-Seidel iterations

GSRelRes : Relative residual of Gauss-Seidel solution

Time : Cumulative run time


Parameters:

S : array_like
  Signal vector or matrix

lmbda : float
  Regularisation parameter

opt : TVL1Denoise.Options object
  Algorithm options

axes : tuple or list, optional (default (0, 1))
  Axes on which TV regularisation is to be applied

caxis : int or None, optional (default None)
  Axis on which channels of a multi-channel image are stacked. If None, TV regularisation is applied independently to each channel, otherwise Vector TV [5] regularisation is applied jointly to all channels.
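The effect of the caxis parameter can be illustrated with a NumPy sketch (hypothetical helper names, unit weights, periodic boundaries; the library's exact formulation may differ): with a channel axis specified, the gradient magnitudes are pooled across channels before the $$\ell_1$$ sum, coupling the channels.

```python
import numpy as np

def tv_independent(x):
    """Scalar TV applied separately to each channel of an
    (Nrows, Ncols, Nchannels) array and summed (cf. caxis=None)."""
    Gr = np.roll(x, -1, axis=0) - x
    Gc = np.roll(x, -1, axis=1) - x
    return np.sqrt(Gr**2 + Gc**2).sum()

def tv_vector(x):
    """Vector TV: gradient magnitudes are pooled across the channel
    axis inside the square root, coupling the channels (cf. caxis=-1)."""
    Gr = np.roll(x, -1, axis=0) - x
    Gc = np.roll(x, -1, axis=1) - x
    return np.sqrt((Gr**2 + Gc**2).sum(axis=-1)).sum()
```

For a single-channel input the two agree; for multi-channel input the joint term is never larger than the sum of per-channel terms, which is what encourages edges to align across channels.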
class Options(opt=None)[source]

Bases: sporco.admm.admm.Options

TVL1Denoise algorithm options

Options include all of those defined in sporco.admm.admm.ADMM.Options, together with additional options:

gEvalY : Flag indicating whether the $$g$$ component of the objective function should be evaluated using variable Y (True) or X (False) as its argument.

MaxGSIter : Maximum Gauss-Seidel iterations.

GSTol : Gauss-Seidel stopping tolerance.

DFidWeight : Data fidelity weight matrix.

TVWeight : TV term weight matrix.

Parameters:

opt : dict or None, optional (default None)
  TVL1Denoise algorithm options
uinit(self, ushape)[source]

Return initialiser for working variable U.

xstep(self)[source]

Minimise Augmented Lagrangian with respect to $$\mathbf{x}$$.

ystep(self)[source]

Minimise Augmented Lagrangian with respect to $$\mathbf{y}$$.

obfn_gvar(self)[source]

Variable to be evaluated in computing regularisation term, depending on ‘gEvalY’ option value.

eval_objfn(self)[source]

Compute components of objective function as well as total contribution to objective function. Data fidelity term is $$\| W_{\mathrm{df}} (\mathbf{x} - \mathbf{s}) \|_1$$ and regularisation term is $$\| W_{\mathrm{tv}} \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2}\|_1$$.

itstat_extra(self)[source]

Non-standard entries for the iteration stats record tuple.

cnst_A(self, X)[source]

Compute $$A \mathbf{x}$$ component of ADMM problem constraint. In this case $$A \mathbf{x} = (G_r^T \;\; G_c^T \;\; I)^T \mathbf{x}$$.

cnst_AT(self, X)[source]

Compute $$A^T \mathbf{x}$$ where $$A \mathbf{x}$$ is a component of ADMM problem constraint. In this case $$A^T \mathbf{x} = (G_r^T \;\; G_c^T \;\; I) \mathbf{x}$$.
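cnst_A and cnst_AT should be mutual adjoints, i.e. $$\langle A \mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, A^T \mathbf{y} \rangle$$. A sketch of this check, assuming periodic-boundary forward differences for $$G_r$$ and $$G_c$$ (the class's actual boundary handling may differ):

```python
import numpy as np

def Gr(x):   # forward difference along rows, periodic boundary
    return np.roll(x, -1, axis=0) - x

def Gc(x):   # forward difference along columns, periodic boundary
    return np.roll(x, -1, axis=1) - x

def GrT(y):  # adjoint of Gr: negative backward difference
    return np.roll(y, 1, axis=0) - y

def GcT(y):  # adjoint of Gc
    return np.roll(y, 1, axis=1) - y

def cnst_A(x):
    """A x = (G_r x, G_c x, x), stacked on a new leading axis."""
    return np.stack([Gr(x), Gc(x), x])

def cnst_AT(y):
    """A^T y = G_r^T y_r + G_c^T y_c + y_d."""
    return GrT(y[0]) + GcT(y[1]) + y[2]

# Adjoint test: <A x, y> should equal <x, A^T y>
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
y = rng.standard_normal((3, 8, 8))
assert np.isclose((cnst_A(x) * y).sum(), (x * cnst_AT(y)).sum())
```

This inner-product test is a cheap way to validate any pair of constraint operators before handing them to the ADMM iterations.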

cnst_B(self, Y)[source]

Compute $$B \mathbf{y}$$ component of ADMM problem constraint. In this case $$B \mathbf{y} = -\mathbf{y}$$.

cnst_c(self)[source]

Compute constant component $$\mathbf{c}$$ of ADMM problem constraint. In this case $$\mathbf{c} = (\mathbf{0} \;\; \mathbf{0} \;\; \mathbf{s})$$.

rsdl_s(self, Yprev, Y)[source]

Compute dual residual vector.

rsdl_sn(self, U)[source]

Compute dual residual normalisation term.

LaplaceCentreWeight(self)[source]

Centre weighting matrix for TV Laplacian.

GaussSeidelStep(self, S, X, ATYU, rho, lcw, W2)[source]

Gauss-Seidel step for the linear system arising in the TV problem.
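GaussSeidelStep is specialised to the TV linear system, but the underlying idea can be shown with a generic dense Gauss-Seidel sweep (illustrative only, not the library routine): each component of the solution is updated immediately using the latest values of the others.

```python
import numpy as np

def gauss_seidel_sweep(A, b, x):
    """One in-place Gauss-Seidel sweep for A x = b."""
    n = len(b)
    for i in range(n):
        # Sum of off-diagonal contributions using current x values
        sigma = A[i] @ x - A[i, i] * x[i]
        x[i] = (b[i] - sigma) / A[i, i]
    return x

# Sweep until the relative residual (cf. the GSRelRes statistic) is small
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])   # diagonally dominant, so the sweeps converge
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)
for _ in range(50):               # cf. the MaxGSIter option
    gauss_seidel_sweep(A, b, x)
relres = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

In the class, the iteration count and final relative residual of this inner solve are reported as the GSIter and GSRelRes fields of the iteration statistics, with MaxGSIter and GSTol controlling termination.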

solve(*args)


class sporco.admm.tvl1.TVL1Deconv(A, S, lmbda, opt=None, axes=(0, 1), caxis=None)[source]

ADMM algorithm for $$\ell_1$$-TV deconvolution problem.

Solve the optimisation problem

$\mathrm{argmin}_\mathbf{x} \; \| W_{\mathrm{df}} (H \mathbf{x} - \mathbf{s}) \|_1 + \lambda \left\| W_{\mathrm{tv}} \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2} \right\|_1 \;\;,$

where $$H$$ denotes the linear operator corresponding to a convolution, via the ADMM problem

$\begin{split}\mathrm{argmin}_{\mathbf{x},\mathbf{y}_d,\mathbf{y}_r,\mathbf{y}_c} \; \| W_{\mathrm{df}} \mathbf{y}_d \|_1 + \lambda \left\| W_{\mathrm{tv}} \sqrt{(\mathbf{y}_r)^2 + (\mathbf{y}_c)^2} \right\|_1 \;\text{such that}\; \left( \begin{array}{c} G_r \\ G_c \\ H \end{array} \right) \mathbf{x} - \left( \begin{array}{c} \mathbf{y}_r \\ \mathbf{y}_c \\ \mathbf{y}_d \end{array} \right) = \left( \begin{array}{c} \mathbf{0} \\ \mathbf{0} \\ \mathbf{s} \end{array} \right) \;\;.\end{split}$

Note that the convolution is implemented in the frequency domain, having the same phase offset as fftconv, which differs from that of scipy.ndimage.convolve with the default origin parameter.
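A minimal sketch of frequency-domain convolution with this phase convention (kernel zero-padded to the image size with its origin at index (0, 0), periodic boundary), not the library's fftconv itself:

```python
import numpy as np

def fft_circular_conv(h, s):
    """Convolve kernel h with image s via the frequency domain.
    h is zero-padded to the shape of s with its origin at (0, 0),
    so the result is a circular (periodic-boundary) convolution."""
    H = np.fft.fft2(h, s.shape)
    return np.real(np.fft.ifft2(H * np.fft.fft2(s)))
```

Because the product of DFTs implements circular convolution with the kernel origin at (0, 0), the output is shifted relative to scipy.ndimage.convolve, which by default centres the kernel on each pixel.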

After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The fields of the named tuple IterationStats are:

Iter : Iteration number

ObjFun : Objective function value

DFid : Value of data fidelity term $$\| W_{\mathrm{df}} (H \mathbf{x} - \mathbf{s}) \|_1$$

RegTV : Value of regularisation term $$\| W_{\mathrm{tv}} \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2} \|_1$$

PrimalRsdl : Norm of primal residual

DualRsdl : Norm of dual residual

EpsPrimal : Primal residual stopping tolerance $$\epsilon_{\mathrm{pri}}$$

EpsDual : Dual residual stopping tolerance $$\epsilon_{\mathrm{dua}}$$

Rho : Penalty parameter

XSlvRelRes : Relative residual of X step solver

Time : Cumulative run time


Parameters:

A : array_like
  Filter kernel corresponding to operator $$H$$ above

S : array_like
  Signal vector or matrix

lmbda : float
  Regularisation parameter

opt : TVL1Deconv.Options object
  Algorithm options

axes : tuple, optional (default (0, 1))
  Axes on which TV regularisation is to be applied

caxis : int or None, optional (default None)
  Axis on which channels of a multi-channel image are stacked. If None, TV regularisation is applied independently to each channel, otherwise Vector TV [5] regularisation is applied jointly to all channels.
class Options(opt=None)[source]

Bases: sporco.admm.admm.Options

TVL1Deconv algorithm options

Options include all of those defined in sporco.admm.admm.ADMM.Options, together with additional options:

gEvalY : Flag indicating whether the $$g$$ component of the objective function should be evaluated using variable Y (True) or X (False) as its argument.

LinSolveCheck : If True, compute relative residual of X step solver.

DFidWeight : Data fidelity weight matrix.

TVWeight : TV term weight matrix.

Parameters:

opt : dict or None, optional (default None)
  TVL1Deconv algorithm options
uinit(self, ushape)[source]

Return initialiser for working variable U.

xstep(self)[source]

Minimise Augmented Lagrangian with respect to $$\mathbf{x}$$.

ystep(self)[source]

Minimise Augmented Lagrangian with respect to $$\mathbf{y}$$.

obfn_gvar(self)[source]

Variable to be evaluated in computing regularisation term, depending on ‘gEvalY’ option value.

eval_objfn(self)[source]

Compute components of objective function as well as total contribution to objective function. Data fidelity term is $$\| W_{\mathrm{df}} (H \mathbf{x} - \mathbf{s}) \|_1$$ and regularisation term is $$\| W_{\mathrm{tv}} \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2}\|_1$$.

itstat_extra(self)[source]

Non-standard entries for the iteration stats record tuple.

cnst_A(self, X, Xf=None)[source]

Compute $$A \mathbf{x}$$ component of ADMM problem constraint. In this case $$A \mathbf{x} = (G_r^T \;\; G_c^T \;\; H)^T \mathbf{x}$$.

cnst_AT(self, X)[source]

Compute $$A^T \mathbf{x}$$ where $$A \mathbf{x}$$ is a component of ADMM problem constraint. In this case $$A^T \mathbf{x} = (G_r^T \;\; G_c^T \;\; H^T) \mathbf{x}$$.

cnst_B(self, Y)[source]

Compute $$B \mathbf{y}$$ component of ADMM problem constraint. In this case $$B \mathbf{y} = -\mathbf{y}$$.

solve(*args)


cnst_c(self)[source]

Compute constant component $$\mathbf{c}$$ of ADMM problem constraint. In this case $$\mathbf{c} = (\mathbf{0} \;\; \mathbf{0} \;\; \mathbf{s})$$.

rsdl_s(self, Yprev, Y)[source]

Compute dual residual vector.

rsdl_sn(self, U)[source]

Compute dual residual normalisation term.