sporco.admm.cbpdntv¶
Classes for ADMM algorithms for convolutional sparse coding with Total Variation regularisation terms.
Classes

- ConvBPDNScalarTV : ADMM algorithm for an extension of Convolutional BPDN including terms penalising the total variation of each coefficient map [58].
- ConvBPDNVectorTV : ADMM algorithm for an extension of Convolutional BPDN including a term penalising the vector total variation of the coefficient maps [58].
- ConvBPDNRecTV : ADMM algorithm for an extension of Convolutional BPDN including terms penalising the total variation of the reconstruction from the sparse representation [58].
Class Descriptions¶
- class sporco.admm.cbpdntv.ConvBPDNScalarTV(*args, **kwargs)[source]¶
Bases: ADMM
ADMM algorithm for an extension of Convolutional BPDN including terms penalising the total variation of each coefficient map [58].
Solve the optimisation problem
\[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \mu \sum_m \left\| \sqrt{\sum_i (G_i \mathbf{x}_m)^2} \right\|_1 \;\;,\]where \(G_i\) is an operator computing the derivative along index \(i\), via the ADMM problem
\[\begin{split}\mathrm{argmin}_\mathbf{x} \; (1/2) \left\| D \mathbf{x} - \mathbf{s} \right\|_2^2 + \lambda \| \mathbf{y}_L \|_1 + \mu \sum_m \left\| \sqrt{\sum_{i=0}^{L-1} \mathbf{y}_i^2} \right\|_1 \quad \text{ such that } \quad \left( \begin{array}{c} \Gamma_0 \\ \Gamma_1 \\ \vdots \\ I \end{array} \right) \mathbf{x} = \left( \begin{array}{c} \mathbf{y}_0 \\ \mathbf{y}_1 \\ \vdots \\ \mathbf{y}_L \end{array} \right) \;\;,\end{split}\]where
\[\begin{split}D = \left( \begin{array}{ccc} D_0 & D_1 & \ldots \end{array} \right) \qquad \mathbf{x} = \left( \begin{array}{c} \mathbf{x}_0 \\ \mathbf{x}_1 \\ \vdots \end{array} \right) \qquad \Gamma_i = \left( \begin{array}{ccc} G_i & 0 & \ldots \\ 0 & G_i & \ldots \\ \vdots & \vdots & \ddots \end{array} \right) \;\;.\end{split}\]For multi-channel signals with a single-channel dictionary, scalar TV is applied independently to each coefficient map for channel \(c\) and filter \(m\). Since multi-channel signals with a multi-channel dictionary also have one coefficient map per filter, the behaviour is the same as for single-channel signals.
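As a rough illustration of the scalar TV term, it can be evaluated in numpy with circular forward differences standing in for the \(G_i\) operators (a sketch only, not the solver's implementation of the derivative operators):

```python
import numpy as np

def scalar_tv(X):
    """Scalar TV of coefficient maps X with shape (N1, N2, M):
    sum_m || sqrt( sum_i (G_i x_m)^2 ) ||_1, with G_i taken here as a
    circular forward difference along spatial axis i."""
    g0 = np.roll(X, -1, axis=0) - X  # G_0 x_m, for all m at once
    g1 = np.roll(X, -1, axis=1) - X  # G_1 x_m
    return np.sum(np.sqrt(g0**2 + g1**2))

# A constant map has zero total variation
print(scalar_tv(np.ones((8, 8, 4))))  # 0.0
```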
After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The fields of the named tuple IterationStats are:
Iter
: Iteration number
ObjFun
: Objective function value
DFid
: Value of data fidelity term \((1/2) \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \|_2^2\)
RegL1
: Value of regularisation term \(\sum_m \| \mathbf{x}_m \|_1\)
RegTV
: Value of regularisation term \(\sum_m \left\| \sqrt{\sum_i (G_i \mathbf{x}_m)^2} \right\|_1\)
PrimalRsdl
: Norm of primal residual
DualRsdl
: Norm of dual residual
EpsPrimal
: Primal residual stopping tolerance \(\epsilon_{\mathrm{pri}}\)
EpsDual
: Dual residual stopping tolerance \(\epsilon_{\mathrm{dua}}\)
Rho
: Penalty parameter
XSlvRelRes
: Relative residual of X step solver
Time
: Cumulative run time
- Parameters:
- D : array_like
Dictionary matrix
- S : array_like
Signal vector or matrix
- lmbda : float
Regularisation parameter (l1)
- mu : float
Regularisation parameter (gradient)
- opt : ConvBPDNScalarTV.Options object
Algorithm options
- dimK : 0, 1, or None, optional (default None)
Number of dimensions in input signal corresponding to multiple independent signals
- dimN : int, optional (default 2)
Number of spatial dimensions
- class Options(opt=None)[source]¶
Bases: Options

ConvBPDNScalarTV algorithm options

Options include all of those defined in admm.cbpdn.ConvBPDN.Options, together with additional options:
TVWeight
: An array of weights \(w_m\) for the term penalising the gradient of the coefficient maps. If this option is defined, the regularisation term is \(\sum_m w_m \left\| \sqrt{\sum_i (G_i \mathbf{x}_m)^2} \right\|_1\), where \(w_m\) is the weight for filter index \(m\). The array should be an \(M\)-vector, where \(M\) is the number of filters in the dictionary.
- Parameters:
- opt : dict or None, optional (default None)
ConvBPDNScalarTV algorithm options
- defaults = {'AbsStopTol': 0.0, 'AutoRho': {'AutoScaling': True, 'Enabled': True, 'Period': 1, 'RsdlRatio': 1.2, 'RsdlTarget': None, 'Scaling': 1000.0, 'StdResiduals': False}, 'AuxVarObj': False, 'Callback': None, 'DataType': None, 'FastSolve': False, 'HighMemSolve': False, 'IterTimer': 'solve', 'L1Weight': 1.0, 'LinSolveCheck': False, 'MaxMainIter': 1000, 'NoBndryCross': False, 'NonNegCoef': False, 'RelStopTol': 0.001, 'RelaxParam': 1.8, 'ReturnX': False, 'StatusHeader': True, 'TVWeight': 1.0, 'U0': None, 'Verbose': False, 'Y0': None, 'fEvalX': True, 'gEvalY': False, 'rho': None}¶
Default content and allowed dict keys
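For illustration, an Options object is typically constructed by passing a dict of the desired keys (a hypothetical configuration sketch, assuming sporco is installed; the values below are arbitrary, and all keys are drawn from the defaults dict above):

```python
from sporco.admm import cbpdntv

# Arbitrary illustrative values; any key from the defaults dict is allowed
opt = cbpdntv.ConvBPDNScalarTV.Options({
    'Verbose': True,       # print per-iteration statistics
    'MaxMainIter': 200,    # cap on ADMM iterations
    'RelStopTol': 5e-3,    # relative residual stopping tolerance
    'rho': 1e1,            # initial penalty parameter
    'TVWeight': 1.0,       # scalar, or an M-vector of per-filter weights
})
```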
- itstat_fields_objfn = ('ObjFun', 'DFid', 'RegL1', 'RegTV')¶
Fields in IterationStats associated with the objective function; see
eval_objfn
- itstat_fields_extra = ('XSlvRelRes',)¶
Non-standard fields in IterationStats; see
itstat_extra
- hdrtxt_objfn = ('Fnc', 'DFid', 'Regℓ1', 'RegTV')¶
Display column headers associated with the objective function; see
eval_objfn
- hdrval_objfun = {'DFid': 'DFid', 'Fnc': 'ObjFun', 'RegTV': 'RegTV', 'Regℓ1': 'RegL1'}¶
Dictionary mapping display column headers in
hdrtxt_objfn
to IterationStats entries
- obfn_fvarf()[source]¶
Variable to be evaluated in computing data fidelity term, depending on
fEvalX
option value.
- var_y0()[source]¶
Get \(\mathbf{y}_0\) variable, consisting of all blocks of \(\mathbf{y}\) corresponding to a gradient operator.
- var_y1()[source]¶
Get \(\mathbf{y}_1\) variable, the block of \(\mathbf{y}\) corresponding to the identity operator.
- var_yx()[source]¶
Get component block of \(\mathbf{y}\) that is constrained to be equal to \(\mathbf{x}\).
- var_yx_idx()[source]¶
Get index expression for component block of \(\mathbf{y}\) that is constrained to be equal to \(\mathbf{x}\).
- obfn_g0var()[source]¶
Variable to be evaluated in computing the TV regularisation term, depending on the
gEvalY
option value.
- obfn_g1var()[source]¶
Variable to be evaluated in computing the \(\ell_1\) regularisation term, depending on the
gEvalY
option value.
- obfn_gvar()[source]¶
Method providing compatibility with the interface of admm.cbpdn.ConvBPDN and derived classes, in order to make this class compatible with classes such as AddMaskSim.
- eval_objfn()[source]¶
Compute components of objective function as well as total contribution to objective function.
- obfn_dfd()[source]¶
Compute data fidelity term \((1/2) \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \|_2^2\).
- cnst_A0(X, Xf=None)[source]¶
Compute \(A_0 \mathbf{x}\) component of ADMM problem constraint. In this case \(A_0 \mathbf{x} = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots )^T \mathbf{x}\).
- cnst_A0T(X)[source]¶
Compute \(A_0^T \mathbf{x}\) where \(A_0 \mathbf{x}\) is a component of the ADMM problem constraint. In this case \(A_0^T \mathbf{x} = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots ) \mathbf{x}\).
- cnst_A1(X)[source]¶
Compute \(A_1 \mathbf{x}\) component of ADMM problem constraint. In this case \(A_1 \mathbf{x} = \mathbf{x}\).
- cnst_A1T(X)[source]¶
Compute \(A_1^T \mathbf{x}\) where \(A_1 \mathbf{x}\) is a component of the ADMM problem constraint. In this case \(A_1^T \mathbf{x} = \mathbf{x}\).
- cnst_A(X, Xf=None)[source]¶
Compute \(A \mathbf{x}\) component of ADMM problem constraint. In this case \(A \mathbf{x} = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots \;\; I)^T \mathbf{x}\).
- cnst_AT(X)[source]¶
Compute \(A^T \mathbf{x}\) where \(A \mathbf{x}\) is a component of ADMM problem constraint. In this case \(A^T \mathbf{x} = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots \;\; I) \mathbf{x}\).
- cnst_B(Y)[source]¶
Compute \(B \mathbf{y}\) component of ADMM problem constraint. In this case \(B \mathbf{y} = -\mathbf{y}\).
- cnst_c()[source]¶
Compute constant component \(\mathbf{c}\) of ADMM problem constraint. In this case \(\mathbf{c} = \mathbf{0}\).
- solve()¶
- class sporco.admm.cbpdntv.ConvBPDNVectorTV(*args, **kwargs)[source]¶
Bases: ConvBPDNScalarTV
ADMM algorithm for an extension of Convolutional BPDN including a term penalising the vector total variation of the coefficient maps [58].
Solve the optimisation problem
\[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \mu \left\| \sqrt{\sum_m \sum_i (G_i \mathbf{x}_m)^2} \right\|_1 \;\;,\]where \(G_i\) is an operator computing the derivative along index \(i\), via the ADMM problem
\[\begin{split}\mathrm{argmin}_\mathbf{x} \; (1/2) \left\| D \mathbf{x} - \mathbf{s} \right\|_2^2 + \lambda \| \mathbf{y}_L \|_1 + \mu \left\| \sqrt{\sum_{i=0}^{L-1} I_B \mathbf{y}_i^2} \right\|_1 \quad \text{ such that } \quad \left( \begin{array}{c} \Gamma_0 \\ \Gamma_1 \\ \vdots \\ I \end{array} \right) \mathbf{x} = \left( \begin{array}{c} \mathbf{y}_0 \\ \mathbf{y}_1 \\ \vdots \\ \mathbf{y}_L \end{array} \right) \;\;,\end{split}\]where
\[\begin{split}D = \left( \begin{array}{ccc} D_0 & D_1 & \ldots \end{array} \right) \qquad \mathbf{x} = \left( \begin{array}{c} \mathbf{x}_0 \\ \mathbf{x}_1 \\ \vdots \end{array} \right) \qquad \Gamma_i = \left( \begin{array}{ccc} G_i & 0 & \ldots \\ 0 & G_i & \ldots \\ \vdots & \vdots & \ddots \end{array} \right) \qquad I_B = \left( \begin{array}{ccc} I & I & \ldots \end{array} \right) \;\;.\end{split}\]For multi-channel signals with a single-channel dictionary, vector TV is applied jointly over the coefficient maps for channel \(c\) and filter \(m\). Since multi-channel signals with a multi-channel dictionary also have one coefficient map per filter, the behaviour is the same as for single-channel signals.
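The distinction from the scalar TV penalty is the placement of the sum over \(m\): vector TV couples the coefficient maps under a single square root. A numpy sketch (again with circular forward differences standing in for the \(G_i\) operators); since the square root is subadditive, the vector TV of a set of maps never exceeds the corresponding scalar TV:

```python
import numpy as np

def vector_tv(X):
    """Vector TV of coefficient maps X with shape (N1, N2, M):
    || sqrt( sum_m sum_i (G_i x_m)^2 ) ||_1."""
    g0 = np.roll(X, -1, axis=0) - X  # G_0 x_m
    g1 = np.roll(X, -1, axis=1) - X  # G_1 x_m
    return np.sum(np.sqrt(np.sum(g0**2 + g1**2, axis=-1)))

def scalar_tv(X):
    """Scalar TV: sum_m || sqrt( sum_i (G_i x_m)^2 ) ||_1."""
    g0 = np.roll(X, -1, axis=0) - X
    g1 = np.roll(X, -1, axis=1) - X
    return np.sum(np.sqrt(g0**2 + g1**2))

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8, 4))
print(vector_tv(X) <= scalar_tv(X))  # True: sqrt is subadditive
```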
After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The fields of the named tuple IterationStats are:
Iter
: Iteration number
ObjFun
: Objective function value
DFid
: Value of data fidelity term \((1/2) \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \|_2^2\)
RegL1
: Value of regularisation term \(\sum_m \| \mathbf{x}_m \|_1\)
RegTV
: Value of regularisation term \(\left\| \sqrt{\sum_m \sum_i (G_i \mathbf{x}_m)^2} \right\|_1\)
PrimalRsdl
: Norm of primal residual
DualRsdl
: Norm of dual residual
EpsPrimal
: Primal residual stopping tolerance \(\epsilon_{\mathrm{pri}}\)
EpsDual
: Dual residual stopping tolerance \(\epsilon_{\mathrm{dua}}\)
Rho
: Penalty parameter
XSlvRelRes
: Relative residual of X step solver
Time
: Cumulative run time
- Parameters:
- D : array_like
Dictionary matrix
- S : array_like
Signal vector or matrix
- lmbda : float
Regularisation parameter (l1)
- mu : float
Regularisation parameter (gradient)
- opt : ConvBPDNScalarTV.Options object
Algorithm options
- dimK : 0, 1, or None, optional (default None)
Number of dimensions in input signal corresponding to multiple independent signals
- dimN : int, optional (default 2)
Number of spatial dimensions
- solve()¶
- class sporco.admm.cbpdntv.ConvBPDNRecTV(*args, **kwargs)[source]¶
Bases: ADMM
ADMM algorithm for an extension of Convolutional BPDN including terms penalising the total variation of the reconstruction from the sparse representation [58].
Solve the optimisation problem
\[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \mu \left\| \sqrt{\sum_i \left( G_i \left( \sum_m \mathbf{d}_m * \mathbf{x}_m \right) \right)^2} \right\|_1 \;\;,\]where \(G_i\) is an operator computing the derivative along index \(i\), via the ADMM problem
\[\begin{split}\mathrm{argmin}_\mathbf{x} \; (1/2) \left\| D \mathbf{x} - \mathbf{s} \right\|_2^2 + \lambda \| \mathbf{y}_0 \|_1 + \mu \left\| \sqrt{\sum_{i=1}^L \mathbf{y}_i^2} \right\|_1 \quad \text{ such that } \quad \left( \begin{array}{c} I \\ \Gamma_0 \\ \Gamma_1 \\ \vdots \\ \Gamma_{L-1} \end{array} \right) \mathbf{x} = \left( \begin{array}{c} \mathbf{y}_0 \\ \mathbf{y}_1 \\ \mathbf{y}_2 \\ \vdots \\ \mathbf{y}_L \end{array} \right) \;\;,\end{split}\]where
\[\begin{split}D = \left( \begin{array}{ccc} D_0 & D_1 & \ldots \end{array} \right) \qquad \mathbf{x} = \left( \begin{array}{c} \mathbf{x}_0 \\ \mathbf{x}_1 \\ \vdots \end{array} \right) \qquad \Gamma_i = \left( \begin{array}{ccc} G_{i,0} & G_{i,1} & \ldots \end{array} \right) \;\;,\end{split}\]and linear operator \(G_{i,m}\) is defined such that
\[G_{i,m} \mathbf{x} = \mathbf{g}_i * \mathbf{d}_m * \mathbf{x} \;\;,\]where \(\mathbf{g}_i\) is the filter corresponding to \(G_i\), i.e. \(G_i \mathbf{x} = \mathbf{g}_i * \mathbf{x}\).
For multi-channel signals, vector TV is applied jointly over the reconstructions of all channels.
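The definition of \(G_{i,m}\) relies only on the associativity (and commutativity) of convolution: filtering the reconstruction with \(\mathbf{g}_i\) is the same as filtering each coefficient map with the pre-convolved filter \(\mathbf{g}_i * \mathbf{d}_m\). A 1-D numpy check (illustrative; full linear convolution is used here, whereas the solver works with circular convolution):

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.array([1.0, -1.0])      # gradient (forward difference) filter g_i
d = rng.standard_normal(5)     # dictionary filter d_m
x = rng.standard_normal(32)    # coefficient map x

# g_i * (d_m * x) == (g_i * d_m) * x, so G_{i,m} x = g_i * d_m * x
lhs = np.convolve(g, np.convolve(d, x))
rhs = np.convolve(np.convolve(g, d), x)
print(np.allclose(lhs, rhs))  # True: convolution is associative
```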
After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The fields of the named tuple IterationStats are:
Iter
: Iteration number
ObjFun
: Objective function value
DFid
: Value of data fidelity term \((1/2) \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \|_2^2\)
RegL1
: Value of regularisation term \(\sum_m \| \mathbf{x}_m \|_1\)
RegTV
: Value of regularisation term \(\left\| \sqrt{\sum_i \left( G_i \left( \sum_m \mathbf{d}_m * \mathbf{x}_m \right) \right)^2} \right\|_1\)
PrimalRsdl
: Norm of primal residual
DualRsdl
: Norm of dual residual
EpsPrimal
: Primal residual stopping tolerance \(\epsilon_{\mathrm{pri}}\)
EpsDual
: Dual residual stopping tolerance \(\epsilon_{\mathrm{dua}}\)
Rho
: Penalty parameter
XSlvRelRes
: Relative residual of X step solver
Time
: Cumulative run time
- Parameters:
- D : array_like
Dictionary matrix
- S : array_like
Signal vector or matrix
- lmbda : float
Regularisation parameter (l1)
- mu : float
Regularisation parameter (gradient)
- opt : ConvBPDNRecTV.Options object
Algorithm options
- dimK : 0, 1, or None, optional (default None)
Number of dimensions in input signal corresponding to multiple independent signals
- dimN : int, optional (default 2)
Number of spatial dimensions
- class Options(opt=None)[source]¶
Bases: Options

ConvBPDNRecTV algorithm options

Options include all of those defined in admm.cbpdn.ConvBPDN.Options, together with additional options:
TVWeight
: An array of weights \(w_m\) applied to the filters in the term penalising the gradient of the reconstruction. If this option is defined, the regularisation term is \(\left\| \sqrt{\sum_i \left( G_i \left( \sum_m w_m (\mathbf{d}_m * \mathbf{x}_m) \right) \right)^2} \right\|_1\), where \(w_m\) is the weight for filter index \(m\). The array should be an \(M\)-vector, where \(M\) is the number of filters in the dictionary.
- Parameters:
- opt : dict or None, optional (default None)
ConvBPDNRecTV algorithm options
- defaults = {'AbsStopTol': 0.0, 'AutoRho': {'AutoScaling': True, 'Enabled': True, 'Period': 1, 'RsdlRatio': 1.2, 'RsdlTarget': None, 'Scaling': 1000.0, 'StdResiduals': False}, 'AuxVarObj': False, 'Callback': None, 'DataType': None, 'FastSolve': False, 'HighMemSolve': False, 'IterTimer': 'solve', 'L1Weight': 1.0, 'LinSolveCheck': False, 'MaxMainIter': 1000, 'NoBndryCross': False, 'NonNegCoef': False, 'RelStopTol': 0.001, 'RelaxParam': 1.8, 'ReturnX': False, 'StatusHeader': True, 'TVWeight': 1.0, 'U0': None, 'Verbose': False, 'Y0': None, 'fEvalX': True, 'gEvalY': False, 'rho': None}¶
Default content and allowed dict keys
- itstat_fields_objfn = ('ObjFun', 'DFid', 'RegL1', 'RegTV')¶
Fields in IterationStats associated with the objective function; see
eval_objfn
- itstat_fields_extra = ('XSlvRelRes',)¶
Non-standard fields in IterationStats; see
itstat_extra
- hdrtxt_objfn = ('Fnc', 'DFid', 'Regℓ1', 'RegTV')¶
Display column headers associated with the objective function; see
eval_objfn
- hdrval_objfun = {'DFid': 'DFid', 'Fnc': 'ObjFun', 'RegTV': 'RegTV', 'Regℓ1': 'RegL1'}¶
Dictionary mapping display column headers in
hdrtxt_objfn
to IterationStats entries
- obfn_fvarf()[source]¶
Variable to be evaluated in computing data fidelity term, depending on
fEvalX
option value.
- var_y0()[source]¶
Get \(\mathbf{y}_0\) variable, the block of \(\mathbf{y}\) corresponding to the identity operator.
- var_y1()[source]¶
Get \(\mathbf{y}_1\) variable, consisting of all blocks of \(\mathbf{y}\) corresponding to a gradient operator.
- var_yx()[source]¶
Get component block of \(\mathbf{y}\) that is constrained to be equal to \(\mathbf{x}\).
- var_yx_idx()[source]¶
Get index expression for component block of \(\mathbf{y}\) that is constrained to be equal to \(\mathbf{x}\).
- obfn_g0var()[source]¶
Variable to be evaluated in computing the TV regularisation term, depending on the
gEvalY
option value.
- obfn_g1var()[source]¶
Variable to be evaluated in computing the \(\ell_1\) regularisation term, depending on the
gEvalY
option value.
- obfn_gvar()[source]¶
Method providing compatibility with the interface of admm.cbpdn.ConvBPDN and derived classes, in order to make this class compatible with classes such as AddMaskSim.
- eval_objfn()[source]¶
Compute components of objective function as well as total contribution to objective function.
- obfn_dfd()[source]¶
Compute data fidelity term \((1/2) \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \|_2^2\).
- cnst_A0(X)[source]¶
Compute \(A_0 \mathbf{x}\) component of ADMM problem constraint. In this case \(A_0 \mathbf{x} = \mathbf{x}\).
- cnst_A0T(Y0)[source]¶
Compute \(A_0^T \mathbf{y}_0\) component of \(A^T \mathbf{y}\). In this case \(A_0^T \mathbf{y}_0 = \mathbf{y}_0\), i.e. \(A_0 = I\).
- cnst_A1(X, Xf=None)[source]¶
Compute \(A_1 \mathbf{x}\) component of ADMM problem constraint. In this case \(A_1 \mathbf{x} = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots )^T \mathbf{x}\).
- cnst_A1T(Y1)[source]¶
Compute \(A_1^T \mathbf{y}_1\) component of \(A^T \mathbf{y}\). In this case \(A_1^T \mathbf{y}_1 = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots) \mathbf{y}_1\).
- cnst_A(X, Xf=None)[source]¶
Compute \(A \mathbf{x}\) component of ADMM problem constraint. In this case \(A \mathbf{x} = (I \;\; \Gamma_0^T \;\; \Gamma_1^T \;\; \ldots)^T \mathbf{x}\).
- cnst_AT(Y)[source]¶
Compute \(A^T \mathbf{y}\). In this case \(A^T \mathbf{y} = (I \;\; \Gamma_0^T \;\; \Gamma_1^T \;\; \ldots) \mathbf{y}\).
- cnst_B(Y)[source]¶
Compute \(B \mathbf{y}\) component of ADMM problem constraint. In this case \(B \mathbf{y} = -\mathbf{y}\).
- solve()¶