## Classes

ADMM(Nx, yshape, ushape, dtype[, opt])
  Base class for Alternating Direction Method of Multipliers (ADMM) algorithms.

ADMMEqual(xshape, dtype[, opt])
  Base class for ADMM algorithms with a simple equality constraint.

ADMMTwoBlockCnstrnt(Nx, yshape, blkaxis, …)
  Base class for ADMM algorithms for problems for which $$g(\mathbf{y}) = g_0(\mathbf{y}_0) + g_1(\mathbf{y}_1)$$ with $$\mathbf{y}^T = (\mathbf{y}_0^T \; \mathbf{y}_1^T)$$.

ADMMConsensus(Nb, yshape, dtype[, opt])
  Base class for ADMM algorithms with a global variable consensus structure (see Ch. 7 of Boyd et al. 2010).

## Class Descriptions

class sporco.admm.admm.ADMM(Nx, yshape, ushape, dtype, opt=None)[source]

Base class for Alternating Direction Method of Multipliers (ADMM) algorithms.

Solve an optimisation problem of the form

$\mathrm{argmin}_{\mathbf{x},\mathbf{y}} \; f(\mathbf{x}) + g(\mathbf{y}) \;\mathrm{such\;that}\; A\mathbf{x} + B\mathbf{y} = \mathbf{c} \;\;.$

This class is intended to be a base class of other classes that specialise to specific optimisation problems.
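The iterations implemented by the solve framework follow the standard scaled-form ADMM updates, sketched here with $$\mathbf{u}$$ denoting the scaled dual variable and $$\rho$$ the penalty parameter:

$\begin{split}\mathbf{x}^{(k+1)} = \mathrm{argmin}_{\mathbf{x}} \; f(\mathbf{x}) + \frac{\rho}{2} \left\| A\mathbf{x} + B\mathbf{y}^{(k)} - \mathbf{c} + \mathbf{u}^{(k)} \right\|_2^2 \\ \mathbf{y}^{(k+1)} = \mathrm{argmin}_{\mathbf{y}} \; g(\mathbf{y}) + \frac{\rho}{2} \left\| A\mathbf{x}^{(k+1)} + B\mathbf{y} - \mathbf{c} + \mathbf{u}^{(k)} \right\|_2^2 \\ \mathbf{u}^{(k+1)} = \mathbf{u}^{(k)} + A\mathbf{x}^{(k+1)} + B\mathbf{y}^{(k+1)} - \mathbf{c} \;\;.\end{split}$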

After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The default fields of the named tuple IterationStats are:

Iter : Iteration number

ObjFun : Objective function value

FVal : Value of objective function component $$f$$

GVal : Value of objective function component $$g$$

PrimalRsdl : Norm of primal residual

DualRsdl : Norm of dual residual

EpsPrimal : Primal residual stopping tolerance $$\epsilon_{\mathrm{pri}}$$ (see Sec. 3.3.1 of Boyd et al. 2010)

EpsDual : Dual residual stopping tolerance $$\epsilon_{\mathrm{dua}}$$ (see Sec. 3.3.1 of Boyd et al. 2010)

Rho : Penalty parameter

Time : Cumulative run time

Parameters:

Nx : int
  Size of variable $$\mathbf{x}$$ in objective function

yshape : tuple of ints
  Shape of working variable Y (the auxiliary variable)

ushape : tuple of ints
  Shape of working variable U (the scaled dual variable)

dtype : data-type
  Data type for working variables (overridden by ‘DataType’ option)

opt : ADMM.Options object
  Algorithm options
class Options(opt=None)[source]

Options:

FastSolve : Flag determining whether non-essential computation is skipped. When FastSolve is True and Verbose is False, the functional value and related iteration statistics are not computed. If FastSolve is True and the AutoRho mechanism is disabled, residuals are also not calculated, in which case the residual-based stopping method is also disabled, with the number of iterations determined only by MaxMainIter.

Verbose : Flag determining whether iteration status is displayed.

StatusHeader : Flag determining whether status header and separator are displayed.

DataType : Specify data type for solution variables, e.g. np.float32.

Y0 : Initial value for Y variable.

U0 : Initial value for U variable.

Callback : Callback function to be called at the end of every iteration.

IterTimer : Label of the timer to use for iteration times.

MaxMainIter : Maximum main iterations.

AbsStopTol : Absolute convergence tolerance (see Sec. 3.3.1 of Boyd et al. 2010).

RelStopTol : Relative convergence tolerance (see Sec. 3.3.1 of Boyd et al. 2010).

RelaxParam : Relaxation parameter (see Sec. 3.4.3 of Boyd et al. 2010). Note: relaxation is disabled by setting this value to 1.0.

rho : ADMM penalty parameter $$\rho$$.

AutoRho : Options for adaptive rho strategy (see Wohlberg 2017 and Sec. 3.4.1 of Boyd et al. 2010).

Enabled : Flag determining whether adaptive penalty parameter strategy is enabled.

Period : Iteration period on which rho is updated. If set to 1, the rho update test is applied at every iteration.

Scaling : Multiplier applied to rho when updated ($$\tau$$ in Wohlberg 2017).

RsdlRatio : Primal/dual residual ratio in rho update test ($$\mu$$ in Wohlberg 2017).

RsdlTarget : Residual ratio targeted by auto rho update policy ($$\xi$$ in Wohlberg 2017).

AutoScaling : Flag determining whether the rho scaling value is adaptively determined (see Sec. IV.C in Wohlberg 2017). If enabled, Scaling specifies a maximum allowed multiplier instead of a fixed multiplier.

StdResiduals : Flag determining whether standard residual definitions are used instead of normalised residuals (see Sec. IV.B in Wohlberg 2017).
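The residual-balancing rule that Scaling ($$\tau$$) and RsdlRatio ($$\mu$$) parameterise can be written in a few lines; the following is a simplified illustration of the standard scheme, not the exact library implementation (which also supports AutoScaling and RsdlTarget):

```python
def update_rho(rho, r, s, mu=10.0, tau=2.0):
    """Residual balancing: increase rho when the primal residual r dominates
    the dual residual s, decrease it in the opposite case."""
    if r > mu * s:
        return rho * tau   # primal residual too large: increase rho
    elif s > mu * r:
        return rho / tau   # dual residual too large: decrease rho
    return rho             # residuals balanced: leave rho unchanged
```

In the class itself this test is applied every Period iterations, and rhochange is called whenever the value actually changes.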

Parameters:

opt : dict or None, optional (default None)
  ADMM algorithm options
fwiter = 4

Field width for iteration count display column

fpothr = 2

Field precision for other display columns

itstat_fields_objfn = ('ObjFun', 'FVal', 'GVal')

Fields in IterationStats associated with the objective function; see eval_objfn

itstat_fields_alg = ('PrimalRsdl', 'DualRsdl', 'EpsPrimal', 'EpsDual', 'Rho')

Fields in IterationStats associated with the specific solver algorithm

itstat_fields_extra = ()

Non-standard fields in IterationStats; see itstat_extra

hdrtxt_objfn = ('Fnc', 'f', 'g')

Display column headers associated with the objective function; see eval_objfn

hdrval_objfun = {'Fnc': 'ObjFun', 'f': 'FVal', 'g': 'GVal'}

Dictionary mapping display column headers in hdrtxt_objfn to IterationStats entries

yinit(self, yshape)[source]

Return initialiser for working variable Y

uinit(self, ushape)[source]

Return initialiser for working variable U

solve(self)[source]

Start (or re-start) optimisation. This method implements the framework for the iterations of an ADMM algorithm. The component methods that it calls provide sufficient flexibility via overriding that it is usually not necessary to override this method itself in derived classes.

If option Verbose is True, the progress of the optimisation is displayed at every iteration. At termination of this method, attribute itstat is a list of tuples representing statistics of each iteration, unless option FastSolve is True and option Verbose is False.

Attribute timer is an instance of util.Timer that provides the following labelled timers:

init: Time taken for object initialisation by __init__

solve: Total time taken by call(s) to solve

solve_wo_func: Total time taken by call(s) to solve, excluding time taken to compute functional value and related iteration statistics

solve_wo_rsdl: Total time taken by call(s) to solve, excluding time taken to compute functional value and related iteration statistics as well as time taken to compute residuals and to apply the AutoRho mechanism

runtime

Transitional property providing access to the new timer mechanism. This will be removed in the future.

getmin(self)[source]

Get minimiser after optimisation.

xstep(self)[source]

Minimise Augmented Lagrangian with respect to $$\mathbf{x}$$.

Overriding this method is required.

ystep(self)[source]

Minimise Augmented Lagrangian with respect to $$\mathbf{y}$$.

Overriding this method is required.

ustep(self)[source]

Dual variable update.

relax_AX(self)[source]

Implement relaxation if option RelaxParam != 1.0.

compute_residuals(self)[source]

Compute residuals and stopping thresholds.
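For the simple constraint $$\mathbf{x} = \mathbf{y}$$ (so $$A = I$$, $$B = -I$$, $$\mathbf{c} = 0$$), the standard residual definitions and the stopping tolerances of Sec. 3.3.1 of Boyd et al. 2010 reduce to the following NumPy sketch (an illustration only; the class also supports normalised residual definitions via the StdResiduals option):

```python
import numpy as np

def residuals_and_tols(X, Y, Yprev, U, rho, abs_tol=0.0, rel_tol=1e-4):
    """Primal/dual residual norms and stopping tolerances for x = y."""
    r = np.linalg.norm(X - Y)            # primal residual ||Ax + By - c||
    s = rho * np.linalg.norm(Y - Yprev)  # dual residual ||rho A^T B (y - y_prev)||
    # eps_pri and eps_dua combine an absolute and a relative term
    epri = np.sqrt(X.size) * abs_tol + rel_tol * max(
        np.linalg.norm(X), np.linalg.norm(Y))
    edua = np.sqrt(X.size) * abs_tol + rel_tol * rho * np.linalg.norm(U)
    return r, s, epri, edua
```

Iteration stops once r <= epri and s <= edua (or MaxMainIter is reached).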

classmethod hdrtxt()[source]

Construct tuple of status display column titles.

classmethod hdrval()[source]

Construct dictionary mapping display column title to IterationStats entries.

iteration_stats(self, k, r, s, epri, edua)[source]

Construct iteration stats record tuple.

eval_objfn(self)[source]

Compute components of objective function as well as total contribution to objective function.

itstat_extra(self)[source]

Non-standard entries for the iteration stats record tuple.

getitstat(self)[source]

Get iteration stats as named tuple of arrays instead of array of named tuples.

update_rho(self, k, r, s)[source]

Automatic rho adjustment (see the AutoRho options).

display_start(self)[source]

Set up status display if option selected. NB: this method assumes that the first entry is the iteration count and the last is the rho value.

display_status(self, fmtstr, itst)[source]

Display current iteration status as selection of fields from iteration stats tuple.

display_end(self, nsep)[source]

Terminate status display if option selected.

var_x(self)[source]

Get $$\mathbf{x}$$ variable.

var_y(self)[source]

Get $$\mathbf{y}$$ variable.

var_u(self)[source]

Get $$\mathbf{u}$$ variable.

obfn_f(self, X)[source]

Compute $$f(\mathbf{x})$$ component of ADMM objective function.

Overriding this method is required if eval_objfn is not overridden.

obfn_g(self, Y)[source]

Compute $$g(\mathbf{y})$$ component of ADMM objective function.

Overriding this method is required if eval_objfn is not overridden.

cnst_A(self, X)[source]

Compute $$A \mathbf{x}$$ component of ADMM problem constraint.

Overriding this method is required if methods rsdl_r, rsdl_s, rsdl_rn, and rsdl_sn are not overridden.

cnst_AT(self, X)[source]

Compute $$A^T \mathbf{x}$$ where $$A \mathbf{x}$$ is a component of ADMM problem constraint.

Overriding this method is required if methods rsdl_r, rsdl_s, rsdl_rn, and rsdl_sn are not overridden.

cnst_B(self, Y)[source]

Compute $$B \mathbf{y}$$ component of ADMM problem constraint.

Overriding this method is required if methods rsdl_r, rsdl_s, rsdl_rn, and rsdl_sn are not overridden.

cnst_c(self)[source]

Compute constant component $$\mathbf{c}$$ of ADMM problem constraint.

Overriding this method is required if methods rsdl_r, rsdl_s, rsdl_rn, and rsdl_sn are not overridden.

rsdl_r(self, AX, Y)[source]

Compute primal residual vector.

Overriding this method is required if methods cnst_A, cnst_AT, cnst_B, and cnst_c are not overridden.

rsdl_s(self, Yprev, Y)[source]

Compute dual residual vector.

Overriding this method is required if methods cnst_A, cnst_AT, cnst_B, and cnst_c are not overridden.

rsdl_rn(self, AX, Y)[source]

Compute primal residual normalisation term.

Overriding this method is required if methods cnst_A, cnst_AT, cnst_B, and cnst_c are not overridden.

rsdl_sn(self, U)[source]

Compute dual residual normalisation term.

Overriding this method is required if methods cnst_A, cnst_AT, cnst_B, and cnst_c are not overridden.

rhochange(self)[source]

Action to be taken, if any, when rho parameter is changed.

Overriding this method is optional.

class sporco.admm.admm.ADMMEqual(xshape, dtype, opt=None)[source]

Base class for ADMM algorithms with a simple equality constraint.

Solve optimisation problems of the form

$\mathrm{argmin}_{\mathbf{x},\mathbf{y}} \; f(\mathbf{x}) + g(\mathbf{y}) \;\mathrm{such\;that}\; \mathbf{x} = \mathbf{y} \;\;.$

This class specialises class ADMM, but remains a base class for other classes that specialise to specific optimisation problems.

Parameters:

xshape : tuple of ints
  Shape of working variable X (the primary variable)

dtype : data-type
  Data type for working variables

opt : ADMMEqual.Options object
  Algorithm options
class Options(opt=None)[source]

Bases: sporco.admm.admm.Options

Options include all of those defined in ADMM.Options, together with additional options:

fEvalX : Flag indicating whether the $$f$$ component of the objective function should be evaluated using variable X (True) or Y (False) as its argument.

gEvalY : Flag indicating whether the $$g$$ component of the objective function should be evaluated using variable Y (True) or X (False) as its argument.

ReturnX : Flag indicating whether the return value of the solve method is the X variable (True) or the Y variable (False).

Parameters:

opt : dict or None, optional (default None)
  ADMMEqual algorithm options
getmin(self)[source]

Get minimiser after optimisation.

relax_AX(self)[source]

Implement relaxation if option RelaxParam != 1.0.

obfn_fvar(self)[source]

Variable to be evaluated in computing ADMM.obfn_f, depending on the fEvalX option value.

obfn_gvar(self)[source]

Variable to be evaluated in computing ADMM.obfn_g, depending on the gEvalY option value.

eval_objfn(self)[source]

Compute components of objective function as well as total contribution to objective function.

cnst_A(self, X)[source]

Compute $$A \mathbf{x}$$ component of ADMM problem constraint. In this case $$A \mathbf{x} = \mathbf{x}$$ since the constraint is $$\mathbf{x} = \mathbf{y}$$.

cnst_AT(self, Y)[source]

Compute $$A^T \mathbf{y}$$ where $$A \mathbf{x}$$ is a component of ADMM problem constraint. In this case $$A^T \mathbf{y} = \mathbf{y}$$ since the constraint is $$\mathbf{x} = \mathbf{y}$$.

cnst_B(self, Y)[source]

Compute $$B \mathbf{y}$$ component of ADMM problem constraint. In this case $$B \mathbf{y} = -\mathbf{y}$$ since the constraint is $$\mathbf{x} = \mathbf{y}$$.

cnst_c(self)[source]

Compute constant component $$\mathbf{c}$$ of ADMM problem constraint. In this case $$\mathbf{c} = \mathbf{0}$$ since the constraint is $$\mathbf{x} = \mathbf{y}$$.

rsdl_r(self, AX, Y)[source]

Compute primal residual vector.

rsdl_s(self, Yprev, Y)[source]

Compute dual residual vector.

rsdl_rn(self, AX, Y)[source]

Compute primal residual normalisation term.

rsdl_sn(self, U)[source]

Compute dual residual normalisation term.
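To make the division of labour between these methods concrete, the following self-contained NumPy sketch mirrors the xstep / relax_AX / ystep / ustep structure of an ADMMEqual-style solver for the hypothetical problem $$\mathrm{argmin}_{\mathbf{x}} \; (1/2) \| \mathbf{x} - \mathbf{b} \|_2^2 + \lambda \| \mathbf{x} \|_1$$; it does not use sporco itself:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(b, lmbda, rho=1.0, alpha=1.8, n_iter=100):
    """ADMM for argmin_x 0.5*||x - b||^2 + lmbda*||y||_1  s.t.  x = y."""
    Y = np.zeros_like(b)   # auxiliary variable (cf. option Y0)
    U = np.zeros_like(b)   # scaled dual variable (cf. option U0)
    for _ in range(n_iter):
        X = (b + rho * (Y - U)) / (1.0 + rho)    # xstep: closed form
        AX = alpha * X + (1.0 - alpha) * Y       # relax_AX (RelaxParam)
        Y = soft_threshold(AX + U, lmbda / rho)  # ystep: prox of g
        U = U + AX - Y                           # ustep: dual update
    return Y
```

For scalar input b = 3 and lmbda = 1 this converges to the soft-thresholded value 2, which is the analytic minimiser.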

class sporco.admm.admm.ADMMTwoBlockCnstrnt(Nx, yshape, blkaxis, blkidx, dtype, opt=None)[source]

Base class for ADMM algorithms for problems for which $$g(\mathbf{y}) = g_0(\mathbf{y}_0) + g_1(\mathbf{y}_1)$$ with $$\mathbf{y}^T = (\mathbf{y}_0^T \; \mathbf{y}_1^T)$$.

Solve optimisation problems of the form

$\mathrm{argmin}_{\mathbf{x}} \; f(\mathbf{x}) + g_0(A_0 \mathbf{x}) + g_1(A_1 \mathbf{x})$

via an ADMM problem of the form

$\begin{split}\mathrm{argmin}_{\mathbf{x},\mathbf{y}_0,\mathbf{y}_1} \; f(\mathbf{x}) + g_0(\mathbf{y}_0) + g_1(\mathbf{y}_1) \;\text{such that}\; \left( \begin{array}{c} A_0 \\ A_1 \end{array} \right) \mathbf{x} - \left( \begin{array}{c} \mathbf{y}_0 \\ \mathbf{y}_1 \end{array} \right) = \left( \begin{array}{c} \mathbf{c}_0 \\ \mathbf{c}_1 \end{array} \right) \;\;.\end{split}$

In this case the ADMM constraint is $$A\mathbf{x} + B\mathbf{y} = \mathbf{c}$$ where

$\begin{split}A = \left( \begin{array}{c} A_0 \\ A_1 \end{array} \right) \qquad B = -I \qquad \mathbf{y} = \left( \begin{array}{c} \mathbf{y}_0 \\ \mathbf{y}_1 \end{array} \right) \qquad \mathbf{c} = \left( \begin{array}{c} \mathbf{c}_0 \\ \mathbf{c}_1 \end{array} \right) \;\;.\end{split}$

This class specialises class ADMM, but remains a base class for other classes that specialise to specific optimisation problems.

Parameters:

Nx : int
  Size of variable $$\mathbf{x}$$ in objective function

yshape : tuple of ints
  Shape of working variable Y (the auxiliary variable)

blkaxis : int
  Axis on which $$\mathbf{y}_0$$ and $$\mathbf{y}_1$$ are concatenated to form $$\mathbf{y}$$

blkidx : int
  Index of boundary between $$\mathbf{y}_0$$ and $$\mathbf{y}_1$$ on the axis on which they are concatenated to form $$\mathbf{y}$$

dtype : data-type
  Data type for working variables

opt : ADMMTwoBlockCnstrnt.Options object
  Algorithm options
class Options(opt=None)[source]

Bases: sporco.admm.admm.Options

Options include all of those defined in ADMM.Options, together with additional options:

AuxVarObj : Flag indicating whether the $$g(\mathbf{y})$$ component of the objective function should be evaluated using variable X (False) or Y (True) as its argument. Setting this flag to True often gives a better estimate of the objective function, but at additional computational cost for some problems.

ReturnVar : A string (valid values are ‘X’, ‘Y0’, or ‘Y1’) indicating which of the objective function variables should be returned by the solve method.

Parameters:

opt : dict or None, optional (default None)
  ADMMTwoBlockCnstrnt algorithm options
itstat_fields_objfn = ('ObjFun', 'FVal', 'G0Val', 'G1Val')

Fields in IterationStats associated with the objective function; see eval_objfn

hdrtxt_objfn = ('Fnc', 'f', 'g0', 'g1')

Display column headers associated with the objective function; see eval_objfn

hdrval_objfun = {'Fnc': 'ObjFun', 'f': 'FVal', 'g0': 'G0Val', 'g1': 'G1Val'}

Dictionary mapping display column headers in hdrtxt_objfn to IterationStats entries

getmin(self)[source]

Get minimiser after optimisation.

block_sep0(self, Y)[source]

Separate variable into component corresponding to $$\mathbf{y}_0$$ in $$\mathbf{y}\;\;$$.

block_sep1(self, Y)[source]

Separate variable into component corresponding to $$\mathbf{y}_1$$ in $$\mathbf{y}\;\;$$.

block_sep(self, Y)[source]

Separate variable into components corresponding to blocks $$\mathbf{y}_0$$ and $$\mathbf{y}_1$$ in $$\mathbf{y}\;\;$$.

block_cat(self, Y0, Y1)[source]

Concatenate components corresponding to $$\mathbf{y}_0$$ and $$\mathbf{y}_1$$ to form $$\mathbf{y}\;\;$$.
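In terms of NumPy operations, these block manipulations amount to splitting and concatenating along blkaxis at index blkidx; a minimal sketch:

```python
import numpy as np

def block_sep(Y, blkaxis, blkidx):
    """Split Y into blocks (Y0, Y1) at index blkidx along axis blkaxis."""
    Y0, Y1 = np.split(Y, [blkidx], axis=blkaxis)
    return Y0, Y1

def block_cat(Y0, Y1, blkaxis):
    """Concatenate blocks Y0 and Y1 along axis blkaxis to reform Y."""
    return np.concatenate((Y0, Y1), axis=blkaxis)
```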

relax_AX(self)[source]

Implement relaxation if option RelaxParam != 1.0.

var_y0(self)[source]

Get $$\mathbf{y}_0$$ variable.

var_y1(self)[source]

Get $$\mathbf{y}_1$$ variable.

obfn_fvar(self)[source]

Variable to be evaluated in computing ADMM.obfn_f.

obfn_g0var(self)[source]

Variable to be evaluated in computing ADMMTwoBlockCnstrnt.obfn_g0, depending on the AuxVarObj option value.

obfn_g1var(self)[source]

Variable to be evaluated in computing ADMMTwoBlockCnstrnt.obfn_g1, depending on the AuxVarObj option value.

obfn_f(self, X)[source]

Compute $$f(\mathbf{x})$$ component of ADMM objective function. Unless overridden, $$f(\mathbf{x}) = 0$$.

obfn_g(self, Y)[source]

Compute $$g(\mathbf{y}) = g_0(\mathbf{y}_0) + g_1(\mathbf{y}_1)$$ component of ADMM objective function.

obfn_g0(self, Y0)[source]

Compute $$g_0(\mathbf{y}_0)$$ component of ADMM objective function.

Overriding this method is required.

obfn_g1(self, Y1)[source]

Compute $$g_1(\mathbf{y_1})$$ component of ADMM objective function.

Overriding this method is required.

eval_objfn(self)[source]

Compute components of objective function as well as total contribution to objective function.

cnst_A(self, X)[source]

Compute $$A \mathbf{x}$$ component of ADMM problem constraint.

cnst_AT(self, Y)[source]

Compute $$A^T \mathbf{y}$$ where

$\begin{split}A^T \mathbf{y} = \left( \begin{array}{cc} A_0^T & A_1^T \end{array} \right) \left( \begin{array}{c} \mathbf{y}_0 \\ \mathbf{y}_1 \end{array} \right) = A_0^T \mathbf{y}_0 + A_1^T \mathbf{y}_1 \;\;.\end{split}$

cnst_B(self, Y)[source]

Compute $$B \mathbf{y}$$ component of ADMM problem constraint. In this case $$B \mathbf{y} = -\mathbf{y}$$ since the constraint is $$A \mathbf{x} - \mathbf{y} = \mathbf{c}$$.

cnst_c(self)[source]

Compute constant component $$\mathbf{c}$$ of ADMM problem constraint. This method should not be used or overridden: all calculations should make use of components cnst_c0 and cnst_c1 so that these methods can return scalar zeros instead of zero arrays if appropriate.

cnst_c0(self)[source]

Compute constant component $$\mathbf{c}_0$$ of $$\mathbf{c}$$ in the ADMM problem constraint. Unless overridden, $$\mathbf{c}_0 = 0$$.

cnst_c1(self)[source]

Compute constant component $$\mathbf{c}_1$$ of $$\mathbf{c}$$ in the ADMM problem constraint. Unless overridden, $$\mathbf{c}_1 = 0$$.

cnst_A0(self, X)[source]

Compute $$A_0 \mathbf{x}$$ component of $$A \mathbf{x}$$ in ADMM problem constraint (see cnst_A). Unless overridden, $$A_0 \mathbf{x} = \mathbf{x}$$, i.e. $$A_0 = I$$.

cnst_A0T(self, Y0)[source]

Compute $$A_0^T \mathbf{y}_0$$ component of $$A^T \mathbf{y}$$ (see cnst_AT). Unless overridden, $$A_0^T \mathbf{y}_0 = \mathbf{y}_0$$, i.e. $$A_0 = I$$.

cnst_A1(self, X)[source]

Compute $$A_1 \mathbf{x}$$ component of $$A \mathbf{x}$$ in ADMM problem constraint (see cnst_A). Unless overridden, $$A_1 \mathbf{x} = \mathbf{x}$$, i.e. $$A_1 = I$$.

cnst_A1T(self, Y1)[source]

Compute $$A_1^T \mathbf{y}_1$$ component of $$A^T \mathbf{y}$$ (see cnst_AT). Unless overridden, $$A_1^T \mathbf{y}_1 = \mathbf{y}_1$$, i.e. $$A_1 = I$$.

rsdl_r(self, AX, Y)[source]

Compute primal residual vector.

Overriding this method is required if methods cnst_A, cnst_AT, cnst_c0 and cnst_c1 are not overridden.

rsdl_s(self, Yprev, Y)[source]

Compute dual residual vector.

Overriding this method is required if methods cnst_A, cnst_AT, cnst_B, and cnst_c are not overridden.

rsdl_rn(self, AX, Y)[source]

Compute primal residual normalisation term.

Overriding this method is required if methods cnst_A, cnst_AT, cnst_B, and cnst_c are not overridden.

rsdl_sn(self, U)[source]

Compute dual residual normalisation term.

Overriding this method is required if methods cnst_A, cnst_AT, cnst_B, and cnst_c are not overridden.
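As a concrete illustration of the two-block structure, the following hypothetical NumPy sketch (not using sporco itself) solves $$\mathrm{argmin}_{\mathbf{x}} \; (1/2) \| A \mathbf{x} - \mathbf{b} \|_2^2 + \lambda \| \mathbf{x} \|_1$$ with $$g_0(\mathbf{y}_0) = (1/2) \| \mathbf{y}_0 - \mathbf{b} \|_2^2$$, $$A_0 = A$$, $$g_1 = \lambda \| \cdot \|_1$$, and $$A_1 = I$$:

```python
import numpy as np

def soft(v, t):
    """Elementwise soft thresholding (prox of t * ||.||_1)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_two_block_lasso(A, b, lmbda, rho=1.0, n_iter=200):
    """Two-block ADMM sketch with constraint A0 x - y0 = 0, A1 x - y1 = 0."""
    m, n = A.shape
    Y0, U0 = np.zeros(m), np.zeros(m)
    Y1, U1 = np.zeros(n), np.zeros(n)
    # x-step normal matrix A0^T A0 + A1^T A1 = A^T A + I (factorise once)
    L = np.linalg.cholesky(A.T @ A + np.eye(n))
    for _ in range(n_iter):
        rhs = A.T @ (Y0 - U0) + (Y1 - U1)                  # cnst_A0T / cnst_A1T
        X = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # xstep
        AX0, AX1 = A @ X, X                                # cnst_A0 / cnst_A1
        Y0 = (b + rho * (AX0 + U0)) / (1.0 + rho)          # prox of g0 / rho
        Y1 = soft(AX1 + U1, lmbda / rho)                   # prox of g1 / rho
        U0 = U0 + AX0 - Y0                                 # scaled dual updates
        U1 = U1 + AX1 - Y1
    return X
```

The point of the splitting is that the y-step decouples into two independent proximal operators, one per block.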

class sporco.admm.admm.ADMMConsensus(Nb, yshape, dtype, opt=None)[source]

Base class for ADMM algorithms with a global variable consensus structure (see Ch. 7 of Boyd et al. 2010).

Solve optimisation problems of the form

$\mathrm{argmin}_{\mathbf{x}} \; \sum_i f_i(\mathbf{x}) + g(\mathbf{x})$

via an ADMM problem of the form

$\begin{split}\mathrm{argmin}_{\mathbf{x}_i,\mathbf{y}} \; \sum_i f_i(\mathbf{x}_i) + g(\mathbf{y}) \;\mathrm{such\;that}\; \left( \begin{array}{c} \mathbf{x}_0 \\ \mathbf{x}_1 \\ \vdots \end{array} \right) = \left( \begin{array}{c} I \\ I \\ \vdots \end{array} \right) \mathbf{y} \;\;.\end{split}$

This class specialises class ADMM, but remains a base class for other classes that specialise to specific optimisation problems.

Parameters:

Nb : int
  Number of blocks / consensus components

yshape : tuple of ints
  Shape of variable $$\mathbf{y}$$ in objective function

dtype : data-type
  Data type for working variables

opt : ADMMConsensus.Options object
  Algorithm options
class Options(opt=None)[source]

Bases: sporco.admm.admm.Options

Options include all of those defined in ADMM.Options, together with additional options:

fEvalX : Flag indicating whether the $$f$$ component of the objective function should be evaluated using variable X (True) or Y (False) as its argument.

gEvalY : Flag indicating whether the $$g$$ component of the objective function should be evaluated using variable Y (True) or X (False) as its argument.

AuxVarObj : Flag selecting choices of fEvalX and gEvalY that give meaningful functional values. If True, fEvalX and gEvalY are set to False and True respectively, and vice versa if False. Setting this flag to True often gives a better estimate of the objective function, at some additional computational cost.

Parameters:

opt : dict or None, optional (default None)
  ADMMConsensus algorithm options
getmin(self)[source]

Get minimiser after optimisation.

xstep(self)[source]

Minimise Augmented Lagrangian with respect to block vector $$\mathbf{x} = \left( \begin{array}{ccc} \mathbf{x}_0^T & \mathbf{x}_1^T & \ldots \end{array} \right)^T\;$$.

xistep(self, i)[source]

Minimise Augmented Lagrangian with respect to $$\mathbf{x}$$ component $$\mathbf{x}_i$$.

Overriding this method is required.

ystep(self)[source]

Minimise Augmented Lagrangian with respect to $$\mathbf{y}$$.

prox_g(self, X, rho)[source]

Proximal operator of $$\rho^{-1} g(\cdot)$$.

Overriding this method is required.
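For example, if $$g$$ is the scaled $$\ell_1$$ norm $$g(\mathbf{y}) = \lambda \| \mathbf{y} \|_1$$ (a hypothetical choice for illustration), the required proximal operator of $$\rho^{-1} g(\cdot)$$ has the familiar soft-thresholding closed form:

```python
import numpy as np

def prox_g(X, rho, lmbda=1.0):
    """Proximal operator of (lmbda/rho) * ||.||_1: elementwise soft thresholding."""
    t = lmbda / rho
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
```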

relax_AX(self)[source]

Implement relaxation if option RelaxParam != 1.0.

eval_objfn(self)[source]

Compute components of objective function as well as total contribution to objective function.

obfn_fvar(self, i)[source]

Variable to be evaluated in computing $$f_i(\cdot)$$, depending on the fEvalX option value.

obfn_gvar(self)[source]

Variable to be evaluated in computing $$g(\cdot)$$, depending on the gEvalY option value.

obfn_f(self)[source]

Compute $$f(\mathbf{x}) = \sum_i f_i(\mathbf{x}_i)$$ component of ADMM objective function.

obfn_fi(self, X, i)[source]

Compute $$f_i(\mathbf{x}_i)$$ component of ADMM objective function.

Overriding this method is required.

rsdl_r(self, AX, Y)[source]

Compute primal residual vector.

rsdl_s(self, Yprev, Y)[source]

Compute dual residual vector.

rsdl_rn(self, AX, Y)[source]

Compute primal residual normalisation term.

rsdl_sn(self, U)[source]

Compute dual residual normalisation term.
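The consensus structure can be illustrated with a small self-contained NumPy sketch (a hypothetical example, not using sporco itself) for $$f_i(\mathbf{x}) = (1/2) \| \mathbf{x} - \mathbf{b}_i \|_2^2$$ and $$g = \lambda \| \cdot \|_1$$; the y-step reduces to the proximal operator of $$g / (N_b \rho)$$ applied to the average of the $$\mathbf{x}_i + \mathbf{u}_i$$:

```python
import numpy as np

def admm_consensus(B, lmbda, rho=1.0, n_iter=200):
    """Consensus ADMM sketch for argmin_x sum_i 0.5*||x - b_i||^2 + lmbda*||x||_1,
    where the rows of B are the b_i. Mirrors the xistep / ystep / ustep structure."""
    Nb, n = B.shape
    X = np.zeros((Nb, n))  # block variable: one x_i per consensus component
    U = np.zeros((Nb, n))  # scaled dual variables, one u_i per component
    Y = np.zeros(n)        # global consensus variable
    for _ in range(n_iter):
        X = (B + rho * (Y - U)) / (1.0 + rho)  # xistep for each i (closed form)
        Z = np.mean(X + U, axis=0)             # average entering the y-step
        t = lmbda / (Nb * rho)
        Y = np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)  # ystep: prox of g/(Nb*rho)
        U = U + X - Y                          # ustep: dual update per block
    return Y
```

The analytic minimiser here is the soft-thresholded mean of the $$\mathbf{b}_i$$ with threshold $$\lambda / N_b$$, which the iterates approach.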