sporco.dictlrn.prlcnscdl

Parallel consensus convolutional dictionary learning

Classes

ConvBPDNDictLearn_Consensus(*args, **kwargs)

Dictionary learning based on Convolutional BPDN [49] and an ADMM Consensus solution of the constrained dictionary update problem [1].

ConvBPDNMaskDcplDictLearn_Consensus(*args, ...)

Dictionary learning based on Convolutional BPDN with Mask Decoupling [28] and the hybrid Mask Decoupling/Consensus solution of the constrained dictionary update problem proposed in [26].


Class Descriptions

class sporco.dictlrn.prlcnscdl.ConvBPDNDictLearn_Consensus(*args, **kwargs)[source]

Bases: sporco.dictlrn.cbpdndl.ConvBPDNDictLearn

Dictionary learning based on Convolutional BPDN [49] and an ADMM Consensus solution of the constrained dictionary update problem [1].


Inheritance diagram of ConvBPDNDictLearn_Consensus


The dictionary learning algorithm itself is as described in [26]. The sparse coding of each training image and the individual consensus problem components are computed in parallel, giving a substantial computational advantage, on a multi-core host, over dictlrn.cbpdndl.ConvBPDNDictLearn with the consensus solver (dmethod = 'cns') for the constrained dictionary update problem.

Solve the optimisation problem

\[\mathrm{argmin}_{\mathbf{d}, \mathbf{x}} \; (1/2) \sum_k \left \| \sum_m \mathbf{d}_m * \mathbf{x}_{k,m} - \mathbf{s}_k \right \|_2^2 + \lambda \sum_k \sum_m \| \mathbf{x}_{k,m} \|_1 \quad \text{such that} \quad \mathbf{d}_m \in C \;\; \forall m \;,\]

where \(C\) is the feasible set consisting of filters with unit norm and constrained support, via interleaved alternation between the ADMM steps of the sparse coding and dictionary update algorithms. Multi-channel signals with multi-channel dictionaries are supported. Unlike dictlrn.cbpdndl.ConvBPDNDictLearn, multi-channel signals with single-channel dictionaries are not supported; the user must instead convert each channel of a multi-channel input signal into a separate single-channel signal instance.

This class is derived from dictlrn.cbpdndl.ConvBPDNDictLearn so that the variable initialisation of its parent can be re-used. The entire solve infrastructure is overridden in this class, without any use of inherited functionality. Variables initialised by the parent class that are non-singleton on axis axisK have this axis swapped with axis 0 for simpler and more computationally efficient indexing. Note that automatic penalty parameter selection (see option AutoRho in admm.ADMM.Options) is not supported; any such option settings are silently ignored.

After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The fields of the named tuple IterationStats are:

Iter : Iteration number

ObjFun : Objective function value

DFid : Value of data fidelity term \((1/2) \sum_k \| \sum_m \mathbf{d}_m * \mathbf{x}_{k,m} - \mathbf{s}_k \|_2^2\)

RegL1 : Value of regularisation term \(\sum_k \sum_m \| \mathbf{x}_{k,m} \|_1\)

Time : Cumulative run time

Parameters
D0 : array_like

Initial dictionary array

S : array_like

Signal array

lmbda : float

Regularisation parameter

opt : dictlrn.cbpdndl.ConvBPDNDictLearn.Options object

Algorithm options

nproc : int

Number of parallel processes to use

dimK : int, optional (default 1)

Number of signal dimensions. If there is only a single input signal (e.g. if S is a 2D array representing a single image) dimK must be set to 0.

dimN : int, optional (default 2)

Number of spatial/temporal dimensions
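
The following is a minimal usage sketch. The training set shape, filter size, regularisation parameter value and process count are illustrative assumptions rather than recommended settings, and the random arrays merely stand in for real training data and a properly initialised dictionary:

import numpy as np
from sporco.dictlrn import prlcnscdl

# Stand-in training data: K = 10 greyscale images of size 256 x 256,
# stacked on the final axis (consistent with the defaults dimK=1, dimN=2)
S = np.random.randn(256, 256, 10)

# Random initial dictionary of 32 filters of size 8 x 8
D0 = np.random.randn(8, 8, 32)

# Basic algorithm options (inherited from the parent Options class)
opt = prlcnscdl.ConvBPDNDictLearn_Consensus.Options(
    {'Verbose': True, 'MaxMainIter': 100})

# Construct the learner with regularisation parameter lmbda, run it on
# four parallel processes, and extract the learned (cropped) dictionary
d = prlcnscdl.ConvBPDNDictLearn_Consensus(D0, S, lmbda=0.1, opt=opt,
                                          nproc=4)
d.solve()
D = d.getdict()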

class Options(opt=None)[source]

Bases: sporco.dictlrn.cbpdndl.ConvBPDNDictLearn.Options

ConvBPDNDictLearn_Consensus algorithm options

Options are the same as defined in cbpdndl.ConvBPDNDictLearn.Options.

Parameters
opt : dict or None, optional (default None)

ConvBPDNDictLearn_Consensus algorithm options

fwiter = 4

Field width for iteration count display column

fpothr = 2

Field precision for other display columns
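
Since automatic penalty parameter selection is not supported by this class, the ADMM penalty parameters of the two sub-problems are typically fixed by hand through the inherited sub-option dictionaries. The sketch below assumes the 'CBPDN' and 'CCMOD' sub-option keys of the parent Options class; the rho values are purely illustrative:

from sporco.dictlrn import prlcnscdl

lmbda = 0.1
# AutoRho is ignored by this class, so fix the penalty parameters of the
# sparse coding (CBPDN) and dictionary update (CCMOD) sub-problems manually
opt = prlcnscdl.ConvBPDNDictLearn_Consensus.Options(
    {'Verbose': True, 'MaxMainIter': 100,
     'CBPDN': {'rho': 50.0 * lmbda + 0.5},
     'CCMOD': {'rho': 1.0}})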

step()[source]

Do a single iteration over all cbpdn and ccmod steps. Those that are not coupled on the K axis are performed in parallel.

solve()[source]

Start (or re-start) optimisation. This method implements the framework for the alternation between X and D updates in a dictionary learning algorithm.

If option Verbose is True, the progress of the optimisation is displayed at every iteration. At termination of this method, attribute itstat is a list of tuples representing statistics of each iteration.

Attribute timer is an instance of util.Timer that provides the following labelled timers:

init: Time taken for object initialisation by __init__

solve: Total time taken by call(s) to solve

solve_wo_func: Total time taken by call(s) to solve, excluding time taken to compute functional value and related iteration statistics

getdict(crop=True)[source]

Get final dictionary. If crop is True, apply cnvrep.bcrop to the returned array.

getcoef()[source]

Get final coefficient map array.

evaluate()[source]

Evaluate functional value of previous iteration.

getitstat()[source]

Get iteration stats as named tuple of arrays instead of array of named tuples.
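
As a brief illustration, the iteration statistics of the object d constructed in the earlier usage sketch can be inspected as follows:

# Named tuple of per-iteration arrays with fields Iter, ObjFun, DFid,
# RegL1 and Time, as listed above
its = d.getitstat()
print("Final functional value: %.3e (data fidelity %.3e, l1 term %.3e)"
      % (its.ObjFun[-1], its.DFid[-1], its.RegL1[-1]))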

class sporco.dictlrn.prlcnscdl.ConvBPDNMaskDcplDictLearn_Consensus(*args, **kwargs)[source]

Bases: sporco.dictlrn.cbpdndlmd.ConvBPDNMaskDictLearn

Dictionary learning based on Convolutional BPDN with Mask Decoupling [28] and the hybrid Mask Decoupling/Consensus solution of the constrained dictionary update problem proposed in [26].


Inheritance diagram of ConvBPDNMaskDcplDictLearn_Consensus


The dictionary learning algorithm itself is as described in [26]. The sparse coding of each training image and the individual consensus problem components are computed in parallel, giving a substantial computational advantage, on a multi-core host, over cbpdndlmd.ConvBPDNMaskDictLearn with the consensus solver (method = 'cns') for the constrained dictionary update problem.

Solve the optimisation problem

\[\mathrm{argmin}_{\mathbf{d}, \mathbf{x}} \; (1/2) \sum_k \left \| W ( \sum_m \mathbf{d}_m * \mathbf{x}_{k,m} - \mathbf{s}_k ) \right \|_2^2 + \lambda \sum_k \sum_m \| \mathbf{x}_{k,m} \|_1 \quad \text{such that} \quad \mathbf{d}_m \in C \;\; \forall m \;,\]

where \(W\) is a mask array and \(C\) is the feasible set consisting of filters with unit norm and constrained support, via interleaved alternation between the ADMM steps of the sparse coding and dictionary update algorithms. Multi-channel signals with multi-channel dictionaries are supported. Unlike dictlrn.cbpdndl.ConvBPDNDictLearn, multi-channel signals with single-channel dictionaries are not supported; the user must instead convert each channel of a multi-channel input signal into a separate single-channel signal instance.

This class is derived from cbpdndlmd.ConvBPDNMaskDictLearn so that the variable initialisation of its parent can be re-used. The entire solve infrastructure is overridden in this class, without any use of inherited functionality. Variables initialised by the parent class that are non-singleton on axis axisK have this axis swapped with axis 0 for simpler and more computationally efficient indexing. Note that automatic penalty parameter selection (see option AutoRho in admm.ADMM.Options) is not supported; any such option settings are silently ignored.

After termination of the solve method, attribute itstat is a list of tuples representing statistics of each iteration. The fields of the named tuple IterationStats are:

Iter : Iteration number

ObjFun : Objective function value

DFid : Value of data fidelity term \((1/2) \sum_k \| W ( \sum_m \mathbf{d}_m * \mathbf{x}_{k,m} - \mathbf{s}_k ) \|_2^2\)

RegL1 : Value of regularisation term \(\sum_k \sum_m \| \mathbf{x}_{k,m} \|_1\)

Time : Cumulative run time

Parameters
D0 : array_like

Initial dictionary array

S : array_like

Signal array

lmbda : float

Regularisation parameter

W : array_like

Mask array. The array shape must be such that the array is compatible for multiplication with the internal shape of input array S (see cnvrep.CDU_ConvRepIndexing for a discussion of the distinction between external and internal data layouts) after reshaping to the shape determined by cnvrep.mskWshape.

opt : cbpdndlmd.ConvBPDNMaskDictLearn.Options object

Algorithm options

nproc : int

Number of parallel processes to use

dimK : int, optional (default 1)

Number of signal dimensions. If there is only a single input signal (e.g. if S is a 2D array representing a single image) dimK must be set to 0.

dimN : int, optional (default 2)

Number of spatial/temporal dimensions
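
A minimal usage sketch for the masked variant follows. It assumes a mask W of the same shape as S, which is one way of satisfying the compatibility requirement described above for single-channel training images; all array sizes and parameter values are illustrative assumptions:

import numpy as np
from sporco.dictlrn import prlcnscdl

# Stand-in training data: K = 10 greyscale images of size 256 x 256,
# stacked on the final axis (defaults dimK=1, dimN=2)
S = np.random.randn(256, 256, 10)

# Binary mask of the same shape as S: 1 marks a known pixel, 0 a missing
# one, so missing pixels are excluded from the data fidelity term
W = (np.random.rand(256, 256, 10) > 0.25).astype(S.dtype)

# Random initial dictionary of 32 filters of size 8 x 8
D0 = np.random.randn(8, 8, 32)

opt = prlcnscdl.ConvBPDNMaskDcplDictLearn_Consensus.Options(
    {'Verbose': True, 'MaxMainIter': 100})

# Construct the learner, run it on four parallel processes, and extract
# the learned (cropped) dictionary
d = prlcnscdl.ConvBPDNMaskDcplDictLearn_Consensus(D0, S, lmbda=0.1, W=W,
                                                  opt=opt, nproc=4)
d.solve()
D = d.getdict()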

class Options(opt=None)[source]

Bases: sporco.dictlrn.cbpdndlmd.ConvBPDNMaskDictLearn.Options

ConvBPDNMaskDcplDictLearn_Consensus algorithm options

Options are the same as defined in cbpdndlmd.ConvBPDNMaskDictLearn.Options.

Parameters
opt : dict or None, optional (default None)

ConvBPDNMaskDcplDictLearn_Consensus algorithm options

fwiter = 4

Field width for iteration count display column

fpothr = 2

Field precision for other display columns

step()[source]

Do a single iteration over all cbpdn and ccmod steps. Those that are not coupled on the K axis are performed in parallel.

solve()[source]

Start (or re-start) optimisation. This method implements the framework for the alternation between X and D updates in a dictionary learning algorithm.

If option Verbose is True, the progress of the optimisation is displayed at every iteration. At termination of this method, attribute itstat is a list of tuples representing statistics of each iteration.

Attribute timer is an instance of util.Timer that provides the following labelled timers:

init: Time taken for object initialisation by __init__

solve: Total time taken by call(s) to solve

solve_wo_func: Total time taken by call(s) to solve, excluding time taken to compute functional value and related iteration statistics

getdict(crop=True)[source]

Get final dictionary. If crop is True, apply cnvrep.bcrop to the returned array.

getcoef()[source]

Get final coefficient map array.

evaluate()[source]

Evaluate functional value of previous iteration.

getitstat()[source]

Get iteration stats as named tuple of arrays instead of array of named tuples.