Modules cbpdn and cbpdntv¶
Module admm.cbpdn includes the following classes:
- Solve the basic Convolutional BPDN problem (see [53])
  \[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right \|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1\]
  The function sporco.cuda.cbpdn provides a GPU accelerated solver for this problem if the sporco-cuda extension package is installed.
- Solve the Convolutional BPDN problem with joint sparsity over multiple signal channels via an \(\ell_{2,1}\) norm term (see [51])
  \[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \sum_c \left\| \sum_m \mathbf{d}_m * \mathbf{x}_{c,m} - \mathbf{s}_c \right\|_2^2 + \lambda \sum_c \sum_m \| \mathbf{x}_{c,m} \|_1 + \mu \| \{ \mathbf{x}_{c,m} \} \|_{2,1}\]
- Solve the Convolutional Elastic Net (i.e. Convolutional BPDN with an additional \(\ell_2\) penalty on the coefficient maps)
  \[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right \|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \frac{\mu}{2} \sum_m \| \mathbf{x}_m \|_2^2\]
- Solve Convolutional BPDN with an additional \(\ell_2\) penalty on the gradient of the coefficient maps (see [52])
  \[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right \|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \frac{\mu}{2} \sum_i \sum_m \| G_i \mathbf{x}_m \|_2^2\]
  where \(G_i\) is an operator computing the derivative along index \(i\). The function sporco.cuda.cbpdngrd provides a GPU accelerated solver for this problem if the sporco-cuda extension package is installed.
- Solve a Convolutional Sparse Coding problem with an \(\ell_1\) data fidelity term, an \(\ell_1\) regularisation term, and an \(\ell_2\) penalty on the gradient of the coefficient maps (see [52])
  \[\mathrm{argmin}_\mathbf{x} \; \left\| W \left(\sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s}\right) \right\|_1 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \frac{\mu}{2} \sum_i \sum_m \| G_i \mathbf{x}_m \|_2^2\]
  where \(W\) is a mask array and \(G_i\) is an operator computing the derivative along index \(i\).
- Solve the convolutional sparse representation problem with an \(\ell_2\) objective and an \(\ell_1\) constraint
  \[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 \; \text{such that} \; \| \mathbf{x}_m \|_1 \leq \gamma\]
- Solve the convolutional sparse representation problem with an \(\ell_1\) objective and an \(\ell_2\) constraint
  \[\mathrm{argmin}_\mathbf{x} \sum_m \| \mathbf{x}_m \|_1 \; \text{such that} \; \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2 \leq \epsilon\]
- Solve Convolutional BPDN with Mask Decoupling (see [28])
  \[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| W \left(\sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s}\right) \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1\]
  where \(W\) is a mask array.
- A wrapper class for applying the Additive Mask Simulation (AMS) boundary handling technique (see [50]) to any of the other admm.cbpdn classes. If the sporco-cuda extension package is installed, functions sporco.cuda.cbpdnmsk and sporco.cuda.cbpdngrdmsk provide GPU accelerated solvers for AMS variants of the ConvBPDN and ConvBPDNGradReg problems.
- A wrapper class for solving a convolutional sparse coding problem fitting a single set of coefficient maps to multiple dictionaries and signals, e.g. when applied to admm.cbpdn.ConvBPDN,
  \[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| D_0 \mathbf{x} - \mathbf{s}_0 \right\|_2^2 + \frac{1}{2} \left\| D_1 \mathbf{x} - \mathbf{s}_1 \right\|_2^2 + \lambda \| \mathbf{x} \|_1 \;\;,\]
  for input images \(\mathbf{s}_0\), \(\mathbf{s}_1\), dictionaries \(D_0\) and \(D_1\), and coefficient map set \(\mathbf{x}\), where \(D_0 \mathbf{x} = \sum_m \mathbf{d}_{0,m} * \mathbf{x}_m\) and \(D_1 \mathbf{x} = \sum_m \mathbf{d}_{1,m} * \mathbf{x}_m\).
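The functional minimised by the basic Convolutional BPDN problem above can be evaluated directly. The following is a minimal numpy sketch, not part of the SPORCO API: the function name and array layout are illustrative, and circular boundary conditions are assumed (consistent with the frequency-domain ADMM solvers), with the filters zero-padded to the signal size.

```python
import numpy as np

def cbpdn_objective(D, X, s, lmbda):
    """Evaluate the ConvBPDN functional
    (1/2) || sum_m d_m * x_m - s ||_2^2 + lambda sum_m || x_m ||_1
    for a dictionary D (H x W x M, filters zero-padded to the signal
    size), coefficient maps X (H x W x M), and signal s (H x W), with
    circular convolution computed in the DFT domain."""
    Df = np.fft.fft2(D, axes=(0, 1))
    Xf = np.fft.fft2(X, axes=(0, 1))
    # Sum over the filter index in the frequency domain, then invert
    recon = np.real(np.fft.ifft2(np.sum(Df * Xf, axis=2)))
    return 0.5 * np.linalg.norm(recon - s) ** 2 + lmbda * np.abs(X).sum()
```

For example, when the dictionary consists of a single delta filter at the origin, the reconstruction equals the corresponding coefficient map and the objective reduces to the usual (non-convolutional) BPDN functional.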
Module admm.cbpdntv includes the following classes:
- Solve Convolutional BPDN with an additional term penalising the total variation of each coefficient map (see [58])
  \[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \mu \sum_m \left\| \sqrt{\sum_i (G_i \mathbf{x}_m)^2} \right\|_1 \;\;,\]
  where \(G_i\) is an operator computing the derivative along index \(i\).
- Solve Convolutional BPDN with an additional term penalising the vector total variation of the coefficient maps (see [58])
  \[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \mu \left\| \sqrt{\sum_m \sum_i (G_i \mathbf{x}_m)^2} \right\|_1 \;\;,\]
  where \(G_i\) is an operator computing the derivative along index \(i\).
- Solve Convolutional BPDN with an additional term penalising the total variation of the reconstruction from the sparse representation (see [58])
  \[\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \mu \left\| \sqrt{\sum_i \left( G_i \left( \sum_m \mathbf{d}_m * \mathbf{x}_m \right) \right)^2} \right\|_1 \;\;,\]
  where \(G_i\) is an operator computing the derivative along index \(i\).
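The difference between the scalar and vector total variation penalties above is where the square root is taken relative to the sum over the filter index \(m\). The following numpy sketch computes both terms, assuming \(G_i\) is a circular forward-difference operator (an illustrative choice; the classes above define their own derivative operators):

```python
import numpy as np

def scalar_tv(X):
    """Penalty sum_m || sqrt(sum_i (G_i x_m)^2) ||_1 for coefficient
    maps X of shape (H, W, M): each map's gradient magnitude is
    penalised independently."""
    Gx = np.roll(X, -1, axis=0) - X  # circular forward difference, axis 0
    Gy = np.roll(X, -1, axis=1) - X  # circular forward difference, axis 1
    return np.sqrt(Gx ** 2 + Gy ** 2).sum()

def vector_tv(X):
    """Penalty || sqrt(sum_m sum_i (G_i x_m)^2) ||_1: the gradient
    magnitudes are coupled across the filter index m before the
    l1 norm is applied."""
    Gx = np.roll(X, -1, axis=0) - X
    Gy = np.roll(X, -1, axis=1) - X
    return np.sqrt((Gx ** 2 + Gy ** 2).sum(axis=2)).sum()
```

Since \(\sqrt{\textstyle\sum_m a_m} \leq \sum_m \sqrt{a_m}\) for \(a_m \geq 0\), the vector TV penalty never exceeds the scalar one, and the two coincide for a single coefficient map.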
Usage examples are available.
Multi-channel Data¶
Some of the example scripts demonstrate usage of the classes in the admm.cbpdn module with multi-channel input images (all of these examples use RGB colour images, but an arbitrary number of channels is supported). Multi-channel examples are not provided for all classes since the differences between single- and multi-channel usage are essentially the same across most of the classes. There are two fundamentally different ways of representing multi-channel input images: a single-channel dictionary together with a separate set of coefficient maps for each channel, or a multi-channel dictionary with a single set of coefficient maps shared across all channels. In the former case the coefficient maps can be independent across the different channels, or expected correlations between the channels can be modelled via a joint sparsity penalty. A more detailed discussion of these issues can be found in [51].
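The two representations can be sketched in plain numpy, independent of SPORCO's own dimension-ordering conventions. The array layouts below are illustrative assumptions, with FFT-based circular convolution and filters zero-padded to the image size:

```python
import numpy as np

H, W, C, M = 8, 8, 3, 4  # image size, channels, number of filters
rng = np.random.default_rng(0)

# (a) Single-channel dictionary with a separate set of coefficient maps
# per channel: channel c is reconstructed as sum_m d_m * x_{c,m}.
D = rng.standard_normal((H, W, M))       # shared filters
Xa = rng.standard_normal((H, W, C, M))   # maps indexed by channel and filter
Df = np.fft.fft2(D, axes=(0, 1))
recon_a = np.real(np.fft.ifft2(
    (Df[:, :, None, :] * np.fft.fft2(Xa, axes=(0, 1))).sum(axis=-1),
    axes=(0, 1)))                        # shape (H, W, C)

# (b) Multi-channel dictionary with a single set of coefficient maps
# shared across channels: channel c is reconstructed as sum_m d_{c,m} * x_m.
Dc = rng.standard_normal((H, W, C, M))   # filters carry a channel index
Xb = rng.standard_normal((H, W, M))      # one shared set of maps
recon_b = np.real(np.fft.ifft2(
    (np.fft.fft2(Dc, axes=(0, 1)) *
     np.fft.fft2(Xb, axes=(0, 1))[:, :, None, :]).sum(axis=-1),
    axes=(0, 1)))                        # shape (H, W, C)
```

Representation (a) has C times as many coefficient maps as (b); a joint sparsity penalty, as in the \(\ell_{2,1}\) problem above, is one way of coupling them across channels.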