# Modules cbpdn and cbpdntv

Module admm.cbpdn includes the following classes:

• admm.cbpdn.ConvBPDN

Solve the basic Convolutional BPDN problem (see [37])

$\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right \|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1$

The function sporco.cuda.cbpdn provides a GPU-accelerated solver for this problem if the sporco-cuda extension package is installed.
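As a concrete reading of the functional above, it can be evaluated directly in NumPy, computing the sum of per-filter circular convolutions in the DFT domain. This is a minimal sketch under assumed array shapes (filters zero-padded to the signal size, a trailing filter axis); the function name and layout are illustrative, not sporco's internal convention:

```python
import numpy as np

def cbpdn_objective(D, X, s, lmbda):
    """Evaluate (1/2)||sum_m d_m * x_m - s||_2^2 + lmbda sum_m ||x_m||_1,
    with * a circular convolution computed via the FFT. D and X have
    shape (H, W, M) (filters zero-padded to H x W); s has shape (H, W)."""
    # Sum over m of d_m * x_m, computed as a pointwise product in the DFT domain
    recon = np.fft.ifft2(
        np.sum(np.fft.fft2(D, axes=(0, 1)) * np.fft.fft2(X, axes=(0, 1)),
               axis=-1)).real
    dfid = 0.5 * np.linalg.norm(recon - s) ** 2
    reg = lmbda * np.abs(X).sum()
    return dfid + reg
```

The sporco solver classes report these two components themselves as part of their iteration statistics, so a hand-rolled evaluation like this is only needed for checking or experimentation.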

• ConvBPDNJoint

Solve the Convolutional BPDN problem with joint sparsity over multiple signal channels via an $$\ell_{2,1}$$ norm term (see [35])

$\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \sum_c \left\| \sum_m \mathbf{d}_m * \mathbf{x}_{c,m} - \mathbf{s}_c \right\|_2^2 + \lambda \sum_c \sum_m \| \mathbf{x}_{c,m} \|_1 + \mu \| \{ \mathbf{x}_{c,m} \} \|_{2,1}$

• ConvElasticNet

Solve the Convolutional Elastic Net (i.e. Convolutional BPDN with an additional $$\ell_2$$ penalty on the coefficient maps)

$\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right \|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \frac{\mu}{2} \sum_m \| \mathbf{x}_m \|_2^2$
• ConvBPDNGradReg

Solve Convolutional BPDN with an additional $$\ell_2$$ penalty on the gradient of the coefficient maps (see [36])

$\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left \| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right \|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \frac{\mu}{2} \sum_i \sum_m \| G_i \mathbf{x}_m \|_2^2$

where $$G_i$$ is an operator computing the derivative along index $$i$$.

The function sporco.cuda.cbpdngrd provides a GPU-accelerated solver for this problem if the sporco-cuda extension package is installed.
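The operator $$G_i$$ is a discrete derivative (finite-difference) operator. A minimal NumPy sketch of the $$\ell_2$$-of-gradient penalty, assuming forward differences with circular boundary conditions (consistent with the circular convolutions used by FFT-domain solvers; the function name is hypothetical):

```python
import numpy as np

def grad_penalty(X, mu):
    """Evaluate (mu/2) sum_i sum_m ||G_i x_m||_2^2, where G_i is a forward
    finite difference along spatial axis i with circular boundary.
    X has shape (H, W, M)."""
    total = 0.0
    for axis in (0, 1):  # derivative along each spatial index i
        Gx = np.roll(X, -1, axis=axis) - X  # circular forward difference
        total += np.sum(Gx ** 2)
    return 0.5 * mu * total
```

Smooth (slowly varying) coefficient maps incur a small penalty, while maps with isolated spikes are penalised more heavily, which is the intended regularisation effect.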

• ConvL1L1Grd

Solve a Convolutional Sparse Coding problem with an $$\ell_1$$ data fidelity term, an $$\ell_1$$ regularisation term, and an $$\ell_2$$ penalty on the gradient of the coefficient maps (see [36])

$\mathrm{argmin}_\mathbf{x} \; \left\| W \left(\sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s}\right) \right\|_1 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \frac{\mu}{2} \sum_i \sum_m \| G_i \mathbf{x}_m \|_2^2$

where $$W$$ is a mask array and $$G_i$$ is an operator computing the derivative along index $$i$$.

• ConvBPDNProjL1

Solve the convolutional sparse representation problem with an $$\ell_2$$ objective and an $$\ell_1$$ constraint

$\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 \; \text{such that} \; \sum_m \| \mathbf{x}_m \|_1 \leq \gamma$

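Constraints of this form are typically handled via Euclidean projection onto the $$\ell_1$$ ball, for which a standard sort-based algorithm exists. A self-contained NumPy sketch (the function name is hypothetical; sporco provides its own proximal/projection routines):

```python
import numpy as np

def project_l1_ball(v, gamma):
    """Euclidean projection of v onto {x : ||x||_1 <= gamma}, gamma > 0,
    using the standard sort-and-threshold algorithm. Returns v unchanged
    if it already satisfies the constraint."""
    u = np.abs(v).ravel()
    if u.sum() <= gamma:
        return v
    s = np.sort(u)[::-1]               # magnitudes in decreasing order
    cssv = np.cumsum(s)
    # largest k with s_k > (cssv_k - gamma) / (k + 1)
    k = np.nonzero(s * np.arange(1, len(s) + 1) > (cssv - gamma))[0][-1]
    theta = (cssv[k] - gamma) / (k + 1.0)
    # soft-threshold at theta; the result has ||.||_1 == gamma
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0)
```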
• ConvMinL1InL2Ball

Solve the convolutional sparse representation problem with an $$\ell_1$$ objective and an $$\ell_2$$ constraint

$\mathrm{argmin}_\mathbf{x} \sum_m \| \mathbf{x}_m \|_1 \; \text{such that} \; \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2 \leq \epsilon$

• ConvBPDNMaskDcpl

Solve Convolutional BPDN with Mask Decoupling (see [21])

$\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| W \left(\sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s}\right) \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1$

where $$W$$ is a mask array.
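The role of the mask is simply to weight (typically zero out) entries of the residual, so that unknown or corrupted samples do not contribute to the data fidelity. A minimal sketch of the masked fidelity term (function name hypothetical):

```python
import numpy as np

def masked_dfid(W, recon, s):
    """Evaluate (1/2)||W (recon - s)||_2^2. Zero entries in the mask W
    remove the corresponding samples (e.g. missing or corrupted pixels)
    from the data fidelity term; W is applied elementwise."""
    return 0.5 * np.sum((W * (recon - s)) ** 2)
```

This is useful for tasks such as inpainting, where the reconstruction is only required to match the signal at the known sample locations.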

• AddMaskSim

A wrapper class for applying the Additive Mask Simulation (AMS) boundary handling technique (see [34]) to any of the other admm.cbpdn classes.

If the sporco-cuda extension package is installed, the functions sporco.cuda.cbpdnmsk and sporco.cuda.cbpdngrdmsk provide GPU-accelerated solvers for AMS variants of the ConvBPDN and ConvBPDNGradReg problems.

• MultiDictConvBPDN

A wrapper class for solving convolutional sparse coding problems in which a single set of coefficient maps is fit to multiple dictionaries and signals. Applied to admm.cbpdn.ConvBPDN, for example, it solves

$\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| D_0 \mathbf{x} - \mathbf{s}_0 \right\|_2^2 + \frac{1}{2} \left\| D_1 \mathbf{x} - \mathbf{s}_1 \right\|_2^2 + \lambda \| \mathbf{x} \|_1 \;\;,$

for input images $$\mathbf{s}_0$$, $$\mathbf{s}_1$$, dictionaries $$D_0$$ and $$D_1$$, and coefficient map set $$\mathbf{x}$$, where $$D_0 \mathbf{x} = \sum_m \mathbf{d}_{0,m} * \mathbf{x}_m$$ and $$D_1 \mathbf{x} = \sum_m \mathbf{d}_{1,m} * \mathbf{x}_m$$.
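The essential point is that both data fidelity terms are evaluated against the same coefficient array $$\mathbf{x}$$. A NumPy sketch of the combined functional, with each $$D_k \mathbf{x}$$ computed as a sum of circular convolutions in the DFT domain (names and shapes illustrative):

```python
import numpy as np

def multidict_objective(D0, D1, X, s0, s1, lmbda):
    """Evaluate (1/2)||D0 X - s0||_2^2 + (1/2)||D1 X - s1||_2^2
    + lmbda ||X||_1, where Dk X = sum_m d_{k,m} * x_m (circular
    convolution via the FFT) and the single coefficient array X
    is shared by both data terms. Dk, X: (H, W, M); sk: (H, W)."""
    def recon(D):
        return np.fft.ifft2(np.sum(np.fft.fft2(D, axes=(0, 1)) *
                                   np.fft.fft2(X, axes=(0, 1)),
                                   axis=-1)).real
    return (0.5 * np.linalg.norm(recon(D0) - s0) ** 2 +
            0.5 * np.linalg.norm(recon(D1) - s1) ** 2 +
            lmbda * np.abs(X).sum())
```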

Module admm.cbpdntv includes the following classes:

• ConvBPDNScalarTV

Solve Convolutional BPDN with an additional term penalising the total variation of each coefficient map (see [42])

$\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \mu \sum_m \left\| \sqrt{\sum_i (G_i \mathbf{x}_m)^2} \right\|_1 \;\;,$

where $$G_i$$ is an operator computing the derivative along index $$i$$.
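The scalar TV term is an isotropic total variation penalty applied independently to each coefficient map. A NumPy sketch, again assuming circular forward differences for $$G_i$$ (function name hypothetical); the vector TV variant differs only in also summing over $$m$$ inside the square root:

```python
import numpy as np

def scalar_tv(X, mu):
    """Evaluate mu * sum_m || sqrt(sum_i (G_i x_m)^2) ||_1: isotropic TV
    of each coefficient map x_m, with G_i a circular forward difference
    along spatial axis i. X has shape (H, W, M)."""
    Gx = np.roll(X, -1, axis=0) - X  # derivative along index 0
    Gy = np.roll(X, -1, axis=1) - X  # derivative along index 1
    # gradient magnitude at each pixel of each map, then l1 norm
    return mu * np.sqrt(Gx ** 2 + Gy ** 2).sum()
```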

• ConvBPDNVectorTV

Solve Convolutional BPDN with an additional term penalising the vector total variation of the coefficient maps (see [42])

$\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \mu \left\| \sqrt{\sum_m \sum_i (G_i \mathbf{x}_m)^2} \right\|_1 \;\;,$

where $$G_i$$ is an operator computing the derivative along index $$i$$.

• ConvBPDNRecTV

Solve Convolutional BPDN with an additional term penalising the total variation of the reconstruction from the sparse representation (see [42])

$\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_m \|_1 + \mu \left\| \sqrt{\sum_i \left( G_i \left( \sum_m \mathbf{d}_m * \mathbf{x}_m \right) \right)^2} \right\|_1 \;\;,$

where $$G_i$$ is an operator computing the derivative along index $$i$$.

Usage examples are available.

## Multi-channel Data

Some of the example scripts demonstrate usage of the classes in the admm.cbpdn module with multi-channel input images (all of these examples use RGB colour images, but an arbitrary number of channels is supported). Multi-channel examples are not provided for every class, since the differences between single- and multi-channel usage are essentially the same across most of the classes. There are two fundamentally different ways of representing multi-channel input images: a single-channel dictionary together with a separate set of coefficient maps for each channel, or a multi-channel dictionary with a single set of coefficient maps shared across all channels. In the former case the coefficient maps can be independent across the different channels, or expected correlations between the channels can be modelled via a joint sparsity penalty. A more detailed discussion of these issues can be found in [35].
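The two representations can be made concrete in terms of array shapes. The sketch below uses one possible layout (spatial dims, then channel, then filter index); it is illustrative only and not necessarily sporco's internal dimension convention:

```python
import numpy as np

H, W, C, M = 8, 8, 3, 4  # illustrative sizes (hypothetical layout)

# Option 1: single-channel dictionary, a separate set of maps per channel
D1 = np.random.randn(H, W, M)      # shared filters
X1 = np.random.randn(H, W, C, M)   # independent maps for each channel

# Option 2: multi-channel dictionary, a single shared set of maps
D2 = np.random.randn(H, W, C, M)   # each filter has C channels
X2 = np.random.randn(H, W, M)      # one coefficient array for all channels

def recon1(D, X):
    """Per-channel reconstruction for option 1 (circular conv via FFT)."""
    Df = np.fft.fft2(D, axes=(0, 1))[:, :, None, :]  # broadcast over channels
    return np.fft.ifft2((Df * np.fft.fft2(X, axes=(0, 1))).sum(-1),
                        axes=(0, 1)).real

def recon2(D, X):
    """Shared-map reconstruction for option 2."""
    Xf = np.fft.fft2(X, axes=(0, 1))[:, :, None, :]  # broadcast over channels
    return np.fft.ifft2((np.fft.fft2(D, axes=(0, 1)) * Xf).sum(-1),
                        axes=(0, 1)).real

# Both representations reconstruct a C-channel image
assert recon1(D1, X1).shape == (H, W, C)
assert recon2(D2, X2).shape == (H, W, C)
```

Option 1 has C times as many coefficients (hence the appeal of a joint sparsity penalty coupling the channels), while option 2 encodes inter-channel structure in the dictionary itself.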