lfcnn.models.center_and_disparity package

Submodules

lfcnn.models.center_and_disparity.conv3d_decode2d module

Encoder-decoder network based on 3D convolution for estimating disparity and a central view from spectrally coded light fields.

class lfcnn.models.center_and_disparity.conv3d_decode2d.Conv3dDecode2d(num_filters_base=24, skip=True, superresolution=False, kernel_reg=1e-05, **kwargs)[source]

Bases: BaseModel

Estimate a central view and disparity from a (coded) light field. Based on 3D Convolution for encoding and 2D convolution for decoding.

Parameters:
  • num_filters_base (int) – Number of filters in the base layer. For each downsampling, the number of filters is doubled.

  • skip (bool) – Whether to use skip connections (U-net architecture).

  • superresolution (bool) – Whether to perform superresolution on the output.

  • kernel_reg (float) – Strength of the L2 kernel regularizer during training.

  • **kwargs – Passed to lfcnn.models.BaseModel.
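
The filter-doubling rule for num_filters_base can be illustrated with a short sketch (the number of downsampling stages shown here is hypothetical, not taken from the implementation):

```python
# Illustration of the documented filter-doubling rule.
num_filters_base = 24  # default from the signature above

# At downsampling level i, the filter count is num_filters_base * 2**i.
filters_per_level = [num_filters_base * 2**i for i in range(4)]
print(filters_per_level)  # [24, 48, 96, 192]
```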

create_model(inputs, augmented_shape=None)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters:
  • inputs (List[Input]) – List of Keras Inputs. Single or multi inputs supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type:

Model

decoder_central(inputs, num_ch_out)[source]
decoder_disparity(inputs)[source]
encoder(input)[source]
set_generator_and_reshape()[source]
class lfcnn.models.center_and_disparity.conv3d_decode2d.Conv3dDecode2dMasked(mask_type='neuralfractal', mask_kwargs=None, warmup=False, finetune=False, **kwargs)[source]

Bases: Conv3dDecode2d

**kwargs: See Conv3dDecode2d model.

create_model(inputs, augmented_shape=None)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters:
  • inputs (List[Input]) – List of Keras Inputs. Single or multi inputs supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type:

Model

get_mask()[source]
set_finetune(value)[source]
set_generator_and_reshape()[source]
set_warmup(value)[source]

lfcnn.models.center_and_disparity.conv3d_decode2d_st module

Encoder-decoder network based on 3D convolution for estimating disparity and a central view from spectrally coded light fields.

class lfcnn.models.center_and_disparity.conv3d_decode2d_st.Conv3dDecode2dStCentral(**kwargs)[source]

Bases: Conv3dDecode2d

Estimate a central view from a (coded) light field. Based on 3D Convolution for encoding and 2D convolution for decoding.

create_model(inputs, augmented_shape=None)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters:
  • inputs (List[Input]) – List of Keras Inputs. Single or multi inputs supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type:

Model

set_generator_and_reshape()[source]
class lfcnn.models.center_and_disparity.conv3d_decode2d_st.Conv3dDecode2dStDisp(**kwargs)[source]

Bases: Conv3dDecode2d

Estimate disparity from a (coded) light field. Based on 3D Convolution for encoding and 2D convolution for decoding.

create_model(inputs, augmented_shape=None)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters:
  • inputs (List[Input]) – List of Keras Inputs. Single or multi inputs supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type:

Model

set_generator_and_reshape()[source]

lfcnn.models.center_and_disparity.conv4d_decode2d module

Encoder-decoder network based on spatio-angular separable 4D convolution for estimating disparity and a central view from spectrally coded light fields.

class lfcnn.models.center_and_disparity.conv4d_decode2d.Conv4dDecode2d(num_filters_base=24, skip=True, kernel_reg=1e-05, **kwargs)[source]

Bases: BaseModel

Estimate a central view and disparity from a (coded) light field. Based on spatio-angular separable 4D convolution for encoding and 2D convolution for decoding.

Parameters:
  • optimizer – Optimizer used for training.

  • loss – Loss used for training.

  • metrics – Metrics used for validation, testing and evaluation.

  • callbacks – Callbacks used during training.

  • loss_weights – Optional loss weights for multi-output models.

static ang_sample_down(x, filters, kernel_regularizer, name=None)[source]

Implicit angular downsampling via valid padding. Input shape: (batch, s*t, u, v, channel).
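
The "implicit downsampling via valid padding" can be sketched numerically: with stride 1, each valid convolution shrinks the angular extent by kernel - 1 (the resolution and kernel size below are hypothetical):

```python
# Sketch of how "valid" padding shrinks the angular extent per convolution:
# out = in - (kernel - 1) at stride 1, until the extent reaches 1.
u = 9        # hypothetical angular resolution (u = v)
kernel = 3   # hypothetical angular kernel size

sizes = [u]
while sizes[-1] - (kernel - 1) >= 1:
    sizes.append(sizes[-1] - (kernel - 1))
print(sizes)  # [9, 7, 5, 3, 1]
```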

static ang_to_spt(x)[source]

Angular to spatial reshape. Only works for square spatial resolution. Reshape (batch, s*t, u, v, channel) -> (batch, u*v, s, t, channel)
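
The shape logic of this reshape can be sketched in NumPy (an illustration only; the library's implementation operates on Keras tensors):

```python
import numpy as np

def ang_to_spt(x):
    """Sketch of the documented reshape:
    (batch, s*t, u, v, c) -> (batch, u*v, s, t, c).
    Square spatial resolution is assumed so that s == t
    can be recovered from the flattened s*t axis."""
    b, st, u, v, c = x.shape
    s = t = int(np.sqrt(st))            # square spatial resolution assumed
    x = x.reshape(b, s, t, u, v, c)     # unflatten the s*t axis
    x = x.transpose(0, 3, 4, 1, 2, 5)   # -> (b, u, v, s, t, c)
    return x.reshape(b, u * v, s, t, c) # flatten the angular axes

lf = np.arange(2 * 9 * 5 * 5 * 3).reshape(2, 9, 5, 5, 3)  # s = t = 3, u = v = 5
print(ang_to_spt(lf).shape)  # (2, 25, 3, 3, 3)
```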

static bottleneck_reshape(x)[source]
create_model(inputs, augmented_shape=None)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters:
  • inputs (List[Input]) – List of Keras Inputs. Single or multi inputs supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type:

Model

static res_block_td(x, filters, kernel_regularizer, name=None)[source]

Time Distributed Residual Convolution

static sample_down_base(x, filters, kernel_size, strides, padding, kernel_regularizer, name=None)[source]

Downsampling via strided convolution

set_generator_and_reshape()[source]
static spt_sample_down(x, filters, downsample_strides, kernel_regularizer, name=None)[source]
static spt_to_ang(x)[source]

Spatial to angular reshape. Only works for square angular resolution. Reshape (batch, u*v, s, t, channel) -> (batch, s*t, u, v, channel)
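
The inverse reshape can likewise be sketched in NumPy (an illustration only; the library's implementation operates on Keras tensors):

```python
import numpy as np

def spt_to_ang(x):
    """Sketch of the documented reshape:
    (batch, u*v, s, t, c) -> (batch, s*t, u, v, c).
    Square angular resolution is assumed so that u == v
    can be recovered from the flattened u*v axis."""
    b, uv, s, t, c = x.shape
    u = v = int(np.sqrt(uv))            # square angular resolution assumed
    x = x.reshape(b, u, v, s, t, c)     # unflatten the u*v axis
    x = x.transpose(0, 3, 4, 1, 2, 5)   # -> (b, s, t, u, v, c)
    return x.reshape(b, s * t, u, v, c) # flatten the spatial axes

lf = np.arange(2 * 25 * 3 * 3 * 3).reshape(2, 25, 3, 3, 3)  # u = v = 5, s = t = 3
print(spt_to_ang(lf).shape)  # (2, 9, 5, 5, 3)
```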

lfcnn.models.center_and_disparity.dictionary_sparse_coding_epinet module

Encoder-decoder network based on 3D convolution for estimating disparity and a central view from spectrally coded light fields.

class lfcnn.models.center_and_disparity.dictionary_sparse_coding_epinet.DictionarySparseCodingEpinet(sparse_coding_kwargs, **kwargs)[source]

Bases: BaseModel

Estimate a central view and disparity using dictionary-based sparse coding combined with an EPINET-style network.

Parameters:
  • optimizer – Optimizer used for training.

  • loss – Loss used for training.

  • metrics – Metrics used for validation, testing and evaluation.

  • callbacks – Callbacks used during training.

  • loss_weights – Optional loss weights for multi-output models.

create_model(inputs, augmented_shape=None)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters:
  • inputs (List[Input]) – List of Keras Inputs. Single or multi inputs supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type:

Model

set_generator_and_reshape()[source]

lfcnn.models.center_and_disparity.dummy module

Light field dummy model with two outputs (central view and disparity).

class lfcnn.models.center_and_disparity.dummy.Dummy(depth=0, **kwargs)[source]

Bases: BaseModel

Dummy light field model producing a central view and a disparity output.

Parameters:
  • optimizer – Optimizer used for training.

  • loss – Loss used for training.

  • metrics – Metrics used for validation, testing and evaluation.

  • callbacks – Callbacks used during training.

  • loss_weights – Optional loss weights for multi-output models.

create_model(inputs, augmented_shape)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters:
  • inputs (List[Input]) – List of Keras Inputs. Single or multi inputs supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type:

Model

property depth
static final_reshape_central_view(input, augmented_shape, name='reshape')[source]

Reshape to central view

static final_reshape_disparity(input, augmented_shape, name='reshape')[source]

Reshape to disparity

set_generator_and_reshape()[source]

Module contents

The LFCNN central view and disparity estimator models.

lfcnn.models.center_and_disparity.get(model)[source]

Given a model name, returns an lfcnn model instance.

Parameters:

model (str) – Name of the model.

Returns:

Model instance.
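
A minimal sketch of such a name-based lookup (the actual registry is internal to lfcnn and returns an instantiated model; the class definitions here are placeholders standing in for the real imports):

```python
# Hypothetical sketch of a name-based model lookup in the spirit of
# lfcnn.models.center_and_disparity.get(). The real function returns a
# model instance; this sketch returns the class for brevity.
class Conv3dDecode2d: ...
class Conv4dDecode2d: ...

_MODELS = {
    "Conv3dDecode2d": Conv3dDecode2d,
    "Conv4dDecode2d": Conv4dDecode2d,
}

def get(model: str):
    """Given a model name, return the corresponding model class."""
    try:
        return _MODELS[model]
    except KeyError:
        raise ValueError(f"Unknown model '{model}'.") from None
```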