lfcnn.models.autoencoder package

Submodules

lfcnn.models.autoencoder.conv3dmodel module

Light field autoencoder using 3D convolutions. This model is mostly analogous to the autoencoder path of the encoder-decoder network proposed by [1], but instead of using EPI volumes, we use the full light field as the network's input. We also perform one less downsampling and upsampling operation.

[1] Alperovich, Anna, et al. “Light field intrinsics with a deep encoder-decoder network.” IEEE Conference on Computer Vision and Pattern Recognition. 2018.
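To illustrate the difference in input shape, here is a small numpy sketch. The concrete dimensions (9×9 subapertures, 32×32 spatial resolution, 3 channels) and the (u, v, s, t, ch) axis order are assumptions for illustration only, not taken from lfcnn:

```python
import numpy as np

# Hypothetical light field with u x v = 9 x 9 subaperture views,
# s x t = 32 x 32 spatial resolution, and 3 color channels.
u, v, s, t, ch = 9, 9, 32, 32, 3

# Full light field tensor, as used as input by Conv3dModel.
light_field = np.zeros((u, v, s, t, ch))

# A single EPI volume, as used in [1]: fix one angular dimension
# and keep the stack of remaining views.
epi_volume = light_field[0]   # shape (v, s, t, ch)

print(light_field.shape)      # (9, 9, 32, 32, 3)
print(epi_volume.shape)       # (9, 32, 32, 3)
```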

class lfcnn.models.autoencoder.conv3dmodel.Conv3dModel(**kwargs)[source]

Bases: lfcnn.models.abstracts.BaseModel

create_model(inputs, augmented_shape=None)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters
  • inputs (List[Input]) – List of Keras Inputs. Single or multi inputs supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type

Model
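Since `create_model` must be implemented by every derived class, the contract can be sketched with Python's `abc` module. This is a minimal stand-in for `lfcnn.models.abstracts.BaseModel`, not its real implementation; the real method builds and returns a Keras `Model`, which a plain dict stands in for here:

```python
from abc import ABC, abstractmethod

class BaseModel(ABC):
    """Minimal sketch of the BaseModel contract described above."""

    @abstractmethod
    def create_model(self, inputs, augmented_shape=None):
        """Derived classes define the network topology here."""

class MyAutoencoder(BaseModel):
    def create_model(self, inputs, augmented_shape=None):
        # A real implementation would wire Keras layers from `inputs`
        # to outputs and return a keras.Model instance.
        return {"inputs": inputs, "augmented_shape": augmented_shape}

model = MyAutoencoder().create_model(inputs=["input_tensor"])
print(model["inputs"])   # ['input_tensor']
```

Instantiating `BaseModel` directly raises a `TypeError`, which enforces the "needs to be implemented by the derived class" requirement.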

static final_reshape(input, augmented_shape, name='light_field')[source]

Spatial to light field reshape. Only works when u==v.
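A plausible reading of the `u == v` restriction, sketched in numpy: if the decoder emits the u*v subaperture views stacked along the channel axis, only the product u*v is known at reshape time, so recovering (u, v) requires u = v = sqrt(u*v). The output layout and axis order below are assumptions for illustration, not lfcnn's actual implementation:

```python
import numpy as np

# Hypothetical decoder output: u*v views stacked in the channel axis.
b, s, t, ch = 2, 32, 32, 3
num_views = 81                        # u*v, product only
spatial = np.zeros((b, s, t, num_views * ch))

u = v = int(np.sqrt(num_views))       # 9; only recoverable when u == v
light_field = spatial.reshape(b, s, t, u, v, ch)
light_field = np.transpose(light_field, (0, 3, 4, 1, 2, 5))

print(light_field.shape)              # (2, 9, 9, 32, 32, 3)
```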

static res_block_3d(x, num_filters, kernel_size=(3, 3, 3), strides=(1, 1, 1), kernel_regularizer=None, name=None)[source]

static res_block_3d_transposed(x, num_filters, kernel_size=(3, 3, 3), strides=(2, 2, 3), kernel_regularizer=None, name=None)[source]
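Conceptually, a 3D residual block adds a learned branch back onto its input, with the strides controlling downsampling of the three leading non-batch axes. The sketch below is purely illustrative and numpy-only: a strided slice stands in for the strided 3D convolution and an elementwise op for the conv+activation branch, so it shows only the shape behavior, not lfcnn's layers:

```python
import numpy as np

def res_block_3d_sketch(x, strides=(1, 1, 1)):
    """Shape-level sketch of a 3D residual block on a (b, d1, d2, d3, ch) tensor."""
    s1, s2, s3 = strides
    shortcut = x[:, ::s1, ::s2, ::s3, :]          # stand-in for strided conv
    branch = np.maximum(shortcut * 0.5, 0.0)      # stand-in for conv + ReLU
    return shortcut + branch                      # residual addition

x = np.ones((1, 8, 32, 32, 16))
print(res_block_3d_sketch(x).shape)                      # (1, 8, 32, 32, 16)
print(res_block_3d_sketch(x, strides=(2, 2, 2)).shape)   # (1, 4, 16, 16, 16)
```

With strides of 1 the block preserves shape; larger strides downsample, which is why `res_block_3d_transposed` pairs with it to upsample again on the decoder side.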
set_generator_and_reshape()[source]

lfcnn.models.autoencoder.dummy module

Light field dummy model.

class lfcnn.models.autoencoder.dummy.Dummy(depth=0, **kwargs)[source]

Bases: lfcnn.models.abstracts.BaseModel

create_model(inputs, augmented_shape)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters
  • inputs (List[Input]) – List of Keras Inputs. Single or multi inputs supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type

Model

property depth

set_generator_and_reshape()[source]

lfcnn.models.autoencoder.dummy.final_reshape(input, augmented_shape, name='reshape')[source]

Spatial to light field reshape.

lfcnn.models.autoencoder.epi_volume_encoder module

Light field EPI volume encoder. This model is analogous to the autoencoder path of the encoder-decoder network proposed by [1], operating on a single EPI volume. However, the depth was reduced by one downsampling stage so that the model can run on (32, 32) spatial inputs.

[1] Alperovich, Anna, et al. “Light field intrinsics with a deep encoder-decoder network.” IEEE Conference on Computer Vision and Pattern Recognition. 2018.

class lfcnn.models.autoencoder.epi_volume_encoder.EpiVolumeEncoder(**kwargs)[source]

Bases: lfcnn.models.abstracts.BaseModel

create_model(inputs, augmented_shape=None)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters
  • inputs (List[Input]) – List of Keras Inputs. Single or multi inputs supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type

Model

static final_reshape(input, augmented_shape, name='light_field')[source]

Spatial to light field reshape.

static res_block_3d(x, num_filters, kernel_size=(3, 3, 3), strides=(1, 1, 1), kernel_regularizer=None, name=None)[source]

static res_block_3d_transposed(x, num_filters, kernel_size=(3, 3, 3), strides=(2, 2, 3), kernel_regularizer=None, name=None)[source]
set_generator_and_reshape()[source]

Module contents

The LFCNN autoencoder models.

lfcnn.models.autoencoder.get(model)[source]

Given a model name, returns an lfcnn model instance.

Parameters

model (str) – Name of the model.

Returns

Model instance.
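Name-based lookup functions like this are commonly backed by a registry mapping lowercase names to classes. The sketch below is a hypothetical illustration of that pattern; the actual registry contents and error behavior of `lfcnn.models.autoencoder.get` are assumptions (class names stand in as strings here):

```python
# Hypothetical registry; real lfcnn maps names to model classes.
_MODELS = {
    "conv3dmodel": "Conv3dModel",
    "dummy": "Dummy",
    "epivolumeencoder": "EpiVolumeEncoder",
}

def get(model: str):
    """Return the entry registered under `model`, case-insensitively."""
    try:
        return _MODELS[model.lower()]
    except KeyError:
        raise ValueError(f"Unknown model '{model}'.")

print(get("Dummy"))   # Dummy
```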