lfcnn.models.disparity package

Submodules

lfcnn.models.disparity.conv2dmodel module

Disparity estimator based on a 2D residual convolutional encoder-decoder network.

class lfcnn.models.disparity.conv2dmodel.Conv2dModel(**kwargs)[source]

Bases: lfcnn.models.abstracts.BaseModel

create_model(inputs, augmented_shape=None)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters
  • inputs (List[Input]) – List of Keras Inputs. Single and multiple inputs are supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type

Model

set_generator_and_reshape()[source]
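
A minimal usage sketch follows. The constructor keyword arguments (optimizer, loss) and the input layout are assumptions about what BaseModel and the generator expect, not confirmed lfcnn API; consult lfcnn.models.abstracts.BaseModel for the actual options:

    # Sketch only: the kwargs below are assumed BaseModel options.
    from tensorflow import keras
    from lfcnn.models.disparity.conv2dmodel import Conv2dModel

    model = Conv2dModel(
        optimizer=keras.optimizers.Adam(learning_rate=1e-3),  # assumed kwarg
        loss=keras.losses.MeanAbsoluteError(),                # assumed kwarg
    )

    # Assumed layout: a (u, v, s, t, ch) = (9, 9, 32, 32, 3) light field whose
    # angular and spectral axes are stacked into the channel axis for 2D convs.
    inputs = [keras.Input(shape=(32, 32, 9 * 9 * 3))]
    keras_model = model.create_model(inputs, augmented_shape=(9, 9, 32, 32, 3))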

lfcnn.models.disparity.conv3dmodel module

Disparity estimator based on a 3D residual convolutional encoder-decoder network.

class lfcnn.models.disparity.conv3dmodel.Conv3dModel(**kwargs)[source]

Bases: lfcnn.models.abstracts.BaseModel

create_model(inputs, augmented_shape=None)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters
  • inputs (List[Input]) – List of Keras Inputs. Single and multiple inputs are supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type

Model

set_generator_and_reshape()[source]
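
The 3D variant follows the same pattern with a volumetric input. A sketch, again with assumed constructor kwargs and an assumed input layout in which the stacked subapertures form the depth axis of the 3D convolutions:

    # Sketch only: for 3D convolutions the input is assumed to be volumetric,
    # with the u*v stacked subapertures forming the depth axis.
    from tensorflow import keras
    from lfcnn.models.disparity.conv3dmodel import Conv3dModel

    model = Conv3dModel(optimizer="adam", loss="mae")  # assumed kwargs
    inputs = [keras.Input(shape=(9 * 9, 32, 32, 3))]   # assumed 5D layout
    keras_model = model.create_model(inputs, augmented_shape=(9, 9, 32, 32, 3))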

lfcnn.models.disparity.dummy module

Light field disparity dummy model.

class lfcnn.models.disparity.dummy.Dummy(depth=0, **kwargs)[source]

Bases: lfcnn.models.abstracts.BaseModel

create_model(inputs, augmented_shape)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters
  • inputs (List[Input]) – List of Keras Inputs. Single and multiple inputs are supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type

Model

property depth
static final_reshape(input, augmented_shape, name='reshape')[source]

Reshape the model output to the disparity map shape.

set_generator_and_reshape()[source]
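
The dummy model is convenient for smoke-testing a training pipeline without the cost of a real network. A sketch; the constructor kwargs are assumed BaseModel options, and the exact meaning of depth is an assumption here:

    # Sketch only: smoke-testing a pipeline with the Dummy model. The meaning
    # of depth (e.g. number of intermediate layers) is an assumption.
    from lfcnn.models.disparity.dummy import Dummy

    model = Dummy(depth=2, optimizer="adam", loss="mae")  # kwargs assumed
    print(model.depth)  # read back via the depth property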

lfcnn.models.disparity.epinet module

EPINET disparity estimator model.

Shin, Changha, et al. “Epinet: A fully-convolutional neural network using epipolar geometry for depth from light field images.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.

class lfcnn.models.disparity.epinet.Epinet(**kwargs)[source]

Bases: lfcnn.models.abstracts.BaseModel

static base_block(input, kernel_size, num_filters, reps, name)[source]
create_model(inputs, augmented_shape=None)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters
  • inputs (List[Input]) – List of Keras Inputs. Single and multiple inputs are supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type

Model

set_generator_and_reshape()[source]
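
Since base_block is a static method operating on a Keras tensor, it can be tried in isolation. A sketch; the concrete values (2x2 kernels, 70 filters, as used in the EPINET paper) and the reading of reps as the number of repeated conv units are assumptions:

    # Sketch only: applying Epinet.base_block to a free-standing Keras tensor.
    from tensorflow import keras
    from lfcnn.models.disparity.epinet import Epinet

    x = keras.Input(shape=(32, 32, 4))
    y = Epinet.base_block(x, kernel_size=2, num_filters=70, reps=3, name="block1")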

lfcnn.models.disparity.lfattnet module

LfAttNet attention-based disparity estimator model.

Original implementation released under the MIT License. See: https://github.com/LIAGM/LFattNet

Tsai, Yu-Ju, et al. “Attention-based View Selection Networks for Light-field Disparity Estimation.” Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI). 2020.

class lfcnn.models.disparity.lfattnet.LfAttNet(**kwargs)[source]

Bases: lfcnn.models.abstracts.BaseModel

create_model(inputs, augmented_shape=None)[source]

Multi-input network using u*v subaperture inputs.

Return type

Model

set_generator_and_reshape()[source]
static single_input_block(input, name)[source]

Process a single subaperture of the light field.
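
Because create_model consumes one input per subaperture view, the input list must have u*v entries. A sketch of assembling such a list; patch size, channel count, and constructor kwargs are placeholders and assumptions:

    # Sketch only: one Keras Input per subaperture view of a 9x9 light field.
    from tensorflow import keras
    from lfcnn.models.disparity.lfattnet import LfAttNet

    u, v = 9, 9
    inputs = [keras.Input(shape=(32, 32, 1)) for _ in range(u * v)]
    model = LfAttNet(optimizer="adam", loss="mae")  # assumed kwargs
    keras_model = model.create_model(inputs, augmented_shape=(u, v, 32, 32, 1))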

lfcnn.models.disparity.lfattnet.basic(cost_volume)[source]
lfcnn.models.disparity.lfattnet.channel_attention(cost_volume)[source]
lfcnn.models.disparity.lfattnet.channel_attention_free(cost_volume)[source]
lfcnn.models.disparity.lfattnet.channel_attention_mirror(cost_volume)[source]
lfcnn.models.disparity.lfattnet.conv2d_block(input, num_filters, kernel_size, stride, dilation, name=None)[source]
lfcnn.models.disparity.lfattnet.conv3d_block(input, num_filters, kernel_size, stride)[source]
lfcnn.models.disparity.lfattnet.disparityregression(input)[source]
lfcnn.models.disparity.lfattnet.res_block(input, num_filters, stride, downsample, dilation)[source]
lfcnn.models.disparity.lfattnet.upsample_2d(size)[source]
lfcnn.models.disparity.lfattnet.upsample_3d(size)[source]
lfcnn.models.disparity.lfattnet.upsample_3d__helper(x, size)[source]

lfcnn.models.disparity.vommanet module

VommaNet disparity estimator model.

CAUTION: The paper does not specify the number of filters in the first layer of the dilation block; this value should be treated as a tunable hyperparameter (a sketch follows the class members below).

Ma, Haoxin, et al. “VommaNet: an End-to-End Network for Disparity Estimation from Reflective and Texture-less Light Field Images.” arXiv preprint arXiv:1811.07124 (2018).

class lfcnn.models.disparity.vommanet.VommaNet(**kwargs)[source]

Bases: lfcnn.models.abstracts.BaseModel

static concat_block(inputs, num_filters, kernel_size)[source]
create_model(inputs, augmented_shape=None)[source]

Create the Keras model. Needs to be implemented by the derived class to define the network topology.

Parameters
  • inputs (List[Input]) – List of Keras Inputs. Single and multiple inputs are supported.

  • augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.

Return type

Model

static dilation_block(input, num_filters, kernel_size)[source]
static final_block(input, num_filters, kernel_size)[source]
static res_block(input, num_filters, kernel_size)[source]
set_generator_and_reshape()[source]
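
The tunable first-layer filter count mentioned in the module caution can be explored by calling the static dilation_block directly. A sketch; the input layout (subapertures stacked into channels) is an assumption:

    # Sketch only: probing the dilation block with a tunable filter count,
    # since the paper leaves the first layer's width unspecified.
    from tensorflow import keras
    from lfcnn.models.disparity.vommanet import VommaNet

    x = keras.Input(shape=(32, 32, 9 * 9))
    y = VommaNet.dilation_block(x, num_filters=64, kernel_size=3)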

Module contents

The LFCNN disparity estimator models.

lfcnn.models.disparity.get(model)[source]

Given a model name, returns an lfcnn model instance.

Parameters

model (str) – Name of the model.

Returns

Model instance.
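
A minimal lookup sketch; the accepted name strings are assumed to match the class names documented above:

    # Sketch only: per the Returns note above, this yields a model instance.
    from lfcnn.models import disparity

    model = disparity.get("Epinet")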