lfcnn.models.disparity package
Submodules
lfcnn.models.disparity.conv2dmodel module
Disparity estimator based on a 2D residual convolutional encoder-decoder network.
- class lfcnn.models.disparity.conv2dmodel.Conv2dModel(**kwargs)[source]
Bases:
BaseModel
LFCNN Model class.
TODO: Implement proper logging.
- Parameters:
optimizer – Optimizer used for training.
loss – Loss used for training.
metrics – Metrics used for validation, testing and evaluation.
callbacks – Callbacks used during training.
loss_weights – Optional loss weights for multi-output models.
- create_model(inputs, augmented_shape=None)[source]
Create the Keras model. Needs to be implemented by the derived class to define the network topology.
- Parameters:
inputs (List[Input]) – List of Keras Inputs. Single- and multi-input models are supported.
augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.
- Return type:
Model
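The create_model contract above can be illustrated with a minimal, self-contained sketch. The class and method names mirror the documented interface, but the bodies are hypothetical stand-ins, not the actual lfcnn implementation (which builds and returns a Keras Model):

```python
from abc import ABC, abstractmethod

# Minimal sketch of the documented BaseModel / create_model contract.
# Names mirror the interface above; bodies are placeholders, not lfcnn code.
class BaseModel(ABC):
    def __init__(self, optimizer=None, loss=None, metrics=None,
                 callbacks=None, loss_weights=None):
        self.optimizer = optimizer
        self.loss = loss
        self.metrics = metrics or []
        self.callbacks = callbacks or []
        self.loss_weights = loss_weights

    @abstractmethod
    def create_model(self, inputs, augmented_shape=None):
        """Derived classes define the network topology here."""


class Conv2dModel(BaseModel):
    def create_model(self, inputs, augmented_shape=None):
        # A real implementation would assemble the 2D residual
        # encoder-decoder here and return a Keras Model.
        return {"inputs": inputs, "augmented_shape": augmented_shape}
```

Instantiating BaseModel directly fails (it is abstract); derived classes such as Conv2dModel only need to override create_model.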
lfcnn.models.disparity.conv3dmodel module
Disparity estimator based on a 3D residual convolutional encoder-decoder network.
- class lfcnn.models.disparity.conv3dmodel.Conv3dModel(**kwargs)[source]
Bases:
BaseModel
LFCNN Model class.
TODO: Implement proper logging.
- Parameters:
optimizer – Optimizer used for training.
loss – Loss used for training.
metrics – Metrics used for validation, testing and evaluation.
callbacks – Callbacks used during training.
loss_weights – Optional loss weights for multi-output models.
- create_model(inputs, augmented_shape=None)[source]
Create the Keras model. Needs to be implemented by the derived class to define the network topology.
- Parameters:
inputs (List[Input]) – List of Keras Inputs. Single- and multi-input models are supported.
augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.
- Return type:
Model
lfcnn.models.disparity.dummy module
Light field disparity dummy model.
- class lfcnn.models.disparity.dummy.Dummy(depth=0, **kwargs)[source]
Bases:
BaseModel
LFCNN Model class.
TODO: Implement proper logging.
- Parameters:
optimizer – Optimizer used for training.
loss – Loss used for training.
metrics – Metrics used for validation, testing and evaluation.
callbacks – Callbacks used during training.
loss_weights – Optional loss weights for multi-output models.
- create_model(inputs, augmented_shape)[source]
Create the Keras model. Needs to be implemented by the derived class to define the network topology.
- Parameters:
inputs (List[Input]) – List of Keras Inputs. Single- and multi-input models are supported.
augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.
- Return type:
Model
- property depth
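As a concrete illustration of how augmented_shape can be used, the sketch below unpacks a hypothetical light-field shape into angular, spatial, and spectral components. The (u, v, s, t, ch) ordering and the example sizes are assumptions for illustration only, not lfcnn's actual convention:

```python
# Hypothetical augmented light-field shape: u, v angular (subaperture)
# dimensions, s, t spatial dimensions, ch spectral channels.
# Ordering and sizes are illustrative assumptions, not lfcnn's spec.
augmented_shape = (9, 9, 32, 32, 3)
u, v, s, t, ch = augmented_shape

num_subapertures = u * v       # 9 x 9 angular grid -> 81 subaperture views
num_spectral_channels = ch     # 3 spectral channels
```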
lfcnn.models.disparity.epinet module
EPINET disparity estimator model. EPINET operates on greyscale light fields.
Shin, Changha, et al. “Epinet: A fully-convolutional neural network using epipolar geometry for depth from light field images.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
- class lfcnn.models.disparity.epinet.Epinet(padding='valid', **kwargs)[source]
Bases:
BaseModel
LFCNN Model class.
TODO: Implement proper logging.
- Parameters:
optimizer – Optimizer used for training.
loss – Loss used for training.
metrics – Metrics used for validation, testing and evaluation.
callbacks – Callbacks used during training.
loss_weights – Optional loss weights for multi-output models.
- create_model(inputs, augmented_shape=None)[source]
Create the Keras model. Needs to be implemented by the derived class to define the network topology.
- Parameters:
inputs (List[Input]) – List of Keras Inputs. Single- and multi-input models are supported.
augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.
- Return type:
Model
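Since Epinet defaults to padding='valid', each unpadded k×k convolution shrinks the spatial output by k−1 pixels per dimension, so stacked layers compound the shrinkage. The helper below computes the resulting size; the patch size and layer count are arbitrary examples, not Epinet's actual architecture:

```python
def valid_conv_output_size(size, kernel_size, num_layers):
    # Each 'valid' (unpadded) convolution removes (kernel_size - 1)
    # pixels per spatial dimension; num_layers of them compound this.
    return size - num_layers * (kernel_size - 1)

# e.g. a 64-pixel patch after seven 3x3 'valid' convolutions:
valid_conv_output_size(64, 3, 7)  # -> 64 - 7 * 2 = 50
```

This is why predictions from 'valid'-padded networks cover a smaller region than the input patch, whereas padding='same' preserves the spatial size.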
lfcnn.models.disparity.lfattnet module
LfAttNet attention-based disparity estimator model.
Original implementation released under the MIT License. See: https://github.com/LIAGM/LFattNet
Tsai, Yu-Ju, et al. “Attention-based View Selection Networks for Light-field Disparity Estimation.” Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI). 2020.
- class lfcnn.models.disparity.lfattnet.LfAttNet(**kwargs)[source]
Bases:
BaseModel
LFCNN Model class.
TODO: Implement proper logging.
- Parameters:
optimizer – Optimizer used for training.
loss – Loss used for training.
metrics – Metrics used for validation, testing and evaluation.
callbacks – Callbacks used during training.
loss_weights – Optional loss weights for multi-output models.
- lfcnn.models.disparity.lfattnet.conv2d_block(input, num_filters, kernel_size, stride, dilation, name=None)[source]
lfcnn.models.disparity.vommanet module
VommaNet disparity estimator model.
CAUTION: The paper does not specify the number of filters in the first layer of the dilation block. Feel free to play around with that number.
Ma, Haoxin, et al. “VommaNet: an End-to-End Network for Disparity Estimation from Reflective and Texture-less Light Field Images.” arXiv preprint arXiv:1811.07124 (2018).
- class lfcnn.models.disparity.vommanet.VommaNet(**kwargs)[source]
Bases:
BaseModel
LFCNN Model class.
TODO: Implement proper logging.
- Parameters:
optimizer – Optimizer used for training.
loss – Loss used for training.
metrics – Metrics used for validation, testing and evaluation.
callbacks – Callbacks used during training.
loss_weights – Optional loss weights for multi-output models.
- create_model(inputs, augmented_shape=None)[source]
Create the Keras model. Needs to be implemented by the derived class to define the network topology.
- Parameters:
inputs (List[Input]) – List of Keras Inputs. Single- and multi-input models are supported.
augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.
- Return type:
Model
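VommaNet's dilation block relies on dilated convolutions to enlarge the receptive field without adding parameters: a dilated k×k kernel with dilation rate d spans d·(k−1)+1 input samples per dimension. A small sketch of that arithmetic (the dilation rates shown are illustrative, not necessarily the paper's exact configuration):

```python
def effective_kernel_size(kernel_size, dilation):
    # A dilated convolution inserts (dilation - 1) gaps between taps,
    # so it spans dilation * (kernel_size - 1) + 1 input samples.
    return dilation * (kernel_size - 1) + 1

# Illustrative dilation rates for a 3x3 kernel:
spans = [effective_kernel_size(3, d) for d in (1, 2, 4, 8)]  # -> [3, 5, 9, 17]
```

Doubling the dilation rate at each layer thus grows the receptive field exponentially while the parameter count stays fixed at k×k weights per filter.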
Module contents
The LFCNN disparity estimator models.