lfcnn.models.center_and_disparity package
Submodules
lfcnn.models.center_and_disparity.conv3d_decode2d module
Encoder-decoder network based on 3D convolution for estimating disparity and a central view from spectrally coded light fields.
- class lfcnn.models.center_and_disparity.conv3d_decode2d.Conv3dDecode2d(num_filters_base=24, skip=True, superresolution=False, kernel_reg=1e-05, **kwargs)[source]
Bases:
BaseModel
Estimate a central view and disparity from a (coded) light field. Based on 3D Convolution for encoding and 2D convolution for decoding.
- Parameters:
num_filters_base (int) – Number of filters in the base layer. For each downsampling, the number of filters is doubled.
skip (bool) – Whether to use skip connections (U-Net architecture).
superresolution (bool) – Whether to perform superresolution on the output.
kernel_reg (float) – Strength of the L2 kernel regularizer during training.
**kwargs – Passed to lfcnn.models.BaseModel.
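The doubling of `num_filters_base` per downsampling stage can be sketched as follows; the number of stages (4) is an assumption for illustration only:

```python
# Hypothetical sketch of the encoder's filter schedule: starting from
# num_filters_base, each downsampling stage doubles the filter count.
# The stage count (4) is assumed for illustration.
num_filters_base = 24
filters_per_stage = [num_filters_base * 2**stage for stage in range(4)]
print(filters_per_stage)  # → [24, 48, 96, 192]
```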
- create_model(inputs, augmented_shape=None)[source]
Create the Keras model. Needs to be implemented by the derived class to define the network topology.
- Parameters:
inputs (List[Input]) – List of Keras Inputs. Single and multiple inputs are supported.
augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.
- Return type:
Model
- class lfcnn.models.center_and_disparity.conv3d_decode2d.Conv3dDecode2dMasked(mask_type='neuralfractal', mask_kwargs=None, warmup=False, finetune=False, **kwargs)[source]
Bases:
Conv3dDecode2d
**kwargs: See Conv3dDecode2d model.
- create_model(inputs, augmented_shape=None)[source]
Create the Keras model. Needs to be implemented by the derived class to define the network topology.
- Parameters:
inputs (List[Input]) – List of Keras Inputs. Single and multiple inputs are supported.
augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.
- Return type:
Model
lfcnn.models.center_and_disparity.conv3d_decode2d_st module
Single-task encoder-decoder networks based on 3D convolution for estimating either disparity or a central view from spectrally coded light fields.
- class lfcnn.models.center_and_disparity.conv3d_decode2d_st.Conv3dDecode2dStCentral(**kwargs)[source]
Bases:
Conv3dDecode2d
Estimate a central view from a (coded) light field. Based on 3D Convolution for encoding and 2D convolution for decoding.
- create_model(inputs, augmented_shape=None)[source]
Create the Keras model. Needs to be implemented by the derived class to define the network topology.
- Parameters:
inputs (List[Input]) – List of Keras Inputs. Single and multiple inputs are supported.
augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.
- Return type:
Model
- class lfcnn.models.center_and_disparity.conv3d_decode2d_st.Conv3dDecode2dStDisp(**kwargs)[source]
Bases:
Conv3dDecode2d
Estimate a disparity from a (coded) light field. Based on 3D Convolution for encoding and 2D convolution for decoding.
- create_model(inputs, augmented_shape=None)[source]
Create the Keras model. Needs to be implemented by the derived class to define the network topology.
- Parameters:
inputs (List[Input]) – List of Keras Inputs. Single and multiple inputs are supported.
augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.
- Return type:
Model
lfcnn.models.center_and_disparity.conv4d_decode2d module
Encoder-decoder network based on spatio-angular separable 4D convolution for estimating disparity and a central view from spectrally coded light fields.
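A spatio-angular separable 4D convolution factors one kernel over (s, t, u, v) into an angular pass followed by a spatial pass, which mainly saves parameters. A rough count (bias terms omitted; kernel size and channel widths are assumptions for illustration):

```python
# Rough parameter-count comparison between a full 4D convolution and a
# spatio-angular separable factorization into an angular (k x k) pass
# followed by a spatial (k x k) pass with equal channel widths.
# k, c_in, c_out are assumed values for illustration only.
k, c_in, c_out = 3, 24, 24
full_4d = k**4 * c_in * c_out            # one (k, k, k, k) kernel
separable = 2 * k**2 * c_in * c_out      # two (k, k) kernels
print(full_4d, separable)  # → 46656 10368
```

The ratio full_4d / separable equals k**2 / 2, so the savings grow quadratically with kernel size.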
- class lfcnn.models.center_and_disparity.conv4d_decode2d.Conv4dDecode2d(num_filters_base=24, skip=True, kernel_reg=1e-05, **kwargs)[source]
Bases:
BaseModel
LFCNN Model class.
TODO: Implement proper logging.
- Parameters:
optimizer – Optimizer used for training.
loss – Loss used for training.
metrics – Metrics used for validation, testing and evaluation.
callbacks – Callbacks used during training.
loss_weights – Optional loss weights for multi-output models.
- static ang_sample_down(x, filters, kernel_regularizer, name=None)[source]
Implicit angular downsampling via valid padding. Input shape (batch, s*t, u, v, channel)
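One reading of "implicit downsampling via valid padding" is that each unpadded convolution shrinks an axis by kernel_size - 1 rather than using strides; a minimal 1-D sketch of that effect (the interpretation is an assumption):

```python
import numpy as np

# Minimal 1-D sketch (an assumption about what "implicit downsampling via
# valid padding" means here): an unpadded convolution shrinks the axis by
# kernel_size - 1 instead of striding.
x = np.arange(9.0)
kernel = np.ones(3) / 3.0
y = np.convolve(x, kernel, mode="valid")
print(y.shape)  # → (7,)
```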
- static ang_to_spt(x)[source]
Angular to spatial reshape. Only works for square spatial resolution. Reshape (batch, s*t, u, v, channel) -> (batch, u*v, s, t, channel)
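The documented (batch, s*t, u, v, channel) -> (batch, u*v, s, t, channel) reshape can be sketched in NumPy; the square-resolution restriction is what allows s and t to be recovered from the stacked axis s*t:

```python
import numpy as np

# Sketch of the (batch, s*t, u, v, c) -> (batch, u*v, s, t, c) reshape.
# Assumes a square stacked resolution, so s == t can be recovered from s*t.
def ang_to_spt(x):
    b, st, u, v, c = x.shape
    s = t = int(round(np.sqrt(st)))
    x = x.reshape(b, s, t, u, v, c)        # split the stacked angular axis
    x = x.transpose(0, 3, 4, 1, 2, 5)      # move the spatial axes forward
    return x.reshape(b, u * v, s, t, c)    # stack the spatial axes

lf = np.zeros((2, 9, 4, 4, 3))             # s = t = 3, u = v = 4
print(ang_to_spt(lf).shape)  # → (2, 16, 3, 3, 3)
```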
- create_model(inputs, augmented_shape=None)[source]
Create the Keras model. Needs to be implemented by the derived class to define the network topology.
- Parameters:
inputs (List[Input]) – List of Keras Inputs. Single and multiple inputs are supported.
augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.
- Return type:
Model
- static res_block_td(x, filters, kernel_regularizer, name=None)[source]
Time Distributed Residual Convolution
lfcnn.models.center_and_disparity.dictionary_sparse_coding_epinet module
Encoder-decoder network based on 3D convolution for estimating disparity and a central view from spectrally coded light fields.
- class lfcnn.models.center_and_disparity.dictionary_sparse_coding_epinet.DictionarySparseCodingEpinet(sparse_coding_kwargs, **kwargs)[source]
Bases:
BaseModel
LFCNN Model class.
TODO: Implement proper logging.
- Parameters:
optimizer – Optimizer used for training.
loss – Loss used for training.
metrics – Metrics used for validation, testing and evaluation.
callbacks – Callbacks used during training.
loss_weights – Optional loss weights for multi-output models.
- create_model(inputs, augmented_shape=None)[source]
Create the Keras model. Needs to be implemented by the derived class to define the network topology.
- Parameters:
inputs (List[Input]) – List of Keras Inputs. Single and multiple inputs are supported.
augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.
- Return type:
Model
lfcnn.models.center_and_disparity.dummy module
Light field 2 output dummy model.
- class lfcnn.models.center_and_disparity.dummy.Dummy(depth=0, **kwargs)[source]
Bases:
BaseModel
LFCNN Model class.
TODO: Implement proper logging.
- Parameters:
optimizer – Optimizer used for training.
loss – Loss used for training.
metrics – Metrics used for validation, testing and evaluation.
callbacks – Callbacks used during training.
loss_weights – Optional loss weights for multi-output models.
- create_model(inputs, augmented_shape)[source]
Create the Keras model. Needs to be implemented by the derived class to define the network topology.
- Parameters:
inputs (List[Input]) – List of Keras Inputs. Single and multiple inputs are supported.
augmented_shape – The augmented shape as generated by the generator. Can be used to obtain the original light field’s shape, for example the number of subapertures or the number of spectral channels.
- Return type:
Model
- property depth
- static final_reshape_central_view(input, augmented_shape, name='reshape')[source]
Reshape to central view
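One plausible reading of this helper, sketched below, is selecting the middle subaperture from the stacked angular axis; this is a hypothetical illustration only, and the actual final_reshape_central_view may operate differently:

```python
import numpy as np

# Hypothetical illustration only: one way to obtain a central view from a
# light field stacked as (batch, s*t, u, v, channel) is to index the middle
# subaperture (assumes odd, square s == t). The real helper may differ.
def central_view(x):
    b, st, u, v, c = x.shape
    return x[:, st // 2]                   # (batch, u, v, channel)

lf = np.zeros((1, 9, 8, 8, 3))             # s = t = 3
print(central_view(lf).shape)  # → (1, 8, 8, 3)
```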
Module contents
The LFCNN central view and disparity estimator models.