lfcnn.training.auxiliary_loss package

Submodules

lfcnn.training.auxiliary_loss.gradient_similarity module

A keras.Model subclass implementing the Gradient Similarity [1] training strategy for adaptive auxiliary loss weighting. More precisely, this implements the “weighted version” of the proposed method as described in Algorithm 2, Appendix C of [1].

[1]: Du, Y., Czarnecki, W. M., Jayakumar, S. M., Pascanu, R., & Lakshminarayanan, B.: “Adapting auxiliary losses using gradient similarity.” arXiv preprint arXiv:1812.02224. 2018.
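
The core idea: an auxiliary gradient is helpful insofar as it points in a similar direction to the main-task gradient. In the weighted version, each auxiliary gradient is scaled by its cosine similarity to the main-task gradient, clipped at zero, before being added to the update. A minimal sketch of this weighting rule (the helper names are illustrative, not the module's actual internals):

    import tensorflow as tf

    def cosine_similarity(grads_a, grads_b, eps=1e-12):
        # Flatten and concatenate the per-variable gradients, then compare.
        a = tf.concat([tf.reshape(g, [-1]) for g in grads_a], axis=0)
        b = tf.concat([tf.reshape(g, [-1]) for g in grads_b], axis=0)
        return tf.reduce_sum(a * b) / (tf.norm(a) * tf.norm(b) + eps)

    def combine_gradients(main_grads, aux_grads_list):
        # Weighted variant: scale each auxiliary gradient by
        # max(0, cos_sim), suppressing conflicting auxiliary signals.
        total = list(main_grads)
        for aux_grads in aux_grads_list:
            weight = tf.maximum(cosine_similarity(main_grads, aux_grads), 0.0)
            total = [t + weight * g for t, g in zip(total, aux_grads)]
        return total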

class lfcnn.training.auxiliary_loss.gradient_similarity.GradientSimilarity(aux_losses, **kwargs)[source]

Bases: Model

A keras.Model subclass implementing the Gradient Similarity [1] training strategy for adaptive auxiliary loss weighting. More precisely, this implements the “weighted version” of the proposed method as described in Algorithm 2, Appendix C of [1].

[1]: Du, Y., Czarnecki, W. M., Jayakumar, S. M., Pascanu, R., & Lakshminarayanan, B.: “Adapting auxiliary losses using gradient similarity.” arXiv preprint arXiv:1812.02224. 2018.

Parameters:
  • aux_losses (Dict[str, List[Union[str, Loss]]]) – A dictionary containing a list of auxiliary losses per task. Pass each loss either by name (as a string) or as a loss class. When passing classes, pass the class object itself, not an instance; the losses must not be instantiated beforehand.

  • **kwargs – kwargs passed to keras.Model init.
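
A minimal functional-API usage sketch. The architecture, task names, and loss strings below are illustrative assumptions, not part of the documented API:

    from tensorflow import keras
    from lfcnn.training.auxiliary_loss.gradient_similarity import GradientSimilarity

    # Two hypothetical tasks, each with one auxiliary loss given by name.
    aux_losses = dict(central_view=["mae"], disparity=["mae"])

    inputs = keras.Input(shape=(32, 32, 3))
    features = keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    central_view = keras.layers.Conv2D(3, 3, padding="same", name="central_view")(features)
    disparity = keras.layers.Conv2D(1, 3, padding="same", name="disparity")(features)

    # The remaining kwargs (inputs, outputs) are forwarded to keras.Model.
    model = GradientSimilarity(aux_losses=aux_losses,
                               inputs=inputs, outputs=[central_view, disparity])
    model.compile(optimizer="adam",
                  loss=dict(central_view="mse", disparity="mae"))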

compile(**kwargs)[source]

Overrides keras.Model compile().

train_step(data)[source]

Overrides keras.Model train_step(), which is called by fit().
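
Schematically, the overridden train_step records all losses on one gradient tape, derives the main and auxiliary gradients separately, and applies the combined update. A simplified single-task sketch reusing the combine_gradients helper from above (self.aux_loss_fns is a hypothetical attribute; the actual implementation handles multiple tasks and metric bookkeeping):

    def train_step(self, data):
        x, y = data
        with tf.GradientTape(persistent=True) as tape:
            y_pred = self(x, training=True)
            main_loss = self.compiled_loss(y, y_pred)
            aux_values = [fn(y, y_pred) for fn in self.aux_loss_fns]
        main_grads = tape.gradient(main_loss, self.trainable_variables)
        aux_grads_list = [tape.gradient(v, self.trainable_variables)
                          for v in aux_values]
        del tape  # persistent tapes must be released explicitly
        grads = combine_gradients(main_grads, aux_grads_list)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        return {m.name: m.result() for m in self.metrics}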

lfcnn.training.auxiliary_loss.normalized_gradient_similarity module

A keras.Model subclass implementing the Normalized Gradient Similarity [1] training strategy for adaptive auxiliary loss weighting.

[1]: M. Schambach, J. Shi, and M. Heizmann: “Spectral Reconstruction and Disparity from Spatio-Spectrally Coded Light Fields via Multi-Task Deep Learning” International Conference on 3D Vision (3DV), 2021.

class lfcnn.training.auxiliary_loss.normalized_gradient_similarity.NormalizedGradientSimilarity(aux_losses, gradient_approximation=None, approximation_percentage=None, multi_task_uncertainty=False, weights_init=None, **kwargs)[source]

Bases: Model

A keras.Model subclass implementing the Normalized Gradient Similarity [1] training strategy for adaptive auxiliary loss weighting.

[1]: M. Schambach, J. Shi, and M. Heizmann: “Spectral Reconstruction and Disparity from Spatio-Spectrally Coded Light Fields via Multi-Task Deep Learning” International Conference on 3D Vision (3DV), 2021.

Parameters:
  • aux_losses (Dict[str, List[Union[str, Loss]]]) – A dictionary containing a list of auxiliary losses per task. Pass each loss either by name (as a string) or as a loss class. When passing classes, pass the class object itself, not an instance; the losses must not be instantiated beforehand.

  • gradient_approximation (Optional[str]) – Whether to approximate the auxiliary gradient calculation. Pass either the name of the layers to use, e.g. “shared” or “last_shared”, or use stochastic gradient approximation via “stochastic” or “stochastic_eager”. By default, no approximation is used, which may be quite memory-intensive.

  • approximation_percentage (Optional[float]) – Only used when gradient_approximation=“stochastic”. Percentage of layers to sample for the stochastic approximation. Defaults to 0.1, i.e. 10%.

  • multi_task_uncertainty (bool) – Whether to use adaptive multi-task weighting. See Also: MultiTaskUncertainty model.

  • weights_init (Optional[Dict[str, float]]) – Dictionary of initial weights for each task when using multi-task uncertainty.

  • **kwargs – kwargs passed to keras.Model init.
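
Building on the functional graph from the sketch above, a hypothetical configuration using stochastic gradient approximation and multi-task uncertainty weighting (parameter values are illustrative):

    from lfcnn.training.auxiliary_loss.normalized_gradient_similarity import (
        NormalizedGradientSimilarity,
    )

    # Sample 20% of the layers for the stochastic approximation and start
    # the multi-task uncertainty weighting from equal task weights.
    model = NormalizedGradientSimilarity(
        aux_losses=dict(central_view=["mae"], disparity=["mae"]),
        gradient_approximation="stochastic",
        approximation_percentage=0.2,
        multi_task_uncertainty=True,
        weights_init=dict(central_view=1.0, disparity=1.0),
        inputs=inputs, outputs=[central_view, disparity],
    )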

compile(**kwargs)[source]

Overrides keras.Model compile().

train_step(data)[source]

Overrides keras.Model train_step(), which is called by fit().
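
Compared to GradientSimilarity's train_step, the auxiliary gradients are additionally rescaled to a comparable magnitude before the similarity weighting, so tasks with very different loss scales contribute evenly. One plausible normalization step, sketched under that assumption (not the actual implementation):

    def normalize_to_main(aux_grads, main_grads, eps=1e-12):
        # Rescale the auxiliary gradient to the main gradient's global norm.
        aux_norm = tf.linalg.global_norm(aux_grads)
        main_norm = tf.linalg.global_norm(main_grads)
        return [g * main_norm / (aux_norm + eps) for g in aux_grads]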

Module contents

The LFCNN auxiliary loss training strategies.

lfcnn.training.auxiliary_loss.get(strategy)[source]

Given a strategy name, returns a Keras model subclass.

Parameters:

strategy – Name of the strategy.

Returns:

Keras model subclass.
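
A usage sketch, assuming the lookup is by class name (the exact accepted strings are an assumption):

    from lfcnn.training.auxiliary_loss import get

    # Hypothetical lookup by class name:
    Strategy = get("GradientSimilarity")
    model = Strategy(aux_losses=dict(central_view=["mae"]),
                     inputs=inputs, outputs=central_view)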