lfcnn.callbacks package

Submodules

lfcnn.callbacks.cyclic_learning module

Callbacks used for cyclic learning.

For details on cyclic learning (cyclic learning rate, cyclic momentum and learning rate finder) see the original article and references within:

Smith, Leslie N. “Cyclical learning rates for training neural networks.” 2017 IEEE Winter Conference on Applications of Computer Vision (WACV) 2017.

For a broader overview, see:

Smith, Leslie N. “A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay.” arXiv preprint arXiv:1803.09820 (2018).

class lfcnn.callbacks.cyclic_learning.MomentumScheduler(schedule, verbose=0)[source]

Bases: Callback

A callback to schedule optimizer momentum. This is based on Keras’ LearningRateScheduler implementation. The momentum is logged to logs.

Parameters:
  • schedule – Schedule function.

  • verbose – Whether to print verbose output.
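
Example (a minimal sketch; the (epoch, momentum) signature of the schedule function is assumed by analogy with Keras' LearningRateScheduler, and the toy model and data are placeholders):

    import numpy as np
    import tensorflow as tf
    from lfcnn.callbacks.cyclic_learning import MomentumScheduler

    # Hypothetical schedule: high momentum early, lower momentum later.
    def momentum_schedule(epoch, momentum):
        return 0.95 if epoch < 10 else 0.85

    model = tf.keras.Sequential([tf.keras.Input(shape=(8,)), tf.keras.layers.Dense(1)])
    # The optimizer must expose a momentum attribute, e.g. SGD.
    model.compile(optimizer=tf.keras.optimizers.SGD(momentum=0.95), loss="mse")

    x, y = np.random.rand(64, 8), np.random.rand(64, 1)
    model.fit(x, y, epochs=20, verbose=0,
              callbacks=[MomentumScheduler(schedule=momentum_schedule, verbose=1)])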

on_epoch_begin(epoch, logs=None)[source]

Called at the start of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_epoch_end(epoch, logs=None)[source]

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

on_train_begin(logs=None)[source]

Called at the beginning of training.

Subclasses should override for any actions to run.

Parameters:

logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

set_optimizer()[source]

Set optimizer attribute.

This is a bit ugly, but is necessary when using mixed precision training with loss scaling. Internally, when using loss scaling, the optimizer is wrapped in a LossScaleOptimizer which does not expose the momentum directly.

class lfcnn.callbacks.cyclic_learning.OneCycle(lr_min, lr_max, lr_final, cycle_epoch, max_epoch)[source]

Bases: LearningRateScheduler

The 1cycle learning rate scheduler as proposed by Smith. The learning rate starts at the minimal learning rate, increases linearly to the maximal learning rate, and decreases linearly back to the minimal learning rate. This cycle takes cycle_epoch epochs. Finally, the learning rate decays linearly to the final learning rate, reached at max_epoch.

See: Smith, Leslie N. “A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay.” arXiv preprint arXiv:1803.09820 (2018).

Parameters:
  • lr_min (float) – Minimum learning rate. This is the initial learning rate.

  • lr_max (float) – Maximum learning rate.

  • lr_final (float) – Final learning rate.

  • cycle_epoch (int) – Number of epochs one cycle takes, starting from the minimal learning rate, up to the maximal learning rate, and back to the minimal learning rate.

  • max_epoch (int) – Epoch where decay ends. For epochs larger than max_epoch, the learning rate stays constant at lr_final.
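For illustration, a minimal sketch of attaching the scheduler to Keras training; the learning rate values, the toy model, and the random data are placeholders:

    import numpy as np
    import tensorflow as tf
    from lfcnn.callbacks.cyclic_learning import OneCycle

    # One cycle over the first 40 epochs, then linear decay to lr_final until epoch 50.
    one_cycle = OneCycle(lr_min=1e-4, lr_max=1e-2, lr_final=1e-5,
                         cycle_epoch=40, max_epoch=50)

    model = tf.keras.Sequential([tf.keras.Input(shape=(8,)), tf.keras.layers.Dense(1)])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4), loss="mse")

    x, y = np.random.rand(64, 8), np.random.rand(64, 1)
    model.fit(x, y, epochs=50, verbose=0, callbacks=[one_cycle])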

one_cycle(epoch, lr)[source]
Return type:

float

class lfcnn.callbacks.cyclic_learning.OneCycleCosine(lr_min, lr_max, lr_final, phase_epoch, max_epoch)[source]

Bases: LearningRateScheduler

This adaptation of the 1cycle policy (proposed by fastai) uses a cosine shape to increase and decrease the learning rate. The learning rate update consists of only two phases: first increasing from lr_min to lr_max, then decreasing from lr_max to lr_final.

See: https://sgugger.github.io/the-1cycle-policy.html

Parameters:
  • lr_min (float) – Minimum learning rate. This is the initial learning rate.

  • lr_max (float) – Maximum learning rate.

  • lr_final (float) – Final learning rate.

  • phase_epoch (int) – Epoch where maximum learning rate is achieved.

  • max_epoch (int) – Epoch where decay ends. For epochs larger than max_epoch, the learning rate stays constant at lr_final.

one_cycle_cosine(epoch, lr)[source]
Return type:

float

class lfcnn.callbacks.cyclic_learning.OneCycleCosineMomentum(phase_epoch, max_epoch, m_min=0.85, m_max=0.95, verbose=0)[source]

Bases: MomentumScheduler

This adaptation of the 1cycle policy (proposed by fastai) uses a cosine shape to adapt the momentum.

It should be used with the OneCycleCosine learning rate scheduler with the same phase_epoch and max_epoch settings.

The momentum update consists of only two phases: first decreasing from m_max to m_min, then increasing from m_min to m_max.

Parameters:
  • phase_epoch – Epoch where minimum momentum is achieved.

  • max_epoch – Epoch where schedule ends. For epochs larger than max_epoch, the momentum stays constant at m_max.

  • m_min – Minimum momentum.

  • m_max – Maximum momentum.
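The exact interpolation used by LFCNN is not reproduced here, but a common cosine annealing between two values, sketched below as a hypothetical helper (not part of the package), illustrates the shape of one such phase:

    import math

    def cosine_interpolate(start, end, step, total_steps):
        """Hypothetical helper: anneal from start to end along a half cosine."""
        t = min(max(step / total_steps, 0.0), 1.0)
        return end + 0.5 * (start - end) * (1.0 + math.cos(math.pi * t))

    # Decreasing phase from m_max to m_min over phase_epoch epochs (illustrative values).
    print(cosine_interpolate(0.95, 0.85, step=0, total_steps=10))   # 0.95
    print(cosine_interpolate(0.95, 0.85, step=5, total_steps=10))   # ~0.90
    print(cosine_interpolate(0.95, 0.85, step=10, total_steps=10))  # 0.85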

one_cycle_cosine(epoch, lr)[source]
Return type:

float

class lfcnn.callbacks.cyclic_learning.OneCycleMomentum(cycle_epoch, m_min=0.85, m_max=0.95, verbose=0)[source]

Bases: MomentumScheduler

The 1cycle momentum scheduler as proposed by Smith. It should be used in combination with the OneCycle learning rate scheduler when training with SGD or another optimizer that has a momentum attribute.

The momentum starts at the maximum value (when the learning rate is small), decreases to the minimum value (when the learning rate is large), and then increases back to the maximum value as the learning rate decreases again. Finally, the momentum stays at the maximum value for the rest of the training.

See: Smith, Leslie N. “A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay.” arXiv preprint arXiv:1803.09820 (2018).

Parameters:
  • cycle_epoch – Number of epochs one cycle takes. Should be the same as for the OneCycle learning rate scheduler.

  • m_min – Minimum momentum.

  • m_max – Maximum momentum.

one_cycle(epoch, momentum)[source]
Return type:

float

lfcnn.callbacks.lr_finder module

Callbacks used for cyclic learning.

For details on cyclic learning (cyclic learning rate, cyclic momentum and learning rate finder) see the original article and references within:

Smith, Leslie N. “Cyclical learning rates for training neural networks.” 2017 IEEE Winter Conference on Applications of Computer Vision (WACV) 2017.

For a broader overview, see:

Smith, Leslie N. “A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay.” arXiv preprint arXiv:1803.09820 (2018).

class lfcnn.callbacks.lr_finder.LearningRateFinder(lr_min, lr_max, num_batches, sweep='exponential', beta=0.95, verbose=False)[source]

Bases: Callback

Learning rate finder according to Leslie Smith. Starting from a small learning rate, the learning rate is increased after each batch up to the maximum learning rate. The corresponding training loss (per batch) is logged. The optimal learning rate corresponds to the point where the training loss has the steepest slope. For OneCycle learning rate schedulers, the maximum and minimum learning rates can be found using this LearningRateFinder.

See: Smith, Leslie N. “A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay.” arXiv preprint arXiv:1803.09820 (2018).

See Also: Blog post by Sylvain Gugger of fastai https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html

Parameters:
  • lr_min (float) – Minimum learning rate, start point.

  • lr_max (float) – Maximum learning rate, end point.

  • num_batches (int) – Number of batches per single epoch.

  • sweep (str) – Whether to perform a linear increase of the learning rate (sweep = “linear”) or an exponential one (sweep = “exponential”, default).

  • beta (float) – Smoothing factor for logged loss_avg.

  • verbose (bool) – Whether to log verbose info.
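A usage sketch: train for a single epoch while the callback sweeps the learning rate over its range. The toy model and random data are placeholders, and how the logged (learning rate, loss) pairs are read out afterwards depends on the callback's attributes and is not shown here:

    import numpy as np
    import tensorflow as tf
    from lfcnn.callbacks.lr_finder import LearningRateFinder

    batch_size, num_samples = 32, 3200
    num_batches = num_samples // batch_size

    lr_finder = LearningRateFinder(lr_min=1e-6, lr_max=1.0, num_batches=num_batches,
                                   sweep="exponential", verbose=True)

    model = tf.keras.Sequential([tf.keras.Input(shape=(8,)), tf.keras.layers.Dense(1)])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-6), loss="mse")

    x, y = np.random.rand(num_samples, 8), np.random.rand(num_samples, 1)
    # A single epoch suffices: the learning rate is increased after every batch.
    model.fit(x, y, batch_size=batch_size, epochs=1, verbose=0, callbacks=[lr_finder])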

on_batch_begin(batch, logs=None)[source]

A backwards compatibility alias for on_train_batch_begin.

on_batch_end(batch, logs=None)[source]

A backwards compatibility alias for on_train_batch_end.

schedule(batch, lr)[source]
Return type:

float

lfcnn.callbacks.lr_schedules module

Learning rate schedulers to automatically adjust the learning rate during training.
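
A construction sketch of a few of the schedulers below; all values are illustrative only:

    from lfcnn.callbacks.lr_schedules import (
        ExponentialDecay, LinearDecay, StepDecay, StepListDecay)

    exponential = ExponentialDecay(lr_init=1e-2, max_epoch=100)
    linear = LinearDecay(lr_init=1e-2, max_epoch=100)
    step = StepDecay(lr_init=1e-2, steps=30, decay=0.5)
    step_list = StepListDecay(lr_init=1e-2, steps=[30, 60, 90], decay=0.5)

    # Like any Keras callback, a scheduler is passed to training via
    # model.fit(x, y, epochs=100, callbacks=[exponential])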

class lfcnn.callbacks.lr_schedules.ExponentialDecay(lr_init, max_epoch, alpha=0.02, lr_min=1e-06, **kwargs)[source]

Bases: LearningRateScheduler

Learning rate decays exponentially from the initial learning rate to the minimal learning rate at epoch max_epoch. For epochs larger than max_epoch, the learning rate stays constant at lr_min.

Parameters:
  • lr_init (float) – Initial learning rate.

  • max_epoch (int) – Epoch where decay ends. For epochs larger than max_epoch, the learning rate stays constant at lr_min.

  • alpha (float) – Decay factor.

  • lr_min (float) – Minimum learning rate.

exponential_decay(epoch, lr)[source]
Return type:

float

class lfcnn.callbacks.lr_schedules.LinearDecay(lr_init, max_epoch, lr_min=1e-06, **kwargs)[source]

Bases: PolynomialDecay

Learning rate decays linearly. Corresponds to PolynomialDecay with power=1.

Parameters:
  • lr_init (float) – Initial learning rate.

  • max_epoch (int) – Epoch where decay ends. For epochs larger than max_epoch, the learning rate stays constant at lr_min.

  • lr_min (float) – Minimum learning rate.

class lfcnn.callbacks.lr_schedules.PolynomialDecay(lr_init, max_epoch, power=2, lr_min=1e-06, **kwargs)[source]

Bases: LearningRateScheduler

Learning rate decays polynomially from the initial learning rate to the minimal learning rate at epoch max_epoch. For epochs larger than max_epoch, the learning rate stays constant at lr_min.

Parameters:
  • lr_init (float) – Initial learning rate.

  • max_epoch (int) – Epoch where decay ends. For epochs larger than max_epoch, the learning rate stays constant at lr_min.

  • power (int) – Polynomial power.

  • lr_min (float) – Minimum learning rate.

polynomial_decay(epoch, lr)[source]
Return type:

float

class lfcnn.callbacks.lr_schedules.ReduceLROnPlateauRelative(**kwargs)[source]

Bases: ReduceLROnPlateau

Reduce the learning rate on a plateau of the monitored loss using relative improvement monitoring.

This scheduler is essentially equivalent to the ReduceLROnPlateau callback, except that the monitored values are compared relatively rather than absolutely, using the min_delta value.

_reset()[source]

Set relative thresholding
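
A construction sketch; the keyword arguments are the usual ReduceLROnPlateau arguments, and the reading of min_delta as a relative improvement threshold (0.01 meaning roughly a 1 % improvement) is an assumption here:

    from lfcnn.callbacks.lr_schedules import ReduceLROnPlateauRelative

    # min_delta is compared relatively rather than absolutely (assumed semantics).
    reduce_lr = ReduceLROnPlateauRelative(monitor="val_loss", factor=0.5,
                                          patience=5, min_delta=0.01)
    # model.fit(..., callbacks=[reduce_lr])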

class lfcnn.callbacks.lr_schedules.SigmoidDecay(lr_init, max_epoch, alpha=0.1, offset=0, lr_min=1e-06, **kwargs)[source]

Bases: LearningRateScheduler

Sigmoid decay. The sigmoid function is created symmetrically around max_epoch // 2.

Parameters:
  • lr_init (float) – Initial learning rate.

  • max_epoch (int) – Epoch where decay ends. For epochs larger than max_epoch, the learning rate stays constant at lr_min.

  • alpha (float) – Decay factor tuning the width of the sigmoid.

  • offset (int) – Offset epoch at which the sigmoid decay starts.

  • lr_min (float) – Minimum learning rate.

sigmoid_decay(epoch, lr)[source]
Return type:

float

class lfcnn.callbacks.lr_schedules.StepDecay(lr_init, steps, decay=0.5, **kwargs)[source]

Bases: LearningRateScheduler

The learning rate is dropped every steps epochs to decay * learning_rate, starting with the initial learning rate. That is, the learning rate is given by `lr = lr_init * decay**N` where `N = epoch // steps`.

Parameters:
  • lr_init (float) – Initial learning rate.

  • steps (int) – Decay the learning rate every steps epochs.

  • decay (float) – Decay factor.
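
A small worked example of the formula above (plain Python, independent of Keras; the values are illustrative):

    lr_init, decay, steps = 1e-2, 0.5, 30

    def step_decay_lr(epoch):
        # lr = lr_init * decay**N with N = epoch // steps (formula from the docstring)
        return lr_init * decay ** (epoch // steps)

    print(step_decay_lr(0))   # 0.01
    print(step_decay_lr(29))  # 0.01
    print(step_decay_lr(30))  # 0.005
    print(step_decay_lr(90))  # 0.00125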

step_decay(epoch, lr)[source]
Return type:

float

class lfcnn.callbacks.lr_schedules.StepListDecay(lr_init, steps, decay=0.5, **kwargs)[source]

Bases: LearningRateScheduler

The learning rate is dropped at the epochs specified in steps to decay * learning_rate, starting with the initial learning rate.

Parameters:
  • lr_init (float) – Initial learning rate.

  • steps (List[int]) – List of epoch numbers at which to drop the learning rate.

  • decay (float) – Decay factor.

step_decay(epoch, lr)[source]
Return type:

float

lfcnn.callbacks.sacred module

Callbacks to log metrics and other information to Sacred.

class lfcnn.callbacks.sacred.SacredEpochLogger(run, epochs, offset=0)[source]

Bases: Callback

Callback that logs the current epoch to Sacred.

on_epoch_begin(epoch, logs=None)[source]

Called at the start of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

class lfcnn.callbacks.sacred.SacredLearningRateLogger(run, offset=0)[source]

Bases: Callback

Callback that logs the learning rate to Sacred.

on_epoch_end(epoch, logs=None)[source]

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

class lfcnn.callbacks.sacred.SacredLossWeightLogger(run, offset=0)[source]

Bases: Callback

Callback that logs all loss weights to Sacred.

on_epoch_end(epoch, logs=None)[source]

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

class lfcnn.callbacks.sacred.SacredMetricLogger(run, offset=0)[source]

Bases: Callback

Callback that logs all losses and metrics to Sacred.
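
A usage sketch inside a Sacred experiment; the experiment setup is generic Sacred code, and passing the captured _run object as the run argument of the loggers is the assumed usage. The epoch count, model, and data are placeholders:

    import numpy as np
    import tensorflow as tf
    from sacred import Experiment
    from lfcnn.callbacks.sacred import (
        SacredEpochLogger, SacredLearningRateLogger, SacredMetricLogger, SacredTimeLogger)

    ex = Experiment("lfcnn_training")

    @ex.automain
    def main(_run):
        epochs = 10
        model = tf.keras.Sequential([tf.keras.Input(shape=(8,)), tf.keras.layers.Dense(1)])
        model.compile(optimizer="adam", loss="mse")

        callbacks = [
            SacredEpochLogger(_run, epochs=epochs),
            SacredMetricLogger(_run),
            SacredLearningRateLogger(_run),
            SacredTimeLogger(_run),
        ]

        x, y = np.random.rand(64, 8), np.random.rand(64, 1)
        model.fit(x, y, epochs=epochs, verbose=0, callbacks=callbacks)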

on_epoch_end(epoch, logs=None)[source]

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

class lfcnn.callbacks.sacred.SacredTimeLogger(run, offset=0)[source]

Bases: Callback

Callback that logs the times per epoch to Sacred.

on_epoch_begin(epoch, logs=None)[source]

Called at the start of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_epoch_end(epoch, logs=None)[source]

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

Module contents

The LFCNN callbacks module.

lfcnn.callbacks.get(callback)[source]

Given a callback name, returns a Keras callback instance.

Parameters:

callback (str) – Name of the callback.

Returns:

Callback instance.
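
A usage sketch, assuming the lookup accepts the class names as spelled in this package:

    import lfcnn.callbacks

    # Assumption: the registry resolves the class name "ExponentialDecay";
    # the returned value is used as documented (a Keras callback instance).
    callback = lfcnn.callbacks.get("ExponentialDecay")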