lfcnn.callbacks package¶
Submodules¶
lfcnn.callbacks.cyclic_learning module¶
Callbacks used for cyclic learning.
For details on cyclic learning (cyclic learning rate, cyclic momentum and learning rate finder) see the original article and references within:
Smith, Leslie N. “Cyclical learning rates for training neural networks.” 2017 IEEE Winter Conference on Applications of Computer Vision (WACV) 2017.
For a broader overview, see:
Smith, Leslie N. “A disciplined approach to neural network hyperparameters: Part 1–learning rate, batch size, momentum, and weight decay.” arXiv preprint arXiv:1803.09820 (2018).

class
lfcnn.callbacks.cyclic_learning.
MomentumScheduler
(schedule, verbose=0)[source]¶ Bases:
tensorflow.python.keras.callbacks.Callback
A callback to schedule optimizer momentum. This is based on Keras’ LearningRateScheduler implementation. The momentum is logged to logs.
 Parameters
schedule – Schedule function.
verbose – Whether to print verbose output.

on_epoch_begin
(epoch, logs=None)[source]¶ Called at the start of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
 Parameters
epoch – integer, index of epoch.
logs – dict. Currently no data is passed to this argument for this method but that may change in the future.

on_epoch_end
(epoch, logs=None)[source]¶ Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
 Parameters
epoch – integer, index of epoch.
logs – dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_.
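As an illustration, a schedule function for this callback might look as follows. The ramp values and the usage comment are hypothetical; any callable mapping an epoch index to a momentum value should work:

```python
# Hypothetical schedule: momentum ramps linearly from 0.95 down to 0.85
# over the first 10 epochs, then stays constant.
def momentum_schedule(epoch):
    m_max, m_min, ramp_epochs = 0.95, 0.85, 10
    if epoch >= ramp_epochs:
        return m_min
    return m_max - (m_max - m_min) * epoch / ramp_epochs

# Usage sketch (assuming an optimizer with a momentum attribute, e.g. SGD):
# callback = MomentumScheduler(schedule=momentum_schedule, verbose=1)
# model.fit(x, y, epochs=20, callbacks=[callback])
```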

class
lfcnn.callbacks.cyclic_learning.
OneCycle
(lr_min, lr_max, lr_final, cycle_epoch, max_epoch)[source]¶ Bases:
tensorflow.python.keras.callbacks.LearningRateScheduler
The 1cycle learning rate scheduler as proposed by Smith. The learning rate starts at the minimal learning rate, increases linearly to the maximal learning rate, and decreases linearly back to the minimal learning rate. This cycle takes cycle_epoch epochs. Finally, the learning rate decays linearly to the final learning rate at max_epoch.
See: Smith, Leslie N. “A disciplined approach to neural network hyperparameters: Part 1–learning rate, batch size, momentum, and weight decay.” arXiv preprint arXiv:1803.09820 (2018).
 Parameters
lr_min (float) – Minimum learning rate. This is the initial learning rate.
lr_max (float) – Maximum learning rate.
lr_final (float) – Final learning rate.
cycle_epoch (int) – Number of epochs one cycle takes: from the minimal learning rate to the maximal learning rate and back to the minimal learning rate.
max_epoch (int) – Epoch where the decay ends. For epochs larger than max_epoch, the learning rate stays constant at lr_final.

class
lfcnn.callbacks.cyclic_learning.
OneCycleCosine
(lr_min, lr_max, lr_final, phase_epoch, max_epoch)[source]¶ Bases:
tensorflow.python.keras.callbacks.LearningRateScheduler
This adaptation (proposed by fastAI) of the 1cycle policy uses a cosine to increase and decrease the learning rate. The learning rate update consists of only two phases: 1. increasing from lr_min to lr_max, 2. decreasing from lr_max to lr_final.
See: https://sgugger.github.io/the1cyclepolicy.html
 Parameters
lr_min (float) – Minimum learning rate. This is the initial learning rate.
lr_max (float) – Maximum learning rate.
lr_final (float) – Final learning rate.
phase_epoch (int) – Epoch where the maximum learning rate is reached.
max_epoch (int) – Epoch where the decay ends. For epochs larger than max_epoch, the learning rate stays constant at lr_final.

class
lfcnn.callbacks.cyclic_learning.
OneCycleCosineMomentum
(phase_epoch, max_epoch, m_min=0.85, m_max=0.95, verbose=0)[source]¶ Bases:
lfcnn.callbacks.cyclic_learning.MomentumScheduler
This adaptation (proposed by fastAI) of the 1cycle policy uses a cosine to adapt the momentum.
It should be used with the OneCycleCosine learning rate scheduler with the same phase_epoch and max_epoch settings.
The momentum update consists of only two phases: 1. decreasing from m_max to m_min 2. increasing from m_min to m_max.
 Parameters
phase_epoch – Epoch where minimum momentum is achieved.
max_epoch – Epoch where schedule ends. For epochs larger than max_epoch, the momentum stays constant at m_max.
m_min – Minimum momentum.
m_max – Maximum momentum.

class
lfcnn.callbacks.cyclic_learning.
OneCycleMomentum
(cycle_epoch, m_min=0.85, m_max=0.95, verbose=0)[source]¶ Bases:
lfcnn.callbacks.cyclic_learning.MomentumScheduler
The 1cycle momentum scheduler as proposed by Smith. Should be used in combination with OneCycle learning rate scheduler when training with SGD or another optimizer that has a momentum attribute.
The momentum starts at the maximum value (when the learning rate is small), decreases to the minimum value (when the learning rate is large), and increases back to the maximum value as the learning rate decreases again. After the cycle, the momentum stays constant at the maximum value for the rest of the training.
See: Smith, Leslie N. “A disciplined approach to neural network hyperparameters: Part 1–learning rate, batch size, momentum, and weight decay.” arXiv preprint arXiv:1803.09820 (2018).
 Parameters
cycle_epoch – Number of epochs one cycle takes. Should be the same as for the OneCycle learning rate scheduler.
m_min – Minimum momentum.
m_max – Maximum momentum.
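The mirrored momentum cycle can be sketched as a plain function. This assumes a piecewise-linear form consistent with the description above; the actual implementation may differ:

```python
def one_cycle_momentum(epoch, cycle_epoch, m_min=0.85, m_max=0.95):
    """Momentum mirrors the 1cycle learning rate: high -> low -> high."""
    half = cycle_epoch / 2
    if epoch <= half:  # learning rate rises, momentum falls
        return m_max - (m_max - m_min) * epoch / half
    if epoch <= cycle_epoch:  # learning rate falls, momentum rises back
        return m_min + (m_max - m_min) * (epoch - half) / half
    return m_max  # constant after the cycle
```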
lfcnn.callbacks.lr_finder module¶
Callbacks used for cyclic learning.
For details on cyclic learning (cyclic learning rate, cyclic momentum and learning rate finder) see the original article and references within:
Smith, Leslie N. “Cyclical learning rates for training neural networks.” 2017 IEEE Winter Conference on Applications of Computer Vision (WACV) 2017.
For a broader overview, see:
Smith, Leslie N. “A disciplined approach to neural network hyperparameters: Part 1–learning rate, batch size, momentum, and weight decay.” arXiv preprint arXiv:1803.09820 (2018).

class
lfcnn.callbacks.lr_finder.
LearningRateFinder
(lr_min, lr_max, num_batches, sweep='exponential', beta=0.95, verbose=False)[source]¶ Bases:
tensorflow.python.keras.callbacks.Callback
Learning rate finder according to Leslie Smith. Starting from a small learning rate, the learning rate is increased after each batch up to the maximum learning rate, and the corresponding training loss (per batch) is logged. The optimal learning rate corresponds to the point where the training loss has the steepest slope. For OneCycle learning rate schedulers, the maximum and minimum learning rates can be found using this LearningRateFinder.
See: Smith, Leslie N. “A disciplined approach to neural network hyperparameters: Part 1–learning rate, batch size, momentum, and weight decay.” arXiv preprint arXiv:1803.09820 (2018).
See Also: Blog post by Sylvain Gugger of fastai https://sgugger.github.io/howdoyoufindagoodlearningrate.html
 Parameters
lr_min (float) – Minimum learning rate, start point of the sweep.
lr_max (float) – Maximum learning rate, end point of the sweep.
num_batches (int) – Number of batches per single epoch.
sweep (str) – Whether to increase the learning rate linearly (sweep="linear") or exponentially (sweep="exponential", default).
beta (float) – Smoothing factor for the logged loss_avg.
verbose (bool) – Whether to log verbose info.
lfcnn.callbacks.lr_schedules module¶
Learning rate schedulers that automatically adjust the learning rate during training.

class
lfcnn.callbacks.lr_schedules.
ExponentialDecay
(lr_init, max_epoch, alpha=0.02, lr_min=1e-06)[source]¶ Bases:
tensorflow.python.keras.callbacks.LearningRateScheduler
Learning rate decays exponentially from the initial learning rate to the minimal learning rate at epoch max_epoch. For epochs larger than max_epoch, the learning rate stays constant at lr_min.
 Parameters
lr_init – Initial learning rate.
max_epoch – Epoch where the decay ends. For larger epochs, the learning rate stays constant at lr_min.
alpha – Decay constant of the exponential.
lr_min – Minimal (final) learning rate.

class
lfcnn.callbacks.lr_schedules.
LinearDecay
(lr_init, max_epoch, lr_min=1e-06)[source]¶ Bases:
lfcnn.callbacks.lr_schedules.PolynomialDecay
Learning rate decays linearly. Corresponds to PolynomialDecay with power=1.

class
lfcnn.callbacks.lr_schedules.
PolynomialDecay
(lr_init, max_epoch, power=2, lr_min=1e-06)[source]¶ Bases:
tensorflow.python.keras.callbacks.LearningRateScheduler
Learning rate decays polynomially from the initial learning rate to the minimal learning rate at epoch max_epoch. For epochs larger than max_epoch, the learning rate stays constant at lr_min.
 Parameters
lr_init – Initial learning rate.
max_epoch – Epoch where the decay ends. For larger epochs, the learning rate stays constant at lr_min.
power – Power of the polynomial decay. power=1 corresponds to linear decay.
lr_min – Minimal (final) learning rate.
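The docstring does not pin down the exact polynomial; one plausible form (an assumption, not the library's verified formula) interpolates between lr_init and lr_min as follows:

```python
def polynomial_decay(epoch, lr_init, max_epoch, power=2, lr_min=1e-06):
    """Polynomial decay from lr_init to lr_min at max_epoch."""
    if epoch >= max_epoch:
        return lr_min  # constant after max_epoch
    t = 1 - epoch / max_epoch  # remaining fraction of the decay
    return lr_min + (lr_init - lr_min) * t ** power
```

With power=1 this reduces to a linear decay, matching the LinearDecay class below.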

class
lfcnn.callbacks.lr_schedules.
SigmoidDecay
(lr_init, max_epoch, alpha=0.1, lr_min=1e-06)[source]¶ Bases:
tensorflow.python.keras.callbacks.LearningRateScheduler
Sigmoid decay. The sigmoid function is created symmetrically around max_epoch // 2.
 Parameters
lr_init – Initial learning rate.
max_epoch – Epoch where the decay ends. For larger epochs, the learning rate stays constant at lr_min.
alpha – Slope parameter of the sigmoid.
lr_min – Minimal (final) learning rate.

class
lfcnn.callbacks.lr_schedules.
StepDecay
(lr_init, steps, decay=0.5)[source]¶ Bases:
tensorflow.python.keras.callbacks.LearningRateScheduler
Learning rate is dropped every steps epochs to decay * learning_rate, starting from the initial learning rate. That is, the learning rate is given by
`lr = lr_init * decay**N`
where `N = epoch // steps`.
 Parameters
lr_init – Initial learning rate.
steps – Number of epochs between learning rate drops.
decay – Multiplicative decay factor applied every steps epochs.
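The formula above translates directly into code. For example, with lr_init=0.01, steps=10, and decay=0.5, the learning rate halves every 10 epochs:

```python
def step_decay(epoch, lr_init=0.01, steps=10, decay=0.5):
    """lr = lr_init * decay**N with N = epoch // steps."""
    return lr_init * decay ** (epoch // steps)
```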
lfcnn.callbacks.sacred module¶
Callbacks to log metrics and other information to Sacred.

class
lfcnn.callbacks.sacred.
SacredEpochLogger
(run, epochs)[source]¶ Bases:
tensorflow.python.keras.callbacks.Callback
Callback that logs the current epoch to Sacred.

on_epoch_begin
(epoch, logs=None)[source]¶ Called at the start of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
 Parameters
epoch – integer, index of epoch.
logs – dict. Currently no data is passed to this argument for this method but that may change in the future.


class
lfcnn.callbacks.sacred.
SacredLearningRateLogger
(run)[source]¶ Bases:
tensorflow.python.keras.callbacks.Callback
Callback that logs learning rate to Sacred.

on_epoch_end
(epoch, logs=None)[source]¶ Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
 Parameters
epoch – integer, index of epoch.
logs – dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_.


class
lfcnn.callbacks.sacred.
SacredMetricLogger
(run)[source]¶ Bases:
tensorflow.python.keras.callbacks.Callback
Callback that logs all losses and metrics to Sacred.

on_epoch_end
(epoch, logs=None)[source]¶ Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
 Parameters
epoch – integer, index of epoch.
logs – dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_.


class
lfcnn.callbacks.sacred.
SacredTimeLogger
(run)[source]¶ Bases:
tensorflow.python.keras.callbacks.Callback
Callback that logs times per epoch to Sacred.

on_epoch_begin
(epoch, logs=None)[source]¶ Called at the start of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
 Parameters
epoch – integer, index of epoch.
logs – dict. Currently no data is passed to this argument for this method but that may change in the future.

on_epoch_end
(epoch, logs=None)[source]¶ Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
 Parameters
epoch – integer, index of epoch.
logs – dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_.
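As a sketch of what the Sacred loggers above do, a minimal metric logger might look as follows. This assumes Sacred's Run.log_scalar(name, value, step) API; the class and the stand-in run object are illustrative, not the library's actual implementation:

```python
class MetricLogger:
    """Minimal sketch of a Keras-style callback that logs metrics to Sacred.

    `run` is expected to provide Sacred's Run.log_scalar(name, value, step).
    """

    def __init__(self, run):
        self.run = run

    def on_epoch_end(self, epoch, logs=None):
        # Forward every metric in logs (including val_-prefixed ones) to Sacred.
        for name, value in (logs or {}).items():
            self.run.log_scalar(name, float(value), epoch)

# Usage with a stand-in run object:
class FakeRun:
    def __init__(self):
        self.logged = []
    def log_scalar(self, name, value, step):
        self.logged.append((name, value, step))

run = FakeRun()
logger = MetricLogger(run)
logger.on_epoch_end(0, {"loss": 0.5, "val_loss": 0.6})
```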
