lfcnn.utils package

Submodules

lfcnn.utils.callback_utils module

Utilities to be used with LearningRateSchedulers and MomentumSchedulers. Requires the matplotlib library.

lfcnn.utils.callback_utils.plot_scheduler(schedulers, max_epoch, log=False, **kwargs)[source]

Plot one or more learning rate or momentum schedulers for visual comparison. Uses the matplotlib library. See the usage sketch after the parameter list below.

Parameters:
  • schedulers – Single instance or list of scheduler instances. Each scheduler can be either a LearningRateScheduler or a MomentumScheduler instance.

  • max_epoch – Maximum epoch to plot.

  • log – Whether to scale the y-axis logarithmically.

  • **kwargs – Passed to plt.plot.
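
A minimal usage sketch. The scheduler classes, their import path, and their constructor signatures below are illustrative assumptions, not part of this module; use the LearningRateScheduler or MomentumScheduler instances you actually train with.

from lfcnn.utils.callback_utils import plot_scheduler
# Hypothetical import path and class names, for illustration only.
from lfcnn.training.utils.lr_schedules import ExponentialDecay, SigmoidDecay

schedulers = [
    ExponentialDecay(lr_init=1e-3, decay=0.05),  # hypothetical signature
    SigmoidDecay(lr_init=1e-3, lr_min=1e-5),     # hypothetical signature
]

# Compare both schedules over 100 epochs on a logarithmic y-axis.
# Extra keyword arguments, e.g. linestyle, are forwarded to plt.plot.
plot_scheduler(schedulers, max_epoch=100, log=True, linestyle="--")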

lfcnn.utils.tf_utils module

lfcnn TensorFlow utilities.

lfcnn.utils.tf_utils.disable_eager()[source]

Disables TF eager execution. In TF >= 2.0, eager execution is enabled by default.
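
A minimal sketch of the effect; the check via tf.executing_eagerly() is plain TensorFlow, not part of this module.

import tensorflow as tf
from lfcnn.utils.tf_utils import disable_eager

print(tf.executing_eagerly())  # True: eager execution is on by default in TF >= 2.0
disable_eager()                # switch the whole process to graph mode
print(tf.executing_eagerly())  # False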

lfcnn.utils.tf_utils.list_physical_devices(type=None)[source]
lfcnn.utils.tf_utils.list_visible_devices(type=None)[source]
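
A short sketch, assuming the type argument accepts the usual TensorFlow device type strings such as "CPU" and "GPU" (as with tf.config.list_physical_devices).

from lfcnn.utils.tf_utils import list_physical_devices, list_visible_devices

print(list_physical_devices())       # all devices present on the machine
print(list_physical_devices("GPU"))  # physical GPUs only (assumed filter string)
print(list_visible_devices("GPU"))   # GPUs currently visible to TensorFlow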
lfcnn.utils.tf_utils.mixed_precision_graph_rewrite(opt, loss_scale='dynamic')[source]

Uses a graph rewrite to enable mixed precision training. Use with care. The Keras API set_mixed_precision_keras() is the preferred method for mixed precision training. A usage sketch follows the return description below.

See also: https://www.tensorflow.org/api_docs/python/tf/train/experimental/enable_mixed_precision_graph_rewrite

Parameters:
  • opt (OptimizerV2) – Keras optimizer instance.

  • loss_scale (str) – Loss scale method. Default: ‘dynamic’.

Return type:

OptimizerV2

Returns:

Optimizer with mixed precision graph rewrite enabled.
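
A minimal sketch, assuming a TF 2.x version that still supports the (since deprecated) mixed precision graph rewrite; prefer set_mixed_precision_keras() as noted above.

import tensorflow as tf
from lfcnn.utils.tf_utils import mixed_precision_graph_rewrite

# Wrap a regular Keras optimizer; the returned optimizer applies the
# mixed precision graph rewrite with dynamic loss scaling.
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
opt = mixed_precision_graph_rewrite(opt, loss_scale="dynamic")

model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(32,))])
model.compile(optimizer=opt, loss="mse")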

lfcnn.utils.tf_utils.set_mixed_precision_keras(policy='mixed_float16', loss_scale='dynamic')[source]

Sets up the Keras mixed precision API. Simply call this once at the beginning of your script. A usage sketch follows the parameter list below.

Parameters:
  • policy (str) – Mixed precision dtype policy. Default: ‘mixed_float16’. See tf.keras.mixed_precision.experimental.Policy for available dtype policies.

  • loss_scale (Union[float, str]) – Loss scale value or loss scale method. Default: ‘dynamic’.
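
A minimal usage sketch; the call should happen before any layers or models are built.

import tensorflow as tf
from lfcnn.utils.tf_utils import set_mixed_precision_keras

# Enable the Keras mixed precision policy globally.
set_mixed_precision_keras(policy="mixed_float16", loss_scale="dynamic")

# Layers built afterwards compute in float16 while keeping float32 variables.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="adam", loss="mse")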

lfcnn.utils.tf_utils.use_cpu()[source]

Set the CPU as the visible TF device. This effectively hides all available GPUs.

lfcnn.utils.tf_utils.use_gpu(index=None)[source]

Set a GPU as the visible TF device.

Parameters:

index – Index of the GPU to make visible. If None, all GPUs are visible to TF.
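
A short sketch; as with TensorFlow's own visible-device configuration, the selection should be made before any devices are initialized.

from lfcnn.utils.tf_utils import use_cpu, use_gpu

# Hide all GPUs and run on the CPU only ...
use_cpu()

# ... or restrict TensorFlow to the first GPU ...
use_gpu(0)

# ... or make all GPUs visible again.
use_gpu()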

Module contents

LFCNN utilities.