lfcnn.utils package

Submodules

lfcnn.utils.callback_utils module

Utilities to be used with LearningRateSchedulers and MomentumSchedulers. Requires the matplotlib library.

lfcnn.utils.callback_utils.plot_scheduler(schedulers, max_epoch, log=False)[source]

Plot one or more learning rate or momentum schedulers for visual comparison. Uses the matplotlib library.

Parameters
  • schedulers – Single instance or list of scheduler instances. Each scheduler can be either a LearningRateScheduler or a MomentumScheduler instance.

  • max_epoch – Maximum epoch to plot.

  • log – Whether to scale the y-axis logarithmically.
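
A minimal usage sketch, assuming the scheduler instances have already been constructed elsewhere (their import path and constructor are not shown on this page, so the variable names below are purely illustrative):

    from lfcnn.utils.callback_utils import plot_scheduler

    # Hypothetical, previously constructed LearningRateScheduler /
    # MomentumScheduler instances (names for illustration only)
    schedulers = [step_decay_scheduler, cosine_decay_scheduler]

    # Compare the schedules over 100 epochs on a logarithmic y-axis
    plot_scheduler(schedulers, max_epoch=100, log=True)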

lfcnn.utils.data_utils module

Utilities to convert between depth and disparity for microlens array based unfocused plenoptic cameras in the thin lens approximation.

Mostly to be used with raytracer_utils and the depth map rendered by the IIIT Raytracer.

lfcnn.utils.data_utils.depth_from_disp(disp, f, g, r, R, numAngSteps)[source]

Calculate the depth from disparity given the camera parameters.

Note that all given camera parameters need to be in the same SI units, e.g. all in meters.

Inverse of disp_from_depth.

Parameters
  • disp (float) – Disparity to convert.

  • f (float) – Main lens focal length

  • g (float) – Focus distance (distance of the focal plane)

  • r (float) – Micro lens radius

  • R (float) – Main lens radius

  • numAngSteps (int) – Angular resolution: the number of subapertures per dimension (number of pixels underneath a microlens)

Returns

depth (in same unit as camera parameters, e.g. in meters)
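
A short sketch of a call; the camera parameters below are made up purely for illustration and are all given in meters:

    from lfcnn.utils.data_utils import depth_from_disp

    # Hypothetical camera parameters, all in meters
    f = 50e-3          # main lens focal length
    g = 1.5            # focus distance
    r = 60e-6          # microlens radius
    R = 12.5e-3        # main lens radius
    num_ang_steps = 9  # subapertures per dimension

    # Convert a disparity of 0.5 pixels to a depth in meters
    depth = depth_from_disp(0.5, f, g, r, R, num_ang_steps)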

lfcnn.utils.data_utils.depth_from_distance(dist, r, b, offset=None)[source]

Calculate depth from distance values, i.e. given the norm of a ray, distance = |(x, y, z)|, calculate the depth z.

Parameters
  • dist – Distance map of shape (s, t, 1)

  • r – Microlens radius

  • b – Image distance

  • offset – Sensor offset. Can be used for light field subapertures that show an effective sensor center offset.

Returns
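
A minimal sketch with a synthetic distance map; the shape follows the parameter description above, while the numeric values are hypothetical:

    import numpy as np
    from lfcnn.utils.data_utils import depth_from_distance

    # Synthetic distance map of shape (s, t, 1), values in meters (made up)
    dist = np.full((32, 32, 1), 1.2)

    r = 60e-6   # microlens radius in meters (hypothetical)
    b = 55e-3   # image distance in meters (hypothetical)

    depth = depth_from_distance(dist, r, b)  # offset defaults to None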

lfcnn.utils.data_utils.depth_from_distance_lf(lf_dist, r, R, b, g)[source]
lfcnn.utils.data_utils.disp_from_depth(depth, f, g, r, R, numAngSteps)[source]

Calculate the disparity from depth given the camera parameters.

Note that all given camera parameters need to be in the same SI units, e.g. all in meters.

Inverse of depth_from_disp.

Parameters
  • depth (Union[float, ndarray]) – Depth to convert.

  • f (float) – Main lens focal length

  • g (float) – Focus distance (distance of the focal plane)

  • r (float) – Micro lens radius

  • R (float) – Main lens radius

  • numAngSteps – Angular resolution: the number of subapertures per dimension (number of pixels underneath a microlens)

Returns

disparity (in pixels)
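
Since disp_from_depth and depth_from_disp are documented as inverses of each other, a quick round trip can serve as a sanity check. The camera parameters below are hypothetical:

    from lfcnn.utils.data_utils import depth_from_disp, disp_from_depth

    # Hypothetical camera parameters, all in meters
    f, g, r, R = 50e-3, 1.5, 60e-6, 12.5e-3
    num_ang_steps = 9

    depth = 2.0                                               # depth in meters
    disp = disp_from_depth(depth, f, g, r, R, num_ang_steps)  # disparity in pixels
    depth_back = depth_from_disp(disp, f, g, r, R, num_ang_steps)

    print(disp, depth_back)  # depth_back should be close to the original 2.0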

lfcnn.utils.raytracer_utils module

lfcnn.utils.tf_utils module

LFCNN TensorFlow utilities.

lfcnn.utils.tf_utils.disable_eager()[source]

Disables TF eager execution. By default in TF >= 2.0, eager execution is enabled.
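
A one-line sketch: call it once, before any graph-mode TensorFlow code is built:

    from lfcnn.utils.tf_utils import disable_eager

    disable_eager()  # switch TF back to graph mode (eager is the TF >= 2.0 default)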

lfcnn.utils.tf_utils.list_physical_devices(type=None)[source]
lfcnn.utils.tf_utils.list_visible_devices(type=None)[source]
lfcnn.utils.tf_utils.mixed_precision_graph_rewrite(opt, loss_scale='dynamic')[source]

Use a graph rewrite to enable mixed precision training. Use with care: the Keras API set_mixed_precision_keras() is the preferred method for mixed precision training.

See also: https://www.tensorflow.org/api_docs/python/tf/train/experimental/enable_mixed_precision_graph_rewrite

Parameters
  • opt (OptimizerV2) – Keras optimizer instance.

  • loss_scale (str) – Loss scale method. Default: ‘dynamic’.

Return type

OptimizerV2

Returns

Optimizer with mixed precision graph rewrite enabled.
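
A short sketch of wrapping a Keras optimizer; the optimizer choice is arbitrary, and the call assumes a TF version that still supports the mixed precision graph rewrite:

    import tensorflow as tf
    from lfcnn.utils.tf_utils import mixed_precision_graph_rewrite

    opt = tf.keras.optimizers.Adam(learning_rate=1e-3)

    # Wrap the optimizer so the graph rewrite applies dynamic loss scaling
    opt = mixed_precision_graph_rewrite(opt, loss_scale='dynamic')

    # opt can now be passed to model.compile(optimizer=opt, ...)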

lfcnn.utils.tf_utils.set_mixed_precision_keras(policy='mixed_float16', loss_scale='dynamic')[source]

Set up the Keras mixed precision API. Simply call this at the beginning of your script.

Parameters
  • policy (str) – Mixed precision dtype policy. Default: ‘mixed_float16’. See tf.keras.mixed_precision.experimental.Policy for available dtype policies.

  • loss_scale (Union[float, str]) – Loss scale value or loss scale method. Default: ‘dynamic’.
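
A minimal sketch of a training-script preamble using the documented defaults; the model construction afterwards is only indicated:

    from lfcnn.utils.tf_utils import set_mixed_precision_keras

    # Call once, before building the model, so the dtype policy applies everywhere
    set_mixed_precision_keras(policy='mixed_float16', loss_scale='dynamic')

    # ... build and compile the Keras model as usual afterwards ...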

lfcnn.utils.tf_utils.use_cpu()[source]

Set CPU as visible tf device. This effectively hides all available GPUs.

lfcnn.utils.tf_utils.use_gpu(index=None)[source]

Set GPU as visible tf device.

Parameters

index – Set index of visible GPU. If None, all GPUs are visible to TF.
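
A short sketch combining the device helpers on this page; which GPUs are actually present depends on the machine, and passing 'GPU' as the type argument assumes the usual TF device-type strings are accepted:

    from lfcnn.utils.tf_utils import list_physical_devices, use_cpu, use_gpu

    print(list_physical_devices('GPU'))  # inspect the GPUs TF can see

    use_gpu(0)   # restrict TF to the first GPU only
    # use_gpu()  # or make all GPUs visible again
    # use_cpu()  # or hide all GPUs and run on the CPU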

Module contents

LFCNN utilities.