lfcnn.losses package¶
Submodules¶
lfcnn.losses.combined_losses module¶

class lfcnn.losses.combined_losses.CenterLoss(mu=0, gamma=0, power_factors=(0.5, 0.3, 0.2), filter_size=3, k1=0.03, k2=0.09, reduction='sum_over_batch_size', name='center_loss')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Central view loss function based on the Huber loss with delta=1 and two regularization terms based on the multiscale structural similarity and the cosine proximity.
Note that the default values for the free parameters of SSIM/MS-SSIM deviate from the original papers. In particular, we choose k1=0.03, k2=0.09, three times larger than the original values, to improve numerical stability at small spatial resolutions. Furthermore, the filter size of the mean and standard deviation calculation is set to 3 instead of 11, since the compared images have a low spatial resolution. The MS-SSIM is calculated at 3 instead of 5 levels, since for a size of 32x32 only 3 meaningful downsampling operations are possible in combination with a filter width of 3.
If your output size differs from 32x32, these values need to be adapted!
Parameters
mu (float) – Regularization factor of the term based on the structural similarity.
gamma (float) – Regularization factor of the term based on cosine proximity.
power_factors (Tuple[float, ...]) – Scale power factors of the MS-SSIM regularizing term.
filter_size (int) – Filter size of the averaging filter used to calculate the SSIM at each scale.
k1 (float) – Constant for numerical stability of SSIM and MS-SSIM.
k2 (float) – Constant for numerical stability of SSIM and MS-SSIM.

class lfcnn.losses.combined_losses.DisparityLoss(mu=0, reduction='sum_over_batch_size', name='disparity_loss')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Disparity loss function based on the Huber loss with delta=1 and a total variation regularizer.
 Parameters
mu – Regularization factor. Default: No regularization.
lfcnn.losses.losses module¶

class lfcnn.losses.losses.CosineProximity(reduction='sum_over_batch_size', name='cosine_proximity')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Computes the cosine proximity (CP) of two tensors along the last axis.
If the last axis is a spectral axis, this measures the spectral similarity of two multispectral or hyperspectral tensors (resp. light fields).
Maximum similarity corresponds to a value of CP = 1.
` CP = cos(alpha) = <y_pred, y_true> / (||y_pred|| * ||y_true||) `
Initializes LossFunctionWrapper class.
 Parameters
fn – The loss function to wrap, with signature fn(y_true, y_pred, **kwargs).
reduction – (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial (https://www.tensorflow.org/tutorials/distribute/custom_training) for more details.
name – (Optional) name for the loss.
**kwargs – The keyword arguments that are passed on to fn.
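The cosine-proximity formula above can be sketched in plain NumPy. This is an illustration of the formula, not the library's TensorFlow implementation; the function name mirrors the documented one but is written here from scratch.

```python
import numpy as np

def cosine_proximity(y_true, y_pred, axis=-1):
    """Cosine of the angle between the tensors along `axis`;
    CP = 1 means identical direction (maximum similarity)."""
    num = np.sum(y_true * y_pred, axis=axis)
    den = np.linalg.norm(y_true, axis=axis) * np.linalg.norm(y_pred, axis=axis)
    return num / den

spec = np.array([0.2, 0.5, 0.3])
# Scaling a spectrum leaves its direction unchanged, so CP ≈ 1:
print(cosine_proximity(spec, 2.0 * spec))
```

Because CP only compares directions, it is insensitive to a global intensity scaling of the spectra — which is exactly why it is used as a spectral-similarity measure.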

class lfcnn.losses.losses.Dummy(reduction='sum_over_batch_size', name='dummy')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Dummy loss that does not compute anything. Can be used when benchmarking training time performance.

class lfcnn.losses.losses.Huber(delta=1.0, ver='lfcnn', reduction='sum_over_batch_size', name='huber_loss')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Computes the Huber loss between y_true and y_pred.
Given x = y_true - y_pred:
` loss = x^2 if |x| <= d loss = d^2 + 2 * d * (|x| - d) if |x| > d `
where d is delta. See: https://en.wikipedia.org/wiki/Huber_loss
Note that our definition deviates from the definition on Wikipedia and the one used in Keras by a factor of 2. This way, the Huber loss has the same scaling as the MSE. To achieve Keras-compatible behaviour, specify ver='keras'.
Parameters
delta – A float, the point where the Huber loss function changes from quadratic to linear.
ver – Optional version argument. If ver='keras', use the definition as used in Keras. Else, the Huber loss is scaled by a factor of two.
reduction – (Optional) Type of reduction to apply to loss.
name – Optional name for the object.
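A minimal NumPy sketch of the scaled variant described above, assuming the factor-two convention (so the loss matches the squared error for small residuals); this is an illustration, not the library's TF implementation.

```python
import numpy as np

def huber(y_true, y_pred, delta=1.0):
    """lfcnn-scaled Huber loss: twice the Keras definition, so the
    quadratic branch is exactly x^2 for |x| <= delta."""
    x = np.abs(y_true - y_pred)
    return np.mean(np.where(x <= delta,
                            x ** 2,
                            delta ** 2 + 2.0 * delta * (x - delta)))

# For small residuals the loss equals the squared error:
print(huber(np.array([0.0]), np.array([0.5])))  # 0.25
```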

lfcnn.losses.losses.MSE[source]¶
alias of lfcnn.losses.losses.MeanSquaredError

class lfcnn.losses.losses.MeanAbsoluteError(reduction='sum_over_batch_size', name='mean_absolute_error')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Computes the mean absolute error (MAE) between y_true and y_pred.

class lfcnn.losses.losses.MeanSquaredError(reduction='sum_over_batch_size', name='mean_square_error')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Computes the mean squared error (MSE) between y_true and y_pred.

class lfcnn.losses.losses.MultiScaleStructuralSimilarity(reduction='sum_over_batch_size', name='ms_ssim')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Computes the multiscale structural similarity index metric (MS-SSIM) between predicted and true tensor.

lfcnn.losses.losses.N_MS_SSIM[source]¶
alias of lfcnn.losses.losses.NormalizedMultiScaleStructuralSimilarity

class lfcnn.losses.losses.NormalizedCosineProximity(reduction='sum_over_batch_size', name='normalized_cosine_proximity')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Computes the normalized cosine proximity (NCP) of two tensors along the last axis. The NCP can directly be used for loss minimization.
If the last axis is a spectral axis, this measures the spectral similarity of two multispectral or hyperspectral tensors (resp. light fields).
Maximum similarity corresponds to a value of NCP = 0.
` NCP = 0.5 * (1.0 - cos(alpha)) `
where
` cos(alpha) = <y_pred, y_true> / (||y_pred|| * ||y_true||) `

class lfcnn.losses.losses.NormalizedMultiScaleStructuralSimilarity(reduction='sum_over_batch_size', name='n_ms_ssim')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Computes the normalized multiscale structural similarity index metric (N-MS-SSIM) between predicted and true tensor. Here,
` N-MS-SSIM = 0.5 * (1.0 - MS-SSIM(y_true, y_pred)) `
That is, the N-MS-SSIM lies in [0, 1] where 0 corresponds to maximal similarity.

class lfcnn.losses.losses.NormalizedStructuralSimilarity(reduction='sum_over_batch_size', name='n_ssim')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Computes the normalized structural similarity index metric (N-SSIM) between predicted and true tensor with
` N-SSIM = 0.5 * (1.0 - SSIM(y_true, y_pred)) `
That is, the N-SSIM lies in [0, 1] where 0 corresponds to maximal similarity.

class lfcnn.losses.losses.PseudoHuber(delta=1.0, ver='lfcnn', reduction='sum_over_batch_size', name='pseudo_huber_loss')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Computes the pseudo-Huber loss between y_true and y_pred. Given x = y_true - y_pred:
` loss = 2 * d^2 * (sqrt(1 + (x/d)^2) - 1) `
where d is delta. See: https://en.wikipedia.org/wiki/Huber_loss
Note that our definition deviates from the definition on Wikipedia and the one used in Keras by a factor of 2. This way, the pseudo-Huber loss has the same scaling as the MSE.
Parameters
delta – A float, the point where the pseudo-Huber loss function changes from quadratic to linear.
ver – Optional version argument. If ver='keras', use the definition as used in Keras. Else, the loss is scaled by a factor of two.
reduction – (Optional) Type of reduction to apply to loss.
name – Optional name for the object.
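A NumPy sketch of the formula above, again assuming the factor-two scaling (for small x, 2·d²·(sqrt(1 + (x/d)²) − 1) ≈ x², matching the MSE); this is an illustration, not the library's TF implementation.

```python
import numpy as np

def pseudo_huber(y_true, y_pred, delta=1.0):
    """Smooth approximation of the Huber loss (all derivatives exist),
    scaled by 2 so that it behaves like x^2 for small residuals."""
    x = y_true - y_pred
    return np.mean(2.0 * delta ** 2 * (np.sqrt(1.0 + (x / delta) ** 2) - 1.0))

# For a small residual the value is close to the squared error x^2 = 1e-4:
print(pseudo_huber(np.array([0.0]), np.array([0.01])))
```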

class lfcnn.losses.losses.SpectralInformationDivergence(reduction='sum_over_batch_size', name='sid')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Computes the spectral information divergence (SID) between predicted and true tensor. SID is basically a symmetrized Kullback-Leibler divergence if the pixel spectra are interpreted as a probability distribution.
Original Paper: Chein-I Chang, “An information-theoretic approach to spectral variability, similarity, and discrimination for hyperspectral image analysis,” in IEEE Transactions on Information Theory, vol. 46, no. 5, pp. 1927-1932, Aug. 2000.

class lfcnn.losses.losses.StructuralSimilarity(reduction='sum_over_batch_size', name='ssim')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Computes the structural similarity index metric (SSIM) between predicted and true tensor.

class lfcnn.losses.losses.TotalVariation(reduction='sum_over_batch_size', name='total_variation')[source]¶
Bases: tensorflow.python.keras.losses.LossFunctionWrapper
Computes the total variation of a predicted tensor.
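A NumPy sketch of the anisotropic total variation of a single 2-D image — the sum of absolute differences between neighbouring pixels along both spatial axes. The library's version may differ in batching and reduction details; this only illustrates the measure itself.

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation of a 2-D array."""
    dy = np.abs(np.diff(img, axis=0)).sum()  # vertical neighbour differences
    dx = np.abs(np.diff(img, axis=1)).sum()  # horizontal neighbour differences
    return dy + dx

flat = np.ones((4, 4))
print(total_variation(flat))  # 0.0 -- a constant image has no variation
```

As a regularizer (e.g. in DisparityLoss above), a small TV term encourages piecewise-smooth predictions while still allowing sharp edges.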

lfcnn.losses.losses.bad_pix(y_true, y_pred, val)[source]¶
Calculate the percentage of pixels that deviate by more than val from the true value.
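The bad-pixel measure can be sketched in a couple of NumPy lines; this is an illustration of the documented behaviour, not the library's implementation.

```python
import numpy as np

def bad_pix(y_true, y_pred, val):
    """Percentage of pixels whose absolute error exceeds `val`."""
    return 100.0 * np.mean(np.abs(y_true - y_pred) > val)

y_true = np.zeros(4)
y_pred = np.array([0.0, 0.02, 0.05, 0.2])
print(bad_pix(y_true, y_pred, 0.03))  # 50.0 -- two of the four pixels exceed 0.03
```

The bad_pix_01/03/07 variants below simply fix val to 0.01, 0.03, and 0.07, the thresholds commonly reported for disparity benchmarks.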

lfcnn.losses.losses.bad_pix_01(y_true, y_pred)[source]¶
Calculate the percentage of pixels that deviate by more than 0.01 from the true value.

lfcnn.losses.losses.bad_pix_03(y_true, y_pred)[source]¶
Calculate the percentage of pixels that deviate by more than 0.03 from the true value.

lfcnn.losses.losses.bad_pix_07(y_true, y_pred)[source]¶
Calculate the percentage of pixels that deviate by more than 0.07 from the true value.

lfcnn.losses.losses.cosine_proximity(y_true, y_pred, axis=-1)[source]¶
Calculate the cosine proximity between true and predicted tensor.

lfcnn.losses.losses.dummy(y_true, y_pred)[source]¶
Dummy loss not performing any calculation. Always returns 1.

lfcnn.losses.losses.huber_loss(y_true, y_pred, delta=1.0, ver='lfcnn')[source]¶
Calculate the Huber loss between true and predicted tensor.

lfcnn.losses.losses.mae(y_true, y_pred)[source]¶
Calculate the mean absolute error between true and predicted tensor.

lfcnn.losses.losses.mae_clipped(y_true, y_pred, max_val=1.0)[source]¶
Calculates the mean absolute error (MAE) of y_pred with respect to y_true, but clips y_pred at max_val before the MAE calculation.

lfcnn.losses.losses.mean_absolute_error(y_true, y_pred)[source]¶
Calculate the mean absolute error between true and predicted tensor.

lfcnn.losses.losses.mean_squared_error(y_true, y_pred)[source]¶
Calculate the mean squared error between true and predicted tensor.

lfcnn.losses.losses.ms_ssim(y_true, y_pred, max_val=1.0, k1=0.01, k2=0.03, **kwargs)[source]¶
Calculate the multiscale structural similarity (MS-SSIM) between true and predicted tensor.

lfcnn.losses.losses.mse(y_true, y_pred)[source]¶
Calculate the mean squared error between true and predicted tensor.

lfcnn.losses.losses.mse_clipped(y_true, y_pred, max_val=1.0)[source]¶
Calculates the mean squared error (MSE) of y_pred with respect to y_true, but clips y_pred at max_val before the MSE calculation.

lfcnn.losses.losses.multiscale_structural_similarity(y_true, y_pred, max_val=1.0, k1=0.01, k2=0.03, **kwargs)[source]¶
Calculate the multiscale structural similarity (MS-SSIM) between true and predicted tensor.

lfcnn.losses.losses.n_ms_ssim(y_true, y_pred, **kwargs)[source]¶
Calculates a normalized multiscale structural similarity (N-MS-SSIM)
` 0.5 * (1 - MS-SSIM(y_true, y_pred)) `
That is, the N-MS-SSIM lies in [0, 1] where 0 corresponds to maximal similarity.

lfcnn.losses.losses.n_ssim(y_true, y_pred, **kwargs)[source]¶
Calculates a normalized structural similarity (N-SSIM)
` 0.5 * (1 - SSIM(y_true, y_pred)) `
That is, the N-SSIM lies in [0, 1] where 0 corresponds to maximal similarity.

lfcnn.losses.losses.normalized_cosine_proximity(y_true, y_pred, axis=-1)[source]¶
Calculates a normalized cosine proximity
` 0.5 * (1 - cos(alpha)) `
where
` cos(alpha) = <y_true, y_pred> / (||y_true|| * ||y_pred||) `
and the scalar product is taken along the specified axis. That is, the normalized cosine proximity lies in [0, 1] where 0 corresponds to maximal similarity.

lfcnn.losses.losses.normalized_multiscale_structural_similarity(y_true, y_pred, **kwargs)[source]¶
Calculates a normalized multiscale structural similarity (N-MS-SSIM)
` 0.5 * (1 - MS-SSIM(y_true, y_pred)) `
That is, the N-MS-SSIM lies in [0, 1] where 0 corresponds to maximal similarity.

lfcnn.losses.losses.normalized_structural_similarity(y_true, y_pred, **kwargs)[source]¶
Calculates a normalized structural similarity (N-SSIM)
` 0.5 * (1 - SSIM(y_true, y_pred)) `
That is, the N-SSIM lies in [0, 1] where 0 corresponds to maximal similarity.

lfcnn.losses.losses.pseudo_huber_loss(y_true, y_pred, delta=1.0, ver='lfcnn')[source]¶
Calculates the pseudo-Huber loss between y_true and y_pred. The pseudo-Huber loss function is a smooth approximation of the Huber loss, i.e. all derivatives exist and are continuous.
Parameters
delta – The point where the pseudo-Huber loss function changes from quadratic to linear behaviour.

lfcnn.losses.losses.psnr(y_true, y_pred, max_val=1.0)[source]¶
Calculates the peak signal-to-noise ratio (PSNR) in dB of y_pred with respect to y_true.
Parameters
y_true – True image.
y_pred – Predicted image.
max_val – Dynamic range of the image. For float images: 1, for uint8: 255, etc.
Returns
PSNR value in decibels.
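PSNR follows directly from the MSE and the dynamic range: PSNR = 10 · log10(max_val² / MSE). A NumPy sketch (illustration only, not the library's implementation):

```python
import numpy as np

def psnr(y_true, y_pred, max_val=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((y_true - y_pred) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

y_true = np.zeros((8, 8))
y_pred = np.full((8, 8), 0.1)  # constant error of 0.1 -> MSE = 0.01
print(psnr(y_true, y_pred))    # ≈ 20 dB
```

Note that the PSNR diverges as the MSE approaches zero, which is why identical images have no finite PSNR.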

lfcnn.losses.losses.psnr_clipped(y_true, y_pred, max_val=1.0)[source]¶
Calculates the peak signal-to-noise ratio (PSNR) of y_pred with respect to y_true, but clips y_pred at max_val before the PSNR calculation.

lfcnn.losses.losses.sid(y_true, y_pred, k=0)[source]¶
Calculate the mean spectral information divergence (SID), which is basically a symmetrized Kullback-Leibler divergence if the pixel spectra are interpreted as a probability distribution.
Original Paper: Chein-I Chang, “An information-theoretic approach to spectral variability, similarity, and discrimination for hyperspectral image analysis,” in IEEE Transactions on Information Theory, vol. 46, no. 5, pp. 1927-1932, Aug. 2000.
Parameters
y_true – True tensor.
y_pred – Predicted tensor.
k – Factor for numerical stability.
Returns
Spectral Information Divergence of y_true and y_pred.
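A NumPy sketch of the symmetrized KL interpretation: each spectrum is normalized to sum to one and the two KL divergences are summed. The exact normalization and the way k enters may differ in the library; k is used here simply as a small stabilizer against zeros.

```python
import numpy as np

def sid(y_true, y_pred, k=1e-12):
    """Mean spectral information divergence: symmetrized KL divergence
    of spectra normalized to probability distributions along the last axis."""
    p = y_true / (np.sum(y_true, axis=-1, keepdims=True) + k) + k
    q = y_pred / (np.sum(y_pred, axis=-1, keepdims=True) + k) + k
    return np.mean(np.sum(p * np.log(p / q) + q * np.log(q / p), axis=-1))

s = np.array([0.2, 0.5, 0.3])
print(sid(s, s))  # 0.0 -- identical spectra have zero divergence
```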

lfcnn.losses.losses.spectral_information_divergence(y_true, y_pred, k=0)[source]¶
Calculate the mean spectral information divergence (SID), which is basically a symmetrized Kullback-Leibler divergence if the pixel spectra are interpreted as a probability distribution.
Original Paper: Chein-I Chang, “An information-theoretic approach to spectral variability, similarity, and discrimination for hyperspectral image analysis,” in IEEE Transactions on Information Theory, vol. 46, no. 5, pp. 1927-1932, Aug. 2000.
Parameters
y_true – True tensor.
y_pred – Predicted tensor.
k – Factor for numerical stability.
Returns
Spectral Information Divergence of y_true and y_pred.

lfcnn.losses.losses.ssim(y_true, y_pred, max_val=1.0, k1=0.01, k2=0.03, **kwargs)[source]¶
Calculate the structural similarity (SSIM) between true and predicted tensor.
(y_true, y_pred, max_val=1.0, k1=0.01, k2=0.03, **kwargs)[source]¶ Calculate the structural Similarity (SSIM) between true and predicted tensor.