accelbrainbase.computableloss._mxnet package

Submodules

accelbrainbase.computableloss._mxnet.discriminator_loss module

class accelbrainbase.computableloss._mxnet.discriminator_loss.DiscriminatorLoss(weight=None, batch_axis=0, **kwargs)

Bases: mxnet.gluon.loss.Loss, accelbrainbase.computable_loss.ComputableLoss

Loss function of discriminators in Generative Adversarial Networks (GANs).

References

  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., … & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672-2680).
compute(true_posterior_arr, generated_posterior_arr)

Compute loss.

Parameters:
  • true_posterior_arr – Real samples.
  • generated_posterior_arr – Generated samples.
Returns:

Tensor of losses.

hybrid_forward(F, true_posterior_arr, generated_posterior_arr, sample_weight=None)

Forward propagation, computing losses.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • true_posterior_arr – mxnet.ndarray or mxnet.symbol of the true posterior inferred by the discriminator.
  • generated_posterior_arr – mxnet.ndarray or mxnet.symbol of the fake posterior inferred from the generator's samples.
Returns:

mxnet.ndarray or mxnet.symbol of loss.
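As a sanity check of what this loss computes, the standard GAN discriminator objective from Goodfellow et al. (2014) can be sketched in plain NumPy. The helper below is hypothetical and only illustrates the math; the class itself operates on mxnet.ndarray or mxnet.symbol, and its exact reduction may differ.

```python
import numpy as np

def discriminator_loss(true_posterior, generated_posterior, eps=1e-12):
    """Sketch of the GAN discriminator objective (Goodfellow et al., 2014):
    maximize log D(x) + log(1 - D(G(z))), i.e. minimize its negation.
    Hypothetical NumPy helper; not the DiscriminatorLoss implementation."""
    true_posterior = np.clip(true_posterior, eps, 1.0)
    generated_posterior = np.clip(generated_posterior, eps, 1.0 - eps)
    return -np.mean(np.log(true_posterior) + np.log(1.0 - generated_posterior))

# A confident discriminator (D(x) near 1, D(G(z)) near 0) has low loss;
# an undecided one (both posteriors near 0.5) has higher loss.
good = discriminator_loss(np.array([0.9, 0.95]), np.array([0.05, 0.1]))
bad = discriminator_loss(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```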

accelbrainbase.computableloss._mxnet.drcn_loss module

class accelbrainbase.computableloss._mxnet.drcn_loss.DRCNLoss(axis=-1, sparse_label=False, rc_lambda=0.75, from_logits=False, log_softmax_flag=True, weight=1.0, classification_weight=None, reconstruction_weight=None, grad_clip_threshold=0.0, batch_axis=0, **kwargs)

Bases: mxnet.gluon.loss.Loss, accelbrainbase.computable_loss.ComputableLoss

Loss function of Deep Reconstruction-Classification Networks.

Deep Reconstruction-Classification Network (DRCN) is a convolutional network that jointly learns two tasks:

  1. supervised source label prediction.
  2. unsupervised target data reconstruction.

Ideally, a discriminative representation should model both the label and the structure of the data. Based on that intuition, Ghifary, M., et al. (2016) hypothesize that a domain-adaptive representation should satisfy two criteria:

  1. classify the labeled source-domain data well.
  2. reconstruct the unlabeled target-domain data well, which can be viewed as an approximation of the ideal discriminative representation.

The encoding parameters of the DRCN are shared across both tasks, while the decoding parameters are separated. The aim is that the learned label prediction function can perform well on classifying images in the target domain; the data reconstruction can thus be viewed as an auxiliary task that supports the adaptation of the label prediction.

References

  • Ghifary, M., Kleijn, W. B., Zhang, M., Balduzzi, D., & Li, W. (2016, October). Deep reconstruction-classification networks for unsupervised domain adaptation. In European Conference on Computer Vision (pp. 597-613). Springer, Cham.
compute(decoded_arr, pred_arr, observed_arr, label_arr)

Compute loss.

Parameters:
  • decoded_arr – mxnet.ndarray or mxnet.symbol of decoded feature points.
  • pred_arr – mxnet.ndarray or mxnet.symbol of inferred labeled feature points.
  • observed_arr – mxnet.ndarray or mxnet.symbol of observed data points.
  • label_arr – mxnet.ndarray or mxnet.symbol of label data.
Returns:

Tensor of losses.

hybrid_forward(F, decoded_arr, pred_arr, observed_arr, label_arr, sample_weight=None)

Forward propagation, computing losses.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • decoded_arr – mxnet.ndarray or mxnet.symbol of decoded feature points.
  • pred_arr – mxnet.ndarray or mxnet.symbol of inferred labeled feature points.
  • observed_arr – mxnet.ndarray or mxnet.symbol of observed data points.
  • label_arr – mxnet.ndarray or mxnet.symbol of label data.
  • sample_weight – element-wise weighting tensor. Must be broadcastable to the same shape as label. For example, if label has shape (64, 10) and you want to weigh each sample in the batch separately, sample_weight should have shape (64, 1).
Returns:

mxnet.ndarray or mxnet.symbol of loss.
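The two-task structure described above can be sketched numerically. The helper below is a hypothetical NumPy illustration that combines a softmax cross-entropy classification term with an rc_lambda-weighted mean-squared reconstruction term; the exact weighting, reduction, and gradient clipping used by DRCNLoss may differ.

```python
import numpy as np

def drcn_loss(decoded, pred_logits, observed, labels, rc_lambda=0.75):
    """Sketch of a joint DRCN-style objective: supervised source label
    prediction plus unsupervised target data reconstruction.
    Hypothetical NumPy helper; not the DRCNLoss implementation."""
    # supervised classification term: softmax cross-entropy on source labels
    shifted = pred_logits - pred_logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -np.mean(log_probs[np.arange(len(labels)), labels])
    # unsupervised reconstruction term: mean squared error on target data
    rec = np.mean((decoded - observed) ** 2)
    return ce + rc_lambda * rec

rng = np.random.default_rng(0)
decoded = rng.normal(size=(4, 8))      # decoder output
observed = rng.normal(size=(4, 8))     # target-domain data
logits = rng.normal(size=(4, 3))       # classifier scores
labels = np.array([0, 1, 2, 0])        # source-domain labels
joint = drcn_loss(decoded, logits, observed, labels)
ce_only = drcn_loss(decoded, logits, observed, labels, rc_lambda=0.0)
```

Setting rc_lambda to 0 drops the auxiliary reconstruction task, leaving only the classification term.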

accelbrainbase.computableloss._mxnet.generator_loss module

class accelbrainbase.computableloss._mxnet.generator_loss.GeneratorLoss(weight=None, batch_axis=0, **kwargs)

Bases: mxnet.gluon.loss.Loss

Loss function of generators in Generative Adversarial Networks (GANs).

References

  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., … & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672-2680).
compute(generated_posterior_arr)

Compute loss.

Parameters:
  • generated_posterior_arr – Generated samples.
Returns:

Tensor of losses.
hybrid_forward(F, generated_posterior_arr, sample_weight=None)

Forward propagation, computing losses.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • generated_posterior_arr – mxnet.ndarray or mxnet.symbol of the fake posterior inferred from the generator's samples.
Returns:

mxnet.ndarray or mxnet.symbol of loss.
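The generator side of the same objective can be sketched analogously. The helper below is a hypothetical NumPy illustration that assumes the non-saturating form -log D(G(z)) recommended in Goodfellow et al. (2014); whether GeneratorLoss uses this or the original log(1 - D(G(z))) form is not stated here.

```python
import numpy as np

def generator_loss(generated_posterior, eps=1e-12):
    """Sketch of the non-saturating GAN generator objective: -log D(G(z)).
    Hypothetical NumPy helper; not the GeneratorLoss implementation."""
    return -np.mean(np.log(np.clip(generated_posterior, eps, 1.0)))

# The generator's loss is low when it fools the discriminator
# (D(G(z)) near 1) and high when its samples are rejected.
fooled = generator_loss(np.array([0.9, 0.8]))
rejected = generator_loss(np.array([0.1, 0.2]))
```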

accelbrainbase.computableloss._mxnet.l2_norm_loss module

class accelbrainbase.computableloss._mxnet.l2_norm_loss.L2NormLoss(weight=1.0, batch_axis=0, **kwargs)

Bases: mxnet.gluon.loss.Loss, accelbrainbase.computable_loss.ComputableLoss

Loss function that computes the L2 norm.

compute(pred_arr, real_arr)

Compute loss.

Parameters:
  • pred_arr – Inferenced results.
  • real_arr – Real results.
Returns:

Tensor of losses.

hybrid_forward(F, orign_arr, dest_arr, sample_weight=None)

Forward propagation, computing L2 norm.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • orign_arr – mxnet.ndarray or mxnet.symbol of origins.
  • dest_arr – mxnet.ndarray or mxnet.symbol of destinations.
Returns:

mxnet.ndarray or mxnet.symbol of loss.
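A minimal NumPy sketch of an L2-norm loss follows, assuming a mean squared-error reduction over all elements; whether L2NormLoss averages or sums along batch_axis is an assumption, so the helper is hypothetical rather than the class's implementation.

```python
import numpy as np

def l2_norm_loss(pred, real, weight=1.0):
    """Sketch of a weighted L2-norm (squared-error) loss between
    inferred results and real results, averaged over all elements."""
    return weight * np.mean((pred - real) ** 2)

a = np.array([1.0, 2.0, 3.0])
zero = l2_norm_loss(a, a)        # identical inputs: zero loss
shifted = l2_norm_loss(a, a + 1.0)
```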

Module contents