accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutionalautoencoder package

Submodules

accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutionalautoencoder.contractive_cae module

class accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutionalautoencoder.contractive_cae.ContractiveCAE(encoder, decoder, computable_loss, initializer=None, learning_rate=1e-05, learning_attenuate_rate=1.0, attenuate_epoch=50, hidden_units_list=[], output_nn=None, hidden_dropout_rate_list=[], optimizer_name='SGD', hidden_activation_list=[], hidden_batch_norm_list=[], ctx=gpu(0), hybridize_flag=True, regularizatable_data_list=[], scale=1.0, tied_weights_flag=True, init_deferred_flag=None, wd=None, **kwargs)

Bases: accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutional_auto_encoder.ConvolutionalAutoEncoder

Convolutional Contractive Auto-Encoder.

A stack of Convolutional Auto-Encoders (Masci, J., et al., 2011) forms a convolutional neural network (CNN), which is among the most successful models for supervised image classification. Each Convolutional Auto-Encoder is trained using conventional online gradient descent without additional regularization terms.

In this library, the Convolutional Auto-Encoder is also based on the Encoder/Decoder scheme: the encoder is to the decoder what the Convolution is to the Deconvolution. Deconvolutions, also called transposed convolutions, “work by swapping the forward and backward passes of a convolution.” (Dumoulin, V., & Visin, F. 2016, p. 20)
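
As a minimal, library-agnostic illustration of this correspondence (a sketch with hypothetical hyperparameters, not this class's internal layers), a Gluon Conv2DTranspose with the same kernel size, stride, and padding as a Conv2D restores the spatial shape that the convolution reduced:

    import mxnet as mx
    from mxnet.gluon import nn

    # Toy encoder/decoder pair: a strided convolution halves the spatial size,
    # and a transposed convolution with matching hyperparameters restores it.
    conv = nn.Conv2D(channels=16, kernel_size=4, strides=2, padding=1)
    deconv = nn.Conv2DTranspose(channels=1, kernel_size=4, strides=2, padding=1)
    conv.initialize()
    deconv.initialize()

    x = mx.nd.random.normal(shape=(8, 1, 28, 28))  # batch of 28x28 single-channel images
    h = conv(x)                                    # shape: (8, 16, 14, 14)
    x_rec = deconv(h)                              # shape: (8, 1, 28, 28)
    print(h.shape, x_rec.shape)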

The First-Order Contractive Auto-Encoder (Rifai, S., et al., 2011) executes representation learning by adding a penalty term to the classical reconstruction cost function. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input, and results in a localized space contraction which in turn yields robust features on the activation layer.

Analogously, the Contractive Convolutional Auto-Encoder calculates the same penalty term, but differs in that the deconvolution operation intervenes instead of the inner product.
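
For intuition only, the penalty can be sketched in the classical dense-encoder case (this is not the deconvolution-based computation used by this class): for a sigmoid encoder h = sigmoid(Wx + b), the squared Frobenius norm of the Jacobian reduces to a sum over hidden units of (h_j(1 - h_j))^2 times the squared norm of the corresponding weight row.

    import numpy as np

    # Hypothetical dense encoder h = sigmoid(W x + b); the contractive penalty is
    # the squared Frobenius norm of the Jacobian dh/dx (Rifai et al., 2011).
    rng = np.random.default_rng(0)
    W = rng.normal(size=(32, 64))              # 64 inputs -> 32 hidden units
    b = np.zeros(32)
    x = rng.normal(size=64)

    h = 1.0 / (1.0 + np.exp(-(W @ x + b)))     # encoder activations
    jacobian = (h * (1.0 - h))[:, None] * W    # J[j, i] = dh_j / dx_i
    penalty = np.sum(jacobian ** 2)            # ||J_f(x)||_F^2

    # Closed form for the sigmoid case: sum_j (h_j(1 - h_j))^2 * sum_i W_ji^2
    assert np.isclose(penalty, np.sum((h * (1.0 - h)) ** 2 * np.sum(W ** 2, axis=1)))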

References

  • Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
  • Kamyshanska, H., & Memisevic, R. (2014). The potential energy of an autoencoder. IEEE transactions on pattern analysis and machine intelligence, 37(6), 1261-1273.
  • Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.
  • Rifai, S., Vincent, P., Muller, X., Glorot, X., & Bengio, Y. (2011, June). Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on International Conference on Machine Learning (pp. 833-840). Omnipress.
  • Rifai, S., Mesnil, G., Vincent, P., Muller, X., Bengio, Y., Dauphin, Y., & Glorot, X. (2011, September). Higher order contractive auto-encoder. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 645-660). Springer, Berlin, Heidelberg.
forward_propagation(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F: mxnet.ndarray or mxnet.symbol.
  • x: mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.
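
The F argument follows Gluon's hybrid programming convention: it is the mxnet.ndarray module in imperative mode and becomes mxnet.symbol after hybridize(). The following generic HybridBlock (an illustration, not this class) shows the pattern:

    import mxnet as mx
    from mxnet.gluon import nn

    class TinyHybrid(nn.HybridBlock):
        """Illustrative block only: F is mx.nd imperatively, mx.sym after hybridize()."""
        def __init__(self, **kwargs):
            super(TinyHybrid, self).__init__(**kwargs)
            with self.name_scope():
                self.dense = nn.Dense(4)

        def hybrid_forward(self, F, x):
            # F.relu resolves to mx.nd.relu or mx.sym.relu depending on the mode.
            return F.relu(self.dense(x))

    net = TinyHybrid()
    net.initialize()
    net.hybridize()                    # subsequent calls pass F = mxnet.symbol
    y = net(mx.nd.ones((2, 8)))
    print(y.shape)                     # (2, 4)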

get_penalty_lambda()

getter for lambda.

penalty_lambda

getter for lambda.

set_penalty_lambda(value)

setter for lambda.

accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutionalautoencoder.convolutional_ladder_networks module

class accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutionalautoencoder.convolutional_ladder_networks.ConvolutionalLadderNetworks(encoder, decoder, computable_loss, initializer=None, learning_rate=1e-05, learning_attenuate_rate=1.0, attenuate_epoch=50, hidden_units_list=[], output_nn=None, hidden_dropout_rate_list=[], optimizer_name='SGD', hidden_activation_list=[], hidden_batch_norm_list=[], ctx=gpu(0), hybridize_flag=True, regularizatable_data_list=[], scale=1.0, tied_weights_flag=True, init_deferred_flag=None, wd=None, **kwargs)

Bases: accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutional_auto_encoder.ConvolutionalAutoEncoder

Ladder Networks with a Stacked Convolutional Auto-Encoder.

In most classification problems, finding and producing labels for the samples is hard. In many cases plenty of unlabeled data exist, and it seems obvious that using them should improve the results. For instance, there are plenty of unlabeled images available, and in most image classification tasks there are vastly more bits of information in the statistical structure of input images than in their labels.

It is argued here that the reason why unsupervised learning has not been able to improve results is that most current versions are incompatible with supervised learning. The problem is that many unsupervised learning methods try to represent as much information about the original data as possible, whereas supervised learning tries to filter out all the information which is irrelevant for the task at hand.

The ladder network is an Auto-Encoder which can discard information. Unsupervised learning needs to tolerate discarding information in order to work well with supervised learning. Many unsupervised learning methods are not good at this, but one class of models stands out as an exception: hierarchical latent variable models. Unfortunately their derivation can be quite complicated and often involves approximations which compromise their performance.

A simpler alternative is offered by Auto-Encoders, which also have the benefit of being compatible with standard supervised feedforward networks. They would be a promising candidate for combining supervised and unsupervised learning, but unfortunately Auto-Encoders normally correspond to latent variable models with a single layer of stochastic variables; that is, they do not tolerate discarding information.

The ladder network makes it possible to solve that problem by setting up a recursive derivation of the learning rule with a distributed cost function, building denoising Auto-Encoders recursively. Normally denoising Auto-Encoders have a fixed input, but here the cost functions on the higher layers can influence their input mappings, and this creates a bias towards PCA-type solutions.
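
A rough sketch of such a distributed cost (the helper below is hypothetical and library-agnostic, not this class's API) sums per-layer denoising reconstruction costs between the clean encoder activations and the decoder's denoised estimates, each weighted by a layer-specific coefficient:

    import mxnet as mx

    def ladder_denoising_cost(clean_z_list, denoised_z_list, lambda_list):
        # Hypothetical helper: sum_l lambda_l * mean((z_hat_l - z_l)^2),
        # following the layer-wise cost of Rasmus et al. (2015).
        cost = 0.0
        for z, z_hat, lam in zip(clean_z_list, denoised_z_list, lambda_list):
            cost = cost + lam * mx.nd.mean((z_hat - z) ** 2)
        return cost

    # Toy activations for a two-layer ladder.
    z_list = [mx.nd.random.normal(shape=(8, 16)), mx.nd.random.normal(shape=(8, 4))]
    z_hat_list = [z + 0.1 * mx.nd.random.normal(shape=z.shape) for z in z_list]
    print(ladder_denoising_cost(z_list, z_hat_list, lambda_list=[1.0, 0.1]))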

References

  • Bengio, Y., Lamblin, P., Popovici, D., & Larochelle, H. (2007). Greedy layer-wise training of deep networks. In Advances in neural information processing systems (pp. 153-160).
  • Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
  • Erhan, D., Bengio, Y., Courville, A., Manzagol, P. A., Vincent, P., & Bengio, S. (2010). Why does unsupervised pre-training help deep learning?. Journal of Machine Learning Research, 11(Feb), 625-660.
  • Erhan, D., Courville, A., & Bengio, Y. (2010). Understanding representations learned in deep architectures. Département d'Informatique et Recherche Opérationnelle, University of Montreal, QC, Canada, Tech. Rep, 1355, 1.
  • Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning (adaptive computation and machine learning series). Adaptive Computation and Machine Learning series, 800.
  • Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.
  • Rasmus, A., Berglund, M., Honkala, M., Valpola, H., & Raiko, T. (2015). Semi-supervised learning with ladder networks. In Advances in neural information processing systems (pp. 3546-3554).
  • Valpola, H. (2015). From neural PCA to deep unsupervised learning. In Advances in Independent Component Analysis and Learning Machines (pp. 143-171). Academic Press.
alpha

getter

alpha_decoder_loss_list

getter

alpha_decoder_test_loss_list

getter

alpha_encoder_loss_list

getter

alpha_encoder_test_loss_list

getter

alpha_loss_arr

getter

Returns:

Logs of alpha losses. The shape is:
  • encoder’s train losses of alpha.
  • encoder’s test losses of alpha.
  • decoder’s train losses of alpha.
  • decoder’s test losses of alpha.

forward_ladder_net(F, x, noised_x)
forward_propagation(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F: mxnet.ndarray or mxnet.symbol.
  • x: mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

get_alpha()

getter

get_alpha_decoder_loss_list()

getter

get_alpha_decoder_test_loss_list()

getter

get_alpha_encoder_loss_list()

getter

get_alpha_encoder_test_loss_list()

getter

get_alpha_loss_arr()

getter

Returns:

Logs of alpha losses. The shape is:
  • encoder’s train losses of alpha.
  • encoder’s test losses of alpha.
  • decoder’s train losses of alpha.
  • decoder’s test losses of alpha.

get_mu()

getter

get_mu_decoder_loss_list()

getter

get_mu_decoder_test_loss_list()

getter

get_mu_encoder_loss_list()

getter

get_mu_encoder_test_loss_list()

getter

get_mu_loss_arr()

getter

Returns:

Logs of mu losses. The shape is:
  • encoder’s train losses of mu.
  • encoder’s test losses of mu.
  • decoder’s train losses of mu.
  • decoder’s test losses of mu.

get_noiseable_data()

getter

get_recoding_ld_loss()

getter

get_sigma()

getter

get_sigma_decoder_loss_list()

getter

get_sigma_decoder_test_loss_list()

getter

get_sigma_encoder_loss_list()

getter

get_sigma_encoder_test_loss_list()

getter

get_sigma_loss_arr()

getter

Returns:

Logs of sigma losses. The shape is:
  • encoder’s train losses of sigma.
  • encoder’s test losses of sigma.
  • decoder’s train losses of sigma.
  • decoder’s test losses of sigma.

mu

getter

mu_decoder_loss_list

getter

mu_decoder_test_loss_list

getter

mu_encoder_loss_list

getter

mu_encoder_test_loss_list

getter

mu_loss_arr

getter

Returns:

Logs of mu losses. The shape is:
  • encoder’s train losses of mu.
  • encoder’s test losses of mu.
  • decoder’s train losses of mu.
  • decoder’s test losses of mu.

noiseable_data

getter

recoding_ld_loss

getter

set_alpha(value)

setter

set_alpha_decoder_loss_list(value)

setter

set_alpha_decoder_test_loss_list(value)

setter

set_alpha_encoder_loss_list(value)

setter

set_alpha_encoder_test_loss_list(value)

setter

set_mu(value)

setter

set_mu_decoder_loss_list(value)

setter

set_mu_decoder_test_loss_list(value)

setter

set_mu_encoder_loss_list(value)

setter

set_mu_encoder_test_loss_list(value)

setter

set_noiseable_data(value)

setter

set_readonly(value)

setter

set_recoding_ld_loss(value)

setter

set_sigma(value)

setter

set_sigma_decoder_loss_list(value)

setter

set_sigma_decoder_test_loss_list(value)

setter

set_sigma_encoder_loss_list(value)

setter

set_sigma_encoder_test_loss_list(value)

setter

sigma

getter

sigma_decoder_loss_list

getter

sigma_decoder_test_loss_list

getter

sigma_encoder_loss_list

getter

sigma_encoder_test_loss_list

getter

sigma_loss_arr

getter

Returns:

Logs of sigma losses. The shape is:
  • encoder’s train losses of sigma.
  • encoder’s test losses of sigma.
  • decoder’s train losses of sigma.
  • decoder’s test losses of sigma.

accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutionalautoencoder.repelling_cae module

class accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutionalautoencoder.repelling_cae.RepellingCAE(encoder, decoder, computable_loss, initializer=None, learning_rate=1e-05, learning_attenuate_rate=1.0, attenuate_epoch=50, hidden_units_list=[], output_nn=None, hidden_dropout_rate_list=[], optimizer_name='SGD', hidden_activation_list=[], hidden_batch_norm_list=[], ctx=gpu(0), hybridize_flag=True, regularizatable_data_list=[], scale=1.0, tied_weights_flag=True, init_deferred_flag=None, wd=None, **kwargs)

Bases: accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutional_auto_encoder.ConvolutionalAutoEncoder

Repelling Convolutional Auto-Encoder.

A stack of Convolutional Auto-Encoders (Masci, J., et al., 2011) forms a convolutional neural network (CNN), which is among the most successful models for supervised image classification. Each Convolutional Auto-Encoder is trained using conventional online gradient descent without additional regularization terms.

In this library, the Convolutional Auto-Encoder is also based on the Encoder/Decoder scheme: the encoder is to the decoder what the Convolution is to the Deconvolution. Deconvolutions, also called transposed convolutions, “work by swapping the forward and backward passes of a convolution.” (Dumoulin, V., & Visin, F. 2016, p. 20)

This Convolutional Auto-Encoder calculates the Repelling regularizer (Zhao, J., et al., 2016) as a penalty term.
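
For reference, the Repelling regularizer (the pulling-away term of Zhao et al., 2016) penalizes pairwise cosine similarity between the feature points within a mini-batch. The function below is an illustrative sketch, not necessarily this class's exact tensor layout:

    import mxnet as mx

    def repelling_regularizer(s):
        # Pulling-away term: PT(S) = (1 / (N(N - 1))) * sum_{i != j} cos(s_i, s_j)^2
        # over a mini-batch of flattened feature points s with shape (N, dim).
        n = s.shape[0]
        norm = mx.nd.sqrt(mx.nd.sum(s ** 2, axis=1, keepdims=True)) + 1e-08
        s_unit = s / norm
        cos = mx.nd.dot(s_unit, s_unit, transpose_b=True)  # (N, N) cosine similarities
        return (mx.nd.sum(cos ** 2) - n) / (n * (n - 1))   # exclude the diagonal i == j

    feature_points = mx.nd.random.normal(shape=(16, 128))
    print(repelling_regularizer(feature_points))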

References

  • Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
  • Kamyshanska, H., & Memisevic, R. (2014). The potential energy of an autoencoder. IEEE transactions on pattern analysis and machine intelligence, 37(6), 1261-1273.
  • Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.
  • Zhao, J., Mathieu, M., & LeCun, Y. (2016). Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126.
forward_propagation(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F: mxnet.ndarray or mxnet.symbol.
  • x: mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

get_penalty_lambda()

getter for lambda.

penalty_lambda

getter for lambda.

set_penalty_lambda(value)

setter for lambda.

Module contents