pygan.generativemodel.autoencodermodel package

Submodules

pygan.generativemodel.autoencodermodel.convolutional_auto_encoder module

class pygan.generativemodel.autoencodermodel.convolutional_auto_encoder.ConvolutionalAutoEncoder(batch_size=20, learning_rate=1e-10, opt_params=None, convolutional_auto_encoder=None, deconvolution_layer_list=None, gray_scale_flag=True, channel=None)[source]

Bases: pygan.generativemodel.auto_encoder_model.AutoEncoderModel

Convolutional Auto-Encoder (CAE) as an AutoEncoderModel.

A stack of Convolutional Auto-Encoders (Masci, J., et al., 2011) forms a convolutional neural network (CNN), which is among the most successful models for supervised image classification. Each Convolutional Auto-Encoder is trained using conventional on-line gradient descent without additional regularization terms.

In this library, the Convolutional Auto-Encoder is also based on the Encoder/Decoder scheme: the encoder is to the decoder what the Convolution is to the Deconvolution. The Deconvolution, also called transposed convolution, “work[s] by swapping the forward and backward passes of a convolution” (Dumoulin, V., & Visin, F., 2016, p. 20).

References

  • Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
  • Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.
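
A minimal construction sketch, using only the keyword arguments listed in the class signature above. The values shown are the documented defaults, and the inline comments are inferences from the parameter names rather than statements from this page:

    from pygan.generativemodel.autoencodermodel.convolutional_auto_encoder import ConvolutionalAutoEncoder

    # Build the generative model with the documented defaults.
    # `gray_scale_flag=True` presumably implies single-channel inputs;
    # `channel` would be set explicitly for multi-channel (e.g. RGB) images.
    convolutional_auto_encoder = ConvolutionalAutoEncoder(
        batch_size=20,          # mini-batch size
        learning_rate=1e-10,    # learning rate for gradient descent
        opt_params=None,        # optimizer parameters (library default when None)
        gray_scale_flag=True,   # gray-scale input images
    )
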
convolutional_auto_encoder

getter

deconvolution_layer_list

getter

draw()[source]

Draws samples from the fake distribution.

Returns: np.ndarray of samples.
get_convolutional_auto_encoder()[source]

getter

get_deconvolution_layer_list()[source]

getter

get_pre_loss_arr()[source]

getter

inference(observed_arr)[source]

Draws samples from the fake distribution, inferred from the observed data points.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inferenced data points.
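
Continuing the construction sketch above, a rough usage example of inference(). The 4-D image-batch shape is an assumption typical of convolutional layers and is not stated on this page:

    import numpy as np

    # Hypothetical batch of 20 gray-scale 28x28 images, shaped
    # (batch_size, channel, height, width); the exact expected shape
    # depends on how the underlying convolution layers are configured.
    observed_arr = np.random.normal(size=(20, 1, 28, 28))

    inferenced_arr = convolutional_auto_encoder.inference(observed_arr)
    print(inferenced_arr.shape)  # np.ndarray of inferenced data points
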
learn(grad_arr)[source]

Update this Generative Model by ascending its stochastic gradient.

Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of delta or gradients.
pre_learn(true_sampler, epochs=1000)[source]

Pre-learning.

Parameters:
  • true_sampler – is-a TrueSampler.
  • epochs – Epochs.
pre_loss_arr

getter

set_convolutional_auto_encoder(value)[source]

setter

set_deconvolution_layer_list(value)[source]

setter

set_readonly(value)[source]

setter

switch_inferencing_mode(inferencing_mode=True)[source]

Set inferencing mode in relation to concrete regularizations.

Parameters: inferencing_mode – Inferencing mode or not.
update()[source]

Update the encoder and the decoder to minimize the reconstruction error of the inputs.

Returns: np.ndarray of the reconstruction errors.
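
Putting pre_learn() and the pre_loss_arr getter together, a hedged pre-training sketch might look as follows. Here my_true_sampler stands for a concrete TrueSampler implementation prepared elsewhere, and the idea that pre_loss_arr records the reconstruction errors observed during pre-learning is an inference from its name:

    # `my_true_sampler` is assumed to be an instance of a concrete
    # TrueSampler subclass that yields batches of real images.
    convolutional_auto_encoder.pre_learn(true_sampler=my_true_sampler, epochs=1000)

    # Inspect the reconstruction errors recorded during pre-learning.
    print(convolutional_auto_encoder.pre_loss_arr[-10:])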

pygan.generativemodel.autoencodermodel.encoder_decoder_model module

class pygan.generativemodel.autoencodermodel.encoder_decoder_model.EncoderDecoderModel(encoder_decoder_controller, seq_len=10, learning_rate=1e-10, join_io_flag=False)[source]

Bases: pygan.generativemodel.auto_encoder_model.AutoEncoderModel

Encoder/Decoder based on LSTM as a Generator.

This library regards the Encoder/Decoder based on LSTM as an Auto-Encoder.

Originally, Long Short-Term Memory (LSTM) networks, as a special RNN structure, have proven stable and powerful for modeling long-range dependencies.

The key point of this structural expansion is the memory cell, which essentially acts as an accumulator of the state information. Every time observed data points are given as new information to the LSTM’s input gate, that information is accumulated in the cell if the input gate is activated. The past state of the cell can be forgotten in this process if the LSTM’s forget gate is on. Whether the latest cell output is propagated to the final state is further controlled by the output gate.
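
For reference, the gating described above is conventionally written as follows (standard LSTM notation, not taken from this page): with input x_t, previous hidden state h_{t-1}, learned weights W, U and biases b, logistic sigmoid \sigma, and element-wise product \odot,

    i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)    % input gate
    f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)    % forget gate
    o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)    % output gate
    c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)    % memory cell accumulation
    h_t = o_t \odot \tanh(c_t)    % output controlled by the output gate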

References

  • Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
  • Malhotra, P., Ramakrishnan, A., Anand, G., Vig, L., Agarwal, P., & Shroff, G. (2016). LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148.
  • Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
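
A minimal construction sketch, again restricted to the keyword arguments in the class signature above. The constructor requires a pre-built LSTM-based Encoder/Decoder controller; how to build one is outside the scope of this module, so controller is only a placeholder here, and the inline comments are inferences from the parameter names:

    from pygan.generativemodel.autoencodermodel.encoder_decoder_model import EncoderDecoderModel

    # `controller` is assumed to be a pre-built LSTM Encoder/Decoder
    # controller object constructed elsewhere.
    encoder_decoder_model = EncoderDecoderModel(
        encoder_decoder_controller=controller,
        seq_len=10,           # length of each observed sequence
        learning_rate=1e-10,  # learning rate for gradient descent
        join_io_flag=False,   # documented default; semantics not described on this page
    )
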
draw()[source]

Draws samples from the fake distribution.

Returns: np.ndarray of samples.
encoder_decoder_controller

getter

get_encoder_decoder_controller()[source]

getter

get_pre_loss_arr()[source]

getter

inference(observed_arr)[source]

Draws samples from the fake distribution, inferred from the observed data points.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inferenced data points.
learn(grad_arr)[source]

Update this Generative Model by ascending its stochastic gradient.

Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of delta or gradients.
pre_learn(true_sampler, epochs=1000)[source]

Pre-learning.

Parameters:
  • true_sampler – is-a TrueSampler.
  • epochs – Epochs.
pre_loss_arr

getter

set_encoder_decoder_controller(value)[source]

setter

set_readonly(value)[source]

setter

switch_inferencing_mode(inferencing_mode=True)[source]

Set inferencing mode in relation to concrete regularizations.

Parameters: inferencing_mode – Inferencing mode or not.
update()[source]

Update the encoder and the decoder to minimize the reconstruction error of the inputs.

Returns: np.ndarray of the reconstruction errors.
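
Finally, a hedged end-to-end sketch of pre-learning followed by inference on sequential data, continuing the construction sketch above. The 3-D batch shape (batch_size, seq_len, dim) and my_true_sampler are assumptions, not documented here:

    import numpy as np

    # Pre-learning against a concrete TrueSampler (assumed to exist).
    encoder_decoder_model.pre_learn(true_sampler=my_true_sampler, epochs=1000)

    # Hypothetical batch of 20 sequences of length 10 with 100 features each.
    observed_arr = np.random.normal(size=(20, 10, 100))
    reconstructed_arr = encoder_decoder_model.inference(observed_arr)

    # Reconstruction errors recorded during pre-learning.
    print(encoder_decoder_model.pre_loss_arr[-10:])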

Module contents