pygan.generativemodel.autoencodermodel package

Submodules

pygan.generativemodel.autoencodermodel.conditional_convolutional_auto_encoder module

class pygan.generativemodel.autoencodermodel.conditional_convolutional_auto_encoder.ConditionalConvolutionalAutoEncoder(conditional_convolutional_model, batch_size=20, learning_rate=1e-10, learning_attenuate_rate=0.1, attenuate_epoch=50, opt_params=None)[source]

Bases: pygan.generativemodel.auto_encoder_model.AutoEncoderModel

Conditional Convolutional Auto-Encoder (CCAE) as an AutoEncoderModel that has a ConditionalConvolutionalModel.

A stack of Convolutional Auto-Encoders (Masci, J., et al., 2011) forms a convolutional neural network (CNN), which is among the most successful models for supervised image classification. Each Convolutional Auto-Encoder is trained using conventional on-line gradient descent without additional regularization terms.

In this library, the Convolutional Auto-Encoder is also based on the Encoder/Decoder scheme. The encoder is to the decoder what the Convolution is to the Deconvolution. Deconvolutions, also called transposed convolutions, “work by swapping the forward and backward passes of a convolution” (Dumoulin, V., & Visin, F., 2016, p. 20).
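The swap of forward and backward passes can be illustrated with a small numpy sketch (illustrative only, not part of this library's API): a 1-D convolution is written as multiplication by a sparse matrix C, and the corresponding transposed convolution is multiplication by C.T, which maps the smaller output back to the input size.

```python
import numpy as np

# 1-D convolution with kernel size 3, stride 1, no padding,
# expressed as multiplication by a sparse matrix C.
kernel = np.array([1.0, 2.0, 3.0])
input_len = 5
output_len = input_len - len(kernel) + 1  # 3

C = np.zeros((output_len, input_len))
for i in range(output_len):
    C[i, i:i + len(kernel)] = kernel

x = np.arange(1.0, 6.0)   # input signal of length 5
y = C @ x                 # forward pass: convolution, output length 3

# The transposed convolution "swaps the forward and backward passes":
# the backward pass of the convolution is multiplication by C.T,
# which maps the length-3 signal back up to length 5.
x_up = C.T @ y
```

The same matrix view generalizes to 2-D convolutions, which is why the encoder's Convolution and the decoder's Deconvolution are natural mirror images of each other.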

Also, this model has a so-called Deconvolutional Neural Network as a Conditioner, where the Conditioner is a conditional mechanism that uses previous knowledge to condition the generations, incorporating information from previously observed data points into intermediate layers of the Generator. In this way, the model can “look back” without a recurrent unit such as those used in an RNN or LSTM.

This model observes not only random noise but also any other prior information as previous knowledge and outputs feature points. Due to the Conditioner, the model can exploit whatever prior knowledge is available and can be represented as a matrix or tensor.
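A common way to realize such a conditional mechanism (a generic sketch, not this library's exact implementation; the shapes below are hypothetical) is to concatenate the prior-knowledge tensor with the noise tensor along the channel axis before it reaches the generator's intermediate layers:

```python
import numpy as np

# Hypothetical shapes: batch of 20, 1 channel, 10x10 feature maps.
batch_size, channel, height, width = 20, 1, 10, 10

# Random noise observed by the generator.
noise_arr = np.random.normal(size=(batch_size, channel, height, width))
# Prior knowledge (e.g. previously observed data points) as a tensor.
condition_arr = np.random.normal(size=(batch_size, channel, height, width))

# Concatenate the condition with the noise along the channel axis, so
# intermediate layers can "look back" at the prior information without
# any recurrent unit.
conditioned_arr = np.concatenate([noise_arr, condition_arr], axis=1)
```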

Note that this model defines the inputs as samples of conditions, and therefore the outputs as reconstructed samples of conditions, considering that those distributions, especially the scales represented by biases, can be equivalent or similar. This definition assumes an intuitive implementation specific to this library. If you do not want to train in this way, use ConditionalConvolutionalModel instead of this model.

References

  • Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
  • Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.
  • Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
  • Yang, L. C., Chou, S. Y., & Yang, Y. H. (2017). MidiNet: A convolutional generative adversarial network for symbolic-domain music generation. arXiv preprint arXiv:1703.10847.
conditional_convolutional_model

getter

draw()[source]

Draws samples from the fake distribution.

Returns: np.ndarray of samples.
get_conditional_convolutional_model()[source]

getter

get_pre_loss_arr()[source]

getter

inference(observed_arr)[source]

Draws samples from the true distribution.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inferenced samples.
learn(grad_arr)[source]

Update this Generator by ascending its stochastic gradient.

Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of delta or gradients.
pre_learn(true_sampler, epochs=1000)[source]

Pre-learning.

Parameters:
  • true_sampler – is-a TrueSampler.
  • epochs – Epochs.
pre_loss_arr

getter

set_conditional_convolutional_model(value)[source]

setter

set_readonly(value)[source]

setter

switch_inferencing_mode(inferencing_mode=True)[source]

Set inferencing mode in relation to concrete regularizations.

Parameters: inferencing_mode – Inferencing mode or not.
update()[source]

Update the encoder and the decoder to minimize the reconstruction error of the inputs.

This model defines the inputs as samples of conditions, and therefore the outputs as reconstructed samples of conditions, considering that those distributions, especially the scales represented by biases, are equivalent or similar.

Returns: np.ndarray of the reconstruction errors.
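As a rough sketch of the quantity update() minimizes (a generic per-sample mean squared error between the observed condition samples and their reconstructions; the function name and exact loss are illustrative, not this library's API):

```python
import numpy as np

def reconstruction_error(observed_arr, reconstructed_arr):
    """Mean squared reconstruction error, one scalar per sample."""
    diff = observed_arr - reconstructed_arr
    # Average the squared differences over every axis except the batch axis.
    return np.square(diff).mean(axis=tuple(range(1, observed_arr.ndim)))

# Toy batch of 4 samples, 1 channel, 2x2 feature maps.
observed = np.ones((4, 1, 2, 2))
reconstructed = np.zeros((4, 1, 2, 2))
errors = reconstruction_error(observed, reconstructed)
```

Minimizing this error drives the decoder's reconstructions toward the encoder's inputs, which is the sense in which the inputs and outputs share the same distribution of conditions.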

pygan.generativemodel.autoencodermodel.convolutional_auto_encoder module

class pygan.generativemodel.autoencodermodel.convolutional_auto_encoder.ConvolutionalAutoEncoder(batch_size=20, learning_rate=1e-10, learning_attenuate_rate=0.1, attenuate_epoch=50, opt_params=None, convolutional_auto_encoder=None, deconvolution_layer_list=None, gray_scale_flag=True, channel=None)[source]

Bases: pygan.generativemodel.auto_encoder_model.AutoEncoderModel

Convolutional Auto-Encoder (CAE) as an AutoEncoderModel.

A stack of Convolutional Auto-Encoders (Masci, J., et al., 2011) forms a convolutional neural network (CNN), which is among the most successful models for supervised image classification. Each Convolutional Auto-Encoder is trained using conventional on-line gradient descent without additional regularization terms.

In this library, the Convolutional Auto-Encoder is also based on the Encoder/Decoder scheme. The encoder is to the decoder what the Convolution is to the Deconvolution. Deconvolutions, also called transposed convolutions, “work by swapping the forward and backward passes of a convolution” (Dumoulin, V., & Visin, F., 2016, p. 20).

References

  • Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
  • Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.
convolutional_auto_encoder

getter

deconvolution_layer_list

getter

draw()[source]

Draws samples from the fake distribution.

Returns: np.ndarray of samples.
get_convolutional_auto_encoder()[source]

getter

get_deconvolution_layer_list()[source]

getter

get_pre_loss_arr()[source]

getter

inference(observed_arr)[source]

Draws samples from the true distribution.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inferenced samples.
learn(grad_arr)[source]

Update this Generator by ascending its stochastic gradient.

Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of delta or gradients.
pre_learn(true_sampler, epochs=1000)[source]

Pre-learning.

Parameters:
  • true_sampler – is-a TrueSampler.
  • epochs – Epochs.
pre_loss_arr

getter

set_convolutional_auto_encoder(value)[source]

setter

set_deconvolution_layer_list(value)[source]

setter

set_readonly(value)[source]

setter

switch_inferencing_mode(inferencing_mode=True)[source]

Set inferencing mode in relation to concrete regularizations.

Parameters: inferencing_mode – Inferencing mode or not.
update()[source]

Update the encoder and the decoder to minimize the reconstruction error of the inputs.

Returns: np.ndarray of the reconstruction errors.

pygan.generativemodel.autoencodermodel.encoder_decoder_model module

class pygan.generativemodel.autoencodermodel.encoder_decoder_model.EncoderDecoderModel(encoder_decoder_controller, seq_len=10, learning_rate=1e-10, learning_attenuate_rate=0.1, attenuate_epoch=50, join_io_flag=False)[source]

Bases: pygan.generativemodel.auto_encoder_model.AutoEncoderModel

Encoder/Decoder based on LSTM as a Generator.

This library regards the Encoder/Decoder based on LSTM as an Auto-Encoder.

Originally, Long Short-Term Memory (LSTM) networks, as a special RNN structure, have proven stable and powerful for modeling long-range dependencies.

The key point of this structural expansion is its memory cell, which essentially acts as an accumulator of state information. Each time observed data points are given as new information and input to the LSTM’s input gate, that information is accumulated in the cell if the input gate is activated. The past state of the cell can be forgotten in this process if the LSTM’s forget gate is on. Whether the latest cell output is propagated to the final state is further controlled by the output gate.
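The gating described above can be sketched as a minimal numpy LSTM step (a textbook formulation for illustration, not this library's internal implementation; the parameter layout is an assumption of this sketch):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b stack the input, forget, output and
    candidate parameters along the last axis (4 * hidden)."""
    hidden = h_prev.shape[-1]
    z = x @ W + h_prev @ U + b
    i = sigmoid(z[..., :hidden])                 # input gate: accumulate new info?
    f = sigmoid(z[..., hidden:2 * hidden])       # forget gate: keep past cell state?
    o = sigmoid(z[..., 2 * hidden:3 * hidden])   # output gate: propagate cell output?
    g = np.tanh(z[..., 3 * hidden:])             # candidate cell state
    c = f * c_prev + i * g   # memory cell as an accumulator of state
    h = o * np.tanh(c)       # cell output gated toward the final state
    return h, c

rng = np.random.default_rng(0)
in_dim, hidden = 3, 4
W = rng.normal(size=(in_dim, 4 * hidden))
U = rng.normal(size=(hidden, 4 * hidden))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for t in range(5):  # accumulate state over a short sequence
    x = rng.normal(size=in_dim)
    h, c = lstm_step(x, h, c, W, U, b)
```

In the Encoder/Decoder, one such recurrence encodes the observed sequence into a final state, and a second recurrence decodes that state back into a reconstruction.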

References

  • Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
  • Malhotra, P., Ramakrishnan, A., Anand, G., Vig, L., Agarwal, P., & Shroff, G. (2016). LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148.
  • Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
draw()[source]

Draws samples from the fake distribution.

Returns: np.ndarray of samples.
encoder_decoder_controller

getter

get_encoder_decoder_controller()[source]

getter

get_pre_loss_arr()[source]

getter

inference(observed_arr)[source]

Draws samples from the true distribution.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inferenced samples.
learn(grad_arr)[source]

Update this Generator by ascending its stochastic gradient.

Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of delta or gradients.
pre_learn(true_sampler, epochs=1000)[source]

Pre-learning.

Parameters:
  • true_sampler – is-a TrueSampler.
  • epochs – Epochs.
pre_loss_arr

getter

set_encoder_decoder_controller(value)[source]

setter

set_readonly(value)[source]

setter

switch_inferencing_mode(inferencing_mode=True)[source]

Set inferencing mode in relation to concrete regularizations.

Parameters: inferencing_mode – Inferencing mode or not.
update()[source]

Update the encoder and the decoder to minimize the reconstruction error of the inputs.

Returns: np.ndarray of the reconstruction errors.

Module contents