pygan.generativemodel.autoencodermodel package¶
Submodules¶
pygan.generativemodel.autoencodermodel.conditional_convolutional_auto_encoder module¶

class
pygan.generativemodel.autoencodermodel.conditional_convolutional_auto_encoder.
ConditionalConvolutionalAutoEncoder
(conditional_convolutional_model, batch_size=20, learning_rate=1e-10, learning_attenuate_rate=0.1, attenuate_epoch=50, opt_params=None)[source]¶ Bases:
pygan.generativemodel.auto_encoder_model.AutoEncoderModel
Conditional Convolutional AutoEncoder (CCAE) as an AutoEncoderModel which has a ConditionalConvolutionalModel.
A stack of Convolutional AutoEncoders (Masci, J., et al., 2011) forms a convolutional neural network (CNN), which is among the most successful models for supervised image classification. Each Convolutional AutoEncoder is trained using conventional online gradient descent without additional regularization terms.
In this library, the Convolutional AutoEncoder is also based on the Encoder/Decoder scheme. The encoder is to the decoder what the Convolution is to the Deconvolution. The Deconvolution, also called the transposed convolution, works “by swapping the forward and backward passes of a convolution.” (Dumoulin, V., & Visin, F. 2016, p. 20)
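The quoted relationship can be made concrete with a small sketch (illustrative only, not pygan's implementation): a valid, stride-1 1-D convolution written as a dense matrix C, whose transpose C.T is exactly the forward pass of the matching transposed convolution.

```python
import numpy as np

def conv1d_matrix(kernel, n_in):
    """Dense matrix C such that a valid, stride-1 1-D convolution is C @ x."""
    k = len(kernel)
    n_out = n_in - k + 1
    C = np.zeros((n_out, n_in))
    for i in range(n_out):
        C[i, i:i + k] = kernel
    return C

kernel = np.array([1.0, -2.0, 1.0])   # second-difference kernel
C = conv1d_matrix(kernel, n_in=6)     # forward pass: maps length 6 -> length 4
x = np.arange(6, dtype=float)
y = C @ x                             # convolution output

# The transposed convolution "swaps the forward and backward passes":
# its forward pass is C.T, mapping length-4 signals back to length 6.
x_up = C.T @ y
```

Since the kernel is a second difference and `x` is a linear ramp, `y` is all zeros here; the point is the shape symmetry between `C` and `C.T`.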
Also, this model has a so-called Deconvolutional Neural Network as a Conditioner, where the Conditioner is a conditional mechanism that uses previous knowledge to condition the generations, incorporating information from previously observed data points into intermediate layers of the Generator. By this method, the model can “look back” without a recurrent unit such as those used in an RNN or LSTM.
This model observes not only random noise but also any other prior information as previous knowledge, and outputs feature points. Due to the Conditioner, this model has the capacity to exploit whatever prior knowledge is available, provided it can be represented as a matrix or tensor.
Note that this model defines the inputs as samples of conditions and thus the outputs as reconstructed samples of conditions, considering that those distributions, especially the scales represented by biases, can be equivalent or similar. This definition assumes an intuitive implementation specific to this library. If you do not want to train in this way, use ConditionalConvolutionalModel instead of this model.
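As a rough illustration of the conditional mechanism, one common way to condition a Generator is to join the condition tensor to the noise along the channel axis so that intermediate layers observe both. This is a sketch under that assumption, not pygan's exact wiring:

```python
import numpy as np

batch_size, channel, height, width = 20, 1, 10, 10
noise_arr = np.random.normal(size=(batch_size, channel, height, width))
# Prior knowledge (e.g. previously observed data points) as a condition tensor.
condition_arr = np.random.normal(size=(batch_size, channel, height, width))

# Join noise and condition along the channel axis (NCHW layout) so that the
# intermediate layers of the Generator can exploit both.
conditioned_arr = np.concatenate([noise_arr, condition_arr], axis=1)
```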
References
 Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
 Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional autoencoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.
 Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
 Yang, L. C., Chou, S. Y., & Yang, Y. H. (2017). MidiNet: A convolutional generative adversarial network for symbolic-domain music generation. arXiv preprint arXiv:1703.10847.
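The constructor's `learning_rate`, `learning_attenuate_rate`, and `attenuate_epoch` parameters suggest a step-decay schedule. A minimal sketch, assuming the rate is multiplied by `learning_attenuate_rate` once every `attenuate_epoch` epochs (the library's exact scheduling may differ):

```python
def attenuated_learning_rate(base_rate, attenuate_rate, attenuate_epoch, epoch):
    """Step decay: the rate is attenuated once every `attenuate_epoch` epochs."""
    return base_rate * attenuate_rate ** (epoch // attenuate_epoch)

# With the documented defaults (learning_rate=1e-10, learning_attenuate_rate=0.1,
# attenuate_epoch=50), the rate shrinks by a factor of 10 every 50 epochs.
rate_at_100 = attenuated_learning_rate(1e-10, 0.1, 50, epoch=100)
```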

conditional_convolutional_model
¶ getter

inference
(observed_arr)[source]¶ Draws samples from the true distribution.
Parameters: observed_arr – np.ndarray of observed data points. Returns: np.ndarray of inferenced data.

learn
(grad_arr)[source]¶ Update this Generator by ascending its stochastic gradient.
Parameters: grad_arr – np.ndarray of gradients. Returns: np.ndarray of delta or gradients.
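The ascent itself can be sketched in one line (a generic stochastic gradient ascent step over a hypothetical parameter array, not pygan's internal update rule):

```python
import numpy as np

def ascend(params_arr, grad_arr, learning_rate):
    """One stochastic gradient *ascent* step: move along the gradient."""
    return params_arr + learning_rate * grad_arr

params_arr = ascend(np.zeros(3), np.array([1.0, -2.0, 0.5]), learning_rate=0.1)
```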

pre_learn
(true_sampler, epochs=1000)[source]¶ Pre learning.
Parameters:  true_sampler – is a TrueSampler.
 epochs – Epochs.

pre_loss_arr
¶ getter

switch_inferencing_mode
(inferencing_mode=True)[source]¶ Set inferencing mode in relation to concrete regularizations.
Parameters: inferencing_mode – Inferencing mode or not.

update
()[source]¶ Update the encoder and the decoder to minimize the reconstruction error of the inputs.
This model defines the inputs as samples of conditions and thus the outputs as reconstructed samples of conditions, considering that those distributions, especially the scales represented by biases, are equivalent or similar.
Returns: np.ndarray of the reconstruction errors.
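A minimal sketch of such a reconstruction error (assuming a per-sample mean squared error between inputs and decoder outputs; the library's exact loss may differ):

```python
import numpy as np

def reconstruction_error(observed_arr, reconstructed_arr):
    """Per-sample mean squared error between inputs and their reconstructions."""
    delta_arr = reconstructed_arr - observed_arr
    return np.square(delta_arr).reshape(delta_arr.shape[0], -1).mean(axis=1)

observed_arr = np.ones((20, 1, 10, 10))
reconstructed_arr = np.zeros((20, 1, 10, 10))
error_arr = reconstruction_error(observed_arr, reconstructed_arr)
```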
pygan.generativemodel.autoencodermodel.convolutional_auto_encoder module¶

class
pygan.generativemodel.autoencodermodel.convolutional_auto_encoder.
ConvolutionalAutoEncoder
(batch_size=20, learning_rate=1e-10, learning_attenuate_rate=0.1, attenuate_epoch=50, opt_params=None, convolutional_auto_encoder=None, deconvolution_layer_list=None, gray_scale_flag=True, channel=None)[source]¶ Bases:
pygan.generativemodel.auto_encoder_model.AutoEncoderModel
Convolutional AutoEncoder (CAE) as an AutoEncoderModel.
A stack of Convolutional AutoEncoders (Masci, J., et al., 2011) forms a convolutional neural network (CNN), which is among the most successful models for supervised image classification. Each Convolutional AutoEncoder is trained using conventional online gradient descent without additional regularization terms.
In this library, the Convolutional AutoEncoder is also based on the Encoder/Decoder scheme. The encoder is to the decoder what the Convolution is to the Deconvolution. The Deconvolution, also called the transposed convolution, works “by swapping the forward and backward passes of a convolution.” (Dumoulin, V., & Visin, F. 2016, p. 20)
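The output-size arithmetic behind this encoder/decoder symmetry (following the convolution arithmetic relationships in Dumoulin & Visin, 2016) can be sketched as:

```python
def conv_output_size(input_size, kernel_size, stride=1, padding=0):
    """Spatial output size of a convolution."""
    return (input_size + 2 * padding - kernel_size) // stride + 1

def deconv_output_size(input_size, kernel_size, stride=1, padding=0):
    """Spatial output size of the matching transposed convolution."""
    return (input_size - 1) * stride - 2 * padding + kernel_size

# The encoder is to the decoder what the Convolution is to the Deconvolution:
encoded_size = conv_output_size(10, kernel_size=3)              # 10 -> 8
decoded_size = deconv_output_size(encoded_size, kernel_size=3)  # 8 -> 10
```

With stride 1 and no padding, the transposed convolution exactly undoes the spatial shrinkage of the encoder's convolution.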
References
 Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
 Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional autoencoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.

convolutional_auto_encoder
¶ getter

deconvolution_layer_list
¶ getter

inference
(observed_arr)[source]¶ Draws samples from the fake distribution.
Parameters: observed_arr – np.ndarray of observed data points. Returns: np.ndarray of inferenced data.

learn
(grad_arr)[source]¶ Update this Generator by ascending its stochastic gradient.
Parameters: grad_arr – np.ndarray of gradients. Returns: np.ndarray of delta or gradients.

pre_learn
(true_sampler, epochs=1000)[source]¶ Pre learning.
Parameters:  true_sampler – is a TrueSampler.
 epochs – Epochs.

pre_loss_arr
¶ getter
pygan.generativemodel.autoencodermodel.encoder_decoder_model module¶

class
pygan.generativemodel.autoencodermodel.encoder_decoder_model.
EncoderDecoderModel
(encoder_decoder_controller, seq_len=10, learning_rate=1e-10, learning_attenuate_rate=0.1, attenuate_epoch=50, join_io_flag=False)[source]¶ Bases:
pygan.generativemodel.auto_encoder_model.AutoEncoderModel
Encoder/Decoder based on LSTM as a Generator.
This library regards the Encoder/Decoder based on LSTM as an AutoEncoder.
Originally, Long Short-Term Memory (LSTM) networks, as a special RNN structure, have proven stable and powerful for modeling long-range dependencies.
The key point of the structural expansion is its memory cell, which essentially acts as an accumulator of the state information. Every time observed data points are given as new information and input to the LSTM’s input gate, that information is accumulated in the cell if the input gate is activated. The past state of the cell can be forgotten in this process if the LSTM’s forget gate is on. Whether the latest cell output is propagated to the final state is further controlled by the output gate.
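The gate mechanics described above can be sketched as a single LSTM step in NumPy (an illustrative implementation with randomly initialized weights, not this library's internals):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step; W, U, b each hold the four gates stacked: i, f, o, g."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0 * n:1 * n])    # input gate: accumulate new information
    f = sigmoid(z[1 * n:2 * n])    # forget gate: drop the past cell state
    o = sigmoid(z[2 * n:3 * n])    # output gate: expose the cell to the state
    g = np.tanh(z[3 * n:4 * n])    # candidate cell content
    c = f * c_prev + i * g         # memory cell as an accumulator of state
    h = o * np.tanh(c)             # hidden state propagated to the next step
    return h, c

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 3
W = rng.normal(size=(4 * n_hidden, n_in))
U = rng.normal(size=(4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hidden), np.zeros(n_hidden), W, U, b)
```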
References
 Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
 Malhotra, P., Ramakrishnan, A., Anand, G., Vig, L., Agarwal, P., & Shroff, G. (2016). LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148.
 Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.

encoder_decoder_controller
¶ getter

inference
(observed_arr)[source]¶ Draws samples from the fake distribution.
Parameters: observed_arr – np.ndarray of observed data points. Returns: np.ndarray of inferenced data.

learn
(grad_arr)[source]¶ Update this Generator by ascending its stochastic gradient.
Parameters: grad_arr – np.ndarray of gradients. Returns: np.ndarray of delta or gradients.

pre_learn
(true_sampler, epochs=1000)[source]¶ Pre learning.
Parameters:  true_sampler – is a TrueSampler.
 epochs – Epochs.

pre_loss_arr
¶ getter