pygan.generativemodel package

Submodules
pygan.generativemodel.auto_encoder_model module

class pygan.generativemodel.auto_encoder_model.AutoEncoderModel
Bases: pygan.generative_model.GenerativeModel
Auto-Encoder as a GenerativeModel which draws samples from the fake distribution.
pygan.generativemodel.conditional_generative_model module

class pygan.generativemodel.conditional_generative_model.ConditionalGenerativeModel
Bases: pygan.generative_model.GenerativeModel
Generate samples based on the conditional noise prior.
A GenerativeModel that has a Conditioner, a conditional mechanism that uses prior knowledge to condition the generations, incorporating information from previously observed data points into intermediate layers of the Generator. In this way, the model can “look back” without a recurrent unit as used in RNNs or LSTMs.
This model observes not only random noise but also any other prior information as previous knowledge, and outputs feature points. Due to the Conditioner, this model has the capacity to exploit whatever prior knowledge is available and can be represented as a matrix or tensor.
References
- Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
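As an illustrative sketch of the conditioning mechanism described above (not pygan's actual API; all names below are assumptions), prior knowledge such as one-hot labels can simply be concatenated with the noise prior before it is fed to the Generator:

    import numpy as np

    # Illustrative sketch, not pygan's API: a Conditioner joins prior
    # knowledge with the noise prior before it reaches the Generator's layers.
    def condition(noise_arr, prior_arr):
        # prior_arr encodes previously observed data points as a matrix.
        return np.concatenate([noise_arr, prior_arr], axis=-1)

    noise_arr = np.random.normal(size=(32, 100))          # noise prior z
    prior_arr = np.eye(10)[np.random.randint(0, 10, 32)]  # one-hot prior knowledge y
    conditioned_arr = condition(noise_arr, prior_arr)     # input to the Generator
    print(conditioned_arr.shape)                          # (32, 110)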
pygan.generativemodel.deconvolution_model module

class pygan.generativemodel.deconvolution_model.DeconvolutionModel(deconvolution_layer_list, computable_loss=None, cnn_output_graph=None, opt_params=None, learning_rate=1e-05, learning_attenuate_rate=0.1, attenuate_epoch=50)
Bases: pygan.generative_model.GenerativeModel
So-called Deconvolutional Neural Network as a GenerativeModel.
Deconvolutions, also called transposed convolutions, “work by swapping the forward and backward passes of a convolution” (Dumoulin, V., & Visin, F., 2016, p. 20).
References
- Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
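A minimal 1-D NumPy sketch of the swap described above (illustrative only, not pygan's implementation): the forward pass of a transposed convolution is the backward, input-gradient pass of an ordinary strided convolution, so each input value writes a scaled copy of the kernel into an upsampled output:

    import numpy as np

    # Minimal 1-D transposed convolution: upsamples a length-n input to
    # length (n - 1) * stride + k, mirroring a strided conv's input-gradient pass.
    def transposed_conv1d(x, kernel, stride=2):
        k = kernel.shape[0]
        out = np.zeros((x.shape[0] - 1) * stride + k)
        for i, v in enumerate(x):
            # Each input value paints a scaled copy of the kernel into the
            # output, which is how deconvolution layers upsample feature maps.
            out[i * stride : i * stride + k] += v * kernel
        return out

    x = np.array([1.0, 2.0, 3.0])
    kernel = np.array([0.5, 1.0, 0.5])
    print(transposed_conv1d(x, kernel))  # 7 values: (3 - 1) * 2 + 3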
deconvolution_layer_list
getter
inference(observed_arr)
Draws samples from the fake distribution.
Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inferred data points.
learn(grad_arr)
Update this Generator by ascending its stochastic gradient.
Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of delta or gradients.
output_back_propagate(pred_arr, delta_arr)
Back propagation in the output layer.
Parameters:
- pred_arr – np.ndarray of predicted data points.
- delta_arr – np.ndarray of delta.
Returns: Tuple data of (np.ndarray of delta, list of gradients).
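For intuition, here is a minimal NumPy sketch of back propagation through a linear output layer y = x @ W (the linear layer and the extra x_arr argument are assumptions for illustration; pygan's actual layer and signature differ):

    import numpy as np

    def output_back_propagate(x_arr, delta_arr, weight_arr):
        """Return the delta for the previous layer and the weight gradients."""
        grad_arr = np.dot(x_arr.T, delta_arr)             # gradient w.r.t. weights
        prev_delta_arr = np.dot(delta_arr, weight_arr.T)  # delta w.r.t. layer input
        return prev_delta_arr, [grad_arr]

    x_arr = np.random.randn(4, 8)       # batch of output-layer inputs
    weight_arr = np.random.randn(8, 3)
    delta_arr = np.random.randn(4, 3)   # delta arriving from the loss
    prev_delta_arr, grad_list = output_back_propagate(x_arr, delta_arr, weight_arr)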
pygan.generativemodel.lstm_model module

class pygan.generativemodel.lstm_model.LSTMModel(lstm_model=None, computable_loss=None, batch_size=20, input_neuron_count=100, hidden_neuron_count=300, observed_activating_function=None, input_gate_activating_function=None, forget_gate_activating_function=None, output_gate_activating_function=None, hidden_activating_function=None, output_activating_function=None, seq_len=10, join_io_flag=False, learning_rate=1e-05, learning_attenuate_rate=0.1, attenuate_epoch=50)
Bases: pygan.generative_model.GenerativeModel
LSTM as a Generator.
Long Short-Term Memory (LSTM) networks, a special RNN structure, have proven stable and powerful for modeling long-range dependencies.
The key point of this structural expansion is the memory cell, which essentially acts as an accumulator of state information. Each time observed data points arrive as new information at the LSTM's input gate, that information is accumulated in the cell if the input gate is activated. The past state of the cell can be forgotten in this process if the forget gate is on. Whether the latest cell output is propagated to the final state is further controlled by the output gate.
References
- Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
- Malhotra, P., Ramakrishnan, A., Anand, G., Vig, L., Agarwal, P., & Shroff, G. (2016). LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148.
- Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
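A minimal NumPy sketch of one LSTM step with the three gates described above (shapes follow the defaults batch_size=20, input_neuron_count=100, and hidden_neuron_count=300; the stacked weight layout is illustrative, not pygan's internals):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        z = x @ W + h_prev @ U + b                    # stacked pre-activations
        i, f, o, g = np.split(z, 4, axis=-1)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input, forget, output gates
        g = np.tanh(g)                                # candidate cell input
        c = f * c_prev + i * g    # forget gate erases; input gate accumulates
        h = o * np.tanh(c)        # output gate controls what propagates onward
        return h, c

    dim, hid = 100, 300
    x = np.random.randn(20, dim)
    h = c = np.zeros((20, hid))
    W = np.random.randn(dim, 4 * hid) * 0.01
    U = np.random.randn(hid, 4 * hid) * 0.01
    b = np.zeros(4 * hid)
    h, c = lstm_step(x, h, c, W, U, b)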
inference(observed_arr)
Draws samples from the fake distribution.
Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inferred data.
learn(grad_arr)
Update this Generator by ascending its stochastic gradient.
Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of delta or gradients.
lstm_model
getter
pygan.generativemodel.nn_model module

class pygan.generativemodel.nn_model.NNModel(batch_size, nn_layer_list, learning_rate=1e-05, learning_attenuate_rate=0.1, attenuate_epoch=50, computable_loss=None, opt_params=None, verificatable_result=None, pre_learned_path_list=None, nn=None)
Bases: pygan.generative_model.GenerativeModel
Neural Network as a GenerativeModel.
inference(observed_arr)
Draws samples from the fake distribution.
Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inferred data points.
learn(grad_arr)
Update this Generator by ascending its stochastic gradient.
Parameters:
- grad_arr – np.ndarray of gradients.
- fix_opt_flag – If False, no optimization in this model will be done.
Returns: np.ndarray of delta or gradients.
nn
getter