pygan.generativemodel package

Submodules

pygan.generativemodel.auto_encoder_model module

class pygan.generativemodel.auto_encoder_model.AutoEncoderModel[source]

Bases: pygan.generative_model.GenerativeModel

Auto-Encoder as a GenerativeModel that draws samples from the fake distribution.

pre_learn(true_sampler, epochs=1000)[source]

Pre-learning.

Parameters:
  • true_sampler – is-a TrueSampler.
  • epochs – The number of epochs of pre-learning.
update()[source]

Update the encoder and the decoder to minimize the reconstruction error of the inputs.

Returns: np.ndarray of the reconstruction errors.
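
The quantity update() minimizes can be pictured with a toy reconstruction step. Below is a minimal NumPy sketch assuming a linear encoder/decoder pair; every name in it is illustrative and not part of pygan’s internals.

    import numpy as np

    # Hypothetical linear encoder/decoder standing in for the concrete model.
    rng = np.random.default_rng(0)
    observed_arr = rng.standard_normal((20, 10))       # batch of observed data points
    enc_weight_arr = rng.standard_normal((10, 3)) * 0.1
    dec_weight_arr = rng.standard_normal((3, 10)) * 0.1

    # Encode, decode, and measure the reconstruction error of the inputs.
    reconstructed_arr = observed_arr @ enc_weight_arr @ dec_weight_arr
    error_arr = ((observed_arr - reconstructed_arr) ** 2).mean(axis=1)
    # update() would back-propagate this error through the decoder and encoder;
    # here we only form the np.ndarray of reconstruction errors it returns.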

pygan.generativemodel.conditional_generative_model module

class pygan.generativemodel.conditional_generative_model.ConditionalGenerativeModel[source]

Bases: pygan.generative_model.GenerativeModel

Generate samples based on the conditional noise prior.

A GenerativeModel that has a Conditioner. The Conditioner is a conditional mechanism that uses previous knowledge to condition the generations, incorporating information from previously observed data points into intermediate layers of the Generator. With this mechanism, the model can “look back” without a recurrent unit such as those used in an RNN or LSTM.

This model observes not only random noise but also any other prior information as previous knowledge, and outputs feature points. Due to the Conditioner, this model has the capacity to exploit whatever prior knowledge is available, as long as it can be represented as a matrix or tensor.

References

  • Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
extract_conditions()[source]

Extract samples of conditions.

Returns: np.ndarray of samples.
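
As a rough picture of the conditional mechanism, the conditions returned by extract_conditions() can be joined to the noise prior before generation. The sketch below assumes concatenation along the feature axis; pygan’s Conditioner may instead inject the conditions into intermediate layers of the Generator.

    import numpy as np

    rng = np.random.default_rng(0)
    noise_arr = rng.standard_normal((20, 100))     # samples from the noise prior
    condition_arr = rng.standard_normal((20, 10))  # analogue of extract_conditions()

    # Condition the generations: prior knowledge rides along with the noise,
    # so the model can "look back" without a recurrent unit.
    conditioned_arr = np.concatenate([noise_arr, condition_arr], axis=1)
    # conditioned_arr would then be fed to the Generator's input layer.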

pygan.generativemodel.deconvolution_model module

class pygan.generativemodel.deconvolution_model.DeconvolutionModel(deconvolution_layer_list, computable_loss=None, cnn_output_graph=None, opt_params=None, learning_rate=1e-05, learning_attenuate_rate=0.1, attenuate_epoch=50)[source]

Bases: pygan.generative_model.GenerativeModel

So-called Deconvolutional Neural Network as a GenerativeModel.

Deconvolutions, also called transposed convolutions, “work by swapping the forward and backward passes of a convolution” (Dumoulin, V., & Visin, F., 2016, p. 20).

References

  • Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
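
The quoted “swapping” can be made concrete for the stride-1 case: each input element scatters a scaled copy of the kernel into the output, which is exactly the backward pass of an ordinary convolution. This is a minimal NumPy sketch of the arithmetic, not pygan’s implementation.

    import numpy as np

    def conv2d_transpose(x_arr, kernel_arr):
        """Stride-1 transposed convolution of a (H, W) map with a (kH, kW) kernel."""
        h, w = x_arr.shape
        k_h, k_w = kernel_arr.shape
        out_arr = np.zeros((h + k_h - 1, w + k_w - 1))
        # Scatter: the backward pass of a convolution's gather.
        for i in range(h):
            for j in range(w):
                out_arr[i:i + k_h, j:j + k_w] += x_arr[i, j] * kernel_arr
        return out_arr

    conv2d_transpose(np.ones((2, 2)), np.ones((3, 3))).shape  # (4, 4): upsampled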
deconvolution_layer_list

getter

draw()[source]

Draws samples from the fake distribution.

Returns: np.ndarray of samples.
get_deconvolution_layer_list()[source]

getter

inference(observed_arr)[source]

Draws samples from the fake distribution.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inferenced data points.
learn(grad_arr)[source]

Update this Generator by ascending its stochastic gradient.

Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of deltas or gradients.
output_back_propagate(pred_arr, delta_arr)[source]

Back propagation in output layer.

Parameters:
  • pred_arr – np.ndarray of predicted data points.
  • delta_arr – np.ndarray of deltas.
Returns:

Tuple data: np.ndarray of deltas and a list of gradients.

output_forward_propagate(pred_arr)[source]

Forward propagation in output layer.

Parameters: pred_arr – np.ndarray of predicted data points.
Returns: np.ndarray of propagated data points.
set_deconvolution_layer_list(value)[source]

setter

switch_inferencing_mode(inferencing_mode=True)[source]

Set inferencing mode in relation to concrete regularizations.

Parameters: inferencing_mode – Whether to switch to inferencing mode.
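
A concrete regularization that such a switch typically governs is dropout, which must behave differently while learning and while drawing samples. The following is a hypothetical sketch of that distinction, not pygan’s code.

    import numpy as np

    def dropout(x_arr, p=0.5, inferencing_mode=False, rng=None):
        """Inverted dropout: active while learning, a no-op while inferencing."""
        if inferencing_mode:
            return x_arr                          # draw samples with all units active
        rng = rng or np.random.default_rng(0)
        mask_arr = rng.random(x_arr.shape) >= p   # drop each unit with probability p
        return x_arr * mask_arr / (1.0 - p)       # rescale to keep expectations equal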

pygan.generativemodel.lstm_model module

class pygan.generativemodel.lstm_model.LSTMModel(lstm_model=None, computable_loss=None, batch_size=20, input_neuron_count=100, hidden_neuron_count=300, observed_activating_function=None, input_gate_activating_function=None, forget_gate_activating_function=None, output_gate_activating_function=None, hidden_activating_function=None, output_activating_function=None, seq_len=10, join_io_flag=False, learning_rate=1e-05, learning_attenuate_rate=0.1, attenuate_epoch=50)[source]

Bases: pygan.generative_model.GenerativeModel

LSTM as a Generator.

Long Short-Term Memory (LSTM) networks, a special RNN structure, have proven stable and powerful for modeling long-range dependencies.

The key point of this structural expansion is the memory cell, which essentially acts as an accumulator of the state information. Every time new observed data points reach the LSTM’s input gate, their information is accumulated into the cell if the input gate is activated. The past state of the cell can be forgotten in this process if the forget gate is on. Whether the latest cell output is propagated to the final state is further controlled by the output gate (see the sketch after the references below).

References

  • Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
  • Malhotra, P., Ramakrishnan, A., Anand, G., Vig, L., Agarwal, P., & Shroff, G. (2016). LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148.
  • Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
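
The gating described above fits in a few lines. Below is a minimal NumPy sketch of one step of the textbook LSTM cell, assuming weights that act on the concatenated input and hidden state; pygan’s LSTMModel parameterizes this differently (separate activating functions, biases, and so on).

    import numpy as np

    def sigmoid(x_arr):
        return 1.0 / (1.0 + np.exp(-x_arr))

    def lstm_step(x_arr, h_arr, c_arr, weight_list):
        """One LSTM step over the concatenated input and hidden state."""
        w_i, w_f, w_o, w_g = weight_list
        z_arr = np.concatenate([x_arr, h_arr])
        i_gate = sigmoid(w_i @ z_arr)    # input gate: accumulate new information
        f_gate = sigmoid(w_f @ z_arr)    # forget gate: keep or drop the past state
        o_gate = sigmoid(w_o @ z_arr)    # output gate: expose the cell
        g_arr = np.tanh(w_g @ z_arr)     # candidate cell state
        c_arr = f_gate * c_arr + i_gate * g_arr
        h_arr = o_gate * np.tanh(c_arr)  # propagated to the final state
        return h_arr, c_arr

    rng = np.random.default_rng(0)
    weight_list = [rng.standard_normal((8, 12)) * 0.1 for _ in range(4)]
    h_arr, c_arr = lstm_step(rng.standard_normal(4), np.zeros(8), np.zeros(8), weight_list)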
draw()[source]

Draws samples from the fake distribution.

Returns: np.ndarray of samples.
get_lstm_model()[source]

getter

inference(observed_arr)[source]

Draws samples from the fake distribution.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inferenced data.
learn(grad_arr)[source]

Update this Generator by ascending its stochastic gradient.

Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of deltas or gradients.
lstm_model

getter

set_lstm_model(value)[source]

setter

switch_inferencing_mode(inferencing_mode=True)[source]

Set inferencing mode in relation to concrete regularizations.

Parameters: inferencing_mode – Whether to switch to inferencing mode.

pygan.generativemodel.nn_model module

class pygan.generativemodel.nn_model.NNModel(batch_size, nn_layer_list, learning_rate=1e-05, learning_attenuate_rate=0.1, attenuate_epoch=50, computable_loss=None, opt_params=None, verificatable_result=None, pre_learned_path_list=None, nn=None)[source]

Bases: pygan.generative_model.GenerativeModel

Neural Network as a GenerativeModel.

draw()[source]

Draws samples from the fake distribution.

Returns: np.ndarray of samples.
get_nn()[source]

getter

inference(observed_arr)[source]

Draws samples from the fake distribution.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inferenced data points.
learn(grad_arr)[source]

Update this Generator by ascending its stochastic gradient.

Parameters:
  • grad_arr – np.ndarray of gradients.
  • fix_opt_flag – If False, no optimization in this model will be done.
Returns:

np.ndarray of deltas or gradients.

nn

getter

set_nn(value)[source]

setter

switch_inferencing_mode(inferencing_mode=True)[source]

Set inferencing mode in relation to concrete regularizations.

Parameters: inferencing_mode – Whether to switch to inferencing mode.
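
The learn() contract shared by these generators is to receive grad_arr, the gradient of the discriminator’s output with respect to the generated samples, and to ascend it. The toy below illustrates that update with a linear generator; all names and the discriminator gradient are hypothetical, not pygan’s concrete models.

    import numpy as np

    rng = np.random.default_rng(0)
    weight_arr = rng.standard_normal((10, 5)) * 0.01   # toy generator parameters
    noise_arr = rng.standard_normal((20, 10))          # batch of noise
    generated_arr = noise_arr @ weight_arr             # draw(): fake samples

    # Pretend the discriminator rewards a larger first feature, so grad_arr
    # is d D(x) / d x evaluated at the generated samples.
    grad_arr = np.zeros_like(generated_arr)
    grad_arr[:, 0] = 1.0

    # Ascend the stochastic gradient: chain rule through generated = noise @ W.
    learning_rate = 1e-05
    weight_arr += learning_rate * noise_arr.T @ grad_arr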

Module contents