pygan.discriminativemodel package

Submodules

pygan.discriminativemodel.auto_encoder_model module

class pygan.discriminativemodel.auto_encoder_model.AutoEncoderModel[source]

Bases: pygan.discriminative_model.DiscriminativeModel

Auto-Encoder as a Discriminative Model that discriminates true samples from fake ones.

The Energy-based GAN framework considers the discriminator as an energy function, which assigns low energy values to real data and high energy values to fake data. The generator is a trainable parameterized function that produces samples in regions to which the discriminator assigns low energy.

References

  • Manisha, P., & Gujar, S. (2018). Generative Adversarial Networks (GANs): What it can generate and What it cannot?. arXiv preprint arXiv:1804.00140.
  • Zhao, J., Mathieu, M., & LeCun, Y. (2016). Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126.
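The energy-based view can be illustrated with a minimal NumPy sketch (not pygan's actual implementation; the auto-encoder, margin, and loss below are illustrative assumptions): the auto-encoder's reconstruction error serves as the energy, and a margin loss pushes fake-sample energy up while keeping real-sample energy low, as in Zhao et al. (2016).

```python
import numpy as np

def energy(autoencoder, x):
    # Energy = reconstruction error of the auto-encoder (Zhao et al., 2016).
    reconstructed = autoencoder(x)
    return np.mean((x - reconstructed) ** 2, axis=1)

def discriminator_loss(autoencoder, real_x, fake_x, margin=1.0):
    # Assign low energy to real data, and push the energy of fake data
    # up to at least `margin` (hinge loss).
    real_energy = energy(autoencoder, real_x)
    fake_energy = energy(autoencoder, fake_x)
    return np.mean(real_energy) + np.mean(np.maximum(0.0, margin - fake_energy))

# Toy "auto-encoder": a slightly lossy pass-through, for illustration only.
identity_ae = lambda x: x * 0.9

real = np.ones((4, 3))
fake = np.zeros((4, 3))
loss = discriminator_loss(identity_ae, real, fake)
```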
pre_learn(true_sampler, epochs=1000)[source]

Pre-learning.

Parameters:
  • true_sampler – is-a TrueSampler.
  • epochs – Number of epochs of pre-learning.
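A pre-learning loop of this shape fits the auto-encoder on true samples only, so that real data receives low energy before adversarial training begins. The sampler and training-step callables below are hypothetical stand-ins, not pygan's actual classes:

```python
import numpy as np

# Hypothetical stand-in for pygan's TrueSampler; the name and `draw`
# method are illustrative assumptions.
class UniformTrueSampler:
    def __init__(self, batch_size, dim):
        self.batch_size, self.dim = batch_size, dim

    def draw(self):
        return np.random.uniform(size=(self.batch_size, self.dim))

def pre_learn(train_step, true_sampler, epochs=1000):
    # Pre-learning: repeatedly draw true samples and fit the
    # auto-encoder on them, one batch per epoch.
    for _ in range(epochs):
        observed_arr = true_sampler.draw()
        train_step(observed_arr)

losses = []
sampler = UniformTrueSampler(batch_size=8, dim=5)
pre_learn(lambda x: losses.append(np.mean(x ** 2)), sampler, epochs=10)
```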

pygan.discriminativemodel.cnn_model module

class pygan.discriminativemodel.cnn_model.CNNModel(batch_size, layerable_cnn_list, cnn_output_graph, learning_rate=1e-05, learning_attenuate_rate=0.1, attenuate_epoch=50, computable_loss=None, opt_params=None, verificatable_result=None, cnn=None, feature_matching_layer=0)[source]

Bases: pygan.discriminative_model.DiscriminativeModel

Convolutional Neural Network as a Discriminator.

cnn

getter

feature_matching_backward(grad_arr)[source]

Back propagation in only first or intermediate layer for so-called Feature matching.

Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of deltas.
feature_matching_forward(observed_arr)[source]

Forward propagation in only first or intermediate layer for so-called Feature matching.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of outputs.
get_cnn()[source]

getter

inference(observed_arr)[source]

Infers whether the observed data points are drawn from the true distribution.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inference results.
learn(grad_arr, fix_opt_flag=False)[source]

Update this Discriminator by ascending its stochastic gradient.

Parameters:
  • grad_arr – np.ndarray of gradients.
  • fix_opt_flag – If True, this model's parameters are fixed and no optimization is performed.
Returns:

np.ndarray of deltas or gradients.
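One discriminator step of this kind can be sketched as follows; `inference` and `learn` below are hypothetical stand-ins for the methods documented here, and the gradients are those of the standard GAN objective log D(x) + log(1 - D(G(z))) (Goodfellow et al., 2014), evaluated at the discriminator's outputs:

```python
import numpy as np

def discriminator_step(inference, learn, true_arr, generated_arr, eps=1e-08):
    # Ascend the stochastic gradient of log D(x) + log(1 - D(G(z))).
    grad_true = 1.0 / (inference(true_arr) + eps)             # d/dD log D(x)
    grad_gen = -1.0 / (1.0 - inference(generated_arr) + eps)  # d/dD log(1 - D(G(z)))
    learn(grad_true)
    return learn(grad_gen, fix_opt_flag=False)

# Toy discriminator: always outputs 0.5 and echoes gradients back.
delta = discriminator_step(
    inference=lambda x: np.full(x.shape[0], 0.5),
    learn=lambda grad_arr, fix_opt_flag=False: grad_arr,
    true_arr=np.ones((4, 3)),
    generated_arr=np.zeros((4, 3)),
)
```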

set_cnn(value)[source]

setter

pygan.discriminativemodel.lstm_model module

class pygan.discriminativemodel.lstm_model.LSTMModel(lstm_model=None, batch_size=20, input_neuron_count=100, hidden_neuron_count=300, observed_activating_function=None, input_gate_activating_function=None, forget_gate_activating_function=None, output_gate_activating_function=None, hidden_activating_function=None, seq_len=10, learning_rate=1e-05, learning_attenuate_rate=0.1, attenuate_epoch=50)[source]

Bases: pygan.discriminative_model.DiscriminativeModel

LSTM as a Discriminator.

Long Short-Term Memory (LSTM) networks, as a special RNN structure, have proven stable and powerful for modeling long-range dependencies.

The key point of this structural expansion is its memory cell, which essentially acts as an accumulator of state information. Each time observed data points are given as new information and input to the LSTM's input gate, this information is accumulated in the cell if the input gate is activated. The past state of the cell can be forgotten in this process if the forget gate is on. Whether the latest cell output is propagated to the final state is further controlled by the output gate.
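The gating described above can be sketched in NumPy; the weight shapes and stacking order below are illustrative assumptions, not this class's internals:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, W, U, b):
    # One LSTM step. The memory cell `c` acts as an accumulator of
    # state information: the input gate `i` controls accumulation, the
    # forget gate `f` controls forgetting the past state, and the
    # output gate `o` controls propagation to the hidden state `h`.
    z = W @ x + U @ h_prev + b           # all four pre-activations stacked
    n = h_prev.shape[0]
    i = sigmoid(z[0:n])                  # input gate
    f = sigmoid(z[n:2 * n])              # forget gate
    o = sigmoid(z[2 * n:3 * n])          # output gate
    g = np.tanh(z[3 * n:4 * n])          # candidate cell input
    c = f * c_prev + i * g               # accumulate / forget
    h = o * np.tanh(c)                   # gated output
    return h, c

rng = np.random.default_rng(0)
n, d = 3, 2
h, c = lstm_cell_step(
    rng.standard_normal(d), np.zeros(n), np.zeros(n),
    rng.standard_normal((4 * n, d)), rng.standard_normal((4 * n, n)),
    np.zeros(4 * n),
)
```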

References

  • Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
  • Malhotra, P., Ramakrishnan, A., Anand, G., Vig, L., Agarwal, P., & Shroff, G. (2016). LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148.
  • Mogren, O. (2016). C-RNN-GAN: Continuous recurrent neural networks with adversarial training. arXiv preprint arXiv:1611.09904.
  • Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
feature_matching_backward(grad_arr)[source]

Back propagation in only first or intermediate layer for so-called Feature matching.

Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of deltas.
feature_matching_forward(observed_arr)[source]

Forward propagation in only first or intermediate layer for so-called Feature matching.

Like C-RNN-GAN (Mogren, 2016), this model uses the last layer before the output layer of this Discriminator.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of outputs.
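Feature matching trains the generator to match statistics of an intermediate discriminator layer rather than to fool the output layer directly (a technique often attributed to Salimans et al., 2016, and used by C-RNN-GAN). A minimal sketch, with `feature_forward` as a hypothetical stand-in for feature_matching_forward and the returned gradient intended for feature_matching_backward:

```python
import numpy as np

def feature_matching_grad(feature_forward, true_arr, generated_arr):
    # The generator minimizes || E[f(x)] - E[f(G(z))] ||^2, where f is
    # an intermediate discriminator layer.
    true_feat = feature_forward(true_arr).mean(axis=0)
    gen_feat = feature_forward(generated_arr).mean(axis=0)
    diff = gen_feat - true_feat
    loss = np.sum(diff ** 2)
    # Gradient of the loss w.r.t. each generated sample's features.
    grad_arr = 2.0 * diff / generated_arr.shape[0]
    return loss, grad_arr

# Identity features, for illustration: true batch of ones, fake of zeros.
loss, grad_arr = feature_matching_grad(
    lambda a: a, np.ones((2, 3)), np.zeros((2, 3))
)
```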
get_lstm_model()[source]

getter

inference(observed_arr)[source]

Infers whether the observed data points are drawn from the true distribution.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inference results.
learn(grad_arr, fix_opt_flag=False)[source]

Update this Discriminator by ascending its stochastic gradient.

Parameters:
  • grad_arr – np.ndarray of gradients.
  • fix_opt_flag – If True, this model's parameters are fixed and no optimization is performed.
Returns:

np.ndarray of deltas or gradients.

lstm_model

getter

set_lstm_model(value)[source]

setter

pygan.discriminativemodel.nn_model module

class pygan.discriminativemodel.nn_model.NNModel(batch_size, nn_layer_list, learning_rate=1e-05, learning_attenuate_rate=0.1, attenuate_epoch=50, computable_loss=None, opt_params=None, verificatable_result=None, nn=None, feature_matching_layer=0)[source]

Bases: pygan.discriminative_model.DiscriminativeModel

Neural Network as a Discriminator.

feature_matching_backward(grad_arr)[source]

Back propagation in only first or intermediate layer for so-called Feature matching.

Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of deltas.
feature_matching_forward(observed_arr)[source]

Forward propagation in only first or intermediate layer for so-called Feature matching.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of outputs.
get_nn()[source]

getter

inference(observed_arr)[source]

Infers whether the observed data points are drawn from the true distribution.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inference results.
learn(grad_arr, fix_opt_flag=False)[source]

Update this Discriminator by ascending its stochastic gradient.

Parameters:
  • grad_arr – np.ndarray of gradients.
  • fix_opt_flag – If True, this model's parameters are fixed and no optimization is performed.
Returns:

np.ndarray of deltas or gradients.

nn

getter

set_nn(value)[source]

setter

Module contents