pygan.generativemodel.conditionalgenerativemodel package

Submodules

pygan.generativemodel.conditionalgenerativemodel.conditional_convolutional_model module

class pygan.generativemodel.conditionalgenerativemodel.conditional_convolutional_model.ConditionalConvolutionalModel(deconvolution_model, batch_size, layerable_cnn_list, learning_rate=1e-05, learning_attenuate_rate=0.1, attenuate_epoch=50, computable_loss=None, opt_params=None, verificatable_result=None, cnn=None, condition_noise_sampler=None)[source]

Bases: pygan.generativemodel.conditional_generative_model.ConditionalGenerativeModel

Convolutional Neural Network as a GenerativeModel.

This model has a so-called Deconvolutional Neural Network as a Conditioner. The Conditioner is a conditional mechanism that uses previous knowledge to condition the generation, incorporating information from previously observed data points into intermediate layers of the Generator. In this way, the model can “look back” without a recurrent unit such as those used in an RNN or LSTM.

This model observes not only random noise but also any other prior information as previous knowledge, and outputs feature points. Thanks to the Conditioner, this model has the capacity to exploit whatever prior knowledge is available, provided it can be represented as a matrix or tensor.

Deconvolution in this class is a transposed convolution, which “work[s] by swapping the forward and backward passes of a convolution” (Dumoulin, V., & Visin, F., 2016, p. 20).

References

  • Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
  • Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
  • Yang, L. C., Chou, S. Y., & Yang, Y. H. (2017). MidiNet: A convolutional generative adversarial network for symbolic-domain music generation. arXiv preprint arXiv:1703.10847.
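
The following is a minimal construction sketch, not a definitive recipe: only the import path and the constructor signature shown above come from this module, while the deconvolution model, convolution layers, and condition noise sampler are placeholders that would be built from pygan/pydbm component classes whose exact types are assumptions here.

from pygan.generativemodel.conditionalgenerativemodel.conditional_convolutional_model import ConditionalConvolutionalModel

# Placeholder components (assumptions): build these from pygan/pydbm classes.
deconvolution_model = ...       # Conditioner: transposed-convolution model.
layerable_cnn_list = [...]      # Convolution layers forming the Generator's CNN.
condition_noise_sampler = ...   # Sampler of noise to be combined with the conditions.

generative_model = ConditionalConvolutionalModel(
    deconvolution_model=deconvolution_model,
    batch_size=20,
    layerable_cnn_list=layerable_cnn_list,
    learning_rate=1e-05,
    learning_attenuate_rate=0.1,
    attenuate_epoch=50,
    condition_noise_sampler=condition_noise_sampler
)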
cnn

getter

condition_noise_sampler

getter

conditional_axis

getter

deconvolution_model

getter

draw()[source]

Draws samples from the fake distribution.

Returns: np.ndarray of samples.
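
A minimal usage sketch, assuming generative_model is the instance built in the construction sketch above; the shape of the returned array depends on the configured layers and batch size.

# Draw a batch of generated samples from the fake distribution.
# `generative_model` is assumed to be constructed as sketched above.
generated_arr = generative_model.draw()
print(generated_arr.shape)  # Shape depends on batch_size and the configured CNN layers.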
epoch_counter

getter

extract_conditions()[source]

Extract samples of conditions.

Returns: np.ndarray of samples.
get_cnn()[source]

getter

get_condition_noise_sampler()[source]

getter

get_conditional_axis()[source]

getter

get_deconvolution_model()[source]

getter

get_epoch_counter()[source]

getter

inference(observed_arr)[source]

Draws samples from the true distribution.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of inferenced feature points.
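
A minimal sketch, assuming generative_model is already constructed; the input shape below is illustrative only and must match the configured convolution layers.

import numpy as np

# Illustrative input shape (assumption): (batch size, channel, height, width).
observed_arr = np.random.normal(size=(20, 1, 10, 10))
inferenced_arr = generative_model.inference(observed_arr)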
learn(grad_arr)[source]

Update this Generator by ascending its stochastic gradient.

Parameters: grad_arr – np.ndarray of gradients.
Returns: np.ndarray of delta or gradients.
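
In a GAN training loop, the controller typically propagates the Discriminator's gradients back into this Generator through learn(). A minimal sketch, where grad_arr is a placeholder for those gradients:

# `grad_arr` stands for the gradients delivered to this Generator, e.g. by
# a GAN controller after a Discriminator update (placeholder, not computed here).
grad_arr = ...
delta_arr = generative_model.learn(grad_arr)  # Returns the resulting delta/gradients.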
set_cnn(value)[source]

setter

set_condition_noise_sampler(value)[source]

setter

set_conditional_axis(value)[source]

setter

set_deconvolution_model(value)[source]

setter

set_epoch_counter(value)[source]

setter

switch_inferencing_mode(inferencing_mode=True)[source]

Set inferencing mode in relation to concrete regularizations.

Parameters: inferencing_mode – Inferencing mode or not.
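
A short sketch, assuming generative_model is already constructed: switching to inferencing mode turns off training-only regularizations (which concrete regularizations are affected depends on the configured layers) before drawing samples for evaluation.

# Switch off training-only regularizations, then draw samples for evaluation.
generative_model.switch_inferencing_mode(inferencing_mode=True)
eval_arr = generative_model.draw()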

Module contents