pygan.featurematching package

Submodules

pygan.featurematching.denoising_feature_matching module

class pygan.featurematching.denoising_feature_matching.DenoisingFeatureMatching(auto_encoder, lambda1=0.5, lambda2=0.0, lambda3=0.5, computable_loss=None, learning_rate=1e-05, learning_attenuate_rate=1.0, attenuate_epoch=50, noising_f=None)[source]

Bases: pygan.feature_matching.FeatureMatching

Value function with feature matching, which addresses the instability of GANs by specifying a new objective for the generator that prevents it from overtraining on the current discriminator (Salimans, T., et al., 2016).

“Instead of directly maximizing the output of the discriminator, the new objective requires the generator to generate data that matches the statistics of the real data, where we use the discriminator only to specify the statistics that we think are worth matching.” (Salimans, T., et al., 2016, p. 2)

While the gradient of the loss function defined by the discriminator may be a source of information mostly relevant to very local improvements, the discriminator itself is a potentially valuable source of compact descriptors of the training data. Although non-stationary, the distribution of the high-level activations of the discriminator when evaluated on data is ripe for exploitation as an additional source of knowledge about salient aspects of the data distribution. Warde-Farley, D., et al. (2016) therefore proposed tracking this distribution with a denoising auto-encoder trained on the discriminator’s hidden states when evaluated on training data.

Then Warde-Farley, D., et al. (2016) proposed an augmented training procedure for generative adversarial networks designed to address shortcomings of the original by directing the generator towards probable configurations of abstract discriminator features. They estimate and track the distribution of these features, as computed from data, with a denoising auto-encoder, and use it to propose high-level targets for the generator.
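The idea above can be sketched numerically. The following is a minimal NumPy illustration of the denoising feature-matching objective, not pygan’s implementation: `features` is a hypothetical stand-in for the discriminator’s hidden activations, and `denoise` stands in for a denoising auto-encoder r(·) already fitted on real-data features.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, W):
    # Hypothetical stand-in for the discriminator's hidden activations.
    return np.tanh(x @ W)

def denoise(h):
    # Hypothetical stand-in for a trained denoising auto-encoder r(.).
    # Here it shrinks each activation toward the batch mean, mimicking a
    # reconstruction that pulls features toward probable configurations.
    return 0.9 * h + 0.1 * h.mean(axis=0, keepdims=True)

W = rng.normal(size=(4, 8))
generated = rng.normal(size=(16, 4))   # fake samples G(z)
h = features(generated, W)             # discriminator features of G(z)
target = denoise(h)                    # auto-encoder reconstruction r(h)

# Denoising feature-matching term: drive the generator's features
# toward the auto-encoder's reconstruction of them.
loss = np.mean(np.sum((h - target) ** 2, axis=1))
```

In training, `target` is treated as a fixed high-level goal for the generator at each step, while the auto-encoder itself is updated only on features of real data.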

References

  • Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016). Improved techniques for training GANs. In Advances in neural information processing systems (pp. 2234-2242).
  • Yang, L. C., Chou, S. Y., & Yang, Y. H. (2017). MidiNet: A convolutional generative adversarial network for symbolic-domain music generation. arXiv preprint arXiv:1703.10847.
  • Warde-Farley, D., & Bengio, Y. (2016). Improving generative adversarial networks with denoising feature matching.
compute_delta(true_sampler, discriminative_model, generated_arr)[source]

Compute generator’s reward.

Parameters:
  • true_sampler – Sampler which draws samples from the true distribution.
  • discriminative_model – Discriminator which discriminates true from fake.
  • generated_arr – np.ndarray of generated data points.
Returns:

np.ndarray of gradients.
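For intuition, the kind of gradient this method returns can be sketched with the plain feature-matching objective of Salimans, T., et al. (2016): match the mean discriminator features of true and generated samples. This is a hedged NumPy sketch, not pygan’s code; `feature_map` is a hypothetical stand-in for `discriminative_model`’s hidden activations.

```python
import numpy as np

rng = np.random.default_rng(1)

def feature_map(x, W):
    # Hypothetical stand-in for the discriminator's hidden activations f(x).
    return np.tanh(x @ W)

W = rng.normal(size=(4, 8))
true_arr = rng.normal(size=(32, 4))       # drawn by the true sampler
generated_arr = rng.normal(size=(32, 4))  # drawn by the generator

f_true = feature_map(true_arr, W).mean(axis=0)
f_gen = feature_map(generated_arr, W).mean(axis=0)

# Feature-matching loss ||E f(x) - E f(G(z))||^2 and its gradient
# with respect to the generated feature means.
loss = np.sum((f_gen - f_true) ** 2)
delta = 2.0 * (f_gen - f_true)
```

`delta` plays the role of the returned np.ndarray of gradients: it points the generator’s features toward the statistics of the real data rather than toward a raw discriminator score.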

Module contents