Title:
On Adversarial Learning
Abstract:
We consider learning a generative model for general data, using imaging data for demonstration. We seek to match the model's data
distribution to the unknown distribution of the observed samples. Learning is based on minimizing the Kullback-Leibler divergence in the reverse
direction relative to the typical formulation. We demonstrate that this setup yields a learning framework similar to, but distinct from, the
original generative adversarial network (GAN), in which we estimate an explicit "critic" in terms of a likelihood ratio. This framework
addresses previously noted challenges with the original GAN, and it extends adversarial methods so that learning may be based on either an
unnormalized distribution or, as is more widely assumed, observed samples.
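A brief sketch of the setup the abstract describes (the symbols $q$, $p_\theta$, and the classifier construction below are illustrative assumptions, not notation taken from the paper): maximum-likelihood training minimizes the forward divergence $\mathrm{KL}(q \,\|\, p_\theta)$ between the data distribution $q$ and the model $p_\theta$, whereas the reverse direction is

\[
\min_\theta \, \mathrm{KL}(p_\theta \,\|\, q)
= \min_\theta \, \mathbb{E}_{x \sim p_\theta}\!\left[\log \frac{p_\theta(x)}{q(x)}\right].
\]

Since $q$ is unknown, the log-ratio inside the expectation cannot be evaluated directly; by the standard density-ratio trick, a classifier trained to distinguish data samples from model samples has a logit that approximates $\log\!\big(q(x)/p_\theta(x)\big)$, which is one way an explicit likelihood-ratio "critic" can arise in this kind of framework.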