| author | Soumith Chintala <soumith@gmail.com> | 2015-11-16 19:08:10 +0300 |
|---|---|---|
| committer | Soumith Chintala <soumith@gmail.com> | 2015-11-16 19:08:10 +0300 |
| commit | 613a1e0f42ad55d1180e6e6e59959af3e56d2885 (patch) | |
| tree | 02cf16b20d1ce1cb7144454af8274d7e69afc6ef | |
| parent | 6f119395fba8dc1c487e4cf1d75d728d13e7436b (diff) | |
Update 2015-11-13-gan.md
| -rw-r--r-- | blog/_posts/2015-11-13-gan.md | 2 |
|---|---|---|

1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/blog/_posts/2015-11-13-gan.md b/blog/_posts/2015-11-13-gan.md
index bf400e7..aa64056 100644
--- a/blog/_posts/2015-11-13-gan.md
+++ b/blog/_posts/2015-11-13-gan.md
@@ -83,7 +83,7 @@ local samples = model_G:forward(noise_inputs)
 In principle, the GAN optimization game is simple. We use binary cross entropy to optimize the parameters in the discriminator. Afterwards we use binary cross entropy to optimize the generator to fool the discriminator. That said, you often find yourself left with not very convincing outputs from generator:

-<p align='center'><img src="https://raw.githubusercontent.com/torch/torch.github.io/master/blog/_posts/images/bad_examples.png"></p>
+<p align='center'><img width="75%" src="https://raw.githubusercontent.com/torch/torch.github.io/master/blog/_posts/images/bad_examples.png"></p>

 This gibberish is typical for a generator trained without proper care!
```
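The patched paragraph describes the two binary cross entropy objectives that drive GAN training: the discriminator is pushed to label real samples 1 and generated samples 0, and the generator is then pushed to make the discriminator output 1 on its fakes. A minimal sketch of those two losses, in plain Python rather than the blog post's Torch/Lua (the discriminator scores below are made-up numbers for illustration, not values from the post):

```python
import math

def bce(preds, targets):
    """Binary cross entropy averaged over a batch of probabilities in (0, 1)."""
    eps = 1e-12  # guard against log(0)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(preds, targets)) / len(preds)

# Hypothetical discriminator outputs on a tiny batch: D(x) for real samples
# and D(G(z)) for generated samples.
d_real = [0.90, 0.80, 0.95]
d_fake = [0.10, 0.20, 0.05]

# Discriminator step: real samples should score 1, fakes should score 0.
loss_D = bce(d_real, [1, 1, 1]) + bce(d_fake, [0, 0, 0])

# Generator step: fool the discriminator, i.e. push D(G(z)) toward 1.
loss_G = bce(d_fake, [1, 1, 1])
```

In a real training loop these losses are backpropagated alternately: `loss_D` updates only the discriminator's parameters, then `loss_G` updates only the generator's, with the discriminator held fixed.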