Created April 17, 2018 02:59
Text generation models typically learn the distribution of a body of text and then use this learned information to generate new text. By tuning various hyperparameters, it is possible to control the tradeoff between how far the generated text deviates from the learned distribution and how robust it is.
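A common hyperparameter of this kind is the sampling temperature. The following is a minimal sketch of temperature-scaled sampling (my own illustration; the function name and interface are assumptions, not part of the model described here). Low temperature concentrates sampling on the highest-probability tokens, staying close to the learned distribution; high temperature increases deviation and diversity.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample a token index from raw logits after temperature scaling."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exp for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling over the resulting categorical distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

With a temperature well below 1, the sampler almost always returns the argmax token; as the temperature grows, the distribution flattens toward uniform.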
I present a way to extend this ability of generative models to learn multiple distributions, in order to generate a style of text that is distinct from both source distributions. I also discuss ways to regularize the presented model using Generative Adversarial Networks (GANs) and to govern the contribution of each particular style of text.
Models that generate novel texts are hard to evaluate, so I also present a way to evaluate the texts generated by the model.
Recurrent Neural Networks (RNNs) have been shown to perform well on sequences with long-term dependencies. I employ RNNs as the text generation model and adversarial networks as the regularization mechanism.
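As a toy illustration of blending two learned distributions, the sketch below trains a bigram character model on each of two corpora (a deliberately simple stand-in for the RNNs described above) and mixes their next-character distributions with a weight `alpha`. The names, the bigram substitution, and the linear mixing weight are all my own assumptions used to make the idea concrete, not the mechanism of the actual model.

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Count next-character frequencies per character (stand-in for an RNN)."""
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def next_char_probs(model, ch):
    """Normalize the counts for `ch` into a probability distribution."""
    counts = model.get(ch)
    if not counts:
        return {}
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def mixed_next_char(model_a, model_b, ch, alpha, rng):
    """Sample the next character from alpha * P_a + (1 - alpha) * P_b."""
    mixed = Counter()
    for c, p in next_char_probs(model_a, ch).items():
        mixed[c] += alpha * p
    for c, p in next_char_probs(model_b, ch).items():
        mixed[c] += (1 - alpha) * p
    chars = list(mixed)
    weights = [mixed[c] for c in chars]
    return rng.choices(chars, weights=weights, k=1)[0]
```

Setting `alpha` to 1 or 0 reproduces one source style exclusively; intermediate values govern how much each style contributes to the generated text.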