Paper ID: 2203.12106
An Empirical Study on Learning and Improving the Search Objective for Unsupervised Paraphrasing
Weikai Steven Lu
Research in unsupervised text generation has been gaining attention in recent years. One recent approach is local search towards a heuristically defined objective, which scores language fluency, semantic preservation, and other task-specific attributes. Search in the sentence space is realized by word-level edit operations, including insertion, replacement, and deletion. However, such an objective function is manually designed from multiple components. Although previous work has shown that maximizing this objective yields good performance in terms of the true measures of success (i.e., BLEU and iBLEU), the objective landscape is non-smooth and noisy, posing challenges for optimization. In this dissertation, we address the problem of smoothing the noise in the heuristic search objective by learning to model the search dynamics. The learned model is then combined with the original objective function to guide the search in a bootstrapping fashion. Experimental results show that the learned models, combined with the original search objective, do provide a smoothing effect, improving search performance by a small margin.
Submitted: Mar 23, 2022
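To make the setup described in the abstract concrete, below is a minimal sketch of local search over sentences with word-level edits, scored by a heuristic objective mixed with a learned model. The scorer stubs, the linear mixing rule with weight `alpha`, the greedy acceptance rule, and all function names are illustrative assumptions, not the dissertation's actual formulation.

```python
import random

# Placeholder scorers standing in for the objective's components
# (language fluency, semantic preservation). A real system would use a
# language model and a sentence encoder; these toy stubs only keep the
# sketch runnable and are not taken from the paper.
def fluency_score(words):
    return 1.0 / (1.0 + abs(len(words) - 10))  # toy length-based proxy

def semantic_score(words, source_words):
    overlap = len(set(words) & set(source_words))
    return overlap / max(len(set(source_words)), 1)

def heuristic_objective(words, source_words):
    return fluency_score(words) * semantic_score(words, source_words)

def combined_objective(words, source_words, learned_score, alpha=0.5):
    # Mix the noisy heuristic with a learned model of the search dynamics;
    # the mixing form and alpha are assumptions for illustration only.
    return (1 - alpha) * heuristic_objective(words, source_words) \
        + alpha * learned_score(words, source_words)

def propose_edit(words, vocab):
    # Word-level edit operations: insertion, replacement, deletion.
    op = random.choice(["insert", "replace", "delete"])
    if op == "insert":
        pos = random.randrange(len(words) + 1)
        return words[:pos] + [random.choice(vocab)] + words[pos:]
    pos = random.randrange(len(words))
    if op == "replace":
        return words[:pos] + [random.choice(vocab)] + words[pos + 1:]
    return words[:pos] + words[pos + 1:] if len(words) > 1 else list(words)

def local_search(source, learned_score, vocab, steps=200):
    source_words = source.split()
    current = list(source_words)
    best = combined_objective(current, source_words, learned_score)
    for _ in range(steps):
        candidate = propose_edit(current, vocab)
        score = combined_objective(candidate, source_words, learned_score)
        if score > best:  # greedy hill climbing; annealed acceptance is also common
            current, best = candidate, score
    return " ".join(current)

# Example call with a stand-in for the learned model.
paraphrase = local_search(
    "the quick brown fox jumps over the lazy dog",
    learned_score=lambda w, s: 0.0,
    vocab=["fast", "leaps", "hound", "swift", "canine"],
)
```

In the paper's bootstrapping setting, the learned model would be trained on traces of earlier search runs and then plugged in where `learned_score` appears above, smoothing the heuristic landscape that the edit-based search climbs.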