Paper ID: 2406.11338

Fine-grained Controllable Text Generation through In-context Learning with Feedback

Sarubi Thillainathan, Alexander Koller

We present a method for rewriting an input sentence to match specific values of nontrivial linguistic features, such as dependency depth. In contrast to earlier work, our method uses in-context learning rather than finetuning, making it applicable in use cases where data is sparse. We show that our model performs accurate rewrites and matches the state of the art on rewriting sentences to a specified school grade level.
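The abstract's core idea, rewriting until a measured linguistic feature hits a target value, can be sketched as a generate-measure-feedback loop. The sketch below is an illustrative assumption, not the paper's actual prompt format or pipeline: `propose` stands in for an in-context LLM call and `parse` for any dependency parser returning head indices; both names are hypothetical.

```python
# Hypothetical sketch of a rewrite loop with feedback on dependency depth.
# `propose` and `parse` are placeholder hooks, not the paper's components.

def dependency_depth(heads):
    """Depth of a dependency tree given head indices (root: heads[i] == i)."""
    def depth(i):
        d = 0
        while heads[i] != i:  # walk up to the root
            i = heads[i]
            d += 1
        return d
    return max(depth(i) for i in range(len(heads)))

def rewrite_with_feedback(sentence, target_depth, propose, parse, max_rounds=5):
    """Ask the model for rewrites until the parsed depth matches the target.

    propose(sentence, target_depth, feedback) stands in for an LLM call made
    with in-context examples; parse(text) returns head indices per token.
    """
    feedback = None
    candidate = sentence
    for _ in range(max_rounds):
        candidate = propose(sentence, target_depth, feedback)
        got = dependency_depth(parse(candidate))
        if got == target_depth:
            return candidate
        # Feed the mismatch back into the next prompt.
        feedback = f"Previous rewrite had depth {got}; target is {target_depth}."
    return candidate
```

The same loop generalizes to any measurable feature (e.g. grade level) by swapping the scoring function.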

Submitted: Jun 17, 2024