From c83960a2dbdda57d2137b8d62364b50d13dd5897 Mon Sep 17 00:00:00 2001
From: Manuel Romero
Date: Mon, 21 Feb 2022 23:04:32 +0100
Subject: [PATCH] Fix typo

---
 examples/bart/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/bart/README.md b/examples/bart/README.md
index 4050a724e..1aa14950f 100644
--- a/examples/bart/README.md
+++ b/examples/bart/README.md
@@ -4,7 +4,7 @@
 
 ## Introduction
 
-BART is sequence-to-sequence model trained with denoising as pretraining objective. We show that this pretraining objective is more generic and show that we can match [RoBERTa](../roberta) results on SQuAD and GLUE and gain state-of-the-art results on summarization (XSum, CNN dataset), long form generative question answering (ELI5) and dialog response genration (ConvAI2). See the associated paper for more details.
+BART is sequence-to-sequence model trained with denoising as pretraining objective. We show that this pretraining objective is more generic and show that we can match [RoBERTa](../roberta) results on SQuAD and GLUE and gain state-of-the-art results on summarization (XSum, CNN dataset), long form generative question answering (ELI5) and dialog response generation (ConvAI2). See the associated paper for more details.
 
 ## Pre-trained models
 