[1409.3215] Sequence to Sequence Learning with Neural Networks
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Foundational work on sequence-to-sequence neural networks that enabled modern neural machine translation and large language models; important for understanding the architectural innovations underlying current AI systems and their capabilities.
Paper Details
Metadata
Abstract
Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
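The source-reversal trick described in the abstract is a pure preprocessing step. A minimal sketch (the function name and the toy sentence pair are invented for illustration, not taken from the paper's code):

```python
def reverse_source(pairs):
    """Reverse each source token sequence while leaving targets intact.

    This mirrors the preprocessing trick from the abstract: reversing the
    source introduces many short-term dependencies between the beginning of
    the source sentence and the beginning of the target sentence, which the
    authors found made the optimization problem easier.
    """
    return [(src[::-1], tgt) for src, tgt in pairs]

# Hypothetical English-to-French training pair.
pairs = [(["I", "am", "happy"], ["je", "suis", "content"])]
reversed_pairs = reverse_source(pairs)
# The encoder now reads "happy am I", so "I" is processed immediately
# before the decoder must emit "je".
```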
Summary
This paper introduces a sequence-to-sequence learning approach using stacked LSTMs to map variable-length input sequences to variable-length output sequences. The method employs one LSTM to encode the input sequence into a fixed-dimensional vector and another LSTM to decode the target sequence from this representation. Tested on English-to-French translation using the WMT'14 dataset, the approach achieves a BLEU score of 34.8, outperforming phrase-based statistical machine translation systems (33.3 BLEU). The authors demonstrate that reversing source sentence word order improves performance and that the model learns meaningful representations invariant to linguistic variations like active/passive voice.
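As a rough illustration of the encode-then-decode pattern the summary describes (not the paper's actual multilayer LSTM model), the sketch below folds a variable-length input into a fixed-size vector and emits output tokens conditioned only on that vector. The embedding scheme and greedy decoding rule are invented for the example:

```python
# Toy sketch of the encoder-decoder pattern: the "encoder" compresses a
# variable-length token sequence into one fixed-dimensional vector, and the
# "decoder" generates tokens from that vector alone. In the paper both roles
# are played by deep LSTMs; here they are trivial stand-ins.

DIM = 4  # fixed dimensionality of the sentence vector

def embed(token):
    # Deterministic toy embedding: spread the token's character codes
    # over DIM slots.
    vec = [0.0] * DIM
    for i, ch in enumerate(token):
        vec[i % DIM] += ord(ch) / 100.0
    return vec

def encode(tokens):
    # "Encoder": sum token embeddings into one fixed-size state vector,
    # standing in for the LSTM's final hidden state.
    state = [0.0] * DIM
    for tok in tokens:
        state = [s + e for s, e in zip(state, embed(tok))]
    return state

def decode(state, vocab, max_len=5):
    # "Decoder": greedily pick the vocabulary token whose embedding has the
    # largest dot product with the current state, then subtract it out.
    out = []
    for _ in range(max_len):
        scores = {w: sum(a * b for a, b in zip(state, embed(w)))
                  for w in vocab}
        best = max(scores, key=scores.get)
        if scores[best] <= 0:
            break
        out.append(best)
        state = [s - e for s, e in zip(state, embed(best))]
    return out

vec = encode(["hello", "world"])
assert len(vec) == DIM  # any input length maps to the same fixed size
```

The key property being illustrated is that the interface between the two halves is a single fixed-dimensional vector, regardless of the input or output length.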
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Ilya Sutskever | Person | 26.0 |
Cached Content Preview
Sequence to Sequence Learning with Neural Networks
Ilya Sutskever, Google (ilyasu@google.com)
Oriol Vinyals, Google (vinyals@google.com)
Quoc V. Le, Google (qvl@google.com)
1 Introduction
Deep Neural Networks (DNNs) are extremely powerful machine learning
models that achieve excellent performance on difficult problems such
as speech recognition [13, 7] and visual object
recognition [19, 6, 21, 20]. DNNs are
powerful because they can perform arbitrary parallel computation for a
modest number of steps. A surprising example of the power of DNNs is
their ability to sort N N-bit numbers using only 2
hidden layers of quadratic size [27]. So, while neural
networks are related to conventional statistical models, they learn an
intricate computation. Furthermore, large DNNs can be trained with
supervised backpropagation whenever the labeled training set has
enough information to specify the network’s parameters. Thus, if
there exists a parameter setting of a large DNN that achieves good
results (for example, because humans can solve the task very rapidly),
supervised backpropagation will find these parameters and solve the
problem.
Despite their flexibility and power,
... (truncated, 36 KB total)