seq2seq
Published March 1, 2019 | 21 min

    A sequence-to-sequence (or seq2seq) model is a neural architecture used for translation (and other tasks) which consists of an encoder and a decoder.

    The encoder/decoder architecture has obvious promise for machine translation, and has been successfully applied this way. Compressing an input into a small number of hidden nodes, which can then be decoded back into a matching output string, forces the model to learn an efficient representation of the essence of the input.
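    The idea above can be sketched in a few lines. The following is a minimal, untrained NumPy toy (all dimensions, parameter names, and the greedy decoding loop are illustrative assumptions, not anything from the episode): an encoder RNN folds the whole input sequence into one fixed-size hidden vector, and a decoder RNN unrolls that vector back into an output sequence, feeding each predicted token in as the next input.

```python
import numpy as np

# Hypothetical toy sizes (not from the episode): a 10-token vocabulary
# and an 8-dimensional hidden state.
VOCAB, HIDDEN = 10, 8
rng = np.random.default_rng(0)

# Randomly initialized (untrained) parameters for a minimal RNN.
E = rng.standard_normal((VOCAB, HIDDEN)) * 0.1      # token embeddings
W_enc = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1  # encoder recurrence
W_dec = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1  # decoder recurrence
W_out = rng.standard_normal((HIDDEN, VOCAB)) * 0.1   # hidden -> token logits

def encode(tokens):
    """Fold an input token sequence into one fixed-size hidden vector."""
    h = np.zeros(HIDDEN)
    for t in tokens:
        h = np.tanh(E[t] + W_enc @ h)
    return h  # the "essence" of the input, shape (HIDDEN,)

def decode(h, max_len=5, start_token=0):
    """Greedily emit tokens, feeding each prediction back as input."""
    out, tok = [], start_token
    for _ in range(max_len):
        h = np.tanh(E[tok] + W_dec @ h)
        tok = int(np.argmax(h @ W_out))  # pick the highest-scoring token
        out.append(tok)
    return out

context = encode([1, 2, 3])   # whole input compressed to 8 numbers
translation = decode(context)  # output sequence generated from that vector
```

A trained model would learn these weights from parallel text; the point of the sketch is only the shape of the computation, with the single `context` vector as the bottleneck between the two halves.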

    In addition to translation, seq2seq models have been applied to a number of other tasks, such as text summarization and image captioning.

    Related Links
