Showing 1–20 of 39 results
Date / Name

Oct 12, 2019 / vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations
Apr 25, 2022 / On-demand compute reduction with stochastic wav2vec 2.0
Nov 10, 2019 / Effectiveness of self-supervised pre-training for speech recognition
Dec 14, 2022 / Efficient Self-supervised Learning with Contextualized Target Representations for Vision, Speech and Language
Apr 11, 2019 / wav2vec: Unsupervised Pre-training for Speech Recognition
Jun 24, 2020 / Unsupervised Cross-lingual Representation Learning for Speech Recognition
Jun 20, 2020 / wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations
Sep 28, 2018 / Adaptive Input Representations for Neural Language Modeling
Jan 29, 2019 / Pay Less Attention with Lightweight and Dynamic Convolutions
Oct 22, 2020 / Self-training and Pre-training are Complementary for Speech Recognition
May 24, 2021 / Unsupervised Speech Recognition
Mar 19, 2019 / Cloze-driven Pretraining of Self-attention Networks
Mar 22, 2019 / Pre-trained Language Model Representations for Language Generation
Apr 1, 2019 / fairseq: A Fast, Extensible Toolkit for Sequence Modeling
Jul 15, 2019 / Facebook FAIR's WMT19 News Translation Task Submission
Nov 23, 2020 / The Zero Resource Speech Benchmark 2021: Metrics and baselines for unsupervised spoken language modeling
Feb 7, 2022 / data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language
Feb 10, 2023 / AV-data2vec: Self-supervised Learning of Audio-Visual Speech Representations with Contextualized Target Representations
Jul 13, 2022 / Masked Autoencoders that Listen
Dec 30, 2020 / Reservoir Transformers