Microsoft Creates VALL-E, AI-Driven Text-to-Speech Synthesis (TTS)

Microsoft-sponsored research has led to the development of VALL-E, an AI-driven text-to-speech synthesis (TTS) model that can apparently replicate an individual’s spoken voice based on a mere three seconds of recorded audio.

We introduce a language modeling approach for text-to-speech synthesis (TTS). Specifically, we train a neural codec language model (called VALL-E) using discrete codes derived from an off-the-shelf neural audio codec model, and regard TTS as a conditional language modeling task rather than continuous signal regression as in previous work. During the pre-training stage, we scale up the TTS training data to 60K hours of English speech, which is hundreds of times larger than existing systems. VALL-E exhibits in-context learning capabilities and can be used to synthesize high-quality personalized speech with only a 3-second enrolled recording of an unseen speaker as an acoustic prompt. Experimental results show that VALL-E significantly outperforms the state-of-the-art zero-shot TTS system in terms of speech naturalness and speaker similarity. In addition, we find that VALL-E can preserve the speaker's emotion and the acoustic environment of the acoustic prompt in synthesis.

Model Overview

An overview of VALL-E. Unlike previous pipelines (e.g., phoneme → mel-spectrogram → waveform), the VALL-E pipeline is phoneme → discrete code → waveform. VALL-E generates discrete audio codec codes based on phoneme and acoustic code prompts, corresponding to the target content and the speaker's voice. VALL-E directly enables various speech synthesis applications, such as zero-shot TTS, speech editing, and content creation combined with other generative AI models like GPT-3.
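To make the pipeline concrete, here is a minimal, purely illustrative sketch of the conditional language modeling idea in Python. Everything in it is a toy stand-in: the phonemizer, the "model", the codec vocabulary size, and the enrollment codes are all hypothetical placeholders, not VALL-E's actual components (which are neural networks trained on 60K hours of speech).

```python
import random

# Toy constants: a real neural audio codec (e.g. one similar to EnCodec)
# quantizes audio into discrete codes from a fixed vocabulary.
CODEC_VOCAB = 1024

def phonemize(text):
    """Toy phonemizer: one fake 'phoneme' token per character.
    A real system would use a grapheme-to-phoneme converter."""
    return [ord(c) % 64 for c in text.lower() if c.isalnum()]

def next_code(phonemes, prompt_codes, generated):
    """Toy conditional 'language model'. In VALL-E this is a Transformer
    that predicts the next discrete codec code conditioned on the phoneme
    sequence, the acoustic prompt, and the codes generated so far."""
    seed = sum(phonemes) + sum(prompt_codes) + sum(generated)
    return random.Random(seed).randrange(CODEC_VOCAB)

def synthesize(text, prompt_codes, n_codes=20):
    """Pipeline sketch: phoneme -> discrete codec codes (autoregressively).
    The final codec-decoder step (codes -> waveform) is elided here;
    we just return the generated code sequence."""
    phonemes = phonemize(text)
    codes = []
    for _ in range(n_codes):
        codes.append(next_code(phonemes, prompt_codes, codes))
    return codes

# A 3-second enrollment clip would be encoded into codes like these
# (values are made up for illustration):
enrollment = [17, 503, 88, 941, 256]
codes = synthesize("hello world", enrollment)
print(len(codes))  # 20 discrete codec codes, ready for a codec decoder
```

The point of the sketch is the shape of the computation: because both the prompt and the output are discrete token sequences, "speaking in a new voice" reduces to conditioning a language model on a short token prefix, which is why a 3-second enrollment can suffice.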

You can listen to audio samples of VALL-E’s speech synthesis on the demonstration page.

Now all that remains is to teach VALL-E how to sing the hit single Anything You Can Do ([AI] Can Do Better), written by ChatGPT with musical accompaniment by SingSong.


Another company making headlines for its eerily realistic text-to-speech synthesis is ElevenLabs. Like VALL-E, ElevenLabs’ software can clone existing voices based on a short audio sample. You can try ElevenLabs TTS for free here using their demo voices. Custom voices require a paid subscription.


It is indeed uncannily natural. I tried the first two sentences of my short story, "We'll Return, After this Message", with the "Antoni" (American, modulated) voice and was amazed at how it stressed all the right places. Here is the text and generated audio.

When the foundations of everything you think you know shift beneath you, you can feel it. The night Art Crane and I found the Message, it felt like that moment at the onset of an earthquake when you realize the floor is really moving.

On the pricing side, ElevenLabs has just introduced a US$5/month Starter level with 30,000 characters per month and 10 custom voices.