Alibaba Cloud Model Studio: Real-Time Speech Synthesis - CosyVoice

Last updated: Dec 15, 2025

Speech synthesis, also known as text-to-speech (TTS), converts text into natural-sounding speech. Built on machine learning models trained on large speech corpora, it learns the prosody, intonation, and pronunciation rules of a language and generates lifelike speech from text input.

Core Features

  • Generates high-fidelity speech in real time, with natural voices in Chinese, English, and other languages

  • Provides voice cloning for quickly creating personalized custom voices

  • Supports streaming input and output, with low-latency responses for real-time interactive scenarios

  • Offers adjustable speech rate, pitch, volume, and bitrate for fine-grained control over delivery (see the sketch after this list)

  • Compatible with mainstream audio formats, with output sample rates up to 48 kHz
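
A minimal sketch of these controls is below. The volume, speech_rate, and pitch_rate parameter names and the AudioFormat value follow the DashScope Python SDK's tts_v2 interface, but treat the exact names, values, and ranges as assumptions to verify against the API reference:

from dashscope.audio.tts_v2 import SpeechSynthesizer, AudioFormat

# Hypothetical settings; verify parameter names and ranges in the API reference.
synthesizer = SpeechSynthesizer(
    model="cosyvoice-v3-flash",
    voice="longanyang",
    format=AudioFormat.WAV_48000HZ_MONO_16BIT,  # 48 kHz WAV output
    volume=50,         # volume, assumed range 0-100
    speech_rate=1.0,   # speech-rate multiplier, assumed range 0.5-2
    pitch_rate=1.0,    # pitch multiplier, assumed range 0.5-2
)
audio = synthesizer.call("今天天气怎么样?")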

Availability

  • Supported regions: China (Beijing) only; an API Key for the China (Beijing) region is required

  • Supported models: cosyvoice-v3-plus, cosyvoice-v3-flash, cosyvoice-v2

  • Supported voices: see the CosyVoice voice list

Model Selection

| Scenario | Recommended models | Rationale | Notes |
| --- | --- | --- | --- |
| Brand voice customization / personalized voice cloning services | cosyvoice-v3-plus | Strongest voice cloning plus 48 kHz high-quality output, for a lifelike brand voice | Higher cost ($0.286706 per 10K characters); recommended for core scenarios |
| Intelligent customer service / voice assistants | cosyvoice-v3-flash | Lowest cost ($0.14335 per 10K characters); streaming interaction, emotional expression, fast responses, strong cost-performance | - |
| Dialect broadcasting systems | cosyvoice-v3-flash, cosyvoice-v3-plus | Supports Northeastern Mandarin, Minnan, and other dialects; suited to regional content broadcasts | cosyvoice-v3-plus costs more ($0.286706 per 10K characters) |
| Educational applications (including formula narration) | cosyvoice-v2, cosyvoice-v3-flash, cosyvoice-v3-plus | Supports LaTeX formula-to-speech; suited to math and science lessons | cosyvoice-v2 and cosyvoice-v3-plus cost more ($0.286706 per 10K characters) |
| Structured voice broadcasts (news / announcements) | cosyvoice-v3-plus, cosyvoice-v3-flash, cosyvoice-v2 | SSML control over speech rate, pauses, pronunciation, and more makes broadcasts sound professional | Requires additional development of SSML generation logic; emotion settings are not supported |
| Precise speech-text alignment (subtitle generation, lesson replay, dictation practice) | cosyvoice-v3-flash, cosyvoice-v3-plus, cosyvoice-v2 | Timestamp output keeps synthesized speech in sync with the source text | Timestamps must be explicitly enabled; off by default |
| Multilingual products for overseas markets | cosyvoice-v3-flash, cosyvoice-v3-plus | Multilingual support | Sambert does not support streaming input and costs more than cosyvoice-v3-flash |

For more details, see the Model Feature Comparison below.
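
As a quick sanity check on the costs above, here is a minimal estimate using the listed prices (USD per 10,000 characters):

# Prices from the model selection table, in USD per 10,000 characters.
PRICE_PER_10K_CHARS = {
    "cosyvoice-v3-plus": 0.286706,
    "cosyvoice-v3-flash": 0.14335,
    "cosyvoice-v2": 0.286706,
}

def estimate_cost_usd(model: str, num_chars: int) -> float:
    """Estimated cost of synthesizing num_chars characters."""
    return PRICE_PER_10K_CHARS[model] / 10_000 * num_chars

# Example: 1,000,000 characters on cosyvoice-v3-flash is about $14.34
print(round(estimate_cost_usd("cosyvoice-v3-flash", 1_000_000), 2))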

Quick Start

The following are code samples for calling the API. For more code samples covering common scenarios, see GitHub.

You must first obtain an API Key and configure it as an environment variable. To call through the SDK, you must also install the DashScope SDK.
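
A quick way to verify the setup before running the examples below (DASHSCOPE_API_KEY is the environment variable the SDK reads by default; install the SDK with pip install dashscope):

import os

# The DashScope SDK reads the API Key from this environment variable by default.
assert os.getenv("DASHSCOPE_API_KEY"), \
    "DASHSCOPE_API_KEY is not set; export it or assign dashscope.api_key in code."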

CosyVoice

Save synthesized audio to a file

Python

# coding=utf-8

import dashscope
from dashscope.audio.tts_v2 import *

# If the API Key is not set as an environment variable, replace your-api-key with your own API Key:
# dashscope.api_key = "your-api-key"

# Model
# Each model version requires voices of the matching version:
# cosyvoice-v3-flash / cosyvoice-v3-plus: use voices such as longanyang.
# cosyvoice-v2: use voices such as longxiaochun_v2.
model = "cosyvoice-v3-flash"
# Voice
voice = "longanyang"

# Instantiate SpeechSynthesizer, passing request parameters such as the model and voice to the constructor
synthesizer = SpeechSynthesizer(model=model, voice=voice)
# Send the text to synthesize and receive the binary audio
audio = synthesizer.call("今天天气怎么样?")
# The first text sent establishes the WebSocket connection, so the first-package delay includes connection setup time
print('[Metric] requestId: {}, first-package delay: {} ms'.format(
    synthesizer.get_last_request_id(),
    synthesizer.get_first_package_delay()))

# Save the audio to a local file
with open('output.mp3', 'wb') as f:
    f.write(audio)

Java

import com.alibaba.dashscope.audio.ttsv2.SpeechSynthesisParam;
import com.alibaba.dashscope.audio.ttsv2.SpeechSynthesizer;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class Main {
    // Model
    // Each model version requires voices of the matching version:
    // cosyvoice-v3-flash / cosyvoice-v3-plus: use voices such as longanyang.
    // cosyvoice-v2: use voices such as longxiaochun_v2.
    private static String model = "cosyvoice-v3-flash";
    // Voice
    private static String voice = "longanyang";

    public static void saveAudioToFile() {
        // Request parameters
        SpeechSynthesisParam param =
                SpeechSynthesisParam.builder()
                        // If the API Key is not set as an environment variable, uncomment the next line and replace your-api-key with your own API Key
                        // .apiKey("your-api-key")
                        .model(model) // model
                        .voice(voice) // voice
                        .build();

        // Synchronous mode: no callback (second argument is null)
        SpeechSynthesizer synthesizer = new SpeechSynthesizer(param, null);
        ByteBuffer audio = null;
        try {
            // Block until the audio is returned
            audio = synthesizer.call("今天天气怎么样?");
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            // Close the WebSocket connection when the task finishes
            synthesizer.getDuplexApi().close(1000, "bye");
        }
        if (audio != null) {
            // Save the audio data to the local file "output.mp3"
            File file = new File("output.mp3");
            // The first text sent establishes the WebSocket connection, so the first-package delay includes connection setup time
            System.out.println(
                    "[Metric] requestId: "
                            + synthesizer.getLastRequestId()
                            + ", first-package delay (ms): "
                            + synthesizer.getFirstPackageDelay());
            try (FileOutputStream fos = new FileOutputStream(file)) {
                fos.write(audio.array());
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }

    public static void main(String[] args) {
        saveAudioToFile();
        System.exit(0);
    }
}

Convert LLM-generated text to speech in real time and play it through the speaker

The following code shows how to play, on a local device, speech synthesized from text returned in real time by the Qwen large language model (qwen-turbo).

Python

Before running the Python example, install the third-party audio playback library with pip.

# coding=utf-8
# Installation instructions for pyaudio:
# APPLE Mac OS X
#   brew install portaudio
#   pip install pyaudio
# Debian/Ubuntu
#   sudo apt-get install python-pyaudio python3-pyaudio
#   or
#   pip install pyaudio
# CentOS
#   sudo yum install -y portaudio portaudio-devel && pip install pyaudio
# Microsoft Windows
#   python -m pip install pyaudio

import pyaudio
import dashscope
from dashscope.audio.tts_v2 import *


from http import HTTPStatus
from dashscope import Generation

# If the API Key is not set as an environment variable, uncomment the next line and replace apiKey with your own API Key:
# dashscope.api_key = "apiKey"

# Each model version requires voices of the matching version:
# cosyvoice-v3-flash / cosyvoice-v3-plus: use voices such as longanyang.
# cosyvoice-v2: use voices such as longxiaochun_v2.
model = "cosyvoice-v3-flash"
voice = "longanyang"


class Callback(ResultCallback):
    _player = None
    _stream = None

    def on_open(self):
        print("websocket is open.")
        self._player = pyaudio.PyAudio()
        self._stream = self._player.open(
            format=pyaudio.paInt16, channels=1, rate=22050, output=True
        )

    def on_complete(self):
        print("speech synthesis task complete successfully.")

    def on_error(self, message: str):
        print(f"speech synthesis task failed, {message}")

    def on_close(self):
        print("websocket is closed.")
        # stop player
        self._stream.stop_stream()
        self._stream.close()
        self._player.terminate()

    def on_event(self, message):
        print(f"recv speech synthsis message {message}")

    def on_data(self, data: bytes) -> None:
        print("audio result length:", len(data))
        self._stream.write(data)


def synthesizer_with_llm():
    callback = Callback()
    synthesizer = SpeechSynthesizer(
        model=model,
        voice=voice,
        format=AudioFormat.PCM_22050HZ_MONO_16BIT,
        callback=callback,
    )

    messages = [{"role": "user", "content": "请介绍一下你自己"}]
    responses = Generation.call(
        model="qwen-turbo",
        messages=messages,
        result_format="message",  # set result format as 'message'
        stream=True,  # enable stream output
        incremental_output=True,  # enable incremental output 
    )
    for response in responses:
        if response.status_code == HTTPStatus.OK:
            print(response.output.choices[0]["message"]["content"], end="")
            synthesizer.streaming_call(response.output.choices[0]["message"]["content"])
        else:
            print(
                "Request id: %s, Status code: %s, error code: %s, error message: %s"
                % (
                    response.request_id,
                    response.status_code,
                    response.code,
                    response.message,
                )
            )
    synthesizer.streaming_complete()
    print('requestId: ', synthesizer.get_last_request_id())


if __name__ == "__main__":
    synthesizer_with_llm()
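
To capture the streamed audio instead of playing it, a minimal variant of the callback above can write the PCM chunks to a WAV file with the standard-library wave module. This is a sketch; the channel count, sample width, and rate must match the AudioFormat requested from the synthesizer:

import wave
from dashscope.audio.tts_v2 import ResultCallback

class FileCallback(ResultCallback):
    def on_open(self):
        # Must match AudioFormat.PCM_22050HZ_MONO_16BIT used above.
        self._wav = wave.open("output.wav", "wb")
        self._wav.setnchannels(1)      # mono
        self._wav.setsampwidth(2)      # 16-bit samples
        self._wav.setframerate(22050)  # 22050 Hz

    def on_data(self, data: bytes) -> None:
        # Each streamed chunk is raw PCM; append it as WAV frames.
        self._wav.writeframes(data)

    def on_close(self):
        self._wav.close()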

Java

import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.audio.tts.SpeechSynthesisResult;
import com.alibaba.dashscope.audio.ttsv2.SpeechSynthesisAudioFormat;
import com.alibaba.dashscope.audio.ttsv2.SpeechSynthesisParam;
import com.alibaba.dashscope.audio.ttsv2.SpeechSynthesizer;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.ResultCallback;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import io.reactivex.Flowable;
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import javax.sound.sampled.*;

public class Main {
    // Each model version requires voices of the matching version:
    // cosyvoice-v3-flash / cosyvoice-v3-plus: use voices such as longanyang.
    // cosyvoice-v2: use voices such as longxiaochun_v2.
    private static String model = "cosyvoice-v3-flash";
    private static String voice = "longanyang";
    public static void process() throws NoApiKeyException, InputRequiredException {
        // Playback thread
        class PlaybackRunnable implements Runnable {
            // Set the audio format. Configure it according to your device, the
            // synthesized audio parameters, and your platform. Here it is set to
            // 22050 Hz, 16-bit, mono; choose the sample rate and format based on
            // the model's sample rate and your device's compatibility.
            private AudioFormat af = new AudioFormat(22050, 16, 1, true, false);
            private DataLine.Info info = new DataLine.Info(SourceDataLine.class, af);
            private SourceDataLine targetSource = null;
            private AtomicBoolean runFlag = new AtomicBoolean(true);
            private ConcurrentLinkedQueue<ByteBuffer> queue =
                    new ConcurrentLinkedQueue<>();

            // Prepare the player
            public void prepare() throws LineUnavailableException {
                targetSource = (SourceDataLine) AudioSystem.getLine(info);
                targetSource.open(af, 4096);
                targetSource.start();
            }

            public void put(ByteBuffer buffer) {
                queue.add(buffer);
            }

            // Stop playback
            public void stop() {
                runFlag.set(false);
            }

            @Override
            public void run() {
                if (targetSource == null) {
                    return;
                }

                while (runFlag.get()) {
                    if (queue.isEmpty()) {
                        try {
                            Thread.sleep(100);
                        } catch (InterruptedException e) {
                            // Ignore and keep polling; stop() clears runFlag to exit the loop
                        }
                        continue;
                    }

                    ByteBuffer buffer = queue.poll();
                    if (buffer == null) {
                        continue;
                    }

                    byte[] data = buffer.array();
                    targetSource.write(data, 0, data.length);
                }

                // Play all remaining cache
                if (!queue.isEmpty()) {
                    ByteBuffer buffer = null;
                    while ((buffer = queue.poll()) != null) {
                        byte[] data = buffer.array();
                        targetSource.write(data, 0, data.length);
                    }
                }
                // Release the player
                targetSource.drain();
                targetSource.stop();
                targetSource.close();
            }
        }

        // Create a subclass inheriting from ResultCallback<SpeechSynthesisResult>
        // to implement the callback interface
        class ReactCallback extends ResultCallback<SpeechSynthesisResult> {
            private PlaybackRunnable playbackRunnable = null;

            public ReactCallback(PlaybackRunnable playbackRunnable) {
                this.playbackRunnable = playbackRunnable;
            }

            // Callback when the service side returns the streaming synthesis result
            @Override
            public void onEvent(SpeechSynthesisResult result) {
                // Get the binary data of the streaming result via getAudio
                if (result.getAudioFrame() != null) {
                    // Stream the data to the player
                    playbackRunnable.put(result.getAudioFrame());
                }
            }

            // Callback when the service side completes the synthesis
            @Override
            public void onComplete() {
                // Notify the playback thread to end
                playbackRunnable.stop();
            }

            // Callback when an error occurs
            @Override
            public void onError(Exception e) {
                // Tell the playback thread to end
                System.out.println(e);
                playbackRunnable.stop();
            }
        }

        PlaybackRunnable playbackRunnable = new PlaybackRunnable();
        try {
            playbackRunnable.prepare();
        } catch (LineUnavailableException e) {
            throw new RuntimeException(e);
        }
        Thread playbackThread = new Thread(playbackRunnable);
        // Start the playback thread
        playbackThread.start();
        /*******  Call the Generative AI Model to get streaming text *******/
        // Prepare for the LLM call
        Generation gen = new Generation();
        Message userMsg = Message.builder()
                .role(Role.USER.getValue())
                .content("请介绍一下你自己")
                .build();
        GenerationParam genParam =
                GenerationParam.builder()
                        // If the API Key is not set as an environment variable, uncomment the next line and replace apikey with your own API Key
                        // .apiKey("apikey")
                        .model("qwen-turbo")
                        .messages(Arrays.asList(userMsg))
                        .resultFormat(GenerationParam.ResultFormat.MESSAGE)
                        .topP(0.8)
                        .incrementalOutput(true)
                        .build();
        // Prepare the speech synthesis task
        SpeechSynthesisParam param =
                SpeechSynthesisParam.builder()
                        // If the API Key is not set as an environment variable, uncomment the next line and replace apikey with your own API Key
                        // .apiKey("apikey")
                        .model(model)
                        .voice(voice)
                        .format(SpeechSynthesisAudioFormat
                                .PCM_22050HZ_MONO_16BIT)
                        .build();
        SpeechSynthesizer synthesizer =
                new SpeechSynthesizer(param, new ReactCallback(playbackRunnable));
        Flowable<GenerationResult> result = gen.streamCall(genParam);
        result.blockingForEach(message -> {
            String text =
                    message.getOutput().getChoices().get(0).getMessage().getContent();
            // Guard against null or blank chunks before streaming to the synthesizer
            if (text != null && !text.trim().isEmpty()) {
                System.out.println("LLM output: " + text);
                synthesizer.streamingCall(text);
            }
        });
        synthesizer.streamingComplete();
        System.out.print("requestId: " + synthesizer.getLastRequestId());
        try {
            // Wait for the playback thread to finish playing all
            playbackThread.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws NoApiKeyException, InputRequiredException {
        process();
        System.exit(0);
    }
}
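
A note on the design of this example: the synthesizer callback thread and the playback thread are decoupled through a ConcurrentLinkedQueue. The callback only enqueues audio frames as they arrive over the WebSocket, while the playback thread drains the queue and writes to the audio line at its own pace, so network jitter during synthesis does not stall playback. Once stop() is called, any frames still buffered are flushed before the line is drained and closed.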

API Reference

Model Feature Comparison

| Feature | cosyvoice-v3-plus | cosyvoice-v3-flash | cosyvoice-v2 |
| --- | --- | --- | --- |
| Supported languages | Varies by voice: Chinese (Mandarin, Cantonese, Northeastern Mandarin, and the Gansu, Guizhou, Henan, Hubei, Jiangxi, Minnan, Ningxia, Shanxi, Shaanxi, Shandong, Shanghai, Sichuan, Tianjin, and Yunnan dialects), English, French, German, Japanese, Korean, Russian | Same as cosyvoice-v3-plus | Varies by voice: Chinese, English (British, American), Korean, Japanese |
| Audio formats | pcm, wav, mp3, opus | pcm, wav, mp3, opus | pcm, wav, mp3, opus |
| Audio sample rates | 8 kHz, 16 kHz, 22.05 kHz, 24 kHz, 44.1 kHz, 48 kHz | Same as cosyvoice-v3-plus | Same as cosyvoice-v3-plus |
| Voice cloning (see the CosyVoice voice cloning API) | Supported | Supported | Supported |
| SSML (see the SSML markup introduction; applies to cloned voices and to system voices marked as supported in the voice list) | Supported | Supported | Supported |
| LaTeX (see LaTeX formula-to-speech) | Supported | Supported | Supported |
| Volume control | Supported | Supported | Supported |
| Speech rate control | Supported | Supported | Supported |
| Pitch control | Supported | Supported | Supported |
| Bitrate control (opus format only) | Supported | Supported | Supported |
| Timestamps (off by default, can be enabled; applies to cloned voices and to system voices marked as supported in the voice list) | Supported | Supported | Supported |
| Instruction control (Instruct; applies to cloned voices and to system voices marked as supported in the voice list) | Supported | Supported | Not supported |
| Streaming input | Supported | Supported | Supported |
| Streaming output | Supported | Supported | Supported |
| Rate limit (RPS) | 3 | 3 | 3 |
| Access methods | Java/Python SDK, WebSocket API | Java/Python SDK, WebSocket API | Java/Python SDK, WebSocket API |
| Price | $0.286706 per 10K characters | $0.14335 per 10K characters | $0.286706 per 10K characters |

FAQ

Q: What if the synthesized speech mispronounces a word? How do I control the pronunciation of heteronyms?

  • Replace the heteronym with a different character that has the intended pronunciation, as a quick fix.

  • Use SSML markup to control pronunciation (see the sketch below).
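
As an illustration of the SSML approach, here is a hypothetical sketch in the style of Alibaba Cloud's TTS SSML, where a phoneme-style tag pins the reading of a heteronym via pinyin. The exact tags and attributes CosyVoice supports are defined in the SSML markup introduction, so treat the markup below as an assumption to verify there; note also that SSML only applies to cloned voices and to system voices marked as supporting it:

from dashscope.audio.tts_v2 import SpeechSynthesizer

# Hypothetical SSML: a <phoneme alphabet="py" ph="..."> tag pins the reading of 好.
ssml_text = '<speak>今天是个<phoneme alphabet="py" ph="hao3">好</phoneme>日子。</speak>'

synthesizer = SpeechSynthesizer(model="cosyvoice-v2", voice="longxiaochun_v2")
audio = synthesizer.call(ssml_text)
with open('output.mp3', 'wb') as f:
    f.write(audio)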