This topic describes the parameters and API details for the Paraformer audio file recognition RESTful API.
This document applies only to the China (Beijing) region. To use the model, you must use an API key from the China (Beijing) region.
User guide: For an overview of the models and guidance on model selection, see Audio file recognition.
The service provides a task submission interface and a task query interface. Typically, you call the task submission interface to upload a recognition task and then repeatedly call the task query interface until the task is complete.
Prerequisites
You have activated Alibaba Cloud Model Studio and created an API key. To prevent security risks, export the API key as an environment variable instead of hard-coding it in your code.
To grant temporary access permissions to third-party applications or users, or if you want to strictly control high-risk operations such as accessing or deleting sensitive data, we recommend that you use a temporary authentication token.
Compared with long-term API keys, temporary authentication tokens are more secure because they are short-lived (60 seconds). They are suitable for temporary call scenarios and can effectively reduce the risk of API key leakage.
To use a temporary token, replace the API key used for authentication in your code with the temporary authentication token.
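For example, the temporary token goes in the same Authorization header that would otherwise carry the API key. A minimal Python sketch (DASHSCOPE_TEMP_TOKEN is a placeholder environment variable for illustration):

import os

# The temporary token replaces the long-term API key in the Authorization header.
token = os.environ["DASHSCOPE_TEMP_TOKEN"]
headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
    "X-DashScope-Async": "enable",
}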
Model availability
| | paraformer-v2 | paraformer-8k-v2 |
| --- | --- | --- |
| Scenarios | Multilingual recognition for scenarios such as live streaming and meetings | Chinese recognition for scenarios such as telephone customer service and voicemail |
| Sample rate | Any | 8 kHz |
| Languages | Chinese (including Mandarin and the following dialects: Shanghai dialect, Wu dialect, Min Nan dialect, Northeastern dialect, Gansu dialect, Guizhou dialect, Henan dialect, Hubei dialect, Hunan dialect, Jiangxi dialect, Ningxia dialect, Shanxi dialect, Shaanxi dialect, Shandong dialect, Sichuan dialect, Tianjin dialect, Yunnan dialect, and Cantonese), English, Japanese, Korean, German, French, and Russian | Chinese |
| Punctuation prediction | ✅ Supported by default, no configuration required | ✅ Supported by default, no configuration required |
| Inverse text normalization (ITN) | ✅ Supported by default, no configuration required | ✅ Supported by default, no configuration required |
| Custom hotwords | ✅ For more information, see Custom vocabularies | ✅ For more information, see Custom vocabularies |
| Specify language for recognition | ✅ Specified by the language_hints parameter | ❌ |
Limitations
The service does not support direct uploads of local audio or video files. It also does not support base64-encoded audio. The input source must be a file URL that is accessible over the Internet and supports the HTTP or HTTPS protocol, for example, https://your-domain.com/file.mp3.
You can specify the URL using the file_urls parameter. A single request supports up to 100 URLs.
Audio formats
aac, amr, avi, flac, flv, m4a, mkv, mov, mp3, mp4, mpeg, ogg, opus, wav, webm, wma, and wmv.
Important: The API cannot guarantee correct recognition for all audio and video formats and their variants because it is not feasible to test every possibility. We recommend testing your files to confirm that they produce the expected speech recognition results.
Audio sampling rate
The sample rate varies by model:
paraformer-v2 supports any sample rate
paraformer-8k-v2 only supports an 8 kHz sample rate
Audio file size and duration
The audio file cannot exceed 2 GB in size or 12 hours in duration.
To process files that exceed these limits, you can pre-process them to reduce their size. For more information about pre-processing best practices, see Preprocess video files to improve file transcription efficiency (for audio file recognition scenarios).
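For example, you can extract a compressed, mono audio track from a large video file before uploading it. A minimal sketch that calls ffmpeg through Python's subprocess module (assumes ffmpeg is installed; the file names are placeholders):

import subprocess

# -vn drops the video stream; -ac 1 mixes down to mono; -ar 16000 resamples to 16 kHz.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-vn", "-ac", "1", "-ar", "16000", "output.mp3"],
    check=True,
)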
Number of audio files for batch processing
A single request supports up to 100 file URLs.
Recognizable languages
Varies by model:
paraformer-v2:
- Chinese, including Mandarin and various dialects: Shanghai dialect, Wu dialect, Min Nan dialect, Northeastern dialect, Gansu dialect, Guizhou dialect, Henan dialect, Hubei dialect, Hunan dialect, Jiangxi dialect, Ningxia dialect, Shanxi dialect, Shaanxi dialect, Shandong dialect, Sichuan dialect, Tianjin dialect, Yunnan dialect, and Cantonese
- English
- Japanese
- Korean
- German
- French
- Russian
paraformer-8k-v2 only supports Chinese.
API call method limitations
Direct API calls from the frontend are not supported. You must route calls through a backend server.
Task submission interface
Basic information
| Item | Value |
| --- | --- |
| API endpoint description | Submits a speech recognition task. |
| URL | https://dashscope.aliyuncs.com/api/v1/services/audio/asr/transcription |
| Request method | POST |
| Request headers | Authorization: Bearer <your-api-key> <br> Content-Type: application/json <br> X-DashScope-Async: enable |
| Message body | A JSON body that contains the request parameters. You can omit optional fields as needed. An illustrative sample follows this table. |
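This sample is assembled from the request parameters documented in the next section. The nesting of the optional parameters under the parameters object follows the Python example later in this topic; the file URLs and vocabulary ID are placeholders:

{
    "model": "paraformer-v2",
    "input": {
        "file_urls": [
            "https://your-domain.com/file1.mp3",
            "https://your-domain.com/file2.wav"
        ]
    },
    "parameters": {
        "channel_id": [0],
        "vocabulary_id": "vocab-Xxxx",
        "disfluency_removal_enabled": false,
        "timestamp_alignment_enabled": false,
        "language_hints": ["zh", "en"],
        "diarization_enabled": true,
        "speaker_count": 2
    }
}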
Request parameters
| Parameter | Type | Default value | Required | Description |
| --- | --- | --- | --- | --- |
| model | string | - | Yes | The name of the Paraformer model used for audio and video file transcription. For more information, see Model availability. |
| file_urls | array[string] | - | Yes | The list of URLs of the audio and video files to transcribe. HTTP and HTTPS protocols are supported. A maximum of 100 URLs are supported in a single request. If your audio files are stored in Alibaba Cloud OSS, the RESTful API also supports temporary URLs that start with the oss:// prefix. |
| vocabulary_id | string | - | No | The ID of the custom vocabulary. This parameter and language configurations are supported by the latest v2-series models. The hotwords associated with this ID take effect for the current recognition task. This feature is disabled by default. For more information, see Custom vocabularies. |
| resource_type | string | - | No | Must be set to "asr_phrase". This parameter must be used together with "resource_id". |
| channel_id | array[integer] | [0] | No | Specifies the indices of the audio tracks in a multi-track file that require speech recognition, provided as a list. For example, the default value [0] recognizes only the first audio track. |
| disfluency_removal_enabled | boolean | false | No | Specifies whether to filter out filler words. This feature is disabled by default. |
| timestamp_alignment_enabled | boolean | false | No | Specifies whether to enable the timestamp alignment feature. This feature is disabled by default. |
| special_word_filter | string | - | No | Specifies the sensitive words to process during speech recognition and supports different processing methods for different sensitive words. If you do not pass this parameter, the system enables its built-in sensitive-word filtering logic: any words in the detection results that match the Alibaba Cloud Model Studio sensitive word list (Chinese) are replaced with an equal number of asterisks (*). If you pass this parameter, you can implement custom sensitive-word processing strategies. The value of this parameter must be a JSON string. |
| language_hints | array[string] | ["zh", "en"] | No | Specifies the language codes of the speech to be recognized. This parameter is applicable only to the paraformer-v2 model. Supported language codes: zh (Chinese), en (English), ja (Japanese), yue (Cantonese), ko (Korean), de (German), fr (French), and ru (Russian). |
| diarization_enabled | boolean | false | No | Specifies whether to enable automatic speaker diarization. This feature is disabled by default and applies only to mono audio; multi-channel audio does not support speaker diarization. When this feature is enabled, the recognition results include a speaker_id field. For an example of speaker_id, see Recognition result description. |
| speaker_count | integer | - | No | The reference value for the number of speakers. The value must be an integer from 2 to 100, inclusive. This parameter takes effect only after speaker diarization is enabled (diarization_enabled is set to true). By default, the number of speakers is determined automatically. If you set this parameter, it can only assist the algorithm in trying to output the specified number of speakers; that number is not guaranteed. |
Response parameters
| Parameter | Type | Description |
| --- | --- | --- |
| task_status | string | The task status, such as PENDING, RUNNING, SUCCEEDED, or FAILED. |
| task_id | string | The task ID. Pass this ID as a request parameter to the task query interface. |
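An illustrative response. The nesting of task_id and task_status under an output object matches how the complete example later in this topic reads the response; all values are placeholders:

{
    "request_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "output": {
        "task_id": "7bac899c-06ec-4a79-8875-xxxxxxxxxxxx",
        "task_status": "PENDING"
    }
}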
Task query interface
Basic information
| Item | Value |
| --- | --- |
| API endpoint description | Queries the status and result of a speech recognition task. |
| URL | https://dashscope.aliyuncs.com/api/v1/tasks/{task_id} |
| Request method | POST |
| Request headers | Authorization: Bearer <your-api-key> |
| Message body | None. |
Request parameters
| Parameter | Type | Default value | Required | Description |
| --- | --- | --- | --- | --- |
| task_id | string | - | Yes | The ID of the task to query. This ID is returned by the task submission interface. |
Response parameters
| Parameter | Type | Description |
| --- | --- | --- |
| task_id | string | The ID of the queried task. |
| task_status | string | The status of the queried task. Note: If a task contains multiple subtasks, the status of the entire task is marked as SUCCEEDED as long as at least one subtask succeeds. You must check the subtask_status field to determine the result of each subtask. |
| subtask_status | string | The subtask status. |
| file_url | string | The URL of the file processed in the file transcription task. |
| transcription_url | string | The link for obtaining the recognition result. The link is valid for 24 hours. After it expires, you can no longer query the task or download results using a previously obtained URL. The recognition result is saved as a JSON file. You can download the file from this link or read its content directly with an HTTP request. For more information about the fields in the JSON data, see Recognition result description. |
Recognition result description
The recognition result is saved as a JSON file.
The following table describes the key parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| audio_format | string | The audio format of the source file. |
| channels | array[integer] | The audio track index information of the source file. Returns [0] for single-track audio, [0, 1] for dual-track audio, and so on. |
| original_sampling_rate | integer | The sample rate (Hz) of the audio in the source file. |
| original_duration | integer | The original audio duration (ms) of the source file. |
| channel_id | integer | The audio track index of the transcription result, starting from 0. |
| content_duration | integer | The duration (ms) of content determined to be speech in the audio track. Important: The Paraformer speech recognition service only transcribes, and charges for, the duration of content determined to be speech in the audio track. Non-speech content is not measured or charged. Typically, the speech content duration is shorter than the original audio duration. Because an AI model determines whether speech content exists, discrepancies may occur. |
| transcript | string | The paragraph-level speech transcription result. |
| sentences | array | The sentence-level speech transcription result. |
| words | array | The word-level speech transcription result. |
| begin_time | integer | The start timestamp (ms). |
| end_time | integer | The end timestamp (ms). |
| text | string | The speech transcription result. |
| speaker_id | integer | The index of the current speaker, starting from 0, used to distinguish different speakers. This field appears in the recognition result only when speaker diarization is enabled. |
| punctuation | string | The predicted punctuation after the word, if any. |
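The following illustrative fragment combines the fields from the preceding table. The exact top-level nesting (for example, the properties and transcripts wrappers) is an assumption for illustration; inspect a real result file for the authoritative structure:

{
    "file_url": "https://your-domain.com/file.mp3",
    "properties": {
        "audio_format": "mp3",
        "channels": [0],
        "original_sampling_rate": 16000,
        "original_duration": 10000
    },
    "transcripts": [
        {
            "channel_id": 0,
            "content_duration": 8000,
            "transcript": "Hello world.",
            "sentences": [
                {
                    "begin_time": 0,
                    "end_time": 1500,
                    "text": "Hello world.",
                    "speaker_id": 0,
                    "words": [
                        {"begin_time": 0, "end_time": 800, "text": "Hello", "punctuation": ""},
                        {"begin_time": 800, "end_time": 1500, "text": "world", "punctuation": "."}
                    ]
                }
            ]
        }
    ]
}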
Complete example
You can use the HTTP libraries that are built into programming languages to implement task submission and query requests. First, call the task submission interface to upload a recognition task, and then repeatedly call the task query interface until the task is complete.
The following code provides an example in Python:
import requests
import json
import time
api_key = "your-dashscope-api-key" # Replace this with your API key.
file_urls = [
"https://dashscope.oss-cn-beijing.aliyuncs.com/samples/audio/paraformer/hello_world_female2.wav",
"https://dashscope.oss-cn-beijing.aliyuncs.com/samples/audio/paraformer/hello_world_male2.wav",
]
language_hints = ["zh", "en"]
# Submit a file transcription task, including a list of URLs of the files to be transcribed.
def submit_task(apikey, file_urls) -> str:
headers = {
"Authorization": f"Bearer {apikey}",
"Content-Type": "application/json",
"X-DashScope-Async": "enable",
}
data = {
"model": "paraformer-v2",
"input": {"file_urls": file_urls},
"parameters": {
"channel_id": [0],
"language_hints": language_hints,
"vocabulary_id": "vocab-Xxxx",
},
}
# The URL of the recorded file transcription service.
service_url = (
"https://dashscope.aliyuncs.com/api/v1/services/audio/asr/transcription"
)
response = requests.post(
service_url, headers=headers, data=json.dumps(data)
)
# Print the response content.
if response.status_code == 200:
return response.json()["output"]["task_id"]
else:
print("task failed!")
print(response.json())
return None
# Poll the task status until the task completes.
def wait_for_complete(task_id):
headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
"X-DashScope-Async": "enable",
}
pending = True
while pending:
# The URL of the task status query service.
service_url = f"https://dashscope.aliyuncs.com/api/v1/tasks/{task_id}"
response = requests.post(
service_url, headers=headers
)
if response.status_code == 200:
status = response.json()['output']['task_status']
if status == 'SUCCEEDED':
print("task succeeded!")
pending = False
return response.json()['output']['results']
elif status == 'RUNNING' or status == 'PENDING':
pass
else:
print("task failed!")
pending = False
else:
print("query failed!")
pending = False
print(response.json())
        time.sleep(1)  # Wait between polls to reduce the risk of rate limiting.
task_id = submit_task(apikey=api_key, file_urls=file_urls)
print("task_id: ", task_id)
result = wait_for_complete(task_id)
print("transcription result: ", result)Error codes
Error codes
If you encounter an error, see Error messages for troubleshooting.
If the problem persists, join the developer group to report the issue and provide the Request ID for further investigation.
If a task contains multiple subtasks and any subtask succeeds, the overall task status is marked as SUCCEEDED. You must check the subtask_status field to determine the result of each subtask.
The following example shows a response in which one subtask failed:
{
"task_id": "7bac899c-06ec-4a79-8875-xxxxxxxxxxxx",
"task_status": "SUCCEEDED",
"submit_time": "2024-12-16 16:30:59.170",
"scheduled_time": "2024-12-16 16:30:59.204",
"end_time": "2024-12-16 16:31:02.375",
"results": [
{
"file_url": "https://dashscope.oss-cn-beijing.aliyuncs.com/samples/audio/sensevoice/long_audio_demo_cn.mp3",
"transcription_url": "https://dashscope-result-bj.oss-cn-beijing.aliyuncs.com/prod/paraformer-v2/20241216/xxxx",
"subtask_status": "SUCCEEDED"
},
{
"file_url": "https://dashscope.oss-cn-beijing.aliyuncs.com/samples/audio/sensevoice/rich_text_exaple_1.wav",
"code": "InvalidFile.DownloadFailed",
"message": "The audio file cannot be downloaded.",
"subtask_status": "FAILED"
}
],
"task_metrics": {
"TOTAL": 2,
"SUCCEEDED": 1,
"FAILED": 1
}
}
More examples
For more examples, see our GitHub repository.
FAQ
Features
Q: Is Base64 encoded audio supported?
No, it is not. The service only supports recognition of audio from URLs that are accessible over the internet. It does not support binary streams or local files.
Q: How can I provide audio files as publicly accessible URLs?
The specific process varies depending on the storage product that you use. We recommend uploading your audio files to Alibaba Cloud OSS and generating a publicly accessible URL for each file.
Q: How long does it take to obtain the recognition results?
After a task is submitted, it enters the PENDING state. The queuing time depends on the queue length and file duration and cannot be precisely determined, but it is typically within a few minutes. Longer audio files require more processing time.
Troubleshooting
If you encounter an error, refer to the information in Error codes.
Q: What should I do if the recognition results are not synchronized with the audio playback?
Set the timestamp_alignment_enabled request parameter to true to enable the timestamp alignment feature. This feature synchronizes the recognition results with the audio playback.
Q: What do I do if the temporary public access URL of an OSS audio file is inaccessible?
In the request headers, set X-DashScope-OssResourceResolve to enable, as shown in the sketch below.
Note that this method is not recommended, and the Java SDK and the Python SDK do not support configuring this header.
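A minimal sketch of adding this header when you submit a task directly over HTTP:

headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
    "X-DashScope-Async": "enable",
    # Asks the service to resolve temporary oss:// URLs. Not recommended.
    "X-DashScope-OssResourceResolve": "enable",
}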
Q: Why can't I obtain a result after continuous polling?
This may be due to rate limiting. To request a quota increase, join the developer group.
Q: Why is the speech not recognized (no recognition result)?
- Check whether the audio meets the format and sample rate requirements.
- If you are using the paraformer-v2 model, check whether the language_hints parameter is set correctly.
- If the preceding checks do not resolve the issue, you can use custom hotwords to improve the recognition of specific words.
More questions
For more questions, see the FAQ on GitHub.