
Overview

Last Updated: Sep 18, 2020

The real-time speech recognition service provides the Natural User Interaction (NUI) SDK for mobile clients to recognize long-duration speech data streams. The NUI SDK applies to uninterrupted speech recognition scenarios such as conference speeches and live streaming.

Description

Compared with common SDKs, the NUI SDK is smaller in size and provides more comprehensive status management. It offers complete speech processing capabilities, can also serve as an atomic SDK to meet diverse user requirements, and uses a unified API.

Features

  • Supports 16-bit mono audio encoded in pulse-code modulation (PCM).

  • Supports audio sampling rates of 8,000 Hz and 16,000 Hz. A format check sketch follows this list.

  • Allows you to specify whether to return intermediate results, whether to add punctuation marks during post-processing, and whether to convert Chinese numerals to Arabic numerals.

  • Allows you to select linguistic models to recognize speech in different languages when you manage projects in the Intelligent Speech Interaction console. For more information, see Manage projects.
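
For a quick local check against the first two requirements, the following Python sketch (illustrative only; the file name is a placeholder) inspects a WAV file with the standard library:

    import wave

    # Minimal sketch: verify that a local WAV file matches the supported
    # format (16-bit mono PCM at 8,000 Hz or 16,000 Hz) before streaming it.
    # "sample.wav" is a placeholder file name.
    with wave.open("sample.wav", "rb") as wav:
        assert wav.getnchannels() == 1, "audio must be mono"
        assert wav.getsampwidth() == 2, "audio must be 16-bit PCM"
        assert wav.getframerate() in (8000, 16000), "unsupported sampling rate"
        print("compatible:", wav.getframerate(), "Hz")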

Endpoints

  • Internet access

    This endpoint allows you to access the real-time speech recognition service from any host over the Internet. By default, this URL is built into the SDK.

    URL: wss://nls-gateway.cn-shanghai.aliyuncs.com/ws/v1

  • Internal access from an Elastic Compute Service (ECS) instance in the China (Shanghai) region

    This endpoint allows you to access the real-time speech recognition service over the internal network from an ECS instance in the China (Shanghai) region. You cannot access an AnyTunnel virtual IP address (VIP) from a classic network-connected ECS instance, which means that you cannot use such an instance to access the service over the internal network. To access the service by using an AnyTunnel VIP, you must create a virtual private cloud (VPC) and access the service from the VPC.

    Note

    • Access from an ECS instance over the internal network does not consume Internet traffic.

    • For more information about the network types of ECS instances, see Network types.

    URL: ws://nls-gateway.cn-shanghai-internal.aliyuncs.com:80/ws/v1
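
As an illustration, the following Python sketch opens a WebSocket connection to the public endpoint with the third-party websocket-client package. The X-NLS-Token header name is an assumption about how the token is presented during the handshake; verify it against the protocol documentation.

    import websocket  # third-party package: pip install websocket-client

    URL = "wss://nls-gateway.cn-shanghai.aliyuncs.com/ws/v1"
    TOKEN = "your-token"  # placeholder: obtain a valid token first

    # The X-NLS-Token header name is an assumption, not confirmed by this topic.
    ws = websocket.create_connection(URL, header=["X-NLS-Token: " + TOKEN])
    print("connected:", ws.connected)
    ws.close()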

Interaction process

Figure: Interaction process of real-time speech recognition

Note

The server adds the task_id parameter to the response header of every response to indicate the ID of the recognition task. Record the value of this parameter so that, if an error occurs, you can submit a ticket that includes the task ID and the error message.

1. Authenticate the client and initialize the SDK

To establish a WebSocket connection with the server, the client must use a token for authentication. For more information about how to obtain the token, see Obtain a token.

The following table describes the parameters used for authentication and initialization.

Parameter | Type | Required | Description
--- | --- | --- | ---
workspace | String | Yes | The working directory from which the SDK reads the configuration file.
app_key | String | Yes | The appkey of your project created in the Intelligent Speech Interaction console.
token | String | Yes | The token provided as the credential for you to use Intelligent Speech Interaction. Make sure that the token is valid. You can set the token when you initialize the SDK and update it when you set the request parameters.
device_id | String | Yes | The unique identifier of the device, for example, the media access control (MAC) address, serial number, or pseudo unique ID of the device.
debug_path | String | No | The directory where audio files generated during debugging are stored. If the save_log parameter is set to true when you initialize the SDK, intermediate results are stored in this directory.
save_wav | String | No | Specifies whether to store audio files generated during debugging in the directory specified by the debug_path parameter. This parameter is valid only if the save_log parameter is set to true when you initialize the SDK. Make sure that the directory is writable.
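
For reference, the following minimal sketch assembles these initialization parameters as a JSON string. All values are placeholders; how the string is passed depends on the platform-specific SDK initialization call, which is not shown here.

    import json

    # Sketch: assemble the initialization parameters described above.
    # All values are placeholders.
    init_params = {
        "workspace": "/path/to/workspace",  # directory with the SDK configuration file
        "app_key": "your-appkey",           # from the Intelligent Speech Interaction console
        "token": "your-token",              # a valid service token
        "device_id": "00:11:22:33:44:55",   # for example, the device MAC address
        "debug_path": "/path/to/debug",     # optional: where debug audio files are stored
        "save_wav": "true",                 # optional: takes effect only if save_log is true
    }
    print(json.dumps(init_params, indent=2))  # pass this JSON to the SDK initialization call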

2. Send a request to use the real-time speech recognition service

You must set the request parameters for the client to send a service request. You can set the request parameters in the JSON format by using the setParams method in the SDK. The parameter configuration applies to all service requests. The following table describes the request parameters.

Parameter | Type | Required | Description
--- | --- | --- | ---
appkey | String | No | The appkey of your project created in the Intelligent Speech Interaction console. This parameter is generally set when you initialize the SDK.
token | String | No | The token provided as the credential for you to use Intelligent Speech Interaction. You can update the token as required by setting this parameter.
service_type | Int | Yes | The type of speech service to be requested. Set this parameter to 4, which indicates the real-time speech recognition service.
direct_ip | String | No | The IP address resolved from the Domain Name System (DNS) domain name. The client performs the resolution and uses the obtained IP address to access the service.
nls_config | JsonObject | No | The service parameters.

The following table describes the parameters in the nls_config parameter.

Parameter | Type | Required | Description
--- | --- | --- | ---
sr_format | String | No | The audio encoding format. The real-time speech recognition service supports the Opus and PCM formats. Default value: OPUS. Note: This parameter must be set to PCM if the sample_rate parameter is set to 8000.
sample_rate | Integer | No | The audio sampling rate. Unit: Hz. Default value: 16000. After you set this parameter, you must specify a model or scene that matches the sampling rate for your project in the Intelligent Speech Interaction console.
enable_intermediate_result | Boolean | No | Specifies whether to return intermediate results. Default value: false.
enable_punctuation_prediction | Boolean | No | Specifies whether to add punctuation marks during post-processing. Default value: false.
enable_inverse_text_normalization | Boolean | No | Specifies whether to enable inverse text normalization (ITN) during post-processing. Valid values: true and false. Default value: false. If you set this parameter to true, Chinese numerals are converted to Arabic numerals. Note: ITN is not applied to words.
customization_id | String | No | The ID of the custom speech training model.
vocabulary_id | String | No | The ID of the custom hotword vocabulary.
max_sentence_silence | Integer | No | The silence threshold for detecting the end of a sentence. If the silence duration exceeds this threshold, the system determines that the sentence has ended. Unit: milliseconds. Valid values: 200 to 2000. Default value: 800.
enable_words | Boolean | No | Specifies whether to return word-level information. Default value: false.
enable_ignore_sentence_timeout | Boolean | No | Specifies whether to ignore single-sentence recognition timeouts in real-time speech recognition. Default value: false.
disfluency | Boolean | No | Specifies whether to enable disfluency detection. Default value: false.
vad_model | String | No | The ID of the voice activity detection (VAD) model used by the server.
speech_noise_threshold | Float | No | The threshold for recognizing audio streams as noise. Valid values: -1 to 1. The closer the value is to -1, the more likely an audio stream is treated as normal speech, so noise is more likely to be processed by the service. The closer the value is to 1, the more likely an audio stream is treated as noise, so normal speech is more likely to be ignored. Note: This is an advanced parameter. Adjust it with caution and test after each adjustment.
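
Putting the two tables together, a parameter object passed to setParams might look like the following sketch. The values shown are illustrative choices, not recommended settings.

    import json

    # Sketch of a setParams payload: service_type 4 selects real-time
    # speech recognition; nls_config carries the service parameters.
    request_params = {
        "service_type": 4,
        "nls_config": {
            "sr_format": "pcm",                       # must be PCM when sample_rate is 8000
            "sample_rate": 16000,
            "enable_intermediate_result": True,
            "enable_punctuation_prediction": True,
            "enable_inverse_text_normalization": True,
            "enable_words": True,
            "max_sentence_silence": 800,              # milliseconds, 200 to 2000
        },
    }
    params_json = json.dumps(request_params)  # pass this string to setParams
    print(params_json)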

3. Send audio data from the client

The client cyclically sends audio data to the server and continuously receives recognition results from the server. The events below describe the responses; a message-parsing sketch follows the event list.
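
As a rough illustration of the sending side, the following sketch continues the websocket-client connection sketch above, with a placeholder file name and an illustrative chunk size:

    import time
    import wave

    # Sketch: stream a local 16 kHz mono PCM file over the open WebSocket
    # connection "ws" (see the connection sketch above) in small binary
    # frames, pacing the sends to roughly match real time.
    CHUNK_FRAMES = 640  # 40 ms of audio at 16,000 Hz; an illustrative choice

    with wave.open("sample.wav", "rb") as wav:
        while True:
            chunk = wav.readframes(CHUNK_FRAMES)
            if not chunk:
                break
            ws.send_binary(chunk)  # audio goes out as binary frames
            time.sleep(0.04)       # simulate a real-time stream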

  • If an EVENT_SENTENCE_START event is reported, the server detects the beginning of a sentence. Real-time speech recognition uses VAD to determine the beginning and end of a sentence. For example, the server returns the following response:

    {
        "header": {
            "namespace": "SpeechTranscriber",
            "name": "SentenceBegin",
            "status": 20000000,
            "message_id": "a426f3d4618447519c9d85d1a0d1****",
            "task_id": "5ec521b5aa104e3abccf3d361822****",
            "status_text": "Gateway:SUCCESS:Success."
        },
        "payload": {
            "index": 1,
            "time": 0
        }
    }

    The following table describes the parameters in the header object.

    Parameter | Type | Description
    --- | --- | ---
    namespace | String | The namespace of the message.
    name | String | The name of the message. The SentenceBegin message indicates that the server detects the beginning of a sentence.
    status | Integer | The status code that indicates whether the request is successful. For more information, see the "Error codes" section of this topic.
    message_id | String | The ID of the message, which is automatically generated by the SDK.
    task_id | String | The globally unique ID (GUID) of the task. Record the value of this parameter to facilitate troubleshooting.
    status_text | String | The status message.

    The following table describes the parameters in the payload object.

    Parameter | Type | Description
    --- | --- | ---
    index | Integer | The sequence number of the sentence, which starts from 1.
    time | Integer | The duration of the processed audio stream. Unit: milliseconds.

  • If the enable_intermediate_result parameter is set to true, the SDK reports multiple EVENT_ASR_PARTIAL_RESULT events by calling the onNuiEventCallback method to return intermediate results of a sentence. For example, the server returns the following response:

    {
        "header": {
            "namespace": "SpeechTranscriber",
            "name": "TranscriptionResultChanged",
            "status": 20000000,
            "message_id": "dc21193fada84380a3b6137875ab****",
            "task_id": "5ec521b5aa104e3abccf3d361822****",
            "status_text": "Gateway:SUCCESS:Success."
        },
        "payload": {
            "index": 1,
            "time": 1835,
            "result": "Sky in Beijing",
            "confidence": 1.0,
            "words": [{
                "text": "Sky",
                "startTime": 630,
                "endTime": 930
            }, {
                "text": "in",
                "startTime": 930,
                "endTime": 1110
            }, {
                "text": "Beijing",
                "startTime": 1110,
                "endTime": 1140
            }]
        }
    }
    Note

    As shown in the header object, the value of the name parameter is TranscriptionResultChanged, which indicates that an intermediate result is obtained. For more information about other parameters in the header object, see the preceding table.

    The following table describes the parameters in the payload object.

    Parameter | Type | Description
    --- | --- | ---
    index | Integer | The sequence number of the sentence, which starts from 1.
    time | Integer | The duration of the processed audio stream. Unit: milliseconds.
    result | String | The recognition result of the sentence.
    words | List<Word> | The word information of the sentence. This information is returned only when the enable_words parameter is set to true.
    confidence | Double | The confidence level of the recognition result of the sentence. Valid values: 0.0 to 1.0. A larger value indicates a higher confidence level.

  • If an EVENT_SENTENCE_END event is reported, the server detects the end of a sentence and returns the recognition result of the sentence. For example, the server returns the following response:

    {
        "header": {
            "namespace": "SpeechTranscriber",
            "name": "SentenceEnd",
            "status": 20000000,
            "message_id": "c3a9ae4b231649d5ae05d4af36fd****",
            "task_id": "5ec521b5aa104e3abccf3d361822****",
            "status_text": "Gateway:SUCCESS:Success."
        },
        "payload": {
            "index": 1,
            "time": 1820,
            "begin_time": 0,
            "result": "Weather in Beijing.",
            "confidence": 1.0,
            "words": [{
                "text": "Weather",
                "startTime": 630,
                "endTime": 930
            }, {
                "text": "in",
                "startTime": 930,
                "endTime": 1110
            }, {
                "text": "Beijing",
                "startTime": 1110,
                "endTime": 1380
            }]
        }
    }
    Note

    As shown in the header object, the value of the name parameter is SentenceEnd, which indicates that the server detects the end of the sentence. For more information about other parameters in the header object, see the preceding table.

    The following table describes the parameters in the payload object.

    Parameter | Type | Description
    --- | --- | ---
    index | Integer | The sequence number of the sentence, which starts from 1.
    time | Integer | The duration of the processed audio stream. Unit: milliseconds.
    begin_time | Integer | The time when the server returned the SentenceBegin message for this sentence. Unit: milliseconds.
    result | String | The recognition result of the sentence.
    words | List<Word> | The word information of the sentence. This information is returned only when the enable_words parameter is set to true.
    confidence | Double | The confidence level of the recognition result of the sentence. Valid values: 0.0 to 1.0. A larger value indicates a higher confidence level.

    The following table describes the parameters in the words object.

    Parameter | Type | Description
    --- | --- | ---
    text | String | The text of the word.
    startTime | Integer | The time when the word starts in the sentence. Unit: milliseconds.
    endTime | Integer | The time when the word ends in the sentence. Unit: milliseconds.
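
To tie the preceding events together, the following Python sketch parses a raw server response and dispatches on the name parameter in the header object, using only fields shown in the sample responses:

    import json

    def handle_message(raw: str) -> None:
        # Dispatch a server response on header.name, using only the
        # fields shown in the sample responses above.
        msg = json.loads(raw)
        name = msg["header"]["name"]
        payload = msg.get("payload", {})
        if name == "SentenceBegin":
            print(f"sentence {payload['index']} began at {payload['time']} ms")
        elif name == "TranscriptionResultChanged":
            print(f"intermediate result: {payload['result']!r}")
        elif name == "SentenceEnd":
            print(f"final result: {payload['result']!r} (confidence {payload['confidence']})")

    # Example with an abridged SentenceEnd response from above:
    handle_message(json.dumps({
        "header": {"name": "SentenceEnd"},
        "payload": {"index": 1, "time": 1820, "result": "Weather in Beijing.", "confidence": 1.0},
    }))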

4. Complete the recognition task

The client notifies the server that all audio data has been sent. The server then completes the recognition task and notifies the client that the task is complete.
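
The SDK performs this step through its own stop call. At the protocol level, a stop message might look like the following sketch; the StopTranscription command name is an assumption inferred from the SpeechTranscriber namespace in the sample responses above and is not confirmed by this topic.

    import json
    import uuid

    # Protocol-level sketch only; the SDK normally sends this for you.
    # "StopTranscription" is an assumed command name, inferred from the
    # SpeechTranscriber namespace in the sample responses above.
    stop_message = {
        "header": {
            "namespace": "SpeechTranscriber",
            "name": "StopTranscription",
            "message_id": uuid.uuid4().hex,
            "task_id": "your-task-id",  # placeholder: the task ID returned by the server
        },
    }
    ws.send(json.dumps(stop_message))  # "ws" is the open connection from the earlier sketch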

Error codes

For more information about the error codes that the real-time speech recognition service may return, see Error codes.