
SDK for C++

Last Updated: Nov 20, 2020

This topic describes how to use the SDK for C++ provided by Alibaba Cloud Intelligent Speech Interaction, including how to install the SDK and how to use the SDK sample code.

Note

  • The latest version of the SDK for C++ is 3.0.8, which was released on January 09, 2020.

    This version applies only to the Linux operating system. The Windows operating system is not supported.

  • Before you use this SDK, make sure that you understand how it works. For more information, see Overview.

  • The methods of this SDK version are different from those of the earlier version. If you are familiar with the earlier version, pay attention to the updated methods described in this topic.

Download and installation

Download the SDK:

To download the SDK for C++, click here. The compressed package contains the following files and folders:

  • CMakeLists.txt: the CMake build file of the demo project.

  • readme.txt: the SDK description.

  • release.log: the release notes.

  • version: the version number.

  • build.sh: the demo compilation script.

  • lib: the SDK libraries.

  • build: the compilation directory.

  • demo: the folder that contains the demo files of various Intelligent Speech Interaction services and the test audio files. The following table describes the files contained in the folder.

    File name                       Description
    speechRecognizerDemo.cpp        The demo of short sentence recognition.
    speechSynthesizerDemo.cpp       The demo of speech synthesis.
    speechTranscriberDemo.cpp       The demo of real-time speech recognition.
    speechLongSynthesizerDemo.cpp   The demo of long-text-to-speech synthesis.
    test0.wav/test1.wav             16-bit audio files with a sampling rate of 16,000 Hz for testing.

  • include: the folder that contains SDK header files. The following table describes the files contained in the folder.

    File name                       Description
    nlsClient.h                     The header file of the NlsClient object.
    nlsEvent.h                      The header file of callback events.
    speechRecognizerRequest.h       The header file of short sentence recognition.
    speechSynthesizerRequest.h      The header file of speech synthesis and long-text-to-speech synthesis.
    speechTranscriberRequest.h      The header file of real-time speech recognition.
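The test audio files in the package are 16-bit PCM at 16,000 Hz, and the recognition model bound to your appkey must match the sampling rate of the audio you send. If you want to verify a local file before feeding it to the demo, a minimal self-contained sketch such as the following can read the sample rate and bit depth from a canonical PCM WAV header (readWavFormat is a hypothetical helper written for this illustration; it is not part of the SDK, and it only handles the common fixed 44-byte header layout, not WAV files with extra chunks):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Parse a canonical 44-byte PCM WAV header and extract the sample rate and
// bit depth. Returns false if the buffer does not look like a RIFF/WAVE header.
bool readWavFormat(const uint8_t* hdr, size_t len,
                   uint32_t* sampleRate, uint16_t* bitsPerSample) {
    if (len < 44) return false;
    if (std::memcmp(hdr, "RIFF", 4) != 0 || std::memcmp(hdr + 8, "WAVE", 4) != 0)
        return false;
    // WAV fields are little-endian: sample rate at byte offset 24,
    // bits per sample at byte offset 34.
    *sampleRate = hdr[24] | (hdr[25] << 8) | (hdr[26] << 16) |
                  ((uint32_t)hdr[27] << 24);
    *bitsPerSample = hdr[34] | (hdr[35] << 8);
    return true;
}
```

If the file reports anything other than 16 bits at 8,000 Hz or 16,000 Hz, resample it before sending, or the recognition results will be unreliable.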

Compile and run the demo project:

  1. Check the local operating system to ensure that the required build tools meet the following minimum versions:

    • CMake 3.1
    • glibc 2.5
    • GCC 4.1.2

  2. Run the following commands in a Linux terminal:

    mkdir build
    cd build && cmake .. && make
    # The build generates the following executable demo programs in the demo directory: srDemo for short sentence recognition, stDemo for real-time speech recognition, syDemo for speech synthesis, and syLongDemo for long-text-to-speech synthesis.
    cd ../demo
    ./stDemo <yourAppkey> <yourAccessKeyId> <yourAccessKeySecret> # Replace the placeholders with your own account information.

Key objects

  • Basic objects

    • NlsClient: the speech processing client, which functions as a factory for all speech processing classes. You need to create only one NlsClient object globally.

    • NlsEvent: the event object. You can use this object to obtain the request status code, response from the server, and error message.

  • Recognition object

    SpeechRecognizerRequest: the request object of short sentence recognition.
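    These objects interact in a fixed pattern: you obtain a request from the client factory, register callbacks on it, start it, feed it data, and the SDK fires events asynchronously. The following self-contained sketch illustrates that register-then-start callback pattern with mock stand-ins (MockEvent and MockRecognizerRequest are invented for this illustration; the real NlsClient, NlsEvent, and SpeechRecognizerRequest come from the SDK headers and have richer interfaces):

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Mock event, loosely modeled on NlsEvent: carries a status code and a
// response payload that callbacks can inspect.
struct MockEvent {
    int statusCode;
    std::string response;
};

// Mock request, loosely modeled on SpeechRecognizerRequest: callbacks are
// registered first, then fired as the "recognition" progresses.
class MockRecognizerRequest {
public:
    void setOnStarted(std::function<void(const MockEvent&)> cb)   { onStarted_ = cb; }
    void setOnCompleted(std::function<void(const MockEvent&)> cb) { onCompleted_ = cb; }

    int start() {                      // Fires the started event.
        if (onStarted_) onStarted_({20000000, "started"});
        return 0;
    }
    int sendAudio(const std::vector<uint8_t>& data) {  // Accepts audio chunks.
        bytesSent_ += data.size();
        return (int)data.size();
    }
    int stop() {                       // Fires the completed event.
        if (onCompleted_) onCompleted_({20000000, "completed"});
        return 0;
    }
    size_t bytesSent() const { return bytesSent_; }

private:
    std::function<void(const MockEvent&)> onStarted_, onCompleted_;
    size_t bytesSent_ = 0;
};
```

The key design point carried over from the real SDK is that all callbacks must be registered before start() is called, because events can fire from an internal thread as soon as the connection is established.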

Error codes of the SDK for C++

Each entry lists the error code, the error message, and the description and solution.

  • 10000001: SSL: couldn't create a ......!
    An internal error has occurred. Try again later.

  • 10000002: An official OpenSSL error message.
    An internal error has occurred. Resolve the error based on the error message and try again later.

  • 10000003: A system error message.
    A system error has occurred. Resolve the error based on the error message.

  • 10000004: URL: The url is empty.
    No endpoint is specified. Check whether an endpoint is specified.

  • 10000005: URL: Could not parse WebSocket url.
    The specified endpoint is invalid. Check whether the specified endpoint is correct.

  • 10000006: MODE: unsupport mode.
    The specified Intelligent Speech Interaction service is not supported. Check whether the service is correctly configured.

  • 10000007: JSON: Json parse failed.
    The server returned an invalid response. Submit a ticket and provide the task ID to Alibaba Cloud.

  • 10000008: WEBSOCKET: unkown head type.
    The server returned an invalid WebSocket type. Submit a ticket and provide the task ID to Alibaba Cloud.

  • 10000009: HTTP: connect failed.
    The client failed to connect to the server. Check the network and try again later.

  • An official HTTP status code: HTTP: Got bad status.
    An internal error has occurred. Resolve the error based on the error message.

  • A system error code: IP: ip address is not valid.
    The IP address is invalid. Resolve the error based on the error message.

  • A system error code: ENCODE: convert to utf8 error.
    The file failed to be converted to the UTF-8 format. Resolve the error based on the error message.

  • 10000010: please check if the memory is enough.
    The memory is insufficient. Check the memory of the local device.

  • 10000011: Please check the order of execution.
    The client called methods in an invalid order. For example, after the client receives a failed or complete message, the SDK disconnects the client from the server; if the client then calls a method to send data, this error is returned.

  • 10000012: StartCommand/StopCommand Send failed.
    The request contains invalid parameters. Check the settings of the request parameters.

  • 10000013: The sent data is null or dataSize <= 0.
    The client sent invalid data. Check the settings of the request parameters.

  • 10000014: Start invoke failed.
    The start method timed out. Call the stop method to release resources, and then start the recognition process again.

  • 10000015: connect failed.
    The connection between the client and the server failed. Release resources and start the recognition process again.
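When handling a TaskFailed event, it can help to decide programmatically whether an error is worth retrying. The following self-contained sketch classifies a subset of the codes above according to their suggested solutions (isRetryableSdkError is a hypothetical helper written for this illustration, not an SDK API):

```cpp
// Classify SDK error codes from the table above: returns true for errors
// whose suggested solution is to try again or restart the recognition,
// false for errors that require fixing parameters or contacting support.
// Hypothetical helper for illustration; not part of the SDK.
bool isRetryableSdkError(int code) {
    switch (code) {
        case 10000001: // SSL internal error: try again later.
        case 10000002: // OpenSSL internal error: try again later.
        case 10000009: // HTTP connect failed: check the network, try again.
        case 10000014: // start() timed out: release resources and retry.
        case 10000015: // connection failed: release resources and retry.
            return true;
        case 10000004: // empty endpoint: fix the configuration first.
        case 10000005: // invalid endpoint: fix the configuration first.
        case 10000007: // invalid server response: submit a ticket.
        case 10000012: // invalid request parameters: fix them first.
        case 10000013: // invalid audio data: fix the sender first.
        default:
            return false;
    }
}
```

A retry loop built on such a check should still cap the number of attempts and release the request object between attempts, as the table's solutions for 10000014 and 10000015 recommend.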

Service status codes

For more information about the service status codes, see the "Service status codes" section of the API reference.

Sample code

Note

  • The demo uses an audio file with a sampling rate of 16,000 Hz. To obtain correct recognition results, set the model to the universal model for the project to which the appkey is bound in the Intelligent Speech Interaction console. In your own business scenario, select a model that matches the audio sampling rate. For more information about model settings, see Manage projects.

  • The demo uses the default Internet access URL built in the SDK to access the short sentence recognition service. To use an Elastic Compute Service (ECS) instance that resides in the China (Shanghai) region to access this service in an internal network, set the URL for internal access when you create the SpeechRecognizerRequest object.

    request->setUrl("ws://nls-gateway.cn-shanghai-internal.aliyuncs.com/ws/v1");

  • You can obtain the complete sample code from the speechRecognizerDemo.cpp file in the demo folder of the SDK package.

#include <string.h>
#include <unistd.h>
#include <pthread.h>
#include <stdlib.h>
#include <ctime>
#include <string>
#include <iostream>
#include <vector>
#include <fstream>
#include <sys/time.h>
#include "nlsClient.h"
#include "nlsEvent.h"
#include "speechRecognizerRequest.h"
#include "nlsCommonSdk/Token.h"
#define FRAME_SIZE 3200
#define SAMPLE_RATE 16000
using namespace AlibabaNlsCommon;
using AlibabaNls::NlsClient;
using AlibabaNls::NlsEvent;
using AlibabaNls::LogDebug;
using AlibabaNls::LogInfo;
using AlibabaNls::LogError;
using AlibabaNls::SpeechRecognizerRequest;
// Customize the thread parameters.
struct ParamStruct {
      std::string fileName;
      std::string appkey;
      std::string token;
};
// Customize the callback parameters.
struct ParamCallBack {
      int userId;
      char userInfo[8];
};
// Specify a token for service authentication and the timestamp that indicates the validity period of the token. The token and timestamp can be used throughout the project.
// Each time before you call the service, you must check whether the specified token expires.
// If the token expires, you can use the AccessKey ID and AccessKey secret of your Alibaba Cloud account to obtain a new token. Then, reset the g_token and g_expireTime parameters.
// Note: Do not obtain a new token each time you call the short sentence recognition service. A token can be used for service authentication when it is valid. In addition, you can use the same token for all Intelligent Speech Interaction services.
std::string g_akId = "";
std::string g_akSecret = "";
std::string g_token = "";
long g_expireTime = -1;
struct timeval tv;
struct timeval tv1;
// Obtain a new token by using the AccessKey ID and AccessKey secret and obtain a timestamp for the validity period of the token.
// A token can be used when it is valid. You can use the same token for multiple processes, multiple threads, or multiple applications. We recommend that you apply for a new token when the current token is about to expire.
int generateToken(std::string akId, std::string akSecret, std::string* token, long* expireTime) {
      NlsToken nlsTokenRequest;
      nlsTokenRequest.setAccessKeyId(akId);
      nlsTokenRequest.setKeySecret(akSecret);
      if (-1 == nlsTokenRequest.applyNlsToken()) {
            // Receive the error message.
            printf("generateToken Failed: %s\n", nlsTokenRequest.getErrorMsg());
            return -1;
      }
      *token = nlsTokenRequest.getToken();
      *expireTime = nlsTokenRequest.getExpireTime();
      return 0;
}
// @brief: Call the sendAudio method to obtain the sleep duration of audio data sending.
// @param dataSize: the size of the audio data to be sent.
// @param sampleRate: the audio sampling rate. Supported sampling rates include 8,000 Hz and 16,000 Hz.
// @param compressRate: the data compression rate. Set this parameter to 10 for Opus-encoded audio data with a sampling rate of 16,000 Hz and a data compression rate of 10:1. Set this parameter to 1 if the data is not compressed.
// @return: the sleep duration after the audio data is sent.
// @note: For 16-bit pulse-code modulation (PCM)-encoded audio data with a sampling rate of 8,000 Hz, we recommend that you set the sleep duration to 100 ms for every 1,600 bytes sent.
// For 16-bit PCM-encoded audio data with a sampling rate of 16,000 Hz, we recommend that you set the sleep duration to 100 ms for every 3,200 bytes sent.
// For audio data in other formats, calculate the sleep duration based on the compression rate. For example, for Opus-encoded audio data with a sampling rate of 16,000 Hz and a compression rate of 10:1, 320 bytes of compressed data correspond to 3,200 bytes of PCM data, so the sleep duration is still 100 ms.
unsigned int getSendAudioSleepTime(int dataSize, int sampleRate, int compressRate) {
      // Only 16-bit audio data is supported.
      const int sampleBytes = 16;
      // Only mono audio data is supported.
      const int soundChannel = 1;
      // The current sampling rate, which indicates the size of data in the specified audio bit depth sampled per second.
      int bytes = (sampleRate * sampleBytes * soundChannel) / 8;
      // The current sampling rate, which indicates the size of data in the specified audio bit depth sampled per millisecond.
      int bytesMs = bytes / 1000;
      // The sleep duration is the size of the audio data to be sent divided by the sampling rate per millisecond.
      int sleepMs = (dataSize * compressRate) / bytesMs;
      return sleepMs;
}
// @brief: Call the start method to connect the client to the server. The SDK reports a started event in an internal thread.
// @param cbEvent: the syntax of the event in a callback. For more information, see the nlsEvent.h file.
// @param cbParam: the custom parameter in a callback. The default value is null. You can set this parameter based on your business requirements.
void OnRecognitionStarted(NlsEvent* cbEvent, void* cbParam) {
      ParamCallBack* tmpParam = (ParamCallBack*)cbParam;
      // The following code demonstrates how to obtain details of the started event and customize callback parameters.
      printf("OnRecognitionStarted: %d, %s\n", tmpParam->userId, tmpParam->userInfo);
      // Obtain the status code returned for the call. If the call is successful, the code 0 or 20000000 is returned. If the call fails, the corresponding error code is returned.
      // The ID of the current recognition task. The task ID is the unique identifier that indicates the interaction between the caller and the server. You must record the task ID. If an error occurs, you can submit a ticket and provide the task ID to Alibaba Cloud to facilitate troubleshooting.
      printf("OnRecognitionStarted: status code=%d, task id=%s\n", cbEvent->getStatusCode(), cbEvent->getTaskId());
      // Obtain the complete information returned by the server.
      //printf("OnRecognitionStarted: all response=%s\n", cbEvent->getAllResponse());
}
// @brief: Specify that the intermediate recognition result is to be returned. When the SDK receives the intermediate recognition result returned by the server, the SDK reports a ResultChanged event in an internal thread.
// @param cbEvent: the syntax of the event in a callback. For more information, see the nlsEvent.h file.
// @param cbParam: the custom parameter in a callback. The default value is null. You can set this parameter based on your business requirements.
void OnRecognitionResultChanged(NlsEvent* cbEvent, void* cbParam) {
      ParamCallBack* tmpParam = (ParamCallBack*)cbParam;
      // The following code demonstrates how to obtain details of the ResultChanged event and customize callback parameters.
      printf("OnRecognitionResultChanged: %d, %s\n", tmpParam->userId, tmpParam->userInfo);
      // The ID of the current recognition task. The task ID is the unique identifier that indicates the interaction between the caller and the server. You must record the task ID. If an error occurs, you can submit a ticket and provide the task ID to Alibaba Cloud to facilitate troubleshooting.
      printf("OnRecognitionResultChanged: status code=%d, task id=%s, result=%s\n", cbEvent->getStatusCode(), cbEvent->getTaskId(), cbEvent->getResult());
      // Obtain the complete information returned by the server.
      //printf("OnRecognitionResultChanged: response=%s\n", cbEvent->getAllResponse());
}
// @brief: When the SDK receives a message from the server indicating that the recognition task is completed, the SDK reports a Completed event in an internal thread.
// @note: After a Completed event is reported, the SDK disconnects the client from the server in an internal thread. At this time, if you call the sendAudio method, -1 is returned. Stop sending audio data in this case.
// @param cbEvent: the syntax of the event in a callback. For more information, see the nlsEvent.h file.
// @param cbParam: the custom parameter in a callback. The default value is null. You can set this parameter based on your business requirements.
void OnRecognitionCompleted(NlsEvent* cbEvent, void* cbParam) {
      ParamCallBack* tmpParam = (ParamCallBack*)cbParam;
      // The following code demonstrates how to obtain details of the Completed event and customize callback parameters.
      printf("OnRecognitionCompleted: %d, %s\n", tmpParam->userId, tmpParam->userInfo);
      // The ID of the current recognition task. The task ID is the unique identifier that indicates the interaction between the caller and the server. You must record the task ID. If an error occurs, you can submit a ticket and provide the task ID to Alibaba Cloud to facilitate troubleshooting.
      printf("OnRecognitionCompleted: status code=%d, task id=%s, result=%s\n", cbEvent->getStatusCode(), cbEvent->getTaskId(), cbEvent->getResult());
      // Obtain the complete information returned by the server.
      //printf("OnRecognitionCompleted: response=%s\n", cbEvent->getAllResponse());
}
// @brief: When an error occurs during the recognition process, the SDK reports a TaskFailed event in an internal thread.
// @note: After a TaskFailed event is reported, the SDK disconnects the client from the server in an internal thread. At this time, if you call the sendAudio method, -1 is returned. Stop sending audio data in this case.
// @param cbEvent: the syntax of the event in a callback. For more information, see the nlsEvent.h file.
// @param cbParam: the custom parameter in a callback. The default value is null. You can set this parameter based on your business requirements.
//@return
void OnRecognitionTaskFailed(NlsEvent* cbEvent, void* cbParam) {
      ParamCallBack* tmpParam = (ParamCallBack*)cbParam;
      // The following code demonstrates how to obtain details of the TaskFailed event and customize callback parameters.
      printf("OnRecognitionTaskFailed: %d, %s\n", tmpParam->userId, tmpParam->userInfo);
      // The ID of the current recognition task. The task ID is the unique identifier that indicates the interaction between the caller and the server. You must record the task ID. If an error occurs, you can submit a ticket and provide the task ID to Alibaba Cloud to facilitate troubleshooting.
      printf("OnRecognitionTaskFailed: status code=%d, task id=%s, error message=%s\n", cbEvent->getStatusCode(), cbEvent->getTaskId(), cbEvent->getErrorMessage());
      // Obtain the complete information returned by the server.
      //printf("OnRecognitionTaskFailed: response=%s\n", cbEvent->getAllResponse());
}
// @brief: When the recognition ends or an error occurs during the recognition process, the SDK disconnects the client from the server and reports a ChannelClosed event in an internal thread.
// @param cbEvent: the syntax of the event in a callback. For more information, see the nlsEvent.h file.
// @param cbParam: the custom parameter in a callback. The default value is null. You can set this parameter based on your business requirements.
void OnRecognitionChannelClosed(NlsEvent* cbEvent, void* cbParam) {
      ParamCallBack* tmpParam = (ParamCallBack*)cbParam;
      // The following code demonstrates how to obtain details of the ChannelClosed event and customize callback parameters.
      printf("OnRecognitionChannelClosed: %d, %s\n", tmpParam->userId, tmpParam->userInfo);
      // Obtain the complete information returned by the server.
      printf("OnRecognitionChannelClosed: response=%s\n", cbEvent->getAllResponse());
      delete tmpParam; // The recognition process ends and the callback parameter is released.
}
void* pthreadFunction(void* arg) {
      int sleepMs = 0;
      ParamCallBack *cbParam = NULL;
      // Initialize custom callback parameters. The following settings are used only to demonstrate how to pass parameters and have no effect on the demo.
      // The callback parameters are allocated on the heap and are released in the OnRecognitionChannelClosed callback after the recognition process ends.
      cbParam = new ParamCallBack;
      cbParam->userId = rand() % 100;
      strcpy(cbParam->userInfo, "User.");
      // 0. Obtain parameters such as the token and configuration files from custom thread parameters.
      ParamStruct *tst = (ParamStruct *) arg;
      if (tst == NULL) {
            printf("arg is not valid\n");
            return NULL;
      }
      // Open the audio file and obtain audio data.
      std::ifstream fs;
      fs.open(tst->fileName.c_str(), std::ios::binary | std::ios::in);
      if (!fs) {
            printf("%s does not exist.\n", tst->fileName.c_str());
            return NULL;
      }
      // 1. Create the SpeechRecognizerRequest object of short sentence recognition.
      SpeechRecognizerRequest *request = NlsClient::getInstance()->createRecognizerRequest();
      if (request == NULL) {
            printf("createRecognizerRequest failed\n");
            return NULL;
      }
      request->setOnRecognitionStarted(OnRecognitionStarted, cbParam);        // Set a callback to be fired when the start method succeeds.
      request->setOnTaskFailed(OnRecognitionTaskFailed, cbParam);             // Set a callback to be fired when an unexpected failure occurs.
      request->setOnChannelClosed(OnRecognitionChannelClosed, cbParam);       // Set a callback to be fired when the TCP connection set up for the recognition task is closed.
      request->setOnRecognitionResultChanged(OnRecognitionResultChanged, cbParam); // Set a callback to be fired when an intermediate recognition result is returned.
      request->setOnRecognitionCompleted(OnRecognitionCompleted, cbParam);    // Set a callback to be fired when the recognition task is completed.
      request->setAppKey(tst->appkey.c_str());        // Specify the appkey. This parameter is required. If you do not have an appkey, obtain it as instructed on the Alibaba Cloud international site (alibabacloud.com).
      request->setFormat("pcm");                      // Specify the audio encoding format. This parameter is optional. Valid values: pcm and opus. Default value: pcm.
      request->setSampleRate(SAMPLE_RATE);            // Specify the audio sampling rate. This parameter is optional. Valid values: 16000 and 8000. Default value: 16000.
      request->setIntermediateResult(true);           // Specify whether to return intermediate recognition results. This parameter is optional. Default value: false.
      request->setPunctuationPrediction(true);        // Specify whether to add punctuation marks during post-processing. This parameter is optional. Default value: false.
      request->setInverseTextNormalization(true);     // Specify whether to enable inverse text normalization (ITN) during post-processing. This parameter is optional. Default value: false.
      //request->setEnableVoiceDetection(true);       // Specify whether to enable voice detection. This parameter is optional. Default value: false.
      // Specify the maximum duration of start silence, in milliseconds. This parameter is optional. If the actual start silence exceeds the specified duration, the server sends a RecognitionCompleted event and the SDK completes the recognition task. Note that if you need to use this parameter, you must set enable_voice_detection to true.
      //request->setMaxStartSilence(5000);
      // Specify the maximum duration of end silence, in milliseconds. This parameter is optional. If the actual end silence exceeds the specified duration, the server sends a RecognitionCompleted event and the SDK completes the recognition task. Note that if you need to use this parameter, you must set enable_voice_detection to true.
      //request->setMaxEndSilence(800);
      //request->setCustomizationId("TestId_123"); // Specify the ID of the custom model. This parameter is optional.
      //request->setVocabularyId("TestId_456"); // Specify the vocabulary ID of custom extensive hotwords. This parameter is optional.
      // Pass custom or advanced parameters in the JSON format of {"key": "value"}.
      //request->setPayloadParam("{\"vad_model\": \"farfield\"}");
      request->setToken(tst->token.c_str()); // Specify the token used for account authentication. This parameter is required.
      // 2. Call the start method in asynchronous callback mode. If the method is called, a started event is returned. If the method fails, a TaskFailed event is returned.
      if (request->start() < 0) {
            printf("start() failed. The client may be unable to connect to the server. Check the network and firewall settings.\n");
            NlsClient::getInstance()->releaseRecognizerRequest(request); // The start() method fails. The SpeechRecognizerRequest object is released.
            return NULL;
      }
      while (!fs.eof()) {
            uint8_t data[FRAME_SIZE] = {0};
            fs.read((char *) data, sizeof(uint8_t) * FRAME_SIZE);
            size_t nlen = fs.gcount();
            if (nlen == 0) {
                  continue;
            }
            // 3. Send audio data. The sendAudio method is in asynchronous callback mode. If the sendAudio method returns -1, data fails to be sent and the client stops sending data.
            int ret = request->sendAudio(data, nlen);
            if (ret < 0) {
                  // Data fails to be sent. The client stops the sending loop.
                  printf("send data fail.\n");
                  break;
            }
            // Control the transmission speed of data sending:
            // If you recognize a real-time recording, you do not need to control the transmission speed by using the sleep method.
            // If you recognize an audio file, which is the mode used in this example, you must control the transmission speed so that the data size sent per unit of time approximates the data size of a unit of time in the audio file.
            sleepMs = getSendAudioSleepTime(nlen, SAMPLE_RATE, 1); // Obtain the sleep duration based on the size of the sent data, the audio sampling rate, and the data compression rate.
            // 4. Sleep before sending the next chunk of audio data.
            usleep(sleepMs * 1000);
      }
      printf("sendAudio done.\n");
      // 5. Close the audio file.
      fs.close();
      // 6. Notify the server that the audio data is sent.
      // Call the stop method in asynchronous callback mode. If the method fails, a TaskFailed event is returned.
      request->stop();
      // 7. Instruct the SDK to release the SpeechRecognizerRequest object.
      NlsClient::getInstance()->releaseRecognizerRequest(request);
      return NULL;
}
// Enable cyclical recognition.
// To enable cyclical recognition, you must assign a value to count and specify the audio files to be recognized. By default, in this demo project, one audio file is recognized each time the service is called.
void* multiRecognize(void* arg) {
      int count = 2;
      while (count > 0) {
            pthreadFunction(arg);
            count--;
      }
      return NULL;
}
// Recognize a single audio file.
int speechRecognizerFile(const char* appkey) {
      // Obtain the timestamp of the current system time to check whether the token expires.
      std::time_t curTime = std::time(0);
      if (g_expireTime - curTime < 10) {
            printf("the token will be expired, please generate new token by AccessKey-ID and AccessKey-Secret.\n");
            if (-1 == generateToken(g_akId, g_akSecret, &g_token, &g_expireTime)) {
                  return -1;
            }
      }
      ParamStruct pa;
      pa.token = g_token;
      pa.appkey = appkey;
      pa.fileName = "test0.wav";
      pthread_t pthreadId;
      // Start a worker thread to perform speech recognition.
      pthread_create(&pthreadId, NULL, &pthreadFunction, (void *)&pa);
      // Start a worker thread to perform cyclical speech recognition.
      // pthread_create(&pthreadId, NULL, &multiRecognize, (void *)&pa);
      pthread_join(pthreadId, NULL);
      return 0;
}
// Recognize multiple audio files.
// When the SDK runs multiple concurrent threads, each thread recognizes one audio file. Do not recognize the same audio file in different threads.
// In the sample code, two threads are used to recognize two audio files.
// If you are a free-trial user, you can make a maximum of two concurrent calls.
#define AUDIO_FILE_NUMS 2
#define AUDIO_FILE_NAME_LENGTH 32
int speechRecognizerMultFile(const char* appkey) {
      // Obtain the timestamp of the current system time to check whether the token expires.
      std::time_t curTime = std::time(0);
      if (g_expireTime - curTime < 10) {
      printf("the token will be expired, please generate new token by AccessKey-ID and AccessKey-Secret.\n");
            if (-1 == generateToken(g_akId, g_akSecret, &g_token, &g_expireTime)) {
                  return -1;
            }
      }
      char audioFileNames[AUDIO_FILE_NUMS][AUDIO_FILE_NAME_LENGTH] = {"test0.wav", "test1.wav"};
      ParamStruct pa[AUDIO_FILE_NUMS];
      for (int i = 0; i < AUDIO_FILE_NUMS; i ++) {
            pa[i].token = g_token;
            pa[i].appkey = appkey;
            pa[i].fileName = audioFileNames[i];
      }
      std::vector<pthread_t> pthreadId(AUDIO_FILE_NUMS);
      // Start two worker threads and recognize two audio files at a time.
      for (int j = 0; j < AUDIO_FILE_NUMS; j++) {
            pthread_create(&pthreadId[j], NULL, &pthreadFunction, (void *)&(pa[j]));
      }
      for (int j = 0; j < AUDIO_FILE_NUMS; j++) {
            pthread_join(pthreadId[j], NULL);
      }
      return 0;
}
int main(int argc, char* argv[]) {
      if (argc < 4) {
            printf("params is not valid. Usage: ./demo <your appkey> <your AccessKey ID> <your AccessKey Secret>\n");
            return -1;
      }
      std::string appkey = argv[1];
      g_akId = argv[2];
      g_akSecret = argv[3];
      // Configure output logs of the SDK. The configuration is optional. As configured in the following code, the SDK logs are generated in the log-recognizer.txt file. LogDebug specifies that logs at all levels, including LogInfo, LogWarning, and LogError, are generated. 400 specifies that the maximum size of a single log file is 400 MB.
      int ret = NlsClient::getInstance()->setLogConfig("log-recognizer", LogDebug, 400);
      if (-1 == ret) {
            printf("set log failed.\n");
            return -1;
      }
      // Start the worker thread.
      NlsClient::getInstance()->startWorkThread(4);
      // Recognize a single audio file.
      speechRecognizerFile(appkey.c_str());
      // Recognize multiple audio files.
      //speechRecognizerMultFile(appkey.c_str());
      // All the tasks are completed. Release the NlsClient object before the process exits. Note that the releaseInstance method is not thread-safe.
      NlsClient::releaseInstance();
      return 0;
}