This topic describes the frequently asked questions (FAQ) about using the Natural User Interaction (NUI) SDK.
What can I do if the Intelligent Speech Interaction server returns inaccurate recognition results or returns only a few words for a sentence?
Check whether the sampling rate of your audio matches the sampling rate of the model configured for the selected project in the Intelligent Speech Interaction console. In addition, check whether your speech data is recorded in mono. Only the recording file recognition service can recognize binaural (two-channel) audio.
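You can verify both conditions before sending audio by inspecting the WAV header. The sketch below reads the channel count and sampling rate from a canonical PCM RIFF/WAVE header; it assumes the "fmt " chunk starts at byte 12 and a little-endian host, as on typical mobile targets. The function name and struct are illustrative, not part of the SDK.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <stdexcept>

struct WavFormat {
    uint16_t channels;     // 1 = mono, 2 = stereo
    uint32_t sample_rate;  // e.g. 8000 or 16000, must match the model
};

// Reads channel count and sampling rate from a standard RIFF/WAVE header.
// Assumes the "fmt " chunk is at its canonical offset (byte 12) and a
// little-endian host.
WavFormat readWavFormat(const uint8_t* data, size_t size) {
    if (size < 36 || std::memcmp(data, "RIFF", 4) != 0 ||
        std::memcmp(data + 8, "WAVE", 4) != 0) {
        throw std::runtime_error("not a RIFF/WAVE file");
    }
    WavFormat fmt;
    std::memcpy(&fmt.channels, data + 22, 2);     // wChannels
    std::memcpy(&fmt.sample_rate, data + 24, 4);  // dwSamplesPerSec
    return fmt;
}
```

If `channels` is not 1, or `sample_rate` differs from the model configured in the console, recognition quality problems like those described above are expected.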
What can I do if the SDK fails to be initialized?
Check whether you used a valid AccessKey ID and AccessKey secret to generate a token, and whether the Appkey, token, and workspace parameters are properly set.
Why am I unable to start a recognition task?
The SDK uses a singleton pattern. You must complete the current recognition task and release the SDK before you start another recognition task.
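The one-task-at-a-time rule can be enforced in your own wrapper so a second start attempt fails fast instead of confusing the singleton SDK. The class and method names below are illustrative, not the real SDK API.

```cpp
#include <atomic>

// Minimal sketch of the one-task-at-a-time rule: a guard that refuses to
// start a new recognition task until the previous one has been released.
class RecognizerGuard {
    std::atomic<bool> busy_{false};
public:
    // Returns true if the task may start; false if one is already running.
    bool startTask() {
        bool expected = false;
        return busy_.compare_exchange_strong(expected, true);
    }
    // Call after the current task completes and the SDK is released.
    void releaseTask() { busy_.store(false); }
};
```

With this guard, code that forgets to release the previous task sees `startTask()` return false instead of an opaque SDK failure.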
What can I do if no recognition result is returned?
Check whether the following conditions are met:
The SDK is initialized.
The method used to start the recognition task is called and the vad_mode parameter is properly set.
The callback determined by the value of the AudioState parameter is triggered, the callback returns a response, and recording is enabled.
Generally, if the preceding conditions are met but no recognition result is returned, an EVENT_ASR_ERROR event is reported. You can troubleshoot the failure based on the error code returned in the event.
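In practice this means your event handler should single out EVENT_ASR_ERROR and log its error code for troubleshooting. The event name comes from this FAQ, but the enum, callback shape, and error code value below are assumptions for illustration only.

```cpp
#include <cstdio>

// Illustrative event set: EVENT_ASR_ERROR is named in the FAQ; the enum
// and handler signature here are sketches, not the real SDK types.
enum class NuiEvent { AsrPartialResult, AsrResult, AsrError };

// Returns true when the caller should surface the error code so the
// failure can be looked up in the error-code reference.
bool handleEvent(NuiEvent ev, int errorCode) {
    if (ev == NuiEvent::AsrError) {
        std::printf("recognition failed, error code: %d\n", errorCode);
        return true;
    }
    return false;
}
```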
How can I manage logs and store audio data?
The SDK allows you to manage logs by severity level. You can specify the log configuration when you initialize the SDK.
In addition, the SDK allows you to set the save_wav and debug_path parameters to specify the directory where audio data is stored when you initialize the SDK. For more information, see Overview.
The save_wav and debug_path parameters in the real-time speech recognition service have the same meaning as those in the short speech recognition service.
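Severity-level log management works like any minimum-level filter: a message is emitted only when its severity meets the level configured at initialization. The level names and class below are a sketch of that idea, not the SDK's actual logging interface.

```cpp
// Sketch of severity-level log filtering as described above. The level
// names and the Logger class are illustrative, not the real SDK API.
enum class LogLevel { Verbose = 0, Debug, Info, Warning, Error };

class Logger {
    LogLevel min_;
public:
    explicit Logger(LogLevel min) : min_(min) {}
    // A message passes the filter only if its severity is at least the
    // minimum level configured at initialization.
    bool shouldLog(LogLevel level) const {
        return static_cast<int>(level) >= static_cast<int>(min_);
    }
};
```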
What are the limits on the SDK usage?
The SDK provides you with unified access to Intelligent Speech Interaction. You can call the relevant method and process the reported events in the callbacks to use different speech services. You must check the returned error messages and recognition results. Do not call SDK methods inside the callbacks; otherwise, a deadlock may occur.
What can I do if I am unable to connect to the target framework?
The code in the framework is written in both the Objective-C and C++ programming languages. When you use the SDK, you must use .mm (Objective-C++) files and specify valid search paths for the header files and libraries in your project.