This topic provides answers to some frequently asked questions that you may encounter when you use Queen SDK.


How does Queen SDK perform?

After years of iteration, Queen SDK strives to be the industry leader in performance, retouching effects, and efficiency. Performance optimization is the top priority: the SDK is repeatedly optimized for low-end devices and earlier operating system versions that are common in Southeast Asia, and after several rounds of optimization it delivers industry-leading performance. Most features offer multiple modes. For example, the retouching feature provides an advanced mode, a power saving mode, and an intelligent adjustment mode so that you can better meet your business requirements. Each version ships with a dedicated multidimensional performance test report and is verified at scale by a large number of internal and external users.

How do I solve the black screen issue?

Causes: In most cases, a black screen appears when an invalid texture ID is used or an error occurs during texture generation. Typical triggers are using the returned texture ID improperly or rendering your layers with an invalid texture ID.

Solution: The rendering layer of Queen SDK is implemented on top of Open Graphics Library (OpenGL). Make sure that the texture ID you pass to Queen SDK is valid; if it is, Queen SDK guarantees that the output texture is displayed correctly. Even if no effect is applied, a parameter is invalid, or you have not applied for authorization to use Queen SDK, the SDK falls back to returning the original texture.

After my application is integrated with Queen SDK, why does the screen flicker in a solid color, and why does the color change when I move the phone even though no image is displayed?

Causes: The toScreen construction parameter of the engine is set to true, the input texture is an OES texture, and rendering is performed by calling render().

Solution: Replace render() with renderTexture(matrix). If the input texture is an OES texture, you must use renderTexture. Obtain the value of the matrix parameter from surfaceTexture.
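The corrected per-frame call sequence might look like the following sketch. The Android calls are shown as comments because they require a live camera and GL context; the names mSurfaceTexture, mQueenEngine, and oesTextureId are placeholder conventions, not part of the documented API, and only renderTexture(matrix) is confirmed by this FAQ.

```java
// Hypothetical per-frame sketch of the fix (placeholder names, assumptions noted above).
float[] transformMatrix = new float[16]; // 4x4 column-major matrix for the OES texture

// mSurfaceTexture.updateTexImage();
// mSurfaceTexture.getTransformMatrix(transformMatrix); // Android API: fills the matrix
// mQueenEngine.setInputTexture(oesTextureId, width, height, /* isOES = */ true);
// mQueenEngine.renderTexture(transformMatrix);         // use this instead of render()
```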

What do I do if the application exits unexpectedly after startup?

Check whether textureId in engine.setInputTexture is correct. If textureId is correct, restart the application.

Why can't the sticker or makeup feature recognize faces when the image is rotated 90 degrees and displayed in landscape mode?

Causes: The toScreen construction parameter of the engine is set to true, the input texture is an OES texture, rendering is performed by calling render(), and the algorithm uses the frame fetch scheme instead of the bytebuffer scheme.

Solution: Use the bytebuffer scheme. Note that the width and height of the rotated buffer are swapped; if you do not account for this, the face image is incorrectly enlarged or shrunk.
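One way to keep the dimensions consistent is to swap them whenever the buffer is rotated by an odd multiple of 90 degrees. The following helper is illustrative and not part of Queen SDK:

```java
// Illustrative helper (not part of Queen SDK): returns the {width, height}
// that describe a camera buffer after it has been rotated by `angle` degrees.
// For 90 or 270 degrees the dimensions swap; for 0 or 180 they do not.
static int[] rotatedSize(int width, int height, int angle) {
    int a = ((angle % 360) + 360) % 360;      // normalize to 0..359
    if (a == 90 || a == 270) {
        return new int[] { height, width };   // dimensions swap
    }
    return new int[] { width, height };
}
```

For example, a 1280 x 720 buffer rotated 90 degrees must be described as 720 x 1280 when it is passed to the algorithm.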

Why are the advanced retouching, makeup, and sticker features ineffective while the basic retouching feature is effective?

If the basic retouching feature works, the initialization, parameter setting, and rendering processes of Queen-engine are valid. Ineffective advanced retouching is usually caused by invalid parameter settings: when the parameters are invalid, faces cannot be recognized, and all effects that require facial landmarks fail. The following list describes common invalid parameter settings:

  • You specify invalid values for the width and height parameters of the input texture in the setInputTexture(int texture, int width, int height, boolean isOES) operation. The values of the width and height parameters must be the width and height of the texture specified by the texture parameter. The isOES parameter specifies whether the texture is an OES texture, which is specific to Android, and determines whether Queen-engine needs to perform texture display conversion. If it does, the correct matrix of the current camera must be passed in through the render call. The width and height that you specify determine the aspect ratio of the displayed rendering view and therefore the scaling ratio of the view to which advanced effects are applied. A common symptom is that the advanced effects are applied but the image is enlarged and deformed.
  • You specify invalid values for the width and height parameters in the updateInputDataAndRunAlg(byte[] imageData, int format, int width, int height, int stride, int inputAngle, int outAngle, int flipAxis, boolean reuseData) operation. These parameters specify the actual width and height of the current input data imageData, which can be the same as the width and height in the preceding scenario or reversed. For example, the buffer obtained directly from the camera on Android devices holds the image rotated 90 degrees, so its width and height are reversed. The width and height that you specify determine whether faces can be recognized and whether the features that depend on facial recognition are available.
  • You specify invalid values for parameters such as inputAngle, outAngle, and flipAxis in the updateInputDataAndRunAlg(byte[] imageData, int format, int width, int height, int stride, int inputAngle, int outAngle, int flipAxis, boolean reuseData) operation, or invalid values in the updateInputTextureBufferAndRunAlg(int inputAngle, int outAngle, int flipAxis, boolean usePreviousFrame) operation. The inputAngle, outAngle, and flipAxis parameters are required for the facial recognition feature. The inputAngle parameter determines how the algorithm consumes the input data or texture: whether the image needs to be rotated and by how many degrees. The outAngle parameter determines how the algorithm renders and displays recognized results: whether the image needs to be rotated and by how many degrees. The flipAxis parameter is an enumeration value defined in Queen-engine that determines whether the final rendered image is flipped symmetrically, and whether the flip is along the X-axis or the Y-axis. These parameters are critical to the facial recognition algorithm and are closely related to the current camera angle and whether the front or rear camera is used. The inputAngle, outAngle, and flipAxis parameters are encapsulated in the tool class QueenCameraHelper.java. You can use these values directly or adjust them for different applications based on your business requirements.
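To illustrate how such angles relate to the camera, the following sketch computes a rotation angle following the standard Android Camera1 setDisplayOrientation() convention. QueenCameraHelper.java encapsulates the actual inputAngle, outAngle, and flipAxis values, so deriving them from this formula is an assumption for illustration, not documented Queen SDK behavior.

```java
// Illustrative rotation computation (Android Camera1 convention).
// sensorOrientation comes from Camera.CameraInfo.orientation;
// displayRotationDegrees is the current screen rotation (0/90/180/270).
static int cameraRotation(int sensorOrientation, int displayRotationDegrees,
                          boolean isFrontCamera) {
    if (isFrontCamera) {
        int result = (sensorOrientation + displayRotationDegrees) % 360;
        return (360 - result) % 360; // compensate for the front-camera mirror
    }
    return (sensorOrientation - displayRotationDegrees + 360) % 360;
}
```

For example, a typical back camera (sensor orientation 90) on a portrait screen (display rotation 0) yields 90 degrees, which matches the 90-degree buffer rotation described above.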