Native RTS SDK is designed for clients that connect players to the Global Real-time Transport Network (GRTN). It pulls audio and video streams from GRTN in real time, processes them, and delivers pulse-code modulation (PCM) audio and H.264 video frames to a player, which then decodes and renders the frames.

Architecture

Native RTS SDK pulls audio and video streams from GRTN, sends them through demuxers and jitter buffers, decodes the audio, and assembles the video into frames. The SDK then places the audio and video frames in a frame queue from which the player pulls them. The player sends audio frames to an audio renderer and video frames to a decoder, and then passes the decoded frames to a video renderer. The video renderer synchronizes the video frames with the audio renderer based on the presentation timestamp (PTS). The following figure shows the architecture.

Figure: Native RTS SDK architecture
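
The sketch below illustrates the player-side half of this pipeline: the player pulls frames from the SDK frame queue, sends PCM audio straight to the audio renderer, and decodes H.264 frames before rendering them against the audio clock. All type and function names in this sketch (media_frame, frame_queue_pop, h264_decode, and so on) are hypothetical placeholders, not part of the Native RTS SDK API.

```c
/*
 * Player-side consumption loop (sketch). Every name below is an
 * illustrative placeholder; a real player supplies its own queue,
 * decoder, and renderers.
 */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool     is_video;   /* true: H.264 access unit, false: PCM samples */
    uint8_t *data;
    int      size;
    int64_t  pts;        /* presentation timestamp used for A/V sync */
} media_frame;

/* Hooks that a real player would implement. */
media_frame *frame_queue_pop(void);            /* blocks until a frame is available */
void         audio_render(media_frame *f);     /* PCM samples -> audio device */
media_frame *h264_decode(media_frame *f);      /* compressed frame -> raw picture */
void         video_render(media_frame *f, int64_t audio_clock);
int64_t      audio_clock_now(void);            /* PTS of the audio currently playing */
void         frame_free(media_frame *f);

void player_loop(void)
{
    for (;;) {
        media_frame *f = frame_queue_pop();    /* pulled from the SDK frame queue */
        if (f == NULL)
            break;

        if (f->is_video) {
            media_frame *raw = h264_decode(f);
            /* The video renderer delays or drops pictures so that the video
             * PTS tracks the audio clock (audio-master synchronization). */
            video_render(raw, audio_clock_now());
            frame_free(raw);
        } else {
            audio_render(f);                   /* PCM audio is rendered directly */
        }
        frame_free(f);
    }
}
```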

On mobile clients, you can integrate Native RTS SDK with the player either by adding the rtsdec.c file of Native RTS SDK for Android or iOS to the player, or by calling the functions described in the programming guide. For more information, see API.
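
The following sketch shows what a function-call integration might look like. The names rts_open, rts_read_frame, and rts_close are placeholders standing in for the calls documented in the API reference; consult that reference for the actual function names and signatures.

```c
/*
 * Function-call integration (sketch). The rts_* declarations below are
 * hypothetical placeholders, not the documented Native RTS SDK API.
 */
#include <stddef.h>

typedef void *rts_handle;

rts_handle rts_open(const char *url);
int        rts_read_frame(rts_handle h, void *buf, size_t buf_size,
                          int *is_video, long long *pts);
void       rts_close(rts_handle h);

int pull_stream(const char *url)
{
    rts_handle h = rts_open(url);   /* for example, an artc:// streaming URL */
    if (h == NULL)
        return -1;

    static unsigned char buf[512 * 1024];
    int is_video = 0;
    long long pts = 0;

    /* Each returned frame is PCM audio or an H.264 access unit, which the
     * player forwards to its frame queue as described in the architecture. */
    while (rts_read_frame(h, buf, sizeof(buf), &is_video, &pts) > 0) {
        /* hand buf, is_video, and pts to the player's frame queue here */
    }

    rts_close(h);
    return 0;
}
```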

Native RTS SDK also provides functions that you can call to set player parameters. For more information, see Set parameters.
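
A minimal parameter-setting sketch is shown below. The rts_set_param function and the parameter keys used here are illustrative assumptions only; see Set parameters for the real function names and the supported parameters.

```c
/* Parameter-setting sketch; rts_set_param and the keys are hypothetical. */
typedef void *rts_handle;

int rts_set_param(rts_handle h, const char *key, const char *value);

void configure_player(rts_handle h)
{
    /* Hypothetical keys: cap jitter buffer latency and raise log verbosity. */
    rts_set_param(h, "jitter_buffer_max_ms", "500");
    rts_set_param(h, "log_level", "info");
}
```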