This topic describes data types used to encapsulate audio and video frames in Real-Time Streaming (RTS) APIs.

Definition

Defines data types used to encapsulate audio and video frames.

struct rts_frame
{
    void *buf;              // where frame data is stored
    int size;               // size of frame data in bytes
    int is_audio;           // 1 for audio frame, 0 for video frame
    unsigned long long pts; // presentation time stamp, in ms
    unsigned long long dts; // decoding time stamp, in ms
    int flag;               // for video frame (is_audio == 0)
                            // bit 0: key frame
                            // bit 1: corruption
                            // bit 2: sps
                            // bit 3: sps change
    int duration;           // in ms
    // use this function to free rts_frame object
    void (*free_ptr)(struct rts_frame *);
    unsigned int uid; // reserved.
    struct pusher_delay delay;
    // msid-value = msid-id [ SP msid-appdata ]
    char msid[NET_MSID_MAX_LENGTH + 1];
};
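
For example, a frame delivered by the SDK might be inspected and then released through free_ptr. This is a minimal sketch; the handle_frame function and the logging are illustrative and not part of the RTS API.

#include <stdio.h>

// Hypothetical consumer of a frame delivered by the RTS SDK.
static void handle_frame(struct rts_frame *frm)
{
    if (frm == NULL)
        return;

    if (frm->is_audio) {
        printf("audio frame: %d bytes, pts=%llu ms\n", frm->size, frm->pts);
    } else {
        int is_key = frm->flag & 0x1; // bit 0: key frame
        printf("video frame: %d bytes, pts=%llu ms, key=%d\n",
               frm->size, frm->pts, is_key);
    }

    // Release the frame through its own deallocator, as the API requires.
    frm->free_ptr(frm);
}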

Parameters

Parameter Description
buf The buffer in which the frame data is stored.
size The size of the frame data. Unit: bytes.
is_audio Specifies whether the frame is an audio frame. Valid values:
  • 1: audio frame
  • 0: video frame
pts The presentation timestamp. Unit: milliseconds.
dts The decoding timestamp. Unit: milliseconds.
flag The frame flags. This parameter is valid only when is_audio is 0. The flags are bit values that can be combined (see the sketch after this table):
  • bit 0 (0x1): key frame
  • bit 1 (0x2): corruption
  • bit 2 (0x4): SPS
  • bit 3 (0x8): SPS change
duration The duration of the frame. Unit: milliseconds.
free_ptr The function pointer that is called to release the current rts_frame object. Example: frm->free_ptr(frm);
uid The reserved parameter that specifies the stream to which data belongs.
delay The statistics that record the stream ingest delay, of the pusher_delay type. For more information, see pusher_delay.
msid The media stream identifier (msid) of the frame. Format: msid-value = msid-id [ SP msid-appdata ].
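
Because flag is a bit field, multiple flags can be set on the same frame, and each bit should be tested with a mask. The following sketch illustrates this; the RTS_FRAME_FLAG_* names are chosen here for illustration, and only the bit positions come from the definition above.

#define RTS_FRAME_FLAG_KEY        (1 << 0) // bit 0: key frame
#define RTS_FRAME_FLAG_CORRUPT    (1 << 1) // bit 1: corruption
#define RTS_FRAME_FLAG_SPS        (1 << 2) // bit 2: SPS
#define RTS_FRAME_FLAG_SPS_CHANGE (1 << 3) // bit 3: SPS change

// Decide whether a frame should be fed to the decoder.
static int should_decode(const struct rts_frame *frm)
{
    if (frm->is_audio)
        return 1; // the flags apply to video frames only
    if (frm->flag & RTS_FRAME_FLAG_CORRUPT)
        return 0; // drop damaged video frames
    return 1;
}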