This topic describes the interfaces and basic workflow of the Push SDK for iOS and provides examples of how to use its features.
Features
Supports stream ingest over Real-Time Messaging Protocol (RTMP).
Supports RTS stream ingest and pulling based on Real-Time Communication (RTC).
Supports co-streaming and battles.
Adopts H.264 for video encoding and AAC for audio encoding.
Supports custom configurations for features such as bitrate control, resolution, and display mode.
Supports various camera operations.
Supports real-time retouching and custom retouching effects.
Allows you to add and remove animated stickers as watermarks.
Allows you to stream screen recordings.
Supports external audio and video inputs in different formats such as YUV and pulse-code modulation (PCM).
Supports mixing of multiple streams.
Supports ingest of audio-only and video-only streams and stream ingest in the background.
Supports background music.
Supports video snapshot capture.
Supports automatic reconnection and error handling.
Supports Automatic Gain Control (AGC), Automatic Noise Reduction (ANR), and Acoustic Echo Cancellation (AEC) algorithms.
Allows you to switch between software and hardware video encoding modes, which improves the stability of the encoding module.
Limitations
Take note of the following limits before you use the Push SDK for iOS:
You must configure screen orientation before stream ingest. You cannot rotate the screen during live streaming.
You must disable auto screen rotation for stream ingest in landscape mode.
In hardware encoding mode, the output resolution must be a multiple of 16 to be compatible with the encoder. For example, if you set the resolution to 540p, the output resolution is 544 × 960 because 540 is rounded up to 544, the nearest multiple of 16. You must scale the screen size of the player based on the output resolution to prevent black bars.
Procedure
The basic procedure involves the steps that are described in the following sections.
Feature usage
Register the SDK
For more information about how to apply for and configure a license, see Integrate a license for the Push SDK.
You must register the SDK before you start stream ingest. Otherwise, you cannot use the features of the Push SDK.
Call the license registration interface at an early stage before you use the Push SDK.
[AlivcLiveBase registerSDK];
The AlivcLiveBase class lets you set log levels, set local log paths, and retrieve the SDK version.
You can register an observer by using [AlivcLiveBase setObserver:] and asynchronously check whether the license is configured in the observer's onLicenceCheck callback.
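The following is a minimal sketch of registering the SDK and handling the license check result. The observer protocol name, callback signature, and result code are taken from the AlivcLiveBaseObserver protocol and may differ across SDK versions, so verify them against your SDK header:
@interface LivePushSetup : NSObject <AlivcLiveBaseObserver>
@end
@implementation LivePushSetup
- (void)registerLiveSDK {
    [AlivcLiveBase setObserver:self]; // Register the observer before you call registerSDK.
    [AlivcLiveBase registerSDK];
}
/* Called asynchronously with the license check result. */
- (void)onLicenceCheck:(AlivcLiveLicenseCheckResultCode)result reason:(NSString *)reason {
    if (result != AlivcLiveLicenseCheckResultCodeSuccess) {
        NSLog(@"License check failed: %@", reason);
    }
}
@end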
Configure stream ingest parameters
In the ViewController where you need the stream ingest engine, import the header file #import <AlivcLivePusher/AlivcLivePusher.h>.
Basic stream ingest parameters have default values. We recommend that you use the default values, which lets you perform a simple initialization without extra configuration.
AlivcLivePushConfig *config = [[AlivcLivePushConfig alloc] init];// Initialize the stream ingest configuration class. You can also use initWithResolution to initialize.
config.resolution = AlivcLivePushResolution540P;// The default value is 540P. The maximum supported resolution is 720P.
config.fps = AlivcLivePushFPS20; // We recommend that you use 20 fps.
config.enableAutoBitrate = true; // Enable bitrate control. The default value is true.
config.videoEncodeGop = AlivcLivePushVideoEncodeGOP_2;// The default value is 2. A larger GOP interval results in higher latency. We recommend that you set this to 1 or 2.
config.connectRetryInterval = 2000; // Unit: milliseconds. The default reconnection interval is 2 seconds. Set the interval to at least 1 second. We recommend that you use the default value.
config.previewMirror = false; // The default value is false. In normal cases, select false.
config.orientation = AlivcLivePushOrientationPortrait; // The default is portrait mode. For landscape mode, you can set the home button to be on the left or right.
For optimal mobile phone performance and to meet network bandwidth requirements, we recommend that you set the resolution to 540P. Most mainstream live streaming apps use this resolution.
If you disable bitrate control, the bitrate is fixed at the initial bitrate and does not automatically adjust between the set target and minimum bitrates. If network conditions are unstable, this may cause playback stuttering. Use this option with caution.
Ingest a camera stream
Initialize.
After you configure the stream ingest parameters, you can use the initWithConfig method of the Push SDK to initialize it. The following is sample code:
self.livePusher = [[AlivcLivePusher alloc] initWithConfig:config];
Note: AlivcLivePusher does not support multiple instances. Each init call must have a corresponding destroy call.
Register stream ingest callbacks.
The following stream ingest callbacks are supported:
Info: the callbacks that are used for notifications and status detection.
Error: the callbacks that are returned when errors occur.
Network: the callbacks that are related to network.
Register the delegate to receive the corresponding callbacks. The following is sample code:
[self.livePusher setInfoDelegate:self];
[self.livePusher setErrorDelegate:self];
[self.livePusher setNetworkDelegate:self];
Start preview.
After the livePusher object is initialized, you can start the preview. You must pass the display view for the camera preview, which inherits from UIView. The following is sample code:
[self.livePusher startPreview:self.view];
Start stream ingest.
You can start stream ingest only after the preview is successful. Listen for the onPreviewStarted callback of AlivcLivePusherInfoDelegate. Add the following code inside the callback.
[self.livePusher startPushWithURL:@"Test ingest URL (rtmp://......)"];NoteThe ingest URL supports RTMP and Real-Time Streaming (RTS) (artc://) protocols. For more information about how to obtain the URL, see Generate ingest and streaming URLs.
ApsaraVideo Live does not support ingesting multiple streams to the same ingest URL at the same time. The second stream ingest will be rejected.
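The following sketch places the startPush call inside the onPreviewStarted callback of AlivcLivePusherInfoDelegate, as described above. The ingest URL is a placeholder:
- (void)onPreviewStarted:(AlivcLivePusher *)pusher {
    // The preview is ready, so it is now safe to start stream ingest.
    [self.livePusher startPushWithURL:@"rtmp://your-ingest-url"];
}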
Camera-related operations
You can call camera-related operations only after the preview starts. These operations, which include switching cameras, controlling the flash, adjusting focus, zooming, and setting mirroring, can be performed during stream ingest, or in paused or reconnection states. Calling these interfaces before the preview starts has no effect. The following is sample code:
/* Switch between the front and rear cameras */
[self.livePusher switchCamera];
/* Turn the flash on or off. Turning on the flash is not effective for the front camera. */
[self.livePusher setFlash:false];
/* Adjust the focal length to zoom the captured image. A positive parameter value zooms in, and a negative value zooms out. */
CGFloat max = [_livePusher getMaxZoom];
[self.livePusher setZoom:MIN(1.0, max)];
/* Manually focus. Manual focus requires two parameters: 1. point: the coordinates of the point to focus on; 2. autoFocus: whether to autofocus. This parameter affects only this focus operation. Subsequent autofocus behavior follows the setting of the setAutoFocus interface below. */
[self.livePusher focusCameraAtAdjustedPoint:CGPointMake(50, 50) autoFocus:true];
/* Set whether to autofocus */
[self.livePusher setAutoFocus:false];
/* Mirror settings. There are two mirror-related interfaces: PushMirror for ingest mirroring and PreviewMirror for preview mirroring. The PushMirror setting only affects the playback screen. The PreviewMirror setting only affects the preview screen. The two do not affect each other. */
[self.livePusher setPushMirror:false];
[self.livePusher setPreviewMirror:false];
Stream ingest control
Stream ingest control includes operations such as starting, stopping, and destroying stream ingest, stopping the preview, restarting stream ingest, and pausing and resuming camera stream ingest. You can add buttons to perform these operations based on your business needs. The following is sample code:
/* Users can set pauseImage and then call the pause interface to switch from camera stream ingest to static image stream ingest. Audio stream ingest continues. */
[self.livePusher pause];
/* Switch from static image stream ingest to camera stream ingest. Audio stream ingest continues. */
[self.livePusher resume];
/* You can call stopPush during stream ingest. After completion, stream ingest stops. */
[self.livePusher stopPush];
/* You can call stopPreview only in the preview state. Calling stopPreview during stream ingest has no effect. After the preview stops, the preview screen freezes on the last frame. */
[self.livePusher stopPreview];
/* You can call restartPush during stream ingest or when any Error-related callback is received. In an Error state, you can only call this interface (or reconnectPushAsync for reconnection) or call destroy to destroy the stream ingest. After completion, stream ingest restarts, and all internal resources of AlivcLivePusher, including the preview and ingest, are restarted. */
[self.livePusher restartPush];
/* You can call this interface during stream ingest or when an AlivcLivePusherNetworkDelegate-related Error callback is received. In an Error state, you can only call this interface (or restartPush to restart stream ingest) or call destroy to destroy the stream ingest. After completion, the stream ingest reconnects to the RTMP server. */
[self.livePusher reconnectPushAsync];
/* After destroying the stream ingest, the ingest and preview stop, and the preview screen is removed. All resources related to AlivcLivePusher are destroyed. */
[self.livePusher destory];
self.livePusher = nil;
/* Get the stream ingest status. */
AlivcLivePushStatus status = [self.livePusher getLiveStatus];
Ingest a screen sharing stream
ReplayKit is a screen recording feature introduced in iOS 9. iOS 10 added the ability to call third-party app extensions to live stream screen content. On iOS 10 and later, you can use the Push SDK with an Extension screen recording process to achieve screen sharing for live streaming.
To ensure smooth system operation, iOS allocates relatively few resources to the Extension screen recording process. If the Extension process uses too much memory, the system forcibly terminates it. To work around the memory limit of the Extension process, the Push SDK divides screen recording and ingest into an Extension screen recording process (Extension App) and a main app process (Host App). The Extension App captures screen content and sends it to the Host App through inter-process communication. The Host App creates the AlivcLivePusher stream ingest engine and pushes the screen data to the remote server. Because the entire stream ingest process is completed in the Host App, the Host App can also handle microphone capture and sending. The Extension App is responsible only for screen content capture.
The Push SDK demo uses an App Group to enable inter-process communication between the Extension App and the Host App. This logic is encapsulated in the AlivcLibReplayKitExt.framework.
On iOS, the system creates the Extension screen recording process when screen recording is required, and this process receives the screen images that the system captures. To implement screen stream ingest, perform the following steps:
Create an App Group.
Log on to Apple Developer and perform the following operations:
On the Certificates, Identifiers & Profiles page, register an App Group. For more information, see Register an App Group.
Go back to the Identifiers page, select App IDs, and then click your App ID to enable the App Group feature. You must configure the App IDs for the host app and the screen recording extension in the same way. For more information, see Enable App Group.
After you finish, re-download the corresponding Provisioning Profile and configure it in Xcode.
After you correctly complete the operations, the Extension App can communicate with the Host App.
Note: After you create the App Group, save the App Group Identifier value for use in subsequent steps.
Create an Extension screen recording process.
The Push SDK for iOS demo includes the AlivcLiveBroadcast and AlivcLiveBroadcastSetupUI app extensions, which support screen sharing for live streaming. To create the Extension screen recording process in the app, perform the following steps:
In your existing project, choose File > New > Target, and select Broadcast Upload Extension.
Modify the Product Name, select Include UI Extension, and click Finish to create the broadcast extension and its UI.
Configure the broadcast extension's Info.plist. In the newly created target, Xcode creates a header file and a source file named SampleHandler by default.
Drag AlivcLibReplayKitExt.framework into the project so that the Extension Target depends on it.
Replace the code in SampleHandler.m with the following code, and replace kAPPGROUP in the code with the App Group Identifier that you created in the first step. The following is sample code:
#import "SampleHandler.h"
#import <AlivcLibReplayKitExt/AlivcLibReplayKitExt.h>

@implementation SampleHandler

- (void)broadcastStartedWithSetupInfo:(NSDictionary<NSString *,NSObject *> *)setupInfo {
    // The user has requested to start the broadcast. Setup info from the UI extension can be supplied but is optional.
    [[AlivcReplayKitExt sharedInstance] setAppGroup:kAPPGROUP];
}

- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
    if (sampleBufferType != RPSampleBufferTypeAudioMic) {
        // Audio is captured and sent by the Host App.
        [[AlivcReplayKitExt sharedInstance] sendSampleBuffer:sampleBuffer withType:sampleBufferType];
    }
}

- (void)broadcastFinished {
    [[AlivcReplayKitExt sharedInstance] finishBroadcast];
}

@end
This creates a Broadcast Upload Extension target in your project, with the AlivcLibReplayKitExt.framework that is customized for the screen recording extension module integrated into it.
Integrate the Push SDK into the Host App.
In the Host App, create the AlivcLivePushConfig and AlivcLivePusher objects. Set `externMainStream` to `true` and `audioFromExternal` to `false`, which indicates that audio is still captured by the SDK. Call `startScreenCapture` to start receiving screen data from the Extension App, and then start and stop stream ingest. For details, follow these steps:
In the Host App, add dependencies on AlivcLivePusher.framework, AlivcLibRtmp.framework, RtsSDK.framework, and AlivcLibReplayKitExt.framework.
Initialize the Push SDK and configure it to use an external video source.
Set `externMainStream` to `true` and `externVideoFormat` to `AlivcLivePushVideoFormatYUV420P`. To use the SDK's internal audio capture, set `audioFromExternal` to `false`. Configure the other stream ingest parameters. The following is sample code:
self.pushConfig.externMainStream = true;
self.pushConfig.externVideoFormat = AlivcLivePushVideoFormatYUV420P;
self.pushConfig.audioSampleRate = 44100;
self.pushConfig.audioChannel = 2;
self.pushConfig.audioFromExternal = false;
self.pushConfig.videoEncoderMode = AlivcLivePushVideoEncoderModeSoft;
self.pushConfig.qualityMode = AlivcLivePushQualityModeCustom;
self.pushConfig.targetVideoBitrate = 2500;
self.pushConfig.minVideoBitrate = 2000;
self.pushConfig.initialVideoBitrate = 2000;
self.livePusher = [[AlivcLivePusher alloc] initWithConfig:self.pushConfig];
Use AlivcLivePusher to implement live streaming by calling the following functions:
Start receiving screen recording data.
Replace kAPPGROUP in the code with the App Group Identifier that you created earlier. The following is sample code:
[self.livePusher startScreenCapture:kAPPGROUP];
Start stream ingest.
The following is sample code:
[self.livePusher startPushWithURL:self.pushUrl];
Stop stream ingest.
The following is sample code:
[self.livePusher stopPush];
[self.livePusher destory];
self.livePusher = nil;
Configure preview display mode
The Push SDK supports three preview modes. The preview display mode does not affect stream ingest.
ALIVC_LIVE_PUSHER_PREVIEW_SCALE_FILL: The preview fills the window. If the video and window aspect ratios do not match, the preview will be distorted.
ALIVC_LIVE_PUSHER_PREVIEW_ASPECT_FIT: The preview maintains the video's aspect ratio. If the video and window aspect ratios do not match, black bars will appear. This is the default mode.
ALIVC_LIVE_PUSHER_PREVIEW_ASPECT_FILL: The preview crops the video to fit the window's aspect ratio. If the video and window aspect ratios do not match, the preview will be cropped.
The following is sample code:
config.previewDisplayMode = ALIVC_LIVE_PUSHER_PREVIEW_ASPECT_FIT;
You can set the three modes in AlivcLivePushConfig. You can also dynamically set them during preview and stream ingest by using the `setpreviewDisplayMode` API.
This setting affects only the preview display. The resolution of the ingested video stream matches the resolution that is preset in AlivcLivePushConfig and does not change when the preview display mode is changed. The preview display mode is designed to adapt to different phone sizes, which lets you freely choose the preview effect.
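For example, the following call switches the display mode while the preview or stream ingest is running. It assumes the enum constants listed above; verify the method name (note the lowercase p in setpreviewDisplayMode) against your SDK header:
/* Switch the preview display mode during preview or stream ingest. */
[self.livePusher setpreviewDisplayMode:ALIVC_LIVE_PUSHER_PREVIEW_ASPECT_FIT];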
Ingest an image
For a better user experience, the SDK provides settings for background image ingest and for image ingest when the bitrate is too low. When the app is moved to the background, the SDK pauses video stream ingest by default and ingests only audio. At this time, you can set an image for stream ingest. For example, you can display an image to remind users that the streamer is away for a moment and will be back soon. The following is sample code:
config.pauseImg = [UIImage imageNamed:@"image.png"];// Set the image for background stream ingest.
Additionally, when the network is poor, you can set a static image to be ingested based on your needs. After you set the image, the SDK ingests this image when it detects a low bitrate, which helps avoid video stuttering. The following is sample code:
config.networkPoorImg = [UIImage imageNamed:@"image.png"];// Set the image to ingest when the network is poor.
Push external audio and video streams
The Push SDK supports ingesting external audio and video sources, such as an audio or video file.
Configure external audio and video input in the stream ingest configuration.
Insert external video data.
Insert audio data.
The following is sample code:
config.externMainStream = true;// Enable external stream input.
config.externVideoFormat = AlivcLivePushVideoFormatYUVNV21;// Set the video data color format. Here it is set to YUVNV21. You can set it to other formats as needed.
config.externAudioFormat = AlivcLivePushAudioFormatS16;// Set the audio data bit depth format. Here it is set to S16. You can set it to other formats as needed.
The following is a code example:
/* Only continuous buffer data of an external video in the YUV or RGB format can be sent using the sendVideoData method. You can use this method to send the video buffer, length, width, height, timestamp, and rotation angle.*/
[self.livePusher sendVideoData:yuvData width:720 height:1280 size:dataSize pts:nowTime rotation:0];
/* If the external video data is in the CMSampleBufferRef format, you can use the sendVideoSampleBuffer method.*/
[self.livePusher sendVideoSampleBuffer:sampleBuffer];
/* You can also convert the CMSampleBufferRef format to a continuous buffer and then pass the buffer to the sendVideoData method. The following code provides an example on how to perform the conversion:*/
// Query the length of a sample buffer.
- (int)getVideoSampleBufferSize:(CMSampleBufferRef)sampleBuffer {
    if (!sampleBuffer) {
        return 0;
    }
    int size = 0;
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    if (CVPixelBufferIsPlanar(pixelBuffer)) {
        int count = (int)CVPixelBufferGetPlaneCount(pixelBuffer);
        for (int i = 0; i < count; i++) {
            int height = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, i);
            int stride = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, i);
            size += stride * height;
        }
    } else {
        int height = (int)CVPixelBufferGetHeight(pixelBuffer);
        int stride = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);
        size += stride * height;
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return size;
}
// Convert a sample buffer to a continuous buffer.
- (int)convertVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer toNativeBuffer:(void *)nativeBuffer {
    if (!sampleBuffer || !nativeBuffer) {
        return -1;
    }
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    int size = 0;
    if (CVPixelBufferIsPlanar(pixelBuffer)) {
        int count = (int)CVPixelBufferGetPlaneCount(pixelBuffer);
        for (int i = 0; i < count; i++) {
            int height = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, i);
            int stride = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, i);
            void *buffer = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, i);
            int8_t *dstPos = (int8_t *)nativeBuffer + size;
            memcpy(dstPos, buffer, stride * height);
            size += stride * height;
        }
    } else {
        int height = (int)CVPixelBufferGetHeight(pixelBuffer);
        int stride = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);
        void *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);
        size += stride * height;
        memcpy(nativeBuffer, buffer, size);
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return 0;
}
The following is sample code:
/* Only continuous buffer data in external PCM format is supported. Use sendPCMData to send the audio data buffer, length, and timestamp. */
[self.livePusher sendPCMData:pcmData size:size pts:nowTime];
Configure watermarks
The Push SDK provides a feature to add watermarks. You can add multiple watermarks. The watermark images must be in PNG format. The following is sample code:
NSString *watermarkBundlePath = [[NSBundle mainBundle] pathForResource:@"watermark" ofType:@"png"];// Set the path of the watermark image.
[config addWatermarkWithPath:watermarkBundlePath
             watermarkCoordX:0.1
             watermarkCoordY:0.1
              watermarkWidth:0.3];// Add a watermark.
The `coordX`, `coordY`, and `width` parameters are relative values. For example, `watermarkCoordX:0.1` means that the x-coordinate of the watermark is at 10% of the width of the ingest screen. If the ingest resolution is 540 × 960, the x-coordinate of the watermark is 54.
The height of the watermark image is scaled proportionally based on the actual width and height of the watermark image and the input width value.
To implement a text watermark, you can first convert the text to an image, and then use this interface to add the watermark.
To ensure the clarity and edge smoothness of the watermark, use a source watermark image that is the same size as the output watermark. For example, if the output video resolution is 544 × 960 and the watermark display width is 0.1, use a source watermark image with a width of approximately 544 × 0.1 = 54.4.
Set video quality
Video quality supports three modes: Resolution Priority, Fluency Priority, and Custom.
To set the video quality, you must enable bitrate control: config.enableAutoBitrate = true;
Resolution Priority (default)
In Resolution Priority mode, the SDK internally configures the bitrate parameters to prioritize the clarity of the ingested video.
config.qualityMode = AlivcLivePushQualityModeResolutionFirst;// Resolution Priority mode
Fluency Priority
In Fluency Priority mode, the SDK internally configures the bitrate parameters to prioritize the fluency of the ingested video.
config.qualityMode = AlivcLivePushQualityModeFluencyFirst; // Fluency Priority mode
Custom mode
In Custom mode, the SDK configures the bitrate based on your settings. When you set the mode to Custom, you must define the initial bitrate, minimum bitrate, and target bitrate.
Initial bitrate: The bitrate when the live stream starts.
Minimum bitrate: When the network is poor, the bitrate gradually decreases to the minimum bitrate to reduce video stuttering.
Target bitrate: When the network is good, the bitrate gradually increases to the target bitrate to improve video clarity.
config.qualityMode = AlivcLivePushQualityModeCustom;// Set to Custom mode.
config.targetVideoBitrate = 1400; // Target bitrate: 1400 kbit/s
config.minVideoBitrate = 600; // Minimum bitrate: 600 kbit/s
config.initialVideoBitrate = 1000; // Initial bitrate: 1000 kbit/s
When you set custom bitrates, refer to the settings recommended by Alibaba Cloud in the following tables:
Table 1. Recommended settings for Resolution Priority mode
Resolution | initialVideoBitrate (kbit/s) | minVideoBitrate (kbit/s) | targetVideoBitrate (kbit/s) |
360p | 600 | 300 | 1000 |
480p | 800 | 300 | 1200 |
540p | 1000 | 600 | 1400 |
720p | 1500 | 600 | 2000 |
1080p | 1800 | 1200 | 2500 |
Table 2. Recommended settings for Fluency Priority mode
Resolution | initialVideoBitrate (kbit/s) | minVideoBitrate (kbit/s) | targetVideoBitrate (kbit/s) |
360p | 400 | 200 | 600 |
480p | 600 | 300 | 800 |
540p | 800 | 300 | 1000 |
720p | 1000 | 300 | 1200 |
1080p | 1500 | 1200 | 2200 |
Adaptive resolution
After you enable the feature to dynamically adjust the ingest resolution, the resolution is automatically lowered when the network is poor to improve video fluency and clarity. The following is sample code:
config.enableAutoResolution = YES; // Enable adaptive resolution. The default is NO.
Adaptive resolution only takes effect when the video quality mode is set to Resolution Priority or Fluency Priority. It is not effective in Custom mode.
Some players may not support dynamic resolution. To use the adaptive resolution feature, we recommend using ApsaraVideo Player.
Background music
The Push SDK provides features such as background music playback, mixing, noise reduction, in-ear monitoring, and muting. Background music-related interfaces can be called only after the preview starts. The following is sample code:
/* Start playing background music. */
[self.livePusher startBGMWithMusicPathAsync:musicPath];
/* Stop playing background music. If BGM is currently playing and you need to switch songs, simply call the start background music interface. You do not need to stop the currently playing background music. */
[self.livePusher stopBGMAsync];
/* Pause background music playback. This interface can be called only after background music starts playing. */
[self.livePusher pauseBGM];
/* Resume background music playback. This interface can be called only when background music is paused. */
[self.livePusher resumeBGM];
/* Enable loop playback for music. */
[self.livePusher setBGMLoop:true];
/* Set the noise reduction switch. When noise reduction is enabled, non-human-voice parts of the captured sound are filtered out. This may slightly suppress the human voice. We recommend that you let users choose whether to enable noise reduction. It is disabled by default. */
[self.livePusher setAudioDenoise:true];
/* Set the in-ear monitoring switch. The in-ear monitoring feature is mainly used in KTV scenarios. After turning on in-ear monitoring, the streamer's voice will be heard in the headphones when they are plugged in. When turned off, the human voice cannot be heard in the headphones. In-ear monitoring does not work without headphones. */
[self.livePusher setBGMEarsBack:true];
/* Mixing settings, providing volume adjustment for background music and captured human voice. */
[self.livePusher setBGMVolume:50];// Set the background music volume.
[self.livePusher setCaptureVolume:50];// Set the human voice capture volume.
/* Set mute. When muted, both music and human voice input will be silenced. To mute music or human voice separately, you can use the mixing volume setting interface. */
[self.livePusher setMute:isMute];
Capture snapshots
The Push SDK provides a feature to capture snapshots of the local video stream. The following is sample code:
/* Set the snapshot callback. */
[self.livePusher setSnapshotDelegate:self];
/* Call the snapshot API. */
[self.livePusher snapshot:1 interval:1];
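To receive the captured image, implement the snapshot delegate. The callback below is a sketch based on AlivcLivePusherSnapshotDelegate; verify the exact signature in the SDK header:
/* Receive the snapshot result. */
- (void)onSnapshot:(AlivcLivePusher *)pusher image:(UIImage *)image {
    // Save or display the captured image here.
}
Configure the retouching feature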
The Alibaba Cloud Push SDK offers two retouching modes: Basic and Advanced. Basic retouching supports whitening, skin smoothing, and adding a rosy complexion. Advanced retouching supports face-based whitening, skin smoothing, rosy complexion, eye enlargement, face narrowing, and face slimming. This feature is provided by the Queen SDK. The following is sample code:
#pragma mark - "Retouching type and parameter APIs"/**
* @brief Turn a retouching type on or off.
* @param type A value of QueenBeautyType.
* @param isOpen YES: On, NO: Off.
*
*/
- (void)setQueenBeautyType:(kQueenBeautyType)type enable:(BOOL)isOpen;
/**
* @brief Set retouching parameters.
* @param param A retouching parameter type, one from QueenBeautyParams.
* @param value The value to set. The range is [0,1]. Values less than 0 are set to 0, and values greater than 1 are set to 1.
*/
- (void)setQueenBeautyParams:(kQueenBeautyParams)param
value:(float)value;
#pragma mark - "Filter-related APIs"
/**
* @brief Set a filter image. Before setting a filter image, you need to turn on kQueenBeautyTypeLUT.
* @param imagePath The path of the filter image to set.
*/
- (void)setLutImagePath:(NSString *)imagePath;
#pragma mark - "Face shaping-related APIs"
/**
*@brief Set a face shaping type. Before setting, you need to turn on kQueenBeautyTypeFaceShape.
*@param faceShapeType The type of face shaping to set, refer to QueenBeautyFaceShapeType.
*@param value The value to set.
*/
- (void)setFaceShape:(kQueenBeautyFaceShapeType)faceShapeType
value:(float)value;
#pragma mark - "Makeup-related APIs"
/**
* @brief Set a makeup type and the path to the image material. To set makeup, you need to turn on kQueenBeautyTypeMakeup.
* @param makeupType The makeup type.
* @param imagePaths A collection of paths to the makeup materials.
* @param blend The blend type.
*/
- (void)setMakeupWithType:(kQueenBeautyMakeupType)makeupType
paths:(NSArray<NSString *> *)imagePaths
blendType:(kQueenBeautyBlend)blend;
/**
* @brief Set a makeup type and the path to the image material.
* @param makeupType The makeup type.
* @param imagePaths A collection of paths to the makeup materials.
* @param blend The blend type.
* @param fps The corresponding frame rate.
*/
- (void)setMakeupWithType:(kQueenBeautyMakeupType)makeupType
paths:(NSArray<NSString *> *)imagePaths
blendType:(kQueenBeautyBlend)blend fps:(int)fps;
/**
* @brief Set the transparency of a makeup type. You can specify the gender.
* @param makeupType The makeup type.
* @param isFeMale Whether the gender is female. YES: female, NO: male.
* @param alpha The transparency of the makeup.
*/
- (void)setMakeupAlphaWithType:(kQueenBeautyMakeupType)makeupType
female:(BOOL)isFeMale alpha:(float)alpha;
/**
* @brief Set the blend type for a makeup type.
* @param makeupType The makeup type.
* @param blend The blend type.
*/
- (void)setMakeupBlendWithType:(kQueenBeautyMakeupType)makeupType
blendType:(kQueenBeautyBlend)blend;
/**
* @brief Clear all makeup.
*/
- (void)resetAllMakeupType;
Adjust retouching parameters in real time
The Push SDK supports real-time adjustment of retouching parameters during stream ingest. You can turn on the retouching switch and adjust the corresponding parameter values. The following is sample code:
[_queenEngine setQueenBeautyType:kQueenBeautyTypeSkinBuffing enable:YES];
[_queenEngine setQueenBeautyType:kQueenBeautyTypeSkinWhiting enable:YES];
[_queenEngine setQueenBeautyParams:kQueenBeautyParamsWhitening value:0.8f];
[_queenEngine setQueenBeautyParams:kQueenBeautyParamsSharpen value:0.6f];
[_queenEngine setQueenBeautyParams:kQueenBeautyParamsSkinBuffing value:0.6];
Configure the live quiz feature
The live quiz feature can be implemented by inserting SEI messages into the live stream, which are then parsed by the player. The Push SDK provides an interface to insert SEI. This interface can only be called during stream ingest. The following is sample code:
/*
msg: The SEI message body to be inserted into the stream. A JSON format is recommended. The ApsaraVideo Player SDK can receive this SEI message, parse it, and display it.
repeatCount: The number of frames to send. To ensure that the SEI is not lost, you need to set the number of repetitions. For example, if set to 100, this SEI message will be inserted into the next 100 frames. The player will remove duplicate SEI messages.
delayTime: The delay in milliseconds before sending.
KeyFrameOnly: Whether to send only on keyframes.
*/
[self.livePusher sendMessage:@"Question information" repeatCount:100 delayTime:0 KeyFrameOnly:false];iPhone X adaptation
In general scenarios, setting the preview view's frame to full screen allows for normal previewing. However, due to the unique screen ratio of the iPhone X, setting the preview view to the full screen size on an iPhone X causes image stretching. We recommend that you do not use a full-screen view for the preview on an iPhone X.
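One possible adaptation, shown below as a sketch, is to size the preview view to the ingest aspect ratio (9:16 for 540P portrait) and center it vertically instead of filling the iPhone X screen. The layout values are illustrative:
CGRect screen = [UIScreen mainScreen].bounds;
CGFloat previewHeight = screen.size.width * 16.0 / 9.0; // Match the 9:16 ingest aspect ratio.
CGFloat offsetY = (screen.size.height - previewHeight) / 2.0; // Center the preview vertically.
UIView *previewView = [[UIView alloc] initWithFrame:CGRectMake(0, offsetY, screen.size.width, previewHeight)];
[self.view addSubview:previewView];
[self.livePusher startPreview:previewView];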
Change the view size during stream ingest
You can iterate through the UIView that you assigned when calling the `startPreview` or `startPreviewAsync` interface and change the frame of all subviews of the preview view. For example:
[self.livePusher startPreviewAsync:self.previewView];
for (UIView *subView in [self.previewView subviews]) {
    subView.frame = self.previewView.bounds; // Resize each preview subview to the new view size.
}
Play external sound effects
To play sound effects or music on the stream ingest page, we recommend using AVAudioPlayer because the SDK currently has a conflict with AudioServicesPlaySystemSound. After playback, you need to update the AVAudioSession and AVAudioPlayer settings. The following is sample code for playing sound effects:
- (void)setupAudioPlayer {
    NSString *filePath = [[NSBundle mainBundle] pathForResource:@"sound" ofType:@"wav"];
    NSURL *fileUrl = [NSURL fileURLWithPath:filePath]; // Use a file URL for a local resource.
    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:fileUrl error:nil];
    self.player.volume = 1.0;
    [self.player prepareToPlay];
}
- (void)playAudio {
    self.player.volume = 1.0;
    [self.player play];
    // Configure AVAudioSession.
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setMode:AVAudioSessionModeVideoChat error:nil];
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil];
    [session setCategory:AVAudioSessionCategoryPlayAndRecord
             withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker | AVAudioSessionCategoryOptionAllowBluetooth | AVAudioSessionCategoryOptionMixWithOthers
                   error:nil];
    [session setActive:YES error:nil];
}
Background mode and phone calls
The SDK handles background-related processes internally, so you do not need to take any action. When the app enters the background, the SDK continues to ingest audio by default, while the video freezes on the last frame. You must enable the Background Modes option in your app's Capabilities and select Audio, AirPlay, and Picture in Picture so that the app can capture audio normally in the background.
If you do not need to maintain audio stream ingest when the app is in the background, you can stop stream ingest when the app enters the background and resume it when the app returns to the foreground. To do this, destroy the stream ingest engine when the app enters the background and recreate it to resume stream ingest when the app returns to the foreground.
In this approach, listen for `UIApplicationWillResignActiveNotification` to detect when the app enters the background and `UIApplicationDidBecomeActiveNotification` to detect when it returns to the foreground. Other methods may pose risks.
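The following sketch shows this approach; the rebuild helper at the end is a placeholder for your own re-initialization logic:
- (void)viewDidLoad {
    [super viewDidLoad];
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(appWillResignActive:) name:UIApplicationWillResignActiveNotification object:nil];
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(appDidBecomeActive:) name:UIApplicationDidBecomeActiveNotification object:nil];
}
- (void)appWillResignActive:(NSNotification *)notification {
    [self.livePusher stopPush];
    [self.livePusher stopPreview];
    [self.livePusher destory]; // Note: the SDK method name is spelled destory.
    self.livePusher = nil;
}
- (void)appDidBecomeActive:(NSNotification *)notification {
    // Placeholder: recreate the engine, then restart the preview and ingest.
    [self rebuildPusherAndRestartPush];
}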
Listen for callbacks
The Push SDK mainly includes the following callbacks:
Callback type | Callback class name |
Stream ingest callbacks | AlivcLivePusherInfoDelegate |
Network-related callbacks | AlivcLivePusherNetworkDelegate |
Error callbacks | AlivcLivePusherErrorDelegate |
Background music callbacks | AlivcLivePusherBGMDelegate |
External retouching and filter callbacks | AlivcLivePusherCustomFilterDelegate |
Stream ingest callbacks
Stream ingest callbacks are used to notify the app of the SDK's status, including callbacks for preview start, rendering the first video frame, sending the first audio/video frame, stream ingest start, and stream ingest stop.
onPushStarted: Indicates a successful connection to the server.
onFirstFramePushed: Indicates that the first audio/video frame was sent successfully.
Receiving both onPushStarted and onFirstFramePushed indicates that the SDK stream ingest was successful.
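A minimal sketch of these two AlivcLivePusherInfoDelegate callbacks; the log messages are placeholders for your own handling:
- (void)onPushStarted:(AlivcLivePusher *)pusher {
    NSLog(@"Connected to the ingest server.");
}
- (void)onFirstFramePushed:(AlivcLivePusher *)pusher {
    NSLog(@"First audio/video frame sent. Stream ingest is working.");
}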
Network-related callbacks
Network-related callbacks notify the app of the SDK's network and connection status. For short network fluctuations or switches within the reconnection timeout and attempt limits that are set in AlivcLivePushConfig, the SDK automatically reconnects. After a successful reconnection, stream ingest continues.
onConnectFail: Indicates that stream ingest failed. Check whether the ingest URL is invalid, contains invalid characters, has authentication issues, exceeds the maximum concurrent stream ingest limit, or is on the blacklist. Make sure the ingest URL is valid and available before you try to ingest again. Specific error codes include 0x30020901 to 0x30020905, and 0x30010900 to 0x30010901.
onConnectionLost: This is a connection lost callback. After the connection is lost, the SDK automatically reconnects internally and triggers `onReconnectStart`. If the stream ingest connection is not restored after the maximum number of reconnection attempts (`config.connectRetryCount`), `onReconnectError` is triggered.
onNetworkPoor: This is a slow network callback. Receiving this callback indicates that the current network is insufficient to support stream ingest. At this point, stream ingest is still ongoing and has not been interrupted. You can handle your business logic here, such as by displaying a UI notification to the user.
onNetworkRecovery: This is a network recovery callback.
onReconnectError: This is a reconnection failure callback. This indicates that the reconnection failed. Check the current network and restart stream ingest when the network recovers.
onSendDataTimeout: This is a data sending timeout callback. Check the current network. Stop and restart stream ingest after the network recovers.
onPushURLAuthenticationOverdue: This is an authentication expired callback. This callback indicates that the authentication for the current ingest URL has expired. You must pass a new URL to the SDK.
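The following sketch shows one way to handle the key network callbacks in AlivcLivePusherNetworkDelegate. Verify the exact signatures in the SDK header, and treat the comment bodies as placeholders for your own business logic:
- (void)onNetworkPoor:(AlivcLivePusher *)pusher {
    // Ingest is still running; show a weak-network hint in the UI.
}
- (void)onConnectFail:(AlivcLivePusher *)pusher error:(AlivcLivePushError *)error {
    // Validate the ingest URL before you retry.
}
- (void)onReconnectError:(AlivcLivePusher *)pusher error:(AlivcLivePushError *)error {
    // All automatic retries failed; restart ingest after the network recovers.
}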
Error callbacks
onSystemError: This is a system device exception callback. You must destroy the engine and try again.
onSDKError: This is an SDK error callback. You must handle it differently based on the error code:
If the error code is 805438211, it indicates poor device performance with low encoding and rendering frame rates. You must notify the streamer and stop time-consuming business logic at the app layer, such as advanced retouching and animations.
You must specifically handle callbacks for when the app lacks microphone or camera permissions. The error code for no microphone permission is 268455940, and the error code for no camera permission is 268455939.
For other errors, you can log them without taking any other action.
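A sketch of handling onSDKError with the error codes listed above; the signature follows AlivcLivePusherErrorDelegate, so verify it against your SDK header:
- (void)onSDKError:(AlivcLivePusher *)pusher error:(AlivcLivePushError *)error {
    switch (error.errorCode) {
        case 268455939: // No camera permission.
        case 268455940: // No microphone permission.
            // Guide the user to grant the permission in Settings.
            break;
        case 805438211: // Low encoding/rendering frame rate on a weak device.
            // Stop time-consuming logic such as advanced retouching and animations.
            break;
        default:
            NSLog(@"SDK error: %ld", (long)error.errorCode); // Log other errors.
            break;
    }
}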
Background music callbacks
onOpenFailed: This callback indicates that the background music failed to start. Check whether the music path that was passed to the start background music interface and the music file are correct. You can call startBGMWithMusicPathAsync to play the music again.
onDownloadTimeout: This callback indicates that background music playback timed out. This often occurs when you play background music from a network URL. Prompt the streamer to check the current network status. You can call startBGMWithMusicPathAsync to play the music again.
Callbacks for external retouching and filter processing
You can use the `AlivcLivePusherCustomFilterDelegate` callback to integrate with third-party retouching SDKs to implement basic and advanced retouching features. The main purpose of `AlivcLivePusherCustomFilterDelegate` is to send the SDK's internal texture or `CVPixelBuffer` to a retouching SDK for processing, and then return the processed texture or `CVPixelBuffer` to the SDK to apply the retouching effects.
If you set the `livePushMode` switch in `AlivcLivePushConfig` to `AlivcLivePushBasicMode`, the SDK uses `AlivcLivePusherCustomFilterDelegate` to return the texture ID instead of the `CVPixelBuffer`. The core callbacks are as follows:
onCreate: The OpenGL context is created. This callback is typically used to initialize the retouching engine.
onProcess: The OpenGL texture is updated. This method returns the original texture ID from within the SDK. In this callback, call the retouching processing method and return the processed texture ID.
onDestroy: The OpenGL context is destroyed. This callback is typically used to destroy the retouching engine.
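The outline below sketches how a third-party retouching engine maps onto these three callbacks. The method names and signatures are assumptions for illustration; the actual AlivcLivePusherCustomFilterDelegate declarations vary by SDK version, so take them from the AlivcLivePusher header:
/* Assumed signature: the OpenGL context was created. */
- (void)onCustomFilterCreate {
    // Initialize the third-party retouching engine here.
}
/* Assumed signature: process one frame and return the processed texture ID. */
- (int)onCustomFilterProcess:(int)textureID width:(int)width height:(int)height {
    // Hand the SDK texture to the retouching engine and return its output texture.
    return textureID;
}
/* Assumed signature: the OpenGL context is about to be destroyed. */
- (void)onCustomFilterDestroy {
    // Destroy the retouching engine here.
}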
Common methods and interfaces
/* In Custom mode, users can adjust the minimum and target bitrates in real time. */
[self.livePusher setTargetVideoBitrate:800];
[self.livePusher setMinVideoBitrate:200];
/* Get the status of whether stream ingest is in progress. */
BOOL isPushing = [self.livePusher isPushing];
/* Get the ingest URL. */
NSString *pushURLString = [self.livePusher getPushURL];
/* Get stream ingest performance debugging information. For specific parameters and descriptions of stream ingest performance, refer to the API documentation or interface comments. */
AlivcLivePushStatsInfo *info = [self.livePusher getLivePushStatusInfo];
/* Get the version number. */
NSString *sdkVersion = [self.livePusher getSDKVersion];
/* Set the log level to filter desired debugging information as needed. */
[self.livePusher setLogLevel:(AlivcLivePushLogLevelDebug)];
Debugging tools
The SDK provides a UI debugging tool named DebugView. DebugView is a movable global floating window that always stays on top of the view after it is added. It includes debug features such as viewing stream ingest logs, real-time detection of stream ingest performance parameters, and line charts of key performance metrics.
Do not call the interface to add DebugView in your release version.
The following is sample code:
[AlivcLivePusher showDebugView];// Open the debugging tool.
FAQ
Stream ingest fails
You can use the troubleshooting tool to check whether the ingest URL is valid.
How do I obtain information about ingested audio and video streams?
You can go to Stream Management and view and manage ingested audio and video streams in the Active Streams tab.
How do I play a stream?
After you start stream ingest, you can use a player such as ApsaraVideo Player, FFplay, or VLC to test stream pulling. For more information about how to obtain the streaming URL, see Generate ingest and streaming URLs.
App Store submission fails
The RtsSDK framework ships a binary that covers all platforms, including the x86_64 emulator architecture. To submit your app to the App Store, you must remove the emulator architecture. You can use lipo to remove the x86_64 architecture, for example: `lipo <input binary> -remove x86_64 -output <output binary>`.