
Intelligent Media Services: iOS guide

Last Updated: Nov 18, 2025

This topic describes how to integrate the Real-time Conversational AI agent into your iOS app.

Source code

Download link

You can download the source code from GitHub.

Directory structure

├── iOS  // The root directory of the iOS project.
│   ├── AUIAICall.podspec                // The pod description file.
│   ├── Source                                    // The source code files.
│   ├── Resources                                 // The resource files.
│   ├── Example                                   // The source code of the demo.
│   ├── AUIBaseKits                               // The basic UI components. 
│   ├── README.md                                 // The README file.

Environment requirements

  • Xcode 16.0 or later. We recommend that you use the latest official version.

  • CocoaPods 1.9.3 or later.

  • A physical device that runs iOS 11.0 or later.

Before you begin

Implement the required API operations on your server, or deploy the provided server source code. For more information, see Deploy a project.

Run the demo

  1. Download the source code. Navigate to the Example directory.

  2. Run the pod install --repo-update command in the Example directory. The dependent SDKs are automatically installed.

  3. Open the AUIAICallExample.xcworkspace file and modify the bundle ID.

  4. Open the AUIAICallAgentConfig.swift file and modify the ID and region of the AI agent.

    // AUIAICallAgentConfig.swift
    // Configure the agent ID.
    let VoiceAgentId = "Your voice agent ID"
    let AvatarAgentId = "Your avatar agent ID"
    let VisionAgentId = "Your vision agent ID"
    let ChatAgentId = "Your chatbot agent ID"
    // Configure the agent region.
    let Region = "cn-shanghai"

    Region name         Region ID
    China (Hangzhou)    cn-hangzhou
    China (Shanghai)    cn-shanghai
    China (Beijing)     cn-beijing
    China (Shenzhen)    cn-shenzhen
    Singapore           ap-southeast-1

  5. Use one of the following methods to start the AI agent:

    • App Server deployed: Open the AUIAICallAppServer.swift file and modify the domain name of the App Server.

      // AUIAICallAppServer.swift
      public let AICallServerDomain = "Domain name of your App Server"
    • App Server not deployed: Open the AUIAICallAuthTokenHelper.swift file and set the EnableDevelopToken parameter to true. Copy the ARTC AppId and AppKey from the console to generate the authentication token required to start the AI agent on the app.

      Note

      This method requires embedding your AppKey and other sensitive information into your application. It is for testing and development only. Never use this method in a production environment. Exposing your AppKey on the client side creates a serious security risk.

      // AUIAICallAuthTokenHelper.swift
      @objcMembers public class AUIAICallAuthTokenHelper: NSObject {
      
          // Set it to true to enable development mode.
          private static let EnableDevelopToken: Bool = true     
          // Copy the ARTC AppId from the console.
          private static let RTCDevelopAppId: String = "ARTC AppId of the AI agent"
          // Copy the ARTC AppKey from the console.
          private static let RTCDevelopAppKey: String = "ARTC AppKey of the AI agent"
      
          ...
      }

      To obtain the AppId and AppKey for the ARTC application:

      1. Go to the Intelligent Media Services console and click the agent that you created to go to the agent details page.


      2. Click RTC AppID to go to the ApsaraVideo Live console to obtain the AppId and AppKey.


  6. Compile and run the Example Target.

Develop Conversational AI features

You can quickly integrate AUIAICall into your app by performing the following steps.

Step 1: Integrate the source code

  • After you download the source code from the repository, copy the iOS folder to the code directory of your app and rename the folder AUIAICall. Make sure that the folder is placed at the same directory level as your Podfile. You can delete the Example and AICallKit directories.

  • Modify your Podfile to import the following modules:

    • AliVCSDK_ARTC: ApsaraVideo MediaBox SDK for Alibaba Real-Time Communication (ARTC). You can also import AliVCSDK_Standard or AliVCSDK_InteractiveLive. For more information, see iOS.

    • ARTCAICallKit: an SDK for Real-time Conversational AI.

    • AUIFoundation: the basic UI components.

    • AUIAICall: the source code of UI components for Real-time Conversational AI.

    # iOS 11.0 or later is required.
    platform :ios, '11.0'
    
    target 'Your app target' do
        # Integrate ApsaraVideo MediaBox SDK based on your business requirements. 
        pod 'AliVCSDK_ARTC', '~> x.x.x'
    
        # An SDK for Real-time Conversational AI.
        pod 'ARTCAICallKit', '~> x.x.x'
    
        # The source code of the basic UI components.
        pod 'AUIFoundation', :path => "./AUIAICall/AUIBaseKits/AUIFoundation/", :modular_headers => true
    
        # The source code of UI components for Real-time Conversational AI.
        pod 'AUIAICall',  :path => "./AUIAICall/"
    end
    Note

    For the latest compatible version number of the ARTC software development kit (SDK), see Release history.

  • Run the pod install --repo-update command.

  • Complete the integration.

Step 2: Configure the project

  • Open the Info.plist file of your project and add the NSMicrophoneUsageDescription, NSCameraUsageDescription, and NSPhotoLibraryUsageDescription keys with usage descriptions.
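
    These keys can be declared in Info.plist as follows. The description strings below are placeholders; replace them with wording that matches your app and will be shown to users in the system permission prompts:

    ```xml
    <key>NSMicrophoneUsageDescription</key>
    <string>Microphone access is required for voice conversations with the AI agent.</string>
    <key>NSCameraUsageDescription</key>
    <string>Camera access is required for vision-based conversations with the AI agent.</string>
    <key>NSPhotoLibraryUsageDescription</key>
    <string>Photo library access is required to share images with the AI agent.</string>
    ```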

  • In the project settings, enable Background Modes on the Signing & Capabilities tab. If you do not do so, you must implement code logic to end calls when the app is switched to the background.
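
    If you do not enable Background Modes, the app must end the call when it resigns to the background. A minimal sketch of that logic is shown below; the endCall() call is a placeholder for the actual hang-up method on AUIAICallController, which you should verify in the AUIAICall source:

    ```swift
    import UIKit
    import AUIAICall

    // Sketch: end the call when the app enters the background.
    // NOTE: endCall() is a placeholder; check the AUIAICallController
    // API in the AUIAICall source for the actual hang-up method.
    NotificationCenter.default.addObserver(
        forName: UIApplication.didEnterBackgroundNotification,
        object: nil,
        queue: .main
    ) { _ in
        // controller.endCall()   // placeholder: hang up and release resources
        // Dismiss the call page if it is still presented.
    }
    ```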

Step 3: Configure the source code

  • Make sure that all prerequisites are met.

  • Open the AUIAICallAppServer.swift file and modify the domain name of the App Server.

    // AUIAICallAppServer.swift
    public let AICallServerDomain = "Domain name of your App Server"
    Note

    If your App Server is not deployed, generate the authentication token on the app for quick testing and demonstrations.

Step 4: Call API operations

After completing the preceding steps, you can call API operations in other modules of your app or on its homepage to start conversations with the AI agent. You can also modify the source code.

// Import the components.
import AUIAICall
import ARTCAICallKit
import AUIFoundation

// Check whether the microphone is enabled.
AVDeviceAuth.checkMicAuth { auth in
    if auth == false {
        return
    }
    
    // We recommend that you use the ID of the user who logs on to your app.
    let userId = "xxx"
    // Create a controller based on the user ID.
    let controller = AUIAICallController(userId: userId)
    // Set the ID of the AI agent. It cannot be nil.
    controller.config.agentId = "xxx"
    // Set the type of the AI agent, such as voice, avatar, or vision. It must match the agent ID.
    controller.config.agentType = agentType
    // Set the region of the AI agent. It cannot be nil.
    controller.config.region = "xx-xxx"
    // Create a ViewController for the call.
    let vc = AUIAICallViewController(controller)
    // Open the call page in full screen mode.
    vc.modalPresentationStyle = .fullScreen
    vc.modalTransitionStyle = .coverVertical
    vc.modalPresentationCapturesStatusBarAppearance = true
    self.present(vc, animated: true)
}
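
For vision or avatar agents, you should also confirm camera access before presenting the call page. The snippet below is a sketch only: checkCameraAuth is assumed to mirror the checkMicAuth helper shown above, so verify the actual API in the AUIFoundation source before using it.

```swift
import AUIAICall
import AUIFoundation

// Sketch: request camera access before starting a vision agent call.
// NOTE: checkCameraAuth is an assumption modeled on checkMicAuth;
// verify the exact name in AUIFoundation's AVDeviceAuth.
AVDeviceAuth.checkCameraAuth { auth in
    guard auth else { return }
    // Proceed to create AUIAICallController and present the call page,
    // as shown in the example above.
}
```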