
AI Coding Assistant Lingma: Configure enterprise inference services

Last Updated: Mar 18, 2026

Lingma Enterprise Dedicated Edition provides a one-stop Supervised Fine-Tuning (SFT) service based on Alibaba Cloud Model Studio. The service supports rapid integration and precise control of private enterprise models, enabling customization for specific business scenarios.

Applicable editions

Enterprise Dedicated Edition

Background information

Many large and medium-sized customers on public clouds have strong requirements for personalization and want to use Supervised Fine-Tuning (SFT) to create solutions that are customized for their business scenarios. To meet this demand, Alibaba Cloud Model Studio provides a one-stop solution that covers data preparation, model fine-tuning, and inference deployment, with full support for SFT capabilities. Lingma can seamlessly integrate with Alibaba Cloud Model Studio to quickly meet customer needs for model fine-tuning and personalization, helping enterprises efficiently achieve their customization goals.

Prerequisites

  1. Purchase Lingma Enterprise Dedicated Edition.

  2. Obtain the inference service API

    Log on to the Alibaba Cloud Model Studio console, generate an inference service API URL, and record key information such as the API URL, model name, and API key. For more information, see Make your first call to the Qwen API.
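Before entering the recorded values into the Lingma console, it can help to confirm that the API URL, model name, and API key fit together. The following is a minimal sketch using only the Python standard library; the endpoint path, model name, and key shown are illustrative placeholders, not values from this document.

```python
import json
import urllib.request

def build_chat_request(api_url, model_name, api_key, prompt):
    """Build an OpenAI-compatible chat completion request from the
    values recorded in the Model Studio console (all placeholders)."""
    body = json.dumps({
        "model": model_name,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # API key from the console
    }
    return urllib.request.Request(api_url, data=body, headers=headers)

# Illustrative values; replace with the URL, model name, and key you recorded.
req = build_chat_request(
    "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions",
    "qwen-plus",    # model name (placeholder)
    "sk-xxxxxxxx",  # API key (placeholder)
    "Hello",
)
# Sending the request requires a valid key and network access:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same three values (URL, model name, API key) are what the Lingma console asks for in the procedure below.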

Note

Currently, only reasoning models from Alibaba Cloud Model Studio are supported.

After an enterprise administrator configures the API for a private inference service, Lingma automatically invokes the configured model service for every developer in the enterprise who is authorized to use the related Lingma services.

Procedure

  1. Log on to the Lingma console. In the navigation pane on the left, choose Policy Configuration > Inference Service Configuration.

  2. On the Inference Service Configuration page, configure the model service for the required scenario:

    • Enable the model service configuration.

    • Enter the model URL, model name, and API key.

  3. Click Test Now.

    1. Success: The model service is connected.

    2. Failure: The model service connection failed. Verify that the configuration is correct and test again.

  4. Click Save.

  5. In your IDE, log on to your enterprise account. In the intelligent session window, select the model service. The configured model is now visible.

Disclaimer

Important

This service supports connections to third-party models. Lingma assumes no responsibility for the availability, compliance, or security of these third-party models. Before you use this service, carefully evaluate and review the relevant third-party license agreements to ensure that your use is legal and compliant with regulatory requirements.