
Platform for AI: Use LangStudio and DeepSeek to deploy a RAG- and web search-based chatbot

Last Updated: Jan 27, 2026

LangStudio provides a "RAG and web search chatbot" application flow template that integrates real-time web search with RAG capabilities. This enables models to generate more accurate answers by combining real-time information with domain-specific knowledge bases. The template is suitable for high-accuracy scenarios such as finance and healthcare, and supports flexible extension and customization.

Background information

The "RAG and web search chatbot" application flow combines two information sources:

  • Web search: Retrieves real-time data from the internet to keep information current

  • RAG capabilities: Retrieves relevant content from domain-specific knowledge bases for accuracy

This dual-source approach is particularly effective for finance, healthcare, and other fields requiring highly accurate information. Developers can customize the template to meet specific business requirements.

Prerequisites

  • You have registered an account at the SerpApi website and obtained an API key (the free tier provides 100 searches per month).

  • You have selected a vector database type:

    • Faiss: Suitable for test environments, no additional setup required

    • Milvus: Suitable for production environments, supports larger data volumes. You must create a Milvus instance before use

  • You have uploaded the RAG knowledge base corpus to OSS.

1. (Optional) Deploy models

The application flow requires LLM and embedding model services. If you already have OpenAI API-compatible model services, skip this step.

Go to QuickStart > Model Gallery and deploy the LLM service and embedding model service that the application flow requires.

Important

Use instruction fine-tuned models only. Base models cannot correctly follow instructions to answer questions.
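Both the EAS-deployed and third-party services expose an OpenAI-compatible API. As a quick sanity check before wiring up connections, a /chat/completions request can be sketched as follows; the base_url, api_key, and model name are placeholders for the values of your own service:

```python
import json
from urllib import request

def build_chat_request(base_url: str, api_key: str, model: str,
                       question: str) -> request.Request:
    """Build an OpenAI-compatible /chat/completions request.

    base_url, api_key, and model are placeholders; substitute the values
    of your EAS service or the DeepSeek API (https://api.deepseek.com).
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }).encode("utf-8")
    return request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request (requires network access):
# with request.urlopen(build_chat_request(...)) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```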

2. Create connections

2.1 Create an LLM service connection

  1. Go to LangStudio and select a workspace.

  2. On the Connection > Model Service tab, click New Connection.

  3. Create a General LLM Model Service connection.

Key parameters:

  • Service Provider:

    • EAS model service: Select the LLM service deployed in Step 1. The system automatically fills in base_url and api_key.

    • Third-party model service: For the official DeepSeek service, set base_url to https://api.deepseek.com and obtain an api_key from the DeepSeek website.

  • Model Name: For instructions, view the model details page in Model Gallery. For more information, see Create connections - Model service.

2.2 Create an embedding model service connection

Follow the same steps as 2.1, but select the General Embedding Model Service type.

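The embedding connection likewise follows the OpenAI-compatible API; the corresponding /embeddings request body can be sketched as follows (the model name is a placeholder for the name shown on your embedding service's model details page):

```python
import json

def build_embedding_body(model: str, texts: list[str]) -> str:
    """Build an OpenAI-compatible /embeddings request body.

    The model name is a placeholder; use the name shown on the model
    details page of the embedding service you deployed.
    """
    return json.dumps({"model": model, "input": texts})

body = build_embedding_body("text-embedding-model", ["What is RAG?"])
```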

2.3 Create a SerpApi connection

  1. On the Connection > Custom Connection tab, click New Connection.

  2. Configure the api_key obtained in Prerequisites.

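The connection stores the api_key; at run time the Serp Search node issues requests against the SerpApi search endpoint. A minimal sketch of such a request URL (the key below is a placeholder):

```python
from urllib.parse import urlencode

SERPAPI_ENDPOINT = "https://serpapi.com/search.json"

def build_search_url(query: str, api_key: str, engine: str = "google") -> str:
    """Compose a SerpApi search URL. The api_key is the value stored in
    the custom connection; engine can be google, bing, baidu, yahoo, etc."""
    params = {"q": query, "engine": engine, "api_key": api_key}
    return f"{SERPAPI_ENDPOINT}?{urlencode(params)}"

url = build_search_url("current gold price", "<your-serpapi-key>")
```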

3. Create a knowledge base index

Create a knowledge base index to parse, chunk, and vectorize the corpus into a vector database. Key parameters are described below. For other configuration details, see Manage knowledge base indexes.

Basic Configurations:

  • Data Source OSS Path: The OSS path of the RAG knowledge base corpus uploaded in Prerequisites.

  • Output OSS Path: The path for storing intermediate results and index files.

    Important: When using FAISS, we recommend setting this to a directory in the OSS bucket of the current workspace's default storage path. Custom roles require the AliyunOSSFullAccess permission. For more information, see Cloud resource access authorization.

Embedding Model and Databases:

  • Embedding Type: Select General Embedding Model.

  • Embedding Connection: Select the connection created in Step 2.2.

  • Vector Database Type: Select FAISS (used in this example).
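Conceptually, the index build parses each document, splits it into overlapping chunks, and stores one embedding per chunk in the vector database. The chunking step can be sketched as follows (the chunk size and overlap here are illustrative, not the service defaults):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap, so that
    content cut at a chunk boundary still appears intact in one chunk."""
    assert 0 <= overlap < chunk_size
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```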

4. Create and run the application flow

  1. On the LangStudio Application Flow tab, click Create Application Flow. Select the Chatbot with RAG and Web Search template.


  2. Click Select Runtime in the upper-right corner and select an existing runtime. If no runtime is available, click Create Runtime on the Runtime tab.


    Note: The runtime must be started before you can parse Python nodes or view additional tools.

    VPC configuration: When using Milvus, configure the same VPC as the Milvus instance, or ensure the VPCs are interconnected. When using FAISS, no VPC configuration is required.

  3. Configure key nodes:


    • Knowledge Retrieval:

      • Index Name: Select the index created in Step 3

      • Top K: Number of matching results to return

    • Serp Search:

      • SerpApi Connection: Select the connection created in Step 2.3

      • Engine: Supports Bing, Google, Baidu, Yahoo, etc. For details, see the SerpApi website

    • LLM:

      • Model Configuration: Select the connection created in Step 2.1

      • Chat History: Whether to use chat history as input

    For more information about each node, see Develop application flows - Node components.
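Functionally, the topology configured above retrieves from the knowledge base and the web, then prompts the LLM with both contexts. The helpers below are illustrative stubs of the nodes, not the actual implementations:

```python
def knowledge_retrieval(question: str, top_k: int = 2) -> list[str]:
    """Stub for the Knowledge Retrieval node; the real node queries the
    FAISS/Milvus index created in Step 3 and returns the Top K matches."""
    corpus = ["RAG grounds answers in a domain knowledge base.",
              "FAISS is suitable for test environments."]
    return corpus[:top_k]

def serp_search(question: str, engine: str = "google") -> list[str]:
    """Stub for the Serp Search node; the real node calls SerpApi."""
    return [f"[{engine}] latest web result for: {question}"]

def build_prompt(question: str, kb: list[str], web: list[str]) -> str:
    """Merge both contexts into a single prompt for the LLM node."""
    context = "\n".join(kb + web)
    return f"Answer using the context below.\n\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is FAISS?",
                      knowledge_retrieval("What is FAISS?"),
                      serp_search("What is FAISS?"))
```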

  4. Click Run in the upper-right corner to execute the application flow. For common issues, see LangStudio FAQ.


  5. Click View Logs below the generated answer to view trace details or topology.


5. Deploy the application flow

On the application flow development page, click Deploy in the upper-right corner to deploy the application flow as an EAS service. Key parameters:

  • Resource Information > Instances: For testing, set to 1. For production, configure multiple instances to avoid single points of failure.

  • VPC: SerpApi requires internet access. Configure a VPC with internet access capability. For more information, see Service internet access. When using Milvus, ensure VPC connectivity with the Milvus instance.

For more deployment details, see Deploy and call application flows.

6. Call the service

After deployment, test the service on the Online Debugging tab of the EAS service details page.

Note

The Key in the request body must match the "Chat Input" field in the application flow's Start Node. The default field is question.


For more calling methods (such as API calls), see Deploy application flows - Call the service.
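Beyond the debugging tab, the service can be called over HTTP. A minimal sketch (the endpoint and token are placeholders copied from the EAS service details page; the body key must match the Start Node's Chat Input field, question by default):

```python
import json
from urllib import request

def build_service_request(service_url: str, token: str, question: str,
                          input_field: str = "question") -> request.Request:
    """Build a POST request to the deployed EAS service.

    service_url and token are placeholders from the service details page;
    input_field must match the Chat Input field of the Start Node.
    """
    body = json.dumps({input_field: question}).encode("utf-8")
    return request.Request(
        service_url,
        data=body,
        headers={"Authorization": token,
                 "Content-Type": "application/json"},
        method="POST",
    )

# Sending the request (requires network access to the service):
# with request.urlopen(build_service_request(url, token, "What is RAG?")) as resp:
#     print(resp.read().decode("utf-8"))
```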
