By Zhuo Guang
In AI application development, there are always a few headache-inducing issues: how do you securely store sensitive information such as API keys? What do you do when model parameters need frequent adjustment? Prompt templates change constantly, and restarting the service for every tweak is cumbersome. Don't worry; today we will discuss how to use Nacos to solve these problems.
Assuming we are developing an AI travel assistant that generates corresponding travel plans based on user-provided destinations, the implementation code based on LangChain and FastAPI is as follows:
import asyncio

import uvicorn
from fastapi import FastAPI
from fastapi.routing import APIRoute
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel


class TextRequest(BaseModel):
    text: str


class AIAssistant:
    async def init(self):
        # Initialize the model
        LLMParam = {
            "temperature": 0,
            "model_name": "qwen-turbo",
            "streaming": False,
            "openai_api_key": "sk-xxxxxxxxxx",
            "openai_api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1"
        }
        self.llm = ChatOpenAI(**LLMParam)
        template = (
            """Please write a travel guide for independent travelers visiting {location}. The guide should include the following sections:
1. Introduction: A brief overview of the destination's historical and cultural background.
2. Main Attractions: List and briefly describe several key attractions.
3. Recommended Activities: Suggest a few interesting activities or experiences."""
        )
        # Initialize the prompt
        self.prompt = PromptTemplate.from_template(template)

    async def run(self):
        await self.init()
        app = FastAPI(
            debug=True,
            routes=[
                APIRoute("/call", endpoint=self.call, methods=["POST"]),
            ],
        )
        config = uvicorn.Config(
            app=app,
            host="0.0.0.0",
            port=80
        )
        server = uvicorn.Server(config=config)
        await server.serve()

    async def call(self, text: TextRequest):
        return self.llm.invoke(self.prompt.format(location=text.text)).content


async def main():
    server = AIAssistant()
    await server.run()


if __name__ == "__main__":
    asyncio.run(main())
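With the service running, the /call endpoint defined above can be exercised with a request like the following (port 80, as configured in run()):
curl -X POST "http://127.0.0.1:80/call" -H "Content-Type: application/json" -d '{"text": "Shanghai"}'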
This code looks concise, right? But if you put it into a production environment, you'll soon discover a host of issues:
1. Prompt templates cannot be iterated quickly: In practice, AI applications may need Prompt template changes at any time. In the traditional development mode, every template change requires developers to modify the code and redeploy the application, which is inefficient and makes it hard to respond quickly to market changes or user feedback.
2. Model calling parameters are too rigid: Whenever model calling parameters (such as temperature) need to change, the application has to be retested and redeployed. Static parameter management cannot keep up with flexible business adjustments, so the quality of generated content may not be optimized in time, hurting the user experience. This lag is especially costly when rapid experimentation and iteration on model performance are required.
3. Sensitive information hard-coded: The current code directly hard-codes sensitive information, such as API keys, posing serious security risks. If the code is leaked, attackers could easily obtain the API key and gain unauthorized access to related services, resulting in data breaches and financial losses.
The above issues are also challenges faced by many current AI applications. To address these pain points, we will introduce Nacos step by step to solve them one by one.
Simply put, Nacos is an open-source product by Alibaba that serves as a registration and configuration center. It is a dynamic service discovery, configuration management, and service management platform aimed at making cloud-native application construction easier. Nacos provides rich multi-language SDK support, covering popular programming languages such as Java, Go, and Python. One major pain point that Nacos resolves in configuration management is the "dynamic" nature of configurations—allowing you to update AI application configurations at any time without restarting the application.
1. Quick iteration of Prompt templates: Prompts are at the core of generative AI, but their optimization often requires multiple experiments. By dynamically loading Prompt templates with Nacos, you can quickly experiment and respond to business changes.
2. Dynamic adjustment of model parameters: Parameters like temperature and maximum generation length may need to be adjusted dynamically based on business requirements. With Nacos, you can modify these parameters directly in the console, and they take effect in real time!
3. Secure storage of sensitive information: AI applications often require sensitive information like API keys, which is insecure when hard-coded. Nacos supports configuration encryption, allowing sensitive configurations to be managed securely.
4. Dynamic management of other configurations: Beyond Prompts and model parameters, any configuration that needs to change dynamically in an AI application can be managed through Nacos, such as rate-limiting rules, banned words for user input, and IP black/white lists. Changes take effect as soon as you click publish in the Nacos console.
Next, let's use the AI travel assistant case to experience how Nacos enables your AI application to evolve from "primal society" to "modern governance."
First, we need to start a Nacos instance locally. By following the steps below, we can deploy and use Nacos in a Docker container through its Docker image.
1. Download the nacos-docker project:
git clone https://github.com/nacos-group/nacos-docker.git
cd nacos-docker
2. Execute the docker-compose command to start Nacos.
On first execution, the required Docker images are downloaded automatically; how long this takes depends on your network speed. You can also pull the images in advance to shorten the deployment time.
docker-compose -f example/standalone-derby.yaml up
3. Verify that the Nacos service has started successfully
Use the command docker logs -f $container_id to view the startup logs of the Nacos service. If you see the following log, the service has started successfully:
Nacos started successfully in xxxx mode. use xxxx storage
After startup, we can access Nacos at http://localhost:8848.
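As an optional sanity check (assuming the default standalone example with authentication disabled), you can also publish and then read back a test configuration through the Nacos v1 Open API:
# Publish a test configuration (the API returns true on success)
curl -X POST "http://localhost:8848/nacos/v1/cs/configs" -d "dataId=test&group=DEFAULT_GROUP&content=hello"
# Read it back (the API returns hello)
curl -X GET "http://localhost:8848/nacos/v1/cs/configs?dataId=test&group=DEFAULT_GROUP"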
In modern AI application development, Prompt engineering has become a crucial step. Whether you are building dialogue systems, text generation models, or complex multi-modal applications, the design and optimization of Prompts directly affect model performance. Business needs change rapidly, and adjusting Prompts dynamically and efficiently has become a significant pain point. With the Nacos configuration center, which supports dynamic changes and historical versions, we can manage Prompt templates efficiently by hosting them on Nacos.
First, we need to migrate the prompt to Nacos. In the Nacos console, create a configuration with DataId Prompt.template, Group set to the default DEFAULT_GROUP, and the Prompt template as its content:
Please write a travel guide for independent travelers visiting {location}. The guide should include the following sections:
1. Introduction: A brief overview of the destination's historical and cultural background.
2. Main Attractions: List and briefly describe several key attractions.
3. Recommended Activities: Suggest a few interesting activities or experiences.
Then, in the code, introduce the Nacos Python SDK (nacos-sdk-python), read the configuration when initializing the Prompt, and listen for changes so that updates are sensed and applied dynamically:
import asyncio
import json

import uvicorn
from fastapi import FastAPI
from fastapi.routing import APIRoute
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel
from v2.nacos import ClientConfigBuilder, NacosConfigService, ConfigParam


class AIAssistant:
    async def init(self):
        LLMParam = {
            "temperature": 0,
            "model_name": "qwen-turbo",
            "streaming": False,
            "openai_api_key": "sk-xxxxxxxxx",
            "openai_api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1"
        }
        self.llm = ChatOpenAI(**LLMParam)
        # Initialize the Nacos configuration client
        client_config = (ClientConfigBuilder()
                         .server_address("localhost:8848")
                         .build())
        config_client = await NacosConfigService.create_config_service(client_config)
        # Get the Prompt content and initialize the prompt template
        template = await config_client.get_config(ConfigParam(
            data_id="Prompt.template",
            group="DEFAULT_GROUP"
        ))
        self.prompt = PromptTemplate.from_template(template)
        # Listen for changes to the Prompt
        await config_client.add_listener("Prompt.template", "DEFAULT_GROUP",
                                         self.PromptListener)

    async def PromptListener(self, tenant_id: str, group_id: str, data_id: str, content: str):
        # Rebuild the prompt template whenever the configuration changes
        self.prompt = PromptTemplate.from_template(content)
Through the above steps, we have hosted the prompt on Nacos (the run(), call(), and main() methods stay the same as in the first version). Next, we can dynamically adjust and iterate the prompt while the AI application is running. Let's try it out!
First, in the Nacos console, set the prompt to the following content and start the application:
Please write a travel guide for independent travelers visiting {location}. The guide should include the following sections:
1. Introduction: A brief overview of the destination's historical and cultural background.
2. Main Attractions: List and briefly describe several key attractions.
3. Recommended Activities: Suggest a few interesting activities or experiences.
Execute the following command to call the corresponding AI assistant:
curl -X POST "http://127.0.0.1:80/call" -H "Content-Type: application/json" -d '{"text": "Shanghai"}'
We can see from the returned content that the AI assistant provided a travel plan based on our input destination of Shanghai.
However, we now feel that the plan provided by the AI assistant is a bit too verbose, and we would like the returned content to be more concise. Additionally, we want the return to be in a structured format for easier user reference. Therefore, we need to modify the prompt. Without Nacos, we would have to modify the code and restart the application, but after hosting the prompt on Nacos, we just need to modify the corresponding prompt content in the Nacos console to:
Please write a concise travel guide for independent travelers visiting {location}, following the structure below:
1. Introduction (one paragraph): Briefly introduce the historical and cultural background of the destination.
2. Main Attractions (in table format): List and briefly describe several key attractions.
3. Recommended Activities (in table format): Suggest a few interesting activities or experiences.
Execute the following command:
curl -X POST "http://127.0.0.1:80/call" -H "Content-Type: application/json" -d '{"text": "Shanghai"}'
From the returned result, we can see that the model's response follows the new prompt requirements; the prompt change has taken effect.
Nacos also supports configuration version history. On the History Versions page, you can view all previously used Prompt versions and roll back, so if a modified Prompt performs poorly, a previous version can be restored with one click:
When using AI models, calling parameters (such as temperature, maximum length, etc.) directly affect the output results. Fixed parameters are often inadequate for diverse task needs, potentially leading to resource waste or poor user experience. Dynamically managing model calling parameters allows for flexible adjustments based on task characteristics and user needs, optimizing resource utilization and enhancing output quality and user satisfaction, thus better adapting to complex and changing scenarios. Next, we will introduce how to dynamically manage model calling parameters through Nacos.
Similar to hosting the Prompt, we migrate the model calling parameters to Nacos. Create a configuration in Nacos with DataId LLMParam.json, Group set to the default DEFAULT_GROUP, and the parameters required for calling the model as its content:
{
    "temperature": 0,
    "model_name": "qwen-turbo",
    "streaming": false,
    "openai_api_key": "sk-xxxxxxxx",
    "openai_api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1"
}
Then, we also read the corresponding model calling parameter configuration during model initialization and listen for changes to dynamically perceive and apply the updated model calling parameters:
import asyncio
import json

import uvicorn
from fastapi import FastAPI
from fastapi.routing import APIRoute
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel
from v2.nacos import ClientConfigBuilder, NacosConfigService, ConfigParam


class AIAssistant:
    async def init(self):
        # Initialize the Nacos configuration client
        client_config = (ClientConfigBuilder()
                         .server_address("localhost:8848")
                         .build())
        config_client = await NacosConfigService.create_config_service(client_config)
        # Get the model calling parameters and initialize the model
        LLMParamContent = await config_client.get_config(ConfigParam(
            data_id="LLMParam.json",
            group="DEFAULT_GROUP"))
        LLMParam = json.loads(LLMParamContent)
        self.llm = ChatOpenAI(**LLMParam)
        # Get the Prompt content and initialize the prompt template
        template = await config_client.get_config(ConfigParam(
            data_id="Prompt.template",
            group="DEFAULT_GROUP"
        ))
        self.prompt = PromptTemplate.from_template(template)
        # Listen for changes to the Prompt
        await config_client.add_listener("Prompt.template", "DEFAULT_GROUP",
                                         self.PromptListener)
        # Listen for changes to the model calling parameters
        await config_client.add_listener("LLMParam.json", "DEFAULT_GROUP",
                                         self.LLMParamListener)

    async def LLMParamListener(self, tenant_id: str, group_id: str, data_id: str, content: str):
        # Rebuild the model client with the updated parameters
        LLMParam = json.loads(content)
        self.llm = ChatOpenAI(**LLMParam)

    async def PromptListener(self, tenant_id: str, group_id: str, data_id: str, content: str):
        self.prompt = PromptTemplate.from_template(content)
Through the above steps, we have hosted the model calling parameters on Nacos. Now you can modify temperature or model_name directly in the Nacos console without restarting the application. Let's see the effect.
First, in the Nacos console we set the model's model_name to qwen-turbo, and set the Prompt to:
Please write a concise city introduction for {location}, with no more than five sentences.
Call the model through the interface on port 80:
curl -X POST "http://127.0.0.1:80/call" -H "Content-Type: application/json" -d '{"text": "Shanghai"}'
The model returns the result as follows:
Shanghai, China's largest and most cosmopolitan city, is a vibrant blend of history and modernity. Located on the Yangtze River Delta, it boasts iconic landmarks like the Bund, a riverside promenade showcasing colonial architecture, and the futuristic skyline of Pudong. Known as a global financial hub, Shanghai hosts numerous multinational corporations and hosts the annual World Expo. The city's rich culture is reflected in its museums, theaters, and bustling street markets. With its diverse culinary scene and lively nightlife, Shanghai offers endless experiences for residents and visitors alike.
When we want to adjust the model calling parameters and switch the backend model to deepseek-r1, we only need to set the model_name in the calling parameters to deepseek-r1 in the Nacos console:
Call the interface again, and the returned content is as follows:
Shanghai, China's bustling metropolis, seamlessly blends historic charm with futuristic innovation along the Huangpu River. Iconic landmarks like the colonial-era Bund and the towering skyscrapers of Pudong symbolize its dynamic contrast. As a global financial hub, it drives commerce and culture, home to Asia's busiest port and a thriving arts scene. Traditional treasures, such as Yu Garden and ancient temples, coexist with avant-garde museums and cutting-edge architecture. Renowned for its vibrant nightlife, diverse cuisine, and luxury shopping, Shanghai pulses with energy day and night.
We can observe that the style of the content returned by the backend model has changed significantly. In addition to parameters like model_name, you can also dynamically change the API key and other calling credentials through Nacos, enabling credential rotation.
Now, with Nacos, you can easily and dynamically change model calling parameters based on business needs, and these changes take effect without restarting the application.
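Beyond editing the configuration in the console, a rotation job could push a new key programmatically. The snippet below is only a sketch: it assumes the Python SDK's publish_config method accepts a ConfigParam with a content field, which you should verify against the SDK version you use. Once the updated LLMParam.json is published, every listening node rebuilds its ChatOpenAI client through LLMParamListener.
import asyncio
import json

from v2.nacos import ClientConfigBuilder, NacosConfigService, ConfigParam


async def rotate_api_key(new_key: str):
    # Connect to the same Nacos server used by the AI assistant
    client_config = (ClientConfigBuilder()
                     .server_address("localhost:8848")
                     .build())
    config_client = await NacosConfigService.create_config_service(client_config)
    # Read the current model calling parameters
    content = await config_client.get_config(ConfigParam(
        data_id="LLMParam.json",
        group="DEFAULT_GROUP"))
    params = json.loads(content)
    params["openai_api_key"] = new_key
    # Publish the updated configuration (assumed publish_config API);
    # listening nodes pick up the change without a restart
    await config_client.publish_config(ConfigParam(
        data_id="LLMParam.json",
        group="DEFAULT_GROUP",
        content=json.dumps(params)))


if __name__ == "__main__":
    asyncio.run(rotate_api_key("sk-new-xxxxxxxx"))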
By hosting Prompts and model calling parameters on the Nacos configuration center, we have achieved rapid iteration of Prompts during AI application runtime and dynamic adjustment of model calling parameters. However, during the configuration management of AI applications, there may still be some demands:
● Gray Release: When updating Prompts or model calling parameters, directly launching a full version may lead to unpredictable risks. It is preferred to conduct gray validation on a small number of nodes first, confirming there are no risks before full deployment.
● Secure Storage of Sensitive Information: Model calling parameters may contain sensitive information such as keys. Storing them in plaintext poses leakage risks, which can lead to significant losses. Hence, ensuring the security of sensitive information storage is crucial.
Nacos, in addition to the basic capabilities of dynamically applying configuration changes, also provides features like gray release and configuration encryption. Let's explore how to leverage Nacos' advanced configuration management capabilities to further evolve AI applications.
In AI applications, when updating Prompts to optimize model outputs, directly rolling out a full version may lead to unpredictable risks. The new Prompt might perform poorly in real user scenarios, causing incorrect responses or a decline in user experience. Therefore, we can leverage Nacos' gray configuration capabilities to perform gray releases, initially updating the new Prompt to a subset of AI application nodes, validating its effects and collecting feedback to reduce the risks of a full rollout.
Open-source Nacos natively supports IP-based gray releases. MSE Nacos, the managed Nacos service provided by Alibaba Cloud Microservice Engine (MSE), offers more comprehensive configuration gray-release capabilities, including multi-version, label-based gray releases. Next, we will demonstrate how to perform a gray release of Prompts based on MSE Nacos.
We will run two copies of the AIAssistant code and specify the client labels app=demo1 and app=demo2 when creating the Nacos clients.
# Node one initializes the Nacos configuration client
client_config = (ClientConfigBuilder()
                 .server_address("localhost:8848")
                 .app_conn_labels({"app": "demo1"})
                 .build())
config_client = await NacosConfigService.create_config_service(client_config)

# Node two initializes the Nacos configuration client and specifies its label
client_config = (ClientConfigBuilder()
                 .server_address("localhost:8848")
                 .app_conn_labels({"app": "demo2"})
                 .build())
config_client = await NacosConfigService.create_config_service(client_config)
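To start the two nodes from a single code base, one option is to parameterize the listen port and the client label. This is only a sketch under assumed conventions: the SERVER_PORT and APP_LABEL environment variables below are hypothetical, not anything prescribed by Nacos.
import os

from v2.nacos import ClientConfigBuilder

# Hypothetical environment variables: e.g. SERVER_PORT=80 APP_LABEL=demo1 for node one,
# and SERVER_PORT=90 APP_LABEL=demo2 for node two
SERVER_PORT = int(os.getenv("SERVER_PORT", "80"))
APP_LABEL = os.getenv("APP_LABEL", "demo1")

client_config = (ClientConfigBuilder()
                 .server_address("localhost:8848")
                 .app_conn_labels({"app": APP_LABEL})
                 .build())
# The rest stays the same: create the config service from client_config as before,
# and pass SERVER_PORT to uvicorn.Config(port=SERVER_PORT) in run()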
The content of the Prompt.template configuration in Nacos is as follows:
Please list three major tourist attractions in {location}, with a one-sentence description for each attraction.
Start two application nodes on ports 80 and 90 respectively, and execute the following commands to call:
curl -X POST "http://127.0.0.1:80/call" -H "Content-Type: application/json" -d '{"text": "Shanghai"}'
#AIAssistant1 returned result
1. The Bund: As one of Shanghai's iconic landmarks, the Bund is famous for its spectacular contrast between historic buildings and modern skyscrapers across the Huangpu River.
2. Oriental Pearl Radio & TV Tower: Located in the Pudong New Area, it is not only one of Shanghai's most representative landmarks but also a great spot for tourists to enjoy panoramic views of the city.
3. Yuyuan Garden and City God Temple Area: This tourist area combines a classical garden, traditional shops, and local snacks, offering visitors a taste of old Shanghai's cultural charm.
curl -X POST "http://127.0.0.1:90/call" -H "Content-Type: application/json" -d '{"text": "Shanghai"}'
#AIAssistant2 returned result
1. The Bund: As one of Shanghai's iconic attractions, the Bund is renowned for its magnificent row of historic buildings and the striking skyline of modern skyscrapers across the river in Pudong.
2. Oriental Pearl Radio & TV Tower: Located in Lujiazui, Pudong New Area, it is not only one of Shanghai's most iconic modern structures but also a great destination for sightseeing, dining, and entertainment.
3. Yuyuan Garden and City God Temple: This historic and cultural area combines the classical beauty of Yuyuan Garden, a private garden in Ming Dynasty style, with the vibrant atmosphere of traditional commercial streets in the surrounding old district.
It can be observed that the recommended results returned from the two nodes are almost consistent in content and style, as the same prompt is used.
Next, we hope to iterate the prompt to not only include tourist attractions but also introduce local cuisine. To prevent uncontrollable risks from a full rollout, we perform a gray release in the Nacos console.
We open the configuration editing interface, select label-based gray release, and name the gray version gray1. The application node label key is app and the value is demo1, indicating that the gray Prompt takes effect for AIAssistant1. The configuration content is:
Please list three major tourist attractions and three specialty foods of {location}, with a one-sentence description for each.
After clicking publish, there is no need to restart the application. We can execute the following command to call:
curl -X POST "http://127.0.0.1:80/call" -H "Content-Type: application/json" -d '{"text": "Shanghai"}'
#AIAssistant1 returned result
Here are three major tourist attractions in Shanghai along with their descriptions:
1. **The Bund**: Located along the Huangpu River, it is one of Shanghai's most iconic landmarks, offering a view of the historic "International Architecture Expo" and a stunning panorama of the modern Lujiazui financial district across the river.
2. **Oriental Pearl Radio & TV Tower**: As one of the symbolic structures of Shanghai and China, the Oriental Pearl Tower is not only a great spot for sightseeing but also features exhibitions, revolving restaurants, and other facilities, allowing visitors to enjoy both the scenery and cultural experiences.
3. **Yuyuan Garden and City God Temple**: This integrated tourist area combines classical garden art with traditional culture. Yuyuan Garden is famous for its exquisite layout, while the surrounding old streets are filled with traditional handicrafts and snacks.
Here are three specialty foods from Shanghai along with their descriptions:
1. **Xiaolongbao (Soup Dumplings)**: A traditional Shanghainese dim sum known for its thin skin, generous filling, and rich broth. Be careful when eating as the soup inside can be very hot, but the unforgettable taste makes it worth the effort.
2. **Shengjian Mantou (Pan-Fried Dumplings)**: Also called pan-fried buns, these have a golden crispy bottom while remaining soft and juicy on top, filled with pork and best enjoyed with a specially prepared dipping sauce.
3. **Hongshao Rou (Braised Pork Belly)**: While this dish is popular nationwide, the Shanghai version stands out for its bright red color and rich yet non-greasy flavor. Slow-cooked over a long period, the pork belly absorbs all the seasoning’s essence, resulting in a deep and savory taste.
curl -X POST "http://127.0.0.1:90/call" -H "Content-Type: application/json" -d '{"text": "Shanghai"}'
#AIAssistant2 returned result
1. The Bund: As one of Shanghai's iconic attractions, the Bund is renowned for its magnificent row of historic buildings and the striking skyline of modern skyscrapers across the river in Pudong.
2. Oriental Pearl Radio & TV Tower: Located in Lujiazui, Pudong New Area, it is not only one of Shanghai's most iconic modern structures but also a great destination for sightseeing, dining, and entertainment.
3. Yuyuan Garden and City God Temple: This historic and cultural area combines the classical beauty of Yuyuan Garden, a private garden in Ming Dynasty style, with the vibrant atmosphere of traditional commercial streets in the surrounding old district.
It can be observed that the styles of the results returned by the two nodes now differ; the gray Prompt has taken effect on AIAssistant1.
Once we have validated the changes made to the prompt, we can proceed with a full release, rolling it out to all online nodes:
Execute the following command again to call:
curl -X POST "http://127.0.0.1:80/call" -H "Content-Type: application/json" -d '{"text": "Shanghai"}'
#AIAssistant1 returned result
Here are three major tourist attractions in Shanghai along with their descriptions:
1. **The Bund**: Located along the Huangpu River, it is one of Shanghai's most iconic landmarks. Visitors can admire the historic "International Architecture Expo" and enjoy a stunning view of the modern Lujiazui financial district on the opposite bank.
2. **Oriental Pearl Radio & TV Tower**: As one of the symbolic structures of Shanghai and China, this tower is not only a great spot for sightseeing but also features exhibition halls, revolving restaurants, and other facilities, allowing tourists to experience culture and cuisine while enjoying the breathtaking views.
3. **Yuyuan Garden and City God Temple**: This comprehensive tourist area combines classical garden art with traditional culture. Yuyuan Garden is renowned for its exquisite layout, while the surrounding old streets are filled with a variety of traditional handicrafts and snacks.
Here are three specialty foods from Shanghai along with their descriptions:
1. **Xiaolongbao (Soup Dumplings)**: A traditional Shanghainese delicacy known for its thin skin, generous filling, and rich broth. Be cautious when eating as the soup inside can be very hot, but the unforgettable taste makes it worth the effort.
2. **Shengjian Mantou (Pan-Fried Dumplings)**: Also called pan-fried buns, these have a golden, crispy bottom while remaining soft and juicy on top. Filled with pork and best enjoyed with a specially prepared dipping sauce.
3. **Hongshao Rou (Braised Pork Belly)**: While braised pork belly is popular nationwide, the Shanghai version is particularly famous for its bright red color and rich yet non-greasy flavor. Slow-cooked over a long period, the pork belly absorbs all the seasoning’s essence, resulting in a deep and savory taste.
curl -X POST "http://127.0.0.1:90/call" -H "Content-Type: application/json" -d '{"text": "Shanghai"}'
#AIAssistant2 returned result
The three main tourist attractions in Shanghai and their features are as follows:
1. **The Bund**: Located along the Huangpu River, it is one of the most famous landmarks in Shanghai, known for its historic European-style architecture and the modern skyline of Lujiazui on the opposite bank.
2. **Oriental Pearl Radio & TV Tower**: Not only is it one of the symbolic buildings of Shanghai, but also the sixth tallest TV tower in Asia. Visitors can ascend to the top to enjoy a spectacular view of the entire city.
3. **Yuyuan Garden and City God Temple**: This is an integrated scenic area that combines classical gardens with traditional commercial streets. Yuyuan Garden showcases exquisite Jiangnan garden art, while the surrounding City God Temple area is renowned for selling various handicrafts and snacks.
Here are introductions to three distinctive Shanghainese cuisines:
1. **Xiaolongbao (Soup Dumplings)**: These small, delicate dumplings with thin skins and juicy fillings are one of the most popular traditional snacks in Shanghai. When eating them, be careful to bite a small hole first to sip the soup before tasting the meat filling.
2. **Shengjian Mantou (Pan-Fried Dumplings)**: With a golden crispy bottom, soft top, and juicy interior, these pan-fried buns are another highly regional specialty noodle snack.
3. **Hongshao Rou (Braised Pork Belly)**: As a classic local dish, braised pork belly is beloved for its bright red color, rich flavor, and tender yet non-greasy texture.
We can see that the recommendation styles returned by the two nodes are consistent; the new version of the prompt has been fully rolled out. If there are any issues during the gray release process, we can choose to stop the gray release.
Through Nacos' gray configuration feature, we achieve the full process from gray launch to full rollout of the prompt, ensuring stability during the iteration process.
Model calling parameters often include sensitive information like API keys. If sensitive information is stored or transmitted in plaintext, it can easily become a target for hackers, leading to data leaks or system abuse. Once such information is leaked, it can have serious consequences for user privacy, enterprise security, and system stability. The Nacos configuration center supports configuration encryption, enabling full-link encryption of sensitive information stored in Nacos, significantly enhancing security.
MSE Nacos goes further: it integrates with Alibaba Cloud Key Management Service (KMS) to support asymmetric encryption for configurations with higher security requirements. For data security risks, MSE Nacos provides sensitive-configuration solutions that meet the requirements of China's Multi-Level Protection Scheme (MLPS) Level 3 for data security.
Therefore, we can host model calling parameters on MSE Nacos via encrypted configurations:
Using this approach ensures that configuration information is encrypted during both transmission and storage, greatly improving security.
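On the client side, the read path barely changes. The snippet below is only a rough sketch under assumptions: encrypted configurations in Nacos/MSE Nacos conventionally use a DataId with a cipher- prefix (the cipher-LLMParam.json name here is hypothetical), and transparent decryption requires the KMS integration to be configured for the SDK, which should be verified against the MSE Nacos documentation for your SDK version.
# Assumes encryption is enabled for the configuration in the MSE Nacos console
# and the client is configured for KMS decryption; application code reads the
# configuration exactly as before.
LLMParamContent = await config_client.get_config(ConfigParam(
    data_id="cipher-LLMParam.json",  # hypothetical DataId; the cipher- prefix marks an encrypted config
    group="DEFAULT_GROUP"))
LLMParam = json.loads(LLMParamContent)
self.llm = ChatOpenAI(**LLMParam)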
In the AI era, the rapid iteration and evolution of applications brings new configuration management requirements to AI scenarios. Nacos is a platform for dynamic service discovery, configuration management, and service management that makes building cloud-native applications easier, and it is already widely used across many scenarios. After years of large-scale validation during Alibaba's Double 11 shopping festival, Nacos has accumulated rich experience and best practices in configuration management. From dynamic configuration changes to gray releases, and from historical version rollbacks to secure storage of sensitive information, combining AI applications with Nacos, as demonstrated in this article, enables rapid iteration and continuous evolution and creates greater value.
Furthermore, both Alibaba and ByteDance are reported to use Nacos for configuration management in their AI applications.
Recently, the Nacos 3.0-BETA version has been officially released, significantly enhancing security while introducing new features like distributed locks and fuzzy subscriptions. We welcome everyone to experience and use it. In the future, Nacos will further explore integration with AI scenarios, and the Nacos community is collaborating with the Higress community to explore transforming existing applications into services based on the MCP protocol. Stay tuned!