Configure and use Alibaba Cloud Model Studio Token Plan (Team Edition) in Hermes Agent.
Install Hermes Agent
- Run the following command in your terminal. The installation script automatically installs dependencies such as Python and Git.

  ```shell
  curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
  ```

  Note: Native installation is not supported on Windows. Install WSL2 first, then run the command above in WSL2.
- After installation, reload your terminal environment.

  ```shell
  source ~/.bashrc   # If you use zsh, run: source ~/.zshrc
  ```

- Verify the installation.

  ```shell
  hermes --version
  ```

  If a version number is displayed, the installation is successful.
Configure Token Plan (Team Edition)
Hermes Agent is compatible with the OpenAI protocol. Use the hermes config set command to configure the Base URL and API Key for Token Plan (Team Edition).
- Obtain your API Key on the Token Plan (Team Edition) page.
- Run the following commands in your terminal to configure the model provider, Base URL, API Key, and default model. Replace YOUR_API_KEY with your actual Token Plan (Team Edition) API Key.

  ```shell
  hermes config set model.provider custom
  hermes config set model.base_url https://token-plan.ap-southeast-1.maas.aliyuncs.com/compatible-mode/v1
  hermes config set model.api_key YOUR_API_KEY
  hermes config set model.default qwen3.6-plus
  ```

  The commands above write the configuration to ~/.hermes/config.yaml. You can also edit that file directly.
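If you prefer editing ~/.hermes/config.yaml by hand, an equivalent file might look like the sketch below. The exact layout is an assumption: it presumes the dotted keys used by `hermes config set` map to nested YAML under a `model:` block.

```yaml
# Sketch of ~/.hermes/config.yaml (assumed nesting of the dotted keys above).
model:
  provider: custom
  base_url: https://token-plan.ap-southeast-1.maas.aliyuncs.com/compatible-mode/v1
  api_key: YOUR_API_KEY   # replace with your Token Plan (Team Edition) API Key
  default: qwen3.6-plus
```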
Switch models
After configuration, use the -m flag to switch between models supported by Token Plan (Team Edition) during a conversation.
```shell
hermes chat -m qwen3.6-plus
```
You can also change the default model with hermes config set:
```shell
hermes config set model.default qwen3.6-plus
```
For a full list of supported models, see Token Plan (Team Edition) Overview.
Text models (such as qwen3.6-plus and glm-5) can be used directly. Image generation models use a separate API and require integration through the extension mechanism. For more information, see Integrate multimodal generation models.
Verify the configuration
Run the following command to send a test message.
```shell
hermes chat -q "Hello"
```
If you receive a normal AI response, the configuration is successful.
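Because Token Plan (Team Edition) speaks the OpenAI protocol, you can also check the endpoint and API Key independently of Hermes Agent. The sketch below prints the request that a chat completion call would send; the URL and model name come from the configuration above, YOUR_API_KEY is a placeholder, and the actual `curl` call is left commented out so nothing is sent until you substitute a real key.

```shell
# Sketch: the OpenAI-protocol chat completion request for the Token Plan
# (Team Edition) endpoint. YOUR_API_KEY is a placeholder.
BASE_URL="https://token-plan.ap-southeast-1.maas.aliyuncs.com/compatible-mode/v1"
PAYLOAD='{"model": "qwen3.6-plus", "messages": [{"role": "user", "content": "Hello"}]}'

echo "POST $BASE_URL/chat/completions"
echo "$PAYLOAD"

# To actually send the request, substitute a real key and run:
# curl -s "$BASE_URL/chat/completions" \
#   -H "Authorization: Bearer YOUR_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```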
To enter interactive conversation mode, run:
```shell
hermes
```
FAQ
Still connecting to OpenRouter after configuration
By default, Hermes Agent uses OpenRouter as the inference provider. To use Token Plan (Team Edition), model.provider must be set to custom. Run the following command to set it:

```shell
hermes config set model.provider custom
```
For more frequently asked questions, see FAQ.