HiClaw v1.1.0 adds 11 new features and fixes 18 bugs. Special thanks to xcaspar, johnlanni, vincent067, cr72589, max-wc, Jingze, YuFeng, luoxiner, and googs1025 (9 contributors in total).

HiClaw can run on top of the native Kubernetes control plane. hiclaw-controller replaces the old single-container mode and uses the standard Controller-Reconciler architecture: an embedded lightweight kube-apiserver + kine stores CRD data, and the Controller coordinates Worker/Team/Manager/Human CRs into containers, Matrix rooms, and gateway routes. In Embedded mode (hiclaw-controller container + standalone hiclaw-manager container), no external Kubernetes cluster is required. For enterprise deployments, the same Controller can run in a real Kubernetes cluster through the official Helm Chart (helm/hiclaw/), supporting Leader Election high availability, RBAC, PVC persistent storage, and Pod template overlays.
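For the Kubernetes deployment path, a values override for the chart in helm/hiclaw/ might look like the sketch below. The key names (leaderElection, persistence, podTemplateOverlay) are assumptions illustrating the features listed above, not taken from the actual chart:

```yaml
# Hypothetical values override for helm/hiclaw/ -- key names are
# illustrative; consult the chart's own values.yaml for real ones.
controller:
  replicas: 2
  leaderElection:
    enabled: true          # HA: only one replica reconciles at a time
persistence:
  enabled: true
  size: 20Gi               # PVC backing the embedded kine-backed store
rbac:
  create: true             # ServiceAccount + Role/RoleBinding for the Controller
podTemplateOverlay:        # merged into generated Worker Pod templates
  metadata:
    labels:
      team: platform
```

With a file like this, `helm install hiclaw ./helm/hiclaw -f values-prod.yaml` would deploy the Controller against the real cluster apiserver instead of the embedded one.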
HiClaw supports using hermes-agent as a first-class Worker runtime for autonomous programming tasks. Hermes Worker has full autonomous programming Agent capabilities: terminal sandbox execution, multi-file code generation, debugging, visual analysis, and native mautrix Matrix integration — all running inside isolated containers. Unlike the agent runtime (Node.js) and QwenPaw (Python), which handle conversations and tool calls, Hermes is an autonomous programming Agent that can independently plan, execute, and iterate on complex software tasks. The installer provides an interactive choice among three runtimes, and Workers can switch runtimes in place: hiclaw update worker --runtime hermes (container recreation, while Matrix accounts, rooms, credentials, and MinIO data are preserved). Multi-Agent collaboration is also supported — Hermes Worker can participate in team projects together with agent and QwenPaw Workers, with cross-runtime m.mentions message delivery and unattended YOLO-mode autonomous execution.
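Because Workers are CRs, the same runtime switch can presumably also be expressed declaratively. The apiVersion group and exact field paths below are assumptions; only spec.runtime and the runtime names come from the release notes:

```yaml
# Hypothetical Worker manifest -- the API group/version and field
# layout are assumptions based on the features described above.
apiVersion: hiclaw.io/v1
kind: Worker
metadata:
  name: coder-01
spec:
  runtime: hermes       # was: openclaw; change triggers container recreation
  model: qwen3.5-plus
# Matrix account, rooms, credentials, and MinIO data survive the switch.
```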
HiClaw provides a production-grade Helm Chart for deploying HiClaw on Kubernetes clusters. The Chart deploys Tuwunel (Matrix server), MinIO (object storage), Element Web (IM client), and hiclaw-controller as separate Deployments/StatefulSets, with complete Service, RBAC, and Secret resources. Key enterprise features:
- A credential-provider Sidecar (hiclaw-credential-provider) integrates with the gateway and storage backend. Per-worker accessEntries in the CRD constrain object storage paths and support tenant isolation.
- kubectl and the hiclaw CLI can be used interchangeably: Worker, Team, Human, and Manager are all standard CRDs, support short names (wk, tm, hm, mgr), and kubectl get workers works directly.
- The Controller now delegates gateway (Higress) and storage (MinIO/OSS) operations through Provider interfaces. The hiclaw-credential-provider Sidecar handles STS token issuance, key rotation, and per-worker access policy enforcement, and can integrate with Alibaba Cloud OSS, AWS S3, or any S3-compatible backend without changing Controller code.
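Per-worker storage scoping via accessEntries could look roughly like this. The accessEntries field name appears in the release notes; its sub-fields here are illustrative assumptions:

```yaml
# Sketch of per-worker object-storage scoping via accessEntries.
# accessEntries is named in the CRD; its sub-fields are assumptions.
apiVersion: hiclaw.io/v1     # assumed group/version
kind: Worker
metadata:
  name: coder-01
spec:
  accessEntries:
    - bucket: hiclaw-tenant-a
      prefix: workers/coder-01/   # issued STS tokens are scoped to this path
      permissions: [read, write]
```

Constraining credentials to a bucket/prefix pair per Worker is what makes tenant isolation enforceable at the storage layer rather than only in Controller logic.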
The Manager image no longer bundles Higress, Tuwunel, MinIO, and Element Web. Infrastructure services are dedicated to the hiclaw-embedded image (the Controller container), while Manager is a lightweight pure-Agent container (reducing size by about 1.7 GB). This enables independent scaling, restart isolation, and clear separation of responsibilities.
The built-in OpenClaw engine has been upgraded to hiclaw-2026.4.14, bringing Matrix private-network security fixes, structured Matrix debug logs (HICLAW_MATRIX_DEBUG=1), and unified gateway Control UI ports. The openclaw-base image has been rebased from higress/all-in-one (~1.79 GB) onto higress/ubuntu:24.04 (~103 MB), shrinking all downstream images (manager, worker, copaw-worker, hermes-worker) by about 1.7 GB. Key compatibility fixes include: setting gateway.bind = "lan" to support cross-container access, autoJoin = "always" to ensure Matrix rooms are joined reliably, and dangerouslyAllowPrivateNetwork = true to adapt to the FQDN-over-loopback approach for the embedded homeserver.
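Taken together, those three compatibility settings correspond to a configuration fragment along these lines. Only the three keys and their values are confirmed by the release notes; the file layout and key grouping are assumptions:

```yaml
# Assumed layout -- only the three keys below are confirmed above.
gateway:
  bind: lan                             # allow cross-container access
matrix:
  autoJoin: always                      # join invited rooms reliably
  dangerouslyAllowPrivateNetwork: true  # FQDN-over-loopback embedded homeserver
```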
When upgrading from v1.0.9, workers-registry.json data is automatically migrated into CRD resources. The Worker runtime, model, skills, MCP Server, and team member relationships are all preserved. On first startup, the Controller detects the legacy state and creates the corresponding Worker/Team CRs.
The CLI is preinstalled and automatically authenticated inside the Controller container. Administrators can directly query or manage resources with docker exec -it hiclaw-controller hiclaw get workers without going through the Manager Agent. The CLI supports commands such as create, get, update, delete, apply, worker wake/sleep/status, status, version, and more.
The Worker CRD now supports spec.state: running | stopped. Setting state: stopped (or hiclaw worker sleep) gracefully stops the container while preserving all state; setting state: running (or hiclaw worker wake) restarts it. Manager uses this mechanism to implement automatic sleep when idle and wake on demand.
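Since spec.state is a declared field, the sleep/wake cycle can also be driven directly with kubectl. The manifest below assumes the same hypothetical API group as the examples above; only spec.state and its two values are confirmed:

```yaml
apiVersion: hiclaw.io/v1   # assumed group/version
kind: Worker
metadata:
  name: coder-01
spec:
  state: stopped   # equivalent to `hiclaw worker sleep`; the container
                   # stops while Matrix accounts and volumes are preserved
```

Waking the Worker back up would then be a one-line patch using the documented short name, e.g. `kubectl patch wk coder-01 --type merge -p '{"spec":{"state":"running"}}'`.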
After a fresh installation, a welcome/onboarding message is automatically sent as a direct message to the administrator, and it works correctly even in Embedded mode. Before sending, the Controller validates both Matrix room membership and LLM authentication readiness at the same time (end-to-end probing), ensuring that Manager will not receive messages it cannot reply to. The install script waits for the welcome message to finish sending, providing a smooth first-use experience.
The installer gains interactive Hermes runtime selection, masked display when entering secrets, version selection, an uninstall subcommand (hiclaw-install.sh uninstall), and fast failure when the embedded image is missing (no more silent fallback to the now-defunct legacy architecture path).
Fixes and improvements in this release include:
- hiclaw create worker / hiclaw apply worker no longer ignore the default model set during administrator installation (HICLAW_DEFAULT_MODEL); previously all newly created Workers silently used qwen3.5-plus.
- HICLAW_DEFAULT_WORKER_RUNTIME now actually takes effect. The CRD schema-level default caused the API Server to fill in spec.runtime=openclaw before the Controller saw an empty value; the CRD default has been removed and correct environment-variable fallback parsing introduced.
- Reliable confirmation replies via --no-wait plus heartbeat-delayed processing.
- Rooms are now joined explicitly with JoinRoom after creation, instead of relying on the runtime's automatic invite acceptance behavior.
- hiclaw apply worker --zip no longer ignores the Worker runtime in manifest.json; it previously always defaulted to openclaw.
- allowedConsumers is no longer cleared when the Controller restarts, which caused Manager/Worker to temporarily receive 403 errors.
- AGENTS.md / SOUL.md / HEARTBEAT.md are no longer re-pushed by mirror during reconciliation, which overwrote the correctly merged versions. These files are now excluded from the mirror and managed by their respective authoritative writers.
- Fixed a concurrency issue in the openclaw Matrix channel when groupAllowFrom updates and message sending occurred at the same time, such as during Worker creation.
- Worked around matrix.autoJoin defaulting to "off" in openclaw 2026.4.x, which caused the Agent to remain in invite state forever and never process room events.
- uninstall now deletes the hiclaw-controller container; previously it left Docker volumes occupied and preserved old state across reinstalls.
- Fixed a fallback to openclaw regardless of the original runtime.
- Hermes: m.mentions.user_ids for cross-runtime messages, in-container autonomous execution via HERMES_YOLO_MODE=1, and noise suppression with MATRIX_HOME_CHANNEL=disabled.
- openclaw.json no longer uses userId=@default instead of userId=@manager, which silently dropped all administrator DM messages.
- Rebased openclaw-base from higress/all-in-one:2.2.1 (~1.79 GB) onto higress/ubuntu:24.04 (~103 MB), shrinking all downstream images by about 1.7 GB.
Alibaba Cloud Native Community - April 22, 2026