Community Blog

LLM Continues to Enhance the Minimum Intelligence Level, MCP Continues to Enhance the Maximum Creativity Level

This article discusses how LLMs enhance intelligence levels and the MCP protocol boosts creativity in AI, transforming traditional software through innovative integrations and open-source solutions.

Author: Wang Cheng

Introduction

LLM is the brain, and MCP is the hands and feet. LLM continually raises the minimum intelligence level, while MCP continually raises the upper limit of creativity. All applications and software will be transformed by AI, leading to a new paradigm in which all applications and software will also be transformed by MCP. Over the past year or two, industry practitioners have spent considerable resources optimizing models; in the coming years, everyone will focus on building various intelligent agents around MCP, a vast territory outside the dominion of large model vendors.

Although AI still faces ROI challenges in the short term, almost no one doubts its future, and no one wants to miss out on this "arms race."

We have recently open-sourced or released related capabilities that support the MCP protocol to reduce the engineering costs for developers to adopt MCP. For more details, visit:

Table of Contents

1. Why is MCP Getting Popular?

2. What are the Differences Between MCP and Function Calling?

3. MCP Changes the Supply Side, But the Transformation is on the Consumption Side

4. Has MCP Accelerated the Monetization of Large Models?

5. What Does a More Prosperous MCP Ecosystem Depend On?

Readers who open this article share a common observation: since mid-February, MCP has become popular. Let's take a look at two key indicators that reflect the popularity of open-source projects: GitHub Stars and search indexes.

Stars have been accelerating growth since February:


The WeChat Index likewise shows a surge in traffic since February.

From discussions in the community, it is expected that in April a wave of MCP middleware providers will emerge in China, covering Servers, Clients, Server hosting, Registries, Marketplaces, and more, each expanding from its own area of expertise. This article aims to clarify some easily confused concepts, share some observed monetization opportunities, and discuss our plans and progress on MCP.

1. Why is MCP Getting Popular?

MCP replaces fragmented, one-off integrations between large models and third-party data, APIs, and systems with a single standard protocol, evolving from N x N point-to-point integrations to one-for-all, allowing AI systems to obtain the data they need in a simpler and more reliable manner.[1]

MCP was released last November and quickly garnered the first wave of market attention. By February of this year, Cursor, Windsurf, and Cline had begun integrating MCP. The adoption of MCP by AI programming tools can be seen as a clarion call for the ecosystem effects of large models, attracting a large number of developers on the caller side and thereby awakening the massive base of existing applications and systems.

From the perspective of the industry chain, this not only addresses the isolation and fragmentation between AI applications and the vast array of classic online applications, but also significantly deepens the use of AI programming tools and expands their user base. It provides substantial monetization opportunities for AI applications and introduces more traffic to classic online applications. Moreover, it could spur a market for driving specialized software with natural language. For instance, Blender MCP connects AI to Blender, allowing users to create, modify, and enhance 3D models with simple text prompts.

Within this ecosystem, MCP, AI applications, AI programming tools, and classic online applications are all beneficiaries; those who integrate first will gain the most. OpenAI's announcement of support for MCP will accelerate its rise as the core infrastructure for AI-native applications.

P.S. Since large model vendors in China have not yet acted on the Model Context Protocol, there remains uncertainty about whether MCP can ultimately become a de facto standard in China.

Take programmers, a key productivity group, as an example: they no longer need to switch to Supabase to check database status. Instead, they can execute read-only SQL commands through a Postgres MCP server and interact directly with Redis key-value storage through a Redis MCP server, all from their IDE.

While iterating code, they can also use Browsertools MCP to allow coding assistants to access live environments for feedback and debugging.

This is not new; when using cloud products, programmers tend to prefer calling the capabilities of cloud products via APIs rather than jumping between multiple cloud product consoles.
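Under the hood, such a tool invocation travels as a JSON-RPC 2.0 message defined by the MCP specification. The following is a minimal Python sketch of building and decoding a `tools/call` request; the tool name `query` and its SQL argument are illustrative, mirroring the read-only Postgres scenario above:

```python
import json

def build_tools_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# A read-only SQL query sent to a hypothetical Postgres MCP server.
msg = build_tools_call(1, "query", {"sql": "SELECT count(*) FROM orders"})
decoded = json.loads(msg)
print(decoded["method"])          # tools/call
print(decoded["params"]["name"])  # query
```

Because every server speaks this same envelope, the IDE needs one client implementation rather than one integration per backend.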

Programmers are often early adopters of new technologies. As MCP matures, regular consumers will also leverage natural language to boost the prosperity of the MCP industry chain.

2. What are the Differences Between MCP and Function Calling?

Firstly, both MCP and Function Calling are technical implementations for large models to call external data, applications, and systems.

MCP was launched by Anthropic at the end of November 2024, while Function Calling was first introduced by OpenAI in June 2023 (it creates an external function as an intermediary to pass requests from the large model to external tools, an approach most large model vendors have since adopted).

However, they have significant differences in positioning, development costs, and other aspects.

Different Positioning:

  • MCP is a standard at the general protocol layer, akin to a "USB-C interface in the AI domain," defining the communication format between LLMs and external tools/data sources without binding to any specific models or vendors, abstracting complex function calls into a client-server architecture.
  • Function Calling is a proprietary capability provided by large model vendors; interface definitions and developer documentation differ across vendors. It allows the model to directly generate function calls that trigger external APIs, relying on the model's own context understanding and structured output capabilities.

Different Development Costs:

  • In the case of Function Calling, using OpenAI as an example, a functional description in JSON Schema format must be written for each external function, along with carefully designing a prompt template to enhance the accuracy of Function Calling responses. If a requirement involves dozens of external systems, design costs can be enormous, and productization costs are extremely high.
  • MCP refers to the large model runtime environment as the MCP Client and the external function runtime environment as the MCP Server, standardizing the operation specifications of both the MCP client and server while requiring communication between them to follow a predetermined prompt template. This enhances global developer collaboration through the MCP Server and facilitates the reuse of global development outcomes.
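To make the per-function cost concrete, here is what one Function Calling tool description looks like in the OpenAI-style JSON Schema format; the `get_weather` function is a made-up example, and a schema like this (plus prompt tuning) is needed for every external function:

```python
import json

# OpenAI-style tool definition: one of these per external function.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# The schema is sent to the model with every request that may call the tool.
print(json.dumps(get_weather_tool, indent=2))
```

Multiply this by dozens of external systems, and the design and maintenance burden that MCP's shared server ecosystem amortizes becomes clear.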

Different Interaction Methods:

  • MCP implements bidirectional communication through a standardized client-server architecture, requiring developers to pre-configure servers and define interfaces.
  • Function Calling is actively triggered by the model itself, which directly inserts the call request (e.g., in JSON Schema format) during text generation; the host application parses and executes it, returning the result.
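The client-server split can be illustrated with a toy in-process dispatcher: the "server" registers tools and answers `tools/list` and `tools/call`, while the "client" speaks only the protocol and knows nothing about tool internals. This is a conceptual sketch, not the actual MCP SDK:

```python
class ToyMCPServer:
    """Registers tools and serves tools/list and tools/call requests."""

    def __init__(self):
        self._tools = {}

    def tool(self, func):
        """Decorator: register a function as a callable tool."""
        self._tools[func.__name__] = func
        return func

    def handle(self, method: str, params: dict):
        """Dispatch a protocol-level request to the registered tools."""
        if method == "tools/list":
            return sorted(self._tools)
        if method == "tools/call":
            return self._tools[params["name"]](**params["arguments"])
        raise ValueError(f"unknown method: {method}")

server = ToyMCPServer()

@server.tool
def add(a: int, b: int) -> int:
    return a + b

# The client side knows only the protocol, not the tool's implementation.
print(server.handle("tools/list", {}))  # ['add']
print(server.handle("tools/call", {"name": "add", "arguments": {"a": 2, "b": 3}}))  # 5
```

In real MCP the two halves run in separate processes connected over stdio or SSE, which is exactly what makes servers reusable across clients.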

Deep Coupling with Model Capabilities:

  • Function Calling is usually deeply bound to the model's contextual understanding. For example, GPT-4's Function Calling can leverage the model's reasoning capabilities to optimize call parameters or adjust subsequent generated content based on the returned results.
  • As a general protocol, MCP needs to transmit information through standardized interfaces, which may sacrifice some collaborative optimization space with specific models.

Real-Time and Low Latency Requirements:

  • The calling logic of Function Calling is directly embedded in the model response process, making it suitable for scenarios that require high real-time performance (e.g., online payments, real-time data analysis).
  • MCP requires a handover through the MCP server, which may introduce additional latency, especially with cross-network calls.

In summary, the comprehensive adaptation of MCP will reduce reliance on Function Calling, particularly in cross-platform and standardized tool integration scenarios.

However, Function Calling will remain irreplaceable in specific scenarios, such as model-driven dynamic decision-making, real-time task execution, proprietary ecosystem integration, etc. Additionally, in some lightweight calling scenarios, Function Calling has advantages in terms of effectiveness.

In the future, the two can complement each other, with MCP serving as the foundational protocol layer and Function Calling as the model enhancement layer, jointly promoting seamless interactions between AI and the external world.

3. MCP Changes the Supply Side, But the Transformation is on the Consumption Side

Different people have varying understandings of the supply side and the consumption side. Here, we define both terms in this article:

  • Supply Side: The industry chain that provides AI Agent services, including cloud vendors, large models, AI applications (including AI Agents), classic online applications, and various AI middleware service providers.
  • Consumption Side: End users who utilize AI Agents.

First, we must mention Devin and Manus.

The emergence of Devin marked a qualitative change in AI programming: from a programming assistant to a programmer proxy. It no longer just completes and assists with code; it covers the entire process from requirements analysis to coding, testing, deployment, and debugging, independently handling complete tasks. Devin transforms the programmer community (domestic users of programming proxies may try Lingma). Manus, on the other hand, transforms ordinary internet users: interaction with AI shifts from a question-and-answer dialogue service to a general AI agent that can mobilize online services beyond AI applications and autonomously realize user ideas end to end, a qualitative change from "passive response" to "active co-creation."

The more intelligent the outcome, the more complex the process. The view that "cognitive load is a core barrier to engineering effectiveness" is even more pronounced for AI Agents.

Therefore, AI Agents have a stronger demand for efficient development and engineering paradigms. Unlike classic internet applications, the productization and engineering of AI Agents are considerably more complex. E-commerce applications meet the demand for users to shop without leaving home, and chat applications meet the demand for users to socialize without leaving home; these are forms of physical substitution. AI Agents, on the other hand, represent a substitution of mental and cognitive effort, assisting users in completing a full range of activities from basic survival to advanced creation.

Relying solely on Function Calling to invoke external applications is clearly not an efficient development paradigm.

Only MCP allows developers to easily craft the next Manus. It is akin to the HTTP protocol in the internet world, enabling all clients and websites to communicate based on a single standard, thereby promoting global developer collaboration and accelerating the advent of AGI.

4. Has MCP Accelerated the Monetization of Large Models?

From our observations, this is indeed the case.

Take Firecrawl as an example; this open-source project offers:

  • Comprehensive data scraping: Automatically crawl all accessible sub-pages of an entire website without relying on a sitemap.
  • Data cleaning and formatting: Automatically convert the scraped web content into clean Markdown or structured data, removing irrelevant information such as advertisements and navigation bars, discarding page noise.
  • LLM-ready output without further processing: Seamlessly interfaces with models, directly outputting LLM-ready formats, and can be integrated into various AI programming frameworks to accelerate the data preprocessing workflow.

Before supporting MCP, Firecrawl already had fully automated web crawling capabilities, but it relied on traditional technologies, requiring users to manually call Firecrawl services through REST APIs or SDKs, with no seamless integration with large models.
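That pre-MCP workflow looked roughly like this: hand-construct an HTTP request against a REST endpoint and post-process the result yourself. The sketch below builds (but does not send) such a request; the endpoint path and payload fields are illustrative, not Firecrawl's actual API:

```python
import json
import urllib.request

def build_scrape_request(base_url: str, target: str) -> urllib.request.Request:
    """Build (but do not send) a hypothetical scrape-API request."""
    payload = json.dumps({"url": target, "formats": ["markdown"]}).encode()
    return urllib.request.Request(
        base_url + "/v1/scrape",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_scrape_request("https://api.example.com", "https://example.org")
print(req.get_full_url())  # https://api.example.com/v1/scrape
print(req.get_method())    # POST
```

With an MCP server in front of the same capability, the model issues a `tools/call` instead, and no per-vendor request-building code is needed on the client.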

In January of this year, Firecrawl officially introduced the MCP protocol through its integration with the Cline platform, allowing developers to invoke Firecrawl's crawling capabilities via the MCP server, achieving an automated process where "AI models directly control web scraping."

More importantly, users need not worry that protocol binding will limit extensibility; realizing richer large model capabilities will depend on many middleware providers like Firecrawl.

Thus, MCP opens up network effects for large model middleware suppliers, accelerating the monetization capabilities of these players.

The a16z Infra team has created an MCP Market Map.[2]

This map covers the most active areas in the MCP ecosystem today; while many blanks remain, it offers inspiration for innovation in China.


As the adoption rate of MCP continues to rise, infrastructure and tools will play a crucial role in the scalability, reliability, and accessibility of the MCP ecosystem.

This leads to a potential outcome that could be entirely different from the traditional internet industry chain: opportunities in the B2B space will be richer than in B2C.

  • MCP Client: As the caller, it serves as the interaction entry point between users and the MCP ecosystem, focusing on terminal functionality. For example, chat applications (such as Claude) provide natural language interaction services, allowing users to invoke AI capabilities through conversation; coding tools (such as Cline, Cursor) invoke the capabilities of external applications and systems from within the IDE in AI programming scenarios; task automation tools help users automate repetitive tasks, such as data processing and workflow scheduling, to boost efficiency. Manus is a typical MCP Client.
  • MCP Server: As the called party, it provides backend service support through various core functional modules. For example, database types (such as ClickHouse, Supabase) are responsible for data storage, querying, and management; design types (such as Figma, Blender) support design creation, file collaboration, and other functionality; productivity tools (such as Notion, Obsidian) offer note management, knowledge organization, and other collaborative services; payment types (such as Stripe) handle online payment transactions and support the flow of funds in commercial scenarios.
  • MCP Marketplace: Acting as the hub of the ecosystem, it aggregates and distributes MCP-related tools, similar to an "app store." On one hand, developers can publish MCP client and server tools here; on the other, users can conveniently discover and use various MCP tools (such as MCP.so and Glama), facilitating the circulation and sharing of resources within the ecosystem.
  • Server Generation & Curation: Focuses on the development and maintenance of MCP servers, providing tools or frameworks (such as Mintlify, Stainless) that assist in server development and simplify setup, and optimizing server configuration and feature iteration to ensure stable performance that meets the requirements of different business scenarios.
  • Connection Management: Coordinates the interactions of components within the MCP ecosystem, managing connections between clients and servers, as well as between servers, to ensure efficient data transmission; it also optimizes connection stability and handles network protocol adaptation, request routing, and so on, ensuring smooth interactions within the ecosystem.
  • Server Hosting: Provides the operational environment for MCP servers, leveraging cloud computing and other infrastructure (such as Cloudflare, Smithery) to host server code and data, and taking responsibility for operations, scaling, and security to ensure continuous, stable operation.

Higress and its cloud product API gateway are collaborating with Tongyi Lingma, adopting a technical solution of SSE + Tool + Redis, and have launched Higress MCP Server Hosting[3]. This provides two capabilities: firstly, Higress acts as a platform providing simple IO Tool capabilities to interface with several mainstream external applications, and users can also implement complex IO Tool capabilities independently; secondly, as a gateway, it manages identity verification and authorization for access to MCP Servers.

Additionally, Nacos will release capabilities for MCP Registry, including dynamic discovery and management of MCP Servers, and distribution of dynamic prompts to manage the MCP protocol, which can assist non-MCP services in transforming into MCP services.

The combination of Nacos + Higress, along with open-source solutions such as Apache RocketMQ and OTel, is maximizing the reuse of existing cloud-native technology components, significantly reducing the construction costs of AI Agents for classic internet applications.

5. What Does a More Prosperous MCP Ecosystem Depend On?

MCP is essentially a type of API.

The MCP Server encapsulates functional services; in essence, it exposes a standardized server-side interface via the MCP protocol. Whenever cross-network access is involved, authentication, authorization, data encryption and decryption, and anti-attack mechanisms are required. At this point, a gateway for managing MCP Servers becomes necessary.

Similar to an API gateway, a gateway managing MCP will enforce access controls, route requests to the correct MCP servers, handle load balancing, and cache responses to improve efficiency. This is particularly important in multi-tenant environments, as different users and agents require different permissions. A standardized gateway simplifies interactions between clients and servers, enhances security, and provides better observability, making MCP deployments more scalable and manageable.

  • Authentication: Verify the identity of users, devices, or services to prevent unauthorized entities from accessing the ecosystem. For example, when users log into the MCP client, they verify their identity through account passwords, tokens, etc., to avoid malicious attacks or illegal access.
  • Authorization: Provide granular control over permissions, determining the range of executable operations for users or services once authentication succeeds. For example, regular users can only use basic MCP server functions, while advanced users or specific services may gain higher privileges such as database read/write access and calling sensitive tools.
  • Traffic Control: Implement functionality such as request filtering, rate limiting, and protocol conversion. For instance, rate limiting can be applied to high-concurrency requests, illegal requests can be intercepted, and encrypted transmission can be handled uniformly to enhance overall safety and stability in the ecosystem.
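These three gateway duties can be sketched as a single admission check. The token table and rate limits below are illustrative; a production gateway such as Higress implements them as configurable plugins rather than in-process code:

```python
import time

# Illustrative token table: token -> (user, set of permitted tools)
TOKENS = {"tok-alice": ("alice", {"query"}), "tok-bob": ("bob", {"query", "write"})}

class RateLimiter:
    """Token-bucket limiter: `rate` requests/second, burst up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiters: dict = {}

def admit(token: str, tool: str) -> str:
    """Authenticate, authorize, and rate-limit one MCP request."""
    if token not in TOKENS:                       # authentication
        return "401 unauthenticated"
    user, allowed = TOKENS[token]
    if tool not in allowed:                       # authorization
        return "403 forbidden"
    bucket = limiters.setdefault(user, RateLimiter(rate=1.0, capacity=2))
    if not bucket.allow():                        # traffic control
        return "429 rate limited"
    return "200 forwarded"

print(admit("tok-alice", "query"))  # 200 forwarded
print(admit("tok-alice", "write"))  # 403 forbidden
print(admit("tok-eve", "query"))    # 401 unauthenticated
```

Only requests that pass all three checks are routed on to the backing MCP server.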

You will soon experience these capabilities on the Higress gateway. Higress, as an AI-native API gateway, provides a complete open-source MCP Server hosting solution that facilitates protocol conversion from existing APIs to MCP. For more details, see the Higress open-source remote MCP Server hosting solution and the upcoming MCP Market.

Additionally, Nacos acts as the MCP Registry, playing a control-plane role that not only manages metadata for tools but also transforms existing APIs into the MCP protocol. Nacos can help applications quickly convert existing API interfaces into MCP protocol interfaces, working with the Higress AI gateway to achieve conversion between the MCP protocol and existing protocols.

In the MCP ecosystem, due to the more complex and diverse calling relationships, observability is also a crucial infrastructure that cannot be overlooked:

  • Fault troubleshooting and issue diagnosis: Collect logs to record discrete events occurring in the components of the ecosystem (such as MCP clients and servers). When problems arise, developers and maintenance personnel can trace system behavior from these logs to quickly identify points of failure. Link tracing can analyze the invocation paths of requests between components, determining which parts encountered errors or blocking, and whether inputs and outputs meet expectations, helping locate issues in cross-component interactions. Call-chain analysis can also be performed on invocation statuses.
  • Performance Optimization: Aggregated metrics perform statistical analysis on key indicators such as system resource usage (e.g., CPU, memory), response time, and throughput, identifying performance bottlenecks and providing the basis for configuration optimization and architectural adjustments. For example, if a certain MCP server is found to respond slowly under high concurrency, the code can be optimized or additional hardware resources allocated.
  • Service Quality Monitoring: Real-time monitoring of the operational status and availability of services within the MCP ecosystem, promptly detecting issues that affect user experience, such as service interruptions and excessive delays, and triggering corresponding alert mechanisms so that maintenance personnel can respond quickly, ensuring service stability and reliability.
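A minimal sketch of the first two pillars, timing spans for call-chain tracing and aggregating them into metrics; the span names are illustrative, and in practice OpenTelemetry SDKs provide this machinery:

```python
import time
from contextlib import contextmanager
from statistics import mean

spans: list = []  # collected (name, duration_seconds) records

@contextmanager
def span(name: str):
    """Record how long a named step of a call chain takes."""
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append((name, time.perf_counter() - start))

# Trace one MCP request across components.
with span("client.request"):
    with span("gateway.auth"):
        time.sleep(0.01)
    with span("server.tools_call"):
        time.sleep(0.02)

# Aggregate spans into simple metrics for performance analysis.
durations: dict = {}
for name, d in spans:
    durations.setdefault(name, []).append(d)
for name, ds in sorted(durations.items()):
    print(f"{name}: avg {mean(ds) * 1000:.1f} ms")
```

The same span records feed both troubleshooting (which step failed or stalled) and optimization (which step dominates latency).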

As a standardized map service capability platform, Amap has taken the lead in launching its MCP Server, offering 12 core functions to support the development of enterprise-grade intelligent agent applications.

We anticipate the swift emergence of a large wave of MCP Servers and MCP middleware in China, accelerating the productization and engineering of AI Agents.

[1] https://mp.weixin.qq.com/s/zYgQEpdUC5C6WSpMXY8cxw
[2] https://a16z.com/a-deep-dive-into-mcp-and-the-future-of-ai-tooling/
[3] https://github.com/alibaba/higress/tree/main/plugins/wasm-go/mcp-servers
