Building the internet of agents: a technical dive into AI agent protocols and their role in evolving intelligent systems

by Brenden Burgess


As large language model (LLM) agents gain ground across corporate and research ecosystems, a fundamental gap has emerged: communication. While today's agents can reason, plan, and act independently, their ability to coordinate with other agents or interface with external tools remains constrained by the absence of standardized protocols. This communication bottleneck not only fragments the agent landscape but also limits the scalability, interoperability, and emergence of collaborative AI systems.

A recent survey by researchers from Shanghai Jiao Tong University and the ANP community offers the first comprehensive taxonomy and evaluation of protocols for AI agents. The work presents a principled classification scheme, explores existing protocol frameworks, and outlines future directions for scalable, secure, and intelligent agent ecosystems.

The problem of communication in modern AI agents

The deployment of LLM agents has outpaced the development of mechanisms that allow them to interact with each other or with external resources. In practice, most agent interactions rely on ad hoc APIs or fragile function-calling paradigms, approaches that lack generalization, safety, and cross-vendor compatibility guarantees.

The problem is similar to the early days of the Internet, where the absence of common transport- and application-layer protocols prevented seamless information exchange. Just as TCP/IP and HTTP catalyzed global connectivity, standard protocols for AI agents are poised to serve as the backbone of a future "internet of agents".

A framework for agent protocols: context vs collaboration

The authors propose a two-dimensional classification system that organizes agent protocols along two axes:

  1. Context-oriented vs. inter-agent protocols
    • Context-oriented protocols govern how agents interact with external data, tools, or APIs.
    • Inter-agent protocols enable peer-to-peer communication, task delegation, and coordination among multiple agents.
  2. General-purpose vs. domain-specific protocols
    • General-purpose protocols are designed to operate across diverse environments and agent types.
    • Domain-specific protocols are optimized for particular applications such as human-agent dialogue, robotics, or IoT systems.

This classification helps clarify design trade-offs across flexibility, performance, and specialization.
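The two axes above can be encoded directly, which makes it easy to compare where different protocols sit. The sketch below is a hypothetical illustration of the survey's taxonomy; the placements of MCP and A2A follow the article's own descriptions, but the class and field names are invented.

```python
from dataclasses import dataclass
from enum import Enum

# Axis 1: who the protocol connects.
class Orientation(Enum):
    CONTEXT = "context-oriented"      # agent <-> external data/tools/APIs
    INTER_AGENT = "inter-agent"       # agent <-> agent

# Axis 2: how broadly it applies.
class Scope(Enum):
    GENERAL_PURPOSE = "general-purpose"
    DOMAIN_SPECIFIC = "domain-specific"

@dataclass(frozen=True)
class ProtocolProfile:
    name: str
    orientation: Orientation
    scope: Scope

# Example placements from the article (illustrative, not normative):
MCP = ProtocolProfile("MCP", Orientation.CONTEXT, Scope.GENERAL_PURPOSE)
A2A = ProtocolProfile("A2A", Orientation.INTER_AGENT, Scope.GENERAL_PURPOSE)

def same_cell(a: ProtocolProfile, b: ProtocolProfile) -> bool:
    """True when two protocols occupy the same cell of the 2x2 taxonomy."""
    return a.orientation == b.orientation and a.scope == b.scope
```

Pinning each protocol to a cell makes the trade-off explicit: MCP and A2A share the general-purpose scope but differ on the orientation axis, which is exactly why they complement rather than replace each other.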

Key protocols and their design principles

1. Model Context Protocol (MCP), from Anthropic

MCP is a general-purpose, context-oriented protocol that facilitates structured interaction between LLM agents and external resources. Its architecture decouples reasoning (host agents) from execution (clients and servers), improving security and scalability. Notably, MCP reduces privacy risk by ensuring that sensitive user data is processed locally rather than embedded directly into LLM-generated function calls.
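The host/client/server split can be seen in the wire format: the model only names a tool and its arguments, while execution happens on the server side. The sketch below follows MCP's JSON-RPC 2.0 framing; treat the tool name and arguments as illustrative, since the exact schema depends on the server.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP-style tools/call request as a JSON-RPC 2.0 message.

    The LLM on the host decides *which* tool to invoke; the client/server
    pair performs the actual execution, keeping sensitive data local.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# The server resolves sensitive values (file contents, credentials) locally,
# so they never appear inside the model-generated call itself.
msg = make_tool_call(1, "read_file", {"path": "notes.txt"})
```

Note the privacy property the article highlights: the request carries only a path reference, never the file's contents, so the model's output stream stays free of the underlying data.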

2. Agent-to-Agent Protocol (A2A), from Google

Designed for secure, asynchronous collaboration, A2A allows agents to exchange tasks and artifacts in enterprise settings. It emphasizes modularity, multimodal support (for example, files and streams), and opaque execution, preserving IP while enabling interoperability. The protocol defines standardized entities such as Agent Cards, Tasks, and Artifacts for robust workflow orchestration.
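A minimal sketch of those three entities helps show how they fit together: an Agent Card advertises capabilities, a Task is a unit of delegated work whose internal execution stays opaque, and Artifacts are the outputs it yields. The real A2A schemas are richer; the field names and state strings below are simplified assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AgentCard:
    """Advertises who an agent is and what it can do (discovery)."""
    name: str
    capabilities: list[str]
    endpoint: str

@dataclass
class Artifact:
    """An output produced while working on a task (file, stream, text)."""
    media_type: str
    content: bytes

@dataclass
class Task:
    """A unit of delegated work; execution is opaque to the requester,
    which only observes state changes and emitted artifacts."""
    task_id: str
    description: str
    state: str = "submitted"          # e.g. submitted -> working -> completed
    artifacts: list[Artifact] = field(default_factory=list)

    def complete(self, artifact: Artifact) -> None:
        self.artifacts.append(artifact)
        self.state = "completed"
```

The opacity is the design point: the requester sees the Task transition and collects Artifacts, but never the remote agent's prompts, tools, or intermediate reasoning, which is what preserves IP across organizational boundaries.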

3. Agent Network Protocol (ANP), open source

ANP envisions a web-scale network of decentralized agents. Built on top of decentralized identifiers (DIDs) and semantic meta-protocol layers, ANP enables trustless, encrypted communication between agents across heterogeneous domains. It introduces layered abstractions for discovery, negotiation, and task execution, positioning itself as a foundation for an open "internet of agents".
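The identity layer is the key to trustless communication: each agent is addressed by a DID and signs its messages so peers in other domains can verify the origin without a shared authority. The toy sketch below conveys the shape of this; real DIDs use public-key cryptography and a registered DID method, whereas the HMAC here is a dependency-free stand-in.

```python
import hashlib
import hmac

def did_for(agent_name: str) -> str:
    """Derive a toy DID for an agent ("example" method is illustrative)."""
    digest = hashlib.sha256(agent_name.encode()).hexdigest()[:16]
    return f"did:example:{digest}"

def sign(secret: bytes, payload: bytes) -> str:
    """Stand-in for a DID-keyed signature over a message payload."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(secret: bytes, payload: bytes, signature: str) -> bool:
    """Constant-time check that a payload was signed by the claimed agent."""
    return hmac.compare_digest(sign(secret, payload), signature)

sender = did_for("planner-agent")
sig = sign(b"planner-secret", b"negotiate:task-42")
```

With asymmetric keys in place of the shared secret, any agent that resolves the sender's DID document can run the verification side alone, which is what removes the need for a central trust authority.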

Performance metrics: a holistic assessment framework

To assess protocol robustness, the survey introduces a comprehensive framework based on seven evaluation criteria:

  • Efficiency – throughput, latency, and resource usage (for example, token cost in LLMs)
  • Scalability – support for growing agent populations, dense communication, and dynamic task allocation
  • Security – fine-grained authentication, access control, and context desensitization
  • Reliability – robust message delivery, flow control, and connection persistence
  • Extensibility – ability to evolve without breaking compatibility
  • Operability – ease of deployment, observability, and implementation across platforms
  • Interoperability – cross-system compatibility across languages, platforms, and vendors

This framework reflects both conventional network-protocol principles and agent-specific challenges such as semantic coordination and multi-turn workflows.

Towards emerging collective intelligence

One of the most compelling arguments for protocol standardization lies in its potential for collective intelligence. By aligning communication strategies and capabilities, agents can form dynamic coalitions to solve complex tasks, from swarm robotics to modular cognitive systems. Protocols such as Agora go further by allowing agents to negotiate and adopt new protocols in real time, using LLM-generated routines and structured documents.
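The negotiation idea can be sketched as a handshake: two agents compare the protocol versions they support and settle on the best common one before exchanging work, falling back to free-form natural language when nothing overlaps. This is a toy rendering of that pattern; the version strings and fallback behavior are invented, not taken from Agora's specification.

```python
def negotiate(ours: list[str], theirs: list[str],
              fallback: str = "natural-language") -> str:
    """Pick the highest protocol version both agents support.

    If there is no overlap, fall back to unstructured natural language,
    which LLM agents can always parse (at higher token cost).
    """
    common = set(ours) & set(theirs)
    if not common:
        return fallback
    return max(common)  # lexicographic max; fine for "proto/X.Y" strings

chosen = negotiate(["proto/1.0", "proto/1.1"],
                   ["proto/1.1", "proto/2.0"])
```

The trade-off mirrors the article's point: structured protocols are cheap and precise once agreed on, while the natural-language fallback keeps heterogeneous agents from deadlocking when no shared protocol exists yet.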

Likewise, protocols such as LOKA integrate ethical reasoning and identity management into the communication layer, ensuring that agent ecosystems can evolve in a responsible, transparent, and secure manner.

The road ahead: from static interfaces to adaptive protocols

Looking ahead, the authors describe three stages of protocol evolution:

  • Short term: moving from rigid function calls to dynamic, scalable protocols.
  • Medium term: moving from rule-based APIs to agent ecosystems capable of self-organization and negotiation.
  • Long term: the emergence of layered infrastructure that supports privacy-preserving, collaborative, and intelligent agent networks.

These trends point to a shift away from traditional software design toward a more flexible, agent-native computing paradigm.

Conclusion

The future of AI will not be shaped by model architecture or training data alone; it will be shaped by how agents communicate, coordinate, and learn from one another. Protocols are not just technical specifications; they are the connective tissue of intelligent systems. By formalizing these communication layers, we unlock the possibility of a network of decentralized, secure, and interoperable agents, an architecture capable of scaling far beyond the capabilities of any single model or framework.




