Agent interoperability is gaining steam, as organizations continue to propose new interoperability protocols while the industry works out which standards to adopt.
A group of researchers from Carnegie Mellon University has proposed a new interoperability protocol governing the identity, accountability and ethics of autonomous AI agents. Layered Orchestration for Knowledgeful Agents, or LOKA, could join other proposed standards such as Google's Agent2Agent (A2A) and Anthropic's Model Context Protocol (MCP).
In a paper, the researchers noted that the rise of AI agents underscores the importance of governing them.
“As their presence expands, the need for a standardized framework to govern their interactions becomes paramount,” the researchers wrote. “Despite their growing ubiquity, AI agents often operate in siloed systems, lacking a common protocol for communication, ethical reasoning and compliance with jurisdictional regulations. This fragmentation poses significant risks, such as interoperability issues, ethical misalignment and accountability gaps.”
To address this, they propose LOKA, an open-source protocol that would allow agents to prove their identity, “exchange semantically rich, ethically annotated messages,” add accountability and establish ethical governance throughout the agent’s decision-making process.
LOKA builds on what the researchers call a universal agent identity layer, a framework that assigns agents a unique and verifiable identity.
“We envision LOKA as a foundational architecture and a call to re-examine the core elements (identity, intent, trust and ethical consensus) that should underpin agents’ interactions. As the scope of AI agents expands, it is crucial to assess whether our existing infrastructure can facilitate this transition responsibly,” Rajesh Ranjan, one of the researchers, told VentureBeat.
LOKA layers
LOKA works as a layered stack. The first layer revolves around identity, which defines what the agent is. This includes a decentralized identifier, or a “unique, cryptographically verifiable ID.” This would let users and other agents verify the agent’s identity.
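To make the identity-layer idea concrete, here is a minimal sketch of a decentralized identifier and a verifiable message claim. The `did:loka:` prefix, the function names and the use of an HMAC are all illustrative assumptions; the article does not specify LOKA's actual identifier format, and a real system would use asymmetric signatures rather than a shared secret.

```python
import hashlib
import hmac
import json

def derive_did(public_key: bytes) -> str:
    """Derive a unique, verifiable ID from key material (illustrative only)."""
    digest = hashlib.sha256(public_key).hexdigest()[:32]
    return f"did:loka:{digest}"

def sign_message(secret_key: bytes, message: dict) -> str:
    """Sign a message so peers can check it came from the identity holder.
    (Simplified: real decentralized identity uses public-key signatures.)"""
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(secret_key, payload, hashlib.sha256).hexdigest()

def verify_message(secret_key: bytes, message: dict, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(message, sort_keys=True).encode()
    expected = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Example: an agent derives its ID and signs a claim another agent can verify.
key = b"agent-secret-key-material"
did = derive_did(b"agent-public-key-material")
msg = {"sender": did, "intent": "schedule_meeting"}
sig = sign_message(key, msg)
print(verify_message(key, msg, sig))
```

The point of the sketch is only that identity is cryptographically checkable rather than asserted: any peer holding the verification material can confirm who sent a message before acting on it.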
Next is the communication layer, where the agent informs another agent of its intention and the task it needs to accomplish. This is followed by the ethics layer and the security layer.
LOKA’s ethics layer defines how the agent behaves. It incorporates “a flexible yet robust ethical decision-making framework that allows agents to adapt to varying ethical standards depending on the context in which they operate.” The LOKA protocol employs collective decision-making models, allowing agents within the framework to determine their next steps and assess whether those steps align with ethical and responsible standards.
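The collective decision-making idea can be sketched as a simple vote among agents on whether a proposed step meets shared ethical standards. The quorum threshold and vote format below are assumptions for illustration, not LOKA's published mechanism.

```python
def ethically_approved(votes: dict[str, bool], quorum: float = 0.66) -> bool:
    """Approve a proposed step only if a supermajority of agents endorses it.
    The 0.66 quorum is an assumed parameter, not from the LOKA paper."""
    if not votes:
        return False  # no participating agents, no approval
    approvals = sum(votes.values())
    return approvals / len(votes) >= quorum

# Example: three agents vote on a proposed action; two of three approve.
votes = {"did:loka:a": True, "did:loka:b": True, "did:loka:c": False}
print(ethically_approved(votes))  # 2/3 clears the 0.66 quorum → True
```

The design choice the sketch highlights is that no single agent unilaterally decides whether an action is acceptable; alignment is checked against a consensus of peers.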
Meanwhile, the security layer employs what the researchers describe as “quantum-resilient cryptography.”
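Putting the layers together, a LOKA-style message envelope might carry identity, intent and ethical annotations side by side. The field names and the tag-based check below are hypothetical; the article only says that messages are identity-bound, intent-bearing and ethically annotated.

```python
from dataclasses import dataclass, field

@dataclass
class LokaMessage:
    """Illustrative envelope combining the layers described in the article."""
    sender_did: str      # identity layer: who is speaking
    receiver_did: str    # identity layer: who is addressed
    intent: str          # communication layer: what the agent wants
    task: str            # communication layer: the task to accomplish
    ethics_tags: list[str] = field(default_factory=list)  # ethics layer

def ethically_cleared(msg: LokaMessage, banned: set[str]) -> bool:
    """Ethics-layer check: reject messages carrying any banned annotation."""
    return not banned & set(msg.ethics_tags)

# Example: a well-formed request that passes a policy banning exposed PII.
msg = LokaMessage(
    sender_did="did:loka:abc123",
    receiver_did="did:loka:def456",
    intent="request",
    task="summarize shared document",
    ethics_tags=["pii:none"],
)
print(ethically_cleared(msg, banned={"pii:exposed"}))
```

Because every message carries its sender's verifiable identity and its ethical annotations, a receiving system can audit after the fact who asked for what and under which declared constraints.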
What differentiates LOKA
The researchers said LOKA stands out because it establishes the crucial information agents need to communicate with other agents and operate autonomously across different systems.
LOKA could be useful for enterprises seeking to ensure the safety of the agents they deploy in the world, and to provide a traceable way to understand how an agent made its decisions. Many companies fear that an agent will tap into another system or access private data and make a mistake.
Ranjan said the system “highlights the need to define who agents are, how they make decisions and how they are held accountable.”
“Our vision is to illuminate the critical questions that are often overshadowed in the rush toward AI agents: how do we create ecosystems where these agents can be trusted, held accountable and ethically interoperable across diverse systems?” Ranjan said.
LOKA will have to compete with other agentic protocols and standards now emerging. Protocols like MCP and A2A have found a wide audience, not only because of the technical solutions they provide, but because the projects are backed by organizations people know. Anthropic started MCP, while Google backs A2A, and both protocols have attracted many companies open to using, and improving, these standards.
LOKA operates independently, but Ranjan said the team has received “very encouraging and exciting feedback” from other researchers and institutions about expanding the LOKA research project.