MCP Is Not Enough: Addressing Security Flaws with a WebAssembly-Based Standard
The rapid proliferation of applications built around Large Language Models (LLMs) has brought a common challenge to the forefront: how do we effectively provide these models with the necessary external context? It’s well understood that LLMs, much like humans, can “hallucinate” and produce incorrect information when operating without sufficient context. We cannot rely solely on the knowledge embedded within a model: that knowledge may be outdated (due to knowledge cutoffs), limited to publicly available information, and unable to reflect private or real-time data. These limitations restrict the utility of LLMs. Consequently, all major LLM providers offer methods to supplement a model’s built-in knowledge, such as Retrieval-Augmented Generation (RAG) or tool calling, each with its own provider-specific implementation, as the sketch below illustrates.
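To make that fragmentation concrete, here is a minimal sketch of the same weather-lookup tool declared twice: once in the OpenAI Chat Completions tool format and once in the Anthropic Messages tool format, using each vendor's official Python SDK. The tool name, model identifiers, and the weather lookup itself are illustrative assumptions rather than part of any standard; the point is only that each provider expects a differently shaped definition for the same capability.

```python
# Minimal sketch: one hypothetical tool, declared in two provider-specific
# formats. Assumes the official SDKs ("pip install openai anthropic") and
# API keys set via environment variables.
from openai import OpenAI
from anthropic import Anthropic

# JSON Schema for the tool's input -- conceptually shared by both providers.
weather_params = {
    "type": "object",
    "properties": {
        "city": {"type": "string", "description": "City name, e.g. 'Berlin'"},
    },
    "required": ["city"],
}

# OpenAI: each tool is wrapped in a {"type": "function", "function": ...}
# envelope, and the input schema lives under the key "parameters".
openai_tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Look up the current weather for a city.",
        "parameters": weather_params,
    },
}]
openai_resp = OpenAI().chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Weather in Berlin?"}],
    tools=openai_tools,
)

# Anthropic: tools are flat dictionaries, and the same schema lives under
# a differently named key, "input_schema".
anthropic_tools = [{
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "input_schema": weather_params,
}]
anthropic_resp = Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model choice
    max_tokens=1024,
    messages=[{"role": "user", "content": "Weather in Berlin?"}],
    tools=anthropic_tools,
)
```

Every application that targets more than one provider ends up maintaining duplicated definitions like these; this is precisely the integration gap that the Model Context Protocol (MCP) set out to standardize.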