

Recent
Uncovering limitations of Gemini over OpenAI
Recently, while developing my AI agent library, agentai, I introduced a new CI pipeline to run examples against various models for each pull request. This process ensures that all elements of the library work correctly with each new release. I started by running these tests using GitHub Models, primarily for convenience (as they are already integrated) and to enable external contributors to use the same test suite in the same environment.
MCP is Not Enough: Addressing Security Flaws with a WebAssembly-Based Standard
The rapid proliferation of applications built around Large Language Models (LLMs) has brought a common challenge to the forefront: how do we effectively provide these models with the necessary external context? It’s well understood that LLMs, much like humans, can “hallucinate” or produce incorrect information when operating without sufficient context. We cannot always rely solely on the knowledge embedded within a model: it may be outdated (due to knowledge cutoffs), limited to publicly available information, and unable to access private or real-time data. These limitations restrict the utility of LLMs. Consequently, every LLM provider offers methods to supplement the model's knowledge, such as Retrieval-Augmented Generation (RAG) or tool calling, each with provider-specific implementations.
Crate Kickstart: Essential Tips for Bootstrapping a Rust Project With Modern Tooling
Recently, I was working on the AI_Devs course, which aims to teach how AI agents work and how to create them. I completed all the exercises using the Rust programming language, and during that time I started developing a common library to group code that could be shared across the different subprojects for the course. After finishing the entire course, one file, agent.rs, stood out to me as particularly useful in most of the exercises. This was the initial implementation of my library, AgentAI.