Unifying Equations and Inequalities
Physics is a broad field that encompasses and describes phenomena ranging from the diffusion of molecules in a glass of water to the motion of planets in solar systems. Despite the vastness of the field, only a small set of equations and inequalities is needed to summarize each of its subfields: Maxwell’s equations for electromagnetism, the Navier-Stokes equations for fluid dynamics, and $E=mc^2$ for relativity, to name a few. The same is true for deep learning and AI, where only a few equations and inequalities are needed to understand the majority of progress in the field. In this post, I will summarize the theoretical minimum needed to understand deep learning and where it’s headed in the future.
The What and Why of MCP
The building block of all communication is standardization. The first humans to communicate had to agree on a standard — a shared understanding of sounds, gestures, or symbols — in order to convey meaning and ensure that intentions could be accurately interpreted by others. In today’s web applications, the standard of communication is the application programming interface (API), in which a server defines the rules that clients must follow to use its service. It is often up to the client developer to ensure that this standard is met every time the application calls an external service such as Google Maps or a weather API. With the rise of agentic AI, we no longer have a developer manually coding each interaction — instead, the AI must learn to interpret and follow these API standards autonomously. This shift in the responsibility for adhering to communication standards, from humans to intelligent agents, means that we need a new communication layer on top of APIs, such as the Model Context Protocol (MCP), that is understandable and digestible to agents. Just as early humans needed shared rules to communicate effectively, modern agents need protocols like MCP to reason, plan, and act reliably across a growing ecosystem of tools and services.
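To make the idea concrete, here is a minimal sketch of what an agent-digestible standard looks like: the server publishes a machine-readable description of a tool, and the agent discovers and calls it by name rather than relying on hand-written client glue. The tool name, the schema fields, and the `get_forecast` function are illustrative stand-ins rather than the official MCP SDK; the real protocol adds transports, sessions, and much more on top of this pattern.

```python
# A minimal sketch (not the official SDK) of the idea behind MCP:
# a service describes its capabilities in a machine-readable form
# that an agent can discover and invoke without hand-written glue code.
# The field names mirror MCP's tool-description pattern
# (name / description / input schema) but are illustrative only.
import json

# What the server publishes: a standardized description of one tool.
WEATHER_TOOL = {
    "name": "get_forecast",
    "description": "Return a short weather forecast for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Berlin'"},
        },
        "required": ["city"],
    },
}

def get_forecast(city: str) -> str:
    # Stub implementation; a real server would call an actual weather API here.
    return f"Sunny and 22°C in {city}."

def list_tools() -> str:
    """What an agent sees when it asks the server what it can do."""
    return json.dumps([WEATHER_TOOL], indent=2)

def call_tool(name: str, arguments: dict) -> str:
    """Dispatch a tool call the way an MCP-style server would."""
    if name == "get_forecast":
        return get_forecast(**arguments)
    raise ValueError(f"Unknown tool: {name}")

if __name__ == "__main__":
    # The agent first discovers the contract, then calls the tool by name.
    print(list_tools())
    print(call_tool("get_forecast", {"city": "Berlin"}))
```

The key design point is that the description, not the implementation, is the contract: any agent that can read the published schema can decide when and how to call the tool, which is exactly the burden that used to fall on a human client developer.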
In 1976, the British statistician George Box wrote that “all models are wrong, but some are useful.” Nowhere is this more apparent than in machine learning, where models are not just wrong but often unreliable despite their immense usefulness. Compare this with disciplines such as civil engineering or traditional software engineering, where engineers work at reliability levels of 99.9999999%, so that the likelihood of their systems failing is smaller than the chance of a large meteorite hitting Earth tomorrow.