Dynamically discover tools and optimize tool calling

So MCP servers are getting a lot of heat for overloading contexts with tool schemas and confusing LLMs. Anthropic suggests an optimized tool-calling approach ("Code execution with MCP: Building more efficient agents") which reduces cost and increases efficiency. That said, it only works with Anthropic, is still tied to MCP servers AFAIK, and is architecturally quite complex.

But it inspired me to this idea: https://github.com/christianalfoni/ai-code-tools.

What if we could keep the existing API for tools? Instead of MCP servers we just do what we normally do: integrate with APIs for our exact needs, and the LLM automatically discovers the tools and executes them efficiently and safely.

In my mind this reduces complexity, simplifies the mental model drastically, and gives back more control over what the LLM is allowed to do, without risking overloaded contexts. Just provide isolated tools for whatever local/external data you want.
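To make the idea concrete, here is a minimal sketch in TypeScript of what "isolated tools the LLM discovers on demand" could look like. All names here (`registerTool`, `discoverTools`, `executeTool`) are hypothetical and not the actual ai-code-tools API; the point is that the model first fetches a compact index instead of receiving every full schema up front, then calls only the tools it needs.

```typescript
// Hypothetical sketch, not the real ai-code-tools API.
// Tools are plain functions wrapping whatever APIs you already use.

type Tool = {
  description: string;
  run: (args: Record<string, unknown>) => unknown;
};

const registry = new Map<string, Tool>();

function registerTool(name: string, tool: Tool): void {
  registry.set(name, tool);
}

// Discovery returns only a lightweight index (name + one-line description),
// so the context is not flooded with full JSON schemas.
function discoverTools(): { name: string; description: string }[] {
  return [...registry.entries()].map(([name, t]) => ({
    name,
    description: t.description,
  }));
}

// Execution is gated through the registry: the model can only call
// tools you explicitly registered, which keeps control on your side.
function executeTool(name: string, args: Record<string, unknown>): unknown {
  const tool = registry.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.run(args);
}

// Example: one isolated tool wrapping an internal API (stubbed here).
registerTool("getWeather", {
  description: "Fetch current weather for a city",
  run: ({ city }) => ({ city, tempC: 21 }),
});

console.log(discoverTools());
console.log(executeTool("getWeather", { city: "Oslo" }));
```

The key design choice is the split between a cheap discovery step and an explicit execution step, so the schemas live in your code rather than in the prompt.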

Not trying to sell anything here. This was just an idea, and I'm curious what people think about it :slight_smile:

This is an interesting approach to tool discovery and optimization! Your idea of keeping the existing API for tools while adding automatic discovery and efficient execution sounds promising :smiley: