((((sandro.net))))

Monday, March 9, 2026

Show HN: Mcp2cli – One CLI for every API, 96-99% fewer tokens than native MCP https://ift.tt/jAHxF0I

Every MCP server injects its full tool schemas into context on every turn: 30 tools cost ~3,600 tokens/turn whether the model uses them or not. Over 25 turns with 120 tools, that's 362,000 tokens just for schemas.

mcp2cli turns any MCP server or OpenAPI spec into a CLI at runtime. The LLM discovers tools on demand:

    mcp2cli --mcp https://ift.tt/XJB6W8f --list                          # ~16 tokens/tool
    mcp2cli --mcp https://ift.tt/XJB6W8f create-task --help              # ~120 tokens, once
    mcp2cli --mcp https://ift.tt/XJB6W8f create-task --title "Fix bug"

No codegen, no rebuild when the server changes. It works with any LLM, since it's just a CLI the model shells out to. It also handles OpenAPI specs (JSON/YAML, local or remote) with the same interface.

The token savings are real, measured with cl100k_base: 96% for 30 tools over 15 turns, 99% for 120 tools over 25 turns.

It also ships as an installable skill for AI coding agents (Claude Code, Cursor, Codex): `npx skills add knowsuchagency/mcp2cli --skill mcp2cli`

Inspired by Kagan Yilmaz's CLI vs MCP analysis and CLIHub.

https://ift.tt/KvlZq7p

March 9, 2026 at 02:18AM
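The schema-overhead arithmetic above can be sketched in a few lines. The per-tool token figures (~120 tokens per schema or --help page, ~16 tokens per --list line) come from the post; the usage pattern, in particular that the model inspects only a handful of tools and pays for the --list output once rather than per turn, is my assumption:

```python
# Rough token accounting: native MCP schema injection vs. on-demand CLI discovery.
# Per-tool figures are from the post; the usage pattern is an assumption.
SCHEMA_TOKENS_PER_TOOL = 120  # ~3,600 tokens/turn for 30 tools
LIST_TOKENS_PER_TOOL = 16     # one line per tool in `--list` output
HELP_TOKENS_PER_TOOL = 120    # `--help` for one tool, paid once

def native_mcp_tokens(n_tools: int, n_turns: int) -> int:
    """Full schemas re-injected into context on every turn."""
    return n_tools * SCHEMA_TOKENS_PER_TOOL * n_turns

def on_demand_tokens(n_tools: int, tools_actually_used: int) -> int:
    """One --list over all tools, then --help only for tools the model invokes."""
    return n_tools * LIST_TOKENS_PER_TOOL + tools_actually_used * HELP_TOKENS_PER_TOOL

native = native_mcp_tokens(30, 15)                   # 54,000 tokens
lazy = on_demand_tokens(30, tools_actually_used=3)   # 840 tokens
print(f"native: {native}, on-demand: {lazy}, saved: {1 - lazy / native:.1%}")
```

Under these assumptions the savings land in the mid-to-high 90s, consistent with the 96-99% the post reports; the exact figure depends on how many tools the model actually inspects and on whether the --list output is retained across turns.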

