Tuesday, December 9, 2025
Show HN: Zonformat – 35–60% fewer LLM tokens using zero-overhead notation https://ift.tt/lxkfDVj
hey HN! Roni from India, ex-Google Summer of Code (GSoC) at the Internet Archive, full-stack dev. I got frustrated watching JSON bloat my OpenAI/Claude bills by 50%+ on redundant syntax, so I built ZON over a few weekends: zero-overhead notation that compresses payloads ~50% vs JSON (692 tokens vs 1,300 on gpt-5-nano benchmarks) while staying 100% human-readable and lossless.

Playground -> https://ift.tt/ha1NBkV
ROI calculator -> https://ift.tt/zvBWlyr

It's a <2kb TypeScript lib with 100% test coverage. Drop-in for the OpenAI SDK, LangChain JS/TS, Claude, llama.cpp, and streaming, with Zod schemas to validate LLM outputs at runtime at no extra cost.

Benchmarks -> https://ift.tt/J3eE4gm

Try it: npm i zon-format or uv add zon-format, then encode/decode in under 10 seconds (code in the readme; a rough sketch follows below).

Full site with benchmarks: https://zonformat.org
GitHub -> https://github.com/ZON-Format

Harsh feedback on perf, edge cases, or the API is very welcome. If it saves you a coffee's worth of tokens, a star would be awesome. Let's make LLM prompts efficient again!
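For a concrete feel, here's roughly what the round trip looks like in TypeScript. This is a minimal sketch, not the library's documented API: it assumes zon-format exposes encode/decode exports (the names the post itself uses) and that decoding returns the original structure; exact signatures may differ, so treat the project README as the source of truth.

// Hypothetical quick-start sketch. Assumes "zon-format" exports
// encode/decode; exact names and signatures may differ (see README).
import { encode, decode } from "zon-format";

const payload = {
  users: [
    { id: 1, name: "Ada", role: "admin" },
    { id: 2, name: "Linus", role: "member" },
  ],
};

// Encode to ZON before embedding the data in a prompt:
// same information, fewer tokens than JSON.stringify(payload).
const zon = encode(payload);
console.log(zon);

// Lossless by design: decoding should return the original object.
const roundTripped = decode(zon);
console.log(JSON.stringify(roundTripped) === JSON.stringify(payload)); // true

The same idea applies on the way back: ask the model to answer in ZON, then decode its output before handing it to the rest of your app.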