((((sandro.net))))

Friday, March 27, 2026

Show HN: Superfast – Cognitive Memory Graphs for Enterprise AI Agents https://ift.tt/xJqyoUD

Show HN: Superfast – Cognitive Memory Graphs for Enterprise AI Agents

Superfast is an evolution of the Superpowers agent framework, now integrated with FastMemory, a concurrent Rust engine that maps unstructured text into a CBFDAE (Component, Block, Function, Data, Access, Event) functional ontology.

While RAG has become the standard way of "adding knowledge" to LLMs, it often fails at scale due to semantic noise and the destruction of logical boundaries during chunking. Superfast instead treats memory as an architectural layer. It uses Louvain community detection to derive functional clusters mathematically, giving agents a deterministic "Logic Layer" that persists across sessions.

We've kept the strict TDD and Socratic discipline of the original framework but scaled it for environments like Microsoft Fabric and AWS Glue, where token waste is a primary bottleneck.

Check it out here: https://ift.tt/r618lIY March 27, 2026 at 01:38AM
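The clustering step above rests on Louvain community detection, which greedily maximizes Newman modularity. As a rough illustration of what that optimizes (a naive stdlib Python sketch of modularity plus one Louvain phase-1 local-move pass; this is not FastMemory's Rust implementation, which presumably does far more):

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q of a partition of an undirected graph.

    edges: list of (u, v) pairs; community: dict node -> community id.
    """
    m = len(edges)
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # fraction of edges that fall inside a community...
    q = sum(1 for u, v in edges if community[u] == community[v]) / m
    # ...minus the expected fraction under degree-preserving random rewiring
    comm_degree = defaultdict(int)
    for node, deg in degree.items():
        comm_degree[community[node]] += deg
    return q - sum((d / (2.0 * m)) ** 2 for d in comm_degree.values())

def local_move_pass(edges, community):
    """One greedy Louvain phase-1 sweep: move each node to the neighboring
    community that most improves Q (naively recomputed here; real Louvain
    uses an O(1) gain formula and repeats until no move helps)."""
    neighbors = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    for node in list(community):
        best_q, best_c = modularity(edges, community), community[node]
        for c in {community[n] for n in neighbors[node]}:
            trial = dict(community)
            trial[node] = c
            q = modularity(edges, trial)
            if q > best_q:
                best_q, best_c = q, c
        community[node] = best_c
    return community
```

Repeating the pass until it stabilizes, then contracting each community into a super-node and recursing, gives the full Louvain loop.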

Show HN: New Causal Impact Library https://ift.tt/klz2y4u

Show HN: New Causal Impact Library https://ift.tt/hxLru4Z March 27, 2026 at 12:12AM

Show HN: Scroll bar scuba dude swimming as you scroll https://ift.tt/FtC9DGI

Show HN: Scroll bar scuba dude swimming as you scroll

Hi! Instead of a boring scrollbar I made a scuba dude that swims down the page when you scroll. The idea came from nostalgia; remember the SkiFree game on Windows? I wanted a skier skiing down my site. Building the skier mechanics in pure JavaScript turned out to be difficult, so I started with a runner, added a Santa hat, and recently built scuba buddy. You can try it directly as soon as you start to scroll down the page, and see the other animations with the "Change Animation" button.

Technical details: it's entirely JavaScript. It takes the scroll depth (window.scrollY) and feeds it into Math.sin() functions. This lets CSS (the transform: rotate() property) rotate the stick figure's various HTML elements, animating the character from the user's scrolling. It is pretty cumbersome to sync correctly when building the animations: it's difficult to keep the upper and lower arms/legs connected and make the animation transitions appear smooth.

Posted the runner about a year ago here on HN: https://ift.tt/nPrKcqE Should I retry the skier, or something else? Thanks for checking it out! https://ift.tt/7Y3a5Lm March 27, 2026 at 12:12AM
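The math described above (scroll depth through sine functions into per-limb rotation angles) is easy to sketch. Here is the idea in Python with made-up limb names and constants; the actual project does this in plain JavaScript and applies each angle via CSS transform: rotate():

```python
import math

def limb_angles(scroll_y, limbs):
    """Map a scroll offset to a rotation angle (degrees) per limb.

    Each limb is (amplitude_deg, frequency, phase): angle = A * sin(f*y + p).
    Opposite limbs use phases pi apart so arms and legs swing alternately.
    """
    return {name: a * math.sin(f * scroll_y + p)
            for name, (a, f, p) in limbs.items()}

limbs = {
    "left_arm":  (30, 0.02, 0.0),
    "right_arm": (30, 0.02, math.pi),  # half a cycle out of phase
    "left_leg":  (20, 0.02, math.pi),
    "right_leg": (20, 0.02, 0.0),
}
angles = limb_angles(240, limbs)
# Each value would feed a CSS rule like: transform: rotate(<angle>deg)
```

Because sin(x + pi) = -sin(x), paired limbs automatically mirror each other, which is one way to keep the figure looking coordinated as the scroll value changes.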

Show HN: Sup AI, a confidence-weighted ensemble (52.15% on Humanity's Last Exam) https://ift.tt/zZ9haPO

Show HN: Sup AI, a confidence-weighted ensemble (52.15% on Humanity's Last Exam)

Hi HN. I'm Ken, a 20-year-old Stanford CS student. I built Sup AI. I started working on this because no single AI model is right all the time, but their errors don't strongly correlate. In other words, models often make unique mistakes relative to other models. So I run multiple models in parallel and synthesize the outputs by weighting segments based on confidence. Low entropy in the output token probability distributions correlates with accuracy. High entropy is often where hallucinations begin.

My dad Scott (AI Research Scientist at TRI) is my research partner on this. He sends me papers at all hours, we argue about whether they actually apply and what modifications make sense, and then I build and test things. The entropy-weighting approach came out of one of those conversations.

In our eval on Humanity's Last Exam, Sup scored 52.15%. The best individual model in the same evaluation run got 44.74%. The relative gap is statistically significant (p < 0.001).

Methodology, eval code, data, and raw results:

- https://sup.ai/research/hle-white-paper-jan-9-2026
- https://github.com/supaihq/hle

Limitations:

- We evaluated 1,369 of the 2,500 HLE questions (details in the above links)
- Not all APIs expose token logprobs; we use several methods to estimate confidence when they don't

We tried offering free access and it got abused so badly it nearly killed us. Right now the sustainable option is a $5 starter credit with card verification (no auto-charge). If you don't want to sign up, drop a prompt in the comments and I'll run it myself and post the result.

Try it at https://sup.ai . My dad Scott (@scottmu) is in the thread too. Would love blunt feedback, especially where this really works for you and where it falls short. Here's a short demo video: https://www.youtube.com/watch?v=DRcns0rRhsg https://sup.ai March 26, 2026 at 12:45PM
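For readers curious what entropy-based confidence weighting can look like, here is an illustrative stdlib Python sketch (my reading of the description above, not Sup AI's actual code; the scoring functions and constants are invented for illustration):

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of one token's probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def segment_confidence(token_dists):
    """Confidence of a text segment: inverse of its mean per-token entropy.

    Probability mass concentrated on one token (low entropy) -> high
    confidence; flat distributions (high entropy) -> low confidence.
    """
    mean_h = sum(entropy(d) for d in token_dists) / len(token_dists)
    return 1.0 / (1.0 + mean_h)

def weighted_vote(candidates):
    """Pick the answer whose supporting model outputs carry the most
    total confidence. candidates: list of (answer, token_dists)."""
    scores = {}
    for answer, token_dists in candidates:
        scores[answer] = scores.get(answer, 0.0) + segment_confidence(token_dists)
    return max(scores, key=scores.get)
```

With this scheme, one model that answers with sharply peaked token distributions can outvote two models that agree with each other but are individually uncertain, which matches the intuition that hallucinations tend to live in high-entropy regions.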

Thursday, March 26, 2026

Show HN: Hooky – A lightweight HTTP webhook server written in Go https://ift.tt/bNAyxlt

Show HN: Hooky – A lightweight HTTP webhook server written in Go I built a lightweight HTTP webhook server written in Go. You can use it to trigger scripts from HTTP requests. It has built-in secret validation, rate limiting, and configurable execution controls. It can run standalone or in a container. https://ift.tt/0UxKFZb March 26, 2026 at 06:08AM
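The post mentions built-in secret validation without detail. A common convention for webhook servers (an assumption here, not necessarily Hooky's scheme, and sketched in Python even though Hooky itself is Go) is a GitHub-style HMAC signature over the request body:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check a 'sha256=<hexdigest>' webhook signature header.

    The sender HMACs the raw request body with the shared secret; the
    receiver recomputes it and compares with hmac.compare_digest, which
    avoids leaking information through comparison timing.
    """
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Validating against the raw bytes (before any JSON parsing) matters: re-serializing the parsed payload can change whitespace and key order and break the digest.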

Show HN: Relay – The open-source Claude Cowork for OpenClaw https://ift.tt/VRxANbF

Show HN: Relay – The open-source Claude Cowork for OpenClaw https://ift.tt/FYKgbhG March 26, 2026 at 07:23AM

Show HN: Robust LLM Extractor for Websites in TypeScript https://ift.tt/dUWETFb

Show HN: Robust LLM Extractor for Websites in TypeScript

We've been building data pipelines that scrape websites and extract structured data for a while now. If you've done this, you know the drill: you write CSS selectors, the site changes its layout, everything breaks at 2am, and you spend your morning rewriting parsers.

LLMs seemed like the obvious fix: just throw the HTML at GPT and ask for JSON. Except in practice, it's more painful than that:

- Raw HTML is full of nav bars, footers, and tracking junk that eats your token budget. A typical product page is 80% noise.
- LLMs return malformed JSON more often than you'd expect, especially with nested arrays and complex schemas. One bad bracket and your pipeline crashes.
- Relative URLs, markdown-escaped links, tracking parameters: the "small" URL issues compound fast when you're processing thousands of pages.
- You end up writing the same boilerplate: HTML cleanup → markdown conversion → LLM call → JSON parsing → error recovery → schema validation. Over and over.

We got tired of rebuilding this stack for every project, so we extracted it into a library. Lightfeed Extractor is a TypeScript library that handles the full pipeline from raw HTML to validated, structured data:

- Converts HTML to LLM-ready markdown with main content extraction (strips nav, headers, footers), optional image inclusion, and URL cleaning
- Works with any LangChain-compatible LLM (OpenAI, Gemini, Claude, Ollama, etc.)
- Uses Zod schemas for type-safe extraction with real validation
- Recovers partial data from malformed LLM output instead of failing entirely: if 19 out of 20 products parsed correctly, you get those 19
- Built-in browser automation via Playwright (local, serverless, or remote) with anti-bot patches
- Pairs with our browser agent (@lightfeed/browser-agent) for AI-driven page navigation before extraction

We use this ourselves in production at Lightfeed, and it's been solid enough that we decided to open-source it.

GitHub: https://ift.tt/qrLFi7f
npm: npm install @lightfeed/extractor

Apache 2.0 licensed. Happy to answer questions or hear feedback. https://ift.tt/qrLFi7f March 26, 2026 at 12:55AM
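Of the features listed, partial-data recovery is the least obvious. Here is a minimal stdlib Python sketch of the idea for one common failure mode, a JSON array truncated mid-element; the actual TypeScript library presumably handles many more cases and its real recovery logic is not shown in the post:

```python
import json

def recover_partial_array(text: str) -> list:
    """Parse a JSON array; if it is malformed, salvage complete elements.

    Strategy: walk backwards to the last '}' that closes a complete
    object, drop everything after it, close the array, and retry.
    Returns [] if nothing parseable remains.
    """
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    end = text.rfind("}")
    while end != -1:
        candidate = text[: end + 1] + "]"
        try:
            return json.loads(candidate)
        except json.JSONDecodeError:
            end = text.rfind("}", 0, end)
    return []
```

So an LLM response cut off as `[{"a": 1}, {"b": 2}, {"c":` still yields the two complete objects instead of crashing the pipeline, which is the "19 of 20 products" behavior the post describes.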

DJ Sandro

http://sandroxbox.listen2myradio.com