((((sandro.net))))
PC Maintenance
Friday, May 15, 2026
Show HN: I solved my study problems by talking to a goose https://ift.tt/W4Zjn2U
Show HN: I solved my study problems by talking to a goose I used to study by rereading notes, and then I blanked in the exam hall. Some research showed my experience isn't isolated: passive review doesn't force retrieval, so nothing sticks. I knew I had to do something about it, and that's why I built Professor Goose. You pick a topic, explain it out loud to a goose, and he keeps probing until he understands you. He never gives you the answer, just keeps asking follow-ups until you reach a sound understanding, which in turn makes you figure things out or realise you never understood your topic in the first place. Free to try, no account needed; upload your syllabus for exam-board-specific questions. Curious whether this approach resonates with others; it certainly has for me. https://ift.tt/TW8BfSz May 15, 2026 at 03:06AM
Show HN: Parse LLM Markdown streams incrementally on the server or client https://ift.tt/eDraJix
Show HN: Parse LLM Markdown streams incrementally on the server or client Most AI chat applications (such as ChatGPT or Claude) stream their responses to the client as markdown text. As each new chunk of text arrives, the front end typically re-parses the entire markdown document to render the updated message. This works, but it can quickly slow down the UI for long responses. I’ve been obsessing over ways to make this more efficient, so I wrote a markdown parser that can parse streaming markdown (semi) incrementally. Instead of re-processing the whole document each time, it only parses what’s new, processing each line only once. Block-level nodes are buffered until they’re complete (for example, once a paragraph is done and won’t be extended by more text). This also makes parsing the markdown on the server possible. The main demo does exactly that. As a result, animating markdown blocks also becomes simpler and more efficient. Here’s a demo if you’d like to see it in action:
https://markdownparser.vercel.app/experimental Feel free to type 'Render a table with 10 rows' to see each table row animate in. I’ve spent a lot of time thinking about this problem, so if you’re working on similar issues, I'd love to chat. https://ift.tt/oWGlHKV May 14, 2026 at 07:31PM
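A minimal sketch of the buffering idea described above (a hypothetical illustration, not the author's actual parser): feed chunks in as they stream, split them into lines, hold the current block open, and emit a finished block only when a blank line or a new block marker closes it, so each line is parsed exactly once.

```python
# Hypothetical sketch of incremental, line-buffered Markdown block parsing.
# Only two block kinds (headings and paragraphs) are modeled for brevity.

class StreamingBlockParser:
    def __init__(self):
        self.buffer = ""       # streamed text not yet split into a full line
        self.open_block = []   # lines of the block still being built
        self.done = []         # completed blocks, each parsed exactly once

    def feed(self, chunk):
        """Accept a new streamed chunk; process only the newly completed lines."""
        self.buffer += chunk
        while "\n" in self.buffer:
            line, self.buffer = self.buffer.split("\n", 1)
            self._line(line)

    def _line(self, line):
        if line.strip() == "":
            self._close()              # a blank line terminates a paragraph
        elif line.lstrip().startswith("#"):
            self._close()              # a heading interrupts any open block
            self.open_block.append(line)
            self._close()              # headings are single-line blocks
        else:
            self.open_block.append(line)

    def _close(self):
        if self.open_block:
            text = "\n".join(self.open_block)
            kind = "heading" if text.lstrip().startswith("#") else "paragraph"
            self.done.append((kind, text))
            self.open_block = []

    def finish(self):
        """Flush the trailing partial line and any still-open block."""
        if self.buffer:
            self._line(self.buffer)
            self.buffer = ""
        self._close()
        return self.done
```

Because completed blocks are never revisited, the renderer can animate each one in as it closes instead of re-rendering the whole document per chunk.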
Show HN: Full Stack HQ – Claude.md and Agent Stack for Claude Code https://ift.tt/sFXdY2g
Show HN: Full Stack HQ – Claude.md and Agent Stack for Claude Code A permission-first config kit for Claude Code and the Google Antigravity IDE. Installs CLAUDE.md + GEMINI.md + 10 specialist agents + 28 skills with one command. MIT license. https://ift.tt/E3z4cZR May 14, 2026 at 05:55PM
Thursday, May 14, 2026
Show HN: Showing the same info in 248 different UIs https://ift.tt/C3n5QGj
Show HN: Showing the same info in 248 different UIs https://ift.tt/G8Wlr9C May 13, 2026 at 08:16PM
Show HN: Containarium – self-hosted sandbox for AI agents, MCP-native https://ift.tt/3LQhgad
Show HN: Containarium – self-hosted sandbox for AI agents, MCP-native https://ift.tt/fXliL1n May 13, 2026 at 11:46PM
Show HN: Nibble https://ift.tt/2oVlswW
Show HN: Nibble An attempt at a single-pass LLVM frontend in ~3000 lines of C with no external dependencies, no malloc, and no AST. Some graphical examples are included. The IR isn't perfect, and the README touches on one particular shortcoming. https://ift.tt/n1pyAXJ May 13, 2026 at 10:46PM
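To illustrate the single-pass, AST-free idea (a hypothetical sketch in Python, not Nibble's actual C implementation): a recursive-descent parser can emit an IR-style instruction the moment each operator is recognized and return only the name of the SSA value it produced, so no tree is ever built between parsing and code generation.

```python
# Hypothetical sketch of single-pass, AST-free code generation:
# parse "1+2*3"-style expressions and emit LLVM-like IR immediately,
# passing SSA value names up the call stack instead of tree nodes.

import re

def compile_expr(src):
    """Compile an integer expression with +, * and parens in one pass."""
    tokens = re.findall(r"\d+|[+*()]", src)
    out = []            # IR lines, appended as soon as each op is parsed
    i, n = [0], [0]     # token cursor and fresh SSA name counter

    def fresh():
        n[0] += 1
        return f"%t{n[0]}"

    def atom():
        tok = tokens[i[0]]; i[0] += 1
        if tok == "(":
            v = add()
            i[0] += 1                 # consume the closing ")"
            return v
        return tok                    # integer literal used directly

    def mul():
        lhs = atom()
        while i[0] < len(tokens) and tokens[i[0]] == "*":
            i[0] += 1
            rhs = atom()
            v = fresh()
            out.append(f"{v} = mul i32 {lhs}, {rhs}")  # emit, don't build
            lhs = v                   # only the SSA name flows upward
        return lhs

    def add():
        lhs = mul()
        while i[0] < len(tokens) and tokens[i[0]] == "+":
            i[0] += 1
            rhs = mul()
            v = fresh()
            out.append(f"{v} = add i32 {lhs}, {rhs}")
            lhs = v
        return lhs

    out.append(f"ret i32 {add()}")
    return "\n".join(out)
```

For example, `compile_expr("1+2*3")` emits the `mul` first (precedence is encoded in the call structure, not a tree), then the `add`, then the `ret`. The trade-off, as with any single-pass design, is that the emitter cannot look at the whole expression before committing instructions, which is one source of imperfect IR.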
Wednesday, May 13, 2026
Show HN: Hollow – A local multi-agent OS that builds its own tools https://ift.tt/3XcrITV
Show HN: Hollow – A local multi-agent OS that builds its own tools https://ift.tt/aw8imfs May 13, 2026 at 07:19AM