

This article is from two weeks ago, fix your bot




From what I can see, this is something the Thunderbird team had developed for their own internal tooling, and they’re open sourcing it.


After reading through the GitHub docs, the most impressive thing is that they open-sourced their Thunderbolt coding agent for Claude Code. There are quite a few skills available for implementation planning, dependency/build environment setup, coding, linting/cleanup, QA, and managing agent pull requests. Pretty good examples if you are looking to build Claude Code skills.


It sounds like a step beyond open-webui; it’s an enterprise-grade client-server model for access to agents, workflows, and centralized knowledge repositories for RAG.
In addition to a local chatbot for executive/admin use, I can see this being the backend for developers running Cursor or some other AI-enhanced IDE, with local knowledge stores holding proprietary documents and running against local large models.
I am also curious about time-sharing and prioritization of resources; I assume it would queue simultaneous requests. Presumably this would let you pool local compute more effectively, rather than providing each developer with A100 GPUs that may sit unused when they’re not working.
Edit: Somewhat surprisingly, this whole stack does not even include a local inference provider, so it does everything except local models right now, and requests are forwarded to cloud inference providers (Anthropic, OpenAI, etc.). But it does have the backend started for rate limiting and queuing, and true “fully offline/local” is on the roadmap, just not there yet.
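The queuing-and-pooling idea above can be sketched in a few lines. This is a hypothetical illustration, not the project's actual code: `InferenceGateway`, `max_concurrent`, and the fake provider are all made-up names, and a real gateway would add auth, priorities, and per-provider rate limits.

```python
import asyncio

class InferenceGateway:
    """Hypothetical sketch of a client-server gateway that queues
    simultaneous requests so shared compute is pooled, not oversubscribed."""

    def __init__(self, max_concurrent: int = 2):
        # Cap in-flight requests; callers beyond the cap wait in line.
        self._slots = asyncio.Semaphore(max_concurrent)

    async def submit(self, prompt: str, provider):
        async with self._slots:  # excess requests queue here
            return await provider(prompt)

async def fake_cloud_provider(prompt: str) -> str:
    # Stand-in for a forwarded call to a cloud provider (Anthropic, OpenAI, ...).
    await asyncio.sleep(0)
    return f"echo: {prompt}"

async def main():
    gw = InferenceGateway(max_concurrent=2)
    # Five "developers" hit the gateway at once; only two run concurrently.
    return await asyncio.gather(
        *(gw.submit(f"req-{i}", fake_cloud_provider) for i in range(5))
    )
```

`asyncio.gather` returns results in submission order, so callers see a normal response even when their request sat in the queue; swapping the semaphore for a priority queue would get you the prioritization part.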




A motorcycle. You can’t outrun the radio.


So far, has a single legal challenge against scraping ever been successful?
Note the right hand steering wheel.


I hope this is an 8-bit theatre spinoff.


Maybe a positive side effect will be OSes and applications becoming more conscious of their RAM consumption. I am absolutely certain that, thanks to the era of cheap memory, applications (browsers especially) have gotten insanely bloated.
Keep AI models out of your web browser and core operating system, and maybe 4GB can still cut it.
And since it seems like you’re not shying away from shows with some fucked up things kids probably shouldn’t have watched but are kinda formative anyways