The Huntarr situation (score 200+ and climbing today) is getting discussed as a Huntarr problem. It’s not. It’s a structural problem with how we evaluate trust in self-hosted software.
Here’s the actual issue:
Docker Hub tells you almost nothing useful about security.
The ‘Verified Publisher’ badge verifies that the namespace belongs to the organization. That’s it. It says nothing about what’s in the image, how it was built, or whether the code was reviewed by anyone who knows what a 403 response is.
Tags are mutable pointers. huntarr:latest today is not guaranteed to be huntarr:latest tomorrow. There’s no notification when a tag gets repointed. If you’re pulling by tag in production (or in your homelab), you’re trusting a promise that can be silently broken.
The only actually trustworthy reference is a digest: sha256:.... Immutable, verifiable, auditable. Almost nobody uses them.
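In a compose file the difference looks like this (a sketch: the image path is illustrative and the digest is a placeholder; a real one can be read from `docker inspect --format '{{index .RepoDigests 0}}' <image>` after a pull you trust):

```yaml
services:
  huntarr:
    # mutable: whatever the tag points to at pull time
    # image: huntarr/huntarr:latest
    # immutable: exactly these bytes, or the pull fails
    image: huntarr/huntarr@sha256:<digest-from-a-pull-you-trust>
```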
The Huntarr case specifically:
Someone did a basic code review — bandit, pip-audit, standard tools — and found 21 vulnerabilities including unauthenticated endpoints that return your entire arr stack’s API keys in cleartext. The container runs as root. There’s a Zip Slip. The maintainer’s response was to ban the reporter.
None of this would have been caught by Docker Hub’s trust signals, because Docker Hub’s trust signals don’t evaluate code. They evaluate namespace ownership.
What would actually help:
- Pull by digest, not tag. Pin your compose files.
- Check whether the image is built from a public, auditable Dockerfile. If the build process is opaque, that’s a signal.
- Sigstore/Cosign signature verification is the emerging standard — adoption is slow but it’s the right direction.
- Reproducible builds are the gold standard. Trust nothing, verify everything.
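As a quick check for the first bullet, here's a small shell sketch (my own, not a standard tool) that flags `image:` lines in a compose file that aren't digest-pinned:

```shell
#!/bin/sh
# Flag compose services that reference images by mutable tag instead of digest.
# Heuristic: a digest-pinned reference contains "@sha256:".
check_pins() {
    grep -E '^[[:space:]]*image:' "$1" | while read -r _ ref; do
        case "$ref" in
            *@sha256:*) echo "pinned:   $ref" ;;
            *)          echo "UNPINNED: $ref" ;;
        esac
    done
}

# usage: check_pins docker-compose.yaml
```

It's a heuristic, not a parser, but it catches the common case of a `:latest` or bare-tag reference slipping into a compose file.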
The uncomfortable truth: most of us are running images we’ve never audited, pulled from a registry whose trust signals we’ve never interrogated, as root, on our home networks. Huntarr made the news because someone did the work. Most of the time, nobody does.
As a software developer, it is a known best practice when using external software to pin a specific version. Never use “latest” except in testing and development. Once it’s ready, pin it to a version (which can be the latest version, just make sure to actually specify that version ID). Then again, I’ve never set up an *arr stack before.
One thing that sucks about that is you might miss an upgrade that needed to happen before a large version jump later. It’s pretty rare but I believe I’ve seen a container break like that and the upgrade was misery.
use a specific version
Ha! Prove the version is valid with checksums and signatures. “But the label said it was that version”? No sympathy.
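The mechanics, with stand-in filenames (in practice the artifact is the downloaded release and the checksum file comes from upstream, ideally with a signature over it, not generated locally like this):

```shell
# Demonstrate checksum verification with a stand-in artifact.
printf 'pretend this is a release\n' > release.tar.gz   # stand-in artifact
sha256sum release.tar.gz > SHA256SUMS                    # upstream publishes this
sha256sum -c SHA256SUMS                                  # prints "release.tar.gz: OK"

# Any tampering changes the digest and verification fails:
printf 'tampered\n' >> release.tar.gz
sha256sum -c SHA256SUMS || echo "verification failed, as it should"
```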
I’m like 90% sure that this post is AI Slop, and I just love the irony.
First of all, the writing style reads a lot like AI… but that is not the biggest problem. None of the mitigations mentioned have anything to do with the Huntarr problem. Sure, they have their uses, but the problem with Huntarr was that it was a vibe-coded piece of shit. Using immutable references, image signing or checking the Dockerfile would do fuck-all about the problem that the code itself was missing authentication on some important, sensitive API endpoints.
Also, Huntarr does not appear to be a Verified Publisher at all. Did their status get revoked, or was that a hallucination to begin with?
To be fair though, the last paragraph does have a point, but for a homelab I don’t think it’s feasible to fully review the source code of everything you install. It rather comes down to being careful with things that are new and don’t have an established reputation, which is especially a problem in the era of AI coding. The rest of the *arr stack is probably much safer because it’s open source projects that have been around for a long time and have had a lot of eyes on them.
for a homelab I don’t think it’s feasible to fully review the source code of everything you install
Here’s what you can actually do:
- Consider if you actually need the application and stop applications you don’t use
- Don’t allow public access unless it is necessary; consider VPNs/reverse proxies with client authentication (if supported)
- Isolate applications that don’t need to talk to each other
  - See also rootless podman, firewalls, virtual machines, etc.
  - Don’t forget network access: if everything runs on 127.0.0.1 and every service shares it, then they can all talk to each other! (See also network namespaces or VMs)
- Don’t reuse passwords
- Keep software up to date
- Actually evaluate the quality of the project if it needs access to sensitive information
  - Look at open issues, and closed issues that stand out
  - Check for audits, or at least a history of good effort™
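The isolation point is cheap to do in compose: put services that don’t need to talk to each other on separate networks (a sketch; service, image, and network names are all made up):

```yaml
services:
  arr-app:
    image: example/arr-app:1.2.3
    networks: [arr]
  photo-app:
    image: example/photo-app:4.5.6
    networks: [photos]

networks:
  arr:     # services here can reach each other...
  photos:  # ...but nothing on the other network
```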
Sure, you won’t always catch AI slop this way, but you don’t need to read a line of code to at least be reasonably sure your *arr stack won’t get to the family photos.
The account is 2 days old and this is its only post.
the post is so obviously AI and OP has not responded to any comments. who is upvoting this?
Perhaps — but — it — brings — up — some — good — points — to — ponder.
The idea that this kind of workflow could be full of risk has been debated … since the CPAN days. If you pull in black box code without inspecting it, then you deserve the day you begged for.
…and if you chose a model that doesn’t allow for easy validation, that’s still on you.
My password is Huntarr2
I know it’s not the issue here really but
the container runs as root
That’s why we need to push for more self hosted containers to support running rootless. There’s no reason for it other than laziness IMHO.
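Where the image supports it, dropping root is only a few compose lines. These are all standard compose options, but whether a given image tolerates them depends entirely on how it was built (image name and UID are illustrative):

```yaml
services:
  app:
    image: example/app:1.2.3
    user: "1000:1000"   # run as an unprivileged UID:GID instead of root
    read_only: true     # root filesystem becomes immutable
    cap_drop: [ALL]     # drop every Linux capability
    tmpfs:
      - /tmp            # writable scratch space the app may still need
```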
It’s wild to me how many people will jump through a bunch of other random security hoops but not blink an eye about running containers as root
Laziness is a lazy diagnosis, complexity and ignorance are the more common causes.
Pinning your versions just means updating will be a pain, and you’ll probably start running outdated containers that are security risks.
It’s not like you’re doing code audits every update anyway. Just use containers that are established and seem trustworthy. It’s all you can really do.
Sure, but Renovate can be used in such scenarios: an MR is opened, a scan is triggered in the CI/CD pipeline, and that’s how you verify.
Pull by digest just ensures that people end up running an ancient version, vulnerabilities and all long after any issues were patched, so that isn’t a one-size-fits-all solution either.
Most projects are well behaved, so pulling latest makes sense, they likely have fixes that you need. In the case of an actually malicious project, the answer is to not run it at all. Huntarr showed their hand, you cannot trust any of their code.
I generally agree with the sentiment but don’t pull by latest, or at the very least don’t expect every new version to work without issue.
Most projects are very well behaved as you say, but they still need to upgrade major versions now and again that contain breaking changes.
I spent an afternoon putting my compose files into git, setting up a simple CI pipeline, and using Renovate to automatically create PRs when things update. Now all my services are pinned to specific versions and when there’s an update, I get a PR to make the change along with a nice change log telling me what’s actually changed.
It’s a little more effort but things don’t suddenly break any more. Highly recommend this approach.
That does sound like a good approach. Are you able to share that CI pipeline? I am mostly happy to risk the occasional breakage, nothing is really critical. But something more reliable would probably save me some drama every so often when it does break.
Absolutely! Here’s my CI pipeline, it’s actually super basic: https://gist.github.com/neoKushan/bd92031bb9c8db3320e8c19d5dae3194
Happy to answer questions if you like.
I just added my compose files to the repo, added that CI file, and set up Renovate https://github.com/renovatebot/renovate to create my PRs for me.
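For reference, the Renovate side can start as a minimal `renovate.json` in the repo root; `config:recommended` is Renovate’s standard base preset (scheduling and grouping rules can be layered on later):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"]
}
```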
I use digests in my docker compose files, and I update them when new versions are released (after reading the release notes) 🤷
Unfortunately that approach is simply not feasible unless you have very few containers or you make it your full time job.
I dunno, I’ve never found it all that onerous.
I have a couple of dozen (perhaps ~50) containers running across a bunch of servers, I read the release notes via RSS so I don’t go hunting for news of updates or need to remember to check, and I update when I’m ready to. Security updates will probably be applied right away (unless I’ve read the notes and decided it’s not critical for my deployment(s)), for feature updates I’ll usually wait a few days (dodged a few bullets that way over the years) or longer if I’m busy, and for major releases I’ll often wait until the first point release unless there’s something new I really want.
Unless there are breaking changes it takes a few moments to update the docker-compose.yaml and then `dcp` (aliased to `docker compose pull`) and `dcdup` (aliased to `docker compose down && docker compose up -d && docker compose logs -f`).

I probably do spend upwards of maybe 15 or 20 minutes a week under normal circumstances, but it’s really not a full time job for me 🤷.
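Spelled out, those aliases (exactly as described above) would go in `~/.bashrc` or similar:

```shell
# Convenience aliases for updating compose stacks.
alias dcp='docker compose pull'
alias dcdup='docker compose down && docker compose up -d && docker compose logs -f'
```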
I guess it depends on the containers that are being run. I have 175 containers on my systems, and between them I get somewhere around 20 updates a day. It’s simply not possible for me to read through all of those release notes and fully understand the implications of every update before implementing them.
So instead I’ve streamlined my update process to the point that any container with an available update gets a button on an OliveTin page, and clicking that button pulls the update and restarts the container. With that in place I don’t need fully autonomous updates, I can still kick them off manually without much effort, which lets me avoid updating certain “problematic” containers until after I’ve read the release notes while still blindly updating the rest of them. Versions all get logged as well, so if something does go wrong with an update (which does happen from time to time, though it’s fairly rare) I can easily roll back to the previous image and then wait for a fix before updating again.
I have 175
@suicidaleggroll is running a Docker Hub backup. LOL
Yeah this is why I use Debian instead of containers, you can read the release notes on a stable release.
Is manually updating based on trusting the accuracy of the release notes any more secure than just trusting “latest”?
You might, but I bet the majority of people set and forget.
I rely on watchtower to keep things up to date.
With an API that just runs unauthenticated, I’m unsure what any of these suggestions is supposed to improve here.
oh i run huntarr! is it a problem in a local homelab? if yes then i just nuke it
I believe they are talking about this.
If you have it at all exposed to the internet, you should probably terminate it.
As a summary: multiple endpoints in the software don’t check for authentication, so an unauthenticated person can retrieve your complete settings configuration, including your API keys and your password, and also change your current configuration, just by sending a simple POST request.
It’s wild to me that that was even possible.
ah yes, i googled it and found the reddit post. when i come home i’ll remove it. it didn’t have that many functions i needed, but i did like that it was a control dashboard.
I don’t think those are sufficient. We could prove that a given binary can be produced from a given repo commit, but that doesn’t actually ensure that the code itself is safe. Malicious code is malicious code even if it’s reproducible.
This. So you’ve pinned to a specific reproducible version. Great! It’s still horribly riddled with vulnerabilities.
I think many people just learned the first lesson of “trust but verify”. 🤣