

Not cancelled. But they may have been flagged internally, I don’t know.
We weren’t violating their terms of service, only their built-in model guidelines. American models are usually very sensitive; they’d rather err on the side of blocking content than risk allowing questionable content that’s still lawful.
But even with adjusted prompts we couldn’t get reliable results, so we have to use uncensored open-weights models for a lot of things. They’re not SOTA, but better than nothing.


It’s not factual. You’re just an idiot typing a single prompt, probably with no agentic loop or curated database to keep it on track. Then you get mad like a caveman wondering why sticks only give fire half the time, because you don’t fucking understand what you’re working with.
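To spell out what “agentic loop plus curated database” actually means, here’s a minimal sketch. Everything in it is a hypothetical placeholder (`call_model`, `passes_checks`, `CURATED_DB` are illustrative, not any real API): generate, validate the output against curated facts, and feed failures back in instead of one-shotting and raging.

```python
# Sketch of an agentic loop with grounding against a curated store.
# All names are illustrative placeholders, not a real library.

CURATED_DB = {"boiling_point_c": "100"}  # stand-in for a curated database

def call_model(prompt: str, attempt: int) -> str:
    # Placeholder for a real model call; flaky on the first attempt
    # to mimic the "sticks only give fire half the time" problem.
    if attempt == 0:
        return "water boils at 90 C"
    return "water boils at 100 C"

def passes_checks(answer: str, db: dict) -> bool:
    # Ground the output against the curated database before accepting it.
    return db["boiling_point_c"] in answer

def agentic_answer(prompt: str, max_attempts: int = 3) -> str:
    feedback = ""
    for attempt in range(max_attempts):
        answer = call_model(prompt + feedback, attempt)
        if passes_checks(answer, CURATED_DB):
            return answer
        # Feed the failure back into the next attempt instead of giving up.
        feedback = f"\nPrevious answer failed validation: {answer!r}"
    raise RuntimeError("no validated answer after retries")

print(agentic_answer("At what temperature does water boil?"))
```

That retry-with-validation loop is the whole point: a single bare prompt is a coin flip, a loop with checks converges.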