Ok so I think I do all of these things and would just describe them as "other ways to prompt an LLM". I think the nuance you're shooting for here is that with these methods you are "pre-preparing" the prompt rather than composing it at prompt time, and are therefore likely to miss things.
e.g. Feeding a TODO is just the same as copy-pasting that TODO in as a prompt.
Have I understood you correctly?
No, it's not the same as copying and pasting the TODO into a prompt. Embedding the TODO in the code instead of the prompt burns fewer tokens and increases accuracy, because the model observes the TODO in context. Sure, you can write a longer prompt to supply that context yourself, but it still won't be as accurate. The less context you provide via prompting, and the more you provide through automatic, deterministic feedback, the better the results.
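As a concrete illustration of the kind of in-context TODO being described, here is a hypothetical sketch (the function and its behaviour are invented for illustration, not taken from any real codebase):

```python
# A TODO embedded directly in the code the agent reads.
# The surrounding function gives the model the types, naming
# conventions, and error-handling style for free -- context that
# would otherwise have to be restated in the prompt.

def parse_price(raw: str) -> float:
    # TODO: handle currency symbols and thousands separators,
    # e.g. "$1,234.56" -> 1234.56, keeping the error style below.
    value = float(raw)
    if value < 0:
        raise ValueError(f"price cannot be negative: {raw}")
    return value
```

An agent pointed at this file sees the signature, the return type, and the `ValueError` convention right next to the TODO, none of which had to be spelled out in the prompt.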
Okay, so now I think you're describing the behaviour I take for granted with the harness, i.e. Claude Code.
Having good repo readiness (a good AGENTS.md/CLAUDE.md file, plus tests and docs) means the LLM is able to read more of the right files into its context.
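For what it's worth, a minimal sketch of the kind of CLAUDE.md being described (all contents here are invented for illustration, not a prescribed format):

```markdown
# CLAUDE.md (hypothetical example)

## Project overview
- Python web API; source in `src/`, tests in `tests/`.

## Commands
- Run tests: `pytest -q`
- Lint: `ruff check src tests`

## Conventions
- Raise `ValueError` for bad user input; never return `None` on error.
- Every new function gets a test in the matching `tests/test_*.py` file.
```

The point is that the harness reads this file automatically at the start of a session, so these facts never need restating in each prompt.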
It never occurred to me that anyone would prompt in isolation from their repo, but I guess that's exactly what it was like for me last year, when I was just feeding ChatGPT prompts away from the repo.