• 0 Posts
  • 23 Comments
Joined 1 year ago
cake
Cake day: March 6th, 2025

help-circle


  • Where I live the supermarkets are often super crowded and things like that don’t really happen or maybe only for a few seconds before someone gets angry and resolves it.

    However, I have seen the paying with coins at checkout thing. I prefer that over people who try to discuss discounts and prices at the checkout though.









  • The LLMs will just predict probabilities for the single next token based on all previous tokens in the context window (it’s own and the ones entered by the user, system prompt or tool calls). The inference engine / runtime decides which token will be selected, usually one with high probably but that’s configurable.

    The LLM can also generate (predict) special tokens like “end of imaginary dialogue” to end it’s turn (the runtime will give the user a chance to reply) or to call tools (the runtime will call the tool and add the result to the context window).

    The LLM does not really care about if the stuff in the context was put there by a user, the system prompt, a tool or whatnot. It just predicts the next token probabilities. If you configure the runtime accordingly it will happily “play” the role of the user or of a tool (you usually don’t want that).

    Some of the tool calls are e.g. web searches etc. and the search results will be added to the context window. The LLM can decide to do more calls for further research, save data in “memory” that can be accessed by later “sessions” or call other tools (new tools pop up daily).

    Models tend to get larger context windows with every update (right now it’s usually between 250K - 2M tokens but the model performance usually gets worse with more filled context windows (needle in a hay stack).

    To keep the window small agentic tools often “compact” the context window by summarizing it and then starting a new session with the compacted context.

    Sometimes a task is split into multiple sessions (agents) that each have their own context window. E.g. one extra session for a long context subtask like analysis of a long document with a specific task and the result is then sent to an orchestrator agent in charge of the big picture.

    The fact that everything in the context window regardless of the origin is used to predict the next token is also the reason why it’s so difficult to avoid prompt injection. It all “looks” the same for the LLM and there is no “hard coded” way from excluding anything.