Reactive prompting treats AI as environment that overhears thinking
A prompting technique where I externalize thinking without directly addressing the LLM. The agent responds to the thought stream rather than engaging in conversation.
So instead of “talking to” an AI, I’m thinking out loud and something picks up on what’s actionable. The agent becomes environment rather than entity (like how a good IDE responds to what I’m doing without explicit commands).
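As a shape, that's less a chat loop than a watcher over a stream. Here's a minimal sketch in Python, assuming thoughts arrive one per line on stdin; `is_actionable` and `respond` are illustrative stubs, not a real API:

```python
import sys

def is_actionable(thought: str) -> bool:
    """Stub: in practice, a cheap LLM call that asks
    'does this thought need anything from me?'"""
    return thought.endswith("?") or "need" in thought.lower()

def respond(thought: str) -> None:
    """React to the thought, like an IDE surfacing a quick fix."""
    print(f"[agent] noticed: {thought!r}")

# A conversation would be request -> reply. Here the agent just
# overhears a stream and only surfaces when something is actionable.
for line in sys.stdin:
    thought = line.strip()
    if thought and is_actionable(thought):
        respond(thought)
```

The important part is the default: silence. The loop produces nothing unless something in the stream asks for it.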
Why This Works
Standard prompting has its place, but it’s a conversation. With reactive prompting, I’m talking to myself. The agent overhears and responds to what’s actionable. The thoughts are the source of truth, not a back-and-forth.
What Triggers Response
Not everything in the stream needs a response. The triggers I’ve noticed:
- Expressed need or uncertainty
- Ambiguity that blocks progress
- Errors or contradictions worth flagging
- Tasks implied but not stated
Pure reflection can flow past without interruption.
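A sketch of that triage, refining `is_actionable` from the earlier loop. The regex heuristics below are stand-ins for what would realistically be a small LLM classification call; every pattern is illustrative:

```python
import re

# The four triggers; anything else is pure reflection and flows past.
TRIGGERS = {
    "need":         re.compile(r"\b(i need|not sure|wish i|stuck)\b", re.I),
    "ambiguity":    re.compile(r"\b(unclear|which one|can't decide)\b", re.I),
    "error":        re.compile(r"\b(wrong|contradicts|doesn't match)\b", re.I),
    "implied_task": re.compile(r"\b(should probably|someone has to)\b", re.I),
}

def trigger_for(thought: str) -> str | None:
    """Return the first matching trigger, or None for pure reflection."""
    for name, pattern in TRIGGERS.items():
        if pattern.search(thought):
            return name
    return None

assert trigger_for("Not sure how to wire the parser.") == "need"
assert trigger_for("Nice weather for a walk.") is None
```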
The Keywords Hint
When the thought stream needs specific context or tools, I add a `keywords:` line at the end:

```
Discussion about this needed x-devonthink-item://...
keywords: devonthink mcp, mcporter
```
The keywords act as hints for tooling or skills: metadata for the stream, not a command.
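Extracting the hint is mechanical. A sketch, assuming the keywords line is the last line of a thought and comma-separated (the helper name is hypothetical):

```python
def parse_keywords(thought: str) -> tuple[str, list[str]]:
    """Split a trailing 'keywords:' hint off a thought.
    The keywords are routing hints for tools or skills,
    not a command to execute."""
    lines = thought.rstrip().splitlines()
    if lines and lines[-1].lower().startswith("keywords:"):
        hints = [k.strip() for k in lines[-1].split(":", 1)[1].split(",")]
        return "\n".join(lines[:-1]), [h for h in hints if h]
    return thought, []

body, hints = parse_keywords(
    "Discussion about this needed x-devonthink-item://...\n"
    "keywords: devonthink mcp, mcporter"
)
# hints == ["devonthink mcp", "mcporter"]; an agent could use these
# to decide which tools (say, an MCP server) to load before responding.
```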
Distill is a product built on this model. AI agents watch threads, spot patterns, and act without being prompted.