Reactive Prompting
A prompting technique where the user externalizes thinking without directly addressing the LLM. The agent responds to the thought stream rather than engaging in conversation.
The difference is subtle but changes the mental model: instead of “talking to” an AI, I’m thinking out loud and something picks up on what’s actionable. Similar to how Strflow works for capture, but with a responsive layer.
Distill is a product built on this exact model. It calls itself “the insight-to-action loop for people who think for a living.” AI agents watch threads, spot patterns, and act without being prompted. The user thinks out loud into threads, and agents do groundwork autonomously.
Why This Works
Standard prompting has friction. I’m performing for an audience, framing requests, adding context. When it’s just externalized thinking, that friction drops. The thoughts become the source of truth, not a negotiation.
The agent becomes environment rather than entity. Like how a good IDE responds to what I’m doing without explicit commands.
What Triggers Response
Not everything in the stream needs a response. The triggers I’ve noticed (a rough sketch in code follows the list):
- Expressed need or uncertainty
- Ambiguity that blocks progress
- Errors or contradictions worth flagging
- Tasks implied but not stated
Pure reflection can flow past without interruption.
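A minimal sketch of how these triggers could be detected, assuming simple keyword heuristics. The patterns and trigger names below are my own illustrations, not from any product; a real reactive agent would likely let the model itself judge actionability:

```python
import re

# Hypothetical regex heuristics, one per trigger above. Illustrative only;
# in practice the model would classify entries, not a keyword list.
TRIGGERS = {
    "expressed_need": re.compile(r"\b(i need|not sure|how do i)\b", re.I),
    "blocking_ambiguity": re.compile(r"\b(unclear|can't decide|stuck on)\b", re.I),
    "error_or_contradiction": re.compile(r"\b(contradicts|that's wrong|fails)\b", re.I),
    "implied_task": re.compile(r"\b(should probably|need to|worth trying)\b", re.I),
}

def classify(entry: str) -> list[str]:
    """Return matched triggers; an empty list means let the entry flow past."""
    return [name for name, pat in TRIGGERS.items() if pat.search(entry)]

print(classify("Not sure how the keywords line should be parsed."))  # ['expressed_need']
print(classify("The walk this morning cleared my head."))            # []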
The Keywords Technique
When the thought stream needs specific context or tools, I add a keywords: line at the end. This signals what the agent should activate without making it a direct request.
Example:
Discussion about this needed x-devonthink-item://...
keywords: devonthink mcp, mcporter
The keywords act as hints for tooling, context retrieval, or skills. The agent picks them up and loads what’s relevant. It’s metadata for the stream, not a command.
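Parsing that line is trivial. A sketch, assuming (as in the example above) that the keywords always sit on the final line of an entry:

```python
def parse_entry(entry: str) -> tuple[str, list[str]]:
    """Split a thought-stream entry into body text and trailing keyword hints."""
    lines = entry.rstrip().splitlines()
    if lines and lines[-1].lower().startswith("keywords:"):
        hints = [k.strip() for k in lines[-1].split(":", 1)[1].split(",") if k.strip()]
        return "\n".join(lines[:-1]).rstrip(), hints
    return entry, []  # no keywords line: nothing to activate

body, hints = parse_entry(
    "Discussion about this needed x-devonthink-item://...\n"
    "keywords: devonthink mcp, mcporter"
)
print(hints)  # ['devonthink mcp', 'mcporter']
```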
Voice Considerations
Early attempts at reactive responses came out robotic. Stripping direct address removed the connective tissue of natural language.
The fix: keep thinking-out-loud markers like “I’ve found that…” or “Sure, it works, but…” or parenthetical asides. Those aren’t direct address. They’re voice. The response should read like someone’s notes, not a telegram.
Tension With Pushback
There’s a question of whether an agent in reactive mode can still push back on bad ideas. Ideally, I’d want both. Reactive mode handles execution well, but catching errors in thinking might need a different trigger (maybe only when something looks off, not on every response).
Claude Code brainstorming mode
This prompting technique could be tested as a new Claude Code /brainstorm:reactive-tone command, which sets the tone for other brainstorm commands like context priming. Essentially this would make the agent a thinking partner that captures free-form thinking. I can then switch back to normal prompting techniques in a different agent.
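A sketch of what that command file might contain, assuming Claude Code’s markdown slash commands under .claude/commands/ (the path, e.g. .claude/commands/brainstorm/reactive-tone.md, the namespacing, and the wording are my guesses, not a verified setup):

```markdown
---
description: Reactive mode - respond to the thought stream, not to the user
---
Treat my messages as a thought stream, not as requests addressed to you.
Respond only when something is actionable: an expressed need, a blocking
ambiguity, an error worth flagging, or an implied task. Let pure reflection
flow past without comment. Keep a notes-like voice. A trailing keywords:
line is metadata: load the matching tools, context, or skills.
```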
Claude Code also has output styles, which may be a better way to set a tone like this. The catch is that the tone needs switching; maybe there is a way to set the output style programmatically.
Output styles can implement this. They’re custom markdown files in .claude/output-styles/ that replace the default system prompt. Switch mid-session with /output-style [name], and back to collaboration mode with /output-style default. This avoids injecting the tone through the prompt itself.
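A sketch of such a style file; the name/description frontmatter follows the documented output-style format, while the body wording is mine:

```markdown
---
name: Reactive
description: Respond to a thought stream instead of direct requests
---
Treat input as externalized thinking, not as messages addressed to you. Act
only on expressed needs, blocking ambiguity, errors worth flagging, or
implied tasks; let pure reflection pass without comment. Preserve the
thinking-out-loud voice. A trailing keywords: line is metadata: load the
matching tools, context, or skills it names.
```

Saved as .claude/output-styles/reactive.md, something like /output-style reactive should then activate it (the exact name matching is my assumption).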
Dictation-based thinking
When there’s an idea or problem to figure out, recording the thinking out loud pairs well with this prompting technique and the brainstorming mode. Dictation gets converted to text, cleaned up, and pasted into the agent after it has been conditioned with the /brainstorm:reactive-tone command.
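A minimal sketch of that pipeline, assuming the openai-whisper CLI for transcription (any dictation tool works; the filler-word cleanup is illustrative):

```python
import re
import subprocess

FILLERS = re.compile(r"\b(um+|uh+|you know)\b[,.]?\s*", re.IGNORECASE)  # illustrative list

def transcribe(audio: str) -> str:
    """Transcribe with the openai-whisper CLI; by default it writes <name>.txt
    into the current directory, so this assumes the audio file lives there."""
    subprocess.run(["whisper", audio, "--output_format", "txt"], check=True)
    with open(audio.rsplit(".", 1)[0] + ".txt") as f:
        return f.read()

def clean(text: str) -> str:
    """Light cleanup: strip fillers, collapse whitespace. Heavier cleanup
    could be an LLM pass before pasting into the agent."""
    return re.sub(r"\s+", " ", FILLERS.sub("", text)).strip()

if __name__ == "__main__":
    print(clean(transcribe("thinking.m4a")))  # paste after /brainstorm:reactive-tone
```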