2026.04.05.

AI agents make GTD's multi-horizon framework necessary for individual contributors

GTD was criticized by creatives as too manager-oriented. Individual contributors had one job, one codebase — GTD’s complexity felt like overhead. But GTD wasn’t designed for managers. It was designed for knowledge workers managing complexity across multiple horizons. In 2005, that was mostly managers. In 2026, that’s everyone working with AI.

The bottleneck shifted. Raw capability stopped being the constraint — the constraint is now cognitive architecture: how much you can hold in your head at once, across different altitudes. When you’re managing a fleet of AI agents, you need to capture what’s on your mind, break visions into delegatable chunks, review what worked, and shift between strategic decisions and tactical verification. 1.1.1 My job is thinking, then AI executes in the system I build. GTD’s Horizons of Focus, Weekly Reviews, and Projects → Next Actions map directly to what successful AI-augmented builders are rediscovering.

The GTD workflow also reframes along human vs. AI capabilities. Capture stays human — these are your raw thoughts. But AI can now handle clarify and organize, turning rambles into projects and next actions. Reflect and engage stay human — you decide priorities and direction. 2.6.5.3.1 A Zettelkasten workflow can be split along GTD’s steps — and now AI can take over some of those steps. 1.1.1.3 AI reframes depth from execution to directing, which means GTD’s orchestration framework is no longer overhead — it’s the cognitive infrastructure you need to not drown when the machines handle most of the execution.

#GTD #AI

2026.03.30.

The coordination tax: most knowledge work exists to transfer context between humans

60 to 70 percent of knowledge work hours are coordination: meetings, translation artifacts, state synchronization, handoff management. PRDs exist because the engineer isn’t the person who talked to the customer. Sprint planning exists because eight engineers need to avoid stepping on each other’s work. Design handoffs exist because the developer needs intent transferred into implementable form. None of these artifacts is the product — they’re bridges between humans who can’t share a brain.

We don’t see this because we’ve categorized the overhead as “the role.” A PM’s job is writing PRDs and running standups. An EM’s job is sprint ceremonies and dependency management. But the value was always the working software, the shipped product, the revenue. Everything between “we understand what to build” and “the thing exists” is process — and process exists because the execution layer is made of humans.

What happens when AI compresses coordination

When agents handle execution directly, the translation layers between humans get deleted — not just the coding tasks. No PRD needed because the person with customer insight works directly with the agent. No sprint planning because there aren’t eight engineers who need to coordinate. No status meeting because the state is the commit history. 2025-12-04_01-09_ai-automation-shifts-human-work-toward-strategy-workflow-design-and-exception-handling AI automation shifts human work toward strategy, workflow design, and exception handling.

This creates a compounding loop: fewer humans → less coordination → work expressed as code → more verifiable → agents handle more → fewer humans still. Each turn accelerates the next, making standard forecasts 2-3x too conservative.

The real residual

What survives when coordination evaporates is the hardest 15-20% — zero-to-one product vision, genuine care in relationships, engineering architecture bets, and the emerging discipline of agentic systems design: building, tuning, and evolving the agent harnesses themselves. 1.1.1 My job is thinking, then AI executes in the system I build. The compound engineering loop is exactly this discipline.

The coordination tax didn’t just waste time. It suppressed the highest-value work that humans do. Removing it concentrates human effort on the work that was always the most important — the work we were always too busy coordinating to do properly.

What makes this personally relevant

The two qualities that separate people who catch this wave: agency (stubborn confidence the gap is closeable) and ramp (ability to learn quickly without permission or roadmap). Both are about posture, not credentials. 2026-01-10_10-10_composability-contracts-for-ai-workflows Composability contracts for AI workflows are one way to make agent-delegated work verifiable — expanding the frontier of what coordination overhead can be safely removed.

#AI #Productivity #Workflow #Drafting

Duruk's focus capacity model gives GTD a vocabulary for focus blocks

I came across Duruk’s focus capacity model while reading his newsletter about engineering productivity. It gave me a way to talk about something GTD handles intuitively but never names: why your day can feel busy yet produce nothing. The model has three parameters: lambda (how often you get interrupted per hour), delta (how long it takes to recover after each interruption), and theta (the shortest block of time where you can still do meaningful work). You take each uninterrupted stretch, divide by theta, round down, and add them up: C = Σ floor(block_i / θ). Anything shorter than theta scores zero.
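The arithmetic is simple enough to sketch in a few lines of Python (the function name and the example block sizes are mine, not Duruk’s):

```python
import math

def focus_capacity(blocks_minutes, theta=45):
    """Sum usable focus units: each uninterrupted block contributes
    floor(block / theta); anything shorter than theta scores zero."""
    return sum(math.floor(b / theta) for b in blocks_minutes)

# Same total minutes, very different capacity.
fragmented = [30, 20, 40, 25, 35]   # 150 min, but every block < theta
protected  = [90, 60]               # 150 min in two real blocks

print(focus_capacity(fragmented))   # 0
print(focus_capacity(protected))    # 3  (floor(90/45) + floor(60/45))
```

The punchline is the fragmented day: five busy stretches, zero capacity, which is exactly the “busy yet produce nothing” feeling the model names.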

The model only covers one slice of productivity, protecting and using focus blocks. It doesn’t touch GTD’s clarify-organize-reflect loop, weekly reviews, or someday/maybe lists. But for the part it does cover, it gives useful vocabulary.

The GTD connection

When I mapped these parameters to what GTD actually does, three connections stood out:

  • Batching changes when you respond to lambda. The interruptions still arrive, but you defer responding until a processing window. I use “no meeting mornings” for this — not fewer interruptions in the world, just fewer that reach me during deep work.
  • Capture systems minimize delta. Without capture, recovery means reconstructing context from memory (in my experience, easily 10-15 minutes of “where was I?”). With a good capture tool, you just read your last note and you’re back in a couple of minutes. 2026-01-31_07-56_git-commits-as-cognitive-snapshots Git commits serve the same function for code work.
  • Context lists help match tasks to available time. GTD contexts are originally about physical context (what tool or location you need), but in practice I also use time estimates when picking from a list. If I have 15 minutes before a meeting, I pull from the quick-task list, not the deep-work list. That’s not changing theta — it’s picking tasks that fit the block I actually have.

Fitting tasks to blocks

The article treats theta as fixed, but the practical question is different: given a block of known size, what task fits it? A 25-minute gap before a meeting gets a code review, not architecture work. You’re not changing theta — you’re choosing what to attempt.
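Task-to-block fitting is just a filter over estimates; a minimal sketch (the task names and time estimates are illustrative, not from my actual lists):

```python
def tasks_that_fit(tasks, block_minutes):
    """Pick tasks whose estimate fits the available block.
    Theta stays fixed; only the choice of task changes."""
    return [name for name, estimate in tasks if estimate <= block_minutes]

tasks = [
    ("code review", 25),
    ("architecture sketch", 90),
    ("inbox triage", 10),
]

print(tasks_that_fit(tasks, 25))  # ['code review', 'inbox triage']
```

A command like /gtd:select-next-task is essentially this filter plus a calendar lookup to compute `block_minutes`.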

AI agents push this further. 1.1.1.2 Reactive prompting treats AI as environment that overhears thinking. An agent can pre-stage context and suggest what to work on based on available time. My /gtd:select-next-task command does a version of this — it checks my calendar and recommends tasks that fit the next open block.

Where the model stops working

With multiple AI agents running in parallel, the model’s assumptions stretch. You’re no longer a single worker fitting tasks into focus blocks. You’re an orchestrator spinning up agents and reviewing their output. The relevant overhead isn’t recovery time (delta) but briefing time: how long it takes to give an agent context and verify what it produced. I’ve been thinking of this as omega, though I’m not sure it belongs in the same equation.

The honest question is whether switching between agents even counts as an interruption. The cognitive load of reviewing agent output feels different from deep generative work, but it’s still a context switch. I don’t have a clean answer here. The way I work now, my job is thinking and AI executes in the system I build (1.1.1), but the model doesn’t capture what that orchestration actually costs.

Concepts

  • Focus capacity equation — C = Σ floor(block_i / θ), where lambda, delta, and theta shape the blocks
  • GTD maps partially — batching defers lambda, capture reduces delta, context lists help match tasks to blocks
  • Task-to-block fitting — choosing appropriately-sized tasks for available time, not changing theta itself
  • Orchestration overhead — briefing and reviewing AI agents introduces costs the model doesn’t account for
  • Model scope — covers focus blocks only, not weekly reviews, next-action identification, or someday/maybe

#GTD #AI #Workflow #Drafting

2026.01.17.

AI reframes depth from execution to directing

The original M-shaped model is about building deep expertise in multiple areas, connected by broad interests. But with AI, this reframes: depth is no longer about execution. It’s about understanding enough to see patterns and direct the AI.

So what do I actually do? AI handles the technical depth, the execution, the details. I handle planning, context, spotting connections between distant things. Far Transfer (seeing a pattern in one domain and applying it to another) stays with me. I haven’t seen AI do this well yet.

The 80% rule also transforms. It used to take hard work to reach 80% fluency in a field. Now with AI, I can operate at 80% without actually being there (AI fills the gaps). But I need minimum understanding, otherwise I won’t notice when it’s wrong.

I don’t write code anymore, but I understand it. Review is harder than writing (you have to understand what someone else intended, not just what you meant). Daily code review from AI = daily practice in understanding. The trade-off is real though: not writing means some muscle memory fades. I’m betting that review keeps enough intuition alive.

GTD and Zettelkasten work as multipliers here. GTD tells me what work needs doing, Zettelkasten tells me how to structure learning, AI handles the execution. I can go deep in more areas than before because 1.1.1.1 the execution cost dropped.

Clarify by prototyping, not planning

The old “measure twice, cut once” advice assumed execution was expensive. For exploratory work, it’s not anymore. AI made building cheap, so the bottleneck moved upstream to clarity. We don’t need to measure… we need to clarify. And clarification doesn’t come from more planning. It comes from 2.6.5.5.2 touching the real thing.

So the workflow inverts. Instead of planning rigorously, 2.6.5.5 clarify through dialogue while building the first prototype. Grab an idea, 2025-02-09_00-36_leveraging-llms-for-efficient-code-prototyping-and-automation start prototyping, share the rough version, gather early feedback. An ugly prototype beats no prototype because it reveals what I don’t know. Even an ugly implementation beats no implementation. 2024-12-19_21-31_cult-of-done Cult of Done logic: prototype → POC → polish (if there’s time).

This compounds in exploratory work. Time spent building, testing, and learning gives us hands-on experience with the problem. When we try something, we get grounded in reality instead of assumptions. To get clarity, dig into a prototype early. The experience itself generates the understanding that planning was supposed to provide. 2.6.5.4 Ideas emerge bottom-up (I discover what I actually need by building, not by speculating about requirements).

I also realized that to communicate ideas, I have to visually present them. Demos beat decks. Working software melts away abstract objections.

The catch: skipping written planning means I replace it with real-time dialogue-based thinking. Direct AI collaboration helps here. My tmux collab workflow lets me dig into issues with Claude to get clarity before (or while) building. The thinking still happens… it just happens through prototyping instead of speculation.

2026.01.10.

Each PKM tool should have one role

The organizing principle is simple: each tool gets one job. When tools mix roles (Craft handling both input AND output, for instance), things get messy.

PKM pipeline showing capture, annotate, process, destination, and output stages

The pipeline

CAPTURE → ANNOTATE → PROCESS → DESTINATION → OUTPUT

Here’s what each stage looks like for me:

  • Capture: Drafts on mobile, Bike on desktop
  • Annotate: PDF Viewer (I convert web articles to PDF first)
  • Process: Tinderbox or Claude Code (synthesis happens in conversation)
  • Destination: Zettelkasten (knowledge), OmniFocus (action), or DEVONthink (reference)
  • Output: Craft for project plans, iA Writer for polished blog posts, Bike for raw thinking posts

What I learned

Processing happens in one place. I use either Tinderbox or Claude Code for synthesis (like this conversation right now).

DEVONthink is infrastructure, not a step. It’s the glue that connects tools (storage, extraction, archive), but it’s not where I do thinking work.

The Zettelkasten triple-frontend is intentional. I use The Archive for quick search, Obsidian for visualization, and iA Writer for writing. Same data, different access modes depending on what I’m doing.

Craft is output only. It’s for project plans and writings that come out of processed ideas, not for refinement.

Three destinations from processing

2.6.6.1.2 Refinement is a missing GTD stage talks about why ideas need development time. This audit clarifies where they go when they’re done:

  1. Knowledge → Zettelkasten (permanent notes)
  2. Action → OmniFocus, then Craft for planning
  3. Reference → DEVONthink (just stays in the archive)

Some ideas never exit. They compost, get deleted, or merge into other stuff. That’s fine (it’s the filtering function working).

Refinement is a missing GTD stage

GTD’s standard flow is Capture → Process → Organize → Do. But this assumes everything captured can be immediately classified as actionable, reference, or trash. Some ideas don’t fit. They’re not procrastinated (they’re conceptually unfinished). They’re not blocked (they’re still unformed). They don’t belong in next actions or someday/maybe because they’re still searching for shape.

The missing stage is Refinement: active development of ideas that aren’t ready to become actions yet.

Capture → Process → Refine → Organize → Do

Why Someday/Maybe isn’t enough

Someday/Maybe is often treated as a parking lot for things you’re not doing. But refinement is active, not passive. It requires systems designed for repeated contact: random surfacing, spaced repetition, linking to related ideas. The goal isn’t storage, it’s development.

Refinement systems

Effective refinement needs a container that doesn’t feel like clutter but also doesn’t let good ideas slip away. Examples:

  • 2025-12-18_00-53_my-incremental-reading-system My Incremental Reading System uses OmniFocus as a prioritized queue with spaced repetition, separating Distill and Synthesize phases
  • 2.6.6.1.1 Using a Zettelkasten to develop the Someday/Maybe list describes gradually developing Someday/Maybe items
  • 2.6.5.3.2 Splitting information extraction into distillation and synthesis formalizes the cognitive mode separation

The common thread: these aren’t storage systems, they’re development systems. They ensure repeated contact with incomplete ideas until they’re ready to exit refinement.

Two possible outcomes

2026-01-10_23-45_each-pkm-tool-should-have-one-role Each PKM tool should have one role clarifies the three destinations from processing. Ideas that complete refinement become either:

  1. A Zettelkasten note - the idea crystallizes into permanent knowledge
  2. A project plan - the idea develops into something actionable (via OmniFocus → Craft)

Some ideas never exit. They compost, get deleted, or merge into other ideas. That’s not failure - that’s the filtering function working.

Concepts

  • Refinement stage → Active development before action
  • Development vs storage → Repeated contact, not parking
  • Idea metabolization → Some thoughts need time

My job is thinking, then AI executes in the system I build

My work has two tracks: managing zettelkastens (writing notes and feature plans for AI agents) and systemizing my workflow so the whole thing compounds over time. 2025-07-19_09-38 Outputs are disposable; plans and prompts compound.

The realization here is that I’m not programming in the traditional sense. I’m using my computer for thinking. 2.8.4.2 Specification is the new true source code. The output of that thinking feeds AI agents, which then execute work inside a system I’m also developing. It’s a loop: think → capture → AI executes → system improves → thinking becomes more powerful.

This split between content creation and meta-work is the key. One track produces the notes, the other makes the system progressively smarter. 2025-12-07_21-27_compound-engineering-plugin-overview Each unit of engineering work should make subsequent units easier.

Concepts

  • Dual-focus knowledge work → Content plus meta-system
  • Compound knowledge systems → Infrastructure that improves with use
  • AI agent feature planning → Notes driving automation design
  • Systemizing workflows → Making processes self-improving
  • Meta-work investment → Building the system, not just using it

M-Shaped Career Strategy for Scanners

The video argues that people with too many interests (“scanners”) shouldn’t force themselves into traditional specialization. Instead, aim for an M-shaped career where you develop deep expertise in multiple areas, connected by broad curiosity. The #Zettelkasten plays a key role here (more on that below).

Why Zettelkasten matters for scanners

The scanner’s mind is basically an idea factory running at full speed, but working memory is tiny. If you don’t offload finished ideas somewhere, there’s no room to build new ones. 2024-12-15_20-43_the-zettelkasten-method-a-structured-approach-to-knowledge-management The Zettelkasten method externalizes thought processes, freeing the mind for creative work:

  • It acts as external memory, so you can switch topics without losing everything
  • When your obsession with medieval architecture fades, the notes stay put (you can come back later)
  • Over time, the connections between notes enable what the video calls “Far Transfer” (finding unexpected links between distant domains years later). 2025-03-02_14-55_the-value-of-an-experiencebased-zettelkasten The value of an experience-based Zettelkasten is preserving knowledge for future pattern recognition

Luhmann wrote 70 books this way. Not a bad track record.

The four pillars

1. The M-shaped profile

Instead of going deep in one thing (I-shaped specialist) or staying shallow across everything (dash-shaped generalist), build multiple pillars of depth connected by general knowledge. The shape looks like an M.

2. Serial mastery

You can’t build multiple pillars at once. Pick one area, commit for a “season” (6–18 months), and aim for fluency (not world-class expertise). When you can solve most problems without looking things up, you “graduate” and move to the next pillar. This isn’t quitting. It’s strategic. 2.6.5.3.2 Splitting information extraction into distillation and synthesis reduces cognitive load by separating different mental modes.

3. The “good enough” job

Choose work that pays the bills without draining your cognitive energy. Einstein worked as a patent clerk. Boring, but it left mental capacity for thinking about the universe. The surplus energy is what you use to build your own pillars after work.

4. Far transfer

Specialists solve similar problems (near transfer). Polymaths recognize patterns across unrelated domains and apply them elsewhere. A musician who understands harmony might write more elegant code. Someone who studied root systems might organize databases better. This is the payoff of the M-shape.

The metaphors

Two images from the video:

  • The Zettelkasten as a time-capsule garden where you plant seeds now and harvest unexpected fruit years later
  • Building bridges instead of skyscrapers (specialist) or tents (dabbler): solid pillars in the riverbed, connected over time

Concepts

  • M-shaped career → Build multiple deep pillars
  • Scanners → People with too many interests
  • Serial mastery → One thing at a time, then move on
  • Seasonal commitment → 6-18 month focus windows
  • Far transfer → Patterns across unrelated domains
  • Strategic quitting → Graduating, not giving up
  • Cognitive surplus → “Good enough” job preserves energy
  • Zettelkasten as external memory → Prevents overwhelm, enables switching
  • Fluency over mastery → Solve problems without manuals

Composability contracts for AI workflows

Composability contracts are machine-readable JSON specifications that define boundaries and rules for AI agent behavior. They make implicit rules explicit, preventing agents from drifting into chaos while still allowing bounded creativity.

Core structure

Every contract has five elements:

  • Locked invariants - what can never change (e.g., only 3 button variants allowed)
  • Allowed variations - bounded creativity within constraints
  • Composition rules - how pieces fit together (e.g., max 1 primary button per screen)
  • Forbidden patterns - explicit no-nos with reasons
  • Validation logic - machine-checkable assertions

Example: Button contract

{
  "componentType": "Button",
  "contract": {
    "locked": {
      "borderRadius": "8px",
      "fontFamily": "Inter",
      "fontSize": "14px"
    },
    "allowedVariants": {
      "intent": {
        "type": "enum",
        "values": ["primary", "secondary", "danger"]
      },
      "size": {
        "type": "enum",
        "values": ["small", "medium", "large"]
      }
    },
    "forbidden": {
      "customColors": "Use intent instead",
      "newVariants": "Only primary, secondary, danger allowed"
    },
    "compositionRules": {
      "primaryButtonsPerScreen": {
        "max": 1,
        "reason": "Multiple primary actions confuse users"
      }
    }
  }
}

This prevents AI from inventing 47 button variants. Valid: <Button intent="primary">. Rejected: <Button color="#FF69B4"> or two primary buttons on one screen.
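The “validation logic” element can stay tiny; here is a minimal sketch of a checker for the button contract above (the function name and props shape are my assumptions, not part of the contract format):

```python
def validate_button(props, contract):
    """Check props against the contract: forbidden keys are rejected,
    locked values may not be overridden, variants must come from the enums."""
    errors = []
    for key, reason in contract["forbidden"].items():
        if key in props:
            errors.append(f"{key}: {reason}")
    for key, value in props.items():
        if key in contract["locked"] and value != contract["locked"][key]:
            errors.append(f"{key} is locked to {contract['locked'][key]}")
        if key in contract["allowedVariants"]:
            allowed = contract["allowedVariants"][key]["values"]
            if value not in allowed:
                errors.append(f"{key} must be one of {allowed}")
    return errors

contract = {
    "locked": {"borderRadius": "8px", "fontFamily": "Inter", "fontSize": "14px"},
    "allowedVariants": {
        "intent": {"type": "enum", "values": ["primary", "secondary", "danger"]},
        "size": {"type": "enum", "values": ["small", "medium", "large"]},
    },
    "forbidden": {"customColors": "Use intent instead"},
}

print(validate_button({"intent": "primary", "size": "small"}, contract))  # []
print(validate_button({"intent": "sparkly", "customColors": "#FF69B4"}, contract))  # two errors
```

The same shape works for composition rules — counting primary buttons per screen is one more loop over the render tree.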

Example: GTD Task Processing contract

{
  "contract": {
    "locked": {
      "contexts": ["@computer", "@phone", "@errands", "@home", "@office", "@waiting"],
      "priorities": ["critical", "high", "normal", "low"]
    },
    "compositionRules": {
      "projectNesting": {
        "maxDepth": 2,
        "reason": "GTD keeps projects flat for clarity"
      },
      "dueDates": {
        "allowedFor": "time-specific commitments only",
        "validation": "Must have external consequence if missed"
      }
    },
    "processingRules": {
      "twoMinuteRule": {
        "condition": "estimatedTime < 2min",
        "action": "do_immediately",
        "noTaskCreated": true
      }
    }
  }
}

Prevents: custom contexts like @urgent-calls, fake due dates on “brainstorm ideas”, deeply nested project hierarchies.
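The processingRules element is machine-checkable too; a sketch of the two-minute rule as a routing function (the function and return values are illustrative, only the rule data comes from the contract above):

```python
def route_task(estimated_minutes, rules):
    """Apply the contract's two-minute rule: short items are done
    immediately and never become tasks; everything else enters the system."""
    rule = rules["twoMinuteRule"]
    if estimated_minutes < 2:      # matches "estimatedTime < 2min"
        return rule["action"]      # "do_immediately" — no task created
    return "create_task"

rules = {
    "twoMinuteRule": {
        "condition": "estimatedTime < 2min",
        "action": "do_immediately",
        "noTaskCreated": True,
    }
}

print(route_task(1, rules))   # do_immediately
print(route_task(30, rules))  # create_task
```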

The key insight

“UI contracts exist whether you define them or not” - if not explicitly defined, they emerge accidentally through framework defaults, the last shipper’s decisions, or whoever shouts loudest in design review. Making them explicit and machine-readable means AI agents can’t invent 47 button variants.

Beyond UI: agentic workflows

The pattern extends to any agentic workflow:

  • GTD Task Processing - locked contexts (@computer, @phone), max project nesting depth, the 2-minute rule, no fake due dates
  • GTD Weekly Review - enforced step order, prevents skipping uncomfortable steps, inbox-to-zero validation
  • Email Triage - 5 folders only, no custom folder creation, defined response templates
  • Meeting Scheduling - protected deep work blocks, max meetings per day, buffer times
  • Research Synthesis - source quality hierarchy, required sections, confidence scoring

Integration with skills, commands, hooks

Contracts become the foundation layer that validates everything:

USER INTERFACE (Commands, Natural Language)
           ↓
    COMMAND LAYER (/capture, /process, /review)
           ↓
     SKILL LAYER (capabilities)
           ↓
     HOOK LAYER (event triggers)
           ↓
   CONTRACT LAYER (rules, boundaries, constraints)

Every layer is validated by contracts before execution. Skills are bound by contracts. Commands compose skills. Hooks trigger skills/commands. All compositions are validated.

Why this matters

  • Composability - contracts ensure all compositions are valid
  • Consistency - one source of truth, no drift between manual and automated actions
  • Evolvability - change contract → entire system adapts
  • Debuggability - clear error messages: “Note has 1 link, minimum required: 3”
  • Trust in automation - agents can’t violate boundaries, enabling more aggressive automation

This feels like a missing piece in agentic systems - like OpenAPI for agent behavior.

2025.12.25.

Clear usage rules encourage AI adoption at work

Brian Greenbaum (product designer at Pendo, a product analytics company) talks about driving AI adoption across their org. What caught my attention: he claims clear rules increase usage, not decrease it.

The argument goes like this. People avoid AI tools when they don’t know what’s allowed. Can I paste this code? What about customer data? The uncertainty creates friction. So he worked with legal and security to create an internal wiki with approved tools and data handling rules.

I find this plausible, though I’d want to see the actual numbers. It matches what I’ve noticed with other “permission” problems: people often want to do the right thing, they just don’t know what it is.

The other piece was visibility. He ran hands-on workshops every two weeks and set up a public Slack channel for sharing experiments. The idea being that secret AI usage creates weird dynamics where people either hoard knowledge or feel embarrassed.

To convince skeptics in leadership, he built an MCP server (a way for AI to connect to external data sources) that could query company data with natural language. Showing beats telling, I suppose.

I should note: this is one company’s story. Pendo is a tech company with tech-savvy employees. The “just create a wiki” approach assumes people actually read documentation (they often don’t). The workshops require someone with bandwidth to run them. And in fear-based cultures, public Slack channels become performative rather than authentic.

2025.12.18.

My Incremental Reading System

I’ve built a homegrown incremental reading system that spans multiple apps. It’s not SuperMemo, and that’s intentional. I want zettelkasten notes, not flashcards.

The core idea

I’m interested in acquiring knowledge, not memorizing every detail of it (2025-01-25_10-52). The zettelkasten lets me lazy-load information by following links and running into notes accidentally. SuperMemo’s spaced repetition is about remembering concepts, so you don’t get that “external conversation partner” feeling.

Reading, for me, is filtering. I’m mining articles for 2-3 good ideas, then moving on. Most content is noise anyway.

How capture works

Readwise Reader has a great parser, so I use it for saving articles. A syncer pulls them into both Craft (where the content lives) and OmniFocus (where the queue lives, with AI-assigned priorities from 1-9).

I also capture manually via OmniFocus Quick Capture for random URLs and Safari Reading List stuff, plus DEVONthink automation for documents.

Everything ends up in the same @Read/Review perspective in OmniFocus.

The queue

OmniFocus handles scheduling, not content. The actual reading material stays where it belongs: articles in Craft, documents in DEVONthink, web pages in Safari, notes in my zettelkasten.

I use an Adaptive Task Repetition plugin (CMD-Shift-I) that works like SM-2. When I process an item, I rate how well it went: Again, Hard, Good, Easy, or Completed. The interval multiplies accordingly (1.4x for Hard, 2.0x for Good, 2.5x for Easy). Items I don’t care about, I just delete.
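The interval arithmetic is plain multiplication in the SM-2 style; a sketch using the multipliers from the text (the plugin’s actual internals may differ):

```python
MULTIPLIERS = {"hard": 1.4, "good": 2.0, "easy": 2.5}

def next_interval(current_days, rating):
    """Scale the review interval by the rating. 'again' resets to one day;
    'completed' removes the item from the queue entirely."""
    if rating == "again":
        return 1
    if rating == "completed":
        return None
    return round(current_days * MULTIPLIERS[rating])

interval = 4
for rating in ["good", "good", "hard"]:
    interval = next_interval(interval, rating)
    print(interval)   # 8, then 16, then 22
```

Items rated Good a few times in a row quickly drift weeks out, which is how the queue stays small without deleting anything.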

The queue has three workflow states: Discuss, Distill, and Synthesize.

Two-phase processing

2.6.5.3.2 formalizes something I realized: distillation and synthesis are different cognitive modes. Mixing them is exhausting.

Distill is understanding mode. I scan an article, bail if it’s not interesting, otherwise read and highlight. Maybe repeat it tomorrow, adjust the priority. I get about 0.75 extracts per article, which sounds low but most articles just don’t have that many good ideas.

Synthesize is creation mode. I review my timestamped notes, connect them to existing ideas, and when one is ready, I add the #Linking tag and rename it to folgezettel format (like 2.3.4) to place it in the outline.

The separation reduces cognitive load. I’m not trying to understand AND create at the same time.

The funnel

World (infinite)
  ↓ Readwise/manual capture
  ↓ AI priority [1-9]
  ↓ Spaced repetition scheduling
  ↓ Distill
  ↓ Synthesize
Zettelkasten

Each step reduces volume and increases value. Most articles die in the funnel. That’s the point.

Output expectations

I produce about 164 timestamped notes per year, and roughly 30 of those become folgezettels (about 18% conversion). The rest stay as timestamped notes, which is fine. They’re composting, not stuck. Some never mature, some resurface years later and become important.

Constraints

This is cognitive work, not leisure. I can only do it at the end of the day when I have energy, or maybe once or twice on weekends. Sometimes I just don’t give a fuck and pick something from Safari Reading List instead.

Output is limited by attention, not system mechanics. About 3-5 sessions per week, and the system performs at the capacity I give it.

How this differs from SuperMemo

SuperMemo fragments articles early into many small pieces, then schedules each fragment separately. The goal is memorization via QA cards.

I fragment late, at the natural reading moment. The goal is zettelkasten notes that connect ideas. Article-level granularity makes more sense for this.

2.19 notes that incremental reading extracts are efficient but disruptive for stories. The fragmented approach doesn’t suit all content types, and I prefer reading articles whole until I’m ready to extract.

  • 2.6.5.3.2 Splitting information extraction into distillation and synthesis
  • 2.19 Incremental reading extracts are efficient but disruptive for stories
  • 2025-01-25_10-52 The Zettelkasten is for people who want to lazy-load knowledge
  • 2025-01-20_22-25 The extraction is a key collection workflow
  • 2025-01-19_13-40 Literature Notes, Where do they go once they become Permanent Notes?

2025.12.17.

Reactive prompting treats AI as environment that overhears thinking

A prompting technique where I externalize thinking without directly addressing the LLM. The agent responds to the thought stream rather than engaging in conversation.

So instead of “talking to” an AI, I’m thinking out loud and something picks up on what’s actionable. The agent becomes environment rather than entity (like how a good IDE responds to what I’m doing without explicit commands).

Why This Works

Standard prompting has its place, but it’s a conversation. With reactive prompting, I’m talking to myself. The agent overhears and responds to what’s actionable. The thoughts are the source of truth, not a back-and-forth.

What Triggers Response

Not everything in the stream needs a response. The triggers I’ve noticed:

  • Expressed need or uncertainty
  • Ambiguity that blocks progress
  • Errors or contradictions worth flagging
  • Tasks implied but not stated

Pure reflection can flow past without interruption.

The Keywords Hint

When the thought stream needs specific context or tools, I add a keywords: line at the end:

Discussion about this needed x-devonthink-item://...

keywords: devonthink mcp, mcporter

The keywords act as hints for tooling or skills. It’s metadata for the stream, not a command.

Distill is a product built on this model. AI agents watch threads, spot patterns, and act without being prompted.

2025.12.04.

AI automation shifts human work toward strategy, workflow design, and exception handling

The question I keep coming back to: which parts of my work are repetitive, verifiable, and describable? And how do I turn those into workflows that AI can run (or at least help with)?

What’s left for me is the judgment calls, the exceptions, the strategy. My value shifts toward defining the workflows, keeping an eye on things, and stepping in when something breaks.

2025.07.20.

Bike outlines as structured planning DSL

Ray Myers’ “Abstraction Leap” concept suggests designing explicit DSLs rather than letting LLM prompts become source code (source highlight). Bike outlines could be perfect for this: XHTML structure makes them machine-readable while the outliner UI stays human-friendly.

The approach

Template + Validator = Guidance + Guarantees

  • Template shows LLMs the expected shape
  • Validator enforces that structure after generation
  • Result: predictable, testable foundation vs brittle free-form prompts
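A minimal sketch of the validator half in Python, assuming a simplified Bike-style XHTML outline and invented section names (real Bike files carry an XML namespace and row attributes):

```python
import xml.etree.ElementTree as ET

# Hypothetical template shape: the sections every generated plan must contain.
REQUIRED_SECTIONS = {"Context", "User stories", "Implementation design"}

def outline_sections(xhtml: str):
    """Collect top-level row titles from a simplified Bike-style outline."""
    root = ET.fromstring(xhtml)
    first_list = root.find(".//ul")  # each row is <li><p>title</p>…</li>
    return {li.find("p").text for li in first_list.findall("li")}

def validate(xhtml: str):
    """Return the template sections missing from a generated outline."""
    return sorted(REQUIRED_SECTIONS - outline_sections(xhtml))

doc = """<html><body><ul>
  <li><p>Context</p></li>
  <li><p>User stories</p></li>
</ul></body></html>"""
print(validate(doc))  # ['Implementation design']
```

The template shows the LLM the expected shape; this check enforces it after generation, so a malformed plan fails fast instead of silently propagating.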

Connection to specification as code

This aligns with 2.8.4.2: the Bike outline becomes the primary artifact that compounds over time (see 2025-07-19_09-38). Following John Rush’s “fix inputs, not outputs,” you improve the template when plans generate poor breakdowns, not just the individual output.

In practice

Unlike free-form markdown specs that require manual interpretation, Bike’s XML structure makes it easier for LLMs to understand and process. The outliner’s visual hierarchy could make complex plans manageable while maintaining the machine-readable structure needed for reliable AI collaboration. This could bridge human planning intuition with computational precision.

2025.07.19.

Splitting information extraction into distillation and synthesis

Source extraction of the idea 2025-07-19_13-08

This approach modifies incremental reading 2025-01-20_22-25 by splitting information extraction into two distinct phases.

Phase 1: Distillation

What: Extract information from sources using DEVONthink

  • Highlight and annotate key passages
  • Gather summaries and quotes
  • Focus on capturing, not interpreting

Phase 2: Synthesis

What: Transform extracts into original #Zettelkasten notes

  • Connect ideas to existing notes
  • Develop personal insights
  • Create permanent notes with AI assistance

OmniFocus Integration

  • Distill tag: For tasks about extracting from sources
  • Synthesize tag: For developing draft #Zettelkasten notes

This separation enables batch processing of similar work and reduces cognitive load by not mixing extraction with creation. It acknowledges that distillation (understanding) and synthesis (creating) are fundamentally different cognitive activities requiring different mental modes.

Specification is the new true source code

Sean Grove’s thesis is that we’ve been valuing the wrong artifact. We treat code as precious and prompts/specifications as ephemeral, when it should be the reverse. His analogy is perfect - we’re essentially “shredding the source code and version controlling the binary.” The only problem with this analogy is that LLMs are non-deterministic, so relying on the LLM as a compiler can result in different code artifacts on each run. Still, version-controlling both specs and code is a good middle ground.

John Rush takes this further with his “fix inputs, not outputs” principle. His AI factory isn’t just about automation—it’s about building a self-improving system where the plans and prompts are the real assets. When his agent wrote memory-inefficient CSV handling, he didn’t just fix that instance, he baked the streaming requirement into the plan template. The factory improves itself by improving its specifications.

The Task-Magic connection shows this thinking already emerging in practice. The PRD template is essentially a specification format, but it could be more - it could be a living document that evolves, forks, and adapts to different projects. The idea about “project specific templates” that can be forked mirrors how Grove describes specifications that compose and have interfaces.

What’s fascinating is how all three converge on the same truth: the specification IS the code. Grove calls it “the new code,” Rush calls it “the real asset,” and in Task-Magic it’s something that should compound and evolve rather than be recreated each time.

This represents a fundamental inversion of the traditional development process. Instead of specification → code → binary, we’re moving toward specification → multiple outputs (code, tests, docs), where the specification remains the primary artifact that we version, debate, and refine.

Sources

Revise my Task-Magic plan structure

Source highlight:

Outputs are disposable; plans and prompts compound. Debugging at the source scales across every future task. It transforms agents from code printers into self-improving colleagues.

My next step would be to review my PRD template and extract it into a separate file. What’s the easiest way to continuously improve it? Perhaps having project-specific templates, since not all projects have the same requirements. However, plan templates can be forked and include a section similar to a local CLAUDE.md file, but stored in the repository.

I’m not sure what would be the general structure though.

  1. General Instructions
    • Project-agnostic guidance that applies to every project
    • Custom instructions for my local setup like commands (orbctl, deployment processes)
    • Practices that work well can graduate to global CLAUDE.md
  2. Context / background info
    • System description relevant to the plan scope
    • High-level overview of current state:
      • Technical architecture
      • Key files and directories
      • Integration points
    • Foundation for early research
    • Link to related zettelkasten notes for deeper context
    • Brainstorming
      • Keep exploratory thinking in TaskPaper and link separately
  3. User stories
    • Clear “As a user I want X so that Y” format
    • This approach surfaces misalignment between human intent and agent understanding
    • Creates shared vocabulary before technical implementation
    • Foundation for implementation design decisions
  4. Implementation design
    • High-level architectural steps (following Kiro’s format)
    • Step-by-step approach that breaks into manageable development tasks - Guides for the task template
    • 2025-07-21_09-12
    • Anatomy of an AI Prompt
      • This can be used as the template skeleton for different sections
      • Ask the agent in the rule to replace placeholder stuff
    • When adding new locales to the view, DO NOT add them to the en.yml, use the default flag in the view for I18n.t.
    • Nearcut
      • We prefer images in features/admin/feature-name folder name
    • Have a base branch name in the frontmatter, so the fleet agent must switch to that first
      • It should ask about it when we create tasks

The plan file could be converted from a Markdown template into a Bike template 2025-07-20_13-14.

2025.06.26.

Claude Code Workflow and Features Index

ENABLE_BACKGROUND_TASKS allows Claude Code to run long tasks in background

Ian Nuttall (@iannuttall) shared a useful Claude Code pro tip for handling long-running tasks.

Claude Code Pro Tip

Add this line to your .zshrc or .bashrc (ask Claude Code to do it for you):

export ENABLE_BACKGROUND_TASKS=1

This allows you to move long-running tasks to the background to keep chatting with Claude Code while tasks execute.

Key Points

  • Environment variable enables background task execution in Claude Code
  • Keeps the chat interface responsive during long operations
  • Can be added to shell configuration files automatically by Claude Code
  • Improves workflow efficiency for developers

2025.05.12.

My computers show me dynamic index cards

Link to Original Document

What is a card?

  • A card is any addressable object that exposes a deep link and a title.
  • Cards are not just files. They include:
    • OmniFocus actions, projects, tags, perspectives (omnifocus://…)
    • Craft pages, blocks (craftdocs://…)
    • Markdown notes, headers (file:///…)
    • DEVONthink records (x-devonthink-item://…)
    • Email messages, calendar events, PDFs, web highlights
  • If you can deep link to it, you can treat it as a card.
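As a sketch, a card reduces to a tiny data structure; the field names and the example URL are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Card:
    """Any addressable object: a title plus a deep link."""
    title: str  # a good title makes the card retrievable via Spotlight
    url: str    # omnifocus://…, craftdocs://…, x-devonthink-item://…, file:///…

def is_card(obj: Card) -> bool:
    """If you can deep link to it (and name it), you can treat it as a card."""
    return bool(obj.title) and "://" in obj.url

task = Card("Review PDF annotations", "omnifocus://task/abc123")
print(is_card(task))  # True
```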

The system: a web of linked cards

  • I don’t care about “the app,” I care about the content inside it.
  • Every app becomes a card engine. Cards live in engines, but they link to each other across silos.
  • Instead of trying to store everything in one monolithic app, I have a network of cards connected by URLs.
  • Examples:
    • An OmniFocus task links to a PDF in DEVONthink
    • That same PDF links back to a Craft note where I summarized it
    • The Craft note has a backlink to the original task
  • Spotlight is the universal search index. A good title makes the card retrievable regardless of app.
  • Links make cards composable. They allow you to:
    • Jump from a project to its references
    • Surface context
    • Build dashboards across tools

Linking cards across engines

  • I use Hookmark on macOS to quickly copy or hook links between cards.
  • On iOS, I prefer apps that expose stable custom URL schemes.
  • Linking isn’t just for documents. OmniFocus perspectives or DEVONthink groups can be cards too.
  • I still use folders in tools like DEVONthink to organize project materials. Links just sit on top of that structure to connect meaningfully related items.
  • It aligns with contextual computing: the object is the anchor, not the app.

2025.05.11.

Treating projects as experiments

Page 65

One tool to make this easier is to reframe decisions as experiments. You’re no longer a perfectionist frozen on stage with everyone watching your every move, you’re a curious scientist in a lab trying to test a hypothesis.

Treating any captured item as a “possible experiment” can help us detach from the need to complete the project. This is pretty much what the #GTD Someday/Maybe list does: it lets an idea sit until you (a) see clear learning value and (b) have bandwidth to run it.

Instead of finding the perfect solution, make experiments, and analyze them by success factors.

  1. Hypothesis – What do I expect to learn or prove?
  2. Metric of success – How will I know the experiment taught me something?
  3. Next action – The first, smallest, concrete step that moves the experiment forward.

If you can’t write all three in <60 s, it probably isn’t worth experimenting yet.

Adding “as experiment” alone will not automatically convert the meaning of a project into an experiment. A project should still be an outcome. An experiment is more like a subproject.

Experiments-based projects should work pretty well with work-related projects, where POCs are like experiments. We could call this “Experiment driven programming”.

2025.02.08.

Exploring Real-Time Voice-to-Text Transcription Options and Preferences

I’m exploring app options for real-time voice-to-text transcription, similar to macOS dictation.

  • I’ve looked at existing solutions like the VoicePen app that allows typing and content transformation.
  • I also investigated Inbox AI but found it confusing, and my attempt to configure a new voice assistant proved unsuccessful.
    • I may return to this app one day.
  • Seems like Bolt.AI can dictate and type inline.
    • This is essentially the same process I was using with VoicePen, so I’ll continue using VoicePen for longer dictations. I might also use Voice Memos to capture the text, and then I can paste it into the note. Alternatively, I can dictate in line using Bolt.AI.
  • On the other hand, I would prefer to use the built-in dictation feature of macOS.
    • Since it integrates seamlessly with text editing, I can see my typed words in real time, and it’s actually quite effective.
    • The good news is that I can go back and fix any issues. They’ve recently added text editing with dictation, so I might not need Bolt.AI after all. Dictation could work perfectly well.

2025.02.07.

Incremental brainstorming makes it possible to collaborate asynchronously with ourselves or others

Incremental brainstorming allows us to document the thinking process through an archive of written communications. This method enables brainstorming with one’s past self, in addition to the present participants.

In other words, the participants of incremental brainstorming include:

  1. participating brains,
  2. past versions of participating brains, and
  3. non-participating authors from the past and the present (as source material, or reference).

There are different tool-specific forms of incremental brainstorming:

2025.02.06.

The iPad mini is best used for consumption of chronological information

I’ve noticed something interesting about my iPad mini—it just feels right when content is organized chronologically. There’s a natural rhythm to it: I get in, touch a piece of content, and then get out. I simply highlight information in streams (2.14.11.2). This approach is all about ease and efficiency. The content flows in the order it was created or updated, which mirrors the way our minds naturally process events. No complicated folders or categories—just a simple, straight path to what’s new.

What I like about this setup is that it cuts down on decision fatigue. Instead of spending time figuring out where to look or how to organize my thoughts, the interface handles that for me. I just dive in, quickly interact with a bit of content, and move on without overthinking it. This streamlined process makes the browsing experience feel almost effortless, which is exactly what you want when you’re just looking to catch up without any extra hassle.

Because our brains naturally remember things in sequences, this kind of ordering feels intuitive. I don’t have to stress about missing something important or having to manually sort things out later. The system does it all for me, reinforcing that laid-back, efficient browsing style.

2025.02.01.

Different Tools for Different Thinking Modes

Follow-up on:

I figured out how to use different tools for different types of thinking. Set up three OmniFocus shortcuts for this:

  1. Zettelkasten (The Archive):
    • Main journaling and thought capture
    • Documentation and reflection
    • Both daily and permanent notes
    • OmniFocus shortcut for project-specific logging
    • See 2.6.15 for the detailed content pipeline workflow
  2. TaskPaper:
    • Planning and brainstorming
    • Project-specific thinking
    • Task breakdown
    • OmniFocus shortcut for project brainstorming
  3. Emacs:
    • Programming experiments in Org Mode
    • Literate programming
    • OmniFocus shortcut for programming docs
    • Still figuring this one out

Color-coded the shortcuts to make it easy to distinguish them.

#Workflow #Journaling #OmniFocus

2025.01.30.

Thought Threads: Append-Only Note-Taking

Thought Threads is an append-only, thread-based note-taking system where new ideas are added at the end of a sequence rather than inserted between existing ones. It preserves the natural flow of thought development while allowing connections through cross-links instead of restructuring.

Key Principles

  • Append-only → Notes are always added at the end.
  • Threaded structure → Ideas evolve like a conversation.
  • Hierarchical depth → Indentation organizes sub-notes.
  • Links over restructuring → Notes reference each other rather than being moved.

Example Structure

1 Productivity
  1.1 Time Management
    1.1.1 Pomodoro
    1.1.2 Deep Work
  1.2 Cognitive Biases
  • New notes are appended (1.1.3, 1.1.4).
  • Cross-references connect related ideas (e.g., “See 1.2 for biases in time management”).
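The append step can be sketched as a numbering function; the helper name and note IDs are illustrative:

```python
def next_child_id(parent: str, existing: list) -> str:
    """Append-only numbering: next sibling under a parent (e.g. 1.1 -> 1.1.3)."""
    # Direct children have exactly one more dot than the parent.
    children = [e for e in existing
                if e.startswith(parent + ".") and e.count(".") == parent.count(".") + 1]
    if not children:
        return parent + ".1"
    last = max(int(c.rsplit(".", 1)[1]) for c in children)
    return f"{parent}.{last + 1}"

notes = ["1", "1.1", "1.1.1", "1.1.2", "1.2"]
print(next_child_id("1.1", notes))  # 1.1.3
```

There is never an insert-between operation, only append; relationships other than "came next" are expressed as cross-links.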

Why This Works

  • Preserves chronological order → You see how ideas evolve.
  • No need for reorganization → Just append and link.
  • Less friction → No need to decide where to insert a note.

Best Practices

  • Summarize long threads with milestone notes (1.3 Summary).
  • Define “Next Steps” in notes to guide further thinking.
  • Use an index (optional) for quick navigation.

2025.01.27.

Zettelkasten as an Information Stream

A Zettelkasten exhibits many characteristics of an information stream:

  • It grows continuously over time
  • Each note preserves a moment of thinking
  • Previous entries remain unchanged
  • The system accumulates value through historical preservation
  • It enables discovery through browsing and connection-making

However, unlike typical streams, a Zettelkasten also incorporates deliberate organization through its linking structure and numbering system. This makes it a hybrid system that combines the benefits of stream-like accumulation with structured knowledge management.

The stream-like nature of Zettelkasten supports the natural evolution of ideas while its organizational features prevent the chaos that might occur in a pure stream system.

See also:

Definition and Purpose of Information Inboxes

An inbox works as a staging area 2.14.11 that creates a natural pressure to act. Unlike streams that can grow indefinitely, inboxes are designed to stay empty. Each new item creates a small amount of pressure – an email needs a response, a document needs to be filed, a note needs to be processed into your permanent system.

The pressure from an inbox is useful: it drives you to make decisions and move items to their final destinations. However, an inbox that grows without processing turns this useful pressure into overwhelming anxiety.

Inboxes are revisable by nature – items can be deleted, forwarded elsewhere, or modified during processing. They’re aimed at quickly assessing what needs your attention rather than preserving historical context.

See also: 2.14.11.1 for comparison with streams.

Definition and Purpose of Information Streams

A stream is like a river – it flows continuously, carrying information forward while preserving everything that came before. Think of a blog, a journal, or a public thought stream 2025-01-17_18-31. Each new entry adds to the historical record without disturbing what came before.

The value of a stream lies in this accumulation: you can trace the evolution of ideas, see how your thinking developed, and extract insights from the patterns that emerge over time. A stream isn’t there to remind you of tasks you need to complete – it’s more of a running log or narrative that simply keeps growing over time.

Many streams are treated as append-only, where entries get added but aren’t edited much, allowing you to see the evolution of an idea (for instance, older blog posts or daily journaling), see 2.16.

See also: 2.14.11.1 for comparison with inboxes.

The difference between streams and inboxes

Information systems typically manifest in two forms: streams and inboxes 2.14.11.2. Each serves a distinct purpose in how we capture, process, and maintain information over time.

Key aspects of these systems:

  1. Streams 2.14.11.1.1:
    • Flow continuously like a river
    • Preserve historical record
    • Accumulate value over time
  2. Inboxes 2.14.11.1.2:
    • Act as staging areas
    • Create pressure to process
    • Designed to stay empty

The two systems can work together through highlighting 2.14.11.2, where valuable items from streams become inbox items for processing.

Related concepts:

2025.01.25.

Message queues are logs

A message queue is an ordered log that stores messages persistently on disk, ensuring recovery and redelivery in case of failures.

Consumers can replay messages from a specific log point.

Distributed message queues like Kafka replicate the log across nodes for high availability and fault tolerance, treating the log as the primary data synchronization abstraction.

Producers append messages to the log, while consumers read sequentially, ensuring efficient and consistent data flow.
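A minimal in-memory sketch of this model in Python (real systems like Kafka persist and replicate the log; this only shows the offset semantics):

```python
class Log:
    """An append-only log: producers append, consumers read from an offset."""
    def __init__(self):
        self.entries = []

    def append(self, message) -> int:
        self.entries.append(message)
        return len(self.entries) - 1  # offset of the new message

    def read_from(self, offset: int):
        """Replay everything from a specific log point onward."""
        return self.entries[offset:]

log = Log()
log.append("a"); log.append("b"); log.append("c")
print(log.read_from(1))  # ['b', 'c']
```

Because reads are just "give me everything after offset N", a consumer that crashed can resume exactly where it left off.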

2025.01.22.

Guiding the Growth of Knowledge Trees

Highlight, 2025-01-21

The growth of the knowledge tree will also be guided by the present level of understanding of individual subjects, in proportion to the growth of the supporting knowledge, and specialist terminology

The SuperMemo knowledge tree looks pretty similar to my Commonplace Book topics tree in DEVONthink used with different tags. The difference between DEVONthink and SuperMemo is that SuperMemo enables child nodes on a parent node.

It is important to remember that the SuperMemo tree is not built from files and folders, but from notes, similar to Tinderbox.

2025.01.21.

Scanning and marking a book for Incremental Reading

Incremental Reading can be used to extract information out from books in chunks.

How to read a book in an hour? – 09:34

To have a general overview of the different ideas in a book, I can scan it first and use the blue highlighter to mark interesting ideas.

How to read a book in an hour? – 12:25

Then, I can use the Notes & Highlights tool in Apple Books to navigate to these parts and extract information from each chunk. I should use the blue highlight for chunks.

Incremental reading extracts are efficient but disruptive for stories

With incremental reading, we waste no time on reading material we do not understand. We can safely skip portions of material and return to them in the future.

Incremental reading emphasizes the extraction of key information from texts rather than understanding the entire text in one sitting. This approach allows readers to skip parts of the material they don’t immediately understand and return to them later.

2.6.5.3.2 extends this extraction-focused approach by formalizing the separation between information distillation and synthesis phases.

Consequently, this method might not be well-suited for story-type texts, which typically rely on a continuous narrative and emotional engagement. The fragmented nature of incremental reading could disrupt the flow and overall experience of such texts.

2025.01.20.

The extraction is a key collection workflow in Incremental Reading and Zettelkasten

Incremental reading is about getting extracts and converting them to cards. In a #Zettelkasten system, this conversion also happens, but we convert extractions into separate notes.

Incremental reading

Source: Incremental reading - Wikipedia

Page 1

Incremental reading is a software-assisted learning method that breaks down information from articles into flashcards for spaced repetition.

We make flash-cards from articles when we do incremental reading.

Page 1

Piotr Woźniak

  • Who is Piotr Woźniak?
    • Piotr Woźniak is a Polish researcher known for developing the SuperMemo software, which is based on the concept of spaced repetition.
    • This method is designed to enhance learning by breaking down information into flashcards and reviewing them over time to improve memory retention.
    • Woźniak’s work in this area has significantly influenced the field of educational technology, particularly in how people approach learning and memory.

Page 1

Instead of a linear reading of articles one at a time, the method works by keeping a large list of electronic articles or books (often dozens or hundreds) and reading parts of several articles in each session.

2025-01-20_20-56

Page 1

During reading, key points of articles are broken up into flashcards, which are then learned and reviewed over an extended period with the help of a spaced repetition algorithm.

Page 2

When reading an electronic article, the user extracts the most important parts (similar to underlining or highlighting a paper article) and gradually distills them into flashcards.

Page 3

With time and reviews, articles are supposed to be gradually converted into extracts and extracts into flashcards. Hence, incremental reading is a method of breaking down information from electronic articles into sets of flashcards.

Page 3

Contrary to extracts, flashcards are reviewed with active recall.

The Zettelkasten method is also a way to break down articles into highlights, then those highlights into notes.

The repeating part is missing, since the Zettelkasten prioritizes accidental discovery instead of repeating something.

In my mind, the Zettelkasten is better, because I like to lazy-load information, instead of remembering.

2025-01-25_10-52

What is Incremental Reading?

  • Source: What is Incremental Reading?
    • 03:50 Interleaving is when we switch between different subjects based on interests or tiredness.
    • 04:46 Teleporting to the next article (spaced repetition, which could be automated via DEVONthink)
    • 04:51 Extracts → Highlights?
      • 2025-01-20_22-25 The extraction is a key collection workflow in Incremental Reading and Zettelkasten
    • 06:13 SuperMemo extracts work in a tree-like structure, so extracting something creates a new note under the existing tree item of the article.
      • This is actually a pretty cool idea, since instead of having backlinks based on the source of an idea, we can trace it back to a tree structure.
      • On the other hand, how does this work when the top-level item is the article? The extracts are connected to the root item.
      • I guess we can drag-and-drop stuff in the tree and just keep the links to the source around?
      • Tinderbox seems like a good app for such system.
    • 07:52 Priority queue → Ordering our reading list based on how interesting the article is to us?
    • 11:25 Flow of knowledge → convert passive articles and books into active flashcards
    • 12:32 Spaced repetition is a way to get a routine in something.
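The priority-queue idea at 07:52 can be sketched with Python's heapq, negating an invented interest score so the most interesting article pops first:

```python
import heapq

# Sketch of a reading priority queue; scores and titles are illustrative.
reading_list = []
heapq.heappush(reading_list, (-0.9, "Incremental reading - Wikipedia"))
heapq.heappush(reading_list, (-0.4, "Some blog post"))
heapq.heappush(reading_list, (-0.7, "SuperMemo docs"))

# heapq is a min-heap, so the negated score surfaces the highest interest.
title = heapq.heappop(reading_list)[1]
print(title)  # Incremental reading - Wikipedia
```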

2025.01.19.

Literature Notes, Where do they go once they become Permanent Notes?

Source: Literature Notes, Where do they go once they become Permanent Notes?

Highlight

Are these literature notes, engagement notes, permanent notes? Yes, all of it, probably, but it doesn’t matter. I tried to frame the process differently: start with things that look interesting, make sense of them, partition them to make them re-usable and to provide an address for each idea. (And delete what doesn’t fit. Some things I highlight in texts turn out to be unsalvageable.)

Instead of having separate reading notes and permanent notes, we should just extract out ideas.

Every idea then needs to be moved into its own atomic note. We can then link the idea to other ideas.

That’s it.

Highlight

You are better off dividing all your stuff into two things:

There are only two types of things we encounter.

  1. Source material, which includes articles, ideas, emails, etc…
  2. Text extractions and cleaned-up notes.

Highlight

Take your source material and extract ideas in an atomic way integrating them it into your Zettelkasten. The last part depends on whatever you want your Zettelkasten to be and it is up to yourself and your expertise your specific field.

So just have a source; then all notes are in fact “permanent notes”. But they should be atomic.

So annotations like this should be processed into notes, but it is fine if we don’t make “evergreen ideas” out of them.

The only requirement is to have a place where these notes are linked into bigger ideas.

Highlight

It gets chopped up into Zettels by copy-and-pasting the marked up, condensed matter into existing and new Zettels, with sourcing added liberally.

Marking the source is important. But when I create the final export of my notes from an annotation file, I’m not sure how I should move it over to my Zettelkasten.

I should keep the original one around, and edit a new one in my ZK. It always links back to the PDF, so I can see my annotations.

So when I re-read the PDF (if that’s a thing), I can have my original ideas available.

Highlight

I can liberally follow the Collector’s Fallacy and use this process to filter out anything uninteresting over time - as from starting to read to having the source “done” can take weeks or months; some never get the slip box treatment because ideas that sounded interesting at the time of reading are irrelevant 2 weeks later.

This gives us a prefilter, since we can jot down ideas, but only the best ideas are developed into ZK notes.

Highlight

Permanent notes, which synthesize ideas from multiple sources and/or record my own thoughts, and have a References section that links back to either the lit note or its underlying source note. This is how I maintain traceability from note to source.

The best way to keep the connection to the original source, is to write more in-place in the annotations extracted from the DEVONthink PDF, then link back to this file in the references section for every note.

Backlinks would do this automatically if I extract atomic notes in place in the annotations note.

Highlight

The annotations I make on the literature note (giving my own ideas, and links to other permanent notes that are related) are what moves it along the spectrum described earlier

I can even link existing notes to these annotations.

Highlight

I do this as well! My reading inbox currently has over 100 sources in it. Is this Collector’s Fallacy? Yes, but they are sitting there waiting to be processed. Currently I’m processing maybe a half dozen or up to ten in various stages of completion. I’ll get around to the remainder eventually, or I’ll tire of them staring at me in the inbox and discard the ones that no longer interest me.

We can actually process reading items simultaneously. This means, each item can be highlighted and can be continued as we process it.

The idea is that we can simply keep up with multiple stuff this way.

Highlight

This method of note taking enables gradual digestion of multiple sources on our own schedule.

This feels pretty similar to spaced repetition. I wonder if DEVONthink can create reminders every year that add a PDF back to my reading list for review.

Update: I checked, and I can add a PDF as a repeating reminder which adds it to my reading list. This makes the reading list in DEVONthink a kind of next-action list where I could add notes as well (annotations).

Highlight

When I used SuperMemo I was able in one case to split a long video up and process half of it over the course of an evening, and then as other priorities mounted I delayed processing the second half for two years and the incremental reading capabilities ensured I had only minimal loss of comprehension of the first half during that time.

So, the DEVONthink Reading List can be used to postpone something in the future, by setting a reminder and adding the asset back to my list.

This way, the Reading List is a project list where each item has a single next action (keep reading), and when I’m done, I move my ideas into my ZK.

Highlight

spaced repetition in some ways and reviewing previously taken notes

I review my ideas and notes in OmniFocus using the Synthesize perspective.

#Drafting

2025.01.17.

I have multiple journaling systems

I capture and document information in various formats. Here’s a list of each journal type I create, along with its purpose and the tools I use.

  • Thoughts / Statuses 11.1
    • Purpose
      • Explore ideas in a semi-public but low-pressure format.
      • Easy to do thinking. Thoughtstorming. Thinking out loud.
    • Tools
      • Mastodon as a backend.
      • Mona for storing and organizing thoughts in threads.
I have to bookmark threads so I can find them easily and append new ideas. Threads are mostly kept append-only, so I can see the whole thought formation over time.
    • Ideas
      • Maybe I should start these threads as “Thinking about XYZ…”.
  • Interstitial journal
    • Purpose
      • Maintain a journal to document OmniFocus projects, engage in sensemaking, and quickly outline project plans. These plans are typically extracted and stored in dedicated TaskPaper or Bike files.
    • Tools
      • Managed in TaskPaper.
  • Private journal entries
    • Purpose
      • Document my daily life, personal reflections, or private thoughts that I want to keep track of and also remind myself about.
    • Tools
      • Day One / Journal.app for storing and reviewing them. Occasionally, I might draw insights from these entries that can turn into more public or structured notes.
  • Note Development / Zettelkasten
    • Purpose
      • Write daily notes about articles I read.
      • Build a permanent, networked knowledge base or “building blocks” of my thinking.
      • A resource to consult for ideas, forming the “backbone” of my knowledge.
    • Tools
    • Workflow
      • 2.6.15 details the complete content pipeline that orchestrates these tools from reading to publishing.
  • Blog Posts / Articles
    • Purpose
      • Public-facing content—share refined ideas with an external audience. May start as a collection of Zettelkasten notes or microblog threads, refined into an organized piece.
    • Tools

#Linking

Reading "What's the difference between my journal and my stream?"

I write my journal in org-roam. It is a bulleted list of thoughts. It is read-only: no one can interact with it directly. (Though of course, people could annotate it with Hypothesis, or something similar.) It is not structured; you could not subscribe to items within it in a feed reader, say. It is public, and is thus filtered: despite the name, I don’t put many personal or intimate things in this public journal.

I could refer to my journal as my Zettelkasten homepage, where all the new notes are posted. I call mine daily notes.

I publish to my stream via micropub and WordPress, and syndicate it to Mastodon. My stream allows for comments and interactions.

My stream is my blog.

What goes in my stream is generally a subset of my journal. But responses to comments in my stream are not necessarily included in my journal. (Though likely pulled in to my garden in the relevant place.)

I guess my journal is narrative, my stream is dialogic.

That’s a pretty cool idea: the journal is the narrative, the stream is the dialogic.

I have other journaling styles though, depending on the source of information. 2025-01-17_18-31

2025.01.12.

Using Twitter for public thinking

Using the outline to keep track of threads

Another thing I could do is add these threads to the outline itself. The outline is reserved for developed ideas, but I could make an exception for notes that are part of a larger thread, too. Then I can automatically link them together without messing around with the follow-up button.

Using a Safari tab-group as a writing inbox

Actually, one idea could work: creating a Safari tab-group for threads. It’s a basic bookmark manager, but it’s interactive. I can click on the Follow-up button on any note in a thread to add a new note. When I publish the new note, I can simply reload the thread and open the new note. The newly updated link will be kept as a tab.

In a way, this threads tab-group could serve as a to-do list for writing tasks. I can keep tabs open, and using the Edit and Follow-up buttons, I can easily open the note in iA Writer.

  • Add live reload for notes

Linking to stacked notes

I can also link to “threads” in my Zettelkasten, but the problem is that the stacking is manual. When I add a follow-up idea, the link changes, so I can’t keep these links around somewhere to easily get back to them.

If I add a new note to a saved thread, I have to refresh the link, click on the newly added note, then resave the link somewhere so the newly added note is also getting loaded. 11.1.3

Creating a follow-up shortcut for easier threading

I even created a new shortcut, so I can select a note in The Archive and add a follow-up note to it using LaunchBar. This is the same feature that is available on my Zettelkasten website, but I can do it locally.
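The core of such a follow-up shortcut can be sketched in a few lines. This is a hypothetical reconstruction, not my actual LaunchBar shortcut: it assumes notes are Markdown files whose filenames start with a timestamp ID, and that a `[[wiki-link]]` back to the parent is what threads them together.

```python
# Hypothetical sketch of a follow-up-note shortcut, assuming notes are
# Markdown files named with a YYYYMMDDHHMM timestamp ID prefix.
from datetime import datetime
from pathlib import Path

def follow_up_note(parent: Path, archive: Path, now=None) -> Path:
    """Create a new note in the archive that links back to the parent note."""
    now = now or datetime.now()
    note_id = now.strftime("%Y%m%d%H%M")       # e.g. 202501121830
    parent_id = parent.stem.split(" ")[0]      # assume the ID prefixes the filename
    new_note = archive / f"{note_id} Follow-up.md"
    new_note.write_text(f"# Follow-up\n\nFollows [[{parent_id}]]\n")
    return new_note
```

The real shortcut presumably also opens the new file in an editor; the sketch only shows the naming and back-linking step.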

Using Mastodon for threads

I had this idea of using Mastodon as a private thread-based Zettelkasten. I’m not sure why I would start yet another note-taking system, but the fact that I could use apps like Croissant or Tusks to manage these threads is closer to how my brain works than a Zettelkasten.

I like to start with an idea, then develop it, and keep appending to it. More on this in 2.6.12.1.

The Zettelkasten is not really a thread-based system. It is more of a network of ideas. But the linear nature of threads is basically why I’m fascinated by append-only information storage. 2.16

In a way, having that kept in Mastodon would mean that I can start writing an idea, but after I publish, I can’t change it anymore. The system would be append-only.