2026.01.17.

AI reframes depth from execution to directing

The original M-shaped model is about building deep expertise in multiple areas, connected by broad interests. But AI reframes this: depth is no longer about execution. It’s about understanding enough to see patterns and direct the AI.

So what do I actually do? AI handles the technical depth, the execution, the details. I handle planning, context, spotting connections between distant things. Far Transfer (seeing a pattern in one domain and applying it to another) stays with me. I haven’t seen AI do this well yet.

The 80% rule also transforms. It used to take hard work to reach 80% fluency in a field. Now with AI, I can operate at 80% without actually being there (AI fills the gaps). But I need minimum understanding, otherwise I won’t notice when it’s wrong.

I don’t write code anymore, but I understand it. Review is harder than writing (you have to understand what someone else intended, not just what you meant). Daily code review from AI = daily practice in understanding. The trade-off is real though: not writing means some muscle memory fades. I’m betting that review keeps enough intuition alive.

GTD and Zettelkasten work as multipliers here. GTD tells me what work needs doing, Zettelkasten tells me how to structure learning, AI handles the execution. I can go deep in more areas than before because 1.1.1.1 the execution cost dropped.

Clarify by prototyping, not planning

The old “measure twice, cut once” advice assumed execution was expensive. For exploratory work, it’s not anymore. AI made building cheap, so the bottleneck moved upstream to clarity. We don’t need to measure… we need to clarify. And clarification doesn’t come from more planning. It comes from 2.6.5.5.2 touching the real thing.

So the workflow inverts. Instead of planning rigorously, 2.6.5.5 clarify through dialogue while building the first prototype. Grab an idea, 2025-02-09_00-36_leveraging-llms-for-efficient-code-prototyping-and-automation start prototyping, share the rough version, gather early feedback. An ugly prototype beats no prototype because it reveals what I don’t know. Even an ugly implementation beats no implementation. 2024-12-19_21-31_cult-of-done Cult of Done logic: prototype → POC → polish (if there’s time).

This compounds in exploratory work. Time spent building, testing, and learning gives us hands-on experience with the problem. When we try something, we get grounded in reality instead of assumptions. To get clarity, dig into a prototype early. The experience itself generates the understanding that planning was supposed to provide. 2.6.5.4 Ideas emerge bottom-up (I discover what I actually need by building, not by speculating about requirements).

I also realized that to communicate ideas, I have to visually present them. Demos beat decks. Working software melts away abstract objections.

The catch: skipping written planning means I replace it with real-time dialogue-based thinking. Direct AI collaboration helps here. My tmux collab workflow lets me dig into issues with Claude to get clarity before (or while) building. The thinking still happens… it just happens through prototyping instead of speculation.

2026.01.10.

Each PKM tool should have one role

The organizing principle is simple: each tool gets one job. When tools mix roles (Craft handling both input AND output, for instance), things get messy.

[Diagram: PKM pipeline showing capture, annotate, process, destination, and output stages]

The pipeline

CAPTURE → ANNOTATE → PROCESS → DESTINATION → OUTPUT

Here’s what each stage looks like for me:

  • Capture: Drafts on mobile, Bike on desktop
  • Annotate: PDF Viewer (I convert web articles to PDF first)
  • Process: Tinderbox or Claude Code (synthesis happens in conversation)
  • Destination: Zettelkasten (knowledge), OmniFocus (action), or DEVONthink (reference)
  • Output: Craft for project plans, iA Writer for polished blog posts, Bike for raw thinking posts
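The one-tool-one-job rule can be sketched as data. A toy Python model (stage order and tool assignments mirror the list above; the helper function is mine) makes role-mixing detectable: any tool that shows up in more than one stage is a candidate for the mess described earlier.

```python
# Toy model of the pipeline above; tool names as they appear in the list.
PIPELINE = {
    "capture": ["Drafts (mobile)", "Bike (desktop)"],
    "annotate": ["PDF Viewer"],
    "process": ["Tinderbox", "Claude Code"],
    "destination": ["Zettelkasten", "OmniFocus", "DEVONthink"],
    "output": ["Craft", "iA Writer", "Bike"],
}

def stages_of(tool: str) -> list[str]:
    """Every stage a tool participates in; more than one means mixed roles."""
    return [stage for stage, tools in PIPELINE.items()
            if any(tool in t for t in tools)]
```

Running `stages_of("Craft")` returns only `["output"]`, which is the point of the audit; Bike deliberately appears in both capture and output, since raw-thinking posts come straight out of the capture outline.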

What I learned

Processing happens in one place. I use either Tinderbox or Claude Code for synthesis (like this conversation right now).

DEVONthink is infrastructure, not a step. It’s the glue that connects tools (storage, extraction, archive), but it’s not where I do thinking work.

The Zettelkasten triple-frontend is intentional. I use The Archive for quick search, Obsidian for visualization, and iA Writer for writing. Same data, different access modes depending on what I’m doing.

Craft is output only. It’s for project plans and writings that come out of processed ideas, not for refinement.

Three destinations from processing

2.6.6.1.2 Refinement is a missing GTD stage talks about why ideas need development time. This audit clarifies where they go when they’re done:

  1. Knowledge → Zettelkasten (permanent notes)
  2. Action → OmniFocus, then Craft for planning
  3. Reference → DEVONthink (just stays in the archive)

Some ideas never exit. They compost, get deleted, or merge into other stuff. That’s fine (it’s the filtering function working).

Refinement is a missing GTD stage

GTD’s standard flow is Capture → Process → Organize → Do. But this assumes everything captured can be immediately classified as actionable, reference, or trash. Some ideas don’t fit. They’re not procrastinated (they’re conceptually unfinished). They’re not blocked (they’re still unformed). They don’t belong in next actions or someday/maybe because they’re still searching for shape.

The missing stage is Refinement: active development of ideas that aren’t ready to become actions yet.

Capture → Process → Refine → Organize → Do

Why Someday/Maybe isn’t enough

Someday/Maybe is often treated as a parking lot for things you’re not doing. But refinement is active, not passive. It requires systems designed for repeated contact: random surfacing, spaced repetition, linking to related ideas. The goal isn’t storage, it’s development.

Refinement systems

Effective refinement needs a container that doesn’t feel like clutter but also doesn’t let good ideas slip away. Examples:

  • 2025-12-18_00-53_my-incremental-reading-system My Incremental Reading System uses OmniFocus as a prioritized queue with spaced repetition, separating Distill and Synthesize phases
  • 2.6.6.1.1 A Zettelkasten használata a Someday/Maybe lista fejlesztésére describes using Zettelkasten to gradually develop Someday/Maybe items
  • 2.6.5.3.2 Splitting information extraction into distillation and synthesis formalizes the cognitive mode separation

The common thread: these aren’t storage systems, they’re development systems. They ensure repeated contact with incomplete ideas until they’re ready to exit refinement.

Two possible outcomes

2026-01-10_23-45_each-pkm-tool-should-have-one-role Each PKM tool should have one role clarifies the three destinations from processing. Ideas that complete refinement become either:

  1. A Zettelkasten note - the idea crystallizes into permanent knowledge
  2. A project plan - the idea develops into something actionable (via OmniFocus → Craft)

Some ideas never exit. They compost, get deleted, or merge into other ideas. That’s not failure - that’s the filtering function working.

Concepts

  • Refinement stage → Active development before action
  • Development vs storage → Repeated contact, not parking
  • Idea metabolization → Some thoughts need time

My job is thinking, then AI executes in the system I build

My work has two tracks: managing zettelkastens (writing notes and feature plans for AI agents) and systemizing my workflow so the whole thing compounds over time. 2025-07-19_09-38 Outputs are disposable; plans and prompts compound.

The realization here is that I’m not programming in the traditional sense. I’m using my computer for thinking. 2.8.4.2 Specification is the new true source code. The output of that thinking feeds AI agents, which then execute work inside a system I’m also developing. It’s a loop: think → capture → AI executes → system improves → thinking becomes more powerful.

This split between content creation and meta-work is the key. One track produces the notes, the other makes the system progressively smarter. 2025-12-07_21-27_compound-engineering-plugin-overview Each unit of engineering work should make subsequent units easier.

Concepts

  • Dual-focus knowledge work → Content plus meta-system
  • Compound knowledge systems → Infrastructure that improves with use
  • AI agent feature planning → Notes driving automation design
  • Systemizing workflows → Making processes self-improving
  • Meta-work investment → Building the system, not just using it

M-Shaped Career Strategy for Scanners

The video argues that people with too many interests (“scanners”) shouldn’t force themselves into traditional specialization. Instead, aim for an M-shaped career where you develop deep expertise in multiple areas, connected by broad curiosity. The #Zettelkasten plays a key role here (more on that below).

Why Zettelkasten matters for scanners

The scanner’s mind is basically an idea factory running at full speed, but working memory is tiny. If you don’t offload finished ideas somewhere, there’s no room to build new ones. 2024-12-15_20-43_the-zettelkasten-method-a-structured-approach-to-knowledge-management The Zettelkasten method externalizes thought processes, freeing the mind for creative work:

  • It acts as external memory, so you can switch topics without losing everything
  • When your obsession with medieval architecture fades, the notes stay put (you can come back later)
  • Over time, the connections between notes enable what the video calls “Far Transfer” (finding unexpected links between distant domains years later). 2025-03-02_14-55_the-value-of-an-experiencebased-zettelkasten The value of an experience-based Zettelkasten is preserving knowledge for future pattern recognition

Luhmann wrote 70 books this way. Not a bad track record.

The four pillars

1. The M-shaped profile

Instead of going deep in one thing (I-shaped specialist) or staying shallow across everything (dash-shaped generalist), build multiple pillars of depth connected by general knowledge. The shape looks like an M.

2. Serial mastery

You can’t build multiple pillars at once. Pick one area, commit for a “season” (6–18 months), and aim for fluency (not world-class expertise). When you can solve most problems without looking things up, you “graduate” and move to the next pillar. This isn’t quitting. It’s strategic. 2.6.5.3.2 Splitting information extraction into distillation and synthesis reduces cognitive load by separating different mental modes.

3. The “good enough” job

Choose work that pays the bills without draining your cognitive energy. Einstein worked as a patent clerk. Boring, but it left mental capacity for thinking about the universe. The surplus energy is what you use to build your own pillars after work.

4. Far transfer

Specialists solve similar problems (near transfer). Polymaths recognize patterns across unrelated domains and apply them elsewhere. A musician who understands harmony might write more elegant code. Someone who studied root systems might organize databases better. This is the payoff of the M-shape.

The metaphors

Two images from the video:

  • The Zettelkasten as a time-capsule garden where you plant seeds now and harvest unexpected fruit years later
  • Building bridges instead of skyscrapers (specialist) or tents (dabbler): solid pillars in the riverbed, connected over time

Concepts

  • M-shaped career → Build multiple deep pillars
  • Scanners → People with too many interests
  • Serial mastery → One thing at a time, then move on
  • Seasonal commitment → 6-18 month focus windows
  • Far transfer → Patterns across unrelated domains
  • Strategic quitting → Graduating, not giving up
  • Cognitive surplus → “Good enough” job preserves energy
  • Zettelkasten as external memory → Prevents overwhelm, enables switching
  • Fluency over mastery → Solve problems without manuals

Composability contracts for AI workflows

Composability contracts are machine-readable JSON specifications that define boundaries and rules for AI agent behavior. They make implicit rules explicit, preventing agents from drifting into chaos while still allowing bounded creativity.

Core structure

Every contract has five elements:

  • Locked invariants - what can never change (e.g., only 3 button variants allowed)
  • Allowed variations - bounded creativity within constraints
  • Composition rules - how pieces fit together (e.g., max 1 primary button per screen)
  • Forbidden patterns - explicit no-nos with reasons
  • Validation logic - machine-checkable assertions

Example: Button contract

{
  "componentType": "Button",
  "contract": {
    "locked": {
      "borderRadius": "8px",
      "fontFamily": "Inter",
      "fontSize": "14px"
    },
    "allowedVariants": {
      "intent": {
        "type": "enum",
        "values": ["primary", "secondary", "danger"]
      },
      "size": {
        "type": "enum",
        "values": ["small", "medium", "large"]
      }
    },
    "forbidden": {
      "customColors": "Use intent instead",
      "newVariants": "Only primary, secondary, danger allowed"
    },
    "compositionRules": {
      "primaryButtonsPerScreen": {
        "max": 1,
        "reason": "Multiple primary actions confuse users"
      }
    }
  }
}

This prevents AI from inventing 47 button variants. Valid: <Button intent="primary">. Rejected: <Button color="#FF69B4"> or two primary buttons on one screen.
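Those rejections are machine-checkable. A minimal sketch, assuming the contract JSON is loaded as a plain Python dict; the function names and error strings are mine, not part of any existing validator:

```python
# Subset of the button contract above, loaded as a dict.
BUTTON_CONTRACT = {
    "allowedVariants": {
        "intent": {"type": "enum", "values": ["primary", "secondary", "danger"]},
        "size": {"type": "enum", "values": ["small", "medium", "large"]},
    },
    "compositionRules": {
        "primaryButtonsPerScreen": {
            "max": 1,
            "reason": "Multiple primary actions confuse users",
        }
    },
}

def validate_props(contract: dict, props: dict) -> list[str]:
    """Check one component's props against allowedVariants."""
    errors = []
    allowed = contract["allowedVariants"]
    for key, value in props.items():
        if key not in allowed:
            errors.append(f"'{key}' is not an allowed variant")
        elif value not in allowed[key]["values"]:
            errors.append(f"'{value}' is not a valid {key}")
    return errors

def validate_screen(contract: dict, buttons: list[dict]) -> list[str]:
    """Check a whole screen against compositionRules."""
    rule = contract["compositionRules"]["primaryButtonsPerScreen"]
    primaries = sum(1 for b in buttons if b.get("intent") == "primary")
    return [rule["reason"]] if primaries > rule["max"] else []
```

`validate_props(BUTTON_CONTRACT, {"color": "#FF69B4"})` fails because `color` isn’t a variant at all; two primary buttons on one screen fail the composition rule with the contract’s own reason string as the error message.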

Example: GTD Task Processing contract

{
  "contract": {
    "locked": {
      "contexts": ["@computer", "@phone", "@errands", "@home", "@office", "@waiting"],
      "priorities": ["critical", "high", "normal", "low"]
    },
    "compositionRules": {
      "projectNesting": {
        "maxDepth": 2,
        "reason": "GTD keeps projects flat for clarity"
      },
      "dueDates": {
        "allowedFor": "time-specific commitments only",
        "validation": "Must have external consequence if missed"
      }
    },
    "processingRules": {
      "twoMinuteRule": {
        "condition": "estimatedTime < 2min",
        "action": "do_immediately",
        "noTaskCreated": true
      }
    }
  }
}

Prevents: custom contexts like @urgent-calls, fake due dates on “brainstorm ideas”, deeply nested project hierarchies.
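The processing rules are enforceable the same way. A sketch of the two-minute rule from the contract above; the `"create_task"` routing string is my own placeholder, only `"do_immediately"` comes from the contract:

```python
# The twoMinuteRule block from the GTD contract above, as a dict.
GTD_CONTRACT = {
    "processingRules": {
        "twoMinuteRule": {
            "condition": "estimatedTime < 2min",
            "action": "do_immediately",
            "noTaskCreated": True,
        }
    }
}

def route_task(contract: dict, estimated_min: float) -> str:
    """Apply the two-minute rule: tiny tasks are done now, never captured."""
    rule = contract["processingRules"]["twoMinuteRule"]
    if estimated_min < 2:
        return rule["action"]  # no task object is ever created
    return "create_task"
```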

The key insight

“UI contracts exist whether you define them or not” - if not explicitly defined, they emerge accidentally through framework defaults, the last shipper’s decisions, or whoever shouts loudest in design review. Making them explicit and machine-readable means AI agents can’t invent 47 button variants.

Beyond UI: agentic workflows

The pattern extends to any agentic workflow:

  • GTD Task Processing - locked contexts (@computer, @phone), max project nesting depth, the 2-minute rule, no fake due dates
  • GTD Weekly Review - enforced step order, prevents skipping uncomfortable steps, inbox-to-zero validation
  • Email Triage - 5 folders only, no custom folder creation, defined response templates
  • Meeting Scheduling - protected deep work blocks, max meetings per day, buffer times
  • Research Synthesis - source quality hierarchy, required sections, confidence scoring

Integration with skills, commands, hooks

Contracts become the foundation layer that validates everything:

USER INTERFACE (Commands, Natural Language)
           ↓
    COMMAND LAYER (/capture, /process, /review)
           ↓
     SKILL LAYER (capabilities)
           ↓
     HOOK LAYER (event triggers)
           ↓
   CONTRACT LAYER (rules, boundaries, constraints)

Every layer is validated by contracts before execution. Skills are bound by contracts. Commands compose skills. Hooks trigger skills/commands. All compositions are validated.

Why this matters

  • Composability - contracts ensure all compositions are valid
  • Consistency - one source of truth, no drift between manual and automated actions
  • Evolvability - change contract → entire system adapts
  • Debuggability - clear error messages: “Note has 1 link, minimum required: 3”
  • Trust in automation - agents can’t violate boundaries, enabling more aggressive automation

This feels like a missing piece in agentic systems - like OpenAPI for agent behavior.

2025.12.25.

Clear usage rules encourage AI adoption at work

Brian Greenbaum (product designer at Pendo, a product analytics company) talks about driving AI adoption across their org. What caught my attention: he claims clear rules increase usage, not decrease it.

The argument goes like this. People avoid AI tools when they don’t know what’s allowed. Can I paste this code? What about customer data? The uncertainty creates friction. So he worked with legal and security to create an internal wiki with approved tools and data handling rules.

I find this plausible, though I’d want to see the actual numbers. It matches what I’ve noticed with other “permission” problems: people often want to do the right thing, they just don’t know what it is.

The other piece was visibility. He ran hands-on workshops every two weeks and set up a public Slack channel for sharing experiments. The idea being that secret AI usage creates weird dynamics where people either hoard knowledge or feel embarrassed.

To convince skeptics in leadership, he built an MCP server (a way for AI to connect to external data sources) that could query company data with natural language. Showing beats telling, I suppose.

I should note: this is one company’s story. Pendo is a tech company with tech-savvy employees. The “just create a wiki” approach assumes people actually read documentation (they often don’t). The workshops require someone with bandwidth to run them. And in fear-based cultures, public Slack channels become performative rather than authentic.

2025.12.18.

My Incremental Reading System

I’ve built a homegrown incremental reading system that spans multiple apps. It’s not SuperMemo, and that’s intentional. I want zettelkasten notes, not flashcards.

The core idea

I’m interested in acquiring knowledge, not memorizing every detail of it (2025-01-25_10-52). The zettelkasten lets me lazy-load information by following links and running into notes accidentally. SuperMemo’s spaced repetition is about remembering concepts, so you don’t get that “external conversation partner” feeling.

Reading, for me, is filtering. I’m mining articles for 2-3 good ideas, then moving on. Most content is noise anyway.

How capture works

Readwise Reader has a great parser, so I use it for saving articles. A syncer pulls them into both Craft (where the content lives) and OmniFocus (where the queue lives, with AI-assigned priorities from 1-9).

I also capture manually via OmniFocus Quick Capture for random URLs and Safari Reading List stuff, plus DEVONthink automation for documents.

Everything ends up in the same @Read/Review perspective in OmniFocus.

The queue

OmniFocus handles scheduling, not content. The actual reading material stays where it belongs: articles in Craft, documents in DEVONthink, web pages in Safari, notes in my zettelkasten.

I use an Adaptive Task Repetition plugin (CMD-Shift-I) that works like SM-2. When I process an item, I rate how well it went: Again, Hard, Good, Easy, or Completed. The interval multiplies accordingly (1.4x for Hard, 2.0x for Good, 2.5x for Easy). Items I don’t care about, I just delete.
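The rescheduling arithmetic is simple. A sketch using the multipliers from the text; the “Again” reset to 1 day and the “Completed” handling are my assumptions, since I haven’t inspected the plugin’s internals:

```python
# SM-2-style interval update; multipliers 1.4/2.0/2.5 per the text above.
MULTIPLIERS = {"hard": 1.4, "good": 2.0, "easy": 2.5}

def next_interval(current_days: float, rating: str) -> float:
    """Days until the item resurfaces in the queue after a review."""
    if rating == "again":
        return 1.0              # assumed: start the schedule over
    if rating == "completed":
        return 0.0              # done; the task leaves the queue
    return round(current_days * MULTIPLIERS[rating], 1)
```

So an article on a 7-day interval rated “Good” comes back in 14 days, and rated “Easy” not for 17.5; the queue naturally thins out toward the items worth repeated contact.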

The queue has three workflow states: Discuss, Distill, and Synthesize.

Two-phase processing

2.6.5.3.2 formalizes something I realized: distillation and synthesis are different cognitive modes. Mixing them is exhausting.

Distill is understanding mode. I scan an article, bail if it’s not interesting, otherwise read and highlight. Maybe repeat it tomorrow, adjust the priority. I get about 0.75 extracts per article, which sounds low but most articles just don’t have that many good ideas.

Synthesize is creation mode. I review my timestamped notes, connect them to existing ideas, and when one is ready, I add the #Linking tag and rename it to folgezettel format (like 2.3.4) to place it in the outline.

The separation reduces cognitive load. I’m not trying to understand AND create at the same time.

The funnel

World (infinite)
  ↓ Readwise/manual capture
  ↓ AI priority [1-9]
  ↓ Spaced repetition scheduling
  ↓ Distill
  ↓ Synthesize
Zettelkasten

Each step reduces volume and increases value. Most articles die in the funnel. That’s the point.

Output expectations

I produce about 164 timestamped notes per year, and roughly 30 of those become folgezettels (about 18% conversion). The rest stay as timestamped notes, which is fine. They’re composting, not stuck. Some never mature, some resurface years later and become important.

Constraints

This is cognitive work, not leisure. I can only do it at the end of the day when I have energy, or maybe once or twice on weekends. Sometimes I just don’t give a fuck and pick something from Safari Reading List instead.

Output is limited by attention, not system mechanics. About 3-5 sessions per week, and the system performs at the capacity I give it.

How this differs from SuperMemo

SuperMemo fragments articles early into many small pieces, then schedules each fragment separately. The goal is memorization via QA cards.

I fragment late, at the natural reading moment. The goal is zettelkasten notes that connect ideas. Article-level granularity makes more sense for this.

2.19 notes that incremental reading extracts are efficient but disruptive for stories. The fragmented approach doesn’t suit all content types, and I prefer reading articles whole until I’m ready to extract.

  • 2.6.5.3.2 Splitting information extraction into distillation and synthesis
  • 2.19 Incremental reading extracts are efficient but disruptive for stories
  • 2025-01-25_10-52 The Zettelkasten is for people who want to lazy-load knowledge
  • 2025-01-20_22-25 The extraction is a key collection workflow
  • 2025-01-19_13-40 Literature Notes, Where do they go once they become Permanent Notes?

2025.12.17.

Reactive prompting treats AI as environment that overhears thinking

A prompting technique where I externalize thinking without directly addressing the LLM. The agent responds to the thought stream rather than engaging in conversation.

So instead of “talking to” an AI, I’m thinking out loud and something picks up on what’s actionable. The agent becomes environment rather than entity (like how a good IDE responds to what I’m doing without explicit commands).

Why This Works

Standard prompting has its place, but it’s a conversation. With reactive prompting, I’m talking to myself. The agent overhears and responds to what’s actionable. The thoughts are the source of truth, not a back-and-forth.

What Triggers Response

Not everything in the stream needs a response. The triggers I’ve noticed:

  • Expressed need or uncertainty
  • Ambiguity that blocks progress
  • Errors or contradictions worth flagging
  • Tasks implied but not stated

Pure reflection can flow past without interruption.

The Keywords Hint

When the thought stream needs specific context or tools, I add a keywords: line at the end:

Discussion about this needed x-devonthink-item://...

keywords: devonthink mcp, mcporter

The keywords act as hints for tooling or skills. It’s metadata for the stream, not a command.

Distill is a product built on this model. AI agents watch threads, spot patterns, and act without being prompted.

2025.12.04.

AI automation shifts human work toward strategy, workflow design, and exception handling

The question I keep coming back to: which parts of my work are repetitive, verifiable, and describable? And how do I turn those into workflows that AI can run (or at least help with)?

What’s left for me is the judgment calls, the exceptions, the strategy. My value shifts toward defining the workflows, keeping an eye on things, and stepping in when something breaks.

2025.07.20.

Bike outlines as structured planning DSL

Ray Myers’ “Abstraction Leap” concept suggests designing explicit DSLs rather than letting LLM prompts become source code (source highlight). Bike outlines could be perfect for this: XHTML structure makes them machine-readable while the outliner UI stays human-friendly.

The approach

Template + Validator = Guidance + Guarantees

  • Template shows LLMs the expected shape
  • Validator enforces that structure after generation
  • Result: predictable, testable foundation vs brittle free-form prompts
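The validator half can be tiny. A hedged sketch, assuming a Bike-style outline of nested `<ul>`/`<li>` rows with a `<p>` per row; the required section names are illustrative plan-template headings, not Bike’s schema, and real Bike files also carry an XHTML namespace this toy ignores:

```python
# Check a generated outline against the sections a plan template expects.
import xml.etree.ElementTree as ET

REQUIRED_SECTIONS = {"Context", "User stories", "Implementation design"}

def missing_sections(xhtml: str) -> set[str]:
    """Return required sections absent from the outline's rows."""
    root = ET.fromstring(xhtml)
    found = {p.text.strip() for p in root.iter("p") if p.text}
    return REQUIRED_SECTIONS - found
```

Run after generation: an empty set means the LLM’s output matches the template’s shape; anything else is a concrete, nameable gap to feed back into the plan.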

Connection to specification as code

This aligns with 2.8.4.2: the Bike outline becomes the primary artifact that compounds over time (see 2025-07-19_09-38). Following John Rush’s “fix inputs, not outputs,” you improve the template when plans generate poor breakdowns, not just the individual output.

In practice

Unlike free-form markdown specs that require manual interpretation, Bike’s XML structure makes it easier for LLMs to understand and process. The outliner’s visual hierarchy could make complex plans manageable while maintaining the machine-readable structure needed for reliable AI collaboration. This could bridge human planning intuition with computational precision.

2025.07.19.

Splitting information extraction into distillation and synthesis

Source extraction of the idea 2025-07-19_13-08

This approach modifies incremental reading 2025-01-20_22-25 by splitting information extraction into two distinct phases.

Phase 1: Distillation

What: Extract information from sources using DEVONthink

  • Highlight and annotate key passages
  • Gather summaries and quotes
  • Focus on capturing, not interpreting

Phase 2: Synthesis

What: Transform extracts into original #Zettelkasten notes

  • Connect ideas to existing notes
  • Develop personal insights
  • Create permanent notes with AI assistance

OmniFocus Integration

  • Distill tag: For tasks about extracting from sources
  • Synthesize tag: For developing draft #Zettelkasten notes

This separation enables batch processing of similar work and reduces cognitive load by not mixing extraction with creation. It acknowledges that distillation (understanding) and synthesis (creating) are fundamentally different cognitive activities requiring different mental modes.

Specification is the new true source code

Sean Grove’s thesis is that we’ve been valuing the wrong artifact. We treat code as precious and prompts/specifications as ephemeral, when it should be the reverse. His analogy is perfect - we’re essentially “shredding the source code and version controlling the binary.” The only problem with the analogy is that LLMs are non-deterministic, so relying on one as a compiler can result in different code artifacts from the same spec. Still, version-controlling both specs and code is a good middle ground.

John Rush takes this further with his “fix inputs, not outputs” principle. His AI factory isn’t just about automation—it’s about building a self-improving system where the plans and prompts are the real assets. When his agent wrote memory-inefficient CSV handling, he didn’t just fix that instance, he baked the streaming requirement into the plan template. The factory improves itself by improving its specifications.

The Task-Magic connection shows this thinking already emerging in practice. The PRD template is essentially a specification format, but it could be more - it could be a living document that evolves, forks, and adapts to different projects. The idea about “project specific templates” that can be forked mirrors how Grove describes specifications that compose and have interfaces.

What’s fascinating is how all three converge on the same truth: the specification IS the code. Grove calls it “the new code,” Rush calls it “the real asset,” and in Task-Magic it’s something that should compound and evolve rather than be recreated each time.

This represents a fundamental inversion of the traditional development process. Instead of specification → code → binary, we’re moving toward specification → multiple outputs (code, tests, docs), where the specification remains the primary artifact that we version, debate, and refine.

Revise my Task-Magic plan structure

Source highlight:

Outputs are disposable; plans and prompts compound. Debugging at the source scales across every future task. It transforms agents from code printers into self-improving colleagues.

My next step would be to review my PRD template and extract it into a separate file. What’s the easiest way to continuously improve it? Perhaps having project-specific templates, since not all projects have the same requirements. However, plan templates can be forked and include a section similar to a local CLAUDE.md file, but stored in the repository.

I’m not sure what would be the general structure though.

  1. General Instructions
    • Project-agnostic guidance that applies to every project
    • Custom instructions for my local setup like commands (orbctl, deployment processes)
    • Practices that work well can graduate to global CLAUDE.md
  2. Context / background info
    • System description relevant to the plan scope
    • High-level overview of current state:
      • Technical architecture
      • Key files and directories
      • Integration points
    • Foundation for early research
    • Link to related zettelkasten notes for deeper context
    • Brainstorming
      • Keep exploratory thinking in TaskPaper and link separately
  3. User stories
    • Clear “As a user I want X so that Y” format
    • This approach surfaces misalignment between human intent and agent understanding
    • Creates shared vocabulary before technical implementation
    • Foundation for implementation design decisions
  4. Implementation design
    • High-level architectural steps (following Kiro’s format)
    • Step-by-step approach that breaks into manageable development tasks - Guides for the task template
    • 2025-07-21_09-12
    • Anatomy of an AI Prompt
      • This can be used as the template skeleton for different sections
      • Ask the agent in the rule to replace placeholder stuff
    • When adding new locales to the view, DO NOT add them to the en.yml, use the default flag in the view for I18n.t.
    • Nearcut
      • We prefer images in features/admin/feature-name folder name
    • Have a base branch name in the frontmatter, so the fleet agent must switch to that first
      • It should ask about it when we create tasks

The plan file could be converted from a Markdown template into a Bike template 2025-07-20_13-14.

2025.06.26.

Claude Code Workflow and Features Index

ENABLE_BACKGROUND_TASKS allows Claude Code to run long tasks in background

Ian Nuttall (@iannuttall) shared a useful Claude Code pro tip for handling long-running tasks.

Claude Code Pro Tip

Add this line to your .zshrc or .bashrc (ask Claude Code to do it for you):

export ENABLE_BACKGROUND_TASKS=1

This allows you to move long-running tasks to the background to keep chatting with Claude Code while tasks execute.

Key Points

  • Environment variable enables background task execution in Claude Code
  • Keeps the chat interface responsive during long operations
  • Can be added to shell configuration files automatically by Claude Code
  • Improves workflow efficiency for developers

2025.05.12.

My computers show me dynamic index cards

Link to Original Document

What is a card?

  • A card is any addressable object that exposes a deep link and a title.
  • Cards are not just files. They include:
    • OmniFocus actions, projects, tags, perspectives (omnifocus://…)
    • Craft pages, blocks (craftdocs://…)
    • Markdown notes, headers (file:///…)
    • DEVONthink records (x-devonthink-item://…)
    • Email messages, calendar events, PDFs, web highlights
  • If you can deep link to it, you can treat it as a card.
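The definition above is small enough to model directly. A toy sketch; the field names are mine, and the “engine” is just whatever the URL scheme points at, as in the examples:

```python
# A card: any addressable object exposing a deep link and a title.
from dataclasses import dataclass

@dataclass
class Card:
    title: str
    url: str  # e.g. omnifocus://..., craftdocs://..., x-devonthink-item://...

    @property
    def engine(self) -> str:
        """The app hosting the card, read off the URL scheme."""
        return self.url.split("://", 1)[0]
```

A Spotlight-friendly title plus a stable URL is the whole contract; everything else (dashboards, backlinks, context jumps) composes on top of it.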

The system: a web of linked cards

  • I don’t care about “the app,” I care about the content inside it.
  • Every app becomes a card engine. Cards live in engines, but they link to each other across silos.
  • Instead of trying to store everything in one monolithic app, I have a network of cards connected by URLs.
  • Examples:
    • An OmniFocus task links to a PDF in DEVONthink
    • That same PDF links back to a Craft note where I summarized it
    • The Craft note has a backlink to the original task
  • Spotlight is the universal search index. A good title makes the card retrievable regardless of app.
  • Links make cards composable. They allow you to:
    • Jump from a project to its references
    • Surface context
    • Build dashboards across tools
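
A card really is just a title plus a deep link, which makes the idea easy to model. A minimal Python sketch (all titles, URLs, and IDs below are made up for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Card:
    """Any addressable object: a title plus a deep link."""
    title: str
    url: str  # omnifocus://..., craftdocs://..., x-devonthink-item://..., file:///...

def find_cards(cards: list[Card], query: str) -> list[Card]:
    """Title-based lookup, Spotlight-style: a good title makes the
    card retrievable regardless of which app owns it."""
    q = query.lower()
    return [c for c in cards if q in c.title.lower()]

cards = [
    Card("Review quarterly report", "omnifocus://task/abc123"),
    Card("Quarterly report PDF", "x-devonthink-item://DEADBEEF"),
    Card("Quarterly report summary", "craftdocs://open?blockId=xyz"),
]
# All three engines surface the same topic via one title search:
print([c.url for c in find_cards(cards, "quarterly report")])
```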

Linking cards across engines

  • I use Hookmark on macOS to quickly copy or hook links between cards.
  • On iOS, I prefer apps that expose stable custom URL schemes.
  • Linking isn’t just for documents. OmniFocus perspectives or DEVONthink groups can be cards too.
  • I still use folders in tools like DEVONthink to organize project materials. Links just sit on top of that structure to connect meaningfully related items.
  • It aligns with contextual computing: the object is the anchor, not the app.

2025.05.11.

Treating projects as experiments

Page 65

One tool to make this easier is to reframe decisions as experiments. You’re no longer a perfectionist frozen on stage with everyone watching your every move, you’re a curious scientist in a lab trying to test a hypothesis.

Treating any captured item as a “possible experiment” can help us detach from the necessity of completing the project. This is pretty much what the #GTD Someday/Maybe list does: it lets an idea sit until you (a) see clear learning value and (b) have bandwidth to run it.

Instead of searching for the perfect solution, run experiments and analyze them against success factors.

  1. Hypothesis – What do I expect to learn or prove?
  2. Metric of success – How will I know the experiment taught me something?
  3. Next action – The first, smallest, concrete step that moves the experiment forward.

If you can’t write all three in <60 s, it probably isn’t worth experimenting yet.

Adding “as experiment” to a project’s name will not automatically turn it into an experiment. A project should still be an outcome; an experiment is more like a subproject.

Experiment-based projects should work well for work-related projects, where POCs act as experiments. We could call this “experiment-driven programming”.
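
The three-field template above is easy to formalize. A minimal Python sketch (the class and field names are my own, not from any tool):

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str      # what do I expect to learn or prove?
    success_metric: str  # how will I know it taught me something?
    next_action: str     # first, smallest concrete step

    def worth_running(self) -> bool:
        # The <60s rule: if any field is blank, it probably
        # isn't worth experimenting yet.
        return all(s.strip() for s in
                   (self.hypothesis, self.success_metric, self.next_action))

exp = Experiment(
    hypothesis="Switching to TaskPaper speeds up weekly planning",
    success_metric="Weekly review takes under 30 minutes for 3 weeks",
    next_action="Export one OmniFocus project to TaskPaper",
)
print(exp.worth_running())  # → True
```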

2025.02.08.

Exploring Real-Time Voice-to-Text Transcription Options and Preferences

I’m exploring app options for real-time voice-to-text transcription, similar to macOS dictation.

  • I’ve looked at existing solutions like the VoicePen app that allows typing and content transformation.
  • I also investigated Inbox AI but found it confusing, and my attempt to configure a new voice assistant proved unsuccessful.
    • I may return to this app one day.
  • Seems like Bolt.AI can dictate and type inline.
    • This is essentially the same process I was using with VoicePen, so I’ll continue using VoicePen for longer dictations. I might also use Voice Memos to capture the text, then paste it into the note. Alternatively, I can dictate inline using Bolt.AI.
  • On the other hand, I would prefer to use the built-in dictation feature of macOS.
    • Since it integrates seamlessly with text editing, I can see my typed words in real time, and it’s actually quite effective.
    • The good news is that I can go back and fix any issues. They’ve recently added text editing with dictation, so I might not need Bolt.AI after all. Dictation could work perfectly well.

2025.02.07.

Incremental brainstorming makes it possible to collaborate asynchronously with ourselves or others

Incremental brainstorming allows us to document the thinking process through an archive of written communications. This method enables brainstorming with one’s past self, in addition to the present participants.

In other words, the participants of incremental brainstorming include:

  1. participating brains,
  2. past versions of participating brains, and
  3. non-participating authors from the past and the present (as source material, or reference).

There are different tool-specific forms of incremental brainstorming:

2025.02.06.

The iPad mini is best used for consumption of chronological information

I’ve noticed something interesting about my iPad mini—it just feels right when content is organized chronologically. There’s a natural rhythm to it: I get in, touch a piece of content, and then get out. I just 2.14.11.2 Highlighting information in streams. This approach is all about ease and efficiency. The content flows in the order it was created or updated, which mirrors the way our minds naturally process events. No complicated folders or categories—just a simple, straight path to what’s new.

What I like about this setup is that it cuts down on decision fatigue. Instead of spending time figuring out where to look or how to organize my thoughts, the interface handles that for me. I just dive in, quickly interact with a bit of content, and move on without overthinking it. This streamlined process makes the browsing experience feel almost effortless, which is exactly what you want when you’re just looking to catch up without any extra hassle.

Because our brains naturally remember things in sequences, this kind of ordering feels intuitive. I don’t have to stress about missing something important or having to manually sort things out later. The system does it all for me, reinforcing that laid-back, efficient browsing style.

2025.02.01.

Different Tools for Different Thinking Modes

Follow-up on:

I figured out how to use different tools for different types of thinking. Set up three OmniFocus shortcuts for this:

  1. Zettelkasten (The Archive):
    • Main journaling and thought capture
    • Documentation and reflection
    • Both daily and permanent notes
    • OmniFocus shortcut for project-specific logging
    • See 2.6.15 for the detailed content pipeline workflow
  2. TaskPaper:
    • Planning and brainstorming
    • Project-specific thinking
    • Task breakdown
    • OmniFocus shortcut for project brainstorming
  3. Emacs:
    • Programming experiments in Org Mode
    • Literate programming
    • OmniFocus shortcut for programming docs
    • Still figuring this one out

Color-coded the shortcuts to make it easy to distinguish them:

#Workflow #Journaling #OmniFocus

2025.01.30.

Thought Threads: Append-Only Note-Taking

Thought Threads is an append-only, thread-based note-taking system where new ideas are added at the end of a sequence rather than inserted between existing ones. It preserves the natural flow of thought development while allowing connections through cross-links instead of restructuring.

Key Principles

  • Append-only → Notes are always added at the end.
  • Threaded structure → Ideas evolve like a conversation.
  • Hierarchical depth → Indentation organizes sub-notes.
  • Links over restructuring → Notes reference each other rather than being moved.

Example Structure

1 Productivity
  1.1 Time Management
    1.1.1 Pomodoro
    1.1.2 Deep Work
  1.2 Cognitive Biases
  • New notes are appended (1.1.3, 1.1.4).
  • Cross-references connect related ideas (e.g., “See 1.2 for biases in time management”).
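
The append rule is mechanical enough to sketch in code. A minimal Python function (note IDs here are illustrative) that computes the next append-only ID under a parent:

```python
def next_child_id(existing: list[str], parent: str) -> str:
    """Next append-only ID under a parent: with 1.1.1 and 1.1.2
    present, the next child of 1.1 is 1.1.3."""
    prefix = parent + "."
    # Direct children only: starts with the prefix, no deeper dots.
    children = [e for e in existing
                if e.startswith(prefix) and "." not in e[len(prefix):]]
    last = max((int(c[len(prefix):]) for c in children), default=0)
    return f"{parent}.{last + 1}"

notes = ["1", "1.1", "1.1.1", "1.1.2", "1.2"]
print(next_child_id(notes, "1.1"))  # → 1.1.3
print(next_child_id(notes, "1.2"))  # → 1.2.1
```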

Why This Works

  • Preserves chronological order → You see how ideas evolve.
  • No need for reorganization → Just append and link.
  • Less friction → No need to decide where to insert a note.

Best Practices

  • Summarize long threads with milestone notes (1.3 Summary).
  • Define “Next Steps” in notes to guide further thinking.
  • Use an index (optional) for quick navigation.

2025.01.27.

Zettelkasten as an Information Stream

A Zettelkasten exhibits many characteristics of an information stream:

  • It grows continuously over time
  • Each note preserves a moment of thinking
  • Previous entries remain unchanged
  • The system accumulates value through historical preservation
  • It enables discovery through browsing and connection-making

However, unlike typical streams, a Zettelkasten also incorporates deliberate organization through its linking structure and numbering system. This makes it a hybrid system that combines the benefits of stream-like accumulation with structured knowledge management.

The stream-like nature of Zettelkasten supports the natural evolution of ideas while its organizational features prevent the chaos that might occur in a pure stream system.

See also:

Definition and Purpose of Information Inboxes

An inbox works as a staging area 2.14.11 that creates a natural pressure to act. Unlike streams that can grow indefinitely, inboxes are designed to stay empty. Each new item creates a small amount of pressure – an email needs a response, a document needs to be filed, a note needs to be processed into your permanent system.

The pressure from an inbox is useful: it drives you to make decisions and move items to their final destinations. However, an inbox that grows without processing turns this useful pressure into overwhelming anxiety.

Inboxes are revisable by nature – items can be deleted, forwarded elsewhere, or modified during processing. They’re aimed at quickly assessing what needs your attention rather than preserving historical context.

See also: 2.14.11.1 for comparison with streams.

Definition and Purpose of Information Streams

A stream is like a river – it flows continuously, carrying information forward while preserving everything that came before. Think of a blog, a journal, or a public thought stream 2025-01-17_18-31. Each new entry adds to the historical record without disturbing what came before.

The value of a stream lies in this accumulation: you can trace the evolution of ideas, see how your thinking developed, and extract insights from the patterns that emerge over time. A stream isn’t there to remind you of tasks you need to complete – it’s more of a running log or narrative that simply keeps growing over time.

Many streams are treated as append-only, where entries get added but aren’t edited much, allowing you to see the evolution of an idea (for instance, older blog posts or daily journaling), see 2.16.

See also: 2.14.11.1 for comparison with inboxes.

The difference between streams and inboxes

Information systems typically manifest in two forms: streams and inboxes 2.14.11.2. Each serves a distinct purpose in how we capture, process, and maintain information over time.

Key aspects of these systems:

  1. Streams 2.14.11.1.1:
    • Flow continuously like a river
    • Preserve historical record
    • Accumulate value over time
  2. Inboxes 2.14.11.1.2:
    • Act as staging areas
    • Create pressure to process
    • Designed to stay empty

The two systems can work together through highlighting 2.14.11.2, where valuable items from streams become inbox items for processing.
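
The stream/inbox/highlight relationship can be sketched as two tiny data structures. A minimal Python illustration (class and function names are my own):

```python
from collections import deque

class Stream:
    """Append-only: entries accumulate and are never removed."""
    def __init__(self):
        self.entries = []
    def append(self, item):
        self.entries.append(item)

class Inbox:
    """Staging area: items are processed (FIFO) until empty."""
    def __init__(self):
        self.items = deque()
    def add(self, item):
        self.items.append(item)
    def process(self):
        return self.items.popleft()

def highlight(stream: Stream, inbox: Inbox, index: int):
    """A valuable stream entry becomes an inbox item; the stream's
    historical record stays untouched."""
    inbox.add(stream.entries[index])

journal = Stream()
todo = Inbox()
journal.append("2025-01-27: note on streams vs inboxes")
highlight(journal, todo, 0)
print(todo.process())        # the item leaves the inbox...
print(len(journal.entries))  # ...but remains in the stream
```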

Related concepts:

2025.01.25.

Message queues are logs

A message queue is an ordered log that stores messages persistently on disk, ensuring recovery and redelivery in case of failures.

Consumers can replay messages from a specific log point.

Distributed message queues like Kafka replicate the log across nodes for high availability and fault tolerance, treating the log as the primary data synchronization abstraction.

Producers append messages to the log, while consumers read sequentially, ensuring efficient and consistent data flow.
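
This log abstraction is small enough to sketch. A minimal in-memory Python version (not real Kafka, just the offset/append/replay idea):

```python
class MessageLog:
    """A message queue modeled as an append-only log."""
    def __init__(self):
        self._log = []

    def append(self, message) -> int:
        """Producer side: append and return the message's offset."""
        self._log.append(message)
        return len(self._log) - 1

    def read_from(self, offset: int) -> list:
        """Consumer side: replay sequentially from any stored offset."""
        return self._log[offset:]

log = MessageLog()
for msg in ("created", "updated", "deleted"):
    log.append(msg)
# A consumer that last committed offset 0 replays everything after it:
print(log.read_from(1))  # → ['updated', 'deleted']
```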

2025.01.23.

2025.01.22.

Guiding the Growth of Knowledge Trees

Highlight, 2025-01-21

The growth of the knowledge tree will also be guided by the present level of understanding of individual subjects, in proportion to the growth of the supporting knowledge, and specialist terminology

The SuperMemo knowledge tree looks pretty similar to my Commonplace Book topics tree in DEVONthink, built with different tags. The difference between DEVONthink and SuperMemo is that SuperMemo enables child nodes on a parent node.

It is important to remember that the SuperMemo tree is not built from files and folders, but from notes, similar to Tinderbox.

2025.01.21.

Scanning and marking a book for Incremental Reading

Incremental Reading can be used to extract information out from books in chunks.

How to read a book in an hour? – 09:34

To have a general overview of the different ideas in a book, I can scan it first and use the blue highlighter to mark interesting ideas.

How to read a book in an hour? – 12:25

Then, I can use the Notes & Highlights tool in Apple Books to navigate to these parts and extract out information from each chunk. I should use the blue highlight for chunks.

Incremental reading extracts are efficient but disruptive for stories

With incremental reading, we waste no time on reading material we do not understand. We can safely skip portions of material and return to them in the future.

Incremental reading emphasizes the extraction of key information from texts rather than understanding the entire text in one sitting. This approach allows readers to skip parts of the material they don’t immediately understand and return to them later.

2.6.5.3.2 extends this extraction-focused approach by formalizing the separation between information distillation and synthesis phases.

Consequently, this method might not be well-suited for story-type texts, which typically rely on a continuous narrative and emotional engagement. The fragmented nature of incremental reading could disrupt the flow and overall experience of such texts.

2025.01.20.

The extraction is a key collection workflow in Incremental Reading and Zettelkasten

Incremental reading is about getting extracts and converting them to cards. In a #Zettelkasten system, this conversion also happens, but we convert extractions into separate notes.

Incremental reading

Source: Incremental reading - Wikipedia

Page 1

Incremental reading is a software-assisted learning method that breaks down information from articles into flashcards for spaced repetition.

We make flashcards from articles when we do incremental reading.

Page 1

Piotr Woźniak

  • Who is Piotr Woźniak?
    • Piotr Woźniak is a Polish researcher known for developing the SuperMemo software, which is based on the concept of spaced repetition.
    • This method is designed to enhance learning by breaking down information into flashcards and reviewing them over time to improve memory retention.
    • Woźniak’s work in this area has significantly influenced the field of educational technology, particularly in how people approach learning and memory.

Page 1

Instead of a linear reading of articles one at a time, the method works by keeping a large list of electronic articles or books (often dozens or hundreds) and reading parts of several articles in each session.

2025-01-20_20-56

Page 1

During reading, key points of articles are broken up into flashcards, which are then learned and reviewed over an extended period with the help of a spaced repetition algorithm.

Page 2

When reading an electronic article, the user extracts the most important parts (similar to underlining or highlighting a paper article) and gradually distills them into flashcards.

Page 3

With time and reviews, articles are supposed to be gradually converted into extracts and extracts into flashcards. Hence, incremental reading is a method of breaking down information from electronic articles into sets of flashcards.

Page 3

Contrary to extracts, flashcards are reviewed with active recall.

The Zettelkasten method is also a way to break down articles into highlights, then those highlights into notes.

The repeating part is missing, since the Zettelkasten prioritizes accidental discovery over repetition.

In my mind, the Zettelkasten is better, because I like to lazy-load information, instead of remembering.

2025-01-25_10-52

What is Incremental Reading?

  • Source: What is Incremental Reading?
    • 03:50 Interleaving is when we switch between different subjects based on interests or tiredness.
    • 04:46 Teleporting to the next article (spaced repetition, which could be automated via DEVONthink)
    • 04:51 Extracts → Highlights?
      • 2025-01-20_22-25 The extraction is a key collection workflow in Incremental Reading and Zettelkasten
    • 06:13 SuperMemo extracts are working in a tree-like structure, so extracting something will create a new note under the existing tree item of the article.
      • This is actually a pretty cool idea, since instead of having backlinks based on the source of an idea, we can trace it back to a tree structure.
      • On the other hand, how does this work when we have an article as the top-level item? The extracts are all connected to the root item.
      • I guess we can drag-and-drop stuff in the tree and just keep the links to the source around?
      • Tinderbox seems like a good app for such system.
    • 07:52 Priority queue → Ordering our reading list based on how interesting the article is to us?
    • 11:25 Flow of knowledge → convert passive articles and books into active flashcards
    • 12:32 Spaced repetition is a way to get a routine in something.
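
The priority-queue idea at 07:52 maps directly onto a heap. A minimal Python sketch (article titles and interest scores are made up):

```python
import heapq

def by_interest(reading_list):
    """Yield articles most-interesting-first. heapq is a min-heap,
    so interest scores are negated to pop the highest score first."""
    heap = [(-score, title) for title, score in reading_list]
    heapq.heapify(heap)
    while heap:
        _, title = heapq.heappop(heap)
        yield title

articles = [("Incremental reading", 8),
            ("Kafka internals", 5),
            ("GTD weekly review", 9)]
print(list(by_interest(articles)))
# → ['GTD weekly review', 'Incremental reading', 'Kafka internals']
```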

2025.01.19.

Literature Notes, Where do they go once they become Permanent Notes?

Source: Literature Notes, Where do they go once they become Permanent Notes?

Highlight

Are these literature notes, engagement notes, permanent notes? Yes, all of it, probably, but it doesn’t matter. I tried to frame the process differently: start with things that look interesting, make sense of them, partition them to make them re-usable and to provide an address for each idea. (And delete what doesn’t fit. Some things I highlight in texts turn out to be unsalvageable.)

Instead of having separate reading notes and permanent notes, we should just extract out ideas.

Every idea then needs to be moved into its own atomic note. We can then link the idea to other ideas.

That’s it.

Highlight

You are better off dividing all your stuff into two things:

There are only two types of things we encounter:

  1. Source material: articles, ideas, emails, etc.
  2. Text extractions and cleaned-up notes.

Highlight

Take your source material and extract ideas in an atomic way integrating them it into your Zettelkasten. The last part depends on whatever you want your Zettelkasten to be and it is up to yourself and your expertise your specific field.

So just have a source; then all notes are in fact “permanent notes”. But they should be atomic.

So annotations like this should be processed into notes, but it is fine if we don’t make “evergreen ideas” out of them.

The only requirement is to have a place where these notes are linked into bigger ideas.

Highlight

It gets chopped up into Zettels by copy-and-pasting the marked up, condensed matter into existing and new Zettels, with sourcing added liberally.

Marking the source is important. But when I create the final export of my notes from an annotation file, I’m not sure how I should move it over to my Zettelkasten.

I should keep the original one around, and edit a new one in my ZK. It always links back to the PDF, so I can see my annotations.

So when I re-read the PDF (if that’s a thing), I can have my original ideas available.

Highlight

I can liberally follow the Collector’s Fallacy and use this process to filter out anything uninteresting over time - as from starting to read to having the source “done” can take weeks or months; some never get the slip box treatment because ideas that sounded interesting at the time of reading are irrelevant 2 weeks later.

This gives us a prefilter, since we can jot down ideas, but only the best ideas are developed into ZK notes.

Highlight

Permanent notes, which synthesize ideas from multiple sources and/or record my own thoughts, and have a References section that links back to either the lit note or its underlying source note. This is how I maintain traceability from note to source.

The best way to keep the connection to the original source is to write more in place in the annotations extracted from the DEVONthink PDF, then link back to this file in the references section of every note.

Backlinks would do this automatically if I extract atomic notes in place in the annotations note.

Highlight

The annotations I make on the literature note (giving my own ideas, and links to other permanent notes that are related) are what moves it along the spectrum described earlier

I can even link existing notes to these annotations.

Highlight

I do this as well! My reading inbox currently has over 100 sources in it. Is this Collector’s Fallacy? Yes, but they are sitting there waiting to be processed. Currently I’m processing maybe a half dozen or up to ten in various stages of completion. I’ll get around to the remainder eventually, or I’ll tire of them staring at me in the inbox and discard the ones that no longer interest me.

We can actually process multiple reading items simultaneously. This means each item can be highlighted and continued as we process it.

The idea is that we can keep up with multiple sources this way.

Highlight

This method of note taking enables gradual digestion of multiple sources on our own schedule.

This feels pretty similar to spaced repetition. I wonder if DEVONthink can create reminders every year that add a PDF back to my reading list for review.

Update: I checked, and I can add a PDF as a repeating reminder which adds it back to my reading list. This makes the reading list in DEVONthink a kind of next-action list where I could also add notes (annotations).

Highlight

When I used SuperMemo I was able in one case to split a long video up and process half of it over the course of an evening, and then as other priorities mounted I delayed processing the second half for two years and the incremental reading capabilities ensured I had only minimal loss of comprehension of the first half during that time.

So, the DEVONthink Reading List can be used to postpone something for the future, by setting a reminder that adds the asset back to my list.

This way, the Reading List is a project list where each item has only one next action, keep reading, and when I’m done, I move my ideas into my ZK.

Highlight

spaced repetition in some ways and reviewing previously taken notes

I review my ideas and notes in OmniFocus using the Synthesize perspective.

#Drafting

2025.01.17.

I have multiple journaling systems

I capture and document information in various formats. Here’s a list of each journal type I create, along with its purpose and the tools I use.

  • Thoughts / Statuses 11.1
    • Purpose
      • Explore ideas in a semi-public but low-pressure format.
      • Easy to do thinking. Thoughtstorming. Thinking out loud.
    • Tools
      • Mastodon as a backend.
      • Mona for storing and organizing thoughts in threads.
      • I have to bookmark threads for easy finding and appending new ideas. Mostly kept append-only, to see the whole thought formation over time.
    • Ideas
      • Maybe I should start these threads as “Thinking about XYZ…”.
  • Interstitial journal
    • Purpose
      • Maintain a journal to document OmniFocus projects, engage in sensemaking, and quickly outline project plans. These plans are typically extracted and stored in dedicated TaskPaper or Bike files.
    • Tools
      • Managed in TaskPaper.
  • Private journal entries
    • Purpose
      • Document my daily life, personal reflections, or private thoughts that I want to keep track of and also remind myself about.
    • Tools
      • Day One / Journal.app for storing and reviewing them. Occasionally, I might draw insights from these entries that can turn into more public or structured notes.
  • Note Development / Zettelkasten
    • Purpose
      • Write daily notes about articles I read.
      • Build a permanent, networked knowledge base or “building blocks” of my thinking.
      • A resource to consult for ideas, forming the “backbone” of my knowledge.
    • Tools
    • Workflow
      • 2.6.15 details the complete content pipeline that orchestrates these tools from reading to publishing.
  • Blog Posts / Articles
    • Purpose
      • Public-facing content—share refined ideas with an external audience. May start as a collection of Zettelkasten notes or microblog threads, refined into an organized piece.
    • Tools

#Linking

Reading "What's the difference between my journal and my stream?"

I write my journal in org-roam. It is a bulleted list of thoughts. It is read-only - noone can interact with it directly. (Though of course, people could annotate it with hypothesis, or something similar). It is not structured - you could not subscribe to items within it in a feed reader, say. It is public, and is thus filtered - despite the name, I don’t put much personal or intimate things in this public journal.

I could refer to my journal as my Zettelkasten homepage, where all the new notes are posted. I call mine daily notes.

I publish to my stream via micropub and WordPress, and syndicate it to Mastodon. My stream allows for comments and interactions.

My stream is my blog.

What goes in my stream is generally a subset of my journal. But responses to comments in my stream are not necessarily included in my journal. (Though likely pulled in to my garden in the relevant place.)

I guess my journal is narrative, my stream is dialogic.

That’s a pretty cool idea: the journal is narrative, the stream is dialogic.

I have other journaling styles though, depending on the source of information. 2025-01-17_18-31

2025.01.12.

Using Twitter for public thinking

Using the outline to keep track of threads

Another thing I could do is add these threads to the outline itself. The outline is reserved for developed ideas, but I could make an exception with notes that are part of a larger thread, too. Then, I can automatically link them together without messing around with the follow-up button.

Using a Safari tab-group as a writing inbox

Actually, one idea could work: creating a Safari tab-group for threads. It’s a basic bookmark manager, but it’s interactive. I can click on the Follow-up button on any note in a thread to add a new note. When I publish the new note, I can simply reload the thread and open the new note. The newly updated link will be kept as a tab.

In a way, this threads tab-group could serve as a to-do list for writing tasks. I can keep tabs open, and using the Edit and Follow-up buttons, I can easily open the note in iA Writer.

  • Add live reload for notes

Linking to stacked notes

I can also link to “threads” in my Zettelkasten, but the problem is that the stacking is manual. When I add a follow-up idea, the link changes, so I can’t keep these links around anywhere to easily get back to them.

If I add a new note to a saved thread, I have to refresh the link, click on the newly added note, then resave the link somewhere so the newly added note is also getting loaded. 11.1.3

Creating a follow-up shortcut for easier threading

I even created a new shortcut, so I can just select a note in The Archive and add a follow-up note to it using LaunchBar. This is the same feature that’s available on my Zettelkasten website, but I can do it locally.

Using Mastodon for threads

I had this idea of using Mastodon as a private thread-based Zettelkasten. I’m not sure why I would start yet another note-taking system, but the fact that I could use apps like Croissant or Tusks to manage these threads is closer to how my brain works than a Zettelkasten.

I like to start with an idea, then develop it, and keep appending to it. More on this in 2.6.12.1.

The Zettelkasten is not really a thread-based system. It is more of a network of ideas. But the linear nature of threads is basically why I’m fascinated by append-only information storage. 2.16

In a way, having that kept in Mastodon would mean that I can start writing an idea, but after I publish, I can’t change it anymore. The system would be append-only.

Threading

I love threads. Not the social network, but the concept of having a chain of thoughts. (Maybe that’s why I like to use Gibberish for drafting).

I think the best invention that social media sites like Twitter have is the threaded view, where short notes can be chained together.

In a way, my Zettelkasten is also capable of doing that, but replying is a bit harder, since we have to chain notes together somehow.

2024.12.20.

Obsidian + Cursor: Magical AI Knowledge Management

  • Metadata
  • Summary
    • 0s Obsidian Overview: A tool for managing engineering logs, notes, highlights, bookmarks, and example code.
    • 27s Central Store: Obsidian acts as a central repository for various types of information.
    • 40s Traditional Tools: Separate apps for bookmarks, code, and highlights.
    • 1m Obsidian’s Advantage: Consolidates all information in one place with extensions.
    • 1m 24s What is Cursor?: An AI code editor that replaces traditional code editors.
    • 1m 37s Features: Auto-completion, code actions, and a user-friendly interface.
    • 3m 30s Loading Obsidian into Cursor: Syncing and managing files.
      • Initial Sync: Describes the initial process of syncing Obsidian files into Cursor, which may cause a slight delay.
      • File Embeddings: Explains how Cursor generates embeddings to better understand the files.
      • Ignored Files Configuration: Details on configuring which files should be ignored during the sync process.
      • Troubleshooting: Suggestions for resolving sync issues, such as deleting the sync and reconfiguring ignored files.
    • 4m 13s Ignoring Files: Configuring files to be ignored during sync.
    • 5m Asking Questions: Using AI to search and analyze Obsidian data.
    • 6m Example Use Cases: Finding AI tools and recent posts.
      • AI Tool Discovery: Using AI to identify and evaluate new tools for various tasks.
      • Recent Post Analysis: Leveraging AI to locate and summarize recent posts or updates.
      • Prompt Evaluation: Asking AI to assess the effectiveness of different prompts.
      • Content Retrieval: Efficiently finding specific content within a large dataset.
    • 6m 35s Linking Files: Difficulty in traversing links between files.
      • Link Traversal Issues: Challenges faced when trying to navigate between linked files.
      • Potential Solutions: Suggestions for improving link navigation and management.
    • 7m 21s Adding Context: Improving AI responses by providing full document context.
      • Contextual Enhancement: Methods to provide additional context to AI for better responses.
      • Document Integration: Techniques for integrating full document context into AI queries.
    • 8m 45s Brainstorming: Combining past videos and bookmarks for new insights.
      • Idea Synthesis: Using AI to combine information from various sources for new ideas.
      • Resource Compilation: Gathering and organizing past resources for effective brainstorming.
    • 10m 3s Improving Documents: Using AI to enhance existing content.
      • Content Enhancement: Strategies for using AI to improve document quality and clarity.
      • AI Editing Tools: Overview of tools and features available for document enhancement.
    • 12m 7s Obsidian vs. Cursor: Each tool has unique strengths; both are valuable.
      • Tool Comparison: Analysis of the strengths and weaknesses of Obsidian and Cursor.
        • Obsidian Strengths: User-friendly interface, effective for managing workflows, and consolidating information in one place.
        • Obsidian Weaknesses: May lack advanced AI capabilities compared to dedicated AI tools.
        • Cursor Strengths: Powerful AI capabilities, flexibility in handling files, and ability to perform complex searches and analyses.
        • Cursor Weaknesses: May not offer the same level of user interface customization and visual appeal as Obsidian.
      • Use Case Scenarios: Examples of when to use each tool for optimal results.
        • Obsidian Use Cases:
          • Note-taking and Organization: Ideal for managing notes, logs, and consolidating information in one place.
          • Visual Mapping: Useful for creating visual maps of content and linking related information.
        • Cursor Use Cases:
          • AI-Driven Code Editing: Best for tasks requiring AI-assisted code completion and analysis.
          • Complex Searches: Effective for performing in-depth searches and analyses across large datasets.
    • 14m 39s Final Thoughts: The combination of Obsidian and Cursor offers powerful knowledge management capabilities.
    • 15m 9s Call to Action: Encouragement to subscribe and visit the blog for more insights.

    #Processing

2024.12.18.

NotCon'04 Danny O'Brien Life Hacks

  • Metadata
  • Summary
    • 16s Opening Story: Begins with a humorous anecdote about Silicon Valley and index cards.
    • 46s Inspiration for Life Hacks: Visit to Xerox PARC and encounter with Kent Beck, creator of Extreme Programming.
    • 3m 34s Survey of Technologists: Contacted 70 technologists, received 14 detailed responses.
    • 16m 6s Common Themes: Use of simple tools like Todo.txt for organization.
    • 17m 5s Text Files for Organization: Importance of quick data entry and retrieval.
      • Quick Data Entry: Emphasizes the need to quickly dump information to avoid forgetting it (17m 13s).
      • Efficiency: Organizing systems must be fast, typically taking no more than 1-3 minutes (17m 23s).
        • Time Management: The goal is to ensure that the process of organizing does not become a time-consuming task. By limiting organizational activities to 1-3 minutes, individuals can maintain productivity and focus on their primary tasks without being bogged down by the system itself.
      • Text Processing: Text files allow for quick cutting, pasting, and searching (17m 45s).
      • Minimal Metadata: Preference for minimal metadata to keep systems simple (18m).
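The "dump it fast, minimal metadata" idea above can be sketched as a capture helper: one append-only text file, one timestamped line per thought. The file name and line format are my assumptions:

```python
from datetime import datetime

def capture(thought: str, inbox: str = "inbox.txt") -> None:
    """Append a single timestamped line to a flat text file.
    No categories or forms, so entry takes seconds and nothing
    is forgotten while hunting for the 'right' place to file it."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with open(inbox, "a", encoding="utf-8") as f:
        f.write(f"{stamp}  {thought}\n")
```

Since the result is plain text, the later steps (cutting, pasting, searching) need no special tooling.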
    • 18m 36s Incremental Search: Described as a powerful tool for efficiency.
      • Incremental Search Explained: Incremental search is a feature that allows users to search text as they type, providing immediate feedback and results. This is similar to how search engines like Google offer suggestions and results as you type each letter. In text editors and other applications, this feature helps users quickly locate information without needing to complete the entire search query. It is particularly useful in environments like Emacs or Mozilla, where users can start typing and see results instantly, enhancing productivity by reducing the time spent searching for information.
      • Applications and Benefits: Incremental search is prevalent in many text processing tools and is becoming more common in other software environments. It allows for faster navigation and retrieval of information, making it a valuable tool for anyone dealing with large amounts of text or data. The ability to quickly narrow down search results as you type can significantly improve workflow efficiency.
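The mechanism described above can be sketched as re-filtering a list on every keystroke; each added character only narrows the previous result set, which is why feedback feels instant:

```python
def incremental_search(lines: list[str], query: str) -> list[str]:
    """Case-insensitive substring match; called on every keystroke,
    it returns the matches for the query typed so far."""
    q = query.lower()
    return [line for line in lines if q in line.lower()]

# Simulate typing "vi" one character at a time over an app list:
apps = ["LaunchBar", "DEVONthink", "The Archive", "Vim", "Cursor", "Obsidian"]
print(incremental_search(apps, "v"))   # ['DEVONthink', 'The Archive', 'Vim']
print(incremental_search(apps, "vi"))  # ['Vim']
```

Real implementations (Vim, Emacs, LaunchBar) add ranking and fuzzy matching, but the narrowing-as-you-type core is this simple.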
    • 27m 1s Private Tools: Many prolific technologists use personal scripts and software.
    • 29m 23s Examples of Secret Software: Random stick generators, Netscape killers, SSH tricks.
    • 31m 2s Syncing Challenges: Custom solutions for file synchronization due to lack of trust in existing apps.
    • 39m 13s Publicizing Tools: Many secret tools are used to create public-facing applications.
    • 51m 11s Final Thoughts: Emphasis on adaptability and simplicity in software design.
  • Notes
    • We need simple formats, like plain text, which can be edited easily.
      • Even in multiple applications.
      • 2.8.4
    • We need simple systems with a shallow hierarchy, so we can organize information quickly.
    • We need incremental search for finding information quickly.
      • Apps on my Mac with good incremental search
        • LaunchBar
        • DEVONthink
        • The Archive
        • Vim
        • Cursor
        • Obsidian
    • In essence…
      • We need a text-based system when working with documents, so content can be manipulated easily regardless of the app we’re using. It should be one flat folder, organized with tags and good naming.
      • Always keep a note open when thinking, since it can be edited, adjusted, and kept as a history of our thinking.
        • It can be…
          • a Bike outline file
            • this can’t be edited in Cursor
          • TaskPaper for plain text
          • or even a simple Markdown outline like this
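The flat-folder idea above can be sketched as: structure lives in file names and inline tags, so retrieval is a plain scan of one directory. The `.md` extension and `#tag` convention are assumptions matching my vault, not part of the talk:

```python
from pathlib import Path

def find_notes(folder: str, tag: str) -> list[str]:
    """Scan a single flat folder of text notes and return the
    names of files whose content carries the given inline tag."""
    return sorted(
        p.name
        for p in Path(folder).glob("*.md")
        if tag in p.read_text(encoding="utf-8")
    )
```

With good naming (date-prefixed, descriptive), the sorted file names themselves double as a chronological index, so no deeper hierarchy is needed.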

#Processing