2025.07.20.

Bike outlines as structured planning DSL

Ray Myers’ “Abstraction Leap” concept suggests designing explicit DSLs rather than letting LLM prompts become source code (source highlight). Bike outlines could be perfect for this: XHTML structure makes them machine-readable while the outliner UI stays human-friendly.

The approach

Template + Validator = Guidance + Guarantees

  • Template shows LLMs the expected shape
  • Validator enforces that structure after generation
  • Result: predictable, testable foundation vs brittle free-form prompts
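The template + validator pair could be sketched in a few lines. This is a minimal sketch, assuming a simplified Bike-style XHTML layout (rows as `<li><p>…</p></li>` under a `<ul>`, namespaces omitted); the section names in `REQUIRED_SECTIONS` are hypothetical placeholders for whatever the plan template requires.

```python
import xml.etree.ElementTree as ET

# Hypothetical template contract: section titles every plan outline must contain.
REQUIRED_SECTIONS = ["Context", "User stories", "Implementation design"]

def outline_titles(xhtml: str) -> list[str]:
    """Return the text of the rows in a simplified Bike-style XHTML outline."""
    root = ET.fromstring(xhtml)
    # Bike stores rows as <li><p>text</p>…</li>; collect every row's text.
    return [li.find("p").text for li in root.iter("li") if li.find("p") is not None]

def validate(xhtml: str) -> list[str]:
    """Return the required sections missing from a generated outline."""
    titles = outline_titles(xhtml)
    return [s for s in REQUIRED_SECTIONS if s not in titles]

plan = """<ul>
  <li><p>Context</p></li>
  <li><p>User stories</p></li>
</ul>"""

print(validate(plan))  # -> ['Implementation design']
```

The template gives the LLM the shape up front; the validator runs after generation and rejects outlines that drift from it.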

Connection to specification as code

This aligns with 2.8.4.2: the Bike outline becomes the primary artifact that compounds over time (see 2025-07-19_09-38). Following John Rush’s “fix inputs, not outputs,” you improve the template when plans generate poor breakdowns, not just the individual output.

In practice

Unlike free-form markdown specs that require manual interpretation, Bike’s XML structure makes it easier for LLMs to understand and process. The outliner’s visual hierarchy could make complex plans manageable while maintaining the machine-readable structure needed for reliable AI collaboration. This could bridge human planning intuition with computational precision.

2025.07.19.

Splitting information extraction into distillation and synthesis

Source extraction of the idea 2025-07-19_13-08

This approach modifies incremental reading 2025-01-20_22-25 by splitting information extraction into two distinct phases.

Phase 1: Distillation

What: Extract information from sources using DEVONthink

  • Highlight and annotate key passages
  • Gather summaries and quotes
  • Focus on capturing, not interpreting

Phase 2: Synthesis

What: Transform extracts into original #Zettelkasten notes

  • Connect ideas to existing notes
  • Develop personal insights
  • Create permanent notes with AI assistance

OmniFocus Integration

  • Distill tag: For tasks about extracting from sources
  • Synthesize tag: For developing draft #Zettelkasten notes

This separation enables batch processing of similar work and reduces cognitive load by not mixing extraction with creation. It acknowledges that distillation (understanding) and synthesis (creating) are fundamentally different cognitive activities requiring different mental modes.

Specification is the new true source code

Sean Grove’s thesis is that we’ve been valuing the wrong artifact. We treat code as precious and prompts/specifications as ephemeral, when it should be the reverse. His analogy is apt: we’re essentially “shredding the source code and version controlling the binary.” The only problem with this analogy is that LLMs are non-deterministic, so relying on the LLM as a compiler can produce different code artifacts on each run. Still, version controlling both specs and code is a good middle ground.

John Rush takes this further with his “fix inputs, not outputs” principle. His AI factory isn’t just about automation—it’s about building a self-improving system where the plans and prompts are the real assets. When his agent wrote memory-inefficient CSV handling, he didn’t just fix that instance, he baked the streaming requirement into the plan template. The factory improves itself by improving its specifications.

The Task-Magic connection shows this thinking already emerging in practice. The PRD template is essentially a specification format, but it could be more - it could be a living document that evolves, forks, and adapts to different projects. The idea about “project specific templates” that can be forked mirrors how Grove describes specifications that compose and have interfaces.

What’s fascinating is how all three converge on the same truth: the specification IS the code. Grove calls it “the new code,” Rush calls it “the real asset,” and in Task-Magic it’s something that should compound and evolve rather than be recreated each time.

This represents a fundamental inversion of the traditional development process. Instead of specification → code → binary, we’re moving toward specification → multiple outputs (code, tests, docs), where the specification remains the primary artifact that we version, debate, and refine.

Sources

Revise my Task-Magic plan structure

Source highlight:

Outputs are disposable; plans and prompts compound. Debugging at the source scales across every future task. It transforms agents from code printers into self-improving colleagues.

My next step would be to review my PRD template and extract it into a separate file. What’s the easiest way to continuously improve it? Perhaps having project-specific templates, since not all projects have the same requirements. However, plan templates can be forked and include a section similar to a local CLAUDE.md file, but stored in the repository.

I’m not sure what would be the general structure though.

  1. General Instructions
    • Project-agnostic guidance that applies to every project
    • Custom instructions for my local setup like commands (orbctl, deployment processes)
    • Practices that work well can graduate to global CLAUDE.md
  2. Context / background info
    • System description relevant to the plan scope
    • High-level overview of current state:
      • Technical architecture
      • Key files and directories
      • Integration points
    • Foundation for early research
    • Link to related zettelkasten notes for deeper context
    • Brainstorming
      • Keep exploratory thinking in TaskPaper and link separately
  3. User stories
    • Clear “As a user I want X so that Y” format
    • This approach surfaces misalignment between human intent and agent understanding
    • Creates shared vocabulary before technical implementation
    • Foundation for implementation design decisions
  4. Implementation design
    • High-level architectural steps (following Kiro’s format)
    • Step-by-step approach that breaks into manageable development tasks

The plan file could be converted from a Markdown template into a Bike template 2025-07-20_13-14.

2025.06.26.

Claude Code Workflow and Features Index

ENABLE_BACKGROUND_TASKS allows Claude Code to run long tasks in background

Ian Nuttall (@iannuttall) shared a useful Claude Code pro tip for handling long-running tasks.

Claude Code Pro Tip

Add this line to your .zshrc or .bashrc (ask Claude Code to do it for you):

export ENABLE_BACKGROUND_TASKS=1

This allows you to move long-running tasks to the background to keep chatting with Claude Code while tasks execute.

Key Points

  • Environment variable enables background task execution in Claude Code
  • Keeps the chat interface responsive during long operations
  • Can be added to shell configuration files automatically by Claude Code
  • Improves workflow efficiency for developers

2025.05.12.

My computers show me dynamic index cards

Link to Original Document

What is a card?

  • A card is any addressable object that exposes a deep link and a title.
  • Cards are not just files. They include:
    • OmniFocus actions, projects, tags, perspectives (omnifocus://…)
    • Craft pages, blocks (craftdocs://…)
    • Markdown notes, headers (file:///…)
    • DEVONthink records (x-devonthink-item://…)
    • Email messages, calendar events, PDFs, web highlights
  • If you can deep link to it, you can treat it as a card.
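The scheme prefix of a deep link is enough to identify which engine owns a card. A minimal sketch, assuming the URL schemes listed above; the `ENGINES` table and `card_engine` helper are hypothetical names for illustration:

```python
from urllib.parse import urlsplit

# Hypothetical mapping from URL scheme to "card engine" (the owning app).
ENGINES = {
    "omnifocus": "OmniFocus",
    "craftdocs": "Craft",
    "x-devonthink-item": "DEVONthink",
    "file": "Filesystem",
}

def card_engine(link: str) -> str:
    """Identify which app a card's deep link belongs to, by its URL scheme."""
    scheme = urlsplit(link).scheme
    return ENGINES.get(scheme, "Unknown")

print(card_engine("x-devonthink-item://1234-ABCD"))  # -> DEVONthink
```

Anything with a resolvable scheme slots into the card network; anything else falls through to "Unknown".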

The system: a web of linked cards

  • I don’t care about “the app,” I care about the content inside it.
  • Every app becomes a card engine. Cards live in engines, but they link to each other across silos.
  • Instead of trying to store everything in one monolithic app, I have a network of cards connected by URLs.
  • Examples:
    • An OmniFocus task links to a PDF in DEVONthink
    • That same PDF links back to a Craft note where I summarized it
    • The Craft note has a backlink to the original task
  • Spotlight is the universal search index. A good title makes the card retrievable regardless of app.
  • Links make cards composable. They allow you to:
    • Jump from a project to its references
    • Surface context
    • Build dashboards across tools

Linking cards across engines

  • I use Hookmark on macOS to quickly copy or hook links between cards.
  • On iOS, I prefer apps that expose stable custom URL schemes.
  • Linking isn’t just for documents. OmniFocus perspectives or DEVONthink groups can be cards too.
  • I still use folders in tools like DEVONthink to organize project materials. Links just sit on top of that structure to connect meaningfully related items.
  • It aligns with contextual computing: the object is the anchor, not the app.

2025.05.11.

Treating projects as experiments

Page 65

One tool to make this easier is to reframe decisions as experiments. You’re no longer a perfectionist frozen on stage with everyone watching your every move, you’re a curious scientist in a lab trying to test a hypothesis.

Treating any captured item as a “possible experiment” can help us detach ourselves from the need to complete the project. This is pretty much what the #GTD Someday/Maybe list does: it lets an idea sit until you (a) see clear learning value and (b) have bandwidth to run it.

Instead of finding the perfect solution, make experiments, and analyze them by success factors.

  1. Hypothesis – What do I expect to learn or prove?
  2. Metric of success – How will I know the experiment taught me something?
  3. Next action – The first, smallest, concrete step that moves the experiment forward.

If you can’t write all three in <60 s, it probably isn’t worth experimenting yet.

Adding “as experiment” alone will not automatically convert the meaning of a project into an experiment. A project should still be an outcome. An experiment is more like a subproject.

Experiment-based projects should work pretty well for work-related projects, where POCs act as experiments. We could call this “experiment-driven programming”.

2025.02.08.

Exploring Real-Time Voice-to-Text Transcription Options and Preferences

I’m exploring app options for real-time voice-to-text transcription, similar to macOS dictation.

  • I’ve looked at existing solutions like the VoicePen app that allows typing and content transformation.
  • I also investigated Inbox AI but found it confusing, and my attempt to configure a new voice assistant proved unsuccessful.
    • I may return to this app one day.
  • Seems like Bolt.AI can dictate and type inline.
    • This is essentially the same process I was using with VoicePen, so I’ll continue using VoicePen for longer dictations. I might also use Voice Memos to capture the text and then paste it into the note. Alternatively, I can dictate inline using Bolt.AI.
  • On the other hand, I would prefer to use the built-in dictation feature of macOS.
    • Since it integrates seamlessly with text editing, I can see my typed words in real time, and it’s actually quite effective.
    • The good news is that I can go back and fix any issues. They’ve recently added text editing with dictation, so I might not need Bolt.AI after all. Dictation could work perfectly well.

2025.02.07.

Incremental brainstorming makes it possible to collaborate asynchronously with ourselves or others

Incremental brainstorming allows us to document the thinking process through an archive of written communications. This method enables brainstorming with one’s past self, in addition to the present participants.

In other words, the participants of incremental brainstorming include:

  1. participating brains,
  2. past versions of participating brains, and
  3. non-participating authors from the past and the present (as source material, or reference).

There are different tool-specific forms of incremental brainstorming:

2025.02.06.

The iPad mini is best used for consumption of chronological information

I’ve noticed something interesting about my iPad mini—it just feels right when content is organized chronologically. There’s a natural rhythm to it: I get in, touch a piece of content, and then get out. This is 2.14.11.2 Highlighting information in streams in practice. This approach is all about ease and efficiency. The content flows in the order it was created or updated, which mirrors the way our minds naturally process events. No complicated folders or categories—just a simple, straight path to what’s new.

What I like about this setup is that it cuts down on decision fatigue. Instead of spending time figuring out where to look or how to organize my thoughts, the interface handles that for me. I just dive in, quickly interact with a bit of content, and move on without overthinking it. This streamlined process makes the browsing experience feel almost effortless, which is exactly what you want when you’re just looking to catch up without any extra hassle.

Because our brains naturally remember things in sequences, this kind of ordering feels intuitive. I don’t have to stress about missing something important or having to manually sort things out later. The system does it all for me, reinforcing that laid-back, efficient browsing style.

2025.02.01.

Different Tools for Different Thinking Modes

Follow-up on:

I figured out how to use different tools for different types of thinking. Set up three OmniFocus shortcuts for this:

  1. Zettelkasten (The Archive):
    • Main journaling and thought capture
    • Documentation and reflection
    • Both daily and permanent notes
    • OmniFocus shortcut for project-specific logging
    • See 2.6.15 for the detailed content pipeline workflow
  2. TaskPaper:
    • Planning and brainstorming
    • Project-specific thinking
    • Task breakdown
    • OmniFocus shortcut for project brainstorming
  3. Emacs:
    • Programming experiments in Org Mode
    • Literate programming
    • OmniFocus shortcut for programming docs
    • Still figuring this one out

Color-coded the shortcuts to make it easy to distinguish them:

#Workflow #Journaling #OmniFocus

2025.01.30.

Thought Threads: Append-Only Note-Taking

Thought Threads is an append-only, thread-based note-taking system where new ideas are added at the end of a sequence rather than inserted between existing ones. It preserves the natural flow of thought development while allowing connections through cross-links instead of restructuring.

Key Principles

  • Append-only → Notes are always added at the end.
  • Threaded structure → Ideas evolve like a conversation.
  • Hierarchical depth → Indentation organizes sub-notes.
  • Links over restructuring → Notes reference each other rather than being moved.

Example Structure

1 Productivity
  1.1 Time Management
    1.1.1 Pomodoro
    1.1.2 Deep Work
  1.2 Cognitive Biases
  • New notes are appended (1.1.3, 1.1.4).
  • Cross-references connect related ideas (e.g., “See 1.2 for biases in time management”).
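The append-only numbering rule could be sketched as a tiny id generator. A sketch under stated assumptions: `next_sibling_id` is a hypothetical helper, and it assumes ids are never deleted (append-only), so counting existing children is enough.

```python
def next_sibling_id(existing: list[str], parent: str) -> str:
    """Return the next append-only id under `parent` (e.g. '1.1' -> '1.1.3').

    Assumes the thread is append-only: ids are never removed or renumbered,
    so the next id is simply child-count + 1.
    """
    depth = parent.count(".") + 1
    children = [i for i in existing
                if i.startswith(parent + ".") and i.count(".") == depth]
    return f"{parent}.{len(children) + 1}"

notes = ["1", "1.1", "1.1.1", "1.1.2", "1.2"]
print(next_sibling_id(notes, "1.1"))  # -> 1.1.3
```

New thoughts always land at the end of their thread; cross-links, not renumbering, connect them to earlier ideas.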

Why This Works

  • Preserves chronological order → You see how ideas evolve.
  • No need for reorganization → Just append and link.
  • Less friction → No need to decide where to insert a note.

Best Practices

  • Summarize long threads with milestone notes (1.3 Summary).
  • Define “Next Steps” in notes to guide further thinking.
  • Use an index (optional) for quick navigation.

2025.01.27.

Zettelkasten as an Information Stream

A Zettelkasten exhibits many characteristics of an information stream:

  • It grows continuously over time
  • Each note preserves a moment of thinking
  • Previous entries remain unchanged
  • The system accumulates value through historical preservation
  • It enables discovery through browsing and connection-making

However, unlike typical streams, a Zettelkasten also incorporates deliberate organization through its linking structure and numbering system. This makes it a hybrid system that combines the benefits of stream-like accumulation with structured knowledge management.

The stream-like nature of Zettelkasten supports the natural evolution of ideas while its organizational features prevent the chaos that might occur in a pure stream system.

See also:

Definition and Purpose of Information Inboxes

An inbox works as a staging area 2.14.11 that creates a natural pressure to act. Unlike streams that can grow indefinitely, inboxes are designed to stay empty. Each new item creates a small amount of pressure – an email needs a response, a document needs to be filed, a note needs to be processed into your permanent system.

The pressure from an inbox is useful: it drives you to make decisions and move items to their final destinations. However, an inbox that grows without processing turns this useful pressure into overwhelming anxiety.

Inboxes are revisable by nature – items can be deleted, forwarded elsewhere, or modified during processing. They’re aimed at quickly assessing what needs your attention rather than preserving historical context.

See also: 2.14.11.1 for comparison with streams.

Definition and Purpose of Information Streams

A stream is like a river – it flows continuously, carrying information forward while preserving everything that came before. Think of a blog, a journal, or a public thought stream 2025-01-17_18-31. Each new entry adds to the historical record without disturbing what came before.

The value of a stream lies in this accumulation: you can trace the evolution of ideas, see how your thinking developed, and extract insights from the patterns that emerge over time. A stream isn’t there to remind you of tasks you need to complete – it’s more of a running log or narrative that simply keeps growing over time.

Many streams are treated as append-only, where entries get added but aren’t edited much, allowing you to see the evolution of an idea (for instance, older blog posts or daily journaling), see 2.16.

See also: 2.14.11.1 for comparison with inboxes.

The difference between streams and inboxes

Information systems typically manifest in two forms: streams and inboxes 2.14.11.2. Each serves a distinct purpose in how we capture, process, and maintain information over time.

Key aspects of these systems:

  1. Streams 2.14.11.1.1:
    • Flow continuously like a river
    • Preserve historical record
    • Accumulate value over time
  2. Inboxes 2.14.11.1.2:
    • Act as staging areas
    • Create pressure to process
    • Designed to stay empty

The two systems can work together through highlighting 2.14.11.2, where valuable items from streams become inbox items for processing.

Related concepts:

2025.01.25.

Message queues are logs

A message queue is an ordered log that stores messages persistently on disk, ensuring recovery and redelivery in case of failures.

Consumers can replay messages from a specific log point.

Distributed message queues like Kafka replicate the log across nodes for high availability and fault tolerance, treating the log as the primary data synchronization abstraction.

Producers append messages to the log, while consumers read sequentially, ensuring efficient and consistent data flow.
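The append/replay contract above can be shown with a minimal in-memory sketch (real systems like Kafka persist and replicate the log across nodes; the `Log` class here is a toy for illustration):

```python
class Log:
    """Minimal append-only log: producers append, consumers read from an offset."""

    def __init__(self):
        self.entries = []

    def append(self, message) -> int:
        """Append a message and return its offset in the log."""
        self.entries.append(message)
        return len(self.entries) - 1

    def read(self, offset: int = 0) -> list:
        """Replay all messages from `offset` onward, in order."""
        return self.entries[offset:]

log = Log()
log.append("order-created")
log.append("order-paid")
print(log.read(1))  # -> ['order-paid']
```

A consumer that remembers its last offset can crash and resume replay from that point, which is exactly the recovery property described above.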

2025.01.23.

2025.01.22.

Guiding the Growth of Knowledge Trees

Highlight, 2025-01-21

The growth of the knowledge tree will also be guided by the present level of understanding of individual subjects, in proportion to the growth of the supporting knowledge, and specialist terminology

The SuperMemo knowledge tree looks pretty similar to my Commonplace Book topics tree in DEVONthink, used with different tags. The difference is that SuperMemo enables child nodes on a parent node.

It is important to remember that the SuperMemo tree is not built from files and folders but from notes, similar to Tinderbox.

2025.01.21.

Scanning and marking a book for Incremental Reading

Incremental Reading can be used to extract information out from books in chunks.

How to read a book in an hour? – 09:34

To have a general overview of the different ideas in a book, I can scan it first and use the blue highlighter to mark interesting ideas.

How to read a book in an hour? – 12:25

Then, I can use the Notes & Highlights tool in Apple Books to navigate to these parts and extract out information from each chunk. I should use the blue highlight for chunks.

Incremental reading extracts are efficient but disruptive for stories

With incremental reading, we waste no time on reading material we do not understand. We can safely skip portions of material and return to them in the future.

Incremental reading emphasizes the extraction of key information from texts rather than understanding the entire text in one sitting. This approach allows readers to skip parts of the material they don’t immediately understand and return to them later.

2.6.5.3.2 extends this extraction-focused approach by formalizing the separation between information distillation and synthesis phases.

Consequently, this method might not be well-suited for story-type texts, which typically rely on a continuous narrative and emotional engagement. The fragmented nature of incremental reading could disrupt the flow and overall experience of such texts.

2025.01.20.

Incremental reading

Source: Incremental reading - Wikipedia

Page 1

Incremental reading is a software-assisted learning method that breaks down information from articles into flashcards for spaced repetition.

We make flash-cards from articles when we do incremental reading.

Page 1

Piotr Woźniak

  • Who is Piotr Woźniak?
    • Piotr Woźniak is a Polish researcher known for developing the SuperMemo software, which is based on the concept of spaced repetition.
    • This method is designed to enhance learning by breaking down information into flashcards and reviewing them over time to improve memory retention.
    • Woźniak’s work in this area has significantly influenced the field of educational technology, particularly in how people approach learning and memory.

Page 1

Instead of a linear reading of articles one at a time, the method works by keeping a large list of electronic articles or books (often dozens or hundreds) and reading parts of several articles in each session.

2025-01-20_20-56

Page 1

During reading, key points of articles are broken up into flashcards, which are then learned and reviewed over an extended period with the help of a spaced repetition algorithm.

Page 2

When reading an electronic article, the user extracts the most important parts (similar to underlining or highlighting a paper article) and gradually distills them into flashcards.

Page 3

With time and reviews, articles are supposed to be gradually converted into extracts and extracts into flashcards. Hence, incremental reading is a method of breaking down information from electronic articles into sets of flashcards.

Page 3

Contrary to extracts, flashcards are reviewed with active recall.

The Zettelkasten method is also a way to break down articles into highlights, then those highlights into notes.

The repeating part is missing, since the Zettelkasten prioritizes accidental discovery over scheduled repetition.

In my mind, the Zettelkasten is better, because I like to lazy-load information, instead of remembering.

2025-01-25_10-52

What is Incremental Reading?

  • Source: What is Incremental Reading?
    • 03:50 Interleaving is when we switch between different subjects based on interests or tiredness.
    • 04:46 Teleporting to the next article (spaced repetition, which could be automated via DEVONthink)
    • 04:51 Extracts → Highlights?
      • 2025-01-20_22-25 The extraction is a key collection workflow in Incremental Reading and Zettelkasten
    • 06:13 SuperMemo extracts work in a tree-like structure, so extracting something will create a new note under the existing tree item of the article.
      • This is actually a pretty cool idea, since instead of having backlinks based on the source of an idea, we can trace it back to a tree structure.
      • On the other hand, how does this work when the top-level item is the article? The extracts are connected to the root item.
      • I guess we can drag-and-drop stuff in the tree and just keep the links to the source around?
      • Tinderbox seems like a good app for such system.
    • 07:52 Priority queue → Ordering our reading list based on how interesting the article is to us?
    • 11:25 Flow of knowledge → convert passive articles and books into active flashcards
    • 12:32 Spaced repetition is a way to get a routine in something.

2025.01.19.

Literature Notes, Where do they go once they become Permanent Notes?

Source: Literature Notes, Where do they go once they become Permanent Notes?

Highlight

Are these literature notes, engagement notes, permanent notes? Yes, all of it, probably, but it doesn’t matter. I tried to frame the process differently: start with things that look interesting, make sense of them, partition them to make them re-usable and to provide an address for each idea. (And delete what doesn’t fit. Some things I highlight in texts turn out to be unsalvageable.)

Instead of having separate reading notes and permanent notes, we should just extract out ideas.

Every idea then needs to be moved into its own atomic note. We can then link the idea to other ideas.

That’s it.

Highlight

You are better off dividing all your stuff into two things:

There are only two types of things we encounter.

  1. Source material, which are articles, ideas, emails, etc…
  2. Text extraction, and cleaned up notes.

Highlight

Take your source material and extract ideas in an atomic way integrating them it into your Zettelkasten. The last part depends on whatever you want your Zettelkasten to be and it is up to yourself and your expertise your specific field.

So just have a source; then all notes are in fact “permanent notes”. But they should be atomic.

So annotations like this should be processed into notes, but it is fine if we don’t make “evergreen ideas” out of them.

The only requirement is to have a place where these notes are linked into bigger ideas.

Highlight

It gets chopped up into Zettels by copy-and-pasting the marked up, condensed matter into existing and new Zettels, with sourcing added liberally.

Marking the source is important. But when I create the final export of my notes from an annotation file, I’m not sure how I should move it over to my Zettelkasten.

I should keep the original one around, and edit a new one in my ZK. It always links back to the PDF, so I can see my annotations.

So when I re-read the PDF (if that’s a thing), I can have my original ideas available.

Highlight

I can liberally follow the Collector’s Fallacy and use this process to filter out anything uninteresting over time - as from starting to read to having the source “done” can take weeks or months; some never get the slip box treatment because ideas that sounded interesting at the time of reading are irrelevant 2 weeks later.

This gives us a prefilter, since we can jot down ideas, but only the best ideas are developed into ZK notes.

Highlight

Permanent notes, which synthesize ideas from multiple sources and/or record my own thoughts, and have a References section that links back to either the lit note or its underlying source note. This is how I maintain traceability from note to source.

The best way to keep the connection to the original source is to write more in-place in the annotations extracted from the DEVONthink PDF, then link back to this file in the references section for every note.

Backlinks would do this automatically if I extract atomic notes in place in the annotations note.

Highlight

The annotations I make on the literature note (giving my own ideas, and links to other permanent notes that are related) are what moves it along the spectrum described earlier

I can even link existing notes to these annotations.

Highlight

I do this as well! My reading inbox currently has over 100 sources in it. Is this Collector’s Fallacy? Yes, but they are sitting there waiting to be processed. Currently I’m processing maybe a half dozen or up to ten in various stages of completion. I’ll get around to the remainder eventually, or I’ll tire of them staring at me in the inbox and discard the ones that no longer interest me.

We can actually process reading items simultaneously: each item can be highlighted and continued as we work through it.

The idea is that we can simply keep up with multiple sources this way.

Highlight

This method of note taking enables gradual digestion of multiple sources on our own schedule.

This feels pretty similar to spaced repetition. I wonder if DEVONthink can create reminders every year that add a PDF back to my reading list to review.

Update: I checked and I can add a PDF as a repeating reminder which adds it to my reading list. This makes the reading list in DEVONthink a kind of next-action list where I could add notes as well (annotations).

Highlight

When I used SuperMemo I was able in one case to split a long video up and process half of it over the course of an evening, and then as other priorities mounted I delayed processing the second half for two years and the incremental reading capabilities ensured I had only minimal loss of comprehension of the first half during that time.

So, the DEVONthink Reading List can be used to postpone something in the future, by setting a reminder and adding the asset back to my list.

This way, the Reading List is a project list where each project has only one next action: keep reading, and when I’m done, move my ideas into my ZK.

Highlight

spaced repetition in some ways and reviewing previously taken notes

I review my ideas and notes in OmniFocus using the Synthesize perspective.

#Drafting

2025.01.17.

I have multiple journaling systems

I capture and document information in various formats. Here’s a list of each journal type I create, along with its purpose and the tools I use.

  • Thoughts / Statuses 11.1
    • Purpose
      • Explore ideas in a semi-public but low-pressure format.
      • Easy to do thinking. Thoughtstorming. Thinking out loud.
    • Tools
      • Mastodon as a backend.
      • Mona for storing and organizing thoughts in threads.
      • I have to bookmark threads for easy finding and appending new ideas. Mostly kept append-only, to see the whole thought formation over time.
    • Ideas
      • Maybe I should start these threads as “Thinking about XYZ…”.
  • Interstitial journal
    • Purpose
      • Maintain a journal to document OmniFocus projects, engage in sensemaking, and quickly outline project plans. These plans are typically extracted and stored in dedicated TaskPaper or Bike files.
    • Tools
      • Managed in TaskPaper.
  • Private journal entries
    • Purpose
      • Document my daily life, personal reflections, or private thoughts that I want to keep track of and also remind myself about.
    • Tools
      • Day One / Journal.app for storing and reviewing them. Occasionally, I might draw insights from these entries that can turn into more public or structured notes.
  • Note Development / Zettelkasten
    • Purpose
      • Write daily notes about articles I read.
      • Build a permanent, networked knowledge base or “building blocks” of my thinking.
      • A resource to consult for ideas, forming the “backbone” of my knowledge.
    • Tools
    • Workflow
      • 2.6.15 details the complete content pipeline that orchestrates these tools from reading to publishing.
  • Blog Posts / Articles
    • Purpose
      • Public-facing content—share refined ideas with an external audience. May start as a collection of Zettelkasten notes or microblog threads, refined into an organized piece.
    • Tools

#Linking

Reading "What's the difference between my journal and my stream?"

I write my journal in org-roam. It is a bulleted list of thoughts. It is read-only - no one can interact with it directly. (Though of course, people could annotate it with Hypothesis, or something similar). It is not structured - you could not subscribe to items within it in a feed reader, say. It is public, and is thus filtered - despite the name, I don’t put many personal or intimate things in this public journal.

I could refer to my journal as my Zettelkasten homepage, where all the new notes are posted. I call mine daily notes.

I publish to my stream via micropub and WordPress, and syndicate it to Mastodon. My stream allows for comments and interactions.

My stream is my blog.

What goes in my stream is generally a subset of my journal. But responses to comments in my stream are not necessarily included in my journal. (Though likely pulled in to my garden in the relevant place.)

I guess my journal is narrative, my stream is dialogic.

That’s a pretty cool idea: the journal is the narrative, the stream is the dialogic.

I have other journaling styles though, depending on the source of information. 2025-01-17_18-31

2025.01.12.

Using Twitter for public thinking

Using the outline to keep track of threads

Another thing I could do is add these threads to the outline itself. The outline is reserved for developed ideas, but I could make an exception with notes that are part of a larger thread, too. Then, I can automatically link them together without messing around with the follow-up button.

Using a Safari tab-group as a writing inbox

Actually, one idea could work: creating a Safari tab-group for threads. It’s a basic bookmark manager, but it’s interactive. I can click on the Follow-up button on any note in a thread to add a new note. When I publish the new note, I can simply reload the thread and open the new note. The newly updated link will be kept as a tab.

In a way, this threads tab-group could serve as a to-do list for writing tasks. I can keep tabs open, and using the Edit and Follow-up buttons, I can easily open the note in iA Writer.

  • Add live reload for notes

Linking to stacked notes

I can also link to “threads” in my Zettelkasten, but the problem is that the stacking is manual. When I add a follow-up idea, the link changes, so I can’t keep these links around somewhere to easily get back to them.

If I add a new note to a saved thread, I have to refresh the link, click on the newly added note, then resave the link somewhere so the newly added note also gets loaded. 11.1.3

Creating a follow-up shortcut for easier threading

I even created a new shortcut, so I can select a note in The Archive and add a follow-up note to it using LaunchBar. This is the same feature that’s available on my Zettelkasten website, but I can do it locally.

Using Mastodon for threads

I had this idea of using Mastodon as a private thread-based Zettelkasten. I’m not sure why I would start yet another note-taking system, but the fact that I could use apps like Croissant or Tusks to manage these threads is closer to how my brain works than a Zettelkasten.

I like to start with an idea, then develop it, and keep appending to it. More on this in 2.6.12.1.

The Zettelkasten is not really a thread-based system. It is more of a network of ideas. But the linear nature of threads is basically why I’m fascinated by append-only information storage. 2.16

In a way, having that kept in Mastodon would mean that I can start writing an idea, but after I publish, I can’t change it anymore. The system would be append-only.
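As a thought experiment, the append-only property could be sketched as a tiny data structure (the class and names here are illustrative, not an actual Mastodon client):

```python
class AppendOnlyThread:
    """A thread of notes where published entries can never be edited,
    only followed up - mimicking replying to your own Mastodon thread."""

    def __init__(self, title):
        self.title = title
        self._notes = []  # internal list; exposed only as a read-only copy

    def append(self, text):
        """Publish a new note at the end of the thread."""
        self._notes.append(text)
        return len(self._notes) - 1  # index of the published note

    @property
    def notes(self):
        # Return a tuple so callers cannot mutate published history.
        return tuple(self._notes)


thread = AppendOnlyThread("Thinking about append-only storage")
thread.append("Initial idea")
thread.append("Follow-up after publishing")
print(thread.notes)  # → ('Initial idea', 'Follow-up after publishing')
```

The point of the sketch: there is no edit or delete method at all, so history can only grow, just like a published toot.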

Threading

I love threads. Not the social network, but the concept of having a chain of thoughts. (Maybe that’s why I like to use Gibberish for drafting).

I think the best invention that social media sites like Twitter have is the threaded view, where short notes can be chained together.

In a way, my Zettelkasten is also capable of doing that, but replying is a bit harder, since we have to chain notes together somehow.

2024.12.20.

Obsidian + Cursor: Magical AI Knowledge Management

  • Metadata
  • Summary
    • 0s Obsidian Overview: A tool for managing engineering logs, notes, highlights, bookmarks, and example code.
    • 27s Central Store: Obsidian acts as a central repository for various types of information.
    • 40s Traditional Tools: Separate apps for bookmarks, code, and highlights.
    • 1m Obsidian’s Advantage: Consolidates all information in one place with extensions.
    • 1m 24s What is Cursor?: An AI code editor that replaces traditional code editors.
    • 1m 37s Features: Auto-completion, code actions, and a user-friendly interface.
    • 3m 30s Loading Obsidian into Cursor: Syncing and managing files.
      • Initial Sync: Describes the initial process of syncing Obsidian files into Cursor, which may cause a slight delay.
      • File Embeddings: Explains how Cursor generates embeddings to better understand the files.
      • Ignored Files Configuration: Details on configuring which files should be ignored during the sync process.
      • Troubleshooting: Suggestions for resolving sync issues, such as deleting the sync and reconfiguring ignored files.
    • 4m 13s Ignoring Files: Configuring files to be ignored during sync.
    • 5m Asking Questions: Using AI to search and analyze Obsidian data.
    • 6m Example Use Cases: Finding AI tools and recent posts.
      • AI Tool Discovery: Using AI to identify and evaluate new tools for various tasks.
      • Recent Post Analysis: Leveraging AI to locate and summarize recent posts or updates.
      • Prompt Evaluation: Asking AI to assess the effectiveness of different prompts.
      • Content Retrieval: Efficiently finding specific content within a large dataset.
    • 6m 35s Linking Files: Difficulty in traversing links between files.
      • Link Traversal Issues: Challenges faced when trying to navigate between linked files.
      • Potential Solutions: Suggestions for improving link navigation and management.
    • 7m 21s Adding Context: Improving AI responses by providing full document context.
      • Contextual Enhancement: Methods to provide additional context to AI for better responses.
      • Document Integration: Techniques for integrating full document context into AI queries.
    • 8m 45s Brainstorming: Combining past videos and bookmarks for new insights.
      • Idea Synthesis: Using AI to combine information from various sources for new ideas.
      • Resource Compilation: Gathering and organizing past resources for effective brainstorming.
    • 10m 3s Improving Documents: Using AI to enhance existing content.
      • Content Enhancement: Strategies for using AI to improve document quality and clarity.
      • AI Editing Tools: Overview of tools and features available for document enhancement.
    • 12m 7s Obsidian vs. Cursor: Each tool has unique strengths; both are valuable.
      • Tool Comparison: Analysis of the strengths and weaknesses of Obsidian and Cursor.
        • Obsidian Strengths: User-friendly interface, effective for managing workflows, and consolidating information in one place.
        • Obsidian Weaknesses: May lack advanced AI capabilities compared to dedicated AI tools.
        • Cursor Strengths: Powerful AI capabilities, flexibility in handling files, and ability to perform complex searches and analyses.
        • Cursor Weaknesses: May not offer the same level of user interface customization and visual appeal as Obsidian.
      • Use Case Scenarios: Examples of when to use each tool for optimal results.
        • Obsidian Use Cases:
          • Note-taking and Organization: Ideal for managing notes, logs, and consolidating information in one place.
          • Visual Mapping: Useful for creating visual maps of content and linking related information.
        • Cursor Use Cases:
          • AI-Driven Code Editing: Best for tasks requiring AI-assisted code completion and analysis.
          • Complex Searches: Effective for performing in-depth searches and analyses across large datasets.
    • 14m 39s Final Thoughts: The combination of Obsidian and Cursor offers powerful knowledge management capabilities.
    • 15m 9s Call to Action: Encouragement to subscribe and visit the blog for more insights.

    #Processing

2024.12.18.

NotCon'04 Danny O'Brien Life Hacks

  • Metadata
  • Summary
    • 16s Opening Story: Begins with a humorous anecdote about Silicon Valley and index cards.
    • 46s Inspiration for Life Hacks: Visit to Xerox PARC and encounter with Kent Beck, founder of Extreme Programming.
    • 3m 34s Survey of Technologists: Contacted 70 technologists, received 14 detailed responses.
    • 16m 6s Common Themes: Use of simple tools like Todo.txt for organization.
    • 17m 5s Text Files for Organization: Importance of quick data entry and retrieval.
      • Quick Data Entry: Emphasizes the need to quickly dump information to avoid forgetting it (17m 13s).
      • Efficiency: Organizing systems must be fast, typically taking no more than 1-3 minutes (17m 23s).
        • Time Management: The goal is to ensure that the process of organizing does not become a time-consuming task. By limiting organizational activities to 1-3 minutes, individuals can maintain productivity and focus on their primary tasks without being bogged down by the system itself.
      • Text Processing: Text files allow for quick cutting, pasting, and searching (17m 45s).
      • Minimal Metadata: Preference for minimal metadata to keep systems simple (18m).
    • 18m 36s Incremental Search: Described as a powerful tool for efficiency.
      • Incremental Search Explained: Incremental search is a feature that allows users to search text as they type, providing immediate feedback and results. This is similar to how search engines like Google offer suggestions and results as you type each letter. In text editors and other applications, this feature helps users quickly locate information without needing to complete the entire search query. It is particularly useful in environments like Emacs or Mozilla, where users can start typing and see results instantly, enhancing productivity by reducing the time spent searching for information.
        • I have good incremental search in the following apps:
          • LaunchBar
          • DEVONthink
          • The Archive
          • Vim
          • Cursor
          • Obsidian
      • Applications and Benefits: Incremental search is prevalent in many text processing tools and is becoming more common in other software environments. It allows for faster navigation and retrieval of information, making it a valuable tool for anyone dealing with large amounts of text or data. The ability to quickly narrow down search results as you type can significantly improve workflow efficiency.
    • 27m 1s Private Tools: Many prolific technologists use personal scripts and software.
    • 29m 23s Examples of Secret Software: Random stick generators, Netscape killers, SSH tricks.
    • 31m 2s Syncing Challenges: Custom solutions for file synchronization due to lack of trust in existing apps.
    • 39m 13s Publicizing Tools: Many secret tools are used to create public-facing applications.
    • 51m 11s Final Thoughts: Emphasis on adaptability and simplicity in software design.
  • Notes
    • We need simple formats, like text which can be easily edited.
      • Even in multiple applications.
      • 2.8.4
    • We need simple systems, or shallow hierarchy so we can quickly organize information.
    • We need to have incremental search, for finding information quickly.
      • Apps on my Mac with good incremental search
        • LaunchBar
        • DEVONthink
        • The Archive
        • Vim
        • Cursor
        • Obsidian
    • In essence…
      • We need to have a text-based system when working with documents, so it can be easily manipulated regardless of the app we’re using. It should be one flat folder, organized using tags and good naming.
      • Always keep a note open when thinking, since it can be edited, adjusted, and kept as a history of our thinking.
        • It can be…
          • a Bike outline file
            • this can’t be edited in Cursor
          • TaskPaper for plain text
          • or even a simple Markdown outline like this
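The incremental-search behaviour praised in the talk can be sketched as a subsequence matcher, similar in spirit to how launchers like LaunchBar narrow results while you type (a simplified illustration, not any app’s actual algorithm):

```python
def matches_incrementally(query, candidate):
    """True if the characters of `query` appear in `candidate` in order
    (case-insensitive) - roughly how abbreviation-style search matches."""
    chars = iter(candidate.lower())
    return all(ch in chars for ch in query.lower())  # `in` consumes the iterator

def incremental_search(query, candidates):
    """Narrow the candidate list for the query typed so far."""
    return [c for c in candidates if matches_incrementally(query, c)]

notes = ["Incremental search", "Index cards", "Interstitial journal"]
print(incremental_search("srch", notes))  # → ['Incremental search']
```

Each keystroke re-runs the filter over the remaining candidates, which is why results tighten instantly as you type.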

#Processing

Danny O'Brien

2024.12.17.

Best Cursor Workflow that No One Talks About

  • Metadata
  • Notes
    • Introduction
      • 0s Introduction to the video and sponsorship by HeadCon.
      • 6s Overview of the video’s purpose: improving cursor workflow.
    • Understanding Cursor
      • 13s Explanation of what Cursor is and its popularity.
      • 18s Cursor’s capability to enable application building using natural language.
    • Challenges and Solutions
      • 24s Common issues faced when using Cursor.
      • 39s Strategies to improve success rates with Cursor.
    • Effective Documentation
      • 45s Importance of writing detailed documentation for Cursor.
      • 49s Aligning core functionalities and file structure with Cursor.
    • Instruction Files
      • 8m 32s Creating instruction files, such as instructions.md, is crucial. These files should contain:
        • 8m 36s A project overview and core functionalities.
        • 8m 38s Detailed documentation of the packages used.
        • 8m 54s The current file structure and any relevant code examples.
        • 9m 4s This documentation helps in planning and ensures that the development process is organized and efficient.
        • I should look into how to use this in my own projects.
    • Workflow Integration
      • 1m Integrating Cloud V0 and Cursor into a cohesive workflow.
        • Note: The integration involves using V0 to enhance UI aesthetics and Cursor for backend functionalities. This combination allows for a seamless development process where V0 handles the visual aspects while Cursor manages the logic and data processing.
        • Copy-Paste Process: Code is often copied from V0 and pasted into Cursor to integrate UI components with backend logic.
      • 1m 10s Personal success story with improved workflow.
    • Example Application: Gummy Search
      • 1m 23s Introduction to the example application, Gummy Search.
      • 1m 28s Gummy Search’s functionality in analyzing Reddit posts.
    • Building the Application
      • 2m 30s Planning and scoping core functionalities.
      • 3m Setting up a GitHub repository and initial project structure.
    • Core Functionalities
      • 4m 3s Overview of core functionalities needed for the application.
      • 5m Using OpenAI to analyze post data and categorize themes.
    • Documentation and Libraries
      • 6m Using snoowrap for fetching Reddit data.
      • 7m Example of setting up Reddit API credentials.
    • OpenAI Integration
      • 9m Using OpenAI for structured output and categorization.
      • 11m Debugging and refining the OpenAI integration.
    • Project Setup
      • 12m Setting up the project with Next.js and necessary libraries.
      • 13m Installing required packages and setting up environment variables.
    • Supabase Integration
      • 27m Introduction to Supabase for backend integration.
      • 29m Setting up database schema and data storage.
    • UI Enhancements
      • 37m Using V0 to improve UI aesthetics.
        • Note: V0 is used to enhance the user interface, making it more visually appealing and consistent. This is achieved by leveraging V0’s capabilities to generate and refine UI components.
      • 39m Step-by-step UI updates for consistency and style.
    • Deployment
      • 41m Deploying the application using Vercel.
      • 42m 43s Encouragement to join the AI Builder Club for further learning.
    • Conclusion
      • 42m 43s Closing remarks and invitation to join the community.
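The structured-output step in the video depends on validating what the model returns before storing it. A minimal sketch of that validation (the schema and theme names here are made up for illustration, not taken from the video):

```python
import json

# Hypothetical theme categories for Reddit-post analysis.
ALLOWED_THEMES = {"pain point", "solution request", "advice request"}

def parse_themes(raw_json):
    """Parse a model reply expected to look like
    {"themes": [{"post_id": "...", "theme": "..."}, ...]}
    and reject any theme outside the allowed set."""
    data = json.loads(raw_json)
    themes = data["themes"]
    for item in themes:
        if item["theme"] not in ALLOWED_THEMES:
            raise ValueError(f"unexpected theme: {item['theme']!r}")
    return themes

reply = '{"themes": [{"post_id": "abc", "theme": "pain point"}]}'
print(parse_themes(reply))  # → [{'post_id': 'abc', 'theme': 'pain point'}]
```

Checking the reply against a fixed set like this is what makes the “structured output” loop debuggable: a bad categorization fails loudly instead of silently entering the database.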

Sully Omar

  • CEO of Cognosys, a company specializing in AI technologies and language model development
    • Otto is linked on his Twitter profile
  • Leading practitioner in the field of large language models (LLMs)
  • Known for innovative approaches including meta prompts and model orchestration
  • Active in sharing AI insights and trends on social media platforms

Others

  • I had to create a custom RSS.app based JSON feed from his Twitter feed
    • He shares interesting stuff there, and I want to follow him from Reeder

#Person #AI #LLM

2 Years of LLM Advice in 35 Minutes (Sully Omar Interview)

  • Metadata
  • Notes
    • Introduction and Overview
      • 0:00
      • Sully Omar’s background and the scope of the interview.
    • The Three-Tier Model of Language Models
      • 2:14
      • Breakdown of tier 1 through tier 3 models based on intelligence, price, and use cases.
    • Tier-Specific Use Cases
      • 5:11
      • Use case examples for tier 2 and tier 1 models, focusing on task differentiation and workflow.
    • Combining Models for Optimal Performance
      • Notes
        • 09:32 Gemini is useful for video
        • 11:18 GPT-4o Mini is better with structured data
          • Summary: GPT-4o Mini excels in handling structured data due to its efficiency and cost-effectiveness. It is particularly useful for tasks that require organized outputs, such as extracting detailed information from large documents or generating structured insights. This model’s ability to process data without high costs makes it ideal for applications needing a balance between performance and affordability.
      • 9:06
        • Multi-Model Workflows: Leveraging different models for their strengths, like using Gemini for multimedia and GPT-4o Mini for text reasoning.
      • 9:19
        • Nuanced Strengths: Understanding each model’s capabilities, such as Gemini’s data search and GPT-4o Mini’s reasoning.
      • 9:40
        • Model Orchestration: Combining Claude and GPT-4o Mini for structured outputs.
      • 10:00
        • Trade-offs and Challenges: Managing outputs and compatibility issues.
      • 10:40
        • Future of Model Routing: Potential for automated routing to enhance performance.
    • Model Routing and Trade-offs
      • 12:01
      • Discussion on model routing and its challenges in production environments.
    • Understanding Model Distillation
      • 15:01
      • Benefits and pitfalls of distilling larger models into smaller ones for efficiency.
    • Workflow Demo: Meta Prompting and Prompt Optimization
      • Notes
        • 18:46 Metaprompting meaning
        • 20:06 Anthropic prompt optimizer
        • 20:23 Demo
        • 21:52 He demos the exact thing I’m trying to adapt for video extraction
        • 22:16 Voice is interesting
        • Here we can see how Sully is using multiple LLMs to create a prompt
          • 24:54 Paste the prompt draft into ChatGPT o1
          • 27:50 Gemini Pro is better at extracting information
          • 29:18 Google AI Studio
        • 30:43 Prompt management
          • 31:16 LangSmith
            • Summary: LangSmith is a developer platform designed to support the lifecycle of applications powered by large language models (LLMs). It provides tools for debugging, testing, evaluating, monitoring, and tracking usage metrics, helping developers transition LLM applications from prototype to production. LangSmith aims to simplify the development process by offering an intuitive UI and integration capabilities, making it accessible
              • Check out LangSmith
      • 18:01
        • Initial Problem Setup
          • 18:01
          • Overview of the problem-solving approach with a focus on extracting insights from a text or task.
        • Prompt Generation
          • 19:10
          • Using multiple models (GPT, Claude) to generate initial drafts for optimized prompts.
        • Iterating on Prompts
          • 21:31
          • Refining the generated prompts by testing and comparing across models to improve clarity and output quality.
        • Voice Input for Optimization
          • 22:10
          • Leveraging voice mode as a faster, more natural way to interact with the models and iterate on prompts.
        • Testing Prompts with Different Models
          • 27:02
          • Applying the finalized prompts in Gemini Pro and other systems for structured outputs and insights.
    • Test-Driven Development with LLMs
      • Notes
      • Writing Tests First
        • 32:55
        • Creating tests before implementing the code to ensure clear objectives and measurable outcomes.
      • Debugging with LLMs
        • 34:00
        • Using LLMs to analyze test failures, interpret error messages, and suggest fixes.
      • Iterative Code Generation
        • 35:10
        • Generating code incrementally and refining it based on test results and feedback loops.
      • Handling Complex Workflows
        • 36:30
        • Addressing multi-file and conditional logic scenarios using test-driven workflows.
      • Benefits of Test-Driven Development
        • 37:50
        • Reducing errors, improving code clarity, and ensuring robust, maintainable solutions.
      • 32:55
      • Using LLMs to write tests first and then generate code iteratively.
    • The AI Community’s Discussions and Trends
      • 39:30
      • Popular topics like model compute, distillation, and EVALS.
    • Building a Product and Growing on Twitter
      • 43:22
      • Insights on crafting viral tweets and the impact of good timing and storytelling.
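The three-tier model and the routing discussion from the interview can be sketched as a tiny dispatcher. The tier names and the rules below are illustrative, not Sully’s actual router:

```python
def route(task):
    """Route a task to a model tier, following the three-tier idea:
    tier 1 = most capable/expensive, tier 3 = cheapest/fastest.
    `task` is a dict of illustrative feature flags."""
    if task.get("multimodal"):           # video/audio → a multimodal model
        return "tier-1-multimodal"
    if task.get("needs_reasoning"):      # hard, open-ended problems
        return "tier-1"
    if task.get("structured_output"):    # extraction into fixed schemas
        return "tier-2"
    return "tier-3"                      # routine, high-volume work

print(route({"structured_output": True}))  # → tier-2
```

The trade-off mentioned around 10:00 shows up even in a toy like this: the routing rules themselves become code you must maintain as models and prices shift.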

There are three states of being in projects

  • Not Knowing → Action → Completion forms a natural cycle of any project.
    • Not Knowing: the initial state when we don’t have any knowledge about the project.
      • The transition from “Not Knowing” to “Action” mirrors how defining next steps transforms uncertainty into execution. 2.7
    • Action: experimentation, learning, prototyping, and building.
      • Minimal planning is preferred to allow action to generate clarity. 2.6.13.4
    • Completion: declaring a state of “done” to learn from it and move on.
  • The goal is not perfection but continuous progress.
    • Could this be implemented as a continuous habit, since we achieve big changes with small steps 2.7.6?
  • Completing each project clarifies what to carry forward into the next iteration.

2024.12.16.

How to get out of a venv in Cursor or VS Code

VS Code automatically detects the Python interpreter for a project and activates the corresponding virtual environment in the shell, unless the project lacks one. However, I noticed that even after I deleted the virtual environment and deactivated it, the environment was activated again when I reopened the project.

To resolve this, I used Command + Shift + P, selected Python: Clear Workspace Interpreter Settings, and then chose Python: Clear Cache and Reload Window.

#Development #Troubleshooting #VSCode #Python

Simon Willison: The Future of Open Source and AI

Watched Simon Willison: The Future of Open Source and AI | Around the Prompt #10

#AITools #Development #LLM #OpenSource #Interviews

Using Ollama through Docker

To start the server for the first time:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

To stop and start

docker stop ollama
docker start ollama

To interact with it:

docker exec -it ollama ollama pull gemma2
docker exec -it ollama ollama run gemma2

FYI: gemma2 needs more than 8GB of RAM to run.

→ docker exec -it ollama ollama run gemma2
Error: model requires more system memory (9.1 GiB) than is available (8.4 GiB)
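Besides docker exec, the container publishes Ollama’s HTTP API on port 11434, so it can also be queried from scripts. A sketch using only the standard library (assumes the container above is running; the payload shape follows Ollama’s /api/generate endpoint):

```python
import json
import urllib.request

def build_generate_request(model, prompt):
    """Payload for Ollama's /api/generate endpoint; stream=False asks
    for a single JSON body instead of a stream of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model, prompt, host="http://localhost:11434"):
    """Send one prompt to the running Ollama container and return its reply."""
    data = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    # Calling generate() needs the container running; just show the payload here.
    print(build_generate_request("gemma2", "Say hello in one word."))
```

This is handy for scripting extractions without shelling into the container each time.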

Questions

  • How can we move the models to an external SSD?

Moving Models to External SSD on macOS

To move Ollama models to an external SSD when using Docker:

  1. Stop the Ollama container:
    docker stop ollama
    
  2. Create a directory on your external SSD:
    mkdir /Volumes/YourSSD/ollama-models
    
  3. Update the Docker run command to mount the external SSD location:
    docker run -d \
      -v /Volumes/YourSSD/ollama-models:/root/.ollama \
      -p 11434:11434 \
      --name ollama \
      ollama/ollama
    

If you already have models and want to move them:

  1. Copy existing models from the Docker volume to your SSD:
    docker cp ollama:/root/.ollama/. /Volumes/YourSSD/ollama-models
    
  2. Remove the old Docker volume:
    docker volume rm ollama
    
  3. Start Ollama with the new mount point as shown above.

#Development #Docker #AI #LLM

ChatGPT Search is collaboration between humans and AI

  • Collaboration between AI and humans
    • These are tools that enable humans to do things that were harder previously.
    • Using machine learning that feels like a collaborative effort.
  • ChatGPT Search is the ultimate collaboration between humans and AI
  • This is the ultimate collaboration between AI and humans, since humans still feed AI with their knowledge, and AI can help us reuse that knowledge
  • It can tap into human knowledge through web search
  • People still hide their knowledge behind password-protected pages
    • Companies are protecting their knowledge
    • Protected knowledge is what makes money for people and companies
    • These are in a way company secrets
    • ChatGPT is missing out on these things

#Drafting

Create a POC to gain insights about a problem

Creating POCs can give us more insights since we’re touching the real thing, even if it’s just a spec file where we’re trying out a new library or a concept.

Try to keep the POC in one spec file so everything is in one place. This works fine for backend features. But how would one handle UI changes?

Sometimes creating a new playground (in the form of a separate project) can also be a tool to try out something. Then we can use the experience acquired from it to implement the idea in the main project.
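For a backend-style POC, the “one spec file” idea can look like this: a single pytest file that probes the unfamiliar library directly. Here the stdlib difflib stands in for whatever library is being evaluated; the file name and scenarios are made up:

```python
# poc_difflib_spec.py - a throwaway spec file probing how difflib behaves
# before committing to it in the main project.
import difflib

def test_close_matches_are_ranked():
    # Does get_close_matches return the best match first?
    matches = difflib.get_close_matches("appel", ["ape", "apple", "peach"])
    assert matches[0] == "apple"

def test_ratio_is_symmetric():
    # Is similarity the same regardless of argument order for this pair?
    a = difflib.SequenceMatcher(None, "bike", "hike").ratio()
    b = difflib.SequenceMatcher(None, "hike", "bike").ratio()
    assert a == b
```

Because everything lives in one file, the whole experiment can be run with `pytest poc_difflib_spec.py` and deleted (or promoted) once the question is answered.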