The Ideoscope: Instruments for Seeing Ideas
Instruments, Not Search Engines
Most software for working with recorded media assumes you know what you're looking for. You type a query, you get results. This is search. It's useful, but it's not thinking.
When a researcher sits down with two hundred hours of interview recordings, they don't start with a query. They start with a feeling — a sense that something is in there, some thread connecting what was said in March to what was said in November, some pattern in how people talk about loss or power or home. The work isn't retrieval. It's perception. And perception requires instruments.
A microscope doesn't answer questions. It makes a world visible that wasn't visible before, and in doing so, it changes which questions you think to ask. The astronomer doesn't query the telescope. The telescope reveals, and the astronomer responds.
We've been building search engines for recorded media — transcribe it, index it, let people type keywords. But the interesting problem isn't search. It's sight. How do you see the ideas inside an archive?
An ideoscope is an instrument for seeing ideas. Not retrieving them — seeing them. The difference matters. Retrieval assumes the question is already formed. Sight is what forms the question in the first place.
The Lens Workshop
The system works like this: you upload an archive of recorded media — interviews, lectures, conversations — and instead of giving you a search bar, we give you a workshop. A place to build lenses.
A lens is a small application designed to look at your archive from a particular angle. One lens might let you ask questions and get answers with cited clips. Another might visualize how topics shift across episodes. Another might cluster speakers by what they care about. Each lens uses the same underlying capabilities — search, transcription, conversation, structured analysis — but combines them differently, because different archives and different questions require different instruments.
The simplest lens is a search box. A query goes in, ranked items come out.
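That shape fits in a few lines. A minimal sketch, where the archive type and the keyword scoring are simple stand-ins for the SDK's real retrieval:

```typescript
// The simplest lens: a query goes in, ranked items come out.
// The Item shape and term-overlap scoring here are illustrative,
// not the SDK's actual retrieval.
type Item = { id: string; transcript: string };

function searchLens(archive: Item[], query: string): Item[] {
  const terms = query.toLowerCase().split(/\s+/);
  // Score each item by how many query terms its transcript contains.
  const score = (item: Item) =>
    terms.filter(t => item.transcript.toLowerCase().includes(t)).length;
  // Keep only matching items, ranked best-first.
  return archive
    .filter(item => score(item) > 0)
    .sort((a, b) => score(b) - score(a));
}
```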
Composing Operations
The SDK exposes a set of typed operations. Each one has a clear signature — what goes in, what comes out:
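As a rough illustration of what those signatures might look like (the type and method shapes here are assumptions, not the SDK's published API):

```typescript
// Hypothetical signatures for the core operations. Each one is a
// typed function: what goes in on the left, what comes out on the right.
type Query = string;
interface Item { id: string; title: string }
interface Clip { itemId: string; start: number; end: number; text: string }
interface Source { clip: Clip; index: number }

interface IdeoSDK {
  // Query -> Item[]
  search(query: Query): Promise<Item[]>;
  // Query (bundled with an item) -> Clip
  clip(query: Query, opts: { itemId: string }): Promise<Clip>;
  // Query bundled with Clip[] -> answer bundled with Source[]
  cite(query: Query, clips: Clip[]): Promise<[string, Source[]]>;
}
```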
The tensor product (⊗) means "bundled together" — multiple inputs carried on parallel wires. Operations compose with >> ("then") and @ ("alongside"). copy duplicates a wire, id passes a wire through unchanged.
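Those operators can be modeled with ordinary functions. A minimal sketch, using functions as morphisms and tuples as the tensor; the names then and alongside stand in for >> and @:

```typescript
// Functions as morphisms; a pair [A, C] plays the role of A ⊗ C.
type Fn<A, B> = (a: A) => B;

// f >> g : run f, then feed its output to g.
const then = <A, B, C>(f: Fn<A, B>, g: Fn<B, C>): Fn<A, C> =>
  a => g(f(a));

// f @ g : run f and g on parallel wires.
const alongside = <A, B, C, D>(f: Fn<A, B>, g: Fn<C, D>): Fn<[A, C], [B, D]> =>
  ([a, c]) => [f(a), g(c)];

// copy duplicates a wire; id passes one through unchanged.
const copy = <A>(a: A): [A, A] => [a, a];
const id = <A>(a: A): A => a;
```

With these, copy followed by alongside(f, g) wires one input into two parallel operations, the fan-out shape the diagrams draw.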
These aren't just notation — they're the grammar of a monoidal category. The diagrams below are generated directly from algebraic expressions: edit an expression and the wiring changes with it.
The AMA Pipeline
The Ask Me Anything lens follows a three-step pipeline: search for relevant items, extract clips from each, then synthesize a cited answer. The diagram specifies the shape; the code below is the execution:
// Step 1: Find relevant items
const items = await ideo.search(query);
// Step 2: Extract clips from each item
const clips = await Promise.all(
items.map(item => ideo.clip(query, { itemId: item.id }))
);
// Step 3: Synthesize a cited answer
const [answer, sources] = await ideo.cite(query, clips);

Each step of this pipeline has a clear type signature, and for a fixed number of results the whole thing composes in the monoidal category: fan the query into search, unpack the results onto parallel wires, run clip on each, and gather the clips into cite. The diagram would be wide — five parallel clip boxes feeding into one cite box — but it's a perfectly valid wiring diagram.
The gap is that search returns a variable-length collection. The number of results isn't known at diagram-compile time, so you can't draw wires for items that may or may not exist. The map in the code above handles this: it iterates over however many items came back. This is where the imperative runtime fills in what the fixed-arity diagram can't express.
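That runtime move can be sketched as a single helper; mapBox is an illustrative name, not part of the SDK:

```typescript
// A variable-arity "mapped box": apply the same operation to every
// item in a runtime-sized collection, in parallel. This is the piece
// a fixed-arity diagram can't draw, because the number of wires is
// only known once search returns.
const mapBox =
  <A, B>(f: (a: A) => Promise<B>) =>
  (items: A[]): Promise<B[]> =>
    Promise.all(items.map(f));
```

Composed into the pipeline, mapBox(clipFn) is one box in the diagram whose internal width is resolved at run time.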
Lenses That Teach
Here's where it gets interesting: the lenses teach the system.
When someone builds a lens that works well — a Q&A tool with proper citations, a timeline that reveals patterns — the system extracts the underlying pattern. Not the specific code, but the shape of the approach: search for relevant clips, format them as numbered sources, ask the language model to answer using only those sources, render the citations as playable references.
That pattern becomes available to the next person building a lens. The system accumulates not data, but ways of looking. A library of proven approaches to making ideas visible.
This is different from a feature roadmap. A product team deciding what features to build is guessing at what users need. A system that learns from how people successfully explore their archives is discovering what works. The instrument improves not because someone plans an improvement, but because it absorbs the intelligence of everyone who uses it.
The composition system makes patterns precise. Each lens has a diagram — a wiring specification. Patterns extracted from successful lenses are diagrams too, reusable shapes that the next builder can plug in:
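A pattern like the cited-answer shape can be captured as a function parameterized over the operations it wires together. A sketch, with illustrative names:

```typescript
// The extracted shape of a cited-answer lens: search, clip each
// result, synthesize. The next builder plugs in their own operations.
// These names and shapes are illustrative, not the SDK's.
interface CitedAnswerOps<Item, Clip> {
  search(query: string): Promise<Item[]>;
  clip(query: string, item: Item): Promise<Clip>;
  cite(query: string, clips: Clip[]): Promise<[string, Clip[]]>;
}

const citedAnswerPattern =
  <Item, Clip>(ops: CitedAnswerOps<Item, Clip>) =>
  async (query: string): Promise<[string, Clip[]]> => {
    const items = await ops.search(query);
    const clips = await Promise.all(items.map(i => ops.clip(query, i)));
    return ops.cite(query, clips);
  };
```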
Building New Instruments
We tend to think understanding comes first and tools come second — that you figure out what you want to know, then build a tool to help. But the history of science suggests the opposite. The microscope wasn't built because someone wanted to see cells. The microscope was built, and then cells were discovered. The instrument came first. The understanding followed.
Archives of recorded media are full of ideas that no one has seen yet — not because the ideas are hidden, but because the instruments for seeing them haven't been built yet. Every conversation contains patterns the participants didn't notice. Every series of interviews contains threads the interviewer didn't track. The ideas are there, in the recordings, waiting for the right lens.
Your archive is full of things you haven't seen yet. Not because you haven't looked, but because you haven't built the right instrument for looking. That's what we're making: the instrument. And it gets sharper every time someone uses it.