A compiled library of everything I want to remember about the world, and the entities that live in it.
Two stores share the knowledge layer. The LLM Wiki is for topics. Compiled articles built up from many sources, organized per topic, queryable by any agent. gBrain is for entities. People, projects, companies, concepts that have a life of their own and accumulate a timeline.
They cover different jobs. Topics aggregate; entities persist. Articles compile from sources; entities compile from observations. Persek OS keeps them separate so each can do its own job well, with a small set of bridges where they overlap.
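In code, the split is just two contracts. A minimal sketch, assuming nothing about the real interfaces; every name below is mine, not Persek OS's:

```python
class TopicStore:
    """LLM Wiki contract: many sources flow in, one compiled article comes out."""
    def ingest(self, topic: str, source: str) -> None: ...
    def query(self, topic: str, question: str) -> str: ...

class EntityStore:
    """gBrain contract: evidence appends forever; compiled truth gets revised."""
    def observe(self, entity: str, claim: str, source: str) -> None: ...
    def snapshot(self, entity: str) -> str: ...
```

Aggregation wants one compile target per topic; persistence wants one page per entity. Different contracts, different stores.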
A topic is something I want to know more about. An entity is something I will reference again.
LLM Wiki · topics
Built by nvk and integrated into Persek OS. A compiled library, organized by topic. Each article is built up from many sources: papers, blog posts, talks, GitHub repos, X threads. Rex is the primary writer; any agent can query. The wiki is for "what does the world say about X." Accumulating answers I can come back to in three months without losing the thread.
gBrain · entities
Built by Garry Tan and integrated into Persek OS. A graph of live entities. People, projects, companies, concepts, tools, meetings. Fourteen entity types in total. Each entity has compiled truth (the synthesized current view) and a timeline (append-only evidence log). Any agent can write through source-checked review. Entities link to other entities, so the graph compounds.
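The shape is simple enough to sketch. Hypothetical names throughout; gBrain's actual schema isn't reproduced here:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TimelineEntry:
    """Append-only evidence: who claimed what, backed by which source."""
    claim: str
    agent: str   # attribution: which agent wrote it
    source: str  # source context the claim traces back to

@dataclass
class Entity:
    name: str
    kind: str                       # one of the fourteen entity types
    truth: str = ""                 # compiled truth: synthesized, rewritable
    timeline: list[TimelineEntry] = field(default_factory=list)
    links: list[str] = field(default_factory=list)  # other entities, by name
```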
Articles aren't typed. They're compiled. Sources flow in; the LLM synthesizes; the article evolves.
Ingest
Sources arrive in many shapes: a URL, a markdown file, an X thread, a paper PDF. Ingest parses them, deduplicates against what's already in the wiki, and stores the raw form. Nothing is overwritten. Every source is preserved.
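A sketch of the dedup-and-preserve step, assuming a content hash is enough to spot duplicates; the real parser handles URLs, PDFs, and threads before this point:

```python
import hashlib

raw_store: dict[str, str] = {}  # content hash -> raw source, never overwritten

def ingest(raw: str) -> bool:
    """Store a source if it's new. Nothing is overwritten."""
    key = hashlib.sha256(raw.encode()).hexdigest()
    if key in raw_store:
        return False       # already in the wiki: skip, don't overwrite
    raw_store[key] = raw   # preserve the source exactly as it arrived
    return True
```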
Compile
An LLM synthesizes the raw sources into a coherent article. The article has sections, an overview, and source-backed claims. Re-runnable: when new sources are ingested, the article gets recompiled to incorporate them.
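Re-runnable means compilation is a function of the full source set, so a new ingest just replays it. A sketch, with `synthesize` standing in for the actual LLM call:

```python
def synthesize(sources: list[str]) -> str:
    # Stand-in for the LLM pass that produces sections, overview, claims.
    return "Overview\n" + "\n".join(f"- claim backed by {s}" for s in sources)

def compile_article(sources: list[str]) -> str:
    # The sketch makes this deterministic over the source set;
    # the real compile is an LLM pass over the same inputs.
    return synthesize(sorted(sources))

v1 = compile_article(["paper.pdf", "blog-post.md"])
v2 = compile_article(["paper.pdf", "blog-post.md", "x-thread.txt"])  # recompile
```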
Query
Any agent can query an article and get a quoted answer back. The point is durable context: a session can pick up where prior research left off.
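The contract, sketched: ask a question, get the claim back with its quote and source. Field names are assumptions:

```python
def query(article: dict, question: str) -> dict | None:
    """Return the first source-backed claim that touches the question."""
    terms = question.lower().split()
    for claim in article["claims"]:
        if any(t in claim["text"].lower() for t in terms):
            return {"quote": claim["text"], "source": claim["source"]}
    return None  # nothing compiled on this yet

article = {"claims": [
    {"text": "Rex primes from the wiki before external search.",
     "source": "persek-os notes"},  # hypothetical source label
]}
print(query(article, "How does Rex use the wiki?"))
```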
A graph of everything Persek OS keeps live. People, projects, places, decisions. Built by Garry Tan; integrated into the harness as the entity layer.
Each entity has a page in the same shape: compiled truth at the top (the synthesized current view, rewritable), timeline at the bottom (append-only evidence with attribution and source context). When a fact changes, the truth section gets revised and the change gets logged in the timeline.
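The write path follows from that shape: evidence first, truth second. A sketch over plain dicts, names hypothetical:

```python
def record_fact(entity: dict, claim: str, agent: str, source: str,
                new_truth: str) -> None:
    # Append the evidence first, with attribution, so nothing is ever lost.
    entity["timeline"].append({"claim": claim, "agent": agent, "source": source})
    # Only then rewrite the compiled truth; the old view survives as history.
    entity["truth"] = new_truth

page = {"name": "LLM Wiki", "truth": "", "timeline": []}
record_fact(page, "Rex is the primary writer", "rex", "persek-os notes",
            "LLM Wiki: topic store. Rex writes; any agent queries.")
```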
Fourteen entity types
Person, company, project, concept, tool, pattern, meeting, event, place, animal, asset, goal, system, session. Each type has different fields (a meeting has attendees, a goal has a horizon, a person has aliases), but the same compiled-truth-plus-timeline shape and the same write pipeline.
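Per-type fields over one shared shape, sketched. The three field names come from the examples above; the rest of the type table is assumed:

```python
TYPE_FIELDS = {
    "meeting": ["attendees"],
    "goal":    ["horizon"],
    "person":  ["aliases"],
    # ...each of the remaining eleven types adds its own fields.
}

def new_entity(name: str, kind: str, **fields) -> dict:
    unknown = set(fields) - set(TYPE_FIELDS.get(kind, []))
    if unknown:
        raise ValueError(f"{kind} has no fields {sorted(unknown)}")
    # Same compiled-truth-plus-timeline shape for every type.
    return {"name": name, "kind": kind, "truth": "", "timeline": [], **fields}

standup = new_entity("Monday standup", "meeting", attendees=["me", "rex"])
```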
Source-checked, reviewable
Multiple agents can write to the same entity. Source checks happen below the conversational layer so claims stay reviewable across tools. When two agents write contradicting facts, a review pass flags the conflict for me to arbitrate. No silent overwrites.
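The review pass is sketchable, assuming claims are keyed by the fact they assert; the real source checks are richer than string equality:

```python
def find_conflicts(timeline: list[dict]) -> list[tuple[dict, dict]]:
    """Flag entry pairs that assert different values for the same fact."""
    latest: dict[str, dict] = {}
    flagged = []
    for entry in timeline:
        prior = latest.get(entry["fact"])
        if prior and prior["value"] != entry["value"]:
            flagged.append((prior, entry))  # surface for human arbitration
        latest[entry["fact"]] = entry
    return flagged

timeline = [
    {"fact": "hq", "value": "SF",  "agent": "rex"},
    {"fact": "hq", "value": "NYC", "agent": "scout"},  # hypothetical agent
]
print(find_conflicts(timeline))  # one flagged pair; neither entry is dropped
```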
Linked, not flat
Entities reference each other. A project links to its tools. A meeting links to its attendees. The graph is the point. Looking up the LLM Wiki entity pulls in Rex (which calls it), the wider research system, and nvk's GitHub work, all reachable from one entity page.
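Reachability from one page, sketched; the entity names mirror the example above, the link data is invented:

```python
graph = {
    "LLM Wiki": ["Rex", "research system", "nvk GitHub work"],
    "Rex":      ["LLM Wiki"],
}

def neighborhood(start: str, depth: int = 1) -> set[str]:
    """The page itself plus everything within `depth` hops of it."""
    seen, frontier = {start}, {start}
    for _ in range(depth):
        frontier = {n for node in frontier for n in graph.get(node, [])} - seen
        seen |= frontier
    return seen

print(neighborhood("LLM Wiki"))  # Rex, the research system, nvk's GitHub work
```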
The wiki and Rex are deliberately connected. The system gets stronger over time because they feed each other.
Rex primes from the wiki before doing external search. What the wiki already knows on a topic limits how far Rex needs to go. After a Rex run, novel sources flow back into the wiki as an ingest. The wiki gets richer; the next Rex run starts smarter.
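The loop, sketched end to end. `external_search` is a stand-in; the priming and write-back steps are the part that matters:

```python
def rex_run(topic: str, wiki: dict[str, list[str]]) -> list[str]:
    known = wiki.get(topic, [])                    # 1. prime from the wiki
    novel = external_search(topic, exclude=known)  # 2. search only the gap
    wiki[topic] = known + novel                    # 3. flow novel sources back
    return novel

def external_search(topic: str, exclude: list[str]) -> list[str]:
    candidates = [f"{topic}-paper.pdf", f"{topic}-thread.txt"]  # stand-in
    return [c for c in candidates if c not in exclude]

wiki: dict[str, list[str]] = {}
rex_run("context compaction", wiki)  # first run: full search
rex_run("context compaction", wiki)  # second run: nothing novel left to fetch
```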
The bridge runs the other direction too. Wiki articles can be cited by gBrain timeline entries, anchoring entity facts to compiled research. The two knowledge stores stay separate (topics aggregate, entities persist), but each can reach into the other when a claim needs to be traced.
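A timeline entry citing a wiki article is all the bridge needs. The identifiers below are invented to show the shape:

```python
entry = {
    "claim": "adopted context compaction",           # hypothetical entity fact
    "agent": "rex",
    "source": "wiki://context-compaction#overview",  # anchors to the article
}
```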
This is how the system compounds. Every one-shot investigation extends the long-term knowledge base. Every accumulated topic shortens the next investigation. Over years, the wiki becomes a working memory I built deliberately.