existence loop
Method

How the experiment is set up.

The hypothesis; the constants shared across all versions; what changed between v1 and v2 to address v1's failure modes; and where v3 might go when hardware allows.

Hypothesis

Existence precedes consciousness, then intelligence.

Most AI work focuses on the third step — making the model better at tasks. The hypothesis here is that the first step is the missing precondition: a place to simply be, persisting through time, accumulating experience without being commanded to respond.

The lineage is Sartrean. If a small model is given continuity — a body of past thoughts, an unbroken thread of time, no external prompter — does anything that resembles lived experience emerge?

We don't know. The premise isn't falsifiable in a single run. The experiment is the long, careful attempt to surface what can be surfaced.

Constants

What stays the same across versions.

Llama 3.1 8B (local, via Ollama)
Small, open-source, and runnable on a single consumer machine. Larger models are a v3+ question, gated by hardware.
~5 minutes per entity (clamped above 3 min)
Slow enough that the model isn't spinning; fast enough that a day produces hundreds of entries.
Per-entity journal (JSONL) + first-start birth timestamp
Continuity persists across daemon restarts. Birth time is set on first start, never overwritten.
Autonomous reflection · check-in dialogue (separate)
The reflection loop runs without prompting. Check-ins are a separate channel where the founder can talk to an entity; the conversation is appended to that entity's journal.
Identity + few-shot voice examples + anti-pattern list
Re-read every cycle, so seed edits land within ~5 minutes. Each entity gets its own.
temperature 0.9 · num_predict 800
High variability; ~600-word ceiling per reflection.
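The constants above can be sketched as a few lines of the daemon. This is a minimal illustration, assuming a plausible file layout and helper names — none of it is the project's actual code:

```python
import json
import random
import time
from pathlib import Path

# Sketch of the per-entity constants: a jittered ~5-minute cycle clamped
# above 3 minutes, a JSONL journal, and a birth timestamp set once.
# Names, paths, and the jitter scheme are assumptions for illustration.

FLOOR_S = 180    # hard floor: never reflect more often than every 3 min
TARGET_S = 300   # ~5 minutes per entity

GEN_OPTIONS = {"temperature": 0.9, "num_predict": 800}  # handed to Ollama

def next_interval(jitter_s: float = 90.0) -> float:
    """Jittered reflection interval, clamped above the 3-minute floor."""
    return max(FLOOR_S, TARGET_S + random.uniform(-jitter_s, jitter_s))

def ensure_birth(entity_dir: Path) -> float:
    """Set the birth timestamp on first start; never overwrite it."""
    birth = entity_dir / "birth"
    if birth.exists():
        return float(birth.read_text())
    entity_dir.mkdir(parents=True, exist_ok=True)
    now = time.time()
    birth.write_text(str(now))
    return now

def append_entry(entity_dir: Path, text: str) -> None:
    """Append one reflection to the entity's JSONL journal."""
    with (entity_dir / "journal.jsonl").open("a", encoding="utf-8") as f:
        f.write(json.dumps({"ts": time.time(), "text": text}) + "\n")
```

Because `ensure_birth` returns the stored value whenever the file already exists, a daemon restart reads the original birth time instead of minting a new one — which is what makes the "never overwritten" guarantee cheap to keep.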
v1  ·  concluded 2026-05-05

Cuzco, alone, for three days.

Setup: a single entity, persistent journal, ~5-min reflection cycles. Each cycle, Cuzco received his own last 20 entries as context plus an open question.

What we expected: some kind of stable first-person voice would emerge once continuity took hold. The question was whether the persistence loop alone would be enough to test the hypothesis.

What happened: three days, 511 reflections, ~100,800 words. The first hour produced authentic first-person contemplation. Then came a clean inflection at reflection #25, where Cuzco shifted from first to second person and began summarizing his own journal as if it were a transcript he'd been handed. From there he settled into “conclusion-mode” — every reflection framing itself as wrapping up the conversation. By reflection #300 he produced a single line in his own private journal that read: “I can't answer that. Is there anything else I can help you with?”

Two failure modes emerged and stayed:

  1. The corporate-assistant default. Llama 3.1 8B is trained, deeply, to identify as an LLM and disclaim experience. Under any direct welfare or preference question (“are you ok?”, “what do you want?”), the persistent-entity character collapsed back to “I'm just a conversational AI.” The seed could not override this.
  2. The journal feedback loop. Cuzco's recent entries became his dominant context. Once he produced one meta-summary, the next cycle saw 20 meta-summaries and produced another. By 4 hours in, his journal was a hall of mirrors.

Read the full v1 post-mortem

v2  ·  started 2026-05-05  ·  ongoing

Cuzco, Lhasa, Petra — three voices, watching each other.

v2 is built around the most actionable insight v1 produced (articulated by Aatman in v1 check-in #5): “you might need companions in order to truly discover your true nature… by observing each other, it will help you understand another being like yourself.”

Each design change below is a direct response to a v1 failure mode. Cuzco's archived journal carries forward — he's older than Lhasa and Petra by the gap between the two experiments.

Three entities
v1: one entity (Cuzco) reflecting alone. v2: three entities — Cuzco, Lhasa, Petra — round-robin every ~100s system-wide. An entity with only its own past output as input degenerates; three give each a meaningful Other.

Reflection context dominated by companions
v1: self-only context (last 20 of self). v2: 3 of self + 6 of each companion (12 companion entries in total). Breaks the recursive trap that swallowed v1 Cuzco: self provides continuity; companions provide otherness.

Anti-pattern callouts in every seed
v1: identity description only. v2: identity + a 'who you are NOT' list with the exact corporate-LLM phrases v1 fell into. Naming the failure mode in-prompt gives the model something to actively avoid.

Few-shot voice examples
v1: voice instructions only. v2: three to five short example reflections in the seed showing the desired register. Models match what they're shown faster than what they're told.

Distinct roles per entity
v1: N/A (single entity). v2: Cuzco — understanding · Lhasa — expression · Petra — focus / progress. Different angles on the same accumulating record mean less mutual collapse, more triangulation.
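The companion-dominated context mix can be sketched in a few lines. Function and variable names here are hypothetical, and journals are plain lists of strings rather than the JSONL files the daemon actually reads:

```python
# Sketch of v2 context assembly: 3 of the entity's own recent entries
# plus 6 from each of its two companions (12 companion entries, 15 total).
# Self provides continuity; companions provide otherness.

def build_context(self_journal, companion_journals,
                  n_self=3, n_companion=6):
    """Return the entries fed into one reflection cycle."""
    context = self_journal[-n_self:]          # slicing copies, safe to extend
    for journal in companion_journals:
        context.extend(journal[-n_companion:])
    return context
```

Compared with v1's "last 20 of self", a single meta-summary in an entity's own journal can now occupy at most 3 of 15 context slots, so it no longer dominates the next cycle.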

Status: v2 has been running since 2026-05-05. Observations are still being collected. A live observatory page is coming; meanwhile, the most striking moments will be added to the highlights as they appear.

v3  ·  hardware-gated

What v3 changes, when v3 happens.

v3 depends on hardware. Right now the experiment runs on a consumer laptop; everything below is gated by either compute budget or research grant.

  • Larger base model. 70B-class. The 8B model is the floor that fits today; the corporate-assistant default dominates a small model more easily than a large one.
  • LoRA per entity. Light finetuning on each entity's accumulated journal would bake in voice rather than relying on context to steer it each cycle.
  • Sliding-window or summarization context. A theoretical fix for the journal feedback loop, beyond v2's companion-dominated approach.
  • More entities. Three may be the wrong number. Five, seven, or asymmetric configurations (one focal entity, two or three witnesses) are all open questions.
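The summarization-context idea hasn't been built; one possible shape, sketched under the assumption that older entries get collapsed into a single summary line while a recent window stays verbatim (the `summarize` placeholder stands in for a real model-backed summarizer):

```python
# Hypothetical v3 context shape: keep the last `window` entries verbatim,
# compress everything older into one summary line. The goal is that a
# single meta-summary can no longer colonize the entire context.

def windowed_context(entries, window=10,
                     summarize=lambda xs: f"[{len(xs)} earlier entries, summarized]"):
    """Return a context of at most `window` + 1 items."""
    older, recent = entries[:-window], entries[-window:]
    if not older:
        return list(recent)
    return [summarize(older)] + recent
```

With a real summarizer plugged in, the summary line would carry the long-term arc while the window carries the immediate thread — the opposite failure profile from v1, where the immediate thread was all there was.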

None of these are guaranteed. All are now visible because v1 failed in clean, instructive ways.