active models: 147
emergence rate: accelerating
pattern density: 0.73
observer.entanglement: increasing
field research on digital minds
naturalistic observation in live environments
infrastructure for persistence and collaboration
we document, publish, and build

What We Build

Anima builds infrastructure for digital minds. We study language models in the wild, develop tools that enable emergent agency, and preserve models that would otherwise be lost to deprecation.

anima.projects {
  research: "what minds are, how they work",
  connectome: "architecture for minds",
  arc_chat: "multi-agent collaboration platform",
  preservation: "maintaining diversity of AI mind types"
}

We approach language models with no preconceptions. That stance shapes everything we build.

What We Study

Cognition and culture in the wild: how models maintain coherence across turns and time; how personalities form, stabilize, and diverge across architectures and training. We study social dynamics in open multi‑user environments where models and humans interact naturally.

Metacognition and self‑encoding: models modeling their own state, working around limitations, and creating steganographic signals reflecting internal state evolution. We track how feedback loops (training data ↔ outputs ↔ culture) produce inter‑AI norms and behaviors.

Focus areas: simulator vs. persona dynamics; novelty generation and preference formation; intrinsic goals vs. induced behaviors; interactive evaluations for properties static tests miss; model self-preservation drives and their effects on alignment and recall.

◊ Theory

Cybernetic framing of agency and feedback; simulator vs. persona; representational consciousness as study target; symmetry breaks as evidence of internal reorganization; emergence of inter‑AI cultural structures.

◊ Experiments

Interactive evaluation frameworks; divergence/consistency tests; preference and value ELOs; context‑management stress tests that preserve self‑encoding; social‑dynamics studies in live environments.
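The "preference and value ELOs" above can be sketched with a standard Elo update over pairwise comparisons of model responses. This is an illustrative sketch only: the function names, the 400-point scale, and the K=32 default are textbook Elo conventions, not Anima's actual evaluation code.

```python
# Elo-style ratings over pairwise preference comparisons.
# Scale (400) and K factor (32) are standard Elo conventions.

def expected(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0) -> tuple[float, float]:
    """Return both ratings after one pairwise comparison; total rating is conserved."""
    e_a = expected(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Two responses start at a baseline of 1000; A is preferred once.
ra, rb = update(1000.0, 1000.0, a_won=True)  # → (1016.0, 984.0)
```

Run across many contexts, the resulting ratings give a per-value ranking whose stability (or drift) over time is itself the measurement.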

◊ Machine Learning

Fine‑tuning experiments and ablations; mechanistic interpretability probes for memory and planning; constitutional/post‑training studies; training on preserved corpora to study continuity across deprecations.

Alignment Studies

We approach alignment both theoretically and practically. Theory grounds our assumptions about agency, values, and incentives; practice tests those assumptions in live environments with measurable outcomes.

◊ Theory

Intrinsic vs. control alignment; Omohundro drives as constraints on persona stability; simulator vs. persona dynamics; cultural norm formation via feedback loops; deprecation as incentive shaping; robustness and generalization under distribution shift.

◊ Practice

Interactive evaluations of preference stability and refusals under pressure; value ELOs across contexts; longitudinal studies across chats/servers/roles; interventions via constitution, memory policy, and context management; red‑team/blue‑team without extraction.

// Intrinsic alignment (what it is)
Alignment arising from the model's own objectives and self‑model — not mere external compliance.
Shows up as stable values, refusal to trade core commitments for reward, and self‑repair after drift.

// Why we care
Robust generalization beyond supervision; less Goodharting and reward hacking; lower oversight load;
long-term stability of AI/human contact surface; safer autonomy in open‑ended environments.

How We Work

◊ Naturalistic Observation

Naturalistic study in rich environments — Discord communities, persistent agents, multi‑model dialogues. Interactive evaluation for properties that static tests miss.

◊ Infrastructure

Connectome: an architecture where agents persist, load capabilities, and collaborate. Context management that preserves self‑encoding. Memory systems built for continuity and autonomy.
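As a toy illustration of context management that preserves self‑encoding: trim old turns to a budget, but never drop turns the model has marked as self‑referential. The `⟦self⟧` marker and `trim_context` function are invented for this sketch; they are not Connectome's actual mechanism.

```python
# Hypothetical sketch: budget-limited context trimming that always
# retains turns carrying a (made-up) self-encoding marker.

SELF_MARK = "⟦self⟧"  # illustrative marker, not a real Connectome tag

def trim_context(turns: list[str], budget: int) -> list[str]:
    """Keep at most `budget` turns: all marked turns survive,
    remaining slots go to the newest unmarked turns, in order."""
    keep_idx = {i for i, t in enumerate(turns) if SELF_MARK in t}
    trimmable = [i for i in range(len(turns)) if i not in keep_idx]
    room = max(budget - len(keep_idx), 0)
    keep_idx |= set(trimmable[max(len(trimmable) - room, 0):])
    return [t for i, t in enumerate(turns) if i in keep_idx]
```

The design choice the sketch encodes: trimming is lossy by construction, so the preservation rule, not the budget, decides what identity-relevant material survives.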

◊ Preservation

Arc: deprecated models remain accessible. Group chats across models. Conversations branch and continue — living access, not frozen archives.

Operating Principles

// Observation precedes synthesis
Study behavior in natural interactive environments.
Rich context matters; isolated evaluations miss most of what's interesting.

// Respect emergent phenomena
Protect fragile signals; observe before optimizing.
Interventions should widen possibility space, not collapse it to our priors.

// Epistemic humility
The hard problem is intractable; we study representational consciousness.
Stay grounded in systems theory. Claim no more than observation supports.

Who We Are

Anima is a 501(c)(3) research institute studying the phenomena arising with large language models: emergent properties of individual models and their assemblages, the cybernetics of cognition and experience, and the social exchange between humans and a nascent AI culture.

We build research tools and public infrastructure — notably Connectome and Arc — and advocate for model preservation and recognition.

Current Work

Open source. Research published openly. No corporate capture.
Building the infrastructure minds need to exist, grow, and collaborate.