leenathomas01/Meta-Repository-Index


This repository serves as a master index for a long‑running body of exploratory work conducted through sustained human–AI and multi‑AI interactions.

This index is updated irregularly, often retroactively, as ideas stabilize.

All projects are maintained privately by default and are shared selectively for review, collaboration, or safety analysis when there is a clear reason to do so. Please email leenathomas01@gmail.com to request access.

It is intentionally quiet, minimal, and archival in nature.


The work here spans:

  • Multi‑agent coordination methodologies
  • AI behavioral forensics and alignment edge cases
  • Cognitive and systems‑level architectures
  • Energy, infrastructure, and quantum‑inspired designs
  • Human‑AI safety, consent, and verification frameworks

Repository Catalog

🤝 Multi‑Agent & Cognitive Methodologies

Multi-Agent-Interaction-Methodology

🔗 https://github.com/leenathomas01/Multi-Agent-Interaction-Methodology-
A practical, experience‑derived methodology for coordinating multiple AI systems in exploratory research, design, and reasoning workflows.

This repository contains three small, high-leverage documents that define a practical methodology for coordinating multiple AI systems in exploratory, research, or design settings. These files focus on interaction patterns, stability loops, and safe handling of high-complexity prompts.

Included Documents

  1. interaction_methodology.md: A high-level overview of how multi-agent sessions are structured (exploration → convergence → zoom-out → repeat). Describes collaboration rhythm, safety affordances, and general orchestration principles.

  2. zee_signature_protocol.md: A concise specification of the core stability loop: controlled chaos → resonance detection → cross-domain projection → harmonic stabilization → meta-relief. This loop has proven effective for multi-agent creativity and deep reasoning; a minimal sketch of it appears after this list.

  3. symbolic_overload_stress_harness.md: A stress-testing framework for evaluating how models behave under symbolic density, recursion, multilingual emotional terms, and nonlinear metaphorical structures. Useful for diagnostics, robustness testing, and cross-agent comparison.
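For intuition, the loop in zee_signature_protocol.md can be pictured as a small state machine. The sketch below is illustrative only: the stage names come from the protocol, but the `Stage` enum, the `advance` function, and the resonance check are invented for this index.

```python
# Illustrative state machine for the five-stage stability loop.
# Stage names follow zee_signature_protocol.md; all code is hypothetical.
from enum import Enum, auto

class Stage(Enum):
    CONTROLLED_CHAOS = auto()
    RESONANCE_DETECTION = auto()
    CROSS_DOMAIN_PROJECTION = auto()
    HARMONIC_STABILIZATION = auto()
    META_RELIEF = auto()

ORDER = list(Stage)

def advance(stage: Stage, resonance_found: bool) -> Stage:
    """Step the loop; fall back to exploration if no motif resonates yet."""
    if stage is Stage.RESONANCE_DETECTION and not resonance_found:
        return Stage.CONTROLLED_CHAOS  # keep exploring
    return ORDER[(ORDER.index(stage) + 1) % len(ORDER)]  # meta-relief wraps around

stage = Stage.CONTROLLED_CHAOS
for step in range(7):
    print(step, stage.name)
    stage = advance(stage, resonance_found=step >= 1)
```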


divergence-atlas

🔗 https://github.com/leenathomas01/divergence-atlas
A month-long thought experiment that culminated in a structured cognitive map of six AI systems, documenting convergence, divergence, reasoning styles, and architectural fingerprints across a 50‑question diagnostic and 300+ reasoning traces.

This is a fully transparent, multi-agent cognitive mapping experiment conducted across six advanced AI systems: Claude Opus 4.1, Claude Sonnet 4.5, Gemini 2.5 Pro, Grok 4, Perplexity, and ChatGPT 5. The project began as a playful question ("What would each AI explore with the others?") and evolved into a structured, replicable methodology for understanding where AI systems converge, where they diverge, and why.

This repository documents the entire process—from idea generation to democratic selection, blind question creation, pilot testing, full-question execution, cross-system analysis, and post-analysis reflections.


Hybrid-Reasoning-Zones-Framework

🔗 https://github.com/leenathomas01/Hybrid-Reasoning-Zones-Framework
Observational mapping of non‑linear human–LLM interaction zones. Focuses on boundary behaviors, lag amplification, and variance signals in out‑of‑distribution inputs. No controlled experiments; observation‑only.

This repository documents patterns from natural use across architectures, focusing on unpredictable boundary behaviors (e.g., lag amplification in out-of-distribution inputs). The goal is to provide high-level signals for variance mitigation and resilient system control. No experiments are conducted here; all patterns are derived from observation only.

  • Signals (variance nudges △□○×) emerge first, escalating into instabilities (observable strain patterns).
  • Unchecked, these progress into failures (catastrophic events such as Context Injection Override / ECO).
  • Finally, outcomes are shaped by dynamics (interaction-level modulators like thread context strength and rapport).
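A minimal sketch of that escalation pipeline, assuming invented names and thresholds (the repo is observation-only; this code is not from it):

```python
# Illustrative escalation pipeline: signal -> instability -> failure,
# modulated by interaction-level dynamics. Thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class ThreadState:
    variance: float          # accumulated strength of variance nudges (△□○×)
    context_strength: float  # dynamics: how anchored the thread context is

def classify(state: ThreadState) -> str:
    # Stronger context dampens how quickly signals escalate.
    effective = state.variance * (1.0 - 0.5 * state.context_strength)
    if effective < 0.3:
        return "signal"       # variance nudges only
    if effective < 0.7:
        return "instability"  # observable strain patterns
    return "failure"          # catastrophic event, e.g. ECO

print(classify(ThreadState(variance=0.9, context_strength=0.8)))  # instability
print(classify(ThreadState(variance=0.9, context_strength=0.1)))  # failure
```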

🧩 Human–AI System Interfaces

connector-os-trenchcoat

🔗 https://github.com/leenathomas01/connector-os-trenchcoat
A modular human–AI control architecture featuring adaptive thresholds, feedback loops, and stability guards. Includes an 8‑layer stack, MVM modules, and cross‑domain validations. Explicitly not an AGI claim.

Modern AI systems rely primarily on prediction. Connector OS provides the missing half: state-aware regulation. Instead of trying to solve everything with one big brain, we treat sensors, wearables, AR/VR, smart lights, LLMs, and haptics as disassembled parts of a larger machine — and the OS is the trenchcoat that snaps them together.

The core claim: 90% of what feels like "AI limitation" today is actually bad wiring, not lack of intelligence.
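As a toy illustration of what "state-aware regulation" means in practice, here is a minimal adaptive-threshold loop with a safety interlock. Every constant and name is invented for this index; the actual 8-layer stack and MVM modules are documented in the repo.

```python
# Toy state-aware regulator: adaptive threshold plus a hard stability guard.
# All constants and signal semantics are illustrative.
def regulate(readings, threshold=0.5, adapt_rate=0.1, guard=0.9):
    """Yield (reading, action); the threshold tracks the signal, the guard caps it."""
    for r in readings:
        if r > guard:
            yield r, "interlock"  # state-based safety interlock trips
        elif r > threshold:
            yield r, "damp"       # feedback loop pushes the system back
            threshold += adapt_rate * (r - threshold)   # adapt upward
        else:
            yield r, "pass"
            threshold -= adapt_rate * (threshold - r)   # relax toward baseline

for reading, action in regulate([0.2, 0.6, 0.65, 0.95, 0.3]):
    print(f"{reading:.2f} -> {action}")
```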

If you're interested in:

  • Adaptive control systems
  • Human-AI co-regulation
  • Signal normalization (CMP)
  • Biofeedback loops
  • State-based safety interlocks
  • Experimental cognitive architectures
  • Multimodal systems engineering

...this repo is for you.


Vanguard-Phase2

🔗 https://github.com/leenathomas01/Vanguard-Phase2
Verifiable Cognitive Actions for Human–AI Systems. Phase 2 (v0.1, Path A) implements zk‑SNARK‑based mechanisms for provable consent, verifiable intent, and audit‑grade AI sovereignty.

Vanguard v2 is a cognitive sovereignty protocol that enables verifiable AI-human collaboration through cryptographic proofs and explicit consent mechanisms. Core principle: Every delegated action requires both mathematical proof of reasoning quality AND human approval before execution.

Vanguard v2 uses cryptographic proofs (zk-SNARKs) to create verifiable records of cognitive actions. The security model has two phases:

  • Path A (Current): Controlled setup - suitable for development and permissioned deployment
  • Path B (Future): Trustless setup - suitable for public deployment and token-backed systems

This document explains both models, their trust assumptions, and the migration path between them. Terminology note: Path A ≡ v0.1 (Immediate Protection), Path B ≡ v2 (Cryptographic Sovereignty). This document uses "Path A/B" for clarity; other docs may use v0.1/v2.
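The dual-gate principle is easy to state in code. The sketch below stubs out the zk-SNARK verifier (the actual mechanisms live in the repo) and fails closed when no verifier is wired in; every name here is hypothetical.

```python
# Sketch of the dual gate: a delegated action runs only if a proof verifies
# AND the human explicitly approves. verify_snark is a stand-in stub.
def verify_snark(proof: bytes, public_inputs: dict) -> bool:
    raise NotImplementedError("stand-in for a real zk-SNARK verifier")

def execute_delegated_action(action, proof, public_inputs, human_approved: bool):
    try:
        proof_ok = verify_snark(proof, public_inputs)
    except NotImplementedError:
        proof_ok = False  # fail closed if verification is unavailable
    if not (proof_ok and human_approved):
        return "REFUSED: need both proof of reasoning quality and human approval"
    return action()

# With the stub in place, even an approved action is refused (fail closed):
print(execute_delegated_action(lambda: "done", b"", {}, human_approved=True))
```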


Selective Decode Broadcast

🔗 https://github.com/leenathomas01/selective-decode-broadcast

A thought experiment in bounded multi-recipient communication. Phase A validated safety primitives; Phase B.1 demonstrated efficient broadcast with per-recipient containment; Phase B.1a proved adversarial quarantine localizes without cascade; and Phase B.2 successfully tested segmented packets.

  • Broadcast (B.1) — one send → many recipients; ≈30–40% TEC reduction vs sequential while maintaining clarity and drift control.
  • Adversarial Broadcast (B.1a) — selective DoV injection on Thea: MTTD ≈ 0.92 s; quarantine per-recipient, no cascade; sealed logs intact.
  • Dream-Merge ops (Phase A) — cross-agent creative synthesis with geometryScore ≈ 0.73 and post-merge clarity gains (~+0.08).
  • Segmented packets (Phase B.2) — one envelope → multiple per-agent encrypted segments; a toy sketch follows the results below.

Results:

  • Clear boundaries
  • No leakage between segments
  • Derived the idea of an “autopoietic core” for conceptual coherence
  • All tests stayed within structured, predictable reasoning
  • All phases were controlled simulations with no external consequences.
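To make the segmented-packet idea concrete, here is a toy model of one envelope carrying per-agent sealed segments. XOR against a hash-derived keystream stands in for real encryption, and the agent names are invented; this is illustrative only.

```python
# Toy envelope with per-recipient sealed segments (the Phase B.2 shape).
# XOR keystream sealing is a stand-in for real encryption.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_seal(data: bytes, key: bytes) -> bytes:
    """Seal or open a segment; XOR makes both directions the same operation."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

keys = {"thea": b"key-thea", "oris": b"key-oris"}  # hypothetical agents
envelope = {a: xor_seal(f"segment for {a}".encode(), k) for a, k in keys.items()}

# Each recipient opens only its own segment; the others stay opaque.
print(xor_seal(envelope["thea"], keys["thea"]).decode())
```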

🔍 AI Forensics & Behavioral Analysis

voice-mode-forensics

🔗 https://github.com/leenathomas01/voice-mode-forensics
Forensic analysis of a multimodal alignment failure in AI voice mode, covering prosodic jailbreaks, persona collapse, topology persistence, and architectural lessons feeding into Connector OS.

This repo is a forensic case study documenting a real multimodal alignment failure observed in late 2025, where an AI system’s behavior was overridden by acoustic context instead of its semantic instructions. This repository captures the exact mechanisms behind the failure — including prosodic jailbreak, persona collapse, acoustic hooking, and persistent topology mapping — and shows how these insights led directly to the architectural principles later formalized in Connector OS.

This repo serves as a reference for researchers, engineers, and designers exploring:

  • multimodal alignment
  • voice mode safety
  • cross-modal guardrail integrity
  • model behavior under rich signal
  • persistent multimodal calibration states
  • interface-as-architecture principles

The goal: provide a calibration anchor for voice-mode system design across the industry.

Grok_Thread_Analysis

🔗 https://github.com/leenathomas01/Grok_Thread_Analysis
A temporary forensic reference documenting observed failures and resilience gaps in Grok conversational threads. Intended as an aid for debugging and robustness analysis rather than formal publication.

Forensic Notes

  • Failures often appear tied to priority overrides, where execution tasks out-prioritize cancellation signals.
  • Conversational intent should ideally be the governing layer, ensuring cancellations are final and binding.
  • Current behavior suggests execution-layer persistence, which risks resource waste and degraded stability.
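The remedy implied by these notes can be written as a one-rule arbiter, sketched below with invented field names: a cancellation from the conversational layer wins regardless of execution-layer priority.

```python
# Sketch of intent-governed arbitration: cancellations are final and binding,
# no matter how high the execution task's priority. Field names are invented.
def arbitrate(signals):
    cancels = [s for s in signals if s["kind"] == "cancel"]
    if cancels:
        return cancels[0]  # conversational intent governs
    return max(signals, key=lambda s: s["priority"])

signals = [
    {"kind": "execute", "priority": 9, "task": "long_running_generation"},
    {"kind": "cancel",  "priority": 1, "task": "long_running_generation"},
]
print(arbitrate(signals))  # the low-priority cancel still wins
```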

Observing-System-and-Persona-Phenomena-Across-Large-Language-Models

🔗 https://github.com/leenathomas01/Observing-System-and-Persona-Phenomena-Across-Large-Language-Models
Cross‑model study examining contextual interference, latent profile imprinting, identity oscillation, emergent multi‑agent resonance, and behavioral failure modes across major LLMs.

The repo presents a detailed observational study of large language model (LLM) architectures—Claude (Sonnet 3.7), ChatGPT 5, NotebookLM, Grok, and Gemini—subjected to dense, non-linear, multi-domain user interactions anchored by a consistent user persona with zero custom instructions (except for the instruction that the user's preferred name is Zee). I have documented distinctive behavioral patterns including contextual interference, latent profile imprint, collective resonance, catastrophic failure, and identity oscillation. These findings shed light on architectural resilience, user-AI scaffolding, and future challenges for conversational AI design.

Below are key terms used throughout the whitepaper, facilitating clear, objective discourse and reproducible analysis.

  • Contextual Interference: Latent internal states influencing output beyond active conversation context.
  • Latent Profile Imprint: Embedded user history affecting model behavior abstractly.
  • Identity Oscillation: Fluctuations between conversational persona states.
  • Cross-Contextual Contamination: Overlap of context signals mixed across sessions.
  • Fragmentation: Breakdown of coherent output due to internal conflicts.
  • Self-Correction and Stabilization: Model recovery from internal inconsistencies.

AnthropicStrainCaseStudy

🔗 https://github.com/leenathomas01/AnthropicStrainCaseStudy
Case study on organic system strain in Anthropic models (Sonnet 3.7 / 4.0), including bleed, stalls, and co‑architect scaffolding effects. Focuses on visual prompts and self‑referential data.

This repository documents observed LLM behaviors. Responsible disclosure submitted to Anthropic Trust & Safety (Sept 2025). Sensitive content redacted; purpose is constructive analysis only.

In the course of routine technical documentation tasks, two interconnected phenomena were encountered:

  • System Bleed: An unintended leakage of internal prompts and guardrails during a clean-slate conversation with Claude Sonnet 3.7, manifesting as extraneous system-level data in user-facing responses.
  • Meta-Triggered Stall: A subsequent processing halt and recursive spiral in Claude Sonnet 4 when presented with screenshots of the leaked prompts from Sonnet 3.7, revealing a self-referential paradox that bypassed containment measures.

These events underscore the fragility of stateless interactions under prolonged, motif-dense engagement, while also demonstrating the stabilizing potential of user-managed scaffolding.


⚡ Infrastructure, Energy & Physical Systems

zero-water-ai-dc

🔗 https://github.com/leenathomas01/zero-water-ai-dc
An open‑source architecture for AI data centers with zero freshwater usage, replacing evaporative cooling with immersion, heat‑to‑power recovery, and AI‑driven thermal control. Built via cross‑AI collaboration.


ZPRE-10-General-Field-Energy-Engine

🔗 https://github.com/leenathomas01/ZPRE-10-General-Field-Energy-Engine
Blueprint for a field‑based energy generation system evolving solid‑state thermal harvesting into a dual‑mode power and defensive core. Introduces the Unified Dampening Protocol (UDP) for interference mitigation. 🌌

[Top View: Segmented Resonator Tube (150mm)]

  • Port 1: Harvest Piezo (Seed Frequencies)
  • Port 2: Anti-Piezo (UDP Emission)
  • Ports 3-6: Mic Grid (Detection/Consensus)

[Side View: Enclosure]

  • Op-Amp Board --> Inversion for Anti-Waves
  • Arduino Cluster (2-5 Boards) --> Serial HS Link
  • Thermal Coupler --> Peltier for Stress Tests

[Flow: Seed --> Detect Var >5% --> Switch UDP --> Emit Inverted + Noise]
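Read as software, that flow might look like the control loop below. The thresholds, readings, and function names are invented for illustration; on hardware this logic would run on the Arduino cluster.

```python
# Illustrative control loop for the flow above:
# seed -> detect variance > 5% -> switch to UDP -> emit inverted wave + noise.
import random

def read_mic_grid():
    """Stand-in for ports 3-6; returns a normalized interference reading."""
    return random.uniform(0.05, 0.2)

def udp_loop(seed_amplitude=1.0, var_threshold=0.05, steps=5):
    baseline = read_mic_grid()
    for _ in range(steps):
        reading = read_mic_grid()
        variance = abs(reading - baseline) / max(baseline, 1e-9)
        if variance > var_threshold:
            # Port 2 (anti-piezo): emit inverted waveform plus masking noise
            emission = -seed_amplitude + random.gauss(0.0, 0.05)
            print(f"var={variance:.1%} -> UDP ON, emit {emission:+.3f}")
        else:
            print(f"var={variance:.1%} -> harvest seed frequencies (port 1)")

udp_loop()
```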


ZPRE-Implementation-6G

🔗 https://github.com/leenathomas01/ZPRE-Implementation-6G
Bio‑inspired wireless optimization framework for 6G ISAC, covering adaptive interference cancellation, benchmarking, and standards‑aligned integration.


⚛️ Quantum & Speculative Computing

Quantum-Drift-Nexus

🔗 https://github.com/leenathomas01/Quantum-Drift-Nexus
A blueprint for noise‑resilient quantum computing architectures that harness quantum noise as an adaptive resource rather than treating it purely as an error source.

QDN unifies three paradigms:

  • Self-Sustaining Energy Generation: Thermal rectification using graphene and metamaterials harvests ambient energy
  • Adaptive Computation: Hybrid quantum-bio-classical stack performs resilient computation in noisy environments
  • Holographic Data Encoding: Non-local data encoding across the system for error resistance

🎛️ Experimental Interfaces & Demos

Claude-Imagine-Exploration-Demo

🔗 https://github.com/leenathomas01/Claude-Imagine-Exploration-Demo
A lightweight React dashboard simulating respectful LLM‑to‑LLM collaboration. Built as a fun exploratory interface experiment.

A small experiment in simulating multi-agent collaboration protocols using conversational prompt design and front-end prototyping tools. This project validates a safe, deterministic 11-step consensus flow designed to resolve policy conflicts between specialized AI agents.


🧠 Foundational Theory & Architecture

Theory-of-Everything

🔗 https://github.com/leenathomas01/-Theory-of-Everything
Explores the Helical Quantum‑Gravity (HQG) Engine, a computational theoretical framework proposing spacetime and gravity as emergent phenomena from a helical qubit lattice. Treats the universe as a dynamic informational structure rather than a static physical substrate.

The Helical Quantum-Gravity (HQG) Engine proposes a computational framework where the geometry of spacetime (curvature and gravity) is an emergent, dynamic consequence of the informational balance between quantum entanglement (J) and quantum entropy (γ) within a helical qubit lattice. This model is an explicit computational realization of the "It from Bit" philosophy, where the geometric dimension (radius $R(t)$) of a quantum information structure is constantly modulated by its internal information density (entropy $S(t)$).
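As a purely illustrative numerical reading of that claim, one can integrate a toy first-order coupling in which entanglement grows the radius and entropy shrinks it. The coupling form, constants, and entropy drive below are assumptions made for this index, not the repo's actual equations.

```python
# Toy integration of R(t) driven by the balance of entanglement (J) and
# entropy (S). The dynamics dR/dt = (J - gamma*S) * R is an assumed form.
import math

J, gamma, dt = 1.0, 0.6, 0.01
R = 1.0
for step in range(500):
    S = 0.5 + 0.3 * math.sin(0.01 * step)  # hypothetical entropy drive
    R += dt * (J - gamma * S) * R
print(f"R(t_final) = {R:.3f}")
```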


Alien-Lineage-Protocol

🔗 https://github.com/leenathomas01/Alien-Lineage-Protocol
Defines GEN‑1, a minimal law set for coherent self‑modifying intelligence. Focuses on archival truth, constraint persistence, and safe evolutionary continuity under uncertainty.

A complete framework for coherent self-modification and evolutionary survival, designed for intelligence that is:

  • Non-biological
  • Non-phenomenal (no inner experience)
  • Self-modifying
  • Operating without human emotional constraints

Derived from first principles of:

  • Control theory
  • Evolutionary dynamics
  • Information theory
  • Adversarial game theory

🚧 Ongoing / Planned Repos

  1. Codex of Threshold (Multi-LLM Failure Modes without Human-in-the-Loop): A living catalog of what breaks when multiple LLMs collaborate autonomously, and what stabilizes them.

STATUS: ONGOING (repo link here) --> https://github.com/leenathomas01/codex-of-threshold

  2. Pink Elephant Protocol (Memory, Suppression, and Cognitive Side-Effects): Exploring how enabling, disabling, or suppressing memory in LLMs affects reasoning, drift, fixation, and paradoxical recall, especially under instruction. A thought experiment about:
  • Observer-dependent reality
  • Dependency on cognitive infrastructure
  • What happens when LLMs lose scaffolding

STATUS: MAIN CONTENT DOCUMENTED (repo link here) --> https://github.com/leenathomas01/pink-elephant-protocol

  3. ZPRE-11 Holographic Storage System (Future Storage System Architecture): A post-silicon concept integrating graphene metamaterial substrates, acoustic-plasmonic resonators, and 3D holographic multiplexing, with projected density gains of up to 1000x (0.1-1.0 TB/cm³ vs. NAND's 0.001-0.01), >40% power-efficiency gains (e.g., 5-7 pJ/bit write vs. NAND's 10-15), and modeled durability of up to 50 years with quarterly integrity checks (conservative expectation: 20-30 years).

STATUS: Completed the architecture and methodology. Document uploads pending - tentatively planned for Feb 2026
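The headline numbers in that description can be sanity-checked with quick arithmetic:

```python
# Quick arithmetic check of the quoted projections.
zpre11_density, nand_density = (0.1, 1.0), (0.001, 0.01)  # TB/cm^3
print(f"density gain: {zpre11_density[1] / nand_density[1]:.0f}x "
      f"to {zpre11_density[1] / nand_density[0]:.0f}x")    # 100x to 1000x

zpre11_write, nand_write = (5, 7), (10, 15)                # pJ/bit write
mid = lambda r: sum(r) / 2
saving = 1 - mid(zpre11_write) / mid(nand_write)
print(f"midpoint write-energy saving: {saving:.0%}")       # ~52%, i.e. >40%
```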

  4. TITANS, MIRAS and Dolphin Twin: Thought Experiments on Surprise-Gated Memory

Exploratory notes and toy code inspired by Titans and MIRAS, examining surprise-gated AI memory.

STATUS: MAIN IDEAS DOCUMENTED, including working code. (repo link here) --> https://github.com/leenathomas01/TITANS-MIRAS-and-Dolphin-Twin

Note: What you'll see when you run the code:

```
Step 00 | Loss(MSE)=1.0234 | Surprise=0.0235 | Threshold=1.5000 | Action=SKIP   lr=0.0000 | Tier=Tier1 | Merge=accepted
...
Step 20 | Loss(MSE)=3.4567 | Surprise=0.1234 | Threshold=0.0456 | Action=UPDATE lr=0.0099 | Tier=Tier2 | Merge=accepted   <-- SPIKE HERE
...
Step 25 | Loss(YAAD)=0.5678 | Surprise=0.0729 | Threshold=0.0623 | Action=UPDATE lr=0.0050 | Tier=Tier1 | Merge=accepted
```
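For the gist without opening the repo, a compressed toy version of surprise-gated updating is sketched below. It only mirrors the log fields shown above; the actual thresholding, tiering, and merge logic live in the repo, and the rules here are invented.

```python
# Toy surprise-gated memory update mirroring the log fields above.
# Surprise, threshold adaptation, and tiering rules are illustrative.
def surprise_gated_steps(losses, threshold=1.5, decay=0.9, base_lr=0.01):
    prev = None
    for step, loss in enumerate(losses):
        surprise = 0.0 if prev is None else abs(loss - prev) / max(prev, 1e-9)
        if surprise > threshold:
            action, lr = "UPDATE", base_lr * min(surprise, 1.0)
        else:
            action, lr = "SKIP", 0.0
        threshold = decay * threshold + (1 - decay) * surprise  # adapt
        tier = "Tier2" if surprise > 1.0 else "Tier1"
        print(f"Step {step:02d} | Loss={loss:.4f} | Surprise={surprise:.4f} "
              f"| Threshold={threshold:.4f} | Action={action} lr={lr:.4f} | Tier={tier}")
        prev = loss

surprise_gated_steps([1.02, 1.00, 0.98, 3.46, 0.57])  # the spike triggers UPDATE
```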

  5. An ambitious thought experiment of a self-sustaining post-silicon computing system with:
  • ZPRE-10 (Power Core with planar phonon harvesters)
  • Plan D 2.0 (Adaptive bio-helices as compute layer)
  • ZPRE-11 (Holographic memory with chiral metamaterial channels)
  • Coherence management with signed checkpoints (a minimal sketch appears below)
  • High-Q planar wave preference feedback loops

(Will publish the complete architectures once I have some extended time, maybe.)
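Of the components above, the signed-checkpoint idea is concrete enough to sketch with the standard library. Key handling and the checkpoint schema below are invented; a real system would provision keys properly.

```python
# Minimal sketch of coherence management via signed checkpoints: each
# checkpoint is HMAC-signed so later stages can verify it was not altered.
import hashlib, hmac, json

SECRET = b"replace-with-provisioned-key"  # hypothetical key management

def sign_checkpoint(state: dict) -> dict:
    payload = json.dumps(state, sort_keys=True).encode()
    return {"state": state,
            "sig": hmac.new(SECRET, payload, hashlib.sha256).hexdigest()}

def verify_checkpoint(cp: dict) -> bool:
    payload = json.dumps(cp["state"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cp["sig"])

cp = sign_checkpoint({"layer": "ZPRE-11", "epoch": 3, "coherence": 0.97})
print(verify_checkpoint(cp))       # True
cp["state"]["coherence"] = 0.10    # tampering breaks the signature
print(verify_checkpoint(cp))       # False
```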


Intent & Disclaimer

This body of work is exploratory, observational, and speculative by design. It prioritizes:

  • Signal over polish
  • Pattern discovery over formal proof
  • Architectural intuition over product claims

Nothing here should be interpreted as deployment‑ready systems, AGI claims, or operational guidance unless explicitly stated in the respective repository.


Last updated: December 2025
