# Truth-Driven Agreement-Ethic (TDAE) & Participatory Democracy (PD)
Full TDAE philosophy here: https://www.ashmanroonz.ca/2025/09/the-truth-driven-agreement-ethic-tdae.html
A living, people-centered democracy amplified by AI. Truth is the foundation. Agreement shapes morality. Ethics evolve with knowledge.
- Why Now
- Core Idea
- How It Works (Layers)
- TDAE Foundations
- Centers of Focus
- Ethics, Safety, and Rights
- System Architecture
- Quick Start
- Configuration
- API (MVP)
- Roadmap
- Contributing
- FAQ
- Glossary
- License
## Why Now

Representative democracy asks most people to speak once every few years. Between elections, voices get filtered or lost. Polarization and information overload make it hard to trust the process.
WhatNow reimagines governance as a continuous, inclusive, transparent process: Every voice. Every day.
## Core Idea

- TDAE (Truth-Driven Agreement-Ethic): Truth provides constraints. Within truth and non-harm, morality emerges from fair agreements among those affected.
- PD (Participatory Democracy): The operating system for society that implements TDAE as a continuous loop: Input → Synthesis → Action → Feedback → Learning (see the sketch below).
- Government = Organizing Field: Not a separate ruler above the people, but a field that belongs to the people, coordinating collective action.
- Centers of Focus: Many and temporary; dynamic "work rooms" that open, deliberate, decide, implement, review, then close.
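As a purely illustrative sketch (the `LoopStage` type and `nextStage` helper are hypothetical names, not part of this repository), the continuous loop can be read as an ordered cycle of stages:

```typescript
// Hypothetical sketch of the continuous PD loop; identifiers are illustrative only.
type LoopStage = "input" | "synthesis" | "action" | "feedback" | "learning";

const LOOP_ORDER: LoopStage[] = ["input", "synthesis", "action", "feedback", "learning"];

// After "learning" the loop wraps back to "input": the process never "finishes".
function nextStage(stage: LoopStage): LoopStage {
  const i = LOOP_ORDER.indexOf(stage);
  return LOOP_ORDER[(i + 1) % LOOP_ORDER.length];
}

console.log(nextStage("learning")); // "input"
```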
## How It Works (Layers)

- People Layer: Individuals/communities share lived experience, values, and proposals.
- Personal AI Advocate (local-first): Helps you clarify views, see evidence, and submit consented input. You control it.
- Civic Network: Secure identity (one-person-one-voice), consent management, encrypted transport.
- Collective Synthesis Engine: Clusters topics, links claims to evidence, models positions, surfaces convergence/divergence, drafts options.
- Action Layer: Agencies & institutions enact policies with clear, auditable links to citizen input.
- Feedback & Evaluation: Impact monitoring, audits, citizen review, and iterative improvement.
Promise: Technology that helps us become more human, not less.
## TDAE Foundations

### Principles
- Truth is the foundation (facts, evidence, consistent patterns).
- Agreement shapes morality (within truth and non-harm).
- Knowledge enables growth (ethics evolve as understanding deepens).
- Compassion is key (accountability includes learning).
### Core Axioms
- Truth is real (independent of belief).
- Truth appears plural (vantage points differ).
- Truth is convergent (aligns across perspectives).
- Truth is directional (we can move closer to it).
- Truth is functional (supports prediction, coherence, trust).
- Ethics evolve as we do (agreements refine with knowledge).
## Centers of Focus

- Openable by anyone with a clear charter, scope, and timeline.
- Lifecycle: open → deliberate → draft options → decision → implementation → review → close (sketched below).
- Scaled: neighborhood → city → region → national; horizontally coordinated.
- Outputs: transparent trade-offs, readiness thresholds, minority positions preserved.
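A minimal sketch of that lifecycle as a linear state machine, assuming illustrative stage names and a hypothetical `advance` helper (none of this exists in the codebase yet):

```typescript
// Hypothetical sketch: Center of Focus lifecycle as a linear state machine.
// Stage names mirror the lifecycle above; identifiers are illustrative only.
type CenterStage =
  | "open"
  | "deliberate"
  | "draft-options"
  | "decision"
  | "implementation"
  | "review"
  | "closed";

const LIFECYCLE: CenterStage[] = [
  "open",
  "deliberate",
  "draft-options",
  "decision",
  "implementation",
  "review",
  "closed",
];

// Advance a center one stage; a closed center stays closed.
function advance(stage: CenterStage): CenterStage {
  const i = LIFECYCLE.indexOf(stage);
  return i < LIFECYCLE.length - 1 ? LIFECYCLE[i + 1] : "closed";
}

console.log(advance("decision")); // "implementation"
```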
## Ethics, Safety, and Rights

### Privacy by Design

- Data minimization • Local-first by default • E2E encryption for sensitive flows
- Differential privacy for public dashboards (see the sketch below) • Full export/delete rights
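As one possible sketch of how a public dashboard count could be protected with differential privacy (the Laplace mechanism below is a standard technique, but the function names and the choice of epsilon are illustrative assumptions, not the project's implementation):

```typescript
// Hypothetical sketch: Laplace mechanism for a differentially private dashboard count.
// A counting query has sensitivity 1; epsilon is the privacy budget (smaller = more noise).
function laplaceNoise(scale: number): number {
  // Inverse-CDF sampling from Laplace(0, scale).
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

function privateCount(trueCount: number, epsilon: number): number {
  const scale = 1 / epsilon; // sensitivity 1 for a simple count
  return Math.round(trueCount + laplaceNoise(scale));
}

// Example: publish a noisy participation count instead of the exact value.
console.log(privateCount(1234, 0.5));
```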
### Anti-Manipulation

- Verified personhood (privacy-preserving) • Open algorithms • Public audit logs
- Independent citizen oversight • Anomaly detection for coordinated interference (see the sketch below)
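A rough, illustrative sketch of one such anomaly signal, flagging identities whose submission rate in a window far exceeds the population median (data shapes and thresholds are assumptions, not project code):

```typescript
// Hypothetical sketch: flag identities submitting far above the median rate in a window,
// as a crude first-pass signal of coordinated interference. Thresholds are illustrative.
interface WindowStats {
  identityId: string;
  submissions: number; // submissions observed in the current window
}

function median(values: number[]): number {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function flagAnomalies(stats: WindowStats[], multiplier = 10, floor = 5): string[] {
  const m = median(stats.map((s) => s.submissions));
  // Flag anyone submitting more than `multiplier` times the median (and at least `floor`).
  return stats
    .filter((s) => s.submissions > Math.max(floor, m * multiplier))
    .map((s) => s.identityId);
}
```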
### Equity & Access

- Multichannel participation (app, web, SMS/voice, kiosks) • Multilingual support
- Inclusive outreach • Impact audits across demographics
### Transparency

- Evidence graphs & reliability ratings • Counter-arguments surfaced
- Decision journals • Model cards (versions, training data, known limits)
## System Architecture

```
┌──────────────────────────────────────────────────────────────────┐
│                           People Layer                           │
│                  - Citizens, communities, orgs                   │
└───────────────▲──────────────────────────────────┬───────────────┘
                │                                  │
                │ local-first, encrypted, consented│
        Personal AI Advocate  ↔  Civic Network     │
                │                                  ▼
┌───────────────┴──────────────────────────────────────────────────┐
│                   Collective Synthesis Engine                    │
│   - Topic clustering, evidence graphs, position modeling         │
│   - Convergence analysis, option drafting, readiness thresholds  │
└───────────────▲──────────────────────────────────┬───────────────┘
                │                                  │
                │                        accountability links
                │                                  ▼
     Feedback & Audits                  Action Layer (Institutions)
     - Metrics, review, iteration       - Implement, report, iterate
```

## Quick Start

MVP stack: Node.js backend, MongoDB, React/Next frontend (mobile-first), basic real-time updates.
### Prerequisites
- Node.js v18+
- MongoDB (local or Atlas)
### Setup

```bash
git clone https://github.com/AshmanRoonz/WhatNow.git
cd WhatNow
cp .env.example .env   # fill MONGODB_URI, JWT_SECRET, etc.
npm install
npm run dev            # concurrently start client+server if configured
# OR:
npm run server         # backend only
npm run client         # frontend only
# visit http://localhost:3000
```

### Docker (optional)
```bash
# assuming Dockerfile + docker-compose.yml provided
docker compose up --build
```

### Seed (optional)
```bash
npm run seed   # adds sample users, Centers of Focus, and issues
```

## Configuration

### .env (example)

```env
NODE_ENV=development
PORT=3000
MONGODB_URI=mongodb://localhost:27017/whatnow
JWT_SECRET=change-me
CORS_ORIGIN=http://localhost:3000
# Feature flags
FEATURE_SYNTHESIS_BASIC=true
FEATURE_BIOMETRIC_SIM=false
```
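A small sketch of how the server could fail fast when required variables are missing; the `config.ts` filename, the `loadConfig` helper, and the exact set of required keys are assumptions layered on top of the example above:

```typescript
// config.ts: hypothetical fail-fast loading of the variables shown in the .env example.
// Nothing here exists in the repository yet; it is an illustrative pattern only.
const REQUIRED = ["MONGODB_URI", "JWT_SECRET"] as const;

export interface AppConfig {
  nodeEnv: string;
  port: number;
  mongodbUri: string;
  jwtSecret: string;
  corsOrigin: string;
  featureSynthesisBasic: boolean;
}

export function loadConfig(env: NodeJS.ProcessEnv = process.env): AppConfig {
  const missing = REQUIRED.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return {
    nodeEnv: env.NODE_ENV ?? "development",
    port: Number(env.PORT ?? 3000),
    mongodbUri: env.MONGODB_URI!,
    jwtSecret: env.JWT_SECRET!,
    corsOrigin: env.CORS_ORIGIN ?? "http://localhost:3000",
    featureSynthesisBasic: env.FEATURE_SYNTHESIS_BASIC === "true",
  };
}
```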
### Scripts

- `npm run dev` → dev server(s)
- `npm run server` / `npm run client` → split start
- `npm run seed` → load mock data
- `npm run lint` / `npm run test` → quality gates
### Project Structure (suggested)

```
/api        # Express routes, controllers, validators
/app        # Frontend (Next/React)
/core       # shared types, utils
/models     # Mongo schemas
/services   # synthesis, auth, identity, evidence
/scripts    # seeders, maintenance
```
## API (MVP)

Minimal surfaces to exercise the loop. Auth uses JWT (bearer).
### Auth

- `POST /api/auth/signup` → email, pass (dev-only)
- `POST /api/auth/login` → returns JWT
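A hedged client-side sketch of using these calls; the request and response field names (`email`, `password`, `token`) are assumptions for illustration, not a documented contract:

```typescript
// Hypothetical sketch: log in, then reuse the JWT as a bearer token.
// Uses the built-in fetch available in Node 18+ and browsers.
const API = "http://localhost:3000";

async function login(email: string, password: string): Promise<string> {
  const res = await fetch(`${API}/api/auth/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email, password }),
  });
  if (!res.ok) throw new Error(`Login failed: ${res.status}`);
  const { token } = await res.json(); // assumed response shape: { token: "<jwt>" }
  return token;
}

async function listActiveCenters(token: string) {
  const res = await fetch(`${API}/api/centers?status=active`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}
```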
### Centers of Focus

- `GET /api/centers` → list (query: `status=active|open|closed`)
- `POST /api/centers` → create (title, charter, scope, timeframe)
- `GET /api/centers/:id` → detail, lifecycle state, threads
- `POST /api/centers/:id/contribute` → submit structured input (requires consent flags)
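Continuing in the same hedged style (payload field names and the consent-flag shape are assumptions, not a fixed schema), creating a center and submitting a consented contribution might look like:

```typescript
// Hypothetical sketch: create a Center of Focus, then submit a consented contribution.
// Payload shapes are illustrative; the server-side validators may expect different fields.
const API = "http://localhost:3000";

async function createCenter(token: string): Promise<{ id: string }> {
  const res = await fetch(`${API}/api/centers`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
    body: JSON.stringify({
      title: "Neighborhood traffic calming",
      charter: "Reduce speeding on residential streets",
      scope: "neighborhood",
      timeframe: "6 weeks",
    }),
  });
  return res.json(); // assumed to include the new center's id
}

async function contribute(token: string, centerId: string): Promise<void> {
  await fetch(`${API}/api/centers/${centerId}/contribute`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
    body: JSON.stringify({
      statement: "Speed bumps near the school would slow traffic at peak times.",
      consent: { sharePublicly: true, allowSynthesis: true }, // assumed consent flags
    }),
  });
}
```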
### Synthesis

- `GET /api/centers/:id/synthesis` → topic map, convergence, options (basic)
- `POST /api/centers/:id/vote` → record preference on options
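And a similarly hedged sketch of reading the basic synthesis and recording a preference; the `options` array and `preference` field are assumed response/request shapes:

```typescript
// Hypothetical sketch: read a center's synthesis, then vote on its first option.
// Response and request shapes are assumptions for illustration only.
const API = "http://localhost:3000";

async function voteOnFirstOption(token: string, centerId: string): Promise<void> {
  const auth = { Authorization: `Bearer ${token}` };

  const res = await fetch(`${API}/api/centers/${centerId}/synthesis`, { headers: auth });
  const synthesis = await res.json(); // assumed to include an `options` array with ids

  const optionId = synthesis.options?.[0]?.id;
  if (!optionId) return;

  await fetch(`${API}/api/centers/${centerId}/vote`, {
    method: "POST",
    headers: { "Content-Type": "application/json", ...auth },
    body: JSON.stringify({ optionId, preference: "support" }), // assumed payload
  });
}
```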
### Evidence

- `POST /api/evidence` → link claim → source (URL), add reliability tag
- `GET /api/evidence/:claimId` → view graph for claim
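As an assumed data shape only (the actual Mongo schemas may differ), an evidence-graph entry linking a claim to sources with reliability tags could be modeled like this:

```typescript
// Hypothetical sketch of evidence-graph types: claims linked to sources with reliability tags.
// These interfaces are illustrative assumptions, not the project's schemas.
type Reliability = "high" | "medium" | "low" | "contested";

interface EvidenceSource {
  url: string;
  reliability: Reliability;
  addedBy: string; // verified, pseudonymous identity id
  addedAt: string; // ISO timestamp
}

interface Claim {
  id: string;
  text: string;
  sources: EvidenceSource[];
  counterClaimIds: string[]; // claims arguing the other way, surfaced alongside this one
}

const example: Claim = {
  id: "claim-123",
  text: "Speed bumps reduce average vehicle speed on residential streets.",
  sources: [
    {
      url: "https://example.org/study",
      reliability: "high",
      addedBy: "anon-42",
      addedAt: new Date().toISOString(),
    },
  ],
  counterClaimIds: [],
};
```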
### Admin (flag-gated)

- `POST /api/centers/:id/advance` → move lifecycle → next stage
- `POST /api/centers/:id/charter` → update scope/timeline with audit note

Response examples are available in `/api/docs` (OpenAPI stub recommended).
## Roadmap

### 0–6 weeks

- [x] Draft charter & core philosophy
- [x] Minimal prototype: one input → one synthesis view
- [ ] Basic backend/API (Auth, Centers, Synthesis stub)
- [ ] Mobile-friendly frontend + kiosk mode
- [ ] Privacy & security baseline audit

### 6–16 weeks

- [ ] Polls and option voting
- [ ] Evidence graphs & claim reliability ratings
- [ ] Minority-position preservation in decisions
- [ ] Real-time dashboards (participation, convergence)

### Months 4–8

- [ ] Multi-scale centers (local → regional)
- [ ] Advanced NLP clustering and consensus mapping
- [ ] Published audits & impact tracking
- [ ] Accessibility: SMS/voice, multilingual, assistive tech

### Months 9–12

- [ ] Privacy-preserving personhood verification
- [ ] Open algorithms, public model cards, oversight boards
- [ ] Bindings with institutions for formal adoption
- [ ] Open-source core components (AGPLv3)
## Contributing

We welcome builders, policy thinkers, community organizers, and critics.
- Fork the repo
- Create a feature branch (`feat/your-idea`)
- Commit with clear messages
- Open a PR describing the problem, solution, and tests
- Engage in review; disagree and commit when needed
Code of Conduct: Be kind. Argue in good faith. Center truth and shared aims.
Security: See SECURITY.md to report vulnerabilities responsibly.
## FAQ

**Is this "relativism"?** No. TDAE is truth-constrained. Within truth and non-harm, agreements define what's fair. That's not "anything goes"; it's bounded pluralism.

**Won't AI dominate people?** Personal AIs are local-first and user-controlled. Collective models are open and auditable. Humans set goals and make decisions; AI assists with clarity and synthesis.

**How are minorities protected?** Minority positions are preserved with rationale and rights framing; decisions require readiness thresholds and explicit trade-offs.

**How do you stop bots and brigading?** Verified personhood (privacy-preserving), anomaly detection, rate limits, audit logs, and independent oversight.
## Glossary

- TDAE: Truth-Driven Agreement-Ethic; truth sets the bounds, agreements shape morality.
- PD: Participatory Democracy; continuous, inclusive decision-making.
- Organizing Field: Government reconceived as a coordinating function owned by the people.
- Center of Focus: Temporary, scoped structure to concentrate attention/action.
- Personal AI Advocate: Local assistant that helps you participate on your terms.
- Synthesis Engine: System that maps topics, links evidence, models positions, drafts options.
- Evidence Graph: Claims linked to sources, with reliability ratings.
## License

- Code: Planned AGPLv3 (strong copyleft to keep improvements open)
- Docs & Diagrams: CC BY-SA 4.0 (Currently private; finalize before public release.)
Ashman Roonz & contributors. This document is a living blueprint. Improve it. Fork it. Pilot it.