PhD Research Project Proposal
MishMash WP1: AI for Artistic Performances
Focus: Human-AI Interaction in Live Music Performance
BrainJam is a real-time musical performance system exploring human-AI co-performance through brain-computer interfaces. Unlike traditional AI music generation, BrainJam positions AI as a responsive co-performer rather than an autonomous generator, emphasizing performer agency and expressive control.
- How can AI act as a responsive co-performer rather than an autonomous generator?
- Can brain signals serve as expressive control inputs while maintaining performer agency?
- What interaction patterns emerge when humans and AI collaborate musically in real-time?
- Hybrid Adaptive Agent: Combines symbolic logic (reliability) + optional ML (personalization)
- Real-time Performance: <30ms total latency for live performance
- Performer-Led Design: AI never generates autonomously; all outputs modulate performer input (sketched below)
- BCI as Control: EEG/fNIRS/EMG signals treated as expressive inputs, not semantic decoding
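
To make the performer-led constraint concrete, here is a minimal sketch of the idea: the agent's output is only ever a set of gains applied to what the performer is already playing. The `modulate` helper and its field names are illustrative assumptions, not the project's API.

```python
# Illustrative sketch of the performer-led constraint (not the project API):
# the agent returns modulation gains, never a note stream of its own.

def modulate(performer_input: dict, agent_response: dict) -> dict:
    """Shape the performer's controls with the agent's response."""
    return {
        'velocity': performer_input['velocity'] * agent_response.get('energy', 1.0),
        'density':  performer_input['density'] * agent_response.get('density_gain', 1.0),
        'timbre':   performer_input['timbre'],  # timbre stays under performer control
    }

# If the agent is silent (or fails), the performer's input passes through unchanged.
print(modulate({'velocity': 0.8, 'density': 0.5, 'timbre': 0.3}, {}))
```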
```
┌──────────────────────────────────────────────────────────────┐
│                    BrainJam Architecture                     │
├──────────────────────┬────────────────────┬──────────────────┤
│     Input Layer      │      AI Layer      │   Output Layer   │
│ • EEG/fNIRS/EMG   ──►│ Hybrid Agent    ──►│ • Piano Synth    │
│ • MIDI/Keyboard   ──►│ • Agent Memory  ──►│ • Guitar Synth   │
│ • Mock Signals       │ • EEG Mapper       │ • Beats          │
└──────────────────────┴────────────────────┴──────────────────┘
```
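
The layers communicate through control values (the `controls` passed to the agent in the quick-start example below). The schema here is a hedged sketch of what such a frame might contain; the field names are assumptions for illustration, not the repository's actual data structures.

```python
# Hypothetical control frame flowing from the input layer to the Hybrid Agent.
# Field names are illustrative; the real representation lives in performance_system/.
from dataclasses import dataclass

@dataclass
class ControlFrame:
    source: str       # "eeg", "fnirs", "emg", "midi", or "mock"
    arousal: float    # normalized 0..1 (e.g. band-power ratio or key velocity)
    activity: float   # normalized 0..1 event/note density
    timestamp: float  # seconds, for checking the <30ms end-to-end budget

frame = ControlFrame(source="mock", arousal=0.4, activity=0.7, timestamp=0.0)
```

Whatever the concrete representation, every input source (EEG, MIDI, or mock) reduces to the same kind of normalized frame, which is what lets brain signals be treated as expressive control rather than semantic decoding.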
- Hybrid Adaptive Agent 🧠: Three behavioral states (calm/active/responsive), <5ms inference (see the sketch after this list)
- Sound Engines 🎵: DDSP Piano, Guitar, Beat Generator
- Agent Memory 💭: GRU-based dialogue learning (JSB Chorales)
- EEG Mapper 🔬: EEGNet architecture, OpenMIIR compatible
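
To make the symbolic half of the hybrid agent concrete, below is a minimal sketch of a three-state (calm/active/responsive) rule layer. The thresholds, feature names, and output parameters are assumptions for illustration, not the shipped implementation.

```python
# Minimal sketch of a symbolic three-state layer (calm / active / responsive).
# Thresholds and parameters are illustrative, not the shipped values.

def classify_state(controls: dict) -> str:
    arousal = controls.get('arousal', 0.0)
    activity = controls.get('activity', 0.0)
    if activity > 0.6:
        return 'responsive'  # mirror the performer's bursts closely
    if arousal > 0.5:
        return 'active'      # add rhythmic energy
    return 'calm'            # sparse, sustained support

STATE_PARAMS = {
    'calm':       {'energy': 0.6, 'density_gain': 0.5},
    'active':     {'energy': 1.0, 'density_gain': 1.0},
    'responsive': {'energy': 1.2, 'density_gain': 1.3},
}

def respond(controls: dict) -> dict:
    """A pure table lookup keeps worst-case inference far below the 5ms budget."""
    return STATE_PARAMS[classify_state(controls)]

print(respond({'arousal': 0.3, 'activity': 0.8}))  # -> the 'responsive' parameters
```

Since the ML components are optional, a symbolic table like this can always produce a response on its own, which is the reliability half of the hybrid design.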
```bash
# Clone and install
git clone https://github.com/curiousbrutus/brainjam.git
cd brainjam
pip install -r requirements.txt

# Run interactive GUI
streamlit run streamlit_app/app.py
```

```python
from performance_system.agents import HybridAdaptiveAgent
from performance_system.sound_engines import DDSPPianoSynth, BeatGenerator

# Initialize the agent and sound engines
agent = HybridAdaptiveAgent()
piano = DDSPPianoSynth()
beats = BeatGenerator()

# Performance loop: `signal_stream` yields control dicts from the input layer
for controls in signal_stream:
    response = agent.respond(controls)
    # `response` would normally shape these parameters; fixed demo values shown for brevity
    audio = piano.generate(0.5, {'midi_note': 60}) + beats.generate(0.5, {'tempo': 120})
```

```
brainjam/
├── performance_system/   # Core system (agents, synths, mappers)
├── streamlit_app/        # Interactive GUI
├── examples/             # Usage demos
├── tests/                # Unit tests
├── docs/                 # Documentation
│   ├── architecture/     # Technical design
│   └── research/         # Research context
├── models/               # Model info
└── literature/           # Academic references
```
Research Focus: Human-AI collaboration in creative contexts
Key Questions:
- How to maintain performer agency with AI assistance?
- Can BCIs enable expressive musical control?
- What makes AI "feel" like a musical partner?
- Performer-Led Systems (Tanaka, 2006): AI responds, never overrides
- Interactive ML (Fiebrink, 2011): Real-time adaptation with user control
- BCMIs (Miranda & Castet, 2014): Brain signals as expressive input
- Hybrid Agent Architecture: Symbolic + ML with guaranteed agency
- Real-time BCI Integration: <30ms latency, graceful fallbacks (fallback sketch below)
- Musical Co-Performance: Learned dialogue patterns from Bach chorales
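
One reading of "graceful fallbacks": if the EEG stream drops out or misses the latency budget, the system degrades to the mock signal generator instead of stalling the performance. The wiring below is an assumption about how this could look, not the repository's code; `read_eeg_frame` and `mock_frame` are hypothetical helpers.

```python
# Hypothetical fallback wiring: prefer live EEG, degrade to mock signals.
import random
import time

def read_eeg_frame():
    """Stand-in for a real EEG read; returns None on dropout or timeout."""
    return None  # pretend the headset is unplugged

def mock_frame():
    return {'arousal': random.random(), 'activity': random.random()}

def next_controls():
    start = time.perf_counter()
    frame = read_eeg_frame()
    if frame is None or (time.perf_counter() - start) > 0.02:
        frame = mock_frame()  # stay inside the 30ms end-to-end budget
    return frame

print(next_controls())
```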
- Fully functional prototype
- Evaluation framework for agency/flow/responsiveness
- Comprehensive documentation and demos
Planned User Studies:
- Agency Assessment (SAM + custom scales)
- Flow State (FSS-2 questionnaire)
- Performance Quality (expert + audience ratings)
- Learning Curve (longitudinal study)
See docs/research/interaction_measures/ for details.
For Researchers: docs/research/ - Ethics, limitations, evaluation
For Developers: docs/architecture/ - Technical design, components
For Users: QUICK_START.md, examples/
Project Status:
- LIMITATIONS.md - Key limitations and appropriate use cases
- IMPROVEMENTS.md - Suggested improvements and development roadmap
- Python 3.9+, NumPy/SciPy, PyTorch (optional)
- Streamlit GUI, scikit-learn
- Performance: <30ms latency, 44.1kHz audio, 10Hz control rate
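
The <30ms figure can be sanity-checked against the audio settings: at 44.1kHz, a 512-sample buffer by itself costs about 11.6ms, so agent inference (<5ms) and the rest of the chain have to fit in what remains. The buffer size below is an assumption used only for the arithmetic.

```python
# Rough latency-budget check (buffer size is an assumed value, not a project setting).
SAMPLE_RATE = 44_100   # Hz
BUFFER_SAMPLES = 512   # assumed audio callback size
CONTROL_RATE = 10      # Hz

buffer_ms = 1000 * BUFFER_SAMPLES / SAMPLE_RATE  # ~11.6 ms per audio buffer
agent_ms = 5                                     # stated worst-case inference
headroom_ms = 30 - buffer_ms - agent_ms          # left for I/O, mapping, synthesis
control_period_ms = 1000 / CONTROL_RATE          # 100 ms between control updates

print(f"buffer: {buffer_ms:.1f} ms, agent: {agent_ms} ms, "
      f"headroom: {headroom_ms:.1f} ms, control period: {control_period_ms:.0f} ms")
```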
- Hybrid Adaptive Agent with 3 states
- DDSP Piano/Guitar + Beat Generator
- Agent Memory (GRU) + EEG Mapper (EEGNet)
- Interactive GUI + documentation
- User study design
- Model training (JSB Chorales, OpenMIIR)
- Real EEG hardware integration
BCI Music: Tanaka (2006), Miranda & Castet (2014)
Interactive ML: Fiebrink (2011), Lawhern et al. (2018)
Audio Synthesis: Engel et al. (2020), Karplus & Strong (1983)
See literature/ for detailed summaries.
Project: BrainJam - AI-Mediated Musical Performance
Purpose: PhD Research Application
eyyub.gvn@gmail.com
Academic research project for PhD application. Contact for usage permissions.
Built with 🧠 + 🎵 + 🤖 for exploring human-AI musical collaboration
