A browser-based visualization tool for multi-agent reinforcement learning in a war simulation environment.
- Simulate interactions between multiple agents with different learning strategies
- Configure agent health, number of agents, and mutual animosity
- Visualize simulation results with interactive charts and animations
- Train RL agents and observe learning behavior
- Modern, responsive UI for a better user experience
- Backend: Flask, Python
- ML/AI: PyTorch, NumPy
- Frontend: HTML5, CSS3, JavaScript, Bootstrap 5
- Visualization: Matplotlib, Plotly
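The Python side of this stack is typically pinned in `requirements.txt`. The listing below is only a sketch inferred from the stack above; the exact package set and versions are assumptions, so defer to the file shipped in the repo:

```text
# Illustrative requirements.txt contents (unversioned; see the repo's file for the real pins)
flask
torch
numpy
matplotlib
plotly
```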
- Clone the repository

  ```bash
  git clone https://github.com/Subramanyam6/Multi-Agent-War-Simulation-using-Reinforcement-Learning.git
  cd Multi-Agent-War-Simulation-using-Reinforcement-Learning
  ```

- Create and activate a virtual environment

  ```bash
  python -m venv .venv
  # On Windows
  .venv\Scripts\activate
  # On macOS/Linux
  source .venv/bin/activate
  ```

- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Run the application

  ```bash
  python app.py
  ```

- Open your browser and navigate to `http://localhost:5000`
- RL: Reinforcement Learning agents that learn and adapt over time
- Heuristic: Rule-based agents with predefined strategies
- Random: Agents that take random actions
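As a rough illustration of how these three agent types differ, here is a minimal Python sketch. The class names, the observation/action representation, and the method signatures are assumptions made for illustration, not the project's actual API:

```python
import random

class RandomAgent:
    """'Random' type: picks an action uniformly at random."""
    def act(self, observation, actions):
        return random.choice(actions)

class HeuristicAgent:
    """'Heuristic' type: fixed rule, e.g. attack if an enemy is visible, otherwise defend."""
    def act(self, observation, actions):
        if observation.get("enemies") and "attack" in actions:
            return "attack"   # predefined rule, no learning involved
        return "defend"

class RLAgent:
    """'RL' type: defers to a learned policy (e.g. a Q-network) and picks the highest-valued action."""
    def __init__(self, policy):
        self.policy = policy  # callable mapping an observation to per-action values

    def act(self, observation, actions):
        q_values = self.policy(observation)
        return max(actions, key=lambda a: q_values[a])
```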
- Full Health: All agents start at maximum health
- Low Health: All agents start with low health
- Random Health: Agents start with random health values
- Half-Half: 50% of agents have high health, 50% have low health
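A minimal sketch of how these four health presets could be applied when spawning agents follows; the function name, the 0–100 health scale, and the "low health" value are assumptions, not the project's actual settings:

```python
import random

MAX_HEALTH = 100   # assumed scale
LOW_HEALTH = 20    # assumed "low" starting value

def initial_health(mode, num_agents, rng=random):
    """Return starting health values for one of the four presets."""
    if mode == "full":
        return [MAX_HEALTH] * num_agents
    if mode == "low":
        return [LOW_HEALTH] * num_agents
    if mode == "random":
        return [rng.randint(1, MAX_HEALTH) for _ in range(num_agents)]
    if mode == "half-half":
        half = num_agents // 2
        return [MAX_HEALTH] * half + [LOW_HEALTH] * (num_agents - half)
    raise ValueError(f"unknown health mode: {mode}")

# Example: four agents, half at high health and half at low health
print(initial_health("half-half", 4))   # -> [100, 100, 20, 20]
```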
- Discount Factor (β): Determines how much the agent values future rewards
- Learning Rate (α): Controls how quickly the agent updates its knowledge
- Target Update Frequency: How often the target network is updated
- Replay Buffer Size: Size of the experience replay buffer
- Batch Size: Number of samples processed together during training
- Epsilon Parameters: Control exploration vs. exploitation behavior
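These parameters correspond to a fairly standard DQN-style training setup. The sketch below groups them into a single configuration object; the field names and numeric values are illustrative defaults, not the project's actual configuration:

```python
from dataclasses import dataclass

@dataclass
class DQNConfig:
    # Values are illustrative defaults, not the project's real settings.
    discount_factor: float = 0.99      # β: weight given to future rewards
    learning_rate: float = 1e-3        # α: step size for network updates
    target_update_freq: int = 500      # steps between target-network syncs
    replay_buffer_size: int = 50_000   # max transitions kept for experience replay
    batch_size: int = 64               # transitions sampled per training step
    epsilon_start: float = 1.0         # initial exploration rate
    epsilon_end: float = 0.05          # final exploration rate
    epsilon_decay: float = 0.995       # multiplicative decay applied per episode

config = DQNConfig()
```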
This project is licensed under the MIT License - see the LICENSE file for details.
Bala Subramanyam