An autonomous code-review service for GitHub pull requests.
It fetches PR diffs, runs lightweight static checks, asks a local LLM (Ollama) via Agno to produce a strict JSON review, stores results in Postgres, and exposes a FastAPI API. Work is executed asynchronously using Celery with Redis as broker/result backend.
- API
  - POST /analyze-pr → enqueue a PR review
  - GET /status/{task_id} → check status
  - GET /results/{task_id} → fetch structured JSON results
- Async processing with Celery + Redis
- Persistent results in Postgres (JSONB)
- Local LLM review via Agno + Ollama (e.g., llama3.1, qwen2.5)
- Structured logging for debugging & metrics
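The structured logging mentioned above can be sketched with the standard library alone. This JSON-lines formatter is an illustration, not the service's actual logger setup; the task_id field is an assumed example of per-record context:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON line (easy to ship to metrics)."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # fields passed via `extra=` (e.g. task_id) land on the record
            "task_id": getattr(record, "task_id", None),
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("pr_review")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("analysis started", extra={"task_id": "uuid"})
```

Each line is then machine-parseable, so counting issues per task or per level becomes a one-liner downstream.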
- FastAPI – lightning-fast web framework
- Agno – agent framework to orchestrate LLMs
- uv – ultra-fast Python package manager & virtualenvs
- Ollama – local LLM runtime (pull models like llama3.1)
- Postgres – relational DB (we store results as JSONB)
- Celery + Redis – task queue + broker/backend
gh-code-review-agent/
app/
agents/
code_reviewer.py
Agno agent that reviews the PR files using Ollama
controller/
github_controller.py
controller module that exposes all the API endpoints
models/
db_models.py
SQLAlchemy models for persistent storage
schema.py
Pydantic models for request/response parsing, validation, and serialization
services/
github_service.py
service class that fetches PR files from GitHub via the GitHub API
static_checks.py
service that runs static checks (style issues, potential bugs, etc.) on the fetched PR files
tasks/
celery_app.py
Celery initialization
tasks.py
background Celery task that performs the analysis and stores results keyed by task ID
utils/
auth_dependancy.py
auth dependency that checks the Authorization header for a verified JWT to secure the API endpoints
db.py
SQLAlchemy setup that creates tables in the Postgres DB from the models defined in models/db_models.py
config.py
config class for fetching environment variables
main.py
main module and FastAPI entry point for the endpoints and other services
etc/ (requirement files)
base.txt
dev.txt
Dockerfile
.env.example
.gitignore
pyproject.toml
README.md
POST /analyze-pr

Body:
{
  "repo_url": "https://github.com/<owner>/<repo>",
  "pr_number": 123,
  "github_token": "optional_token_for_private_repos"
}

Response:
{ "task_id": "uuid", "status": "pending" }

GET /status/{task_id}

Response:
{ "task_id": "uuid", "status": "pending|processing|completed|failed", "error": null }

GET /results/{task_id}

Response:
{
"task_id": "uuid",
"status": "completed",
"results": {
"files": [
{
"name": "main.py",
"issues": [
{
"type": "style",
"line": 15,
"description": "Line too long",
"suggestion": "Break line into multiple lines",
"severity": "low"
}
]
}
],
"summary": { "total_files": 1, "total_issues": 1, "critical_issues": 0 }
}
}
Prereqs
- Python 3.11+ (3.12 recommended)
- macOS/Linux/WSL
- Git
- Install uv (macOS/Linux)
  curl -LsSf https://astral.sh/uv/install.sh | sh
  export PATH="$HOME/.local/bin:$PATH"   # ensure uv is on PATH
  uv --version
- Create and activate a venv
  uv venv .venv
  source .venv/bin/activate
- Install & run Redis
  - macOS (Homebrew)
    brew install redis
    brew services start redis   # or: redis-server
- Install Ollama, pull a model, and run it
  - Install Ollama
    curl -fsSL https://ollama.com/install.sh | sh
  - Pull a model (one-time)
    ollama pull llama3.1   # or qwen2.5:14b, deepseek-r1, etc.
  - Run Ollama
    ollama run llama3.1    # or qwen2.5:14b, deepseek-r1, etc.
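To sanity-check that the pulled model is actually being served, you can call Ollama's local HTTP API (it listens on port 11434 by default). This is a stdlib-only sketch, separate from how the service itself talks to Ollama via Agno:

```python
import json
import urllib.request


def build_generate_request(model: str, prompt: str) -> dict:
    """Request body for Ollama's POST /api/generate (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}


def ollama_generate(prompt: str, model: str = "llama3.1",
                    host: str = "http://localhost:11434") -> str:
    """Send one prompt to a locally running Ollama and return its reply.

    Call this only with `ollama run`/`ollama serve` active.
    """
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_generate_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

With `stream: False`, Ollama returns a single JSON object whose `response` field holds the full completion, which is the simplest shape for a quick smoke test.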
- Run API & worker
  - API
    uv run uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
    # or, with the .venv already activated:
    fastapi dev
  - Worker
    uv run celery -A app.tasks.celery_app.celery worker -l INFO
    # or, with the .venv already activated:
    celery -A app.tasks.celery_app.celery worker -l INFO
I have secured all API endpoints with a JWT auth token, which you need to pass in the Authorization header. A sample token is provided via the BEARER_SYSTEM_JWT environment variable. For testing purposes you can toggle auth by swapping these two lines in main.py (#L11):
# app = FastAPI(title = get_settings().APP_NAME,dependencies=[Depends(token_required)])
app = FastAPI(title = get_settings().APP_NAME)
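For context, here is what minting and verifying an HS256 Bearer JWT looks like with only the standard library. The claims and secret below are hypothetical; the actual token_required dependency may rely on a JWT library such as PyJWT instead:

```python
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def mint_jwt(payload: dict, secret: str) -> str:
    """Build a signed HS256 JWT: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"


def verify_jwt(token: str, secret: str) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)


# Clients would then attach it as:
#   headers = {"Authorization": f"Bearer {mint_jwt({'sub': 'ci'}, SECRET)}"}
```

Signature checking alone is not full validation (real verifiers also check `exp`, `alg`, etc.), but it shows the shape of the token the secured endpoints expect.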