61 changes: 61 additions & 0 deletions .env.example
@@ -0,0 +1,61 @@
# Cortex Linux Environment Configuration
# Copy this file to .env and configure your settings

# =============================================================================
# API Provider Selection
# =============================================================================
# Choose your AI provider: claude, openai, or ollama
# Default: ollama (free, local inference)
CORTEX_PROVIDER=ollama

# =============================================================================
# Claude API (Anthropic)
# =============================================================================
# Get your API key from: https://console.anthropic.com
# ANTHROPIC_API_KEY=sk-ant-your-key-here

# =============================================================================
# OpenAI API
# =============================================================================
# Get your API key from: https://platform.openai.com
# OPENAI_API_KEY=sk-your-key-here

# =============================================================================
# Kimi K2 API (Moonshot)
# =============================================================================
# Get your API key from: https://platform.moonshot.cn
# MOONSHOT_API_KEY=your-key-here

# =============================================================================
# Ollama (Local LLM) - FREE!
# =============================================================================
# No API key required - runs locally on your machine
# Install: curl -fsSL https://ollama.ai/install.sh | sh
# Or run: python scripts/setup_ollama.py

# Ollama base URL (default: http://localhost:11434)
OLLAMA_BASE_URL=http://localhost:11434

# Model to use (options: llama3.2, llama3.1:8b, mistral, codellama:7b, phi3)
OLLAMA_MODEL=llama3.2

# =============================================================================
# Usage Notes
# =============================================================================
#
# Quick Start with Ollama (Free):
# 1. Run: python scripts/setup_ollama.py
# 2. Set CORTEX_PROVIDER=ollama (already done above)
# 3. Test: cortex install nginx --dry-run
#
# Using Cloud APIs (Paid):
# 1. Get an API key from Anthropic or OpenAI
# 2. Uncomment and set ANTHROPIC_API_KEY or OPENAI_API_KEY above
# 3. Set CORTEX_PROVIDER=claude or CORTEX_PROVIDER=openai
# 4. Test: cortex install nginx --dry-run
#
# Priority Order:
# - .env file in current directory (highest)
# - ~/.cortex/.env
# - /etc/cortex/.env (Linux only)
#
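
For reference, here is a quick way to see which `.env` file wins on a given machine, following the priority order above. This is a shell sketch that assumes the loader takes the first file it finds; Cortex's actual merge behavior may differ.

```bash
# Walk the documented lookup order and report the first .env that exists.
# Assumes first-match-wins; Cortex's loader may merge files instead.
for f in ./.env "$HOME/.cortex/.env" /etc/cortex/.env; do
  if [ -f "$f" ]; then
    echo "highest-priority .env: $f"
    break
  fi
done
```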
136 changes: 136 additions & 0 deletions OLLAMA_QUICKSTART.md
@@ -0,0 +1,136 @@
# Ollama Quick Start Guide

## 🚀 Setup in 3 Steps

### 1. Install Dependencies
```bash
cd cortex
source venv/bin/activate
pip install -e .
```

### 2. Set Up Ollama
```bash
# Interactive setup (recommended)
python scripts/setup_ollama.py

# Or non-interactive
python scripts/setup_ollama.py --model llama3.2 --non-interactive
```

### 3. Test
```bash
# Run test suite
python tests/test_ollama_integration.py

# Test with Cortex
export CORTEX_PROVIDER=ollama
cortex install nginx --dry-run
```

## 📝 Configuration

### Environment Variables (.env)
```bash
CORTEX_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2
```

### Config File (~/.cortex/config.json)
```json
{
  "api_provider": "ollama",
  "ollama_model": "llama3.2",
  "ollama_base_url": "http://localhost:11434"
}
```
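
To put that config in place from the shell (same values as the example above):

```bash
# Create ~/.cortex/config.json with the Ollama settings shown above
mkdir -p ~/.cortex
cat > ~/.cortex/config.json <<'EOF'
{
  "api_provider": "ollama",
  "ollama_model": "llama3.2",
  "ollama_base_url": "http://localhost:11434"
}
EOF
```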

## 🔧 Common Commands

```bash
# Setup
python scripts/setup_ollama.py

# Manage Ollama
ollama serve # Start service
ollama list # List models
ollama pull llama3.2 # Download model
ollama rm old-model # Remove model
ollama run llama3.2 "test" # Test model

# Use with Cortex
export CORTEX_PROVIDER=ollama
cortex install nginx --dry-run
cortex ask "how do I update Ubuntu?"

# Switch providers
export CORTEX_PROVIDER=claude # Use Claude
export CORTEX_PROVIDER=ollama # Use Ollama
```

## 🎯 Recommended Models

| Use Case | Model | Size | RAM |
|----------|-------|------|-----|
| **General (default)** | llama3.2 | 2GB | 4GB |
| **Fast/Low RAM** | llama3.2:1b | 1.3GB | 2GB |
| **Better Quality** | llama3.1:8b | 4.7GB | 8GB |
| **Code Tasks** | codellama:7b | 3.8GB | 8GB |
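
If you're not sure which row fits your machine, a check like this can pick a model from available memory. It's a convenience sketch: the thresholds are rough assumptions based on the table above, not official Ollama requirements.

```bash
# Pick a model from available RAM (rough thresholds, not official requirements)
avail_gb=$(awk '/MemAvailable/ {printf "%d", $2/1024/1024}' /proc/meminfo)
if   [ "$avail_gb" -ge 8 ]; then model=llama3.1:8b
elif [ "$avail_gb" -ge 4 ]; then model=llama3.2
else                             model=llama3.2:1b
fi
ollama pull "$model" && export OLLAMA_MODEL="$model"
echo "Selected $model (${avail_gb}GB RAM available)"
```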

## 🐛 Troubleshooting

### Ollama Not Running
```bash
# Check status (errors out if the server isn't running)
ollama list

# Start service
ollama serve &
# Or with systemd
sudo systemctl start ollama
```

### Connection Issues
```bash
# Test connection
curl http://localhost:11434/api/tags

# Check if port is in use
sudo lsof -i :11434
```

### Out of Memory
```bash
# Use smaller model
ollama pull llama3.2:1b
export OLLAMA_MODEL=llama3.2:1b
```

## 📚 Full Documentation

- [Complete Setup Guide](docs/OLLAMA_SETUP.md)
- [LLM Integration](docs/LLM_INTEGRATION.md)
- [Main README](README.md)

## 💡 Tips

1. **Start small**: Use `llama3.2` (2GB) for testing
2. **GPU helps**: Ollama auto-detects NVIDIA/AMD GPUs
3. **Free forever**: No API costs, everything runs locally
4. **Works offline**: Perfect for air-gapped systems
5. **Mix providers**: Use Ollama for simple tasks, Claude for complex ones (see the sketch below)
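
For tip 5, the shell's per-command environment syntax makes mixing providers easy; the variable applies to a single invocation and nothing is permanently switched:

```bash
# Per-command override: the variable assignment applies to that invocation only
CORTEX_PROVIDER=ollama cortex ask "how do I update Ubuntu?"
CORTEX_PROVIDER=claude cortex install nginx --dry-run
```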

## 🎉 Quick Win

```bash
# Complete setup in one go
python scripts/setup_ollama.py && \
export CORTEX_PROVIDER=ollama && \
cortex install nginx --dry-run && \
echo "✅ Ollama is working!"
```

---

**Need help?** Check [OLLAMA_SETUP.md](docs/OLLAMA_SETUP.md) or join [Discord](https://discord.gg/uCqHvxjU83)
12 changes: 10 additions & 2 deletions README.md
@@ -79,7 +79,7 @@ cortex install "tools for video compression"

- **OS:** Ubuntu 22.04+ / Debian 12+
- **Python:** 3.10 or higher
-- **API Key:** [Anthropic](https://console.anthropic.com) or [OpenAI](https://platform.openai.com)
+- **API Key:** [Anthropic](https://console.anthropic.com) or [OpenAI](https://platform.openai.com) *(optional - use Ollama for free local inference)*

### Installation

@@ -95,9 +95,17 @@ source venv/bin/activate
# 3. Install Cortex
pip install -e .

-# 4. Configure API key
+# 4. Configure AI Provider (choose one):

+## Option A: Ollama (FREE - Local LLM, no API key needed)
+python scripts/setup_ollama.py

+## Option B: Claude (Cloud API - Best quality)
echo 'ANTHROPIC_API_KEY=your-key-here' > .env

+## Option C: OpenAI (Cloud API - Alternative)
+echo 'OPENAI_API_KEY=your-key-here' > .env

# 5. Verify installation
cortex --version
```
2 changes: 2 additions & 0 deletions cortex/env_loader.py
@@ -130,6 +130,8 @@ def get_api_key_sources() -> dict[str, str | None]:
        "OPENAI_API_KEY",
        "MOONSHOT_API_KEY",
        "CORTEX_PROVIDER",
+       "OLLAMA_BASE_URL",
+       "OLLAMA_MODEL",
    ]

    for key in api_keys:
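
These two additions register the Ollama settings alongside the API keys, so the same source-reporting logic covers them. Independent of Cortex, a plain shell check shows which of the tracked variables are set in the current environment:

```bash
# List the provider-related variables currently set in this shell
env | grep -E '^(CORTEX_PROVIDER|OLLAMA_BASE_URL|OLLAMA_MODEL|ANTHROPIC_API_KEY|OPENAI_API_KEY|MOONSHOT_API_KEY)='
```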