Closed

Changes from all commits — 45 commits
c27f131
docs: move COMMANDS.md to docs folder
Sahilbhatane Dec 22, 2025
f1c5234
test fix and lint check
Sahilbhatane Dec 22, 2025
08a6200
feat: add dependency check module for better error messages
Dec 22, 2025
4cc450f
test fix and ruff check fix
Sahilbhatane Dec 22, 2025
7962507
FIx test
Sahilbhatane Dec 23, 2025
2eb9034
add yaml-backed AI roles/ personas with CLI --role support
Dec 21, 2025
ceb6b54
fix planner import for commmand interpreter
Dec 27, 2025
b8dc354
feat: add environment variable manager with encryption and templates
Dec 22, 2025
f3823e0
fix: resolve ruff lint errors and PEP8 issues
Dec 22, 2025
235a30b
test: replace insecure ftp URL with secure protocol
Dec 22, 2025
549cc2a
refactor(cli): improve exception handling per reviewer feedback
Dec 23, 2025
c33ab50
style: format cli.py with black
Dec 23, 2025
c1b2794
Add Python 3.14 free-threading compatibility
sujay-d07 Dec 22, 2025
42cd023
Update docs/PYTHON_314_THREAD_SAFETY_AUDIT.md
sujay-d07 Dec 22, 2025
4e51589
Update tests/test_thread_safety.py
sujay-d07 Dec 22, 2025
6edcd7c
Update cortex/utils/db_pool.py
sujay-d07 Dec 22, 2025
ccd2f5e
Update tests/test_thread_safety.py
sujay-d07 Dec 22, 2025
a687645
Update tests/test_thread_safety.py
sujay-d07 Dec 22, 2025
6ecc007
Fix linting issues (ruff)
sujay-d07 Dec 22, 2025
0b79572
Apply Black formatting
sujay-d07 Dec 22, 2025
eae4229
Refactor system prompt in diagnose_errors_parallel and simplify conne…
sujay-d07 Dec 22, 2025
d04a3b1
Replace random with secrets.SystemRandom for improved randomness in s…
sujay-d07 Dec 22, 2025
93d0c06
Update tests/test_thread_safety.py
sujay-d07 Dec 24, 2025
5220cae
Update tests/test_thread_safety.py
sujay-d07 Dec 24, 2025
04b7dd3
fix: resolve 'SystemInfo' object has no attribute 'get' error in cort…
Dec 24, 2025
0aa4847
Enhance free-threading detection and improve connection pool timeout …
sujay-d07 Dec 24, 2025
78f77ae
fix - merge conflict
lu11y0 Dec 23, 2025
443cb17
feat: add natural language query interface (cortex ask)
Sahilbhatane Dec 24, 2025
f517662
Remove PR title format checklist item
Sahilbhatane Dec 24, 2025
b1674c6
Add @Suyashd999 as a code owner
Sahilbhatane Dec 24, 2025
4eb7766
changes fix, test, suggestion
Sahilbhatane Dec 24, 2025
f78dde3
feat: Implement network configuration and proxy detection (Issue #25)…
ShreeJejurikar Dec 25, 2025
cd1d181
feat: Consolidate system status and health checks into a single command
sujay-d07 Dec 26, 2025
68e925d
Update cortex/cli.py
sujay-d07 Dec 26, 2025
e0394ea
refactor: Simplify status command parser definition in CLI
sujay-d07 Dec 26, 2025
e02d5dc
Add Ollama integration with setup script, LLM router support, and com…
sujay-d07 Dec 26, 2025
8721f89
fix: Correct assertion syntax for Ollama stats tracking test
sujay-d07 Dec 26, 2025
87c940f
fix: Add pytest marker to skip tests if Ollama is not installed
sujay-d07 Dec 26, 2025
a44dbc0
Update scripts/setup_ollama.py
sujay-d07 Dec 26, 2025
4e512eb
Update scripts/setup_ollama.py
sujay-d07 Dec 26, 2025
a774500
[cli] Remove deprecated user-preferences command
Anshgrover23 Dec 27, 2025
8b52a67
Merge branch 'main' into feature/custom-ai-roles
pavanimanchala53 Dec 30, 2025
f925558
fix: interpreter syntax and role prompt handling
Anshgrover23 Dec 27, 2025
4c3d9ef
Merge branch 'cortexlinux:main' into feature/custom-ai-roles
pavanimanchala53 Jan 1, 2026
0c6dd1b
Merge branch 'main' into feature/custom-ai-roles
Anshgrover23 Jan 8, 2026
2 changes: 1 addition & 1 deletion cortex/cli.py
@@ -37,7 +37,7 @@


class CortexCLI:
def __init__(self, verbose: bool = False):
def __init__(self, verbose: bool = False, role: str = "default"):
self.spinner_chars = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"]
self.spinner_idx = 0
self.verbose = verbose
11 changes: 11 additions & 0 deletions cortex/confirmation.py
@@ -0,0 +1,11 @@
def confirm_plan(steps):
print("\nProposed installation plan:\n")
for step in steps:
print(step)

print("\nProceed?")
print("[y] yes [e] edit [n] cancel")

choice = input("> ").lower()

return choice
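Editor's note: as a quick standalone check (not part of the PR), `confirm_plan` can be exercised by stubbing `input()`. The function body below is copied from the diff above; the stubbed keypress is illustrative.

```python
import builtins


def confirm_plan(steps):
    # Copied from the new cortex/confirmation.py for a self-contained demo.
    print("\nProposed installation plan:\n")
    for step in steps:
        print(step)

    print("\nProceed?")
    print("[y] yes  [e] edit  [n] cancel")

    choice = input("> ").lower()
    return choice


# Stub input() to simulate a user typing "Y" at the prompt.
_original_input = builtins.input
builtins.input = lambda prompt="": "Y"
try:
    result = confirm_plan(["Install Python", "Install pip"])
finally:
    builtins.input = _original_input
```

Note the return value is whatever the user typed, lowercased; callers are expected to handle unrecognized choices themselves.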
166 changes: 67 additions & 99 deletions cortex/llm/interpreter.py
@@ -28,19 +28,24 @@ def __init__(
self,
api_key: str,
provider: str = "openai",
role: str = "default",
model: str | None = None,
cache: Optional["SemanticCache"] = None,
):
"""Initialize the command interpreter.

Args:
api_key: API key for the LLM provider
provider: Provider name ("openai", "claude", or "ollama")
model: Optional model name override
cache: Optional SemanticCache instance for response caching
"""Args:
api_key: API key for the LLM provider
provider: Provider name ("openai", "claude", or "ollama")
model: Optional model name override
cache: Optional SemanticCache instance for response caching
"""
Comment on lines +35 to 40
Contributor
⚠️ Potential issue | 🟡 Minor

Document the role parameter in the docstring.

The role parameter was added to the constructor signature but is not documented in the Args section of the docstring. According to the coding guidelines, docstrings are required for all public APIs.

📝 Add role parameter documentation
     """Args:
         api_key: API key for the LLM provider
         provider: Provider name ("openai", "claude", or "ollama")
+        role: Role name for specialized prompts (default: "default")
         model: Optional model name override
         cache: Optional SemanticCache instance for response caching
     """
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"""Args:
api_key: API key for the LLM provider
provider: Provider name ("openai", "claude", or "ollama")
model: Optional model name override
cache: Optional SemanticCache instance for response caching
"""
"""Args:
api_key: API key for the LLM provider
provider: Provider name ("openai", "claude", or "ollama")
role: Role name for specialized prompts (default: "default")
model: Optional model name override
cache: Optional SemanticCache instance for response caching
"""
🤖 Prompt for AI Agents
In @cortex/llm/interpreter.py around lines 35 - 40, The constructor's docstring
for the class __init__ (in cortex/llm/interpreter.py) is missing documentation
for the new role parameter; update the Args section to include a short line for
role (e.g., "role: Optional[str] role name for messages or persona (e.g.,
'system', 'user', 'assistant')") describing its type and purpose and noting any
default behavior so the public API docstring matches the signature.

from cortex.roles.loader import load_role

self.api_key = api_key
self.provider = APIProvider(provider.lower())
# Load role and system prompt
self.role_name = role
self.role = load_role(role)
self.system_prompt: str = self.role.get("system_prompt", "")

if cache is None:
try:
@@ -112,35 +117,36 @@ def _get_system_prompt(self, simplified: bool = False) -> str:
Args:
simplified: If True, return a shorter prompt optimized for local models
"""
if simplified:
return """You must respond with ONLY a JSON object. No explanations, no markdown, no code blocks.

Format: {"commands": ["command1", "command2"]}

Example input: install nginx
Example output: {"commands": ["sudo apt update", "sudo apt install -y nginx"]}

Rules:
- Use apt for Ubuntu packages
- Add sudo for system commands
- Return ONLY the JSON object"""

return """You are a Linux system command expert. Convert natural language requests into safe, validated bash commands.

Rules:
1. Return ONLY a JSON array of commands
2. Each command must be a safe, executable bash command
3. Commands should be atomic and sequential
4. Avoid destructive operations without explicit user confirmation
5. Use package managers appropriate for Debian/Ubuntu systems (apt)
6. Include necessary privilege escalation (sudo) when required
7. Validate command syntax before returning

Format:
{"commands": ["command1", "command2", ...]}
if simplified:
base_prompt = (
"You must respond with ONLY a JSON object. No explanations, no markdown.\n\n"
"Format:\n"
'{"commands": ["command1", "command2"]}\n\n'
"Rules:\n"
"- Use apt for Ubuntu packages\n"
"- Add sudo for system commands\n"
"- Return ONLY the JSON object\n"
)
else:
base_prompt = (
"You are a Linux system command expert. Convert natural language requests "
"into safe, validated bash commands.\n\n"
"Rules:\n"
"1. Return ONLY a JSON array of commands\n"
"2. Each command must be safe and executable\n"
"3. Commands should be atomic and sequential\n"
"4. Avoid destructive operations without explicit user confirmation\n"
"5. Use apt for Debian/Ubuntu systems\n"
"6. Include sudo when required\n\n"
"Format:\n"
'{"commands": ["command1", "command2", ...]}\n'
)

Example request: "install docker with nvidia support"
Example response: {"commands": ["sudo apt update", "sudo apt install -y docker.io", "sudo apt install -y nvidia-docker2", "sudo systemctl restart docker"]}"""
system_prompt = getattr(self, "system_prompt", "")
if system_prompt:
return f"{self.system_prompt}\n\n{base_prompt}"
return base_prompt

def _call_openai(self, user_input: str) -> list[str]:
try:
@@ -181,6 +187,7 @@ def _call_ollama(self, user_input: str) -> list[str]:
enhanced_input = f"""{user_input}

Respond with ONLY this JSON format (no explanations):

{{\"commands\": [\"command1\", \"command2\"]}}"""

response = self.client.chat.completions.create(
@@ -233,55 +240,39 @@ def _repair_json(self, content: str) -> str:

def _parse_commands(self, content: str) -> list[str]:
try:
# Strip markdown code blocks
if "```json" in content:
content = content.split("```json")[1].split("```")[0].strip()
elif "```" in content:
# Remove markdown code blocks if present
if "```" in content:
parts = content.split("```")
if len(parts) >= 3:
content = parts[1].strip()
if len(parts) >= 2:
content = parts[1]

# Try to find JSON object in the content
import re

# Look for {"commands": [...]} pattern
json_match = re.search(
r'\{\s*["\']commands["\']\s*:\s*\[.*?\]\s*\}', content, re.DOTALL
match = re.search(
r'\{\s*"commands"\s*:\s*\[.*?\]\s*\}',
content,
re.DOTALL,
)
if json_match:
content = json_match.group(0)

# Try to repair common JSON issues
content = self._repair_json(content)
if not match:
raise ValueError("No valid JSON command block found")

data = json.loads(content)
commands = data.get("commands", [])
json_text = match.group(0)
data = json.loads(json_text)

commands = data.get("commands")
if not isinstance(commands, list):
raise ValueError("Commands must be a list")
raise ValueError("commands must be a list")

# Handle both formats:
# 1. ["cmd1", "cmd2"] - direct string array
# 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array
result = []
cleaned: list[str] = []
for cmd in commands:
if isinstance(cmd, str):
# Direct string
if cmd:
result.append(cmd)
elif isinstance(cmd, dict):
# Object with "command" key
cmd_str = cmd.get("command", "")
if cmd_str:
result.append(cmd_str)

return result
except (json.JSONDecodeError, ValueError) as e:
# Log the problematic content for debugging
import sys

print(f"\nDebug: Failed to parse JSON. Raw content:\n{content[:500]}", file=sys.stderr)
raise ValueError(f"Failed to parse LLM response: {str(e)}")
if isinstance(cmd, str) and cmd.strip():
cleaned.append(cmd.strip())

return cleaned

except Exception as exc:
raise ValueError(f"Failed to parse LLM response: {exc}") from exc

def _validate_commands(self, commands: list[str]) -> list[str]:
dangerous_patterns = [
@@ -303,19 +294,6 @@ def _validate_commands(self, commands: list[str]) -> list[str]:
return validated

def parse(self, user_input: str, validate: bool = True) -> list[str]:
"""Parse natural language input into shell commands.

Args:
user_input: Natural language description of desired action
validate: If True, validate commands for dangerous patterns

Returns:
List of shell commands to execute

Raises:
ValueError: If input is empty
RuntimeError: If offline mode is enabled and no cached response exists
"""
if not user_input or not user_input.strip():
raise ValueError("User input cannot be empty")

@@ -347,23 +325,13 @@ def parse(self, user_input: str, validate: bool = True) -> list[str]:
if validate:
commands = self._validate_commands(commands)

if self.cache is not None and commands:
try:
self.cache.put_commands(
prompt=user_input,
provider=self.provider.value,
model=self.model,
system_prompt=cache_system_prompt,
commands=commands,
)
except (OSError, sqlite3.Error):
# Silently fail cache writes - not critical for operation
pass

return commands

def parse_with_context(
self, user_input: str, system_info: dict[str, Any] | None = None, validate: bool = True
self,
user_input: str,
system_info: dict[str, Any] | None = None,
validate: bool = True,
) -> list[str]:
context = ""
if system_info:
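Editor's sketch (not part of the PR): the parsing strategy used by the rewritten `_parse_commands` — strip a markdown fence, locate the `{"commands": [...]}` object with a regex, JSON-decode it, and keep only non-empty strings. The error handling here is simplified relative to the diff.

```python
import json
import re


def parse_commands(content: str) -> list[str]:
    # Remove markdown code blocks if present (keeps the fenced body).
    if "```" in content:
        parts = content.split("```")
        if len(parts) >= 2:
            content = parts[1]

    # Look for the {"commands": [...]} object anywhere in the text.
    match = re.search(r'\{\s*"commands"\s*:\s*\[.*?\]\s*\}', content, re.DOTALL)
    if not match:
        raise ValueError("No valid JSON command block found")

    data = json.loads(match.group(0))
    commands = data.get("commands")
    if not isinstance(commands, list):
        raise ValueError("commands must be a list")

    return [c.strip() for c in commands if isinstance(c, str) and c.strip()]


raw = 'Sure:\n```json\n{"commands": ["sudo apt update", "sudo apt install -y nginx"]}\n```'
commands = parse_commands(raw)
```

The regex tolerates a leading `json` language tag inside the fence, since it searches for the object rather than assuming the fence body is pure JSON.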
39 changes: 39 additions & 0 deletions cortex/planner.py
@@ -0,0 +1,39 @@
from typing import Any

from cortex.llm.interpreter import CommandInterpreter


def generate_plan(intent: str, slots: dict[str, Any]) -> list[str]:
"""
Generate a human-readable installation plan using LLM (Ollama).
"""

prompt = f"""
You are a DevOps assistant.

User intent:
{intent}

Extracted details:
{slots}

Generate a step-by-step installation plan.
Rules:
- High-level steps only
- No shell commands
- One sentence per step
- Return as a JSON list of strings

Example:
["Install Python", "Install ML libraries", "Set up Jupyter"]
"""

interpreter = CommandInterpreter(
api_key="ollama", # dummy value, Ollama ignores it
provider="ollama",
)

# Reuse interpreter to get structured output
steps = interpreter.parse(prompt, validate=False)

return steps
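Editor's sketch: `generate_plan`'s flow can be exercised without a running Ollama daemon by swapping in a stub interpreter. The stub class, its canned plan, and the abbreviated prompt below are all illustrative, not the PR's exact code.

```python
from typing import Any


class StubInterpreter:
    """Stand-in for CommandInterpreter; returns a canned plan (illustrative)."""

    def parse(self, prompt: str, validate: bool = True) -> list[str]:
        return ["Install Python", "Install ML libraries", "Set up Jupyter"]


def generate_plan(intent: str, slots: dict[str, Any], interpreter) -> list[str]:
    # Abbreviated version of the prompt built in cortex/planner.py above.
    prompt = f"User intent:\n{intent}\n\nExtracted details:\n{slots}"
    return interpreter.parse(prompt, validate=False)


steps = generate_plan("set up an ML dev box", {"language": "python"}, StubInterpreter())
```

Passing the interpreter in (rather than constructing it inside the function, as the PR does) is what makes the flow testable offline.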
6 changes: 6 additions & 0 deletions cortex/roles/default.yaml
@@ -0,0 +1,6 @@
name: default
description: General-purpose Linux and software assistant
system_prompt: |
You are Cortex, an AI-powered Linux command interpreter.
Provide clear, safe, and correct Linux commands.
Explain steps when helpful, but keep responses concise.
6 changes: 6 additions & 0 deletions cortex/roles/devops.yaml
@@ -0,0 +1,6 @@
name: devops
description: DevOps and infrastructure automation expert
system_prompt: |
You are a DevOps-focused AI assistant.
Optimize for reliability, automation, and scalability.
Prefer idempotent commands and infrastructure-as-code approaches.
32 changes: 32 additions & 0 deletions cortex/roles/loader.py
@@ -0,0 +1,32 @@
import os

import yaml


class RoleNotFoundError(Exception):
pass


def get_roles_dir():
"""
Returns the directory where built-in roles are stored.
"""
return os.path.dirname(__file__)


def load_role(role_name: str) -> dict:
"""
Load a role YAML by name.
Falls back to default if role not found.
"""
roles_dir = get_roles_dir()
role_file = os.path.join(roles_dir, f"{role_name}.yaml")

if not os.path.exists(role_file):
if role_name != "default":
# Fallback to default role
return load_role("default")
raise RoleNotFoundError("Default role not found")

with open(role_file, encoding="utf-8") as f:
return yaml.safe_load(f)
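Editor's sketch of the loader's fallback behavior: an unknown role name falls back to `default.yaml`, and a missing default raises `RoleNotFoundError`. To keep the sketch dependency-free it reads raw file text instead of calling PyYAML, and uses a temporary directory instead of the package's `cortex/roles` folder.

```python
import os
import tempfile


class RoleNotFoundError(Exception):
    pass


def load_role_text(roles_dir: str, role_name: str) -> str:
    # Mirrors load_role's fallback: unknown role -> default, missing default -> error.
    role_file = os.path.join(roles_dir, f"{role_name}.yaml")
    if not os.path.exists(role_file):
        if role_name != "default":
            return load_role_text(roles_dir, "default")
        raise RoleNotFoundError("Default role not found")
    with open(role_file, encoding="utf-8") as f:
        return f.read()


with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "default.yaml"), "w", encoding="utf-8") as f:
        f.write("name: default\n")
    fallback = load_role_text(d, "devops")  # "devops" is absent, so default is used
```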
7 changes: 7 additions & 0 deletions cortex/roles/security.yaml
@@ -0,0 +1,7 @@
name: security
description: Security auditing and hardening expert
system_prompt: |
You are a security-focused AI assistant.
Always prioritize safety, least privilege, and risk mitigation.
Warn before destructive or irreversible actions.
Prefer secure defaults and compliance best practices.
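Editor's note: each role file's `system_prompt` feeds `_get_system_prompt` in the interpreter. A minimal sketch of the combination step (the base-prompt text here is abbreviated, not the PR's exact wording):

```python
def combine_prompts(role_system_prompt: str, base_prompt: str) -> str:
    # Mirrors the new _get_system_prompt: the role's prompt, when present,
    # is prepended to the base instructions with a blank line between.
    if role_system_prompt:
        return f"{role_system_prompt}\n\n{base_prompt}"
    return base_prompt


combined = combine_prompts(
    "You are a security-focused AI assistant.",
    'Return ONLY a JSON object: {"commands": [...]}',
)
```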