Releases: cortexlinux/cortex-llm

v0.1.0 - Initial Release

15 Jan 07:48

Choose a tag to compare

🎉 Initial Release of Cortex LLM

Features

  • Ollama integration for model management
  • GPU auto-detection (NVIDIA/AMD)
  • CUDA setup automation
  • Model download and management
  • llama.cpp backend support
  • Streaming inference responses
  • Context window management
  • Memory optimization for large models
  • Prompt templates system
  • Batch inference mode
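As a rough illustration of the GPU auto-detection feature above, vendor tooling on `PATH` is one common way to distinguish NVIDIA from AMD systems. This is a minimal sketch, not cortex-llm's actual implementation; the function name `detect_gpu` is hypothetical:

```python
import shutil

def detect_gpu():
    """Best-guess GPU vendor based on driver utilities found on PATH.

    Illustrative sketch only -- cortex-llm's real detection logic may differ.
    """
    if shutil.which("nvidia-smi"):   # NVIDIA driver utility present
        return "nvidia"
    if shutil.which("rocm-smi"):     # AMD ROCm utility present
        return "amd"
    return None                      # no supported GPU tooling found
```

A CUDA setup step would then only run when this returns `"nvidia"`.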

Installation

From a checkout of the repository:

pip install -e .
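Once installed, streaming inference through the Ollama backend boils down to Ollama's documented `POST /api/generate` endpoint with `"stream": true`, which returns one JSON object per line. The endpoint and fields come from Ollama's API; the helper names below are illustrative and not part of cortex-llm:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_generate_request(model, prompt, stream=True):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def stream_generate(model, prompt):
    """Yield response text chunks from a locally running Ollama server.

    Illustrative sketch only -- requires an Ollama server on localhost.
    """
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp:              # newline-delimited JSON chunks
            chunk = json.loads(line)
            yield chunk.get("response", "")
            if chunk.get("done"):      # final chunk carries done=true
                break
```

For example, `for text in stream_generate("llama3", "Hello"): print(text, end="")` would print the model's reply incrementally as chunks arrive.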

Documentation

See README.md

Full Changelog: https://github.com/cortexlinux/cortex-llm/commits/v0.1.0