A clean, enterprise-grade architecture for a Python system with three first-class services:
- S1 — Code Generation from Requirements
- S2 — Metadata Generation from Code
- S3 — Validation (syntax → tests → AI logic checks)
Built using Hexagonal (Ports & Adapters) architecture with:
- Provider pattern for language-specific behavior
- Pipeline pattern for multi-stage validation
- Command pattern for consistent execution
- Registry pattern for dynamic provider discovery
- Strategy pattern for pluggable AI backends
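To make the layering concrete, here is a minimal sketch of a port with two interchangeable adapters, in the spirit of `ports/ai.py` and `adapters/ai/`; the class and function names are illustrative assumptions, not the platform's actual API.

```python
from typing import Protocol


class LLMClient(Protocol):
    """Port: the application layer depends only on this contract (cf. ports/ai.py)."""

    def complete(self, prompt: str, *, temperature: float = 0.0) -> str: ...


class OpenAIClient:
    """Adapter for the OpenAI backend (cf. adapters/ai/)."""

    def complete(self, prompt: str, *, temperature: float = 0.0) -> str:
        # A real adapter would call the OpenAI SDK here.
        return f"[openai] completion for: {prompt[:40]}"


class AzureOpenAIClient:
    """Adapter for Azure OpenAI; same port, different strategy."""

    def complete(self, prompt: str, *, temperature: float = 0.0) -> str:
        # A real adapter would call Azure OpenAI here.
        return f"[azure] completion for: {prompt[:40]}"


def build_llm_client(backend: str) -> LLMClient:
    """Tiny stand-in for kernel/di.py: choose the strategy from configuration."""
    return AzureOpenAIClient() if backend == "azure" else OpenAIClient()
```

Because use-cases only see the `LLMClient` port, swapping OpenAI for Azure (or a test double) is a configuration change rather than a code change.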
Installation:

```bash
pip install -r requirements.txt
# or with poetry
poetry install
```

Configure credentials and verify the setup:

```bash
cp .env.example .env
# Edit .env with your OpenAI/Azure credentials
python -m platform.interfaces.cli.main status
```

The three services can then be driven from the CLI:

```bash
# Generate Python code from requirements
python -m platform.interfaces.cli.main s1-generate \
examples/requirements/sample_calculator.json \
python \
--output ./generated \
--context examples/context/python_context.json
```

```bash
# Extract metadata from generated code
python -m platform.interfaces.cli.main s2-metadata \
./generated \
--output metadata.json
```

```bash
# Run full validation pipeline
python -m platform.interfaces.cli.main s3-validate \
./generated \
--requirements examples/requirements/sample_calculator.json \
--metadata metadata.json \
--ai-check \
--output validation_report.json
```

```text
src/platform/
├── app/ # Application layer (use-cases, pipelines)
│ ├── s1_codegen/ # S1 - Code Generation service
│ ├── s2_metadata/ # S2 - Metadata extraction service
│ └── s3_validation/ # S3 - Validation service
│
├── domain/ # Pure domain logic
│ ├── models/ # Domain models (Pydantic)
│ ├── errors.py # Domain errors
│ └── policies.py # Business policies
│
├── ports/ # Interfaces (contracts)
│ ├── ai.py # LLM & embeddings clients
│ ├── fs.py # File system operations
│ ├── providers.py # Language providers
│ ├── runners.py # Test runners & sandbox
│ └── observability.py # Logging, metrics, tracing
│
├── adapters/ # External system implementations
│ ├── ai/ # OpenAI, Azure OpenAI
│ ├── fs/ # Local file system
│ ├── runners/ # pytest, subprocess sandbox
│ └── providers/ # Python, TypeScript, etc.
│
├── kernel/ # Cross-cutting infrastructure
│ ├── config.py # Pydantic settings
│ ├── di.py # Dependency injection
│ ├── registry.py # Provider discovery
│ └── logging.py # Structured logging
│
└── interfaces/ # CLI & API boundaries
├── cli/ # Typer CLI
└── api/ # FastAPI (future)
```

Repository layout (v1 toolset alongside the v2 platform):

```text
├── function_app.py # Azure Functions entry
├── config.py # High-level config (paths, filters, limits; uses .env)
├── requirements.txt # Python deps for the v1 toolset
├── quick_setup.py # Helper to install minimal deps
├── install_dependencies.py # Installs broader toolchain (linters, pytest, etc.)
├── check_env.py # Verifies env vars (.env) and prints guidance
├── env.template # Sample env vars (copy to .env and edit)
├── input/ # Place your input files here (e.g., CSV requirements)
├── src/
│ ├── HandlePython/ # v1: Python-specific S1/S2/S3 modules
│ │ ├── AIBrain/ # Azure OpenAI wrapper & CLI
│ │ ├── CheckCodeRequirements/ # CSV diff utilities
│ │ ├── GenerateCodeFromRequirements/ # S1: plan, generate, integrate
│ │ ├── GenerateMetadataFromCode/ # S2: AST-based metadata
│ │ └── ValidationUnit/ # S3: syntax / tests / AI checks
│ ├── HandleGeneric/ # v1: language-agnostic base + providers
│ │ ├── core/ # registry, detection, generic generator/validator
│ │ └── providers/ # python/typescript/java… providers
│ └── HandleGeneric v2/ # v2: layered "platform" (Domain/Adapters/App/Interfaces)
│ ├── pyproject.toml # can be installed as a package
│ └── src/platform/ # see "v2 platform" above
├── test_ai.py # Smoke tests for the AI layer (v1)
├── test_ai_cli.py # Smoke tests for AI CLI (v1)
└── src/validate_python_code.py # Standalone syntax check helper
```

S1 (Code Generation) generates production-ready code from requirements:
```bash
# Basic usage
handle s1-generate requirements.json python
# With context and custom output
handle s1-generate requirements.json python \
--output ./src \
--context context.json \
--dry-run
```

Features:
- Multi-language support (Python, TypeScript, Java)
- Prompt templating with context
- Automatic formatting (Black, isort)
- Syntax validation (see the sketch below)
- Cost tracking
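The formatting and syntax-validation steps above amount to a small post-processing pass over each generated module; a minimal sketch using Black, isort, and the standard `ast` module (the function name and wiring are assumptions):

```python
import ast

import black
import isort


def postprocess_generated(source: str) -> str:
    """Sort imports, apply Black formatting, and reject invalid Python early."""
    formatted = isort.code(source)
    formatted = black.format_str(formatted, mode=black.Mode())
    ast.parse(formatted)  # raises SyntaxError if the model produced broken code
    return formatted


print(postprocess_generated("import sys,os\ndef add(a,b):return a+b\n"))
```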
S2 (Metadata Generation) extracts structured metadata from codebases:
```bash
# Extract metadata from project
handle s2-metadata ./my-project --output metadata.json
# Filter by language
handle s2-metadata ./my-project \
--languages python,typescript \
--exclude node_modules,venv
```

Features:
- AST-based parsing (see the sketch below)
- Multi-language support
- Function/class extraction
- Import analysis
- LOC counting
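A minimal sketch of the kind of AST walk the Python metadata provider performs; the output schema here is simplified and the function name is an assumption:

```python
import ast


def extract_metadata(source: str, path: str = "<memory>") -> dict:
    """Collect function/class names, imported modules, and LOC for one module."""
    tree = ast.parse(source, filename=path)
    functions, classes, imports = [], [], set()
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            functions.append(node.name)
        elif isinstance(node, ast.ClassDef):
            classes.append(node.name)
        elif isinstance(node, ast.Import):
            imports.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            imports.add(node.module)
    return {
        "path": path,
        "functions": functions,
        "classes": classes,
        "imports": sorted(imports),
        "loc": len(source.splitlines()),
    }


print(extract_metadata("import json\n\nclass Calc:\n    def add(self, a, b):\n        return a + b\n"))
```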
S3 (Validation) runs comprehensive validation in stages:
```bash
# Full validation pipeline
handle s3-validate ./project \
--requirements requirements.json \
--ai-check \
--output report.json
# Syntax only
handle s3-validate ./project --no-tests
```

Pipeline Stages (sketched below):
- Syntax: AST parsing, linting
- Tests: pytest, jest, etc.
- AI Logic: Requirements consistency check
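The pipeline itself can be pictured as an ordered list of stages that each return a result and short-circuit on failure; a minimal sketch (class and function names are assumptions, the stage order matches the list above):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class StageResult:
    stage: str
    passed: bool
    details: str = ""


def run_pipeline(project: str, stages: list[Callable[[str], StageResult]]) -> list[StageResult]:
    """Run syntax -> tests -> AI logic in order; skip later stages after a failure."""
    results = []
    for stage in stages:
        result = stage(project)
        results.append(result)
        if not result.passed:
            break
    return results


def syntax_stage(project: str) -> StageResult:
    # A real stage would ast.parse / lint every file under `project`.
    return StageResult("syntax", passed=True)


print(run_pipeline("./generated", [syntax_stage]))
```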
The platform uses providers for language-specific operations (a minimal provider sketch follows these lists):
Python provider:
- Code Generation: PEP 8, type hints, docstrings
- Metadata: AST parsing for functions/classes
- Syntax: ast.parse validation
- Tests: pytest runner
TypeScript provider:
- Code Generation: TSDoc, interfaces
- Metadata: TS compiler API
- Syntax: tsc --noEmit
- Tests: jest runner
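A minimal sketch of how a per-language provider might be registered and used for the `ast.parse` syntax check above; the registry decorator stands in for kernel/registry.py and is an assumption, not the real API:

```python
import ast

PROVIDERS: dict[str, object] = {}


def register(language: str):
    """Stand-in for the registry: map a language name to a provider instance."""
    def decorator(cls):
        PROVIDERS[language] = cls()
        return cls
    return decorator


@register("python")
class PythonSyntaxValidator:
    def validate(self, source: str) -> list[str]:
        """Return syntax errors; an empty list means the source parses cleanly."""
        try:
            ast.parse(source)
            return []
        except SyntaxError as exc:
            return [f"line {exc.lineno}: {exc.msg}"]


print(PROVIDERS["python"].validate("def broken(:\n    pass\n"))
```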
Environment variables in .env:
```bash
# AI Configuration
LLM_BACKEND=openai # openai | azure
OPENAI_API_KEY=sk-...
MODEL=gpt-4
TEMPERATURE=0.0
# Limits
MAX_TOKENS=4000
MAX_REQUESTS_PER_HOUR=100
DRY_RUN=false
# File Processing
MAX_FILE_SIZE_MB=10
IGNORED_DIRECTORIES=.git,node_modules,dist
# Logging
LOG_LEVEL=INFO
LOG_FORMAT=json
```
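These values are typically read once at startup; a minimal sketch of what the pydantic-settings model in kernel/config.py could look like (the class and field names mirror the keys above but are assumptions):

```python
from pydantic_settings import BaseSettings, SettingsConfigDict


class PlatformSettings(BaseSettings):
    """Typed view of the .env values; unknown keys are ignored."""

    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    llm_backend: str = "openai"  # openai | azure
    openai_api_key: str = ""
    model: str = "gpt-4"
    temperature: float = 0.0
    max_tokens: int = 4000
    max_requests_per_hour: int = 100
    dry_run: bool = False
    max_file_size_mb: int = 10
    ignored_directories: str = ".git,node_modules,dist"
    log_level: str = "INFO"
    log_format: str = "json"


settings = PlatformSettings()
print(settings.model, settings.dry_run)
```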
The `handle` CLI provides five commands:
- `handle s1-generate`: Generate code from requirements
- `handle s2-metadata`: Extract metadata from code
- `handle s3-validate`: Validate code (syntax → tests → AI)
- `handle status`: Show platform status
- `handle version`: Show version info

```bash
# Generate Python calculator
handle s1-generate examples/requirements/calculator.json python
# Extract metadata with language filter
handle s2-metadata ./src --languages python --output meta.json
# Validate with AI logic check
handle s3-validate ./src --requirements req.json --ai-check
# Check what providers are available
handle status
```

```bash
# Run unit tests
pytest tests/unit/
# Run integration tests
pytest tests/integration/
# Run end-to-end tests
pytest tests/e2e/
# With coverage
pytest --cov=platform tests/
```

```bash
# Build image
docker build -t platform:latest .
# Run CLI
docker run -it platform:latest handle status
# Run API (future)
docker run -p 8000:8000 platform:latest uvicorn platform.interfaces.api.app:app
```

GitHub Actions workflow included for:
- Linting (ruff, mypy)
- Testing (pytest)
- Security (bandit)
- Docker builds
Current and planned features:
- Python providers (all 3 services)
- CLI interface with Typer
- OpenAI/Azure OpenAI support
- Basic validation pipeline
- TypeScript providers
- Java providers
- Plugin system
- Web dashboard
- FastAPI REST API
- Authentication & authorization
- Rate limiting & quotas
- Metrics & monitoring
- Multi-tenant support
- Custom model fine-tuning
- Advanced static analysis
- Integration with IDEs
- Collaborative features
To contribute:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
To add a new language:
- Create a provider directory: `adapters/providers/mylang/`
- Implement the three providers: `codegen_provider.py`, `metadata_provider.py`, `syntax_validator.py`
- Register in `kernel/di.py`
- Add tests in `tests/unit/providers/mylang/`
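The three modules for a hypothetical `mylang` provider could start from a skeleton like the one below; the method signatures are assumptions, so mirror the existing Python provider for the actual contracts:

```python
# adapters/providers/mylang/ (illustrative skeleton, one class per module)


class MyLangCodegenProvider:  # codegen_provider.py
    language = "mylang"

    def generate(self, requirements: dict, context: dict | None = None) -> dict[str, str]:
        """Return a mapping of output file path -> generated source."""
        raise NotImplementedError


class MyLangMetadataProvider:  # metadata_provider.py
    language = "mylang"

    def extract(self, source: str, path: str) -> dict:
        """Parse `source` and return functions, classes, imports, and LOC."""
        raise NotImplementedError


class MyLangSyntaxValidator:  # syntax_validator.py
    language = "mylang"

    def validate(self, source: str) -> list[str]:
        """Return a list of syntax errors; an empty list means valid code."""
        raise NotImplementedError
```

Each class is then wired up in `kernel/di.py` so the registry can discover it by language name.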
MIT License - see LICENSE file for details.
Acknowledgements:
- OpenAI for GPT models
- Pydantic for data validation
- Typer for CLI framework
- FastAPI for future API layer
- Rich for beautiful CLI output