Merged
1 change: 1 addition & 0 deletions .eslintignore
@@ -1,3 +1,4 @@
dist
node_modules
examples
locales
56 changes: 36 additions & 20 deletions .github/copilot-instructions.md
@@ -11,7 +11,7 @@
- **Installed Size**: ~106MB (including all dependencies in node_modules)
- **Languages**: TypeScript (primary), JavaScript (compiled output)
- **Target Runtime**: Node.js (CommonJS modules)
- **Framework**: Model-agnostic, no AI framework dependencies
- **Framework**: Model-agnostic core; optional adapters (LangChain)
- **Package Manager**: pnpm (preferred) or npm (fallback)
- **Testing**: vitest
- **Linting**: ESLint with TypeScript support
@@ -52,7 +52,7 @@
npm run test
# Runs all vitest tests
# Duration: ~5-10 seconds
# Should show: "Test Files 2 passed (2), Tests 7 passed (7)"
# Should show: "Test Files 3 passed (3), Tests 16 passed (16)"
```

4. **Format Code**
@@ -97,20 +97,24 @@ The repository uses GitHub Actions CI that runs:

```
/
├── src/ # TypeScript source code
│ ├── index.ts # Main exports (TrimCompressor, SummarizeCompressor, interfaces)
│ ├── interfaces.ts # Core type definitions (SlimContextMessage, etc.)
│ └── strategies/ # Compression strategy implementations
│ ├── trim.ts # TrimCompressor: keeps first + last N messages
│ └── summarize.ts # SummarizeCompressor: AI-powered summarization
├── tests/ # vitest test files
│ ├── trim.test.ts # Tests for TrimCompressor
│ └── summarize.test.ts # Tests for SummarizeCompressor
├── examples/ # Documentation-only examples (not code)
│ ├── OPENAI_EXAMPLE.md # Copy-paste OpenAI integration
│ └── LANGCHAIN_EXAMPLE.md # Copy-paste LangChain integration
├── dist/ # Compiled JavaScript output (generated)
└── package.json # npm package configuration
├── src/ # TypeScript source code
│ ├── index.ts # Main exports (trim, summarize, interfaces, adapters namespace)
│ ├── interfaces.ts # Core type definitions (SlimContextMessage, etc.)
│ ├── adapters/ # Integration adapters (optional)
│ │ └── langchain.ts # LangChain adapter + helpers (compressLangChainHistory, toSlimModel)
│ └── strategies/ # Compression strategy implementations
│ ├── trim.ts # TrimCompressor: keeps first + last N messages
│ └── summarize.ts # SummarizeCompressor: AI-powered summarization
├── tests/ # vitest test files
│ ├── trim.test.ts # Tests for TrimCompressor
│ ├── summarize.test.ts # Tests for SummarizeCompressor
│ └── langchain.test.ts # Tests for LangChain adapter + helper
├── examples/ # Documentation-only examples (not code)
│ ├── OPENAI_EXAMPLE.md # Copy-paste OpenAI integration
│ ├── LANGCHAIN_EXAMPLE.md # Copy-paste LangChain integration
│ └── LANGCHAIN_COMPRESS_HISTORY.md # One-call compressLangChainHistory usage
├── dist/ # Compiled JavaScript output (generated)
└── package.json # npm package configuration
```

### Configuration Files
@@ -134,12 +138,13 @@ The repository uses GitHub Actions CI that runs:
- **TrimCompressor**: Simple strategy keeping first (system) message + last N-1 messages
- **SummarizeCompressor**: AI-powered strategy that summarizes middle conversations when exceeding maxMessages
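The trim strategy above can be sketched in a few lines — an illustrative, hedged sketch only (names like `Msg`, `trim`, and `messagesToKeep` are assumptions; the real implementation lives in `src/strategies/trim.ts`):

```typescript
// Illustrative message shape; the library's actual type is SlimContextMessage.
interface Msg {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Keep the first (system) message plus the most recent messages,
// so the result has at most `messagesToKeep` entries.
function trim(messages: Msg[], messagesToKeep: number): Msg[] {
  if (messages.length <= messagesToKeep) return messages;
  const [first, ...rest] = messages;
  return [first, ...rest.slice(rest.length - (messagesToKeep - 1))];
}
```

This keeps the system prompt anchored while dropping the oldest middle turns, which is the cheapest way to bound context size without an LLM call.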

**Framework Independence**: No dependencies on OpenAI, LangChain, or other AI frameworks. Users implement the minimal `SlimContextChatModel` interface to connect their preferred model.
**Framework Independence**: Core library has no framework dependencies. An optional LangChain adapter is provided for convenience; core remains BYOM.
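The BYOM contract can be sketched as follows — a hedged, illustrative shape only; the authoritative `SlimContextChatModel` definition is in `src/interfaces.ts`, and the exact field names here are assumptions:

```typescript
// Illustrative message and model shapes mirroring the BYOM idea.
interface SlimContextMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface SlimContextChatModel {
  invoke(messages: SlimContextMessage[]): Promise<{ content: string }>;
}

// A stub model for tests — a real implementation would call your
// provider's API (OpenAI, LangChain, etc.) inside invoke().
const stubModel: SlimContextChatModel = {
  async invoke(messages) {
    return { content: `summary of ${messages.length} messages` };
  },
};
```

Anything satisfying this minimal `invoke()` shape can back the summarize strategy, which is why the core needs no framework dependencies.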

### Dependencies and Build Artifacts

- **Production**: Zero dependencies (framework-agnostic design)
- **Production**: Zero runtime dependencies (framework-agnostic design)
- **Development**: TypeScript, ESLint, Prettier, vitest, various type definitions
- **Optional peer**: `@langchain/core` (only if using the LangChain adapter). The adapter is exported under `slimcontext/adapters/langchain` and as a `langchain` namespace from the root export.
- **Ignored Files**: dist/, node_modules/, examples/ (linting), \*.tgz
- **Distributed Files**: Only dist/ directory (compiled JS + .d.ts files)

@@ -149,8 +154,8 @@ The repository uses GitHub Actions CI that runs:

```bash
npm run test
# Expects: 7 tests across 2 files, all passing
# Tests cover both TrimCompressor and SummarizeCompressor functionality
# Expects: ~16 tests across 3 files, all passing
# Tests cover TrimCompressor, SummarizeCompressor, and the LangChain adapter/helper
```

### Manual Verification Steps
@@ -181,6 +186,17 @@ npm run test
- **src/interfaces.ts**: Core type definitions - modify for interface changes
- **src/strategies/trim.ts**: Simple compression logic
- **src/strategies/summarize.ts**: AI-powered compression with alignment logic
- **src/adapters/langchain.ts**: LangChain adapter and helpers (`compressLangChainHistory`, `toSlimModel`, conversions)

### Adapters

- LangChain adapter import options:
- Recommended (works across module systems): `import * as langchain from 'slimcontext/adapters/langchain'`
- Root namespace (available as a property on the root export; usage differs by module system):
- CommonJS: `const { langchain } = require('slimcontext')`
- ESM/TypeScript: `import * as slim from 'slimcontext'; const { langchain } = slim;`
- Note: `import { langchain } from 'slimcontext'` may not work in all environments due to CJS/ESM interop. Prefer one of the patterns above.
- Includes a one-call history helper: `compressLangChainHistory(history, options)`

---

24 changes: 23 additions & 1 deletion CHANGELOG.md
@@ -4,7 +4,29 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [Unreleased] - 2025-08-24
## [2.1.0] - 2025-08-27

### Added

- LangChain adapter under `src/adapters/langchain.ts` with helpers:
- `extractContent`, `roleFromMessageType`, `baseToSlim`, `slimToLangChain`
- `toSlimModel(llm)` wrapper to use LangChain `BaseChatModel` with `SummarizeCompressor`.
- `compressLangChainHistory(history, options)` high-level helper for one-call compression on `BaseMessage[]`.
- Tests for adapter behavior in `tests/langchain.test.ts`.
- Examples:
- `examples/LANGCHAIN_EXAMPLE.md`: adapting a LangChain model to `SlimContextChatModel`.
- `examples/LANGCHAIN_COMPRESS_HISTORY.md`: using `compressLangChainHistory` directly.

### Changed

- README updated with a LangChain adapter section and one-call usage sample.

### Notes

- The adapter treats LangChain `tool` messages as `assistant` during compression.
- `@langchain/core` is an optional peer dependency; only needed if you use the adapter.

## [2.0.0] - 2025-08-24

### Breaking

47 changes: 44 additions & 3 deletions README.md
@@ -7,13 +7,14 @@ Lightweight, model-agnostic chat history compression utilities for AI assistants
## Examples

- OpenAI: see `examples/OPENAI_EXAMPLE.md` (copy-paste snippet; BYOM, no deps added here).
- LangChain: see `examples/LANGCHAIN_EXAMPLE.md` (adapts a LangChain chat model to `SlimContextChatModel`).
- LangChain: see `examples/LANGCHAIN_EXAMPLE.md` and `examples/LANGCHAIN_COMPRESS_HISTORY.md`.

## Features

- Trim strategy: keep the first (system) message and last N messages.
- Summarize strategy: summarize the middle portion using your own chat model.
- Framework agnostic: plug in any model wrapper implementing a minimal `invoke()` interface.
- Optional LangChain adapter with a one-call helper for compressing histories.

## Installation

@@ -113,8 +114,48 @@

## Example Integration

See `examples/LANGCHAIN_EXAMPLE.md` for a LangChain-style example.
See `examples/OPENAI_EXAMPLE.md` for an OpenAI example (copy-paste snippet).
- See `examples/OPENAI_EXAMPLE.md` for an OpenAI copy-paste snippet.
- See `examples/LANGCHAIN_EXAMPLE.md` for a LangChain-style integration.
- See `examples/LANGCHAIN_COMPRESS_HISTORY.md` for a one-call LangChain history compression helper.

## Adapters

### LangChain

If you already use LangChain chat models, you can use the built-in adapter. It’s exported in two ways:

- Namespaced: `import { langchain } from 'slimcontext'`
- Direct path: `import * as langchain from 'slimcontext/adapters/langchain'`

Common helpers:

- `compressLangChainHistory(history, options)` – one-call compression for LangChain `BaseMessage[]`.
- `toSlimModel(llm)` – wrap a LangChain `BaseChatModel` for `SummarizeCompressor`.

Example (one-call history compression):

```ts
import { AIMessage, HumanMessage, SystemMessage } from '@langchain/core/messages';
import { ChatOpenAI } from '@langchain/openai';
import { langchain } from 'slimcontext';

const lc = new ChatOpenAI({ model: 'gpt-4o-mini', temperature: 0 });

const history = [
new SystemMessage('You are helpful.'),
new HumanMessage('Please summarize the discussion so far.'),
new AIMessage('Certainly!'),
// ...more messages
];

const compact = await langchain.compressLangChainHistory(history, {
strategy: 'summarize',
llm: lc, // BaseChatModel
maxMessages: 12,
});
```

See `examples/LANGCHAIN_COMPRESS_HISTORY.md` for a fuller copy-paste example.

## API

42 changes: 42 additions & 0 deletions examples/LANGCHAIN_COMPRESS_HISTORY.md
@@ -0,0 +1,42 @@
# LangChain one-call history compression

This example shows how to use `compressLangChainHistory` to compress a LangChain message history in a single call.

```ts
import { AIMessage, HumanMessage, SystemMessage } from '@langchain/core/messages';
import { ChatOpenAI } from '@langchain/openai';
import { langchain } from 'slimcontext';

// 1) Create your LangChain chat model (any BaseChatModel works)
const llm = new ChatOpenAI({ model: 'gpt-4o-mini', temperature: 0 });

// 2) Build your existing LangChain-compatible history
const history = [
new SystemMessage('You are a helpful assistant.'),
new HumanMessage('Hi! Help me plan a 3-day trip to Tokyo.'),
new AIMessage('Sure, what are your interests?'),
// ... many more messages
];

// 3) Compress with either summarize (default) or trim strategy
const compact = await langchain.compressLangChainHistory(history, {
strategy: 'summarize',
llm, // pass your BaseChatModel
maxMessages: 12, // target total messages after compression (system + summary + recent)
});

// Alternatively, use trimming without an LLM:
const trimmed = await langchain.compressLangChainHistory(history, {
strategy: 'trim',
messagesToKeep: 8,
});

console.log('Original size:', history.length);
console.log('Summarized size:', compact.length);
console.log('Trimmed size:', trimmed.length);
```

Notes

- `@langchain/core` is an optional peer dependency. Install it only if you use the adapter.
- `maxMessages` must be at least 4 for summarize (system + summary + 2 recent).
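The floor in the note above follows from simple arithmetic — an illustrative sketch, not library code:

```typescript
// Compressed summarize output has the shape [system, summary, ...recent].
// With at least two recent messages kept, the minimum viable target is 4.
function minSummarizeTarget(recentToKeep: number): number {
  const systemSlot = 1; // the original system message
  const summarySlot = 1; // the generated summary message
  return systemSlot + summarySlot + recentToKeep;
}
```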
17 changes: 15 additions & 2 deletions package.json
@@ -1,9 +1,13 @@
{
"name": "slimcontext",
"version": "2.0.1",
"version": "2.1.0",
"description": "Lightweight, model-agnostic chat history compression (trim + summarize) for AI assistants.",
"main": "dist/index.js",
"types": "dist/index.d.ts",
"exports": {
".": "./dist/index.js",
"./adapters/langchain": "./dist/adapters/langchain.js"
},
"files": [
"dist"
],
@@ -37,6 +41,14 @@
},
"homepage": "https://github.com/Agentailor/slimcontext#readme",
"packageManager": "pnpm@10.14.0",
"peerDependencies": {
"@langchain/core": ">=0.3.71 <1"
},
"peerDependenciesMeta": {
"@langchain/core": {
"optional": true
}
},
"devDependencies": {
"eslint": "^8.57.0",
"@types/node": "^24.3.0",
@@ -47,6 +59,7 @@
"eslint-plugin-unused-imports": "^4.2.0",
"prettier": "^3.6.2",
"typescript": "^5.9.2",
"vitest": "^3.2.4"
"vitest": "^3.2.4",
"@langchain/core": "^0.3.71"
}
}