Language / 語言: English | 繁體中文
A complete LeetCode practice framework with multiple test cases, auto-comparison, and debug integration.
neetcode/
│
├── .vscode/                  ← VS Code integration
│   ├── settings.json         ← Python environment settings
│   ├── tasks.json            ← Ctrl+Shift+B shortcuts
│   └── launch.json           ← F5 debug configuration
│
├── runner/                   ← Test runner modules
│   ├── test_runner.py        ← Run all .in/.out pairs and compare
│   ├── case_runner.py        ← Run a single test case (for debugging)
│   └── util.py               ← Shared utilities
│
├── solutions/                ← Solution files, one per problem
│   └── 0001_two_sum.py
│
├── tests/                    ← All test cases
│   ├── 0001_two_sum_1.in
│   ├── 0001_two_sum_1.out
│   ├── *_failed_*.in         ← Auto-saved failed generated cases (with --save-failed)
│   └── ...
│
├── templates/                ← Templates for new problems
│   ├── template_solution.py          ← Single-solution template
│   ├── template_solution_multi.py    ← Multi-solution (one class)
│   ├── template_solution_wrapper.py  ← Multi-solution (wrapper pattern)
│   └── template_test.txt
│
├── leetcode/                 ← Python virtual environment (Python 3.11)
│
├── run_tests.bat             ← Windows: run all tests
├── run_case.bat              ← Windows: run a single test
├── new_problem.bat           ← Windows: create a new problem
│
├── run_tests.sh              ← Linux/macOS: run all tests
├── run_case.sh               ← Linux/macOS: run a single test
├── new_problem.sh            ← Linux/macOS: create a new problem
│
└── README.md
Reference: LeetCode Official Environment
Prerequisite: To use the `py install` command, you must first install the Python Install Manager from the official Python website.
# Navigate to project directory
cd /d "D:\Developer\program\python\neetcode"
# Install Python 3.11 (if not already installed)
# Note: Requires Python Install Manager from https://www.python.org/downloads/
py install 3.11
# Create virtual environment
py -3.11 -m venv leetcode
# Activate virtual environment
leetcode\Scripts\activate
# Install debugpy (for debugging)
pip install debugpy

Why pyenv? It installs Python in your user directory without affecting the system Python, and supports multiple versions side by side.
# ============================================
# Step 1: Install pyenv (one-time setup)
# ============================================
# --- macOS ---
brew install pyenv
# --- Linux (Ubuntu/Debian/Fedora/etc.) ---
# Install dependencies first:
sudo apt update && sudo apt install -y build-essential libssl-dev zlib1g-dev \
libbz2-dev libreadline-dev libsqlite3-dev curl \
libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev
# Install pyenv:
curl https://pyenv.run | bash
# ============================================
# Step 2: Configure shell (add to ~/.bashrc or ~/.zshrc)
# ============================================
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
echo 'command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(pyenv init -)"' >> ~/.bashrc
# Reload shell
source ~/.bashrc # or: source ~/.zshrc
# ============================================
# Step 3: Install Python 3.11 and setup project
# ============================================
# Navigate to project directory
cd ~/path/to/neetcode
# Install Python 3.11 (doesn't affect system Python)
pyenv install 3.11
# Set Python 3.11 for this project only
pyenv local 3.11
# Create virtual environment
python -m venv leetcode
# Activate virtual environment
source leetcode/bin/activate
# Install debugpy (for debugging)
pip install debugpy
# Make shell scripts executable (first time only)
chmod +x run_tests.sh run_case.sh new_problem.sh

Alternative: Direct system install (may affect your existing Python)
# Ubuntu/Debian:
sudo apt update && sudo apt install python3.11 python3.11-venv
# macOS (Homebrew):
brew install python@3.11
# Then create venv:
python3.11 -m venv leetcode

Windows:
cd /d "D:\Developer\program\python\neetcode"
leetcode\Scripts\activate

Linux/macOS:
cd ~/path/to/neetcode
source leetcode/bin/activate

# Single solution template
new_problem.bat 0007_reverse_integer
# Multi-solution template (one class, multiple methods)
new_problem.bat 0023_merge_k_lists --multi
# Wrapper-based template (multiple classes, preserves LeetCode method names)
new_problem.bat 0025_reverse_nodes --wrapper

# Single solution template
./new_problem.sh 0007_reverse_integer
# Multi-solution template (one class, multiple methods)
./new_problem.sh 0023_merge_k_lists --multi
# Wrapper-based template (multiple classes, preserves LeetCode method names)
./new_problem.sh 0025_reverse_nodes --wrapper

This will create:
- solutions/0007_reverse_integer.py
- tests/0007_reverse_integer_1.in
- tests/0007_reverse_integer_1.out
# Run all test cases
run_tests.bat 0001_two_sum
# Run single test case
run_case.bat 0001_two_sum 1

# Run all test cases
./run_tests.sh 0001_two_sum
# Run single test case
./run_case.sh 0001_two_sum 1

| Shortcut | Function |
|---|---|
| `Ctrl+Shift+B` | Run all tests for current file |
| `F5` | Debug current file with case #1 |

Note: Open a solution file in `solutions/` before using shortcuts.
| Task | Description |
|---|---|
| Run all tests for current problem | Basic test run |
| Run case #1 / #2 | Run specific test case |
| Benchmark current problem | Show execution time |
| Run all solutions with benchmark | Compare all solutions |
| Run with generated cases (10) | Static + 10 generated |
| Run generated only | Skip static tests |
| Run generated with seed | Reproducible generation |
| Run generated + save failed | Save failed inputs |
| Run all solutions + generated | All solutions with generator |
| Configuration | Description |
|---|---|
| Debug current problem (case #1/2/3) | Debug specific test case |
| Debug all tests | Debug full test suite |
| Benchmark current problem | Run with timing |
| Debug with generated cases | Static + generated |
| Debug generated only | Only generated cases |
| Debug generated with seed | Reproducible debug |
| Debug all solutions + generated | Compare all with generator |
💡 Tip: These tasks/configs run the same commands documented in Command Line Usage and Test Case Generator. For example, "Benchmark current problem" runs `python runner/test_runner.py {problem} --benchmark`.
# solutions/0001_two_sum.py
from typing import List

class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        # Your solution
        pass

def solve():
    import sys
    lines = sys.stdin.read().strip().split('\n')
    # Parse input
    nums = list(map(int, lines[0].split(',')))
    target = int(lines[1])
    sol = Solution()
    result = sol.twoSum(nums, target)
    # Print result
    print(result)

if __name__ == "__main__":
    solve()

| Item | Requirement |
|---|---|
| Line ending | LF (Unix format, `\n`) |
| Encoding | UTF-8 |
| File ending | Must end with a single newline |
| Naming | `{problem_number}_{problem_name}_{case_number}.in` / `.out` |
Example - tests/0001_two_sum_1.in:
2,7,11,15
9

tests/0001_two_sum_1.out:
[0, 1]
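The requirements above are easy to get wrong on Windows (CRLF line endings, missing trailing newline). As a quick sanity check, a small script along these lines can verify a test file; `check_test_file` is a hypothetical helper, not part of the runner:

```python
def check_test_file(path: str) -> list:
    """Return a list of format problems found in a .in/.out test file."""
    problems = []
    raw = open(path, "rb").read()
    if b"\r\n" in raw:
        problems.append("uses CRLF line endings (must be LF)")
    try:
        raw.decode("utf-8")
    except UnicodeDecodeError:
        problems.append("not valid UTF-8")
    if not raw.endswith(b"\n"):
        problems.append("missing trailing newline")
    elif raw.endswith(b"\n\n"):
        problems.append("ends with more than one newline")
    return problems

# e.g. check_test_file("tests/0001_two_sum_1.in") -> [] when the file is clean
```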
# Run all test cases
python runner/test_runner.py <problem_name>
# Run single test case
python runner/case_runner.py <problem_name> <case_index>

python runner/test_runner.py 0001_two_sum
python runner/case_runner.py 0001_two_sum 1

Test multiple solutions for the same problem and compare their performance.
# Run default solution
python runner/test_runner.py 0023_merge_k_sorted_lists
# Run specific solution
python runner/test_runner.py 0023_merge_k_sorted_lists --method heap
python runner/test_runner.py 0023_merge_k_sorted_lists --method greedy
# Run all solutions
python runner/test_runner.py 0023_merge_k_sorted_lists --all
# Run all solutions + performance comparison
python runner/test_runner.py 0023_merge_k_sorted_lists --all --benchmark

Add a SOLUTIONS dictionary to your solution file:
# solutions/0023_merge_k_sorted_lists.py
SOLUTIONS = {
    "default": {
        "method": "mergeKListsPriorityQueue",  # Name of the method in the Solution class
        "complexity": "O(N log k)",            # Time complexity
        "description": "Priority Queue approach"
    },
    "heap": {
        "method": "mergeKListsPriorityQueue",
        "complexity": "O(N log k)",
        "description": "Priority Queue (Min Heap)"
    },
    "divide": {
        "method": "mergeKListsDivideConquer",
        "complexity": "O(N log k)",
        "description": "Divide and Conquer"
    },
    "greedy": {
        "method": "mergeKListsGreedy",
        "complexity": "O(kN)",
        "description": "Greedy comparison"
    },
}

class Solution:
    def mergeKListsPriorityQueue(self, lists):
        # Heap solution...
        pass

    def mergeKListsDivideConquer(self, lists):
        # Divide & Conquer solution...
        pass

    def mergeKListsGreedy(self, lists):
        # Greedy solution...
        pass

def solve():
    import os
    # Get the solution method from the environment variable
    method_name = os.environ.get('SOLUTION_METHOD', 'default')
    method_info = SOLUTIONS.get(method_name, SOLUTIONS['default'])
    method_func_name = method_info['method']
    sol = Solution()
    method_func = getattr(sol, method_func_name)
    result = method_func(...)
    print(result)

| Field | Description | Required |
|---|---|---|
| `method` | Method name in the Solution class | ✅ |
| `complexity` | Time complexity (for display) | ❌ |
| `description` | Solution description | ❌ |
The key in SOLUTIONS is the short name used on the command line:
SOLUTIONS = {
    "default": {"method": "solve_optimal", ...},        # Default solution
    "heap":    {"method": "solve_heap", ...},           # --method heap
    "h":       {"method": "solve_heap", ...},           # --method h (alias)
    "pq":      {"method": "solve_priority_queue", ...}, # --method pq
    "bf":      {"method": "solve_bruteforce", ...},     # --method bf
}

Note:
- `default` is used when `--method` is not specified
- Time complexity must be annotated by the user; the system only measures actual execution time
When implementing multiple approaches (e.g., recursive vs iterative), you may encounter:
- Method name conflicts inside one class
- Having to rename methods away from their original LeetCode signatures
Solution: Use separate Solution classes with wrapper functions.
# solutions/0025_reverse_nodes_in_k_group.py
# ============================================
# Solution 1: Recursive approach
# ============================================
class SolutionRecursive:
    def reverseKGroup(self, head, k):
        # Recursive implementation...
        pass

# ============================================
# Solution 2: Iterative approach
# ============================================
class SolutionIterative:
    def reverseKGroup(self, head, k):
        # Iterative implementation...
        pass

# ============================================
# Wrapper functions for test_runner integration
# ============================================
def solve_recursive(head, k):
    """Wrapper for SolutionRecursive."""
    return SolutionRecursive().reverseKGroup(head, k)

def solve_iterative(head, k):
    """Wrapper for SolutionIterative."""
    return SolutionIterative().reverseKGroup(head, k)

# ============================================
# SOLUTIONS metadata
# ============================================
SOLUTIONS = {
    "default": {
        "method": "solve_iterative",
        "complexity": "O(N) time, O(1) space",
        "description": "Iterative in-place reversal"
    },
    "recursive": {
        "method": "solve_recursive",
        "complexity": "O(N) time, O(N) space",
        "description": "Recursive reversal with stack"
    },
    "iterative": {
        "method": "solve_iterative",
        "complexity": "O(N) time, O(1) space",
        "description": "Iterative in-place reversal"
    },
}

def solve():
    import os
    import sys
    # Get the solution method from the environment variable
    method_name = os.environ.get('SOLUTION_METHOD', 'default')
    method_info = SOLUTIONS.get(method_name, SOLUTIONS['default'])
    method_func_name = method_info['method']
    # Parse input
    lines = sys.stdin.read().strip().split('\n')
    # ... parse your input ...
    # Call the wrapper function directly (not via a class)
    method_func = globals()[method_func_name]
    result = method_func(head, k)
    print(result)

Benefits of this pattern:
- Each solution stays in its own class (`SolutionRecursive`, `SolutionIterative`)
- Original LeetCode method names are preserved (e.g., `reverseKGroup`, `mergeKLists`)
- No method name collisions inside a single class
- Scales nicely when a problem has more than two approaches

Tip: Use `new_problem.bat <name> --wrapper` (Windows) or `./new_problem.sh <name> --wrapper` (Linux/macOS) to create a template with this pattern.
Some LeetCode problems state "You may return the answer in any order" or have multiple valid answers. The test runner supports flexible validation with clear output labels.
| Label | Description | Requires .out |
|---|---|---|
| `[judge]` | `JUDGE_FUNC` with `.out` reference | ✅ |
| `[judge-only]` | `JUDGE_FUNC` without `.out` (pure validation) | ❌ |
| `[exact]` | Exact string match | ✅ |
| `[sorted]` | Sort lists before comparison | ✅ |
| `[set]` | Set comparison | ✅ |

Comparison priority:
1. `JUDGE_FUNC` (custom validation) - highest priority
2. `COMPARE_MODE` (sorted/set comparison)
3. Exact string match (default)
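The priority order above can be pictured as a small dispatch function. This is a sketch of the idea, not the runner's actual code; `judge_func` and `compare_mode` stand in for whatever the solution module exports:

```python
import ast

def compare(actual: str, expected, input_data: str,
            judge_func=None, compare_mode: str = "exact") -> bool:
    """Compare program output to the expected answer, in priority order."""
    # 1. JUDGE_FUNC wins if defined (works even when expected is None)
    if judge_func is not None:
        try:
            parsed = ast.literal_eval(actual)   # parse output as a Python object
        except (ValueError, SyntaxError):
            parsed = actual                     # fall back to the raw string
        return judge_func(parsed, expected, input_data)
    # Without a judge, a reference output is mandatory
    if expected is None:
        raise ValueError("no .out file and no JUDGE_FUNC")
    # 2. COMPARE_MODE: order-insensitive comparisons
    if compare_mode == "sorted":
        return sorted(ast.literal_eval(actual)) == sorted(ast.literal_eval(expected))
    if compare_mode == "set":
        return set(ast.literal_eval(actual)) == set(ast.literal_eval(expected))
    # 3. Default: exact string match
    return actual.strip() == expected.strip()

print(compare("[2,1]", "[1,2]", "", compare_mode="sorted"))  # True
print(compare("[2,1]", "[1,2]", "", compare_mode="exact"))   # False
```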
============================================================
🧪 Testing: 0051_n_queens
⚖️ Judge: JUDGE_FUNC
============================================================
📌 Method: default
0051_n_queens_1: ✅ PASS (88.33ms) [judge]
0051_n_queens_2: ✅ PASS (92.15ms) [judge]
0051_n_queens_3: ✅ PASS (156.20ms) [judge-only]
Result: 3 / 3 cases passed.
Use the decision-problem approach: verify that the answer is valid, not just identical to a reference output.
Key feature: the `.out` file is optional when `JUDGE_FUNC` is defined!
# solutions/0051_n_queens.py
def judge(actual: list, expected, input_data: str) -> bool:
    """
    Custom validation function.

    Args:
        actual: Program output (parsed as a Python object if possible)
        expected: Expected output, or None if the .out file doesn't exist
        input_data: Raw input string

    Returns:
        bool: Whether the answer is correct
    """
    n = int(input_data.strip())
    # Validate every board regardless of expected
    for board in actual:
        if not is_valid_n_queens(board, n):
            return False
    # Check the count only if expected is provided
    if expected is not None:
        if len(actual) != len(expected):
            return False
    # Check for duplicates
    return len(set(tuple(b) for b in actual)) == len(actual)

JUDGE_FUNC = judge  # Tell test_runner to use this function

Benefits:
- Validates correctness, not just string equality
- Handles multiple valid answers
- `.out` file optional - supports judge-only mode for custom test cases
- Works with any output format (strings, objects, custom formats)
Use Cases for Judge-Only Mode (no .out):
- Custom large test cases you generate
- Stress testing with random inputs
- Cases where computing expected output is complex
For simple order-independent comparisons (requires a `.out` file):
# solutions/0046_permutations.py
COMPARE_MODE = "sorted"  # Options: "exact" | "sorted" | "set"

| Mode | Description | Use Case |
|---|---|---|
| `"exact"` | Exact string match (default) | Most problems |
| `"sorted"` | Sort lists before comparison | Permutations, combinations |
| `"set"` | Set comparison (ignores duplicates) | Unique elements |
def judge(actual: list, expected, input_data: str) -> bool:
    n = int(input_data.strip())
    # Always validate board correctness
    if not all(is_valid_board(b, n) for b in actual):
        return False
    # If .out exists, also check the count
    if expected is not None:
        return len(actual) == len(expected)
    return True  # Judge-only mode: just validate

JUDGE_FUNC = judge

def judge(actual: str, expected: str, input_data: str) -> bool:
    # Parse "1->2->3" format
    def parse(s):
        return s.strip().split("->") if s.strip() else []
    return parse(actual) == parse(expected)

JUDGE_FUNC = judge

def judge(actual: float, expected: float, input_data: str) -> bool:
    return abs(actual - expected) < 1e-5

JUDGE_FUNC = judge

def judge(actual: list, expected, input_data: str) -> bool:
    """Validate without expected output."""
    # expected is None when the .out file doesn't exist
    params = parse_input(input_data)
    return is_valid_solution(actual, params)

JUDGE_FUNC = judge

| Problem | Recommended Approach | .out Required |
|---|---|---|
| N-Queens | `JUDGE_FUNC` (validate board) | Optional |
| Permutations | `COMPARE_MODE = "sorted"` | ✅ |
| Subsets | `COMPARE_MODE = "sorted"` | ✅ |
| Shortest Path (multiple) | `JUDGE_FUNC` (validate path) | Optional |
| Floating point | `JUDGE_FUNC` (tolerance) | ✅ |
| LinkedList/Tree | `JUDGE_FUNC` (parse format) | ✅ |
| Custom stress tests | `JUDGE_FUNC` (judge-only) | ❌ |
Automatically generate test cases to stress-test your solutions.
Create a generator file in `generators/` with the same name as your solution:
generators/
└── 0004_median_of_two_sorted_arrays.py
# generators/0004_median_of_two_sorted_arrays.py
"""
LeetCode Constraints:
- 0 <= m, n <= 1000
- 1 <= m + n <= 2000
- -10^6 <= nums1[i], nums2[i] <= 10^6
"""
import random
from typing import Iterator, Optional

def generate(count: int = 10, seed: Optional[int] = None) -> Iterator[str]:
    """
    Generate test case inputs.

    Args:
        count: Number of test cases to generate
        seed: Random seed for reproducibility

    Yields:
        str: Test input (same format as .in files)
    """
    # Constraints
    min_m, max_m = 0, 1000
    min_n, max_n = 0, 1000
    min_val, max_val = -10**6, 10**6

    if seed is not None:
        random.seed(seed)

    # Edge cases first
    yield "[]\n[1]"
    yield "[1]\n[]"
    count -= 2

    # Random cases
    for _ in range(count):
        m = random.randint(min_m, max_m)
        n = random.randint(min_n, max_n)
        nums1 = sorted([random.randint(min_val, max_val) for _ in range(m)])
        nums2 = sorted([random.randint(min_val, max_val) for _ in range(n)])
        yield f"{nums1}\n{nums2}".replace(' ', '')

# Run tests/ + 10 generated cases
python runner/test_runner.py 0004_median --generate 10
# Only run generated cases (skip tests/)
python runner/test_runner.py 0004_median --generate-only 10
# Use a seed for reproducibility
python runner/test_runner.py 0004_median --generate 10 --seed 12345
# Save failed cases for debugging
# Failed cases are saved to tests/ as {problem}_failed_{n}.in
python runner/test_runner.py 0004_median --generate 10 --save-failed

============================================================
🧪 Testing: 0004_median_of_two_sorted_arrays
⚖️ Judge: JUDGE_FUNC
🎲 Generator: 10 cases, seed: 12345
============================================================
🚀 Running default solution...
--- tests/ (static) ---
0004_median_1: ✅ PASS (12.33ms) [judge]
0004_median_2: ✅ PASS (11.15ms) [judge]
--- generators/ (10 cases, seed: 12345) ---
gen_1: ✅ PASS (8.20ms) [generated]
gen_2: ✅ PASS (7.15ms) [generated]
gen_3: ❌ FAIL [generated]
  ┌─ Input ─────────────────────────────────
  │ [1,3,5,7,9]
  │ [2,4,6,8,10]
  ├─ Actual ────────────────────────────────
  │ 5.0
  └─────────────────────────────────────────
  💾 Saved to: tests/0004_median_failed_1.in
...
Summary: 11 / 12 cases passed.
 ├─ Static (tests/): 2/2
 └─ Generated: 9/10
💡 To reproduce: python runner/test_runner.py 0004_median --generate 10 --seed 12345
| Component | Required | Description |
|---|---|---|
| `generators/{problem}.py` | ✅ | Generator file; must have a `generate(count, seed)` function |
| `JUDGE_FUNC` in solution | ✅ | Generator cases have no `.out`, so a judge is needed |
| `tests/*.in` | Optional | Static tests run before generated ones |
| `tests/*_failed_*.in` | Auto-generated | Failed cases saved with the `--save-failed` flag |
Automatically estimate algorithm time complexity using the big_O library approach.
Simple and generic - Only requires one additional function in your generator:
| Function | Purpose | Required |
|---|---|---|
| `generate(count, seed)` | Random test cases for functional testing | ✅ Required |
| `generate_for_complexity(n)` | Controlled-size cases for complexity estimation | Optional |

The estimator uses a mock-stdin approach internally:
- ✅ Generic - works with any solution that has a `solve()` function
- ✅ No subprocess overhead
- ✅ Maintains the stdin abstraction design
# Estimate complexity (requires generate_for_complexity in generator)
python runner/test_runner.py 0004_median_of_two_sorted_arrays --estimate
# Combine with other flags
python runner/test_runner.py 0004 --all --benchmark --estimate

# generators/0004_median_of_two_sorted_arrays.py
# Required: random test generation
def generate(count: int, seed: Optional[int] = None) -> Iterator[str]:
    """Random sizes - tests functional correctness"""
    for _ in range(count):
        m = random.randint(0, 1000)
        n = random.randint(0, 1000)
        yield _generate_case(m, n)

# Optional: enable complexity estimation
def generate_for_complexity(n: int) -> str:
    """
    Generate a test case with a specific input size.
    For this problem, n = total elements (m + n).
    """
    m = random.randint(0, n)
    return _generate_case(m, n - m)

📈 Running complexity estimation...
Mode: Direct call (mock stdin, no subprocess overhead)
Sizes: [10, 20, 50, 100, 200, 500, 1000, 2000]
n=  10: 0.0040ms (avg of 3 runs)
n= 100: 0.0082ms (avg of 3 runs)
n=1000: 0.0685ms (avg of 3 runs)
n=2000: 0.1796ms (avg of 3 runs)
✅ Estimated: O(n log n)
Confidence: 1.00
| Component | Required | Description |
|---|---|---|
| `big-O` package | ✅ | `pip install big-O` |
| `generate_for_complexity(n)` | ✅ | Function that takes a size n and returns a test input |
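Fitting a complexity class boils down to timing the function at several sizes and checking which growth curve matches best. The real estimation uses the `big-O` package; this stdlib-only sketch just illustrates the principle:

```python
import math
import time

def estimate_complexity(func, sizes=(1000, 2000, 4000, 8000, 16000)):
    """Time func(n) at several sizes and pick the best-fitting growth curve."""
    times = []
    for n in sizes:
        start = time.perf_counter()
        func(n)
        times.append(time.perf_counter() - start)
    candidates = {
        "O(1)":       lambda n: 1.0,
        "O(log n)":   lambda n: math.log(n),
        "O(n)":       lambda n: float(n),
        "O(n log n)": lambda n: n * math.log(n),
        "O(n^2)":     lambda n: float(n * n),
    }
    best_name, best_err = None, float("inf")
    for name, g in candidates.items():
        xs = [g(n) for n in sizes]
        # Least-squares scale factor, then residual error
        k = sum(x * t for x, t in zip(xs, times)) / sum(x * x for x in xs)
        err = sum((t - k * x) ** 2 for x, t in zip(xs, times))
        if err < best_err:
            best_name, best_err = name, err
    return best_name

# Example: summing a range scales roughly linearly in n
print(estimate_complexity(lambda n: sum(range(n))))
```

Timing noise can blur adjacent classes (e.g. O(n) vs O(n log n)), which is why the runner reports a confidence value alongside the estimate.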
Not all problems are suitable for time complexity estimation. Estimation works best when:

| ✅ Suitable | ❌ Not Suitable |
|---|---|
| Input size n can vary continuously (10, 100, 1000...) | Input size has hard constraints (e.g., n ≤ 9) |
| Execution time scales with input size | Execution time is dominated by fixed overhead |
| Linear, logarithmic, polynomial complexity | Factorial/exponential with a small n limit |
Examples:
| Problem | Suitable? | Reason |
|---|---|---|
| Two Sum | ✅ | n can range from 10 to 10000; O(n) scales clearly |
| Longest Substring | ✅ | String length can vary widely |
| Merge k Sorted Lists | ✅ | Total elements N can scale |
| N-Queens (0051) | ❌ | n ≤ 9 (factorial explosion); size can't vary meaningfully |
| Rotting Oranges (0994) | ❌ | Grid size limited; BFS time dominated by grid structure |
| Sudoku Solver | ❌ | Fixed 9x9 grid, backtracking complexity |
Tip: Only add `generate_for_complexity(n)` to generators where n can meaningfully vary from small (10) to large (1000+).

- Solution files: no changes required (must have a `solve()` function)
- Existing generators: continue to work without changes
- New feature: add `generate_for_complexity(n)` to enable estimation
============================================================
🧪 Testing: 0023_merge_k_sorted_lists
============================================================
📌 Method: default
Complexity: O(N log k)
Description: Priority Queue (Min Heap) approach
0023_merge_k_sorted_lists_1: ✅ PASS (53.04ms)
0023_merge_k_sorted_lists_2: ✅ PASS (43.11ms)
0023_merge_k_sorted_lists_3: ✅ PASS (44.50ms)
Result: 3 / 3 cases passed.
📌 Method: heap
Complexity: O(N log k)
Description: Priority Queue (Min Heap) approach
0023_merge_k_sorted_lists_1: ✅ PASS (44.40ms)
0023_merge_k_sorted_lists_2: ✅ PASS (43.89ms)
0023_merge_k_sorted_lists_3: ✅ PASS (44.79ms)
Result: 3 / 3 cases passed.
📌 Method: divide
Complexity: O(N log k)
Description: Divide and Conquer approach
0023_merge_k_sorted_lists_1: ✅ PASS (44.02ms)
0023_merge_k_sorted_lists_2: ✅ PASS (44.32ms)
0023_merge_k_sorted_lists_3: ✅ PASS (45.11ms)
Result: 3 / 3 cases passed.
📌 Method: greedy
Complexity: O(kN)
Description: Greedy comparison - compare all k heads each time
0023_merge_k_sorted_lists_1: ✅ PASS (44.68ms)
0023_merge_k_sorted_lists_2: ✅ PASS (45.00ms)
0023_merge_k_sorted_lists_3: ✅ PASS (44.78ms)
Result: 3 / 3 cases passed.
============================================================
📊 Performance Comparison
============================================================
Method       Avg Time    Complexity    Pass Rate
------------------------------------------------------------
default      46.88ms     O(N log k)    3/3
heap         44.36ms     O(N log k)    3/3
divide       44.48ms     O(N log k)    3/3
greedy       44.82ms     O(kN)         3/3
============================================================
- Python version: 3.11 (matches the LeetCode official environment)
- Virtual environment: `leetcode/` (inside the project)
- Dependencies: see `requirements.txt`

# Activate the virtual environment first, then:
pip install -r requirements.txt

| Package | Required | Description |
|---|---|---|
| `debugpy` | ✅ | Debug support for VS Code |
| `big-O` | Optional | Time complexity estimation |
# PowerShell
.\leetcode\Scripts\Activate.ps1
# CMD
leetcode\Scripts\activate.bat

source leetcode/bin/activate

# Activate the virtual environment first, then install
leetcode\Scripts\activate
pip install <package_name>

# Activate the virtual environment first, then install
source leetcode/bin/activate
pip install <package_name>

- Add more test cases: copy the `.in`/`.out` files and change the number:
  0001_two_sum_1.in  → 0001_two_sum_2.in
  0001_two_sum_1.out → 0001_two_sum_2.out
- Debug a specific test case: change the case number in `launch.json`
- Custom input format: define the parsing logic in the `solve()` function
MIT License - Free for personal learning