🧩 NeetCode / LeetCode Practice Framework

Language / θͺžθ¨€: English | 繁體中文

A complete LeetCode practice framework with multiple test cases, auto-comparison, and debug integration.



πŸ“ Project Structure

neetcode/
β”‚
β”œβ”€β”€ .vscode/                 ← VS Code integration
β”‚   β”œβ”€β”€ settings.json        ← Python environment settings
β”‚   β”œβ”€β”€ tasks.json           ← Ctrl+Shift+B shortcuts
β”‚   └── launch.json          ← F5 Debug configuration
β”‚
β”œβ”€β”€ runner/                  ← Test runner modules
β”‚   β”œβ”€β”€ test_runner.py       ← Run all .in/.out and compare
β”‚   β”œβ”€β”€ case_runner.py       ← Run single test case (for debugging)
β”‚   └── util.py              ← Shared utilities
β”‚
β”œβ”€β”€ solutions/               ← Solution files for each problem
β”‚   └── 0001_two_sum.py
β”‚
β”œβ”€β”€ tests/                   ← All test cases
β”‚   β”œβ”€β”€ 0001_two_sum_1.in
β”‚   β”œβ”€β”€ 0001_two_sum_1.out
β”‚   β”œβ”€β”€ *_failed_*.in        ← Auto-saved failed generated cases (with --save-failed)
β”‚   └── ...
β”‚
β”œβ”€β”€ templates/               ← Templates for new problems
β”‚   β”œβ”€β”€ template_solution.py         ← Single solution template
β”‚   β”œβ”€β”€ template_solution_multi.py   ← Multi-solution (one class)
β”‚   β”œβ”€β”€ template_solution_wrapper.py ← Multi-solution (wrapper pattern)
β”‚   └── template_test.txt
β”‚
β”œβ”€β”€ leetcode/                ← Python virtual environment (Python 3.11)
β”‚
β”œβ”€β”€ run_tests.bat            ← Windows: Run all tests
β”œβ”€β”€ run_case.bat             ← Windows: Run single test
β”œβ”€β”€ new_problem.bat          ← Windows: Create new problem
β”‚
β”œβ”€β”€ run_tests.sh             ← Linux/macOS: Run all tests
β”œβ”€β”€ run_case.sh              ← Linux/macOS: Run single test
β”œβ”€β”€ new_problem.sh           ← Linux/macOS: Create new problem
β”‚
└── README.md

πŸš€ Quick Start

1. Environment Setup (First Time)

Reference: LeetCode Official Environment

Windows (PowerShell)

Prerequisite: the py install command requires the Python Install Manager from the official Python website.

# Navigate to project directory
cd /d "D:\Developer\program\python\neetcode"

# Install Python 3.11 (if not already installed)
# Note: Requires Python Install Manager from https://www.python.org/downloads/
py install 3.11

# Create virtual environment
py -3.11 -m venv leetcode

# Activate virtual environment
leetcode\Scripts\activate

# Install debugpy (for debugging)
pip install debugpy

Linux / macOS (Using pyenv - Recommended)

Why pyenv? It installs Python in your user directory without touching the system Python, and it supports multiple versions side by side.

# ============================================
# Step 1: Install pyenv (one-time setup)
# ============================================

# --- macOS ---
brew install pyenv

# --- Linux (Ubuntu/Debian shown; install the equivalent packages on other distros) ---
# Install build dependencies first:
sudo apt update && sudo apt install -y build-essential libssl-dev zlib1g-dev \
  libbz2-dev libreadline-dev libsqlite3-dev curl \
  libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev

# Install pyenv:
curl https://pyenv.run | bash

# ============================================
# Step 2: Configure shell (add to ~/.bashrc or ~/.zshrc)
# ============================================
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
echo 'command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(pyenv init -)"' >> ~/.bashrc

# Reload shell
source ~/.bashrc   # or: source ~/.zshrc

# ============================================
# Step 3: Install Python 3.11 and setup project
# ============================================
# Navigate to project directory
cd ~/path/to/neetcode

# Install Python 3.11 (doesn't affect system Python)
pyenv install 3.11

# Set Python 3.11 for this project only
pyenv local 3.11

# Create virtual environment
python -m venv leetcode

# Activate virtual environment
source leetcode/bin/activate

# Install debugpy (for debugging)
pip install debugpy

# Make shell scripts executable (first time only)
chmod +x run_tests.sh run_case.sh new_problem.sh

πŸ“‹ Alternative: Direct system install (may affect existing Python)
# Ubuntu/Debian:
sudo apt update && sudo apt install python3.11 python3.11-venv

# macOS (Homebrew):
brew install python@3.11

# Then create venv:
python3.11 -m venv leetcode

2. Daily Usage (Activate Environment)

Windows

cd /d "D:\Developer\program\python\neetcode"
leetcode\Scripts\activate

Linux / macOS

cd ~/path/to/neetcode
source leetcode/bin/activate

3. Create New Problem

Windows

# Single solution template
new_problem.bat 0007_reverse_integer

# Multi-solution template (one class, multiple methods)
new_problem.bat 0023_merge_k_lists --multi

# Wrapper-based template (multiple classes, preserves LeetCode method names)
new_problem.bat 0025_reverse_nodes --wrapper

Linux / macOS

# Single solution template
./new_problem.sh 0007_reverse_integer

# Multi-solution template (one class, multiple methods)
./new_problem.sh 0023_merge_k_lists --multi

# Wrapper-based template (multiple classes, preserves LeetCode method names)
./new_problem.sh 0025_reverse_nodes --wrapper

This will create:

  • solutions/0007_reverse_integer.py
  • tests/0007_reverse_integer_1.in
  • tests/0007_reverse_integer_1.out

4. Run Tests

Windows

# Run all test cases
run_tests.bat 0001_two_sum

# Run single test case
run_case.bat 0001_two_sum 1

Linux / macOS

# Run all test cases
./run_tests.sh 0001_two_sum

# Run single test case
./run_case.sh 0001_two_sum 1

⌨️ VS Code Integration

Quick Shortcuts

Shortcut Function
Ctrl+Shift+B Run all tests for current file
F5 Debug current file with case #1

Note: Open a solution file in solutions/ before using shortcuts.

Tasks (Ctrl+Shift+P β†’ "Tasks: Run Task")

Task Description
Run all tests for current problem Basic test run
Run case #1 / #2 Run specific test case
Benchmark current problem Show execution time
Run all solutions with benchmark Compare all solutions
Run with generated cases (10) Static + 10 generated
Run generated only Skip static tests
Run generated with seed Reproducible generation
Run generated + save failed Save failed inputs
Run all solutions + generated All solutions with generator

Debug Configurations (F5 β†’ Select)

Configuration Description
Debug current problem (case #1/2/3) Debug specific test case
Debug all tests Debug full test suite
Benchmark current problem Run with timing
Debug with generated cases Static + generated
Debug generated only Only generated cases
Debug generated with seed Reproducible debug
Debug all solutions + generated Compare all with generator

πŸ’‘ Tip: These tasks/configs run the same commands documented in Command Line Usage and Test Case Generator.

Example: "Benchmark current problem" runs python runner/test_runner.py {problem} --benchmark


πŸ“ Solution File Format

# solutions/0001_two_sum.py
from typing import List

class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        # Your solution
        pass

def solve():
    import sys
    lines = sys.stdin.read().strip().split('\n')
    
    # Parse input
    nums = list(map(int, lines[0].split(',')))
    target = int(lines[1])
    
    sol = Solution()
    result = sol.twoSum(nums, target)
    
    # Print result
    print(result)

if __name__ == "__main__":
    solve()
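
For reference, one standard way to fill in the stub above is the hash-map approach (shown here as an illustration; it is not part of the framework):

class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        # Map each value to its index; return as soon as the complement has been seen
        seen = {}
        for i, num in enumerate(nums):
            if target - num in seen:
                return [seen[target - num], i]
            seen[num] = i
        return []

With the solve() wrapper above, the input 2,7,11,15 and target 9 prints [0, 1].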

πŸ“‹ Test File Format

Specifications

Item Requirement
Line Ending LF (Unix/Linux format, \n)
Encoding UTF-8
File Ending Must end with single newline
Naming {problem_number}_{problem_name}_{case_number}.in/.out

Input File (.in)

2,7,11,15
9

Output File (.out)

[0, 1]
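
If you script the creation of test files, a small helper like this (hypothetical, not part of the runner) keeps them compliant with the specifications above: LF line endings, UTF-8, and exactly one trailing newline.

from pathlib import Path

def write_case(problem: str, case: int, input_text: str, output_text: str) -> None:
    """Write tests/{problem}_{case}.in and .out with LF endings, UTF-8, and one trailing newline."""
    for ext, text in ((".in", input_text), (".out", output_text)):
        path = Path("tests") / f"{problem}_{case}{ext}"
        path.write_text(text.rstrip("\n") + "\n", encoding="utf-8", newline="\n")

# Example: write_case("0001_two_sum", 2, "2,7,11,15\n9", "[0, 1]")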


πŸ”§ Command Line Usage

# Run all test cases
python runner/test_runner.py <problem_name>

# Run single test case
python runner/case_runner.py <problem_name> <case_index>

Examples

python runner/test_runner.py 0001_two_sum
python runner/case_runner.py 0001_two_sum 1

πŸš€ Multi-Solution Testing & Performance Comparison

Test multiple solutions and compare performance for the same problem.

Command Line Parameters

# Run default solution
python runner/test_runner.py 0023_merge_k_sorted_lists

# Run specific solution
python runner/test_runner.py 0023_merge_k_sorted_lists --method heap
python runner/test_runner.py 0023_merge_k_sorted_lists --method greedy

# Run all solutions
python runner/test_runner.py 0023_merge_k_sorted_lists --all

# Run all solutions + performance comparison
python runner/test_runner.py 0023_merge_k_sorted_lists --all --benchmark

How to Define Multiple Solutions

Add a SOLUTIONS dictionary in your solution file:

# solutions/0023_merge_k_sorted_lists.py

SOLUTIONS = {
    "default": {
        "method": "mergeKListsPriorityQueue",   # Name of the method in Solution class
        "complexity": "O(N log k)",             # Time complexity
        "description": "Priority Queue approach"
    },
    "heap": {
        "method": "mergeKListsPriorityQueue",
        "complexity": "O(N log k)",
        "description": "Priority Queue (Min Heap)"
    },
    "divide": {
        "method": "mergeKListsDivideConquer",
        "complexity": "O(N log k)",
        "description": "Divide and Conquer"
    },
    "greedy": {
        "method": "mergeKListsGreedy",
        "complexity": "O(kN)",
        "description": "Greedy comparison"
    },
}

class Solution:
    def mergeKListsPriorityQueue(self, lists):
        # Heap solution...
        pass

    def mergeKListsDivideConquer(self, lists):
        # Divide & Conquer solution...
        pass

    def mergeKListsGreedy(self, lists):
        # Greedy solution...
        pass

def solve():
    import os
    # Get solution method from environment variable
    method_name = os.environ.get('SOLUTION_METHOD', 'default')
    method_info = SOLUTIONS.get(method_name, SOLUTIONS['default'])
    method_func_name = method_info['method']
    
    sol = Solution()
    method_func = getattr(sol, method_func_name)
    result = method_func(...)
    print(result)

SOLUTIONS Field Description

Field Description Required
method Method name in Solution class βœ…
complexity Time complexity (for display) ❌
description Solution description ❌

Custom Short Names

The key in SOLUTIONS is the short name used in command line:

SOLUTIONS = {
    "default": {"method": "solve_optimal", ...},     # Default solution
    "heap": {"method": "solve_heap", ...},           # --method heap
    "h": {"method": "solve_heap", ...},              # --method h (alias)
    "pq": {"method": "solve_priority_queue", ...},   # --method pq
    "bf": {"method": "solve_bruteforce", ...},       # --method bf
}

Note:

  • default is used when --method is not specified
  • Time complexity must be annotated by the user; the system only measures actual execution time

Advanced: Wrapper-Based Pattern for Multiple Solution Classes

When implementing multiple approaches (e.g., recursive vs iterative), you may encounter:

  • Method name conflicts inside one class
  • Having to rename methods away from their original LeetCode signatures

Solution: Use separate Solution classes with wrapper functions.

# solutions/0025_reverse_nodes_in_k_group.py

# ============================================
# Solution 1: Recursive approach
# ============================================
class SolutionRecursive:
    def reverseKGroup(self, head, k):
        # Recursive implementation...
        pass

# ============================================
# Solution 2: Iterative approach  
# ============================================
class SolutionIterative:
    def reverseKGroup(self, head, k):
        # Iterative implementation...
        pass

# ============================================
# Wrapper functions for test_runner integration
# ============================================
def solve_recursive(head, k):
    """Wrapper for SolutionRecursive."""
    return SolutionRecursive().reverseKGroup(head, k)

def solve_iterative(head, k):
    """Wrapper for SolutionIterative."""
    return SolutionIterative().reverseKGroup(head, k)

# ============================================
# SOLUTIONS metadata
# ============================================
SOLUTIONS = {
    "default": {
        "method": "solve_iterative",
        "complexity": "O(N) time, O(1) space",
        "description": "Iterative in-place reversal"
    },
    "recursive": {
        "method": "solve_recursive",
        "complexity": "O(N) time, O(N) space",
        "description": "Recursive reversal with stack"
    },
    "iterative": {
        "method": "solve_iterative",
        "complexity": "O(N) time, O(1) space",
        "description": "Iterative in-place reversal"
    },
}

def solve():
    import os
    import sys
    
    # Get solution method from environment variable
    method_name = os.environ.get('SOLUTION_METHOD', 'default')
    method_info = SOLUTIONS.get(method_name, SOLUTIONS['default'])
    method_func_name = method_info['method']
    
    # Parse input
    lines = sys.stdin.read().strip().split('\n')
    # ... parse your input ...
    
    # Call wrapper function directly (not via class)
    method_func = globals()[method_func_name]
    result = method_func(head, k)
    
    print(result)

Benefits of this pattern:

  • Each solution stays in its own class (SolutionRecursive, SolutionIterative)
  • Preserve original LeetCode method names (e.g., reverseKGroup, mergeKLists)
  • No method name collisions inside a single class
  • Scales nicely when a problem has more than two approaches

Tip: Use new_problem.bat <name> --wrapper (Windows) or ./new_problem.sh <name> --wrapper (Linux/macOS) to create a template with this pattern.


πŸ”€ Flexible Output Comparison

Some LeetCode problems state "You may return the answer in any order" or have multiple valid answers. The test runner supports flexible validation with clear output labels.

Validation Modes

Label Description Requires .out
[judge] JUDGE_FUNC with .out reference βœ…
[judge-only] JUDGE_FUNC without .out (pure validation) ❌
[exact] Exact string match βœ…
[sorted] Sort lists before comparison βœ…
[set] Set comparison βœ…

Priority

1. JUDGE_FUNC (custom validation) - highest priority
2. COMPARE_MODE (sorted/set comparison)
3. Exact string match (default)

Test Output Example

============================================================
πŸ§ͺ Testing: 0051_n_queens
βš–οΈ  Judge: JUDGE_FUNC
============================================================

πŸ“Œ Method: default

   0051_n_queens_1: βœ… PASS (88.33ms) [judge]
   0051_n_queens_2: βœ… PASS (92.15ms) [judge]
   0051_n_queens_3: βœ… PASS (156.20ms) [judge-only]

   Result: 3 / 3 cases passed.

Approach 1: JUDGE_FUNC (Recommended for Complex Cases)

Use a decision-problem approach: verify that the answer is valid, not that it is identical to a reference.

Key Feature: .out file is optional when JUDGE_FUNC is defined!

# solutions/0051_n_queens.py

def judge(actual: list, expected, input_data: str) -> bool:
    """
    Custom validation function.
    
    Args:
        actual: Program output (parsed as Python object if possible)
        expected: Expected output, or None if .out file doesn't exist
        input_data: Input data (raw string)
    
    Returns:
        bool: Whether the answer is correct
    """
    n = int(input_data.strip())
    
    # Validate solution regardless of expected
    for board in actual:
        if not is_valid_n_queens(board, n):
            return False
    
    # Check count only if expected is provided
    if expected is not None:
        if len(actual) != len(expected):
            return False
    
    # Check no duplicates
    return len(set(tuple(b) for b in actual)) == len(actual)

JUDGE_FUNC = judge  # Tell test_runner to use this function
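
The judge above assumes a board validator. A possible is_valid_n_queens, assuming the usual LeetCode output format for N-Queens (each board is a list of n strings made of 'Q' and '.'):

def is_valid_n_queens(board: list, n: int) -> bool:
    """Check board size, exactly one queen per row, and that no two queens attack each other."""
    if len(board) != n or any(len(row) != n for row in board):
        return False
    cols = [row.index('Q') for row in board if row.count('Q') == 1]
    if len(cols) != n:
        return False  # a row has zero or more than one queen
    for i in range(n):
        for j in range(i + 1, n):
            if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i:
                return False  # shared column or diagonal
    return True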

Benefits:

  • Validates correctness, not just string equality
  • Handles multiple valid answers
  • .out file optional - supports judge-only mode for custom test cases
  • Works with any output format (strings, objects, custom formats)

Use Cases for Judge-Only Mode (no .out):

  • Custom large test cases you generate
  • Stress testing with random inputs
  • Cases where computing expected output is complex

Approach 2: COMPARE_MODE (Simple Cases)

For simple order-independent comparisons (requires .out file):

# solutions/0046_permutations.py

COMPARE_MODE = "sorted"  # Options: "exact" | "sorted" | "set"
Mode Description Use Case
"exact" Exact string match (default) Most problems
"sorted" Sort lists before comparison Permutations, Combinations
"set" Set comparison (ignores duplicates) Unique elements

JUDGE_FUNC Examples

Example 1: N-Queens (with optional .out)

def judge(actual: list, expected, input_data: str) -> bool:
    n = int(input_data.strip())
    
    # Always validate board correctness
    if not all(is_valid_board(b, n) for b in actual):
        return False
    
    # If .out exists, also check count
    if expected is not None:
        return len(actual) == len(expected)
    
    return True  # Judge-only mode: just validate

JUDGE_FUNC = judge

Example 2: LinkedList (String Mode)

def judge(actual: str, expected: str, input_data: str) -> bool:
    # Parse "1->2->3" format
    def parse(s):
        return s.strip().split("->") if s.strip() else []
    return parse(actual) == parse(expected)

JUDGE_FUNC = judge

Example 3: Floating Point Tolerance

def judge(actual: float, expected: float, input_data: str) -> bool:
    return abs(actual - expected) < 1e-5

JUDGE_FUNC = judge

Example 4: Pure Validation (Judge-Only)

def judge(actual: list, expected, input_data: str) -> bool:
    """Validate without expected output."""
    # expected is None when .out doesn't exist
    params = parse_input(input_data)
    return is_valid_solution(actual, params)

JUDGE_FUNC = judge

Applicable Problems

Problem Recommended Approach .out Required
N-Queens JUDGE_FUNC (validate board) Optional
Permutations COMPARE_MODE = "sorted" βœ…
Subsets COMPARE_MODE = "sorted" βœ…
Shortest Path (multiple) JUDGE_FUNC (validate path) Optional
Floating point JUDGE_FUNC (tolerance) βœ…
LinkedList/Tree JUDGE_FUNC (parse format) βœ…
Custom stress tests JUDGE_FUNC (judge-only) ❌

🎲 Test Case Generator

Automatically generate test cases to stress-test your solutions.

Setup

Create a generator file in generators/ with the same name as your solution:

generators/
└── 0004_median_of_two_sorted_arrays.py

Generator Template

# generators/0004_median_of_two_sorted_arrays.py
"""
LeetCode Constraints:
- 0 <= m, n <= 1000
- 1 <= m + n <= 2000
- -10^6 <= nums1[i], nums2[i] <= 10^6
"""
import random
from typing import Iterator, Optional


def generate(count: int = 10, seed: Optional[int] = None) -> Iterator[str]:
    """
    Generate test case inputs.
    
    Args:
        count: Number of test cases to generate
        seed: Random seed for reproducibility
    
    Yields:
        str: Test input (same format as .in files)
    """
    # Constraints
    min_m, max_m = 0, 1000
    min_n, max_n = 0, 1000
    min_val, max_val = -10**6, 10**6
    
    if seed is not None:
        random.seed(seed)
    
    # Edge cases first
    yield "[]\n[1]"
    yield "[1]\n[]"
    count -= 2
    
    # Random cases
    for _ in range(count):
        m = random.randint(min_m, max_m)
        n = random.randint(min_n, max_n)
        nums1 = sorted([random.randint(min_val, max_val) for _ in range(m)])
        nums2 = sorted([random.randint(min_val, max_val) for _ in range(n)])
        yield f"{nums1}\n{nums2}".replace(' ', '')

Usage

# Run tests/ + 10 generated cases
python runner/test_runner.py 0004_median --generate 10

# Only run generated cases (skip tests/)
python runner/test_runner.py 0004_median --generate-only 10

# Use seed for reproducibility
python runner/test_runner.py 0004_median --generate 10 --seed 12345

# Save failed cases for debugging
# Failed cases will be saved to tests/ as {problem}_failed_{n}.in
python runner/test_runner.py 0004_median --generate 10 --save-failed

Output Example

============================================================
πŸ§ͺ Testing: 0004_median_of_two_sorted_arrays
βš–οΈ  Judge: JUDGE_FUNC
🎲 Generator: 10 cases, seed: 12345
============================================================

πŸ“Œ Running default solution...

   --- tests/ (static) ---
   0004_median_1: βœ… PASS (12.33ms) [judge]
   0004_median_2: βœ… PASS (11.15ms) [judge]

   --- generators/ (10 cases, seed: 12345) ---
   gen_1: βœ… PASS (8.20ms) [generated]
   gen_2: βœ… PASS (7.15ms) [generated]
   gen_3: ❌ FAIL [generated]
      β”Œβ”€ Input ─────────────────────────────────
      β”‚ [1,3,5,7,9]
      β”‚ [2,4,6,8,10]
      β”œβ”€ Actual ────────────────────────────────
      β”‚ 5.0
      └─────────────────────────────────────────
      πŸ’Ύ Saved to: tests/0004_median_failed_1.in
   ...

Summary: 11 / 12 cases passed.
   β”œβ”€ Static (tests/): 2/2
   └─ Generated: 9/10

πŸ’‘ To reproduce: python runner/test_runner.py 0004_median --generate 10 --seed 12345

Requirements

Component Required Description
generators/{problem}.py Generator file Must have generate(count, seed) function
JUDGE_FUNC in solution βœ… Generated cases have no .out file, so a judge is required
tests/*.in Optional Static tests run before generated
tests/*_failed_*.in Auto-generated Failed cases saved with --save-failed flag

πŸ“ˆ Time Complexity Estimation

Automatically estimate an algorithm's time complexity using the big_O library.

Design Philosophy

Simple and generic - Only requires one additional function in your generator:

Function Purpose Required
generate(count, seed) Random test cases for functional testing βœ… Required
generate_for_complexity(n) Controlled size cases for complexity estimation Optional

The estimator uses a mock-stdin approach internally (sketched after this list):

  • βœ… Generic - works with any solution that has a solve() function
  • βœ… No subprocess overhead
  • βœ… Maintains the stdin abstraction design
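
Conceptually, the direct-call timing works roughly like the sketch below (the names are illustrative, not the runner's actual API):

import io
import sys
import time

def time_solve(solve_func, input_text: str, repeats: int = 3) -> float:
    """Feed input through a mocked stdin, call solve() directly, and average the wall-clock time."""
    total = 0.0
    for _ in range(repeats):
        sys.stdin = io.StringIO(input_text)   # mock stdin with the generated case
        sys.stdout = io.StringIO()            # discard the solution's printed output
        start = time.perf_counter()
        solve_func()
        total += time.perf_counter() - start
    sys.stdin, sys.stdout = sys.__stdin__, sys.__stdout__
    return (total / repeats) * 1000  # average in milliseconds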

Usage

# Estimate complexity (requires generate_for_complexity in generator)
python runner/test_runner.py 0004_median_of_two_sorted_arrays --estimate

# Combine with other flags
python runner/test_runner.py 0004 --all --benchmark --estimate

Generator Example

# generators/0004_median_of_two_sorted_arrays.py

# Required: Random test generation
def generate(count: int, seed: Optional[int] = None) -> Iterator[str]:
    """Random sizes - tests functional correctness"""
    for _ in range(count):
        m = random.randint(0, 1000)
        n = random.randint(0, 1000)
        yield _generate_case(m, n)


# Optional: Enable complexity estimation
def generate_for_complexity(n: int) -> str:
    """
    Generate test case with specific input size.
    
    For this problem, n = total elements (m + n)
    """
    m = random.randint(0, n)
    return _generate_case(m, n - m)
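
The _generate_case helper is left out above; a possible implementation, matching the input format used by the Generator Template (two bracketed sorted lists, one per line, no spaces):

import random

def _generate_case(m: int, n: int) -> str:
    """Build one input: two sorted arrays of sizes m and n within the value constraints."""
    nums1 = sorted(random.randint(-10**6, 10**6) for _ in range(m))
    nums2 = sorted(random.randint(-10**6, 10**6) for _ in range(n))
    return f"{nums1}\n{nums2}".replace(' ', '')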

Output Example

πŸ“ˆ Running complexity estimation...
   Mode: Direct call (Mock stdin, no subprocess overhead)
   Sizes: [10, 20, 50, 100, 200, 500, 1000, 2000]
   n=   10: 0.0040ms (avg of 3 runs)
   n=  100: 0.0082ms (avg of 3 runs)
   n= 1000: 0.0685ms (avg of 3 runs)
   n= 2000: 0.1796ms (avg of 3 runs)

βœ… Estimated: O(n log n)
   Confidence: 1.00

Requirements

Component Required Description
big-O package βœ… pip install big-O
generate_for_complexity(n) βœ… Function that takes size n and returns test input

Suitable Problem Types

Not all problems are suitable for time complexity estimation. The estimation works best when:

βœ… Suitable:

  • Input size n can vary continuously (10, 100, 1000, ...)
  • Execution time scales with input size
  • Linear, logarithmic, or polynomial complexity

❌ Not Suitable:

  • Input size has hard constraints (e.g., n ≀ 9)
  • Execution time is dominated by fixed overhead
  • Factorial/exponential complexity with a small n limit

Examples:

Problem Suitable? Reason
Two Sum βœ… n can be 10 ~ 10000, O(n) scales clearly
Longest Substring βœ… String length can vary widely
Merge k Sorted Lists βœ… Total elements N can scale
N-Queens (0051) ❌ n ≀ 9 (factorial explosion), can't vary size meaningfully
Rotting Oranges (0994) ❌ Grid size limited, BFS time dominated by grid structure
Sudoku Solver ❌ Fixed 9x9 grid, backtracking complexity

Tip: Only add generate_for_complexity(n) to generators where n can meaningfully vary from small (10) to large (1000+).

Backward Compatibility

  • Solution files: No changes required (must have solve() function)
  • Existing generators: Continue to work without changes
  • New feature: Add generate_for_complexity(n) to enable estimation

πŸ“Š Test Result Example

============================================================
πŸ§ͺ Testing: 0023_merge_k_sorted_lists
============================================================

πŸ“Œ Method: default
   Complexity: O(N log k)
   Description: Priority Queue (Min Heap) approach

   0023_merge_k_sorted_lists_1: βœ… PASS (53.04ms)
   0023_merge_k_sorted_lists_2: βœ… PASS (43.11ms)
   0023_merge_k_sorted_lists_3: βœ… PASS (44.50ms)

   Result: 3 / 3 cases passed.

πŸ“Œ Method: heap
   Complexity: O(N log k)
   Description: Priority Queue (Min Heap) approach

   0023_merge_k_sorted_lists_1: βœ… PASS (44.40ms)
   0023_merge_k_sorted_lists_2: βœ… PASS (43.89ms)
   0023_merge_k_sorted_lists_3: βœ… PASS (44.79ms)

   Result: 3 / 3 cases passed.

πŸ“Œ Method: divide
   Complexity: O(N log k)
   Description: Divide and Conquer approach

   0023_merge_k_sorted_lists_1: βœ… PASS (44.02ms)
   0023_merge_k_sorted_lists_2: βœ… PASS (44.32ms)
   0023_merge_k_sorted_lists_3: βœ… PASS (45.11ms)

   Result: 3 / 3 cases passed.

πŸ“Œ Method: greedy
   Complexity: O(kN)
   Description: Greedy comparison - compare all k heads each time

   0023_merge_k_sorted_lists_1: βœ… PASS (44.68ms)
   0023_merge_k_sorted_lists_2: βœ… PASS (45.00ms)
   0023_merge_k_sorted_lists_3: βœ… PASS (44.78ms)

   Result: 3 / 3 cases passed.

============================================================
πŸ“Š Performance Comparison
============================================================
Method               Avg Time     Complexity      Pass Rate
------------------------------------------------------------
default                 46.88ms   O(N log k)      3/3
heap                    44.36ms   O(N log k)      3/3
divide                  44.48ms   O(N log k)      3/3
greedy                  44.82ms   O(kN)           3/3
============================================================

🐍 Python Environment

  • Python Version: 3.11 (matches LeetCode Official Environment)
  • Virtual Environment: leetcode/ (inside project)
  • Dependencies: See requirements.txt

Install Dependencies

# Activate virtual environment first, then:
pip install -r requirements.txt

Package Required Description
debugpy βœ… Debug support for VS Code
big-O Optional Time complexity estimation

Activate Virtual Environment

Windows

# PowerShell
.\leetcode\Scripts\Activate.ps1

# CMD
leetcode\Scripts\activate.bat

Linux / macOS

source leetcode/bin/activate

Install New Packages

Windows

# Activate virtual environment first, then install
leetcode\Scripts\activate
pip install <package_name>

Linux / macOS

# Activate virtual environment first, then install
source leetcode/bin/activate
pip install <package_name>

πŸ’‘ Tips

  1. Add more test cases: Copy .in/.out files and change the number

    0001_two_sum_1.in β†’ 0001_two_sum_2.in
    0001_two_sum_1.out β†’ 0001_two_sum_2.out
    
  2. Debug specific test case: Modify case number in launch.json

  3. Custom input format: Define parsing logic in solve() function


πŸ“œ License

MIT License - Free for personal learning